A little candy before the next news post comes around.
There is currently a lot of work going on around game mechanics and more, but nothing will be disclosed yet. In the meantime, to bridge the time until everything is prepared for the news post, have some Drag[en]gine goodies from the same period. The focus in this news post is on Height Terrain Navigation Spaces and Navigation Space Optimizations. So no long words, but instead a video.
ModDB: Height Terrain Navigation Space
And here is a short list of what else has been going on that is Drag[en]gine specific.
This time around the Audio System received some overdue additions in preparation for what is to come.
Audio systems in games are notoriously static in nature. A buzzword is "sound shaders", but this is more about parametrizing playback parameters than really dynamic sound. For this game project I need a sound system that supports dynamic sound, and this is what I've cooked up over the last month.
At the core of the dynamic sound system sit the Synthesizers. The link points to the wiki page, which contains the in-detail information I'm going to leave out here. In a nutshell, synthesizers allow generating sound at run-time using sound production rules (or sources). Synthesizers are assigned to speakers and played back like regular sound files, except that they are dynamic, not static. Controllers can be defined to manipulate the generated sound at run-time, live while playing back! The important feature here is that the synthesizers are generic in nature, like the rest of the game engine. All of their actual use cases are implemented in game scripts using/driving synthesizers instead of being hard-coded into the game engine. This provides much more flexibility to me, and to you if you later work with this game engine. For this the new synthesizer editor has been added, so synthesizers can be easily created and tested.
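To illustrate the idea of a source driven by a live controller, here is a rough Python sketch. The names and structure are my own for illustration, not the engine's actual API: a sine source generates audio chunks, and a controller changed by game code between chunks alters the pitch of the next chunk while playback continues.

```python
import math

class Controller:
    """Live parameter that game code can change while the synthesizer plays."""
    def __init__(self, value=0.0):
        self.value = value

class SineSource:
    """Minimal sound production rule: a sine wave whose pitch follows a controller."""
    def __init__(self, base_freq, pitch_controller, sample_rate=44100):
        self.base_freq = base_freq
        self.pitch = pitch_controller
        self.sample_rate = sample_rate
        self.phase = 0.0

    def generate(self, num_samples):
        # Read the controller per chunk so changes made by game
        # scripts are picked up live while playing back.
        freq = self.base_freq * (2.0 ** self.pitch.value)
        step = 2.0 * math.pi * freq / self.sample_rate
        out = []
        for _ in range(num_samples):
            out.append(math.sin(self.phase))
            self.phase = (self.phase + step) % (2.0 * math.pi)
        return out

pitch = Controller(0.0)
source = SineSource(440.0, pitch)
chunk = source.generate(512)   # one audio chunk at 440 Hz
pitch.value = 1.0              # game script raises pitch one octave...
chunk2 = source.generate(512)  # ...the next chunk is generated at 880 Hz
```

A real synthesizer would chain several such sources and mix their output, but the controller-per-chunk pattern is the core of the dynamic behaviour.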
Two example implementations of synthesizer-driving scripts are included in the game engine distribution: dynamic music and announcers. Both use a simple synthesizer with a single chain source and are ready to use.
Dynamic music allows modifying the music playing by transitioning through music parts using switches. As a test example I used the dynamic music files from S.T.A.L.K.E.R.: Clear Sky since they are well suited for this test case. The included scripts load the dynamic music from a hand-written XML file. The file contains the music parts (sound files), the switches used by the game and the transitions between music parts depending on switch states. Everything is implemented in simple game scripts, so it can be altered and extended without limits.
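The parts/switches/transitions model can be sketched as a small state machine. This is a hypothetical Python illustration of the concept described above, not the actual game script: when the current part finishes, the next part is chosen from a transition table keyed by the current part and the active switch states.

```python
class DynamicMusic:
    """Transition through music parts based on game switches."""
    def __init__(self, parts, transitions, start):
        self.parts = parts              # part name -> sound file
        self.transitions = transitions  # (part, frozenset of switches) -> next part
        self.current = start
        self.switches = set()

    def set_switch(self, name, on):
        # Switches are flipped by the game (e.g. combat starts or ends).
        if on:
            self.switches.add(name)
        else:
            self.switches.discard(name)

    def next_part(self):
        # Called when the current part finishes playing. If no transition
        # matches the switch states, the current part simply loops.
        key = (self.current, frozenset(self.switches))
        self.current = self.transitions.get(key, self.current)
        return self.parts[self.current]

parts = {'calm': 'calm.ogg', 'combat': 'combat.ogg'}
transitions = {
    ('calm', frozenset({'combat'})): 'combat',
    ('combat', frozenset()): 'calm',
}
music = DynamicMusic(parts, transitions, 'calm')
music.set_switch('combat', True)
in_combat = music.next_part()      # 'combat.ogg'
music.set_switch('combat', False)
back_to_calm = music.next_part()   # 'calm.ogg'
```

In the real implementation the parts, switches and transitions come from the hand-written XML file rather than being defined inline.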
Announcers provide a simple way to build announcement systems, like automated train announcements, from a list of recorded words. As a test example I used the VOX files from Half-Life 1 since they are well suited for this test case. The included scripts load the announcer from a hand-written XML file. The file defines where the word sound files are located. Once loaded, a sentence can be handed to the script and it plays back the announcement.
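Conceptually the announcer maps each word of a sentence to its recorded sound file and queues the results for playback. A minimal Python sketch of that idea (names are hypothetical, not the engine's API):

```python
class Announcer:
    """Turn a sentence into a playback list of word recordings."""
    def __init__(self, word_files):
        self.word_files = word_files  # word -> sound file path

    def announce(self, sentence):
        # Build the playback list; unknown words are skipped so a typo
        # does not break the whole announcement.
        return [self.word_files[w]
                for w in sentence.lower().split()
                if w in self.word_files]

announcer = Announcer({
    'attention': 'vox/attention.ogg',
    'doors': 'vox/doors.ogg',
    'closing': 'vox/closing.ogg',
})
playlist = announcer.announce('Attention doors closing')
```

The real script would feed this list into a synthesizer chain source so the words play back seamlessly one after another.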
The video below shows these two scripts in action inside the test project. This project is included in the game engine distribution and serves as a sort of demo project to learn the ropes. The copyrighted material used for my implementation tests is obviously not included.
These are only two small examples the game will build upon. Since they are scripts, they are simple to extend and improve. And now to something different.
The game project makes heavy use of reusable world geometry. For this reason material sounds are not as simple as assigning a sound type to an object. Material-material impact sounds in particular usually require a lot of work recording tons of sound samples. Since I have neither a sound engineer nor that level of equipment, I decided to cut down the number of sounds by using combined collision sounds. Instead of playing one sound per individual material combination, impacts now play one sound for each material involved. This reduces the work a lot while allowing more combinations. Material types now support a range of different sound events, from impacts to actor movement sounds. Sounds are either pre-recorded sound samples or possibly synthesizers. The former is used right now for ease of use, but the latter can be used for special tricks.
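The saving is easy to see: with per-pair recordings, n materials need n*(n+1)/2 impact sounds, while the combined approach needs only n, one per material, played together on impact. A tiny Python sketch of the combined scheme (file names are made up for illustration):

```python
# One recording per material instead of one per material pair.
impact_sounds = {
    'metal': 'impact_metal.ogg',
    'wood': 'impact_wood.ogg',
    'stone': 'impact_stone.ogg',
}

def collision_sounds(material_a, material_b):
    # Play one sound for each material involved in the impact.
    # 3 recordings cover all 6 pair combinations here.
    return [impact_sounds[material_a], impact_sounds[material_b]]

sounds = collision_sounds('metal', 'wood')  # ['impact_metal.ogg', 'impact_wood.ogg']
```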
To support this, the physics system has also been improved to properly handle kinematic and dynamic collisions in a similar way. Collision shape properties are now used on all elements to link collision shapes to object materials. The world editor now supports properties on component textures, as seen in the screenshot below.
This allows assigning arbitrary properties to textures and re-defining them in the editor. The game scripts use this to re-define the material type per texture in addition to the types defined in element classes. The video below shows work in progress on adding more material sounds as well as assigning all objects their appropriate sounds.
With the synthesizer system in place I can now do the nifty little surprise I've been turning over in my head for a long time. I'm not going to say more for the time being :D .
Over the last month a lot of work went into AI development, performance-optimizing scripts and the usual feature improvements and bug-fixing. Since this game is largely about detective investigation, it revolves around dynamic interaction with characters and the environment. This requires a powerful conversation system and powerful AI. The AI does not just stand around waiting for the player to blow it away, nor does it act as stupid cannon fodder. The AI system now features the required basic functionality.
Game AI usually consists of a behaviour tree or state machine. This works for simple AI but is rigid and error-prone for complex AI. In this project player and non-player actors are handled with as much shared code as possible (if the player can do it, so can the CPU; if the CPU can do it, so can the player). To achieve this the game uses a 3-stage AI behaviour:
Drag[en]gine provides path calculation for AI using the Navigation System (static obstacles). Collision avoidance (or steering) around dynamic obstacles along the path is the job of the scripts, since this is very game specific. The improved collision avoidance deals with two typical scenarios: Move Around and Wait Behind.
A typical solution is a physical approach where actors and obstacles are magnets repelling each other. This solution works, but the movement looks like what it is: two repelling magnets. This is unnatural and does not match the game goals. To solve this problem, collision avoidance is implemented the way real humans avoid collisions: approaching an obstacle, they change direction once instead of continuously repelling like magnets. Actors test for collisions along their moving direction at regular intervals (1-2s). If a future collision is detected, a deviation angle is calculated that wings around the obstacle. This deviation is applied for 1-2s, then the actor heads again for his intended goal. The closer the actor is to the obstacle, the larger the deviation angle. Combined with testing at larger intervals, this leads to a pleasant avoidance behaviour. Waiting behind obstacles is required if actors pass through narrow spaces like doors. Instead of winging around, actors adjust their speed if the obstacle is slower. Actors detect nearby walls, so if they choose the wall side of the obstacle but don't fit through, they pick the other side. Navigation space cost functions are used to pick cheaper movement costs. This keeps actors inside virtual corridors, avoiding unfavourable terrain while avoiding collisions. The video below gives an impression of how this looks in action.
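The "larger deviation the closer the obstacle" rule can be written down compactly. This is a hypothetical Python sketch of that scaling, with the maximum angle and detection range as made-up parameters, not values from the game:

```python
import math

def deviation_angle(distance, detect_range, max_angle=math.radians(60.0)):
    # The closer the actor is to the obstacle, the larger the deviation
    # angle; beyond the detection range no deviation is applied at all.
    if distance >= detect_range:
        return 0.0
    return max_angle * (1.0 - distance / detect_range)

def avoid_heading(goal_heading, obstacle_distance, detect_range, side):
    # side is +1 (wing right) or -1 (wing left). The deviated heading is
    # held for 1-2s, then the actor heads for the intended goal again.
    return goal_heading + side * deviation_angle(obstacle_distance, detect_range)
```

Because the deviation is recomputed only at the 1-2s test interval, the actor turns once and commits to it, which is what makes the motion read as human rather than magnetic.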
AI has to react to dynamic changes in the world and react intelligently to the static world. To achieve this goal the AI supports notifications. Objects can send notifications with localized hints to actors, which they can respect or ignore. This is used for a couple of situations:
The game defines a small set of unique characters to begin with. The majority of actors are generated from XML templates (modders can rejoice). You can modify appearance (random texture sets), starting inventory (random props), animations (all animators ever used) and more. As you progress with your investigation, the game (CM) converts selected generated characters into important actors, tying them into the story. So actors passing by can later become important, making each investigation unique.
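A rough Python sketch of what generating an actor from such template data could look like. The template fields and values here are invented for illustration; the actual XML template format is defined by the game scripts:

```python
import random

# Hypothetical template data, standing in for one of the XML actor templates.
template = {
    'texture_sets': ['worker_a', 'worker_b', 'worker_c'],
    'inventory': ['notebook', 'keys', 'wallet', 'pocket_watch'],
    'inventory_count': (1, 3),
}

def generate_actor(template, rng=random):
    # Pick a random appearance and a random starting inventory.
    lo, hi = template['inventory_count']
    return {
        'texture_set': rng.choice(template['texture_sets']),
        'inventory': rng.sample(template['inventory'], rng.randint(lo, hi)),
        # A generated actor may later be promoted by the CM to an
        # important actor tied into the story.
        'important': False,
    }

actor = generate_actor(template, random.Random(42))
```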
The goal is to create a living world for the player to investigate. Interacting with actors in different situations and locations alters the flow and outcome. The right place and time is as important as asking the right questions. These AI behaviours have been implemented so far (more to follow).
Besides AI I've also started working on a game engine related topic. I think you can guess what it's all about from the image below. Readers of the previous news post (and wiki) might recognize something ;)
Preparations have also been done for a special, unique feature of this game. I'm not going to talk about it now since this would spoil the fun, so look forward to it! But I'm sure you are going to like it (especially fellow indie devs and modders).
What else happened, for those interested in details (as a text file, to not bloat the news post):
This project is still in need of helping hands on the content production side. If you are a model artist (skilled in world props, buildings or humanoids) or texture artist you are welcome to get in contact with me. If you have other skills and want to help don't be shy and send me a PM too.
You can now track the game engine and game project not only on the Drag[en]gine and Epsylon profiles but also on the EpsylonGame Twitter channel.
So if you want to show some love for this project, tell your friends about this channel or re-tweet what you find there. Every bit of support helps.