This is the 12th installment in guest blogger Frank Klepacki’s series on music production. Today Frank talks about sound effects for video games. If you missed Frank’s previous post, you can read it here.
Sound effects in video games require a different approach than working in linear media.
With TV and film, you are working with all the respective tracks in a mix, placing them in surround, and ensuring that the desired experience is heard the same way every time. You have full control over that, and you remain in the comfort zone of your DAW.
The content creation side of games is the process most familiar from other media: sound design, field recording, and mixing cinematic sequences. But that's where the familiarity ends.
Beyond that, as it pertains to gameplay, every asset you create is a separate audio file that gets put into a database or folder of sounds the game can access at any time during play. The files need to be compressed to a desirable setting (for the sake of saving not only storage space, but also the amount of memory / buffering space the game needs at any given time). While you can normalize and balance all of these files, their playback volume and perceived distance will obviously vary when they are played in the 3D world of the game. The audio playback is always relative to the position of the player in the world, so things can sound a bit different every time.
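To make the distance idea concrete, here is a minimal sketch of inverse-distance rolloff, the common default for positional audio. This is illustrative only, not any particular engine's API; the function name and the min/max distance values are hypothetical.

```python
import math

def positional_gain(listener, source, min_dist=1.0, max_dist=50.0):
    """Scale a sound's playback gain by its distance from the listener.

    Inverse-distance rolloff: full volume inside min_dist, silent
    beyond max_dist, and gain falling off as min_dist / distance in
    between. (Hypothetical values; real engines expose per-sound
    rolloff curves.)
    """
    dx, dy, dz = (s - l for s, l in zip(source, listener))
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    if dist <= min_dist:
        return 1.0
    if dist >= max_dist:
        return 0.0
    return min_dist / dist
```

The same sound file can therefore play back at a different level every time, depending on where the player is standing relative to the emitting object.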
Games are built on “engines.” The engine is the core foundation programmed to run the game and do everything it is designed for – how things move, interaction, physics, rendering, all the way down to artificial intelligence, and of course, sound support.
Some of these components within the game engine can be licensed if it makes sense in the grand scope of the project, versus the cost of programming them from scratch. We call these licensed components “middleware.” There are several sound middleware options used in games, such as Miles Sound System, FMOD, or Wwise. These systems integrate into a game, allowing developers to play their game’s audio on any format, whether it’s PC, Xbox, PlayStation, Wii, or all of the above. They also offer different tool sets you can use to tweak your sounds, organize them, and get a rough idea of how they will sound in the game, so that you have some independence on the implementation side of things. Once everything is plugged in and functioning, an audio programmer is essential for bridging the middleware seamlessly into the game, and for supporting any custom playback scripts you might require that are special to how the game functions.
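One job that middleware and custom scripts routinely handle is mapping a gameplay event to the audio files that should play for it, often picking a random variation so repeated events don't sound identical. The sketch below is a hypothetical illustration of that idea, not code from any of the middleware packages named above; the event names and filenames are made up.

```python
import random

# Hypothetical sound "database": each gameplay event maps to one or
# more pre-compressed audio files, so repeated events (footsteps,
# gunfire) don't play the exact same file every time.
SOUND_BANK = {
    "footstep_grass": ["foot_grass_01.ogg", "foot_grass_02.ogg",
                       "foot_grass_03.ogg"],
    "laser_fire": ["laser_01.ogg"],
}

def resolve_event(event_name, rng=random):
    """Return the audio file the engine should play for this event."""
    variations = SOUND_BANK[event_name]
    return rng.choice(variations)
```

In a real project the bank is authored in the middleware's tool set, and the audio programmer wires the engine's events to it.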
While in a perfect world you’d hope to do everything you need with this middleware and its toolsets, you can never really escape the need for some customization. The audio quality of any game will only ever be as good as its implementation. You need a dedicated audio programmer to make sure you have the support you need: that the sounds are playing correctly and are properly attached to their respective objects, movements, events, ambience, custom scripting, etc. Then you need a means of mixing the relative volumes and / or the filtering applied to these sounds on a global level. The game’s “mixing board,” if you will. We’ll discuss this aspect in part 2.
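The global-mix idea can be pictured as a tiny bus structure: every sound belongs to a group, and its final level multiplies the sound's own gain by the group level and a master level. This is a minimal sketch of the concept, not any middleware's actual mixer; the class and group names are hypothetical.

```python
# Hypothetical "mixing board" for a game: sounds route through named
# groups (buses), and the final playback gain is the product of the
# sound's own gain, its group's gain, and the master gain.
class MixBoard:
    def __init__(self):
        self.master = 1.0
        self.groups = {"sfx": 1.0, "music": 1.0, "dialogue": 1.0}

    def final_gain(self, group, sound_gain):
        """Compute the level a sound actually plays back at."""
        return sound_gain * self.groups[group] * self.master
```

Turning down the "sfx" group then attenuates every effect at once, without touching the individual assets.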
– Frank Klepacki
Frank Klepacki is an award-winning composer for video games and television, with credits including Command & Conquer, Star Wars: Empire at War, and MMA sports programs such as Ultimate Fighting Championship and Inside MMA. He serves as audio director for Petroglyph, in addition to being a recording artist, touring performer, and producer. For more info, visit www.frankklepacki.com