Image courtesy of Arabian Art Studios.
In this installment of guest blogger Frank Klepacki’s series on music production, Frank talks about how to program audio for virtual reality games. If you missed Frank’s previous post, you can read it here.
Virtual reality, or VR for short, has been all the rage at trade shows for the past couple of years. There has been quite a bit of anticipation that this may be the next big thing in entertainment and in the video game world. Some impressive demos for the Oculus Rift circulated, and games started surfacing that showed off the technology and its potential for immersion.
The concern was that it was quite costly in the beginning, but you can say the same for any new tech that comes out. Eventually a more cost-effective option presents itself, and the technology becomes more widely accepted. That is exactly what started happening here, with Gear VR, which uses a Samsung smartphone inserted into goggles, and with consoles like the Sony PlayStation coming out with their own supported versions.
With regard to audio, then, the challenge is how to implement it properly for this type of experience. What I found is that it really comes down to the finer details, and to clever positioning of where the audio comes from at any given time.
Let’s start with ambiance. If a player is walking around outside, you want to simulate what it would sound like if you were actually there. The way to do this is to capture a discrete-track surround recording of the ambiance, separate each track to its own audio emitter, and place those emitters around the map the player will be in. When these sit in fixed positions around the player, they will perceive being in the environment, because when they turn their head the ambiance keeps doing its thing cohesively from the same positions. Intermittent ambient sounds like birds and insects can trigger randomly on timers, low in volume, from positioned emitters as well. For example, a bird emitter could be in the trees, insect emitters near bushes, and frog emitters near water. Emitter positions are tracked in height as well, so you can really put things where they would actually feel like they should be.
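The fixed-position emitters with randomized intermittent triggers described above could be sketched roughly like this. This is a hypothetical illustration, not any engine’s actual API; the `AmbientEmitter` class, the sound names, and the timer ranges are all made up for the example.

```python
import random

class AmbientEmitter:
    """A fixed world-space emitter that fires its sound at random intervals."""

    def __init__(self, sound, position, min_interval, max_interval):
        self.sound = sound
        self.position = position          # world-space (x, y, z); height matters in VR
        self.min_interval = min_interval  # shortest gap between triggers, in seconds
        self.max_interval = max_interval  # longest gap between triggers
        self.next_trigger = self._roll()

    def _roll(self):
        return random.uniform(self.min_interval, self.max_interval)

    def update(self, dt):
        """Advance the timer by dt seconds; return the sound when it fires."""
        self.next_trigger -= dt
        if self.next_trigger <= 0:
            self.next_trigger = self._roll()
            return self.sound
        return None

# Place emitters where the sources would actually be in the scene.
emitters = [
    AmbientEmitter("bird_chirp",  position=(4.0, 6.0, -2.0), min_interval=3.0, max_interval=9.0),   # up in trees
    AmbientEmitter("insect_buzz", position=(1.5, 0.3,  2.0), min_interval=2.0, max_interval=6.0),   # near bushes
    AmbientEmitter("frog_croak",  position=(-3.0, 0.1, 5.0), min_interval=5.0, max_interval=12.0),  # near water
]

def tick(emitters, dt):
    """One frame of the ambience system: collect (sound, position) pairs that fired."""
    return [(e.sound, e.position) for e in emitters if e.update(dt) is not None]
```

Because each emitter stays put in world space, the engine’s spatializer handles head turns for free: the player rotates, the emitters don’t.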
Assigning sounds to the physical “bone” objects a player interacts with is also more tedious. For example, the footstep of a mech walking around in VR requires assigning the right sound to the exact foot bone of the model the sound should come from. Any servo sounds have to be assigned to the corresponding legs, and so forth.
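A per-bone assignment like the mech example might look like the following sketch. The bone names, the `BONE_SOUNDS` table, and the `play_at` callback are all hypothetical stand-ins for whatever your animation and audio systems actually expose.

```python
# Illustrative per-bone sound assignments for a mech character.
BONE_SOUNDS = {
    "foot_L": "mech_footstep",
    "foot_R": "mech_footstep",
    "leg_L":  "servo_whine",
    "leg_R":  "servo_whine",
}

def on_animation_event(bone_name, bone_world_position, play_at):
    """Called when an animation event fires on a bone.

    Plays the assigned sound at that bone's current world position, so the
    audio tracks the moving model instead of one generic point on it.
    Returns the sound that was played, or None if the bone has no sound.
    """
    sound = BONE_SOUNDS.get(bone_name)
    if sound is not None:
        play_at(sound, bone_world_position)
    return sound
```

The payoff in VR is that a footstep from the left foot genuinely comes from the left foot’s position, which the player can verify just by looking down.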
Simulating things realistically in the virtual world requires you to think differently about hooking up sound. In a fantasy game where you face a huge dragon, you might have the high-frequency content of the flames emitting from the dragon’s mouth, and the low-frequency content emitting simultaneously from the position of its chest, thereby creating the effect of a rumble in front of you while the mouth is heard above you. Combine this with the first-person sensation of getting hit, and you feel like you’re in the fight.
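The split-frequency trick could be set up as two emitters playing the same source through complementary filters. This is a hedged sketch: the crossover value, the position coordinates, and the `("highpass", hz)` filter tuples are placeholders for whatever per-emitter DSP your engine or middleware provides.

```python
CROSSOVER_HZ = 250  # illustrative crossover point between "rumble" and "roar"

def split_emitters(sound, mouth_pos, chest_pos):
    """Return two emitter configs: highs at the mouth, lows at the chest."""
    return [
        {"sound": sound, "position": mouth_pos, "filter": ("highpass", CROSSOVER_HZ)},
        {"sound": sound, "position": chest_pos, "filter": ("lowpass",  CROSSOVER_HZ)},
    ]

dragon_flame = split_emitters(
    "flame_breath",
    mouth_pos=(0.0, 4.5, 2.0),  # heard above the player
    chest_pos=(0.0, 1.5, 2.0),  # rumble felt in front of the player
)
```

Since both emitters play the same source in sync, the bands stay coherent while the spatializer places them at different heights.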
Not everything, though, is cut and dried in the implementation process. There are curveballs that vary from platform to platform. For example, putting in every last detail might serve the PC experience just fine, but it may not work so well on a Gear VR that runs on your smartphone. Phones can get hot really quickly if you constantly push all their processing power, so the game has to be programmed to prevent that. Art may need to be simplified, such as making textures smaller or reducing the polygon count in the environments. Audio is susceptible to this as well: all the detail work you have in mind may be hogging resources, and it too has to be cut back. So how do you decide?
Maybe you cut back the number of emitters if you went crazy putting them everywhere. Maybe they only trigger when the player is within a certain distance. Each situation as you play the game has to be analyzed for what causes a potential overload. Then you do what makes the most sense and keep the highest-priority items in the forefront.
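Distance-gated triggering plus a priority cap might be budgeted something like this. The voice count, trigger distance, and emitter fields are invented for the example; real budgets depend entirely on the target platform (a phone in a Gear VR versus a PC).

```python
import math

MAX_ACTIVE_VOICES = 8      # hypothetical mobile voice budget
TRIGGER_DISTANCE  = 25.0   # emitters farther than this stay silent

def active_emitters(emitters, listener_pos):
    """Pick which emitters get a voice this frame.

    emitters: dicts with 'position' (x, y, z) and 'priority' (lower = more
    important). First cull by distance, then keep only the highest-priority
    emitters that fit in the voice budget.
    """
    in_range = [
        e for e in emitters
        if math.dist(e["position"], listener_pos) <= TRIGGER_DISTANCE
    ]
    in_range.sort(key=lambda e: e["priority"])
    return in_range[:MAX_ACTIVE_VOICES]
```

The key design point is the order of operations: cheap distance culling first, then the priority sort, so the expensive spatialized voices are only spent on what the player can actually hear and what matters most.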
I worked on a VR experience called “Cursed Sanctum.” It’s a fantasy-based “choose-your-own-adventure” kind of experience. The atmospheres look gorgeous, and the light combat and the different outcomes of where you choose to go are fun to watch. There were sections where audio was perfectly placed in a map, and some sections where it needed to be more sparse. We did, in fact, trigger things as the player approached, and even had some cases where we had to keep certain sounds in front of the player to make key things more obvious and to make sure the sounds directed the player toward certain areas. Some pre-mixed trickery was done for cinematic sequences and for things we wanted the player to turn their attention to. You’d be surprised how completely such subtle tricks can go unnoticed. So at the end of the day you have to do what best serves the game, whether it’s a design decision or a technical decision.
“Cursed Sanctum” for Gear VR is available for download at the Oculus store here: https://www.oculus.com/experiences/gear-vr/1198492796853678/
Frank Klepacki is an award-winning composer for video games and television, including Command & Conquer, Star Wars: Empire at War, and MMA sports programs such as Ultimate Fighting Championship and Inside MMA. He is an audio director, in addition to being a recording artist, touring performer, and producer. For more info, visit www.frankklepacki.com