On Wednesday 5th March I visited the sound design team at Electronic Arts, based in Guildford. I wanted to visit EA to find out how they go about producing and implementing sounds within games, as a reference point for how I might develop further interaction projects.
First of all, I spoke to sound designer Dave Newby, who explained the process of developing sound libraries for new games and the way sounds are implemented during the development stages. When creating a new sound library, the team make a list of all of the possible sound effects and atmospheric tracks that they might use. From there, they either go out to record sounds on location, or take sounds from their existing libraries of effects, in order to generate a new library specific to the current project. Within this library, sounds are categorised according to where they will be used. As a new project develops, the sound designers are provided with prototype videos of the visuals, on which they can begin to place and arrange sound effects and atmospherics. As other areas of the project develop, the sound designers refine the sounds being used to make the audio side of the game as immersive as possible.
Speaking with Barney Pratt, another of the sound designers, who works with atmospheric tracks, helped me to find out how sounds are edited and treated in order to achieve the right feel, and also how they are triggered and arranged within the game world.
Within the game world, sounds are represented by emitters. As I understand it, the placement of an emitter in the game world determines how atmospherics are applied to the sound being played – it does this by altering the frequency range of the sound sample. For example, reducing the high and low frequencies of a sound sample will make it ‘sound’ as if it is further away. This adjustment, coupled with other effects such as reverb and chorus, contributes to the processing of the sound to make it suit the environment in which the emitter is placed.
If, for example, an emitter for the sound is placed directly in front of a character on screen, the emitter will make the sound appear close by. If, however, the emitter is placed further away from the character, possibly even behind walls or other objects, the emitter will take into account the things in the game world that obstruct the character from the sound, and adjust the sound sample accordingly.
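I didn't see EA's actual engine code, so the following is only a minimal sketch of the idea as I understood it: gain falls off with distance from the emitter, the high frequencies roll off the further away it is, and occlusion by walls or objects clamps the cutoff much harder. All of the function names, curves, and constants here are my own assumptions for illustration, not anything from their system.

```python
import math

def distance_gain(distance, reference=1.0, rolloff=1.0):
    """Inverse-distance attenuation: with rolloff=1, gain halves as distance doubles."""
    return reference / (reference + rolloff * (max(distance, reference) - reference))

def emitter_cutoff_hz(distance, occluded, base_cutoff=20000.0, occlusion_cutoff=800.0):
    """Roll off high frequencies with distance; clamp hard when geometry occludes the emitter."""
    # Hypothetical curve: the cutoff halves for every 10 units of distance.
    cutoff = base_cutoff * (0.5 ** (distance / 10.0))
    return min(cutoff, occlusion_cutoff) if occluded else cutoff

def one_pole_lowpass(samples, cutoff_hz, sample_rate=44100):
    """Apply a simple one-pole low-pass filter at the given cutoff frequency."""
    alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)  # the output chases the input; high frequencies can't keep up
        out.append(y)
    return out

# A distant, occluded emitter ends up quiet and muffled; a nearby one is loud and bright.
gain = distance_gain(distance=12.0)
cutoff = emitter_cutoff_hz(distance=12.0, occluded=True)
```

With a setup like this, moving an emitter behind a wall only changes two parameters (gain and cutoff) rather than requiring a different sample.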
On a technical level, I also found out how sounds are looped and mixed seamlessly at runtime – something I have been having difficulty with in my Flash experiments. In the examples I saw, their approach was to create a sound sample of around eight seconds that maintains a constant volume. This is then streamed within the game, which keeps memory usage down, and can be faded up or down and looped using code triggered by different events in the game or interactive experience. This method saves having to load separate samples for the start, duration and end of an atmospheric track, and allows transitions and adjustments to be made more quickly.
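To make sure I'd understood the approach, here is a rough sketch of it in Python: one constant-volume loop is played back in small blocks by wrapping the read position, and game events only ever change a fade target, so every transition is a fade on the same streamed sample rather than a new sample load. The class and its parameters are hypothetical, not EA's actual implementation.

```python
class StreamedLoop:
    """Stream a constant-volume loop in blocks, fading toward a target gain.

    Events call trigger() to set the target; next_block() is called by the
    audio engine and moves the current gain toward it one step per block.
    """
    def __init__(self, samples, block_size=512, fade_per_block=0.1):
        self.samples = samples          # the pre-rendered constant-volume loop
        self.pos = 0                    # current read position within the loop
        self.gain = 0.0                 # current fade level
        self.target_gain = 0.0          # set by game events (0.0 = silent)
        self.block_size = block_size
        self.fade_per_block = fade_per_block

    def trigger(self, gain):
        """Called by a game event, e.g. the player entering a new area."""
        self.target_gain = gain

    def next_block(self):
        """Produce the next audio block, looping seamlessly by wrapping the position."""
        # Step the fade toward the target a little each block.
        if self.gain < self.target_gain:
            self.gain = min(self.gain + self.fade_per_block, self.target_gain)
        elif self.gain > self.target_gain:
            self.gain = max(self.gain - self.fade_per_block, self.target_gain)
        block = []
        for _ in range(self.block_size):
            block.append(self.samples[self.pos] * self.gain)
            self.pos = (self.pos + 1) % len(self.samples)  # wrap = the loop point
        return block
```

Because the sample holds a constant volume, the wrap point is inaudible, and fading in or out at any moment never clicks.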
The other interesting technique I found out about is mid/side recording. This is a technique whereby a mono ‘mid’ signal and a ‘side’ signal are combined, and the perception of depth within the sound can be changed by increasing or decreasing the volume of the mono source – it’s a bit more complicated than that, but I have a more accurate reference link below. The resulting effect is that you can audibly zoom in and out within a soundscape or atmospheric track – this may be an interesting technique to try out in some sort of interactive experiment.
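The decode step, at least, is simple enough to sketch. The standard mid/side decode is L = M + S and R = M − S; scaling the side signal relative to the mid before decoding is one way to get the ‘zoom’ effect described above. The gain parameters here are my own naming, added for the interactive case.

```python
def ms_to_lr(mid, side, mid_gain=1.0, side_gain=1.0):
    """Decode mid/side sample lists to left/right: L = M + S, R = M - S.

    Raising mid_gain relative to side_gain narrows the stereo image
    ("zooming in" on the centre); raising side_gain widens it.
    """
    left, right = [], []
    for m, s in zip(mid, side):
        m *= mid_gain
        s *= side_gain
        left.append(m + s)
        right.append(m - s)
    return left, right

# With side_gain=0 the image collapses to mono: fully "zoomed in".
narrow_l, narrow_r = ms_to_lr([0.5, -0.2], [0.1, 0.3], side_gain=0.0)
```

In an interactive piece, side_gain could be driven directly by mouse position or proximity, which is what makes the technique appealing here.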
Overall, I found the experience really interesting, and although the format I am working in is slightly different, the insight into how sound is dealt with in another form of interactive entertainment gave me ideas about how I can develop certain aspects of my project. I also feel that I got a lot of good information about the technical side of sound design, particularly regarding what to consider about how sound could be altered by different interactions.