Virtual reality sound in The Turning Forest

Published: 5 May 2016
  • Chris Pike (MEng PhD)

    Lead R&D Engineer - Audio

In April 2016, The Turning Forest premiered at the Tribeca Film Festival and made it into Wired.com's eight favourite VR pieces at Tribeca.

The Turning Forest is a magical sound-led VR fairytale written by Shelley Silas and directed by Oscar Raby. It was created for the Oculus Rift and at Tribeca we also used a SubPac (a haptic vest) to enhance the audio experience. The experience took place in a magical forest installation built from acoustic blankets to reduce the noise levels.

Listen to an audio feature on the making of The Turning Forest.

The original production was one of three short dramas that were commissioned by the EPSRC Programme Grant S3A: Future Spatial Audio for an Immersive Listener Experience at Home (EP/L000539/1) and the BBC as part of the BBC Audio Research Partnership. Eloise Whitmore and Edwina Pitman produced a short audio feature (above), telling the story of how The Turning Forest went from research project to film festival.

BBC Click recently featured The Turning Forest after its world premiere at Tribeca. They give a great introduction to binaural sound and the work that we're doing on this project and beyond. The full episode is available on BBC iPlayer, but the piece on The Turning Forest is embedded below. Below that, I give a little more technical detail on how the 3D sound was created.

There has been a lot of discussion about the importance of sound in virtual reality this year. There are now tools available for creating and distributing 360˚ and VR experiences with dynamic binaural sound, i.e. headphone sound that gives a 3D spatial impression and updates according to your orientation. With The Turning Forest VR project, our aim was to demonstrate the impact that high-quality 3D sound production can make in virtual reality content. To achieve this, we built two major components of our audio research work into a production workflow for VR: dynamic binaural rendering and the Audio Definition Model.
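To give a rough idea of what binaural rendering involves, here is a minimal sketch in Python that places a single mono source by convolving it with a pair of head-related impulse responses (HRIRs). The HRIR dataset indexed by azimuth is a hypothetical assumption for the example, not a description of our production renderer.

```python
# Minimal sketch of binaural rendering for one sound object. The HRIR dataset
# (a dict mapping azimuth in degrees to a left/right impulse-response pair) is
# assumed for illustration; it is not the production renderer described here.
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(mono, azimuth_deg, hrir_set):
    """Convolve a mono signal with the left/right HRIRs nearest to azimuth_deg."""
    # pick the measured azimuth closest to the requested one (wrap-around aware)
    nearest = min(hrir_set, key=lambda az: abs((az - azimuth_deg + 180) % 360 - 180))
    hrir_left, hrir_right = hrir_set[nearest]
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)  # (samples, 2) headphone signal
```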

Our binaural production system, previously used to create the Fright Night radio dramas, was used to make a broadcast-quality sound mix for headphones, using real-time head tracking to adapt the 3D audio scene to the listener’s orientation. It was integrated with a synchronised 360˚ video viewer to allow for spatial alignment of visual and sound sources, as previously used on the Unearthed production for BBC Taster.
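The head tracking is what makes the mix dynamic: source positions are kept in world coordinates and re-expressed relative to the listener's current head orientation before the HRIRs are chosen, so the scene appears to stay still while the head moves. The sketch below builds on the render_binaural example above and handles yaw only, which is a simplification for the example; it is not the actual production code.

```python
# Sketch of head-tracked (dynamic) binaural rendering: re-render short blocks
# of audio, rotating each source into the listener's head frame first. Yaw-only
# tracking and block-wise processing are simplifications for the example.
def head_relative_azimuth(source_azimuth_deg, head_yaw_deg):
    """Rotate a world-space source azimuth into the listener's head frame."""
    return (source_azimuth_deg - head_yaw_deg + 180) % 360 - 180

def render_block(mono_block, source_azimuth_deg, head_yaw_deg, hrir_set):
    """Render one short block with the latest head-tracker reading."""
    azimuth = head_relative_azimuth(source_azimuth_deg, head_yaw_deg)
    return render_binaural(mono_block, azimuth, hrir_set)  # from the sketch above
```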

The big difference between this and previous projects was that this was not just 360˚ video production but virtual reality: a 3D world was created using computer graphics and the listener could move within the scene (within a limited range). We created the audio first and then commissioned the wonderful Oscar Raby and his VRTOV studio to help us turn it into a VR experience. We therefore needed a workflow that allowed them to build an interactive visual world around our 3D sound scene. Using the Audio Definition Model, we could export the audio sources and their dynamic position data into a single WAV file from the binaural production system.
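To give a flavour of what that metadata carries, the sketch below serialises one object's time-stamped position automation into ADM-flavoured XML. The element names echo audioBlockFormat from the Audio Definition Model (ITU-R BS.2076), but this is a simplified illustration rather than a compliant ADM writer, and the footstep trajectory is invented for the example.

```python
# Simplified illustration of object-based position metadata in the spirit of
# the Audio Definition Model: each object carries time-stamped azimuth,
# elevation and distance blocks alongside its audio. Real ADM uses timecode
# strings and many more fields; this is not a compliant writer.
from dataclasses import dataclass

@dataclass
class PositionBlock:
    start: float      # seconds from the start of the object
    duration: float   # seconds this position applies for
    azimuth: float    # degrees
    elevation: float  # degrees
    distance: float   # normalised distance

def blocks_to_xml(object_name, blocks):
    """Serialise an object's position automation as ADM-flavoured XML."""
    lines = [f'<audioChannelFormat audioChannelFormatName="{object_name}" typeDefinition="Objects">']
    for b in blocks:
        lines.append(
            f'  <audioBlockFormat rtime="{b.start:.3f}" duration="{b.duration:.3f}">'
            f'<position coordinate="azimuth">{b.azimuth:.1f}</position>'
            f'<position coordinate="elevation">{b.elevation:.1f}</position>'
            f'<position coordinate="distance">{b.distance:.2f}</position>'
            f'</audioBlockFormat>'
        )
    lines.append('</audioChannelFormat>')
    return "\n".join(lines)

# e.g. an invented trajectory: a creature circling the listener over four seconds
footsteps = [PositionBlock(t, 1.0, azimuth=90.0 * t, elevation=0.0, distance=1.0)
             for t in range(4)]
print(blocks_to_xml("creature_footsteps", footsteps))
```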

Our software for binaural rendering and handling the Audio Definition Model was then built into plug-ins for the Unity game engine, which was used by VRTOV to produce the graphical content. So we could export our complete object-based 3D audio mix from the audio workstation to the game engine via a single file.
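To sketch what the engine side of that handover involves, the example below shows the per-frame bookkeeping a plug-in of this kind might do: transform each object's world position into the listener's frame using the tracked head pose, then convert it to the azimuth, elevation and distance a binaural renderer expects. The yaw-only rotation is a simplification for the example; this is not the actual Unity plug-in code.

```python
# Sketch of per-frame object handling in a game engine: given the listener's
# tracked position and yaw, express each object's position relative to the
# listener and convert to polar coordinates for the binaural renderer.
# Yaw-only rotation is an assumption for brevity.
import numpy as np

def world_to_listener(source_pos, listener_pos, listener_yaw_rad):
    """Return the source position in the listener's head frame (yaw only)."""
    rel = np.asarray(source_pos, float) - np.asarray(listener_pos, float)
    c, s = np.cos(-listener_yaw_rad), np.sin(-listener_yaw_rad)
    rot = np.array([[c, -s, 0.0],   # rotate about the vertical axis by the
                    [s,  c, 0.0],   # inverse of the listener's yaw
                    [0.0, 0.0, 1.0]])
    return rot @ rel

def to_polar(rel):
    """Convert a listener-relative position to (azimuth, elevation, distance)."""
    x, y, z = rel
    distance = float(np.linalg.norm(rel))
    azimuth = float(np.degrees(np.arctan2(y, x)))
    elevation = float(np.degrees(np.arcsin(z / distance))) if distance > 0 else 0.0
    return azimuth, elevation, distance
```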

One additional trick that we used for the installation at the Tribeca festival was to add a low frequency effects signal through a device called a SubPac, a backpack that translates the LFE into body vibrations. This shook the listener with the footsteps of the creature in the forest, which was fun.
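The feed for the SubPac can be derived by low-pass filtering the mix, or a dedicated effects stem, so that only the low-frequency rumble of the footsteps reaches the tactile transducers. Below is a minimal sketch of that idea; the 100 Hz cut-off and fourth-order filter are assumptions for the example, not the settings used in the installation.

```python
# Minimal sketch of deriving a low-frequency effects feed for a tactile device
# such as the SubPac. The crossover frequency and filter order are assumptions.
from scipy.signal import butter, sosfiltfilt

def lfe_feed(mix, sample_rate, cutoff_hz=100.0):
    """Return a mono low-passed signal suitable for a tactile transducer."""
    mono = mix.mean(axis=-1) if mix.ndim > 1 else mix
    sos = butter(4, cutoff_hz, btype="lowpass", fs=sample_rate, output="sos")
    return sosfiltfilt(sos, mono)
```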

We plan to give more details on these tools and the workflow in a technical paper in the future. There was obviously a lot learned during this process which can be improved upon with further development, but we feel it allowed us to create a rich and immersive sound scene that greatly enhanced the virtual reality experience.

The Turning Forest VR would not have been possible without the excellent work of a large and talented team:

BBC R&D Developers: Richard Taylor, Richard Day, Tom Nixon

S3A Researchers: James Woodcock, Andreas Franck, Phil Coleman, Dylan Menzies

Sound Production Team: Eloise Whitmore, Tom Parnell, Steven Marsh, Ben Young, Paul Cargill

Head of BBC R&D Audio Team: Frank Melchior

As mentioned above, the original production was one of three short dramas commissioned through the S3A project and the BBC Audio Research Partnership. These dramas have already been used in several research studies. The content itself is available as object-based Audio Definition Model WAV files from the University of Salford, and a paper discussing the production was presented at the AES Convention in Paris.
