DD DAY 2021 Masterclass 3 – music and sound for moving image

Nainita Desai & David Butler

Welcome to the transcript of our third masterclass/expert talk session for our DD Day 2021 programme. This time we invited award-winning film composer Nainita Desai with host David Butler (DD Day trustee and DD researcher). They talked about why and how Delia is an inspiration for Nainita, and discussed a few examples of Nainita's work making music and sound design for film. This was a chance to get under the hood of how a highly dedicated and passionate composer creates their work, and how this relates to pioneers like Delia.

With grateful thanks to Arts Council England for supporting this year’s project, whose theme was imagination.

DAVID: Hello everybody and welcome to the third of these Delia Derbyshire Day online sessions running throughout the latter half of 2021. We are funded by Arts Council England and Granada Foundation.

So, this is the third of the online sessions that Delia Derbyshire Day is doing around the theme of imagination. I’ve always said one of Delia’s greatest gifts is to our imagination, that kind of wondrous thing of taking an everyday sound, which might be a metal lampshade, most famously, or a knock on the door or her own voice and then transforming that into something extraordinary. It’s that gift to the imagination, which I think is one of the most profound things that her work does.

That certainly applies to our guest for tonight's session, Nainita Desai. So Nainita is one of the most inventive and distinctive composers for contemporary screen media, running across film, television and video games. She was a BAFTA Breakthrough Brit, and in 2020 she won the IFMCA's breakthrough composer award.

Her recent credits include American Murder (2020), which I think Netflix are currently saying is their most watched documentary; For Sama (2019), another documentary, which won the RTS Craft Award for best score. And most recently, The Reason I Jump (2021), released earlier this year – some of you might have caught it in cinemas in the UK, and it's streaming in the US at the moment, isn't it, Nainita, I think.

NAINITA: It’s actually being released on DVD and Blu-ray, and digitally I think on iTunes on the 6th September! So just a few more days and you can watch it on these platforms.

DAVID: So there’s so much to talk about there. And what we are going to do is, Nainita’s put together a series of clips from her work and we’ll play those, talk about them and really just reflect on the role of sound and music within screen media, film, television and we might talk about video games as well.

I thought to begin with, in time-honoured fashion, it would be lovely to hear from you Nainita, sort of how you got into sound and music, how you developed that love of audio and that might lead us into some connections with Delia, because there are so many parallels in that both you and Delia have a deep interest in mathematics, you both studied mathematics and also both of you are sound designers and composers, so there are lots of points of reference there. So maybe if you could start by saying how you actually developed that love and interest?

NAINITA: Absolutely. Thank you David. And hello, everyone, it’s lovely to connect with you today on this very special Delia Derbyshire Day session. I was born and brought up in London. My musical background is different and unusual I guess, my career path is different to the normal film composer route.

Music was always a part of my DNA. I went to a Church of England school and so I was very immersed in the school choirs, thank goodness for free music lessons in the day when I learnt the violin and the piano at school. So, I was a part of the school orchestra at primary school playing the violin. When I got into secondary school I immersed myself in music. I was also very academic and at home I was brought up as a British-Indian. I was trotted off to the Hindu Temple at the weekend by my parents and was then forced to learn the Sitar and Indian classical music. So I have a grounding and a love of all different musical styles and genres. At secondary school, I had my own band, I wanted to be a singer. I was very much into jazz, I was introduced to Flamenco music, I was very much into world music as it’s called, for want of a better word, I suppose. There was never any delineation between anything, sound interested me.

I was pushed down this road of being very academic. I felt I had to have a backup plan, so going into the arts was never a career option. I remember going to the careers department at my secondary school at the age of about 12 or 13. I looked through various books and pamphlets about what careers you could do, what you do when you leave school. My dream career had to involve technology – I loved technology, I was a bit of a geek at school, into computers and music and all of that. I thought I was really interested in becoming a studio manager at the BBC.

It encapsulated my love of technology and creativity, and I thought that there was a career path there – perhaps one day, a dream, that I could go to work at the BBC Radiophonic Workshop. That was my career goal. That was my fantasy: to work for the BBC Radiophonic Workshop. I thought, how do I get into it? Because it's all word-of-mouth, in this sort of department up in heaven in the BBC somewhere!

Of course, I was too late, because by the time I was coming up through the ranks, the department had closed down. As a child, I was very, very interested. My ears perked up at '70s and '80s television theme tunes. Of course, as a young kid, being exposed to the Doctor Who theme tune was a huge deal for me and incredibly inspiring.

Then I discovered connections, I made those connections from being at school: the school careers department, the Radiophonic Workshop, BBC, Doctor Who, and loving Blake's 7 and all the theme tunes, you know, all of that – children's TV, Grange Hill, Roobarb and Custard, classic theme tunes. What was the other one? Roald Dahl's creepy Tales of the Unexpected. Then of course Doctor Who. My Doctor Who was Tom Baker, I grew up with Tom Baker and entered a Blue Peter competition once. I loved to draw. I was really into art. I drew my favourite character and it happened to be Tom Baker. And his scarf of course.

So that is when I discovered Delia. She was the first female role model for me in terms of music and sound – I mean there are very few female role models. As a teenager, I got interested in sound, you know, the sonic world. Having a love of world music, musical genres didn’t mean much to me.

Being interested in recording, tape recording, I bought this four-track cassette multitrack machine, and my parents had an old reel-to-reel in their attic which I brought down, and I'd experiment with recording and playing. Then I got my first synthesiser. It was in the school music department – I was about 14 years old, I think – and it was an EMS VCS3 synthesiser which, for those in the know, is a legendary, pioneering synthesiser. The school bought it for about £2,000. Now, if you want one, they're rare and probably cost about £20,000 on the vintage synthesiser market.

I said to the Head of Music, do you mind if I borrow this, I want to fiddle around with it. And they said, yes, of course. And so I took it home, and I didn’t take it back! And I had it for I think two-and-a-half years.

(Laughter)

I was playing around. That was my introduction to sound synthesis, fiddling around with the little pins and the grid. When I left school, I felt this immense sense of guilt and I took it back.

(Laughter)

They said, thank you for this, we had no idea that you’d had it all this time, you know. So I shouldn’t have been so honest perhaps!

DAVID: That was such a fateful day. Whoever bought that VCS3. Then also the generosity of the school to let you do that. So you were having to work out how to make sense of that yourself because it’s not the most immediately obvious piece of kit to find your way around. So you were having to discover what it could do?

NAINITA: Yes, to discover what it could do on my own, and it was all about having the time, as a hobby, to develop that skillset and learn what sound synthesis is – VCOs, oscillators and all of that.

I was also very academically inclined, very much into the sciences. I had a natural affinity for mathematics. I loved maths. From a very abstract point of view, I was very interested in the beauty of numbers. That has stayed with me through my whole career and my whole life. I ended up going to university and doing a degree in mathematics.

Music and sound and film, my love of film, was very much a part of my secret hidden life, if you like, when I was going through university. So I did a thesis, and I tried everywhere I could to link my education with sound and physics – I wasn't bright enough to take physics as a degree, but I did do maths. So I did my thesis on the wave equation. I was very much into computer programming, and for my thesis I looked at the wave equation and developed my own form of synthesis, as part of my interest in synthesisers. Little did I know, it was actually FM synthesis – the basic fundamentals behind Yamaha's frequency modulation synths. The wave equation describes the single component that sound is made up of, and when you modulate frequencies at certain speeds, that is the fundamental basis of synthesised sound.
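As a hedged illustration (a generic sketch, not the actual thesis code), the principle described here – one sine wave whose frequency is modulated by a second sine wave – is classic two-operator FM synthesis:

```python
import math

def fm_tone(carrier_hz, modulator_hz, index, seconds=0.5, rate=44100):
    """Two-operator FM: y(t) = sin(2*pi*fc*t + index*sin(2*pi*fm*t))."""
    n = int(seconds * rate)
    return [
        math.sin(2 * math.pi * carrier_hz * (i / rate)
                 + index * math.sin(2 * math.pi * modulator_hz * (i / rate)))
        for i in range(n)
    ]

# A 220 Hz carrier modulated at 110 Hz: raising `index` adds sidebands,
# turning two plain sine waves into a brighter, more complex timbre.
samples = fm_tone(220, 110, index=3.0)
```

With `index` set to zero this collapses back to a pure sine wave; increasing it is what produces the characteristic metallic, bell-like FM timbres.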

I did this thesis on the wave equation, and that led to me doing a post-grad course at City University on music information technology. It was the closest that I could get to music in those days in terms of a formal education, because I hadn't done – or wasn't bright enough to do – physics, which I'd have needed to go to Surrey University for an amazing qualification there. I studied psychoacoustics, music streaming, music and emotion and all these things, coding and MIDI programming – all the peripheral areas around music technology. We are looking at the early 1990s here; things were still not quite developed. It was the early days of the Internet as well. It was quite primitive in those days.

DAVID: Yes.

NAINITA: And by chance, at City University, when I was there, I met Peter Gabriel who came to the university to see what the students were doing. And he was given a tour and it was like meeting the Queen! I stood in a row, and I shook his hand, as did several other students.

We had to explain what we were doing. One of my projects drew on my love of world music and my background learning Indian percussion and tablas: I was very interested in the combination of maths and time signatures and Indian classical rhythms, which have incredibly complex time signatures. You have to speak the rhythm – which is called bol – so I might speak a rhythm in 7/4 and then play the tablas in 10/4. You get these really complex rhythms, so I wrote an educational piece of software to simplify this, to help people understand how to work with Indian classical music and rhythms. Peter was very interested in that. He said, look me up when you leave university, when you graduate – and of course I didn't.
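A minimal sketch of the idea behind such software (the function and numbers here are illustrative, not from her actual program): a spoken cycle of 7 beats against a tabla cycle of 10 only lands on beat one together every lcm(7, 10) = 70 beats, which is what makes these combinations feel so complex.

```python
from math import gcd

def cycle_alignment(beats_a, beats_b):
    """Return the combined cycle length of two repeating rhythmic cycles,
    and the beats (within it) where both cycles hit beat one together."""
    lcm = beats_a * beats_b // gcd(beats_a, beats_b)
    downbeats_a = set(range(0, lcm, beats_a))
    downbeats_b = set(range(0, lcm, beats_b))
    return lcm, sorted(downbeats_a & downbeats_b)

# A spoken (bol) cycle of 7 against a tabla cycle of 10: the downbeats
# only meet at beat 0, so the combined pattern repeats every 70 beats.
full_cycle, shared_downbeats = cycle_alignment(7, 10)
```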

I was really into sound, and I bought myself a portable recorder, a Sony TCD-D3 I think it was. And I used to go out and record sounds in my environment. I was very much into film and film sound and trying to tell stories through sound, not just music, but through sound.

And sound design. I was very interested in the Coen Brothers and the sound design in their films – Barton Fink (1991), for example: really powerful sound in their storytelling, incredible soundscapes. Of course, someone like Walter Murch on Apocalypse Now (1979) – the sound of the helicopters in Vietnam, the Chinooks perhaps, and how that blurs into the sound of a ceiling fan. Sound is very potent on films like that. And so I decided I wanted to go into the film industry and be a sound designer – or a sound editor, as they were called in those days. I got a scholarship to the National Film & Television School to go and study sound for film.

I went to Film School and did that. But at the same time, I was getting a lot of work outside, working in the real world and I remember getting my first job which was as an assistant sound designer for Bernardo Bertolucci’s film Little Buddha (1993). And I pestered the supervising sound editor for about six months until he relented and gave me a job on the team! That began my career as a sound designer. And so I worked on a lot of features. It gave me a fantastic grounding for what I do now because I kind of had this impostor syndrome where I thought, what I really want to do is become a composer for film, but I didn’t know how to do it. So I got an understanding and experience and a grounding in all the other peripheral areas like sound design and music engineering and so on.

The first feature film I ever worked on as a sound designer, where I did all the sound design, has become a bit of a cult classic: a feature called Death Machine (1994). It's made by the director of Blade (1998) – if you've heard of it, the vampire movie with Wesley Snipes – who also made The League of Extraordinary Gentlemen (2003). Here is the trailer for Death Machine.

(Clip: …Contain some kind of death machine giant robot.

And your friends are letting it out.

Shut up!

GUNFIRE.

Panting and screaming….)

DAVID: That is a lot of sound, Nainita! I just was saying before, I can’t believe my teenage years missed out on that film. It’s a mash-up of Aliens (1986) and The Terminator (1984). When I looked into it, I found out it was banned in several countries. For being very scary and shocking and violent. And your sound design must have been a significant part of that ban actually because it’s so visceral. Could you say a bit about that? Were you left to your own devices on that?

NAINITA: Yes. I had a great working relationship with the director, who gave me a lot of creative freedom. It's a cross between Robocop (1987) and Terminator and it was a great job. I worked on it for three months. I had no idea what I was doing; I just created my own process. I brought in my Mac Quadra 650 at the time, and I had a Roland 760 sampler and 32MB of RAM to play with – a huge amount then. Now I have 128GB on the desktop Mac Pro on my desk.

I'd be in a Soho cutting room editing suite next to the picture editors and dialogue editors. So you had a very strict division of roles. You had a dialogue editor whose job was just to clean up all of the dialogue, because a lot of it was shot against green screen and special effects. They had to do a lot of ADR – automatic dialogue replacement – where they re-record the dialogue that the actors speak: the actors watch themselves and say their lines again, and someone's job is to match it syllable by syllable. I had experience of doing that on other features before, but my job was purely creative, just to come up with the sounds, the monster, the creature effects. The metallic monster had blades coming out of its fingers, and I had to create a sound for every little swipe and snipe. And every time it walked on the ground, I multi-layered all the different sounds together to create the desired monster footsteps. Everything that wasn't recorded on location at Pinewood had to be created from scratch. That was a big baptism of fire for me. I had a lot of fun and creative freedom working on that.

DAVID: Do you keep your sound libraries so that, you know, if you are recording a lot of the original sounds for Death Machine, do you keep it, stockpile it and think, I might use it, if you come back to something from 20 years ago?

NAINITA: Absolutely, I created my own custom sound library. The first job I had was as an assistant sound manager at De Lane Lea on Dean Street in Soho in London, and they had one of the only Synclaviers at the time – a forerunner of the sampler. It had 256MB of RAM in this huge machine that cost £250,000, which was the price of a house! There were only five or six of these machines in the UK, and only five or six sound designers in the UK, which is what they were called. I wanted to do that, so I became an assistant sound designer working on it at De Lane Lea.

I would develop and catalogue the database of sound effects on a Mac they gave me. When I was working on various feature films, I would collect all the sound effects that had been recorded on location, put them into a presentable form, catalogue them, edit them, and create the database for the sound editors to use. Over the years – I mean, I was a sound designer for about two years before my career progressed and I digressed into another area – I built up that library, and I still have my old DAT tapes of sound effects from all the different features, catalogued. I digitised them and transferred them into my computer about 15 years ago, and I manipulate and treat them. So that was a really interesting career diversion. I missed music and wanted to get back into it, but I was still very much interested in sonics. As a teenager, I loved to listen to the sound of records. I was listening to Peter Gabriel's music and The Last Temptation of Christ (1988), which was pioneering and revolutionary in its day.

I was working in Germany on a film for a year, as a Foley artist doing all the sound effects for an imported American drama series, and I really missed London. I came back because I missed the buzz of it. I was without a job, and then I remembered – this was about two years after I met Peter Gabriel – him saying look me up when you finish your degree, and I hadn't. So I wrote a letter and said I would love to visit your studios in Bath, the legendary Real World Studios; he also has a record label called Real World. It was one of the most technologically advanced studios of its day, and still is actually. A couple of weeks later I got a phone call from the studio manager saying, we got your letter, would you like to come down and visit? So I did.

I spent four hours at the studios, I was given a tour, had a wonderful chat. Basically, I talked myself into a job. And they said would you like to be Peter’s assistant and music engineer during the Real World recording sessions which was for me a dream come true. So, I learnt so much.

I started work immediately and worked there in his central hub studio, working with the world’s best recording engineers like Dave Bottrill, the engineer at the time, and working with amazing producers and musicians. My first job was really interesting, in that Peter’s process is that he records everything that’s going on in the studio – if he’s got musicians coming in and they are improvising, we have the record button on all the time. So, his modus operandi is that he wants to capture the magic of performance because if you record with a musician and you don’t have the record button on, you can bet your bottom dollar that you will miss a fantastic take.

DAVID: Yes.

NAINITA: And so, I learnt a lot about how to work with people, manage people and the psychology of working with creativity and creative musicians that way. My first job was to write in this big Bible, this big book, everything that Peter said, everything that was said in the studio during the day. So, if we were recording with Billy Cobham on drums, for example, I would write down, “take two, Peter said that that was a really great take.” You know, I would make a note and have five stars next to it!

(Laughter)

Then you’d bet that two hours later, after everything that we were doing, a lot of things were said, a lot of dialogue, a lot of activity going on, sort of two, two-and-a-half hours later, Peter would say “there was a take that Billy did on the drums a couple of hours ago, can you find that take and play it back for us so we can listen to it.” And so, I would have to be on the ball and remember everything. So, I would go back through my notes and rewind and find the take at such and such a point on the DAT tape one hour 20 minutes in, that Peter asked for. I played back the take and they were like “thanks for reminding us, that is a great one, can we do it like that again”. So, he’s quite fastidious and meticulous and the attention to detail is incredible.

Having that attention to detail, and the degree in maths, helped. I did that, and I'll come back to maths later on as well. But after a while of working with Peter and working as a freelance engineer on other people's music, I really wanted to work on my own music. At the same time, I'd been building up my own recording studio at home with equipment, playing around with synthesizers and drum machines and the technology of the day. My first proper synthesizer was a Roland D-70. Then I progressed and bought the rack-mounted Yamaha TG77 FM synth, which I still love just for one particular, beautiful sound – though I've got the plug-in version of that now. And then I met a music supervisor who knew that I wanted to be a composer for film and television, and he gave me a break: my first job that way was to write the music for the Lonely Planet travel series on Channel 4, which encapsulated a lot of my grounding in world music and technology and my love of film. That was it. From then on, I was officially a film and television composer.

DAVID: Never looked back.

NAINITA: Yes. Never looked back.

DAVID: There are so many things to pick up on. One thing I wanted to ask you about – in relation to Delia as well – is the bridging between sound design ("special sound", as they called it back then) and music. Of course, in Delia's time, in the initial years, they were told that what they were doing was not music, could not be categorised or credited as music – and that goes back famously to Louis and Bebe Barron with Forbidden Planet (1956): it's not music. I wondered if you could say something about that relationship between the composer and the sound designer, because you have such a range of direct experience, doing the roles yourself and talking with other people involved, whether it's Foley or whatever. When you are working on a film's score, how does that negotiation of sonic territory work? I know that people like Ren Klyce and Howard Shore have talked about having a good dialogue between sound design and music; Dick Mills would make a point of putting his sound design in the same key as the scores Dudley Simpson was creating, so there was that kind of working together. I wonder if you could say something about the relationship between the composer and the sound designer – what you have found effective, when you like to have those conversations, when they can go awry and when they work really effectively?

NAINITA: Yes. These days, things tend to be very compartmentalised. You are working in isolation as a composer. I get sent rough cuts and rough sequences of the films or the TV shows or whatever I'm working on. For sound, you just have the very rough, raw sync sound that's recorded on location – it's not cleaned up, and it's quite often in mono. I use Logic Pro (software); you have things like Pro Tools and Cubase as well, and they all do the same thing, but I work in Logic. I import a QuickTime movie into my system and extract the audio – the dialogue and the effects that are there – into my timeline. I always have that audio in my session, and I'm always listening to it when I'm composing, because I'm crafting the music around the dialogue. Dialogue is king. It's the most important thing. I treat the spoken word, the dialogue, like a lead vocal in a song.

I craft and shape the music around the sound effects. And I will have conversations with the sound designer. They tend to be brought on much later than I’m brought on. So, I’m doing a lot of my music, I’m composing a long way before the sound designer is brought on. If it’s a big budget project and they are brought on early enough, as was the case with The Reason I Jump, for example, I’m having this conversation with the director and the editor, who I regard as the Holy Trinity – the director, the editor and the composer, we are in constant conversation and communication all the time.

On something like another film, For Sama, we were trying to blur the lines between music and sound design. I'll always have a conversation with the sound designer to ask what frequencies of sound they are using. I'm listening to the work, and if there's a hum going on in the background, I will sometimes choose the key of my music so that it doesn't clash with that background noise.
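As an illustrative sketch of that idea (the 50 Hz hum and the note mapping are my own example, not from the talk): a steady background hum has a frequency, and the standard equal-temperament formula, MIDI note = 69 + 12·log2(f/440), tells you which pitch it sits closest to, and therefore which keys will sit comfortably around it.

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def nearest_pitch(freq_hz):
    """Name the equal-tempered pitch class closest to a frequency (A4 = 440 Hz)."""
    midi = round(69 + 12 * math.log2(freq_hz / 440.0))
    return NOTE_NAMES[midi % 12]

# A 50 Hz mains hum sits closest to G (G1 is about 49 Hz), so writing
# in G, or a key that contains it, avoids fighting the background noise.
hum_pitch = nearest_pitch(50.0)
```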

I'll have the discussion as to what is going to lead: is music going to lead, or sound design? For example, I am scoring a big action sequence for an American drama at the moment. There are so many explosions and loud sound effects going on that there's no point in me competing with big music at the same time, because if I do, I know that we'll get to the final mix and it will all clash – the music will get dropped right down, and the explosions will always win out. Sound effects tend to win out.

So, I have to check my ego at the door. It's all part of the sonic experience and of telling a story through sound. Having certain sounds is really important, and I understand that sometimes music leads and sometimes sound effects have to lead, and I will have that dialogue with the sound designer constantly. I'll send my rough music mixes to the sound designer so that they can hear what I've done while I'm working on a film. As the director and the editor are editing the film, I'm in my studio working away, writing to picture, delivering rough musical ideas, musical themes, rough mixes, and we go backwards and forwards until we've honed the film down to its final state.

Then I will polish off and record musicians if I need to, or sometimes I’ll bring musicians in during the writing process and experiment as well. It’s a very organic, evolving process with the sound designer, myself, the editor and the director – a bit of a chicken and egg situation, sometimes I’ll write music that inspires the editor and then they will give that to the sound designer. That will inspire them. So, we are going backwards and forwards in this circle all the time. There’s no set routine, if you like. It’s quite a messy process. But you get the happy accidents that happen. Sometimes what I do will inform the sound designer and vice versa and that’s always lovely when the two work really harmoniously together.

That is what happened with The Reason I Jump, where you have a lot of care and craft, time and thought. The thinking process is really important: coming up with a strong concept that is related to the storytelling. It's not just about showing off what you can do with sound effects and music; there has to be a reason for it that's related to the story and the narrative. Ultimately, I am telling a story through music and sound design.

DAVID: Let's play a clip related to that. This is the clip that you picked out from American Murder. One thing I love about this is that you are incorporating everyday sounds. I don't want to spoil it – maybe we'll play it first, then talk about it. But what you were just saying there, about how sometimes you might do something and the sound designer might respond to that and vice versa – in this case we've got social media typing, tapping sounds. Let's play the clip, then talk about how that came about.

NAINITA: Sounds good.

(Clip from American Murder with a woman’s text messages on the screen and being narrated, with gentle screen tapping percussive sounds and orchestral strings).

DAVID: That is a brilliant example. I have listened to that clip a few times, and listening to it again there, there's that blurring between initially thinking it's just Foley sound and then realising it's a musical idea – and then back again – and the way you segue between those is so beautifully done. I was thinking of Mica Levi's score for Zola (2020), where social media sounds are incorporated into the score because it's based around a Twitter feed, and you have done something similar. Could you say something about that particular project? It's also a very good example of the kind of thing you were talking about: incorporating sound into the music.

NAINITA: So yes, the film is American Murder. It's on Netflix – the most watched documentary ever on Netflix, apparently. It's about a woman who is murdered, and the film is told entirely from the perspective of Shanann Watts, the murdered woman. And you think, how is this possible? What happened was she put her entire life on social media, and the filmmakers had access to all her phone messages, text messages, phone conversations, Facebook videos, home videos – everything that was digitally there on social media. So they decided to tell the story through her social media, in a first-person narrative.

Social media is so important and prevalent in this film that I thought it would be interesting to take the sounds of the mobile phone, you know, with the fingers tapping on the mobile phone, tapping text messages and to create percussion rhythms out of that. So, where you see these text messages coming up on the screen like you do in the clip, you hear the sounds of that, the real sounds of that tapping on the screen. And then, as the piece of music evolves, I then create an actual musical percussion rhythm out of the same sounds. And it was a way for me to connect with the audience in a more visceral way. The audience probably don’t notice it, you know, it’s quite subliminal and it affects you subconsciously. You hear that subconsciously but I’m creating tension and, of course, we have got musicians from the London Contemporary Orchestra playing the quartet there. Combining the musical drive and emotional music with those subtle sounds was a way for me to find my way into the film.

On every project that I take on, I want to treat it with authenticity and integrity, as much as possible. For me, using the sounds of the location is quite important in helping to tell that story. There is quite a horrible scene which we are not going to watch – it's before the watershed. The bodies were found in big oil tanks, oil containers. So I took the sounds of the oil tanks, metallic sounds, and created these atonal textures out of the actual sound effects in the film, which is quite disturbing. Watch it and see what you make of it, if you can catch it on Netflix. So that was a really interesting project to work on.

DAVID: Following on from that, you mentioned the visceral quality. We absolutely should talk about The Reason I Jump. It's an extraordinary film, Nainita, and the score... I guess one of the aims behind the film is to give the audience some sense of the experience, the subjectivity, of autism – non-verbal autism in particular.

This was a film that you were involved in really early on – unusually so for the vast majority of composers working in film. One question I wanted to ask first of all is about the discussions you had over how visceral you could go, because there are moments, in terms of intensity, that give some sense of "this is what it might be like" – how far do you push that in terms of the listening experience? Some of my students last year did a brilliant audio feature about tinnitus, and they offered it up to the British Tinnitus Association; they had to make really difficult decisions about how damaging it would be to listen to. So, with the score, was that something you talked about with the director – how far you could push that visceral experience?

I guess, related to that, there’s the balance between more conventional music that pushes the narrative forward, whereas a lot of the music here is putting us into that world – in a way, the world is the narrative, I suppose; how we experience that world, that is the story, isn’t it?

NAINITA: Yes.

DAVID: Yes. That is… a lot to pick up on there.

NAINITA: For people who don’t know, The Reason I Jump is based on a book written by Naoki Higashida, who is now 23 years old; he’s non-verbal autistic and living in Tokyo, Japan. The film is based on that book, and the book is a series of questions – 53 questions – where Naoki Higashida tries to explain to the reader and the world what it feels like to be autistic.

The film embraces those questions. We look at five or six different characters, all young teenagers living around the world, who are all non-verbal and autistic. And interwoven with their stories and their experiences of being autistic are these poetic, abstract sequences from the book – stylised sequences, beautifully shot in 70mm widescreen or macro-photography.

And so, we use sound to help drive the story forward. The aim of the soundscape was to evoke and illustrate the intense sensory worlds of the book using music and sound. Jerry Rothwell, the director, wanted me to create this cinematic immersive experience to represent the world of Naoki Higashida, and we were blurring the lines between sound and music. So, I came on board very early, worked on the film for about 15 months, and filming was still going on while I was writing experimental ideas. The sound designer and myself, the editor and Jerry had these conversations where there was no right or wrong. We didn’t know what we were going to do. The only known core element was the book. So, my approach was to do research, read scientific papers on sound and music related to autism and how autistic people perceive the environment and the world around them.

I took these concepts and ideas from the book and tried to translate them into music, working closely with the sound designer. So, there are several elements core to the experience of sound. One is that – and this is a generalisation, I’m not speaking on behalf of all people with autism – autistic people perceive time in a different way. A lot of autistic people will see the detail in objects before they see the whole picture form, whereas neuro-typical people like myself or yourself, you know, if you are not autistic – I’ll walk into a room and I’ll get the general image of the room around me before I focus on a particular detail. With neuro-divergent people, it’s the other way around. I was thinking, how can we translate that into sound and music? Also, because sound and the environment are so influential and so important, I wanted to create the entire score not using synthesisers, taking found sound from locations, and because the characters are non-verbal, I wanted to use the human voice in an experimental way. I wanted to use acoustic instruments like the cello or the violin. So, the score speaks to all those different elements, focusing on things like taking little fragments of sound that then come together – it’s a bit like pieces of a jigsaw puzzle coming together to form a whole. That is how autistic people experience a situation or a scenario in front of them. Again, repetition is very cathartic for a couple of the characters. I took little fragments of sounds, and I would loop them. As a homage to Walter Murch and Apocalypse Now, I took the sound of a ceiling fan, which you see in the scene. I would take a little fragment of sound and loop it so that it forms a repetitive loop, upon which I would then grow and build the piece of music.
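
The fragment-looping described here – taking a sliver of a recording like the fan and repeating it until it becomes musical material – can be sketched in plain Python. This is an illustrative sketch only (the lengths and fade values are hypothetical), with a short crossfade at each join so the loop reads as one continuous texture:

```python
def loop_fragment(audio, start, length, repeats, fade=8):
    """Extract audio[start:start+length] and tile it `repeats` times,
    crossfading `fade` samples at each join so the loop sounds seamless."""
    frag = audio[start:start + length]
    out = list(frag)
    for _ in range(repeats - 1):
        # crossfade the tail of `out` into the head of the next copy
        for i in range(fade):
            g = i / fade
            out[-fade + i] = out[-fade + i] * (1 - g) + frag[i] * g
        out.extend(frag[fade:])
    return out

# hypothetical "recording": a ramp of sample values
audio = [float(n) for n in range(100)]
looped = loop_fragment(audio, start=10, length=16, repeats=3, fade=4)
```

Each repeat adds `length - fade` samples, so the loop grows steadily while the joins stay smooth – the same looped bed a piece of music can then be built on top of.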

The character in the scene rocks backwards and forwards because it’s a very cathartic, therapeutic form of experience for them. Maybe we should play the fan clip because that might help explain what I am trying to say.

NAINITA: We created this as a very immersive soundscape. So, we mixed this in Dolby Atmos and 360 immersive sound, so you are not going to get that experience now, but if you buy the DVD or get it on iTunes, you will hear it in its full glory in binaural sound.

DAVID: I was going to ask you about that because one of my favourite British films of the last five or six years is Notes on Blindness (2016) – I had the experience of listening to that. Again, another one, a film about trying to give you an experiential sense of how it is, in that case to have lost sight. This is the fans clip.

(Fans clip – fans whirring in a room, the sound amplified and looping as music emerges from the repetitive sound)

Voice in the clip: For a very long time she had so much anger, but she couldn’t tell us about it. She must have had some bullying moments in school. She was wanting to make friends, but others did not know how to be friendly with her. She would scream for hours and then she would cry, and I would cry and not knowing what to do.

That was the time when she used to really jot the crayon on one particular spot, the paper would tear and there were impressions on her desk. But within that, it was such a strong message for others. And I realised that it wasn’t just art, she was giving me the routine of her day and what all was happening, so she would draw what she was eating, she would draw other students around her, she would draw the landscape.

And that is how the journey began.

DAVID: That is stunningly good, Nainita, just incredible. That sequence alone. One of the things about that for me I find is again, the riskiness of how far you push things in terms of losing the audience’s comprehension. You were saying earlier about how in dominant cinema, dialogue is really sitting at the top of the hierarchy of the soundtrack. And there, I find myself at times, there are so many sonic elements in the score. Different fragments. It’s like, there’s a bit over there and something on the right and at the same time I’m not losing the thread of the dialogue. So, the mixing, I can only speak for myself, but that is hugely impressive.

NAINITA: I did that mixing, pushed the music a bit higher. I’m taking fragments of these different sounds, and the whole point is something that I learnt about auditory streaming – the emotional effects of different streams and layers. So, I am creating poly-rhythmic layers; your attention is drawn to different elements poking through all the time. You have rhythms that are not clashing – they are in normal time, double time, syncopated rhythms – so it creates an effect that washes over you, but you can still focus in on the detail.

That’s a way of translating the concept of autism that I mentioned – paying attention to the details of the sounds, but you can still hear the whole picture unfurl in front of you. Even with the vocals, you can hear consonants and vowels of sung words. I do all the vocals in the score, and even then, you don’t know as the audience what I am singing, but I’m singing the phrase ‘beautiful circle’, which I translated into Japanese and then deconstructed into its syllables and vowels again. So that is sung in a broken, fragmented way. I do that several times in other elements where you hear this sung word. I am singing in Japanese but in a slightly garbled way, because the characters are trying to make themselves understood and the film is about communication – it’s about how we communicate with one another. So, I am trying to get across that concept of communication in the sung words as well.

DAVID: So many layers and depths to that score. I didn’t know that that was you singing there.

NAINITA: Yes.

DAVID: There is a clip which it would be lovely to share with people, the one from the recording sessions. Particularly the one with the bass clarinet. It’s lovely to see again, what you are listening to, and then when you actually see it and the brain makes sense of it, that is how you are getting that sound.

NAINITA: Yes.

DAVID: I think it’s a lovely example of how when we talk about sound design, it’s easy for a lot of people to think that must be some kind of electronic manipulation. But sound design, we are doing that all the time with acoustic instruments, and this is a really good example of that.

NAINITA: Yes. Normally on a score – “normal”! – you tend to bring in the orchestra at the end, once all the music has been written and approved; I’ll bring in musicians to play all the parts. Because we were finding our path through the entire process here, working on it for 15 months, I would bring in musicians to contribute to the score, play on the music, and sometimes I would have a musician come to me and I would have no idea what they were going to play – we’d make it up on the day. So, I held semi-improvised recording sessions. I’m never so unprepared – I always have a rough skeleton framework of an idea of what we are going to do – but in normal recording sessions I’ll write the musical parts out.

On American Murder, for example, we knew exactly what we were playing with the orchestra. But on this, it was great because it was more like we were playing around. So, we recorded some prepared clarinet – I brought in this clarinettist; you can play her clip. I took the video on my iPhone, so the quality is not very good, I’m afraid.

DAVID: So, this was you putting into practice the Peter Gabriel lesson of just record everything?

NAINITA: Yes!

(Clarinet clip – a note being played through a clarinet with tin foil on the end of the instrument at the bell, and water in the tin foil)

DAVID: I love that. A complete stranger would say it’s a clarinet of Special Brew or something – it has sat there for ages and it’s been fermenting, a distillery of clarinet or something. Can you say a bit more about the use of acoustic instruments and treating them?

NAINITA: If you heard that properly, it sounds like a didgeridoo, so if you ever want to simulate the sound of a didgeridoo, just take a clarinet, put some water in the bell, some foil over it and blow into it! But I would do a whole bunch of recording with musicians like that. Taking the concept of circular and loops, I brought in the saxophonist who does circular breathing. Even in the techniques of the method of playing the instrument, I would experiment with loops and circularity, and he did circular breathing with the saxophone-playing, which I used in the opening piece a little bit.

In terms of treatment, I would do a lot of recording and then create my own custom sound libraries out of the recording sessions – bring in musicians, do a whole load of recording, put it into the computer, chop it up and create my own sampler instruments.
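
Chopping a long recording into a library of one-shot samples, as described above, is essentially splitting the take wherever it falls silent. A minimal sketch, assuming values below a loudness threshold count as silence (the numbers are hypothetical stand-ins for real audio):

```python
def chop_into_samples(audio, threshold=0.05):
    """Split one long take into separate hits: silence (below `threshold`)
    ends the current hit and starts a new one."""
    samples, current = [], []
    for s in audio:
        if abs(s) >= threshold:
            current.append(s)          # still inside a hit
        elif current:                  # silence ends the current hit
            samples.append(current)
            current = []
    if current:
        samples.append(current)
    return samples

# hypothetical take: two hits separated by silence
take = [0.0, 0.8, 0.5, 0.0, 0.0, 0.6, 0.3, 0.2, 0.0]
hits = chop_into_samples(take, threshold=0.1)
```

Each entry in `hits` could then be loaded into a sampler as its own playable instrument – the "custom sound library" idea in miniature.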

In terms of manipulation, I used a lot of granular synthesis and apps. With the vocals, I did that a bit, finding the balance between multiple effects. Treating the vocals on that particular fan clip, for example, I treated them so extensively that when I played it to the director, it sounded very robotic and electronic. And he said, no, we need to make it sound as though it still has a root in organic, acoustic sound. So, you can treat sounds so much that they end up sounding too electronic and cease to sound as though they are created by humans, and we didn’t want that a lot of the time. We still wanted to maintain an organic root. So, I would have to tone down the effect a bit and experiment with granular synthesis apps.

One of my oldest VST plug-ins – the first plug-in I ever bought – was GRM Tools, created by the French company INA-GRM, and there was a granular plug-in in there. Not having used it for 20 years, I actually came back to it and used it on this. It just worked a treat. Then again, Paulstretch, this free app you can get, where you can take a five-second sound and stretch it so much that it will take 800 days, you know, or 80 years, to play from start to finish. So, I would stretch a sound and then take little elements of that.
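
Paulstretch-style extreme time-stretching is granular at heart: read short overlapping windowed grains from the source at a tiny hop, write them out at a much larger hop, and the sound lasts many times longer. This is a heavily simplified illustrative sketch – the real Paulstretch also randomises the spectral phase of each grain to avoid a metallic flutter, which is skipped here, and all parameter values are hypothetical:

```python
import math

def granular_stretch(audio, factor, grain=8, hop_out=4):
    """Crude granular time-stretch: grains are read from the source at
    hop_out / factor and written at hop_out, so the output lasts roughly
    `factor` times longer. Each grain is Hann-windowed before summing."""
    hop_in = hop_out / factor
    window = [0.5 - 0.5 * math.cos(2 * math.pi * i / (grain - 1))
              for i in range(grain)]
    n_grains = max(1, int((len(audio) - grain) / hop_in) + 1)
    out = [0.0] * (n_grains * hop_out + grain)
    for g in range(n_grains):
        src = int(g * hop_in)          # slow crawl through the source
        dst = g * hop_out              # normal-speed write position
        for i in range(grain):
            out[dst + i] += audio[src + i] * window[i]
    return out

stretched = granular_stretch([1.0] * 32, factor=4)
```

At production quality the grains would be thousands of samples long and FFT-processed, but the stretch comes from exactly this mismatch between read hop and write hop.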

Then I would use that as a starting point. So, I would have a main tree trunk of a recording, and I would go off into branches. And you can just start off with something and end up with something totally different. That was an experimental process. As for the way I recorded the musicians – I had a recording session with a cellist whom I brought in because she’s autistic. She’s the cultural ambassador for the National Autistic Society in the UK. Her contribution to the score was very emotional, very emotive and very personal as well.

She did a totally improvised session. We started off recording and I would create parameters for myself. I would say, OK, we are going to have five notes that can be played – these five notes – and when I point my finger upwards you will go up a tone, and when I point my finger downwards you will go down a tone. So she’d play the cello and we’d do one line, and then she would play the same thing on top of that line. We’d do that seven or eight times. And by the end of that little recording session, we’d have a semi-improvised performance based on my direction, going up and down with my finger and not knowing what we were going to do, and we’d have seven or eight cello lines all interweaving together to create this sea of cello. It was an incredible effect. It’s quite an organic process, which was a lot of fun – diving into the unknown and creating pieces that I don’t think we could ever have created had we not had that sort of approach. I like to limit my palette of sounds. I’ll say, OK, I’ll only allow myself to use X, Y, Z, and then within that, there’s a huge amount of creativity that can follow. That speaks to chaos and order.
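
The constrained improvisation described here – a fixed palette of five allowed notes, finger up meaning move up a note, finger down meaning move down – can be modelled as a tiny sketch. The note names and gesture sequence below are hypothetical, not from the actual session:

```python
def direct_improv(allowed, gestures, start=0):
    """Walk through a fixed palette of allowed notes following up/down
    gestures, clamping at the edges of the palette."""
    idx = start
    line = [allowed[idx]]
    for g in gestures:
        step = 1 if g == "up" else -1
        idx = min(len(allowed) - 1, max(0, idx + step))
        line.append(allowed[idx])
    return line

allowed = ["C", "D", "E", "G", "A"]   # a hypothetical five-note palette
melody = direct_improv(allowed, ["up", "up", "down", "up"])
```

Record seven or eight such directed lines on top of one another and you get the interweaving "sea of cello" effect: tight constraints at the note level, unpredictability in how the layers combine.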

I like to have a lot of order around me in the way I structure the sessions or compositions, but within that, I allow room for chaos to happen! And there is that mathematical balance. When constructing a musical cue or piece, I have to have this innate sense of equilibrium, yin and yang, and that comes from my maths background and my love of Bach, for instance. There’s a lot of order and precision and attention to detail in every note and the way the music is constructed. But then I like to allow for chaos to happen, where we don’t know what to expect – to allow for unknown elements to come through. That, I think, is a very exciting balance in composing. We have hit a philosophical note there.

DAVID: In terms of not knowing what is going to come through, maybe now is the time to see what we might get from the audience. For me, it’s such a perfect integration – something which is conceptually and aesthetically so unified. Sometimes when you get a conceptual score, it can be intellectual and impressive, but you don’t feel it, and that is not the case here; it works in all respects.

NAINITA: Ultimately, it’s easy to get carried away with the technology. Everything I wrote, I was always keeping in check with the story and with the director – there has to be a narrative reason for every piece of music in the film and for the approach we took. While I can get carried away with being pseudo-intellectual on the technical side, there has to be a reason for doing it, rooted in the ideas and the concepts of autism. I do that on every project where it’s allowed, I think.

DAVID: I think there are some key decisions, like the decision to keep it acoustic and to avoid electronics. Again, I don’t know if that was a consideration, but to me that avoids that prevailing stereotype of autism and the robot which is quite a pernicious stereotype. It gets trotted out again and again. Acoustic instruments give it that human element.

NAINITA: Exactly. The human touch. It’s about communication and human relationships. Yes. Those key things of using the human voice. People always connect with the human voice and so I am using it, but I wanted to find a way of using it that wasn’t clichéd. I always try and steer away from cliché in my scores unless the director wants it. You also have to give the audience what they expect.

For certain types of film scores, you have to create what the audience expects, and there’s a story to tell. But here, there was an opportunity to do something different and be out of the box, and the director wanted that. So, I had a huge sandpit to play in. I can do anything, you know – I have a computer, I have millions of sounds. I can achieve anything with technology now. And that is one of the problems; it can work against you as a composer living in 2021 with what we can achieve, you know. With Delia and what she had, they were pushing the limits of the technology with the equipment that they had. They were inventing with tape machines, you know, that could do so much, at the Radiophonic Workshop.

When I started, I didn’t have much money, so I had to make the most of my limited resources. Now I have so much that I can do anything. That is a detriment sometimes. I think, I’ll use only three instruments on this, let’s see if I can do it.

DAVID: It’s like creating one of the most idiosyncratic and innovative things out there. A question: What are the considerations for you as a composer when creating the score for an immersive film? Would you record instrumentation and sounds ambisonically, or place them in 360 spheres in post?

NAINITA: What we did with The Reason I Jump, which was unusual, is that I delivered stereo stems. Normally I’ll deliver a stereo mix. I did that then broke it down further.

When we got to the final mix in the dubbing theatre, mixing in Dolby Atmos, we were sometimes reconstructing pieces of music in the dub itself as we listened to all of the different sound design elements that came through. Our perception totally transformed when we got into the final mix itself. So, there’s one cue that we haven’t played, ‘Green Boxes’, with the choir, where I layered my voice 50 times and sing a big choir cue – I turn my voice into a choir. And we were using height and width at the same time in 360. A lot of the sound design in the film was recorded in Ambisonics, in 360 sound, on location – for every track that was recorded, they had 17 different microphones, so the poor sound recordist was recording in Ambisonics with loads of mics and different channels around.

That was a lot of data and metadata. There was a lot of material to work with, and a lot of creative choices to be made. With that particular cue, ‘Green Boxes’, which you can hear on the soundtrack on Spotify or Apple, we were using height and panning things all around us to make you feel totally immersed when you watched it in the cinema – the film was in cinemas a few months ago. We didn’t want to use Dolby Atmos as a gimmick; there had to be a story reason related to the scene.

DAVID: Time to squeeze in two quick questions. Quickly, a question from Fiona – how did you create the incredible monster stomps from the first film?

And Hugh has a question – interested to hear what a composer/sound designer thinks about playback volume in cinemas and whether this may have any effect on the composition process.

NAINITA: Monster sounds. I probably recorded a lot of kitchen utensils and, for the monster sound, my own voice as well – animal growls. You can use lots of things. My cat, you know – I’ll record anything, then pitch it down and treat it and manipulate it.

Everyday found sounds, just banging and scraping and clipping and clomping everywhere.

The other one was about playback volume. Too loud – I hate it. I mean, there has been a huge debate over the last few years about Chris Nolan’s films and the sound in his films. Was it Inception (2010) or Interstellar (2014), I can’t remember which one – about not being able to hear the dialogue, and it being the director’s creative decision that he didn’t want you to hear the dialogue; it’s about creating a wash of sound. Then, whether that has any effect on the composition process – yes, I mean, of course, those decisions are made.

I have just finished a film which will be out on Netflix at the end of the year called 14 Peaks, and it’s about a man who climbs the world’s fourteen highest mountains in seven months. It’s a big symphonic sound, but you hear the crunch of the footsteps on the snow, on the mountains, and it’s really important to put you there – you hear the detail in those sounds, and then when the music comes in, it has an impact. So, the way I chart and map the music out over the course of a film is really important. I want to take the viewer on an emotional journey through a film, as much as the filmmaker wants to take the audience on an emotional journey using the acting and the script and the story and the cinematography. I will also try and do that with the music – take you on an emotional journey through highs and lows, with big cues or very small intimate cues, and just mix it up a lot of the time.

DAVID: That is a great place to end, Nainita. It’s been such a journey tonight as well, and I just wish it wasn’t coming to an end – but it isn’t! Because we’re all going to go off now and, I think, listen very differently after tonight. So, thank you so much for that.

Shout out to Granada Foundation and Arts Council England as well for supporting us and everybody at DD Day and Caro for putting this together and Delia herself and her ongoing inspiration. Thank you so much everybody. Keep in touch! Thank you again Nainita, it’s been absolutely wonderful. It’s been a joy. I just wish we had more time.

NAINITA: Thank you. It’s been wonderful to speak with you tonight. Thank you everybody.

DAVID: I will do the leave button now, I wish we could do more!

NAINITA: Part 2!

DAVID: Yes. Exactly! So glad you mentioned Blake’s 7 (1978-1981). I would not be here if it wasn’t for Elizabeth Parker’s sound design for Blake’s 7.

NAINITA: Yes, Elizabeth Parker! I love her work. She was a huge inspiration to me as well.

DAVID: Was she?

NAINITA: Yes. I had the privilege of meeting her at a BBC composers Christmas party about eight or nine years ago and she was quite something, an incredible composer.

DAVID: So generous. One of my students, a former student, Zoe Kent, a talented musician and composer in her own right, Zoe was writing about Radiophonic Workshop sound design for her dissertation and got in touch with Elizabeth and she was so generous with her time and advice for Zoe. Yes, lots of respect to her.

NAINITA: Yes.

DAVID: I felt the people in the Radiophonic Workshop had that generosity about them. It wasn’t like we have these secrets; it was like, we want to share that.

NAINITA: Keeping the legacy going as well. When I met Mark Ayres, it was like a parting of the waters for me. I mean, he was a hero of mine as well.

DAVID: Yes. Me too. And you are keeping that tradition going, you are keeping that legacy going through your practice and your principles. I think it’s in that lineage, you know – so even though the Workshop itself, as an entity at the BBC, is no more, its legacy is alive and well and thriving in people.

NAINITA: Then I got to work with the BBC Natural History Unit and my first job at the BBC was writing the music for Wildlife on One, the David Attenborough wildlife show 20-odd years ago, it no longer exists, the show, but I love working for the BBC because they give so much. They care so much. Especially the Natural History Unit about using sound. I do a lot of wildlife films as well and sound is very important there as well. Using audio and music to tell stories because there’s no dialogue which is fantastic. So, the music and the sound have to do a lot of the heavy lifting in terms of telling the story.

DAVID: Yes. Very much so. I think we are going into part 2 here, so we’d better stop there. It’s been wonderful. Thanks a lot again.

NAINITA: Thank you, bye.