The mNAP sound installation

Experiencing this sound installation involves stepping into a custom-built soundproofed box (the mNAP) and listening to the twelve-minute piece I made on stereo headphones. Participants can give simple feedback on the piece via a custom iPad interface I programmed using MaxMSP and the Mira app.


The mNAP in action at the India Habitat Centre, Delhi. One happy participant is just stepping out of the sound-proofed listening pod (mNAP) after having heard the piece.

 

The audio file that mNAP participants hear inside the box can be heard below:

 

Recordings

The stereo piece I made for this installation used recordings I collected at various locations in and around Delhi from 30/11 to 4/12/2014. I used these recordings exclusively to make the piece: no other sound files, synthesised or otherwise recorded, were used.

The piece was mixed during an intensive three-day period in my hotel room in the Jangpura Extension area of Delhi. This was by no means quiet or ideal. In fact, the traffic noise outside the hotel often blended in with similar sounds in the recordings, which was quite confusing at times. But noise was reduced considerably, both for me when mixing and for participants when listening, by using Beyerdynamic DT770 closed-back headphones (the piece was deliberately mixed for and on these mid-range cans).

The recordings were made with a Zoom H6 using OKM II in-ear microphones to create a binaural recording. The microphones that come with the H6 are perfectly decent but can’t compete with the OKMs for bass response; neither do they provide the 3D listening experience that these mics create when positioned in the ears during recording. Several people who listened to the piece commented on a strong sense of ‘being there’—sometimes to such an extent that they lifted the headphones off their ears to confirm whether or not they were hearing sound from outside the mNAP.


OKM II in-ear microphones were plugged into a Zoom H6 digital sound recorder.

 

Of course I make no claim to be using the best equipment available here. All of this equipment can be had for a few hundred—rather than thousands of—pounds. I wanted to make the piece on and for the kind of equipment that other groups in India and elsewhere might be able to source and use for their own version of this project.

Reaper and mixing choices

Similarly, I wanted to use mixing software (a Digital Audio Workstation or DAW) available to all. I chose to forego my usual DAW—Nuendo, which is, frankly, expensive—and instead use the inexpensive and very generously licensed Cockos Reaper. As I was also giving a mixing workshop in Delhi as part of this project’s presentation at the Unbox Festival, I wanted participants to be up and running with the same software I was using as easily and cheaply as possible.

Although the sound installation was both an experiment and a piece of social research, it was also, of course, an artistic undertaking. I was therefore quite liberal in my choice, ordering, and mixing of sounds. I wanted to create a form which explored and presented various aspects of the widely divergent urban and rural environments I had recorded. I did not restrict myself to naked presentations of single recordings; instead I layered sounds—some of which were recorded in utterly different ambiences—in order to create a texture which was sonically rich, varied, and formally dynamic, and which also ran the gamut of sound pressure level extremes: from the soothingly quiet to the disturbingly loud.

For mixing geeks: This was my first project working with Reaper and I was very impressed with its stability, flexibility, and workflow. What pleased me most was the ability to apply clip-based processing. I haven’t worked with a system that offered this since my days of using Samplitude in the late 90s. What’s particularly useful about this approach is that you can apply one or more sound processing plugins to an individual clip, rather than having to process it offline or apply the plugins to a whole track (and thus end up using many more tracks than you should really need).

Mixing in spectral bands

The approach I took to mixing the various ambient recordings I made in Delhi was to think in terms of spectral bands. Just as colours can be seen in terms of their place in the electromagnetic spectrum—with frequency ascending as we move from red through green to blue and violet—we can view the audible frequencies of sound waves as grouped into different colours or bands.

Various approaches have been taken to dividing the frequency range audible to humans (approx. 20 to 20,000 Hertz (Hz)). Some are more scientific, such as the Bark scale, with its 24 critical bands; others are more descriptive or pragmatic with regard to various bands’ effect on the perception and mixing of sound. With the latter approach we speak of certain spectral bands giving a sense of, for example, power (the sub-bass region, up to about 60Hz) or brilliance (the highest frequencies, 6000Hz plus), or being woolly (too much bass: 100-200Hz), warm (mid-bass, 200-500Hz), or nasal (mids, 500-1000Hz). Of course these descriptions and the frequency ranges given are approximate, subjective, and dependent on context, but as guidelines they are useful to the mixing engineer.

I set up my Reaper session to have several tracks which were full frequency, i.e. allowing the unaltered recording to come through in the mix, but I also made heavy use of tracks set up with EQ (equalisation or filter) plugins to allow only a particular band to pass. Not only does this approach significantly affect the overall colour and feel of the mix, but it also encourages me always to think about which part of the frequency spectrum I’m interested in when I select a particular sound to add to the mix. It can also lead to clearer mixes overall, as sounds often end up being used for a particular frequency band only, and so are not in what we might call spectral competition with other sounds in the mix.

(I recommend anyone interested in the dark art of equalisation to take a close look at David Moulton’s articles on Spectral Management, as they are insightful, technical, and practically applicable.)

For this piece I split the frequency spectrum into six bands only, as my feeling is that any more becomes too fussy and difficult to manage. More bands also end up competing with each other in a practical mixing session. I used the Linear Phase EQ LP10 from DDMF to do the filtering. In this plugin, both high- and low-pass filters are 12dB per octave. I used only these two filter types rather than bandpass filters, so I needed one of each to create the effect of a bandpass. Going the long way around like this allowed me to tweak the strength, or Q, of each filter separately. Settings are given below for each band:

  1. Sub-bass: 60Hz Low Pass, Q=0.5
  2. Bass: 60Hz High Pass, Q=0.72; 250Hz Low Pass, Q=0.8
  3. Low mids: 250Hz High Pass, Q=0.72; 2000Hz Low Pass, Q=0.7
  4. High mids: 2000Hz High Pass, Q=0.8; 4000Hz Low Pass, Q=0.86
  5. Presence: 4000Hz High Pass, Q=0.81; 6000Hz Low Pass, Q=0.92
  6. Brilliance: 6000Hz High Pass, Q=0.72; 16000Hz Low Pass, Q=0.86
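
For reference, here are the same settings as a small Common Lisp data structure, together with a quick check that adjacent bands meet at shared crossover frequencies, so that the six bands tile the spectrum without gaps. This is only an illustrative sketch; the names are invented for this example and are not part of any existing code:

;; the six band settings as data (frequencies in Hz); NIL means no filter
(defparameter *bands*
  ;; (name high-pass-hz hp-q low-pass-hz lp-q)
  '((sub-bass   nil   nil  60    0.5)
    (bass       60    0.72 250   0.8)
    (low-mids   250   0.72 2000  0.7)
    (high-mids  2000  0.8  4000  0.86)
    (presence   4000  0.81 6000  0.92)
    (brilliance 6000  0.72 16000 0.86)))

;; each band's high-pass frequency should equal the previous band's
;; low-pass frequency
(defun bands-contiguous-p (bands)
  (loop for (prev next) on bands
        while next
        always (eql (second next) (fourth prev))))

;; (bands-contiguous-p *bands*) => T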

For mixing geeks: I found the LP10 more neutral than Reaper’s built-in ReaEQ, which seemed a little boomy to me, especially in the sub-bass region. Also, the linear-phase quality of the LP10 was attractive in this application, as I often mixed several instances of the same clip with different EQ characteristics and so didn’t want phasing issues—created by non-linear-phase EQs—to smear up the mix.
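
To see why linear phase matters here, consider a toy example (nothing to do with the actual mix): summing a signal with a phase-shifted copy of itself changes the level of the result, and a minimum-phase EQ applies different phase shifts at different frequencies, so layered copies of the same clip can partially cancel in some bands. A minimal sketch of the effect in Common Lisp:

;; summing a sine with a copy shifted by 90 degrees peaks at ~1.414 rather
;; than 2.0; the level of the sum depends on the phase offset, which is why
;; non-aligned copies of the same material 'smear'
(let ((shift (/ pi 2))) ; a 90-degree phase offset between the two copies
  (loop for i below 1000
        for phase = (* 2 pi (/ i 1000))
        maximize (abs (+ (sin phase) (sin (+ phase shift))))))
;; => ~1.414, whereas two perfectly aligned copies would peak at 2.0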

Levels

Using an off-the-shelf sound pressure level meter from Radio Shack, I regularly checked and annotated levels whilst recording. This was not only to garner data for comparison against safe working and living noise levels but also to make sure that I could recreate the appropriate levels at various points in the piece when those and similar recordings appeared in the mix. There were, then, loudness targets which I set when planning the piece, and these were related to the levels measured during recording. For instance, the loudest part of the piece, towards the end, reaches 96dB on the Beyerdynamic DT770 headphones. This represents the average maximum loudness I experienced in and near beeping, loud traffic.
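
For context only (this is a general benchmark, not something from my measurements): a commonly cited criterion for safe occupational exposure is NIOSH’s 85 dBA over eight hours with a 3 dB exchange rate, under which sustained exposure at 96 dB would be limited to roughly 38 minutes a day; the piece only touches that level briefly. A quick sketch of the calculation:

;; rough reference only: permissible daily exposure under the widely cited
;; NIOSH criterion (85 dBA for 8 hours, halving for every 3 dB above that);
;; this concerns sustained exposure, whereas the piece peaks at 96dB briefly
(defun niosh-exposure-minutes (level-db)
  (* 8 60 (expt 2 (/ (- 85 level-db) 3))))

;; (niosh-exposure-minutes 96) => ~37.8 minutes
;; (niosh-exposure-minutes 85) => 480 minutes (i.e. 8 hours)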


The sound level meter. Good old Radio Shack does the job with enough accuracy for this application.

 

A simple technique, by no means scientifically accurate but nonetheless pragmatically useful, was used to ensure level translation: I checked mix levels by holding the business end of the level meter about as far away from the headphone speaker cone as the ear drum would be with the headphones in the normal listening position.

Perhaps counterintuitively, given what I’ve just written, my premise was that by placing such dynamically ‘true’ passages in artificial contexts within the flow of the piece and its mix, it would be possible to elicit a participant response quite different from that evoked by the original sounds in their real context. Thus, while we might get used to a dangerously loud environment by ‘turning down the inner volume knob’ (essentially the same as becoming desensitised to the sonic overstimulation), a relatively swift change from quiet to loud might re-sensitise the listener to the original levels. My idea was that this might provoke a different response from that arising in the usual context of these loud sounds, which might be characterised, in general, as an acceptance of something continuously loud which is beyond the individual’s control.

Formal divisions using pdivide

Blank page syndrome—or in this case the blank mix window—is well known to many artists. A form of writer’s block, the blank page is, if you’ll forgive me for overdramatising, a form of paralysing tyranny visited upon the artist by the expanse of nothingness which is the not-quite-yet-underway project. Ideas may abound, but the sheer amount of work and material needed before a final and finely honed form appears is terrifying to contemplate. One tactic to avoid the fear is to divide things up into bite-sized chunks, but losing yourself in details and not knowing where the work as a whole is headed can lead to incoherence and a general lumpiness of form. My solution is generally to plan the work from the top down: all the major goals and tendencies are established in advance, even if they are modified as the unfolding details begin to dictate their own slightly different paths.

I knew, for instance, that I wanted this second mNAP piece to be less linear in its overall progression from quiet to loud than the first mNAP piece, created in Ahmedabad back in March 2014:

 

I also knew that shorter-term goals would make the piece more dynamically variable and engaging, even unpredictable, and that this might help the research aspect of the piece. So I had to divide the projected twelve-minute duration into smaller chunks. I tend to work with quite simple proportions when doing this, either starting with a ratio and working recursively outwards to an unspecified duration (as in ‘hyperboles are the worst thing ever’) or, as in this case, working iteratively inwards from a known overall duration.

For the latter approach I use a function from my algorithmic composition software package slippery chicken: pdivide creates a list of proportionally related times, dividing a duration into a number of smaller durations a specified number of times. We start with a proportion expressed as a ratio—such as the 3/2 used here—and divide the given duration into two parts according to that ratio. Those two parts are then divided in the same ratio, and so on, iterating as many times as the user needs.
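
As a simplified sketch of the idea (this is not slippery chicken’s actual pdivide, which returns rather more information), the recursion can be written in a few lines of Common Lisp:

;; split a duration into two parts in the given ratio, then split each part
;; again, LEVELS times; returns the resulting segment durations in order
(defun proportional-divisions (ratio duration levels)
  (if (zerop levels)
      (list duration)
      (let* ((left (* duration (/ ratio (1+ ratio))))
             (right (- duration left)))
        (append (proportional-divisions ratio left (1- levels))
                (proportional-divisions ratio right (1- levels))))))

;; turn the segment durations into cumulative division times,
;; i.e. the points that become the markers
(defun division-times (durations)
  (loop for d in (butlast durations)
        sum d into time
        collect time))

;; (division-times (proportional-divisions 3/2 (* 12 60) 1)) => (432)
;; (division-times (proportional-divisions 3/2 (* 12 60) 2))
;;   => (1296/5 432 3024/5), i.e. 259.2, 432 and 604.8 seconds,
;;      matching the level 1 and level 2 markers in the output below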

Like most mixing software, Reaper allows the insertion of markers into the track session window. These can be set at specific times and are visible in Reaper as variously coloured vertical lines dispersed across the timeline of all tracks. Markers are very useful for annotating a project and setting the location of important points in the piece that you’ll need to revisit again and again. In setting up my tracks in Reaper, I decided to add marker points automatically according to the pdivide timings.


Screenshot of the Reaper mix with algorithmically generated section markers (vertical lines) clearly visible. Click for a higher resolution image.

 

Here I was aided by another feature of Reaper which I greatly appreciate: its file format is plain text, i.e. it is human-readable, as opposed to the machine-only binary formats preferred by most software. Within seconds I was able to find the part of the Reaper file which contained markers. A bit of programming then allowed me to automate the process of setting markers at the points generated by my pdivide algorithm. The code given below is written in my favourite language, Common Lisp; the marker information it generates is also given. I copied this output directly into the Reaper file, reloaded the project, and was happy to see all the beautifully proportioned target points appear in my mix window.
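
For anyone who wants to try the same trick: a Reaper project file (.RPP) is one big nested block of plain-text attributes, and project markers sit at the top level of the <REAPER_PROJECT block. Roughly, with header fields abbreviated and allowing for layout differences between Reaper versions:

<REAPER_PROJECT 0.1 "4.x" ...
  ...other project settings...
  MARKER 1 432.0 "level 1" 0 0 1
  MARKER 1 259.2 "level 2" 0 0 1
  ...remaining markers...
  <TRACK
    ...items, envelopes and plugins...
  >
>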

(As the pdivide algorithm specifies the number of times to iterate the proportional division process, I organised the marker code output to represent the level of division, from one to five. I was thus able to view the overall form in terms of progressively major to minor formal divisions, or vice versa.)

Common Lisp code to generate the formal division markers

;; Generate Reaper MARKER lines from the proportional divisions returned by
;; pdivide (its third return value). The coarsest divisions come first
;; (level 1) and each finer level follows; a time already emitted at a
;; coarser level is skipped via the DONE list.
(loop for level in (reverse
                    (nth-value
                     2 (pdivide 3/2 5 :duration (* 12 60) :alternate nil)))
      for level-num from 1
      with done = '()
      do (loop for time in (butlast (rest level)) ; drop the piece's start and end times
               for i from 1
               do (setq time (decimal-places time 3))
                  ;; only write a marker if we haven't already got one here
                  (unless (member time done)
                    (format t "~&  MARKER ~a ~a \"level ~a\" 0 0 1"
                            i time level-num)
                    (push time done))))

Output

 MARKER 1 432.0 "level 1" 0 0 1
 MARKER 1 259.2 "level 2" 0 0 1
 MARKER 3 604.8 "level 2" 0 0 1
 MARKER 1 155.52 "level 3" 0 0 1
 MARKER 3 362.88 "level 3" 0 0 1
 MARKER 5 535.68 "level 3" 0 0 1
 MARKER 7 673.92 "level 3" 0 0 1
 MARKER 1 93.312 "level 4" 0 0 1
 MARKER 3 217.728 "level 4" 0 0 1
 MARKER 5 321.408 "level 4" 0 0 1
 MARKER 7 404.352 "level 4" 0 0 1
 MARKER 9 494.208 "level 4" 0 0 1
 MARKER 11 577.152 "level 4" 0 0 1
 MARKER 13 646.272 "level 4" 0 0 1
 MARKER 15 701.568 "level 4" 0 0 1
 MARKER 1 55.987 "level 5" 0 0 1
 MARKER 3 130.637 "level 5" 0 0 1
 MARKER 5 192.845 "level 5" 0 0 1
 MARKER 7 242.611 "level 5" 0 0 1
 MARKER 9 296.525 "level 5" 0 0 1
 MARKER 11 346.291 "level 5" 0 0 1
 MARKER 13 387.763 "level 5" 0 0 1
 MARKER 15 420.941 "level 5" 0 0 1
 MARKER 17 469.325 "level 5" 0 0 1
 MARKER 19 519.091 "level 5" 0 0 1
 MARKER 21 560.563 "level 5" 0 0 1
 MARKER 23 593.741 "level 5" 0 0 1
 MARKER 25 629.683 "level 5" 0 0 1
 MARKER 27 662.861 "level 5" 0 0 1
 MARKER 29 690.509 "level 5" 0 0 1
 MARKER 31 712.627 "level 5" 0 0 1

Timeline annotations

I’ve analysed the mix post hoc to identify the sources of the most prominent sound files used. The Google map of the locations is at https://goo.gl/maps/yl3Ys. The sound’s loudness waveform was created with my rmsps software.
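
(For the curious: the name rmsps suggests RMS per second, and that idea is simple enough to sketch in a few lines of Lisp. This is only an illustration of the principle, not the actual rmsps code.)

;; given a vector of samples and a sample rate, return one RMS value per
;; second of audio, which can then be plotted as a loudness curve
(defun rms-per-second (samples sample-rate)
  (loop for start from 0 below (length samples) by sample-rate
        for end = (min (+ start sample-rate) (length samples))
        collect (sqrt (/ (loop for i from start below end
                               sum (expt (aref samples i) 2))
                         (- end start)))))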


Delhi recording locations mapped onto the loudness curve of the sound installation. If you click the image you’ll be taken to the high-resolution PDF.

 

Here, in list form, are the annotations from the image above:

0:00 Sultanpur Bird Sanctuary traffic roar
0:33 Bird wings in Agrasen Ki Baoli step well
0:42 Voices in Tuklaghabad Tomb’s Dome
0:58 Habitat Centre Pendulum bells
1:25 Sultanpur Bird Sanctuary bird calls
1:33 Bird wings in Agrasen Ki Baoli step well
1:43 Habitat Centre crows
2:00 Habitat Centre water fountains
2:13 Lodhi Gardens distant bus gear change and voices
2:35 Tuklaghabad Fort steps crunching
2:52 Habitat Centre water fountains
3:02 Tuklaghabad Fort step well bass rumble
3:13 Chandni Chowk beeping and street noise
3:17 Habitat Centre Pendulum bells
3:18 Chandni Chowk lane baby and other voices
4:03 Tuklaghabad Fort aeroplane overhead
4:19 Tuklaghabad Fort cricket player’s claps
4:30 Chandni Chowk lane moped beeps
4:33 Dilli Haat Bazar toy ratchet and cartwheel noises
4:44 Dilli Haat Bazar violin melody
4:57 Chandni Chowk lane baby’s cries
5:16 Dilli Haat Bazar distant train hoot
5:26 Agrasen Ki Baoli step well cistern teenagers howling and laughing
5:54 Dilli Haat Bazar singer and violin
6:06 Chandni Chowk auto-rickshaw ride and traffic noise
7:00 Dilli Haat Bazar drumming
7:01 Agrasen Ki Baoli birds
7:06 Chandni Chowk beeping
7:12 Agrasen Ki Baoli close bird and female voice
7:36 Sultanpur Bird Sanctuary bird calls
7:49 Agrasen Ki Baoli female voices
8:14 Dilli Haat Bazar cough and rattling cart
8:39 Dilli Haat Bazar trumpet
8:50 Gurudwara Sis Ganj Sahib Sikh temple tabla and harmonium
9:12 Gurudwara singing cross-fades into Dilli Haat Bazar singer
9:24 Tuklaghabad Fort distant motorbike
9:46 Tuklaghabad Tomb’s Dome bass rumble of voices
9:58 Shri Digambar Jain Lal Mandir Jain temple traffic
10:05 Dilli Haat Bazar singer and Jain temple traffic beeping and voices
11:02 Habitat Centre crows and traffic
11:38 Habitat Centre pendulum bells and bird
12:07 Me thanking the pendulum swinger

Heilige Schlaf(-störung)

5.15am, December 23rd, 2014. Tharangambadi, Tamil Nadu, India.

Pitch black. The dead of night. Except that the stillness that accompanies those well-known phrases is all too conspicuously missing.

Karin is pacing the floor, furious at being woken up yet again. We thought it might be peaceful here but there’s no denying it: love it or hate it, India is just loud. Really loud. Unremittingly so, it would seem. This time though it’s not the traffic, the honking, the drumming, the chanting, or the aeroplanes—we’re miles away from most of that. Instead it’s a recording of Hindu religious music blaring out at around 105 decibels.

The only thing to do is get up and find the source. After all, it might lead to some colourful procession or cultural insight otherwise missed. And being a prying tourist voyeur is preferable to staying in your room, holding your hands over your ears, and damning the culprits’ complete indifference to your desperate need for sleep.

Our investigation was rewarded with village life already in full swing, as well as a strong cup of tea from a roadside stall. But this was after we’d traced the source of the disturbance to a tiny Hindu temple, completely empty both within and without, except for a single woman drawing a beautiful Kolam pattern on a wet floor, and seemingly oblivious to the hearing damage being inflicted upon her and everyone else within a hundred meter radius. The loudspeaker horns decorating the temple’s tiny turret were impressive and highly effective: there was no way anyone was going to sleep with this going on, and as we soon found out, the muezzin down the road was struggling in vain for a little attention of his own—perfectly illustrating in sonic form the religious demographics and perhaps even the tensions of India today.

Since my days of living in the old town in Salzburg—with its many church bells simultaneously belting out their invitations to worship on a Sunday morning—I’ve found the imposition of religiously motivated noise pollution both arrogant and annoying. Why inflict any one religion on all and sundry, of various religions and none, unless as a show of power? Why should anyone be woken up like this when religious people are perfectly capable of setting alarms if they want to get to their form of worship on time? I get that the peal of church bells and the like can be pleasing in certain contexts, but in my opinion nothing involving a loudspeaker should ever be countenanced. Thankfully, in many parts of the world this is against the law, and the law is enforced. Given that India has its own noise abatement ordinances, I’m curious to know why such flagrant abuse is tolerated on such a regular basis.

It’s not just India though. A few years back, in Cambodia, I was kept awake all night by Buddhist chanting. When someone dies there this can go on for days, apparently. And despite my Buddhist sympathies and all my various travels in Asia I’m certainly not deaf to the banality and complete lack of melodic invention in Buddhist chants. If the volume levels don’t get to you the continuous intonations on only three pitches certainly will. Carnatic musical forms are an absolute pleasure in comparison, except at 105 decibels and at five in the morning.

So why the amplification? Just because we can? Or is there a more sinister aspect to this, a sort of religious war of amplitudes, paralleling political wars—think of the use of the megaphone in contemporary Britain, particularly at election time—and echoing Hitler’s claim that “without the loudspeaker we could never have conquered Germany”?

I’m afraid to say that I’m less accepting of this racket than others. I’m not persuaded by appeals to cultural difference and tolerance. All humans suffer the same consequences of noise pollution and deafening attacks upon our personal space, be that from Audi making especially loud car horns for the Indian market, or the local temple installing loudspeakers. If you’re not in sympathy with my views, consider that a study of traffic police in Indian cities in the south concluded that three-quarters of them suffered from permanent hearing damage due to their work. As R. Murray Schafer commented, “it would seem that the world soundscape has reached an apex of vulgarity in our time, and many experts have predicted universal deafness as the ultimate consequence unless the problem can be brought quickly under control.”

mNAP software

As well as inviting our mNAP participants to take part in our questionnaire after the listening experience, we’ll be capturing their responses in real time while they’re in the listening space. I’m using software I developed in MaxMSP to do this, in particular taking advantage of the iPad-to-MaxMSP interfacing available via the Mira extension.

The mNAP software is a very simple iPad interface which allows the user to

  1. start playback when they are seated comfortably and ready to begin;
  2. see how many seconds remain to be played during the experiment;
  3. record their mood at any time during the experiment by pressing one of four buttons.

Following Maria’s suggestions, we’re looking to capture one of four states, each a combination of two dimensions:

Arousal: ‘unexciting’ or ‘exciting’
Affect: ‘pleasant’ or ‘disturbing’

The software works as follows:

Before guiding the participant into the mNAP, the researcher enters a single-word participant ID into a text field in the MacBook software. This ID should be unique and should also be handwritten onto the consent form and questionnaire so that we can link realtime responses to participants.

The participant enters the mNAP with the iPad and the door is closed. Note that the iPad is merely a controller: the sound file is played back from the MacBook, and the data files are also written to the MacBook.

When the participant presses a mood button, it is time-stamped in milliseconds relative to the start of the playback; the number of the button is recorded against this:

1 = unexciting: pleasant
2 = unexciting: disturbing
3 = exciting: pleasant
4 = exciting: disturbing

When the playback is finished, a text file with all the timestamps is automatically written to disc. The name of the file is the participant ID followed by the date and time it was written (to ensure a unique file name in the case that the researcher mistakenly reuses a participant ID), e.g. michael-2014-12-9-13.44.40.txt
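
For later analysis, these response files will need to be read back in and matched against the mood buttons. Here is a minimal Common Lisp sketch of that step; the exact line format inside the file isn’t described above, so the sketch assumes one ‘milliseconds button-number’ pair per line, and the function and variable names are invented for this example rather than taken from the mNAP software.

;; map mood button numbers to their labels
(defparameter *mnap-moods*
  '((1 . "unexciting: pleasant")
    (2 . "unexciting: disturbing")
    (3 . "exciting: pleasant")
    (4 . "exciting: disturbing")))

;; read a response file and return a list of (seconds . mood-label) pairs
(defun parse-mnap-responses (file)
  (with-open-file (in file)
    (loop for line = (read-line in nil)
          while line
          when (plusp (length line))
            collect (with-input-from-string (s line)
                      (let ((ms (read s))      ; timestamp in milliseconds
                            (button (read s))) ; mood button number, 1 to 4
                        (cons (/ ms 1000.0)
                              (cdr (assoc button *mnap-moods*))))))))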

A photo of the iPad interface is at the top of this post.

Sultanpur Bird Sanctuary

You’d be forgiven for looking at this stunning photograph by Shradha Jain and thinking that we’d found ourselves in idyllic surroundings full of the sounds of nature in their purest state. Would that it were true. We rose at 5am to drive to Sultanpur, arriving at 7.15, just after the park opened. This spot was supposed to be the place where I would be able to record nature in its timeless essence, without the sounds of modern mechanical life, in particular traffic noise and honking.

Instead of the calming effect of unadulterated wildlife sounds, our morning was full of frustration as it slowly dawned upon us that it was going to be impossible to escape the constant drone of the high-geared trucks on the nearby road. The lack of hills or undulations, trees or other undergrowth meant that little to no sound pollution was absorbed before spilling into the bird sanctuary and ruining an otherwise beautiful setting. Looking on the bright side though, this is the perfect example of what we’re trying to raise awareness of: that oases of tranquility essential to human rest and recuperation are being obliterated by sound pollution from traffic and other sources of unwanted noise. OK, there are still some places on earth where the roar of traffic does not penetrate, but they’re increasingly hard to find and seemingly completely absent around Delhi.

Dilli Haat Food and Craft Bazar


Another shot of me with that faraway look; no idea where that comes from, because whilst recording these ambiences I feel very focussed on the present moment. In fact this type of recording process is extremely and instantly meditative for me, despite being in public and attracting a lot of stares and bemused looks from the passersby.

The Dilli Haat food and craft bazar had a very nice feel to it when we were there two nights ago. It was ever so slightly buzzing but by no means too full, with no hard sell or over-eager attention from any of the vendors, and the wares were of exceedingly high quality—apparently all the stallholders in this period are award winners of some kind or other.

We had some excellent food whilst there, from an organic restaurant using its own locally-grown ingredients. Savvy bunch of people, this: no Coke or Pepsi on the premises, and they have their own seed bank, so they’re fighting the likes of Monsanto in a quest for seed freedom. I’d have bought a load of their products at the attached shop if I’d had the luggage room.

The soundscape at Dilli Haat was very rich. A hand-built wooden toy created very distinct ratcheting sounds, whereas various musical instruments were to be heard both in solo demonstration and in amplified group performance. The sound file below is quite long at 6:39, but it encompasses a slow walk around the marketplace, creating some subtle ambient fades that I’d be at pains to recreate any other way.

Step well cistern

This was a great find: a step well in the heart of Delhi, namely Agrasen Ki Baoli (see the recording locations map). Not only did this have some interesting sonic reflections between the parallel walls you can see here, but at the bottom it had a very low crawl space into the now empty water cistern. This was about 6m in diameter and perhaps 20m in height. The sound reflections were very special: very strong and clear, but still allowing clarity for the spoken voice (it wasn’t so wide after all). Moving closer to the middle or the walls of the cistern resulted in quite different reflective patterns striking the ears. I gave an impromptu speech in here not so much as an act of pedantry but—as it was empty except for me—so that listeners here could hear the special acoustic thumbprint of the space: