Reid Studio Project
Finished and ongoing Reid Studio projects.
I feel like I should contribute more to this page, so here we go: the start of a stream of blog posts from me…
Recently I have been doing a lot of recording both in the Reid and on location, and processing in the Reid. I’ve been mainly working on University “Classical” Ensembles. (more…)
Between 28th May and 2nd April I spent all day, every day mixing a project I’d been working on for around 5 years. It goes under the name ‘Splintered Instruments’ and is a 7-track record, recorded mostly at Greenhouse Studios in Reykjavik (http://greenhouse.is/), partly at home, partly in various places where kind people had interesting instruments (including cimbaloms, theremins and dulcitones), in Studio 1 (where Pete Furniss added clarinet and bass clarinet) and also in the Reid Hall itself (double bass and piano were overdubbed during these sessions).
It was a beast. It had gone on for years and, frankly, was getting out of control. It began with Ben Frost (http://benfrost.bandcamp.com/) at the production helm, but due to his increasingly busy schedule the final mixes fell to me and Dan Rejmer, an extremely talented engineer and friend. Dan taught me a lot about mixing during this project (my favourite quote of his: ‘Wrestling with too much distortion…try adding MORE distortion!’), which sounds like madness but can really be made to work.
This was a big project, tackled at one song per day. Most songs were built from somewhere between 40 and 100 individual tracks, which were run out of Ableton Live and through the desk. To my knowledge this had not been attempted in the Reid, and neither had a large-scale mix been done in Ableton Live in the studio. Much to some people’s dismay, I continue to use this piece of software for almost everything I do. Its design allows you to be extremely creative very quickly, and with a flexibility which is, in my opinion, frankly unparalleled among DAWs. Technically it can be fiddly, and it is not specifically designed for large mix projects. It can be unwieldy, but it’s what I’m used to, and personally I find speed of work paramount for best results, so I wanted to mix everything in Live. Since the whole record had been built up within it, this also enabled us to go back to certain stems and approaches if need be, and it helped expand my working practices in an environment I’m comfortable in.
To interface with the desk, Kev and I did some monkeying around. Usually you can only get a maximum of 16 channels out of Live into the desk. (Since this session Michael has told me that an aggregate device he created would have done the job, but Kev and I made another one to achieve the full 24 tracks on the desk.) We created an aggregate device using the two Myteks and an external soundcard, in this case my MOTU Traveler mk3. The MOTU was synced via word clock and everything was linked via AES. I should be able to include more detail on this in the near future (as I’m not in the Reid as I write this…).
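For the curious, the channel bookkeeping works out like this. This is a hypothetical sketch in Python, not real Core Audio code; the device names and per-device channel counts are my assumptions from the setup described, and the DAW simply sees one flat run of 24 outputs:

```python
# Sketch of how the 24 desk channels were spread across the aggregate
# device (assumed layout: 8 channels per interface).
DEVICES = [
    ("Mytek 1", 8),            # AES to desk, word-clock master
    ("Mytek 2", 8),            # AES to desk
    ("MOTU Traveler mk3", 8),  # synced via word clock
]

def aggregate_map(devices):
    """Map flat DAW output channels 1..N to (device, local channel)."""
    mapping, ch = {}, 1
    for name, count in devices:
        for local in range(1, count + 1):
            mapping[ch] = (name, local)
            ch += 1
    return mapping

channels = aggregate_map(DEVICES)  # Live just sees outputs 1-24
```

So Live's output 17, for example, lands on the first channel of the external soundcard, while outputs 1-16 stay on the Myteks.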
We also brought in a load of outboard gear to supplement the Reid’s arsenal. Different compressors were used for different tasks during the session, including the Drawmer Mercenary, a stereo Drawmer compressor/gate (supplied by Sean Williams) and an RNC compressor (supplied by Owen Green). I had also hoped to use the SPL Vitalizer, which had until this point lived in the Russolo Room and which I highly recommend in mix situations, but it has since disappeared… This gear can be seen below (pictures were taken of all the external gear for each track for future reference; these are the pics from the ‘Paris is Burning’ mix). The external gear was routed into the patch bay and then assigned as needed as inserts on individual channels.
Soon we had the full 24 channels plus a load of additional compressors running seamlessly out of Live.
Another trick we used extensively was re-micing through the Reid Hall. Certain instruments had been recorded very dry in different spaces, and we felt that an additional layer of three-dimensionality could be added by using the Hall upstairs as a natural reverb unit. Hardly a new idea, but something I highly recommend: no reverb plugin in the world ever sounds like a real space (discuss). If you want real instruments that were recorded dry to sound real, re-mic them in a real space. This enabled various takes and parts from different places (and periods of time) to be mixed together and recorded as a seamless ensemble (see the ‘Routine’ example, in which cimbalom, MIDI harp, theremin, strings, piano and more are treated as one ensemble). It is no substitute for the real thing, but I believe it sounds far better than a plugin. The process is also slower, which makes you really think about why you are considering using reverb in the first place (in my opinion the no.1 overused effect in contemporary music, and a very cheap way of trying to add ‘atmosphere’ to something…). A pair of PMCs was taken up into the Reid Hall (after it had been booked through the appropriate sources…) and fed from a send on the desk downstairs. Two omni Schoeps mics were placed around 3 metres away and used to record whatever we chose to play.
Example 1: Section from ‘Routine’ using Re-Micing in the Reid Hall (unmastered mp3):
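For contrast, this is essentially all a convolution reverb plugin does: convolve the dry signal with one fixed impulse response and blend. A toy numpy sketch (the impulse response here is a fake exponential decay, not a measured one), just to show why a static convolution can never respond like the real Hall:

```python
import numpy as np

def plugin_style_reverb(dry, ir, wet_gain=0.5):
    """What a convolution reverb essentially does: convolve the dry
    signal with a (static) impulse response and blend.  Re-micing the
    Hall replaces this whole step with the real, living room."""
    wet = np.convolve(dry, ir)
    out = wet_gain * wet
    out[:len(dry)] += dry  # dry signal plus scaled reverb tail
    return out

# toy example: a single click through a fake exponential 'room' decay
dry = np.zeros(4800)
dry[0] = 1.0
ir = np.exp(-np.linspace(0.0, 8.0, 4800))
out = plugin_style_reverb(dry, ir)
```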
The EMT reverb was also used on most tracks. Usually only for a subtle depth to vocals or aspects of a track, but it is highlighted particularly on this part of ‘Paris is Burning’. The little staccato piano hit benefited greatly from that lush, slightly metallic tone of the EMT (@ 2:14, 2:21 etc):
Section from ‘Paris is Burning’ (unmastered mp3):
Groups of tracks were set up and run from the software to the desk. Individual tracks could be tweaked a little within the digital environment (EQ etc.), and then each group was EQ’d at the desk. Much of the music relied on the interaction of multiple layers of the same take, often re-recorded through various other means (prepared amplifiers, megaphones, plugins, different speakers), so this gives you options at a number of levels. It also made clear, in my opinion, that the studio could do with a few more high-class digital plugins to shape things a little when doing large sessions this way: you could then be surgical in the DAW and general at the desk. The studio also lacks options for distortion in the digital realm; installing the newly purchased Soundtoys bundle would help significantly (the Decimator plugin is excellent for a variety of uses). The Voxengo tape saturator is a good starting point, but doesn’t offer much variety. The addition of Max for Live would also enable Live users to use Max patches for DSP (these usually need a bit of tweaking to work with Live, but it’s well worth it!)
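To illustrate the ‘surgical in the DAW’ idea, here is a minimal sketch of the peaking biquad that most parametric plugin EQs are built on. These are the generic RBJ-cookbook coefficients, not any particular plugin’s code, and the 3.2 kHz cut is purely illustrative:

```python
import numpy as np

def peaking_eq(f0, gain_db, q, sr=96000):
    """RBJ-cookbook peaking biquad coefficients: the narrow 'surgical'
    kind of cut you'd make in the DAW before broad shaping at the desk."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / sr
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

def gain_at(b, a, f, sr=96000):
    """Magnitude response of the biquad in dB at frequency f."""
    zinv = np.exp(-2j * np.pi * f / sr)
    h = (b[0] + b[1] * zinv + b[2] * zinv**2) / \
        (a[0] + a[1] * zinv + a[2] * zinv**2)
    return 20.0 * np.log10(abs(h))

# e.g. a narrow -6 dB cut at 3.2 kHz to tame a harsh resonance
b, a = peaking_eq(3200.0, -6.0, q=8.0)
```

The high Q keeps the cut narrow, so the rest of the spectrum passes untouched and broad tonal shaping can stay on the desk EQ.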
The Manley EQ and Vertigo Compressor were run as mix-inserts (Manley first, then into Vertigo) to shape the entire mix subtly and add…well that irreplaceable thing that this equipment adds to mixes. I would recommend just trying this – setting up the Manley and Vertigo as a mix insert and A/B’ing it. You will notice that even though the Vertigo in particular may not be compressing anything (or just the incoming level tickling the needle a little), it will add a richness to the sound. NO DOUBT. I also find this to be true with the SSL Bus compressor; even if it’s barely doing anything, it adds something sonically. Running this as a final gain stage is also useful at times, as you can give yourself tons of headroom and make up the final level by using the gain on the compressor. I know many people will shudder when I say that, but I feel it works, and it’s one way of doing things.
Another approach was testing the mixes on different monitors. We brought in Dan’s pair of Yamaha NS-10Ms (the industry standard for such purposes) and set them up so that we could easily switch between them and the PMCs through the desk. Much of the mix work was actually done through these rather than the PMCs. This highlighted, to my ears at least, the need for an alternative set of monitors in the Reid. As good as the PMCs are, I feel after working in the Reid a lot that much of my music doesn’t translate well on them; loud, nasty, distorted guitars, for instance, they don’t do well. It is also an important point that so much music these days is listened to on terrible laptop speakers and awful headphones; a sad reality, but it means being able to hear your mixes on something which sounds awful is, in my opinion, pretty important. If it sounds great on speakers which sound awful, it can only sound better on the PMCs. The mixes were also tested with a pair of Sony consumer headphones (which, again, sound awful), a pair of earbuds (which sound even worse) and everyone’s favourite Beyer DT 770 Pros. This helped a lot in ensuring the mixes transferred well to different media.
I’ve ended up writing this fairly quickly and expressing lots of personal opinions as well as technical documentation. One thing I think I would like to get across is that the Reid Studio can be used in many different and flexible ways. In my opinion there are a few additions which would make it more flexible, which would help get even greater results. Personally I would press for an alternative set of monitors in the Reid, plus a few more plugins (in particular distortion options).
More documentation of further projects I conducted in the Reid to follow…
On the evening of Thursday May 12th 2011, EU Composers’ Orchestra held their summer concert in the Reid Concert Hall. For me this was my first opportunity to test out recording a big(ish) ensemble in the Reid. However, there was a twist: the concert was being streamed live over the internet, and the audio was being matched to a 720p (almost HD) video being captured in the hall above.
Over the weekend of the 28th-30th of November I recorded two large orchestral ensembles in the hall. I write this post to document the mic set ups with the mix sessions following soon in a comment/other post (I say soon, this may be hopeful…).
The first night was the Edinburgh University Music Society’s Sinfonia: a large(ish) orchestra containing (as far as I could tell) pretty much anyone that wanted to play. A great initiative for players, with the potential downsides not something I feel I should comment on here… Anyway, these recordings are for my final-year MusTech project, which will compare various mic set-ups for orchestral recording, hoping to draw useful comparisons and discover interesting perspectives from non-trained listeners through listening experiments to be undertaken next semester.
As such, I chose to start with the basic, tried-and-trusted method for orchestral recordings: the Decca tree, using the Schoeps omnis in a ratio of two across and one forward, making a T-shape approximately a metre and a half out from the front row of the orchestra and two metres up. I added a bit of a twist here in the form of a mid-side pair (cardioid pointing forward, figure 8 pointing to the sides) positioned as close to the forward omni of the tree as possible. By doing this, I can compare the sound of the mid-side with the Decca tree, and also experiment with replacing the forward omni in the tree with the mid-side. Analysis of this to follow in the mix blog post.
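For anyone unfamiliar with mid-side, the decode is just a sum and a difference; a minimal sketch with toy numbers (not session data):

```python
import numpy as np

def ms_to_lr(mid, side, width=1.0):
    """Decode a mid-side pair to left/right: L = M + S, R = M - S.
    width scales the side signal (0 collapses to mono, >1 widens),
    which is one reason M/S is such a flexible companion to a tree."""
    left = mid + width * side
    right = mid - width * side
    return left, right

# toy frames: a centred source lives only in the mid channel
mid = np.array([1.0, 0.5, -0.5])
side = np.array([0.0, 0.2, -0.2])
left, right = ms_to_lr(mid, side)
```

Because the stereo width is set after the fact, the same pair can be auditioned narrow or wide against the tree during the mix comparisons.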
One of the pieces had an extended brass section (3 tubas, hooray), and after sitting in on a rehearsal it became clear the winds would need some extra attention given their immediate proximity to these brassy beasts. The formation was horns and tubas stage right and trumpets and trombones stage left, with the (doubled) wind section sitting right in front. The trumpets and trombones didn’t seem to be causing too much of a problem, so I put up a KM140 cardioid facing as far away from them as possible whilst still pointing at the winds (positioned just in front of the wind section, pointing across and down). At the other side, the story was different. I decided to try something I’d not done before: a figure-8 Schoeps pointed at the stage-right winds and up into the ceiling. The reasoning was to cancel the barrage coming from directly behind the winds in the figure 8’s null. Of course there was some spill, but in the end this seemed to work quite well, with the winds clearly audible in the rough mix.
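The null logic is easy to see from the idealised polar equations (a sketch only; real mics are only approximately this well behaved, especially off-axis at high frequencies):

```python
import numpy as np

def pattern_gain(theta_deg, pattern="fig8"):
    """Idealised first-order polar responses.  The figure-8 (cos theta)
    has hard nulls at 90/270 degrees, which is what rejects brass
    sitting side-on to the mic; a cardioid merely attenuates there."""
    t = np.radians(theta_deg)
    if pattern == "fig8":
        return np.cos(t)
    if pattern == "cardioid":
        return 0.5 * (1.0 + np.cos(t))
    raise ValueError(pattern)

winds = pattern_gain(0)    # on-axis: full level
brass = pattern_gain(90)   # in the null: essentially nothing
```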
Lastly, there was a horn concerto, so I thought I might as well try a close mic on the horn to mix in. Erring on the side of caution, I went with the EV RE-20. Unfortunately, this mic had to be brought on and placed without much idea of where exactly the soloist would be standing (it was during the concert, so the soloist had to come on and play immediately rather than wait for a mic to be placed). I would have just put the mic up for the whole concert, but there was shuffling of chairs and such needed, so it would have been in the way.
Given that this was my first big session in the Reid, I was pretty overwhelmed with how amazing everything sounded straight off the bat (except the horn soloist spot… that may end up in the track graveyard).
The second session was with the University of Edinburgh Chamber Orchestra performing a couple of orchestral pieces and a piano concerto, providing me with lots of joy and placement woes.
I used the same basic tree set-up as last time, with an ORTF pair replacing the mid-side pair, all in more or less the same positions. Soon after, it transpired that the grand piano for the concerto would be placed almost directly beneath the tree, which makes visual concert sense but did provide some interesting difficulties. As described by an undisclosed party, the lid of the piano was raised in a distinctly Nazi-esque fashion (all the way open). As such, the tree was bombarded with piano, and while this wasn’t a great problem for the big orchestral sections, some of the more subtle accompaniment passages were being lost. Inspired by the results of my figure-8 wind spot two days previously, I put a couple of U89s up just beyond the piano towards the orchestra, spaced very widely (between the 2nd and 3rd rows of violins). One end of each figure 8 pointed across the orchestra and the other into the wall/audience (not a major issue in theory, though some coughs and sneezes came through, plus some nice clear “whoops” during the applause). The result wasn’t quite what I’d expected. They did the basic job, giving more orchestra when the piano and the rest were playing together, but proved much more useful than that: when the tree was A/B’d with and without the extra U89s, they lifted the sound greatly, adding clarity, brightness and a widening of the stereo image, in much the same fashion, I imagine, as a pair of widely spaced omnis. Either way, with all mics in and the orchestra playing superbly well, the mix sounded incredible.
Next stop is another orchestral recording this week for which I’ll be experimenting more with wide spaced mic positioning.
Oh, I should also mention that I went from the stage box clean through the desk and into the Myteks, recording in Pro Tools with digital outputs 1&2 patched to desk channels 17&18. The mic inputs were put on the record bus, with Pro Tools coming out of the mix bus, so I could monitor both what was coming in and what was being recorded.
More to come from further recording/mix sessions.
Můstek consists of Christos Michalakos (percussion & live electronics) and Lauren Sarah Hayes (piano & live electronics), both second-year students in the university’s PhD Creative Music Practice programme. We spent a week (6th-12th December) in the Reid Hall using the new Reid Studio to record what will hopefully become two albums’ worth of material. We were also joined for an afternoon by Dr Nikki Moran (viola) and for two days by John Pope (double bass), who came down from Newcastle.
We will gradually update this entry with documentation of our recording session and further mixing/editing sessions.
We are mixing on the SSL desk itself, sending 13 channels of recorded audio from Pro Tools to individual desk channels. A basic initial mix session looks like:
- Channels 1&2: Piano L&R (with Desk EQ)
- Channels 3-9: Percussion (with Desk EQ) -> Vertigo for parallel compression -> back in on Channels 17&18
- Channels 13-16: Electronics (with Desk EQ)
We are summing these 15 channels on the desk and recording back into Pro Tools (Mix Out from desk -> DAW inputs). However, when we listen back to the recording on two clean channels panned L&R, it sounds quite different from the desk. Much of the ‘air’ is missing, but we cannot find a reason for this. Any ideas?
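One way we might chase this down is a null test between the desk output and the recorded print-back. This is only a sketch, assuming both signals are available as sample-aligned arrays; the function and signal names are made up:

```python
import numpy as np

def null_residual(desk_sum, printback):
    """Phase-invert the re-recorded print against the reference sum and
    measure what's left.  If the two paths matched, the residual would
    be near silence; whatever remains (listen to it, look at its
    spectrum) shows exactly what the print-back path is losing.
    Assumes both signals are already sample-aligned, i.e. the DAW's
    round-trip latency has been compensated."""
    n = min(len(desk_sum), len(printback))
    residual = desk_sum[:n] - printback[:n]
    rms = np.sqrt(np.mean(residual ** 2))
    return residual, 20.0 * np.log10(rms + 1e-12)

# identical signals null to (near) silence
x = np.sin(np.linspace(0, 100, 48000))
_, level_db = null_residual(x, x.copy())
```

If the residual is mostly high-frequency content, that would at least localise where the ‘air’ is going, even before finding the cause.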
A few weeks ago, I was lucky enough to be among the first to use the Reid Studio for a mastering session, of Sllt., my Masters dissertation work. I had an interesting task to achieve: take a stereo recording made in the box in my home studio and use the Reid to master it through analogue outboard equipment, while at the same time applying performed reverberation in the centre, left surround and right surround channels of the surround system. I would then create a final 5.0 version of the piece, and also a stereo mixdown from all five channels. So the idea was to go from two channels up to five, using both analogue and digital processing, and then bring it all down back to two, something the Reid is ideally suited to. (Bear in mind when I’m discussing routing issues that the Lynx and TC Konnect described in Michael’s post below were not installed at the time of these sessions.) Here’s the finished stereo piece.
The first thing to do was perform the reverb channels. Because I was using Max/MSP to dynamically send the existing stereo recording to the reverb, and Max was not installed on the studio computer, I performed the reverb on my laptop. With a simple routing job from Logic to Max and back via Soundflower I had five channels of output, the original stereo and the three reverbs, along with three channels of the dry sends to the reverb plugins being recorded. The stereo and wet channels went through the patchbay into the desk, and from there to the appropriate surround monitors. The moment I pressed play, the superior monitoring and room treatment revealed more detail and depth in my recordings than I had ever heard before. Thankfully no appalling and previously unheard mistakes materialised, though there was clearly much work to be done.
With the monitoring up and running, I performed the reverb, tracking both the wet and dry signals in Logic. The reverb was Space Designer with the Europa IR from the Acousticas Bricasti M7 library. I then bounced the files out of Logic and copied them to the studio computer for mastering. I had intended to take the dry reverb tracks and run them through the System 6000, but my total lack of Pro Tools HD routing skills let me down. (See here for things I should have found simple…) That was a pity, because the 6000 reverb is the finest I’ve ever come across. Settling for second best, I used the wet recordings from Space Designer. I set up a Logic session with the five channels going out and back in via the Myteks. The desk made the routing for monitoring and outboard processing a doddle. The stereo track went through the Manley EQ and Vertigo compressor in series, returned across two unused channels, and was sent to the DAW and to the left and right monitors. The three effects channels simply came out of the DAW, ran across the desk and back again, and out to the centre, left surround and right surround monitors. In this way the stereo channel, which makes up the bulk of the piece, got the best possible processing, and the effects channels still had the channel EQ and assignable dynamics available, though these turned out to be unnecessary.
Because the piece is based heavily on live transpositions, the biggest job in mastering was to handle piercing high frequencies and massive subsonic energy. The sheer force of some events outside the range of normal hearing was causing unwanted triggering of the compressor, not to mention the standard subwoofer flapping and high end ear pain. Fortunately the Manley handled it all like a champion. After a bit of E-Series on the desk to broadly level things out a little, the Manley’s high pass took care of the rumble and compressor triggering, and the EQ turned what had been a tough listening job, with frequency stabs all over the place, into a very musically unified whole. It’s a very flattering and deceptively powerful piece of kit. Even small tweaks at extreme frequencies brought noticeable improvements.
Compressing the output of the Manley with the Vertigo was an equally painless and fruitful experience. In two separate passes I used a broad, low ratio squeeze to tighten things up a bit for the 5.0 mix, and after summing all five channels to stereo I squashed the whole thing with a high ratio and mid threshold for video use. Both were very forgiving and it always seemed like the boundaries of aesthetic taste would make themselves felt long before the circuitry of the unit became too apparent in the sound. Because I habitually record live performances at low levels to ensure against clipping, I had to raise the level of the signal overall. I chose to do this with the Vertigo’s makeup gain rather than by digital normalisation, and the benefits were great. The post makeup sound was denser and more unified than the plain recording, with a tangibly warmer overall sound. Lastly I boosted the reverb channels on the desk faders to compensate for the increased stereo level. After a few last tweaks of various settings, I ran the 5.0 mix into Logic, and reset everything to mix those tracks down to stereo.
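The gain arithmetic itself is trivial, which is rather the point: the choice between digital normalisation and the Vertigo’s makeup knob is about sound, not numbers. A sketch (the -14 dBFS figure is illustrative, not from the session):

```python
import numpy as np

def makeup_gain_db(signal, target_dbfs=-1.0):
    """How many dB of gain a cautiously tracked recording needs to
    bring its peak up to a target level.  The arithmetic is identical
    whether you normalise digitally or turn the Vertigo's makeup knob;
    the point above is that the analogue route sounds different."""
    peak_dbfs = 20.0 * np.log10(np.max(np.abs(signal)))
    return target_dbfs - peak_dbfs

# a live recording peaking at -14 dBFS needs about 13 dB to reach -1 dBFS
sig = 10.0 ** (-14.0 / 20.0) * np.sin(np.linspace(0, 2 * np.pi * 10, 48000))
gain = makeup_gain_db(sig)
```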
For comparative purposes, here is a recording of the performance of Sllt. at Inspace two weeks after the sessions documented here. It was roughly mastered in the Russolo Room without the benefit of the analogue mastering chain in the Reid, but with Lexicon PCM96 reverb recorded live rather than Space Designer. It’s also obviously a different performance. Still, I think the distinctions are clear, with the Reid version sonically superior. Between the sheer quality of the equipment, the listening environment, and the fast and effective workflow, mastering in the Reid realised the potential of the work as fully as possible.
lapslap are Michael Edwards (saxophones, laptop), Martin Parker (horns, laptop), and Karin Schistek (piano, Nord synthesiser). We recorded free improvisations in the Reid Hall from the 8th to the 12th of September 2010. This was to be our fourth album on Leo Records. On September 11th, the American percussionist and 9/11 survivor Fritz Welch visited us and guested on a 3-hour recording session.
Aware in advance that the decision was insane, Martin and I decided to engineer the recordings ourselves, despite being performers. We wanted to put the new studio through its paces and try out some of its more esoteric possibilities, e.g. ADAT sends from the studio to the hall via the CAT 5 extenders. We couldn’t inflict that and the up to 12-hour days on some poor unsuspecting engineer….
In contrast to our last recording–where we aimed at maximum separation of the instruments to isolate signals–we took the counter-intuitive approach of placing the instruments close together, relying on spot mics for separation where necessary. The theory was that, whatever bleed happened, say, from the sax into the piano mics, if they were close then we wouldn’t have strange-sounding delays i.e. the bleeding signal (if you will) would at least be direct, not reverberant. Having listened to the results I have to say this worked.
We wanted to capture the acoustic instruments with both close and distant miking techniques. To this end we used Neumann U89s in cardioid mode as the main air mics on the sax and horn. These were placed about 15-30 centimetres from the horns’ bells and they were isolated with SE Reflexion Filters. As we’re fans of double-miking horns etc. Martin also used his beloved DPA 4061 omni as a clip-on whereas I opted for an Electrovoice RE-20 as my second on the saxes.
The piano was close-miked with two AKG 414s (in wide cardioid mode) about 15 centimetres from the bass and treble strings. These were mainly aimed at picking up Karin’s inside-piano effects but also blend nicely with other mics to vary the closeness of the sound (if you’re careful to avoid or deal with the proximity effect). We used two Schoeps omnis on the piano as well. These were spaced just over a metre apart and a similar distance away from the piano. Clearly these picked up a considerable amount of horn signals and room as well, but the omni pattern at this distance transduced some amazing bass from the Steinway piano.
The ensemble was captured by a central Schoeps mid-side coincident pair with the cardioid focussed on the piano. The pair was about two metres high and perhaps three metres away from the piano and horns. The horns had clear sight lines to the mic pair and were facing pretty much straight into the sides of the figure-of-eight.
The last pair of Schoeps omnis was in row three of the audience seating, about five seats or so in. The main aim of these was to pick up hall ambience and if we’re lucky these will allow us to get away with adding no artificial reverb to the instrument signals. (It should be added, however, that we did put Altiverb reverb on the laptop signals when we monitored during playing. This really helped the performances and without it the laptop processing would have been too dry to interact with sensitively. We recorded these signals dry though, and will add varying degrees of reverb to them in the mix, perhaps even using Altiverb again with an impulse response of the hall which we took at the close of the final session.)
Placing Fritz was difficult as there was no room in the centre for him to set up the percussion. We opted to put him on the rear raised stage, about 3 metres behind the piano, with clear sight lines to the mid-side and ambient mics. We used a Neumann U89 as an overhead (Martin donated his and took the Neumann TLM 103 I’d brought in for his horn) along with a Sennheiser MD421 Mk2. I was surprised how well matched these were as a pair, actually.
Despite the studio being designed as a 16 channel system we actually recorded 24 channels. This would not be possible on our ProTools HD system (16 channels total with the two Myteks with ProTools cards), but as the 8-channel limitations of the ProTools Core Audio driver had forced us to buy a Lynx AES-16e-SRC card anyway (to support Logic, Nuendo etc.) we opted to record on Nuendo (my preferred DAW) using an Aggregate Device made up of the Lynx and the TC Konnekt 32. We’d mainly thought of the latter as a digital format converter but it’s actually a pretty solid 16-channel firewire sound card too.
I have to admit that I wasn’t absolutely confident that this approach would work so we had a backup plan involving a separate computer to record the 8-channel ADAT stream from the laptops. The idea was to sync the two recording systems through such a primitive device as a hand clap, aligning these when merging the tracks onto one system. Thankfully we didn’t have to do this.
The Aggregate Device (now available to all you Logic and Nuendo users as the “LynxTC” device on the studio Mac) was rock solid. It offers 32 channels of digital input and output over Firewire and AES via the patch bays. The first 16 channels are the Lynx card, the second 16 are the Konnekt 32.
Below are my pre-production notes. Martin and I both have the possibility in Max/MSP to output 4 channels for quadraphonic playback. Thinking of a possible future surround release we decided to capture these rather than record a stereo mixdown.
24 input channels needed: 4 piano, 2 synthesiser, 2 horn, 2 sax, 2 percussion, 2 room, 2 distant ambient, 4 Martin laptop, 4 Michael laptop (16 mic inputs, 8 digital over the ADAT extenders). So we’ll need an aggregate device made from the Lynx and the TC Konnekt to make 24 channels of I/O in total:

- First 8 mics go straight into the desk, from there to Mytek 1, and from there to the Lynx at 96k (clocked from Mytek 1).
- Last 8 mics go into the desk, from there to Mytek 2, and from there into the TC Konnekt (also running at 96k, clocked from Mytek 1).
- 8 digital channels go from Martin’s Fireface to the ADAT extenders, from there to the ADAT->AES converter, and from there into the Lynx, using SRC on the Lynx for 48k to 96k conversion.

Compression: we’ll put limiters on one sax and one horn mic (SSL dynamics 1&2), and on the piano close mics (Vertigo). Threshold c. -3 dBFS, ratio c. 10:1, fastest attack possible, so it’s only there as protection should we get an unexpected transient.

Monitoring: we use the desk’s track busses 1-4 for each of the instruments’ mono mixes (for laptop processing and headphone monitoring), routing these to the 3rd Mytek running at 48k (internal clock). Nuendo is set up to do the headphone mix of the Max signals over the desk’s track busses 5&6, also routing to the 3rd Mytek. The 3rd Mytek then sends 6 AES channels to the ADAT-AES converter and into the extender back up to the hall.

So the ADAT loop is: Michael sends 4 channels of Max/MSP to Martin; Martin adds his 4 channels and sends all 8 over the ADAT extender. The studio routes back over the ADAT extender 4 mono channels of instruments for processing plus a stereo headphone mix of the laptops.
This goes to Michael’s ADAT in, and he routes the 4 monos to Martin along with the 4 laptop channels; the headphone mix goes out of Michael’s analogue outs to the headphone amp:

IN from extender (Michael):
- 1: piano
- 2: horn
- 3: sax
- 4: guest
- 5: headphones L -> analogue out
- 6: headphones R -> analogue out
- 7: (unassigned)
- 8: (unassigned)

OUT to Martin (Michael):
- 1: piano
- 2: horn
- 3: sax
- 4: guest
- 5: max 1
- 6: max 2
- 7: max 3
- 8: max 4

IN from Michael (Martin):
- 1: piano
- 2: horn
- 3: sax
- 4: guest
- 5: Michael max 1
- 6: Michael max 2
- 7: Michael max 3
- 8: Michael max 4

OUT to extender (Martin):
- 1: Martin max 1
- 2: Martin max 2
- 3: Martin max 3
- 4: Martin max 4
- 5: Michael max 1
- 6: Michael max 2
- 7: Michael max 3
- 8: Michael max 4
Obviously, running essentially two digital systems at two different sampling rates is not ideal. We had to do this though as we wanted the sonic benefit of recording the mic signals at 96k, even though the laptops were limited to 48k (any higher and the CPU couldn’t cope with what we needed to do). However, the 96k system (Lynx, Konnekt 32, two Mytek convertors) is all clocked from the first Mytek. The 48k ADAT system (Mytek 3, ADAT->AES convertor, two laptops) is clocked from the third Mytek and feeds into the Lynx, which does sampling-rate conversion (SRC) from 48k up to the recorded 96k. This is the reason we couldn’t route the ADAT signal into the Konnekt 32. This would be the ideal choice if everything was at the same sampling rate, because the Konnekt 32 would do the ADAT to AES conversion for us. As it has SRC on its inputs too, we thought we could use it even with the two sampling rates, but it turned out that as soon as we ran it at 96k, it thought ADAT signals needed to be S/MUX’ed so our signals got munged. The Lynx with SRC turned on was the way to go then.
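As a toy illustration of what SRC does at the signal level, here is 2x upsampling by linear interpolation. This shows only the bookkeeping; the Lynx’s real converter uses proper polyphase filtering, which is where all the quality lives:

```python
import numpy as np

def upsample_2x_linear(x):
    """Toy 48k -> 96k conversion: keep the original samples and insert
    linear midpoints between them, so twice the samples come out for
    the same audio in.  Real SRC (like the Lynx's) uses long polyphase
    filters and handles the clock-domain crossing, the genuinely hard
    part described above."""
    out = np.empty(2 * len(x) - 1)
    out[0::2] = x                       # original 48k samples
    out[1::2] = 0.5 * (x[:-1] + x[1:])  # interpolated midpoints
    return out

y = upsample_2x_linear(np.array([0.0, 1.0, 2.0]))
```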
Beware though: the Lynx only does SRC on inputs, not outputs. I was hoping it would be bidirectional so, for instance, in a 96k session we could still use, say, a Fireworx FX processor running at its maximum 48k i.e. SRC’ing both out and in. Not possible I’m afraid.
You might still have expected–and I did wonder–that coupling the separately clocked 48k and 96k systems via the Lynx–even with SRC–might cause dropouts and other nasty little clocking problems. But it seems the Lynx handles this perfectly and whatever it does with the incoming clock and the external clock that’s driving it, it works. Once the system was up and running it didn’t give us a single problem.
When the TC Konnekt 32 firewire cable is in (i.e. when it’s running as a sound card and not just a digital format converter) you can’t change the clock source and sampling rate settings on the front of the hardware. Instead, you have to change them in software, with the TC Near Control Panel (in the Applications folder), on the System Settings page. Set it (and Mytek 2 if you’re using it) to the sampling rate of Mytek 1, and the clock source to external word clock. Similarly, change the Lynx clocking to external and set its sampling rate in the Lynx control panel. (Both this and the TC Near software will start up automatically on the studio Mac.)
If you change the sampling rate of e.g. Mytek 1, you have to change it in the TC Near and Lynx control panels too (and Mytek 2 if appropriate). Always make sure all systems are running at the same sampling rate (unless you’re using SRC). If you don’t, you may not immediately notice problems, but you’ll probably find drop-outs (perhaps as long as half a second), digital burbles or pops, or various other nasty things creeping into what should be a pristine recording.
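The rule above ("everything at one rate unless it passes through SRC") is worth treating as a checklist. A hypothetical sketch, with made-up device names standing in for this rig:

```python
# Hypothetical sanity check: every device in the chain must run at the
# same sampling rate as the master clock, unless its input is SRC'd.
# Device names and rates are illustrative, not a record of the studio.

def check_rates(devices, src_inputs=()):
    """Return the devices whose rate disagrees with the master clock.

    devices: dict of device name -> sampling rate in Hz
    src_inputs: names whose inputs pass through a sample-rate converter
    """
    master = devices["mytek1"]  # master clock in this imagined rig
    return [name for name, rate in devices.items()
            if rate != master and name not in src_inputs]

rig = {"mytek1": 96000, "mytek2": 96000, "konnekt32": 96000,
       "lynx": 96000, "adat_chain": 48000}

# The ADAT chain may legitimately differ because the Lynx SRCs its input:
mismatches = check_rates(rig, src_inputs=("adat_chain",))
# mismatches == []
```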
If you're having problems getting the TC Konnekt 32 to work as a sound card and lock properly within the 32-channel LynxTC aggregate device, open Nuendo or Logic and first load the TC Near driver as if you were only going to use it as a 16-channel firewire sound card. The sampling rate can then be aligned with the Lynx and Myteks. If that works, load the LynxTC Aggregate Device and all 32 channels should appear.
The studio is now fully 24-channel compatible and sounds fantastic. Really, I’m not sure I’ve ever heard anything sound better than this. The combination of top-notch mics, SSL pre-amps and analogue processing, Mytek AD/DA conversion, and the PMC speakers, is a real winner.
We’re going to edit the sessions in our home studios using Nuendo and the SSL Duende channel strip plugins (probably no compression though). When we’re ready with the mix, we’ll move to the desk and transfer the Duende settings to the desk’s analogue EQ and use all 24 channels to create an analogue sum (maybe using the Mixbuss compressor too). I’m looking forward to that. I’ll post sound examples and photos asap.
To get to know the various dynamics, EQ, and reverb effects on the TC System 6000 I tried a mastering session with the 5.0 mix files of my piece 24/7: freedom fried for viola d’amore and computer. This was recorded in September 2006 at ZKM Karlsruhe, Germany, by Garth Knox. It was (is?) to be released on the Wergo label with video art by Brian O’Reilly; however it seems to be still mired in legal issues relating to the other pieces on the disc. [ update: someone must be listening: it’s out. ]
We recorded the viola d’amore in surround and even went to the trouble of re-recording the electronics in the same hall. We did this by playing the files through four D&B speakers and recording with a surround mic array in order to capture the ambience and create a more natural sounding mix with the live viola. I was never really happy with the room sound though, so I was curious to see if a little TC surround reverb would help out.
The mastering chain
The mastering was a three-step insert chain process: MDX5.1 –> 5.1 EQ –> VSS 6.1 Generic Reverb
The MDX5.1 dynamics processing was perhaps the most impressive effect used here. It takes a radically different approach to dynamics, raising lower levels but not higher ones, so there's an overall increase in weight to the sound but the transients don't get squashed. Very effective; very slick. There's a soft limiter (which I didn't use) and a brick-wall limiter too (which I did use, just for safety: the limiting light did flash a couple of times in the piece but no more than that).
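In case the upward approach is unfamiliar, here is a minimal sketch of the idea: gain is added below a threshold and left alone above it. The threshold and ratio values are invented for illustration, not the MDX5.1's:

```python
# Sketch of "upward" dynamics: quiet material is lifted, loud material
# (including transients) is left untouched. Parameter values are made up.

def upward_gain_db(level_db, threshold_db=-30.0, ratio=2.0):
    """Gain in dB to apply to a signal currently at level_db."""
    if level_db >= threshold_db:
        return 0.0  # above threshold: no gain, transients survive intact
    # Below threshold, close the gap to the threshold by 1 - 1/ratio:
    return (threshold_db - level_db) * (1.0 - 1.0 / ratio)

# A -50 dB passage gets +10 dB of lift; a -10 dB transient gets nothing.
quiet = upward_gain_db(-50.0)   # 10.0
loud = upward_gain_db(-10.0)    # 0.0
```

Contrast this with ordinary downward compression, which would instead turn the loud material down and then make up the level afterwards, squashing the peaks in the process.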
5.1 EQ was a joy to use after being forced in the past to EQ surround mixes with an array of three stereo plugins (or even worse: three passes with stereo outboard). Because there was some pretty untamed bass in the original mix (I thought I'd monitored correctly back then…hmm…) I did a pretty heavy high-pass (9 dB per octave) starting as high as 68 Hz. I also lifted a touch with a shelf at 8 kHz and two fairly wide parametrics at 700 Hz and 2.5 kHz to give the sound a little more body and presence. I was surprised at how characterless, or rather transparent, the filters were; I was also shocked at how much boost or cut could be applied without wrecking the sound (no harshness to my ears).
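For anyone wondering what a 9 dB-per-octave slope from 68 Hz actually removes, the arithmetic is simple. This is the idealised slope, not the filter's exact response:

```python
import math

# Rough arithmetic for the high-pass described above: with a 9 dB/octave
# slope from a 68 Hz corner, each halving of frequency below the corner
# loses a further 9 dB. Idealised; a real filter's knee behaves differently.

def highpass_attenuation_db(freq_hz, corner_hz=68.0, slope_db_per_oct=9.0):
    """Approximate attenuation in dB (positive = quieter) below the corner."""
    if freq_hz >= corner_hz:
        return 0.0
    octaves_below = math.log2(corner_hz / freq_hz)
    return slope_db_per_oct * octaves_below

# 34 Hz sits one octave below 68 Hz, so it's about 9 dB down;
# 17 Hz is two octaves down, so about 18 dB.
```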
At this point I had to print the effects to disc as, running at 96 kHz, I had no more processing power on the TC to do the reverb.
As I had a complete 5.0 mix I used the VSS 6.1 Generic Reverb algorithm in 5.0 mode.* I used the Vienna hall preset and first tuned the reverb alone by fading the early reflections and dry sound right down (there's no wet/dry setting here). I low-passed the reverb drastically, taking out everything above 1.5 kHz, as well as everything under 174 Hz, if I remember correctly. I then adjusted the decay time to around 1.5 seconds, where I had the feeling that the individual events weren't bleeding into each other but there was quite a thick hall ambience. Then I played with the early reflections, brought up the dry level to unity, and adjusted the reverb level to -21 dB. I was surprised here just how much difference ±0.5 dB of reverb made. In any case, without being too present as an obvious effect, the reverb added a real touch of class and depth; it couldn't remove the original room sound completely, of course, but I think it distracted from it significantly.
Surround channel order in multichannel files tends to cause a lot of confusion, especially in the compressed formats. The Ogg Vorbis channel order for surround is supposed to be as follows:
5.0: front left, center, front right, rear left, rear right
5.1: front left, center, front right, rear left, rear right, LFE
I used Max to encode an Ogg Vorbis 5.0 file at 256 kbps. The channel order I then got was L=1, C=2, Ls=3, Rs=4, R=5. Before listening to the piece you should probably route your channels according to the following test file:
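If you're handling the samples yourself rather than routing in your player, the fix is a simple permutation. A sketch, using the two orderings described above:

```python
# Sketch of fixing the channel order described above: the Max-encoded file
# came out as [L, C, Ls, Rs, R], while the Vorbis 5.0 spec expects
# [L, C, R, Ls, Rs]. Reorder one "frame" (one sample per channel) at a time.

FILE_ORDER = ["L", "C", "Ls", "Rs", "R"]   # what the encoder produced
SPEC_ORDER = ["L", "C", "R", "Ls", "Rs"]   # Vorbis 5.0 channel order

def reorder_frame(frame, src=FILE_ORDER, dst=SPEC_ORDER):
    """Rearrange one multichannel frame from src order to dst order."""
    index = {name: i for i, name in enumerate(src)}
    return [frame[index[name]] for name in dst]

frame = [0.1, 0.2, 0.3, 0.4, 0.5]          # L, C, Ls, Rs, R in the file
fixed = reorder_frame(frame)
# fixed == [0.1, 0.2, 0.5, 0.3, 0.4]       # now L, C, R, Ls, Rs
```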
For comparison, here’s the opening of the unmastered version:
The whole five-channel mastered piece runs to 14 mins 32 secs and yet is under 27 MB, which is pretty astonishing really:
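The arithmetic roughly checks out, bearing in mind that 256 kbps in Vorbis is a nominal (variable) bitrate for the whole multichannel stream:

```python
# Quick check of the size claim above: a nominal 256 kbps Vorbis stream
# (VBR, so only approximate) running 14 min 32 s.
duration_s = 14 * 60 + 32                  # 872 seconds
total_bytes = 256_000 * duration_s // 8    # 27,904,000 bytes
size_mib = total_bytes / (1024 * 1024)     # about 26.6 binary megabytes
```

So "under 27 MB" holds in binary megabytes, and that is for all five channels at once, which is indeed pretty remarkable for a 14-minute surround piece.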
Maybe these will be of use to future Reid surround masterers:
ProTools file: TCSys6000MasterAug10
[NB due to WordPress restrictions I wasn't able to upload a .ptf ProTools file, so I renamed it to .pdf and all was fine (great security!). Rename it back to .ptf and it should work in PT8]
Screen grabs of Hardware Setup and I/O settings
Notes on the Mytek settings for digital routing to and from the TC
mytek 1:
source to digital out: AES (i.e. TC return is on mytek1 AES)
source to analog out: DIO card 1 for Pro Tools; AES for TC direct (i.e. main analogue monitor outs are on mytek1)

mytek 2:
source to digital out: DIO card 1 (i.e. TC send is on mytek2 AES)
source to analog out: irrelevant, but if set to DIO card 1 it will send the dry mix to ch 9-15 on the desk

desk:
channels 1-6 are outputs
select EXTA as MON SRC (RH of desk)
set channels 1-6 track busses as 1-6 also (ITU surround order: L R C LFE Ls Rs)
* TC distinguish between Source and Generic reverbs in their algorithms. From the manual: "Until 15-20 years ago, digital reverb was mostly used as a generic effect applied to many sources of a mix. Nowadays, where more aux sends and returns are at disposal, new approaches have emerged. Elements of the mix are being treated individually, adding room character, flavor and depth in more creative and complex ways. At TC, we call this a Source based approach, and we have put more than 30 man-years of development time into design and refinement of Source based room simulation. When Generic digital reverbs were invented, they stretched the DSP power and memory bandwidth capabilities of their time; and Source specific processing was completely out of the question. Even though we may now consider Generic types to be less than ideal, they still have applications for which they may be chosen instead of their Source based cousins."
Jessica Aslan, Lauren Hayes and Christos Michalakos performed solo pieces and a group improvisation on 9th of July. This was a great chance to test the new studio under live concert recording conditions.
Schoeps omni and fig-8 as mid/side
DPA 4006 omni spaced pair
Tandy/Realistic PZM pair
DI from mixer
This was the first time I had used PZMs and I put them on the floor spaced about 2.5m apart in front of the drumkit. AMAZING!!!! Mixed with a bit of the Schoeps mid/side pair and put through the Vertigo (only for monitoring) I managed to get by far the best drum sound I’ve ever got. Even the toms sounded good!
So, I definitely had the best seat in the house for this gig, and getting the DI feed from the laptop outputs enabled the performers to get all the right parts to make some high quality mixes which, no doubt, they will post here soon.
On 25th of June I booked in Sax Ecosse, Flutes En Route, and Michael Haywood and Laura Grime for a full test run of the studio as part of a music education project run by Live Music Now. This session required foldback on headphones, talkback, and 8 channels of microphone recording of instruments such as grand piano, fiddle, baritone sax, boomwhackers and voices.
It was a useful way of testing out the foldback paths and the general usability of the patchbay. This had taken Kevin Hay, Owen Green and myself a vast amount of time and effort to plan and required soldering almost 1500 connections. As you can imagine, it was to my great satisfaction that the session went very smoothly indeed with only a couple of minor hiccups, both stemming from my unfamiliarity with the desk rather than any errors in the wiring.
For future notice, the HPF on the SSL mic input channels is part of the EQ. There may be a way to use it separately but I didn’t find it in this session.
The Schoeps sounded lovely, especially the figure-of-eights, and judicious use of the screens let me get some surprisingly dry results for some of the instruments: the natural reverb was kept, but attenuated at the same time.
All in all this was a really rewarding session to inaugurate the studio, especially given the strong departmental representation within the Music In The Community field.
Audio clips will be posted soonish.