Wednesday 1 August 2007

new other blog

this blog has been superseded by the new exciting other blog!!!

http://littlevegetables.blogspot.com/

Monday 25 June 2007

aa - semester 1 2007 assignment



Recording Tristan Louth-Robins, demo style.

Tristan plays acoustic guitar and accompanies himself with voice.

Also featured on the track: Seb Tomczak sequenced an outro using 4-bit Gameboy power, and Lauren Sutter plays violin - twice.

We recorded this song (and two others) over several days at the Electronic Music Unit at Adelaide University.
Then I spent far too long mixing it down (don't bring up software problems please :).

tragic.mp3

EDIT: just noted that the mp3 file on the link was truncated, have adjusted link.






Here follows documentation of the events.
http://www.hddweb.com/93140/PreProduction_Tristan_LouthRobins.pdf

http://www.hddweb.com/93140/Preproduction_session_plan.pdf

http://www.hddweb.com/93140/Recording.pdf

http://www.hddweb.com/93140/production.pdf

cc - sem 01 2007 assignment


Composition in the style of Musique Concrete

Edward Kelly

3' 00


This piece was created in regard to the aesthetic of presenting pre-recorded sounds in an unrelated abstract manner, thereby recontextualising them into a new pattern.
From this presentation a piece of sound art is created.

The material was a deliberately minimal choice of source sounds. There were four: a vocal sample, a drum sample, an acoustic guitar sample, and a paper sound sample.
These sounds were imported into various editing software and manipulated therein using cutting/pasting, pitch shifting, time compression/expansion, equalisation, delay, volume and panning.


This was a deliberate choice to emulate (to an extent) working with physical tape through the limitation of processing techniques.
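
Those tape-style operations boil down to very simple transformations at the sample level. As a toy illustration (my own sketch, not code from any of the editors actually used), two of them on an array of samples:

```python
import numpy as np

def tape_reverse(samples):
    """Reversed playback: flip the sample order."""
    return samples[::-1]

def tape_speed(samples, factor):
    """Vari-speed playback: resample by `factor` (2.0 = double speed,
    up an octave), using crude nearest-neighbour indexing in the
    spirit of physically sped-up tape."""
    idx = (np.arange(int(len(samples) / factor)) * factor).astype(int)
    return samples[idx]
```

Note that pitch and duration are coupled here, exactly as with tape; independent time stretching needs fancier processing.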

The piece is a linear transition through sound variation and sources. Generally sounds are introduced, repeated, moved and manipulated, then they are superseded by the introduction of new sounds.

It has no set rhythm or tempo other than the length of the sounds used in a repeating manner. These sounds are generally processed so that there is little perfect repetition, creating movement and flow.


download: Score.pdf, Composition in the style of Musique Concrete.mp3



Musique Concrete: A technique wherein natural sounds - such as a voice, an instrument, or the ticking of a clock - are recorded and then subjected to modification by means of altered playback speed, reversed tape direction, fragmentation and splicing of the tape, creation of a tape loop, echo effect, and other timbral manipulations.
highered.mcgraw-hill.com/sites/0072852607/student_view0/part6/chapter28/key_terms.html


... (known to some as Musique Abstraite, literally, Abstract Music) as the sounds are recorded first then built into a tune as opposed to a tune being written then given to players to turn into sound. ...
en.wikipedia.org/wiki/Musique_concrete

Monday 4 June 2007

f - week 12 - improvisation/composition

The dichotomy of improvisation and composition.

"Improvisation is not composition."[1]

"Comprovisation."[1]

This leads me to an idea I've been working on for a few pieces. During the course of this forum[2] it evolved into several movements :)

Basic idea - say at least 5 musicians with discrete parts. Give each some idea of tempo, each one different. Make them start :) all at differing tempos. Then each player is to either speed up or slow down until some sort of sync is reached, where rhythmically the piece seems to make sense. Stay there for a little. Stop.
Repeat with new information for separate movements.

Each part is to be very harmonically simple, potentially single notes, or mayhaps 2-note arpeggios or simple rhythms. eg....
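
Just to pin the idea down, the speeding-up/slowing-down phase can be thought of as each player nudging their tempo toward the group until everything locks. A minimal simulation sketch (the nudge rule and the numbers are mine, purely illustrative):

```python
def converge_tempos(tempos, step=0.1, tolerance=1.0):
    """Each player repeatedly moves a fraction of the way toward the
    group's average tempo, until all tempos sit within `tolerance` BPM
    of each other - the point where the piece starts to make sense."""
    tempos = list(tempos)
    while max(tempos) - min(tempos) > tolerance:
        avg = sum(tempos) / len(tempos)
        tempos = [t + step * (avg - t) for t in tempos]
    return tempos
```

Each step shrinks the spread of tempos by a constant factor, so the players always converge; the group average stays put, which seems musically apt.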





[1]Whittington, Stephen. Quoting an unknown source. University of Adelaide, Schultz building. 31 May 2007.

[2] “Music Technology Forum – Week 11 – Composition/Improvisation". University of Adelaide, Schultz building. 10 May 2007.
Presented by David Harris and Stephen Whittington.

cc - week 12 - preproduction form [1]

Well here it is. My preproduction form.

It documents how I intend to go about creating a piece using the techniques and software that we have covered over this semester.


The document itself.doc
sem_1_2007_CC_PreProductionComp.doc


It pretty much outlines how I go about making most of my sound/music.
I start with a sound then run with it.

It is all about the sound and what it suggests upon repeated listens.
You put it through effects, listen...

Faith.


[1] Haines, Christian. Creative Computing week 12 lecture. University of Adelaide, 31 May 2007.

aa - week 12 - more mixing

this week's task - mix down eskimo joe once more, with more stuff!!! (i.e. compression, fx...) [1]

Once again I headed into the studio to listen to eskimo joe... new york indeed.

I chose a more interesting part of the song, in that it wasn't just chorus but had a bit of outro verse. This made it more difficult: more tracks overall to incorporate, and a bit more excitement to build.

First mix, I panned, adjusted levels, added eq (a lot, but I think I added more as the mixes progressed as well), and compressed the bass and vocals.
ej_eq_comp.mp3

Next up, a bit of reverb on the vocals, and exciter on some of the guitar. The reverb I did by sending a number of vocal tracks to aux 1 (also used for compression) and then using a send to aux 2 with the reverb on that (the picture manages to miss the reverb track). I also used a separate reverb on one vocal channel as an insert, and played with the mix to sit it nicely.
ej_eq_comp_fx.mp3


Finally I tried a bit of mastering, and yes it is louder :)
I don't know why but I think it sounds clearer with a bit of standard compression first, followed by the maxim plug-in.
ej_eq_comp_fx_master.mp3

I did find that I was changing the mix slightly for each version as well, deciding that this guitar was now too loud etc. This picture is from the final mix, with compression and maxim on the master fader.


Oh, and I didn't get to try patching through the real desk, not enough time. Maybe next semester :)

[1] Fieldhouse, Steve. Mixing Basics 2. University of Adelaide, 22 May 2007.

Wednesday 30 May 2007

cc - week 11 - meta synth montage goodness

This week we were introduced to the metasynth montage room.[1] Basically what I wanted last week.



I got a bit more into the image synth.
Analysing a sound I made, then using the selection/transfer tool to select areas and drag/copy them around using the different possibilities (additive, subtractive, erase etc).
And the selection of either left, right, or left and right (can't remember the boolean symbols :) made for good stereo processing.

I had a brief toy with the image filter, spectrum synth and sequencer but enjoyed the image synth/effect room so much more that I did the bulk of my processing in these rooms.
Again, the graphical editing of parameters over time was very pleasant!!



The montage room was also fun, although possibly a bit budget in some respects (unless it was all user ignorance :). A global pan for the entire channel, for example.
I still found it quite challenging to do what I consider very simple tasks, such as eliminating the pop/click at the end of some rendered samples. Using studio 2 made it necessary to do this in the montage room, and the zoom/scrolling takes a bit of getting used to, as do the fade-outs. Hence there's still at least one click in the finished product.
I didn't work out whether there was a way to change the effects on a channel over time, either.

cc week 11.mp3

Getting it to an mp3 was a bit of a challenge in studio 2, but I sort of worked out how to use Soundflower!!



[1] Haines, Christian. Creative Computing week 11 lecture. University of Adelaide, 24 May 2007.

Sunday 27 May 2007

aa - week 11 - eskimo joe

Mix a bit of an eskimo joe track.[1]

This was quite fun, but felt kind of like cheating - all the recording was done, and everything was presented quite neatly.


I went through and did 3 mixes, then checked my reference files again (Metallica - Battery, Chris Isaak - Wicked Game) and realised I was lacking a bit of low end. To a certain extent I can blame the sounds themselves - straight up, the bass was not as low as in my reference tracks, so when I mixed it to a decent level there was a lack of low end...

Then I went through and did them again.


Mono mix - I found that certain sounds were imbalanced in the mix straight away - i.e. needed work (e.g. the bass and the outro rhythm guitar), so I mixed them to an aesthetic of quality rather than level.
This was the easiest and most straightforward; it sounds quite good and reminds me of AM radio.

Stereo mix - I re-stereoised the sounds and panned a few other things a little bit. I noticed that I'd mixed the acoustic guitar too loud, but because it was quite stereo it was still quite transparent.

Eq mix - HPF on the OHs, made the snare a bit snappier, gave the straight kick more room for eq. Got rid of an annoying tone in the rhythm gtr. I really got a bit annoyed with the vocals and had to automate the levels a little (picture below shows volume trim). Would have liked to compress the bass; it's a bit all over the place. No eq on the acoustic - I really liked it as it was... in retrospect it would have been good to at least have had a fiddle - it is quite big.
Just listening back to it in headphones, it does sound the most "commercial" of the three.


I did really miss having faders, one mouse does make it a bit harder. Maybe next week, just for fun, I'll try some subgrouping and use the real desk :)


[1] Fieldhouse, Steve. Mix. University of Adelaide, 15 May 2007.

Stavrou, Michael.
"Chapter 11 - The Art of Mixing". Mixing with your mind. Flux Research Pty Ltd, Mosman, Australia, 2004.

Wednesday 23 May 2007

f - week 9 - construction/deconstruction

Another week another forum.

First up, the magic formula for a hit song... as Freddy read from Michael Stavrou: get a big pile of hit songs, compare them and work out what they all have compared to non-hits... magic...
Hit songs my arse, I don't write songs so I don't care (I compose pieces :).

Next up, Dragos played some tunes and pointed out various techniques of construction. Techno - start with a sound, add another etc.... at some point pull some out, then end - or something like that.

Thirdly, Matt Mazone presented an ad he'd done sound/music for and how he'd constructed it. The most interesting of the lot. And that's all I've got to say about it.

I think I'm getting quite bored of forum... it's really quite uninteresting. Ho hum.
Oh well, I'm sure there are worse things to do on a Thursday.

One thing I'm getting quite annoyed at is low quality mp3s being played at us - what's the story?
Although I do enjoy myself listening to the fluttering around the hi hats and other constant trebly sounds - one day I'll work out what the encoding is doing and then I'll be happy :)


[1] “Music Technology Forum – Week 9 – Construction/Deconstruction”. University of Adelaide, Schultz building. 10 May 2007.
Presented by Freddy May, Dragos Nastasi, Matt Mazone.

cc - week 10 - meta synth

week 10 - use metasynth.[1]


Very nice piece of software. Annoying edit interface, although I'm sure it will get easier.

This week's piece is, amusingly, the first one where I felt I had little control over the end product. The editing is very crude and abrupt. I'm hoping that using the mix and merge features may smooth it out.

I mostly enjoyed the effects room. The stereo echo, pan and pitch, harmonise and harmonics were most enjoyable. Not to say the others weren't :)
Quite enjoyed the stereo factors - being able to draw and separate dramatically. And the ability to effectively automate parameters!! I have previously spent so much time effecting tiny consecutive sections of files with tiny consecutive parameter changes.

Rendering was where the editing was a bit annoying: no tails. I am used to being able either to multitrack, so long decays can be merged on separate tracks, or to having more obvious paste/mix features. I wasn't able to work out how to do this easily; potentially I was too focused on trying to do it all on one screen and would have found it easier with multiple files going on.

The image synth is quite interesting, I spent a while playing with it but didn't use it much for the final product.
The rendering to a new shorter file was a bit annoying - but a bit of render, copy, undo then paste sort of worked. Again not being able to layer was annoying (although the more I write this the more annoyed I get at myself for not working it out).
I enjoyed a bit of brush work, but need to learn the grid better to make more coherent sounds, I think.

This week's piece: cc week 10.mp3.



[1] Haines, Christian. Creative Computing week 10 lecture. University of Adelaide, 17 May 2007.

Monday 21 May 2007

aa - week 10 - recording of the drums

WHINE WHINE WHINE: I've just spent an hour and a half trying to upload files. 672kb worth. Problems with authentication: whenever a sample is in the process of travelling, I have to reauthenticate with my student id/password, then Firefox freezes, then I have to use the task manager to kill the process - etc. Internet Explorer wouldn't even get that far....
Hence, now that I'm going to write my blog, I'm grumpy and have much less time, so it may be a bit short and sparse (apart from the whine, that is).
Also, I have noticed that some sounds missed normalising, and the loops aren't really that loopy (blame it on ProTools).
I'll attempt to neaten it all up over the next couple of days when time is once again leisurely.

WARNING: all the mp3s are on one external page - it's the best I could manage :)
http://www.box.net/shared/26k4tv49f4

This week's task was to record a drum kit[1].
I went for a minimal microphone setup: 6 microphones.

Shure Beta 52A on the kick, Beta 56A on the snare. Two Neumann KM-84s as overheads (spaced about 3' apart, pointing down at roughly the snare and floor tom). Two Neumann U87s set up about 12' in front using the MS stereo technique.
I had fun mixing these down. Eq on all mics except the MS stereo. Listening back, even the close mic has a pleasing amount of space. I quite enjoy the bigger looser sounds, lo-fi. Although I might have gone a bit overboard on "s dm 2 no sd.mp3", at least in regard to adding other sounds.
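
For reference, MS stereo decodes by sum and difference: the mid (cardioid) signal plus or minus the side (figure-8) signal gives left and right. A minimal sketch, assuming the two mic signals as NumPy arrays:

```python
import numpy as np

def ms_decode(mid, side, width=1.0):
    """Decode a mid-side pair into left/right channels.
    `width` scales the side signal: 0 = mono, 1 = as recorded, >1 = wider."""
    left = mid + width * side
    right = mid - width * side
    return left, right
```

One nice property: turning `width` down to 0 collapses cleanly to the mid mic alone, which is why MS pairs stay mono-compatible.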

e dm 1.mp3
A mixture of all mics, just a little spacious. No compression.

s dm 1 no room.mp3

The "close" mics; this shows a bit of space from the overheads. Compression - perhaps a bit much.

s dm 2 no sd.mp3
Same loop as above. No snare mic, lots of room, quite a wide sound. Compression.

s dm 3 no sd.mp3
No snare mic, quite similar to above. Compression.
s dm 4 no comp.mp3
Mix of all mics, no compression. The crash has a really nice stereo decay. The toms sound a little thin but not unpleasant, perhaps a little out of context compared to the big sound of the rest of the kit.

snare above.mp3
snare below.mp3 :
Here the mic is alternately positioned above and below the snare. Below has bugger all decay; the sample features one snare hit followed by 2 kicks, demonstrating how the snare will buzz along with other sounds.
Above also shows the spill from the hi-hats.

The kick drum mic was placed just inside the hole; I did try the mic outside for 1 take. When I neaten this up I'll add an example of the two kick sounds.
The one outside was much brighter/thinner; I preferred the inner sound and moved it back in.


[1] Fieldhouse, Steve. Drum kit recording technique. University of Adelaide, 15 May 2007.

Tuesday 15 May 2007

CC - week 9 - further sounds and the sampler

More sampling goodtimes.

The quest this week: to further explore processing techniques to enhance our sound libraries, and create a soundscape using Reason and the NN-19 sampler module.[1]

I used my own voice as the sample source and manipulated it in ProTools, Peak, SoundHack and Fscape.

I've got to admit to finding Peak a fairly annoying program to use. Today I couldn't work out how to change the audio settings, and I could not get a sound out of Peak in studio 5 :( I also had trouble with ProTools not bouncing down sounds. It would go through the motions, create a file name in the appropriate folder, do some thinking, then delete the file - very odd, and only for some sounds... which meant some rather neat sounds didn't happen :(

In our lecture, Christian showed us a rather nifty technique that I quite enjoyed playing with.
Gate a signal, mix it with a phase-inverted copy of the ungated signal, and you have an anti-gate: whatever is above the threshold is removed.
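
In sample terms the trick looks like this (a toy per-sample version - a real gate has attack/release smoothing, which this skips):

```python
import numpy as np

def anti_gate(signal, threshold):
    """Gate the signal, then mix in a polarity-inverted copy of the
    ungated original. Everything the gate lets through cancels to
    zero, leaving only the material below the threshold."""
    gated = np.where(np.abs(signal) > threshold, signal, 0.0)
    return gated - signal  # above-threshold samples cancel out
```

The result comes out polarity-inverted relative to the original, which is inaudible on its own.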

Lots of cutting/pasting in ProTools, using multiple tracks/busses. Reversing, compressing, phase inverting, delays, reverbs, time stretching.
Also a lot more stereo imaging as a result of using multiple tracks for editing. I did have to limit myself, or individual samples could well have ended up as tracks in themselves.

Here are a couple of fun sounds. sound 11.mp3 reverse other.mp3

With Peak not making sound, I didn't use any loop points in any samples this week. Nor did I set key regions.
Once I'd imported the sounds into Reason, it stacked them alphabetically, and it was quite easy to give them a neat key range to fit all sounds onto one stretch of the keyboard.
The looping, however, proved to be a bit limited. I think as a result I ended up with a lot of pitched-down sounds, looooonnnnngggg and with interesting new artifacts :)

Final track. cc week9.mp3


[1] Haines, Christian. Creative Computing week 9 lecture. University of Adelaide, 10 May 2007.

Monday 14 May 2007

aa - week 9 - bass guitar

EDIT: I've just updated the file links AGAIN.
So sorry, I am bloody annoyed. If they still don't work, here is the folder they're all in. 17 May 2007



This week’s assignment. Record a bass guitar[1].

I went into the studio with Freddie May and Ben Cakebread using the pictured bass and amp.

The signal from the bass guitar was split by the Behringer DI, with one signal going into ProTools and the other into the bass amp. The amp was mic’d using a Shure Beta52A about ½ inch away from the amp aligned to the center of the speaker cone, and an AKG C414 was placed about 5 ft away in the direct path of the speaker. The C414 was routed through the Avalon pre-amp, then via a bus to a compressor.

We also patched the bass through the junction box so as to play in the control room.

The first 3 samples were recorded simultaneously.

f b 1 Beta52A.mp3 - Shure Beta52A

f b 1 C414.mp3 - AKG C414

fb 1 DI.mp3 - Behringer DI

The two mics captured the preferable sound; I assume this was because of the amp settings, which coloured the sound quite strongly compared to the direct signal - suggesting that eqing the DI could result in a good sound.

The Beta52A is much clearer and crisper. The C414 has a much roomier/muddier sound, which would work better in a looser, spacier setting; the compression also reveals itself to be a bit too strong - more noticeably in the next recordings.

The next two samples are also simultaneous recordings.

f b 2 Beta52A.mp3. The C414 has very noticeable compression.

f b 2 C414.mp3. Again the Beta52A has the crisper/clearer sound.

These recordings show that recording an effected signal can be problematic. The compression settings which were good for one moment were not good for later moments. Messing about with the bass and not playing consistently made for too much variation pre-compressor.

Overall I think the Beta52A had a good sound direct to recording, capturing the tone and extra buzzing very nicely (I like the buzzing).


[1] Fieldhouse, Steve. Electric guitar recording technique. University of Adelaide, 08 May 2007.

Wednesday 9 May 2007

f - week 8 - gender in

Another week on the issue of Gender in Music Technology.
Really quite a loose interpretation of the subject.

Kraftwerk[1], de-gendering music. That's like saying looking at something through a window removes you as a subjective interpreter. Just because you use a machine doesn't really remove your input, and just because it sounds like a machine doesn't mean it was created by a machine.

Freddie Mercury and Queen[2]. Never really noticed that the band name was slang for gay, quite amusing how one can just not notice something. Bring back the court of Louis XIV and let's all wear wigs and makeup :)

Pink vs. Eminem[3]. It was mentioned last week that love songs transcend the gender boundary[3a]; people will still buy a love song about the same sex as the purchaser.
Well, Eminem is a tad rude perhaps, and I don't know his demographic. But he generally has quite nice backing music and a good sense of rhythm - that's really the only thing I enjoy him for (not that I actually own any - of course not).
I don't really listen to Pink, so maybe she sells as a role model. Kylie will always be Charlene, and the buying public know it (she hasn't managed to make it in the US).

I also presented[4]. My point, in case anyone missed it. We are all suckers for what our minds tell us, and if it tells us that Music Technology is cool then it is :)
More affirmative action can only help the situation of imbalance, so I'm in favour of it.
I also wanted to get into the Jungian collective unconscious, and archetypes but didn't quite get excited enough :)




[1] Leffler, Bradley. “Music Technology Forum – Week 8 – Gender in Music Technology, can you tell the difference?”. Forum Presentation, University of Adelaide, Schultz building. 03 May 2007.

[2] Cakebread, Ben. “Music Technology Forum – Week 8 – Gender in Music Technology, can you tell the difference?”. Forum Presentation, University of Adelaide, Schultz building. 03 May 2007.

[3] Gadd, Laura. “Music Technology Forum – Week 8 – Gender in Music Technology, can you tell the difference?”. Forum Presentation, University of Adelaide, Schultz building. 03 May 2007.

[3a] Morris, Jake.
“Music Technology Forum – Week 8 – Gender in Music Technology, can you tell the difference?”. Forum Presentation, University of Adelaide, Schultz building. 26 April 2007.

[4] Kelly, Edward. “Music Technology Forum – Week 8 – Gender in Music Technology, can you tell the difference?”. Forum Presentation, University of Adelaide, Schultz building. 03 May 2007.

cc - week 8 - fscape soundhack and the nn-19

Get some vocal sounds and process them through Fscape and SoundHack[1] - easier than it sounds.

Fscape took me a while before I could effect sounds. I wonder why that open option in the menu exists? I couldn't get it to work.
I also noticed that a few sounds generated with one of the effects (something to do with Bloss) had a bit of serious DC offset - and Peak doesn't deal with that (ProTools does :).
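
For what it's worth, DC offset is just the whole waveform sitting off the zero line; the simplest fix is to subtract the mean (a steep high-pass at a few Hz does the same job). A minimal sketch, assuming the samples are in a NumPy array:

```python
import numpy as np

def remove_dc_offset(samples):
    """Centre the waveform on zero by subtracting the mean sample value."""
    return samples - np.mean(samples)
```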

Didn't really understand what SoundHack or Fscape were doing - just twiddled settings until a product was settled upon.
They do seem quite interesting, and I look forward to more fun with them.

When editing files, I am used to using 1 and 2 to navigate to the beginning/end of a selection; I am yet to work out how to do this in Peak. It makes things a tad unwieldy when looping, not being able to jump around.

Got a bit more into the sample settings in the NN-19: root key, key range and level. Last week everything worked straight up; this week's sounds enjoyed it!!

Apart from that, the sampler was pretty much like last week. I set it up similarly (velocity and mod wheel modulating the filter, fader modulating amp env release, a bit of delay and compression), but used a bit of LFO on the filter (BPF again) this time AND found the pitch modulation!! I did cut the whammy out because it went to about 1.5 minutes - too long :( Still can't believe I missed it last week.

Track: cc week 8.mp3

Screen shot.

Shows pitch info, velocity, amp env release and pitch modulation. Some sort of proof I whammied it up :)


[1] Haines, Christian. Creative Computing week 7 lecture. University of Adelaide, 03 May 2007.

f - week 7 - gender in

Gender in Music Technology - can you tell the difference?


Not a topic for the faint-hearted. The presentations on this subject probably reveal more about the people presenting than the earlier subjects did; they also make me realise that I'm almost old compared to most of the others in the forum :)

The presentations varied from "I'm not a feminist"[1] to "why should you tell the difference"[2], with a quick stop via Homo erectus, with the women staying in the cave[3].
Oh, and not to forget the interesting variation of hard mastery vs. soft mastery[4] - this I found the most interesting.
Women are more associated with soft, and men with hard. Ha, yin and yang. Here the truth of the Taoist philosophy of yin and yang proves itself again: in the heart of yin there is yang, and in the heart of yang there is yin. There is no actual dualism; there are no truths. Sure, gender gives major inclinations to societal roles, but there are always exceptions.

And by the way, seemingly like almost everyone else who has posted a blog on this, I muchly enjoy Bjork!!

[1] Sincock, Amy. “Music Technology Forum – Week 7 – Gender in Music Technology, can you tell the difference?”. Forum Presentation, University of Adelaide, Schultz building. 26 April 2007.

[2] Morris, Jacob. “Music Technology Forum – Week 7 – Gender in Music Technology, can you tell the difference?”. Forum Presentation, University of Adelaide, Schultz building. 26 April 2007.

[3] Loudon, Douglas. “Music Technology Forum – Week 7 – Gender in Music Technology, can you tell the difference?”. Forum Presentation, University of Adelaide, Schultz building. 26 April 2007.

[4] Probert, Ben. “Music Technology Forum – Week 7 – Gender in Music Technology, can you tell the difference?”. Forum Presentation, University of Adelaide, Schultz building. 26 April 2007.

Sunday 6 May 2007

aa - week 8 - electric guitar

EDIT: I've had problems with the host for the files on this page; it's sort of fixed (I think) but I am generally not impressed with them.
_____________________________________________________________

So, recording the electric guitar. That horrible overused tool of mayhem and boredom...


Set up the house amp, moved it around to a "nice" location.
Moved around listening to the amp; it's amazing how different it sounds when you're listening in front of the speaker vs. above it. All those high frequencies that travel straight.


3 microphones, AKG C414, Shure SM57, Sennheiser 421.
Set up the 3 mics to record simultaneously.

Put the SM57 and 421 pointed diagonally just inside the rim of the speaker. Steve saying he enjoyed that area[1], and me finding no better.

The 414 was about 4 1/2' out and 1.5' above the center of the speaker. I had some issues with phase; I moved it around for a while and couldn't find a spot without obvious phase problems. I got around this by deciding it wasn't a problem :) I recognised a particular metal sound generated by this phasing.

Patched everything so I could play from the control room.

Screenshot shows 4 takes, 3 microphones.



Take 1.mp3 : funny looking Roland midi guitar. No midi, lots of whammy. Mixture of the 3 mics, compression.

Take 2.mp3 : same guitar, same whammy. Mixture of the 3 mics, compression.


Take 3.mp3 : the strat style guitar, neck pickup, bit of eq, a bit of compression, using the two dynamic mics.
Just noticed what sounds like a bit of delay - mmm, no delay, so it must be sick technique!!

Take 4.mp3 : the strat, bridge pickup, bit of eq, no compression, mixture of the 3 mics.
The thinnest sound of the lot; the eq emphasised this.


All the sounds have their relative merits.
The differences in sound are down to technique, guitar and amp settings. And post production :)




[1] Fieldhouse, Steve. Electric guitar recording technique. University of Adelaide, 01 May 2007.

Tuesday 1 May 2007

cc - week 7 - NN-19 sampler

This week's task was to "create a performance instrument that uses samples manipulated from voice"[1], using the NN-19 sampler module featured in Reason Adapted.

Using a wave editor to create samples was a genuine pleasure, really takes the hard work out of looping. No more little LCD screens!!

Importing into the NN-19 was all good, automap worked great - oh so easy (once I remembered how to do it :).

Then to get some control happening over the sampler.
Modulating the filter with velocity and the mod wheel was easy.
Getting the amplitude release to listen to the Novation was a bit of a pain.
In my first session I spent a bit of time playing with the buttons, then reading the manual, but couldn't find how to change the controllers other than by using templates. So I scrolled through the templates, using the override mapping to listen for useable controller info, until I found a template and a fader that were useful....
Then in my second session, whoever had been on the computer before had the Novation set up on the User preset, which just worked fine straight up.

That was it, really: routed the sampler through the compressor (kept the resonance under control), added a bit of delay and reverb - and Bob's your finished product.
sample #1 010407.mp3

Here's a picture of the NN-19.



Here's a picture of the sequencer window, showing velocity and the amplitude release automation.



I could have spent longer getting that particular setup under control, with more tweaking of the samples, the sampler and the Novation controllers, and really done some aesthetic magic.
What I would have liked to do is set up each sample on its own sampler, and then map out the keyboard accordingly - this would have given me far greater control over the individual sounds, rather than only 1 envelope for all sounds etc.
But that for me is the general problem with Reason: it uses too much of an archaic form, mapped onto outdated forms. The flexibility I desire is lacking (that based upon minimal use of Reason 1, and this "adapted" version).


[1] Haines, Christian. Creative Computing week 7 lecture. University of Adelaide, 26 April 2007.

Monday 30 April 2007

aa - week 7 - vocal recording

Recording vocals.[1]


For this, the Neumann U-89i microphone was used, set to a cardioid pattern with no filter or pad. A pop filter was set up about 3" away.
This was then routed through the Avalon pre-amp into ProTools.
In ProTools I routed the initial signal through an auxiliary track, to record compression with the take.
For some reason, blogger keeps making my "large picture" not so large :( Hopefully enough information can be garnered from such..


All samples were post-eq'd with about 4 dB of reduction at 3 kHz. I used this on all samples because it sounded good. Perhaps it was a mic characteristic, or perhaps the positioning in the room, that made this a good frequency to cut.

1st and 2nd takes were about 8" from the microphone. Nothing other than a bit of "coaching" was used.
take 1 take 2

3rd take was about 6" from the mic, with the Avalon HPF set just above 130 Hz.
take 3

The 4th was the same as the 3rd, with compression of 3:1 and gain reduction of about 6 dB. As in the picture above.
take 4
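
As a sanity check on those numbers, a compressor's static curve is easy to write down: below the threshold, levels pass unchanged; above it, each dB of input yields 1/ratio dB of output. A sketch (the threshold value here is illustrative, not the session's actual setting):

```python
def compressed_level_db(input_db, threshold_db=-20.0, ratio=3.0):
    """Static compressor curve in dB: unity below the threshold,
    1/ratio slope above it."""
    if input_db <= threshold_db:
        return input_db
    return threshold_db + (input_db - threshold_db) / ratio
```

So at 3:1, a peak 9 dB over the threshold comes out 3 dB over it - roughly the 6 dB of gain reduction on the meter.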

I found Kath gained confidence over time, and whilst she managed to get more of an overall dynamic constancy, she still tended to lose volume over time.
There are also noticeable unwanted mouth sounds; a bit of practice would help her delivery and gain a better recording.

The 5th and 6th takes were me singing :) 6" away, the 6th take featuring similar compression to the earlier takes.
The 6th take (which was really about the 14th or so) shows my voice wearing out. Obviously a better singer could maintain vocal ability for longer periods, and also better dynamic stability.
take 5 take 6



All these could do with more production, but definitely the last take of Kath's voice was the most usable. This could well be due to her evolving experience at the time.
As to the takes of mine, I find them much of a muchness, neither is more preferable than the other - they both sound all right for what they are.


[1] Fieldhouse, Steve. Vocal recording technique. University of Adelaide, 24 April 2007.

Sunday 8 April 2007

cc - week 6 - sequencing to someone else's score

EDIT: I managed to forget to link to the resultant piece.
paper_montage_la_3.mp3
there it is..

This week we were asked to sequence a piece for a score given to us. [1]

Here is a picture of the track window which features the score at the bottom.




After a bit of pondering I recognised similarities to my own score.
The cut-up bits as abstract sounds, the vertical folds as percussive, etc. I then laid sounds down in a representation of the score, then manipulated them to my aesthetic pleasure.

I used time stretching, pitch shifting and cutting up and rearranging to manipulate my existing sounds.

From much experience in previous adventures with this technique, I decided to use 5 tracks for simplicity. This kept the vertical information, and hence vertical scrolling, to a minimum.
With dragging and dropping willy-nilly between them (I found that ProTools will copy any automation and plugins along with the wave - both annoying and useful), and tweaking of the automation, a satisfactory result was achieved.

The top 3 tracks were used for generally centered sounds, with a bit of panning and volume automation.
Bottom 2 were hard panned left and right and both feature the moogerfooger delay as an insert effect.

The picture shows automation for:
- the top track: EQ bypass
- the bottom track: delay time

I also used a bit of compression on the master fader channel for a bit of overall gain.



[1] Haines, Christian. Creative Computing week 6 lecture. University of Adelaide, 5 April 2007.


Saturday 7 April 2007

f - week 6 - collaboration revisited

Today was official "not enough collaborations day", hence rather than the scheduled presentations we were exposed once more to perspectives on collaborations.

First was Luke Harrald and his talk on the soundtrack for the short film "the 9.13". [1]
He made a few interesting comments;
- regarding the continual blurring of the edges of media: one must either become multi-skilled or collaborate with others who have the skills.
- he considers machines to be active creative collaborators.

Second was David Harris, who collaborated with visual artist Pamela Rataj on a number of works.[2]
- they had a starting point of not interfering with each other.
- for their first piece together, David found himself using a similar technique to Pamela but in totally different media: Pamela was working with moiré patterns in a grid mesh, David with the resonance of a sound, where every location has a different harmonic content.
- for the second piece David also found himself collaborating with the space, and evolving through it.

Third up, Poppi Doser and Betty Qian talked about their collaborative work "Behind the door". A short animation (Poppi did the sound, and Betty the animation).[3]
- coming from different nationalities, they had difficulties communicating through speech, so they resorted to charades to communicate some ideas.
- theirs was a simultaneous process, working together.

Fourth was Stephen Whittington and his general impressions on the concept of collaboration.[4]
- two words, empathy and humility. Empathy for sharing the process and creating a synergy, and humility for being open and receptive to other potential originations.
- "Beethoven was a far greater musician than I."[5] This statement ties in with humility, interpreting in such a way as to feel the music rather than force.
- one collaborates with equipment or instruments. "I see it having a kind of intelligence in its structure."[5]

I feel quite strongly that collaboration extends well outside the realms of working with someone on a limited task. Anything we choose to work with outside ourselves is the work of another entity, and therefore has its own innate nature and abilities that will come into play through the work we create !!

It was during the question time after Stephen's talk that the following question was voiced: "What is not a collaboration?"[6]
After all the philosophising and broadening of abstract concepts, this rather pointed question almost pulled the rug from under my feet. The danger of becoming too abstract is that words lose any useful meaning and threaten to collaborate in doing my head in :)



[1] [2] [3] [4] Music Technology Forum on collaboration. University of Adelaide. 5th April 2007.

[5] Whittington, Stephen. Music Technology Forum on collaboration. University of Adelaide. 5th April 2007.

[6] Unknown student.
Music Technology Forum on collaboration. University of Adelaide. 5th April 2007.

Friday 6 April 2007

aa - week 6 - acoustic guitar recording


I went into the studio with Sanad (Khaled Sanadzadeh) and Alfred Essemyr to try out a few different microphone techniques on this week's assignment, recording some acoustic guitar. [1]

We recorded everything with two microphones at once. Generally attempting a stereo sound, but some techniques were more about layering of the mics to give a more interesting sound.

We recorded 6 different sounds, with myself playing the guitar (attempting to play similarly each time for comparison - but I did get distracted once).



Above is a picture of the mixing section after I went through and mixed each sound. Each take has been grouped to help differentiate (different colour buttons on each channel).

I used no EQ in production, but think it would have helped make each sound more usable.

Unfortunately we have no photos of the microphone arrangements, so I'll explain each one.


The first technique was an X-Y stereo technique using two Neumann KM84s (tracks 1+2), one pointing at the top 3 strings and the other at the bottom 3 strings, about 15 cm from the end of the fretboard near the sound hole. Each was hard panned to opposite sides.
01_neumann_km84_xy_stereo.mp3



Second, we used an AKG C414 (omnidirectional) (track 3) and a Neumann U87 (figure-8) (track 4) to create an M-S stereo pattern, both placed again at the end of the fretboard, about 30 cm out. The U87 was copied onto another track (track 5), the copy phase-inverted, and then each hard panned.
02_akg_neumann_ms_stereo.mp3
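Arithmetically, that duplicate/invert/hard-pan trick is the standard M-S sum-and-difference decode. A minimal sketch (generic numpy, not Pro Tools specific):

```python
import numpy as np

def ms_decode(mid, side):
    # M-S decode: mid plus the figure-8 signal gives the left channel;
    # mid plus the polarity-inverted copy (i.e. mid minus side) gives
    # the right channel
    return mid + side, mid - side
```

The relative level of the side signal sets the stereo width: fading the side tracks out collapses the image to mono, pushing them up widens it.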



Third was the Rode NT4 stereo microphone (tracks 6+7), positioned similarly to the first technique.
03_rode_nt4_stereo.mp3



Fourth was a technique suggested by Sanad, using the two KM84s (tracks 8+9): one pointed at the front near the bridge (about 15 cm out) and one behind the guitar (I didn't notice how far out this one was because I was looking the other way), pointed at roughly the same point but through the body. I inverted one of the signals, soft-panned each, and used just a bit of the rear signal - still very warm.
04_km84_front_and_back.mp3

The rear microphone picked up a surprising amount of attack.
http://www.hddweb.com/90573/04_back.mp3



Fifth used the two KM84s (tracks 10+11), one at the bridge and one at the end of the fretboard near the soundhole, both about 15 cm out. Hard panned.
05_km84_end_of_neck_and_bridge.mp3



Sixth was similar to the fifth (tracks 12+13), but one of the mics was shifted up to the nut. For this one I used much more signal from the nut microphone.
06_km84_bridge_and_nut.mp3




[1] Fieldhouse, Steve. Recording an acoustic guitar. University of Adelaide, 3 April 2007.

Wednesday 4 April 2007

f - week 5 - collaborations 2

This week another 4 presentations on the theme of collaborations [1].

1st up given by Luke Digance on the collaboration between choreographer Merce Cunningham and pop bands Sigur Rós & Radiohead.

Lots of nice slides.

Having composed a number of dance pieces I found this slightly interesting - the best part was that the dancers wore different coloured costumes for the separate sections. I did some extra reading on this, and apparently there was a fair amount of chance in the actual performance: which piece went where, and which band performed with which dancers.


2nd, Daniel Murtagh gave a brief on Mike Patton and his innumerable collaborations.

Having enjoyed the first Mr Bungle LP (mainly for the John Zorn aspect), this had some historical interest for me.

Apparently once a work is completed, Mr Patton shelves it and never listens to it again. Makes me wonder if that's why a lot of his work sounds the same.


3rd, from Darren Slynn, was an ongoing exploration of collaborators whose reason for collaborating was perhaps necessity - how else can you be a band without getting someone else to join?

What first piqued my interest was the sound effects guy from "Hey Hey its Saturday" (Murray ?). Mainly of interest because I have fond memories of Winky Dink - probably before Murray was involved.

It went through Frank Zappa, Weather Report and, I think, ended up with Steely Dan. The whirlwind tour may have made some more stops on the way, but these were the ones I remembered.


4th was given by Alfred Essemyr, on a similar theme to the 3rd with the idea of collaboration through necessity, but in this case not necessarily known or planned.

The DJ as a collaborator with a musical artist: one records the music, one plays the music. This reminds me of a Sesame Street skit [2] on co-operation highlighting the potential in sharing efforts.

So once a track is released, anyone can collaborate with the initial artist. I myself have "collaborated" unofficially with John Farnham, Craig David, Milli Vanilli, Slayer, JS Bach, Richard Wagner, Bela Bartok.......oh, and John Cage :)




[1] Collaborations 2, Music Technology Forum. University of Adelaide, 29 March 2007.

[2] Two muppets, one with short arms, one with long unbendy arms.
One fruit tree. The fruit is too high for the short one, and once the other plucks the fruit it can't get them to its mouth.
Co-operation. The long armed muppet picks the fruit, the short armed feeds them both :)

A childhood memory from Sesame Street, Children's Television Workshop.

CC - week 5 - more adventures in sequencing of a sort

For this exercise [1] I have used similar rules to last week's score to manifest this score.

Here is the score.


Horizontal folds represent repetitive percussive sounds.
Vertical folds represent staccato sounds.
Rips within the piece of paper represent abstract washy sounds.
Angular folds represent simple sounds with pitch variation.
Scrunching represents long scrunchy sounds.
Rips at the bottom represent tearing sounds.

Here is the rendered score. The grease_at_the_end_of_time.mp3

With this piece showing deliberate manipulation of spatial information (pan/volume), I tore the score in half to simplistically portray the stereo information.

One interesting technique I found was joining the two pieces asymmetrically and then applying folds, crinkles, etc. This had the effect of major stereo misalignment.

Below is a screen shot of the ProTools track window. [2]
Nothing too exciting going on: a bit of panning and a small volume envelope. For this track I quite enjoyed hard panning, with repeated and slightly offset files to create the stereo misalignment.
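The "repeated and slightly offset files" effect can be sketched as duplicating a mono file and delaying one hard-panned copy by a few milliseconds (the Haas/precedence-effect trick). The 15 ms figure here is illustrative, not a value from the session:

```python
import numpy as np

def offset_widen(mono, fs, offset_ms=15.0):
    # duplicate a mono signal and delay one copy slightly, then
    # hard-pan the two copies: identical content, misaligned in time
    n = int(fs * offset_ms / 1000.0)
    left = np.concatenate([mono, np.zeros(n)])    # panned hard left
    right = np.concatenate([np.zeros(n), mono])   # panned hard right, delayed
    return np.stack([left, right])
```

Small offsets (under ~30 ms) widen or smear the image rather than reading as a distinct echo, which fits the "misalignment" the score called for.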



Below is a picture of the ProTools mixer window. [2]
A bit of fader adjustment and the panning is more evident.




[1] Haines, Christian. Creative Computing week 5 lecture. University of Adelaide, 29 March 2007.

[2] Digidesign ProTools 7

Monday 2 April 2007

aa week 4 - mic techniques

The aim of this week's exercise was to try out various microphone techniques for recording a consistent sound source. [1]

I set up a small radio broadcasting a tuning whistle; around this I set up an array of 6 microphones (as above).

  1. AKG C414 – condenser mic, set to OMNIDIRECTIONAL pickup pattern

  2. AKG C414 – set to SUPER CARDIOID

  3. Shure SM58 – dynamic mic, CARDIOID pattern

  4. Shure BETA 52A – dynamic mic, CARDIOID pattern

  5. Sennheiser MD-421 – dynamic mic, CARDIOID pattern

  6. Sennheiser MD-421 – dynamic mic, CARDIOID pattern


With this setup I could explore

  • proximity effect, using the MD-421s. Both of these mics were pointed at the same point on the sound source, one approx. 8 cm and the other 60 cm distant.

  • differing polar patterns, using the C414s. These mics were placed at a similar distance and along the same axis.

  • differing microphones, using the SM58 and the BETA 52A. These were also placed at a similar distance and along the same axis.


Here are the six sound samples (these have been normalised):

01.mp3 02.mp3 03.mp3 04.mp3 05.mp3 06.mp3
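For reference, "normalised" here means peak normalisation: each take is scaled so its loudest sample sits at the same level, which is why the level drop on the distant MD-421 isn't audible in the posted files. A minimal sketch:

```python
import numpy as np

def normalise(samples, peak=0.99):
    # scale so the largest absolute sample reaches `peak` (just
    # below full scale to leave a little headroom)
    m = np.max(np.abs(samples))
    return samples if m == 0 else samples * (peak / m)
```

Note this only equalises peaks, not loudness - a thin distant signal and a bassy close one can share a peak level yet still sound very different in volume.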


There are noticeable differences in all samples.

The two MD-421 samples exhibit the tendencies of the proximity effect quite nicely.

There is a marked difference in sound, with the more distant mic having a much thinner sound and a much reduced volume (though obviously the normalised samples don't show the volume difference :).


The two C414s' sound qualities are again quite different; below are frequency analyses of them.[2]







The omnidirectional (top) has a more distinct peak at about 670 Hz, while the super-cardioid (bottom) has a more balanced frequency distribution.


The two Shure microphones also show major differences in sound quality, with the SM58 having a broader sound.

Of the sounds recorded, the C414s surprised me the most - the difference is quite distinct.
Whether this was caused solely by the differing polar patterns, or by the setup [3], could only be revealed by having two recordings from the same position (this thought occurring to me several days after the recording session).



[1] Fieldhouse, Steve. Tutorial on Microphones. University of Adelaide, 27 March 2007.


[2] Steinberg Wavelab5 3D analysis.

[3] Using a mono signal coming through two separate speakers (the AM whistle through a stereo cassette radio) introduces all sorts of phase issues when comparing two microphones in different locations. The two mono signals would interact and affect the sound depending on location.

Tuesday 27 March 2007

cc - week 4 - sequencing of a sort

This week's task was to create a "sequence" of our paper sounds using a multitrack interface (rather than a sample sequencer).

Here is a photo of the score.

Horizontal folds represent repetitive percussive sounds. eg dodgy beat.mp3
Vertical folds represent staccato sounds. eg stuttering car.mp3
Rips within the piece of paper represent abstract washy sounds. eg vacuum organ.mp3
Angular folds represent simple sounds with pitch variation. eg
whirry biscuits.mp3
Scrunching represents long scrunchy sounds. eg crunkling orama.mp3
Rips at the bottom represent tearing sounds. eg stereo tearing.mp3


Here is the finished product, Davader the prequel.mp3.



I decided not to use grid editing, but aligned the files sequentially using their own lengths as a general guide. Hence the repetitive features are occasionally not quite regular.

I kept any computer style processing to a minimum by using pan, volume and pitch modulations for effects.
I also used some compression on the finished mix for better signal strength.

I also did a bit of layering, playing sounds simultaneously (or close to it) with these variations.

A sort of time stretching was achieved by cutting the file into multiple parts with overlapping sections, and then playing them sequentially.
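That cut-and-overlap stretch can be sketched as follows: chunks are read from the file with some overlap in the input but played back to back, so the result runs longer than the original by roughly chunk/(chunk - overlap). The chunk and overlap sizes are illustrative, and hard splices like this will click unless the edits land on zero crossings or are crossfaded.

```python
import numpy as np

def splice_stretch(audio, chunk=2048, overlap=512):
    # read overlapping chunks from the input...
    hop = chunk - overlap                     # input advances less than a chunk
    starts = range(0, len(audio) - chunk + 1, hop)
    # ...but play them sequentially, so each overlap region is heard twice
    return np.concatenate([audio[i:i + chunk] for i in starts])
```

With the defaults this lengthens the audio by about a third while keeping the original pitch, which is the appeal over simply slowing the playback rate.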

F - week 4 - collaborations 1

This week was student presentations: four separate ones on the theme of "collaboration".

I didn't gain much out of these particular presentations - only perhaps a glimpse into the psyche of each presenter.

First up, given by David Dowling (edit: oops, changed the name to correct one) focused on the collaboration of Metallica with Michael Kamen and the San Francisco Symphony Orchestra. A reasonably presented foray into the difficulties of dynamics.

The most interesting thing was the idea of space in the music of Metallica - between the simple chords used (a lot of 5ths), and the general rhythmic simplicity/repetition.


Second, given by Vinny Bhagat, focused on the work of Trilok Gurtu and his ongoing collaborations with a startling variety of musicians of many bents.

Not very coherently presented.


The third briefly discussed the production of software (Exact Authoring Tool) specifically designed to make collaboration easier between groups of people with little face-to-face communication, with reference to game design and a three-layered sound design situation (music, incidental sounds, FX).


Fourth, given by Khaled Sanadzadeh, featured a general discussion of the concept of "world music" - a reasonably mayhemic descent into opinions and general assumptions about the music industry.

A variety of content made for a wide view, opening the concept of collaboration beyond the expectations I went in with.

Monday 26 March 2007

AA - week 4 - recording a band more info

This post further explains the recordings of the band used in week 2, with diagrams.

The actual recording positions of each person and instruments used depend on the song (and who turns up – extra brass for example).

The band consists of

Jared : laptop – beats, melody, rhythm and probably more.

Melodica.

Euan : tabla, djembe, guitar, bass, vocals.

Liam : guitar, bass, vocals.

Andy : saxophone, other windy things.

I’ll just pretend I know the actual instruments for a song, and call it song 1.

Jared will play both laptop and melodica, Euan will play tabla, Liam guitar and Andy saxophone.

Because the bass will be coming from the laptop, we can use the deadroom for the guitar amp, patched into the live room for guitar input, and then monitor through headphones. In the live room there will be three acoustic instruments: melodica, saxophone and tabla, using low baffles to minimise spill while keeping face-to-face communication available.

Make up another song with different instrumentation, song 2.

Jared will play the same, Euan guitar, Liam bass and Andy clarinet.

Because there is both bass and guitar, the bass will go in the deadroom, and depending on who's using the rest of the space there may well be somewhere else to hide the guitar amp outside the recording area - failing that, we'll just baffle it.

More baffles to minimise sound and promote face to face communication.

MUSICIANS ARRANGED IN THE LIVE ROOM


GENERAL SIGNAL FLOW

general signal flow

A different number of mics will be used depending on the song: most in the live room will be patched through the wall bay into the studio, and one (or just a DI out of the bass amp, depending on sounds) in the deadroom patched into the studio. The laptop outputs can also be run straight into the patchbay into the studio - I'm thinking these can be patched straight into the line inputs of the desk.

From the studio the signals will go into the desk, then into the ADC, then into the computer and ProTools. Whatever is in the deadroom will be sent back through the headphones to the live room, along with any other signal necessary…

Wednesday 21 March 2007

F - week 3 - compossible in 19 parts

For this week's forum we all got to perform the work "compossible – with nineteen parts" by David Harris.

It is a chance piece with a reasonably well defined structure, giving pitch and time directions, and being scored for 19 separate parts. Although, comparing this to other pieces I've been responsible for (and much that I could be called irresponsible for), such as "composition for coins and dice" [1] and "monopoly" [2], I'm willing to admit this perhaps constricting environment is all a matter of perspective.

Having recently been reminded of the Fluxus movement through reading a biography of Yoko Ono [3] (one of the obvious inspirations that led to such pieces by myself, and no doubt also to this one by David), I was more inspired by the performance aspect that the clock arranging and chair moving [4] created than by the music itself.

The music produced over the course of the 45 minute piece was fairly unsatisfactory, although I'm sure with a bit more preparedness a more fulfilling piece may have been performed. I did enjoy the initial 5 minute rehearsal more than the actual performance.

Overall I was quite appreciative of the experience; it has been quite a while since I have been involved in such a shindig. The general reactions of many of the students were quite amusing, and I'm sure that the underlying motives of whoever designed the course structure are slowly opening the pores of all our minds to new potentials.




[1] Kelly, Edward . Composition for coins and dice. 1993.

Directions.

Roll a dice – the number indicates how long to maintain the action as defined by the coin toss.

Toss a coin – heads : do something/ tails : do nothing.

When the time as indicated is finished – repeat instructions.


[2] Kelly, Edward . Monopoly. 1993.

Directions.

Play the board game Monopoly.

When it's not your go – do something.

When it's your go - stop whatever you were doing and have your go.

Repeat.


[3] Hopkins, Jerry. 1986. Yoko Ono. Macmillan Publishing.



[4] After several other potential props were tried and discarded, and a stack of chairs arranged to a satisfactory height, the clock was eventually balanced on top. This took numerous minutes, and was accompanied by Stephen Whittington conducting - apparently in the style of John Cage.