Wednesday, 9 May 2007

cc - week 8 - fscape soundhack and the nn-19

Get some vocal sounds and process them through Fscape and SoundHack[1], easier than it sounds.

Fscape took me a while to work out how to effect sounds. I wonder why that Open option in the menu exists? I couldn't get it to work.
I also noticed that a few sounds generated with one of the effects (something to do with Bloss) had some serious DC offset, and Peak doesn't deal with that (Pro Tools does :).
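For what it's worth, a DC offset is just a constant shift of the whole waveform away from zero, and for a static offset it can be removed by subtracting the signal's mean. A minimal sketch in Python (my own illustration, assuming the audio is already loaded as a 1-D float array; a gentle high-pass filter would be the more general fix):

```python
import numpy as np

def remove_dc_offset(samples):
    """Remove a static DC offset by subtracting the signal's mean.

    samples: 1-D float array of audio samples.
    Returns a copy centred on zero.
    """
    return samples - np.mean(samples)

# A 440 Hz sine shifted up by 0.3 (i.e. with a DC offset):
t = np.linspace(0, 1, 44100, endpoint=False)
offset_signal = 0.5 * np.sin(2 * np.pi * 440 * t) + 0.3

fixed = remove_dc_offset(offset_signal)
print(round(float(np.mean(offset_signal)), 3))  # 0.3
print(round(abs(float(np.mean(fixed))), 6))     # 0.0
```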

Didn't really understand what SoundHack or Fscape were doing; just twiddled settings until I settled on a product.
They do seem quite interesting, and I look forward to more fun with them.

When editing files, I am used to using 1 and 2 to navigate to the beginning/end of a selection, and I'm yet to work out how to do this in Peak. Not being able to jump around makes looping a tad unwieldy.

Got a bit more into the sample settings in the NN-19: root key, key range and level. Last week everything worked straight up; this week's sounds enjoyed it!!
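For the record, the root key setting just tells the sampler which note the sample was originally recorded at; playing any other note transposes the sample by changing its playback rate, with each semitone a factor of 2^(1/12). This isn't Reason's actual code, just a sketch of the standard equal-temperament arithmetic:

```python
def playback_rate(played_note, root_key):
    """Playback-rate multiplier a sampler applies to transpose a sample.

    played_note, root_key: MIDI note numbers (60 = middle C).
    Each semitone is a factor of 2 ** (1/12), so an octave doubles the rate.
    """
    return 2 ** ((played_note - root_key) / 12)

# Sample with its root key at middle C (60):
print(playback_rate(72, 60))            # 2.0 (an octave up plays twice as fast)
print(round(playback_rate(67, 60), 4))  # 1.4983 (a fifth up)
```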

Apart from that the sampler was pretty much like last week; I set it up similarly (velocity and mod wheel modulating the filter, a fader modulating the amp envelope release, a bit of delay and compression), but this time used a bit of LFO on the filter (BPF again) AND found the pitch modulation!! I did cut the whammy out because it ran to about 1.5 minutes, too long :( Still can't believe I missed it last week.

Track: cc week 8.mp3

Screen shot.

Shows pitch info, velocity, amp env release and pitch modulation. Some sort of proof I whammied it up :)


[1] Haines, Christian. Creative Computing week 7 lecture. University of Adelaide, 03 May 2007.

f - week 7 - gender in

Gender in Music Technology - can you tell the difference?


Not a topic for the faint-hearted. The presentations on this subject probably reveal more about the people presenting than the earlier subjects did; they also made me realise that I'm almost old compared to most of the others in the forum :)

The presentations varied from "I'm not a feminist"[1] to "why should you tell the difference"[2], with a quick stop via Homo erectus, with the women staying in the cave[3].
Oh, and not to forget the interesting variation of hard mastery vs. soft mastery[4], which I found the most interesting.
Women are more associated with soft, and men with hard. Ha, yin and yang. Here the truth of the Taoist philosophy of yin and yang proves itself again: in the heart of yin there is yang, and in the heart of yang there is yin. There is no actual dualism; there are no absolute 'truths'. Sure, gender gives major inclinations towards societal roles, but there are always exceptions.

And by the way, seemingly like almost everyone else who has posted a blog on this, I muchly enjoy Bjork!!

[1] Sincock, Amy. “Music Technology Forum – Week 7 – Gender in Music Technology, can you tell the difference?”. Forum Presentation, University of Adelaide, Schultz building. 26 April 2007.

[2] Morris, Jacob. “Music Technology Forum – Week 7 – Gender in Music Technology, can you tell the difference?”. Forum Presentation, University of Adelaide, Schultz building. 26 April 2007.

[3] Loudon, Douglas. “Music Technology Forum – Week 7 – Gender in Music Technology, can you tell the difference?”. Forum Presentation, University of Adelaide, Schultz building. 26 April 2007.

[4] Probert, Ben. “Music Technology Forum – Week 7 – Gender in Music Technology, can you tell the difference?”. Forum Presentation, University of Adelaide, Schultz building. 26 April 2007.

Sunday, 6 May 2007

aa - week 8 - electric guitar

EDIT: I've had problems with the host for the files on this page; it's sort of fixed (I think), but I'm generally not impressed with them.
_____________________________________________________________

So, recording the electric guitar. That horrible overused tool of mayhem and boredom...


Set up the house amp and moved it around to a "nice" location.
Moved around listening to the amp; it's amazing how different it sounds when you're listening in front of the speaker vs. above it. All those high frequencies that travel in straight lines.


Three microphones: AKG C414, Shure SM57, Sennheiser 421.
Set up the three mics to record simultaneously.

Pointed the SM57 and 421 diagonally just inside the rim of the speaker; Steve said he liked that area[1], and I found nothing better.

The 414 was about 4 1/2' out and 1.5' above the centre of the speaker. I had some issues with phase: I moved it around for a while but couldn't find a spot without obvious phase problems, so I got around this by deciding it wasn't a problem :) I recognised a particular metallic sound generated by this phasing.
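Out of curiosity, the arithmetic behind that phasing: the far mic hears the speaker later than the close mics, and mixing the two gives comb-filter notches tied to that delay. A rough back-of-the-envelope sketch in Python (my own numbers, assuming ~343 m/s for the speed of sound and treating the 4.5' as the whole path difference, ignoring the height offset):

```python
# Rough comb-filter numbers for blending a distant mic with close mics.
# Indicative only: real notch positions depend on the exact path difference.
SPEED_OF_SOUND = 343.0  # m/s at room temperature

distance_m = 4.5 * 0.3048               # 4.5 feet to metres
delay_s = distance_m / SPEED_OF_SOUND   # extra time of flight to the far mic

# Mixed with the close mics, cancellations land at odd multiples
# of 1 / (2 * delay):
first_notch_hz = 1 / (2 * delay_s)
notches = [round(first_notch_hz * (2 * k + 1)) for k in range(4)]

print(f"delay: {delay_s * 1000:.1f} ms")   # delay: 4.0 ms
print("notches (Hz):", notches)            # notches (Hz): [125, 375, 625, 875]
```

So the first cancellation sits down around 125 Hz, right in the guitar's body, which fits the metallic character I was hearing.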

Patched everything so I could play from the control room.

Screenshot shows 4 takes, 3 microphones.



Take 1.mp3: funny-looking Roland MIDI guitar. No MIDI, lots of whammy. Mixture of the 3 mics, compression.

Take 2.mp3: same guitar, same whammy. Mixture of the 3 mics, compression.


Take 3.mp3: the Strat-style guitar, neck pickup, a bit of EQ, a bit of compression, using the two dynamic mics.
Just noticed what sounds like a bit of delay; mmm, no delay, so it must be sick technique!!

Take 4.mp3: the Strat, bridge pickup, a bit of EQ, no compression, mixture of the 3 mics.
The thinnest sound of the lot; I emphasised this.


All the sounds have their relative merits.
The differences in sound come down to technique, guitar and amp settings. And post-production :)




[1] Fieldhouse, Steve. Electric guitar recording technique. University of Adelaide, 01 May 2007.

Tuesday, 1 May 2007

cc - week 7 - NN-19 sampler

This week's task was to "create a performance instrument that uses samples manipulated from voice"[1], using the NN-19 sampler module featured in Reason Adapted.

Using a wave editor to create samples was a genuine pleasure, really takes the hard work out of looping. No more little LCD screens!!

Importing into the NN-19 was all good, automap worked great - oh so easy (once I remembered how to do it :).

Then to get some control happening over the sampler.
Modulating the filter with velocity and the mod wheel was easy.
Getting the amplitude release to listen to the Novation was a bit of a pain.
- On my first session I spent a bit of time playing with the buttons, then reading the manual, but couldn't find how to change the controllers other than by using templates. So I scrolled through the templates, using the override mapping to listen for usable controller info, until I found a template and a fader that were useful...
Then on my second session, whoever had been on the computer before had left the Novation set up on the User preset, which worked fine straight up.

That was it really: routed the sampler through the compressor (kept the resonance under control), added a bit of delay and reverb, and Bob's your finished product...
sample #1 010407.mp3

Here's a picture of the NN-19.



Here's a picture of the sequencer window, showing velocity and the amplitude release automation.



I could have spent longer getting that particular setup under control, with more tweaking of the samples, the sampler and the Novation controllers, and really done some aesthetic magic.
What I would have liked to do is set up each sample on its own sampler, and then map out the keyboard accordingly - this would have given me far greater control over the individual sounds, rather than only one envelope for all sounds, etc.
But that for me is the general problem with Reason: it relies too much on an archaic form, modelled on outdated forms. The flexibility I desire is lacking (that's based upon minimal use of Reason 1, and this "adapted" version).


[1] Haines, Christian. Creative Computing week 7 lecture. University of Adelaide, 26 April 2007.

Monday, 30 April 2007

aa - week 7 - vocal recording

Recording vocals.[1]


For this the Neumann U-89i microphone was used, set to a cardioid pattern with no filter or pad. A pop filter was set up about 3" away.
This was then routed through the Avalon pre-amp into Pro Tools.
In Pro Tools I routed the initial signal through an auxiliary track so I could record compression with the take.
For some reason, Blogger keeps making my "large picture" not so large :( Hopefully enough information can be garnered from it...


All samples were post-EQ'd with about 4 dB of reduction at 3 kHz. I used this on all samples because it sounded good. Perhaps it was a mic characteristic, or perhaps the positioning in the room, that made this a good frequency to cut.

1st and 2nd takes were about 8" from the microphone. Nothing other than a bit of "coaching" was used.
take 1 take 2

3rd take was about 6" from the mic, with the Avalon HPF set just above 130 Hz.
take 3

4th was the same as the 3rd, with compression of 3:1 and gain reduction of about 6 dB, as in the picture above.
take 4

I found Kath gained confidence over time, and whilst she managed to get more of an overall dynamic constancy, she still tended to lose volume over time.
There are also noticeable unwanted mouth sounds; a bit of practice would help her delivery and give a better recording.

5th and 6th takes were me singing :) 6" away, with the 6th take featuring similar compression to the earlier takes.
The 6th take (which was really about the 14th or so) shows my voice wearing out. Obviously a better singer could maintain vocal ability for longer periods, and also better dynamic stability.
take 5 take 6



All of these could do with more production, but the last take of Kath's voice was definitely the most usable. This could well be due to her growing experience at the time.
As for my own takes, I find them much of a muchness; neither is preferable to the other - they both sound all right for what they are.


[1] Fieldhouse, Steve. Vocal recording technique. University of Adelaide, 24 April 2007.

Sunday, 8 April 2007

cc - week 6 - sequencing to someone else's score

EDIT: I managed to forget to link to the resultant piece.
paper_montage_la_3.mp3
There it is...

This week we were asked to sequence a piece from a score given to us.[1]

Here is a picture of the track window which features the score at the bottom.




After a bit of pondering I recognised similarities to my own score:
the cut-up bits as abstract sounds, the vertical marks as percussive, etc. I then laid sounds down in a representation of the score, then manipulated them to my aesthetic pleasure.

I used time stretching, pitch shifting and cutting up and rearranging to manipulate my existing sounds.
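The cutting up and rearranging was done by ear in Pro Tools, but the idea is simple enough to sketch: chop a buffer into slices and concatenate them in a new order. A toy Python version (the names are mine, nothing to do with Pro Tools):

```python
import numpy as np

def cut_up(samples, n_slices, order):
    """Chop a buffer into equal-ish slices and rearrange them.

    samples: 1-D array of audio samples.
    order: the new slice order, as indices into the slice list.
    """
    slices = np.array_split(samples, n_slices)
    return np.concatenate([slices[i] for i in order])

buf = np.arange(8.0)                 # stand-in for audio samples
print(cut_up(buf, 4, [2, 0, 3, 1]))  # [4. 5. 0. 1. 6. 7. 2. 3.]
```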

From much experience of previous adventures using this technique, I decided to use 5 tracks for simplicity. This kept the vertical information to a minimum, and hence vertical scrolling to a minimum.
With dragging and dropping willy-nilly between them (I found that Pro Tools will copy any automation and plugins along with the wave, which is both annoying and useful), and tweaking of the automation, a satisfactory result was achieved.

The top 3 tracks were used for generally centered sounds, with a bit of panning and volume automation.
Bottom 2 were hard panned left and right and both feature the moogerfooger delay as an insert effect.

The picture shows automation for:
- the top track: EQ bypass
- the bottom track: delay time

I also used a bit of compression on the master fader channel for a bit of overall gain.



[1] Haines, Christian. Creative Computing week 6 lecture. University of Adelaide, 5 April 2007.


Saturday, 7 April 2007

f - week 6 - collaboration revisited

Today was officially "not enough collaborations" day; hence, rather than the scheduled presentations, we were exposed once more to perspectives on collaboration.

First was Luke Harrald and his talk on the soundtrack for the short film "the 9.13".[1]
He made a few interesting comments:
- regarding the continual blurring of the edges between media: one must either become multi-skilled or collaborate with others who have the skills.
- he considers machines to be active creative collaborators.

Second was David Harris, who collaborated with visual artist Pamela Rataj on a number of works.[2]
- they had a starting point of not interfering with each other.
- for their first piece together, David found himself using a similar technique to Pamela but in a totally different medium: Pamela was working with moire patterns in a grid mesh, and David with the resonance of a sound, where every location has a different harmonic content.
- for the second piece, David also found himself collaborating with the space, and evolving through it.

Third up, Poppi Doser and Betty Qian talked about their collaborative work "Behind the door", a short animation (Poppi did the sound, and Betty the animation).[3]
- coming from different nationalities, they had difficulties communicating through speech, so they resorted to charades to get some ideas across.
- theirs was a simultaneous process, working together.

Fourth was Stephen Whittington and his general impressions of the concept of collaboration.[4]
- two words: empathy and humility. Empathy for sharing the process and creating a synergy, and humility for being open and receptive to other potential originations.
- "Beethoven was a far greater musician than I."[5] This statement ties in with humility: interpreting in such a way as to feel the music rather than force it.
- one also collaborates with equipment or instruments. "I see it having a kind of intelligence in its structure."[5]

I feel quite strongly that collaboration extends well outside the realm of working with someone on a limited task. Anything we choose to work with outside ourselves is the work of another entity, and therefore has its own innate nature and abilities that will come into play through the work we create!!

It was during the question time after Stephen's talk that the following question was voiced: "What is not a collaboration?"[6]
After all the philosophising and broadening of abstract concepts, this rather pointed question almost pulled the rug from under my feet. The danger of becoming too abstract is that words lose any useful meaning and threaten to collaborate in doing my head in :)



[1] [2] [3] [4] Music Technology Forum on collaboration. University of Adelaide. 5th April 2007.

[5] Whittington, Stephen. Music Technology Forum on collaboration. University of Adelaide. 5th April 2007.

[6] Unknown student. Music Technology Forum on collaboration. University of Adelaide. 5th April 2007.