Overview


This piece was written for NoTAM's educational CD-ROM «DSP», using only the signal processing and synthesis tools provided as part of the program. The piece is documented in detail on the CD-ROM, and the development history of each sound is explained to give students insight into the compositional ideas on both the micro and macro levels.

Part of the idea was that the piece should be available as music, as a demo that the students could change, develop and destroy at any point, and as a computer music animation. The method for displaying the music in this animation is largely taken from an earlier work, «When Timbre Comes Apart».

The data structure that comprises the music has been mapped into the visual domain through the construction of a model where amplitude corresponds to «altitude», and frequency corresponds to landscape forms – high frequencies to the left, low frequencies to the right. The construction used in this work is only one of many possible mappings. Because the animation is based on a direct representation of the data structure that comprises the music, one «sees» the music as one hears it.

A mapping of this sort has pedagogic value: the camera movement leads the viewer through the music, directing attention to particular events or perceptual concepts. The animation also touches on the current debate on musical representation by displaying the sounding object itself, not only the code used to generate it, whether that be a score intended for musicians or a computer program.

Technically, the work was realized first as music, and the sound was then processed through an FFT analysis of the same type used to make sonograms. The resulting data set was used to build the model described above, which was later «filmed»; the result is an experience of flying over, under and through the music as it is being played. The animation was realized by Roger O. Nordby at the University Center for Information Technology, University of Oslo.

The Sound


The musical idea for this short work was to present a brief journey through three different timbral spaces created through different kinds of processing. The piece starts off with sounds from flapping wings, and these sounds recur several times throughout the work, opening and closing different parts of the musical journey.

All the programs on the CD-ROM have been used, and the two «rooms» have been constructed to have different characters. The first room is full of questions and answers, while the second is full of motion and rhythm. The second room was constructed through algorithmic composition, represented on the CD-ROM by a program that includes four different kinds of algorithms, where the user can input values for the different parameters.

The Images

The visual idea, as mentioned in the beginning of this document, was based on using the data set from an FFT analysis. An FFT analysis provides data on the frequency components present in every moment of a sound. The data set was treated as a sonogram: a two-dimensional representation of time, frequency and amplitude. The camera was moved along the time axis in the sonogram, and the «altitudes» in the «landscape» show amplitude variation. Several curves were drawn in manually to describe the camera's placement within the spectrum and its height above it. A model of the data set was created in the program Explorer on a Silicon Graphics Indigo II, and the material quality as well as the lighting and lighting angle were set there. A number of small C-programs were written to generate the splines needed for smooth camera movement. The images, at 25 frames per second, were shot onto a SONY CRV-disk, and transferred to Betamax tapes for the final sync/mix with the music.
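The original small C-programs for spline generation are not reproduced here, but a uniform Catmull-Rom interpolator, a common choice for passing a camera smoothly through a set of key positions, can serve as a stand-in sketch. The function name and the choice of Catmull-Rom are assumptions for illustration, not details taken from the original code.

```c
/* Catmull-Rom interpolation between control points p1 and p2,
   with p0 and p3 as their neighbours; t runs from 0 (at p1) to
   1 (at p2).  The curve passes through every control point, so a
   hand-drawn set of camera key positions is followed exactly while
   the motion between them stays smooth.  Applied per coordinate
   (position in the spectrum, height above it). */
double catmull_rom(double p0, double p1, double p2, double p3, double t)
{
    double t2 = t * t, t3 = t2 * t;
    return 0.5 * ((2.0 * p1)
                + (-p0 + p2) * t
                + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t2
                + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t3);
}
```

Evaluating this at small steps of t between each pair of hand-drawn key positions yields one smooth camera coordinate per frame, 25 per second of music.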

The movement and focus of the camera were set with the intention of augmenting the musical development, either by focusing on interesting parts of the spectrum or by showing connecting elements. In addition, I added signs to demarcate new sounds appearing in the palette, and to further mark the change of «rooms». It can be argued that this kind of preparation takes something away from the experience of temporality, but this sort of expectation is very similar to what we encounter in verbal communication, where grammar and word categories form the foundation for pattern recognition.

The camera movement over and under the model is designed to expose the data set as an illusion, to make the abstraction of the idea explicit, and to dispel any notion of a fly-by-nature experience. The same applies to the zoom effects that occur in two or three places.

The images could not have existed without the music in this video, but they are not to be considered a mere subset of it. It seems more natural to say that the images and the music are reciprocal explanations of each other.

[Images: fuglene1, fuglene2, fuglene3, fuglene4, fuglene5]

All pictures © Jøran Rudi.
Not to be used for publication without permission.