Composing contemporary music in the age of sound libraries
20/12/19 12:36 Filed in: Creative Work
I’ve been trained to write music on paper. My main teacher, one of the best composers of his generation, was also a copyist, and insisted on good and accurate calligraphy. Writing on paper seems the most obvious way when dealing with a very rational type of music, based on proportions and semi-automatic processes. It’s also the fastest way to notate tightly integrated musical gestures, made of a bundle of pitches, articulations, and expressions, that would be impossible to enter quickly in a notation program (an example: a violin playing a starting pitch, fading into a jeté and gliding up to an uncertain pitch).
At the same time, I’ve always felt the need to feel the music under my fingers. Neither my mind, nor the calculated music coming out of a computer playing back what I was writing, has given me a satisfactory connection with my music. My mind, a powerful generator of music, is only a part of my body; and my ears, receiving the pressure waves from the loudspeakers, are only a part of the sensorial experience. I need a tactile experience of my music, together with the auditory one.
When very young, I composed at the piano. Sometimes I sat at the piano and went on experimenting with Stravinskian overlapping chords, Bartókian hammering rhythms, Schoenbergian piercing intervals and misty outbursts of notes; at other times, I just checked at the piano what I was writing on paper. I had a physical connection with my music.
Later, when I could afford one, I switched to computers. I tried to simulate “real” music and sounds. However, notation programs were unable to make my notes sound as I wrote them: I wrote them as music; the computer insisted on playing them back as arithmetic expressions. The sounds I could feel when playing the keyboard were not what they were named after: the piano lacked hammers and resonance, the violins lacked wood and body, the brass did not explode, the woodwinds lacked breath and key clicks. The computer was great for electronic music, not for simulating music made with real instruments.
But acoustic sound libraries improved over the years. VSL was a revolution, and other libraries appeared for specialized types of sound. For what I was looking for, VSL and XSample offered the best support. At first, I had something resembling realism in the libraries bundled with the Native Instruments package (a taste of VSL, then the realistic chamber strings of Session Strings Pro). Then I could finally afford the solo instruments of the XSample Library, with their extended techniques, and the accurate orchestral sounds of the VSL Special Edition. I had a powerful, realistic sonic arsenal under my fingers – all the sound tools I could need.
So, I could compose at the computer again. But how? Notation programs continued to be cold as a grave. Wallander’s NotePerformer added life to my Sibelius scores, but it was still more the life of a lemur than that of a living body. And composing by patiently writing and sculpting pitches on the staff looked like underusing the tools I had. What I had was basically a glorified piano – the same keyboard at which the greatest composers of the past loved to improvise and compose, the same black-and-white technological interface at which they loved to spend time imagining and feeling their music – but one capable of really *playing*, and not only suggesting, a full orchestra.
Is composing at the keyboard legit? With Logic, I can keep the score, piano roll and controller pages open, in a mix of traditional notation, evolving-texture and cluster graphic notation, and oscillator and modulator diagrams. I can move the input cursor to where I have to insert a segment, and start recording from there. I can step-input pitches as I would in a notation program. I can have rather accurate notation and realistic sonic rendering at the same time, write notes on staves, and later export a MusicXML file to refine the notation with a dedicated program. Logic can also assist me with some serial-based elaboration, when I don’t simply cut and paste pitches generated by OpenMusic.
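The serial-based elaboration mentioned above can also be sketched outside OpenMusic (which is Lisp-based). As a purely hypothetical illustration – the row below is invented, not taken from any piece – here is a minimal Python script deriving the classic transformations of a twelve-tone row as pitch classes, ready to be pasted into a DAW or notation program:

```python
# A minimal sketch of serial pitch elaboration: given a twelve-tone row,
# derive its transposition, inversion, and retrograde as pitch classes (0-11).
# The example row is hypothetical, chosen only for illustration.

ROW = [0, 11, 7, 8, 3, 1, 2, 10, 6, 5, 4, 9]  # prime form (P0)

def transpose(row, n):
    """Shift every pitch class up by n semitones (mod 12)."""
    return [(p + n) % 12 for p in row]

def invert(row):
    """Mirror every pitch class around the row's first note."""
    axis = row[0]
    return [(2 * axis - p) % 12 for p in row]

def retrograde(row):
    """Read the row backwards."""
    return list(reversed(row))

if __name__ == "__main__":
    print("P0 :", ROW)
    print("P5 :", transpose(ROW, 5))
    print("I0 :", invert(ROW))
    print("R0 :", retrograde(ROW))
    print("RI0:", retrograde(invert(ROW)))
```

Because transposition, inversion and retrograde are all bijections on the twelve pitch classes, every derived form is again a complete row – exactly the property that makes this kind of semi-automatic process easy to delegate to a script.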
What I feel is that, by composing in a DAW that gives me easy, accurate control over the piece’s micro- and macro-structure, I can really go further, and keep a better grip on the piece’s overall shape and evolution. I can create a general structure and time-signature map in advance, insert motives and focal points as placeholders, use the track arrangement space as a blank wall on which to attach post-its, and create my piece by going from the general image to the finer details, gradually filling that wall – always keeping contact with the actual sound, not simply a mental image of it.
Isn’t this a lot like the old Maestros sitting at their klaviers, one hand on the keyboard and the other hand writing on a music sheet?
(The above is an old reflection, made in February 2017.)