Dealing with Logic’s Beat Mapping

Do I really have to do it? You know – load an audio file in Logic, enable Beat Mapping, then drag a Measure/Beat ruler position to the corresponding peak in the audio waveform. If you are working on very simple material, the automatic detection may work almost unassisted. If you are working on real music, it doesn’t work.

After many long, tedious sessions of Ctrl-Shift-dragging, hoping the beat sticks to the desired position, and usually ending up with a bumpy Tempo track, I decided a better solution had to be found. My preferred workaround is to tap the tempo into a new MIDI track. Then I ask Logic to get the beat mapping from that one. If things are a bit off, I can simply nudge the MIDI events, respace them proportionally with the Time Handles, or even add or delete events if a beat should be added or removed.
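What tapping the tempo into a MIDI track accomplishes can be sketched in a few lines: each pair of consecutive taps implies a momentary tempo, which is essentially the tempo map Logic derives from the MIDI region. The tap timestamps below are hypothetical values, not real Logic data.

```python
# Sketch: derive a per-beat tempo map from tapped beat times (in seconds),
# the same idea behind letting Logic read beat mapping from a MIDI track.

def tempo_map(beat_times):
    """Return the BPM implied by each pair of consecutive taps."""
    bpms = []
    for prev, curr in zip(beat_times, beat_times[1:]):
        interval = curr - prev          # seconds per beat
        bpms.append(60.0 / interval)    # beats per minute
    return bpms

# A slightly uneven human tapping (hypothetical values)
taps = [0.0, 0.52, 1.01, 1.49, 2.02]
print([round(b, 1) for b in tempo_map(taps)])  # → [115.4, 122.4, 125.0, 113.2]
```

Nudging one MIDI event simply changes one timestamp in the list, and only the two adjacent tempo values move – which is exactly why this is gentler than fighting a bumpy Tempo track directly.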

This works a lot better for me. At the very least, it is a tempo I tapped myself against real music, not an educated guess from a machine.

Praise for the open meter and key

One of the most common criticisms leveled at Dorico is that new documents are not in 4/4 meter and C Major. They are in an open meter and key.

In my view, this is one of the strongest points of Dorico, and one of the most innovative. I liked it immensely in Igor Engraver, and I am immensely happy to find it again.

From the point of view of a student or beginner, having a blank staff is highly educational, since it forces one to learn how to choose the right meter and key. Neither is a given; both have to be carefully considered. Is having them preset easier? It depends on the target user. If someone only needs a way to transcribe a simple tune, maybe a program as complex as Dorico Pro is not the right one.

For someone writing music in the 'academic' styles of the 20th century onward (including much film music), having a blank staff is liberating. The idea of 'flow' is at the basis of Dorico, and flow is not only the name of one of its structural elements: it's the deep philosophy of this program. Music flows. You are free to give it a meter, to design patterns. But at its basis, it is a free flow in time.

A missing standard for keyswitching

The lack of a universal, advanced standard for keyswitching drives me crazy. I’m one of those who prefer not to insert keyswitches in the score, nor to use separate tracks for playing techniques. I want a meta-code to drive my technique changes.

What I did, in making my articulation sets for Logic, was first to create my own personal articulation/technique map, starting from a Spitfire Audio UACC map repeated twice (UACC IDs 1-128, Logic 1-256). This means that all my maps will have the same articulation types at the same IDs. Selection messages will start from those fixed positions.
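The scheme of one fixed base bank duplicated into a second bank can be sketched as below. The technique names and ID numbers are purely illustrative, not Spitfire's actual UACC table; the point is only that every library variant ends up sharing the same IDs.

```python
# Sketch of a fixed articulation-ID scheme: a UACC-style base bank
# (IDs 1-128) duplicated into a second bank (IDs 129-256), matching
# Logic's 1-256 articulation ID range. IDs and names are illustrative.

BANK_SIZE = 128

base_map = {
    1: "long",
    20: "staccato",
    40: "pizzicato",
    70: "tremolo",
}

def build_full_map(base):
    """Duplicate the base bank so every library shares the same fixed IDs."""
    full = dict(base)
    for art_id, name in base.items():
        full[art_id + BANK_SIZE] = name + " (alt)"
    return full

articulations = build_full_map(base_map)
print(articulations[20], "->", articulations[148])  # staccato -> staccato (alt)
```

With such a table, an articulation set or expression map for any library is just a matter of pointing each fixed ID at that library's own selection message – which is exactly what breaks down when a library maps its techniques inconsistently.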

Unfortunately, not all libraries are consistent in how they map their articulations/techniques, so I'm still using too many articulation sets and expression maps. With the VSL VI libraries I built my own presets, all organized in the same way. But this is not possible with every library.

Composing contemporary music in the age of sound libraries

I’ve been trained to write music on paper. My main teacher, one of the best composers of his generation, was also a copyist, and insisted on good, accurate calligraphy. Writing on paper seems like the most obvious way when dealing with a very rational type of music, based on proportions and semi-automatic processes. It’s also the fastest way to notate some tightly integrated musical gestures, made of a bundle of pitches, articulations and expressions, that would be impossible to notate quickly with notation programs (an example: a violin playing a starting pitch, fading into a jeté and gliding up to an uncertain pitch).

At the same time, I’ve always felt the need to feel the music under my fingers. Neither my mind alone, nor the calculated music coming out of a computer as I listened to the results of what I was writing, has given me a satisfactory connection with my music. My mind, a powerful generator of music, is only a part of my body; and my ears, receiving the waves of pressure from the loudspeakers, are only a part of the sensory experience. I need a tactile experience of my music, together with the auditory one.

When very young, I composed at the piano. Sometimes I sat at the piano and went on experimenting with Stravinskian overlapping chords, Bartókian hammering rhythms, Schoenbergian piercing intervals and misty outbursts of notes; at other times, I just checked at the piano what I was writing on paper. I had a physical connection with my music.

Later, when I could afford one, I switched to computers. I tried to simulate “real” music and sounds. However, notation programs were unable to make my notes sound as I wrote them; I wrote them as music, but the computer insisted on playing them back as arithmetic expressions. The sounds I could feel when playing on the keyboard were not what they were named after: the piano lacked hammers and resonance, violins lacked wood and body, brass did not explode, woodwinds lacked breath and key clicks. The computer was great for electronic music, not for simulating music made with real instruments.

But acoustic sound libraries improved over the years. VSL was a revolution. Other libraries appeared for specialized types of sound. For what I was looking for, VSL and XSample offered great support to my music. At first, I had something resembling realism in the libraries that came with the Native Instruments package (a taste of VSL, then the realistic chamber strings of Session Strings Pro). Then, I could finally afford the solo instruments of the XSample Library with their extended techniques, and the accurate orchestral sounds of the VSL Special Edition. I had a powerful, realistic sonic arsenal under my fingers. I had all the sound tools I could need.

So, I could compose at the computer again. But how? Notation programs continued to be cold as a grave. Wallander’s NotePerformer added life to my Sibelius scores, but still more the life of a lemur than that of a living body. And composing by patiently writing and sculpting pitches on the staff looked like underusing the tools I had. What I had was basically a glorified piano – the same keyboard on which the greatest composers of the past loved to improvise and compose, the same black-and-white technological interface at which they loved to spend time imagining and feeling their music – but one capable of really *playing*, and not only suggesting, a full orchestra.

Is composing at the keyboard legitimate? With Logic, I can keep the score, piano roll and controller pages open, in a mix of traditional notation, evolving texture and cluster graphic notation, and oscillator and modulator diagrams. I can move the input cursor to where I have to insert a segment, and start recording from there. I can step-input pitches as I would in a notation program. I can have fairly accurate notation and realistic sonic rendering at the same time, write notes on staves, and later export a MusicXML file to refine the notation with a dedicated program. Logic can also assist me with some serial-based elaboration, unless I prefer to cut and paste pitches generated by OpenMusic.

What I feel is that, by composing in a DAW that gives me easy, accurate control over the piece’s micro- and macro-structure, I can really go further, and maintain better control over the piece’s overall shape and evolution. I can create a general structure and time signature map in advance, insert motives and focal points as placeholders, use the track arrangement space as a blank wall on which to attach post-its, and create my piece by going from the general image to the finer details, gradually filling that wall. And always keeping contact with the actual sound, not simply a mental image of it.

Isn’t this a lot like the old Maestros sitting at their klaviers, one hand on the keyboard and the other hand writing on a music sheet?

(The above is an old reflection, made in February 2017.)

Sketching libraries and quick composition

Are “sketching libraries” really useful for speeding up composition? I’m not a great example of quick productivity, but when writing tonally I like to start by playing the piano. Considering how good modern sampled pianos are, I find it liberating to just return to my first instrument and let it help me draw my music in rough lines.

The new tempo-following metronome in Logic is a revolutionary innovation for me, since playing to a metronome click kills my creativity. Having the metronome follow me lets creativity flow freely, as it could in the age of acoustic pianos, pencil and paper.

The piano sketch is a good place to work on melody, harmony and form. It's just two lines of music and some textual annotations. When done with the basic matters, you can start propagating your music to the other instruments, by copying and pasting, or by playing the new lines idiomatically.

But I admit that sketching libraries are also great. Not as immediate as a piano, they let you write down more information on the first pass. I like Albion One for more booming music, Vienna Smart Orchestra for more classical writing. The Berlin Orchestra Inspire series should also work well. Orchestral sketching libraries, however, already force you to follow their pace. String attacks may be too slow, and while you can adjust this with a controller (later or in real time), there you are, once again facing detailed editing: the very thing you were trying to avoid.

On the other hand, sketching orchestras may offer you inspiration. I can't wait to try British Drama Toolkit.

The horrors of the ideas about horror music

Jump into a discussion about sound libraries like Spitfire Audio's Albion IV "Uist", Sonokinetic's Espressivo or 8Dio's CASE and CAGE, and you'll see that they are automatically associated with horror music. Some also associate Spitfire's EVOs, with their intrinsic instability, with horror music.

To be honest, some sound library manufacturers do nothing to prevent this automatic association. Native Instruments called their dedicated library, developed with Audiobro, "Thrill". 8Dio doesn't hide that this is the intended destination, and it is true that their CASE and CAGE libraries are very much dedicated to the genre.

In my view, however, some of these, like Uist, Espressivo and the EVOs, are simply great tools for modern classical music. They are not "effects", but "words" or "phrases" typical of a particular modern language. In particular, Spitfire's Evolutions are more on the subtle side, so I would exclude them from the "horror effects" category.

All considered, what we often call "horror music" is the soundtrack Kubrick assembled for The Shining. But those were, in origin, modern classical pieces by Bartók, Ligeti and Penderecki – the very composers by whom the finest of these libraries are inspired.

Writing for the real, AND virtual players

Let's be honest: when writing a piece for a real ensemble or orchestra, we know that it will probably remain in the realm of virtual instruments. Our piece will never, ever be performed by a real orchestra.

I have had the honor and pleasure of having some of my pieces performed by real players – very skilled musicians, sometimes among the best on their instrument. Yet, for most of us, this is something that can only happen on special occasions.

The rest of us will, most of the time, continue to listen to our works through virtual performers and instruments.

That means we have to find a balance between writing for real instruments and, at the same time, making accurate prototypes that have to be considered an alternative form of the final piece. Our piece has to sound great both when read by real performers and when performed by our samplers.

We write for virtual orchestras. This is no longer to be considered a second choice. Virtual orchestras very often end up in feature films released in major theaters. The virtual orchestra is a real instrument, even if its human content is just that of the musicians who recorded the samples and the composer who created the virtual performance.