I am often asked, ‘Was that a “real” piano in your recording?’, ‘Did you use a synthesizer?’, ‘Was the recording performed in real time?’ I have not made an issue of how the music was produced, because production technique is not what I wanted to be the focus of my work. I did not particularly want to draw special attention to the work as a ‘synthesized’ performance, even though it is ‘orchestral’ in nature. However, for those interested in the production of the audio, here are a few notes about how the recording was produced.


All the instrument parts were performed ‘live’ (in real time) by myself on keyboards. Real-time performance still seems to be the best way to breathe ‘life’ into a recording (though that may change as software technology improves). The piano and orchestral parts were layered using overdubbing, and synchronisation was visually cued (just as an orchestra follows a conductor), or done with a metronome when required. I used two types of keyboard for the performances: a Kawai digital piano and an M-Audio MIDI keyboard controller. I selected a Kawai digital piano because Kawai uses the same keys and mechanisms found on its acoustic grand pianos, so from a pianist’s point of view you are seeing and feeling the same interface you find on a grand piano. The M-Audio MIDI keyboard controller was used where extra continuous control was required, such as control of vibrato, tremolo or other parameters.
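To make concrete what the computer captures during such a performance, here is a minimal Python sketch of how recorded keystroke and controller events might be represented and then layered by overdubbing. The event fields and the `merge_overdubs` helper are illustrative assumptions, not the actual data format used by my recording software.

```python
from dataclasses import dataclass

@dataclass
class MidiEvent:
    time: float   # seconds from the start of the take
    kind: str     # "note_on", "note_off" or "cc" (continuous controller)
    data1: int    # note number, or controller number (e.g. 1 = mod wheel)
    data2: int    # key velocity, or controller value (0-127)

def merge_overdubs(*takes):
    """Layer several separately recorded takes into one stream, ordered by time."""
    return sorted((e for take in takes for e in take), key=lambda e: e.time)

# One overdubbed piano take and one strings take, recorded in separate passes:
piano_take = [MidiEvent(0.00, "note_on", 60, 96),
              MidiEvent(0.50, "note_off", 60, 0)]
strings_take = [MidiEvent(0.25, "cc", 1, 40),        # mod wheel adding vibrato
               MidiEvent(0.30, "note_on", 67, 70)]

merged = merge_overdubs(piano_take, strings_take)
# the two performances now interleave chronologically into one event stream
```

The point of the sketch is that the computer records gestures (which key, how hard, when, plus continuous controller movements), not audio, which is what makes the later sound-generation step possible.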


Regarding the question, ‘Was it a real piano?’, I would answer ‘yes’. That may seem like an unusual answer, but a number of professional musicians with whom I have discussed this have been fascinated by my explanation. The piano was real, in that I performed the piano parts on an actual Kawai keyboard. The process involved a real-time performance of the various piano parts, with the parameters of the key strokes and pedalling being accurately recorded by a computer. To generate the sound that you hear, I then ran the key-stroke and pedalling information through Cakewalk’s Sonar software and a software instrument called Ivory, developed by Synthogy. Ivory uses recordings of a ‘real’ piano: a ‘real’ Steinway was set up in a ‘real’ hall, and every key on the piano was recorded using the best studio gear available. Each note was recorded at ten different velocity levels and with multiple release points. Ivory responds to a key press by selecting the recording that corresponds to the key and the velocity with which it was struck; when a key is released, the ‘real’ recording of the release is also heard. I used this approach mainly because it gave me superior control over the piano sound in the studio environment. Everyone who has listened to my recording (including leading professional musicians) has told me that the piano sounds ‘real’, and is then surprised when I explain how the sound was produced. In using this technique I sometimes felt I was following in the footsteps of Glenn Gould, the famous Canadian pianist, who from the 1960s through to the 1980s experimented with tape splicing and unusual studio configurations in his recordings of classical music. If he were alive today and could see what modern technology makes possible, I am sure it would take his breath away; he might well have been even more innovative than I have been.
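To illustrate the sample-selection idea described above, here is a minimal Python sketch of how a velocity-layered instrument might map a key press to one of ten recordings per note. The even split of the 1–127 MIDI velocity range and the file-naming scheme are my own assumptions for illustration; Ivory’s actual mapping is proprietary and will differ.

```python
VELOCITY_LAYERS = 10  # one recording per note per layer, as in the description above

def select_sample(note: int, velocity: int) -> str:
    """Map a MIDI key press to the recording at the nearest velocity layer.

    A simple even split of the 1-127 velocity range into 10 layers
    (an assumed mapping, for illustration only).
    """
    if not 1 <= velocity <= 127:
        raise ValueError("MIDI velocity must be between 1 and 127")
    layer = (velocity - 1) * VELOCITY_LAYERS // 127   # 0 .. 9
    return f"note{note:03d}_layer{layer:02d}.wav"

# Middle C (note 60) played softly vs. struck hard picks different recordings:
soft = select_sample(60, 20)
hard = select_sample(60, 120)
```

Because each key press simply triggers playback of a real recording matched to how the key was struck, the result is ‘real’ piano sound under precise, repeatable studio control.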


Regarding the orchestral parts, I took a similar approach. I used the Orchestral software instrument produced by Edirol. Orchestral provides samples of various orchestral instruments with a range of articulations, so all the orchestral parts you hear are based on recordings of ‘real’ instruments. Admittedly, the orchestration software was more limited in its ability to express all the possible articulations of ‘real’ instruments. I started production of the CD back in 2005, and of course there are now superior orchestral packages available. Computers have also moved on since then, and new technologies are available. Two such techniques are ‘physical modelling’ (where the physical response of a musical instrument to the player’s inputs is computed in real time) and ‘morphing’ (which enables continuous interpolation between multidimensional sound samples). I am using newer technologies for my next project. In that respect, the audio production involved in Pictures at an Exhibition represents the tip of the iceberg of what is possible in future classical music recordings.
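The ‘physical modelling’ idea can be illustrated with the classic Karplus–Strong plucked-string algorithm, sketched below in Python. This is a toy demonstration of the general technique (computing a string’s response rather than replaying a recording), not code from any commercial instrument; the sample rate, damping factor and duration are arbitrary choices.

```python
import random

def karplus_strong(freq_hz: float, sample_rate: int = 44100, seconds: float = 0.5):
    """Toy physical model of a plucked string (Karplus-Strong algorithm).

    A delay line filled with noise models the freshly plucked string;
    averaging adjacent samples in the feedback loop models energy loss,
    so the tone decays naturally, as a real pluck does.
    """
    n = int(sample_rate / freq_hz)                 # delay-line length sets the pitch
    rng = random.Random(0)
    buf = [rng.uniform(-1.0, 1.0) for _ in range(n)]   # the "pluck": a burst of noise
    out = []
    for _ in range(int(sample_rate * seconds)):
        s = buf.pop(0)
        out.append(s)
        buf.append(0.996 * 0.5 * (s + buf[0]))     # low-pass feedback = string damping
    return out

tone = karplus_strong(440.0)   # half a second of an A4 "string"
# the amplitude decays over time as the model loses energy
```

Unlike sample playback, every note here is computed from the player’s input, which is why physical modelling can respond continuously to performance gestures.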

Notes on the Production of the Recording
Pictures at an Exhibition

George Galanis, March 2008

© George Galanis 2008