On Music Programming

Journalist: Can programming, in your opinion, replace playing?

Vangelis: Things are like this: let's say you play 10 notes on the violin; to those 10 notes you can apply 10 different vibratos, instantly. Let's say you do that with a sophisticated keyboard; you can play maybe 10 notes with 10 different vibratos, but you have to program it, and that's ridiculous! You are interfering with the most sacred thing - the immediacy, as well as the expression.

For Vangelis, and many others, the key word that distinguishes playing from programming music is immediacy. Playing is immediate, while programming is not. Art is a product, a trace, and an expression of human will; we express our will immediately through our muscles -- muscles are the only immediate outlet of our will, so they can in a way be equated with it. Music that does not use our muscles is not an immediate expression of human will, and therefore not art in the strictest sense. This reminds me of a participant in a visual arts forum who, arguing about computer graphics software, replied along the lines of: "when they make computer software that can simulate the feeling of touching the canvas and the splashing of paint, then I will be interested". Classical circles, which still consider electronic culture part of pop culture, speak through the mouth of one music journalist: "Pop art thrives on dirt. If it (music) is to be true, it has to speak immediately."

What is basically being claimed is that music is necessarily produced naturally, by our motor movements. But is this so?

Music has always been both played and, in a way, programmed. A classical composer programs music on paper, as instructions (obviously not literal ones) to future performers. Classical notation made sense, and no one objected.

As we can see, it is not so with modern music programming. By giving music over to machines, using a musical programming language to define the sound picture step by step, we take music away from human performers completely. Why would anyone wish to program his ideas into a machine rather than play them?

The least controversial case is programming those parts of music that are meant to be mechanical, utterly regular. One could even say that in many cases it is exactly live interpretation that converts the man into a machine. This is true for repetitive parts, say drones or mechanical drum loops. By giving such parts of music to machines we do nothing wrong. As in all other areas, musical machines are used to free us from the unsubstantial, so that humans can concentrate on the substantial. So there is a kind of policy by which we may program music, but only those parts of it that we don't want to play live. Where machines can perfectly imitate us, we let them produce the sound on their own; but we never try to transcend our playing, because that is not art anymore. Or is it?
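
To make this concrete, here is a minimal sketch, in Python, of what giving the mechanical part to a machine can look like: a perfectly regular drum loop written as a program instead of performed. The pattern, the tempo, and the event format are my own illustrative assumptions, not anyone's actual method.

    # A perfectly regular one-bar pattern, repeated without variation --
    # exactly the kind of part a human would otherwise have to play like a machine.
    BPM = 120
    STEP = 60.0 / BPM / 4            # one sixteenth note, in seconds

    PATTERN = {                      # one bar = 16 steps
        "kick":  [0, 4, 8, 12],
        "snare": [4, 12],
        "hat":   list(range(0, 16, 2)),
    }

    def render_bar(bar_index):
        """Expand one bar of the pattern into (time, instrument) events."""
        offset = bar_index * 16 * STEP
        return sorted((offset + s * STEP, name)
                      for name, steps in PATTERN.items()
                      for s in steps)

    for bar in range(4):             # four identical bars
        for t, name in render_bar(bar):
            print(f"{t:6.3f}s  {name}")

The machine repeats the bar with perfect regularity, freeing the human for the substantial parts.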

We must remember the complexity, the inherent imperfection, the unfinished quality of human nature; that incredible fact that we have much more than just the 'human' inside. A human is not only a physical animal; our will and emotions, our minds and souls, are bigger than our body. There can be things that are perfectly natural and artistic, yet are not tied to our own body; our will is bigger than the muscles that originally express it.

A man is aware of his own double nature, which expresses itself through eternal self-contradiction. Ralph Waldo Emerson

Now, it would be great if we could make universal music as spontaneously as we make musica humana and instrumentalis, without having to go through hell. But playing music spontaneously and singing is, in principle, not how we make universal music. Not only because of the limitations of sounds, which restrict our inner sense of music, but also because of our own body, which creates distance between us and that inner feeling of music. In other words, rather than a means of making music, our bodies and traditional instruments can be seen as a limitation – by their construction, their physical limitations, they impose how our music sounds, often violating the inner laws of music as we carry them within ourselves.

Even with ten fingers, you are limited to a certain speed, because you have a skeleton with muscles around it that permit you to go from A to B, but not to C. With a synthesizer, you can go to C, and that's not something that shouldn't happen. Your brain, your heart, your feelings, these are different instruments, so synthesizers can go past mechanical limitations and deeper into human possibilities. Vangelis

This sense of music with which we are partially born overcomes our God-given body, and it also overcomes every concrete instrument we can invent. We have one inner sense of music that overcomes any concrete means of producing sound; there is a pure idea of music as such, of what we experience as music inside, independent of the physical construction of our instruments or our body. While it seems logical to assume that the body we have is in tune with our sense of music, this is not the case.

While universal music is a more direct expression of our spiritual nature than any other music, it is not the music of our body. Man's sense of music exceeds his ability to express it immediately through his own motions. Striving to perfect ourselves as musical beings, we made instruments. Universal music is the final point, at which an ancient truth becomes clear – the contrast within our own nature, between our soul and our body. In universal music, which aims to express soul and spirit, a 'no' must be said to the body. The body must be transcended if we are to come to know our soul and spirit with greater clarity and depth. Instead of being the music of the movements of our body, universal music is the music of the movements of the soul and the world. The electronic musician has divine musical ideas that he cannot get out through his body, so he programs them into a machine. A musician can keep his ideas in memory as a detailed sound-picture; this inner, divine musicality of his then materializes not immediately, by playing or singing, but through the machine. That is the romance of electronic music and electronic composing. Through programming we might do things we feel are musically perfectly natural, but that we cannot play, or would not naturally play anyway – things we would more easily and naturally program. And machine-interpreted music can have some advantages -- the changes in sound a machine can produce often go beyond what can come from a human; they are more regular and edgy, giving more possibilities for achieving a high degree of articulation.

A great part of the concern about e-music is exactly this -- it could seemingly make our whole bodies obsolete; tending to become completely virtual, it pushes the human body out of the equation of music. Many people see evil in this transcendence of the body. Among other things, this was the reason for the counter-wave of 'body' electronic music (Moroder's "I Feel Love" is taken as its beginning).

But I don't think there is a real issue here. Simply, some things that seem difficult to program are naturally done live; others are more naturally programmed. Different facets of a piece can be programmed or played live.


There is now a deeper question tied to programming music. It is not the lack of muscle-playing, but the doubt about how music made with the help of computer software is conceived in the first place. The suspicion is that programmed music does not first come to mind and then go out, but arrives some other way. What bugs some is that when a musician programs, he, unlike the immediate, spontaneous creator, does not have ideas from inside, does not write music directly from his being, but gets something accidentally or semi-accidentally, by experimenting with the automated functions of the machine. In that way the music is not really his, and we can wonder whether it can then have any true meaning and character, and whether it is a piece of art at all.

We've now created the 'mouse people'. That's what I call them, and I call this the 'mouse period'. Everything is done with the mouse, and I find that appalling. In terms of communication, computers are the worst thing that has happened for the performing musician. Why? Because you have to learn to talk to the computer. Having to talk to a piece of equipment moves you one step away from spontaneous creation, things are no longer immediate. When you want to play a piano, you just sit down and play it - you don't have to talk to it. You don't have to say 'give me some sustain here', but unfortunately that's exactly what you have to do with the Fairlight, for example. Vangelis

The question is: does talking to a machine, transcribing the musician's ideas into the language of the machine, destroy the possibility of capturing one's ideas authentically, in their precise original shape, as they appear? Does translating one's ideas into the language of the machine interfere with the idea itself? With programming, ideas do not go out immediately. If the mind can have an idea so subtle that it cannot be retained, such an idea can only be played immediately, not programmed. Creation could be said to take place when one surprises oneself, knocking things out from the edge of consciousness. A dream can bring a thought so delicate that our memory cannot get hold of it, because it comes from the top of our consciousness, from the boundaries of our abilities, and we do not have the sort of consciousness that could catch it, hold it, and write it into our memory; if we do not write it down instantly, it soon disperses like a snowflake. Creating as this mysterious self-overcoming, which happens in a moment and must immediately be written down like some sacred séance, cannot be programmed. The essence of live playing is that the player makes many decisions in a short period of time.

This is a point that probably cannot be refuted. But one would have to wonder about the symphonies of classical composers, who usually did not create like that, but in another way: through layers of decision-making. And with electronic music and its recording techniques, more than ever a piece can be written through layers of recursive, discrete decision-making rather than through immediate, spontaneous creation. Instead of composing as if surfing on one's own emotions and thoughts, the piece is built up layer by layer. Such programmed music is not a 'sacred séance' that arrives spontaneously in a moment, but a crystallized product of human thought; a sublimate, a diamond made by diligent polishing. It is not, however, created through hacking, through trial and error. The artist knows from the beginning where he is heading, and then skillfully, layer by layer, gets closer to the final picture. There is mastery here as well: knowing what you want and approaching it structurally and masterfully, with as little backtracking as possible; like making a picture sharper and sharper. It is an architectural approach, such as that of Beethoven. Here reflection, memory, and postponement – the very lack of immediacy – make up for certain limitations.

So composing does not have to be immediate. Beethoven struggled for every note, shaping and reshaping his melodies, searching for his real thought. Beethoven also wrote down his 9th symphony in deafness. People have always greatly admired this kind of writing music from inside, music that one hears in one's own head. Programming music can be seen as exactly this -- an attempt to make the inner music equal to the music heard through a machine. Ideally, programming music is the materialization of inner voices, directly projecting the sounds in one's head onto a recording. Instead of writing music on paper for future interpretation by humans, the composer simply writes in sound. The composer is a master of a programming language in which he explains to the machine exactly what to play and how. That way, the music should come out exactly as he wants it. A musical programmer deals only with his inner sense of music, whereas his means – computer software and hardware – are immaterial. It is an idealized picture in which the composer becomes one with his means, as he is with his body, so that he writes the whole music as one would sing it -- his full inner sense of music can come out. After all, what is playing if not immediate programming, given that sounds emanate from within us as an expression of our mental states? Programming is performing, if one programs so as to materialize the music one hears in one's head. And immediate musical creation does not necessarily mean honest and meaningful creation – live improvisations are often just an excuse for a lack of ideas.
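
As a toy illustration of 'writing in sound', here is a minimal Python sketch in which a score, given as exact instructions, is rendered by the machine the same way every time. The note table, the durations, and the sine-wave timbre are my own assumptions, chosen only to keep the example self-contained.

    # "Writing in sound": the score below is an exact instruction list,
    # and the machine renders it identically on every run.
    import math, struct, wave

    RATE = 44100
    PITCH = {"C4": 261.63, "D4": 293.66, "E4": 329.63}   # Hz

    SCORE = [("E4", 0.5), ("D4", 0.5), ("C4", 0.5), ("D4", 0.5),
             ("E4", 0.5), ("E4", 0.5), ("E4", 1.0)]      # (note, seconds)

    def render(score, path="score.wav"):
        """Turn the symbolic score into actual sound, sample by sample."""
        samples = []
        for name, dur in score:
            freq, n = PITCH[name], int(RATE * dur)
            for i in range(n):
                amp = 0.4 * (1 - i / n)                  # simple decay envelope
                samples.append(amp * math.sin(2 * math.pi * freq * i / RATE))
        with wave.open(path, "wb") as f:
            f.setnchannels(1)
            f.setsampwidth(2)
            f.setframerate(RATE)
            f.writeframes(b"".join(struct.pack("<h", int(s * 32767))
                                   for s in samples))

    render(SCORE)   # the program is the performance; no interpreter needed

Nothing stands between the written score and the sounding result: the composer writes, the machine sounds.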


But does music have to be in the head first? Why not music as interaction with a programming machine, just as with an instrument? If one insists that a musician always has to have the music inside first, and only then play or program it, that would also mean banning interaction with an acoustic instrument, or even with one's own voice. Tools and methods are not there only to turn into reality what we already have in mind; they can also inspire. Every traditional musician knows the importance and inspirational power of improvising live on an instrument. In the same way, one can interact with music-programming software. So we could allow the musician to get his ideas not purely from inside, in the head, but through interaction with the automated functions of computer software.

"It was a great opportunity to work with a computer," he says about The Songs of Distant Earth (his 1994 album inspired by the science fiction novel of the same title by Arthur C. Clarke). "Take the introductory rhythm track - there are so many components to it; to be able to pick them up with the mouse and graphically move them around, align them, chop bits out…" Mike Oldfield

Some still see programming music as a difficult, unnatural way of working, out of touch with nature, unlike playing. But art does not come naturally to us. As the word suggests, it is artificial. Even to creators themselves, it is not a natural, easy, unconscious thing. Hard work, and even more, opposing one's own nature, is necessary. It is not like birdsong. In art, not being natural, which also means being less immediate, is as important as being natural and immediate. That lack of naturalness and immediacy is a path towards somewhere further and above. It leads to more innovative, more enlightened creations, if not better ones. In classical music, this lack of naturalness and immediacy is reflected in written notation. In electronic music, it culminates in the concept of programmed music. Programmed music can hope to surpass played music; of course, most programmed music does not surpass live played music. When it happens, it is a miracle, a romantic achievement of overcoming against the odds. Recalling now the argument of immediacy, we here present the counter-argument – that it is exactly the lack of immediacy that may be at the essence of an artistic achievement.

The next question is: is there perhaps a principal difference between music inspired by programming and music inspired by playing? My impression is that played music tends to be dramatic and dance-oriented, while programmed music tends more towards the atmospheric and logic-oriented. It also tends to have a particular technical and scientific tinge to it. It is the meeting of the realm of science and the realm of music. The result is a somewhat different, more 'technical', 'controlled' sort of music.


A question: is a composer who does not know how to play live acceptable, or a musician who lacks basic musical talents, who cannot even sing in tune?

A musician could function through a dialogue with a machine that makes up for some of his shortcomings, the mechanical talents like keeping rhythm and absolute pitch. The machine is there for that: the composer interacts with its functions, applying his creativity over them, so that music is born as an interaction between the creativity and artistic skill of the human artist and the algorithmic functions of the machine. Judgment becomes more important, and musical skill less so. In music, talent is often equated with certain basic predispositions, such as hearing. There have been composers without perfect pitch in the past – Berlioz, Stravinsky, Wagner. And still, they were mighty creators.
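
As a small illustration of the machine covering a mechanical shortcoming, here is a Python sketch of rhythm quantization: imprecisely played note onsets are snapped to a grid. The grid size and the example timings are assumptions made up for the example.

    # Quantization: the machine supplies the "keeping rhythm" talent
    # by moving each onset (in beats) to the nearest grid point.
    def quantize(onsets, grid=0.25):
        return [round(t / grid) * grid for t in onsets]

    played = [0.02, 0.27, 0.49, 0.77, 1.01]    # human timing, slightly off
    print(quantize(played))                    # -> [0.0, 0.25, 0.5, 0.75, 1.0]

The human supplies the judgment of what to play; the machine supplies the mechanical precision.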

My take is that raw musical faculties – hearing and timing – should not be considered a talent, but their absence a handicap. Some musicologists and scientists say every child has absolute pitch at birth, only to lose it afterwards. In any case, every computer has better hearing and timing than any human. This is not to say that we need handicapped composers. It is reasonable to assume that a composer has these abilities; but talent is not the same as the absence of handicap. While the absence of handicap, that is, precise perception, is usually a condition for any higher order of talent, it is solely the ability to articulate one's ideas into the right forms that can be considered creative talent. Creative talent, in the narrowest meaning of the word, is the ability to find a proper form for one's ideas. The idea itself need not be original – originality is a matter of personality rather than of creative talent. A person naturally predisposed to creativity intuitively, and ideally without much effort, finds the appropriate way to express his thoughts. In everyday language, his sentences will sound efficient, clear, well structured; not necessarily complex, but adequate and elegant. This economy of means is a sign of superior form, as a rule paired with a deceptive sense of casualness. If a composer has this gift, we could allow him to be somewhat handicapped – and with machines helping him, even more so.


Now about notation. Historically, most Western art music has been written down using the standard forms of music notation that evolved in Europe before the Renaissance and reached maturity in the Romantic period. If we say that a computer program which plays music is also a sort of notation, we are right in a way, but not really. Symbolic notation allows us to concentrate only on the decision-making within a piece and on its basic ideas, while ignoring lesser details and those that involve no decision-making. Notation is a rational decomposition of a piece, whereas a program which plays music contains all the details needed for the performance, and so is not the same as abstract notation. In live music, a human performer converts the composition into a final version, also adding his own spin to how it sounds; in programmed music, a machine performs music whose composition is given in the form of a musical programming language. Possible variations ('versions') are here a separate, complicated issue – since a machine, unlike a human performer, has no understanding or experience of music, it cannot add 'its own twist', so in principle the performance is always the same; that is, it corresponds to a final recording, and the performance itself is immaterial. However, in cybernetic music, where the composition is indeterminate – where the performing algorithm introduces certain changes to the composition – a machine can still knock out different performances, although in principle they are difficult to predict in terms of effect or orderliness.
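
The contrast drawn here can be sketched in a few lines of Python: a fixed program corresponds to one final recording, while a 'cybernetic' performer introduces its own changes on every run. The variation rule below (random octave displacement) is purely an illustrative assumption, not a method from the text.

    import random

    COMPOSITION = [60, 62, 64, 65, 67]          # MIDI note numbers

    def fixed_performance(notes):
        """Deterministic: every run corresponds to the same final recording."""
        return list(notes)

    def cybernetic_performance(notes, seed=None):
        """Indeterminate: the performing algorithm alters the piece each run."""
        rng = random.Random(seed)
        return [n + rng.choice([-12, 0, 0, 12]) for n in notes]

    print(fixed_performance(COMPOSITION))       # always identical
    print(cybernetic_performance(COMPOSITION))  # different on each run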