Synthesizer Controversies
But it's not as if things were sorted out inside the synthesizer scene itself. For a start, even people who accepted synthesizers sometimes viewed them rather superficially, as a means to imitate “real” instruments. This greatly irritated electronic musicians, who were putting great creative energy and skill into conceiving and shaping their sounds.
“I don’t try to imitate. What is a horn or trombone? It is an instrument or a machine that is made to produce a certain sound wave with certain harmonics in a certain range. Now, this sound can be produced by blowing into one instrument, scratching another, or by electronics. You're talking in each case about similar areas of sound. These are all sounds that are in nature anyway. We don't invent any new sounds. The trombone sound exists in nature and to capture that sound from nature in the past, the only thing we could do was to produce a trombone. Now, to change or extend that sound, we build synthesizers. But even though the instruments are different, we are still talking about the same areas of sound, the same family. You can distort them or do whatever you like, but you're talking about the same given law, the first law of our acoustic system”. Vangelis
Indeed, although we cannot produce e-sounds without knowledge of science, electronic sounds are, on the physical level, no different from the acoustic sounds produced by common acoustical instruments. After all, the senses don’t lie: there is no such thing as an unreal color, smell, image or sound, and despite not being of immediate physical origin, electronic sounds are a reality of our experience. The mere fact that we can so much as imagine them automatically means that they are “real”.
“Many times I've heard people who were looking at a painting of something like a flower say, "Oh, it's so beautiful. It's almost like real." And when they see the flower itself, they say, "It's so beautiful that it's almost not real anymore." Both things are absurd. In music, it's not because it's a synthesizer or not a synthesizer”. Vangelis
Taking everything said so far, shouldn’t musicians consider electronic music the most serious musical art, more serious even than classical music? Shouldn’t classical music, relying as it does on acoustic instruments that are very limited in terms of sound control compared to electronics, be some kind of lower music next to electronic music?
Maybe. But why, then, is electronic music still not taken for serious art? Why are electronic instruments, after almost a century of development, still regarded as toys? Even the most enthusiastic electronic musicians acknowledge this.
"To get a satisfactory result concerning the expression and the musicality of the synthesizers, one has to expect a constant effort", Vangelis explains, "because those instruments weren't made for that purpose”. Vangelis
Despite the technical progress of electronic instruments, their ever less restricted powers of sound production and their growing sound libraries, electronic music is still regarded as a half-popular affair, and most serious musicians have trouble working in it, feeling like children at play rather than artists. The synthesizer, for all its power, is not seen as a serious instrument like the piano or the violin. Vangelis offers one opinion on why:
“Not because the technology isn't good enough, but because of a narrow state of mind and a too commercial state of the constructors, which pushes them to produce these instruments.”
Many, then again, say it is the coldness of synthesizers that makes them illegitimate, in accordance with Stravinsky's saying that “the monster does not breathe”. But what could be colder than atonal music? And still it was respected. The synthesizer is not only unloved, it is not even respected.
The reason people refuse to take synthesizers seriously and respect them as a musical instrument is that electronic instruments have, to this day, failed to provide a well-conceived system for producing sound and orchestral music. So the synthesizer is not actually faulted for what it does, as is often thought, but for being bad at what it does. I still remember when a childhood friend of mine, who had started going to music school, showed me his acoustic flute, all spellbound: “Do you see what logic this is?”, pointing to the different parts of the flute. Despite being based on simple technology, a flute is a well-conceived system for producing sound; it is like an organic whole. But even in the best synthesizer so far, something like that is missing: despite the sophisticated technology at its core, the sounds and effects put into a synthesizer are there simply because someone thought they were interesting and put them in. Even the best e-instrument made so far is just a bag of interesting things, with no overall plan and thought to make it into a true system for producing sound.
Many musicians were surprised to find that the mere lack of restrictions the new technology brought did not automatically give them more creative possibilities. As Erich Fromm would put it, there was freedom from all restrictions, but what was the freedom for? While overcoming the restrictions of acoustic instruments, synthesizers have so far also been destroying all system. An acoustic symphony orchestra offers a limited range of possibilities, but offers them to the composer as an exact system. Once he masters this orchestra, a composer feels like its master; he does not have to learn something substantially new day in, day out, and can concentrate on the purely artistic part, perfecting the basic knowledge he has already gained.
But the electronic musician does not know his way around his own instrument; he stays forever in a state of discovering his own tool and never simply begins using it as a tool he has mastered. He therefore remains a child, feeling somewhat ignorant and powerless. The composer is simply put in front of the agnostic mess that is a synthesizer or a piece of computer music software, with nothing guaranteed, no firm starting points and no system, and is then expected to find his way through that mess empirically. He feels like a cook who does not know what ingredients he has in his hands or how they can be mixed, and who has to keep tasting awful mixtures in order to find something that tastes good. And as electronic instruments grew, instead of becoming better organized and less tiring, they had less and less system musically. Contemporary computer systems are a much bigger and more arbitrary technological mess than the first analog synthesizers were. The old analogue gear that JM Jarre used for Oxygene, say, or Vangelis for Beaubourg, was far more limited than today's digital technology, but with it a musician had more system. No wonder he could create more interesting music than today's musician, who works with theoretically unlimited but unorganized digital tools.
During the 1990s, JM Jarre tried to overcome this by choosing a set of sounds and effects for his next project in advance, and then working with that.
“If you're working with synthesizers /.../ you can do almost anything you want to. So what then becomes important is deciding which is the right thing to do. And that's the leap that very few synthesizer players make, I think. They generally just do everything they can.” Brian Eno
But an artist should not have to think about how to restrict himself; he should use every possibility he has in order to express himself. It is the constructors of the instruments who should develop a well-rounded system. Overcoming restrictions should be done in such a way as to keep the system, only making it less limited.
“The development of synthesizers so far has all been predicated on a particular assumption that I feel I no longer agree with. That assumption is that the best synthesizer is the one that gives you the largest number of possibilities. Clearly this is what's been happening with the big digital synthesizers. Now, the effect of this on the players - or at least the conspicuous effect, as far as I can see - is that the players move very quickly from sound to sound, so that for any new situation there would be a novel sound for it, because there's such a wide palette to choose from. This seems to me to produce a compositional weakness. These players are working in terms of sounds they don't really understand yet, you know - the sound is too novel for them to have actually understood its strengths and weaknesses, and to have made something on that basis. It's like continually being given a new instrument. Well, that's exciting for the player. Every ten minutes somebody says, "Hey, here's another instrument. Now try this one." But from the point of view of the music, it seems to produce a rather shallow compositional approach. Frequently in the studios, you see synthesizer players fiddling for six hours getting this sound and then that sound and so on, in a kind of almost random search. What's clear if you're watching this process, is that what they're in search of is not a new sound but a new idea. The synthesizer gives them the illusion that they'll find it somewhere in there. Really, it would make more sense to sit down and say, "Hey, look what am I doing? Why don't I just think for a minute, and then go and do it?" Rather than this scramble through the electronics. You could contrast this approach to that taken by Glenn Gould, for instance. In the article in Contemporary Keyboard [see Keyboard, Aug. '80] he mentions the fact that he has been working with the same piano for years and years. Clearly he understands that piano in a way that no synthesizer player alive understands his instrument. You see, there are really distinct advantages to working within a quite restricted range of possibilities, and getting a deeper and deeper understanding of those. I think it might be interesting if people now started making electronic instruments that were deliberately limited, that had maybe four or five great sounds on them. Well, a Minimoog is rather like that, actually. Within a few months you know pretty well what its limits are, so you don't waste a lot of time trying to program novelties into it. You know what the instrument can do, and you choose from among its possibilities. So this becomes a musical choice, not just a sound choice. Of course I'm not suggesting that people stop developing big synthesizers. What I'm suggesting is that big synthesizers aren't necessarily going to produce the most interesting music, which has been the tacit assumption, I think - that the bigger the synthesizer, the more interesting the music would be”. Brian Eno
Brian Eno has told most of the story; it only remains to be said that the sheer amount of possibilities is not the problem, and artificially limiting the instrument is not the way. The point is in defining the overall system.
But how come that, after all these decades, the constructors of synthesizers have failed to construct a well-conceived, functional electronic system?
“As you can see, in over 20 years there is not one synthesizer that has become a classic.” Vangelis
The problem is not actually the length of time. Centuries will not lead anywhere until it is understood that all synthesizers so far have been constructed around a certain superficial premise. And that premise has to do with the very essence of a synthesizer: with the basic way of synthesizing the sound.
The wrong premise is: sound is waves in the air. But that is not what we call sound; it is only what causes sound. What we call sound is our experience of sound. Seemingly there is a direct, one-to-one connection, mathematically speaking, between these vibrations and our experience of sound. But it is far from being so simple. We can easily see how our sense of pitch differs from the objective frequency of a sound: a logarithmic law governs our experience of pitch (see the sketch after the quotation below). In a similar but much more complex way, we interpret the harmonics inside a sound to arrive at the idea of timbre. Our ear and brain do much more than just absorb vibrations. This is very far from the simplistic Fourier-driven ideas of timbre that are still found in all books and publications about sound:
“Sound is waves in air. These waves may be the result of an object, such as a musical instrument, rapidly compressing and decompressing the air around it. The distance between an area of compression and its adjacent/contiguous area of decompression is a single wave. The length of a wave, or the distance between compressions, determines its frequency since the speed of the sound wave is relatively uniform.”
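To make the contrast concrete, here is a minimal sketch in plain Python (the 440 Hz reference and the function names are mine, chosen only for illustration). The physical description in the quote ties frequency to wavelength linearly through the speed of sound, while the perceived pitch from the previous paragraph follows the logarithmic law:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature (approximate)

def wavelength_m(f_hz: float) -> float:
    """The physical side: wavelength follows linearly from frequency."""
    return SPEED_OF_SOUND / f_hz

def semitones_above(f_hz: float, ref_hz: float = 440.0) -> float:
    """The perceptual side: pitch distance in equal-tempered semitones.

    Equal steps of perceived pitch correspond to equal *ratios* of
    frequency, not equal differences -- the logarithmic law.
    """
    return 12.0 * math.log2(f_hz / ref_hz)

# Physically, going from 440 Hz to 880 Hz is a step of 440 Hz, while
# 880 Hz to 1760 Hz is a step of 880 Hz; perceptually, both steps are
# the same interval, one octave:
print(round(wavelength_m(440.0), 2))   # ~0.78 (meters)
print(semitones_above(880.0))          # 12.0 (one octave above 440 Hz)
print(semitones_above(1760.0))         # 24.0 (two octaves above 440 Hz)
```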
Throughout the whole history of electronic music so far, sound has always been perceived like this, as an outside scientific object. The first musicians from the academic environment who took an interest in the prospects of electronic music took this as a starting point and worked from there.
“I returned to the element which is the basis of all sound multifariousness: to pure vibration, which can be produced electrically and which is called a sine wave. Every existing sound, every noise, is a mixture of such sine waves - a spectrum. Proportions of numbers, intervals and dynamics of such sine waves determine the characteristics of each spectrum. They determine the timbre. And thus, for the first time, it was possible to compose - in the true sense of the word - the timbres in a music, i.e. to synthesize them from elements, and by so doing, to let the universal structural principle of a music also effect the sound proportions”. “New means change the method; new methods change the experience, and new experiences change man. Whenever we hear sounds we are changed: we are no longer the same after hearing certain sounds, and this is the more the case when we hear organized sounds, sounds organized by another human being: music”. Karlheinz Stockhausen
Fourier's concept of sound is too raw and inartistic to live up to these words. The Fourier representation of sound, while enabling us to synthesize, in theory, any sound in existence, actually reveals little of the inner anatomy of sound as it appears to us. It is a purely scientific theory, not an artistic one. No wonder, then, that the electro-acoustic music to which Stockhausen belonged was unable to do anything nearly as spectacular as the thrilling theories suggested.
Realizing that the existing theories were a dead end, the next generation of composers had to steer away from them. But as there were no other theories around, they ended up working agnostically, through empirical experiment. They devised no theories at all, but started from the more evolved instruments of the 1970s, such as Moog and Yamaha synthesizers, and searched for ways to produce music using their functions and possibilities. While all of this was agnostic and purely empirical, it was actually no less scientific than the theoretical approach of the previous generation. Although less consciously, musicians such as Jean Michel Jarre were exploring sound in the true sense, dealing with it, for a change, in a real way: as an experience, not an abstract idea about superposing sinusoids. Trying to create musical beauty with the electronic systems of the 1970s and 1980s, they de facto also charted, to a degree, the territory of our experience of sound. People reacted to their music, unlike to the electro-acoustic kind.
However, these agnostic methods can do no more than pioneer. We now need a true science of instrumentation. Until now, synthesizers have been faulty instruments by their very construction. They claimed to control sound the way a violin controls pitch; but they were like a violin missing a few strings, where we additionally have no idea how many strings we need at all, nor how many notes exist in the first place, nor what relations hold between them. It is a big mess. Synthesizers can produce a variety of sounds, but cannot offer true control and understanding of sound. The electronic instrument seemingly gives us control over sound, but it actually just drops us into an alien land. Theoretically, with today's synthesizers we can go anywhere inside the space of sounds, but we cannot know exactly where we are, nor control our moves precisely. This is unlike the melodic, harmonic and rhythmic side of music, and in order to integrate sound into the equation of music, it has to become gnostic in the same way. It can't be like wandering a dark forest without a map.
A perfect illustration of all the faultiness of e-music systems described so far is the method called additive synthesis. In line with the Fourier understanding of sound, a musician is expected to literally construct a timbre by adding harmonic on top of harmonic, as in the sketch below. But when a harmonic is added over an existing timbre, nothing humanly very meaningful happens. One has no feeling of working with an instrument that leads the musician somewhere with each move; the feeling is more like being in a labyrinth. It is very difficult to get used to this system, to learn to predict its behavior or to control it in any way. The result is, as a rule, poor, while requiring a great deal of unnatural effort. Scientifically and theoretically, additive synthesis is a universal, unlimited way of controlling sound; in the artistic sense, such systems give us very little control.
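As a concrete picture of what this demands from a musician, here is a minimal additive-synthesis sketch in plain Python with NumPy (the amplitude list is hypothetical, chosen only for illustration). Each harmonic is just one more number in a list, and nothing about the interface says what that number will do to the perceived character of the tone:

```python
import numpy as np

SAMPLE_RATE = 44100  # samples per second

def additive_tone(f0: float, amplitudes: list[float], seconds: float = 1.0) -> np.ndarray:
    """Build a timbre by stacking sine waves at integer multiples of f0.

    amplitudes[k] is the strength of the (k+1)-th harmonic: the literal
    'harmonic on top of harmonic' construction. Each entry nudges the
    timbre, but nothing tells the musician *where* it nudges it.
    """
    t = np.linspace(0.0, seconds, int(SAMPLE_RATE * seconds), endpoint=False)
    tone = np.zeros_like(t)
    for k, amp in enumerate(amplitudes, start=1):
        tone += amp * np.sin(2.0 * np.pi * k * f0 * t)
    return tone / max(1.0, float(np.max(np.abs(tone))))  # normalize to [-1, 1]

# A 220 Hz tone with five harmonics: the amplitude list is the entire
# 'interface' to the timbre -- tweak a number, render, listen, repeat.
sound = additive_tone(220.0, [1.0, 0.5, 0.33, 0.25, 0.2])
```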
“Each instrument has a character. My way with them is to have a dialog, a love affair, with each one. The more you try to understand its behavior, the more you have the response you need”. Vangelis
Instead of this, the electronic musician is expected to dabble even in voltages, megabytes and the like. He is a technician and a scientist more than an artist.
“I am not an average musician. I am a technician who converts his ideas to the language of music”. Mike Oldfield (half-jokingly)
No wonder that musicians, even the most enthusiastic among them, tend to hate electronic instruments. But they still use them, simply because the final result can come out great.
"Computer is tiring. It's not very interesting, but the final result is excellent”. Mike Oldfield
In the end, whatever future synthesizers look like, they will have to be more natural to work with and embody a true science of instrumentation, in accordance with how we hear sounds rather than concentrating on raw frequencies. Pierre Schaeffer was among the musicians who tried to create “a solfege of sounds”:
"At the time, I was involved in trying to create a solfege that could include many sounds and timbres. I thought we should classify the sounds in terms of their effect on the listener, of their psychological effect. We would classify them in high, low, hard, harsh sounds. (Etudes, 1952)." Pierre Schaeffer
Apart from exploring the world of different sounds, another important notion is concentrating on the idea of a sound rather than on its particular version.
“One thing that the synthesizer lacks is a sound that is idiosyncratic enough to be interesting. By that I mean that all natural instruments respond naturally, which is to say they respond unevenly, and somewhat unpredictably. You know, a guitar sounds slightly different at each fret, and it has oddities, which are undoubtedly a large part of the interest of the instrument. A good player will understand and make use of those oddities. But synthesizers in general don't have that. The aspiration of synthesizer designers is to produce maximum evenness. And that was actually the aspiration of traditional instrument designers - violin makers, for instance. They wanted to produce an instrument that was completely even in timbre at every pitch. Of course, they failed, because they were working with materials that wouldn't permit that, and their failure is what makes those instruments interesting. Look at the shakuhachi, the Japanese flute. The intention in its development, in contrast to what we've been talking about, was to produce a quite different timbre at each pitch, and for each individual sound to have its own distinct character. What I'm saying is that as far as I'm concerned it would be much nicer if synthesizers began moving away from their perfection and through the violin stage of imperfection towards the shakuhachi stage. Now, in doing this you can move in one of two directions. In the direction of very high technology you can do that - with instruments like the Synclavier and the Fairlight you can program each note to have its own special idiosyncrasies. Or you can move in the direction of very low technology, which is the direction I'm very much more likely to take. If I built a synthesizer, it would be fairly unpredictable. In fact, the synthesizers I own have already become fairly unpredictable because I've had them a long time and haven't had them serviced very much. I know a lot of people are into the inhuman cleanliness of a synthesizer, but I don't like that, and I subvert it number one by laziness: I never get my instruments serviced, so they start to become a little bit more idiosyncratic, and I also use a lot of auxiliary equipment, which I also don't get serviced. Now this sounds flippant, this not getting things serviced sometimes, but a lot of the faults that develop are rather interesting, so I leave those alone”. Brian Eno
Brian Eno sounds as if he were promoting a failure, a limitation of acoustic instruments, into a virtue. But what he is trying to convey is a sound which retains its basic idea through non-substantial deviations, deviations which only make that basic idea come out more clearly and more richly.
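A minimal sketch of that notion, under my own assumption (not Eno's recipe) that a "non-substantial deviation" can be modeled as a small, fixed, per-note variation around a sound's nominal parameters; every name and number here is hypothetical:

```python
import random

# Hypothetical per-note 'idiosyncrasy': each pitch gets its own small,
# permanent deviation in tuning and brightness, the way each fret of a
# guitar sounds slightly different. The basic idea of the sound stays
# intact; only non-substantial details vary from note to note.
random.seed(7)  # fixed seed: the quirks are stable traits, not noise

IDIOSYNCRASY = {
    note: {
        "detune_cents": random.uniform(-4.0, 4.0),     # slight mistuning
        "brightness": 1.0 + random.uniform(-0.1, 0.1), # filter scaling
    }
    for note in range(36, 97)  # MIDI notes C2..C7
}

def play(note: int) -> None:
    quirks = IDIOSYNCRASY[note]
    print(f"note {note}: detune {quirks['detune_cents']:+.2f} cents, "
          f"brightness x{quirks['brightness']:.2f}")

play(60)  # middle C always carries the same personal deviation
play(61)  # the neighboring note has a different, equally stable one
```

Because the deviations are generated once and then frozen, each note keeps its own stable quirk instead of dissolving the basic idea into fresh random noise at every performance.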