Barry Truax

Truax: So here we are in the gardens of the Archevêché in Bourges, with its formal gardens and its walkways and roses.

Asymmetry: Maybe we should just talk about this, then, and forget about music for a while….

Or perhaps this is just the place to talk about soundscape and how you got started in soundscape composition.

Truax: In 1973, at the end of my two years in Utrecht at the Institute of Sonology, Murray Schafer invited me to come to Vancouver and work with the World Soundscape Project, which he had just created, in a new department called Communication Studies. [At this point, a loud tractor came rumbling by.] Well, we’re going to have some competition here. We thought this would be the quietest place. But that’s always a good segue to any discussion of soundscape, because Schafer was dissatisfied with simply being anti-noise. He had taught a course in noise pollution in the nineteen sixties, I think, and his particular bête noire was the seaplanes in Vancouver harbour. But he was also working with the idea of the soundscape as a positive approach to the acoustic environment, based on listening and ear cleaning, which he had already proposed in an essay called “The Music of the Environment” and in his educational booklets, “The New Soundscape” and “The Idea of the Universal Soundscape,” a Cage-inspired, listening-to-the-world-as-if-it-were-music type of thing.

And in ’73, Schafer was bringing together a group of young, idealistic composers who would work as lowly research assistants. Of course, we weren’t lowly at all. We were just having fun, and we didn’t demand too much from the world in a material way.

When I arrived in Vancouver, they were just finishing up the Vancouver Soundscape, a systematic documentation of the city soundscape, with sound and a booklet, historical and contemporary. We later updated it in the ‘90s with new recordings and with soundscape compositions that were more individually rather than collectively put together.

Soundscape composition seems to run along a continuum, starting from “found” soundscapes that are minimally altered (what John Drever and others have referred to as phonographic representations), which resist manipulation or use only transparent mixing or editing, because you can do that without people figuring out that there is a manipulation, as if it just happened that way. And of course in Europe, Luc Ferrari had done that kind of thing, which at the time challenged the whole European notion of the composer as “author.”

“What are you doing, just framing an environment? And daring to sign your name to it?”

The purpose of the World Soundscape Project, however, was not artistic; it was educational. We were working as a collective, so we didn’t sign our names to anything. Obviously, certain people had certain tasks in preparing the document, but we never even thought about “authoring” it. But then, fairly quickly, because most of the people on the project were composers, individuals wanted to start doing personal documents and compositions.

Asymmetry: And that included you?

Truax: Yes. And Bruce Davis and Peter Huse and Howard Broomfield. And I’m specifically thinking of the series of ten radio programs we did in 1974 for CBC called Soundscapes of Canada. At one end of the spectrum are found soundscapes such as the Dawn Chorus, and the most spectacular I think, or seminal, is the Summer Solstice, a twenty-four hour recording we did on the grounds of Westminster Abbey near Mission in B.C. There is a semi-rural area around a pond where the birds and the bells from a monastery punctuate the day, and we edited that down to a one hour programme, so each hour of the day was represented roughly by two minutes. The Dawn Chorus was just an expansion of the dawn period where the maximum change happens.

[As Barry was talking about the monastery bells, the noon bells of Saint-Etienne cathedral began tolling.]

On the other end of the spectrum were some narrated documentaries, fairly straightforward, about signals and soundmarks, for instance, and a somewhat more poetic version of that, the piece I did called Six Themes of the Soundscape, which uses different voices to talk about different themes but is largely pedagogical. There were also situated compositions, such as The Soundmarks of Canada and the Directions programme, which could easily have been called “Dialects of Canada.” Those were recordings that Peter and Bruce had made, starting at the east coast and working their way back to Vancouver, containing the prominent sounds of various towns and villages and cities, and the dialects. Every time they asked for directions, they just had the mike going, and so they recorded all these people giving them directions.

That has become my passion ever since, extending the work of soundscape studies to what I call acoustic communication and engaging with all the real world issues, whether it’s noise, or the sound environment, or the media, or listening in general, the acoustic community—in short, all the manifestations of sound, from an interdisciplinary point of view.

Asymmetry: So what led up to 1973?

Truax: Well, in my undergraduate years, I was officially studying physics and math—and actually using a computer at one point—but having this passion for music and the arts. I had been told, like a good, white, middle-class boy, “Keep your piano playing, your interest in music, as a nice hobby.” And it’s very good advice, you know. It kind of worked for my father, who was a wonderful marimba player and percussionist. But there I was, working in the physics department and then going over to the practice piano house and feeling these urges, not dissimilar to an adolescent’s sexual emergence, for composition. There was this force emerging that was mysterious and dark and dangerous and wild, and what could I do about it?

So I finished the degree in physics and math at Queen’s University in Kingston and got accepted to UBC in music. Did some make-up courses in some areas and some composition and so on and so forth. And then I walked into the electronic music studio at UBC and never came out, essentially.

At the Institute of Sonology, Gottfried Michael Koenig and Otto Laske and a host of really excellent teachers were formulating the digital future. That may sound overly dramatic, but they had this wonderful set of analog studios, with a lot of custom-made equipment and two- and four-channel machines for recording it and banks of voltage control equipment that defied description. It was very, very complex. A long way from the Buchla and Moog synthesizers I’d been weaned on at UBC. Stan Tempelaars was teaching modern psychoacoustics that he had gotten from Reinier Plomp, which I now realize was pretty cutting edge at the time. Koenig was teaching composition theory but also programming and macro assembly language for the PDP-15, almost as fast as he was learning it himself. And suddenly, for the first time, I found myself with the mini-computer; that’s what they were called, even though they took up one huge wall of a room. But they were single-user, not mainframe computers like Max Mathews had. Although the only means of interaction was the teletype terminal, you could have real-time synthesis and interact with it as a composer rather than writing programmes. And I developed this thing called the POD System for interactive composition with synthesis, which was a top-down type of approach.

Also at this time I made two trips to the EMS studio in Stockholm, where I got to meet Knut Wiggen, the controversial director of that studio. Knut Wiggen is this very modest, introverted Norwegian visionary who’s one of the marginal figures now, unfortunately, in the computer music field, largely because when he got kicked out of EMS he went back to Norway and didn’t promulgate his work much. So even his pieces are not that well known. I have one of them that I managed to get him to send me, which I still treasure. It was Knut Wiggen who gave me the best reason ever to use a computer, particularly since they mainly ruin your life! He said, “You need the computer. Why? To control complexity.”

The idea that a computer could control complex systems, such as composition, and open up things that you didn’t foresee and specify and notate to the nth detail of the microsecond, that was very, very inspiring. You guide the general rules, as it were, or general parameters at different levels, but the details are unforeseen, in the classic stochastic sense. And that’s always been very attractive to me. Computers are the best way to work with systems in the microsound or time/frequency domain, basically the domain of less than fifty milliseconds, where the conventional rules do not apply. It’s a different way of thinking about sound. It’s a different way of thinking about structuring it. You’re definitely not writing a score for granular synthesis. A thousand grains per second? “Would you like to transcribe those? I’m ready whenever you are!” It’s reductio ad absurdum, right?

Asymmetry: Let me get my pen out; I’ll give it a try.

Truax: I’ll come back to you in a day. See if you’ve got your first second!
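A thousand grains per second is far easier to generate than to notate. The following is a generic granular-synthesis sketch in Python/NumPy, purely illustrative: the grain durations, pitch band, and density here are arbitrary assumptions, not a reconstruction of Truax’s own software.

```python
import numpy as np

def grain(sr, dur, freq):
    """One sine grain with a simple triangular amplitude envelope."""
    n = int(sr * dur)
    t = np.arange(n) / sr
    env = 1.0 - np.abs(np.linspace(-1.0, 1.0, n))  # triangular envelope, zero at both ends
    return env * np.sin(2 * np.pi * freq * t)

def granular_texture(sr=44100, total=1.0, density=1000, rng=None):
    """Scatter `density` grains per second at random onsets and pitches."""
    rng = rng or np.random.default_rng(0)
    out = np.zeros(int(sr * total))
    n_grains = int(density * total)
    for _ in range(n_grains):
        dur = rng.uniform(0.01, 0.05)            # grains of 10-50 ms: the microsound domain
        freq = rng.uniform(200.0, 800.0)         # random pitch within an assumed band
        onset = rng.integers(0, len(out) - int(sr * dur))
        g = grain(sr, dur, freq)
        out[onset:onset + len(g)] += g
    return out / np.max(np.abs(out))             # normalize the dense mix

texture = granular_texture(total=0.5, density=1000)
```

The point of the sketch is that no individual grain is ever specified as a “note”: the macro texture emerges statistically from the density and the random distributions, which is exactly why a score is the wrong tool for it.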

At that time there was a whole parade of visitors through Sonology, John Chowning, Jim Beauchamp, Charles Dodge, and of course Boulez and his entourage, who came through to “investigate the state of computer musique, because we are building the IRCAM.” We were not even allowed into the computer room during the state visit of Monsieur Boulez, even though I think they may have shown him my software.

Chowning of course was exactly the opposite. He was immediately a friend and informal and everything we know and love him to be, and of course he said, “What are you doing?” So I showed him these really pathetic attempts at synthesis with fixed waveforms. The farthest I’d gotten was amplitude modulation in real time, and he said, “There is a better way. It’s called FM.” Talk about being in the right place at the right time! He hadn’t published this yet; this was in 1972. But he sketched it out for me, and a couple of months later I did what turned out (twenty years later, he and I were talking and realized it) to be the first real-time FM implementation. Not that, maybe, it’s all that significant, and not to take away from his discovery, but I did implement a single-voice version of FM.
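In its textbook form, Chowning’s technique is just one sine wave modulating the phase of another; the richness of the spectrum is governed by a single modulation index. This minimal sketch (the parameter values are illustrative assumptions, and this is not Truax’s PDP-15 code) shows why it suited early real-time hardware: only a couple of operations per sample.

```python
import numpy as np

def fm_tone(sr=44100, dur=1.0, fc=440.0, fm=110.0, index=5.0):
    """Chowning-style frequency modulation: a sine carrier whose phase is
    modulated by a second sine. The modulation index controls how many
    audible sidebands appear, so one cheap formula yields a rich spectrum."""
    t = np.arange(int(sr * dur)) / sr
    return np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))

# An integer carrier-to-modulator ratio (here 4:1) gives a harmonic spectrum.
tone = fm_tone(dur=0.25, fc=440.0, fm=110.0, index=5.0)
```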

That was for a solo in what I was planning to be an opera, a solo of epic proportions called “The Journey to the Gods.” The analog work I had done to that point didn’t seem quite to capture what was needed for this concept, this journey of the character Gilgamesh.

And then FM came along, and it did what I needed it to do, coupled with Knut Wiggen’s inspiration to use stochastic distributions to create complex timbres and complex structures, particularly in four channels. The initial synthesis—done with four mono channels put in a quadraphonic format and all generated by different random start numbers of the same process—produced this really complex interaction which was a delight to hear, a complexity that I had created not by typing anything in that was specific to the notes, but top down, by controlling the conditions under which these things were generated and then guiding it in general ways using Koenig’s tendency masks and Poisson distribution.
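The combination Truax describes, Poisson-distributed event timing shaped by Koenig’s tendency masks, can be sketched generically like this. The mask boundaries, event rate, and linear interpolation below are illustrative assumptions, not the actual POD parameters: onsets arrive as a Poisson process, and each event’s frequency is drawn uniformly between two boundaries that converge over time.

```python
import numpy as np

def tendency_mask_events(total=10.0, rate=5.0, lo=(200.0, 2000.0),
                         hi=(400.0, 500.0), rng=None):
    """Generate (onset, frequency) events. Onsets form a Poisson process
    (exponential inter-onset gaps at `rate` events/sec). Each frequency is
    drawn from a tendency mask whose boundaries interpolate linearly from
    a wide range lo[0]..lo[1] at t=0 to a narrow range hi[0]..hi[1] at
    t=total, so the texture converges over its duration."""
    rng = rng or np.random.default_rng(1)
    events, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate)
        if t >= total:
            break
        frac = t / total
        fmin = lo[0] + frac * (hi[0] - lo[0])   # lower mask boundary at time t
        fmax = lo[1] + frac * (hi[1] - lo[1])   # upper mask boundary at time t
        events.append((t, rng.uniform(fmin, fmax)))
    return events

events = tendency_mask_events()
```

As in the texture Truax describes, running the same process with different random seeds yields independent but statistically matched streams, one per channel.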

In 1975, when Schafer left Simon Fraser, I became de facto his successor there and developed the teaching program further and then later the acoustic communication, the graduate programme, and the book, Acoustic Communication, now in its second edition, which I’m very happy about. And the second edition includes Handbook for Acoustic Ecology as a CD-ROM.

In a way, I felt like I was applying the kind of theory analogous to Otto Laske’s work, another inspiring teacher at Sonology in Utrecht, on music as a cognitive process as opposed to music as an artifact. And there’s a direct analogy, then, to the acoustic environment not as an artifact the way an acoustical engineer would measure and document it but as something defined by how we perceive it. In other words, as information. And, as it turned out, the communications school over the last thirty-four years that I’ve been there has provided this incredible framework for understanding sound as information-based, and generalizing it beyond music to every aspect of sound, including the enormous impact of technology in the twentieth century on that process.

That now appears to me as an almost humorous series of circuitous routes that brought me to this interdisciplinary organization of music, technology, and the social environment, always integrating those, bringing those together, and finding still very inspiring cross-fertilizations between them.

So personally I feel I could have had no better an interdisciplinary grounding, even though it took two continents and several disciplines to, almost randomly or just by chance, put this all together.

So that’s the background. Now where do you want to go?

Asymmetry: Well, I’d like to hear more about that difference between seeing things as objects “out there,” or seeing things as in a relationship.

Truax: OK. So communications started in signal processing; you know, the classic “who says what to whom with what effect?” Harold Lasswell and others developed this idea, and it obviously comes right out of acoustics: the source, broadcasting the sound, the medium of propagation, the receiver, et cetera, and composers have not been immune to that either. The composer writes the score and transmits it through the performer to the audience, who’s supposed to have an aesthetic reaction to it. We model it that way; in fact, we often come to the point of thinking, how could it be otherwise?

But communication postulates much more as, first of all, information. And what you just mentioned, sound creating a relationship between the listener and the environment, which includes other people. Creating a relationship and hence the possibility of creating a community. So yes, there are messages and there is information, but there are also the relationships, something that comes out of systems theory and other types of approaches that capture something very basic that gets you out of a lot of the simplistic models of sound.

I’m always asked, “Does A cause B?” Or, as Max Mathews was asking this morning, are there timbres that are inherently pleasant or inherently unpleasant? And the discussion we went through at that point went through all the subjective/objective relational types of things. But it’s still very much based on the question, “Do the sounds themselves possess properties?” And there’s only so far you can go with that. It’s not wrong, but you miss something ultimately; you end up then saying things like “The way you regulate noise in the environment is by establishing risk criteria or permissible limits for sound.” Because there has to be this objective proof that A causes B. Or statistically, and you’ll end up in statistics very quickly: what is the risk of….

And that’s not adequate for explaining an acoustic community.

Asymmetry: It may be adequate for describing the toxic effects of sound.

Truax: Yes, it could be. Or for city council decisions. And I teach that stuff to my students because unfortunately the risks to their hearing are not something that our educational system actually tells them anything about. I’m not going to blame “the kids” for destroying their hearing, because the system has not given even step one in the educational knowledge of what the risks are. In a way that they would believe, as opposed to just a put-down; you know, “You’re all ruining your ears with that horrible music,” or whatever. That’s not risk assessment.

Asymmetry: And that’s still how it’s put. That’s how I hear it; all the time.

Truax: And the irony is, the basic knowledge has been there for almost a hundred years. If you go back to the first noise study of any city, which was New York, 1929, you see that the main five or six effects of noise that the health department lists are exactly a gloss on how we would structure the same information now. We’ve filled in a lot of details, but the basic idea was there ever since industrialization got so bad, or so serious, that there could be such a thing as a city noise study. The information has been there; the communication has not. It can be tragic, since sound and hearing is such a fragile and important part of one’s life. It’s a major way of relating to the environment, and people can throw it away needlessly and haphazardly.

When Schafer wrote me about the World Soundscape Project, I realized that as soon as I stepped outside of the building in Utrecht, I was in the centrum of a European city that was not designed for the trucks and buses and traffic plowing through the narrow streets, creating what I would now estimate as 90 dB of noise. The best thing you could have done in that city was to eliminate traffic from the centrum and make it all pedestrian. They did have a few pedestrian areas, along the canal, and of course you retreated to those areas to plan your trips through the city. So there was the paradox. I’m in this wonderful building on a small canal street in Utrecht in these very esoteric music and technology studios, including computer music, then I walk outside, and I’m plugging my ears.

The only time it was ever quiet (which I described, and I think Schafer quoted in The Tuning of the World) was on Dutch remembrance day, May 4th, and Dutch liberation day, May 5th. The whole city is quiet or shut down for the moments of silence, and in Utrecht the bells from the cathedral ring out over this celebration. And it was absolutely awesome to be there. The solemnity of it, the sheer acoustic weight of those bells tolling out over the city, this huge mass of sound that defines the whole space of the community in a positive way, rather than the traffic that was defining it in a negative way.

So there are, if you’re lucky, some moments when the environment can be enriched and redefined in a purely sonic way. Soundscape composition is not the only way, because I do electroacoustic music in general and all sorts of things related to that. But to tie up at least one of the threads I’ve been developing here: soundscape composition, obviously, brings together two things, using technology compositionally and using soundscape elements not just as material but as an idealized or virtual sonic environment. And with eight-channel surround sound, you are immersed, just as you are in the soundscape all the time, so you’re recreating the soundscape experience rather than the frontal stereo experience of the acousmonium or the traditional orchestra.

Asymmetry: Yes, I’ve noticed that they favor the orchestra set up here.

Truax: In Bourges, the French tradition of diffusion, which I adore and have found very inspiring since I started coming here in the mid seventies, is literally an orchestra of loudspeakers. But gradually even Bourges has incorporated more rings of loudspeakers around you. And also knowing that I prefer the circular configuration of speakers, they’re usually able to accommodate that nowadays. It is very enticing and seductive to be in an eight channel sound space with the sound all around you.

Asymmetry: It is very enticing. And I’m always amazed at how few people manage to be enticed. Last year, my first time in Bourges, was a big shock to me, to walk into the room and have what I consider to be really cool stuff going on all around me, and six people are in the room, and they’re all composers. So why aren’t the people out in town coming to hear this stuff? They’ve been here for almost thirty years.

Truax: But I’ve been even longer where I live, and that’s even harder. I live in Burnaby, and you definitely don’t get performances in Burnaby, and I don’t even try.

But Françoise and Christian, I don’t think you could find any two people who have done more for electroacoustic music if you tried. The studio, the festival, the competition, the tours—I don’t know if they do the educational outreach as much as they did in the first few decades, I don’t know what the status of that is, but the list of achievements and the encouraging of young composers, of emerging composers. Composers from the eastern bloc countries—you could only hear them here. Latin America—you could only hear them here. On and on and on and on. Essentially two people, in a provincial town, made this international impact. This is the big picture.

OK, so you’re going to grouse because they didn’t also bring in hundreds of locals? We have in Canada a very similar situation. The Banff Centre, in the middle of the gorgeous Rocky Mountains, with no impact on the surrounding area. Banff is largely a tourist center and no one can afford to live there, not even the people at the Banff Centre, so they live down the road in Canmore. And as far as I know, the delightful people of Canmore have probably not figured out what the Banff Centre really does. Are you going to tear it down as a result of that? Well, actually you do better; you make it have to be self-supporting! And then you gradually transform it that way. That’s the government’s solution for how to deal with that.

So there aren’t really many counterexamples of success stories in this area. So I’m a little loath to vilify anybody who has made such an incredibly important contribution to so many lives and so many composers, and to the field in general, and who continues to do so, and who negotiates all of the politics and the administration and the financing and all that to make it happen.

Asymmetry: I have decided with Asymmetry, at least for the moment, that I’m just going to sidestep the vilifying stuff. There are maybe some books that I want to vilify; those are the books that do vilifying. But mostly the tone I want for Asymmetry is that this music is perfectly nice, perfectly fine, perfectly lovely, and, you know, give it a listen.

Truax: That’s the most reasonable attitude I can think of to approach the general public. It frustrates me when there are barriers between me and the audience. And I guess the way to get through that is, first, if I can satisfy me, then I have some hope of satisfying you. Because I don’t know who the “you” are.

Asymmetry: If you start out trying to satisfy me, unless you know me, personally, you’re probably not going to do it.

Truax: Well, this is the marketing thing. If you market to the audience, what they already know, it quickly goes to least common denominator. And I’ve found that when my students debate this—because of course they do as well, in and out of class—they are so engaged with mass culture that they are always throwing this music up against it.

Asymmetry: And you really can’t.

Truax: And unfortunately it’s the music types who have done so, despite the communications school being there. They don’t have the slightest concept of how mass culture works—the political economy of mass culture and the production of the consumer, which is advertising. That’s all just kind of obliterated, and all they see is the mass audience but none of the capital that goes into it.

Asymmetry: So they see none of the steps.

Truax: Not really, they don’t understand them, because they haven’t even taken a basic communication course that would inform them of it. It’s not complicated. Well, it is, of course, ultimately, but not the fundamental principle, the production of the consumer by capital. That audiences are created by advertising and promotion, by massive amounts of capital that are controlled by only a few corporations.

The students either don’t know it, or they don’t want to know it, or they’re ignoring it, so they just reduce it down to “Well, this music doesn’t have any audience, and all this other stuff does, so why are you composing that music?”

Asymmetry: I think a lot of people perceive the demand as a given, even though it’s not; it’s a totally manufactured thing.

Truax: Exactly. And they don’t understand that. Why is “popular” popular? Not because of people being popular but because of the way the consumer has been created. One of the rhetorical strategies I use to get this idea across is I say “The arts have this quaint idea that you spend 90% of your budget on creating the product and then you figure out if you have anything left over to advertise it. The mass market spends as little as possible on creating the product and massive amounts, at least ten to one, on creating a consumer for it.”

And you’re going to mix those two and talk about the relationship of those two practices? They have nothing in common, economically. And that’s simplified, but then maybe people begin to get it, that the popularity isn’t in the content, it is in the promotion of it. This is like 001 Communication Theory or Political Economy of Media or Media Studies, things like that.

I do think, however, that the composer should be thinking about communication to the audience, at every level. Title, programme note, I’m always overemphasizing to my students, because they always have trouble with their titles. I do too, but that’s probably the only thing the audience is going to know when the piece starts: how are you using the title to prepare them; are you using the title to prepare them? That’s probably the only thing other than the piece itself that they’ll know, because they may or may not read the programme note, but that’s your second line of defense, that’s your second chance to give the listener something to expect, how to listen to it, not, you know, what inspired you, what got you going, what you read, although it may, if it’s integral enough.

The naïve person approaching this would like to know, “What am I getting into? How should I listen to this?” And particularly in electroacoustic music, because it’s so variable, and it’s so unexpected, and so unconstrained by the normal physical laws of what a violin or a piano can do, which already kind of prepares you, since you’ve heard violins and pianos before. Given this open-endedness, not to mention the possibility that you can be suddenly blasted by enormous power at any moment and shocked out of your skin, right? I can certainly understand how the naïve listener could feel. Naïve, but yet with a kind of cautious, positive wanting to be open to this. That’s the person for whom the programme notes should be written.

I always find it curious to read programme notes, particularly from students, and then hear the piece. They’re almost always disconnected. “That’s what you were thinking? That’s not what I hear!”

Composers, even if they’re thinking abstractly, have their own process, and the stories they tell themselves, and what they think the piece is about. But usually a) that doesn’t make a good programme note, b) it doesn’t really help the listener, and c) the listener might as well take the approach of just simply ignoring it. Or maybe afterwards, if they’re curious: “Oh, where did this come from?” But you don’t need to know. It’s extra-musical.

Asymmetry: And I think the autobiographical stuff, particularly, is probably more interesting, more useful, after you’re already familiar with the piece. I don’t think it leads you into it very well at all.

Truax: Not usually, no. But with the soundscape composition I think it can. Because the piece clearly should be about something. And the inspiration and what it’s about have to be one and the same. Not external, not separated as they normally are, and guided by the subject, not by your personal desire to express yourself. That’s the way that I have usually worked with timbre, anyway, once I had gotten past synthesis into granulation of sampled sound, which was 1990. There was an earlier period, 1987 I suppose, like with the Wings of Nike phonemes, but no full-blown sampled sound until after 1990. And of course that’s where the integration with soundscape had to be more explicit, and that principle naturally evolved. I say this as if it were a dictum now, “Thou shalt,” but what I really mean is that it seems to work.

And I think of how Koenig distinguished between composing with sounds, which is instrumental music, and composing the sound, which is electronic music and acousmatic music. And Walter Branchi formulated something like composing through sound, in which the sound itself and its properties and its context and everything about it guide the process, not your personal whims.

That’s in contrast to the German tradition, which largely used an abstract syntax with arbitrary sounds that were fitted into it. The best composers found or made relationships between those, so it’s not that it was wrong, but the idea that you can separate structure from sound is something that disappears at the micro level, although I think it also needs to disappear to some extent at the macro compositional level. If you were to map all the sounds that we’ve heard in this park onto a composition that’s going to behave in the same way, that would be an elementary soundscape application. I’ve heard it applied to instrumental music. And the result was completely different, completely fresh, unlike any mere style of contemporary music I’ve ever heard. Because the whole flow of it, the whole tempo of it, the structural tempo of it, the whole listening attitude, changed, even though it’s only instrumental sounds that you’re hearing.

Suppose now we’re using actual environmental sounds, what do they suggest? One of the simplest examples, since we’ve been talking a bit about bells, is my piece Basilica, from early on in my time-stretching technique days. ’92 or something like that. I started stretching my favorite bells from the soundscape collection, the bells of the Basilica of Notre Dame de Quebec in Quebec City, which are the European-style bells that come down one at a time, as opposed to English change ringing.

I started stretching them, later adding an octave below and a twelfth above to fill out the spectrum, but just the stretched bells seemed to create resonances of a very large space. Hmmm. What very large spaces could be associated with bells?
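Granular time-stretching of the kind Truax describes can be sketched as overlap-add granulation with a slowed read pointer: the output advances at normal speed while the pointer into the source crawls, so the duration multiplies while the pitch stays put, and long resonance-like washes emerge. This is a generic illustration with an assumed grain size and overlap and a synthetic inharmonic “bell,” not the actual Basilica source material or software.

```python
import numpy as np

def time_stretch(src, sr=44100, factor=8.0, grain_ms=40, rng=None):
    """Granular time-stretching: overlap-add short windowed grains taken from
    a read pointer that advances `factor` times slower than the output, so
    the sound lengthens without transposing its pitch."""
    rng = rng or np.random.default_rng(2)
    n_grain = int(sr * grain_ms / 1000)
    hop = n_grain // 2                              # 50% overlap in the output
    win = np.hanning(n_grain)
    out = np.zeros(int(len(src) * factor) + n_grain)
    for out_pos in range(0, len(out) - n_grain, hop):
        src_pos = int(out_pos / factor)             # the slowed read pointer
        src_pos += rng.integers(-hop, hop)          # jitter to avoid comb artifacts
        src_pos = max(0, min(src_pos, len(src) - n_grain))
        out[out_pos:out_pos + n_grain] += win * src[src_pos:src_pos + n_grain]
    return out / np.max(np.abs(out))

# Stretch a short synthetic "bell": a decaying sum of inharmonic partials.
sr = 44100
t = np.arange(sr) / sr
bell = np.exp(-3 * t) * sum(np.sin(2 * np.pi * f * t) for f in (260.0, 437.0, 702.0))
stretched = time_stretch(bell, sr=sr, factor=8.0)
```

Stretched far enough, transient partials become sustained quasi-drones, which is one plausible reading of why the slowed bells began to suggest the resonances of a very large space.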

Asymmetry: Hmmm. Yes. Airplane hangars?

Truax: Ha ha, yes. Well, I’d been spending a lot of my time in Europe in a lot of Italian basilicas. Though all the sounds are from the bells, the map of the piece, which is a very simple map, takes the form of a basilica, a cross within a rectangle. That gave both the imagery and the shape of the piece: approaching the church, going inside and down the long nave; that’s the opening six-minute section, where it’s just the bells. Almost the literary idea of the part standing for the whole. In the eight-channel version they swirl around as well and create this very large sense of space. And in the process, I also started to hear these vocal-like sounds coming out, vocal resonances that came out of the stretched bells. And so I thought, hmmm, vocal resonances in a church.

Asymmetry: Yes, let’s see. Has that ever happened before?

Truax: Alright! So I’m not afraid to be so obvious about it, because I want it to be about that, right? Another type of composer might say, “Oh, but that’s too literal. We’ll abstract that.” You’ve heard lots of examples of that, where if you don’t read the programme notes, you won’t figure it out. Did you know that piece the other night was about Mao Tse Tung?

Asymmetry: Only because I read the program note.

Truax: So how dare a composer say that that piece was about Mao Tse Tung!

Asymmetry: Exactly.

Truax: And I’m sorry to pick on that one person, but nine times out of ten, that’s the disconnect between the piece and the programme note. The person thinks it’s about this, and then goes out of their way to obscure that fact in the piece but writes about it and tantalizes you by saying, “Oh, but it’s really about this.” So why does a composer do that? I suppose one reason may be that they’re young and insecure and afraid to be too literal, because their composition teachers have told them that abstract is best; and maybe it’s also because they’re at a university and, as one of my students said (I hated the expression, but I had to appreciate the sentiment), “this is my school music.” And I suddenly realized, he’s got his garage music, his punk music, his world music, his et cetera music, and then there’s his school music. Oh dear. Who taught him that his school music was different? I’d been working all this time to show him that it wasn’t school music, it just was music. So even if you don’t want them to be academic and abstract, they still think of it that way, because of the institution. They have absorbed abstractness as the way to go, and it is a sore point with a lot of composers in teaching. So there’s a lot of work to be done about that.

I’m teaching a course for the second time this fall. We just call it, for a label, Soundscape Composition, but the real topic is context-based composition. Soundscape just happens to be a well-defined term and communicates a little less abstractly than context-based. But imagine that, music engaging with the real world and not in the business way of dealing with the real world, but informing the actual music—suddenly even the really good composition students say “Wait a minute! That’s gonna be difficult. It sounds good, but I haven’t been taught how to do that. I’ve been taught how to write all these notes. I’ve been taught instrumentation, I’ve been taught…. But I haven’t been taught about the real world. And I’m not encouraged to use the real world except maybe for the title that I put on my piece and maybe a little programme note. That doesn’t seem to cut it with this prof, who says it should inform the composition in some way.”

I’m over-dramatizing this. But it is a scary thing. And then, furthermore, the things that have engaged with the real world have been put down as second class, programme music, film music, sound effects, et cetera, et cetera. We only have those kinds of models. And electroacoustic music? Well, that depends on who you’re listening to. If you’re listening to Trevor Wishart’s Red Bird, well then you’ve really heard something, and one reason a piece like that was so shocking in the 1970s was that it did dare to have this narrative structure, but one so complex and abstracted, with such sophistication in its sounds and in the level of its syntax, that you couldn’t just pass it off as a radio play, or an existing genre. It clearly was a piece of music that would stand on its own, even if it is forty minutes long.

Other pieces broke through that prejudice, too, early prizewinners at Bourges such as Jack Body’s Musik Dari Jalan, which got a first here in the seventies and then later the Euphonie d’Or. When they did a retrospective, they went back and awarded some other prizes to the prize winners of the past, and it was interesting how some pieces had survived and some others had become a bit dated. Red Bird and Musik Dari Jalan were two that picked up a Euphonie d’Or. I don’t know if you know the latter piece, but it’s one of my favorites; it alternates between the Indonesian street cries (which is what the title means in Indonesian) abstracted as sound objects and then put back into their environmental context. Done completely transparently. And the moments of transition are beautiful.

Asymmetry: That sounds really stunning.

Truax: It was. It was stunning in the seventies, and it still is. So I take moments like that, and you could multiply those by many others, as ones that were striking and did get recognized and even celebrated later as landmark pieces, to the community’s credit and particularly to Bourges’ credit. So those are examples of challenging this “abstract is best” structure that unfortunately a lot of acousmatic music, as much as I love it, does tend to fall into, taking the sound out of context as an object of perception with reduced or focussed listening and ignoring the source.

And of course that can be traced back to Pierre Schaeffer’s dislike for what he called the anecdotal quality of the sound. He wanted to get away from that as fast as possible. The result has been of course wonderful in the sense of the primacy of the ear and the classification of sound and the sound world of the acousmatic, but then gradually composers started, like Parmegiani for instance, to bring back an environmental context into their music.

I think we’d better pause here. We’re being invaded by the daycare, the local school or daycare center here. Probably we should move.
_______

So I was mainly talking about how soundscape compositions have this kind of dual purpose of relating you back to the environment and the environmental experience. We hope that it will carry over into everyday life and will challenge the assumption that music is independent of the real world. It’s a peculiarly Western, probably European-derived concept that music can be abstract and not functional, and of course anything that’s functional or programmatic or things like that is definitely second-class, right?

On the other hand, one of the things that really bugs me about people trying to communicate for instance about nineteenth century romantic music is that they read so much into it. Take a Schumann sonata, for instance; they’ll go on and on about his love for Clara, and how the themes represent this or that, which is projecting this soap opera onto the music. Is that the relationship we want to have between abstract musical form and the real world? Projected fantasy stories of composers’ biographies or whatever?

Asymmetry: Well, you still get people saying that the story that was made up to accompany Symphonie Fantastique is autobiographical, in spite of the fact that none of the details of that story correspond to anything that had ever happened in Berlioz’ life.

Truax: Yes, it’s fiction. So you have the fiction model for abstract music!

Asymmetry: And it was supposed to be, like all programs, an aid to the audience, to get them through this music that was very new and dangerous sounding to them.

Truax: Exactly, and the example in my work that would be a pretty close parallel would be my first granular synthesis piece, Riverrun, 1986. It felt like breaking into such a new domain: it felt like such an incredible, powerful force that I was unleashing, even on myself. I never played it in public for quite a number of months. And I never even let anyone hear it in the studio, because of the sheer power of it. It’s like, “My God, what have I created here?” particularly in the confines of a studio. And, footnote, I have a lovely new eight channel version of it, which is the proper version. I don’t normally go back and tinker with pieces, but all of the material for Riverrun was generated in eight channels. And it was merely the banal fact of not having enough tape recorders to record it. Ultimately, it had to be mixed down to four and that’s how it was presented for many years. And most people know it just in the stereo version on CD. So I finally decided a couple of years ago to create an eight channel version of it, directly from the eight channel sources, without that compromise. And, oh, it’s so much better. I’m hoping that people will be able to hear it.

Anyway, back to the Berlioz. I think the only comparison I might have to Berlioz’ experience with Symphonie Fantastique is in feeling that a very simple metaphor, the river from its source to the ocean, would be a good guide for the listener. First of all, it was a guide for me on how to make sense of it, literally so. When I started with these first tracks of granular synthesis, I had no idea I would produce a piece called “Riverrun.” What I did was just fill out the tape, an eight track tape, with all these experiments, all these many sections or tracks, and when the tape filled up I decided well, what am I going to do with this piece? Yes, it’s a dubious confession to make, but it captures the sense that composers tell stories afterwards, you know, how you made this or that, with this superhuman effort, intending everything, putting everything in place, and creating this masterwork.

No, not really.

A lot of Riverrun is done with grains that are enveloped sine waves. There are FM grains in it, so I can’t say it’s entirely done with sine waves. The sampled sound stuff came later. But Riverrun is now an almost classic example of using the lowly sine waves, which Stockhausen, by the way, had called “little brutes.”
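The grains-as-enveloped-sine-waves idea Truax describes can be sketched in a few lines of NumPy. This is only an illustrative sketch, not the real-time system behind Riverrun; the Hann envelope, the frequency and duration ranges, and the grain density are all assumptions chosen for demonstration.

```python
import numpy as np

SR = 44100  # sample rate (Hz)

def grain(freq_hz, dur_ms):
    """One grain: a short sine tone shaped by a Hann envelope
    (one common envelope choice among several)."""
    n = int(SR * dur_ms / 1000)
    t = np.arange(n) / SR
    return np.hanning(n) * np.sin(2 * np.pi * freq_hz * t)

def granular_texture(n_grains=2000, length_s=3.0, seed=0):
    """Scatter grains at random onsets, frequencies and durations,
    overlap-adding them into one output buffer."""
    rng = np.random.default_rng(seed)
    out = np.zeros(int(SR * length_s))
    for _ in range(n_grains):
        g = grain(rng.uniform(200, 800), rng.uniform(10, 40))
        start = rng.integers(0, len(out) - len(g))
        out[start:start + len(g)] += g
    return out / np.max(np.abs(out))  # normalize to +/-1

texture = granular_texture()
```

Even at this density, roughly 700 grains per second, the individual sine waves disappear into a continuous, shimmering texture, which is the perceptual effect the piece exploits.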

I love Richard Toop’s article about the correspondence of Stockhausen with Goeyvaerts—in Musical Quarterly, of all places. Richard Toop’s analysis of that correspondence is valuable, because he demythologizes some of Stockhausen’s later autobiographical glosses of how early and how all-intended this was. That he was basically what we would now think of as a graduate student, hitchhiking to Paris and back to Cologne, hearing about this, that, and the other. That he got Eimert to show him these things. It’s not clear if he heard any sine waves in Paris during that period, but he obviously discovered sine wave generators in the Cologne studio and in the correspondence with Goeyvaerts he literally says “These little brutes. Surely you can’t mean composing with these! They are so irritating, and it goes on and on like the whistle on a radio.” He says what everyone thinks listening to a sine wave generator after only a few seconds, that this is not wonderful, OK?

And we know that as early as 1946/47, Dennis Gabor wrote a seminal article on understanding what the alternative was, what is now called the Gabor grain. He said what we need is not the timeless Fourier abstraction of piling up sine waves and then later figuring out how to put some envelopes on them, à la Jean-Claude Risset. He said what we need is a description at the quantum level, and there was enough psychoacoustics in the nineteen forties to know that there really was a quantum concept, that as you got shorter in the time domain, you got broader in the frequency domain. And the Gabor grain is the point at which the two domains match perfectly: the Gaussian shape in the time domain is the same as the Gaussian shape in the frequency domain. And it’s only taken us 60 years to read what Gabor said, in theory of course, although he did some experiments. He’s a seminal figure, and fortunately Curtis Roads has taken it upon himself to revive Gabor’s memory and his contributions.
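The time-frequency trade-off Truax mentions can be checked numerically: a Gaussian pulse transforms into a Gaussian spectrum, and its RMS duration and RMS bandwidth attain the uncertainty bound dt·df = 1/(4π). The sketch below (sample count and pulse width are arbitrary choices) verifies this.

```python
import numpy as np

# A sampled Gaussian pulse: its Fourier transform is also Gaussian,
# and its RMS duration times its RMS bandwidth reaches 1/(4*pi),
# the bound the Gaussian uniquely attains.
N, T = 2**16, 1.0                       # samples, total duration (s)
t = np.linspace(-T / 2, T / 2, N, endpoint=False)
sigma = 0.01                            # time-domain std dev (s)
g = np.exp(-t**2 / (2 * sigma**2))

def rms_width(axis, signal):
    """RMS width of |signal|^2 treated as a density along axis."""
    p = np.abs(signal) ** 2
    p = p / p.sum()
    mu = (axis * p).sum()
    return np.sqrt((((axis - mu) ** 2) * p).sum())

f = np.fft.fftshift(np.fft.fftfreq(N, d=T / N))   # frequency axis (Hz)
G = np.fft.fftshift(np.fft.fft(g))

dt, df = rms_width(t, g), rms_width(f, G)
# dt * df * 4 * pi comes out at ~1.0: the Heisenberg-Gabor limit
```

Shortening the pulse (smaller `sigma`) shrinks `dt` and inflates `df` in exact proportion, which is precisely the "shorter in time, broader in frequency" relation described above.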

Asymmetry: That’s incredible. That’s at the very beginning of things.

Truax: That’s right. If there had been Gabor grain generators around instead of sine wave generators, the whole history of electroacoustic music would have been completely different, because it would have been based on a psychoacoustic principle rather than on an instrument-based principle. The closest that they could have come was the impulse generator. The impulse could have been a grain, but it was only controllable in its frequency of repetition, not in its shape.

So, yes, the theory was there. Even, enticingly, we know that Stockhausen’s teacher, Werner Meyer-Eppler, knew about that publication—it evidently is in his papers. But I think it comes down to the fact that there was no way to implement it. The theory was there, but there simply was not the possibility of actually using it. That often happens.

So back to the Goeyvaerts’ correspondence, you get Stockhausen decrying these little brutes of sine waves, and a few months later, he’s done the mixtures, he’s put them into the reverb chamber—a literal chamber—has cut off the original and just kept the reverberant portion and now sees them as flowers glistening in the sun or raindrops glistening in the sun, I think it was. And Richard Toop, with characteristic British understatement, says that never has such a reversal had such importance in so little time.

Then of course what came down to us was the fixed wave form, the oscillator and all the sins that have been committed in the name of Fourier ever since. And then, gradually, with Risset you get a way of reestablishing a realistic and timbrally interesting spectrum by manipulating the time domain, the envelope synchronous or asynchronous, of the various frequencies.

And then, you know, Xenakis and Curtis Roads and others were working in this area. Eventually the technology, in my case microprogrammable DSPs, allowed that idea to be implemented in real time in 1986. There were of course historical precedents before that. But they weren’t ones of general enough flexibility to either produce full-scale compositions or systems that other people could use. I remember in ’86 hearing these first sine wave-based grains and just thinking “This is so aurally satisfying.” And since I had put them in two stereo channels, the grains were independent of each other, so you had this spatially distributed timbre that fell so agreeably on the ears.

These were quasi-synchronous grains; the more synchronous they were, the more you got amplitude modulation effects, so that you also created sidebands according to amplitude modulation that would create other pitches. A fact that, by the way, completely escaped Xenakis—he was bitterly disappointed to hear pitches when he heard Riverrun in Illinois. “Could this be granular synthesis?” Because of course, he had always thought asynchronous grains, because he devoted his whole life to being anti-Fourier. Not only that, of course, but he was always looking for a non-Fourier-based approach. And so suddenly to hear pitch in the world of the grains that he had predicted was very upsetting to him, because of course he didn’t understand that if they were quasi-synchronous, they could create periodicity.
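The sideband effect that surprised Xenakis is easy to demonstrate. In the fully synchronous limit, back-to-back Hann-shaped grains of a sine wave reduce exactly to classical amplitude modulation, so spectral lines appear at the carrier plus and minus the grain rate. The carrier and rate values below are arbitrary illustrations.

```python
import numpy as np

SR = 48000
t = np.arange(SR) / SR                 # 1 second of samples
f_c, rate = 1000.0, 100.0              # carrier, grain-repetition rate (Hz)

# Back-to-back Hann-shaped grains, one every 1/rate seconds: the grain
# envelope is then exactly 0.5 - 0.5*cos(2*pi*rate*t), i.e. plain
# amplitude modulation, so sidebands appear at f_c +/- rate.
env = 0.5 - 0.5 * np.cos(2 * np.pi * rate * t)
sig = env * np.sin(2 * np.pi * f_c * t)

spec = 2 * np.abs(np.fft.rfft(sig)) / len(sig)   # bin k sits at k Hz
# carrier at 1000 Hz has amplitude ~0.5;
# sidebands at 900 and 1100 Hz have amplitude ~0.25 each
```

Those sidebands at 900 and 1100 Hz are the "other pitches" in question: perfectly periodic grain streams cannot help but produce them.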

I’m not as purist about these things; I can go from periodic to aperiodic with no emotional or psychological or aesthetic qualms. I just see it as a continuum, you know.

Another key moment was de-synchronizing the grains. It was possible to line them all up if you didn’t have any randomness between them. And then you are back to the fixed waveform thing, hard-edged, motorized, modulated. You heard the example Trevor Wishart made this morning, simply repeating the r sound. And of course it creates modulation; suddenly all the dimensionality collapses to one dimension, and we think “machine,” right?

At the granular level, when you introduce even one or two or three milliseconds of asynchrony between those grains, it suddenly explodes into the three dimensional granular texture that we’ve come to know and love. And I was shocked that it only took a few milliseconds to do that. Whenever you find such a discontinuity, you should stop and think. Because there are not usually such discontinuities anywhere. And it’s not a discontinuity in the mathematical sense; it was a perceptual leap. And I’ve tried to write about this. The experience of it was very, very striking, and I now realize that we were then talking about uncorrelated grains. That’s the problem with fixed waveforms. All of the frequency components are correlated. And there’s a question of what degree of quasi-correlation is useful for recognition; for instance sounds that fuse together have a certain degree of correlation that the brain picks up, and sounds that are clearly uncorrelated are heard as different sources. Particularly if they come from different locations.
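The effect of a few milliseconds of onset jitter shows up clearly in the spectrum: a perfectly periodic grain stream has a line spectrum, while jittered onsets smear those lines into a noise-like band. This sketch compares the two cases with a simple peak-to-mean measure; all parameter values (grain rate, jitter range, band limits) are illustrative assumptions.

```python
import numpy as np

SR, DUR = 48000, 4.0

def grain_stream(jitter_ms, f_c=1000.0, grain_ms=10.0, period_ms=10.0, seed=1):
    """Hann-windowed sine grains, nominally one every period_ms,
    each onset displaced by up to +/- jitter_ms."""
    rng = np.random.default_rng(seed)
    n = int(SR * DUR)
    out = np.zeros(n + SR // 10)                 # slack for late grains
    g_len = int(SR * grain_ms / 1000)
    tg = np.arange(g_len) / SR
    g = np.hanning(g_len) * np.sin(2 * np.pi * f_c * tg)
    t0 = jitter_ms / 1000                        # keep onsets non-negative
    while t0 < DUR:
        start = round((t0 + rng.uniform(-jitter_ms, jitter_ms) / 1000) * SR)
        out[start:start + g_len] += g
        t0 += period_ms / 1000
    return out[:n]

def band_crest(sig, lo=800.0, hi=1200.0):
    """Peak-to-mean ratio of the magnitude spectrum over [lo, hi] Hz:
    large for a line spectrum, small for a smeared, noise-like band."""
    spec = np.abs(np.fft.rfft(sig))
    band = spec[int(lo * DUR):int(hi * DUR)]     # bin k sits at k/DUR Hz
    return band.max() / band.mean()

crest_sync = band_crest(grain_stream(jitter_ms=0.0))   # periodic: sharp lines
crest_jit = band_crest(grain_stream(jitter_ms=3.0))    # jittered: smeared band
```

The synchronous stream's crest is orders of magnitude larger than the jittered one's: three milliseconds of asynchrony is enough to destroy the periodicity, which matches the abrupt perceptual leap described above.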

So we’re back again to the soundscape kind of approach. How do we manipulate the auditory stream? How do we put together the soundscape? How do we make sense of these two little vibrating ear drums, of all these complex sound sources that are all around us all the time? Not to mention how to balance those like we just did when we moved to an environment here where there’s somewhat better balance! [Away from the kids taking lunch in the park where we’d started out.] We just redesigned our acoustic space by choosing a different location. So it’s fun to hear the little bird over there and the little bird over there and that bird over there and I can quite easily follow all of those and the crunching steps behind me and so on and so forth.

The Risset additive synthesis-type envelopes are quasi-correlated. They can’t be exactly correlated or you’re back to a sawtooth wave. But they are correlated enough that you identify that as a source; they “fuse together” in modern psychoacoustic terminology. The classic Stanford example of that is putting vibrato on different harmonics so that they fuse into a voice. The Perry Cook book has that example, that Steve McAdams and others have talked about. If you put synchronous vibrato on a bunch of frequency components, it suddenly is a voice. Because vibrato is by definition synchronous. And then the punchline of the examples is, they make three separate voices and mix them together—a mess of harmonics. Then they put vibrato on each of those, three internally synchronous groups that are not synchronous with each other. And suddenly you have a choir of three people.
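The fusion demonstration can be sketched as follows: each voice is a set of harmonics sharing one vibrato function, and the three voices get different vibrato rates. This is a simplified reconstruction of the idea, not the original Stanford demo; the fundamentals, vibrato depths, and rates are assumed values.

```python
import numpy as np

SR, DUR = 44100, 2.0
t = np.arange(int(SR * DUR)) / SR

def voice(f0, vib_hz, vib_cents=30, n_harm=6):
    """Harmonics of f0 sharing one vibrato function: the common
    frequency modulation is the cue that fuses the partials into
    a single perceived voice."""
    vib = 2.0 ** (vib_cents / 1200 * np.sin(2 * np.pi * vib_hz * t))
    phase = np.cumsum(2 * np.pi * f0 * vib / SR)   # integrate inst. frequency
    return sum(np.sin(k * phase) / k for k in range(1, n_harm + 1))

# Three voices, each with its own vibrato rate: without the vibrato the
# mix is one undifferentiated mass of 18 partials; with it, listeners
# segregate three distinct voices.
mix = voice(220.0, 5.0) + voice(277.2, 5.7) + voice(329.6, 6.3)
mix /= np.max(np.abs(mix))
```

Setting `vib_cents=0` removes the modulation and collapses the percept back into the undifferentiated harmonic mass, which is the before/after contrast the classroom demo turns on.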

It’s one of the great “Show & Tell” things that they do, and we all steal that example in teaching.

So modern psychoacoustics has given us some clues as to how auditory streams are segregated—how we listen, what the ear wants to hear, or can hear, and how it sorts it all out. And electroacoustic practice has, by simply the good, aural perception of the composers, worked all that out. There are in fact all sorts of ways in which musical practice has intuitively just worked in this area, without necessarily formalizing it, and then psychoacoustics has gradually been figuring out the musical tricks that composers have found that work. There’s still lots of work to be done, but I see these different elements as all coming together in some very exciting ways, around the computer, around the soundscape, around perception.

I’m presenting a paper in Leicester next week at the electroacoustic music studies conference, EMS. It’s all about the analysis and study of electroacoustic music, documentation of it, analysis of it. I’m turning around the classic Cage/Schafer notion of listening to the soundscape or environment as if it were music and asking what happens if you listen to electroacoustic music as if it were a soundscape. It’s a nice idea, right?

Asymmetry: That can’t help but just broaden the whole range of possibilities.

Truax: I hope so.

I’ll be curious about your reactions to the student pieces I’m presenting Thursday. Because first of all there’s no way that they’re going to imitate what I do or do what I say or anything like that. And you could almost argue that none of those are soundscape compositions. I think that you will see that they are all rather individual, but what they do all have in common is something very general and that is an engagement with the real world. There’s only one piece that is abstract enough to be called acousmatic, and it’s still based on a piano being hammered and abused. I could have chosen another piece by the same person, but I thought that that one was the best right now.

Of course, I got to select them, so I chose the ones that I like. And the ones that I like are the ones that are based somehow in the real world. But also I see that they tend to communicate better with people. Leigh Landy calls this the “something to hold onto factor.” His new book will pursue that more systematically. Accessibility of the work is how he phrases it. Composers have to have something worth listening to, otherwise they can complain all they want in their closet, expecting the audience to break down the door.

Asymmetry: Whenever I hear the word “accessibility,” I think of my own personal experience, which is that composers didn’t have to do anything, for me. They didn’t have to meet me at all. I just went out and started pursuing.

Truax: So how can we create more listeners like you who are willing to go out and do that? It’s an intoxicating world.

Asymmetry: Intoxicating is exactly it.

Truax: And we shouldn’t be afraid to use that term, in the sense that involves all the senses.
