Jon Christopher Nelson

“The technological advances that we have seen in our lifetimes create a wonderful atmosphere for sound exploration. For those of us who are easily intoxicated with sound, computer music provides fantastic possibilities.”

I first heard the music of Jon Christopher Nelson on an IMEB disc (Cultures Electroniques, vol. 9). I first met him at the 2006 Bourges festival, where his “Just After the Rain” was played. Since we held this interview right after Jon had returned from the studios at Bourges, where he had just finished a commission, we started off by talking about his decade-long association with IMEB and about his latest piece.

Nelson: I first became associated with IMEB (Institut International de Musique Electroacoustique de Bourges) back in 1996 when I received a prize for “They Wash Their Ambassadors in Citrus and Fennel.” Since then, I have been fortunate to receive several prizes from them, and in the last five years have served twice on their composition competition jury. I was particularly pleased when my composition “Scatter” was recognized as one of the ten best works of the past ten years. Since I have received two commissions from IMEB (one in 2003 and one that I just completed in January of 2007), I have also enjoyed the luxury of composing music in their studios. I feel deeply indebted to IMEB for all of the support that they have provided for my work in the last ten years.

In many of my compositions, I have been intrigued by transformations of surface gestures—those materials becoming so dense that they lose their distinct features and blur into some sort of background texture. My most recent work, “objet sonore/objet cinétique,” explores the boundaries between kinetic foreground gestures and static background materials. The work is at times frenetic and chaotic, with the surface marked by a number of fleeting gestures and contrapuntal relationships, while at other times there is very little sense of forward motion.

It incorporates a number of samples from lots of sound sources—in that sense I suppose that it is a bit more like my composition “Scatter” than some of my other recent works. I also wanted to provide a contrast with my most recent tape work “Just After the Rain,” a more contemplative work and one that flirts with notions of soundscape composition.
Also, “objet sonore/objet cinétique” is a stereo tape work. I have composed several works for 8-channel tape recently but wanted to get back to a stereo work—sort of like the string quartet of electroacoustic music.

Asymmetry: It sounds as if you consciously try to provide a sense of motion in your electroacoustic music. Is that so, and is that related to the whole business of foreground and background?

Nelson: I do think quite a bit about directionality; in an odd sort of way I may be somewhat traditional or old-fashioned in this regard. But I feel that the better I understand how I process, experience, and enjoy music, the better chance I will have of communicating with others more effectively. I consider much of my music to be gestural, that is, I think of my musical materials as being sound objects/gestures/textures distributed throughout varying levels of a listener’s attention—ranging from immediate and present surface materials to textures or drones. In structuring a composition I try to provide some sense of direction and formal location by crafting relationships through similar/divergent structures, referential associations (harmonic/timbral/registral/gestural), recurring patterns, and transformative processes. Of course, this whole topic is quite unwieldy and I am already perhaps beginning to sound a bit too academic! I would say that I am constantly thinking about structure and all of the notions of motion and stasis implied by structure. I think that all composers think about this on some level, even if we try not to overtly think about motion and directionality. Composing for me is a matter of trying to articulate the unfolding of musical time. My understanding of motion and direction may not resonate with how others understand my music, but it provides me with a mode of thinking and working while immersed in the creative process. And I do hope that listeners can hear motion and structure in my compositions.

Asymmetry: All that makes me curious about how you listen to music. We think of composers as people who produce music, but composers must first be listeners. First and last, I suppose.

Nelson: Well, in a general way, I think I understand music as many do, by grouping materials together if they are similar and parsing them if they’re divergent. By “similar” I mean a consistent use of texture, rhythm, harmony, timbre, register, or even rate of change. On one extreme, a minimalistic work severely limits the musical materials and uses constant repetition to minimize the surface changes. In this context, minuscule changes in the patterns and contrapuntal relationships or the resultant phasing relationships can create a strong sense of motion from one section to another. In contrast is John Zorn’s cartoon music, with whiplash changes in style and constant interruption of ideas. If we hear snippets of divergent styles and genres that last only several seconds, never allowed to reach some level of repose, then we are likely to understand a longer passage with a more continuous musical statement as a large formal or structural change.

I suspect that I enjoy music when I have some sense of expectation and can either experience the satisfaction of having these expectations fulfilled or be pleasantly surprised when the music takes another path. I also revel in an elegant or exquisite transformation of materials and the demonstration of expert technique, but I also love hearing any new sounds or juxtapositions/combinations of sonic materials.

These notions about formal structure and how I understand or process musical ideas are not the easiest to articulate and I could perhaps drone on for a long time about the topic (my students might attest to this as well). Nonetheless, in terms of my own preferences in music, I tend to enjoy very complex and dense music whether this is classical or popular, acoustic or electroacoustic. I enjoy a wide variety of composers ranging from Bach and Brahms to Carter and Donatoni or Erik Mikael Karlsson and Paul Koonce. Although I am quite partial to contemporary classical electroacoustic and acoustic music, I also enjoy progressive rock and jazz fusion (I have always been drawn to complex rhythms and synthesizers).

Asymmetry: Much of what you’ve said implies that listeners do a fair bit of structuring themselves, perhaps even to the extent of perceiving relationships and values that the composer may not be aware of. That might mean that composers don’t have to worry so much about providing structure—or perhaps it’s that they structure simply because they are also listeners?

Nelson: I think that you are definitely on to something with your thinking about composers imposing structure because they are listeners. While this is certainly not the only motivation for creating structure, it does inform the way we think about music and the act of creating it. Beyond that, creating structure has a great deal to do with our desire to make something that’s interesting or beautiful, a desire tied in with lots of factors, including our aesthetic biases, our musical training, the sense of exploration we bring to our creative endeavors, our cultural and sociological experiences, and almost any other seemingly extraneous factor that influences us when we are immersed in the act of composition.

Your observation that listeners will often hear something in a composition that the composer did not realize was there suggests that composers may be creating relationships at a subconscious level, which is not surprising given the complexity of this particular art form and the variety of influences I just mentioned. It is particularly interesting to think that we may be creating relationships on a subconscious level, since a lot of us tend to be control freaks when we are composing (and I include myself). It is also possible that we might not be creating any relationships whatsoever on a subconscious level, but that the listeners simply bring their own unique experiences to the table and create these relationships solely for themselves. The truth, if there is such a thing, possibly lies somewhere between these two extremes.

I wonder if talking with others about our understanding of music (or publishing music analysis papers) is simply a way to promote and propagate our own understanding of musical relationships and to encourage others to hear music in the same way. There’s nothing wrong with that—with academic study of music, generally—so long as we are careful to leave room for many possible interpretations of music, as each unique approach can show us something new and interesting.

I know that I have always been drawn to music—listening incessantly as a child and wanting to talk about any music or interesting sounds that would grab my attention. Other people often seem oblivious to the sounds around them. Becoming a composer didn’t really seem like a career choice as much as a necessity. Moreover, I have always been interested in finding unusual sounds and my musical tastes have never been terribly traditional. My response to some of the first contemporary classical and electroacoustic music compositions I heard as a teenager was “wow, that is cool, how did they do that?” rather than “what is that weird stuff?”

Asymmetry: It seems to me that electronics have changed the way all of us listen to music, not just the sounds themselves, but the way they’re put together changes how we listen to acoustic music. Certainly electronic musics have influenced how composers treat instrumental sounds, even to the extent of someone like Helmut Lachenmann describing himself as an electroacoustic composer for traditional instruments. As someone who has written (who writes) for both traditional instruments and for electronic media, do you find that your musical thinking changes from one sort to the other?

Nelson: It is interesting that you note how Lachenmann feels he is approaching acoustic music as an electronic composer. I have had similar discussions in recent years with Mario Davidovsky, who has felt that some of his recent acoustic works are the most electronic compositions that he has created. Although my work in computer music definitely influences my acoustic writing, there are other factors that influence it as well: the physical constraints of the instruments, the desires of the performers or commissioning organization, and the realities of limited rehearsal time. A work for a solo performer who specializes in contemporary performance practice can be quite profoundly influenced by an electronic compositional approach. A commission from a community choral society to celebrate their 25th anniversary may be less so. I am not sure that it is really possible to make a clean division between electronic and acoustic thinking, since the compositional process is going to be some cumulative amalgamation that includes every possible way of thinking compositionally, so I am hesitant to make such distinctions between them. Perhaps it is better to think of composing computer music as simply composing for a different instrument that has its own unique possibilities and limitations, possibilities I am particularly drawn to these days. Similarly, composing “tape plus” works provides other unique possibilities and limitations, just as composing for a large ensemble does.

I still actively compose works for acoustic instruments, with or without electronics. Most recently I completed a work for clarinet and interactive electronics (MAX/MSP) for Gerry Errante as well as several other acoustic works. My acoustic work is driven primarily by artists requesting new works. I am actually woefully behind on two new works that some fantastic performers have requested, but I hope to be able to work on these in the upcoming months—a new work for guitar and interactive electronics and another for flute and interactive electronics.

My acoustic and electroacoustic writing are intertwined, and the processes are quite similar. If you had asked me 20 years ago, I would have had to admit that there was quite a difference in my approach to composing in these different genres, but these days the difference is less pronounced. My tastes have never been traditional, but much of my training has been, so I initially felt most comfortable in the acoustic domain despite being constantly drawn to electroacoustic music. As a result, most of my earlier compositions with electronics included acoustic instruments. In many of these works I was interested in exploiting the tape as a means of extending the instrument, often seeing how much I could manipulate and stretch the boundaries of sound with a sonic palette limited to processed instrument samples. Although I enjoy writing for acoustic instruments (either with or without an electronic component) and have enjoyed working in an interactive environment, I find the rich possibilities of fixed media to be the most engaging and rewarding at the moment. Having said this, one of the great pleasures of being a composer is the constant push to stretch oneself, to explore new possibilities, and to try new modes of communication.

Asymmetry: Could you tell us a little about Csound, the programming language you seem to favor? It’s been clear throughout this exchange that you love sound, for its own sake as much as for its possibilities for manipulation. Is that why you use Csound, or is that just for technical reasons?

Nelson: Csound is a digital audio programming language that is one of the direct descendants of Max Mathews’s Music I programming language. Since it is free, and since there is a very large network of people developing opcodes, the language is quite comprehensive. The language is also extremely flexible and can be used for all sorts of synthesis paradigms (linear, non-linear, granular, waveguide, wave terrain, physical modeling, etc.) as well as for digital signal processing. It also has some nice analysis/resynthesis possibilities that you don’t find in most other audio programming languages. Partly as a result of its flexibility and partly due to its lineage (the language is text-based and doesn’t have all sorts of nice objects that are easy to connect visually), many find it to be a bit daunting, especially when they are first learning the language. However, it tends to be the programming language that I use to do things that I simply cannot do in any other program. I suppose that I am also fond of it since this is the first audio programming language that I learned. I had the great fortune to participate in the last MIT summer Csound workshop that Barry Vercoe taught in 1984. Back then, we had to take our turns on one of four terminals hooked up to a PDP-11/15 minicomputer to run our jobs. The language was then Music 11 (written in assembly language for the PDP-11 series of computers) and evolved into Csound when Barry Vercoe ported it over to C code in the late 1980s. At around this time, personal computers were beginning to be built with enough RAM and sufficient speed to be able to run a language like Csound, and Barry decided to make the code freely available to anyone who wanted to use it. Around the mid-1990s faster computer speeds made the language even more appealing. Although the language was initially developed without any intention (or optimization) for real-time use, there are some people who now use it as a real-time performance platform. However, I find MAX/MSP or SuperCollider to be much better suited for real-time use. Nonetheless, I do often use Csound when I need to do something that is too complex to do in real time or simply impossible to do with another language.
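(For readers who have never seen the language, here is a minimal sketch of what a text-based Csound orchestra and score look like. It is not drawn from Nelson’s own work: the opcodes used (linseg, oscili, outs) and the GEN10 function-table routine are standard Csound, while the instrument design, envelope shape, frequencies, and durations are arbitrary illustration.)

<CsoundSynthesizer>
<CsInstruments>
; minimal orchestra: one instrument assembled from a few opcodes
sr     = 44100   ; sample rate
ksmps  = 32      ; samples per control block
nchnls = 2       ; stereo output
0dbfs  = 1       ; full-scale amplitude = 1.0

instr 1
  kenv linseg 0, p3*0.1, p4, p3*0.8, p4, p3*0.1, 0  ; attack/sustain/release envelope
  asig oscili kenv, p5, 1                           ; table-lookup oscillator reading f-table 1
  outs asig, asig                                   ; send the signal to both channels
endin
</CsInstruments>
<CsScore>
f 1 0 16384 10 1 0.5 0.3 0.2  ; GEN10 builds a waveform from harmonic strengths
i 1 0 3 0.4 220               ; instr  start  dur  amp  freq
i 1 2 4 0.3 330
e
</CsScore>
</CsoundSynthesizer>

Each i statement in the score passes its p-fields (start time, duration, amplitude, frequency) to the instrument defined in the orchestra, a division of labor inherited from Music 11 and the other Music-N languages.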

When I first began composing computer music, I used Csound exclusively. I suppose that this was the case up through the early 1990s. When Csound first became available on the Macintosh, the sound file analysis subroutines were not available. This was actually part of the impetus for Tom Erbe to develop SoundHack. I forget when MSP was developed to accompany Miller Puckette’s MAX software, but I began doing some work with MAX/MSP when it became available. Still, I ended up using Csound more than either of these platforms through the 1990s, because I was frustrated by the limitations of CPU speed when creating the more complex audio files that continue to capture my interest. As CPU speeds get faster, it is possible to do more synthesis/resynthesis/DSP work in real time. As a result, I have developed a number of MAX/MSP patches that I use when I don’t need the vaster non-real-time possibilities of Csound. I also like to use programs like SoundHack and a variety of plug-ins with Pro Tools or Digital Performer. Consequently, I find myself using Csound less as time goes on. However, I still do work with it when I need to do something that exceeds the CPU limitations or programming possibilities offered by other languages.

Software like Csound tends to be very unforgiving. It forces you to think very clearly and logically about exactly what you are doing when working with digital audio. I have found this to be helpful for me and also for students who are studying computer music. Although Csound (and other music software) has shaped how I think about music, I suspect that it has not had a great impact on the way I think about musical form and structure. Dealing with the limitations that any programming language presents in creating sound forces us to come up with creative ways to work within those parameters and still make something musical. In a way, it is a bit like dealing with the timbral production or range limitations of a particular acoustic instrument. However, with computer music I think that the possibilities are more expansive than those of any particular acoustic instrument. This gives working in computer music a sense of sonic exploration that makes the endeavor such a rewarding adventure. One could perhaps argue that writing for a large ensemble such as an orchestra also provides an expansive range of sonic possibilities. However, limited rehearsal time and a great reluctance on the part of most orchestras to do anything remotely adventurous give me great pause before taking on the effort to compose an orchestral work. Although computer music requires similar (or perhaps even greater) effort, there tend to be many more performance opportunities, and it is possible to hear the work even while it is in progress.

The technological advances that we have seen in our lifetimes create a wonderful atmosphere for sound exploration. For those of us who are easily intoxicated with sound, computer music provides fantastic possibilities. I am quite content when I can spend hours extrapolating microscopic aspects of various sampled sounds to create something new that has never been heard before. I also love the mental stimulation that sitting down to solve an audio programming problem presents. Of course, there are times when it is impossible to create a sound you are imagining or to solve a thorny programming problem. Nonetheless, I cannot imagine many other endeavors that can be so rewarding—at least given the way my ears and brain work, I cannot imagine anything more gratifying.

Asymmetry: One last question, if I may. What question has no one ever asked that you wish someone would ask?

Nelson: I guess it could be very interesting if someone were to ask about the role that rhythmic structure plays in my music. In a way, this ties in with our previous discussion about notions of structure in general. From my perspective, the temporal aspects of our art form play a critical role. Although a great deal of analytical attention has been paid to the role that pitch (melody/harmony/timbre) plays in music, there has been very little work done with rhythm. Moreover, in electroacoustic music temporal relationships are critical in creating a sense of animation. In addition, through electronic means it is possible to create a broader spectrum of rhythmic possibilities, ranging from isolated sound objects to kinetic gestures to dense granular textures and any sort of composite rhythmic structure that combines all of these elements (not to mention the added dimension that sound spatialization can bring to our discernment of rhythmic structures).

As I mentioned previously, I am very drawn to complex sounds, and this is also true of my preferences for rhythmic structure. I have found Elliott Carter, György Ligeti, and Helmut Lachenmann influential as acoustic composers whose use of rhythm I find captivating. On the electroacoustic side of things, I am very fond of the ways in which Åke Parmerud, Erik Mikael Karlsson, and Gilles Gobeil have worked with the temporal aspects of electroacoustic music. One of the concerns that perennially plagues electroacoustic composers is the difficulty of creating a static, fixed-media work that contains musical energy and a vibrant sense of physical kineticism. I am convinced that a composer’s sense of musical timing and technical facility in working with rhythm are critical for this endeavor.

———-

Jon is a professor of music composition and computer music at the University of North Texas in Denton. A catalogue of his recent works has been published by the American Composers Alliance, New York, New York.

The following partial discography includes the pieces reviewed in this issue of Asymmetry, as well as releases of those pieces on other labels and other pieces not available to me in April 2007. Some day soon I hope to have those pieces, too, and will add their reviews to the Nelson Reviews page. (The SEAMUS CDs are available directly from SEAMUS.)

Following the discography is a short list of some other stuff: an interview, some reviews Jon wrote, and a few of his essays.

Discography:

L’Horloge imaginaire, “Music from SEAMUS 13” (EAM 2004).

L’Horloge imaginaire, “Chrysopée Electronique 22,” Mnémosyne Musique Média Bourges Compendium International 2002 Bourges (LDC 278 11 25).

Dhoormages, American Composers Forum “Sonic Circuits X” (Innova 119).

Scatter, “Cultures électroniques no. 16,” Mnémosyne Musique Média Bourges 2002 Prix Quadrivium (LDC 278 076/77).

Scatter, American Composers Forum “Sonic Circuits IX” (Innova 118).

Scatter, Society for Electro-Acoustic Music in the United States “Music from SEAMUS 10” (EAM 2001).

Other Terrains, Society for Electro-Acoustic Music in the United States “Music from SEAMUS 9” (EAM 2000).

the rain has a slap and a curve, Centaur Consortium for the Distribution of Computer Music (CDCM), Volume 27, “CEMIsonics: The Threshold of Sound,” CRC 2407.

A Chris Mann Mambo, The Frog Peak Collaborations Project (FP007).

They Wash Their Ambassadors in Citrus and Fennel, with Heidi Dietrich Klein, Society for Electro-Acoustic Music in the United States “Music from SEAMUS 7” (EAM 9801).

They Wash Their Ambassadors in Citrus and Fennel, with Joan La Barbara, “Cultures électroniques no. 9,” Mnémosyne Musique Média Bourges 1996 Prix Quadrivium (LDC 278 060/61).

Six études brèves, with Rhonda Rider, Society for Electro-Acoustic Music in the United States “Music from SEAMUS 4” (EAM-9501).

Waves of Refraction, with William Buonocore, NEUMA “Electro-Acoustic Music III” (NEUMA 450-87).

———–

Other stuff:

Composer interviews: Jon Nelson in International Computer Music Association’s ejournal Array, http://www.computermusic.org/array.php?artid=97, 11-3-2002.

Review of Annette Vande Gorne’s “Impalpables,” in Array: Communications of the ICMA, Vol. 19, No. 3, Winter 1999.

Review of David Tudor’s “Three Works for Live Electronics,” in Array: Communications of the ICMA, Vol. 17, No. 2, Summer 1997, pp. 16-17.

“Understanding and Using Csound’s GEN Routines.” In The Csound Book, Richard Boulanger, ed. Cambridge, Massachusetts: MIT Press, 2000.

“Composing With Csound: Granular Strategies” on the CD-ROM accompanying The Csound Book, Richard Boulanger, ed. Cambridge, Massachusetts: MIT Press, 2000.

GrainMaker 2.0, a Csound soundfile granulation score generator, on the CD-ROM accompanying The Csound Book, Richard Boulanger, ed. Cambridge, Massachusetts: MIT Press, 2000.

GrainMaker 2.0, a Csound soundfile granulation score generator, available via ftp from
ftp://ftp.ircam.fr/pub/forumnet/max/FAT/applications/
