3/40 On the future of algorithmic music

I was at a friend’s house some time ago and he showed me his iTunes library. Proudly he announced that there were nearly 15,000 tunes to be found there. I couldn’t help but be simultaneously impressed by the sheer commitment of his collection impulse and dismayed at the prospect of owning so much music with a very slim chance that it would ever be heard in its entirety.

I wondered then about “what is music for?” It’s a big question, and I don’t think it can be answered easily, but I want to try to address some aspects.

In the pre-recording era, music must have served a variety of purposes: as “furniture” or “wallpaper”, for personal reflection, as an expression of cultural values, as an expression of wealth and power, as sheer entertainment, etc. But one thing would have united all of these purposes and settings: the music was by and large impermanent. That is, it would have been to some degree improvisational. This changed most drastically with the advent in the West of a precisely written musical notation. Though the tradition of an improvised dimension in music was preserved for a while in Western classical music (the cadenza, for example), it was gradually “evolved out”, and does not exist in a meaningful way in that genre today (though it is making a comeback in some circles!). Of course, there are other Western traditions where improvisation is still an important part of the art form: jazz, rock, blues, etc.

However, all musics today are subject to that great servant and destroyer of music: recording. The focus of music has shifted – the moment of creation has become less important than the preservation of that moment. The cost is a loss of the spontaneous and physical co-experience of that inspired slice of time between performer and listener. The benefit is the ability to summon, at will, an unliving but perfect recollection of that moment.

I’m reminded of a time when I went to see Nusrat Fateh Ali Khan in concert. I had heard a lot of recordings of this great qawwali singer, but it was nothing like seeing and hearing the magic in the making. I saw sparks coming from his fingertips. This was never captured in any recording.

But, the high tech world of digital recording is only one aspect of modern music making. The technically minded have also made some steps into algorithmically created and generative music. With live music as an affirming principle and recorded music as a denying principle, could algorithmic music represent a kind of reconciling force? And, if so, how does this impact the idea of being a human performer?

I wrote in an earlier blog post about “AGNIS” (Algorithmic Guitar Noospheric Interactive System). I described it as a system capable of listening to its surroundings and interacting musically in real time. It is a very sophisticated, flawed, and ever changing system partly pre-programmed and partly in “real time process”. The joke is, it’s me: I am AGNIS.

However, in the future I think we will see algorithmic systems capable of producing music indistinguishable from a human performer. I say this with an eye towards recent developments in physical modelling and so-called “artificial” intelligence. We could have music that would, once again, be unrepeatable, inspired, fresh, always different, and always created “in the moment”.

I make the following prediction: the day will come when we will speak to our computer and say something like, “Louis Armstrong: he’s had a bad day, but he’s been listening to a lot of Chinese guqin music, it’s a warm day outside and the sun is coming out.” The system will take these parameters into account and generate the music in real time. What you will hear will be indistinguishable from the “real thing”.

Let’s have fun with another one: Rammstein have hired an 86-year-old Beethoven to create for them a one-hour apocalypt-opera. They have requested that the work include at least three arias sung by Björk, accompanied by Sufi Inayat Khan. The Russians did not leave Afghanistan, having been so impressed by the Buddhist history of that land that there has been a mass conversion in both countries to a more contemplative tradition and a large-scale cultural interchange between the two cultures. The concert is held in a large hall in the recently remodelled Kremlin, with wooden walls, where 80% of the audience is wearing wool clothes.

What would it sound like? While that processing capability may be far off, some intermediate steps are right around the corner. I see no reason why in the next five years we couldn’t see music with a pool of seven guitar solos, where the one you hear today is chosen by a random number, user voting, or other parameters of the user’s or creator’s choosing. (I myself have created projects approximating this, e.g. the NTTS remix of Stream of Consciousness with Besart Hysniu.)
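The selection mechanism described above is simple enough to sketch. Here is a minimal, hypothetical illustration in Python – the file names and the vote-counting scheme are my own assumptions, not anything from an actual project:

```python
import random

# Hypothetical pool of alternate takes; file names are illustrative only.
SOLO_POOL = [f"solo_{i}.wav" for i in range(1, 8)]  # seven guitar solos

def pick_solo(votes=None, seed=None):
    """Choose which solo plays today.

    If listener votes are available (a mapping of take name to vote count),
    play the favourite; otherwise fall back to a random pick. A seed may be
    supplied so a given 'day' reproducibly yields the same take.
    """
    if votes:
        return max(votes, key=votes.get)
    rng = random.Random(seed)
    return rng.choice(SOLO_POOL)

# Random selection: different listeners may hear different takes.
todays_take = pick_solo(seed=2024)

# Vote-driven selection: the community's favourite wins.
favourite = pick_solo(votes={"solo_3.wav": 12, "solo_5.wav": 41})
```

The same pattern extends naturally to the other parameters the post imagines – weather, mood, time of day – by swapping the vote lookup for any function that maps those inputs to a choice from the pool.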

Non-musicians can use their imagination to hear what such generative systems might be like. Musicians: activate your imagination and play something today – you can enter this world at will, and all you have to do is lift a finger.

Recommended reading: Brian Eno on generative music and “games for musicians”, Alan Watts on skill, and Jeff Hawkins’ “On Intelligence”.
