MUS171_Miller_Music_Science_Perspectives

Announcer: [0:28] ...and Max as well. Miller studied mathematics at MIT and Harvard, as an NSF and Putnam fellow, but became a computer music researcher at the MIT Media Lab, and then at IRCAM, where he wrote Max. He now teaches music at UCSD and works on the Pure Data environment, and on a variety of musical and research projects. Please welcome Miller Puckette. [applause]
Miller: [0:59] I talk to scientists a lot. I am actually a co-worker of Ricardo who spoke this morning at Calit2. And we are non-scientists working in a scientific environment, or in a scientific and engineering environment. Me, I'm not only not a scientist, but I'm not an artist, either.

[1:22] I'm not even really an engineer, in the sense that I don't actually ever specify anything - I just go out and do things. The thing that I've been going out and doing since about 1997 is called Pure Data. I'm not going to talk a whole lot about it, because I'm just going to be using it. And you might be making inferences about what's going on as I do so.

What I really want to talk about is a more general topic, about which I have a very bad understanding - or a very unclear understanding - which is: [1:42] What really are the issues that come up and are important, when you try to make a computer be usable in a musical context?

[2:04] So, the reason I'm thinking and talking about this is because, well, all right... The thing that's correlated with why I'm thinking and talking about this now, is that I've spent the last 25 years of my life trying to make musical instruments out of computers. That was the original goal behind writing Max/MSP, and was the goal behind writing Pd.

[2:31] It's kind of beside the point, or a nice side effect for me, if it can be useful not only musically, but for other kinds of art forms, or for visualization of scientific data, or for auralization of scientific data - even better. Or for all sorts of other things that I won't try to list, that people seem to turn this kind of environment to.

[3:03] What is it? I'm just telling you this so I can stop talking about it. If you like, Max and Pd are both programming environments for using computers as reactive real-time tools. Of course, the input and output that you use - over which you talk to the computer - is a medium. So, in fact, it is an interactive programming environment for messing with media in real time.

[3:30] Medium, I think, just means "between". The thing about it is that you're not supposed to know that you're programming when you use it. There's nothing shocking about that today. But in the milieu in which I started all this work, it was a shocking thought that a composer would actually program a computer to do something.

[4:11] Composers were artistes, and they dealt with ink on paper, and made beautiful scores. The boffins - the people on the other side of the aisle - were the ones who took the scores and turned them into beautiful sounds for the composers. This turns out to be a lousy way to make music, for a number of reasons. In fact, it's lousy for the same principal reason that it is extremely rare to find a composer who can't play a musical instrument at a professional or nearly professional level. A good composer, I mean.

[4:22] There are lots of bad composers who've never touched a musical instrument, but you won't find a really top one. You won't find a Liszt, or a Schubert, or something like that, who can't get around a musical instrument.

[4:47] But it sort of follows that - or doesn't follow; there's no logic here, and there's not going to be anything logical in this talk... It doesn't follow, but you might also think that a composer wouldn't be able to make good computer music, or electronic music, unless it was the composer him or herself whose fingers were actually really on and in the musical tool that was making the sounds, right?

[5:08] So what you're really looking at is a way of fooling composers into doing their own work, which is the spirit in which you have to do things in order to get the good music to come out - instead of just some old elevator music, or who knows what. Or some nice academic music, of the sort that they make out on the East Coast. [laughter]

Miller: [5:17] Now, don't get me started on Milton Babbitt, but... Now... [laughter]
Miller: [5:22] Hmm, wait... Milton Babbitt, by the way, is an excellent pianist. [laughter]
Miller: [5:45] So I didn't mean to say that he was either a bad composer or a bad musician. He's the person who invented the PhD in Composition, and the PhD in Composition - the idea that a composition is research - is evil. It's the reason that music in universities has split off from the real lifeblood of music in the United States.

[5:55] This didn't happen in Europe, because they didn't do the PhD in Composition. So, I'm sorry I did get off on it, but I've given you my opinion. Now I can stop that, and get back onto reality here.

[6:36] What I'd like to try to describe today... What I'd like to demonstrate, in some sense, is a series of attempts - none of which are really successful, and none of which will ever really be successful in any sort of final, closing sense - at possible approaches, perhaps even promising approaches, to trying to make a box like this into something that you really could operate in a musical way.

[7:03] So musical instruments are things that you pick up, and you do things, and out comes sound. If you do the thing harder, as a general rule, the sound comes out different from if you do it softer. Your ears know how to - or your brain knows how to - sense the amount and the type of effort that you put into operating the violin, or the snow shovel, or whatever you're playing, and the sound that the thing emits when you're actually doing it.

[7:34] Computers don't work this way - naturally. The reason for that is that the actual energy that's going into the computer is coming through the power cord, being generated down in San Onofre or someplace like that, and it's turning to heat here. Most of it is just blowing out the fan as air, but a little tiny bit of it, like a hundredth of a watt, is going down the nice audio cable, where another whole huge influx of electricity turns it into physical motion of speaker cones, which can be music for us.

[7:45] But none of this actually involves anything that I, as a computer operator, am making the thing do in any sort of physically real sense to actually make the sound.

[8:09] And this separates the computer from almost any other instrument known to humankind. The only other example I can think of that is this weird is the pipe organ. I'm probably not thinking of something else important, but pipe organs are that same way. Pipe organs are actually closer to being a proper musical instrument than a computer is, at least in its ordinary state.

[8:43] What is a computer, really? Well, no one ever designed a computer as a musical instrument, right? It was designed as a thing to do - well, we all know the first thing was missile trajectories. And then pretty soon people figured out that you could do banking on them. And so IBM was this big company in the fifties and sixties that made these monster machines that were bought by research laboratories, some of them good and some of them evil, and then banks, and then other stuff like that.

[9:08] And the way you use a computer is you go into your office and you sit down at your chair and you start doing office work. For that phrase, by the way, I'm indebted to David Zicarelli, who wrote a paper that I've thought about a long time afterward, entitled something like - well, I'm sorry, I don't remember the title of the paper. But you know, when you're at a computer, you're doing office work.

[9:33] If you watch musicians as they live their lives, the pianist kind of musician spends a lot of time at the piano making music. The electronic musician doesn't spend a whole lot of time actually making music come out of the computer. Notice, by the way, I'm not making music come out of the computer right now, right? I'm doing office work. Well, I'm not even doing that right now. I'm not sure what I'm doing.

[10:00] But when I'm sitting down, when I made this patch, there wasn't sound coming out while I was doing the real work. You do the real work, then you stop, and the sound comes out. But when you're doing the work, you're basically sitting in an office typing, right? And that's an interaction that was designed for financial transactions. It was not designed for musical transactions, if I can make that phrase up.

[10:40] And in what sense is it ill-equipped to do that? Well, one difference between music and banking is that with banking, plus or minus a day or so, it doesn't matter when something happens, right? You deposit a check and then you're going to use the money the next day, and that's about the end of it. You don't deposit three checks and have them make a [snaps rhythm] rhythm, right? That doesn't happen, right? But rhythm is one of those things without which you couldn't have music.

[11:06] Why is rhythm important? Well, let's not get too deeply into that. Sound is a thing which doesn't spatialize as well as light does, but it's a thing that timeifies, temporalizes. I don't know what the verb would be, but it's a thing in which we can distinguish differences in time that are fantastically subtle.

[11:43] For instance, if you ask a pianist to play a scale - just play up a scale, but play it in fours, and then play it again in threes, just yaga-daga-daga-daga-daga-daga or yagada-yagada-yagada-yeah [vocalizes to demonstrate rhythm patterns] like that, but exactly evenly - and then you look at what happened, you will find that when the musician was thinking in threes, the timing will be grouped in threes instead of in fours, and the differences will be on the order of milliseconds. Furthermore, you can play a tape of this, and another musician, or even a good listener, will be able to hear the difference.

[12:07] This is something you can't imagine doing with your eyes. Your eyes can be fooled by, for instance, presenting 24 images a second as film does - if I've got the right frame rate - or 30 frames a second, to be generous. And it looks like continuous motion. Gives you a headache, but looks like continuous motion.

[12:33] So your eyes are at least about 10 times stupider about time than your ears are, although they are also, I think, about 100 times smarter. Smarter/stupider is not the right word: more acute in spatial resolution, so you can read something like the letter A up there on the screen, even though the angular size of that thing, from where you are sitting, is far less than a single degree of arc.

[12:48] But if that thing were making sound, if one side of one of those letters made a peep and I asked you which side of the letter it was, or if I made the whole letter say "peep" like that, would you be able to tell the difference between an A and an F? I don't think so.

[13:32] What that should lead you to think, perhaps, is that to make a computer be an adequate musical instrument, it has to be made to be exceedingly reactive in terms of time, because that's where the information is. And it's even worse than that, because of course you can write down a sequence of times to the nearest microsecond and say "do this, then do this, then do this" and so on, but humans for some reason can't do this.

[14:02] They can't write down the subtleties of musical phrasing: how things should sit in time, why a thing sounds the way it does, why music says what it says, when all it takes is just an oompah that's a different thing from another oompah with seemingly exactly the same time values.

[14:25] There's some weird magical channel of communication in music that no one really understands, right? It's not even a single channel. You write it down on the piece of paper and it's lost, but a good musician will take the piece of paper and work over it over a period of time, learn it, figure out what the meaning really is behind that mnemonic, which is the score - nothing but a mnemonic - and out comes the music again.

[15:02] But you can't actually write down the instructions that you would have to give a computer, as a musical instrument for instance, to turn that into sound with believable and meaningful musical phrasing. The only way anyone has been able to figure out is to have someone play it. And so you have to somehow take the thing which was the computer - which was this banking and missile-trajectory-calculating machine - and turn it into something that you can walk up to and play.

[15:30] And if you don't do that, what comes out might be nice sequences, but it is not likely to touch anything like the full range of musical expression that humans are capable of, and that computers might help - or might hinder - them from reaching. I am telling you all the terrible things about computer music. There are good things about computer music too, and I think they are so obvious that they don't need mentioning.

[16:00] But it was clear to me even as a junior high school student, when I learned about Fourier analysis from my father, who is a mathematician. It was clear that, OK, you can take sinusoids and build them up, and you can make any waveform at all. Waveforms are timbres, and you can make any timbre that you could possibly want out of the computer.

[16:29] In BASIC, I started calculating things and making teletypes draw little waveforms, asterisks that go up and down the page, and started synthesizing waveforms, even knowing that I could never hear them; that was the best I could do. The good side of the situation is that a computer can make any sound that you can possibly describe come out of the speaker. That is the limitation. That is the cool part.
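A minimal sketch of that teletype experiment, in Python rather than BASIC - the odd-harmonic recipe here, a crude square-wave approximation, is an invented example:

```python
import math

SAMPLES = 48   # one cycle of the waveform, printed down the page
WIDTH = 60     # teletype column width

# Fourier synthesis: sum a few sinusoidal partials, given as (harmonic, amplitude).
partials = [(1, 1.0), (3, 1.0 / 3), (5, 1.0 / 5), (7, 1.0 / 7)]

for n in range(SAMPLES):
    phase = 2 * math.pi * n / SAMPLES
    x = sum(amp * math.sin(k * phase) for k, amp in partials)
    # Map roughly [-1.2, 1.2] to a printing column, one asterisk per line.
    col = max(0, min(WIDTH - 1, int((x + 1.2) / 2.4 * (WIDTH - 1))))
    print(" " * col + "*")
```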

[17:20] The bad part, of course, is that you can't play it; you can only order it what to do - send in its punch cards and get the result back in the mail or something. That was the central challenge that led me to make Max/MSP and now to make Pd. What I want to do is show you what the issues are. I am still of the opinion that computers are lousy musical instruments. It is not because a whole lot of people haven't tried very hard to make software that will turn computers into things that respond in a very musical way to the gestures that humans make. But we are still a long way from being there.

[17:46] Furthermore, we are at a period of time right now, historically, where over the last several years - the last 10 years, say - that situation hasn't really been changing in any substantive, fundamental way. Right now, we are on a weird plateau: we've figured out a certain area of the problem, and that is OK. But the rest of the problem is still something where maybe someone will think of something tomorrow and everything will work out.

Maybe there is a hundred years of research still in our future that we can't see yet. That still stands between us and the sort of full realization of computer music, where it can do anything which you could possibly [inaudible 18:00] and make it sound musical, at least up to the musicality of the person using it. [18:19] I want to try to show you what I think some of the openings might be, and none of this is going to be more than openings, for a variety of reasons.

[19:06] The first thing I want to tell you is why Pure Data is called Pure Data. I got the idea in 1997, when the original Max/MSP sort of thing was what it was. We had a very good way of describing process - process meaning you have an analog-to-digital converter, and its output goes to a multiplier, and it is multiplied by an oscillator, and that goes to the digital-to-analog converters. Something like a patchable synthesizer, I'd say. You could take that thing and make it run in real time, and furthermore you could make it respond in real time to requests to change the frequency of the oscillator, or the amplitude of the aggregate output, or things like that.
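A sketch of that signal chain in Python (not Pd itself; the class and names are invented for illustration): input blocks are multiplied by an oscillator, and a control message can retune the oscillator between blocks.

```python
import numpy as np

SR, BLOCK = 44100, 64  # sample rate and block size, as in Pd's defaults

class PatchableChain:
    """adc~ -> *~ (driven by osc~) -> dac~, processed block by block."""
    def __init__(self, freq=440.0):
        self.freq = freq
        self.sample_index = 0

    def set_freq(self, freq):
        # The real-time "request" path: change the frequency between blocks.
        self.freq = freq

    def process(self, in_block):
        t = (self.sample_index + np.arange(BLOCK)) / SR
        osc = np.cos(2 * np.pi * self.freq * t)
        self.sample_index += BLOCK
        return in_block * osc  # this block would go on to the D/A converters

chain = PatchableChain(440.0)
out1 = chain.process(np.ones(BLOCK))  # one block of "input"
chain.set_freq(220.0)                 # a control message arriving mid-stream
out2 = chain.process(np.ones(BLOCK))
```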

The reactive aspect of getting the computer there was solved at one level, while at another level it clearly was not, because you still had to talk to it through weird things like MIDI ports or keyboards or things like that, and that seemed insufficient. But another general thing is that there is a whole other way musicians think about music, which is the storage aspect - the thing I was denigrating earlier, the business about the ink on the piece of paper. It is not as if just being able to play an instrument is adequate for being able to be a musician [inaudible 19:46].

The ability to write things down is also crucial [inaudible 19:54]. [20:28] It is enabling; there is music that doesn't require it. But the ability to have a way of writing things down multiplies your possibilities, like stepping into a much larger room. Right now, I think, by and large, what we do with computers to make music separates into two things. One is the real-time interactive thing, which Max/MSP and Pd do pretty well.

[21:02] And the other thing is the score thing, and to do that you get off into Sibelius or Finale or Excel. People do all kinds of stuff that way, and there you are talking about documents. You are doing document preparation; you are doing office work. What would it take to make those two things somehow manage to cohabit a single kind of computer environment, a single kind of situation the user would be presented with? This is the reason why I named Pure Data what I named it.

[21:36] Besides slipping in a couple of jokes and digs, the intent was to go back and think not about the part that seemed interesting in the early 90's - or late 80's, in fact - which was getting the reactivity to work, but about getting the data to work: making it possible to have a way of describing data that was so flexible that you could express any kind of musical expression in it, a place where you could sort of store anything.

[21:48] My answer to that - well, it is not a complete answer. I'm not going to claim to solve this problem at all; I'm just claiming to regard this as a problem that I'm interested in working on.

[22:01] Sorry, I've forgotten where I put the window I wanted you to see.

[22:38] This is me recreating a nice piece of music by Charles Dodge. Charles Dodge wrote a piece called "Earth's Magnetic Field", which is a bit of evidence for me that people have thought about scientific data and computer music in the same thought for many years. That would have come from the mid-sixties, I think. It is a fascinating thing to do, and wonderful. But this is a pure piece of East Coast, academic, but beautiful music.

This is a thing called "Speech Songs", and I actually like Speech Songs so much that I decided to study it to the point that I could transcribe it and redo it. This is not actually Charles Dodge's Speech Songs; this is one I transcribed off the tape, got all the times to the millisecond, and put back on a score. In doing so, my intention was to make the language in which the score is expressed be as close as possible to the musical space that [inaudible 23:17] Charles Dodge was working in. I don't actually have Charles Dodge's version of it cued up, and I think only one of you in the audience is going to know this piece by heart the way I do. [music] "A man... Man sitting in a cafeteria, and one enormous ear and one tiny one which was fake..."

[music ends]

Miller: [25:00] Isn't that beautiful? That's not me. That's Charles Dodge.

[25:16] The point behind that was that this is a very simple example, but it is already an example of something that would kind of defy Sibelius.

Sibelius is a commonly used music notation program. It defies it just for the stupid reason that Sibelius does not know how to attach "temporal" elements to things it regards as notes. Now, this would be a lousy thing to write Beethoven's 5th Symphony in - you can probably tell right off. You would just be miserable, because it doesn't know about sharps and flats, and doesn't know about measures or anything reasonable like that, [inaudible 25:47] or a French horn or a piccolo.

[26:14] What it is good at is not pushing you into any corners at all, in the sense that the only thing you see is something that is absolutely essential to what that piece of music is. There is nothing inessential.

[26:43] What it is, in effect, is a private notation - not Charles Dodge's notation; it is my private notation of Charles Dodge's piece. It's a transcription. The notation was made in a way that has perfect correspondence to what I was able to identify as the musical meat of that piece of music. It would be different from the way I would transcribe any other piece in the repertory, or anything that I would have to do personally.

[26:47] Any questions about this?

Student: [27:02] What did you do the first time you listened to the music?
Miller: [27:05] What did I do? Is that it?

[27:44] The poem is by Mark Strand. The first thing I did was figure out what the poem was. This one is 90 seconds long. I got into a sound editor and I looked at the beginning and end of every single sound in the whole piece. Then I measured them out in milliseconds, and by ear figured out what parts of the original poem corresponded to what note of the piece.

[28:23] For instance: "Which was fake"... There is a very strident way that syllables and consonants are treated in the piece. So it was clear that when I did it, I actually had to find a way to locate where consonants started and stopped and get separate control of the timing of those events. And so I just did that with my ear and a sound editor until I was satisfied that I knew it. So it's not such a huge feat, if you were trying to do this.

[28:44] I didn't do it in real time. I did it in a couple of hours, and there's one mistake. All the pitches are tempered, regular old Western 12-tone pitches, except one that's off; that was probably a mistranscription.

[29:13] Yeah, so, why do a piece like this instead of doing a new piece of my own music? That's the scientific method: to keep yourself honest, rather than adjust the problem, consciously or unconsciously, to whatever solution you have. You have to really do something to force yourself to fit into a situation that someone else has externally prescribed for you. I think, anyway.

[29:35] That might be more an opinion than a fact. All right so there's that and then the next thing would be, well another example of the same thing anyway is going to be this. And my apologies if any of you have already gotten tired of this by yourselves.

[30:09] This is one of many possible representations of a sound that develops over time. And let me just get this thing cooking, and then I'll see if I can make this thing operate. So first off we'll say something. "Spaghetti and meatballs." All right, a nice two-second sample or something like that. And then what I'm going to ask the machine to do is, if I can make this happen.

[30:46] Oh, first off, are we listening to it? No, we're not running. Why not? Oh, let's not worry about it. Yeah, that looks good. See, there's the letter S up there, and now I'm making tonal sounds. Oooh, we got a couple of glitches up there, that's nice. Sound is not as easy to work with as image, in some ways. By the way, you're going to see much, much better than this when Curtis Roads gets up here and starts talking, because he's got tools that are much cooler than this. This is a classical tool.

[31:20] Now, what's happening is, each one of these traces - you can't see the amplitudes, because I didn't know a good graphical way to represent that without making everything get totally messy - but what you see is the pitches of all of the sinusoidal partials that you would have to give a bank of sinusoidal oscillators, say, in order to utter the phrase "spaghetti and meatballs" the way I just uttered it. Let's see if we can actually hear this. Resynthesis. This is kind of a horrible patch to use.

Computer voice: [31:23] Spaghetti and meatballs.
Miller: [31:25] There it is. All right.
Computer voice: [31:26] Spaghetti and meatballs.
Miller: [31:28] You can hear a lot wrong with that sound, right?
Computer voice: [31:29] Spaghetti and meatballs.
Miller: [31:47] But there's something crucially right about it, which is that it has the same general musical arc as what I put in. Now, the cool thing about this is that this is the same software as the one I showed you previously, except that it has been customized in an entirely different way.

[32:21] So now, rather than having things that are notes, where you choose a syllable underneath, which goes and looks at a bank of analyzed sounds - here there is only one analyzed sound, and you're looking at the analysis yourself. Right. And that gives you a different collection of things you could possibly do. For instance, if you want to make it a question, you do this. All right. You can see the interface is just terrible. You shouldn't really make a composer do this to make a crescendo.
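A rough sketch of the analysis/resynthesis idea at work here, under invented parameters - crude per-frame peak picking stands in for real partial tracking, and a transpose factor plays the role of bending the pitch traces upward to "make it a question":

```python
import numpy as np

SR, N, HOP = 44100, 1024, 256  # made-up analysis parameters

def analyze(x, n_partials=20):
    """Crude partial tracks: the strongest FFT bins of each windowed frame."""
    win = np.hanning(N)
    frames = []
    for i in range(0, len(x) - N, HOP):
        spec = np.abs(np.fft.rfft(x[i:i + N] * win))
        bins = np.sort(np.argsort(spec)[-n_partials:])
        frames.append((bins * SR / N, spec[bins] * 2 / N))
    return frames  # per-frame (frequencies in Hz, amplitudes)

def resynthesize(frames, transpose=1.0):
    """Drive a bank of sinusoidal oscillators from the tracks."""
    out = np.zeros(len(frames) * HOP)
    phases = np.zeros(len(frames[0][0]))
    for f, (freqs, amps) in enumerate(frames):
        w = 2 * np.pi * freqs * transpose / SR   # per-sample phase increments
        t = np.arange(HOP)
        seg = (amps[:, None] * np.cos(phases[:, None] + w[:, None] * t)).sum(0)
        out[f * HOP:(f + 1) * HOP] = seg
        phases += w * HOP                        # keep each oscillator continuous
    return out

tracks = analyze(np.random.randn(SR))            # stand-in for a recorded phrase
question = resynthesize(tracks, transpose=1.2)   # raise the whole pitch contour
```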

[32:35] But I'll do it this way anyhow, why not? Because, you know, we are just having fun here; we're not making music yet. And now, if I play it back again, maybe I'll be offering you spaghetti and meatballs.

Computer voice: [32:35] Spaghetti and meatballs. [laughter]
Miller: [33:05] OK. That's computer music. Well, the thing that can be cool about that is that this could give you a tool for actually building up pieces of electronic music by basically putting your hands right in the center of the sound that you're operating on. And the other thing that hopefully is cool about it is that there's no boundary between this way of operating and the way I showed you before.

[33:25] You could in fact do things that incorporate both of those at once: the sort of higher-level organizing aspect that you saw earlier, acting on the dimensions that require the higher-level organization, and perhaps using things like this for describing the details of the things that you need to be able to control in a detailed way.

[34:03] I don't know; that's perhaps easier to suggest than it is to explain in any sort of real way. And I should tell you now that you should probably not start making music with this, because it is so clunky, and the problems turn out to be so much harder than I'm making them look right now, that when you really get down to the nuts and bolts of trying to make this thing play with you, it turns out to be hard enough to turn most people off.

[34:23] So you should regard this not as me selling you a piece of software or something like that. This is a way of really showing you what would be cool to be able to do, if someone could actually solve this problem in a really friendly, malleable way. That is not what this is. This is a really clunky way that I hope someone will overtake.

[34:42] But the principle is there: this idea of data whose view you control, so that you can develop the visual language you need as a musician, or as a maker of any other kind of art, to get at the aspects of it that you really want to tease the meaning out of.

[35:17] Questions about that? OK. I will just jump on to another wonderful thing that you can try to do, and it needs trying harder than I've been trying yet. Actually, I'm going to start with a jokey one, a jokey thing. Tell me if this is too jokey and I'll get rid of it. Where did I put it? Oh yeah, you just whack the button and it goes, maybe.

[35:50] This is something that I assigned to a bunch of students in a computer music class I was teaching. Let's see, are we happening yet? I don't know how to figure out if we're happening. Hello - oh yeah, there we go. This is a useful application once you've done it, because if you've ever been a student, the way you act as a student is you sit in a room like this, and you open up your laptop, and you read your email while your professor is explaining calculus.

This is a useful application to have, because it will tell you when the professor is actually talking, and then you can put on your buds and listen to your favorite tune [inaudible 36:01] calculus. I actually don't know how many of my students are using the buds in their ears and showing these wonderful things. [36:42] The purpose of this discussion is... What is this, really? Well, this is a normal computer music interface, which is actually the most stupendous possible music interface: just a microphone or something. Why? Because... so this mike is going straight into my computer here. Right.
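The student exercise amounts to an envelope follower with a threshold. A minimal sketch, with an arbitrary made-up threshold:

```python
import numpy as np

SR, BLOCK = 44100, 1024
THRESHOLD = 0.02  # an arbitrary RMS level standing in for "someone is talking"

def professor_is_talking(block):
    """Follow the amplitude envelope of the mic input, block by block."""
    rms = np.sqrt(np.mean(block ** 2))
    return rms > THRESHOLD

silence = np.zeros(BLOCK)
speech = 0.1 * np.random.randn(BLOCK)  # stand-in for a block from the mic
print(professor_is_talking(silence), professor_is_talking(speech))  # False True
```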

[37:27] People know how to make sound, and musicians are used to making music by making sound, because that is what music is made of, at some level. One pretty good possible approach to making a computer into a musical instrument - one good way for a person to do this - might be to take what musicians know how to do, and strap microphones on their instruments, as close as you can get to the part that is emitting the sound, and just take that signal, which the musician spends his or her life learning how to make come out the way they want it to come out.

And then turn that, potentially, into a whole world of other things, which would somehow be able to [inaudible 37:32] a musician's training, but then be capable of a wide variety of other musical, even non-musical, activities, like what I just showed you here. This is a pedagogical example. There are possible real examples of this.

Before I go too deeply into this: this is only one of a bunch of other things that you could think of doing. For instance, there is a whole industry out there, a research industry, of people who are trying to design physical interfaces for computers - things that you [inaudible 38:09] throw or step on, or whatever it might be, that the computer is able to sense electromagnetically [inaudible 38:15]. And that becomes a sort of control surface over which you make the computer - over which you explain to the computer - what you want to do.

The first example is probably the MIDI keyboard. But there are also [inaudible 38:32] things you can buy now, which are boxes that have [inaudible 38:36] so computers can get in a bunch of changing virtual voltages that can drive a synth or something like that. So that's a whole thing, and I think there are cool things to be done there. Even in the realm of musical instruments, you can put sensors on the instruments to sense things like bow pressure on a violin, as opposed to putting on the sensor which is a microphone, which senses the output of the instrument.

[39:30] And what I am hoping to suggest here is that it is actually looking at the output of the instrument that might be the promising thing, in certain cases, for certain instruments. You wouldn't do this with a piano. For a piano, you put the sensors on the instrument itself, to try to sense what the player is doing, because what comes out of the piano is so complicated that I don't think you can use it very easily.

[40:03] But the voice is at the opposite end of that particular spectrum, where you do not want to put the sensors on the thing that is making the voice. The last person who would wish for that would be a trained singer, right? They are very careful about their throats. But on the other hand, it is the easiest thing to sense what comes out, and it comes out at about the same place - which is not true in the case of a clarinet - so you can really pick up the sound in a local and controlled way and do what I was just doing with the bud.

[40:43] And the next thing is that when you use the sound output of a real instrument, be it the voice or whatever, you are really using the aspect of what is going on that the musician's intent is really driving towards. So whether a musician is using a lot of bow pressure or a little bow pressure - of course that has some correlation with whatever the musical expressivity of the situation might be. But it doesn't correlate anywhere near as strongly as the sound itself correlates. Because that is the thing. That is the product. That is what they are really making. So it seems like the perfect thing to tap, if you want to tap something.

[40:44] Next problem: don't use this for saxophone, because the saxophone makes so much noise that you're just going to listen to the sax, and the computer is going to follow the sax alone. Do it with something whose sound you can cover with the sound coming out of the speakers, which you can do very easily with the voice, and extremely well with guitars. You just get a nice solid-body electric guitar and slap a pickup on it, and you're there. No one even knows you are playing the thing, except for what comes out of the speakers. Now, some examples of possible approaches to using this - "this" meaning using audio input to a computer as a source of potential musical control - in no particular order, because I was not able to think of a natural order to describe them in. I am just realizing my examples are going to be so weird that they might turn you guys all off. But that's just going to be what it is.

[41:50] This is an idea that I've been working on for a decade or so. It is aimed at instruments which have a rich set of timbral possibilities - the voice, of course, being the best, the prime example. And the idea here, in this patch - for which I apologize - would be: take the thing that you're doing, whatever it is, and go look up, in a bank of sounds, the thing that most closely resembles it, and play that instead. So, for instance, go record a year of alternative radio, right? And then give yourself an application where you can just say, "Poomh!" like that, and it would go find the funk tune on Channel X that really has that sound, get it for you, and play it right then. Wouldn't that be cool?
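A sketch of that lookup idea - the two-number feature (loudness plus spectral centroid), the grain size, and the random stand-in corpus are all invented for illustration; a real patch would use a richer analysis:

```python
import numpy as np

SR, GRAIN = 44100, 2048  # made-up grain size

def features(grain):
    """A toy two-number description of a grain: loudness and brightness."""
    spec = np.abs(np.fft.rfft(grain))
    freqs = np.fft.rfftfreq(len(grain), 1.0 / SR)
    centroid = (freqs * spec).sum() / (spec.sum() + 1e-9)
    rms = np.sqrt(np.mean(grain ** 2))
    return np.array([rms, centroid / 1000.0])  # crude scaling into one space

def build_corpus(recording):
    grains = [recording[i:i + GRAIN]
              for i in range(0, len(recording) - GRAIN, GRAIN)]
    return grains, np.array([features(g) for g in grains])

def play_instead(control_grain, grains, table):
    """Find the corpus grain that most resembles the input; play that instead."""
    d = np.linalg.norm(table - features(control_grain), axis=1)
    return grains[int(np.argmin(d))]

grains, table = build_corpus(np.random.randn(40 * SR))  # "40 seconds of sound"
out = play_instead(np.random.randn(GRAIN), grains, table)
```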

[43:07] There are hideous conceptual problems with that idea that I am not explaining to you right now, because I don't know how to explain them, except by actually showing you how it doesn't work when you try to do it. So, this is actually not bad. It's not a year of sound; it's 40 seconds of sound.

[43:52] And what I've done here is an experiment with three different corpora of sound. They're not going to be pretty or anything; I was aiming for things that are as different from each other as possible. Here's me playing violin. Well, we don't even hear it. Why not? Because I'm not doing something right, but what is it? What am I not doing right now? Maybe I have to... OK... maybe I have to do this. Maybe I need to hit the play button, which is right here... maybe. Let's see. This patch exists for the purpose of doing research, so it's not a thing you'd want to... yeah, here we go. [violin music starts to play]

Miller: [44:08] All right. That's me. I don't play violin, but that's what it sounds like when I do it anyway. The second thing is that it's always good to get a politician out. So, here's just... [sound of George Bush speaking]
Miller: [44:36] Random phrases taken from here and there, so it doesn't make any sense. And then here's a really wonderful recording I made of Trevor Wishart, who was at IRCAM back in the early 90's, I think. I asked him to give me some sound that I could work on - just to walk up to a microphone and improvise - and the result, I have just about memorized this. It's actually a piece of music. [sound of random noises]
Miller: [45:20] And so on. Right. I'll stop it right there. Now just imagine that you could watch TV and there's your favorite politician speaking and you could hear that instead. Wouldn't it be wonderful? Why not? Let's see. Let's see if I can do this. So this is going to be the control source. There's the control source, and here's Trevor Wishart singing instead. And now we're going to do that and turn this on and let's see if it happens. Yeah. [sound playing]
Miller: [45:36] Those are Bush's phrases spoken through Trevor's voice. And then when you get tired of that, we'll make Bush play a little violin. [sound of violin playing]
Miller: [46:15] The test would be - and I think I'm going to fail the test, by the way - the test would be: could you play the same thing as the control that you looked up, and presumably it should find the same thing and play it back out pretty much as it was. And the answer is, I didn't really quite succeed, but I'm getting somewhere close. The easiest to recognize is going to be the politician. Let's see. Now we're going to listen to this one. This is the synthesized result. So, here is the politician controlling his own recording. [sound playing]
Miller: [46:42] Oh, that's completely wrong. I'm doing something wrong here. It doesn't normally work that badly. Oh, that's close. Well, I don't know. I don't know if this is succeeding or not. I keep changing the program, so you're seeing it in the state it happens to be in right now.

[47:11] But that, as a principle, seems kind of powerful: this idea that you could just tell the computer what you want it to do by making it do it, by imitatively verbalizing, right? It seems like there's a tremendous space of possibility there to explore. So, really, what I'm doing is trying to incite people to go home and do this themselves and see where they get.

[47:29] All right. So that was one possibility. Another is a little bit related. This is another manifestation of that idea, I think. Let's see. First off I'm going to see if we're.... [music playing]

Miller: [48:11] All right. So we got that going. Now what we're going to do is kind of turn it backward. That's radio. You could imagine this could be you, and you would be beatboxing. The idea now is not to try to treat sound as one continuous stream, but to try to identify attacks; and then when you get an attack, rather than go look in a huge collection of sound, just have a drum kit's worth of sounds - you know, 20 to 30 things that are somewhat percussive. I happen to have a nice collection of percussive sounds. [music playing]
Miller: [48:38] Never mind where I got this. I only have a rhythmic extraction of that wonderful piece of music we heard earlier. And it turns out to be great with the... [music playing]
Miller: [49:34] All right. Is it clear what that was? So this was reduced in a rather serious way. All we're doing now, rather than trying to do an elaborate, terrible analysis, is responding to two aspects of the sound that's coming in. One is just how loud it is. The other thing is that we're identifying moments of attack. After having done that, we're trying to come up with two pieces of information.

[50:10] One is how loud it is. And the other is what I call the Wessel number, which is how much the spectrum is weighted toward the high or the low end, which conceptually would move you from the kick drum all the way up to the hi-hat or something like that. And that's it. By the way, this is a great way to listen to the radio. On the musical level, this is really only a kind of an experiment. Am I running out of time? I must be. Oh, I'm kind of out of time, yeah. OK, so I'll... yeah. The next thing would be talking about electric guitar, but we'll save that for another time.
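A sketch of that reduction - attack detection plus the two numbers. The kit names, the attack test, and the centroid-to-kit mapping are all invented for illustration:

```python
import numpy as np

SR, BLOCK = 44100, 512
KIT = ["kick", "tom", "snare", "hihat"]  # ordered from dark to bright

def loud_and_wessel(block):
    """The two numbers: RMS loudness, and a spectral centroid in Hz."""
    spec = np.abs(np.fft.rfft(block))
    freqs = np.fft.rfftfreq(BLOCK, 1.0 / SR)
    loudness = np.sqrt(np.mean(block ** 2))
    centroid = (freqs * spec).sum() / (spec.sum() + 1e-9)  # high vs. low weighting
    return loudness, centroid

def on_block(block, prev_loudness):
    """Fire a kit sound only at moments of attack (a sudden rise in loudness)."""
    loudness, centroid = loud_and_wessel(block)
    if loudness > 2 * prev_loudness + 0.01:  # crude attack test
        choice = KIT[min(int(centroid / 3000), len(KIT) - 1)]
        print("attack ->", choice, "at gain", round(loudness, 3))
    return loudness  # feed back in as prev_loudness for the next block
```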

Student: [51:20] I want to ask you something based on your experience. You have worked in different environments; for example, at Carnegie, we have composers and assistants and technicians and people programming. So if you could think of a model where creativity can develop, what kind of model would you choose? For example, a model where different people work together - scientists, artists, composers, whatever - so that different kinds of creativity are connected, sort of, through a dialogue? Or do you think that people, individuals, should develop these different kinds of creativity themselves? What kind of model do you think would be more satisfactory for generating ideas?
Miller: [52:05] Yeah. Oh boy. That is a great question. Speaking specifically of music, music really grows out of groups of people more than it does out of individuals, I think, at least as I have seen it. But that does not necessarily mean that the right way of doing it is to establish some kind of hierarchy where there is the composer and the assistant or something like that. In one way of thinking, everything that I have done has really come out of interactions that I have had with composers and other kinds of musicians.

[52:27] So none of this was actually done in isolation. So yeah, collaborations, and especially interdisciplinary collaborations, are absolutely crucial to the way I work personally. That is only a partial answer, because it is just about me, I guess. But it is certainly worth it to collaborate.

Student: [52:55] I was curious if you had followed the work of a project called Scrambled Hackz. About two years ago, they tried to break up video and segmented audio, and said they could either recreate phonemes of Michael Jackson's speech with him as the microphone, or MC Hammer in a beatbox re-sampling. Whatever happened - did he use your software? I was curious.
Miller: [53:21] You know, I read about it, and I did not actually pay enough attention to find out what his software was or anything. I do not know, even if it was he; I don't know how they did it, and it is an absolutely very cool kind of thing to do. You know, experimenters are doing cool stuff like that all the time and, by the way, always reinventing stuff, because there is nothing like a new idea.
Student: [53:24] Just curious. Thanks.
Student: [53:46] You talked about having some sort of new interface, like a visual interface for composing music, saying there are major ways of doing that, and, with your example, the way you drew those on your own and how you are doing it.

[54:12] Do you see the future of where that could go as more of one piece of software that does what you are doing, or is it more communication between multiple types of software? So, say, something like the evolution of OSC, and where that could go with communication between, let us say, a visual application and a sound application, as opposed to one piece of software doing it all, as you are doing it.

Miller: [54:34] Yeah, my thinking has actually changed about that. I used to think one way, because my whole attitude was totally real-time centric. I mean, there are two sides to this question. One is, if you are really trying to get stuff to happen in real time, then that almost forces you to radically limit the variety of applications you are using at once, because otherwise it just won't work.

[55:11] But on the other hand, from a productivity standpoint, it actually makes all the sense in the world to have software pieces be as small as they possibly can, and to have the interconnections be as rich as possible. And in some way of thinking, Pd and Max are an expression of that, because, for instance, +~ (plus tilde) is a program, and the line is the interconnectivity. But at a larger level, when you think about the whole space that thing lives in, you are stuck in there. And that is why you also need to have portals in and out of an environment like that, to other kinds of things.

[55:33] So practicality, at least in the real-time context, sort of forces you down into one environment. But the best of all possible worlds would be the most supple possible interfaces and the smallest, simplest possible applications.

Student: [55:47] So, it would be like having a visual interface that exists on its own and communicates with the sound generator. It is almost like having two unique variables.
Miller: [56:12] Well, I would not make the separation be between media, or between senses, like sight and sound. I would make it be between algorithm and reactivity. For instance, if you were doing a Markov chain analysis or something, Pd is not the place to be doing it; that is something else. So the type of work and the type of data you are operating on seems like a good place to draw boundaries between environments.
Student: [56:52] If the computer sometimes acts as the musical instrument, do you ever see it as a co-pilot, where you then acknowledge how good its output was? So, maybe it's listening and responding, as a synesthetic tool, if it takes in what it heard and then reproduces it in another mode.
Miller: [57:19] "Intelligent." Well, OK, that is the idea of changing the mode, but there is also the idea of the box itself having intelligence, and that is the thing that fascinates some people and not others. Me, personally, I don't want my piano to think about the music I'm going to play through it, and so my own world view is that the computer should be a conduit and not an intelligent thing. But other musicians think exactly the opposite of that.
Student: [57:20] Thank you. [applause]