
Saturday, July 16, 2022

Two parameters: Mechread and parset [plus a note on E-flat]

For the last two years I've been corresponding with a friend of mine from the old days at RPI, Bob Krull. Bob was in communications and did a lot of empirical work on patterns of attention in watching TV. But that's not what we talk about. We talk about music. He's banging away on keyboards and I'm tooting away on trumpet. Here are two notes that I've written to him.

* * * * *

Mechread and parset

And now for something like a model. Actually a pair of them.

One we’ll call mechread, for mechanical reading. The other we can call parset, for parameter setting.

Mechread: How do we play music from a written score? Unless one is very experienced and skilled, one uses mechread. The score gives you a representation of a series of pitches and note values. You call on memory and translate them into a sequence of movements that activate your instrument. The result is a sequence of pitches, each of the proper duration. But it doesn’t sound much like music. Too mechanical.
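
Just to make that concrete, here’s a toy sketch of mechread in Python. The score representation is made up for illustration, and the fingering table is only a small slice of the standard Bb-trumpet fingerings for written pitches; the point is the note-by-note, look-it-up-and-execute character of the procedure, with nothing about phrasing or dynamics.

```python
# Toy sketch of "mechread": a literal, note-by-note translation of a written
# score into (fingering, duration) actions -- correct pitches and values,
# but nothing that sounds like music.

# Partial table of standard Bb-trumpet fingerings for written pitches
# (0 = open, digits = depressed valves).
FINGERINGS = {
    "C4": "0", "D4": "1-3", "E4": "1-2", "F4": "1",
    "G4": "0", "A4": "1-2", "Bb4": "1", "B4": "2", "C5": "0",
}

def mechread(score):
    """Translate a score -- a list of (pitch, beats) pairs -- into actions.

    Each note is looked up and executed in isolation; there is no phrasing,
    no dynamics, no sense of where the line is going.
    """
    actions = []
    for pitch, beats in score:
        actions.append({"fingering": FINGERINGS[pitch], "beats": beats})
    return actions

# First phrase of a made-up tune, written out note by note.
score = [("C4", 1), ("D4", 1), ("E4", 2), ("G4", 2),
         ("E4", 1), ("D4", 1), ("C4", 4)]
print(mechread(score))
```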

Parset: How is it that I can hear a tune – not any tune to be sure, but many – once or twice and be ready to blow on it fluently? I may even be able to play the melody ‘from memory’. I call up an appropriate ‘engine’, one attuned to the style of tune I’m listening to, set a half-dozen parameters, and I’m ready to go. Picking up those half-dozen parameters is much easier than actually memorizing the tune (via a mechread procedure). It requires much less information. Of course, the model whose parameters I’m setting contains a great deal of (pre-compacted) information.
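
Again, a toy sketch, this time of parset. The ‘engine’ and its six parameters are invented for illustration; what matters is that the engine itself carries the pre-compacted stylistic knowledge, so all you have to pick up from a hearing or two is a handful of settings – far less information than a note-for-note transcription.

```python
# Toy sketch of "parset": instead of storing every note, you pick a style
# "engine" you already own and set a handful of parameters. The engine
# carries the (pre-compacted) knowledge; the parameters are all you have to
# pick up from one or two hearings. Parameter names are invented here
# purely for illustration.

from dataclasses import dataclass

@dataclass
class BluesEngine:
    """A style engine you already know how to 'run'."""
    key: str          # e.g. "Eb"
    tempo_bpm: int
    feel: str         # "straight" or "swing"
    form_bars: int    # 12 for a standard blues
    contour: str      # rough melodic shape: "arch", "descending", ...
    intensity: float  # 0.0 (laid back) to 1.0 (climactic)

    def play_chorus(self):
        # A real engine would generate an actual line; here we just report
        # what the six parameters commit us to.
        return (f"{self.form_bars}-bar chorus in {self.key}, "
                f"{self.feel} feel at {self.tempo_bpm} bpm, "
                f"{self.contour} contour, intensity {self.intensity}")

# Six settings -- far less information than a note-for-note score.
engine = BluesEngine(key="Eb", tempo_bpm=120, feel="swing",
                     form_bars=12, contour="arch", intensity=0.4)
print(engine.play_chorus())
```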

The thing is, in order to give a musical performance of a written score, you’re going to have to do something like parset. Sure, you can lather on some expression, some of which is likely marked in the score (crescendo, accel., rubato, etc.), but that’s just another layer of mechread. It smooths over the rough edges, but doesn’t breathe life into the sound. For that you need parset. Thus when performing effectively from a score, you’re treating the score as a source of prompts to parset.

So now we’ve got mechread and parset going at the same time. And I’m sure mechread is going when I’m listening to a tune in a jam session so I can blow on it three choruses later. You don’t have to have a written score to use mechread. I figure we use a version of mechread to ‘transcribe’ the sounds we hear into ‘inner music’ which we can then ‘pass’ to parset [if we have and have developed the knack].
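
Here’s a rough sketch of that pipeline, with both pieces invented for illustration: a mechread-like step turns heard sounds into ‘inner music’ (a bare note list), and a parset-like step then estimates a handful of parameters from it.

```python
# Toy sketch of the pipeline above: a mechread-like step 'transcribes' heard
# sounds into inner music, and a parset-like step then fits a few parameters
# to that inner music. Everything here is invented for illustration; no real
# pitch detection or style inference is attempted.

def transcribe(heard_sounds):
    """Mechread-like step: turn heard sounds into an inner note list."""
    # Stand-in: assume each heard sound already comes labeled with its
    # pitch and its length in beats.
    return [(s["pitch"], s["beats"]) for s in heard_sounds]

def fit_parameters(inner_music):
    """Parset-like step: estimate a handful of parameters from inner music."""
    n = len(inner_music)
    return {
        "key": "Eb",                  # assumed outright in this toy
        "form_bars": max(1, n // 4),  # crude stand-in for hearing the form
        "contour": "arch" if n > 4 else "flat",
    }

heard = [{"pitch": "Eb4", "beats": 1}, {"pitch": "F4", "beats": 1},
         {"pitch": "G4", "beats": 2}, {"pitch": "Bb4", "beats": 2},
         {"pitch": "G4", "beats": 2}]
print(fit_parameters(transcribe(heard)))
```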

Now consider this improvisation I did on a Japanese tune, “Kojo no tsuki.”

I worked on it a bit before I recorded it, but never wrote anything down and never deliberately set anything to memory other than the melody. I play the melody, in tempo, fairly straight time. Then I improvise a chorus, freeing up the time here and there. Then I go into a medium swing tempo and continue improvising, going through a couple of choruses until I reach a climax and then, after a very short break, collapse back into the melody, in time, and finish.

I’m interested in that first improvised chorus. I can hear it in my mind’s ear as I type. I can conjure it up at will. If I were to pick up my horn and play that tune, I’d have no trouble playing that melody. In fact, I’d probably have to exert a deliberate act of will to play something else. Let’s assume I let well enough alone and then switch into a medium swing like I did in the recording. I’m pretty sure I’d do some/many of the things I did on that recording. But I don’t know how close I’d come to what I’d recorded. I don’t even know if I’d play the same number of choruses. But I don’t think I’d deliberately avoid playing many of those riffs. That would be too difficult, and a bit pointless.

That’s more or less how unnotated composition happens. I think.

Bonus: E-flat

Learning to play in Eb

I’ve been working on the key of Eb (trumpet, Db concert), 3 flats. The fingerings are not particularly difficult, but I gather I’ve not spent much time playing in it. It wouldn’t come up much in Out of Control, which favored sharp keys because of the guitar. And for some reason it hasn’t come up much otherwise.

Not that I’ve kept a log of what keys I’ve been playing in and for how long. Rather, I deduce that I haven’t played in it all that much because getting really comfortable in it has been taking a lot of time.

A couple of years ago I created a blues in Eb. Why Eb? Beats me. Likely I was just noodling around and when I had a lick that I liked, well, it turned out to be in Eb. So when I developed it into a tune, the tune was in Eb.

Anyhow, I developed the tune itself but didn’t do much with it. Then I laid off the horn for a couple of years on account of a neighbor who didn’t like me practicing. The neighbor moved out a couple of months ago, so I started practicing again. That’s when I started working on this Eb blues. This time I decided to really work on it, which means improvising. That’s when I discovered that I wasn’t really fluid in that key. So I started shedding.

What’s that entail? It comes down to two things: 1) improvise, improvise, improvise, and 2) off-the-cuff exercises. The exercises are patterns in Eb: scalar patterns and arpeggios. I’d pick a figure and then start at the bottom of the horn and work up and then back down. I’d move back and forth between improvising and these little patterns. It’s been taking me WEEKS to get REALLY comfortable.

What’s that, really? If I stick to tried-and-true pathways, that’s easy enough. But I want to be able to move (potentially) anywhere. That’s tougher. I might be improvising within my comfort zone, but some lick doesn’t quite work out the way I’d intended. I want to be able to go with the new, the unintended thing, the ‘mistake’, and work on it, take it even further out, and then come back. Getting that kind of fluidity takes work. And moving back and forth between improvising and patterns is the way to do it. When I hit a snag in the improv, I turn it into a pattern and work on it. So the pace of the back and forth tends to be in minutes.

Who knows, maybe one day I’ll be really comfortable in Eb.

2 comments:

  1. "I figure we use a version of mechread to ‘transcribe’ the sounds we hear into ‘inner music’ which we can then ‘pass’ to parset [if we have and have developed the knack]."

    I could not understand the sentence at first, as I am having to translate from music to verse (which I always hear/listen to as much as read).

    By inner music do you mean hearing a tune internally? I can do that without issue if it has vocals. If it's instrumental I will vocalize it, dum de dum, etc., for a few seconds before hearing it more fully.

    That is a form of mechread; I do that automatically when reading verse. It's not a sound, but muscle memory of how the tongue moves and shapes in the mouth.

    Habit from having to learn to speak verse.

    I read the Lime tree bower when you posted it, not familiar with it, but it's distinctly a case of listening rather intently than reading.

    K.K. much more familiar with, although never had to speak it. To breathe life into that, it begins where it ends, I think, from memory.

  2. "By inner music do you mean hearing a tune internally?"

    Yes. Once it's been 'transcribed' into inner music I can play that inner music through parset.
