One of
the last things you’d expect to see at a physics conference is a
physicist on stage, in a dapper hat, pounding out a few riffs of the
blues on a keyboard. But that’s exactly what University of Illinois
professor J. Murray Gibson did at the recent March meeting of the
American Physical Society in Baltimore. Gibson has been doing these
wildly popular demonstrations for years to illustrate the intimate
connection between music, math, and physics.
While
there is a long tradition of research on the science of acoustics and
a noted connection
between music and math in the brain,
science and math have also influenced the evolution of musical styles
themselves. For thousands of years, Western music was dominated by
the diatonic Pythagorean scale, which is built on an interval (the
difference in pitch between two different notes) known as a perfect
fifth, in which the higher note vibrates at a frequency exactly 50 percent
higher than the lower note. Anyone who’s seen The
Sound of Music probably
gets the idea of the perfect fifth, and can likely sing along with
Julie Andrews: “Do, a
deer, a female deer….”
If you start on one note and keep going up by perfect fifths from one
note to the next, you trace out a musical scale, the alphabet for the language of music. While a musical scale built like that includes a lot
of ratios of whole numbers (like 3:2, the perfect fifth itself), it
has a fatal flaw: It can’t duplicate another keystone of music, the
octave, where one note is exactly
double the frequency of the lower note.
Contrary to Andrews’ lyrics, the scale doesn’t really bring us
back to
“Do.”
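The mismatch is easy to see with a little arithmetic (a standard illustration, not something from Gibson’s talk): stacking twelve perfect fifths ought to land exactly seven octaves above the starting note, but it overshoots by a small amount known as the Pythagorean comma.

```python
# Illustrative check: stack twelve perfect fifths (each a 3:2 frequency
# ratio) and compare with seven exact octaves (each a 2:1 ratio).
twelve_fifths = (3 / 2) ** 12    # ≈ 129.746
seven_octaves = 2 ** 7           # = 128
comma = twelve_fifths / seven_octaves
print(f"Twelve fifths: {twelve_fifths:.3f}")
print(f"Seven octaves: {seven_octaves}")
print(f"Overshoot (Pythagorean comma): {comma:.4f}")  # ≈ 1.0136, about a quarter of a semitone
```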
To
bring the fifth and the octave together in the diatonic Pythagorean
scale, various versions of the same interval were forced to be different sizes in different parts of the scale; one fifth was so badly out of tune it was called the
“wolf fifth,” and composers avoided it entirely. This meant that a piece of music composed in the
key of E sounded fine on a harpsichord tuned to the key of E but
dreadful on one in D. It also made it difficult to change keys within
a single composition; you can’t really re-tune a piano
mid-performance. Johann Sebastian Bach, among others, chafed at such
constraints.
Thus
were born the “well-tempered” scales, in which each appearance of an interval was tweaked so that it strayed only slightly from the ideal size and from other versions of the same interval, allowing composers and performers to switch easily between keys. Bach used
such a scale to compose some of the most beautiful fugues and cantatas
in Western music. This approach eventually led to the equal
temperament scale, the one widely used today, in
which every interval
but the octave is slightly off from a perfect ratio of whole numbers, yet intervals are identical in every key and each step in the scale is exactly the same size.
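The numbers behind that trade-off are simple (again, a standard illustration rather than anything specific to Gibson’s demonstration): every equal-tempered semitone is a ratio of the twelfth root of two, so twelve of them stack to an exact octave, while the fifth ends up a hair flat of the pure 3:2.

```python
# Equal temperament: every semitone is the same ratio, 2**(1/12),
# so any twelve consecutive semitones stack to an exact 2:1 octave.
semitone = 2 ** (1 / 12)          # ≈ 1.05946
equal_fifth = semitone ** 7       # a fifth spans seven semitones ≈ 1.49831
pure_fifth = 3 / 2                # the "perfect" whole-number ratio
print(f"Equal-tempered fifth: {equal_fifth:.5f}")
print(f"Pure 3:2 fifth:       {pure_fifth:.5f}")
# The equal-tempered fifth is only about 0.1 percent flat -- too small for
# most listeners to notice, and identical in every key.
```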
In the
20th
century, musicians like Jelly Roll Morton and ragtime composer Scott
Joplin wanted to incorporate certain African influences into their
music—namely, the so-called “blue notes.” But no such keys
existed on the piano; when in the key of C, one major blue note falls
somewhere between E-flat and E. So blues pianists started crushing
the two notes together at the same time. It’s an example of “art
building on artifacts,” according to Gibson. That distinctive
bluesy sound is the result of trying to “recreate missing notes on
the modern equal temperament scale”: In more traditional scales,
the
interval called a major third represents a frequency ratio of 5:4, and
indeed in the key of C, that pure third falls
between the piano’s E-flat and E.
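A quick calculation shows where that pure third lands relative to the piano’s keys (the frequencies below assume standard A4 = 440 Hz tuning; this is an illustration, not a claim about where any particular blues player bends the note):

```python
# Compare a pure 5:4 major third above middle C with the piano's
# equal-tempered E-flat and E above the same C.
c4 = 261.63                        # middle C in Hz (A4 = 440 Hz tuning)
e_flat = c4 * 2 ** (3 / 12)        # three equal-tempered semitones ≈ 311.13 Hz
pure_third = c4 * 5 / 4            # just major third ≈ 327.04 Hz
e_natural = c4 * 2 ** (4 / 12)     # four equal-tempered semitones ≈ 329.63 Hz
print(f"E-flat (piano key):    {e_flat:.2f} Hz")
print(f"Pure 5:4 third:        {pure_third:.2f} Hz")
print(f"E natural (piano key): {e_natural:.2f} Hz")
# The pure third falls in the crack between the two keys -- a note the
# keyboard simply doesn't have.
```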
Gibson
always had a musical bent, and first heard about the “blue note”
as a child, while reading a book by Leonard Bernstein from the 1950s.
But his serious interest in physics and the blues began when he
joined the faculty of the University of Illinois after eleven years
at Bell Labs. In his research he used various diffraction techniques to explore the
structure and properties of materials, an approach he sees
as the equivalent of harmonic analysis in music. And he realized this
would be a good way to bring the concepts of math and physics to life
for non-physicists.
For
instance, music provides an excellent analogy for the uncertainty
principle in physics, which states that you can’t precisely measure
both the momentum and position of a particle at the same time. The
same is true of a musical tone: It exhibits a kind of particle/wave
duality, related to the duration of the sound. “The momentum is
equivalent to the wave nature and the position is equivalent to the
particle nature,” Gibson explains. “It’s exactly the same
physics and mathematics that controls this phenomenon with sound.”
Particles are very short in duration, so they sound like
percussion—“like banging on the table”—but they don’t have
a well-defined pitch; a sound wave of long duration means you can
measure the frequency very precisely. “The musical particle is the
point in time where you can’t tell what the tone is, and the
musical wave is the perfect tone whose frequency can be measured very
precisely,” he says.
Gibson
has a nifty way to demonstrate this using a synthesizer to create
tailored wave packets of sound. (A wave packet is a short burst of
oscillations; how many it contains is determined by how quickly the
wave decays.) He can program the synthesizer to play a simple melody, like “Mary Had a
Little Lamb,” and then gradually reduce the number of oscillations
in each note until it is little more than a very short burst of
noise. At that point, people can no longer identify a note as being
C, D, or E, and hence don’t recognize the melody. He can do the
same exercise in reverse, starting with short bursts and gradually
increasing the duration until his audience can identify the tune.
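A rough sketch of that demonstration fits in a few lines of code (an assumed reconstruction, not Gibson’s actual synthesizer patch; the helper name and numbers below are illustrative): each note is a decaying sine-wave “packet,” and a single parameter controls how many oscillations it contains before it fades.

```python
# Rough sketch of the demo -- not Gibson's actual setup.
import numpy as np

SAMPLE_RATE = 44_100  # audio samples per second


def wave_packet(freq_hz: float, n_oscillations: float, amp: float = 0.5) -> np.ndarray:
    """A sine burst at freq_hz that dies away after roughly n_oscillations cycles."""
    duration = n_oscillations / freq_hz                 # seconds the packet lasts
    t = np.arange(int(duration * SAMPLE_RATE)) / SAMPLE_RATE
    envelope = np.exp(-3.0 * t / duration)              # simple exponential decay
    return amp * envelope * np.sin(2 * np.pi * freq_hz * t)


# Opening notes of "Mary Had a Little Lamb" (E, D, C, D, E, E, E), rendered
# twice: long packets give clear pitches; five-cycle bursts sound like clicks.
melody_hz = [329.63, 293.66, 261.63, 293.66, 329.63, 329.63, 329.63]
clear = np.concatenate([wave_packet(f, n_oscillations=200) for f in melody_hz])
clicks = np.concatenate([wave_packet(f, n_oscillations=5) for f in melody_hz])
# To listen, write the arrays to WAV files, e.g. with scipy.io.wavfile.write.
```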
The
reason for this relates to how we perceive sound. The tone has to be
long enough in duration so there are enough oscillations for your ear
to measure the frequency well enough to distinguish between a C and a
D. There is only about a 5 percent difference in frequency between those
two notes, which means the ear needs around 20 oscillations to tell
them apart. If it’s just a short blast with, say, a mere five
oscillations, you can’t really hear the difference.
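That figure follows from a simple rule of thumb: to tell two pitches apart, a tone has to last long enough for the two candidate waves to drift noticeably out of step, which takes roughly one cycle divided by their fractional frequency difference. A tiny illustrative helper (hypothetical, not from the article) makes the arithmetic explicit:

```python
# Rule-of-thumb pitch resolution: a burst of N cycles can only distinguish
# frequencies that differ by more than roughly 1/N of their value.
def cycles_needed(fractional_difference: float) -> float:
    return 1.0 / fractional_difference

print(cycles_needed(0.05))   # ≈ 20 cycles for a 5 percent pitch difference
print(cycles_needed(0.01))   # ≈ 100 cycles to resolve a 1 percent difference
```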
When
Gibson talks to biologists about his ideas, they inevitably raise the
question of why human beings evolved this capacity for harmonic structure
in the first place. It may be related to the vocal cords,
which vibrate like strings. The ear, too, resonates on different
parts of the membrane depending on the frequency of the sound it
detects. “We don’t really know how the brain works, but when you
see how much the structure of music is influencing the way we hear
it, you have to believe that somehow we’re recognizing these
fundamental rules,” he says.
Humans’ understanding of pitch may be connected with our rare ability to imitate and learn from the sounds around us.
Research suggests that only a few other animals, like whales,
dolphins, and some birds, can
engage in this type of “vocal learning”; no other primates seem
to do it. Possibly as a result, we are also one of the few animals
that can keep
a beat well enough to dance in rhythm.
For
Gibson, the rules and axioms that control the composition of music
serve as a palette of colors for the composer, much like the laws of
physics are nature’s palette for creating things in the world, and
those constraints can lead to innovations like the blue note. “We
take these kinds of artifacts, these mistakes, and we make them
beautiful,” he says. “Constraints help human innovation thrive.”