Self-Directing

Thank you for visiting my blog!
Here I share what I have learned about my passions--teaching, music, and film.
Use the categories and archives features to sort posts.
Let me know what you think at [email protected]


5/30/2021

Music- History and Development of Electronic Instruments

Electronic instruments are instruments that use electricity to produce sound. This is different from an electric guitar, which uses electricity to amplify or modify its sound. Although there were experiments before the 20th century, the electronic instruments still in use today all originated in the 20th century.

Theremin

The first major electronic instrument was the theremin, invented around 1920. It has two antennae on opposite ends. Without touching the instrument, the player uses their hands to disturb the electromagnetic fields around the antennae. One hand controls the pitch--how high or low the sound is--while the other hand controls the dynamics--how loud or quiet the sound is. It is incredible to watch a theremin player, as they never actually touch the instrument. The theremin has been used in many science fiction soundtracks; think of Bernard Herrmann's score for The Day the Earth Stood Still (1951).

Ondes Martenot

The Ondes Martenot was invented in 1928. It is played with a keyboard, which makes it very accessible to keyboard players and easy to integrate into ensembles that rely on specific scales and key signatures. It has been used in pop music as well as in film scores and classical music. The player can also manipulate the sound of the instrument using a metal ring that slides along a wire.

The next innovation in electronic instruments would be synthesizers.


Early Synthesizers

The first synthesizers were developed in the 1950s. The components of these instruments allowed players to creatively manipulate the sound. As on the Ondes Martenot, a keyboard controlled the pitch, while other aspects of the sound could be controlled by buttons and knobs. Like early computers, the first synthesizers were so large that they were not portable and could only be used in the recording studio. If you look at liner notes from the 70s and 80s, there are often several technicians listed under "Synthesizer Programming."
In the 1970s, synthesizers became more compact. The Minimoog, named after its creator Robert Moog, became very popular. These portable synthesizers were monophonic, meaning they could only play one note at a time. This made them good for solos or for adding a layer on top of other instruments, but they could not function as a harmony instrument like a guitar or piano. Monophonic sounds limiting, but wind instruments like flutes, saxophones, and trumpets are also monophonic.
By the mid-1970s, polyphonic synthesizers had been developed. These newer instruments could play more than one note at a time, meaning they could play chords. By the 1980s, synthesizers were common in many styles of music, sometimes even displacing the electric guitar's dominance in pop music.

Digital Synthesis and MIDI

Also in the 1980s, synthesizers became digital, meaning the instruments could communicate with computers. The standard that allows computers and instruments to communicate is called MIDI (Musical Instrument Digital Interface), and it has remained relatively unchanged for 40 years.

When recording with MIDI, every aspect of the sound played becomes information--the length of each note, how loud it was played, which key on the keyboard was pressed, and so on. This also means that notes played into the computer can be manipulated after recording.

Unlike with an audio recording, if you completely mess up while recording a section of MIDI, you do not have to delete it and try another take: you can move any wrong note to the correct pitch and drag notes to the correct parts of the beat.
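To make that concrete, here is a minimal sketch (my own illustration, not any particular sequencer's format) of how a recorded MIDI note might be stored and then cleaned up after the fact. The field names and the 480-ticks-per-beat resolution are assumptions for the example.

```python
from dataclasses import dataclass, replace

TICKS_PER_BEAT = 480  # assumed resolution; real sequencers vary

@dataclass
class MidiNote:
    pitch: int     # MIDI note number, e.g. 60 = middle C
    velocity: int  # how hard the key was struck (0-127)
    start: int     # position in ticks from the beginning
    length: int    # duration in ticks

# A sloppy take: the note is a semitone flat (59 instead of 60)
# and lands slightly after the beat.
take = MidiNote(pitch=59, velocity=90, start=495, length=460)

# Because MIDI is just data, we can fix it instead of re-recording:
fixed = replace(
    take,
    pitch=60,                                                   # move to the correct pitch
    start=round(take.start / TICKS_PER_BEAT) * TICKS_PER_BEAT,  # snap to the nearest beat
)
print(fixed)  # MidiNote(pitch=60, velocity=90, start=480, length=460)
```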
Beyond keyboard-style MIDI controllers, there are controllers that resemble wind instruments, guitars, or simple boards covered with pads. It is important to note that some MIDI controllers do not produce any sound on their own; they simply send data to a computer, and the sound comes from software running on the computer.

This also means that a MIDI controller can create musical information for sounds outside the keyboard family. A MIDI controller can be used to play synthesized or sampled sounds of hundreds of different instruments. A synthesized sound is an electronic approximation of an instrument's sound; this is what is found in most affordable electric keyboards.

Sampled instruments are created from recorded sounds, or samples, of actual instruments. The instruments are recorded playing every note in their range at different dynamics and articulations. Software then maps those recordings so the instrument can be played from a MIDI controller. Based on the information played into the computer, the results can be very realistic.
It is interesting how the history of electronic instruments has come full circle: musicians first sought sounds that could not be created without electricity, and now much of the effort goes into reproducing the exact sounds of traditional instruments.
Did I leave out an electronic instrument that you like? Please let me know!

5/23/2021

Music- Why Are There Different Bass and Guitar Amps?

That is an excellent question! When you look at them, they are about the same shape, and there are even bass practice amps that are smaller than guitar amps. So what actually makes guitar and bass amps different?

Amps can be really complicated, but at a basic level there are two ideas driving their design--size and power.

Size


Although we cannot see them--except when they make liquids move, like the cup of water rippling as the T-Rex approaches in Jurassic Park--sound waves have size.


Pitch, or how high or low a sound is, is determined by how close together the crest of one wave is to the crest of the next. Higher-pitched sounds have crests packed closer together, while lower-pitched sounds are more spread out.

For a guitar, most of the sound waves it produces would be measured anywhere from a few centimeters to several hundred millimeters. Bass notes, on the other hand, are most often measured in meters.

In order to produce such large sound waves successfully, bass amps require larger speaker cones or several different sized cones for different frequency ranges.

In terms of size, bass amps are most likely going to be larger overall than guitar amps. However, what matters is the size of the speaker cone inside the amp, not the overall dimensions of the outer box, which we call the cabinet.

Thinking about the size of sound waves, using a bass amp for a regular guitar means the higher pitches will not sound as good as they would through a guitar amp with smaller speaker cones. Likewise, playing a bass through a regular guitar amp means the lowest notes will not sound as good (or will not sound at all) as they would through a bass amp.

Power

The second layer of how amps work involves the amount of power they can output. More or less power equates to how loud the amp can get, but the size of the sound waves--the frequency--also changes how much power is needed. Remember sound waves? The height of the wave determines how loud or quiet the sound is.

On average, the very low bass notes require more power output than higher notes played at the same volume. This means that a bass amp playing at the same perceived loudness as a guitar amp would need more wattage.
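As a rough illustration of the power idea, perceived loudness follows a logarithmic scale: doubling an amp's output power only adds about 3 dB. The sketch below just applies the standard 10·log10 power-ratio formula; the wattages are made-up examples, not specs for any real amp.

```python
import math

def db_gain(power_from_watts: float, power_to_watts: float) -> float:
    """Decibel change when going from one output power to another."""
    return 10 * math.log10(power_to_watts / power_from_watts)

# Made-up wattages for illustration only.
print(round(db_gain(50, 100), 1))  # 3.0  -> doubling the power adds ~3 dB
print(round(db_gain(50, 500), 1))  # 10.0 -> ten times the power adds 10 dB
```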

It's also power that makes selecting the correct amp important: using an amp at a loud volume with an instrument it is not designed for can eventually damage the amp.
But how can I tell which amp is which? Here's a hint: they are usually labelled somewhere on the unit. If you do not see the word "guitar" or "bass" on the amp, find the make and model name/number and look it up. Google should be able to tell you.
Another, more subtle difference between guitar and bass amps has to do with the EQ settings available--or rather, the focus of those settings.

To understand their purpose, we first need to understand how instrument pitches work.
Every note has a main frequency that gives it its pitch--we call this the fundamental. Fitting within that wave are other, related waves called overtones. While the fundamental is what gives the note its letter name, it is the overtones that make the note sound like a bass guitar and not a piano, tuba, or other low instrument playing the same pitch.
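One way to make the overtones visible is to list them for a single note. For a harmonic tone, the overtones sit at whole-number multiples of the fundamental; the short sketch below does this for the low E string of a bass, which is tuned to roughly 41.2 Hz.

```python
FUNDAMENTAL_HZ = 41.2  # low E on a bass guitar, approximately

# The first few overtones are whole-number multiples of the fundamental.
overtones = [round(FUNDAMENTAL_HZ * n, 1) for n in range(2, 7)]
print(overtones)  # [82.4, 123.6, 164.8, 206.0, 247.2]
```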

The EQ settings on an amp are obviously not going to turn a bass into a tuba, but they can boost or cut certain overtones so that different frequency ranges (sound wave sizes) are brought out.
Going back to sound wave sizes, the low, middle, and high frequency ranges for a bass are different from the low, middle, and high frequency ranges for a regular guitar. That means the mid knob on a guitar amp will not affect the same frequency range as the mid knob on a bass amp.
Guitarists are notorious for chasing the perfect tone, often with entire pedalboards to assist in the process. For bass, the focus is more often on defining the fundamental and deciding how tight or reverberant the player wants that sound. On a bass, lower EQ boosts give it more power and depth, whereas higher EQ boosts give the notes more clarity.

Hopefully this post has not confused you more. You should be glad I did not talk about keyboard amps and how they have to cover a wider frequency range than either bass or guitar amps. Unlike many products that are simply marketing gimmicks, bass and guitar amps are two separate products that serve different needs.

Please let me know if you have questions!



5/2/2021

Music- How Does Recorded Music Affect Listener Expectations?

This post is more of an open-ended question than an answer.

How much has recorded music changed listener expectations for how music should sound?
In the history of music, recording technology is really a recent development, but the effects of the recording process are now inescapable.
The vast majority of listeners hear far more recorded music than live music. Through conditioning, recorded sound comes to be heard as "normal" and live sound as "different." Depending on the live sound setup, almost all of the effects applied to a recording can be replicated live, including reverb, pitch correction, and echo. Reverb in particular is so closely tied to some styles of music, like 80s ballads, that when those songs are performed live, artificial reverb is added to the singer's microphone.

Pitch correction--often known by the brand name Auto-Tune--may be the most criticized music technology, but that could be because it is talked about in mainstream media more than other effects. It first appeared in the late 90s and became popular due to the effect it created on Cher's hit "Believe." In some ways, listeners felt they were being misled: an original performance may not have been pitch perfect, but computers allow it to become perfect. No longer was there any doubt that recording engineers could manipulate every aspect of a recording.

However, the very basis of recorded music is manipulation that cannot be captured live.

In the early days of radio, singers described as "crooners" became popular as they experimented with their distance from the microphone. Instead of singing on a stage some distance from the audience, crooners got extraordinarily close to the microphone. The result was that the singer sounded close to the listener, as if the performance were just for you. This sound could not have been found in a large concert hall.


Along with the perceived closeness of the sound, diction had to change for recorded singing. The diction used in large performance spaces sounded ridiculous when recorded, as if every syllable were over-pronounced. On record, singers could use a more natural inflection with their consonants.
I find that preparing singers to use good diction in a live, unamplified performance can be difficult, because the vocalists they hear on recordings are not pronouncing their words in the way needed to be understood from a distance. The diction a stage requires feels unnatural to these singers and does not sound normal to them, because the recordings we consider "normal" singing do not pronounce words that way.


The recording process changes not only singing but also instruments. Although the change is not quite as drastic as with singing, instruments could now be played in ways that would not work unamplified. Beyond obviously amplified instruments like guitars and keyboards, composers writing for recorded orchestras could feature combinations of instruments or solos that might not work in a live setting but could be heard easily when miked closely. It is not uncommon in a film soundtrack for a solo instrument to be heard clearly over an entire orchestra. When composers try for a similar effect with a live, unamplified orchestra, they may find the solo instrument is buried if the rest of the orchestra plays at the same volume.
Even though so many of the adjustments to recordings happen after the fact, I do not see recording and audio manipulation technology as cheating. Rather, I see it as another layer of creativity in music. It's as if the digital audio workstation (audio editing software) is an instrument in itself, and the audio engineer puts their own touch on the music through their edits.


Once audio recordings became widely available, music was never going to be the same. This does not mean that recorded music has replaced live music or made it obsolete. In some ways, computers have made some form of recording technology accessible to most musicians. Since many more musicians can now present polished recordings, perhaps the ultimate judge of a performer's merit is how well they can perform live.

What are your thoughts? How much has recorded music changed our expectations for live music?

4/5/2021

Music- Why Divisions of the Beat Are Important

Music is an art form that combines multiple modes of thinking: it is endlessly subtle and expressive while at the same time exact and mathematical. Today's topic, rhythm, comes from the mathematical side of music.

Rhythms come from combining shorter notes into longer sounds and dividing longer notes into shorter ones. If you remember from earlier posts, the beat is the underlying pulse of a piece of music. Fitting around that beat, we can have longer notes that take up several beats or smaller divisions that last just a fraction of one beat.

The easiest example (and the most common time signature) is based around 4 beats per measure. The largest note is the whole note, which takes up all 4 beats, so only one whole note can fit within a measure of 4. When we divide the whole note into two equal parts, we get two half notes (notice the math connection). If the whole note is 4 beats, each half note gets 2 beats. Dividing each half note equally again gives 4 quarter notes, each receiving 1 beat.

It is with these quarter notes that you can start to see how important smaller divisions are to the overall sense of the beat. Counting each measure as 1-2-3-4, 1-2-3-4 is a lot easier to follow than counting whole notes as 1---, 1---. Without the divisions, it is very difficult to feel a steady pulse.

Of course, quarter notes can be divided into 8 eighth notes in a measure of 4 beats. Smaller still would be 16 sixteenth notes in a measure of 4 beats. With the possibility of these small divisions comes the necessity to use them to keep the beat. If a measure includes eighth or sixteenth notes, simply counting 1-2-3-4 gives us less of a chance of playing the rhythm accurately than if we were to count 1-and-2-and-3-and-4-and (eighth notes) or 1-e-and-a-2-e-and-a-3-e-and-a-4-e-and-a (sixteenth notes).
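If it helps to see those counting patterns written out, here is a tiny sketch (just an illustration) that generates the syllables for a measure of 4 beats at each level of division described above.

```python
def counting(beats: int = 4, division: int = 1) -> str:
    """Spell out the counts for one measure: division 1 = quarter notes,
    2 = eighth notes, 4 = sixteenth notes."""
    syllables = {1: [""], 2: ["", "and"], 4: ["", "e", "and", "a"]}[division]
    counts = []
    for beat in range(1, beats + 1):
        for s in syllables:
            counts.append(str(beat) if s == "" else s)
    return "-".join(counts)

print(counting(4, 1))  # 1-2-3-4
print(counting(4, 2))  # 1-and-2-and-3-and-4-and
print(counting(4, 4))  # 1-e-and-a-2-e-and-a-3-e-and-a-4-e-and-a
```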

When writing music, we find that smaller divisions of the overall beat can help propel the music forward and give it a sense of movement. Even if a song is based upon 4 beats per measure, the drummer dividing the beat into 8 eighth notes or 16 sixteenth notes on the cymbal helps keep the music exciting. Also think about how 8 eighth notes or 16 sixteenth notes in a measure give the player that many more opportunities to vary the dynamics (volume) of each note.

Next time you play or listen to music, I encourage you to try to hear and feel the divisions of the overall beats and think about how it changes your perception of the music.


3/8/2021

Music- A Brief History of Music Recording

The recording of sound is as much an art as it is a technology, and as with other media there is a fine balance between how the technology works and how the art form is captured through the medium. The technology sets the limits for how the art can be captured, and in turn the way the art is created may change to better fit the technology.

Experiments with technology for capturing sound began in the middle of the 19th century. The oldest sound recording we still have access to today comes from France in 1860. It is clear that a human voice is present, but no actual words can be made out. The tool used to capture the sound is called a phonautograph--in a sense, "writing sound." In the phonautograph scratching sound waves onto a surface, we can already see the origins of the record.

[Image: Phonautograph, 1860]
In the early 1900s, we find a recording technology that has mostly gone away--the player piano. Like the phonautograph, the player piano is completely mechanical, not electric. A talented pianist would record a song on the piano, punching notches into a paper scroll; each open notch meant that a particular key was pressed. After recording, the scroll allowed the player piano to play the piece back without a person pressing the keys. For many of the earliest ragtime pieces, the player piano was how listeners heard how the composer interpreted their own piece.

Now we move on to electricity. To simplify how sound is captured, think of microphones and speakers as doing similar yet opposite things. A microphone has a magnet, an electrical coil, and a movable diaphragm. When sound waves (moving air) hit the diaphragm, the microphone captures that movement, and the movement is turned into electrical pulses. A speaker also has a magnet and an electrical coil; the coil is attached to a cone that receives the pulses and turns them back into movement, and therefore sound.

The earliest recordings used physical media--meaning something we can touch.

The first records were wax cylinders that would spin, much like the later discs. Eventually, it was discovered that flat disc records could spin faster than cylinders. The needle on a record player follows the grooves in the record, and the record player converts the resulting electrical pulses into sound. Here we see the kind of technological limitation I highlighted in the first paragraph. Because of the RPM (revolutions per minute), or speed, of the record player and the diameter of the discs, only a limited amount of music could fit on each side of a record. As a result, songs created for and recorded on records had to be short enough to fit on one side. Even older classical compositions were recorded by orchestras at faster-than-usual tempos so that they could fit on a record.

[Image: Phonograph cylinder, 1890s]
I do not think we can overstate how much this technology has influenced music. Even today, about three and a half minutes sounds normal for a song; anything much longer is the exception on streaming and radio.

Later sound technologies do not seem to have had the staying power of records. The cassette tape was introduced to consumers in the 1960s. Tapes had the advantage of being more compact than records, but they did not last as long. The science behind tapes is that the tape itself is magnetic: the sound is stored as patterns of magnetization. Another advantage of cassette tapes, beyond the smaller size, is that tape recorders let amateurs record themselves without the cost of renting a recording studio. This made creating and sharing music more open. It also led to some of the first widespread music piracy, as people would record songs off the radio to share with others.

The last physical medium we will explore is the CD, or compact disc. Instead of analog electrical signals, the music is stored digitally, meaning the information is saved as 0s and 1s. CDs are also described as optical media, because a low-powered laser reads the digital information on the disc. CDs became available in the 1980s and were seen as the ultimate solution for music recording and storage. No one could foresee that within a couple of decades, owning physical copies of music would become the exception rather than the rule.

By the end of the 1990s, internet users began experimenting with uploading audio files for others to download. Today, we remember sites like Napster and Limewire more for legal reasons than for technological or musical ones. Within a short time, large companies like Apple and Microsoft jumped at the chance to offer music downloads--this time with the legal permission of the artists and/or publishers. iTunes and the iPod totally changed the way listeners consumed and stored recorded music.

As I write this post, the iPod has been replaced by the multi-purpose smartphone, and iTunes now focuses on streaming music instead of downloads. Today, streaming is how we listen to most music. Music streaming really took off with YouTube in the late 2000s and early 2010s. Music publishers began to notice that most people wanted to listen to music on demand but did not necessarily want to own a recording of it.

Spotify is now the largest streaming music platform, with Apple, YouTube, and Amazon competing for a share of streaming revenue as well. Right now, streaming is a great deal for the platforms making the music available but a terrible deal for the musicians. Buying recordings directly from artists is still the best way to support them. Spotify currently pays an artist between $3 and $5 for 1,000 streams, meaning their music has been played on the platform 1,000 times. Do the math and you realize that if you listen to one song from an artist on Spotify, that artist is paid between 0.3 and 0.5 cents for that play, or between ⅓ and ½ of a penny. A Spotify Premium subscription is currently $10/month. Clearly, not all of the membership fee is going to the musicians.
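Using the figures quoted above (the post's numbers, not official Spotify rates), the arithmetic looks like this:

```python
# Payout figures quoted in the post, not official rates.
low, high = 3.00, 5.00         # dollars paid per 1,000 streams
per_stream_low = low / 1000    # $0.003 -> about a third of a penny per play
per_stream_high = high / 1000  # $0.005 -> about half a penny per play

# Streams needed to earn the price of one $10 subscription at the higher rate.
streams_needed = 10 / per_stream_high
print(per_stream_low, per_stream_high, int(streams_needed))  # 0.003 0.005 2000
```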

Enough about streaming. Hopefully this article gives you a little insight into the development of recording technology and how that technology has shaped, and still shapes, the music we listen to.


2/8/2021

Music- How Fractions and Rhythms Work Together

If you remember from an earlier post, rhythm is the way that sound is organized through time. A lot of people get the idea of beat and rhythm confused and often think they are synonyms. 

Once you understand the difference, they are easy to tell apart. The beat is the strongest pulse in a piece of music. Beats are often felt in groups of 2, 3, or 4. How do we know how many are in a group? If you listen carefully, you will hear one beat that is strongest and one or more slightly weaker beats. A great example of this is a waltz. Waltzes are in 3, with beat 1 the strongest and beats 2 and 3 weaker. The Viennese style of waltz especially emphasizes this feeling: the entire foot touches the ground only on 1, and just the toes touch on 2 and 3.

If that is the beat, then what is the rhythm? It is how we organize the beat and the space between the beats. It is very rare for a waltz to have only 3 notes, all on the beat, over and over. We may have longer notes that last more than 2 beats or shorter notes that move in between the main beats.

It may sound confusing, but looking at written rhythms while hearing the difference between the beat and the rhythm makes the distinction feel natural.

Rhythms are completely mathematical. Every way that we organize rhythms is measured, and splitting the overall beat into rhythms is the work of division. For an example, we will start with music based around 4 beats per measure. If one note takes up all four beats, it is called a whole note; to help students remember its name, I remind them that it takes up the whole measure. When we split the whole note into two equal parts, the result is two half notes. Again, notice that each takes up half of a measure of 4, getting 2 beats apiece. Splitting the whole note into 4 parts, or 1 note for each beat, gives the quarter note--notice that it takes up ¼ of the measure. Dividing further, we end up with 8 eighth notes in a measure, then 16 sixteenth notes, and so forth.

If the number of beats in a measure is constant, then we can use addition or subtraction to figure out which beat we are on. If we are looking at a measure of 4 quarter notes, the third beat is the third quarter note: either we count to three from the left, or we subtract one from the right. In combinations of eighth notes and quarter notes, we simply count each eighth note as half of one beat, so that it takes two eighth notes to fill one beat. It is the opposite for a 2-beat note, the half note: when one half note is present, only two beats remain in the measure.
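Because the note values really are fractions, Python's fractions module can do this bookkeeping for us. A minimal sketch, with each note value expressed as a fraction of a whole note in a measure of 4:

```python
from fractions import Fraction

# Note values as fractions of a whole note.
WHOLE, HALF, QUARTER, EIGHTH = (Fraction(1, d) for d in (1, 2, 4, 8))

# One possible measure of 4 beats: half + quarter + two eighths.
measure = [HALF, QUARTER, EIGHTH, EIGHTH]
print(sum(measure) == Fraction(4, 4))        # True -> the measure is exactly full
print(sum(measure[:2]) / Fraction(1, 4) + 1) # 4    -> the pair of eighths starts on beat 4
```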

I’m going to stop here before you get overwhelmed, but be comforted by the fact that rhythms work in an organized system with rules and definite answers. Simple knowledge of fractions and division can go a long way to understanding and organizing rhythms.


1/18/2021

Music- What Are Clefs?

If you remember from an earlier post about how to read pitch and rhythm, when the notes move up the page, the pitch gets higher and the letter names go in alphabetical order (A, B, C, D, E, F, G). When the notes move down the page, the pitch gets lower and the letter names go in reverse alphabetical order (G, F, E, D, C, B, A).

No matter where a melody starts, the names of the lines and spaces stay constant within a piece; they do not change. But what actually determines those names--for example, that the lowest line is E and the top line is F--is the clef symbol.

The most common clef symbol is the treble clef. It is also called the G clef because the line that passes through the circle part of the symbol is the note G. This clef works best for higher notes. It is the clef read by the right hand of keyboard players, as well as flute, clarinet, oboe, saxophone, trumpet, horn, violin, guitar, xylophone, glockenspiel, and many other higher-pitched instruments. When you see the treble clef symbol, the bottom line is always E and the top line is always F. That's why it is important to check the symbol, but once you do, you know the names of the lines and spaces will not change.
The next most common clef is the bass clef. This clef is also called the F clef because the name of the line between the two dots is F. For bass clef, the name of the lowest line is G and the highest line is A. But the notes going up the page still move in alphabetical order and notes going down move in reverse alphabetical order. As long as there is a bass clef at the beginning of the piece, the names of the lines and spaces stay the same. 

The lowest instruments read bass clef, including bassoon, bass clarinet, trombone, tuba, cello, bass, and timpani. Other instruments move between treble and bass clefs depending on the range they are playing in. These include keyboard instruments like piano, organ, and harpsichord, and pitched percussion like marimba.

Beyond the two most common clefs, we have the movable C clef. The name comes from the fact that the arrow-looking part of the symbol points to the line that is C. When the middle line is C, it is called the alto clef; this is the clef that viola players read. The same idea applies: letters move up alphabetically and down in reverse alphabetical order.
If the C clef moves up so that the second line from the top is C, it is now the tenor clef. You guessed it: once you know where C is, the notes move up alphabetically and down in reverse alphabetical order. I am not aware of any instrument that reads only tenor clef, but cello, bassoon, and trombone will use it when playing for prolonged periods in their upper range.
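For reference, here is a small sketch that collects the line names for each clef discussed above, reading from the bottom line up to the top line.

```python
# Line names from the bottom line up, for each clef discussed above.
CLEF_LINES = {
    "treble (G clef)": ["E", "G", "B", "D", "F"],
    "bass (F clef)":   ["G", "B", "D", "F", "A"],
    "alto (C clef)":   ["F", "A", "C", "E", "G"],
    "tenor (C clef)":  ["D", "F", "A", "C", "E"],
}

for clef, lines in CLEF_LINES.items():
    print(f"{clef:16} {'-'.join(lines)}")
```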

The best way to get used to reading in different clefs is just repetition. I recommend getting confident with treble and bass clefs before you explore alto clef, unless you play viola.

The reason we use different clefs is so that instruments of different pitch ranges can usually play notes that fit within the staff rather than constantly reading notes above or below it. It would be difficult for a trombone, or an instrument in a similar range, to read notes written in treble clef: they would sit so far below the staff that they would hardly be readable, not to mention that they might be printed over the next staff down on the page.

All of this may seem complicated, but be thankful that each instrument doesn't have its own clef!



1/4/2021

Music- The Pros and Cons of Written Notation

"Standard" music notation, as we know it today, arrived at its current form, more or less, during Bach's lifetime (the early 1700s). One big reason for this is that before the 1700s, keyboard instruments were not tuned in equal temperament, meaning that not all half steps sounded the same: the distance between C and C# did not sound the same as the distance between F and F#.

Once keyboards were tuned so that every half step was equal, a player could play a piece in any key and the scale degrees would keep the same relationships to each other. This means that music could be transposed, or moved up or down to a different key, to better accommodate a singer's or instrument's range.
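Equal temperament is what makes this transposition so clean: every half step multiplies the frequency by the same factor, the twelfth root of 2. A minimal sketch, assuming the common A = 440 Hz reference:

```python
A4_HZ = 440.0  # common tuning reference; other references exist

def equal_tempered(half_steps_from_a4: int) -> float:
    """Frequency of the note that many half steps above (or below) A4."""
    return A4_HZ * 2 ** (half_steps_from_a4 / 12)

print(round(equal_tempered(0), 2))    # 440.0  -> A4
print(round(equal_tempered(3), 2))    # 523.25 -> C5, three half steps up
print(round(equal_tempered(-12), 2))  # 220.0  -> A3, an octave down
```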

Soon after, we see horns that have valves and woodwinds with more keys to be able to play chromatically, or by half steps in every key.

Our symbols of music notation tell us the pitch of each note (the letter name), the length of each note (which we call rhythm), changes in dynamics over time, how notes should be “attacked” with articulations, and the key of the music (flats and sharps that result from the starting note of the scale). 

For other information we rely on numbers (time signature), letters (dynamics for sections, chords) or a combination of words and numbers (expression descriptions, tempo, etc). 

Even without understanding the system, someone familiar with the style of music being written could see the basic beat structure of each measure and whether the notes move higher or lower, without knowing the letter names or exactly what the rhythmic symbols mean.

Notice how a knowledge of what sound the symbols represent is a prerequisite. Without knowing the sound, the symbols are meaningless. It's like trying to understand symbols that represent a forgotten language: if you are not sure how the spoken language sounded, the symbols will not help.


[Image: Try singing this song!]
Immediately, one can see how written notation can be a barrier to someone who never learned to read it. 

Even if you know the definition of each symbol, the style of the music changes how it is performed. For example, eighth notes in jazz may be written as ordinary eighth notes but are intended to be played in a "long-short" swing pattern. Knowing how eighth notes sound in other styles would not help unless the reader were familiar with jazz.

Another rule-breaker is the entire concept of rubato, very common in many styles since the 19th century. Rubato describes the subtle slowing and quickening of the tempo regardless of what the overall pulse of the piece is. Often this is indicated with the word rubato and not any change in the symbols used.

Another shortfall of our notation system is that it really only works for Western music. If, like me, you have ever tried to transcribe a traditional Indian piece, you will soon notice that its rhythms and pitches do not fit into our symbols--or rather, our notation system was not made for Indian music. Traditional Indian scales use microtones, sometimes described as quarter steps (pitches that fall in the space between two of our half steps). Western listeners have a very difficult time identifying these without hearing them as wrong notes. The Indian rhythmic system is based on rhythmic cycles called talas. A piece may use Tala A, Tala C, and Tala M (I made up these labels). In other words, each rhythmic pattern is a distinct idea, not built from smaller divisions or larger combinations of a single beat.

Many traditional styles of African music are very difficult to reproduce rhythmically in Western notation. The issue one runs into is the Western idea of a time signature. When music is more or less steady, it helps to have beats grouped into measures with time signatures. Of course, later Western classical music did use changing meters, or changing time signatures, but even that does not capture the pulse of the multilayered rhythms we find in many African styles.

Just like any language, written notation has its shortcomings. As musicians, we need to remember that written notation is a tool--a wonderful tool that can open many doors to explore other styles of music, but the symbols on the page are not music in themselves. The music comes from a person interpreting those symbols.

When I introduce composition to my youngest students, I do not require them to write in standard notation. Some of them choose to follow it fairly closely; others create their own, equally valid, systems; and others use a hybrid of familiar notes and their own way of interpreting them.

My only requirement is that the student be able to explain their system and teach it to a classmate. I once had a student write out rhythms for bongos. Not only did the student have a way to know the pattern he was playing, but also a way to identify which bongo he was playing--something that is harder in standard notation unless the first measure labels the high and low bongo with text.

I write this article simply to encourage us to consider that what many of us grow up learning may not be the only legitimate way to record sound on paper, and that reading standard notation may not be the only correct way to learn and understand music.

In many music schools, students are not accepted into the program if they cannot read standard Western notation. Once I completed music school and began making music outside of it, I had the privilege of playing with many people who learned music in other ways than I did--chiefly by ear.


[Image: Two ways to notate the same music.]
Learning music by ear is not a deficiency, as many would lead students to believe. Music is an aural art form, and I have found that those who rely more on their ear than their eye can often respond and react more quickly.

Musicians who do not read standard Western notation are not musically illiterate. They may in fact be reading something, just written a different way. There have been many very successful musicians who read only chord symbols. If you are not familiar with chord symbols: from just a couple of letters or note names, the experienced player knows which notes to play, and from the alignment of the chord symbols over the lyrics, when to change chords. Because only the changes are shown, the player has a lot of freedom in how the chords are repeated rhythmically between changes. Although the letter system of chords comes from jazz, even in the 1600s and 1700s there was a system of chord shorthand known as figured bass: the bass line was written out in standard notation and the harmonies were implied by the figured bass symbols.
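To give a flavor of how little information a chord symbol needs, here is a toy sketch spelling out a few common symbols as note names; an experienced chart reader supplies the voicing and rhythm themselves. The spellings are standard, but the tiny lookup table is my own illustration, not a complete chord-symbol system.

```python
# A few common chord symbols spelled out as note names.
CHORDS = {
    "C":  ["C", "E", "G"],       # major triad
    "Am": ["A", "C", "E"],       # minor triad
    "F":  ["F", "A", "C"],       # major triad
    "G7": ["G", "B", "D", "F"],  # dominant seventh
}

# A chart might simply read: | C | Am | F | G7 |
for symbol in ["C", "Am", "F", "G7"]:
    print(symbol, "->", " ".join(CHORDS[symbol]))
```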

Other forms of notation work especially well for string instruments like bass, guitar, and ukulele. Fretboard diagrams show the player where to put their fingers on the strings and sometimes even number the dots to show which finger to use. Similarly, tablature shows the strings of the instrument and which fret to press on each string (an open string, with no fingers down, is written as 0). Fretboard diagrams and tablature are most helpful when the player already has an idea of the beat or rhythm of the piece.

As you can see, standard Western notation is well developed for giving very specific instructions and helps musicians share ideas without hearing them first, but it is not the only way to write and share musical ideas. If we can expand our thinking about how we share music, we open the experience of playing and creating music to more people. Please share your thoughts in the comments.


12/7/2020

Music- Basic Notation: Pitch and Rhythm

I have written this article to serve two purposes: to make you more confident with standard notation, if you are not yet, or to suggest a way notation can be taught to others. I will start with very broad concepts and then address specifics and exceptions to the rules.

Two of the most basic elements of music that can be represented through notation are pitch--how high or low the notes are, which can also be thought of as letter names, keys on a keyboard, finger positions on a string instrument, and so on--and rhythm--the organization of sound through time.

We will start with pitch.

Pitch

In our Western system of music, there are only 12 different notes within an octave. Each of these 12 notes is the same distance from its neighbors (we call this distance a half step). Since the letter names are only A, B, C, D, E, F, and G, the notes in between take the same letter name raised a half step, called a sharp (#), or lowered a half step, called a flat (b).

These pitches are read on the music staff, a background of 5 lines and 4 spaces. When the circle part of a note sits higher on the page, the pitch is higher, and the letter names move forward in alphabetical order. From a line to the next space, and from a space to the next line, we move up by one letter. So if our bottom line is E, the spaces and lines above it would be F, G, A, B, C, D, E, and finally F at the top line. Moving down the page is exactly the opposite: the lower the circle, the lower the pitch, and we go in reverse alphabetical order. The nice thing is that the names of the lines and spaces stay the same no matter where the notes move.
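Because the letters simply cycle from A to G, the names of the lines and spaces can be generated rather than memorized. A small sketch, starting from the bottom line E of the treble staff:

```python
LETTERS = "ABCDEFG"

def staff_names(bottom_line: str = "E", positions: int = 9) -> list[str]:
    """Letter names moving up line-space-line-... from the bottom line."""
    start = LETTERS.index(bottom_line)
    return [LETTERS[(start + i) % 7] for i in range(positions)]

# Bottom line E (treble clef): E F G A B C D E F
print(" ".join(staff_names("E")))
```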

Those are the basics of pitch! Knowing this, you can follow the up-and-down contour of a melody: even if you do not know the exact letter name in the moment, you can see whether each note is higher or lower than the note before it.

Now, on to rhythm. 


Rhythm

As I said above, rhythm is the way we organize sound through time. How high or low the circle part of the note sits does not affect how the rhythm is written. You have probably noticed that in written music, some notes have circles that are filled in, some have a single line (a stem) attached, some appear to be two or more notes attached together, and some have no lines at all.

That's a lot to remember. Over time, you will be able to recognize and remember how each note looks, but there is another way to figure out the basic rhythm of a phrase. When you look at a piece of music, you will notice vertical lines that run through the staff and seem to separate groups of notes. This is no accident: a measure is the space between two of these barlines, and each measure holds the same number of beats.

But how do we know how many beats are supposed to be in a measure? Look at the very beginning of the music, at the top left. You will see two numbers that look like a fraction. For now, we will just worry about the top number. If you see a 4 on top, it means there are 4 beats in each measure.
With 4 beats in a measure, the counting is simply 1 2 3 4, and the next measure begins again with 1. In this way, it doesn't matter how many notes are in the piece in total; each measure starts again from 1.

Even without memorizing the names of eighth notes, half notes, whole notes, and more, in well-formatted music we can think about the amount of space each note or group of notes takes up.

If you see two notes fitted into the space of one quarter note, each is worth half the value of the quarter note (1 2 and 3 4).

If you see a note that seems to take up more of the measure than one quarter note, it could be a 2-beat note (1 2 3 -), or, if there is only 1 other quarter note left in the measure of 4, it would be a 3-beat note (1 - - 4).

You don't have to know the exact definition of each rhythm symbol; just see how many beats should be in a measure and how many notes are written in it. If there are very few notes in a measure, the music will sound slower. If there are a lot of notes in a measure, the music will sound faster.

You do not have to know everything about pitch and rhythm to be able to follow the basic structure of written notation. Hopefully these few tips can help. Please let me know if you have questions!



10/26/2020

Music- Henry Mancini

Henry Mancini was one of the most prolific film and television composers of all time. By the time of his death in 1994, he had received more Grammy award nominations than any other artist (a total since surpassed by Quincy Jones). He is also one of the few film composers to have one of his arrangements become a #1 hit, when his version of the "Theme from Romeo and Juliet" knocked the Beatles off the top spot in 1969. Mancini was equally respected among film critics, audiences, and popular music listeners. Of the great film composers, he stands with the top few whose music is a terrific listening experience outside the context of the film.

However, much of Mancini's impact seems to have been forgotten in the post-Star Wars era, when big, symphonic scores returned to popularity. Film music, like any art form, moves in cycles: the symphonic (I daresay classical) scores of the 1930s and 1940s gave way to rock-and-roll-inspired scores in the late 1950s and 1960s, and disco reigned in the soundtracks of the 1970s until Star Wars returned the prevailing style to symphonic music. One could argue that, depending on the film genre, symphonic scores are still the norm (with added electronic elements).


Mancini, like many great composers, stands out because his music often ran contrary to the prevailing style of the time. What the general public remembers are the Pink Panther and Peter Gunn scores, which are incredible, yet only a fraction of his output. Examining Mancini's work, we see he was quite versatile.



Mancini got his start in Hollywood at Universal Studios, working as part of a music department that wrote music for giant monster movies. Often in situations like this, the members of the music department went uncredited, with the music supervisor receiving the sole credit. It was not until the end of the 1950s, with the Peter Gunn television show, that Mancini's work was widely noticed. Peter Gunn was also his first collaboration with producer-director Blake Edwards, a composer-director partnership that would serve Mancini for the rest of his career.


Today, listening to the Peter Gunn theme often reminds the listener of the James Bond sound. However, the influence actually went the other way. Peter Gunn premiered in 1958 and the James Bond series did not start until 1962. We could say that Mancini’s sound influenced the sound of the 1960s spy movies.


Many times, Mancini's scores complemented a film by providing a contrasting counterpoint to the comic action on screen. Consider how cues like "Nothing To Lose" from The Party and "Piano and Strings" from The Pink Panther occur during outrageous comedies. Like subtext beneath dialogue, his music highlights the underlying emotions amid all the chaos.


Mancini was a master of jazz and pop orchestration. He actually wrote one of the few texts on the subject, Sounds and Scores, which I highly recommend. His use of string counterpoint and jazz harmony was imitated heavily by other composers, especially in television.


Despite his ability to layer sounds, his music never had to rely on complicated textures to evoke feelings. Often, a piece could be a piano melody with a simple string pad beneath it. A great example of this is "Hilly's Theme" from Silver Streak. Some of his most notable melodies are recognizable within 5 notes--"Moon River" takes about 3, the "Pink Panther Theme" about 2, "Crazy World" about 3, and so on.


In terms of Academy Awards, 1961 and 1962 were quite special for Mancini. He won Best Score and Best Song ("Moon River") for Breakfast at Tiffany's and then Best Song for "The Days of Wine and Roses" the following year. His final Academy Award came in 1982, for Best Song Score for Victor/Victoria. In between, he was nominated for an award practically every single year. He won a Best Song Golden Globe for "Whistling Away The Dark" from Darling Lili in 1971 and was nominated for 9 others. Including one posthumous award, he earned 9 Grammy Awards out of 33 nominations.


Although his background was jazz, his classical writing could be quite evocative, often with an Italian quality, which makes sense considering his heritage. A notable example is the Italian film Sunflower--no trace of jazz harmony or instrumentation, but unmistakably Mancini. Another outlier in his canon is Lifeforce from 1985; he jumped at the chance to work on a science fiction-horror movie when asked. Its exciting title march fits well alongside the John Williams, Jerry Goldsmith, and James Horner scores of the era.


Beyond his composing work, Mancini was also an accomplished performer, playing piano on most of his albums and soundtracks. His playing style is unlike that of any other pianist I have heard. Where most pianists would dig louder into the keys for a crescendo, he would back off with a lighter touch at the climaxes, as if he wanted the listener to be drawn in and listen closer.

If you have not listened to a Mancini score before, or if it has been some time, I highly recommend it. Almost all of them can be found on YouTube, and most are available on CD if you prefer the old-fashioned way :)
