QUESTION

ASSIGNMENT

Upon viewing the video on the Anatomy and Physiology of Hearing (the link can be found in Lessons - Week Two), describe the structure of the ear, focusing on the role that each component plays in transmitting the vibrations that enter the outer ear to the auditory receptors in the inner ear. Then, discuss the basic difference between determining the location of a sound source in the brain and determining the location of the visual object in the brain. Lastly, discuss the somewhat surprising outcome of research on hearing loss in urban versus rural environments, and the physiological explanation behind it. Support your belief and use specific examples. MINIMUM 300 WORDS

https://youtu.be/bk0raUXwCjc

READING

Introduction

Topics to be covered include:

  • The components of sound and how they interact
  • The function of the cochlea
  • Localization of sound

In this lesson, we will learn more about sound and the auditory systems that sound waves pass through as they are transduced into signals the brain can understand. Sound travels as vibrations through the outer and middle ear before it is transduced into electrical signals in the inner ear. We will also look at how we are able to identify where a sound came from, based on how the sound reaches each of our ears.

How We Rely on Sound

For many, sight is the first sense we rely on. We see something and go by what we see. Yet, we cannot always see something, and what we perceive based on our sight is not always accurate. So, which sense do we rely on more than we realize? We can hear in the dark, and while we can be fooled by sounds, we might be a little more cautious with what we hear as opposed to what we see. We use our hearing to listen to and identify different sounds. Some sounds are enjoyable, and others might be a little too loud, or have an unpleasant sound, like a siren or a child playing the same note on a recorder for the fiftieth time trying to get it just right.

Now, let's look at an example that will help us explain sound and auditory perception. We are at a concert of second-grade children playing their recorders, the plastic flute-like instruments elementary children often learn to play notes on. A couple of children seem to be doing better than others and have solo parts. Parents scramble to record their children and happily move to the sounds that fill the auditorium. Of course, some visitors listening to the concert might not find the recorders quite so melodious. In each case, pressure changes in the air create the stimulus for hearing, much as light creates the stimulus for vision. These changes in air pressure activate the auditory senses. The information travels through the outer ear to the middle ear, then to the inner ear, where it is processed and sent through brain systems to create a perceptual experience. We also have systems that help us determine where a sound comes from, based on which ear it reaches first and how quickly. In some ways, this information is more reliable than what our visual senses provide.

Physical and Perceptual Definitions of Sound

This video shows how sounds are produced and how you hear them: What is Sound?


  • THE STIMULUS: Like vision, sound begins with a distal stimulus. In our example, the distal stimulus would be the sound of the recorder. The vibration of the recorder causes changes in the air that trigger the auditory organs to process this representation of sound and send it to the brain. The sound is physically based on the pressure changes that occur as it is emitted from the distal stimulus (Goldstein & Brockmole, 2017). The sound is also perceptually based on our experience: we perceive the recorder sound as wonderful (if you are mom), or as perhaps a little annoying (if you are anyone other than mom). So, we have the recorder vibrating with a frequency of 1,000 Hertz (Hz), which is the physical stimulus, and the experience of sound based on your enjoyment of the recorder concert (Goldstein & Brockmole, 2017). A short sketch of this pure tone follows.
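
To make the physical stimulus concrete, here is a minimal sketch, in Python, of the 1,000 Hz pure tone as a repeating change in air pressure. The sample rate, amplitude, and duration are illustrative choices, not values from the lesson.

```python
import numpy as np

# A pure tone is a sinusoidal change in air pressure over time.
sample_rate = 44100          # samples per second (illustrative choice)
frequency = 1000.0           # Hz, the physical stimulus in the example
duration = 0.01              # seconds; 10 ms is enough to see ten cycles

t = np.arange(0, duration, 1.0 / sample_rate)
pressure = np.sin(2 * np.pi * frequency * t)   # normalized pressure change

# One full cycle of pressure rise and fall occurs every 1/1000 s.
print(f"Cycles in {duration} s: {frequency * duration:.0f}")
```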

Loudness and Pitch

The frequency of sound is on the horizontal axis; the dB level needed to hear each frequency is on the vertical axis.

  • LOUDNESS
  • PITCH
  • TIMBRE

The amplitude of a sound is expressed in dB. Loudness, the perceptual aspect of the sound stimulus, is related to the level of the auditory stimulus: the higher the dB level, the louder we perceive a sound, though this varies with the sound's frequency. The audibility curve indicates the range of frequencies we can hear. Below the audibility curve we cannot hear tones; above the curve we can. This area above the curve is called the auditory response area. The area above the upper range of the audibility curve is the threshold of feeling, where amplitudes are so high that we can feel them, and they would likely cause us pain, but we wouldn't necessarily hear them (Goldstein & Brockmole, 2017). How many of you have heard of a dog whistle? The frequency of a dog whistle is so high that we, as humans, cannot hear it, but dogs can: dogs can hear frequencies above the upper limit of the human audibility curve. As you get older, the range of frequencies you can hear shrinks. You can test your hearing at: Hearing Test.
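
As a rough illustration of how amplitude relates to dB, here is a short sketch using the standard sound-pressure-level formula, dB = 20 · log10(p/p0), with the conventional 20-micropascal reference; the example pressures and their everyday labels are typical textbook figures, not values from the lesson.

```python
import math

def sound_pressure_level(pressure_pa, reference_pa=20e-6):
    """Convert a sound pressure in pascals to dB SPL.

    20 micropascals is the standard reference, roughly the
    threshold of hearing at 1,000 Hz.
    """
    return 20 * math.log10(pressure_pa / reference_pa)

# Each tenfold increase in pressure adds 20 dB.
print(sound_pressure_level(20e-6))   # 0 dB   (threshold of hearing)
print(sound_pressure_level(2e-2))    # 60 dB  (ordinary conversation, roughly)
print(sound_pressure_level(20.0))    # 120 dB (near the threshold of feeling)
```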

The video plays tones at the frequency indicated on the screen. Watch the video until you can hear the sound; that is the lower threshold of your hearing. Toward the end of the video, you will probably find that you cannot hear sounds above a certain frequency.
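
If you want to experiment along the same lines as the hearing-test video, the sketch below generates a rising-frequency sweep. The 20 Hz to 20 kHz range is the commonly cited nominal human range; all other values are illustrative choices.

```python
import numpy as np

# Sketch of a rising-frequency test tone, in the spirit of the video.
sample_rate = 44100
duration = 5.0                       # seconds
f0, f1 = 20.0, 20000.0               # start and end frequencies in Hz

t = np.linspace(0, duration, int(sample_rate * duration), endpoint=False)
# Exponential sweep: instantaneous frequency is f0 * (f1/f0)**(t/duration).
k = np.log(f1 / f0)
phase = 2 * np.pi * f0 * duration / k * (np.exp(t / duration * k) - 1)
sweep = np.sin(phase)

# The samples in `sweep` can be written to a WAV file and played back;
# most listeners stop hearing the tone well before it reaches f1.
print(sweep.shape)   # (220500,) samples
```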

The Journey through the Ear


  • THE OUTER EAR: Now that we have seen sound travel from the distal stimulus to the ear, it is time to see what happens once it reaches the ear. We took an abbreviated journey through the ear in Lesson 1, and now we will look at this journey in more detail. The journey begins with the outer ear. The visible structure of the outer ear is called the pinna (plural pinnae). From the pinna, sound travels through the auditory canal, the tube-like passage that leads to the eardrum, also called the tympanic membrane. When you find wax in your ear, you find it in the auditory canal; the wax and the small size of the canal protect the eardrum. The auditory canal also enhances the intensity of sound through resonance, which results from the interaction of sound waves reflected back from the closed end of the auditory canal with new sound waves entering the canal (Goldstein & Brockmole, 2017).

Vibrations and Electrical Signals

  • FROM SOUND TO ELECTRICAL SIGNALS TO BRAIN
  • PLACE THEORY
  • FREQUENCY TUNING CURVE
  • COCHLEAR AMPLIFIER

As sound vibrations move through the stapes and press against the oval window, the oval window begins a back-and-forth motion that transmits the vibrations to the liquid inside the cochlea, which, in turn, sets the basilar membrane into an up-and-down motion. Remember that the basilar membrane lies below the organ of Corti, so these motions cause the organ of Corti to move up and down as well, which in turn causes the tectorial membrane to move back and forth just above the outer hair cells. At this point, the vibrations are transformed into electrical signals, beginning the process of transduction. As the cilia of the hair cells bend in one direction, structures called tip links are stretched, opening tiny ion channels in the cilia membranes. When these channels are open, positive ions flow into the cell and create an electrical signal. When the cilia bend in the opposite direction, the tip links go slack, the ion channels close, and the electrical signal stops. The result is alternating bursts of electrical signal and silence as the tip links stretch and then slacken. When signals are sent, neurotransmitters are released across the synapse between the inner hair cells and the auditory nerve fibers, causing the nerve fibers to fire. If you think about this, you see a pattern: the auditory nerve fibers fire in step with the rising and falling pressure of a pure tone. Firing at the same place in the sound stimulus's pressure cycle is called phase locking (Goldstein & Brockmole, 2017).
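
Here is a toy illustration of phase locking under simplifying assumptions: spikes at the rising zero-crossings of the waveform stand in for the real biophysics, and the 250 Hz tone is an arbitrary choice, not a figure from the text.

```python
import numpy as np

# Toy phase locking: a fiber "fires" once per cycle, always at the
# rising zero-crossing of the pressure waveform (a simplification).
sample_rate = 44100
frequency = 250.0                     # Hz, a low pure tone
t = np.arange(0, 0.02, 1.0 / sample_rate)
pressure = np.sin(2 * np.pi * frequency * t)

rising = (pressure[:-1] < 0) & (pressure[1:] >= 0)   # rising zero-crossings
spike_times = t[1:][rising]

# Spikes land at the same phase of every cycle: 1/250 s = 4 ms apart.
print(np.diff(spike_times) * 1000)    # ~4.0 ms intervals
```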

Frequency Theory

Remember that pitch is the quality of a sound described as high or low, and it is determined by frequency, which we have just seen can be coded by place on the basilar membrane. What else might code pitch? Another theory is frequency theory, which proposes that the firing rate of the hair cells matches the frequency of the sound wave across the basilar membrane. If, for example, the frequency of a sound is 300 Hz, the hair cells across the basilar membrane would fire at 300 pulses per second. So, what do we get if we put place theory and frequency theory together? Research has determined that specific locations on the basilar membrane match specific sound-wave frequencies, except for the lower ones; the lower frequencies seem to follow frequency theory and the firing rate of the entire basilar membrane. There is a maximum firing rate for nerve cells, but cells can take turns firing, which raises the maximum rate for the group as a whole (as the sketch below illustrates). This process is called the volley principle, and between place theory, frequency theory, and the volley principle, we can see how the brain processes information to perceive pitch (Griggs, 2016).
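
The volley principle is easy to see in a toy model. In the sketch below, the 400 spikes-per-second per-fiber ceiling and the three-fiber group are illustrative assumptions, not figures from the texts.

```python
# Toy volley principle: no single fiber here can fire at 1,000
# spikes/s, but a group of fibers taking turns can jointly match
# a 1,000 Hz tone.
stimulus_hz = 1000          # one pressure peak every 1 ms
max_rate_hz = 400           # assumed per-fiber ceiling (illustrative)
min_interval_ms = 1000 / max_rate_hz     # 2.5 ms minimum spacing

n_fibers = 3
spikes = {f: [] for f in range(n_fibers)}
for cycle in range(12):                  # twelve 1 ms pressure peaks
    t_ms = cycle * 1000 / stimulus_hz
    fiber = cycle % n_fibers             # fibers rotate across cycles
    spikes[fiber].append(t_ms)

combined = sorted(t for times in spikes.values() for t in times)
print(combined)                          # one spike per 1 ms cycle overall
for f, times in spikes.items():          # each fiber stays under its cap
    gaps = [b - a for a, b in zip(times, times[1:])]
    print(f"fiber {f}: min gap {min(gaps)} ms (>= {min_interval_ms} ms)")
```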

From the Cochlea to the Brain

Now that we have seen what happens in the cochlea, let's move out of the cochlea and continue toward the brain. The auditory nerve carries the signal away from the cochlea toward a sequence of subcortical structures: first the cochlear nucleus, and then the superior olivary nucleus in the brain stem. The signal then moves to the inferior colliculus, located in the midbrain, and on to the medial geniculate nucleus in the thalamus. From the thalamus, the signal continues to the primary auditory cortex in the temporal lobe. While the exact area of the brain responsible for the response to pitch has not been pinpointed, the most responsive area seems to be the anterior auditory cortex, an area close to the front of the brain (Goldstein & Brockmole, 2017).
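
As a study aid, the relay described above can be written out as an ordered sequence; the structure names and locations are from the lesson, and the code is just a convenient listing.

```python
# The subcortical relay from cochlea to cortex, in order.
auditory_pathway = [
    ("cochlea", "inner ear"),
    ("cochlear nucleus", "brain stem"),
    ("superior olivary nucleus", "brain stem"),
    ("inferior colliculus", "midbrain"),
    ("medial geniculate nucleus", "thalamus"),
    ("primary auditory cortex", "temporal lobe"),
]
for step, (structure, location) in enumerate(auditory_pathway, start=1):
    print(f"{step}. {structure} ({location})")
```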

Hearing Loss

This graph demonstrates the hearing damage for workers in a noisy weaving factory. dBA indicates A-weighted decibels, a dB scale adjusted for the frequency sensitivity of human hearing.

So far, we have looked at the process of normal hearing. What if someone experiences a loss of hearing? How does that happen? Most hearing loss is associated with damage to the outer hair cells and to auditory nerve fibers. Damage to outer hair cells results in a loss of sensitivity in the basilar membrane, making it harder for someone to separate sounds, such as hearing a door close during a concert. Inner hair cell damage can also result in loss of sensitivity.

One form of hearing loss is presbycusis, which is caused by damage to hair cells from extended exposure to loud noise, ingestion of substances that can damage hair cells, and age-related degeneration. The loss of sensitivity in presbycusis is more pronounced at higher frequencies, and it tends to be more prevalent in males than in females. Noise-induced hearing loss is another form of degeneration resulting from loud noises; in this case, the damage often involves the organ of Corti. It is also possible to have hearing loss that is not indicated by standard hearing test results, called hidden hearing loss. Standard hearing tests often measure hair cell function, which might not reveal problems with processing complex sounds (Goldstein & Brockmole, 2017).

Perception of Sound

We have covered perception of sound based on pitch, frequency, and amplitude, so now what about how we perceive where a sound comes from? Imagine you are at the concert and you hear a baby crying in the audience. You turn your head to the left and see a parent quickly ferrying the child out of the auditorium. You knew where to look based on auditory localization. Now, let's say you are in the school's waiting room with other parents, waiting for your child's name to be called so you can pick them up. It is a small room with quite a few parents, and when the teacher calls your name, you hear it the first time, even though the sound travels two different paths: directly from the teacher's mouth to your ears, and by bouncing off the walls of the small room. The fact that your auditory perception relies mainly on the direct path is called the precedence effect. Think about this small, noisy waiting room again. Many parents are talking to each other. You are speaking with two parents and are able to hear what they are saying even though others are talking all around you. Your ability to segregate your conversation from the other conversations in the area is called auditory stream segregation (Goldstein & Brockmole, 2017).
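
A little arithmetic shows why the direct path dominates: in a small room the reflected path is only a few meters longer, so the echo lags the direct sound by just a few milliseconds. The path lengths below are invented for illustration; the 343 m/s speed of sound in air is a standard figure.

```python
# Precedence effect: the direct path is shortest, so its sound
# arrives first and dominates localization.
SPEED_OF_SOUND = 343.0   # meters per second

direct_path = 5.0        # meters, teacher's mouth straight to your ears
reflected_path = 9.0     # meters, bouncing off a wall (toy geometry)

delay_ms = (reflected_path - direct_path) / SPEED_OF_SOUND * 1000
print(f"echo lags the direct sound by {delay_ms:.1f} ms")   # ~11.7 ms
```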

Localization of Sound

Let's think back to our first scenario, where we heard the baby crying while the recorder band was playing. You hear sounds from two different directions, which creates an auditory space. When you locate the sound of the baby in that auditory space, it is called auditory localization. If you think about the baby's cry and the sound of the recorders, you will see that they are different and would stimulate different hair cells and nerve fibers in the cochlea. The auditory system also uses location cues created by the way sound interacts with your head and ears. The two kinds of location cues are binaural cues, which depend on information from both ears, and monaural cues, which depend on information from just one ear. Research indicates three dimensions are involved in locating a sound: the azimuth, extending from left to right; the elevation, extending up and down; and the distance the sound travels from its source to the listener.

Binaural cues use differences between the two ears to determine horizontal position (azimuth), but they do not help with vertical information (elevation). There are two types of binaural cues: interaural level difference, which is based on the difference in sound level at the two ears, and interaural time difference, which is based on the difference between the time it takes a sound to reach the left ear and the time it takes to reach the right ear. Both time and level differences can be the same at different elevations, which means they do not account for the elevation of a sound, creating a region of ambiguity known as the cone of confusion. Monaural cues, however, can locate sounds at different elevations using spectral cues (Goldstein & Brockmole, 2017).
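
To get a feel for the size of interaural time differences, the sketch below uses Woodworth's spherical-head approximation, a standard textbook formula that is not given in the lesson; the head radius and speed of sound are typical values, not figures from the texts.

```python
import math

def interaural_time_difference(azimuth_deg, head_radius_m=0.0875,
                               speed_of_sound=343.0):
    """Approximate ITD in seconds via Woodworth's spherical-head
    formula: (r/c) * (theta + sin(theta)) for azimuth theta."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

# ITD is zero straight ahead and grows toward the side of the head.
for azimuth in (0, 30, 60, 90):
    itd_us = interaural_time_difference(azimuth) * 1e6
    print(f"{azimuth:>2} degrees: {itd_us:.0f} microseconds")  # up to ~660
```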

NEURAL SIGNALS

Now that we have identified different cues, think about how they might send and receive signals through neural circuits. One theory, the Jeffress model, proposes that the neurons that transmit signals from the ears are designed to receive signals from both ears. The signals move inward and ultimately meet as the neurons carrying the sound from the right ear meet the neurons carrying the sound from the left ear. The neurons where they meet are called coincidence detectors because they fire only when both signals arrive at the same time. When the signals meet simultaneously at such a neuron, that neuron indicates that the interaural time difference is zero. If the sound comes from one side, the ear on that side begins sending signals before the other ear, and the signals meet at a detector offset from the center (Goldstein & Brockmole, 2017).
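
A toy version of the Jeffress model can be sketched as a bank of detectors, each adding a different internal delay; the detector whose delay exactly offsets the interaural time difference responds most strongly. All values below are illustrative, not physiological.

```python
import numpy as np

# Toy Jeffress-style delay line, with correlation standing in for
# coincidence detection.
itd = 300e-6                              # sound leads in the left ear by 300 us
detector_delays = np.arange(0, 700e-6, 100e-6)   # internal delays, 0-600 us

t = np.arange(0, 0.01, 1e-5)
left = np.sin(2 * np.pi * 500 * t)               # 500 Hz tone at the left ear
right = np.sin(2 * np.pi * 500 * (t - itd))      # same tone, arriving later

responses = []
for delay in detector_delays:
    delayed_left = np.sin(2 * np.pi * 500 * (t - delay))
    responses.append(np.dot(delayed_left, right))  # coincidence ~ correlation

best = detector_delays[int(np.argmax(responses))]
print(f"best-matching detector delay: {best*1e6:.0f} microseconds")  # ~300
```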

Auditory Areas of the Brain

Areas of the brain implicated in sound localization include an area toward the back of the cortex, the posterior belt area, and an area toward the front, the anterior belt area. There seems to be a "what" auditory pathway that extends from the anterior belt to the frontal cortex, and a "where" auditory pathway that extends from the posterior belt to the frontal cortex. The "what" pathway helps determine what a sound is, and the "where" pathway determines where the sound is coming from (Goldstein & Brockmole, 2017).

BACK TO THE WAITING ROOM

We are going to return to the recorder concert. If the concert had been outside, the sound would have traveled straight from the recorders to your ears as direct sound. This concert was inside an auditorium, so sound reached the parents' ears both by the direct path and by bouncing off the various surfaces of the auditorium, which is indirect sound. As parents talk to each other in separate groups, they add to the general array of sound sources in the environment, called the auditory scene. You are able to separate out and listen to your conversation with another parent even though numerous conversations are going on around you. This ability to separate the sound from each source is called auditory scene analysis.

Imagine that you hear your name in a female voice while you are talking to a parent, and you see someone open their mouth and look your way at the same time, so you believe the sound of your name came from that person (even though another parent actually said it). This is the ventriloquist effect, which occurs when sounds come from one place but appear to come from another. In this case, you relied more on your vision than your hearing, and you were wrong. On the other side of this, people can use echolocation to detect the positions and shapes of objects without sight. People who cannot see often learn this technique of making a clicking sound and listening for the echoes to determine locations and shapes (Goldstein & Brockmole, 2017). These examples show how important hearing is as a source of sensory information.
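
Echolocation works because the echo's delay encodes distance: sound travels to the object and back, so distance is half of speed times delay. A minimal sketch, assuming the standard 343 m/s speed of sound in air (a typical value, not a figure from the lesson):

```python
# Distance from echo delay: the click travels out and back,
# so the one-way distance is half of speed times delay.
SPEED_OF_SOUND = 343.0   # meters per second

def distance_from_echo(delay_s, speed=SPEED_OF_SOUND):
    return speed * delay_s / 2

# A click whose echo returns after 10 ms came from about 1.7 m away.
print(f"{distance_from_echo(0.010):.2f} m")
```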

Conclusion

A simple concert shows us how much we use our hearing in our daily lives. Sound is processed as vibrations that are transported through the outer ear to the middle and then inner ear systems. Systems in the inner ear are responsible for transforming the vibrations into electrical signals that the brain can understand as audio messages. We also have mechanisms that help us determine where a sound is coming from based on which ear the sound arrives at first. Of course, sometimes we can be mistaken. This can happen when our eyes register one thing while our ears register a sound, causing us to make an assumption about where the sound comes from. Sound is important, and our ears can provide information when our eyes cannot, or when our eyes are mistaken.

Sources

Goldstein, E. B. & Brockmole, J. R. (2017). Sensation and perception (10th ed.). Boston, MA: Cengage.

Griggs, R. A. (2016). Psychology: A concise introduction (5th ed.). New York, NY: Worth Publishers.

