Training Transcript
Transcript for Audition / Hearing and Learning Part 1
– [Instructor] Do you ever think about hearing? Do you ever think about listening? We live in a world alive with the chatter and noise of people, places, and things. Each day, as we step out into this world, we are immersed in sound. Having access to all that sound is important because there is a lot of information in it for us. For instance, the sound of these jackhammers might alert you to a nearby construction site. And the sound of a busy office might tell you that business is good.

But there’s so much more. As infants, when we heard the sounds of spoken language and listened in on the conversations of mom, dad, and others taking place all around us, it helped us develop spoken language skills of our own. As we developed spoken language skills of our own, we gained access to new ways of obtaining information, storing information, exchanging ideas, and expressing feelings. What’s more, with our spoken language skills, we could access the keys to literacy. We could now learn to engage the world through reading and writing. As our literacy skills sharpened, avenues for higher learning opened up. We could learn about other people, history, science, math, and so much more. We could pursue the teachings of Plato and learn about Einstein’s theory of relativity, and a whole world of future possibilities opened up. So when you think of listening, don’t just think about hearing this sound or that. Instead, ask yourself, what has listening done for me lately?

Welcome to the Wyoming Early Hearing Detection and Intervention Program’s Audition, Hearing and Learning course. In this four-part course, we will cover four primary topic areas: the development of auditory skills; the connection between hearing, learning, and literacy; the typical school scenario; and outcomes for a lifetime. In part one, we’ll discuss hearing, hearing loss, and learning. We’ll examine the power of a redundant auditory signal, the idea of hearing with the brain, and the effect of hearing loss on the auditory cortex. In part two, we’ll discuss the role of hearing in acquiring vocabulary and reading readiness skills, and the connection between hearing and literacy. In part three, we’ll take a look at the typical school scenario. We’ll discuss the auditory premise of education and present a typical classroom setting. And finally, in part four, we’ll review what we mean by outcomes for a lifetime and look at the tremendous effect audition has on us throughout our lives.

Can you remember how you first learned to speak? Chances are that nobody sat you down and said okay, now I’m gonna teach
– Okay, now I’m gonna teach you how to talk.
– You to talk. In fact, learning to talk is actually the result of a great deal of hearing and listening to speech over and over throughout the first years of life. A child learns language, language meaning having something to say versus simply saying something, by being bathed in language each and every day. Now, to truly help you grasp the power of being exposed to language over and over and over, watch this video. After you know the chorus, advance to the next slide.
♪ Does the hanky panky ♪ ♪ Yeah, my baby does the hanky panky ♪ ♪ My baby does the hanky panky ♪ ♪ My baby does the hanky panky ♪ ♪ Hey, my baby does the hanky panky ♪
– [Instructor] Okay, I’m gonna take a wild guess. I’d be willing to bet that even if that was the very first time you’ve ever heard that song, you now know the chorus by heart and could jump right in and sing along the next time you hear it. Am I right? As that song so perfectly demonstrates, it takes listening to a redundant signal for children to develop vocabulary and language. In fact, spoken language is the product of hearing spoken language over and over and over again, just like listening to the line, “My baby does the hanky panky,” sung over and over and over again.

It takes a great deal of hearing and listening before a baby says his or her first word. By the fifth month of gestation, the auditory system is fully formed, which is why newborns are already familiar with their mother’s voice, and over the course of the first year of life, a baby will hear words over and over. By about 12 months of age, a child will produce his or her first spoken words. Speech develops naturally because we hear. For most children, listening to spoken language is the primary manner in which they develop spoken communication skills of their own. Babies exposed to, bathed in, and surrounded by spoken language will develop better auditory skills earlier than those babies who do not receive this type of stimulation.

It’s important to understand that when we hear, we hear with our ears and with our brain. But how is that? When babies hear sounds, their ears are essentially receiving auditory signals and sending those signals to the auditory cortex portion of the brain where, finally, the information is processed. When thinking about hearing and listening, it’s helpful to think of the brain as a computer’s hard drive or CPU, the central processing unit, and to think of the ears as the keyboard. We know that a keyboard is vital to entering information into a computer’s central processing unit or hard drive. It helps to think of the ears in much the same way. Without a healthy, properly functioning keyboard, only part of what you are entering via keystrokes is available. The same thing is true for the ears. So while the ears allow access to auditory information, it’s the auditory cortex portion of the brain that actually processes the information so that it can be meaningful for the listener.

What happens during the first year of life is a terrific example of data inputting that precedes data processing. Just as newborns need to be capable of hearing auditory signals before being able to process them as sound, they must be able to hear spoken communication before being capable of speaking. So just as data must be entered into a computer via a keyboard, so too must auditory information be entered into the brain via audition. This analogy is very important to remember because, among other things, it will help you grasp the importance of making certain a child’s auditory system is working well. When we screen a child’s hearing for hearing loss, we’re essentially checking to make sure that the keyboard is working properly. Because the stimulation of the auditory cortex is vital in the hearing process, its development will play a pivotal role in the acquisition of spoken communication, reading, and academic skills in children with typical hearing, as well as in children with all degrees of hearing loss.

Have you ever heard someone say that the brain is plastic? When we say that the brain is plastic, we’re really talking about how the brain is self-adjusting and adaptive. It can wire and then rewire itself.
This is what is meant by brain plasticity, not this. By the time a baby is three years old, his or her brain has become a complex center of electrical and neural activity. Because the brain is so active during this early period of growth and maturation, it’s primed and ready to be stimulated with all kinds of auditory information. After a child reaches the age of three, his or her ability to take advantage of the brain’s plasticity begins to decrease. In order for neural pathways to grow stronger, they need to be consistently stimulated with sensory information. Again, the importance of a redundant signal. Each time a neural network is stimulated, that pathway grows stronger. If the weaker neural pathways are not adequately stimulated, they will eventually begin to wither as a result of insufficient activity. This is referred to as the pruning process. In other words, a properly functioning keyboard, or ear, is the first step in providing data, auditory stimulation, to the hard drive, the brain. This is why it’s so important that a vigorous, consistent auditory signal reach the auditory cortex. With respect to hearing and language development, consistently stimulating the auditory cortex with auditory signals is critical if we want to develop neural networks strong enough to become resilient to the brain’s natural pruning processes.

In review, if auditory data is entered inaccurately, incompletely, or even inconsistently, analogous to using a malfunctioning keyboard, then that child will have incomplete or inaccurate auditory information to process. It’s essential to have a keyboard and hard drive that work well and work well together. Imagine that you are typing a letter to a friend. In this letter, you mistakenly typed the word tango when you really meant to type the word mango. Going in and correcting that is as easy as using the find function and swapping out every instance of the word tango for the word mango. Unfortunately, when it comes to hearing and hearing loss, there is no find function. We cannot go in and find missing or inaccurate information and correct it. There is no find function in the brain. If the information is laid down via a faulty keyboard, or ears, then the information that exists on the hard drive is inaccurate. Once the keyboard is repaired, allowing data to be entered accurately, analogous to the child’s hearing loss being managed, enabling him or her to detect word sound distinctions and to begin comprehension, what happens to all the previously entered inaccurate and incomplete information? Unfortunately, it still exists. The information must be retaught. There is no magic button, no find function, that suddenly converts inaccurate auditory information into complete and accurate information.

A common misperception is that people with hearing loss just hear things more softly than a person with normal hearing, or that a person with hearing loss is deaf. Neither of these is true. Hearing loss is on a continuum. There are varying degrees of hearing loss. Hearing loss is like a filter that makes some sounds easier to hear and some sounds more difficult to hear. Hearing loss simply means that some sounds, but not all sounds, are being heard and processed by the auditory cortex. When we think of data entry into the auditory cortex as being incomplete or inconsistent, it’s because of the filtering effect of a hearing loss.
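For technically inclined readers, the find-function analogy can be made concrete with a minimal Python sketch; the letter text here is made up purely for illustration.

    # The "find function": every instance of the mistyped word can be
    # swapped for the intended one in a single pass.
    letter = "I packed a tango for lunch. A ripe tango is delicious."
    print(letter.replace("tango", "mango"))
    # -> I packed a mango for lunch. A ripe mango is delicious.
    # The brain offers no such operation: auditory information that was
    # laid down inaccurately has to be retaught, not batch-corrected.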
As we just stated, hearing loss causes some speech sounds to be heard less clearly than others, making speech more difficult to understand even though it is heard. To get a better idea of what hearing loss is like, the following examples will help demonstrate the auditory information lost by this filtering effect. Speech perception is an important aspect of speech and language development. Let’s take a look at how speech perception can be affected by hearing loss. In this demonstration, try to follow along with Fred Flintstone and Barney and Betty Rubble as their conversation is altered by the filtering effects of hearing loss.
– One, two, three, four, five, six, seven.
– Ah-ha, you’re on my apartment building on Granite Avenue. You owe me 300 bucks, give it up.
– [Wilma] Fred, take it easy, it’s only a game.
– Wilma, I’m just like them big tycoons, I play to win. Now, Barney, pay up or get out of the game.
– But I’m busted.
– That’s one down and two to go. Betty, it’s your turn.
– I don’t have any more money either. You got it all.
– Then I’ll take a mortgage on your orphan’s home. Well, come on, shoot the dice, will ya, don’t just sit there like a dummy.
– I will not have you talking that way to our guests.
– Come on, Barney, I think we better go home.
– [Instructor] As you noticed, while Fred was speaking, the degree of hearing loss was indicated. When Fred was speaking and normal hearing was demonstrated, we could hear Fred speak and also clearly understand what he was saying. However, things changed as the clip began to demonstrate the varying degrees of hearing loss. When a mild hearing loss was simulated, we could still hear Fred speaking, but it became more difficult to understand him. As a moderate hearing loss was shown, we could still hear Fred and Barney speaking to one another; however, it became very difficult to understand them. Finally, when a severe hearing loss was demonstrated, it was difficult to both hear and understand them. We could barely hear Betty Rubble speaking, and understanding what she was saying was impossible. This is the take-home message: children with hearing loss have access to some sounds, but because they don’t hear all sounds, what they hear may not be intelligible or understandable. Let’s watch the video one more time, paying attention to the difference between hearing and understanding.
– One, two, three, four, five, six, seven.
– Ah-ha, you’re on my apartment building on Granite Avenue. You owe me 300 bucks, give it up.
– [Wilma] Fred, take it easy, it’s only a game.
– Wilma, I’m just like them big tycoons, I play to win. Now, Barney, pay up or get out of the game.
– But I’m busted.
– That’s one down and two to go. Betty, it’s your turn.
– I don’t have any more money either, you got it all.
– Then I’ll take a mortgage on your orphan’s home. Well, come on, shoot the dice, will ya, don’t just sit there like a dummy.
– I will not have you talking that way to our guests.
– Come on, Barney, I think we better go home.
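For technically inclined readers, the filtering effect simulated in this clip can be roughly sketched in a few lines of Python. This is an illustration only, not how the demonstration was produced: the file name and the 1,000 hertz cutoff are hypothetical, and a real hearing loss attenuates frequencies according to a person’s audiogram rather than at a single cutoff.

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import butter, lfilter

    # Read a mono speech recording (hypothetical file name).
    fs, speech = wavfile.read("speech.wav")

    # Approximate a high-frequency hearing loss with a low-pass filter:
    # energy above the cutoff fades, so quiet high-pitched consonants like
    # S and TH drop out while louder low-pitched vowels remain audible.
    b, a = butter(4, 1000, btype="low", fs=fs)  # illustrative 1 kHz cutoff
    filtered = lfilter(b, a, speech.astype(np.float64))

    wavfile.write("speech_simulated_loss.wav", fs, filtered.astype(np.int16))

Played back, the filtered file gives an impression similar to the clip: you can still hear that someone is speaking, but much of what carries intelligibility is gone.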
– [Instructor] Now, let’s look at the effects of hearing loss by using a visual analogy. In this visual analogy of hearing loss, normal hearing can be thought of as allowing us to see the full picture. Here we can see a magnificent vista, complete with a river, a mountain range, big blue skies, and trees. If we were to simulate a minimal hearing loss, we could imagine that this image would be less complete, and sure enough it is. Fortunately, however, we can still figure out what we are seeing. We can still make out the river, the trees, the grass, the sky, and even the mountains in the background. However, there are a few parts missing, so we’re not getting the full picture. If we were to simulate a mild hearing loss, you can see that trying to interpret what this image is showing has become much more difficult. Portions of the stream are missing, as are parts of the entire foreground. And we’re missing all but tiny pieces of the mountain range and sky. All in all, we are now being presented with much less information to use to interpret what this image is.

Now look at a moderate hearing loss. What information about this image is left? Not much. We can see some of the sky and a small portion of the foreground, but beyond that, we’re missing too much information to interpret what we’re supposed to be looking at. Think of a child with a moderate hearing loss sitting in a classroom. They’re still hearing, but their understanding is severely affected. Now look at what happens to the image when we simulate a severe hearing loss: it’s virtually impossible to determine what we’re looking at. If we simulate a profound hearing loss, it becomes impossible to identify the image because so little information is available. Now, let’s add the information back in. You can see that as pieces of the image reappear, the picture, or message, becomes more clear. The more sounds we have access to, the clearer the message is.

In review, as children develop their speech and language skills, it’s critical that they’re able to receive, or hear, and process, or understand, all the auditory information we provide as speakers. The familiar sounds audiogram shown here is a terrific tool to use to help demonstrate just how unique and delicate speech sounds are. As you can see, it illustrates the characteristics, loudness or intensity and frequency or pitch, of the speech sounds that we commonly hear as we go about our day. Running horizontally along the top, we have frequency, or pitch, expressed in hertz. You’ll notice an increase in frequency as you move from left to right. The S sound, sss, is higher pitched, approximately 4,000 hertz, as compared to the M sound, mmm, which is approximately 250 hertz. Running vertically along the side of the familiar sounds audiogram, you’ll see loudness expressed in decibels, and loudness increases as you move from top to bottom. For example, the J sound, juh, is louder than the P sound, puh. The loudness level of normal conversational speech falls at approximately 45 to 50 decibels. It consists of an array of low frequency, mid frequency, and high frequency sounds, and this region of the audiogram is known as the speech banana. In general, vowels are low frequency information, and consonants are comprised of more high frequency information. Consonants carry more of the intelligibility of spoken language. Vowels carry more of the power of spoken language.
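For readers who want to see the audiogram’s filter idea made concrete, here is a minimal Python sketch. The 250 and 4,000 hertz figures come from the discussion above; the sound levels and the listener’s thresholds are illustrative assumptions, not clinical values.

    # Rough positions of two familiar sounds on the audiogram:
    # (frequency in hertz, loudness in decibels). Levels are assumed.
    speech_sounds = {"m": (250, 50), "s": (4000, 30)}

    # A hypothetical listener with a high-frequency hearing loss:
    # the softest level they can detect at each frequency.
    thresholds = {250: 25, 4000: 55}

    for sound, (freq, level) in speech_sounds.items():
        # A sound is heard only if it is at least as loud as the
        # listener's threshold at that frequency.
        status = "heard" if level >= thresholds[freq] else "filtered out"
        print(f"{sound} ({freq} Hz): {status}")

The louder, low-frequency M survives while the quiet, high-frequency S falls below threshold, which is exactly the filtering effect described above.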
When we produce a vowel sound like ooh or eee, we use our voice to produce volume, and our articulators are open, allowing the speech sound to be spoken without restricting the flow of air. On the other hand, when we produce some consonant sounds, like S, sss, or TH, thh, for example, the voice is not used, volume is reduced, and the articulators restrict the flow of air. A child with normal hearing will have access to both vowels and consonants and should be able to hear and understand words. Children with hearing loss will have certain speech sounds filtered out. This will make it difficult to understand what’s being said. These children are also likely to have difficulty producing these sounds in their own speech because they’ve not heard them clearly. So if a child hears speech in a limited or distorted way, then that child is likely to produce speech in the same manner.

To gain a better understanding of vowels, consonants, and speech intelligibility, consider the following demonstrations. Can you determine what word this is if you only have access to the lower frequency vowel sounds? This word could be anything. Stork, frown, clown, stock. There are any number of possibilities. But what if you had access to the consonant sounds, as shown here? As you can see, it’s much easier to venture a guess. Now, imagine that these fill-in-the-blank examples were spoken. If a linguistically mature adult who’s developed speech and language skills heard that second example, he or she may be able to quickly whittle down the choices to click, clock, clack, or cluck. His or her brain will fill in the blank. But for a child with hearing loss who hasn’t yet sufficiently developed speech and language skills, filling in one or more blanks is very difficult to do.

In this next example, we show what it might be like for the brain to hear the sentence, Freddy thought he should find a whistle, with each of the varying degrees of hearing loss. As you can see, each degree of hearing loss filters out more and more speech sounds. More of the intelligibility is lost as the growing degree of hearing loss continues to filter out more and more sounds of speech. As illustrated for a profound hearing loss, the loss is such that the brain can only hear what it perceives to be either soft or loud sounds. Though the loss of intelligibility with a milder hearing loss is small compared to a severe or profound hearing loss, the loss of auditory information can still disrupt the intelligibility of spoken words and present problems. Try saying the sentence aloud with the missing sounds. Again, remember, milder degrees of hearing loss may not be problematic for a linguistically mature person who has good attending skills and a strong language foundation to fill in the missing speech parts. However, even a minimal hearing loss can sabotage the speech and language development of a child who is in the process of acquiring these skills.

Finally, in this last demonstration, you’ll be presented with three groups of 10 spoken words, each introduced as it would sound with a different degree of hearing loss. Grab a sheet of paper. Once you have your paper, make three columns, each numbered from one to 10, as shown here. As you hear each word, write down the word that you hear. Think of yourself as a third grader taking a spelling test. Advance to the next slide when you’re ready to begin.
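Before you begin, one more sketch for technically inclined readers: the fill-in-the-blank demonstration above can be mimicked with two short Python helpers. Masking written letters is only a rough stand-in for filtering vowel and consonant sounds acoustically, but it shows why the consonants narrow the guesses so much.

    # Keep only vowel letters (the low frequency cue); mask the rest.
    def keep_vowels(word):
        return "".join(c if c in "aeiou" else "_" for c in word)

    # Keep only consonant letters (the intelligibility cue); mask vowels.
    def keep_consonants(word):
        return "".join("_" if c in "aeiou" else c for c in word)

    print(keep_vowels("clock"))      # __o__  could be stork, frown, clown, stock...
    print(keep_consonants("clock"))  # cl_ck  narrows to click, clock, clack, cluck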
This first sequence of 10 words will be presented with some high frequency sounds filtered out, as the words would sound with a severe high frequency hearing loss. In column one on your sheet of paper, write down the word that you hear. When you finish, advance to the next slide.
– [Person] Word number one, shoe. Word number two, tree. Word number three, math. Word number four, desk. Word number five, snack. Word number six, miss. Word number seven, test. Word number eight, thumb. Word number nine, fish. Word number 10, spill.
– [Instructor] This second sequence of 10 words will present the same words you just heard, but with fewer high frequency sounds filtered out. In column two on your sheet of paper, write down the word that you hear. When you finish, advance to the next slide.
– [Person] Word number one, shoe. Word number two, tree. Word number three, math. Word number four, desk. Word number five, snack. Word number six, miss. Word number seven, test. Word number eight, thumb. Word number nine, fish. Word number 10, spill.
– [Instructor] Let’s listen now to the words as they would be heard with normal hearing. Write down the words in column three on your paper.
– [Person] Word number one, shoe. Word number two, tree. Word number three, math. Word number four, desk. Word number five, snack. Word number six, miss. Word number seven, test. Word number eight, thumb. Word number nine, fish. Word number 10, spill.
– [Instructor] Well, how did you do? How many did you get right? How many did you get right in column one? In column two? How about column three? Does your score reflect your intelligence, or cognitive ability, or your ability to spell? No. Here’s the answer sheet of a person with a PhD. Do the wrong answers here reflect the intellect of the test taker? No. Your score reflects your ability to hear the words. How difficult might a spelling quiz like this be for a child with hearing loss? Can you understand why a child might look at his neighbor’s paper? If you were taking this test with someone else, would you feel compelled to do that? When you have finished going over your answers and comparing columns, please advance to the next slide.

Having access to all speech sounds is important if we are to clearly understand what someone is saying. Detecting and diagnosing any hearing loss and beginning the appropriate management or early intervention is critical. If we are to truly capitalize on the brain’s neuroplasticity, it is critical that we stimulate the auditory cortex with a full and consistent stream of auditory information. This is why the goal of the Wyoming EHDI Program is to screen newborns for hearing loss by one month of age, diagnose a hearing loss by three months of age, and, by six months of age, make available the appropriate hearing health management and/or early intervention services for children with hearing loss and their families.

The longer a child’s hearing loss remains unrecognized and unmanaged, the more far-reaching the snowballing effects of hearing loss become. As you can see demonstrated on this chart, hearing loss affects much more than just a child’s speech and language development. Starting at the bottom of the chart, notice how each successive phase of a child’s development builds off the cornerstone of audition. Audition leads to spoken language. Spoken language leads to spoken communication. Spoken communication affects our experiences. And experiences affect our education, and so on, until we get to our vocational choices, which affect our economic and cultural choices. Thus, the difficulties presented at one stage of development will affect the way that the next stage of development unfolds.

The good news is that children with all degrees of hearing loss can achieve greater spoken language and literacy outcomes with timely and appropriate screening, diagnosis, management, and intervention. But remember, time is of the utmost importance. The best results are achieved when the identification, diagnosis, and intervention processes follow the one-month, three-month, and six-month timeline.