Hearing Matters Podcast

Speech Audiometry and Issues in Recorded versus Monitored LIVE Voice feat. Lisa Lucks Mendel, Ph.D.



Unlock the secrets of speech audiometry and speech perception with the renowned Dr. Lisa Lucks Mendel. With over 35 years of expertise, Dr. Mendel offers an enlightening exploration into the significance of choosing the right tests for speech perception assessments. Learn why classic tests like the NU-6 and CID W-22 remain relevant and how full 50-item word lists provide a more authentic reflection of natural speech sounds. Discover the rationale behind shorter word lists and how they can streamline assessment without compromising their purpose.

Get ready to unravel the complexities of evaluating speech recognition in challenging auditory environments. The Signal-to-Noise Ratio 50 (SNR-50) test stands as a pivotal tool in understanding hearing loss and the benefits of hearing aids. As we examine the nuances of phoneme-focused scoring, particularly impactful for cochlear implant users, we offer fresh insights into setting realistic expectations for auditory device performance. This episode also delves into the scoring protocols that might just change the way we interpret hearing capabilities.

Join us as we compare the efficacy of modern MP3 recordings against traditional monitored live voice (MLV) in audiometric testing. Uncover the surprising findings from our student-led research and the implications for clinical practice moving forward. As we advocate for standardized methods in speech and noise assessments, Dr. Mendel reflects on the historical recommendations that still resonate today. This episode promises a comprehensive look at enhancing real-world hearing evaluations, leaving our listeners informed and inspired by Dr. Mendel's invaluable contributions.

Connect with the Hearing Matters Podcast Team

Email: hearingmatterspodcast@gmail.com

Instagram: @hearing_matters_podcast

Twitter: @hearing_mattas

Facebook: Hearing Matters Podcast

Blaise M. Delfino, M.S. - HIS:

Thank you to our partners: Sycle, built for the entire hearing care practice; Redux, the best dryer, hands down; CaptionCall by Sorenson, life is calling; CareCredit, here today to help more people hear tomorrow; and Fader Plugs, the world's first custom adjustable earplug. Welcome back to another episode of the Hearing Matters podcast. I'm founder and host Blaise Delfino and, as a friendly reminder, this podcast is separate from my work at Starkey.

Dr. Douglas L. Beck:

Good afternoon. This is Dr. Douglas Beck with the Hearing Matters podcast, and today's guest is Dr. Lisa Lucks Mendel. Lisa is a professor emerita in the School of Communication Sciences and Disorders at the University of Memphis. She is a clinical research audiologist with more than 35 years of clinical and research experience in the assessment of speech perception for individuals with normal hearing and hearing loss, and she has published extensively in the area of speech perception assessment. Dr. Mendel is a fellow of the American Speech-Language-Hearing Association. She has received the honors of the Council of Academic Programs in Communication Sciences and Disorders and serves as chair of the Audiology National Advisory Council for the Educational Testing Service.

Dr. Douglas L. Beck:

Those are the folks out in New Jersey, right, that used to run the SATs and all that stuff. Lisa, welcome, it's so nice to see you. And by way of disclosure I should say that 40 years ago for me, probably just a few months ago for you, we worked together at the House Ear Institute doing some research projects on cochlear implants before the FDA approved them, and we worked with my dear friend and resident genius, Dr. Jeff Danhauer.

Dr. Lisa Lucks Mendel:

That was how we first started, at the House Ear Institute. I was a Ripley Fellow. I was a PhD student at UC Santa Barbara, yeah.

Dr. Douglas L. Beck:

So today I want to talk about speech audiometry, because you are among the most published people in that area, and I think that in audiology we do a lot of clinical work, clearly, and we do a lot of research work, and sometimes the research, validation, and verification does not make it all the way to clinical work, and I mean that in a kind way. But I think that most people are probably doing word recognition score testing incorrectly. So what I'd like to do is talk a little bit about the NU-6s and CID W-22s, because I think those are the most common monosyllabic word tests used. Can you start by telling us a little bit about the idea of balancing and word list length and things like that? How is it supposed to be done?

Dr. Lisa Lucks Mendel:

When I think about speech audiometry or speech perception assessment, the thing that I always would tell my students is: what is your purpose? Why are you doing this? Are you trying to find out how they're doing with speech understanding in a controlled environment? Or do you really want to know how this predicts their performance in a real-world environment? When you think about what the purpose is, it helps you determine the appropriate test.

Dr. Douglas L. Beck:

I love that. That's a great approach, thank you.

Dr. Lisa Lucks Mendel:

So, yes, you're right, we still use monosyllabic words, meaningful monosyllabic words. The NU-6 and W-22 are older than all of us. They were designed to assess telephone communication back in the 20s, 30s, and 40s.

Dr. Douglas L. Beck:

Yeah, that was the Bell Labs out of Murray Hill, New Jersey.

Dr. Lisa Lucks Mendel:

Exactly, the early ones. You know, the NU-6 and W-22 came a little bit later, but it was the same model, and so we as audiologists said, well, let's just see if we can use those word lists to see how people are understanding speech, which is a good idea, but that was back then, and we still need to be moving forward. But to your question: the monosyllabic word lists are 50 items, the NU-6 and the W-22s, and there are many others out there. The idea behind the 50-item word list was to make sure that they were either phonetically or phonemically balanced. Most of them are phonetically balanced, meaning you are using speech sounds that typically occur at their regular frequency within the English language.

Dr. Douglas L. Beck:

And that's important because it's not necessarily all the sounds of the English language. It's the sounds in proportion to how they occur in common speech.

Dr. Lisa Lucks Mendel:

Exactly, and if you don't do an entire 50-word list, you no longer have phonetic balancing. Now, if that's not important to you, then that's okay, but the bottom line is, those lists are what we are dealing with, not the words themselves. A word by itself is not phonetically balanced; it's the entire list that's balanced.

Dr. Douglas L. Beck:

Yeah, and I remember some research that was done. I hesitate to say the name because I might get it wrong, but tell me if this is correct. About four or five years ago there was an article that came out, and the researchers were comparing 25-word lists, whether they were NU-6s or CID W-22s. That wasn't the point. The point was that if they just used random monosyllabic words out of the phone book or out of an encyclopedia, they got the same results, because none of them were particularly balanced or particularly clinically efficacious.

Dr. Lisa Lucks Mendel:

Exactly, and so it's not meaningful. I know a lot of people want shorter word lists, and I know it takes time to do a 50-word list, but if you want to be sure you're assessing regular speech sounds for English speakers, then you really do need to do that full list with that phonetic balancing.

Dr. Douglas L. Beck:

And for the short lists, there are reasonably well-documented 10-word lists. Can you talk about those?

Dr. Lisa Lucks Mendel:

Yeah, there are some who have taken the 50 items, rank ordered the most difficult items in that 50-word list, and put them as the first 10. So if a patient can get all of those first 10 correct, then they don't need to do the rest of the test, because they've gotten the most difficult items. I'm okay with that.

Dr. Lisa Lucks Mendel:

But again what's your purpose? If you just want to see how they do with 10 difficult words, okay. Is that telling you much about what their speech perception ability is? I'm not sure.

Dr. Douglas L. Beck:

This becomes a very interesting discussion, because there are people now who are advocating that rather than doing speech in quiet, a typical word recognition list, we should probably just be doing speech in noise. That to me is a hard sell. I don't want to replace speech in quiet, because I think we have 75 years of history of taking those measurements, but I do think that a far more meaningful measure is speech in noise, because that tells us how the patient is doing in the real world, more or less. And of course there is little face validity in most speech-in-noise tests if you do them in headphones, because then we get into the localization versus lateralization discussion: is the sound inside your head, or is it outside, around you?

Dr. Douglas L. Beck:

Those tests require that your brain considers the head shadow effect, interaural loudness differences, and interaural timing differences. Those things all go into speech perception in noise, but they're not measured if you're doing a test in one ear. And let's talk a little bit about that. If we're doing speech in noise, some people do measure speech in noise in one ear, then speech in noise in the other. I personally don't understand the value of that. I can see that as a signal-to-noise ratio test, I can see that as a figure-ground test, but if you're not doing it in an acoustic environment that involves localization, I'm not sure how that relates to anything diagnostically.

Dr. Lisa Lucks Mendel:

Well, I agree, and I think we need to again step back a second and remember that speech understanding is a bottom-up and a top-down process, right?

Dr. Lisa Lucks Mendel:

So, am I hearing all the acoustic pieces of information? Do I hear the speech sounds, the different phonemes? But then I need my brain to fill in the gaps, to perform closure when I didn't quite hear everything exactly correctly. So if you're just doing speech in noise in one ear, you're not really taxing the brain. And we say we hear with our brains, and I think there's a lot of strength to that comment.

Dr. Douglas L. Beck:

Yeah, I have to agree. I mean, most of the tests that I really like in speech-in-noise testing involve a loudspeaker in front carrying the primary talker, and noise in the back. Now, that's nowhere near a perfect test, nobody's arguing that it's a perfect test, but I think it's the best replication that we can do in a booth without using virtual technology. And I do think that ultimately we'll have three or four or five speakers in a booth and we'll be able to do it virtually, so that it is all around you and we can move the primary talker. I mean, you could do that now in labs, but it's not clinically easy to do in most booth situations.

Dr. Lisa Lucks Mendel:

Right, and I think with the improvement in technology we'll be able to get that kind of virtual surround sound in the booth, probably sooner rather than later. And I think the other thing that's important is: what's the type of noise that we use in those tests?

Dr. Douglas L. Beck:

Oh, I'm so glad you mentioned that. Tell me about that.

Dr. Lisa Lucks Mendel:

Yeah, because a lot of the tests, well, they vary. Some of the tests have speech-shaped noise. That's not actual talking, it's just a kind of sound, and that is not typically what people complain about when they say, I hear you but I don't understand what you're saying because it's noisy. What they're complaining about is other people talking. So many of the speech-in-noise tests that we have today do have people talking in the background, and there's multi-talker babble and then there's babble. I think babble means you have so many people talking, what we used to call cafeteria noise, where you hear the people in the background but you can't really make out too many of the voices. Multi-talker is like two, three, four talkers: you're going to hear certain words from certain people, and I think that's the more realistic background noise that people complain about in a restaurant or a noisy environment.

Dr. Douglas L. Beck:

Yeah, people will think that we rehearsed this, but we didn't. But I agree with you 100%. In fact, in the Beck-Benitez speech-in-noise test, which is free, we recommended four-talker babble as well, for exactly that reason: it's very, very challenging. As you get to 10 or 12 or more speakers, it becomes more like speech noise and it does become indiscernible. And the idea of why we wanted four-talker babble was that we wanted it to be difficult, we wanted it to be challenging, to replicate what would happen at a cocktail party, what would happen at a restaurant. And I agree with you: speech spectrum noise, narrow-band noise, white noise, Gaussian noise, those noises, that sort of a sound, your efferent nervous system can actually suppress by two or three dB, which you can't do with four-talker babble. So it's much more challenging to use four-talker babble.

Dr. Douglas L. Beck:

Now, if you don't have four-talker babble, what I usually say is, okay, use speech spectrum noise. But then the most important thing is you have to have an unaided score and an aided score, regardless of which background noise you use. Because if you're trying to use it as part of best practices, perhaps for hearing aid fittings, you want to get the unaided score for your patient with nothing in their ears and see what their speech-in-noise score is, and then aided. It could be the hearing aids that they've been wearing for 10 years, it could be a new product that they're trying. And my philosophy is, if you're not improving the SNR-50, the signal-to-noise ratio at which they get 50% of the words correct, you're probably just making things louder. I mean, the goal, ultimately, is for the patient to have an improved signal-to-noise ratio such that they can better understand speech in noise, because that's mostly why they came to see you.

Dr. Lisa Lucks Mendel:

Yeah, and I think that, you know, the point you bring up about the SNR-50: not everybody's doing that on a regular basis. We're still doing percent-correct speech perception scores. I'm not sure how meaningful that is.

Dr. Lisa Lucks Mendel:

The SNR-50, which I think people are calling either SNR-50 or speech reception threshold, not to be confused with speech recognition threshold, which is a spondee threshold, drives me crazy, because a lot of people do use those synonymously. The SNR-50 is a more realistic measure, as you said, because it asks: where would they get that 50% correct in noise? And then how does it compare to a person with typical hearing, so we can look at the SNR loss, the difference between those with typical hearing and those who have hearing loss? That's helpful information to know: where do they stand now, and, like you said, unaided versus aided, we can see those comparisons.

Dr. Lisa Lucks Mendel:

You know, the QuickSIN used that comparison of SNR loss and said, okay, if you have a moderate loss, you need this kind of hearing aid technology. Well, we're past that now, because all of it said you need directional mics, and pretty much every hearing aid uses directional mics. But we can use it more from a diagnostic standpoint, and I think that's really helpful.

Dr. Douglas L. Beck:

Yeah, and for those not familiar, let me walk you through an SNR-50, how you would do that, and, Dr. Mendel, please correct me if I'm wrong. What we recommended in the Beck-Benitez test is about 70 dB HL of speech in the front, so that's loud for people with mild to moderate loss. The background noise, your four-talker babble, might start at 55, so your signal-to-noise ratio is 15, and that's a very easy task for people with normal hearing or mild to moderate hearing loss. Then what you do is give them three words: say the word went, say the word shop, say the word thought. If they get all three correct, that's great. Then you keep the primary talker in front at 70 and bring the background noise up five, so now you're at 60. That's a 10 dB signal-to-noise ratio. More challenging, but still easy.

Dr. Douglas L. Beck:

Mead Killion published a scale 30 years ago where he said people with normal hearing and listening ability need about zero to three dB signal-to-noise ratio to get half the words correct, the SNR-50. People with mild to moderate loss need about 8 dB SNR to get 50% of the words correct, and people with severe loss need 15 or greater. So that's your basic scale, and you can look that up: Mead Killion, brilliant paper, it was in Seminars in Hearing. So now we're at a 10 dB SNR. That's pretty easy, because people with mild to moderate loss need about eight decibels, so they get those three words right. Then what I would do is increase the noise five more. Now your background noise, the four-talker babble, is 65, and your speech in front, your primary signal, is 70. That's a 5 dB SNR. That's where things get interesting, because a lot of people with mild to moderate loss cannot repeat those words. So then what you might do is go down a little bit, make the background noise easier, bring that down to, let's say, 58. So now you've got a 12. So you're going to work, more or less, in a Hughson-Westlake-like approach.

Dr. Douglas L. Beck:

Like we do with pure tones, you can go up and down and find the point where they get half the words correct. That's how we do it in Beck-Benitez. There are lots of ways to do it. There's the BKB protocol, there's the AzBio protocol, there's the QuickSIN protocol. They're all fine. I don't care which one you use, but the point is to understand how you do that. What's an SNR-50? The signal-to-noise ratio at which you get 50% of the words correct. And if you do it the way Dr. Benitez and I did it, we're just keeping the primary speech in front stable, and we're varying the background noise to see at what point the patient falls apart, and that's pretty reliable. So I would get an unaided SNR-50 and then an aided one, and again, if you're not improving their SNR-50, you could argue that you're just making things louder, and louder is not clearer. Most patients want sounds clearer, not louder. So, Lisa, I'm sorry, I didn't mean to hijack that, but I wanted to make sure that we've explained what an SNR-50 is. Did I get that right? Do you want to add to that?
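The up-down bracketing Dr. Beck describes can be sketched in code. This is a hypothetical illustration, not the published Beck-Benitez procedure: the `snr50_search` function, the step size, the trial count, and the simulated listener are all assumptions made for the example.

```python
def snr50_search(speech_hl=70, noise_start=55, step=5, patient=None, trials=12):
    """Bracket the SNR at which a listener repeats about 50% of words correctly.
    The speech level stays fixed; only the background babble level moves,
    mirroring the pure-tone-style up-down search described above."""
    noise = noise_start
    reversals = []
    last_correct = None
    for _ in range(trials):
        snr = speech_hl - noise
        correct = patient(snr)  # True if the listener repeats the words
        if last_correct is not None and correct != last_correct:
            reversals.append(snr)  # performance flipped: note the SNR
        noise += step if correct else -step  # harder after success, easier after failure
        last_correct = correct
    return sum(reversals) / len(reversals)  # average SNR at the reversal points

# Simulated listener with a mild-to-moderate loss: succeeds at or above 8 dB SNR,
# in line with the Killion scale mentioned above.
mild_loss = lambda snr: snr >= 8
print(round(snr50_search(patient=mild_loss), 1))  # → 7.5
```

Run once unaided and once aided: if the aided estimate is not lower (better), the fitting is arguably just adding loudness, which is the point made above.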

Dr. Lisa Lucks Mendel:

Very good, you studied hard. Most people, as far as I know, are not doing it routinely in the clinic, and I like what you're saying about unaided speech perception and aided. Again, I don't know how many people are actually doing that in the sound field in their hearing aid fittings, but it gives you the baseline to see whether you're seeing improvement or not, and to me that's telling you more than if you just look at monosyllabic words, or NU-6 in quiet versus noise, or unaided versus aided, because it tells you a lot.

Dr. Douglas L. Beck:

Yeah, and these are functional measures. I mean, if you have an unaided SNR-50 of 14 and an aided SNR-50 of six, you've dramatically helped that patient. That's a major change.

Dr. Lisa Lucks Mendel:

And I think also setting up expectations for the patient. So if you show them here's where you are unaided and then here's where you are aided, and show that you do see improvement, it's not perfect. You're still going to struggle in background noise. We're not quite there yet to get you 100% there, but it's going to show improvement.

Dr. Douglas L. Beck:

That's great. All right, I want to spend a few minutes on scoring monosyllabic word recognition tests, because I think, clinically, we have a protocol that may not serve the best interest of the patient. Dr. Mendel, share your thoughts on that.

Dr. Lisa Lucks Mendel:

Yeah. So most people will score using what we call whole-word scoring. You say, say the word yard, the patient says yar, and they get it wrong. They get no credit for the phonemes that they did get correct. I'm a proponent of phonemic scoring, where, if you took a 50-word list for the NU-6 or W-22s, there are 150 phonemes. So if I only miss one phoneme per word, I've done pretty well: I've gotten about two-thirds of this test correct. It gives me credit for what I did get correct and doesn't penalize the whole word for the part I didn't get correct.

Dr. Douglas L. Beck:

I love that, and that brings me back, again going back 40 years, to when we were trying to evaluate the sounds that a cochlear implant patient was perceiving. We actually scored them all phonemically. If you said the word cat and the patient said catch: cat is three phonemes, c-a-t, and what they said is c-a-ch. So they got two out of three right. Rather than marking that wrong, we gave them credit for the two out of three that they got correct, and that told us a lot more about what they were actually perceiving than whether they got the word correct or not.

Dr. Lisa Lucks Mendel:

Exactly. And now back in those days we couldn't do a lot of tweaking of the implant, but today we can. So, if an audiologist is getting that kind of information and that sounds like a ch, then they can make adjustments literally in the electrode array or in the mapping to improve the perception of that particular frequency range, to hopefully improve the ability to hear the difference between ch and sh and whatever.
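The phoneme-level credit idea above, the word cat heard as catch scoring two phonemes out of three, can be sketched as follows. The `phoneme_score` helper and the phoneme spellings are illustrative assumptions for this example, not a standard clinical scoring tool.

```python
def phoneme_score(targets, responses):
    """Score at the phoneme level: credit every matched phoneme
    instead of marking the whole word right or wrong."""
    correct = total = 0
    for target, response in zip(targets, responses):
        total += len(target)
        # pad a short response so missing phonemes count as errors
        padded = response + [None] * (len(target) - len(response))
        correct += sum(t == r for t, r in zip(target, padded))
    return correct / total

# "cat" (/k/ /ae/ /t/) heard as "catch" (/k/ /ae/ /ch/): 2 of 3 phonemes credited,
# where whole-word scoring would have given zero credit.
print(phoneme_score([["k", "ae", "t"]], [["k", "ae", "ch"]]))  # → 0.666...
```

Across a full 50-word monosyllabic list, this yields a score out of roughly 150 phonemes rather than 50 words, and the error pattern shows which sounds to target when reprogramming.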

Dr. Douglas L. Beck:

Yeah, and I think that works as well with hearing aids. You know, we want to know where the errors are, because that tells us where to apply corrections, where to reprogram.

Dr. Lisa Lucks Mendel:

Absolutely. And the other point to raise about scoring, Doug, that I think is really important, is reminding people that when you're doing a percent-correct score on a 50-item or even a 25-item monosyllabic word list and you get, let's say, 80%, and next year the patient comes in and gets 88%, you say, oh good, your speech perception's gotten better. Well, again, we have to go back to some of the germinal information. We know what is considered a real improvement in a percent-correct score on a 50-item word list: you go back to Thornton and Raffin's data, and you look at Carney and Schlauch, who updated it back in 97, I believe. We have to look at those critical differences. Is that improvement truly an improvement? Most of the time it's not.

Dr. Douglas L. Beck:

This is so important, I'm glad you mentioned this. Thornton and Raffin: I think the original publication, and this is off the top of my head so it's probably wrong, was JSHR, and I think it was in 77 or 78, something like that. And the magnitude of these critical differences is huge. If you did a 25-word list and the first score was 92, and then you checked them a year later and the score was 64, those are actually the same: they are within the statistical probability of each other at a 0.05 alpha level. So thank you, that's brilliant.
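The point about binomial variability on short word lists can be illustrated with a quick calculation. This sketch treats each word as an independent binomial trial and computes an exact two-sided critical range; it is an approximation for illustration, not a reproduction of the published Thornton-Raffin tables, and `critical_range` is a made-up helper name.

```python
from math import comb

def critical_range(score_pct, n, alpha=0.05):
    """Return the range of retest scores (in %) on an n-word list that are
    NOT significantly different from a first score of score_pct,
    treating each word as an independent binomial trial."""
    p = score_pct / 100

    def two_sided(k):
        # exact two-sided binomial probability for a retest score of k out of n
        lower = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))
        upper = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
        return min(1.0, 2 * min(lower, upper))

    same = [100 * k / n for k in range(n + 1) if two_sided(k) > alpha]
    return min(same), max(same)

# The same 80% first score is compatible with a much wider band of retest
# scores on a 25-word list than on a full 50-word list.
print(critical_range(80, 25))
print(critical_range(80, 50))
```

The band is markedly wider for the 25-word list, which echoes the 92-versus-64 example: on short lists, large score swings can still fall inside chance variation.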

Dr. Douglas L. Beck:

And the point would be, if you did the full 50-word list, the variability is much, much less, and then you could say there's been a statistically significant change. When somebody goes from 100% to 88%, that's the same; there's no statistical difference between those two. And I know that's hard for people in clinic. Why? Well, because it's an imperfect test; it's not a very good test. So before I let you go, I'd like you to discuss two things: monitored live voice versus digitized recorded speech samples, and the time involved to do monitored live voice versus digitized recorded speech testing.

Dr. Lisa Lucks Mendel:

Happy to do that. We have published a couple of studies looking at monitored live voice versus recorded. Back in 2011, I worked with one of my students and we compared the time it took to do 50-item word lists using the CD recordings, with four-second and two-second interstimulus intervals, to doing it with monitored live voice, and those data showed that monitored live voice is statistically faster. But clinically, our opinion is that it's not a clinically significant difference, and the reason for that is, number one, the recorded version takes less than a minute longer. But, more importantly, we know about the variability of monitored live voice: people's accents, the speed of their speech, intonation patterns, all of that really affects how well they present those words, and if you were to compare clinic to clinic or year to year, you would get different results because of those variables. We just did a study that came out in AJA in December, and we called it "Recorded Word Recognition Testing Is Worth the Time."

Dr. Lisa Lucks Mendel:

Again, a student project that I did with two students, Allie Austin and Catherine Ladner, where we looked at, okay, now we can do MP3 recordings through the audiometer. That's going to be faster for sure, right? You just push the button: correct, it presents the next word; incorrect, it presents the next word; no interstimulus interval. It's going to be faster. And we found out that it's not faster, that monitored live voice is still faster than that. We were really kind of surprised by those results. But a couple of things happened. One, these are still the same recordings on the MP3.

Dr. Douglas L. Beck:

Right, they've just been digitized differently. Yeah.

Dr. Lisa Lucks Mendel:

Correct. So the speed of the speech that's being presented is exactly the same, right, but you have shortened the interstimulus interval and in monitored live voice, you can vary your speed as much as you want.

Dr. Douglas L. Beck:

Yeah, but here's the thing: over the 50 words presented, monitored live voice versus recorded, there was less than one minute difference total, right? Exactly, yeah. So it's not like the digitized version took 45 minutes and monitored live voice was one minute. No, over the whole 50-word list, the difference was less than a minute. What was it, 50 seconds or something?

Dr. Lisa Lucks Mendel:

48 seconds, yeah. So I know time is money in many clinics, but when you want to do best practice, it's just the thing we've got to do, and these MP3 recordings are available. You can plug them into your audiometer and you're good. The other thing I want to mention is that the average score using the computer MP3 presentation of the words was significantly better for those who had typical hearing compared to those who had hearing loss, so it differentiated the two groups. But that only happened for the computer-assisted presentation, not for MLV, not monitored live voice.

Dr. Douglas L. Beck:

Interesting. Why do you think that was?

Dr. Lisa Lucks Mendel:

Well, that tells me that MLV is not as capable of differentiating performance between adults with normal hearing and those with hearing loss. So really, MLV is not diagnostically sensitive to those with hearing loss. Maybe so with normal hearing, but not so with hearing loss.

Dr. Douglas L. Beck:

So that alone is a good reason to use recordings. There might be some sort of, I don't want to say placebo effect, but some sort of bias from the person running the test. If you see the person struggling with MLV, maybe you do slow it down, maybe you do say it a little bit louder into the mic. You do things like that because you're there, it's a live situation, and you're trying to help. And it's hard to be perfectly objective and not smile and not be involved, because you're talking with a patient who needs your help.

Dr. Lisa Lucks Mendel:

Well, we did measure how people with typical hearing and hearing loss responded, and it does take longer for those who have hearing loss, for exactly those reasons. We probably slow down a little; they need more processing time to provide the answer. So yeah, I would agree with that.

Dr. Douglas L. Beck:

All right, so bottom line, just your thoughts on one idea before I let you go, Lisa. One last issue: do you anticipate that anytime in the near future we will have speech in noise actually replacing speech in quiet in the diagnostic battery?

Dr. Lisa Lucks Mendel:

I don't know that that will happen. I do think there is some valid information we get from speech in quiet, but more importantly, we need to add speech in noise to the routine. There's more information we can find if we assess what they're doing in background noise, but we have to do it with ecologically valid, standardized tests that really do tell us how people are doing in a real-world environment.

Dr. Douglas L. Beck:

Yeah, I have to agree, and this is not a new idea. The first time I read this was 1970, Ray Carhart and Tillman. Carhart and Tillman said speech in noise should be part of every audiologic assessment, and that was, oh, 55 years ago, and we haven't done it yet. All right, Dr. Lisa Lucks Mendel, it is a joy to see you again and to work with you, and I wish for you a joyful 2025. Thank you so much for participating.

Dr. Lisa Lucks Mendel:

Appreciate it. Thanks for having me on.
