Policy in Plainer English
Hunger Vital Sign & Risk Screening Tools
June 07, 2022 | Season 5, Episode 2 | Helen Labun

Find all supporting materials at the Hunger Vital Sign explainer series website.

This episode features an interview with Richard Sheward, Director of Innovative Partnerships at Children's HealthWatch.

Citation for the Hunger Vital Sign tool and link to the original research:

Hager, E. R., Quigg, A. M., Black, M. M., Coleman, S. M., Heeren, T., Rose-Jacobs, R., Cook, J. T., Ettinger de Cuba, S. E., Casey, P. H., Chilton, M., Cutts, D. B., Meyers, A. F., & Frank, D. A. (2010). Development and Validity of a 2-Item Screen to Identify Families at Risk for Food Insecurity. Pediatrics, 126(1), 26-32. doi:10.1542/peds.2009-3146.

Audio Editing and Post-Production Provided By Evergreen Audio




LABUN:

Welcome to the second of our short explainers for the Hunger Vital Sign tool. Last episode we went over why Children’s HealthWatch wanted a screening tool for food insecurity. This episode covers how they defined what makes a good screening tool. I’m your host, Helen Labun. And to help with the explanations, we have a guest expert from the organization that created the Hunger Vital Sign.

SHEWARD:

I'm Richard Sheward, Director of Innovative Partnerships at Children's HealthWatch.


LABUN:

In the late 1990s and early 2000s, Children’s HealthWatch saw that children and families were suffering poor health due to a range of factors that weren’t apparent in a typical medical exam, including the effects of food insecurity. Their research team partnered with Doctors Erin Hager and Anna Quigg to develop a tool that would let health care providers identify patients with food security risks. This tool needed to build a bridge between what researchers were seeing in their detailed studies and what health practice staff could quickly identify in the course of a short visit. We can phrase this in a way that applies to all screening tools:


SHEWARD:

So, the purpose of a screening tool is essentially to identify some unrecognized condition through the application of a questionnaire or a test or an examination, some procedure that’s gonna identify this condition that you can’t see. They’re designed to sort out individuals who probably don’t have that condition from those who probably do have that condition.


LABUN:

In the words of researcher John Cook from last episode, screeners are making something that’s invisible, visible. 


SHEWARD:

And it's important to differentiate that a screening tool is not meant to be a diagnostic tool. That's a whole separate ballgame.


LABUN:

We’ll get into this point in depth in later episodes, but it’s worth emphasizing here: a screening tool is only ever a first step. It’s flagging “hey, there’s something here you might want to pay attention to.”

SHEWARD:

There are a few criteria for what makes a screening tool successful. It has to be reliable, meaning that you’ll get the same results when that screener is repeated in the same target individuals or the same settings. It needs to have a certain level of acceptability, meaning that the tool is not embarrassing or socially unacceptable, that it’s acceptable to the individual completing it. And then most importantly, it has to be valid, meaning that you’re able to distinguish those that do have the condition from those that probably do not have that condition, so that it’s measuring what it’s intended to measure.


LABUN:

We’re going to start with what makes a screening tool valid.


We’ve already defined this as a form of alert system, and like any other kind of alert system, there are attributes that make it more or less useful. Alarm clocks are easy: they work well when they go off at the time you set them for. Other alarms are less exact. When I’m baking and my timer goes off, I check the cookies or bread or whatever is in the oven for done-ness. The timer is good if it goes off when I set it for, but a few other things also need to be calibrated correctly for that to correspond to when I need to take my baking project out of the oven. I need to check for actual done-ness myself, and if the alert system works well, then I don’t need to guess too many more times, or burn too many loaves, to get it right. Or, think of another common alert system: the pings from our phones. Calibrated correctly, the phone calls our attention to the important meeting we were in danger of missing; calibrated incorrectly, our days become a cacophony of pings until we learn to ignore them . . . and possibly miss the meeting.


There are a few technical terms that let us measure these dimensions of a good alert system when it comes to clinical screening tools.    

SHEWARD:  

For a screening tool to be valid, it needs to have high sensitivity and high specificity. What we mean when we say sensitivity is the ability of the tool to truly identify those who have the condition. In the case of the Hunger Vital Sign, high sensitivity means that those who actually have food insecurity screen positive with the tool.


LABUN:

Think of a time your alarm clock didn’t wake you up. Now imagine that wasn’t user error: your alarm clock simply had a built-in margin of error where 30% of the time it didn’t sound. You wouldn’t want that. Similarly, if the alert doesn’t go off when the bread needs to be checked or the meeting is about to happen, it isn’t a good alert. You want your tools to be sensitive to the condition they’re supposed to be alerting you about.

SHEWARD:

Specificity is the ability of the tool to correctly identify those who do not have the condition. In the case of the Hunger Vital Sign, high specificity means that those who don’t have food insecurity screen negative with the tool.


LABUN:

Specificity is the alarm that didn’t ring. Do we congratulate our alarm clocks when they successfully realize that 3 am is NOT the time we intended to wake up? No. Do we congratulate our phones when they refrain from pinging us with trivial alerts? Maybe. The importance of specificity is easiest to see in a disease analogy: if a test says that we don’t have COVID-19, or the flu, or a broken foot, we want it to be correct. Specificity means a negative result is actually negative. Sensitivity means a positive case prompts a positive result. Smartphones are overly sensitive; alarm clocks are highly specific.
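
To put numbers on those two definitions, here is a minimal sketch in Python. The counts are made up for illustration; they are not Children’s HealthWatch data.

```python
# Hypothetical 2x2 table of screener results vs. true food insecurity
# status. These counts are invented for illustration only.
true_positives = 90    # screened positive, actually food insecure
false_negatives = 10   # screened negative, actually food insecure
true_negatives = 170   # screened negative, actually food secure
false_positives = 30   # screened positive, actually food secure

# Sensitivity: of everyone who truly has the condition, what fraction
# does the screener flag? (The alarm that rings when it should.)
sensitivity = true_positives / (true_positives + false_negatives)

# Specificity: of everyone who truly does not have the condition, what
# fraction does the screener leave unflagged? (The alarm that stays quiet.)
specificity = true_negatives / (true_negatives + false_positives)

print(f"sensitivity = {sensitivity:.0%}")  # 90%
print(f"specificity = {specificity:.0%}")  # 85%
```

By this arithmetic, the faulty alarm clock above that stays silent 30% of the time has a sensitivity of only 70%.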


Now, an alarm clock is easy because it’s either set for the correct time or it isn’t. Food insecurity in the context of health care isn’t as straightforward as recognizing it’s 6 am. The oven timer is the better analogy here. Its alarm is supposed to alert us to check whether our loaf of bread is done. That done-ness depends on more than just how much time has elapsed; it depends on multiple variables: temperature calibration, air convection within the oven, rack position, and so on. Getting all those variables exactly calibrated to the conditions of the recipe developer would turn every baking session into a PhD-level laboratory test. Instead, home bakers do something much simpler: we subscribe to an unspoken theory that the time when our oven timer goes off not only reflects the number of minutes we set it for, but also is closely related to all the underlying factors that determined when the recipe developer’s baked goods were done.


Putting this in statistical terms: correspondence to a theoretically related set of underlying variables is convergent validity. My personal oven displays poor convergent validity. I need to set everything to convection, increase the temperature, and check far more frequently at the end of baking than I should, to account for a large margin of error; my oven would not pass the Children’s HealthWatch test.


Many different variables define health outcomes. The Hunger Vital Sign tool is for food insecurity within a health care context, so those health outcomes matter a lot. That complicates our task. Now we don’t only want to know how well the Hunger Vital Sign matches food insecurity as measured by the USDA’s survey, called the HFSS; we also need to know whether that corresponds to health risks. So, we need to check convergent validity: we know the Hunger Vital Sign reflects food insecurity (in our oven analogy, we know the timer is working as it should for measuring minutes passed), and we are now checking whether that also reflects a risk of poor health outcomes (or, whether the time for baking in fact reflects the loaf of bread being done).


And when I say ‘we’ need to check, I mean the researchers. It’s more of a ‘they’ kind of ‘we’. 

SHEWARD:

Convergent validity is a reference to how close the tool is to other variables or other measures of the same construct. In the case of the Hunger Vital Sign, we were able to demonstrate convergent validity by testing the Hunger Vital Sign against the gold standard, the USDA Household Food Security Survey, comparing the results of both the Hunger Vital Sign and the HFSS to the adjusted odds of certain health outcomes. The health outcomes that we looked at were: the child’s health status as reported by their caregiver, the number of hospitalizations the child experienced, whether the child was underweight or overweight, whether the child screened positive for developmental risk, the caregiver’s self-reported health, and the caregiver’s self-reported depression. And we found that families who screened positive with the Hunger Vital Sign had very similar adjusted odds of experiencing the same health outcomes as those who screened positive with the HFSS, the gold standard USDA tool, demonstrating convergent validity.


LABUN:

Last episode we talked about the research team that had done 30,000 surveys over seven years in five different cities, with children and caregivers in different health care settings, including primary care and hospitals. The research team also built on the work of the U.S. Department of Agriculture in studying the dimensions of food insecurity. Lots of data. Through sensitivity and specificity, researchers set parameters for how closely the Hunger Vital Sign outcomes matched the results of the standard food insecurity identification tool. Through convergent validity, they checked that this response correlated with the bundle of poor health outcomes for children that the whole project was designed to help avoid.
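
For readers who want to see the shape of that analysis, here is a rough sketch in Python with statsmodels. Everything in it is hypothetical: the variable names, the covariate, and the randomly generated data stand in for the study’s real models and measures.

```python
# Hypothetical sketch of a convergent-validity check: estimate the
# adjusted odds of a health outcome for families flagged by each
# screener, then compare the odds ratios. Not the study's actual code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    # 1 = screened positive on each tool (invented placeholder data)
    "hvs_positive": rng.integers(0, 2, n),
    "hfss_positive": rng.integers(0, 2, n),
    # 1 = caregiver reports child in fair/poor health (invented outcome)
    "poor_health": rng.integers(0, 2, n),
    # Example covariate to "adjust" the odds, standing in for the
    # demographic controls a real model would include
    "child_age": rng.uniform(0, 4, n),
})

for screener in ["hvs_positive", "hfss_positive"]:
    # Logistic regression of the outcome on the screener plus covariates
    model = smf.logit(f"poor_health ~ {screener} + child_age", data=df).fit(disp=False)
    # Exponentiating the coefficient gives the adjusted odds ratio
    odds_ratio = np.exp(model.params[screener])
    print(f"{screener}: adjusted odds ratio = {odds_ratio:.2f}")

# Convergent validity would show up as similar adjusted odds ratios
# for the two screeners across each health outcome of interest.
```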

 

This process is common across screening tools, not just the Hunger Vital Sign.

 

SHEWARD: 

For a screening tool to be valid, it needs to have high sensitivity and high specificity. What we mean when we say sensitivity is the ability of the tool to truly identify those who have the condition. . . . Specificity is the ability of the tool to correctly identify those who do not have the condition. . . . Convergent validity is a reference to how close the tool is to other variables or other measures of the same construct.


LABUN:

The next episode will go one more layer into the details of validation, so if this quick introduction was too quick, never fear. There are written resources linked through the show notes, and we’re about to talk through how validation testing plays out for the Hunger Vital Sign. If you’ve heard all you want or need to know about validating screening tools and want to skip ahead, then go on through to Reliability.


Here's a recap of main points from this episode:


  • Screening tools are designed to make an invisible condition visible. That’s only a first step. Screeners work from probabilities based on large datasets: they flag a concern, but you can’t know how best to respond to that concern in an individual case without learning more.
  • Learning more is both a diagnostics issue and a human interaction one. For the diagnostics, you can set your levels of accuracy using sensitivity and specificity parameters. We’ll get into the trade-offs and the role of additional diagnostic tools in later episodes. For the human interaction, a patient needs to feel comfortable saying they need or want assistance with food security; it’s their choice to work with a health care provider and to guide how their needs can best be met. We’ll tackle that in the second half of the series, when we talk about implementation.
  • An effective screening tool also needs to be reliable: it will return the same results when the screening is repeated. We’ll dedicate more time to that, and how it connects to the concept of standardization, after we’ve finished with the validation part.
  • Finally, researchers wanted the screening tool to be valid as correlated not only with food insecurity as measured by the USDA, but also with the risk of negative health outcomes. This is convergent validity. I know, convergent validity isn’t even helpful as a Scrabble word, but if you read any of the papers linked to tools like the Hunger Vital Sign, you’ll see it. At its most basic, this is the measure of whether the Hunger Vital Sign tool is giving us the information we really want to know.


And again, if you already got the idea from this short explanation, go ahead and skip to Reliability. If not, tune in to the next episode where we go over it one more time. For more resources on the topics covered in this segment, click on the link in the show notes.