*Kate Lorig, Dr. P.H., is the Director of the Stanford Patient Education Research Center and Professor of Medicine in the Stanford School of Medicine. She has more than 30 years’ experience in developing, evaluating, and implementing self-management programs for people with chronic diseases. Kate is a member of the Evidence-Based Leadership Council.
As health care payers and providers increasingly look to partner with community-based organizations (CBOs) to improve the care of their patients and lower their costs, the evidence-based health and wellness promotion programs (EBPs) CBOs deliver represent a promising vehicle to achieve the integration of social and medical services that is at the heart of much system reform.
But implementing and marketing EBPs is not without its challenges. Health care partners want hard proof that programs and services will truly impact the measures they care about. At the same time, EBP providers need to ask themselves important questions about how, why, and when they measure their programs' impact.
The hallmark of evidence-based behavioral interventions is that there is evidence. This means that they have been shown to be effective in one or more trials. In some cases, the research literature supporting an intervention's efficacy stretches to 100 or more studies. Nevertheless, every funder and every agency seems intent on measuring outcomes for the specific programs they are supporting. Somehow, it is expected that the results will be different for "my population" or "my city or state." We would never make these demands for a medication approved by the Food and Drug Administration. One might say, "So why not measure outcomes? What can it hurt?" Unfortunately, it can hurt. Several harms may result.
First, measuring outcomes is expensive, at least when it is done correctly. Every dollar spent on measuring outcomes is a dollar not spent on providing services. Second, data collection may drive away the very people you wish to serve. Some individuals may not want to give data, and others may not be able to read and write English. Third, after all your effort you may end up with data that are not useful. It is easy enough to collect data at the beginning of an intervention; getting follow-up data and chasing down missing data are much harder. These activities take dedicated, trained staff and can be very expensive. If fewer than about 70% of participants supply outcome data, or if your data are less than 90-95% complete, then any findings you have will be highly questionable. Finally, measuring outcomes takes effort and is a hassle, especially if you are not going to learn anything new.
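As a rough illustration of those two thresholds, here is a minimal sketch of how an agency might check its own numbers. The figures are hypothetical, not from any actual program:

```python
# Hypothetical program numbers, for illustration only.
enrolled = 200            # people who started the program
returned_followup = 130   # people who supplied any follow-up data
answered_items = 1105     # questionnaire items actually answered
total_items = returned_followup * 10  # assuming a 10-item questionnaire

response_rate = returned_followup / enrolled   # 130 / 200 = 0.65
completeness = answered_items / total_items    # 1105 / 1300 = 0.85

# Against the roughly 70% response and 90-95% completeness bars:
if response_rate < 0.70 or completeness < 0.90:
    print("Findings from these data will be highly questionable.")
```

In this made-up example the program clears neither bar, so any before-and-after comparison built on the data would be suspect.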
So what to do? First, find out what is known about your intervention and your outcome of interest. Do we know if the Chronic Disease Self-Management Program (CDSMP) or EnhanceFitness prevents falls? Do we know whether treating the depression of people who enter our programs depressed will also reduce their health care utilization? It is relatively easy to find this information. Go to Google Scholar and enter search terms relevant to your EBP, such as "EnhanceFitness and depression". Peer-reviewed articles will pop up. You will then have to do a little reading to dig into the research base underlying your program. Even if you cannot access the whole article, the abstract will give you the high-level results. A few hours of upfront research is a lot easier than spending hundreds of hours demonstrating something that is already well known.
Next, decide what you really want to know. Then ask the important question: "Who cares?" If you do not have an easy answer, start again. In the real world, the reasons to measure outcomes come down to three questions:
- Are we hurting people?
- Will the outcomes change the way we do things?
- Might the outcomes influence policy?
Once you have decided, the next question is how to measure. There are loads of scales (one or more questions that measure something relevant to the EBP, like fall risk, pain, or depression). You will find many of them at http://patienteducation.stanford.edu/research/. A couple of words of caution when using scales:
- Do not try to make up your own questions—leave this to the experts.
- Do not pick and choose questions.
- Use a whole scale, not just part of one.
- Do not change questions or scales without talking to the person who wrote the scale. It might seem easier to ask people to rate something from 1 to 5 instead of 1 to 10, but then each one-point step represents a 20% change, so respondents must shift their rating by at least 20% before any difference can be measured.
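The arithmetic behind that last caution can be made concrete. Following the 20% figure above, one point of change expressed as a fraction of the number of scale points looks like this (a sketch, not part of any particular scale's documentation):

```python
def minimum_detectable_change(scale_points):
    """One point of change as a fraction of the number of response options."""
    return 1 / scale_points

print(minimum_detectable_change(5))   # 0.2 -> a 20% change on a 1-5 scale
print(minimum_detectable_change(10))  # 0.1 -> a 10% change on a 1-10 scale
```

Halving the number of response options doubles the smallest change a respondent can express, which is exactly why shortening a validated scale can hide real but modest improvements.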
Even though doing additional outcome measurement for EBPs may not always be the best use of resources, there are many important questions to ask: How do people find out about our program? Why do people drop out? What does it take to retain leaders/instructors? Does one way of wording an advertisement work better than another? Which times of day or days of the week are best for classes? Do any of your leader-screening practices predict leaders' success or retention?
The bottom line: Do not measure just to measure. Measure to achieve something.