3 Types of Outcomes Assessment from Other Industries

By Derek Dietze, MA, FACEHP, CHCP, President, Improve CME LLC

Alliance National Learning Competency 3.1
Use evaluation and outcomes data to assess and determine:

  1. the educational outcomes/results of the activities/ interventions on participants’ attitudes, knowledge levels, skills, performance and/or patient outcomes,
  2. unmet learning needs, and
  3. the quality and success of activities/interventions

by…

  1. Identifying the level(s) of outcome associated with objectives and expected results of the activity/intervention.
  2. Selecting assessment methods and tools that are appropriate for the goals and objectives of the activity/intervention, based on the practice setting and resources (e.g., time, expertise, staff, budget, stakeholder expectations).
  3. Analyzing assessment data in order to draw conclusions about the effectiveness of the activity/Intervention, based on expected results.
  4. Analyzing assessment data in order to identify learning needs that future activities/interventions can address.

Searching inside the CEHP community for innovators, and outside it for those who assess adult learning and behavior change, can yield outcomes assessment methods that provide a fresh approach and interesting results. Some forms of assessment that may be newer to CEHP can help solve outcomes challenges we face in daily practice:

  • the need for simple implementation/analysis requirements and resources,
  • lower outcomes budgets,
  • new and exciting forms of assessment, and
  • the demand for more objective assessment of performance change and patient outcomes.

What follows are three types of outcomes assessment that you may not have seen before or seriously considered for implementation in your practice of CEHP. This list includes one simple, less expensive method, and two methods that require additional resources, training and expertise.

[Figure 1: Almanac_Feb17_TypesofOutcomes_Figure1.PNG]

Net Promoter Score®, or NPS®
NPS is a simple question with a 10-point scale that measures customer experience/satisfaction/perception of a brand. It is very likely that you have answered this question several times as a consumer, perhaps without knowing that it was the NPS question. Retail stores, hotels, utility companies, and many other businesses and service industries worldwide use this one question, sometimes combined with other very simple questions, to gauge and improve customer satisfaction. They also use it as a standardized measure to compare their service to that of others within the same industry.

The question is worded as follows: Using a 0-10 scale (0 = Not at all likely, 5 = Neutral, 10 = Extremely likely), how likely is it that you would recommend [brand] to a friend or colleague?

Respondents are grouped as follows:

  • Promoters (score 9-10) are loyal enthusiasts who will keep buying and refer others, fueling growth.
  • Passives (score 7-8) are satisfied but unenthusiastic customers who are vulnerable to competitive offerings.
  • Detractors (score 0-6) are unhappy customers who can damage your brand and impede growth through negative word-of-mouth.

Subtracting the percentage of detractors from the percentage of promoters yields the Net Promoter Score, which can range from a low of -100 (if every customer is a detractor) to a high of 100 (if every customer is a promoter).
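The arithmetic above can be sketched in a few lines of code. This is a minimal illustration of the standard NPS calculation, not part of any particular survey platform; the sample ratings are invented for the example.

```python
def nps(scores):
    """Compute the Net Promoter Score from a list of 0-10 ratings.

    Promoters score 9-10, passives 7-8, detractors 0-6.
    Returns a value between -100 (all detractors) and 100 (all promoters).
    """
    if not scores:
        raise ValueError("no responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# Hypothetical activity evaluation: 6 promoters, 2 passives, 2 detractors
ratings = [10, 9, 9, 10, 9, 9, 8, 7, 5, 3]
print(nps(ratings))  # 40.0, i.e., 60% promoters minus 20% detractors
```

Note that passives lower the score only indirectly: they count toward the denominator but toward neither the promoter nor the detractor percentage.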

The question is commonly used within the service industry at two points of contact: 1) immediately following a specific interaction with a customer (after a purchase or after a service is provided); and 2) separate from a transaction in a survey, to assess overall brand satisfaction/loyalty.

So how could this be used in CEHP? Here are some ideas:

  • Ask the question in every activity evaluation form about that specific activity. Develop a mean NPS for all your CEHP activities, and work to improve the score over time.
  • If you have a CEHP “brand” that you promote within a specific target audience in the live or online setting, survey your learners (customers) to determine their brand loyalty. The question that is typically combined with the NPS to help you improve customer satisfaction is (for those with a rating of 8 or lower), “What changes would [company/organization/brand] have to make for you to give it a higher rating?” If you ask this question, you need to be prepared to address the concerns expressed, and the quicker you do that, the better.
  • If you have a good NPS, use it to promote your CEHP brand or activities.
  • If you have a poor NPS, implement a plan to make improvements, and re-survey to test for that improvement. Many organizations use NPS as part of a regular, ongoing system designed to improve the quality of their products and services.
  • There are currently no benchmarks within the CEHP community, but there are benchmarks in a vast variety of other industries [https://www.netpromoter.com/compare]. How could we work together to develop an NPS benchmark in CEHP?

Confidence-Based Learning (CBL) Assessment
CBL measures the correctness of a learner’s knowledge combined with their confidence in that knowledge. It is designed to increase retention and minimize the effects of guessing, which can skew the results of traditional single-score assessments.2 In some online systems, the measurement facilitates the creation of a customized learning plan for each learner that helps them progress to total mastery (i.e., having complete confidence in their answers to 100 percent of the questions). Typically, the learner answers a multiple-choice knowledge question and then rates their confidence in the answer they selected on a three-point scale (3 = I am sure, 2 = I am partially sure, 1 = I am not sure; or 3 = Positive, 2 = Fairly sure, 1 = Not sure). Thus, responses fall into four categories, as shown in Figure 2.3
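The correctness-by-confidence classification can be sketched as follows. The category labels here are illustrative stand-ins (the article's Figure 2 defines the actual categories), and the sample responses are invented:

```python
from collections import Counter

def cbl_category(correct, confidence):
    """Classify a CBL response by correctness and a 1-3 confidence rating.

    confidence: 3 = I am sure, 2 = I am partially sure, 1 = I am not sure.
    Category names are illustrative, not the article's official labels.
    """
    sure = confidence == 3
    if correct and sure:
        return "correct and confident"
    if correct and not sure:
        return "correct but unsure"
    if not correct and sure:
        return "incorrect but confident"  # the highest-risk group
    return "incorrect and unsure"

# Hypothetical pre- and post-activity responses: (is_correct, confidence)
pre  = [(True, 3), (False, 3), (False, 1), (True, 2)]
post = [(True, 3), (True, 3), (True, 2), (False, 1)]
print(Counter(cbl_category(c, conf) for c, conf in pre))
print(Counter(cbl_category(c, conf) for c, conf in post))
```

Comparing the two tallies shows the pattern a successful activity should produce: the "correct and confident" count rises while "incorrect but confident" falls.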

CBL had its genesis in the 1930s4 and has been used in high-stakes testing for professionals such as Navy pilots, air traffic controllers and law enforcement officers. James Bruno, Ph.D., a professor of education at UCLA, further studied the connection between knowledge, confidence, retention and the quality of knowledge while consulting for the North Atlantic Treaty Organization and the RAND Corporation in the 1960s.2 Since that time, and with the advent of computer and online systems, CBL has progressed into more sophisticated forms of assessment. It gained the interest of the CEHP community about 10 years ago, and some articles were published on its utility.5 Recently, it has been applied in live settings in a game-type format using tablets, and online for enduring activities.6

In a CEHP setting, CBL can be applied to pre- and post-activity questions, with a positive outcome being an increase in the percentage of respondents who are correct and highly confident in their answers, and a decrease in the percentage who are incorrect yet highly confident. Those who answer a case study question incorrectly, are highly confident in that answer and also see patients clearly represent a group at high risk of making medical errors and harming patients. Conversely, those who are highly confident in the correct answer are more likely to take that action in practice. Results can also provide tremendous insight into knowledge topics that need further emphasis and confidence building.

[Figure 2: Almanac_Feb17_TypesofOutcomes_Figure2.PNG]

Analysis of Cost-Savings from CEHP
The adoption of the Triple Aim (the simultaneous pursuit of improving the patient experience of care, improving the health of populations and reducing the per capita cost of healthcare) by the Centers for Medicare & Medicaid Services (CMS) and its incorporation into the Agency for Healthcare Research and Quality’s (AHRQ) “National Strategy for Quality Improvement in Health Care” underscore the importance of cost-savings and cost-effectiveness analyses in healthcare. Also, as the healthcare delivery system in the United States transitions from volume- to value-based care and reimbursement, there is increased interest in understanding the cost savings and cost effectiveness of CME activities and initiatives. Mazmanian7 notes this increasing interest in the economic impact of CEHP, yet few studies have actually conducted these types of analyses.8,9

Although it is optimal to assess economic outcomes at the individual patient level, such as in a closed system where costs and health records would be available, CEHP initiatives that are national in scope and include participants from many different practice settings do not allow for that kind of data capture.10,11 Without these data, reasonable estimates of cost savings can be made using modeling.12

While fairly new to many in the CME community, cost-effectiveness analyses with modeling have been routinely used and respected within the healthcare research community for many years. Economic impact outcomes models, such as the Outcomes Impact Analysis (OIA) model for CME developed by Ravyn et al and published in JCEHP in 201410 and again in CE Measure in 201511, have encouraged the CEHP community to experiment with and conduct more of these types of analyses.

The OIA model approximates costs averted as a result of the incorporation of newly learned (or increased implementation of) behaviors into clinical practice as a result of participation in CEHP.10,11 Using this model and modeling software,12 an analysis can be conducted to estimate costs avoided by estimated participant implementation of specific performance changes recommended in CME activity content. The modeling software is widely accepted by the scientific community and regulatory agencies as a standard modeling platform.12

The methodology used to conduct the cost-savings analysis begins with building visual treatment decision pathways in the modeling software. The visual model mirrors disease progression, treatment options, adverse events, etc. over a specified period of time. Building the model requires information such as valid cost data, disease prevalence, probabilities of treatment success and percent receiving treatment. Generally, this information is available in the published literature (e.g., from cost-effectiveness studies).

Once the model is built, the primary input data gathered from each CME activity (the number and percentage of participants committing to specific changes in performance) is adjusted according to the literature on planned versus actual performance change and entered. As an example, Ravyn et al5 used a conservative estimate of 20 percent actual performance change when commitment-to-change was self-reported by participants in the 50-80 percent range. The software then uses the visual structure to automatically generate the algorithms required to calculate the expected value of each treatment strategy. Cost savings in dollars are then calculated from this result.
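The expected-value arithmetic at the heart of this step can be illustrated with a stripped-down sketch. The OIA model is built in dedicated modeling software; the pathway probabilities, costs, participant counts and patients-per-participant figures below are entirely hypothetical, and only the 20 percent actual-change adjustment comes from the article:

```python
def expected_cost(branches):
    """Expected cost of a treatment strategy: sum of probability * cost
    over each branch of the decision pathway."""
    assert abs(sum(p for p, _ in branches) - 1.0) < 1e-9, \
        "branch probabilities must sum to 1"
    return sum(p * cost for p, cost in branches)

# Hypothetical pathways as (probability, cost per patient):
current_practice = [(0.70, 1_000), (0.30, 8_000)]   # 30% costly complications
with_new_behavior = [(0.85, 1_200), (0.15, 8_000)]  # fewer complications

per_patient_saving = expected_cost(current_practice) - expected_cost(with_new_behavior)

# Adjust self-reported commitment to change down to a conservative
# 20 percent actual-change estimate, as in the article.
participants = 500              # hypothetical activity size
patients_per_participant = 40   # hypothetical annual caseload
actual_change = 0.20
savings = per_patient_saving * participants * patients_per_participant * actual_change
print(round(savings))  # estimated dollars saved under these assumptions
```

The point of the sketch is the structure, not the numbers: each branch contributes probability times cost, and the modeled saving scales with how many clinicians actually change behavior, which is why the conservative actual-change adjustment matters so much to the final estimate.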

Thus, using the OIA model, estimates of cost savings are based on commitments to change in performance reported by clinician learners, costs of care treatment paths (i.e., with/without implementation of the guidelines or a specific performance change) from the perspective of a healthcare payer, and probabilities of treatment success from the published literature.

Your experimentation with each of the outcomes assessment methods described above may depend on the assessment challenges you need to address, your skill levels and your resources. However, a great starting point is to further investigate these methods and think through how they might help enhance your practice of CEHP.

References

  1. Description and graphic from netpromoter.com/know/, accessed January 16, 2017.
  2. Confidence-based learning. https://en.wikipedia.org/wiki/Confidence-based_learning/, accessed January 16, 2017.
  3. Mayer D. University of Illinois at Chicago, Case Study: A Comparison of Confidence-Based Learning™ Methodology to Traditional Learning Methodologies on the Patient Safety Knowledge of First-Year Medical Students. Study funded by Transparent Learning, Inc.
  4. Hunt D. A method of correcting for guessing in true-false tests and empirical evidence in support of it. Journal of Social Psychology, 3 (1932): 359-362.
  5. Cash B, Mitchner NA, Ravyn D. Confidence-based learning CME: overcoming barriers in irritable bowel syndrome with constipation. J Contin Educ Health Prof. 2011 Summer;31(3):157-64. doi: 10.1002/chp.20121.
  6. Poster abstract accepted for the 2017 SACME Annual Meeting, Scottsdale, Arizona, May 17-20, 2017. LaTemple D, Simmons J, Dietze D. Guess Not: Confidence Measurement as a Means of Validating Knowledge Assessment.
  7. Mazmanian PE. Continuing medical education costs and benefits: lessons for competing in a changing health care economy. J Contin Educ Health Prof. 2009;29:133-4.
  8. Trogdon JG, Allaire BT, Egan BM, Lackland DT, Masters D. Training providers in hypertension guidelines: cost-effectiveness evaluation of a continuing medical education program in South Carolina. Am Heart J. 2011;162:786-93.e1.
  9. Hogg W, Baskerville N, Lemelin J. Cost savings associated with improving appropriate and reducing inappropriate preventive care: cost-consequences analysis. BMC Health Serv Res. 2005;5:20.
  10. Ravyn D et al. Estimating health care cost savings from an educational activity to prevent bleeding-related complications: the Outcomes Impact Analysis model. J Contin Educ Health Prof. 2014;34 Suppl 1:S40-45.
  11. Ravyn D et al. Educational and Economic Outcomes of an Intervention to Improve Early Testing and Treatment of HIV. CE Measure. 2015;9:20-25.
  12. See this link for examples of numerous studies of cost-effectiveness using the TreeAge Pro modeling software: http://www.ncbi.nlm.nih.gov/pu/
