Innovation in Outcomes Beyond Pre- and Post-Assessments

By Nimish Mehta, PhD, MBA, CHCP1; Kathleen N. Geissel, PharmD, CHCP1; Tom McKeithen, MBA, BS2; and Martin Warters, MA1. 2Healthcare Performance Consulting, Inc.

You can’t claim that your education has achieved its intended goals and objectives if you don’t measure the outcomes. In their article, Moore et al have provided an excellent framework for aligning outcomes methodologies with the intended goals.1 When education providers begin by stating the end goals of the project, aligning instructional design and outcomes planning at every step of the process, they increase the likelihood of achieving desired results, regardless of the outcomes level targeted (see Figure 1). A good assessment design helps educators achieve the highest level of outcomes possible with minimal burden on participants (to reduce survey fatigue); provide value to all stakeholders, including learners, educators, faculty and supporters; align with the objectives, format and content of the activity; and address multiple domains, including knowledge, skills, behavior, attitude, competence and performance. In this brief article, we highlight some methods to help attain higher-level outcomes beyond the typical pre- and post-methodology.

[Figure 1]

Methods to Enhance Pre- and Post-Outcomes

Most providers are comfortable with using surveys to assess participants prior to and immediately after exposure to education; however, many different methods can enhance this approach to obtain more meaningful results:

  • Control groups: This involves comparing participant responses to those of an external control group of demographically matched healthcare providers who did not participate in the given activity, which increases the validity of unpaired pre- and post-assessments. When individual learners’ pre- and post-responses are compared (i.e., paired), the learners serve as their own controls, eliminating the influence of external factors other than the educational activity on learners’ responses.
  • Effect size: Effect size quantifies the size of the difference between two groups, which provides insight into the practical significance of results and strengthens the validity of conclusions. It also allows you to compare different projects to each other, such as in a meta-analysis (a minimal worked sketch follows this list).
  • Qualitative assessments: One way to enhance outcomes assessments is to add a qualitative component, such as a series of interviews or a focus group. Even a single, carefully crafted, open-ended question can add valuable insight into the effects of an educational activity on participants and on how the activity affects clinical practice. Qualitative methods, by virtue of their open-ended nature, can sometimes yield unanticipated outcomes that add to the overall value of the outcomes assessment.
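To make the paired-comparison and effect-size ideas concrete, the sketch below works through a paired pre- and post-comparison and a Cohen's d effect size (the paired, or d_z, convention). It is a minimal illustration only: the scores, variable names and choice of effect-size formula are assumptions, not data from any actual activity.

```python
# Minimal sketch: paired pre/post comparison with a Cohen's d effect size.
# The scores below are hypothetical; a real project would load matched
# learner responses from its assessment platform.
from statistics import mean, stdev

pre_scores = [55, 60, 48, 72, 65, 58, 50, 63]    # % correct before the activity
post_scores = [70, 72, 60, 80, 78, 66, 62, 75]   # % correct after the activity

# Each learner serves as his or her own control, so work with paired differences.
diffs = [post - pre for pre, post in zip(pre_scores, post_scores)]

mean_change = mean(diffs)
# Paired Cohen's d (d_z): mean difference divided by the SD of the differences.
cohens_d = mean_change / stdev(diffs)

print(f"Mean pre-score:  {mean(pre_scores):.1f}")
print(f"Mean post-score: {mean(post_scores):.1f}")
print(f"Mean change:     {mean_change:.1f} percentage points")
print(f"Effect size (paired Cohen's d): {cohens_d:.2f}")
```

A larger d indicates a larger practical effect, and reporting it alongside significance testing makes the activity easier to compare with other projects or to pool in a meta-analysis.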

Commitment to Change

Qualitative techniques can enhance the use of commitment to change as an outcomes methodology. Adding brief follow-up interviews, for example, provides a depth of information that a test or survey cannot. Such interviews can also elicit specific information about the role of the activity in stimulating change, how changes were implemented in practice, what additional education is needed, what resources were used and what barriers learners faced. While a well-designed quantitative assessment can cover some of these issues, it does not provide the depth that an interview can, and quantitative assessment alone cannot account for unanticipated outcomes. Because this method may be more costly, not all participants need to be interviewed; interviewing a subsample of participants will suffice in most cases.

Chart-stimulated Recall

Chart-stimulated recall is another method that uses interviews to add value to quantitative outcomes data. These interviews typically occur a minimum of six weeks after an activity. With chart-stimulated recall, educators ask a participant to select charts that meet certain criteria related to the topic of the educational activity. During the interview, the participant provides information from the charts that is indicative of clinical performance or patient health. This is the quantitative component. The participant is also asked a series of open-ended questions that assess the impact of the activity on clinical practice. The addition of this qualitative data, gathered simultaneously with the quantitative chart data, provides insight regarding the impact of the activity on clinical practice and patient health.
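Because chart-stimulated recall pairs quantitative chart indicators with qualitative interview responses for the same participant, a simple two-part record is often enough to organize the data. The sketch below is purely illustrative; the field names and example indicators are assumptions, not part of any published instrument.

```python
# Illustrative record for one chart-stimulated recall interview, pairing
# quantitative chart indicators with qualitative interview responses.
from dataclasses import dataclass, field

@dataclass
class RecallInterviewRecord:
    participant_id: str
    weeks_post_activity: int                  # interviews typically occur >= 6 weeks out
    chart_indicators: dict = field(default_factory=dict)     # quantitative component
    interview_responses: dict = field(default_factory=dict)  # qualitative component

record = RecallInterviewRecord(
    participant_id="P-012",  # hypothetical identifier
    weeks_post_activity=8,
    chart_indicators={"a1c_at_goal": True, "statin_prescribed": True},
    interview_responses={
        "impact_on_practice": "Now screen all patients with diabetes for statin eligibility."
    },
)
print(record)
```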

Simulation-based Assessments

Simulations are a well-known and effective instructional approach, as they provide “close to real world” practice and a learning experience that resembles typical real-world decision making in a consequence-free environment. However, the real-time capture of clinical decisions made during the simulation also provides a unique method of measuring outcomes. Because learning and outcomes measurement occur at the same time, this method of measuring outcomes is “nonintrusive”: participants are not asked to complete a separate outcomes questionnaire or answer a set of questions. Clinical decisions about ordering tests, making assessments and choosing pharmacologic treatments, as well as nonpharmacologic management and procedures, are captured in real time as participants progress through the simulation case studies.2
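As a sketch of how that “nonintrusive” capture might look in practice, the snippet below logs each clinical decision as a timestamped event while a simulated case runs. The case identifier, event names and decisions are hypothetical and merely stand in for whatever a real simulation platform records.

```python
# Illustrative sketch: logging clinical decisions in real time as a learner
# works through a simulated case, so outcomes data accrue without a separate survey.
from datetime import datetime, timezone

decision_log = []  # each entry is one decision event captured during the simulation

def record_decision(case_id: str, decision_type: str, choice: str) -> None:
    """Append a timestamped decision event (test ordered, assessment made, treatment chosen)."""
    decision_log.append({
        "case_id": case_id,
        "decision_type": decision_type,  # e.g., "test_ordered", "treatment_selected"
        "choice": choice,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

# Hypothetical decisions made while progressing through one case study.
record_decision("case-01", "test_ordered", "HbA1c")
record_decision("case-01", "treatment_selected", "metformin")
record_decision("case-01", "nonpharmacologic", "referral to a diabetes educator")

for event in decision_log:
    print(event)
```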

Performance Improvement

For most providers, measuring performance as described by Moore et al can be a daunting task without ready access to valid, accurate, credible and timely data that are relevant to the individual learner. Some noteworthy alternative approaches are outlined below:

  • Case scenarios: Data from Peabody et al3 indicate that a physician’s responses to sample clinical scenarios reflect what that physician will do when presented with a similar case in actual clinical practice. Case scenarios therefore offer a cost-efficient method for assessing performance.
  • Chart audits: Supporting physicians in extracting data from patient charts can provide objective data with true clinical endpoints. This approach can be labor-intensive, may require substantial incentives for participation, and relies on the quality of data entry in the patient’s chart or electronic medical record (a minimal tallying sketch follows this list).
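As a minimal illustration of how abstracted chart-audit data might be tallied, the snippet below computes a guideline-adherence rate from hypothetical chart records; the field names and the adherence criterion (a documented annual foot exam) are assumptions chosen only for the example.

```python
# Illustrative sketch: computing an adherence rate from abstracted chart-audit data.
# The records and the criterion (annual foot exam documented) are hypothetical.
charts = [
    {"chart_id": "C-101", "foot_exam_documented": True},
    {"chart_id": "C-102", "foot_exam_documented": False},
    {"chart_id": "C-103", "foot_exam_documented": True},
    {"chart_id": "C-104", "foot_exam_documented": True},
]

adherent = sum(1 for chart in charts if chart["foot_exam_documented"])
rate = adherent / len(charts)

print(f"Charts audited: {len(charts)}")
print(f"Criterion met:  {adherent}")
print(f"Adherence rate: {rate:.0%}")
```

Repeating the same tally on charts abstracted before and after the activity gives a simple, objective performance comparison with true clinical endpoints.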

Summary and Conclusion

The fundamental purpose of continuing education is to enable practitioners to perform successfully across the diverse characteristics of their professional practice. In the healthcare systems reshaped by the Affordable Care Act, there is a heightened expectation that our industry support this purpose with a focus on true performance change that improves patient and population health. Performance depends on sufficient knowledge, confidence, skills and the right environment. Measuring all of these domains using established methods will improve the success of your program and help highlight areas of continued educational need.

References:

  1. Moore DE Jr, Green JS, Gallis HA. Achieving desired results and improved outcomes: integrating planning and assessment throughout learning activities. J Contin Educ Health Prof. 2009;29:1-15.
  2. Cook DA, Brydges R, Zendejas B, Hamstra SJ, Hatala R. Mastery learning for health professionals using technology-enhanced simulation: a systematic review and meta-analysis. Acad Med. 2013;88:1178-1186.
  3. Peabody JW, Luck J, Glassman P, Jain S, Hansen J. Measuring the quality of physician practice by using clinical vignettes: a prospective validation study. Ann Intern Med. 2004;141:771-778.
