Results of the Alliance 2016 Environmental Scan (I of II)

By: The Alliance Research Committee: Wendy Turell (Chair), Andrew Bowser, Greselda Butler, Elizabeth Franklin, Alexandra Howson, Jan Perez, Pesha Rubinstein, Greg Salinas

Overview and Methods

The Alliance Research Committee conducted an environmental scan between December 2016 and January 2017 to ascertain practices and member-organization direction in five domains of interest: 

  1. quality improvement;
  2. adult learning principles/educational methods;
  3. outcomes measurement;
  4. maintenance of certification (MOC);
  5. patient inclusion.

Two surveys were developed, one for educational providers and one for educational supporters. The Research Committee drafted survey questions, which were vetted by Alliance board and staff leadership and by topic-aligned committee chairs. The invitation link to the online survey was sent to the entire Alliance membership (n=2000).

Analysis and Results

We removed duplicate data/incomplete entries and used descriptive statistics (e.g., frequencies and means) to examine overall responses and related trends among the survey items. In total, 178 educational providers (“providers”) and 31 educational supporters (“supporters”) provided full survey responses. Due to the low sample size (especially for the supporter group) and lack of random sampling for this overall survey, these findings may not be generalizable to the broader community based on this study alone.
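The cleaning and descriptive-statistics step described above can be sketched in a few lines. This is a hypothetical illustration, not the committee's actual analysis code; the field name `qi_status` and the sample records are invented for demonstration.

```python
from collections import Counter

def summarize(entries):
    """Deduplicate, drop incomplete entries, and compute frequencies.

    `entries` is a list of dicts (one per survey response). The field
    name used below is an illustrative assumption, not a real item.
    """
    # Remove exact duplicate responses while preserving order
    seen, unique = set(), []
    for e in entries:
        key = tuple(sorted(e.items()))
        if key not in seen:
            seen.add(key)
            unique.append(e)
    # Drop incomplete entries (any missing answer)
    complete = [e for e in unique if all(v is not None for v in e.values())]
    # Frequencies for one categorical item, as whole-number percentages
    counts = Counter(e["qi_status"] for e in complete)
    n = len(complete)
    freqs = {k: round(100 * v / n) for k, v in counts.items()}
    return n, freqs

demo = [
    {"qi_status": "engaged"},
    {"qi_status": "engaged"},   # exact duplicate, removed
    {"qi_status": "never"},
    {"qi_status": None},        # incomplete, removed
]
n, freqs = summarize(demo)
```

On the four demo records, deduplication and completeness filtering leave two usable responses, from which simple frequency percentages are computed, mirroring the descriptive approach (frequencies and means) reported above.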

Responder Characteristics

Provider responders were distributed across four main communities of practice; a majority of supporters were from pharmaceutical companies (Figure 1).

Figure 1. Responder Characteristics

Results of the environmental scan are reported for Almanac readers in two articles. This article, the first of a two-part series, covers results on quality improvement, adult learning principles/educational methods and outcomes measurement. Part two in the series will present results regarding maintenance of certification (MOC) and patient inclusion.

Quality Improvement

CME providers and supporters were surveyed on their current planning and execution of quality improvement (QI) initiatives, future opportunities for QI initiatives and the barriers each group faces. For survey respondents, QI was defined as follows: “Quality Improvement refers to the use of objective quality measures (such as those obtained from medical or insurance records) as data in the assessment of the impact of an intervention on physician performance, patient outcome or other system-based change. Hospital providers: Please answer QI-related questions from the perspective of the CME department at your organization.”

More than half (58%) of provider organizations had begun or completed one or more QI projects at some point; 28% had never engaged in a QI initiative; and 14% stated that QI was not relevant to them. A majority of supporters (77%) had supported one or more QI projects at some point. Only 7% had never supported QI, and 14% stated they did not believe QI was relevant.

A survey question asked about QI projects undertaken or supported more recently, in the past 24 months (Figure 2).

Figure 2. QI Trends Over Past 24 Months

Fifty-one percent of providers had begun or completed three or more QI initiatives in that period, and 39% had begun or completed five or more. Looking toward the future, a majority of providers and supporters (72% and 74%, respectively) reported that they planned to undertake or fund future QI projects; 23% of providers and 13% of supporters were undecided; and 13% of supporters stated they had no plans to support QI in the future.

Access to healthcare system data was the biggest barrier to QI reported by providers (37%). Other barriers included human or financial resources (12% and 16%, respectively), organizational culture (16%), expertise (5%) and other reasons (15%), such as time, the “willingness of commercial supporters to undertake a project that exceeds a 12-month grant cycle” and “physician engagement.” Supporters identified financial resources (30%) and expertise (17%) as the main barriers; other barriers included “quality submissions,” “scalability and time adaptability (ensuring the support resources are available and adjusting the expectation for receiving outcomes)” and “determining the right mix of QI versus other types of support internally.”

Adult Learning Principles and Educational Methods

Adult Learning Principles

Over 95% of respondents in both groups strongly agreed that adult education should be relevant to, and integrate with, the everyday lives of adult learners. Providers were somewhat more likely than supporters to agree that adult learners are independent, self-directed and internally motivated (93% versus 83%). Most respondents (87% of providers and 82% of supporters) agreed that adults are more interested in problem-related than content-related approaches to learning, and most felt that adults are more motivated to learn by internal than by external motivation (73% of providers and 71% of supporters). Figure 3 summarizes the educational methods that respondents ranked as most effective (categories included: instructor-led lecture, peer-to-peer, demonstration, panel discussion, standardized patients, audit and feedback, teach-back, preceptorship, flipped classroom, oral debate, journal-based, interview-based and computer-based gaming).

Figure 3. Three Top-Ranked “Most Effective” Educational Methods


Outcomes Measurement

Respondents were asked about the highest Moore’s outcomes level reached across any of the programs they put forth or supported in a given year (i.e., their best/highest example).[i] Supporters most often reported Level 6/Patient Health (48%), followed by Level 5/Performance (22%). In contrast, providers most often reached as high as Level 5 (44%), followed by Level 6 (25%). Fewer than 10% of respondents overall reached Level 7/Community Health; most providers in this group were in academic medical centers or hospitals. Supporters and providers were also queried about the highest level of measurement they reach for the majority of their activities in a given year, which can be interpreted as “regular achievement” at an outcomes level (Figure 4).

Figure 4. Average Achievement of Outcomes Levels

Both respondent groups most often reported Level 4/Competence (providers, 57%; supporters, 56%).

Most providers (70%) reported using internal staff to plan and execute outcomes measurement; 21% used a mix of internal staff and outside vendors/consultants; 2% fully outsourced outcomes measurement; and 3% reported not measuring outcomes at all. The provider barriers to measuring activity outcomes cited most often were “not enough staff” (64%), followed by “culture of resistance/fear of change” (28%), “lack of expertise” (24%) and “time” (16%).

Discussion and Conclusion

A majority of providers and supporters have begun to develop QI projects. Providers in hospital settings or academic centers were more likely to be engaged in current QI projects than providers in specialty societies or medical education companies, and supporters expressed greater organizational commitment to QI than providers. However, access to healthcare system data remains a significant barrier to implementation of QI projects.

Providers and supporters shared similar attitudes on a variety of adult learning principles, although providers were more likely than supporters to view adults as independent and self-directed learners. The two groups differed in the highest level of outcomes measured in a given year: more supporters than providers reached Level 6, while providers most often topped out at Level 5/Performance. For both groups, Level 4/Competence was the outcomes level reached for the majority of activities in a given year, no doubt because it is difficult to fund and sustain higher levels of outcomes measurement across most or all activities. Unsurprisingly, Level 7 outcomes were mostly reported by providers in hospitals or academic medical centers, whose access to larger patient communities distinguishes them from other provider groups. A majority of providers undertook measurement with internal staff resources only; however, lack of staff was the most frequently reported barrier to measurement efforts, followed by a culture of resistance to change and lack of expertise. Other factors (such as budgets) may influence the reliance on internal staffing versus external vendor/consultant support.

REFERENCES

[i] Moore DE Jr, Green JS, Gallis HA. Achieving desired results and improved outcomes: integrating planning and assessment throughout learning activities. J Contin Educ Health Prof. 2009;29(1):1-15.
