Baseline: Feels Like I’m Going to Lose My Mind!

By Laura Gruber, Senior Director, Administration, Strategy and Education, University of Florida Health Physicians; Ted Singer, MA, CHCP, President, PVI, PeerView Institute for Medical Education, Inc.; and Marvin Dewar, MD, JD, Senior Associate Dean and CEO, University of Florida Health Physicians

Editor’s Note: This article is the second of a three-part Alliance Almanac series focused on the implementation of an asthma-focused quality improvement project at two different medical practices in Gainesville, Florida. One project is focused on adult patients and the other is focused on pediatric patients. The first segment of the series appeared in the November 2015 Almanac and covered project planning and kickoff. This installment is focused on the challenges of baseline measurement and ongoing implementation.

Alliance Competencies Addressed

  • Competency 8.1: Integrate a systems-based approach to identifying and closing gaps in healthcare into the design and assessment of educational activities/interventions by:
    • Evaluating quality and performance gaps for systems-based issues (e.g., structures and processes) that can be addressed within CEhp activities/interventions
    • Addressing systems-based issues that are barriers to change and implementation of new knowledge and skills
    • Assessing improvements in team performance
    • Developing CEhp content that supports collaborative practice within the interprofessional healthcare team

With a nod to the famous Madonna lyric, this month’s title refers to the difficulty that health systems face when trying to establish baseline benchmarks against which quality improvements can be measured. It seems like a simple enough task:

  1. Identify a population of patients suffering from a certain condition – most likely one with high prevalence.
  2. Identify outcomes or features that represent best practices or desired results.
  3. Measure the current practice performance with respect to the measures.
  4. Plan and implement a series of Plan-Do-Study-Act (PDSA) cycles designed to improve performance on the selected metrics (i.e., closing the performance gap).
  5. Reassess the practice’s performance on the measures to determine impact.

With motivation to assess and improve, figuring out where to begin shouldn’t be that difficult, right?

At the end of the first article in this series, we left our practices at different stages of this QI process, with both needing to assess their baseline performance. Although both practices used a comprehensive and robust EHR system, much of the relevant data was stored in free-text fields, so two individuals – both with QI backgrounds – had to be assigned to manually review charts to find representative patients for baseline measurements. Even then, collecting data was not a simple matter. Below, we describe some of the necessary assumptions and decision points that turned a seemingly simple task into a complicated one:

What Defines a Patient Eligible for the Initiative?
Asthma is asthma, right? In fact, it’s not that simple. Asthma care guidelines differ depending on whether the asthma is mild, moderate or severe and whether the symptoms are intermittent or persistent. Such data points are necessary for patient selection and measurement scoring, but different clinicians may use different phraseology to directly or indirectly document the needed information. The national adoption of ICD-10 coding in 2015 may help in the future by improving the specificity of information provided during the coding process, but for now, and for some time in the future, collecting information like this from the medical record usually requires some sort of judgment by a knowledgeable reviewer. Even the determination of whether a patient is pediatric or adult is not always easy. Our project involved two project sites – one pediatric and one family medicine – which used different age cutoffs for what they considered a pediatric patient. All of these considerations, and others, contribute to an extremely complex measurement environment that, more than likely, doesn’t dovetail with the way data is currently structured within existing patient records, even in highly developed EHRs.
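To make the reviewers’ judgment concrete, consider a simple keyword pass over note text: it can pre-screen charts for severity language, but it cannot replace a knowledgeable reviewer when clinicians phrase things indirectly. The sketch below is illustrative only; the patterns and the site-specific age cutoffs are hypothetical, not drawn from our EHR.

```python
# Illustrative pre-screen for chart review. All patterns and age cutoffs are
# hypothetical; charts that match no pattern still require a human reviewer.
import re

SEVERITY_PATTERNS = {
    "severe persistent": r"severe\s+persistent",
    "moderate persistent": r"moderate\s+persistent",
    "mild persistent": r"mild\s+persistent",
    "intermittent": r"intermittent\s+asthma",
}

# The two sites disagreed on who counts as pediatric; making the cutoff a
# per-site parameter keeps that decision explicit rather than buried in queries.
PEDIATRIC_AGE_CUTOFF = {"pediatric_clinic": 18, "family_medicine": 21}

def prescreen_severity(note_text: str) -> str:
    """Return a candidate severity label, or flag the chart for manual review."""
    text = note_text.lower()
    for label, pattern in SEVERITY_PATTERNS.items():
        if re.search(pattern, text):
            return label
    return "needs manual review"

def is_pediatric(age: int, site: str) -> bool:
    return age < PEDIATRIC_AGE_CUTOFF[site]

print(prescreen_severity("Assessment: moderate persistent asthma, on ICS."))
print(is_pediatric(19, "family_medicine"))  # pediatric at one site, adult at the other
```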

Quality Measures Are Not All Made Equal
The University of Florida Continuing Medical Education (UF CME) office, through its alignment with the medical center’s quality office and clinical practice operations, has experience compiling a variety of quality-related reports, such as the Physician Quality Reporting System (PQRS), EHR Meaningful Use, Maintenance of Certification (MOC) projects in collaboration with physician-learner colleagues and practice improvement CME initiatives. In each case, depending on the condition, the quality measures may be defined slightly differently even though the assessed condition and the outcome objectives may be essentially the same. This often means that measurements made for one initiative cannot be used for related initiatives, and that the data-system work needed to automate the measurement process – an obviously desirable goal – often has to be replicated and modified for each program. The multiplicity of current separate pay-for-performance and other quality-based incentive programs has created a heavy administrative burden for many health systems, a situation that has been derisively referred to as a “measurement tsunami”!
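As a concrete illustration of why near-identical measures multiply work, the sketch below keys the same clinical question to slightly different denominators per program. The “HEDIS-like” and “PQRS-like” specifications are invented for illustration; they are not the actual program definitions.

```python
# Invented "HEDIS-like" and "PQRS-like" specs showing how one clinical question
# ends up with program-specific denominators; not the actual program definitions.
from dataclasses import dataclass

@dataclass(frozen=True)
class MeasureSpec:
    program: str
    min_age: int
    max_age: int
    severities: tuple  # asthma severities that count toward the denominator

SPECS = [
    MeasureSpec("HEDIS-like", 5, 64,
                ("mild persistent", "moderate persistent", "severe persistent")),
    MeasureSpec("PQRS-like", 5, 50,
                ("moderate persistent", "severe persistent")),
]

def in_denominator(age: int, severity: str, spec: MeasureSpec) -> bool:
    return spec.min_age <= age <= spec.max_age and severity in spec.severities

# The same 55-year-old with moderate persistent asthma counts toward one
# program's measure but not the other's, so each report needs its own logic.
for spec in SPECS:
    print(spec.program, in_denominator(55, "moderate persistent", spec))
```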

Help may be on the way to address these inefficiencies. In mid-February, the Centers for Medicare and Medicaid Services (CMS) announced the Core Quality Measures Collaborative, which aims to establish broadly agreed-upon core measure sets that can be harmonized and used by both commercial and government payers to reduce variability in measure selection, collection burden and cost.

Closing the Paper Quality Gap
In addition to the fact that EHRs often do not capture quality data in a way that can be easily mined, clinicians may not consistently document the pieces of information needed for a quality measurement. Because the needed information is not in the record – even though the indicated care was actually provided – the clinician may “fail” that measurement. Further complicating this challenge, clinicians often provide care without knowing which quality data points will later be assessed, or before the quality measures have even been developed. We call this issue the “paper quality gap”: the degree to which the record cannot substantiate quality outcomes or process measures that were, in fact, delivered. Although physicians are often frustrated by documentation requirements that they profess don’t actually enhance care delivery, we emphasize that the paper gap must be rectified before any meaningful measurement of actual quality care can occur. Those working in the quality improvement area should always consider the impact of the documentation gap and include plans to close it as part of the improvement effort.

All of the challenges described above were experienced by the UF CME team and the participating practices. The baseline data collection effort took much longer than expected: several months rather than weeks. When planning and implementing QI projects, don’t assume that baseline data collection will be simple. But if it is done properly, it firmly establishes the foundation for sustainable QI in that specific area.

Baseline Results
With all of the aforementioned caveats, the QI reviewers audited 30 records from each practice to assess baseline performance. The target population was patients with a diagnosis of asthma of moderate persistent severity or greater. The metrics selected (i.e., documentation of the presence of an asthma action plan, which was part of patient engagement; instruction in the proper use of asthma inhalers; establishment of asthma care goals; and the presence or absence of asthma exacerbations) were developed with reference to the Healthcare Effectiveness Data and Information Set (HEDIS) and PQRS. HEDIS is a group of 81 measures used to assess health plan performance. PQRS is Medicare’s quality reporting program, which includes a set of clinical quality measures for multiple specialties. Thirty charts is a common sample size for audits of this type and gives the quality team the ability to detect an improvement from baseline to as high as 90 percent with 95 percent confidence using basic nonparametric statistical techniques. See Table 1.
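For readers who want to check the sample-size claim for themselves, the sketch below computes an exact (Clopper-Pearson) 95 percent confidence interval for a pass rate observed in a 30-chart audit. The 27-of-30 result is a hypothetical number used for illustration, not our data.

```python
# Exact (Clopper-Pearson) 95% confidence interval for a pass rate observed in
# a 30-chart audit. The 27-of-30 result below is hypothetical, not our data.
from scipy.stats import beta

def clopper_pearson(passed: int, n: int, alpha: float = 0.05):
    """Exact binomial confidence interval for an observed proportion."""
    lower = beta.ppf(alpha / 2, passed, n - passed + 1) if passed > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, passed + 1, n - passed) if passed < n else 1.0
    return lower, upper

lo, hi = clopper_pearson(27, 30)  # e.g., 27 of 30 charts pass a measure (90%)
print(f"95% CI for the pass rate: {lo:.2f} to {hi:.2f}")
# If the baseline rate sits below the interval's lower bound, the improvement
# is distinguishable from baseline at the 95 percent confidence level.
```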

As has already been discussed, there is no way to know whether the measured gaps represent paper gaps or clinical gaps, but with that baseline evaluation in hand, the improvement teams were ready to implement the project that was designed to improve both the documentation and delivery of care.

[Table 1. Baseline performance on the selected asthma measures]

Implementing and Optimizing QI Efforts
The QI initiative’s pediatric practice designed and implemented an asthma screening tool, described in Article One of this series (see Figure 1 in that article), intended to better standardize the care of asthma patients and the documentation of that care by facilitating data collection against the selected quality measures as well as other clinical endpoints and patient-reported symptoms. The process of implementing the screener and collecting feedback about its utilization led to a series of revisions to the tool itself and to useful modifications of the clinic’s workflow for patient navigation during the visit.

To optimize implementation of the new asthma care tool, we had to understand the ideal asthma patient interaction with the practice. UF CME worked with the pediatric practice manager to develop a flow diagram, which illustrated an optimized patient path (a sketch of the key flagging step follows the list):

  1. Patient has initial appointment where asthma is diagnosed.
  2. Patient/caregiver is given an asthma action plan and a follow-up appointment is set.
  3. The follow-up appointment is flagged as an “Asthma Follow-up.”
  4. Patient arrives for the follow-up appointment and is given the asthma care screening tool to complete in the waiting room.
  5. Patient sees the doctor, who reviews the screening tool results, reviews the action plan and sets a follow-up appointment.
  6. Repeating steps 3-5 in subsequent cycles results in improved asthma care.
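The hinge of this path is the flag in step 3: an unflagged visit means the screener never reaches the waiting room. Below is a minimal sketch of the check-in logic implied by steps 3 and 4; the Appointment structure and visit-type label are hypothetical stand-ins, not the practice’s actual scheduling fields.

```python
# Minimal sketch of the check-in logic implied by steps 3-4. The Appointment
# structure and visit-type label are hypothetical stand-ins, not EHR fields.
from dataclasses import dataclass

ASTHMA_FOLLOWUP = "Asthma Follow-up"

@dataclass
class Appointment:
    patient_id: str
    visit_type: str           # e.g., "Asthma Follow-up" or "Well Child"
    screener_given: bool = False

def check_in(appt: Appointment) -> Appointment:
    """Front-desk step: flagged asthma follow-ups get the screening tool."""
    if appt.visit_type == ASTHMA_FOLLOWUP:
        appt.screener_given = True  # hand the tool to the patient/caregiver
    return appt

visit = check_in(Appointment("pt-001", ASTHMA_FOLLOWUP))
print(visit.screener_given)  # True only when the visit was flagged in step 3
```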

Armed with this vision for an ideal patient path, the practice began using the screening tool and solicited daily feedback on its utility. There is an old saying that the greatest number of modifications to a process design occurs after the first end-users work with it. That axiom again proved true in our project. For example, the key to implementation was actually getting the tool into the hands of the patient/caregiver at check-in. Whenever this did not happen, most physicians moved on with the exam rather than taking time to find and complete a copy of the tool. When it turned out that many asthma follow-up visits were not being flagged, and that the failure to flag was keeping the screening tool from being used, the clinic team introduced process changes to increase the rate of flagging and, therefore, the utilization of the QI tool.

There were also early modifications of the screening tool to improve the ordering of questions and make data scoring easier. A valuable observation was that the use of a tablet or other electronic tool to complete the screening would allow seamless integration into the EHR, but that level of customization was a larger and more resource-intensive project that needed to wait for another day.

The above comments reflect the flexibility that QI teams need to maintain as they modify and optimize their designs based on actual usage. With these modifications, the asthma screening tool became part of the fabric of the pediatric practice, facilitating the collection of important quality data and reminding physicians to optimize asthma care with patients and caregivers.

What About the IRB?
When talking about QI projects, an issue we frequently deal with is whether to seek Institutional Review Board (IRB) approval for the project. We have even heard conference presenters minimize the IRB issue or, as part of a push-to-publish-outcomes declaration, say, “It’s not a big deal to have the project approved by the IRB, and then you can publish whatever you want!” IRBs are committees established under federal law to review and approve research involving human subjects, ensuring that research is conducted in a way that maximizes patient safety and protects individual rights. Certainly, projects that qualify as research or involve a potential risk to patient safety or to the protection of individual personal health information should be submitted to the local IRB for review and approval. In addition, if a quality project that does not qualify as research is likely to be published, IRB approval can help ensure that patient protected health information (PHI) is adequately protected. Fortunately, many IRBs have created processes for expedited review of quality projects with low safety or information-security concerns. Talk to your IRB director about the availability of expedited reviews in your setting. The “to IRB or not to IRB” decision is an important one for quality projects. When in doubt, it is best to err on the side of IRB submission, but the task is not trivial and it will add time to the project start-up phase.

Points for Practice:

  • Collecting baseline data for QI projects often reveals important documentation gaps that take significant effort to close but establish a foundation for sustainable improvement.
  • Selecting quality measures that satisfy various quality reporting mandates is a huge challenge.
  • Creating a patient flow diagram can help identify key areas on which to focus QI projects.

Looking Forward: Did the Asthma QI Initiative Work?
The final article in this series will look at performance on the asthma QI measures after the QI intervention. In addition to post-intervention results at the pediatric practice, we will describe progress at the adult practice, which entered the project with a less-established QI culture than the pediatric practice and took longer than anticipated to complete the project stages. Therein lies another important lesson of QI work: Just as it pays to be flexible when modifying the initial project design and tools to reflect the early experience of end users, the proposed project timeline is equally subject to change as the result of unanticipated challenges. If the team focuses on the ultimate goal – changing practice and improving care – rather than lamenting unanticipated challenges or the need to change a project design, it can still direct a successful and impactful project.
