How Do We Compare?

Mike Taigman

When people first start realizing the power of real-time data analysis using Academy Analytics, it’s common for them to ask, “Can we compare ourselves to other centers to do some benchmarking?” The desire for comparison, ranking, grading, and the like seems to be woven into our collective DNA. How does our center’s frequency of aborted calls stack up against other centers? Is my time in Case Entry shorter than Stephen’s on A shift?

Benchmarking, the improvement science term for comparison, can be a helpful improvement tool, but not the way most folks think about it. In emergency services most benchmarking projects are built on the principle of “We will all measure the same thing the same way and see who is best and who is worst.” The best will feel good, and everyone else will feel less good. And that’s usually where it ends.

In the world of health care performance improvement, benchmarking is about finding an organization that does something really well, then studying the heck out of them by reading about, talking to, and visiting them. The goal is to learn about changes that might improve performance in your center.

For example, you might read about the significant improvement the folks from the New Castle County Office of Emergency Management in Delaware (USA) made: increasing the percentage of cardiac arrest patients who received CPR and shortening their time to hands-on-chest by nearly a minute. Then it becomes a matter of figuring out how to learn from them so you can resuscitate a higher percentage of patients in the community you serve.

Ask three questions

  1. What are we trying to accomplish? The answer is your project’s aim statement. The strongest aim statement is specific, measurable, and achievable and includes how much improvement you hope to make by when. For example: Our aim is to improve CPR rates by reducing the time to hands-on-chest from 130 seconds to less than 90 seconds and improving the percentage of cardiac arrest patients who receive Dispatcher-Directed CPR (DD-CPR) from 34% to over 70% by the end of August 2020.
  2. How will we know that a change is an improvement? The answer focuses on measurement. Identify your outcome, process, and, if needed, balancing measures. Outcome measures are the results you’re hoping to produce; in our example, that would be walking with and talking to survivors of cardiac arrest. Process measures are the things that, if done well, are shown to produce the outcome you’re looking for. In this case, good process measures could include the:
    • time from the first ring of the 911 call in the primary PSAP (if you can get it) to the time of the first compression, measured in seconds.
    • percentage of cardiac arrest patients identified in EMS patient care records who received CPR initiated by bystanders on their own or DD-CPR.

Balancing measures are unintended and potentially problematic consequences that you’d like to avoid. (In our example, it’s hard to think of any important balancing measures.)

Once you’ve identified your measures, gather baseline data for each of them covering the last 12 months or so as a platform for your improvement project. Then plot the data in chronological order on a run chart or control chart.
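The control-chart arithmetic behind that step is simple enough to sketch in a few lines. This is a minimal example, assuming an individuals (XmR) control chart; the monthly time-to-hands-on-chest values below are hypothetical, invented only for illustration:

```python
# Hypothetical baseline: monthly mean time to hands-on-chest, in seconds.
monthly_seconds = [128, 135, 131, 127, 140, 129, 133, 126, 138, 130, 125, 132]

# Center line for an individuals (XmR) chart is the mean of the points.
center = sum(monthly_seconds) / len(monthly_seconds)

# Average moving range between consecutive months.
moving_ranges = [abs(b - a) for a, b in zip(monthly_seconds, monthly_seconds[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

# Standard XmR limits: center line plus or minus 2.66 times the average moving range.
ucl = center + 2.66 * mr_bar  # upper control limit
lcl = center - 2.66 * mr_bar  # lower control limit

print(f"center={center:.1f}s, limits=({lcl:.1f}s, {ucl:.1f}s)")
```

Points outside the limits, or long runs on one side of the center line, signal that something beyond routine month-to-month variation has changed.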

3. What change(s) can we make that will result in improvement? The answer is a list of change ideas and theories that you can test in your center. How do you gather these? Benchmarking can be a great way to develop change ideas.

You could visit Robert Rosenbaum, M.D., FACEP, EMS medical director for New Castle County, and his team to study their practices and processes along with the changes they made to produce their improvements.

You could also reach out to the Resuscitation Quality Improvement Telecommunicator (RQI-T) team that studies and provides training on ways to improve the percentage of people who are successfully resuscitated from cardiac arrest. RQI-T is a continuous quality improvement program to help telecommunicators improve survival from cardiac arrest (visit the RQI-T website at rqipartners.com/rqit/).

For that matter, you could search for any place that produces great results. As you study these systems, identify potential changes that you can make to produce similar results in your system.

Time to PDSA

Test the change ideas with the Plan, Do, Study, Act (PDSA) cycle.

Plan: The first step in the testing process is to plan the smallest and fastest test of change you can think of. It’s often best to do this in simulation with practice cases so that you can see how things really work in your system. Many experienced leaders admit that few good ideas survive intact after first contact with frontline employees. As part of the plan, predict what you think will happen. For example, if the change idea is moving Pre-Arrival Instructions from the secondary PSAP to the primary PSAP to shorten the time to first compressions, flowchart your current process times and estimate how many seconds faster it would be if the telecommunicator in the primary PSAP provided CPR instructions.
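The prediction step can be as simple as adding up flowcharted step times. A quick sketch, with every step name and timing invented for illustration:

```python
# Hypothetical flowcharted step timings (seconds) for the current process,
# where the call is transferred to a secondary PSAP before CPR instructions.
current = {
    "ring_to_answer": 15,
    "transfer_to_secondary_psap": 40,
    "case_entry_and_triage": 45,
    "instructions_to_first_compression": 30,
}

# Proposed process: the primary-PSAP telecommunicator gives CPR
# instructions, eliminating the transfer step entirely.
proposed = {k: v for k, v in current.items() if k != "transfer_to_secondary_psap"}

predicted_current = sum(current.values())    # 130 seconds
predicted_proposed = sum(proposed.values())  # 90 seconds
predicted_savings = predicted_current - predicted_proposed

print(f"Predicted time saved: {predicted_savings} seconds")
```

Comparing this prediction against what the simulated test actually measures is the heart of the Study step that follows.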

Do: Conduct your test, record measurements, and gather the opinions and observations from the folks involved in the test to inform your analysis.

Study: Compare results of the actual test with your predictions. Did it produce the improvement you’d hoped for? What else did you discover with your test?

Act: In this step we take our analysis from the test and decide whether to adopt, adapt, or abandon this change idea. Adoption makes sense if the change produced good improvement and seemed workable without causing too many complications from the perspective of those involved with the test. Adaptation is appropriate if there was some but not enough improvement, or if the folks involved noticed things that made the execution of the change idea problematic. With adaptation you think about how to modify the change and then run the next PDSA cycle. Abandonment is the logical choice if the test failed to produce any improvement and the folks involved can’t think of a way to adapt the idea for the desired change.

Continue conducting PDSA cycles until you arrive at a proven, workable change that reliably produces improvement; only then implement it broadly.

Let it be known

Once you’ve implemented a successful improvement idea, continue tracking your outcome, process, and balancing measures to make sure that the improvement takes hold and sustains. Next, write up your project and submit it to the Annals of Emergency Dispatch and Response (AEDR) so that others can learn from your success. If we continually collaborate to improve the results we produce, we can make the world a safer, healthier place through our communication centers.
