
Seeing The Forest From The Trees

Ask Doc

Jeff Clawson, M.D.

Generalizability is viewed in academia as the gold standard for a research-driven process. In simple terms, generalizability means achieving the same results in different environments, with completely different populations. Attaining it presents a daunting problem in the realm of emergency dispatch calltaking. Indeed, given the vast disparities in size, geographical jurisdiction, technology, roles, responsibilities, and staffing among dispatch agencies, a reasonable question would be: Is it even possible to achieve the same (or at least similar) results in emergency dispatch agencies throughout the world? Emergency calltaking is such a chaotic task after all, with panicked callers, life-threatening events, and overworked, underpaid calltakers—what’s the use in trying?

Enter protocols—a set of structured rules, instructions, and actions arranged in an organized, methodologically sound system and used to complete every specific objective of the emergency calltaking process in the right order. Protocols for emergency fire, medical, and police calltaking have become the standard of practice for achieving consistency of process, as well as a high level of service, in emergency dispatch centers around the globe.

However, emergency dispatch protocols require continuous review and updates in order to stay current and effective. This need for updating protocols leads to an additional challenge for the emergency dispatch community: version control.

The necessity of version control is rooted in the need for comparability within a user or study environment.

Most would agree that a dispatch center should not have multiple variations of a given protocol between individual workstations or shifts, because if it did, the performance from station to station and shift to shift would be, by definition, different. The dispatch results at one station, if protocols were followed as written, would differ (who knows how much?) from the results at the next station. Such a practice of using different versions of a root protocol (or dispatch method) within a single agency makes it difficult, if not impossible, to compare findings and hence to reach any valid conclusions about high, low, or mediocre dispatcher performance, or about good or bad case outcomes, based on any one of the protocol variations in use.

This dilemma extends across jurisdictions, and even between neighboring agencies, in a similar way. When one agency studies an issue, its data, findings, and recommendations are not necessarily comparable to those of another agency that uses a fundamentally different dispatch system.

In these environments, the various types of dispatch systems become, by definition, an ever-increasing number of variations. Moreover, when certain “protocols/guidelines” can be modified at will by the user agency, the number of dispatch “protocol” types increases exponentially as each of these versions continues to morph.

In addition, what may be the same as, or different from, your protocol cannot be easily identified. The ability to compare studies done in these systems breaks down beyond the individual center level. Their individual outputs, and any outcomes based on them, may therefore be directly comparable only to themselves. In today’s sophisticated dispatch centers, generality “walks,” while specifics “talk.”

A related problem in any dispatch study is the simple failure to identify which “system” is actually being used. Often the best an agency can do is quote the system’s general type: “priority dispatch protocol,” “criteria-based dispatch,” or, even more generally, “the EMD protocol,” without being able to report version (and date) information because they have no organized way to evolve or even identify versions. At its worst, agencies cannot even report basic protocol taxonomy (version evolution) because they have no record of where the original system came from or what it was. It might as well have been “homegrown” because effectively that’s what it has become. This makes it impossible even to describe how the current version differs from the parent. In cases like this, even if a study is published, findings are impossible to analyze properly, and any suggested protocol changes are then difficult, if not impossible, to effectively implement.