
NR449 Evidence-Based Practice
RUA: Analyzing Published Research Guidelines
Purpose
The purpose of this paper is to interpret the two articles identified as most important to the group topic.
Course outcomes: This assignment enables the student to meet the following course outcomes.
CO 2: Apply research principles to the interpretation of the content of published research studies. (POs 4 and 8)
CO 4: Evaluate published nursing research for credibility and clinical significance related to evidence-based practice.
(POs 4 and 8)
Due date: Your faculty member will inform you when this assignment is due. The Late Assignment Policy applies to
this assignment.
Total points possible: 200 points
Preparing the assignment
1. Follow these guidelines when completing this assignment. Speak with your faculty member if you have questions.
2. Please make sure you do not duplicate articles within your group.
3. The paper will include the following:
a. Clinical Question (30 points/15%)
1. Describe the problem: What is the focus of your group’s work?
2. Significance of problem: What health outcomes result from your problem? Or what statistics document this
is a problem? You may find support on websites for government or professional organizations.
3. Purpose of the paper: What will your paper do or describe?
***Please note that although most of these questions are the same as you addressed in paper 1, the purpose of
this paper is different. You can use your paper 1 for items 1 & 2 above, including any faculty suggestions for
improvement provided as feedback.
b. Evidence Matrix Table: Data Summary (Appendix A) – (60 points/30%)
Categorize items in the Matrix Table, including proper in-text citations and reference list entries for each article.
1. References (recent publication within the last 5 years)
2. Purpose/Hypothesis/Study Question(s)
3. Variables: Independent (I) and Dependent (D)
4. Study Design
5. Sample Size and Selection
6. Data Collection Methods
7. Major Findings (Evidence)
c. Description of Findings (60 points/30%)
Describe the data in the Matrix Table, including proper in-text citations and reference list entries for each article.
1. Compare and contrast variables within each study.
2. What are the study design and procedures used in each study (qualitative, quantitative, or mixed methods; levels of confidence in each study, etc.)?
3. Participant demographics and information.
4. Instruments used, including reliability and validity.
5. How do the research findings provide evidence to support your clinical problem, or what further evidence
is needed to answer your question?
6. Next steps: Identify two questions that can help guide the group’s work.
d. Conclusion (20 points/10%)
Review major findings in a summary paragraph.
1. Evidence to address your clinical problem.
2. Make a connection back to all the included sections.
3. Wrap up the assignment and give the reader something to think about.
e. Format (30 points/15%)
1. Correct grammar and spelling
2. Include a title and reference page
3. Use of headings for each section:
o Problem
o Synthesis of the Literature
− Variables
− Methods
− Participants
− Instruments
− Implications for Future Work
4. Conclusion
5. Adheres to current APA formatting and guidelines
6. Include at least two (2) scholarly, current (within 5 years) primary sources other than the textbook
7. 3-4 pages in length, excluding appendices, title and reference pages
For writing assistance (APA, formatting, or grammar) visit the APA Citation and Writing page in the online library.
Please note that your instructor may provide you with additional assessments in any form to determine that you fully
understand the concepts learned.
Grading Rubric

Criteria are met when the student’s application of knowledge demonstrates achievement of the outcomes for this assignment.

Clinical Question (30 points/15%)
Required criteria:
1. Describe the problem: What is the focus of your group’s work?
2. Significance of problem: What health outcomes result from your problem? Or what statistics document this is a problem? You may find support on websites for government or professional organizations.
3. Purpose of the paper: What will your paper do or describe?
Scoring:
- Highest level of performance (30 points): includes 3 requirements for section.
- High level of performance (26 points): includes 2 requirements for section.
- Satisfactory level of performance (24 points): includes 1 requirement for section.
- Unsatisfactory level of performance (11 points): present, yet includes no required criteria.
- Section not present in paper (0 points): no requirements for this section presented.

Evidence Matrix Table: Data Summary (Appendix A) (60 points/30%)
Required criteria: Categorize items in the Matrix Table, including proper in-text citations and reference list entries for each article.
1. References (recent publication within the last 5 years)
2. Purpose/Hypothesis/Study Question(s)
3. Variables: Independent (I) and Dependent (D)
4. Study Design
5. Sample Size and Selection
6. Data Collection Methods
7. Major Findings (Evidence)
Scoring:
- Highest (60 points): includes 7 requirements for section.
- High (56 points): includes 6 requirements for section.
- Satisfactory (47 points): includes 5 requirements for section.
- Unsatisfactory (25 points): includes 4 or fewer requirements for section.
- Section not present (0 points): no requirements for this section presented.

Description of Findings (60 points/30%)
Required criteria: Describe the data in the Matrix Table, including proper in-text citations and reference list entries for each article.
1. Compare and contrast variables within each study.
2. What are the study design and procedures used in each study (qualitative, quantitative, or mixed methods; levels of confidence in each study, etc.)?
3. Participant demographics and information.
4. Instruments used, including reliability and validity.
5. How do the research findings provide evidence to support your clinical problem, or what further evidence is needed to answer your question?
6. Next steps: Identify two questions that can help guide the group’s work.
Scoring:
- Highest (60 points): includes 6 requirements for section.
- High (53 points): includes 5 requirements for section.
- Satisfactory (47 points): includes 4 requirements for section.
- Unsatisfactory (23 points): includes 3 or fewer requirements for section.
- Section not present (0 points): no requirements for this section presented.

Conclusion (20 points/10%)
Required criteria: Review major findings in a summary paragraph.
1. Evidence to address your clinical problem.
2. Make a connection back to all the included sections.
3. Wrap up the assignment and give the reader something to think about.
Scoring:
- Highest (20 points): includes 3 requirements for section.
- High (18 points): includes 2 requirements for section.
- Satisfactory (15 points): includes 1 requirement for section.
- Unsatisfactory (8 points): present, yet includes no required criteria.
- Section not present (0 points): no requirements for this section presented.

Format (30 points/15%)
Required criteria:
1. Correct grammar and spelling
2. Include a title and reference page
3. Use of headings for each section:
o Problem
o Synthesis of the Literature
− Variables
− Methods
− Participants
− Instruments
− Implications for Future Work
4. Conclusion
5. Adheres to current APA formatting and guidelines
6. Includes at least two (2) scholarly, current (within 5 years) primary sources other than the textbook
7. 3-4 pages in length, excluding appendices, title and reference pages
Scoring:
- Highest (30 points): includes 8 requirements for section.
- High (26 points): includes 7 requirements for section.
- Satisfactory (23 points): includes 6 requirements for section.
- Unsatisfactory (11 points): includes 5 or fewer requirements for section.
- Section not present (0 points): no requirements for this section presented.
Total Points Possible = 200 points
Appendix A
EVIDENCE MATRIX TABLE
Each article is summarized across the following columns: Article | References | Purpose/Hypothesis/Study Question(s) | Variables: Independent (I) and Dependent (D) | Study Design | Sample Size & Selection | Data Collection Methods | Major Finding(s)

Article 1 (SAMPLE ARTICLE)
References: Smith, L. (2013). What should I eat? A focus for those living with diabetes. Journal of Nursing Education, 1(4), 111-112.
Purpose/Hypothesis/Study Question(s): How do educational support groups affect dietary modifications in patients with diabetes?
Variables: I – Education; D – Dietary modifications
Study Design: Quantitative
Sample Size & Selection: N = 18; convenience sample selected from a local support group in Pittsburgh, PA
Data Collection Methods: Focus groups
Major Finding(s): Support and education improved compliance with dietary modifications.

Articles 1–5: rows left blank for the group’s selected articles.
ORIGINAL ARTICLE
Measures to Improve Diagnostic Safety in Clinical Practice
Hardeep Singh, MD, MPH,* Mark L. Graber, MD,†‡§ and Timothy P. Hofer, MD, MSc||¶
Abstract: Timely and accurate diagnosis is foundational to good clinical
practice and an essential first step to achieving optimal patient outcomes.
However, a recent Institute of Medicine report concluded that most of us
will experience at least one diagnostic error in our lifetime. The report argues for efforts to improve the reliability of the diagnostic process through
better measurement of diagnostic performance. The diagnostic process is a
dynamic team-based activity that involves uncertainty, plays out over time,
and requires effective communication and collaboration among multiple
clinicians, diagnostic services, and the patient. Thus, it poses special challenges for measurement. In this paper, we discuss how the need to develop
measures to improve diagnostic performance could move forward at a time
when the scientific foundation needed to inform measurement is still
evolving. We highlight challenges and opportunities for developing potential measures of “diagnostic safety” related to clinical diagnostic errors and
associated preventable diagnostic harm. In doing so, we propose a starter
set of measurement concepts for initial consideration that seem reasonably
related to diagnostic safety and call for these to be studied and further refined.
This would enable safe diagnosis to become an organizational priority and
facilitate quality improvement. Health-care systems should consider measurement and evaluation of diagnostic performance as essential to timely
and accurate diagnosis and to the reduction of preventable diagnostic harm.
Key Words: diagnostic errors, safety culture, quality measurement
(J Patient Saf 2019;15: 311–316)
Timely and accurate diagnosis is foundational to good clinical practice and essential to achieving optimal patient outcomes.1
We have learned that diagnostic errors are common,2–6 affecting
approximately 1 in 20 adults each year in the United States.7
Yet, efforts to monitor and improve diagnostic performance are
rarely, if ever part of initiatives to improve quality and safety.8 Diagnosis is a complex, largely cognitive process that is more difficult to
evaluate and measure than many of the other parts of the patient
safety agenda, such as falls, wrong-site surgery, nosocomial
From the *Houston Veterans Affairs Center for Innovations in Quality, Effectiveness and Safety, Michael E. DeBakey Veterans Affairs Medical Center
and the Section of Health Services Research, Department of Medicine, Baylor
College of Medicine, Houston, Texas; †RTI International, Raleigh-Durham,
North Carolina; ‡SUNY Stony Brook School of Medicine, Stony Brook; §Society to Improve Diagnosis in Medicine, New York, New York; ||VA Center for
Clinical Management Research; and ¶Department of Internal Medicine, Division of General Medicine, University of Michigan, Ann Arbor, Michigan.
Correspondence: Hardeep Singh, MD, MPH, VA Medical Center (152), 2002
Holcombe Blvd, Houston, TX (e‐mail: hardeeps@bcm.edu).
H.S. is supported by the VA Health Services Research and Development
Service (CRE 12-033; Presidential Early Career Award for Scientists and
Engineers USA 14-274), the VA National Center for Patient Safety and the
Agency for Health Care Research and Quality (R01HS022087), and the
Houston VA HSR&D Center for Innovations in Quality, Effectiveness and
Safety (CIN 13-413) and has received honoraria from various health care
organizations for speaking and/or consulting on ambulatory patient safety
improvement activities. H.S. had full access to all of the data in the study
and takes responsibility for the integrity of the data and the accuracy of the
data analysis.
The funding sources had no role in the design and conduct of the study;
collection, management, analysis, and interpretation of the data; and
preparation, review, or approval of the manuscript.
This article is not under consideration for publication elsewhere. The authors
have no conflicts of interest to disclose and declare: no support from any
infections, and medication errors. The dearth of valid measurement
approaches is a major barrier in efforts to study and ultimately improve diagnosis.9,10
A recent Institute of Medicine report “Improving Diagnosis in
Health Care” concluded that most of us will experience at least
one diagnostic error in our lifetime and argued for efforts to improve the diagnostic process through better measurement of diagnostic performance.11 It reiterated that the diagnostic process is a
dynamic, team-based activity that involves uncertainty, plays out
over time, and requires effective communication and collaboration
among multiple providers, diagnostic services, and the patient.
Measurement as a necessary first step in quality improvement is
the cornerstone for many policy initiatives focused on improving
quality and safety.12,13 The proliferation of health-care performance measures has been remarkable, with the National Quality
Forum currently endorsing more than 600 measures in the
United States.14 Health-care organizations (HCOs) commit substantial resources to comply with required measures from the Joint
Commission and the Centers for Medicare and Medicaid Services, and many also participate in voluntary measure reporting
sponsored by advocacy organizations such as the Leapfrog Group.
Given the abundance of performance measures already in use, it is
surprising how few are focused on diagnosis.15
Multitask theory, proposed by economists Holmstrom and
Milgrom, posits that when incentives put in place by an organization omit key dimensions of performance, those dimensions will
receive less attention; in effect, the organization risks getting only
what is measured.16 Thus, it would not be surprising that in the absence of specific process or outcome measures related to diagnosis, the HCO and its members may focus their attention elsewhere.
All HCOs are resource constrained, and by necessity, they will direct their attention first to the measures specifically required by
accrediting agencies and payers.
The recent IOM report11 creates a propitious moment to rectify
this imbalance and encourages development of measures related
to diagnosis. Accepting that measurement is an effective and essential component of performance improvement and that the lack
of measurement is in itself deleterious, the IOM report presents
organization for the submitted work; no financial relationships with any
organizations that might have an interest in the submitted work in the
previous three years; no other relationships or activities that could appear to
have influenced the submitted work.
Each author meets all of the following authorship requirements: substantial
contributions to the conception or design of the work; or the acquisition,
analysis, or interpretation of data for the work; drafting the work or revising
it critically for important intellectual content; final approval of the version to
be published; agreement to be accountable for all aspects of the work in
ensuring that questions related to the accuracy or integrity of any part of the
work are appropriately investigated and resolved.
The views expressed in this article are those of the authors and do not
necessarily represent the views of the Department of Veterans Affairs or any
other funding agency. No funding agency had any role in the design and
conduct of the study; data collection, management, analysis, and
interpretation of the data; and preparation, review, or approval of
the manuscript.
Copyright © 2016 The Author(s). Published by Wolters Kluwer Health, Inc.
This is an open-access article distributed under the terms of the Creative
Commons Attribution-Non Commercial-No Derivatives License 4.0
(CCBY-NC-ND), where it is permissible to download and share the work
provided it is properly cited. The work cannot be changed in any way or
used commercially without permission from the journal.
both the opportunity and the impetus to address this dilemma. In
this article, we discuss how such an initiative can move forward
by balancing the need for measures and measurement with the reality that the scientific knowledge needed to inform this process is
still evolving. We focus on future measures to improve diagnosis
and highlight opportunities and challenges to encourage further
discussion and policymaking in this area.
The Challenges of Measuring the
Diagnostic Process
Despite an identified need and abundant enthusiasm to act,
there is little consensus and evidence to guide selection of appropriate performance measures. Measurement begins with a definition, and the IOM defined diagnostic error as the “failure to
establish an accurate and timely explanation of the patient’s health
problem(s) or communicate that explanation to the patient.” This
definition provides 3 key concepts that need to be operationalized:
(1) accurately identifying the explanation (or diagnosis) of the patient’s problem, (2) the timely provision of this explanation, and
(3) effective communication of the explanation. Although there
are well established tools for assessing communication in health
care, none of these are focused primarily on discussions around diagnosis. Moreover, both the “accuracy” and the “timeliness” elements of the definition are problematic from a research perspective:
Accuracy. Inaccuracy is sometimes obvious (a patient diagnosed with indigestion who is really having a myocardial infarction), but in many other circumstances, accuracy is much harder
to define. Is it acceptable to say “acute coronary syndrome” or
does the label have to indicate actual infarction, or be even more
specific, indicating location and transmural or not. Mental models
of what is or is not an accurate diagnosis can differ even among clinicians in the same specialty.17,18 Some of these problems can be
addressed by using predefined operational constructs or by using
a consensus among experts, but given the uncertainties and evolving nature of diagnosis, either approach would be challenging.
Timeliness. Although we may all agree that asthma diagnosis
should not require 7 visits over 3 years19 or that spinal cord compression from malignancy should probably be diagnosed within
weeks rather than months,20 there are no widely accepted standards for how long diagnosis should take for any given condition.
Furthermore, optimal diagnostic performance is not always about
speed; sometimes, the best approach is to defer diagnosis or testing
to some later time or to not make a definitive diagnosis until more
information is available or if symptoms persist or evolve.
Experts have yet to define how we objectively identify clinicians or teams who excel in diagnosis and those that do not. One
might argue that the best diagnosticians might be defined not only
by their accuracy and timeliness but also by their efficiency (e.g.,
minimizing resource expenditure and limiting the patient’s exposure to risk).21 In this regard, Donabedian states, “In my opinion,
the essence of quality or, in other words, ‘clinical judgment,’ is in
the choice of the most appropriate strategy for the management of
any given situation. The balance of expected benefits, risks, and
monetary costs, as evaluated jointly by the physician and his patient,
is the criterion for selecting the optimal strategy.”22 Thus, some, including authors of this paper, would argue that the measurement of
the diagnostic process should really be thought of within the
broader evaluation of value-based care that accounts for quality,
risks, and costs, rather than using an overly simplistic focus on
achieving the correct diagnosis in the shortest amount of time.23
Nevertheless, many would choose to focus on diagnostic errors as a key window into the diagnostic process, but this represents
another major challenge. The instruments that organizations rely on
to detect other patient safety concerns are poorly suited or fail
completely in detecting diagnostic error.24 Newer approaches are
needed that improve reporting by patients, physicians, and other clinicians and that take advantage of information stored in electronic
medical records to detect errors or patients at risk for error.25,26 Autopsy reports, preoperative versus postoperative surgical discrepancies, escalations of care, and conducting selected chart reviews are
other options for detecting missed diagnoses or preventable diagnostic delay.
Even when diagnostic errors are identified, learning from
them can be challenging. Diagnosis is influenced by complex dynamics involving system-, patient-, and team-related and individual cognitive factors. While identifying these factors may be
feasible in some cases,26 dissecting the root causes of these elements requires substantial inference, and there is risk of bias from
looking retrospectively. Although factors can be suspected as “contributing,” it is hard to identify causal links.27 Discerning the effect
of individual heuristics, biases, overconfidence, affective influences, distractions, and time constraints as well as key systems,
environmental, and team factors is often not possible. For measurement to be effective and actionable, analysis needs to reflect realworld practice, in which systems, team members, and patients
themselves inevitably influence the clinicians’ thought processes.28
For the many diagnoses that are made by teams, arriving at a diagnosis creates dual problems of attribution and ownership in the setting of fragmented and complex teams that exist in health care
today. Thus, it might be difficult to determine who should receive
the feedback that results from measurement and how to deliver useful and actionable feedback to a “team.”
Finally, there can be differences regarding whether it is more
important to measure success or failure in diagnosis. Some
experts29 have argued that “safety is better measured by how everyday work goes well than by how it fails.” This represents a paradigm change from the current dominant focus on errors that
would substantially change how we would design a measurement
system of “diagnostic safety.”
Suggestions for Moving Forward
One of the first steps toward useful measures of diagnostic
safety is to understand and use appropriate definitions of diagnostic error. In addition to the IOM definition, there are 3 other
definitions of diagnostic error in active use, and each may be appropriate for research in particular circumstances. Graber et al defines it as diagnosis that was unintentionally delayed (sufficient
information was available earlier), wrong (another diagnosis was
made before the correct one), or missed (no diagnosis was ever made),
as judged from the eventual appreciation of more definitive information.30 Schiff et al defines it as any mistake or failure in the diagnostic process leading to a misdiagnosis, missed diagnosis, or
delayed diagnosis.31 Lastly, Singh defines it as missed opportunities to make a correct or timely diagnosis based on the available
evidence, regardless of patient harm,32 and calls for unequivocal
evidence that some critical finding or abnormality was missed
or not investigated when it should have been.26 These definitions
convey complementary concepts that are useful to understand the
“failure” referred to in the IOM definition and might be useful to
operationalize the IOM definition as it is used in future work.
Assuming sufficient motivation exists to address and improve
diagnostic safety, what measures should be considered? Recalling
Donabedian’s framework, measures that focus on structures
and processes can and should be considered, and where possible their downstream diagnosis-related outcomes, bearing in
mind Donabedian’s admonition that none of these aspects of care
are worth measuring without convincing demonstration of the causal
associations between them.33 Although this framework provides
an appropriate and logical approach to begin developing measures
of diagnosis, it is critical to continue to emphasize that candidate
measures are only as good as the quality of the evidence that supports causal links between specific structures, processes, and outcomes, underscoring the need for substantial amount of research
work that needs to be done in this area.
Table 1 describes a set of candidate measurement concepts
drawn from recent studies that focus on diagnostic error. This is
in no way a complete list but rather a conversation starter based
on emerging evidence on risks related to diagnostic safety (versus
patient safety in general). For example, many studies show lack of
timely follow-up of diagnostic test results in missed diagnosis, but
only a handful of HCOs in the United States are tracking follow-up of abnormal test results.24,41–43 Although these proposed measurement concepts are all reasonable candidates for consideration,
developing an actionable set of measures would ideally require a
TABLE 1. Candidate Set of Measurement Concepts to Consider for Evaluation of Diagnostic Safety

Structure
- Web-based decision support tools and online reference materials are available to all providers to aid differential diagnosis. Rationale: 80% of diagnostic errors in one study had no documented differential diagnosis.26
- Radiologists are available 24/7 to read stat diagnostic imaging studies in real time. Rationale: Diagnostic errors are common if nonexperts are reading imaging studies.30
- The organization has expertise to conduct a comprehensive root cause analysis in cases involving diagnostic error.34 Rationale: This measure indicates institutional readiness and leadership buy-in to address identified diagnostic errors; analyzing one’s own cases will motivate corrective efforts.
- University training programs provide specific training on diagnostic error35 that includes, for example, simulated case-based learning and virtual learning platforms. Rationale: Interdisciplinary training is recommended by the IOM to address teamwork, communication, and the cognitive and system-based underpinnings of diagnosis and diagnostic errors.
- Attending staff are on site to supervise trainees 24/7. Rationale: Appropriate supervision of trainees is a training program requirement, and inadequate supervision results in diagnostic errors.30,36
- The organization uses an interoperable and certified electronic health record with clinical decision support functionality. Rationale: Electronic records improve access to data, test results, and decision support resources, and improve the quality of clinical care.37,38
- The organization has an electronic health record data warehouse and informatics team to enable analytics related to diagnostic safety. Rationale: Automated measurement is a fundamental requirement for monitoring diagnostic safety, and EHRs should help detect inconsistencies suggestive of mislabeled or incorrect diagnosis.
- The organization has an established mechanism for providing feedback to previous clinicians when there is a significant change in diagnosis. Rationale: Lack of feedback has been cited as a contributing factor to physician overconfidence,39,40 and feedback is known to promote expertise.
- Health-care organizations develop processes and procedures to identify and learn from cases of diagnostic error. Rationale: Efforts to monitor safety at most organizations currently focus primarily on treatment and management; local cases of error provide excellent opportunities for learning.

Process
- Proportion of laboratory test results or diagnostic imaging not performed within the expected turnaround time. Rationale: Delays in diagnostic testing lead to delays in diagnosis and increased chances for iatrogenic injury in the interim.41
- Proportion of abnormal diagnostic test results returned but not acted upon within an appropriate time window. Rationale: Failure to follow-up test results is common and occurs in all types of clinical settings.42 Measurement criteria are better defined for missed test results than other types of missed opportunities.42,43
- Proportion of clinical providers who identify a surrogate to review diagnostic test results while on vacation or when leaving employment. Rationale: Diagnostic test results that “fall through the cracks” due to role ambiguity are a preventable cause of diagnostic delay.44
- Proportion of patients with an unexpected hospitalization within 14 days of a primary care or emergency department visit who had a differential diagnosis noted at the earlier visit. Rationale: 80% of diagnostic errors in one study had no documented differential diagnosis at the earlier visit.26 Premature closure is one of the most common factors identified in cases of diagnostic error.30,45
- Time from a diagnostic colonoscopy request to colonoscopy performance. Rationale: Delays in cancer diagnosis are the leading cause for malpractice litigation.2,46
- Proportion of patients diagnosed with a specified target disease of interest (e.g., known diagnostic dilemmas) who received a second opinion. Rationale: Second opinions can “catch” diagnostic errors in radiology, pathology, and potentially in clinical medicine.47
- Proportion of patients with no-shows to cancer-related diagnostic procedures. Rationale: Missed colonoscopy and bronchoscopy appointments could lead to delays in cancer diagnosis.48,49
- Proportion of patients who sign up for patient portals who actually log on to see test results electronically. Rationale: Patient engagement creates a safety net for minimizing diagnostic errors by preventing abnormal test results from “falling through the cracks.”
- Organization monitors adenoma detection rates and provides feedback to endoscopists. Rationale: Higher detection rates improve the chances of detecting early-stage colon cancers, and detection rates vary across individual endoscopists.50

Outcomes
- Proportion of patients with newly diagnosed colorectal cancer diagnosed within 60 days of first presentation of known red flags. Rationale: Nearly a third of patients with colorectal cancer have missed opportunities for an earlier diagnosis.48,51–53
validation process that samples a broader range of informed opinion and experience in keeping with the emerging standards for the
development of quality measures. Even if a particular measure
is endorsed broadly, it should be considered a hypothesis to be
tested. Empirical confirmation of its beneficial effect on patient
outcomes should be demonstrated before it can be considered a
standard to which organizations are held accountable, an essential
step that is rarely considered in the development of performance
measure sets.
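To make one of the process concepts in Table 1 concrete, the sketch below estimates the proportion of abnormal test results with no documented follow-up action within a defined window. It is only an illustration of the measurement concept, not a validated measure from this article: the file and column names (results.csv, actions.csv, abnormal_flag, and so on) are hypothetical stand-ins for whatever an organization's EHR data warehouse actually provides, and the 30-day window is an arbitrary placeholder that would need local definition.

```python
# Illustrative sketch (not a validated measure): proportion of abnormal test
# results with no documented follow-up action within 30 days.
# Assumed, hypothetical extracts from an EHR data warehouse:
#   results.csv -> result_id, patient_id, result_date, abnormal_flag
#   actions.csv -> result_id, action_date  (orders, referrals, documented contact)
import pandas as pd

FOLLOW_UP_WINDOW_DAYS = 30  # the "appropriate time window" must be defined locally

results = pd.read_csv("results.csv", parse_dates=["result_date"])
actions = pd.read_csv("actions.csv", parse_dates=["action_date"])

abnormal = results[results["abnormal_flag"] == 1]

# Earliest follow-up action recorded for each result, if any.
first_action = actions.groupby("result_id", as_index=False)["action_date"].min()

merged = abnormal.merge(first_action, on="result_id", how="left")
days_to_action = (merged["action_date"] - merged["result_date"]).dt.days

# A result counts as "not acted upon" if no action exists or the first action
# falls outside the window.
not_acted_upon = days_to_action.isna() | (days_to_action > FOLLOW_UP_WINDOW_DAYS)

print(f"Abnormal results without timely follow-up: {not_acted_upon.mean():.1%} "
      f"({int(not_acted_upon.sum())} of {len(merged)})")
```

Records flagged this way would still require chart review before being counted as missed opportunities, consistent with the caution above that candidate measures remain hypotheses to be tested.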
A real challenge to implementing performance measurement
in diagnosis is that harm might outweigh the benefit. Launching
more measures, especially measures lacking robust evidence, tends
to alienate front-line caregivers and HCOs already overburdened
with other performance measures.54 Recently, experts have called
for a moratorium on new measures, citing concerns that flawed
measures will be used for public reporting and value-based purchasing.12,15 Turning again to the theory of performance measurement, Holmstrom and Milgrom observe that “the desirability of
providing incentives for any one activity decreases with the difficulty of measuring performance in any other activities that make
competing demands on the [provider]’s time and attention.” A concern that follows from this observation is that unintended consequences of performance measures will inevitably emerge and
undermine efforts to improve diagnostic safety. One could easily
imagine that measures of underdiagnosis might lead to higher utilization of unnecessary tests.
Summary and Recommendations
Measurement, benchmarking, and transparency of performance are playing a major role in improving health care. Current
performance measures pertain almost exclusively to treatment,
and a recent IOM report has strongly endorsed broadening this focus to include diagnosis. We cannot make progress toward this
goal without advancing the science of measurement around diagnostic performance. Compared with most performance measures,
diagnostic safety may be particularly salient to physicians and
their teams, given how central diagnosis is to our professional
identity and the degree of control that physicians exert over the diagnostic process.
However, the IOM also recognizes the importance of system
and organizational factors in improving diagnosis. For example,
improved communication and care coordination and large scale
initiatives to measure and improve care delivery (such as implementation of accountable care organizations) are important targets. The United Kingdom has already embraced measurement
in its large initiative focused on improving the timeliness of cancer
diagnosis,55 and the United States could follow this lead as a first
step to measure diagnostic safety.
To create a foundation for further discussion on evidence for
measures for diagnostic safety, 6 questions should be considered:
• What are the appropriate time intervals to diagnose specific
conditions of interest that are frequently associated with
diagnostic error?
• How can we measure competency in clinical reasoning in real-world practice settings?
• What measurable physician or team behaviors characterize ideal
versus suboptimal diagnostic performance?
• What system properties translate into safe diagnostic performance, and how can we measure those?
• How do we leverage information technology, including electronic health records (EHRs), to help measure and improve
diagnostic safety?
• How do we leverage patient experiences and reports to measure
and improve diagnostic safety?
Pioneering organizations can begin by identifying “missed
opportunities in diagnosis” or “diagnostic safety concerns.”32
For example, both Kaiser Permanente and the Department of Veterans Affairs are involved in initiatives to improve follow-up of
abnormal test results.24,56 The case for measuring diagnostic outcomes in certain high-risk areas such as cancer diagnosis has also
become clear.57 Nearly a third of patients with colorectal cancer
have missed opportunities for an earlier diagnosis.48,53 Thus, outcome measures could be considered, such as ratio of early stage to
late stage colorectal cancer diagnosed within the previous year
and proportion of patients with newly diagnosed colorectal cancer
diagnosed within 60 days of first presentation of known red
flags.51,52
HCOs should also consider using their EHRs to enable diagnostic safety measurement. Although most HCOs are now using
EHRs, very few are doing any analytics for patient safety improvement.58 In addition to using digital data to identify patients with potential diagnostic process failures, the EHR could be leveraged for
recognizing incorrect diagnosis and internal inconsistencies suggestive of mislabeled diagnosis (patient with “coronary artery disease,” despite normal coronary angiogram; patient with “COPD”
with normal lung function tests). This process would require HCOs
to better capture and use structured clinical data in an electronic
format for safety improvement, for which the time is now ripe.59
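As a rough illustration of the internal-consistency analytics described above, the sketch below flags patients who carry a COPD label but whose spirometry never showed airflow obstruction. The table and column names are hypothetical, and the FEV1/FVC threshold of 0.70 is used only as a conventional cutoff; a real implementation would need locally validated definitions and clinician review of every flagged chart.

```python
# Illustrative internal-consistency check (hypothetical schema): patients
# labeled with COPD whose spirometry never showed airflow obstruction.
import pandas as pd

diagnoses = pd.read_csv("problem_list.csv")   # patient_id, icd10_code
spirometry = pd.read_csv("spirometry.csv")    # patient_id, fev1_fvc_ratio

# ICD-10 J44.* codes cover chronic obstructive pulmonary disease.
copd_patients = diagnoses[diagnoses["icd10_code"].str.startswith("J44", na=False)]

# Take each patient's lowest recorded FEV1/FVC ratio; if even that value is
# at or above 0.70, no spirometry on file ever showed obstruction.
worst_ratio = spirometry.groupby("patient_id", as_index=False)["fev1_fvc_ratio"].min()

flagged = copd_patients.merge(worst_ratio, on="patient_id")
flagged = flagged[flagged["fev1_fvc_ratio"] >= 0.70]

# Flagged charts are candidates for review of a possibly mislabeled diagnosis,
# not automatic corrections.
print(flagged[["patient_id", "fev1_fvc_ratio"]].drop_duplicates().to_string(index=False))
```

An analogous query could pair a coronary artery disease label with a normal coronary angiogram, the other example of internal inconsistency given above.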
Additionally, in any efforts to measure underdiagnosis, it is
important that attention also be paid to overdiagnosis,60 acknowledging that overdiagnosis has its own measurement-related conceptual challenges.61 We should learn from the mistakes of performance measurement in the treatment realm, where
a single-minded focus on undertreatment in highly monitored
areas of practice has led to harmful instances of overtreatment.62
We should also consider how perspectives from both patients
and their care teams (physicians and other team members) can
help develop novel measurement approaches that involve asking
them directly about the diagnostic process and their roles. This approach is consistent with the fact that diagnosis is a “team sport”
where patients play a critical role.63
Some experts caution against too much emphasis on measurement to guide decisions because of unknown and unknowable
data.64 Nevertheless, evidence suggests it is now time to address
measurement of diagnostic safety while balancing to avoid both
underdiagnosis and overdiagnosis. We propose a starter set of
measurement concepts for initial consideration that seem reasonably related to diagnostic quality and safety and call for these to
be studied and further refined. This would enable safe diagnosis
to become an organizational priority and facilitate quality improvement. Meanwhile, researchers should work on the evidence
base needed for more rigorous measurement of structure and process elements that are connected to the real clinical outcomes of
interest, more timely and accurate diagnosis, and less preventable
diagnostic harm.
REFERENCES
1. Singh H, Graber ML. Improving diagnosis in health care—the next
imperative for patient safety. N Engl J Med. 2015;373:2493–2495.
2. Bishop TF, Ryan AM, Casalino LP. Paid malpractice claims for adverse
events in inpatient and outpatient settings. JAMA. 2011;305:2427–2431.
3. Chandra A, Nundy S, Seabury SA. The growth of physician medical
malpractice payments: evidence from the National Practitioner Data Bank.
Health Aff (Millwood). 2005 Suppl Web Exclusives:W5-240–W5-249.
4. Gandhi TK, Kachalia A, Thomas EJ, et al. Missed and delayed diagnoses in
the ambulatory setting: a study of closed malpractice claims. Ann Intern
Med. 2006;145:488–496.
5. Saber Tehrani AS, Lee H, Mathews SC, et al. 25-Year summary of US
malpractice claims for diagnostic errors 1986–2010: an analysis from the
National Practitioner Data Bank. BMJ Qual Saf. 2013;22:672–680.
6. Schiff GD, Puopolo AL, Huben-Kearney A, et al. Primary care closed
claims experience of Massachusetts malpractice insurers. JAMA Intern
Med. 2013;173:2063–2068.
7. Singh H, Meyer AN, Thomas EJ. The frequency of diagnostic errors in
outpatient care: estimations from three large observational studies
involving US adult populations. BMJ Qual Saf. 2014;23:727–731.
8. Graber ML, Wachter RM, Cassel CK. Bringing diagnosis into the quality
and safety equations. JAMA. 2012;308:1211–1212.
9. El-Kareh R. Making clinical diagnoses: how measureable is the process?
The National Quality Measures Clearinghouse™ (NQMC) [serial online]
2014. Available at: Agency for Healthcare Research and Quality (AHRQ).
Accessed June 6, 2016.
10. McGlynn EA, McDonald KM, Cassel CK. Measurement is essential for
improving diagnosis and reducing diagnostic error: a report from the
Institute of Medicine. JAMA. 2015;314:2501–2502.
11. Improving diagnosis in health care. National Academies of Sciences
Engineering and Medicine [serial online] 2015, Available at: The National
Academies Press. Accessed June 14, 2016.
12. Meyer GS, Nelson EC, Pryor DB, et al. More quality measures versus
measuring what matters: a call for balance and parsimony. BMJ Qual Saf.
2012;21:964–968.
13. Jha A, Pronovost P. Toward a safer health care system: the critical need to
improve measurement. JAMA. 2016;315:1831–1832.
14. National Quality Forum. National Quality Forum [serial online]. 2016.
15. Thomas EJ, Classen DC. Patient safety: let’s measure what matters.
Ann Intern Med. 2014;160:642–643.
16. Holmstrom B, Milgrom P. Multitask principal-agent analyses: incentive
contracts, asset ownership, and job design. JELO. 1991;7:24–52.
17. Zwaan L, de Bruijne M, Wagner C, et al. Patient record review of the
incidence, consequences, and causes of diagnostic adverse events. Arch
Intern Med. 2010;170:1015–1021.
18. Singh H, Giardina TD, Forjuoh SN, et al. Electronic health record-based
surveillance of diagnostic errors in primary care. BMJ Qual Saf.
2012;21:93–100.
19. Charlton I, Jones K, Bain J. Delay in diagnosis of childhood asthma and its
influence on respiratory consultation rates. Arch Dis Child. 1991;66:
633–635.
20. Levack P, Graham J, Collie D, et al. Don’t wait for a sensory level–listen to
the symptoms: a prospective audit of the delays in diagnosis of malignant
cord compression. Clin Oncol (R Coll Radiol). 2002;14:472–480.
21. Singh H. Diagnostic errors: moving beyond ‘no respect’ and getting ready
for prime time. BMJ Qual Saf. 2013;22:789–792.
22. Donabedian A. The quality of medical care. Science. 1978;200:856–864.
23. Hofer TP, Kerr EA, Hayward RA. What is an error? Eff Clin Pract.
2000;3:261–269.
24. Graber ML, Trowbridge RL, Myers JS, et al. The next organizational
challenge: finding and addressing diagnostic error. Jt Comm J Qual Patient
Saf. 2014;40:102–110.
25. Danforth KN, Smith AE, Loo RK, et al. Electronic clinical surveillance to
improve outpatient care: diverse applications within an integrated delivery
system. EGEMS (Wash DC). 2014;2:1056 [serial online] 2014;2.
Available at: The Berkeley Electronic Press. Accessed June 6, 2016.
26. Singh H, Giardina TD, Meyer AN, et al. Types and origins of diagnostic
errors in primary care settings. JAMA Intern Med. 2013;173:418–425.
27. Graber ML, Kissam S, Payne VL, et al. Cognitive interventions to reduce
diagnostic error: a narrative review. BMJ Qual Saf. 2012;21:535–557.
28. Henriksen K, Brady J. The pursuit of better diagnostic performance: a
human factors perspective. BMJ Qual Saf. 2013;22:ii1–ii5.
29. Braithwaite J, Wears RL, Hollnagel E. Resilient health care: turning patient
safety on its head. Int J Qual Health Care. 2015;27:418–420.
30. Graber ML, Franklin N, Gordon R. Diagnostic error in internal medicine.
Arch Intern Med. 2005;165:1493–1499.
31. Schiff GD, Hasan O, Kim S, et al. Diagnostic error in medicine: analysis of
583 physician-reported errors. Arch Intern Med. 2009;169:1881–1887.
32. Singh H. Editorial: Helping health care organizations to define diagnostic
errors as missed opportunities in diagnosis. Jt Comm J Qual Patient Saf.
2014;40:99–101.
33. Donabedian A. The quality of care. How can it be assessed? JAMA. 1988;
260:1743–1748.
34. Reilly JB, Myers JS, Salvador D, et al. Use of a novel, modified fishbone
diagram to analyze diagnostic errors. Diagnosis. 2014;1:167–171.
35. Reilly JB, Ogdie AR, Von Feldt JM, et al. Teaching about how doctors
think: a longitudinal curriculum in cognitive bias and diagnostic error for
residents. BMJ Qual Saf. 2013;22:1044–1050.
36. Singh H, Thomas EJ, Petersen LA, et al. Medical errors involving trainees:
a study of closed malpractice claims from 5 insurers. Arch Intern Med.
2007;167:2030–2036.
37. El-Kareh R, Hasan O, Schiff GD. Use of health information technology to
reduce diagnostic errors. BMJ Qual Saf. 2013;22:ii40–ii51.
38. Liebovitz D. Next steps for electronic health records to improve the
diagnostic process. Diagnosis. 2015;2:111–116.
39. Berner ES, Graber ML. Overconfidence as a cause of diagnostic error in
medicine. Am J Med. 2008;121:S2–S23.
40. Meyer AN, Payne VL, Meeks DW, et al. Physicians’ diagnostic accuracy,
confidence, and resource requests: a vignette study. JAMA Intern Med.
2013;173:1952–1958.
41. Casalino LP, Dunham D, Chin MH, et al. Frequency of failure to inform
patients of clinically significant outpatient test results. Arch Intern Med.
2009;169:1123–1129.
42. Singh H, Thomas EJ, Sittig DF, et al. Notification of abnormal lab test
results in an electronic medical record: do any safety concerns remain?
Am J Med. 2010;123:238–244.
43. Singh H, Thomas EJ, Mani S, et al. Timely follow-up of abnormal
diagnostic imaging test results in an outpatient setting: are electronic
medical records achieving their potential? Arch Intern Med. 2009;169:
1578–1586.
44. Menon S, Smith MW, Sittig DF, et al. How context affects electronic health
record-based test result follow-up: a mixed-methods evaluation. BMJ Open.
2014;4:e005985.
45. Balla J, Heneghan C, Goyder C, et al. Identifying early warning signs for
diagnostic errors in primary care: a qualitative study. BMJ Open. 2012;
2:e001539.
46. Wallace E, Lowry J, Smith SM, et al. The epidemiology of malpractice
claims in primary care: a systematic review. BMJ Open. 2013;3:e002929.
47. Payne VL, Singh H, Meyer AN, et al. Patient-initiated second opinions:
systematic review of characteristics and impact on diagnosis, treatment, and
satisfaction. Mayo Clin Proc. 2014;89:687–696.
48. Singh H, Daci K, Petersen L, et al. Missed opportunities to initiate
endoscopic evaluation for colorectal cancer diagnosis. Am J Gastroenterol.
2009;104:2543–2554.
49. Singh H, Hirani K, Kadiyala H, et al. Characteristics and predictors of
missed opportunities in lung cancer diagnosis: an electronic health
record-based study. J Clin Oncol. 2010;28:3307–3315.
50. Corley DA, Jensen CD, Marks AR, et al. Adenoma detection rate and risk
of colorectal cancer and death. N Engl J Med. 2014;370:1298–1306.
51. Singh H, Kadiyala H, Bhagwath G, et al. Using a multifaceted approach to improve the follow-up of positive fecal occult blood test results. Am J Gastroenterol. 2009;104:942–952.
52. Singh H, Petersen LA, Daci K, et al. Reducing referral delays in colorectal cancer diagnosis: is it about how you ask? Qual Saf Health Care. 2010;19:e27.
53. Singh H, Khan R, Giardina TD, et al. Postreferral colonoscopy delays in diagnosis of colorectal cancer: a mixed-methods analysis. Qual Manag Health Care. 2012;21:252–261.
54. Cassel CK, Conway PH, Delbanco SF, et al. Getting more performance from performance measurement. N Engl J Med. 2014;371:2145–2147.
55. National Awareness and Early Diagnosis Initiative—NAEDI. Cancer Research UK [serial online] 2014. Accessed June 6, 2016.
56. Murphy DR, Laxmisan A, Reis BA, et al. Electronic health record-based triggers to detect potential delays in cancer diagnosis. BMJ Qual Saf. 2014;23:8–16.
57. Lyratzopoulos G, Vedsted P, Singh H. Understanding missed opportunities for more timely diagnosis of cancer in symptomatic patients after presentation. Br J Cancer. 2015;112:S84–S91.
58. Russo E, Sittig DF, Murphy DR, et al. Challenges in patient safety improvement research in the era of electronic health records. Healthc (Amst). 2016;4:285–290.
59. Sittig DF, Ash JS, Singh H. The SAFER guides: empowering organizations to improve the safety and effectiveness of electronic health records. Am J Manag Care. 2014;20:418–423.
60. Welch HG, Schwartz L, Woloshin S. Overdiagnosed: Making People Sick in the Pursuit of Health. 1st ed. Boston: Beacon Press; 2012.
61. Hofmann B. Diagnosing overdiagnosis: conceptual challenges and suggested solutions. Eur J Epidemiol. 2014;29:599–604.
62. Kerr EA, Lucatorto MA, Holleman R, et al. Monitoring performance for blood pressure management among patients with diabetes mellitus: too much of a good thing? Arch Intern Med. 2012;172:938–945.
63. Heyhoe J, Lawton R, Armitage G, et al. Understanding diagnostic error: looking beyond diagnostic accuracy. Diagnosis. 2015;2:205–209.
64. Berenson RA, News@JAMA. If you can’t measure performance, can you improve it? JAMA [serial online] 2016. Accessed July 6, 2016.
Narrative review
Application of electronic trigger
tools to identify targets for
improving diagnostic safety
Daniel R Murphy,1,2 Ashley ND Meyer,1,2 Dean F Sittig,1,3,4
Derek W Meeks,1,2 Eric J Thomas,4 Hardeep Singh1,2
1 Center for Innovations in Quality, Effectiveness and Safety, Michael E DeBakey Veterans Affairs Medical Center, Houston, Texas, USA
2 Department of Medicine, Baylor College of Medicine, Houston, Texas, USA
3 School of Biomedical Informatics, University of Texas Health Science Center, Houston, Texas, USA
4 Department of Medicine, University of Texas-Memorial Hermann Center for Healthcare Quality and Safety, Houston, Texas, USA
Correspondence to
Dr Daniel R Murphy, Center
for Innovations in Quality,
Effectiveness and Safety,
Michael E DeBakey Veterans
Affairs Medical Center, Houston,
TX 77030, USA; drmurphy@bcm.edu
Received 14 March 2018
Revised 20 June 2018
Accepted 14 August 2018
Published Online First
5 October 2018
© Author(s) (or their
employer(s)) 2019. Re-use
permitted under CC BY-NC. No
commercial re-use. See rights
and permissions. Published by
BMJ.
To cite: Murphy DR,
Meyer AND, Sittig DF, et al.
BMJ Qual Saf
2019;28:151–159.
Abstract
Progress in reducing diagnostic errors remains slow
partly due to poorly defined methods to identify errors,
high-risk situations, and adverse events. Electronic trigger
(e-trigger) tools, which mine vast amounts of patient data
to identify signals indicative of a likely error or adverse
event, offer a promising method to efficiently identify
errors. The increasing amounts of longitudinal electronic
data and maturing data warehousing techniques and
infrastructure offer an unprecedented opportunity
to implement new types of e-trigger tools that use
algorithms to identify risks and events related to the
diagnostic process. We present a knowledge discovery
framework, the Safer Dx Trigger Tools Framework, that
enables health systems to develop and implement
e-trigger tools to identify and measure diagnostic errors
using comprehensive electronic health record (EHR)
data. Safer Dx e-trigger tools detect potential diagnostic
events, allowing health systems to monitor event rates,
study contributory factors and identify targets for
improving diagnostic safety. In addition to promoting
organisational learning, some e-triggers can monitor data
prospectively and help identify patients at high-risk for
a future adverse event, enabling clinicians, patients or
safety personnel to take preventive actions proactively.
Successful application of electronic algorithms requires
health systems to invest in clinical informaticists,
information technology professionals, patient safety
professionals and clinicians, all of whom work closely
together to overcome development and implementation
challenges. We outline key future research, including
advances in natural language processing and machine
learning, needed to improve effectiveness of e-triggers.
Integrating diagnostic safety e-triggers in institutional
patient safety strategies can accelerate progress in
reducing preventable harm from diagnostic errors.
Nearly two decades after the Institute of
Medicine’s report ‘To Err is Human’,1
medical errors remain frequent.2–4
Methods are needed to efficiently and
effectively identify high-risk situations to
prevent harm as well as identify patient
safety events to enable organisational
learning for error prevention.5 6 Measurement needed for improving diagnosis
is particularly challenging due to the
complexity of an evolving diagnostic
process.7 Use of health information technology (HIT) is essential to monitor
patient safety8 but has received limited
application in diagnostic error detection. Widespread recent adoption of
comprehensive electronic health records
(EHR) and clinical data warehouses have
advanced our ability to collect, store, use
and analyse vast amounts of electronic
clinical data that helps map the diagnostic
process.
Triggers have helped measure safety
in hospitals; for example, use of inpatient naloxone administration outside
of the postanaesthesia recovery room
could suggest oversedation due to opioid
administration. Trigger development
and use have steadily increased over the
past decade in prehospital,9 emergency
room,10 inpatient,11 ambulatory care12
and home health settings,13 and helped
identify adverse drug reactions,10 surgical
complications14 15 and other potentially
preventable harm.14 Electronic trigger
(e-trigger) tools,16 which mine vast
amounts of clinical and administrative
data to identify signals for likely adverse
events,17–19 offer a promising method to
detect patient safety events. Such tools are
more efficient and effective in detecting
adverse events as compared with voluntary reporting or use of patient safety indicators20 21 and offer the ability to quickly
mine large data sets, reducing the number
of records requiring human review to
those at highest risk of harm. While most
e-triggers rely on structured (non-free
text) data, some can detect specific words
within progress notes or reports.22
The most widely used trigger tools (the
Institute for Healthcare Improvement’s
Global Trigger Tools)23 include both
manual and e-trigger tools to detect inpatient events.20 24–28 However, they were
not designed to detect diagnostic errors.
Meanwhile, other types of trigger tools have been
developed for diagnostic errors and are getting ready
for integration within existing patient safety surveillance systems.4 29 30 To stimulate progress in this area,
we present a knowledge discovery framework, the Safer
Dx Trigger Tools Framework, that could enable health
systems to develop and implement e-trigger tools that
measure diagnostic errors using comprehensive EHR
data. Health systems would also need to leverage and/
or develop their existing safety and quality improvement infrastructure and personnel (such as clinical
leadership, HIT professionals, safety managers and
risk management) to operationalise this framework. In
addition to showcasing the application of diagnostic
safety e-trigger tools, we highlight several strategies to
bolster their development and implementation. Triggers can identify diagnostic events, allowing health
systems to monitor event rates and study contributory factors, and thus potentially learn from these
events and prevent similar events in the future. Some
e-triggers additionally allow monitoring of data more
prospectively and help identify patients at high risk for
future adverse events, enabling clinicians, patients or
safety personnel to take preventive actions proactively.
Conceptualising diagnostic safety
e-triggers
Triggers are not new to patient safety measurement.
Several existing triggers focus on identifying errors
related to medications, such as administering incorrect dosages, or procedure complications, such as
returning to the operating room. Only recently has
this concept been adapted to detect potential problems with diagnostic processes, such as patterns of
care suggestive of missed or delayed diagnoses.19 For
instance, a clinic visit followed several days later by
an unplanned hospitalisation or subsequent visit to the
emergency department could be indicative of something missed at the first visit.31 Similarly, misdiagnosis
could be suggested by an unusually prolonged hospital
stay for a given diagnosis19 or an unexpected inpatient
transfer to a higher level of care,19 32 particularly when
considering younger patients with minimal comorbidity.33 Event identification can promote organisational learning with the goal of addressing underlying factors that led to the error, similar to what was
proposed earlier in the 2015 Safer Dx framework for
measuring diagnostic errors.34 Additionally, e-triggers
enable tracking of events over time to allow an assessment of the impact of efforts to reduce adverse events.
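A minimal sketch of the first e-trigger mentioned above, a clinic visit followed within days by an unplanned hospitalisation, is shown below. The extract names, columns and 14-day window are assumptions made only for illustration; as with any e-trigger, the flagged visit-admission pairs would be candidates for chart review, not automatic error counts.

```python
# Illustrative e-trigger sketch (hypothetical schema): primary care visits
# followed by an unplanned hospital admission within 14 days.
import pandas as pd

TRIGGER_WINDOW_DAYS = 14  # assumed window; the text describes "several days"

visits = pd.read_csv("clinic_visits.csv", parse_dates=["visit_date"])    # patient_id, visit_date
admissions = pd.read_csv("admissions.csv", parse_dates=["admit_date"])   # patient_id, admit_date, elective_flag

unplanned = admissions[admissions["elective_flag"] == 0]

# Pair every clinic visit with that patient's unplanned admissions and keep
# pairs where the admission followed the visit (same day or later) within the window.
pairs = visits.merge(unplanned, on="patient_id")
gap_days = (pairs["admit_date"] - pairs["visit_date"]).dt.days
triggered = pairs[(gap_days >= 0) & (gap_days <= TRIGGER_WINDOW_DAYS)]

# Triggered records identify charts to review for a possible missed diagnosis
# at the index visit; the trigger itself is not proof of error.
print(f"{len(triggered)} visit-admission pairs met the trigger criteria")
print(triggered[["patient_id", "visit_date", "admit_date"]].head())
```

The same basic pattern (filter, join on patient, apply a time-window rule) generalises to the other triggers listed in table 1.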
Certain e-trigger tools can additionally monitor for
high-risk situations prospectively, such as when risk of
harm is high, even if no harm has yet occurred. Several
studies have shown that e-trigger tools offer promise
in detecting errors of omission, such as detecting
delays in care after an abnormal test result suspicious
for cancer,26–30 35 kidney failure,29 36 infection29 and
thyroid conditions,37 as well as patients at risk of
delayed action on pregnancy complications.38 39 Such
triggers can identify situations where earlier intervention can potentially improve patient outcomes. Future
e-triggers could explore other process breakdowns
associated with diagnostic errors, such as when insufficient history has been collected or diagnostic testing is
not completed for a high-risk symptom (eg, no documented fever assessment or temperature recording in
patients with back pain, where an undiagnosed spinal
epidural abscess might be missed).40 41 In table 1, we
provide several examples of ‘Safer Dx’ e-trigger tools
that align with the process dimensions of the Safer Dx
framework. To promote the uptake of Safer Dx trigger
tools by health systems, we now discuss essential steps
for their development and implementation.
Safer Dx Trigger Tools Framework
Overview
e-Trigger development may be viewed as a form of data
mining or pattern matching to discover knowledge
about clinical processes. Several knowledge discovery
frameworks have evolved from fields such as statistics,
machine learning and database research. Hripcsak et
al proposed a framework for mining complex clinical
data for patient safety research, which is composed of
seven iterative steps: define the target events, access the
clinical data repository, use natural language processing (NLP) for interpreting narrative data, generate queries to detect and classify events, verify target detection, characterise errors using system or cognitive taxonomies, and provide feedback.42 We build on essential components of Hripcsak's framework to demonstrate the steps of Safer Dx e-trigger tools design and development (figure 1), with an emphasis on operationalising them using a multidisciplinary approach. These development methods (table 2) have now been validated to identify several diagnostic events of interest.25–29 31 35 36 43

Table 1  Examples of potential Safer Dx e-triggers mapped to diagnostic process dimensions of the Safer Dx framework34

Safer Dx diagnostic process | Safer Dx trigger example | Potential diagnostic error
Patient-provider encounter | Primary care office visit followed by unplanned hospitalisation | Missed red flag findings or incorrect diagnosis during initial office visit
Patient-provider encounter | ER visit within 72 hours after ER or hospital discharge | Missed red flag findings during initial ER or hospital visit
Patient-provider encounter | Unexpected transfer from hospital general floor to ICU | Missed red flag findings during admission
Performance and interpretation of diagnostic tests | Amended imaging report | Missed findings on initial read, or lack of communication of amended findings
Follow-up and tracking of diagnostic information over time | Abnormal test result with no timely follow-up action | Abnormal test result missed
Referral and/or patient-specific factors | Urgent specialty referral followed by discontinued referral or patient no-show within 7 days | Delay in diagnosis from lack of specialty expertise
ER, emergency room; ICU, intensive care unit.
Figure 1  The Safer Dx e-trigger tools framework. Diagnostic process dimensions adapted from Safer Dx framework.34

Table 2  Safer Dx e-trigger tool development process

e-trigger tool development steps | Stakeholders involved | Example
Identify and prioritise diagnostic error of interest. | Organisational leadership and patient safety personnel | Delays in follow-up of lung nodules identified as a patient safety concern
Operationally define criteria to detect diagnostic error. | Clinicians and staff involved in diagnostic process and patient safety personnel | Trigger development team defines delay as a patient with a lung nodule on chest imaging, but no repeat imaging or specialty visit within 30 days.
Determine potential data sources. | Informaticists, IT/programmers and data warehouse personnel | Team identifies necessary structured data elements for imaging results and specialty visits within local clinical data warehouse.
Construct e-trigger algorithm. | Clinicians and staff involved in diagnostic process, informaticists, IT/programmers and data warehouse personnel | Programmer develops electronic query based on operational definition of delayed lung nodule follow-up.
Test e-trigger on data source and review medical record. | Clinicians and staff involved in diagnostic process, informaticists, IT/programmers and data warehouse personnel | Triggers are applied to data warehouse and clinicians perform chart reviews on 50 randomly selected records from those identified by the trigger.
Assess e-trigger algorithm performance. | Clinicians and staff involved in diagnostic process, informaticists and patient safety personnel | Positive and negative predictive values, sensitivity and specificity of the trigger are calculated to understand the trigger's performance.
Iteratively refine e-trigger algorithm to improve performance. | Clinicians and staff involved in diagnostic process, informaticists, IT/programmers and patient safety personnel | Trigger development team determines terminal illness to be a major cause of false positive results and adds this to the exclusion criteria.
IT, information technology.

Development methods
Step 1: Identify and prioritise diagnostic error of interest
The choice of which diagnostic error to focus on
could be guided by high-risk areas identified in prior
research and/or local priorities.44 Because of challenges to define error, we recommend risk areas where
clear evidence exists of a missed opportunity to make a
correct or timely diagnosis1 45–48 since this emphasises
preventability (focusing efforts where improvement is
more feasible) and accounts for the evolution of diagnosis over time.
Take, for example, a potentially missed diagnosis
of lung cancer related to delayed follow-up after an
abnormal chest radiograph.26 35 A robust body of
literature suggests that poor outcomes and malpractice suits can result from delays in follow-up of
abnormal imaging when potential lung malignancies
are missed.49–51
Step 2: Operationally define criteria to detect diagnostic error
Developing operational definitions involves creating
unambiguous language to objectively describe all
inclusion and exclusion characteristics to identify
the event. For example, an operational definition of
‘unexpected readmission’ might be ‘unplanned readmission to the same hospital for the same patient
within 14 days of discharge’. In many cases, standard
definitions will not exist and will need to be developed
by patient safety and clinical stakeholders. Published
literature, clinical guidelines from academic societies,
and input from clinicians, staff and other stakeholders
with expertise or involvement in related care processes
will allow development of rigorous criteria matched
to local processes and site characteristics. Consensus
may be achieved by having a designated team review
and approve all final criteria or Delphi-like methods52
with iterative revisions based on individual feedback
and re-review by the group.
In the above example, defining what constitutes an 'abnormal' radiograph, a follow-up action and a length of time that should be considered a 'delay' is seemingly straightforward but, in the absence of any standards, is a key step.
‘Abnormal radiographs’ could include any plain chest
radiograph where the radiologist documents findings
suspicious for new lung malignancy and 'timely' follow-up could include repeat imaging or a lung biopsy performed within 30 days of the initial radiograph. While the 30-day time frame is longer than what is required to act on an abnormal radiograph, it is short enough to 'catch' an abnormality before clinically significant disease progression, allowing an opportunity to intervene. Consensus on this time frame might involve primary care physicians, pulmonologists, oncologists and patient safety experts, and definitions may vary from site to site. The criteria should also exclude patients where additional diagnostic evaluation is unnecessary, such as in those with known lung cancer or terminal illness.
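To illustrate how such an operational definition might be expressed in machine-readable form, the sketch below encodes the delayed lung-nodule follow-up criteria described above as a simple rule set. The field names, event types and thresholds are assumptions made for demonstration only; actual criteria would be set by local consensus as described above.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Illustrative operational definition of a "delayed follow-up of an abnormal
# chest radiograph" e-trigger. Field names, thresholds and exclusion codes are
# assumptions for demonstration only; real criteria would come from local consensus.
FOLLOW_UP_WINDOW = timedelta(days=30)
QUALIFYING_FOLLOW_UP = {"repeat_chest_imaging", "lung_biopsy", "pulmonology_visit"}
EXCLUSION_DIAGNOSES = {"known_lung_cancer", "terminal_illness", "hospice_care"}

@dataclass
class PatientRecord:
    abnormal_radiograph_date: date          # radiologist flagged suspicion of malignancy
    follow_up_events: list                  # list of (event_type, event_date) tuples
    active_diagnoses: set = field(default_factory=set)

def trigger_positive(rec: PatientRecord) -> bool:
    """Return True when the record meets the e-trigger's operational definition."""
    # Exclusions: patients in whom further diagnostic evaluation is unnecessary.
    if rec.active_diagnoses & EXCLUSION_DIAGNOSES:
        return False
    # Inclusion: no qualifying follow-up action within the defined window.
    deadline = rec.abnormal_radiograph_date + FOLLOW_UP_WINDOW
    timely = any(
        event_type in QUALIFYING_FOLLOW_UP
        and rec.abnormal_radiograph_date <= event_date <= deadline
        for event_type, event_date in rec.follow_up_events
    )
    return not timely

# Example: trigger_positive(PatientRecord(date(2018, 1, 5), [], set())) -> True
```

Encoding the definition this way makes every inclusion and exclusion explicit, which simplifies later refinement of the trigger.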
Step 3: Determine potential electronic data sources
The nature and quality of the available data will determine to what extent the trigger can reliably capture the desired cohort, and operational definitions will often require refinements based on available data. All safety triggers ultimately involve manual medical record reviews to both validate (during trigger development) and act on (during trigger implementation) trigger output. EHR built-in functionality may provide sufficient data access and querying capabilities for e-trigger development where only a few simple criteria are required, but a data warehouse may be required when numerous inclusion and exclusion criteria are present. In addition to access to clinical and administrative data, e-trigger development relies on query software to develop, refine and test algorithms, as well as temporary storage for holding data from identified records.
Structured and/or unstructured data can be used. Structured, or 'coded', data, such as International Classification of Diseases (ICD) codes and lab results, can be used to objectively identify data items. More advanced text mining algorithms, like NLP, can optionally be added to an e-trigger to allow use of the vast amounts of unstructured (ie, free-text) data, particularly when a structured data field for a key criterion does not exist but the relevant data are contained in progress notes or reports. For example, a structured Breast Imaging Reporting and Data System (BIRADS) code may be helpful in detecting possible cancers on mammography results; however, no analogous coding system is widely used for detecting liver masses on abdominal imaging tests. Instead, an NLP algorithm could scan abdominal imaging result text for radiologist interpretations describing the presence of liver masses.53 While NLP methods are being actively explored, barriers to further deployment include limited access to shared data for comparisons, lack of annotated data sets for training, lack of user-centred development and concerns regarding reproducibility of results in different settings.54 Deployment of NLP systems usually requires an expert developer to build algorithms specific to the concepts being queried
in the free-text data, and are often not easily reused in
subsequent projects.54 This may put NLP-based triggers beyond current user capabilities, requiring more
developer support and limiting wider use. Similarly,
unsupervised machine learning, where computers
act without being explicitly programmed, can help
develop and improve triggers.55 Such algorithms could
potentially ‘learn’ to identify patterns in clinical data
and make predictions on potential diagnostic errors.
However, application of machine learning to make triggers 'smarter' requires more research and development and is not ready for widespread implementation.
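As a rough illustration of mining free text when no structured field exists, the sketch below screens abdominal imaging report text for phrases suggesting a liver mass. The phrase list and negation handling are assumptions for demonstration; a validated NLP algorithm, as cited above, would be considerably more sophisticated.

```python
import re

# Minimal keyword-based screen of free-text abdominal imaging reports for possible
# liver masses. Phrase lists and negation handling are illustrative assumptions only;
# a production e-trigger would use a validated NLP algorithm.
LIVER_MASS_PATTERNS = [
    r"\bliver (mass|lesion|nodule)\b",
    r"\bhepatic (mass|lesion|nodule)\b",
]
NEGATIONS = [r"\bno\b", r"\bwithout\b", r"\bnegative for\b"]

def flags_possible_liver_mass(report_text: str) -> bool:
    """Return True when a report sentence mentions a liver mass without nearby negation."""
    for sentence in re.split(r"[.\n]", report_text.lower()):
        if any(re.search(p, sentence) for p in LIVER_MASS_PATTERNS):
            if not any(re.search(n, sentence) for n in NEGATIONS):
                return True
    return False

# Example: flags_possible_liver_mass("2.3 cm hepatic lesion in segment VI.") -> True
```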
Step 4: Construct an e-trigger algorithm to obtain cohort of interest
The clinical logic for selecting a cohort of interest
can be converted into the necessary query language
to extract electronic data. This requires individuals
with database and query programming expertise, such
as Structured Query Language programming knowledge. A detailed understanding of the clinical event of interest and the available data sources is needed to
generate patient cohorts for subsequent validation,
which requires clinical experts to work closely with
the query programmer.
While inclusion criteria will initially identify at-risk
patients, a robust set of exclusions is needed to narrow
down the population of interest. These exclusions
could remove patients in hospice care or those unlikely
to have a diagnostic error, such as patients where
timely follow-up actions were already performed
(eg, imaging test or biopsy done within 30 days) or
patients hospitalised electively for procedures rather
than unexpected admissions after primary care visits.
The remaining cohort will include an enriched sample
of patients with the highest risk for error.
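A minimal sketch of this step, assuming an invented warehouse schema, is shown below; it expresses the delayed lung-nodule follow-up logic as an SQL query with one inclusion criterion and two sets of exclusions. Table and column names are illustrative only and would differ at any real site.

```python
import sqlite3

# Hypothetical illustration of step 4: translating the operational definition of
# "delayed follow-up of an abnormal chest radiograph" into a query. Table and
# column names are invented for this sketch and would differ at any real site.
COHORT_QUERY = """
SELECT i.patient_id, i.exam_date
FROM   imaging_results AS i
WHERE  i.modality = 'CHEST_XRAY'
  AND  i.suspicious_for_malignancy = 1
  -- Exclusion: timely follow-up already performed
  AND NOT EXISTS (
        SELECT 1 FROM follow_up_events AS f
        WHERE  f.patient_id = i.patient_id
          AND  f.event_type IN ('REPEAT_CHEST_IMAGING', 'LUNG_BIOPSY')
          AND  julianday(f.event_date) - julianday(i.exam_date) BETWEEN 0 AND 30)
  -- Exclusion: patients for whom further evaluation is unnecessary
  AND NOT EXISTS (
        SELECT 1 FROM problem_list AS p
        WHERE  p.patient_id = i.patient_id
          AND  p.diagnosis IN ('KNOWN_LUNG_CANCER', 'HOSPICE_CARE'));
"""

conn = sqlite3.connect(":memory:")  # stand-in for the local clinical data warehouse
conn.executescript("""
CREATE TABLE imaging_results (patient_id, exam_date, modality, suspicious_for_malignancy);
CREATE TABLE follow_up_events (patient_id, event_type, event_date);
CREATE TABLE problem_list (patient_id, diagnosis);
INSERT INTO imaging_results VALUES (1, '2018-01-05', 'CHEST_XRAY', 1);
""")
print(conn.execute(COHORT_QUERY).fetchall())  # -> [(1, '2018-01-05')]: an e-trigger-positive patient
```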
Step 5: Test e-trigger on data source and review medical records
Depending on algorithm complexity, individual inclusion and exclusion criteria should be validated by reviewing a small sample of records. This may isolate
potential algorithm or data-related issues (eg, additional ICD codes that need to be considered) not
immediately apparent when initially testing the full
algorithm. For instance, a small sample of records will
reveal if exclusions such as terminal illness, known
lung cancer, imaging testing within 30 days and biopsy testing within 30 days were indeed applied accurately.
Application of the full e-trigger algorithm yields a list
of patients at high risk of diagnostic error (‘e-trigger
positive’ patients). Medical records of e-trigger-positive patients should be reviewed by a clinician to
assess for presence or absence of diagnostic error.
For instance, when timely follow-up was performed
at an outside institution or when the return visit was
planned and mentioned only in a free-text portion
of a progress note, the record will be a false positive.
Reviews also help determine whether initial criteria
require refinement to increase future predictive
value. A review of patients excluded from the cohort
(‘e-trigger negative’ patients) may identify information
to help refine the e-trigger (eg, ensuring appropriate
data are used for selection and whether additional
patient information should be incorporated into the
trigger). Directed interviews of involved clinicians (eg,
physicians, nurses) and subject matter experts may also
yield information to modify criteria.42
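The sketch below shows one way the review step could be organised programmatically, assuming the e-trigger has already produced a list of record identifiers; the 50-record sample mirrors the example in table 2, and the reviewer fields are invented for illustration.

```python
import random

# Hypothetical organisation of the manual review step. Record identifiers and the
# reviewer determinations are invented; the sample size of 50 mirrors table 2.
trigger_positive_ids = list(range(1, 431))        # stand-in for e-trigger output
review_sample = random.sample(trigger_positive_ids, 50)

# One entry per reviewed chart; clinicians record whether a diagnostic error was
# present and, for false positives, why the trigger fired (to guide refinement).
review_findings = [
    {"record_id": 17, "diagnostic_error": False,
     "false_positive_reason": "follow-up done at outside institution"},
    {"record_id": 204, "diagnostic_error": True, "false_positive_reason": None},
]

confirmed = sum(1 for r in review_findings if r["diagnostic_error"])
print(f"{confirmed}/{len(review_findings)} reviewed records confirmed as diagnostic error")
```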
Step 6: Assess e-trigger algorithm performance
Several assessment measures can be used to evaluate
e-triggers, including positive predictive values (PPV)
based on the number of records flagged by the e-trigger
tool confirmed as diagnostic error on review (numerator) divided by the total number of records flagged
(denominator).56 If ‘negative’ records (ie, those not
flagged by the trigger) are reviewed, negative predictive values (NPV; number of patients without the
diagnostic error divided by all patients not flagged by
the e-trigger), sensitivity and specificity can additionally be calculated. Use of criteria to select higher risk
populations will often yield higher PPVs (eg, including
lipase orders to identify patients presenting with acute
abdominal pain to the emergency department).57 Tradeoffs will often be needed to achieve the best discrimination of patients of interest from patients without
the target or event of interest. e-Triggers with higher
PPVs will reduce resources spent on manual confirmatory reviews, while those with higher NPVs will
miss fewer records that contain the event of interest.
With uncommon events, such as seen in patient safety
research, it may only be possible to provide an estimated NPV by reviewing a modest sample of records
(eg, 100) because the number of ‘e-trigger negative’
records that need to be reviewed to find a single event
is vast and cost prohibitive. Higher sensitivity may be
desirable for certain e-triggers where the importance
of all events being captured outweighs the additional
review burden introduced by false positives.
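For readers less familiar with these measures, the short example below computes PPV, NPV, sensitivity and specificity from hypothetical chart-review counts; the numbers are invented solely to show the arithmetic, and when 'negative' records are only sampled the latter three values are estimates.

```python
# Illustrative calculation of e-trigger performance measures from chart-review counts.
# The counts below are invented purely to demonstrate the arithmetic; when negatives
# are only sampled, NPV, sensitivity and specificity are estimates.
true_pos = 30    # e-trigger positive, diagnostic error confirmed on review
false_pos = 20   # e-trigger positive, no error found on review
false_neg = 2    # e-trigger negative, error found in reviewed sample
true_neg = 98    # e-trigger negative, no error in reviewed sample

ppv = true_pos / (true_pos + false_pos)              # 0.60
npv = true_neg / (true_neg + false_neg)              # 0.98
sensitivity = true_pos / (true_pos + false_neg)      # ~0.94
specificity = true_neg / (true_neg + false_pos)      # ~0.83

print(f"PPV={ppv:.2f} NPV={npv:.2f} sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
```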
The PPV helps plan for human resources to review
records and to act on e-trigger output. Clinical
personnel would intervene in high-risk situations
to prevent harm, whereas patient safety personnel
would investigate events and factors that contributed
to errors. Process improvement and organisational
learning activities would follow. Reviews and actions
for missed opportunities to close the loop on abnormal
test results will require just a few minutes per patient,
allowing a single individual to handle many records
per week. However, reviews related to whether a cognitive error occurred, and the subsequent investigation and debriefing about what transpired, will take much longer.
Step 7: Iteratively refine e-triggers to improve trigger performance
Using the knowledge gained from the previous steps, the e-trigger may be iteratively refined to improve capture of the defined cohort. This may involve simply changing the value of a structured field or potentially redesigning the entire algorithm to better capture the clinical event. Similar to initial development, revision should be informed by content experts and iterative review of the available data. Clinicians can also suggest revisions based on clinical circumstances.
Trigger implementation and use
The ultimate goal of Safer Dx e-trigger development is to improve patient safety through better measurement and discovery of diagnostic errors by leveraging electronic data. After e-trigger tools are developed and validated to capture the desired cohort of patients with acceptable performance, safety analysis activities and potential solutions can result based on what is learnt.58 59 Use of e-triggers as diagnostic safety indicators is promising for identifying historic trends, generating feedback and learning, facilitating understanding of underlying contributory processes and informing improvement strategies. Additionally, certain e-triggers can help health systems intervene to prevent patient harm if applied prospectively.
In addition to having leadership support, health systems will need to either leverage existing or build additional infrastructure necessary to develop and implement diagnostic safety triggers. In organisations with advanced safety programmes, development and implementation will require only modest additional investment of resources; but for others in initial stages, trigger tools could provide a useful starting point. We envision many validated algorithms could be freely shared across institutions to reduce development workload.60 61 All health systems will need to convene a multidisciplinary team to harvest knowledge generated by the e-trigger tool. This group should address implementation factors related to how best to use e-trigger results, including who should receive them and how. Prospective application requires that these findings be communicated to clinicians so they can take action. Traditional methods of communicating such findings have posed challenges62; thus, additional work to reliably deliver such information is needed. Certain detected events may require further investigation, and dedicated patient safety and process improvement teams will need time and resources to collect and analyse data and recommend improvement strategies. Such a group should be composed of clinicians involved in the care processes, informaticists, patient safety professionals and patients, and should garner multidisciplinary expertise for understanding data and safety events and for creating and implementing effective solutions. While there is a need to invest in additional resources and infrastructure, building such an institutional programme could make significant advances in the quality, accuracy and timeliness of diagnoses.
Discussion
We demonstrate the application of a knowledge
discovery framework to guide development and
implementation of e-triggers to identify targets for
improving diagnostic safety. This approach has shown
early promise to identify and describe diagnostic safety
concerns within health systems using comprehensive
EHRs.25–28 35 This discovery approach could ensure
progress towards the goal of using the EHR to monitor
and improve patient safety, the most advanced and
challenging aspect of EHR use.8 63
Application of the Safer Dx e-trigger tool framework
is not without limitations. First, a sizeable proportion of
healthcare information is contained in free-text notes
or documents. This may necessitate use of NLP-based
methods if e-trigger performance is inadequate to
detect the cohort of interest, but NLP requires additional expertise, and methods to improve the portability and accessibility of NLP tools are still being explored.64 65
Use of statistical models to estimate the likelihood of a
diagnostic error or machine learning55 66 to program a
computer to 'learn' from data patterns and make predictions may allow further improvements
in performance. Maturation of these techniques will
stimulate the development and use of more sophisticated
and effective e-trigger tools. Second, data availability
and quality remain important issues that impact trigger
feasibility and performance. Even at organisations that
provide comprehensive and longitudinal care, we have
found data sharing across institutions to be incomplete,
requiring deliberate processes to actively collect and
record external findings.60 This highlights the need for
more meaningful sharing of data across institutions in
a manner computers can use. Efforts to improve data
sharing are already under way but are in early stages (eg,
view-only versions of data from external organisations).
As data sharing improves, e-trigger tools will have better
opportunities to impact patient safety.67 Furthermore,
even when all care is delivered within a single organisation, absent, incomplete, outdated or incorrect data
can affect trigger tool performance. Similarly, certain
elements of patients’ histories, exams or assessments
may not be recorded in the medical record, limiting both
e-trigger performance and subsequent chart reviews used
to verify trigger results.68 69 However, this is a limitation
of most current safety measurement methods.
Conclusion
Use of HIT and readily available electronic clinical
data can enable better patient safety measurement. The
Safer Dx Trigger Tools Framework discussed here has
potential to advance both real-time and retrospective
identification of opportunities to improve diagnostic
safety. Development and implementation of diagnostic safety e-trigger tools, along with the institutional investment to support them, can improve our knowledge of how to reduce harm from diagnostic errors and accelerate progress in patient safety improvement.
Contributors All authors contributed to the development,
review and revision of this manuscript.
Funding Work described is heavily drawn from research
funded by the Veteran Affairs Health Services Research and
Development Service CREATE grant (CRE-12-033), the
Agency for Healthcare Research and Quality (R18HS017820)
and the Houston VA HSR&D Center for Innovations in
Quality, Effectiveness and Safety (CIN 13–413). Dr Murphy
is additionally funded by an Agency for Healthcare Research
& Quality Mentored Career Development Award (K08HS022901), and Dr Singh is additionally supported by the
VA Health Services Research and Development Service
(Presidential Early Career Award for Scientists and Engineers
USA 14-274), the VA National Center for Patient Safety, the
Agency for Health Care Research and Quality (R01HS022087)
and the Gordon and Betty Moore Foundation. Drs Sittig and
Thomas are supported in part by the Agency for Health Care
Research and Quality (P30HS023526). These funding sources
had no role in the design and conduct of the study; collection,
management, analysis and interpretation of the data; and
preparation, review or approval of the manuscript.
Competing interests None declared.
Patient consent Not required.
Provenance and peer review Not commissioned; externally
peer reviewed.
Open access This is an open access article distributed in
accordance with the Creative Commons Attribution Non
Commercial (CC BY-NC 4.0) license, which permits others
to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0
References
1 Kohn LT, Corrigan JM, Donaldson MS, eds. Committee on
quality of health Care in America, Institute of Medicine. To Err
is human: building a safer health system. Washington, DC: The
National Academies Press, 2000.
2 Nepple KG, Joudi FN, Hillis SL, et al. Prevalence of delayed
clinician response to elevated prostate-specific antigen values.
Mayo Clin Proc 2008;83:439–45.
3 Singh H, Meyer AN, Thomas EJ. The frequency of diagnostic
errors in outpatient care: estimations from three large
observational studies involving US adult populations. BMJ
Qual Saf 2014;23:727–31.
4 Improving diagnostic quality and Safety. Washington, DC:
National Quality Forum, 2017.
5 Liberman AL, Newman-Toker DE. Symptom-Disease Pair
Analysis of Diagnostic Error (SPADE): a conceptual framework
and methodological approach for unearthing misdiagnosis-related harms using big data. BMJ Qual Saf 2018;27:557–66.
6 Dhaliwal G, Shojania KG. The data of diagnostic error: big,
large and small. BMJ Qual Saf 2018;27:499–501.
7 Lorincz C, Drazen E, Sokol P. Research in ambulatory patient
safety 2000–2010: a 10-year review. Chicago IL: American
Medical Association, 2011.
8 Sittig DF, Singh H. Electronic health records and national
patient-safety goals. N Engl J Med 2012;367:1854–60.
9 Howard IL, Bowen JM, Al Shaikh LAH, et al. Development
of a trigger tool to identify adverse events and harm in
Emergency Medical Services. Emerg Med J 2017;34:391–7.
10 De Almeida SM, Romualdo A, De Abreu Ferraresi A, et al.
Use of a trigger tool to detect adverse drug reactions in an
emergency department. BMC Pharmacol Toxicol 2017;18:71.
11 Unbeck M, Lindemalm S, Nydert P, et al. Validation of triggers
and development of a pediatric trigger tool to identify adverse
events. BMC Health Serv Res 2014;14:655.
12 Lipitz-Snyderman A, Classen D, Pfister D, et al. Performance
of a trigger tool for identifying adverse events in oncology. J
Oncol Pract 2017;13:e223–e230.
13 Lindblad M, Schildmeijer K, Nilsson L, et al. Development of
a trigger tool to identify adverse events and no-harm incidents
that affect patients admitted to home healthcare. BMJ Qual Saf
2018;27:502-511.
14 Sammer C, Miller S, Jones C, et al. Developing and evaluating
an automated all-cause harm trigger system. Jt Comm J Qual
Patient Saf 2017;43:155–65.
15 Menendez ME, Janssen SJ, Ring D. Electronic health record-based triggers to detect adverse events after outpatient
orthopaedic surgery. BMJ Qual Saf 2016;25:25–30.
16 Triggers and Targeted Injury Detection Systems (TIDS) Expert
Panel Meeting: conference summary report. Rockville, MD:
Agency for Healthcare Research and Quality, 2009.
17 Glossaries. AHRQ Patient safety network, 2005.
18 Clancy CM. Common formats allow uniform collection and
reporting of patient safety data by patient safety organizations.
Am J Med Qual 2010;25:73–5.
19 Shenvi EC, El-Kareh R. Clinical criteria to screen for inpatient
diagnostic errors: a scoping review. Diagnosis 2015;2:3–19.
20 Classen DC, Resar R, Griffin F, et al. ‘Global trigger tool’
shows that adverse events in hospitals may be ten times greater
than previously measured. Health Aff 2011;30:581–9.
21 Asgari H, Esfahani SS, Yaghoubi M, et al. Investigating selected
patient safety indicators using medical records data. J Educ
Health Promot 2015;4:54.
22 Kane-Gill SL, MacLasco AM, Saul MI, et al. Use of text
searching for trigger words in medical records to identify
adverse drug reactions within an intensive care unit discharge
summary. Appl Clin Inform 2016;7:660–71.
23 Classen DC, Pestotnik SL, Evans RS. Computerized
surveillance of adverse drug events in hospital patients. JAMA
1991;266:2847–51.
24 Doupi P, Svaar H, Bjørn B, et al. Use of the global trigger tool
in patient safety improvement efforts: nordic experiences.
Cogn Technol Work 2015;17:45–54.
25 Murphy DR, Meyer AN, Vaghani V, et al. Application of
electronic algorithms to improve diagnostic evaluation for
bladder cancer. Appl Clin Inform 2017;8:279–90.
26 Murphy DR, Wu L, Thomas EJ, et al. Electronic trigger-based
intervention to reduce delays in diagnostic evaluation for
cancer: a cluster randomized controlled trial. J Clin Oncol
2015;33:3560–7.
27 Murphy DR, Laxmisan A, Reis BA, et al. Electronic health
record-based triggers to detect potential delays in cancer
diagnosis. BMJ Qual Saf 2014;23:8–16.
28 Murphy DR, Thomas EJ, Meyer AN, et al. Development
and validation of electronic health record-based triggers to
detect delays in follow-up of abnormal lung imaging findings.
Radiology 2015;277:81–7.
29 Danforth KN, Smith AE, Loo RK, et al. Electronic clinical
surveillance to improve outpatient care: diverse applications
within an integrated delivery system. EGEMS 2014;2:1056.
30 Wandtke B, Gallagher S. Reducing delay in diagnosis:
multistage recommendation tracking. AJR Am J Roentgenol
2017;209:970–5.
31 Singh H, Thomas EJ, Khan MM, et al. Identifying diagnostic
errors in primary care using an electronic screening algorithm.
Arch Intern Med 2007;167:302–8.
32 Resar RK, Rozich JD, Simmonds T, et al. A trigger tool to
identify adverse events in the intensive care unit. Jt Comm J
Qual Patient Saf 2006;32:585–90.
33 Bhise V, Sittig DF, Vaghani V, et al. An electronic trigger based
on care escalation to identify preventable adverse events in
hospitalised patients. BMJ Qual Saf 2018;27:241-246.
34 Singh H, Sittig DF. Advancing the science of measurement of
diagnostic errors in healthcare: the Safer Dx framework. BMJ
Qual Saf 2015;24:103–10.
35 Murphy DR, Meyer AN, Bhise V, et al. Computerized triggers
of big data to detect delays in follow-up of chest imaging
results. Chest 2016;150:613–20.
36 Sim JJ, Rutkowski MP, Selevan DC, et al. Kaiser permanente
creatinine safety program: a mechanism to ensure widespread
detection and care for chronic kidney disease. Am J Med
2015;128:1204–11.
37 Meyer AND, Murphy DR, Al-Mutairi A, et al. Electronic
detection of delayed test result follow-up in patients with
hypothyroidism. J Gen Intern Med 2017;32:753–9.
38 Hedriana HL, Wiesner S, Downs BG, et al. Baseline
assessment of a hospital-specific early warning trigger system
for reducing maternal morbidity. Int J Gynaecol Obstet
2016;132:337–41.
39 Shields LE, Wiesner S, Klein C, et al. Use of maternal early
warning trigger tool reduces maternal morbidity. Am J Obstet
Gynecol 2016;214:527.e1–527.e6.
40 Singh H. Improving diagnostic safety in primary care
by unlocking digital data. Jt Comm J Qual Patient Saf
2017;43:29–31.
41 Bhise V, Meyer AND, Singh H, et al. Errors in diagnosis of
spinal epidural abscesses in the era of electronic health records.
Am J Med 2017;130:975–81.
42 Hripcsak G, Bakken S, Stetson PD, et al. Mining complex
clinical data for patient safety research: a framework for event
discovery. J Biomed Inform 2003;36(1-2):120–30.
43 Singh H, Giardina TD, Forjuoh SN, et al. Electronic health
record-based surveillance of diagnostic errors in primary care.
BMJ Qual Saf 2012;21:93–100.
44 Olson APJ, Graber ML, Singh H. Tracking progress in
improving diagnosis: a framework for defining undesirable
diagnostic events. J Gen Intern Med 2018;33:1187–91.
45 Graber ML, Franklin N, Gordon R. Diagnostic error in
internal medicine. Arch Intern Med 2005;165:1493–9.
46 Singh H, Hirani K, Kadiyala H, et al. Characteristics and
predictors of missed opportunities in lung cancer diagnosis:
an electronic health record-based study. J Clin Oncol
2010;28:3307–15.
47 Zwaan L, Thijs A, Wagner C, et al. Relating faults in diagnostic
reasoning with diagnostic errors and patient harm. Acad Med
2012;87:149–56.
48 Zwaan L, Schiff GD, Singh H. Advancing the research agenda
for diagnostic error reduction. BMJ Qual Saf 2013;22(Suppl
2):ii52–ii57.
49 Berlin L. Malpractice issues in radiology admitting mistakes.
AJR Am J Roentgenol 1999;172:879–84.
50 Gale BD, Bissett-Siegel DP, Davidson SJ, et al. Failure to notify
reportable test results: significance in medical malpractice. J
Am Coll Radiol 2011;8:776–9.
51 Gandhi TK, Kachalia A, Thomas EJ, et al. Missed and
delayed diagnoses in the ambulatory setting: a study of closed
malpractice claims. Ann Intern Med 2006;145:488–96.
52 Brender J, Ammenwerth E, Nykänen P, et al. Factors
influencing success and failure of health informatics systems–a
pilot Delphi study. Methods Inf Med 2006;45:125–36.
53 Danforth KN, Early MI, Ngan S, et al. Automated
identification of patients with pulmonary nodules in an
integrated health system using administrative health plan data,
radiology reports, and natural language processing. J Thorac
Oncol 2012;7:1257–62.
54 Chapman WW, Nadkarni PM, Hirschman L, et al. Overcoming
barriers to NLP for clinical text: the role of shared tasks and
the need for additional creative solutions. J Am Med Inform
Assoc 2011;18:540–3.
55 Deo RC. Machine learning in medicine. Circulation
2015;132:1920–30.
56 Bramer M. Principles of data mining. London: Springer
London, 2013.
57 Medford-Davis L, Park E, Shlamovitz G, et al. Diagnostic
errors related to acute abdominal pain in the emergency
department. Emerg Med J 2016;33:253–9.
58 Rosen AK, Mull HJ, Kaafarani H, et al. Applying trigger tools
to detect adverse events associated with outpatient surgery. J
Patient Saf 2011;7:45–59.
59 Resar RK, Rozich JD, Classen D. Methodology and rationale
for the measurement of harm with trigger tools. Qual Saf
Health Care 2003;12:39–45.
60 Murphy DR, Meyer AND, Vaghani V, et al. Electronic triggers
to identify delays in follow-up of mammography: harnessing
the power of big data in health care. J Am Coll Radiol
2018;15:287–95.
61 Murphy DR, Meyer AND, Vaghani V, et al. Development and
validation of trigger algorithms to identify delays in diagnostic
evaluation of gastroenterological cancer. Clin Gastroenterol
Hepatol 2018;16:90–8.
62 Meyer AN, Murphy DR, Singh H. Communicating findings of
delayed diagnostic evaluation to primary care providers. J Am
Board Fam Med 2016;29:469–73.
63 Meeks DW, Takian A, Sittig DF, et al. Exploring the sociotechnical intersection of patient safety and electronic health record implementation. J Am Med Inform Assoc 2014;21(e1):e28–e34.
64 Soundrarajan BR, Ginter T, DuVall SL. An interface for rapid natural language processing development in UIMA. Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Systems Demonstrations. Association for Computational Linguistics 2011:139–44.
65 Divita G, Carter ME, Tran LT, et al. v3NLP Framework: tools to build applications for extracting concepts from clinical text. EGEMS 2016;4:1228.
66 Wu J, Roy J, Stewart WF. Prediction modeling using EHR data: challenges, strategies, and a comparison of machine learning approaches. Med Care 2010;48(6 Suppl):S106–13.
67 Russo E, Sittig DF, Murphy DR, et al. Challenges in patient safety improvement research in the era of electronic health records. Healthc 2016;4:285–90.
68 Schwartz A, Weiner SJ, Weaver F, et al. Uncharted territory: measuring costs of diagnostic errors outside the medical record. BMJ Qual Saf 2012;21:918–24.
69 Weiner SJ, Schwartz A. Directly observed care: can unannounced standardized patients address a gap in performance measurement? J Gen Intern Med 2014;29:1183–7.