Data in EHRs Don’t Match Physicians’ Exam, Study Suggests

Electronic health records (EHRs) have the potential to improve the accuracy of medical documentation, but they may be having the opposite effect.

A small study of patient visits to the emergency department reveals concerning inconsistencies between real-time observational data (what observers actually saw or heard physicians do during patient encounters) and what was documented in the medical records. Specifically, observers were able to verify fewer than 40% of documented review of systems (ROS) entries and just over 50% of documented physical examination (PE) systems, pointing to a clear need to improve the accuracy of EHR content.

“These findings raise the possibility that some documentation may not accurately represent physician actions,” write Carl T. Berdahl, MD, MS, from the National Clinician Scholars Program at the University of California, Los Angeles, and colleagues. They published their findings online today in JAMA Network Open.

While calling for further studies to determine the extent of this discordance, they concede that such studies are unlikely to be done “owing to institution-level barriers that exist nationwide.” In the meantime, they suggest that “payers should consider removing financial incentives to generate lengthy documentation.”

Berdahl, who is also an emergency physician at Cedars-Sinai Medical Center in Los Angeles, notes that the information errors tend to involve parts of the chart and areas of the body that are less clinically relevant to the complaints that bring patients to the emergency room. For example, among patients presenting with gastrointestinal or genitourinary problems, the researchers found EHR documentation of an uncorroborated abdominal or genitourinary examination in just 3 of 55 instances (5.5%). For the same group of patients, an unsubstantiated ear, nose, and/or throat examination was documented in 27 of 33 instances (81.8%).

In his view, the causes of this systemic problem are mainly twofold. “First, these are outstanding doctors doing a very good job, and their first order of business is to address patients’ concerns. In order to get to the next bedside, they may do things to save time and take shortcuts that could add data to the charts that observers don’t see during the encounters,” he told Medscape Medical News.

“Given the extensive demands on emergency doctors’ time, many find it easy to standardize a template in their EHR that they can then modify for pertinent positives, as filling out an entire ROS or PE template may not be feasible,” agrees Stephen Harding, MD, an assistant professor of emergency medicine at Baylor College of Medicine in Houston, Texas, who was not involved in the study.

A second possible driver of discrepancy is billing. Although this study deliberately did not look at that, “[t]here are pressures to write more on the charts because you can charge more,” Berdahl continued.

Harding concurs. “Given billing levels are determined and verified based on the number of ROS or PE systems documented, there is a certain degree of pressure to document to a certain level,” he said.

Yet another problem, according to Berdahl, is design: templates are formatted to autopopulate with information that tends to be misleading. Because these templates are meant to ease the data-entry burden, EHRs may be especially vulnerable to inaccuracy in ROS and PE documentation.

Study Details

Between 2016 and 2018, nine licensed emergency medicine residents at two academic medical centers were shadowed by 12 trained observers during 20 patient encounters each (10 per resident at each site). The objective was to quantify the percentage of emergency physicians' ROS and PE documentation in EHRs that could be confirmed by observers' concurrent observation and review of audio recordings.

For ROS, the trainees documented a median of 14 systems (interquartile range [IQR], 8 – 14), while audio recordings confirmed a median of 5 (IQR, 3 – 6). Overall, 755 of 1961 documented ROS systems (38.5%) were confirmed by audio recording data.

For PE, residents documented a median of 8 verifiable systems (IQR, 7 – 9), while observers confirmed a median of 5.5 (IQR, 3 – 6). Overall, observation confirmed 760 of 1429 documented PE systems (53.2%), with interrater reliability for both ROS and PE exceeding 90% on all measures.

The observer teams consisted of two attending emergency physicians and 10 undergraduate students, the students having been selected from a pool of 28 applicants as outstanding observers with 97% accuracy. The study also tallied false-positive events (unsubstantiated documentation) and false-negative events (failures to document an ROS or PE finding).

Cedric Dark, MD, MPH, who, like Harding, is an assistant professor of emergency medicine at Baylor and not involved in the current study, said he is not surprised at its findings, as they demonstrate what he’s routinely observed in multiple emergency room environments. “It’s a lot easier to over-document using EHRs than paper charts, and a lot of the danger relates to the use of these macros or autopopulations,” he told Medscape Medical News.

Also problematic in Dark’s view is that doctors are working with outdated documentation guidelines that have been in place since 1995.

Dark offered another element in the exam-vs-chart discordance seen in the study: physicians' skill. Doctors are trained to assess patients by sight alone, without having to lay hands on them. “What I can observe about a patient may not be evident to observers, especially those in this study. I have the ability to see things that I don’t have to verbalize but will document,” he said. If a patient has liver failure, for example, the physician can see that by the color of his skin and eyes. “I can see how a patient is breathing from the end of the bed, so I can do an assessment without having to touch him.”

Well-Thought-Out Study

“It is a well-thought-out observational study that overcomes the challenges of data collection, physician resistance to auditing, and the desire to preserve an image of physician infallibility,” write Nathalie Jetté, MD, MSc, from the department of neurology at the Icahn School of Medicine at Mount Sinai in New York City, and colleagues in an accompanying commentary.

They add, however, that as the physicians being observed were residents, the findings may not be widely generalizable. Residents in training may be unaware of compliance requirements, and earlier compliance education may be called for, according to the editorialists.

“It is of vital importance for clinical health services and legal purposes that clinicians document medical record information consistent with the level of care given,” Jetté and colleagues write. They point to a need for further study of the discrepancy phenomenon among attending physicians and across different specialties and health systems.

Berdahl’s center is now working with institutional partners to address this systemic problem.

“We hope that those who write the rules on billing — essentially Medicare — can transform the regulations so we can experiment with innovative ways that are more effective,” Berdahl said. These might include working from audio recordings of consultations with details entered into EHRs from transcripts. “Doctors would actually need to write only an abbreviated note,” he said. “There are new ways to make documentation accurate, but billing rules are holding us back.”

Berdahl’s hope may soon be fulfilled, at least partially. Last year, the Centers for Medicare and Medicaid Services released the 2019 Medicare Physician Fee Schedule Final Rule, which outlines revised documentation rules and regulations, including a new payment method for evaluation and management services that takes effect January 1, 2021.

The study was funded by the National Clinician Scholars Program at the University of California, Los Angeles, and the Korein Foundation. The study authors, as well as Harding and Dark, have disclosed no relevant financial relationships. Jetté has reported grant funding from the National Institute of Neurological Disorders and Stroke and from Alberta Health outside the submitted work, and grant funding from the Patient-Centered Outcomes Research Institute for unrelated research on using EHRs for quality improvement and comparative effectiveness.

JAMA Network Open. Published online September 18, 2019. Article, Commentary
