I’ve been thinking more about the Rory Staunton case since I wrote my first post about it a few days ago. As I think is clear from that post, my thoughts about the terribly tragic circumstances of his death changed pretty significantly based on one data point — his blood test results. I can construct a vaguely plausible explanation for why doctors in the emergency department discharged him with fever and an elevated heart rate and respiratory rate. It’s not a great explanation, but at least it’s one I could understand someone making.
But there’s simply no way I can reconcile his vital signs, his blood test results and the decision to discharge him from the ED. It’s not entirely clear to me from the Times article about the situation whether the pediatrician in the ED knew the results before discharging, since it only mentions when the labs were printed. She might have seen them through the hospital’s computer system, for example. But either she didn’t see them (which is negligence) or she didn’t think much of them (which is incompetence, in my estimation). Not defensible either way.
Now, we could simply call her a negligent or incompetent physician and leave it at that. But I’m still not comfortable with that. While I cannot describe her actions in this case (as I understand them from a one-sided article in a lay publication, it should be noted) as anything but negligent or incompetent, I think it’s too easy to simply tell ourselves she must be a bad doctor. I think it’s entirely possible that she is a good, competent pediatrician who made a major, catastrophic error. I’m still trying to understand what may have happened that allowed an otherwise good doctor to make such a bad clinical decision. Perhaps I’m being too generous in my estimation of her, but I don’t think it’s particularly illuminating to shrug our shoulders, say “bad doctor” and move on. Is there something more we could learn?
How could an important test, the results of which were very clinically significant, have been ordered and then ignored? Why would that happen? And I have a guess.
Before I go any further, I need to make it perfectly clear that this is nothing more than a guess. Everything from here on out is entirely speculative on my part, and is based on nothing more than my broad experience of patient care as delivered in emergency departments. As I have already said, I know nothing more about this case than any other reader of the Times article, and I certainly don’t have special insight into the mind of the doctor at NYU Medical Center’s ED.
My guess is that the tests were drawn without anyone thinking about them, as standard procedure in the department. Indeed, I think we have good reason to suspect this — the documents attached to the article include the “Severe Sepsis Triage Screening Tool,” which lists diagnostic criteria for sepsis. Rory had three, which (per the form) automatically triggered the order set that follows on the document. The orders that follow include a CBC (the test that showed the very abnormal white blood cell counts), as well as other tests that are often abnormal in septic patients. (The only other lab results we can see for Rory were from his metabolic panel, and were basically normal.)
I think this case may illustrate a pitfall of such automatic order sets. They serve a purpose by taking some of the guesswork out of clinical decision-making: if these risk factors, then those orders. But the tests are only valuable insofar as anyone actually thinks about what they show. If nobody really pays much attention to them, then they are just a costly, automatic addition to the expense of an ED visit.
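To make the “if these risk factors, then those orders” logic concrete, here is a rough sketch of how I picture such a screening tool working. The criterion names, and the tests beyond the CBC and metabolic panel, are placeholders of my own, not the actual contents of the NYU form.

```python
# A minimal, hypothetical sketch of a checklist-driven order set.
# Criterion and test names are placeholders, not the real screening tool.

SEPSIS_CRITERIA = [
    "abnormal_temperature",
    "tachycardia",
    "tachypnea",
    "altered_mental_status",
]

# Tests fired automatically when the screen is positive. The CBC and
# metabolic panel are mentioned in the article; the rest are illustrative.
AUTOMATIC_ORDER_SET = ["CBC", "metabolic_panel", "blood_culture", "lactate"]

def triage_orders(findings: set[str]) -> list[str]:
    """Return the lab orders triggered by the triage screen."""
    positives = sum(1 for c in SEPSIS_CRITERIA if c in findings)
    # Per the form, meeting three or more criteria fires the order set
    # automatically, with no physician decision in the loop.
    return AUTOMATIC_ORDER_SET if positives >= 3 else []
```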
If the tests are ordered without thinking, I wonder if they are more likely to be reviewed without thinking. If nobody stops and considers what pieces of information they’re really looking for, and what tests they really want to help uncover that information, do the tests become part of the background noise? Order sets are designed to reduce medical provider error, but I find myself asking if they may also in some way contribute to it.
None of this is to excuse the physician or downplay the magnitude of her error. The patient did have evidence of sepsis, which means she absolutely should have been paying attention to what steps were to come, automatic or otherwise. However, rather than merely condemning one doctor’s bad decisions, shouldn’t we also ask whether there was something about the system that surrounded her that contributed to her error?
I think there are similar concerns with automated train driving and autopilot. When there is some sort of automated system, it may catch some mistakes, but create others.
Had the CBC not been done, what other red flags would there have been for this patient at this stage? It seems to me that if an automated system ordered the test, and it provided potentially life-saving information (including insight into how this could have been prevented), then the problem is not with the system; it’s with a culture that allows that information to be overlooked.
In the absence of the documentation of the physical exam, it’s a little bit hard to say. And I agree that the problem lies in the culture (or, if you prefer, system) that allows for a potentially life-saving result to be ignored. But since we are humans and commit human errors, that reality must be accounted for when crafting a solution.
For me, the other big problem (which is mentioned in my previous post and the Times article) is the patient’s respiratory rate in the office prior to the ED referral. If his respiratory rate really was 36, that is very abnormal, and far more vigilance was necessary on his primary physician’s part to make sure she followed up and was satisfied that he had been properly evaluated.
So, the IT guy Offers An Observation.
Information systems are really bad at changing behavior, but they are good at forcing behavior, by triggering certain sorts of escalation procedures.
When you have an automated test, you’re right: someone has to interpret the results for the test to be meaningful, and having tests triggered by a checklist rather than by a doctor means they may not be reviewed with the same care as tests the doctor ordered personally.
So the right way for the system to work is to change permissions from default-allow to default-deny. If a checklist triggers a test, the ability to discharge the patient is now outside the bounds of the physician of record, barring proper handling of an escalation. In other words: Dr. Foo admits a patient. The patient has symptoms X, Y, and Z, and Z triggers a checklist requirement for Test Whatever. A nurse draws blood for Test Whatever, since the test no longer requires the doctor to order it.
But that also means that the system has control of the test, from a process standpoint… NOT the doctor. The system shouldn’t let you discharge the patient until the results are in, and the results are processed (by processed, I mean, if the test comes back all in order, the doctor has to acknowledge that inside the system, which then unlocks the patient and the patient can be discharged… but if the test comes back with a diagnostic flag, the doctor has to acknowledge that *and* there has to be a verification check of some sort before the system unlocks the patient and the doctor can discharge them).
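To make that concrete, here’s a very rough sketch of the locking logic I have in mind. Every name here is made up, and a real system would be vastly more involved, but the default-deny idea fits in a few lines:

```python
# Hypothetical sketch of a default-deny discharge lock. All names invented.
from dataclasses import dataclass, field

@dataclass
class LabResult:
    test_name: str
    flagged_abnormal: bool
    acknowledged_by_physician: bool = False
    verified_by_second_agent: bool = False   # extra check for flagged results

@dataclass
class PatientRecord:
    pending_tests: set = field(default_factory=set)
    results: list = field(default_factory=list)

    def can_discharge(self) -> tuple[bool, str]:
        """Discharge stays locked until every checklist-triggered test is
        resulted, acknowledged, and (if flagged) verified by a second agent."""
        if self.pending_tests:
            return False, f"results outstanding: {sorted(self.pending_tests)}"
        for r in self.results:
            if not r.acknowledged_by_physician:
                return False, f"{r.test_name} not acknowledged by physician of record"
            if r.flagged_abnormal and not r.verified_by_second_agent:
                return False, f"{r.test_name} flagged; verification check required"
        return True, "discharge unlocked"
```

So in the Dr. Foo example, once the checklist fires Test Whatever, the patient sits in pending_tests and the discharge action simply doesn’t work until the result is back and acknowledged (and verified, if it came back flagged).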
The tricky part is codifying what constitutes normal vs. abnormal results. For some tests, I imagine this is pretty straightforward, and for others it requires something other than “is this number < X”.
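For example, a CBC flag might depend on a combination of values and on context like the patient’s age, not just one cutoff. Something like this, with made-up numbers rather than real reference ranges:

```python
# Illustrative only: the cutoffs below are placeholders, not clinical
# reference ranges. The point is that a useful flag may need more than
# a single "is this number < X" comparison.

def flag_cbc(wbc_total: float, band_pct: float, age_years: float) -> bool:
    """Return True if this hypothetical rule would flag the result."""
    # Placeholder, age-dependent bounds on total white count.
    low, high = (5.0, 15.0) if age_years < 6 else (4.5, 13.0)
    if not (low <= wbc_total <= high):
        return True
    # A high share of immature cells can be worrying even when the
    # total count falls inside the bounds.
    return band_pct > 10.0
```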
At the end of the day, there’s a question of whether you want your system to deal with more false positives in an effort to catch everything, or to accept the risk of false negatives but have enough hours in the day to look at everything you do catch. (And, of course, you’ll have some of both… the question is which side you’ll err toward.)
You could tweak a system based on the severity of the outcome. Sepsis seems like such a case.
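As a toy illustration (the scores and cutoffs are invented), tuning by severity might be as simple as alerting at a lower risk score when the outcome you’d miss is catastrophic:

```python
# Toy example: accept more false positives where the downside of a miss
# is severe. Scores and thresholds are invented for illustration.
ALERT_THRESHOLDS = {
    "high_severity": 0.2,  # e.g. possible sepsis: alert early, eat the false positives
    "low_severity": 0.7,   # e.g. where an extra alert mostly costs reviewer time
}

def should_alert(risk_score: float, severity: str) -> bool:
    return risk_score >= ALERT_THRESHOLDS[severity]
```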
In systems where agent freedom is really important, false positives are a pain in the ass. In systems where agent freedom isn’t impacted terribly or it isn’t important, false positives are not a big deal unless they eat a lot of resources.
“Click this box to show that you read this report (and possibly get another medical agent to click to verify, depending) and you can let ’em go home” may or may not be a burden; I don’t see the process side of enough ERs to know.
I imagine there will need to be lots of tweaking in the early stages of implementing a system such as this, but I still think (as I said in my reply in the other thread) it’s a very good idea, and one I would be inclined to support.
That said, I don’t work in EDs any longer, so I have no idea how providers in that milieu might respond to this kind of system.
Generally speaking, in my experience people who are highly trained to do anything break down roughly like this: 15-25% think a system like this is awesome, because they want someone to catch their mistakes; 25% find it annoying because the implementation isn’t done well and system changes aren’t made to respond to their actual needs; and 50-60% just hate it for a long laundry list of reasons that they’ll enumerate when asked… but the real root reason is that they feel like the computer is telling them what to do, and by gum, what business is it of the computer to tell me how to do my job?
Is there usually at least a protocol of having doctors sign off on all test results?
I think that varies from practice to practice and department to department. Certainly at Children’s all lab values have to be signed off, and the same is true in my office. Every lab test that gets sent to us is signed and dated by the person who reviews it. If some kind of follow-up conversation with parents occurs, or if a change in management is triggered, we document it on the form.
As far as I know, a linkage between signing off on test results and discharging patients would be novel.
I’m surprised by this; I would have assumed that Patrick’s system was already in place, but that the patient hold was being manually overridden by the doctor in charge…
No, unless things are very different than I understand, systems like this are not standard at all.
There are medical informatics systems out there, but adoption in the U.S. is slow.