What happens when a good idea is poorly implemented? A recent article reported on a catastrophic catalog of errors in Electronic Health Records (EHRs) in the US. Tests were missed. Notes for one patient would appear on another’s profile. Alerts for dangerous drug interactions failed. Data was lost when text was entered with a certain combination of punctuation.
The list goes on. And the consequences were, in some cases, tragic. Lives were lost or irreparably damaged.
And while there were no doubt multiple factors at play, it’s hard to avoid the conclusion that one of them was a failure to test software adequately.
This was digital disruption on a massive scale. It wasn’t just about the usability of individual applications, although that certainly seems to have been an issue. Key to this was the way in which different applications interacted with each other, and that presents a huge software testing challenge. Maybe the EHR vendor’s software works perfectly. Maybe the lab software has even been tested with the EHR and it works perfectly. But what about when the lab software releases an update? And what if it isn’t just the lab software, but PACS, blood bank, pharmacy, and so on? And what if they’ve all adopted agile processes and they’re releasing updates all the time?
How do you begin to ensure that everything keeps working?
It’s the sort of thing that continuous intelligent test automation is made for. The number of possible workflows is mind-boggling, and human testers couldn’t begin to tackle something of this scale, especially when those testers are also clinicians who are already desperately short of time. It’s crying out for AI-driven, automated exploratory testing to hunt down and uncover the bugs that manual testers simply wouldn’t have the time to find.
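To make the idea concrete, here’s a minimal sketch of what automated exploratory testing can look like in principle: drive a system with long random sequences of actions and check a safety invariant after every step. Everything below is hypothetical (`ToyEHR`, `explore`, and the seeded bug are all illustrative, not any real EHR or Eggplant’s actual tooling), but the planted defect deliberately echoes the failures described above, where notes and punctuation-laden text ended up on the wrong patient’s record.

```python
import random

class ToyEHR:
    """A deliberately buggy toy system under test (purely illustrative)."""

    def __init__(self):
        self.charts = {}  # patient_id -> list of (intended_patient, note_text)

    def register(self, patient_id):
        self.charts.setdefault(patient_id, [])

    def add_note(self, patient_id, text):
        # Planted bug, in the spirit of the punctuation failures above:
        # notes containing a "/" are silently filed on the first chart.
        target = patient_id
        if "/" in text and self.charts:
            target = sorted(self.charts)[0]
        self.charts[target].append((patient_id, text))


def explore(system, steps=1000, seed=0):
    """Random-walk exploratory test: perform random actions, then check
    the invariant 'every note sits on the chart it was written for'
    after each step. Returns a description of the first violation,
    or None if the walk completes cleanly."""
    rng = random.Random(seed)
    patients = ["p1", "p2", "p3"]
    for p in patients:
        system.register(p)
    for step in range(steps):
        p = rng.choice(patients)
        text = rng.choice(["bp 120/80", "stable", "follow up"])
        system.add_note(p, text)
        for chart_id, notes in system.charts.items():
            for intended, _ in notes:
                if intended != chart_id:
                    return f"step {step}: note for {intended} filed on chart {chart_id}"
    return None


print(explore(ToyEHR()))
```

No human wrote a test case for "a note containing a slash"; the random walk stumbles into it within a handful of steps and reports exactly which action sequence exposed it. That is the essence of the approach: the machine explores the workflow space that clinicians have no time to cover by hand.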
So if intelligent automated testing was the way to go, what might have prevented it from happening?
One clue lies in the speed with which the systems had to be deployed. Thanks in part to the political will that lay behind the project, it appears that EHRs were rolled out far more quickly than many felt prudent. It was just too much too soon. And while test automation delivers a massive resource saving in the medium to long term, it often requires a short-term spike in investment in time and resources.
Another potential issue was the requirement to test the interaction between applications and the fact that many of the issues were with usability as opposed to function. Image-based testing—testing through the eyes of the end user—has come a long way over the past few years. This kind of approach makes it possible to test any software or device in the hospital, and do true end-to-end testing to ensure all systems are interoperable. It also means we can test for usability issues automatically. These fall well within our current capability at Eggplant, but some of this technology is still relatively new. The kind of testing that we can do now simply might not have been available when the EHR applications were first developed.
Ultimately, our lives are in the hands of software applications just as much as they are in the hands of medical professionals. If this story tells us anything, it’s that this software needs to be tested to the same rigorous standards that apply to those professionals.
Learn more about how we’re working with the healthcare sector to keep patients safe.