A new method that uses machine learning could help doctors and nurses in hospital intensive care units distinguish false alarms from real medical issues.
False alarms in ICUs are all too common. They consume valuable time and increase the risk that medical personnel will overlook real alarms in the flood of false ones.
As part of a feasibility study within a data science project called ICU Cockpit, researchers used comprehensive intensive care data recordings. With patients’ consent, scientists systematically stored vital signs at high temporal resolution along with any alarms.
As is generally the case in ICUs, the various devices for circulatory monitoring, artificial ventilation, and brain monitoring work independently of each other. Consequently, the devices each sound their own alarm whenever their readings go above or below a certain threshold value.
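The article does not describe the devices' internal logic, but independent threshold alarms of this kind can be sketched as follows. The signal names and limit values below are illustrative assumptions, not actual clinical thresholds:

```python
# Hypothetical sketch: each monitoring device alarms on its own
# whenever a reading crosses a fixed threshold, with no cross-device logic.
THRESHOLDS = {  # illustrative limits only, not clinical values
    "heart_rate": (40, 140),              # beats per minute
    "spo2": (90, 100),                    # oxygen saturation, percent
    "mean_arterial_pressure": (60, 110),  # mmHg
}

def device_alarms(readings):
    """Return one alarm per out-of-range reading, device by device."""
    alarms = []
    for signal, value in readings.items():
        low, high = THRESHOLDS[signal]
        if value < low or value > high:
            alarms.append((signal, value))
    return alarms

# Two readings out of range -> two separate, uncoordinated alarms.
print(device_alarms({"heart_rate": 150, "spo2": 95,
                     "mean_arterial_pressure": 55}))
```

Because each device checks only its own signal, transient artifacts (a loose sensor, patient movement) trip alarms that a view across all signals could rule out.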
Researchers combined and synchronized the data from these various devices and then applied machine learning techniques to identify which alarms were irrelevant from a medical standpoint.
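The paper's exact synchronization procedure is not given here, but aligning readings from independently clocked devices typically means matching each timestamp on one device to the temporally nearest sample from another, within some tolerance. A minimal stdlib sketch, with made-up timestamps:

```python
from bisect import bisect_left

def align_nearest(base_times, other_times, other_values, tolerance):
    """For each base timestamp, pick the temporally nearest reading
    from another device, or None if nothing lies within `tolerance`.
    Both time lists must be sorted ascending."""
    aligned = []
    for t in base_times:
        i = bisect_left(other_times, t)
        # the nearest neighbor is either just before or just after t
        candidates = [j for j in (i - 1, i) if 0 <= j < len(other_times)]
        best = min(candidates, key=lambda j: abs(other_times[j] - t))
        if abs(other_times[best] - t) <= tolerance:
            aligned.append(other_values[best])
        else:
            aligned.append(None)
    return aligned

# Monitor A samples at t = 0, 2, 4 s; ventilator B at t = 0.5, 3.9 s.
print(align_nearest([0, 2, 4], [0.5, 3.9], ["b0", "b1"], tolerance=1.0))
```

Once the streams share a common time axis, features spanning all devices can be fed to a classifier instead of judging each signal in isolation.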
COMPUTER, TEACH THYSELF
“Usually, before a computer can start learning, humans first need to have categorized a certain number of alarms as relevant or nonrelevant,” says Walter Karlen, professor of mobile health systems at ETH Zurich.
“Computer systems can then use this information to understand the principle behind the classification and ultimately categorize alarms themselves.”
However, having someone classify alarms in intensive care is a never-ending task. And because someone has to do it for each patient individually, medical personnel treating patients wouldn’t have the time to teach a computer, too.
This means the ideal system for use in an ICU is one that can teach itself even if nurses or doctors have classified only a small number of alarms. That’s where the new method really comes into its own.
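Learning from a small labeled set plus many unlabeled examples is the semi-supervised setting; one classic approach is self-training, where the model iteratively adopts its most confident predictions as new labels. The sketch below is a toy illustration of that general pattern, not the authors' actual algorithm; the features, labels, and nearest-centroid classifier are all invented for the example:

```python
# Toy self-training sketch: start from a handful of manually labeled
# alarms, fit a trivial nearest-centroid classifier, then repeatedly
# label the unlabeled alarm the model is most confident about.

def centroid(points):
    return tuple(sum(xs) / len(xs) for xs in zip(*points))

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def self_train(labeled, unlabeled, rounds):
    labeled = dict(labeled)        # feature tuple -> "real" / "false"
    unlabeled = list(unlabeled)
    for _ in range(rounds):
        if not unlabeled:
            break
        centers = {c: centroid([x for x, y in labeled.items() if y == c])
                   for c in ("real", "false")}
        # confidence = margin between distances to the two centroids
        best = max(unlabeled,
                   key=lambda x: abs(dist(x, centers["real"])
                                     - dist(x, centers["false"])))
        label = min(centers, key=lambda c: dist(best, centers[c]))
        labeled[best] = label      # adopt the confident prediction
        unlabeled.remove(best)
    return labeled

# Two hand-labeled alarms, three unlabeled ones (invented 2-D features).
seed = [((0.9, 0.8), "real"), ((0.1, 0.2), "false")]
pool = [(0.85, 0.9), (0.15, 0.1), (0.5, 0.5)]
print(self_train(seed, pool, rounds=3))
```

For real feature vectors, scikit-learn's `SelfTrainingClassifier` wraps any probabilistic classifier in the same loop; the key property, as in the study, is that only the small seed set needs human labels.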
AN ALARM EVERY 2 MINUTES
Scientists tested the method using a small data set from the Zurich neurocritical care unit: records of the vital signs and alarms for 14 patients over a period of several days.
On average, medical devices sounded the alarm almost 700 times per patient per day; in other words, roughly every two minutes. Although only 1,800 (13 percent) of the data set's total of 14,000 alarms were classified manually, the algorithm was able to categorize the remaining alarms as real or false. When the scientists allowed the system an error rate of 5 percent, it reduced the number of false alarms by 77 percent.
The scientists also demonstrated that the method even works with a significantly lower degree of manual help: all it took was 25 or 50 manual classifications for the system to flag a large number of alarms as false.
The researchers presented their findings at the 35th International Conference on Machine Learning in Stockholm.