Morning Briefing
Summaries of health policy coverage from major news organizations
Could Alexa Be Trained To Recognize Gasping Sounds Associated With A Cardiac Event?
When someone’s heart stops beating, there is little time to waste. Half of the people hit by cardiac arrest are outside a hospital, and more than 90% of them die unless they are lucky enough to be near a bystander who can start CPR or call 911. What if the bystander was a smartphone or a digital assistant like Amazon’s Alexa? Researchers from the University of Washington tested that idea, training a tool that lets such devices recognize the gasping sounds, called agonal breathing, that about half of people make shortly after cardiac arrest. Their proof-of-concept study appears Wednesday in NPJ Digital Medicine. (Cai, 6/19)
Artificial intelligence is often hailed as a great catalyst of medical innovation, a way to find cures to diseases that have confounded doctors and make health care more efficient, personalized, and accessible. But what if it turns out to be poison? Jonathan Zittrain, a Harvard Law School professor, posed that question during a conference in Boston Tuesday that examined the use of AI to accelerate the delivery of precision medicine to the masses. ...In health care, Zittrain said, AI is particularly problematic because of how easily it can be duped into reaching false conclusions. As an example, he showed an image of a cat that a Google algorithm had correctly categorized as a tabby cat. On the next slide was a nearly identical picture of the cat, with only a few pixels changed, and Google was 100 percent positive that the image on the screen was guacamole. (Ross, 6/19)