John McQuaid, Author at KFF Health News

Training Dr. Robot: Google and Amazon technology comes to medical care /news/entrenando-al-dr-robot-tecnologia-de-google-y-amazon-llega-a-la-atencion-medica/ Wed, 14 Feb 2018 19:47:03 +0000 https://khn.org/?p=817052

The technology used by Facebook, Google and Amazon to turn spoken language into text, recognize faces and manage advertising strategies could help doctors fight one of the deadliest killers in American hospitals.

Clostridium difficile (C-diff), a deadly bacterium transmitted by physical contact with infected objects or people, spreads easily in hospitals, causing 453,000 cases and 29,000 deaths a year in the United States, according to a study published in the New England Journal of Medicine. Traditional strategies such as promoting hygiene and posting warning signs generally fail to stop it.

But what if it were possible to identify the vulnerable patients whom C-diff will strike? Erica Shenoy, an infectious-disease specialist at Massachusetts General Hospital, and Jenna Wiens, a computer scientist and assistant professor of engineering at the University of Michigan, attempted just that when they developed a way to predict a patient's risk of developing a C-diff infection, or CDI. According to the researchers, this method, which uses patients' vital signs and other health records and is still experimental, should become part of hospital routines.

The CDI algorithm, based on a form of artificial intelligence called machine learning, is ready to move into the real world, said Zeeshan Syed, who directs Stanford University's Clinical Inference and Algorithms Program.

Machine learning (ML) relies on artificial neural networks that mimic the way animal brains learn.

For example, it can reproduce how a fox maps new terrain, responding to smells, sights and noises, and how it continually adapts and refines its behavior to maximize the odds of finding its next meal.

Shenoy and Wiens' CDI algorithm analyzed a data set of 374,000 inpatient admissions to Massachusetts General Hospital and the University of Michigan Health System, seeking connections between CDI cases and the circumstances behind them.

The records contained more than 4,000 distinct variables. "We have data on everything from lab results to what bed the patient is in, who is in the next bed and whether they are infected. We included all medications, test results and diagnoses. And we collected this information daily," Wiens explained. "We wanted to capture how risk evolves," she added.

As the system repeatedly analyzes this data, it extracts warning signs of disease that doctors may miss: constellations of symptoms, circumstances and details of medical history most likely to lead to infection at some point in the hospital stay.

Such algorithms, now common in internet commerce and finance, remain largely untested in medicine and health care. In the United States, the transition from written to electronic medical records has been slow, and the format and quality of the data still vary by health system and medical practice, creating obstacles for computer scientists.

But the power of these technologies has grown exponentially even as they have become cheaper. Creating a machine learning algorithm once required networks of computers; now it can be done on a laptop.

Machine learning algorithms can now reliably diagnose certain conditions, in some cases from photographs alone, and predict the onset of others.

Lily Peng, a research scientist at Google, led a team that developed a machine learning algorithm to diagnose a patient's risk of diabetic retinopathy from a retinal scan.

Last year, the Food and Drug Administration (FDA) approved the first medical machine learning algorithm for commercial use, from the San Francisco company Arterys. Its algorithm, "DeepVentricle," performs in 30 seconds a task doctors usually do by hand: drawing the contours of the ventricles from multiple MRI images of the heart muscle in motion, in order to calculate the volume of blood passing through. That normally takes an average of 45 minutes. "It's automating something that is important, and tedious to do," said Carla Leibowitz, Arterys' head of strategy and marketing.

If adopted on a broad scale, these technologies could save a great deal of time and money. But the change could also be disruptive.

"The fact that we have identified potential ways to strip out costs is good news. The problem is that the people who may be displaced are not going to like it, so there will be resistance," said Eric Topol, director of the Scripps Research Translational Institute. "It undercuts how radiologists do their work. Their main task is reading scans: what happens when they no longer have to do that?"

The shift may not put many doctors out of work, said Topol, who co-authored a paper exploring the issue. Rather, it will likely push them to find new ways to apply their expertise. For instance, they may focus on more challenging diagnoses where algorithms still fall short, or interact more with patients.

Beyond this frontier, algorithms may provide a more precise prognosis for the course of a disease, which could reshape the treatment of progressive conditions or address the uncertainties of end-of-life care. They can anticipate fast-moving infections like CDI and chronic ailments such as heart failure, allowing earlier interventions and reducing the cost of illness.

But despite the scientific promise, machine learning in medicine remains uncharted territory in many ways. For one, it adds a new voice, the voice of the machine, to key medical decisions. Doctors and patients will take time to get used to that.

"It will make a big difference in how medical decisions are made: they will be much more data-driven than they used to be," said John Guttag, a professor of computer science at the Massachusetts Institute of Technology. Doctors will rely on these increasingly complex tools to make decisions, but they "have no idea how they work." And in some cases it will be hard to figure out why bad advice was given.

KFF Health News is a national newsroom that produces in-depth journalism about health issues and is one of the core operating programs at KFF, an independent source of health policy research, polling, and journalism. Learn more about KFF.

USE OUR CONTENT

This story can be republished for free (details).

The Training Of Dr. Robot: Data Wave Hits Medical Care /news/the-training-of-dr-robot-data-wave-hits-medical-care/ Wed, 14 Feb 2018 10:00:07 +0000 https://khn.org/?p=811922

The technology used by Facebook, Google and Amazon to turn spoken language into text, recognize faces and target advertising could help doctors combat one of the deadliest killers in American hospitals.

Clostridium difficile, a deadly bacterium spread by physical contact with objects or infected people, thrives in hospitals, causing 453,000 cases a year and 29,000 deaths in the United States, according to a study in the New England Journal of Medicine. Traditional methods such as monitoring hygiene and warning signs often fail to stop the disease.

But what if it were possible to systematically target those most vulnerable to C-diff? Erica Shenoy, an infectious-disease specialist at Massachusetts General Hospital, and Jenna Wiens, a computer scientist and assistant professor of engineering at the University of Michigan, did just that when they developed a method to predict a patient’s risk of developing a C-diff infection, or CDI. Using patients’ vital signs and other health records, this method — still in an experimental phase — is something both researchers want to see integrated into hospital routines.

The CDI algorithm — based on a form of artificial intelligence called machine learning — is at the leading edge of a technological wave starting to hit the U.S. health care industry. After years of experimentation, machine learning’s predictive powers are well-established, and it is poised to move from labs to broad real-world applications, said Zeeshan Syed, who directs Stanford University’s Clinical Inference and Algorithms Program.

“The implications of machine learning are profound,” Syed said. “Yet it also promises to be an unpredictable, disruptive force — likely to alter the way medical decisions are made and put some people out of work.”

Machine learning (ML) relies on artificial neural networks that roughly mimic the way animal brains learn.

As a fox maps new terrain, for instance, responding to smells, sights and noises, it continually adapts and refines its behavior to maximize the odds of finding its next meal. Neural networks map virtual terrains of ones and zeroes. A machine learning algorithm programmed to identify images of coffee cups might compare photos of random objects against a database of coffee cup pictures; by examining more images, it systematically learns the features to make a positive ID more quickly and accurately.
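The learning loop described above can be sketched in a few lines. The sketch below trains a single artificial neuron (a perceptron, the simplest building block of the neural networks mentioned here) on made-up "coffee cup" features; the features, data and weights are all invented for illustration, not drawn from any real system.

```python
# A single artificial neuron that learns to label toy "images" as
# coffee cup or not, from two invented features. Each wrong guess
# nudges the weights a little, the same trial-and-error refinement
# the fox analogy describes.

def train_perceptron(examples, labels, epochs=20, lr=0.1):
    """Adjust weights after every mistake over repeated passes."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            pred = 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0
            err = y - pred                 # 0 if correct, +/-1 if wrong
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0

# Toy features: (roundness, handle-likeness); label 1 = coffee cup.
data = [(0.9, 0.8), (0.8, 0.9), (0.2, 0.1), (0.1, 0.3)]
labels = [1, 1, 0, 0]
w, b = train_perceptron(data, labels)
```

With more examples, the same loop systematically sharpens the boundary between "cup" and "not cup," which is the sense in which the algorithm learns the features for a positive ID.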

Shenoy and Wiens’ CDI algorithm analyzed a data set from 374,000 inpatient admissions to Massachusetts General Hospital and the University of Michigan Health System, seeking connections between cases of CDI and the circumstances behind them.

The records contained over 4,000 distinct variables. “We have data pertaining to everything from lab results to what bed they are in, to who is in the bed next to them and whether they are infected. We included all medications, labs and diagnoses. And we extracted this on a daily basis,” Wiens said. “You can imagine, as the patient moves around the hospital, risk evolves over time, and we wanted to capture that.”

As it repeatedly analyzes this data, the ML process extracts warning signs of disease that doctors may miss — constellations of symptoms, circumstances and details of medical history most likely to result in infection at any point in the hospital stay.
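As a hedged illustration of how a daily risk score of this kind could work, the sketch below turns each day's record into a probability with a logistic function. The feature names and weights are invented for the sketch and are not taken from the Shenoy-Wiens model.

```python
import math

WEIGHTS = {                       # hypothetical, hand-picked weights
    "recent_antibiotics": 1.2,
    "neighbor_infected": 0.9,
    "days_in_hospital": 0.05,     # per day
    "abnormal_labs": 0.7,
}
BIAS = -3.0

def daily_risk(record):
    """Map one day's record to a probability via a logistic function."""
    score = BIAS
    for feature, weight in WEIGHTS.items():
        score += weight * record.get(feature, 0)
    return 1.0 / (1.0 + math.exp(-score))

# Risk is re-evaluated each day as the patient's circumstances change.
stay = [
    {"days_in_hospital": 1},
    {"days_in_hospital": 2, "recent_antibiotics": 1},
    {"days_in_hospital": 3, "recent_antibiotics": 1, "neighbor_infected": 1},
]
risks = [daily_risk(day) for day in stay]
```

Recomputing the score daily is what lets a model of this shape capture evolving risk, as Wiens describes, rather than a single admission-time snapshot.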

Such algorithms, now commonplace in internet commerce, finance and self-driving cars, are relatively untested in medicine and health care. In the U.S., the transition from written to electronic health records has been slow, and the format and quality of the data still vary by health system — and sometimes down to the medical practice level — creating obstacles for computer scientists.

But other trends are proving inexorable: Computing power has grown exponentially while getting cheaper. Once, creating a machine learning algorithm required networks of mainframe computers; now it can be done on a laptop.

Radiology and pathology will experience the changes first, experts say. Machine learning programs will most easily handle analyzing images. X-rays and MRI, PET and CT scans are, after all, masses of data. By crunching the data contained in thousands of existing scan images along with the diagnoses doctors have made from them, algorithms can distill the collective knowledge of the medical establishment in days or hours. This enables them to duplicate or surpass the accuracy of any single doctor.

Machine learning algorithms can now reliably diagnose certain conditions, in some cases from photographs alone, and predict the onset of others.

Google research scientist Lily Peng, a physician, led a team that developed a machine learning algorithm to diagnose a patient’s diabetic retinopathy from a retinal scan. DR, a common side effect of diabetes, can lead to blindness if left untreated. The worldwide rise in diabetes rates has turned DR into a global health problem, with the number of cases projected to grow from 126.6 million in 2011 to 191 million by 2030, an increase of nearly 51 percent. Its presence is indicated by increasingly muddy-looking scan images.
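The growth figure quoted above, 126.6 million cases rising to 191 million, can be checked directly:

```python
# Fractional growth from 126.6 million (2011) to 191 million (2030).
increase = (191.0 - 126.6) / 126.6
pct = round(increase * 100)   # rounds to 51 percent
```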

Peng’s team gathered 128,000 retinal scans from hospitals in India and the U.S. and assembled a team of 54 ophthalmologists to grade them on a 5-point scale for signs of the disease. Multiple doctors reviewed each image to average out individual differences of interpretation.

Once “trained” on an initial data set with the diagnoses, the algorithm was tested on another set of data — and there it slightly exceeded the collective performance of the ophthalmologists.
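The label-averaging step can be sketched as follows. The grades below are invented, but the idea matches the description: several graders score each scan on the 5-point scale, and their scores are combined into one training label to average out individual differences of interpretation.

```python
from statistics import mean

def consensus(grades):
    """Average multiple graders' 1-5 scores into one integer label."""
    return round(mean(grades))

# Each inner list holds one scan's grades from several ophthalmologists
# (illustrative values, not real study data).
scan_grades = [[2, 3, 2], [5, 4, 5], [1, 1, 2]]
labels = [consensus(g) for g in scan_grades]   # one label per scan
```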

Now Peng is working on applying this tool in India, where a chronic shortage of ophthalmologists means DR often goes undiagnosed and untreated until it’s too late to save a patient’s vision. (This is also a problem in the U.S., where 38 percent of adult diabetes patients do not get the recommended annual eye check for the disease, according to the Centers for Disease Control.)

A group of Indian hospitals is now testing the algorithm. Ordinarily, a scan is done, and a patient may wait days for results after a specialist — if available — reads the image. The algorithm, via software running on hospital computers, makes the results available immediately and a patient can be referred to treatment.

Last year, the Food and Drug Administration approved the first medical machine learning algorithm for commercial use by the San Francisco company Arterys. Its algorithm, “DeepVentricle,” performs in 30 seconds a task doctors typically do by hand — drawing the contours of ventricles from multiple MRI scans of the heart muscle in motion, in order to calculate the volume of blood passing through. That takes an average of 45 minutes. “It’s automating something that is important — and tedious,” said Carla Leibowitz, Arterys’ head of strategy and marketing.
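The volume calculation that contour-drawing feeds into can be illustrated with the shoelace formula: each traced contour yields a slice area, and summing area times slice thickness across the stack approximates the blood volume. This is a sketch of the general technique under toy inputs (square contours), not Arterys' actual method.

```python
def polygon_area(points):
    """Shoelace formula for the area enclosed by a traced contour."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def ventricle_volume(contours, slice_thickness_cm):
    """Stack slice areas (cm^2) into a volume (cm^3, i.e. mL)."""
    return sum(polygon_area(c) for c in contours) * slice_thickness_cm

# Two fake slices, each a 4 cm x 4 cm square contour, 1 cm apart.
square = [(0, 0), (4, 0), (4, 4), (0, 4)]
vol = ventricle_volume([square, square], 1.0)
```

Repeating this over the scans of a full heartbeat is what turns hand-drawn (or machine-drawn) contours into the blood-flow numbers doctors need.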

If adopted on a broad scale, such technologies could save lots of time and money. But such change is disruptive.

“The fact that we have identified potential ways to gut out costs is good news. The problem is the people who get gutted are not going to like it — so there will be resistance,” said Eric Topol, director of the Scripps Research Translational Institute. “It undercuts how radiologists do their work. Their primary work is reading scans — what happens when they don’t have to do that?”

The shift may not put a lot of doctors out of work, said Topol, who co-authored a paper exploring the issue. Rather, it will likely push them to find new ways to apply their expertise. They may focus on more challenging diagnoses where algorithms continue to fall short, for instance, or interact more with patients.

Beyond this frontier, algorithms can provide a more precise prognosis for the course of a disease — potentially reshaping treatment of progressive ailments or addressing the uncertainties in end-of-life care. They can anticipate fast-moving infections like CDI and chronic ailments such as heart failure.

As the U.S. population ages, heart failure will be a rising burden on the health system and on families.

“It’s the most expensive single disease as a category because of the extreme disability it causes and the high demand for care it imposes, if not managed really tightly,” said Walter “Buzz” Stewart, vice president and chief research officer at Sutter Health, a health system in Northern California. “If we could predict who was going to get it, perhaps we could begin to intervene much earlier, maybe a year or two years earlier than when it usually happens — when we admit a patient to the hospital after a cardiac event or crash.”

Stewart has collaborated on several studies aiming to address that problem. One resulting model predicts whether a patient will develop heart failure within six months, based on 12 to 18 months of outpatient medical records.
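One plausible framing of that setup (an assumption for illustration, not the study's actual pipeline) is to treat the 12 to 18 months of records before a cutoff date as the feature window and ask whether heart failure appears in the six months after it:

```python
from datetime import date, timedelta

def split_records(records, cutoff, window_days=540, horizon_days=180):
    """Return (feature window, outcome window) around a cutoff date.

    window_days=540 is roughly 18 months of history; horizon_days=180
    is the six-month prediction horizon (both illustrative choices).
    """
    features = [r for r in records
                if cutoff - timedelta(days=window_days) <= r["date"] < cutoff]
    outcomes = [r for r in records
                if cutoff <= r["date"] < cutoff + timedelta(days=horizon_days)]
    return features, outcomes

# Hypothetical outpatient records for one patient.
records = [
    {"date": date(2017, 1, 10), "code": "hypertension"},
    {"date": date(2017, 11, 3), "code": "edema"},
    {"date": date(2018, 3, 20), "code": "heart_failure"},
]
feats, outs = split_records(records, date(2018, 1, 1))
label = any(r["code"] == "heart_failure" for r in outs)
```

A model trained on many such (feature window, label) pairs is what would let doctors intervene a year or two before the cardiac event that usually triggers admission.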

These tools, Stewart said, are leading to the “mass customization of health care.” Once algorithms can anticipate incipient stages of conditions like heart failure, doctors will be better able to offer treatments tailored to the patient’s circumstances.

Despite its scientific promise, machine learning in medicine remains terra incognita in many ways. It adds a new voice — the voice of the machine — to key medical decisions, for instance. Doctors and patients may be slow to accept that. Adding to potential doubts, machine learning is often a black box: Data go in, and answers come out, but it’s often unclear why certain patterns in a patient’s data point, say, to an emerging disease. Even the scientists who program neural networks often don’t understand how they reach their conclusions.

“It’s going to make a big difference in how decisions are made — things will become much more data-driven than they used to be,” said John Guttag, a professor of computer science at MIT. Doctors will rely on these increasingly complex tools to make decisions, he said, and “have no idea how they work.” And, in some cases, it will be hard to figure out why bad advice was given.

And while health data are proliferating, the quantity, quality and format vary by institution, and that affects what the algorithms “learn.”

“That is a huge issue with modeling and electronic health records,” Sun said. “Because the data are not curated for research purposes. They are collected as a byproduct of care in day-to-day operations, and utilized mainly for billing and reimbursement purposes. The data is very, very noisy.”

This also means that data may be inconsistent, even in an individual patient’s records. More important, one size does not fit all: An algorithm developed with data from one hospital or health system may not work well for another. “So you need models for different institutions, and the models become quite fragile, you might put it,” Sun said. He is working on a National Institutes of Health grant studying how to develop algorithms that will work across institutions.

And the tide of available medical data continues to rise, tantalizing scientists. “Think about all the data we are collecting right now,” Wiens said. “Electronic health records. Hospitalizations. At outpatient centers. At home. We are starting to collect lots of data on personal monitors. These data are valuable in ways we can’t yet know.”


