AI Cardiologist Aces Its First Medical Exam

A neural network outperforms human cardiologists in a task involving heart scans

Rima Arnaout wants to be clear: The AI she created to analyze heart scans, which easily outperformed human experts on its task, is not ready to replace cardiologists.

It was a limited task, she notes, just the first step in what a cardiologist does when evaluating an echocardiogram (the image produced by bouncing sound waves off the heart). “The best technique is still inside the head of the trained echocardiographer,” she says.

But with experimental artificial intelligence systems making such rapid progress in the medical realm, particularly on tasks involving medical images, Arnaout does see the potential for big changes in her profession. And when her 10-year-old cousin expressed the desire to be a radiologist when she grows up, Arnaout had some clear advice: “I told her that she should learn to code,” she says with a laugh.

Arnaout, an assistant professor and practicing cardiologist at UC San Francisco, is keeping up with the times through her research in computational medicine; she published this new study in the journal npj Digital Medicine.

In the study, Arnaout and her colleagues used deep learning, specifically something called a convolutional neural network, to train an AI system that can classify echocardiograms according to the type of view shown.

This classification is a cardiologist’s first step when examining an image of the heart. Because the heart is such a complex structure—it’s an asymmetrical organ with four chambers, four valves, and blood constantly flowing in and out through several vessels—echocardiographers take videos from many different positions. When the doctors are ready to analyze those videos, they first have to figure out which view they’re looking at and which anatomical features they can see.

Typically the cardiologist would look at a relatively high-resolution video of the echocardiogram, showing a shifting image captured as the imaging tool was moved around the patient’s chest. But the AI had a much harder task: it was given still images taken from video clips, each shrunk down to just 60 by 80 pixels.
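That downscaling step can be sketched in a few lines. The function below is an illustrative assumption, not the study's actual pipeline: it takes a single grayscale frame pulled from an echo video and shrinks it to the 60-by-80-pixel size described above, using simple nearest-neighbor sampling with NumPy.

```python
import numpy as np

def to_model_input(frame: np.ndarray, out_h: int = 60, out_w: int = 80) -> np.ndarray:
    """Downscale one grayscale echo frame to a 60x80 still.

    `frame` is a 2-D array (height x width) of pixel intensities.
    Nearest-neighbor sampling is used here for simplicity; the
    study's own resizing method may differ.
    """
    h, w = frame.shape
    # Map each output row/column back to a source row/column index.
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    # np.ix_ builds the row/column grid so fancy indexing picks
    # out the sampled pixels in one step.
    return frame[np.ix_(rows, cols)]

# Example: a simulated 600x800 still taken from a video clip.
frame = np.random.rand(600, 800)
small = to_model_input(frame)
print(small.shape)  # (60, 80)
```

Throwing away that much resolution is what makes the task "much harder" for the network: at 60 by 80 pixels, many of the fine anatomical cues a human reader relies on are simply gone.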