Analyzing Heart Sounds
Daniel Leeds, Zeeshan Hassan Syed, Dorothy Curtis & John Guttag

Introduction

Heart auscultation, the process of listening to the heart via stethoscope, is commonly employed to detect potential signs of heart problems. During a standard checkup, the physician uses auscultation to decide whether to refer a patient to a cardiologist for further examination. Unfortunately, this task is made difficult by the low intensity of pathological sounds, as well as by their similarity to benign sounds. While specialists can appreciate these acoustic nuances, primary care physicians often cannot. In fact, 87% of patients currently referred for suspected heart problems are actually healthy. These false referrals cost significant time and money, with a careful diagnosis requiring $300 to $1000.

We aim to provide primary care practitioners with computer-based assistance for observing and analyzing heart sounds. We create visualizations of acoustic data and a program to recognize signs of Mitral Regurgitation (MR), which is indicative of common heart problems. We have also developed a tool to train new doctors in auscultation.

Approach

Our system is intended to run on standard computer platforms already present in doctors' offices. It relies on data from an electronic stethoscope and from a one-lead EKG, recorded concurrently. As in an ordinary checkup, the physician places the stethoscope on the patient so that both the physician and the computer can listen. Our software development used data collected from about 100 patients at the Massachusetts General Hospital (MGH), under the aegis of Dr. Robert Levine. We represent each recording (taken from one patient at one location) by its "prototypical beat," constructed through a process designed by Zeeshan Syed. A prototypical beat expresses the acoustic energy in four pre-defined frequency bands across the course of systole, averaged across most of the beats in the recording [2].
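The prototypical-beat construction described above can be sketched as follows. This is a minimal illustration, not the actual MIT system: it assumes beats have already been segmented (e.g., using R-peaks from the concurrent EKG), and the function name, band edges, and window count are all hypothetical choices for the sketch.

```python
import numpy as np

def prototypical_beat(beats, fs, bands, n_windows=50):
    """Average per-band acoustic energy across beats (illustrative sketch).

    beats: list of 1-D arrays, one per segmented systolic interval
           (segmentation from the concurrent EKG is assumed already done).
    fs:    sampling rate in Hz.
    bands: list of (low, high) frequency edges in Hz, e.g. four bands.
    Returns an array of shape (len(bands), n_windows): energy per band
    at each normalized position in systole, averaged over the beats.
    """
    proto = np.zeros((len(bands), n_windows))
    for beat in beats:
        # Split the beat into n_windows equal time slices, so beats of
        # different lengths map onto a common normalized time axis.
        slices = np.array_split(beat, n_windows)
        for w, sl in enumerate(slices):
            spectrum = np.abs(np.fft.rfft(sl)) ** 2
            freqs = np.fft.rfftfreq(len(sl), d=1.0 / fs)
            for b, (lo, hi) in enumerate(bands):
                mask = (freqs >= lo) & (freqs < hi)
                proto[b, w] += spectrum[mask].sum()
    return proto / max(len(beats), 1)
```

Averaging across beats suppresses beat-to-beat noise while preserving murmur energy that recurs at a consistent position in systole, which is what makes the prototypical beat useful for visualization.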
The example below shows a late systolic murmur, requiring further medical attention; the murmur's presence only in the higher frequencies, and the attenuation of these frequencies visible from the y-axis units, demonstrate the utility of this visualization.
As indicated above, our project focuses on detecting MR murmurs. Working from the prototypical beat, we have sought to identify visual features within each frequency band associated with murmur shape, intensity, and location. Following the advice of doctors and of the medical literature, we look for murmurs occurring at least in part in the middle and end of systole.

Progress

We have investigated several methods for diagnosing Mitral Valve Prolapse (MVP), a condition frequently concurrent with MR. Our work has often relied upon Support Vector Machines (SVMs) drawing class boundaries in a feature space. We have defined features both from human-determined measurements (e.g., murmur height) and from statistical characteristics across the recordings (represented by PCA). In addition, we have explored several visual representations of each feature space, particularly through Self-Organizing Maps (SOMs).

Results from our SVM-based classifiers, and from several simple threshold classifiers, have led us to re-examine our goals. As indicated above, we are shifting our diagnostic focus from MVP to MR, and we now carefully compare our results against human ears rather than against the final classification (made by echocardiograms, which cost hundreds of dollars per run). We have noted that even cardiologists often mis-diagnose during auscultation; we seek, at least, to compete with them.

In addition to automated diagnoses, we constructed a prototype of a tool to assist cardiology students in learning to listen for murmurs. This tool has received positive reviews, though it has not been formally introduced into hospital environments.

Future

In the next few months, we plan to re-evaluate our diagnostic approach, possibly changing our features to detect murmurs localized to the beginning of systole (in addition to the murmurs already correctly detected). We also will write a program to recognize and reject noisy EKG and acoustic data as part of preprocessing.
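One possible shape for the planned noise-rejection preprocessing is a beat-consistency screen: beats that do not resemble the recording's ensemble average are likely corrupted by rubbing, speech, or lost sensor contact. The sketch below is an assumption on our part, not the project's actual method, and the correlation threshold is an illustrative value, not a validated one.

```python
import numpy as np

def reject_noisy_beats(beats, min_corr=0.6):
    """Return indices of beats that resemble the ensemble average.

    Illustrative sketch: a beat is kept if its Pearson correlation with
    the mean beat exceeds min_corr (0.6 here is a made-up threshold).
    """
    # Resample all beats to a common length so they can be compared.
    n = min(len(b) for b in beats)
    resampled = np.array([
        np.interp(np.linspace(0, len(b) - 1, n), np.arange(len(b)), b)
        for b in beats
    ])
    template = resampled.mean(axis=0)
    keep = []
    for i, b in enumerate(resampled):
        # Pearson correlation between this beat and the template.
        c = np.corrcoef(b, template)[0, 1]
        if c >= min_corr:
            keep.append(i)
    return keep
```

A screen of this kind would run before the prototypical beat is computed, so that artifact-laden beats do not contaminate the average.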
We plan to publish our results in a few articles.

Research Support

This research was supported by a grant from the Deshpande Center, as well as by Acer Inc., Delta Electronics Inc., HP Corp., NTT Inc., Nokia Research Center, and Philips Research under the MIT Project Oxygen Partnership, and by CIMIT, the Center for the Integration of Medicine and Innovative Technology. Meditron provided the stethoscopes.

References

[1] A. Pease. If the Heart Could Speak. Siemens Webzine, Fall 2001. http://w4.siemens.de/FuI/en/archiv/pof/heft2_01/artikel19/index.html.
[2] Z. H. Syed. MIT Automated Auscultation System. Master's thesis, Massachusetts Institute of Technology, 2003.