Automated interpretation of echocardiography by deep neural networks could support clinical reporting and improve efficiency. Whereas previous studies have evaluated spatial relationships using still-frame images, we aimed to train and test a deep neural network for video analysis that combines spatial and temporal information to automate the recognition of left ventricular regional wall motion abnormalities.

In a series of 10,638 echocardiograms, our view selection model identified 6454 (61%) examinations with sufficient image quality in all standard views. In this training set, 2740 frames were annotated to develop the segmentation model, which achieved a Dice similarity coefficient of 0.756. External validation was performed in 1756 examinations from an independent hospital. A regional wall motion abnormality was observed in 8.9% and 4.9% of examinations in the training and external validation datasets, respectively. The final model recognized regional wall motion abnormalities in the cross-validation and external validation datasets with areas under the receiver operating characteristic curve of 0.912 and 0.891, respectively. In the external validation dataset, the sensitivity was 81.8% and the specificity was 81.6%.
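The metrics reported above have standard definitions: the Dice similarity coefficient measures overlap between a predicted and a reference segmentation mask, while sensitivity and specificity summarize exam-level classification against the ground-truth labels. As a minimal sketch (not the authors' code; the toy masks and labels below are purely illustrative), these can be computed as:

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * intersection / denom if denom else 1.0

def sensitivity_specificity(pred_labels, true_labels):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    pred = np.asarray(pred_labels, dtype=bool)
    true = np.asarray(true_labels, dtype=bool)
    tp = np.sum(pred & true)
    tn = np.sum(~pred & ~true)
    fp = np.sum(pred & ~true)
    fn = np.sum(~pred & true)
    return tp / (tp + fn), tn / (tn + fp)

# Toy 1-D "masks" standing in for a segmentation output and its annotation
pred_mask = [1, 1, 0, 0, 1]
true_mask = [1, 0, 0, 1, 1]
print(round(dice_coefficient(pred_mask, true_mask), 3))  # → 0.667

# Toy exam-level predictions for the presence of a wall motion abnormality
sens, spec = sensitivity_specificity([1, 0, 1, 0], [1, 1, 0, 0])
print(sens, spec)  # → 0.5 0.5
```

In practice these metrics would be computed over full 2-D echocardiographic frames and the complete set of examinations, with the classification threshold chosen from the receiver operating characteristic curve.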

In echocardiographic examinations of sufficient image quality, it is feasible for deep neural networks to automate the recognition of regional wall motion abnormalities using temporal and spatial information from moving images. Further investigation is required to optimize model performance and evaluate clinical applications.

Ref: https://www.ahajournals.org/doi/10.1161/CIRCULATIONAHA.120.047530
