
OR in the News (selected articles)

Tsai, C. H., van der Burgt, J., Vukovic, D., Kaur, N., Demi, L., Canty, D., Wang, A., Royse, A., Royse, C., Haji, K., Dowling, J., Chetty, G., Fontanarosa, D. Automatic deep learning-based pleural effusion classification in lung ultrasound images for respiratory pathology diagnosis. Phys Med 2021;83:38-45

October 16, 2021

Lung ultrasound (LUS) imaging as a point-of-care diagnostic tool for lung pathologies has proven superior to X-ray and comparable to CT, enabling earlier and more accurate real-time diagnosis at the patient’s bedside. The main barrier to widespread use is its dependence on operator training and experience. COVID-19 lung ultrasound findings predominantly reflect a pneumonitis pattern, in which pleural effusion is infrequent. However, pleural effusion is easy to detect and quantify, so it was selected as the subject of this study, which aims to develop an automated system for the interpretation of pleural effusion in LUS.

A LUS dataset collected at the Royal Melbourne Hospital consisted of 623 videos containing 99,209 2D ultrasound images from 70 patients, acquired with a phased-array transducer. A standardized protocol was followed that involved scanning six anatomical regions to provide complete coverage of the lungs for diagnosis of respiratory pathology. This protocol, combined with a deep learning algorithm using a Spatial Transformer Network, provides a basis for automatic pathology classification at the image level.

In this work, the deep learning model was trained using supervised and weakly supervised approaches, which used frame-based and video-based ground-truth labels respectively; the reference standard was expert clinician image interpretation. The two approaches achieved comparable accuracies on the test set of 92.4% and 91.1%, a difference that was not statistically significant. However, the video-based labelling approach requires substantially less effort from clinical experts for ground-truth labelling.
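The distinction between frame-based and video-based labels can be illustrated with a minimal sketch. Under a common multiple-instance assumption (an assumption for illustration, not a detail taken from the paper), a video is labelled positive for pleural effusion if any of its frames is positive, so a clinician only needs to label whole clips rather than every frame. The function name and threshold below are hypothetical:

```python
# Illustrative sketch of video-level (weak) labelling: a clip is
# considered positive for pleural effusion if any frame's predicted
# effusion score crosses a threshold. The 0.5 threshold and the
# function name are assumptions for this example, not from the study.

def frame_to_video_label(frame_scores, threshold=0.5):
    """Aggregate per-frame effusion scores into one video-level label."""
    return int(max(frame_scores) >= threshold)

# A clip in which a few frames show effusion -> video labelled positive
clip_scores = [0.10, 0.20, 0.85, 0.90, 0.30]
print(frame_to_video_label(clip_scores))  # 1

# A clip with no high-scoring frames -> video labelled negative
clear_clip = [0.05, 0.10, 0.20]
print(frame_to_video_label(clear_clip))   # 0
```

This max-style aggregation is one reason video-level annotation is cheaper: a single clip label stands in for thousands of per-frame labels, at the cost of weaker supervision during training.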