Lung ultrasound (LUS) is increasingly used in clinics for diagnosing and monitoring acute and chronic lung diseases due to its low cost and accessibility. LUS works by emitting diagnostic pulses, receiving pressure waves, and converting them into radio frequency (RF) data, which are then processed into B-mode images for radiologists to interpret. However, unlike conventional ultrasound, LUS interpretation is complicated by reverberation physics, caused by the inability of ultrasound to penetrate air and by complex wave behavior in lung tissue. These challenges make interpretation highly dependent on reader expertise that takes years of training to acquire, which limits widespread adoption of LUS despite its potential for high accuracy in skilled hands.
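As background (this is standard ultrasound processing, not the method proposed here), the conversion from RF data to a B-mode image is typically envelope detection followed by log compression. Below is a minimal sketch with NumPy/SciPy; the array shape and dynamic-range value are illustrative assumptions.

```python
import numpy as np
from scipy.signal import hilbert

def rf_to_bmode(rf, dynamic_range_db=60.0):
    """Convert an RF frame (axial samples x scan lines) to a log-compressed B-mode image."""
    # Envelope detection via the analytic signal along the axial (depth) axis.
    envelope = np.abs(hilbert(rf, axis=0))
    # Normalize and log-compress to the chosen dynamic range (values in dB, clipped).
    envelope /= envelope.max() + 1e-12
    bmode_db = 20.0 * np.log10(envelope + 1e-12)
    return np.clip(bmode_db, -dynamic_range_db, 0.0)

# Example: synthetic RF frame of 2048 axial samples by 128 scan lines (assumed sizes).
rf = np.random.randn(2048, 128)
bmode = rf_to_bmode(rf)
print(bmode.shape, bmode.min(), bmode.max())
```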
To address these challenges and democratize LUS as a reliable diagnostic tool, we propose LungNO, a surrogate model that directly reconstructs lung aeration maps from RF data, bypassing the need for indirect interpretation of B-mode images. LungNO uses a Fourier neural operator, which processes RF data efficiently in Fourier space, enabling accurate reconstruction of lung aeration maps up to 2.6 wavelengths deep. From the reconstructed aeration maps, we calculate lung percent aeration, a key clinical metric, offering a quantitative, reader-independent alternative to traditional semi-quantitative LUS scoring methods. Trained primarily on simulated data and fine-tuned with real-world data, LungNO achieves robust performance, demonstrated by an aeration estimation error of less than 10% in ex-vivo swine lung scans.
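To make the two ingredients concrete, here is a hypothetical PyTorch sketch of a 2-D spectral-convolution layer of the kind used in Fourier neural operators, together with a percent-aeration computation from a reconstructed aeration map. The layer sizes, number of retained Fourier modes, and the aeration threshold are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

class SpectralConv2d(nn.Module):
    """One Fourier layer: FFT -> learned mixing of the lowest Fourier modes -> inverse FFT."""
    def __init__(self, in_ch, out_ch, modes1, modes2):
        super().__init__()
        self.modes1, self.modes2 = modes1, modes2
        scale = 1.0 / (in_ch * out_ch)
        self.weights = nn.Parameter(
            scale * torch.randn(in_ch, out_ch, modes1, modes2, dtype=torch.cfloat)
        )

    def forward(self, x):                      # x: (batch, in_ch, H, W)
        x_ft = torch.fft.rfft2(x)              # (batch, in_ch, H, W//2 + 1)
        out_ft = torch.zeros(
            x.shape[0], self.weights.shape[1], x.shape[-2], x.shape[-1] // 2 + 1,
            dtype=torch.cfloat, device=x.device
        )
        # Mix only the retained low-frequency modes; higher modes are truncated.
        out_ft[:, :, :self.modes1, :self.modes2] = torch.einsum(
            "bixy,ioxy->boxy", x_ft[:, :, :self.modes1, :self.modes2], self.weights
        )
        return torch.fft.irfft2(out_ft, s=x.shape[-2:])

def percent_aeration(aeration_map, threshold=0.5):
    """Fraction of pixels classified as aerated in a reconstructed aeration map."""
    return (aeration_map > threshold).float().mean().item() * 100.0

# Example: one RF-derived input channel mapped to a single-channel aeration map.
layer = SpectralConv2d(in_ch=1, out_ch=1, modes1=16, modes2=16)
rf_feats = torch.randn(1, 1, 128, 128)
aeration = torch.sigmoid(layer(rf_feats))
print(f"percent aeration: {percent_aeration(aeration):.1f}%")
```

In a full Fourier neural operator, several such layers are stacked with pointwise linear transforms and nonlinearities; the sketch above shows only the spectral-mixing step and the downstream clinical metric.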
This pilot study demonstrates the potential of directly reconstructing lung aeration maps from RF data, providing a foundation for improving LUS interpretability, reproducibility, and diagnostic utility while making this powerful tool more accessible across clinical settings.
@article{wang2024ultrasound,
  title={Ultrasound Lung Aeration Map via Physics-Aware Neural Operators},
  author={Wang, Jiayun and Ostras, Oleksii and Sode, Masashi and Tolooshams, Bahareh and Li, Zongyi and Azizzadenesheli, Kamyar and Pinton, Giammarco and Anandkumar, Anima},
  year={2024}
}