AI and Tech for Medicine
Artificial Intelligence for Medicine
One of our primary areas of interest is harnessing innovative AI methods, coupled with unique signal processing tools and devices, to tackle real-world health challenges. Through strong collaborations with radiologists and clinicians in Israel and abroad, we identify unmet clinical needs and engage in clinical projects aimed at enhancing early disease detection, minimizing diagnostic errors, assisting physicians in their decision making, and developing improved imaging and diagnostic devices that deliver high quality in challenging settings. To this end, we develop methods that integrate the underlying physics and clinical understanding of the data into our AI and devices. Our focus is on model-based AI techniques that can operate effectively on limited training datasets while producing results that are interpretable and comprehensible. By incorporating domain expertise into our AI methods, we aim to achieve more reliable and meaningful outcomes in healthcare applications.
Our research topics include:
- Multimodal deep learning: combining the information obtained from different imaging modalities in the interest of better diagnosis. For example, physicians use various imaging modalities for the diagnosis of breast cancer (mammography, ultrasound, MRI) or for the diagnosis of Crohn’s disease (capsule endoscopy, MRI). In our research, we aim to integrate the information obtained from these different modalities in order to improve diagnostic accuracy.
- Use of AI methods for the analysis of ultrasound “channel data” (the pre-beamformed RF data received at the ultrasound machine): our objective is to extract important tissue properties that can aid in disease diagnosis and assessment, e.g., help determine whether lesions are benign or malignant or help quantify liver fat. By leveraging model-based AI techniques, we strive to uncover meaningful patterns and features within the ultrasound channel data, enabling more accurate and informative diagnostic capabilities (a simplified beamforming sketch is given after this list for context).
- Use of AI for conversion between imaging modalities: e.g., using deep learning techniques to synthetically convert ultrasound images into semi-CT images. This synthetic conversion has the potential to provide additional insights and expand the diagnostic capabilities in situations where obtaining CT scans may not be feasible or desirable.
- AI-guided ultrasound image acquisition: We are actively exploring AI-guided image acquisition techniques with the aim of addressing the operator-dependency of ultrasound imaging. By leveraging AI guidance, we aim to enhance the consistency and accuracy of ultrasound imaging, ultimately making it more accessible and enabling non-expert sonographers to produce high-quality scans.
- Deep learning for super-resolution vascular ultrasound imaging: applying model-based deep learning methods for the processing of data received from contrast-enhanced ultrasound, in order to create vascular reconstructions with high resolution with applications in cancer diagnosis or inflammatory diseases.
- Use of AI for COVID-19 diagnosis and prediction of outcome, in support of the fight against the global pandemic.
- AI methods for radar imaging of body organs including brain imaging.
- AI methods for radar-based vital sign monitoring: leveraging the power of AI to refine non-contact vital sign monitoring capabilities.
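To make the “channel data” topic above more concrete, the sketch below implements plain delay-and-sum beamforming of pre-beamformed RF channel data in Python/NumPy. It is a minimal, illustrative baseline only, assuming a single zero-degree plane-wave transmit and a linear array; in model-based AI approaches such as learned adaptive beamforming, parts of this pipeline (e.g., the summation weights) are replaced or augmented by a neural network. All function names and parameter values below are illustrative assumptions rather than details of our systems.

```python
# Minimal delay-and-sum (DAS) beamforming sketch for pre-beamformed channel data.
# Illustrative only: a single zero-degree plane-wave transmit and a linear array
# are assumed, and all parameters are made up for the example.
import numpy as np

def das_beamform(channel_data, element_x, pixel_x, pixel_z, fs, c=1540.0):
    """Beamform RF channel data (n_elements x n_samples) onto an image grid."""
    n_elements, n_samples = channel_data.shape
    image = np.zeros((pixel_z.size, pixel_x.size))
    for iz, z in enumerate(pixel_z):
        for ix, x in enumerate(pixel_x):
            # Two-way travel time: plane-wave transmit to depth z, then back to each element.
            rx_dist = np.sqrt((element_x - x) ** 2 + z ** 2)
            delays = (z + rx_dist) / c                               # seconds
            samples = np.clip((delays * fs).astype(int), 0, n_samples - 1)
            # Coherently sum the delayed samples across the aperture.
            image[iz, ix] = channel_data[np.arange(n_elements), samples].sum()
    return image

# Example usage with synthetic data (assumed geometry and sampling rate):
# rf = np.random.randn(64, 2048)
# elem_x = np.linspace(-19e-3, 19e-3, 64)
# img = das_beamform(rf, elem_x,
#                    pixel_x=np.linspace(-20e-3, 20e-3, 128),
#                    pixel_z=np.linspace(5e-3, 50e-3, 256),
#                    fs=40e6)
```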
References
- D. Keidar et al., "COVID-19 Classification of X-ray Images Using Deep Neural Networks", European Radiology, pp. 1-10, May 2021.
- O. Frank et al., "Integrating Domain Knowledge into Deep Networks for Lung Ultrasound with Applications to COVID-19", IEEE Transactions on Medical Imaging, vol. 41, issue 3, pp. 571-581, March 2022.
- A movie illustrating our work in the COVID-19 domain.
- O. Bar-Shira, A. Grubstein, Y. Rapson, D. Suhami, E. Atar, K. Peri-Hanania, R. Rosen, Y. C. Eldar, "Learned Super Resolution Ultrasound for Improved Breast Lesion Characterization", MICCAI 2021.
- B. Luijten, R. Cohen, F. J. de Bruijn, H. A. W. Schmeitz, M. Mischi, Y. C. Eldar, and R. J. G. van Sloun, "Adaptive Ultrasound Beamforming Using Deep Learning", IEEE Transactions on Medical Imaging, vol. 39, issue 12, pp. 3967-3978, December 2020.
- R. J. G. van Sloun, R. Cohen, Y. C. Eldar, "Deep Learning in Ultrasound Imaging", Proceedings of the IEEE, vol. 108, issue 1, pp. 11-29, January 2020.
- T. Sharon, H. Naaman, Y. Eder, and Y. C. Eldar, "Real-Time Quantitative Ultrasound and Radar Medical Imaging", to appear in the 2023 IEEE International Ultrasonics Symposium.
Radar for Medical Applications
Over the past decade, there has been significant advancement in the development of small, robust, and sophisticated millimeter-wave (mmWave) radar systems. This progress has opened up new and exciting opportunities, particularly in healthcare. One significant application of these radar systems is remote monitoring of vital signs. Traditional monitoring devices in clinics and hospitals require medical staff to physically connect patients, which consumes valuable time and increases the risk of disease transmission, especially during pandemics such as COVID-19. In addition, wired sensors can cause discomfort or skin irritation, are sensitive to how they are attached, and can easily become detached. In contrast, radar technology eliminates the need for patient contact, addressing both the inconvenience and the risk of disease transmission.
Our research focuses on developing accurate methods for monitoring heart rate, respiration, and other vital signs while using reduced hardware. The goal is accurate, real-time, non-contact vital sign monitoring of multiple people simultaneously.
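As background for this line of work, the sketch below shows the classical single-person FMCW processing chain in Python/NumPy: a range FFT per chirp, selection of the strongest range bin, unwrapping of its slow-time phase (which encodes chest-wall motion), and spectral peak picking in assumed respiration and heart-rate bands. This is a simplified baseline for illustration only, not our sparsity-based multi-person method; the band limits, frame rate, and function names are assumptions.

```python
# Minimal sketch of classical FMCW vital-sign extraction for a single person.
# Assumptions: one chirp per frame, a single dominant reflector (the chest),
# and illustrative band limits for respiration and heart rate.
import numpy as np

def vital_signs_from_fmcw(frames, frame_rate):
    """frames: (n_frames, n_samples) complex beat signal; frame_rate in Hz."""
    # Range profile per frame (fast-time FFT).
    range_profiles = np.fft.fft(frames, axis=1)
    # Pick the range bin with the largest average energy (assumed single target).
    target_bin = np.argmax(np.mean(np.abs(range_profiles), axis=0))
    # Chest-wall displacement is encoded in the slow-time phase of that bin.
    phase = np.unwrap(np.angle(range_profiles[:, target_bin]))
    phase -= phase.mean()
    # Spectrum of the windowed phase signal over slow time.
    spectrum = np.abs(np.fft.rfft(phase * np.hanning(phase.size)))
    freqs = np.fft.rfftfreq(phase.size, d=1.0 / frame_rate)

    def peak_in_band(lo, hi):
        band = (freqs >= lo) & (freqs <= hi)
        return freqs[band][np.argmax(spectrum[band])]

    respiration_bpm = peak_in_band(0.1, 0.5) * 60   # ~6-30 breaths per minute
    heart_rate_bpm = peak_in_band(0.8, 2.0) * 60    # ~48-120 beats per minute
    return respiration_bpm, heart_rate_bpm

# Example usage with synthetic data (assumed 20 Hz frame rate, 30 s recording):
# frames = np.random.randn(600, 256) + 1j * np.random.randn(600, 256)
# resp_bpm, hr_bpm = vital_signs_from_fmcw(frames, frame_rate=20.0)
```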
In addition to vital sign monitoring, we explore the potential of radar technology for a variety of other medical applications. Our research delves into leveraging radar for non-contact pulmonary function testing, monitoring of drug delivery processes, and radar-based medical imaging (e.g., intracranial imaging).
Examples of our work in this field:
- Y. Eder and Y. C. Eldar, “Sparsity-Based Multi-Person Non-Contact Vital Signs Monitoring Via FMCW Radar”, to appear in IEEE Journal of Biomedical and Health Informatics.
- Y. Eder, Z. Liu and Y. C. Eldar, "Sparse Non-Contact Multiple People Localization and Vital Signs Monitoring Via FMCW Radar," 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece, 2023, pp. 1-5.
- Contactless Vital Sign monitoring – Demo movie – ICASSP 2022 IEEE International Conference: https://youtu.be/VzhkbVaiR4M