Explainable Artificial Intelligence (XAI) has been a prominent topic in the healthcare industry for the past few years, and the period of September-December 2022 was no exception. XAI aims to address the issues of trust and transparency in machine learning algorithms used in clinical decision-making. With the increasing use of AI in medicine, it is crucial to develop models that not only achieve high accuracy but also provide explanations for their decisions, allowing clinicians to make informed, trustworthy choices.
Prediction of oxygen requirement in patients with COVID-19 using a pre-trained chest radiograph xAI model: efficient development of auditable risk prediction models via a fine-tuning approach
This paper presents an approach to COVID-19 risk prediction from chest X-ray images using deep learning. Rather than training a network from scratch, the authors fine-tuned a pre-trained chest radiograph xAI model, combining transfer learning and ensembling techniques to achieve high accuracy. The model was trained on a large dataset of chest X-ray images, including COVID-19 positive cases, COVID-19 negative cases, and other lung diseases.

The reported results show high accuracy, sensitivity, specificity, and area under the curve (AUC) values: the model distinguished between COVID-19 positive and negative cases with an accuracy of 97.56%, a sensitivity of 96.34%, and a specificity of 98.44%.

The use of deep learning in medical image analysis has grown rapidly in recent years, and such models have shown great potential in disease diagnosis, prediction, and treatment. In conclusion, the article highlights the value of deep learning techniques in medical image analysis, especially in the context of COVID-19, and the proposed model could potentially serve as an adjunct diagnostic tool in the fight against COVID-19.
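The fine-tuning approach named in the paper's title is a standard transfer-learning pattern. As a rough illustration of that general recipe (not the authors' actual pipeline), here is a minimal PyTorch sketch; the DenseNet-121 backbone, the binary head, and all hyperparameters are assumptions chosen for the example:

```python
import torch
import torch.nn as nn
from torchvision import models

# Assumption: an ImageNet pre-trained DenseNet-121, a common chest X-ray
# backbone; the paper fine-tunes its own pre-trained chest radiograph model.
backbone = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)

# Swap the classification head for the new target task.
backbone.classifier = nn.Linear(backbone.classifier.in_features, 2)

# Freeze the pre-trained feature extractor so only the new head is trained;
# unfreezing the last dense block is a common variation.
for param in backbone.features.parameters():
    param.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in backbone.parameters() if p.requires_grad), lr=1e-4)
criterion = nn.CrossEntropyLoss()
# ...followed by a standard supervised training loop over the X-ray dataset.
```

Because only a small head is trained on top of features learned elsewhere, far less labeled data is needed than when training from scratch, which is what makes this style of development efficient and easier to audit.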
Explainable AI (XAI) In Biomedical Signal and Image Processing: Promises and Challenges
The advancements in artificial intelligence (AI) over the past few years have led to an increased interest in deep learning (DL) and machine learning (ML) from fields “where multimodal, multidimensional, multiparametric datasets need to be jointly processed”. One such field is biomedical signal and image processing, where AI has been used to overcome issues posed by traditional computing methods. This approach is not without challenges, however: “users frequently express lack of trust with respect to outcomes of such methods”, slowing down the adoption of more complex AI methodology. This phenomenon has led to eXplainable AI (XAI) being employed to help end users accommodate, understand, and begin to trust the outputs of ML and DL algorithms, by associating them with outcomes from known prior studies.
This paper “aims at providing an overview on the main XAI contributions in the biomedical field” by discussing XAI methodology as it applies to biomedicine, presenting both applied examples and the challenges that stand in the way of such methods being fully adopted.
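To make this concrete, gradient-based saliency maps are among the simplest XAI techniques covered by such overviews: they attribute an image model's prediction to individual pixels by differentiating the class score with respect to the input. A minimal sketch, assuming a generic PyTorch classifier (nothing here is specific to the surveyed paper):

```python
import torch

def saliency_map(model: torch.nn.Module, image: torch.Tensor,
                 target_class: int) -> torch.Tensor:
    """Vanilla gradient saliency: |d(class score) / d(pixel)| per pixel."""
    model.eval()
    image = image.clone().requires_grad_(True)  # shape (1, C, H, W)
    score = model(image)[0, target_class]       # scalar score for the class
    score.backward()                            # populates image.grad
    return image.grad.abs().amax(dim=1)         # max over channels -> (1, H, W)
```

Overlaying the resulting map on, say, a radiograph shows which regions drove the prediction, which is exactly the kind of sanity check that helps end users begin to trust a model.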
Explainable Artificial Intelligence Model for Stroke Prediction Using EEG Signal
The article discusses the impact of acute ischemic stroke and intracerebral hemorrhage on the elderly and the importance of assessing the factors that predict cognitive and functional outcomes, both for effective medical decision-making and for rehabilitation. Electroencephalography (EEG) is identified as a useful diagnostic tool for cognitive assessment, and recent advances in wearable devices and real-time biosignal-based patient monitoring systems are highlighted. The article also emphasizes the role of machine learning (ML) and deep learning (DL) in healthcare and the need for Explainable Artificial Intelligence (XAI) to provide transparency and interpretability for ML models. The authors developed an ML model to classify ischemic stroke patients and healthy controls from EEG data and used the Eli5 and LIME methods to explain the model's behavior and to interpret individual predictions locally. The study's key contributions are discussed in detail, and directions for future work are highlighted.
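For context, this is roughly how the two tools named in the paper are typically applied to a tabular classifier. A hedged sketch, assuming a scikit-learn model; the feature names and data below are placeholders standing in for EEG-derived features, not the study's actual setup:

```python
import eli5
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

# Placeholder data standing in for EEG-derived band-power features.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 4))
y_train = rng.integers(0, 2, size=200)
feature_names = ["delta_power", "theta_power", "alpha_power", "beta_power"]

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Global view: Eli5 reports which features the model weights most heavily.
print(eli5.format_as_text(eli5.explain_weights(clf, feature_names=feature_names)))

# Local view: LIME explains a single prediction by fitting an interpretable
# surrogate model in the neighborhood of that instance.
explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names,
    class_names=["control", "stroke"], mode="classification")
explanation = explainer.explain_instance(
    X_train[0], clf.predict_proba, num_features=4)
print(explanation.as_list())  # per-feature contributions for this patient
```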
Explainable AI in Drug Sensitivity Prediction on Cancer Cell Lines
This paper presents eXplainable Artificial Intelligence (XAI) as “a method that can be used in the diagnosis and analysis of drugs”, and more specifically as a tool for improving the models used to predict drug sensitivity on cancer cell lines, by increasing the transparency and accountability of such approaches. Popular XAI methods, including Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), have been instrumental in analyzing Machine Learning (ML) models trained on datasets such as the “Genomics of Drug Sensitivity in Cancer (GDSC) and Cancer Cell Line Encyclopedia (CCLE)”.
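As an illustration of how SHAP is commonly applied to a drug-sensitivity regressor, here is a minimal sketch; the synthetic features standing in for genomic descriptors and the random-forest model are assumptions for the example, not the paper's setup:

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for genomic features and drug-response targets
# (placeholders; the paper works with the GDSC and CCLE datasets).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))
y = 2.0 * X[:, 0] - X[:, 3] + rng.normal(scale=0.1, size=300)

model = RandomForestRegressor(random_state=0).fit(X, y)

# TreeExplainer computes exact Shapley values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape (n_samples, n_features)

# Each row decomposes one prediction into additive per-feature contributions,
# which is what makes the model's behavior transparent and accountable.
print(shap_values[0])
```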
A Survey on XAI for Cyber Physical Systems in Medicine
“Cyber-Physical Systems (CPSs) are complex systems that embed together computers, networks and physical entities, realizing the integrated collaboration of computation, communication and physical systems.” They are used in a variety of domains, from IoT projects to autonomous driving and avionics. Machine Learning (ML) algorithms can be applied to CPSs to overcome challenges such as high development time and cost, as well as complex issues that “knowledge-based solutions could not handle”. However, the lack of transparency inherent in the ML approach can limit adoption in “critical-nature applications”. This has prompted the adoption of eXplainable Artificial Intelligence (XAI) methods to provide causality for ML systems, resulting in increased trust in such models and, in turn, improved model performance and safety.
Application of explainable artificial intelligence for healthcare: A systematic review of the last decade
To conclude this post, this article surveys the use of artificial intelligence (AI) in healthcare, focusing on machine learning (ML) and deep learning (DL) algorithms. The lack of transparency and interpretability in these models limits their acceptance and reliability among healthcare practitioners. To address this issue, Explainable Artificial Intelligence (XAI) is introduced as a set of features that explain how an AI model arrived at its prediction. The article highlights the importance of XAI in healthcare applications and reviews the ongoing work in this area, categorizing the various XAI models and their corresponding healthcare applications. It concludes that adding XAI techniques to ML and DL models will make the use of AI in the clinical field more reliable and acceptable, with room left for improving model performance. Overall, the article emphasizes the importance of XAI in healthcare and its potential to improve the accuracy and transparency of AI models in clinical decision-making.
Bibliography:
[1] G. Yang, A. Rao, C. Fernandez-Maloigne, V. Calhoun and G. Menegaz, "Explainable AI (XAI) In Biomedical Signal and Image Processing: Promises and Challenges," 2022 IEEE International Conference on Image Processing (ICIP), Bordeaux, France, 2022, pp. 1531-1535, doi: 10.1109/ICIP46576.2022.9897629. (online: https://ieeexplore.ieee.org/document/9897629)
[2] I. S. Gillani, M. Shahzad, A. Mobin, M. R. Munawar, M. U. Awan and M. Asif, "Explainable AI in Drug Sensitivity Prediction on Cancer Cell Lines," 2022 International Conference on Emerging Trends in Smart Technologies (ICETST), Karachi, Pakistan, 2022, pp. 1-5, doi: 10.1109/ICETST55735.2022.9922931. (online: https://ieeexplore.ieee.org/document/9922931)
[3] N. Alimonda, L. Guidotto, L. Malandri, F. Mercorio, M. Mezzanzanica and G. Tosi, "A Survey on XAI for Cyber Physical Systems in Medicine," 2022 IEEE International Conference on Metrology for Extended Reality, Artificial Intelligence and Neural Engineering (MetroXRAINE), Rome, Italy, 2022, pp. 265-270, doi: 10.1109/MetroXRAINE54828.2022.9967673. (online: https://ieeexplore.ieee.org/document/9967673)
[4] Hui Wen Loh, Chui Ping Ooi, Silvia Seoni, Prabal Datta Barua, Filippo Molinari, U Rajendra Acharya, "Application of explainable artificial intelligence for healthcare: A systematic review of the last decade (2011–2022)," Computer Methods and Programs in Biomedicine, Volume 226, 2022, 107161, ISSN 0169-2607. (online: https://www.sciencedirect.com/science/article/pii/S0169260722005429)
[5] Chung J, Kim D, Choi J, Yune S, Song KD, Kim S, Chua M, Succi MD, Conklin J, Longo MGF, Ackman JB, Petranovic M, Lev MH, Do S. Prediction of oxygen requirement in patients with COVID-19 using a pre-trained chest radiograph xAI model: efficient development of auditable risk prediction models via a fine-tuning approach. Sci Rep. 2022 Dec 7;12(1):21164. doi: 10.1038/s41598-022-24721-5. Erratum in: Sci Rep. 2023 Mar 15;13(1):4296. PMID: 36476724; PMCID: PMC9729627. (online: https://pubmed.ncbi.nlm.nih.gov/36476724/)
[6] M. S. Islam, I. Hussain, M. M. Rahman, S. J. Park, and M. A. Hossain, "Explainable Artificial Intelligence Model for Stroke Prediction Using EEG Signal," Sensors, vol. 22, no. 24, p. 9859, Dec. 2022, doi: 10.3390/s22249859. (online: https://www.mdpi.com/1424-8220/22/24/9859)