Saturday, March 11, 2023

Medicine XAI: January - April 2021

Created by Team Cat Ladies, Andreea-Raluca TRANDAFIR & Eliza-Maria CIOVICA for Expert Systems 2023

Introduction


    The use of Artificial Intelligence (AI) and Machine Learning (ML) techniques is rapidly expanding across medical fields. AI has the potential to provide accurate and effective preventive and curative interventions. However, concerns have also been raised about potential risks, harm, and trust issues stemming from the opacity of some AI algorithms, which makes their decisions unexplainable. If the decision-making logic cannot be adequately explained, how can the decisions of these AI-based systems be trusted? Explainable Artificial Intelligence (XAI) aims to address such questions. In this article, we examine developments in XAI within the medical domain over a specific period, January to April 2021.


1. Expert Level Evaluations for Explainable AI (XAI) Methods in the Medical Domain - February 2021

    This article demonstrates experimentally how expert-level evaluation of XAI methods in medical applications can be performed, and how well the methods' outputs coincide with clinicians' actual explanations. The approach collects annotations from expert subjects equipped with an eye-tracker as they classify medical images, then compares these annotations with the saliency maps produced by XAI methods. The effectiveness of the technique is demonstrated through several experiments.
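
A natural way to realize the comparison step is to reduce both the clinician's eye-tracking record and the XAI output to heatmaps and score their agreement. The sketch below uses Pearson correlation between normalized heatmaps, a common saliency metric; the metric choice and the random arrays are our illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

def heatmap_similarity(fixation_map: np.ndarray, saliency_map: np.ndarray) -> float:
    """Pearson correlation between two same-shaped heatmaps.

    Both maps are z-normalized so the score reflects spatial agreement
    rather than scale. Stand-in for the paper's comparison step.
    """
    f = (fixation_map - fixation_map.mean()) / (fixation_map.std() + 1e-8)
    s = (saliency_map - saliency_map.mean()) / (saliency_map.std() + 1e-8)
    return float((f * s).mean())

# Hypothetical data: a clinician's fixation density map vs. an XAI heatmap.
rng = np.random.default_rng(0)
fixations = rng.random((224, 224))
saliency = rng.random((224, 224))
print(f"agreement score: {heatmap_similarity(fixations, saliency):+.3f}")
```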


2. Feature-Guided CNN for Denoising Images From Portable Ultrasound Devices - February 2021

    This article concerns ultrasound, the non-invasive medical imaging technology that has significantly improved diagnostic accuracy and efficiency. Portable ultrasound devices have become popular due to their convenience and lower cost, and patients and physicians can easily access scanned images via a wireless network. However, the image quality of portable devices is often inferior to that of standard hospital ultrasound equipment, because portable devices capture images with significant noise, which can hinder diagnostic accuracy. To address this issue, the article presents the Feature-guided Denoising Convolutional Neural Network (FDCNN), proposed to remove noise while retaining important feature information. The model employs a hierarchical denoising framework that uses a feature masking layer for medical images. Additionally, an Explainable Artificial Intelligence (XAI) based feature extraction algorithm has been developed for medical images. The experimental results show that this feature extraction method outperforms previous methods, and that, combined with the new denoising architecture, portable ultrasound devices can achieve better diagnostic performance.


(First, a feature mask is extracted from the original image via a U-Net network based on Guided Backpropagation. Noise is then added to the featureless areas using the mask layer. Next, the image is fed into the noise-reduction network, which performs residual learning. Finally, the feature information and the denoised image are merged by a Laplacian fusion algorithm.)
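
Read as pseudocode, those four stages might be wired together as below. This is a minimal sketch, assuming placeholder networks for the U-Net mask extractor and the residual denoiser, and approximating the Laplacian fusion by simple mask blending; the paper's actual architecture is not reproduced here.

```python
import torch
import torch.nn as nn

class FDCNNPipeline(nn.Module):
    """Illustrative sketch of the four stages described above.

    `unet_gbp` (feature-mask extractor) and `denoiser` (residual denoising
    CNN) are placeholders; the networks in the paper are more elaborate.
    """
    def __init__(self, unet_gbp: nn.Module, denoiser: nn.Module):
        super().__init__()
        self.unet_gbp = unet_gbp  # stage 1: feature mask via U-Net + Guided Backprop
        self.denoiser = denoiser  # stage 3: residual-learning denoiser

    def forward(self, image: torch.Tensor, noise_std: float = 0.05) -> torch.Tensor:
        mask = torch.sigmoid(self.unet_gbp(image))  # stage 1: soft feature mask
        # Stage 2: perturb only the featureless areas, weighted by (1 - mask).
        noisy = image + (1 - mask) * noise_std * torch.randn_like(image)
        denoised = noisy - self.denoiser(noisy)     # stage 3: residual learning
        # Stage 4: Laplacian fusion, crudely approximated here by mask blending.
        return mask * image + (1 - mask) * denoised

# Tiny placeholder networks, just to exercise the pipeline end to end.
tiny = lambda: nn.Conv2d(1, 1, kernel_size=3, padding=1)
pipeline = FDCNNPipeline(unet_gbp=tiny(), denoiser=tiny())
print(pipeline(torch.randn(1, 1, 64, 64)).shape)  # torch.Size([1, 1, 64, 64])
```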

3. A Roadmap towards Breast Cancer Therapies Supported by Explainable Artificial Intelligence - April 2021


    Personalized medicine has become increasingly important in recent years, particularly in the development of oncological therapies, and patient-profiling strategies have shown promising results. This study introduces an explainable artificial intelligence (XAI) framework based on an adaptive dimensional reduction that outlines the most important clinical features for profiling oncological patients and determines the profile, i.e., the cluster, a patient belongs to based on these features. Data were collected on 267 breast cancer patients for this purpose. The dimensional reduction method identifies the relevant subspace, and the distances among patients in that subspace are used by a hierarchical clustering procedure to identify the optimal categories. The results revealed that the molecular subtype is the most important feature for clustering. The authors also assessed the robustness of current therapies and guidelines and found that the patients' profiles, determined in an unsupervised manner, correspond to either molecular subtypes or guideline-based therapies, highlighting the interpretability of explainable approaches to machine learning. The study suggests that data-driven therapies can be designed to emphasize the differences observed among patients.
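
The profiling loop can be pictured as dimensionality reduction followed by hierarchical clustering on patient distances. In the sketch below, PCA stands in for the paper's adaptive reduction and synthetic data replaces the 267-patient cohort; both substitutions are our assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the 267-patient clinical feature matrix.
rng = np.random.default_rng(42)
X = rng.normal(size=(267, 25))

# Stand-in for the adaptive dimensional reduction: keep the subspace
# explaining 90% of the variance (the paper selects features adaptively).
X_reduced = PCA(n_components=0.9).fit_transform(StandardScaler().fit_transform(X))

# Hierarchical clustering on patient distances in the reduced subspace.
profiles = AgglomerativeClustering(n_clusters=4, linkage="ward").fit_predict(X_reduced)
print("patients per profile:", np.bincount(profiles))
```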

4. The fourth scientific discovery paradigm for precision medicine and healthcare: Challenges ahead - April 2021


    The advancement of modern information techniques, such as next-generation sequencing (NGS), smart sensors based on the Internet of Everything (IoE), and artificial intelligence (AI) algorithms, has led to the emergence of data-intensive research and applications as the fourth paradigm for scientific discovery. Despite this, there are various challenges in the practical implementation of this paradigm. This article summarizes some challenges related to data-intensive discovery and applications in precision medicine and healthcare, and discusses future perspectives on next-generation medicine.



5. Unraveling the deep learning gearbox in optical coherence tomography image segmentation towards explainable artificial intelligence - February 2021


    The use of machine learning in analyzing medical data is common, but the internal workings of the algorithms are often opaque. To address this, researchers have proposed an enhanced convolutional neural network for optical coherence tomography image segmentation that incorporates a Traceable Relevance Explainability (T-REX) technique. This involves having multiple graders generate ground-truth data, calculating the differences between the human graders and the algorithm, and visualizing the results. The T-REX setup yielded a small average variability between the human graders and the algorithm, and the convolutional neural network allowed for modifiable predictions depending on the compartment. By making machine learning processes more transparent and understandable, the T-REX setup may lead to optimized applications.
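
Quantifying the differences between human graders and the algorithm typically reduces to an overlap score between segmentation masks. Below is a minimal sketch using the Dice coefficient on hypothetical binary masks; the metric and the random masks are our assumptions, as the paper defines its own agreement computation.

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap between two binary segmentation masks."""
    inter = np.logical_and(a, b).sum()
    return float(2.0 * inter / (a.sum() + b.sum() + 1e-8))

# Hypothetical binary masks: three human graders and the CNN, one compartment.
rng = np.random.default_rng(1)
graders = [rng.random((128, 128)) > 0.5 for _ in range(3)]
cnn = rng.random((128, 128)) > 0.5

human_vs_cnn = [dice(g, cnn) for g in graders]
human_vs_human = [dice(graders[i], graders[j])
                  for i in range(3) for j in range(i + 1, 3)]
print(f"mean grader-vs-CNN Dice:    {np.mean(human_vs_cnn):.3f}")
print(f"mean grader-vs-grader Dice: {np.mean(human_vs_human):.3f}")
```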

6. Enhancing Human-Machine Teaming for Medical Prognosis Through Neural Ordinary Differential Equations (NODEs) - February 2021


    The article discusses the importance of explainable AI (XAI) for improving the interpretability of ML models, while noting that existing efforts share a limitation: they work best at identifying why a system fails, but poorly at explaining when and why a model's prediction is correct. The authors posited that the acceptability of ML predictions in expert domains is limited by two factors: the machine's horizon of prediction, and the inability of machine predictions to incorporate human intuition into their model. They therefore proposed Neural Ordinary Differential Equations (NODEs) to enhance human understanding and encourage acceptability, placing human cognitive intuition at the center of the algorithm. The approach aims to improve human-machine collaboration in medical prognosis.
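
NODEs model a hidden state as a continuous-time trajectory, which is what lets a prediction be queried at an arbitrary time horizon. Here is a minimal, self-contained sketch assuming fixed-step Euler integration in PyTorch; the authors' actual model and solver are not reproduced, and the patient-state vector is hypothetical.

```python
import torch
import torch.nn as nn

class NeuralODE(nn.Module):
    """Minimal Neural ODE: a small network defines dh/dt, integrated by Euler.

    Fixed-step Euler keeps the sketch self-contained; the NODE literature
    typically uses adaptive solvers (e.g., torchdiffeq's odeint).
    """
    def __init__(self, dim: int = 8, hidden: int = 32):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                               nn.Linear(hidden, dim))

    def forward(self, h0: torch.Tensor, t1: float = 1.0, steps: int = 20) -> torch.Tensor:
        h, dt = h0, t1 / steps
        for _ in range(steps):
            h = h + dt * self.f(h)  # Euler step: h(t + dt) = h(t) + dt * f(h(t))
        return h

# Hypothetical patient-state vector evolved continuously to a prognosis horizon.
state = torch.randn(1, 8)
print(NeuralODE()(state, t1=2.0).shape)  # torch.Size([1, 8])
```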


7. Situated Case Studies for a Human-Centered Design of Explanation User Interfaces - March 2021

   
    The article introduces the increasingly researched idea of a human-centered perspective on the design of machine-learning-based applications, especially in the context of Explainable Artificial Intelligence (XAI). The authors argue that clear methodological guidance is lacking because each new situation seems to require a new setup, and they propose a collection of case studies for human-centered XAI that can provide such guidance. They present three case studies in which they apply a human-centered design (HCD) approach in the context of XAI:
Explaining Privacy-Preserving Machine Learning. This case study is in the medical domain and focuses on value-oriented data donation by patients. The goal is to balance the trade-off between the protection of patients' personal data and the need for unrestricted data in individualized medicine.
Explaining Interactive Clustering Results. This case study examines how ML techniques are used to handle large-scale data in qualitative research settings.
Explanations in Narrative-based Decision-Making. This case study is a collaboration with a medical ethicist. Complex situations require a holistic perspective of the patient: their situation, preferences, and moral concepts of what a good life represents for them. What is best for the patient must be determined in each specific case through a narrative-structured decision-making process, rather than relying solely on external information.

8. A Comparative Approach to Explainable Artificial Intelligence Methods in Application to High-Dimensional Electronic Health Records: Examining the Usability of XAI - March 2021

  
  Explainable AI (XAI) is on the rise, and it aims to produce trust, which for humans is established by communicative means that ML algorithms alone cannot provide. The authors approached the medical field and encountered challenges inherent to working with human subjects: entrusting a machine with a human's wellbeing is difficult, so acceptance of the machine's decision ultimately rests on the trust of the human expert. The article aims to demonstrate the usability of an explainable architecture as a layer in the medical domain, supporting both ML predictions and human-expert opinion. The authors ran different algorithms on the given datasets and concluded that XGBoost should be carried forward, as it was the best performing across all of them.
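
The "explainable architecture as a layer" pattern is straightforward to picture: fit the best performing model, then attach a post-hoc explainer on top. Below is a minimal sketch pairing XGBoost with SHAP on a stand-in tabular dataset; the dataset, hyperparameters, and the choice of SHAP as the explanation layer are our illustrative assumptions.

```python
import numpy as np
import shap
import xgboost
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

# Stand-in tabular data for a high-dimensional EHR cohort.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Best-performing model first...
model = xgboost.XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
model.fit(X_tr, y_tr)
print(f"test accuracy: {model.score(X_te, y_te):.3f}")

# ...then the explanation layer: SHAP attributes each prediction to features.
shap_values = shap.TreeExplainer(model).shap_values(X_te)
top = np.argsort(np.abs(shap_values).mean(axis=0))[::-1][:5]
print("most influential features:", list(X_te.columns[top]))
```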


9. Deep Learning Based Decision Support for Medicine -- A Case Study on Skin Cancer Diagnosis - March 2021

  
  This article sheds light on the fact that early detection of skin cancers such as melanoma is crucial for survival. Since the majority of work in the medical AI community focuses on a diagnosis setting more relevant to autonomous operation, the authors stress the importance of practical decision support that provides not only a plain diagnosis but also explanations. The article gives an overview of work towards explainable, DL-based decision support in medical applications, taking skin cancer diagnosis from clinical, dermoscopic, and histopathologic images as its example. As their analysis reveals that current work is dominated by visual relevance maps, the authors focus on explanations of the images.
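
Visual relevance maps of the kind surveyed are commonly produced by gradient-based methods. As a concrete illustration, here is a minimal Grad-CAM-style sketch on a generic ResNet; the model, random weights, and random input are placeholders, not the systems reviewed in the paper.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

# Minimal Grad-CAM-style relevance map from the last convolutional block.
model = resnet18(weights=None).eval()
feats, grads = {}, {}
model.layer4.register_forward_hook(lambda m, i, o: feats.update(a=o))
model.layer4.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

image = torch.randn(1, 3, 224, 224)  # stand-in for a dermoscopic image
model(image)[0].max().backward()     # backprop the top-class logit

weights = grads["a"].mean(dim=(2, 3), keepdim=True)  # pooled gradients per channel
cam = F.relu((weights * feats["a"]).sum(dim=1))      # weighted feature-map sum
cam = F.interpolate(cam[None], size=image.shape[-2:], mode="bilinear")[0, 0]
print(cam.shape)  # relevance map aligned with the 224x224 input
```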


10. A multilayer multimodal detection and prediction model based on explainable artificial intelligence for Alzheimer’s disease

    
    As Alzheimer's disease (AD) is the most common type of dementia, its diagnosis and the detection of its progression have been increasingly studied. The authors developed an accurate and interpretable AD diagnosis and progression detection model, which provides accurate decisions along with a set of explanations for each decision. The study used 1048 subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) real-world dataset: 294 cognitively normal, 254 stable mild cognitive impairment (MCI), 232 progressive MCI, and 268 AD. The model has two layers, with random forest (RF) as the classifier algorithm, and its performance is optimized with key markers selected from a large set of biological and clinical measures.
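
A two-layer RF model with marker selection might be organized as in the sketch below. The split of responsibilities between the layers (normal-vs-impaired first, then the finer classes) and the synthetic data are our assumptions for illustration; the paper defines its own layering over the ADNI measures.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif

# Synthetic stand-in for ADNI multimodal measures (1048 subjects, 60 markers).
rng = np.random.default_rng(7)
X = rng.normal(size=(1048, 60))
diagnosis = rng.integers(0, 4, size=1048)  # 0=CN, 1=sMCI, 2=pMCI, 3=AD (hypothetical)

# Key-marker selection from the larger set of biological and clinical measures.
X_key = SelectKBest(f_classif, k=20).fit_transform(X, diagnosis)

# Layer 1: RF separates cognitively normal subjects from the rest.
layer1 = RandomForestClassifier(n_estimators=300, random_state=0)
layer1.fit(X_key, diagnosis == 0)

# Layer 2: a second RF resolves the remaining classes (sMCI / pMCI / AD).
impaired = diagnosis != 0
layer2 = RandomForestClassifier(n_estimators=300, random_state=0)
layer2.fit(X_key[impaired], diagnosis[impaired])
print("layer-1 classes:", layer1.classes_, "| layer-2 classes:", layer2.classes_)
```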


Bibliography

[1] Muddamsetty, S.M., Jahromi, M.N.S., Moeslund, T.B. (2021). Expert Level Evaluations for Explainable AI (XAI) Methods in the Medical Domain. In: Pattern Recognition. ICPR International Workshops and Challenges. ICPR 2021. Lecture Notes in Computer Science, vol 12663. Springer, Cham. https://doi.org/10.1007/978-3-030-68796-0_3

[2] G. Dong, Y. Ma and A. Basu, "Feature-Guided CNN for Denoising Images From Portable Ultrasound Devices," in IEEE Access, vol. 9, pp. 28272-28281, 2021, doi: 10.1109/ACCESS.2021.3059003.

[3] Amoroso, Nicola, Domenico Pomarico, Annarita Fanizzi, Vittorio Didonna, Francesco Giotta, Daniele La Forgia, Agnese Latorre, Alfonso Monaco, Ester Pantaleo, Nicole Petruzzellis, Pasquale Tamborra, Alfredo Zito, Vito Lorusso, Roberto Bellotti, and Raffaella Massafra. 2021. "A Roadmap towards Breast Cancer Therapies Supported by Explainable Artificial Intelligence" Applied Sciences 11, no. 11: 4881. https://doi.org/10.3390/app11114881

[4] Li Shen, Jinwei Bai, Jiao Wang, Bairong Shen, The fourth scientific discovery paradigm for precision medicine and healthcare: Challenges ahead, Precision Clinical Medicine, Volume 4, Issue 2, June 2021, Pages 80–84, https://doi.org/10.1093/pcmedi/pbab007

[5] Maloca, P.M., Müller, P.L., Lee, A.Y. et al. Unraveling the deep learning gearbox in optical coherence tomography image segmentation towards explainable artificial intelligence. Commun Biol 4, 170 (2021). https://doi.org/10.1038/s42003-021-01697-y

[6] Fompeyrine, D.A., Vorm, E.S., Ricka, N., Rose, F. and Pellegrin, G., 2021. Enhancing human-machine teaming for medical prognosis through neural ordinary differential equations (NODEs). Human-Intelligent Systems Integration, 3(4), pp.263-275.

[7] Müller-Birn, C., Glinka, K., Sörries, P., Tebbe, M. and Michl, S., 2021. Situated Case Studies for a Human-Centered Design of Explanation User Interfaces. arXiv preprint arXiv:2103.15462.

[8] Duell, J.A., 2021. A comparative approach to explainable artificial intelligence methods in application to high-dimensional electronic health records: Examining the usability of XAI. arXiv preprint arXiv:2103.04951.

[9] Lucieri, A., Dengel, A. and Ahmed, S., 2021. Deep Learning Based Decision Support for Medicine--A Case Study on Skin Cancer Diagnosis. arXiv preprint arXiv:2103.05112.

[10] El-Sappagh, S., Alonso, J.M., Islam, S.M.R. et al. A multilayer multimodal detection and prediction model based on explainable artificial intelligence for Alzheimer’s disease. Sci Rep 11, 2660 (2021). https://doi.org/10.1038/s41598-021-82098-3

