Sunday, 19 March 2023

XAI for medicine, January-April 2022

1. Explainable artificial intelligence (XAI): closing the gap between image analysis and navigation in complex invasive diagnostic procedures

    This literature review discusses the challenges of accurately diagnosing bladder cancer through cystoscopy, a procedure that carries risks of both false negatives and false positives. The authors propose that XAI-driven, robot-assisted cystoscopes could mitigate these risks and provide a more accurate diagnosis, and they suggest that cystoscopy is a good starting point for automation that could establish a model for other procedures. Additionally, having a specialized nurse perform the cystoscopy could free up urologists' time: the result of an automated diagnostic cystoscopy would be a short video that the urologist could review at a more convenient time.

2. Applications of Explainable Artificial Intelligence in Diagnosis and Surgery

    Artificial intelligence (AI) has shown great promise in medicine in recent years, but its black-box nature has made clinical applications challenging due to explainability issues. To overcome these limitations, some researchers have explored explainable artificial intelligence (XAI) techniques, which provide both a model's decisions and explanations for them, making the system more transparent and interpretable. In this literature review, the authors survey recent trends in medical diagnosis and surgical applications of XAI, searching several databases for articles published between 2019 and 2021, selecting those that met their criteria, and extracting and analyzing the relevant information. The review includes an experimental showcase on breast cancer diagnosis that illustrates how XAI can be applied in practice. The authors also summarize the XAI methods used in medical applications and the challenges researchers encountered, and they discuss future research directions. The survey indicates that medical XAI is a promising research direction, and the review aims to serve as a reference for medical experts and AI scientists designing medical XAI applications.

3. Explainable artificial intelligence in skin cancer recognition: A systematic review

    The use of deep neural networks (DNNs) in medical applications is becoming increasingly popular due to their ability to solve complex problems. However, the decision-making process of DNNs is essentially a black box, which makes it difficult for physicians to judge the reliability of their decisions. Explainable artificial intelligence (XAI) has been suggested as a solution to this problem. In this study, the authors investigate how XAI is used for skin cancer detection, including during the development of new DNNs, through commonly used visualizations, and in evaluations with dermatologists or dermatopathologists. They searched several databases for peer-reviewed studies published between January 2017 and October 2021 using specific search terms. They found that XAI is commonly applied during the development of DNNs for skin cancer detection, but that its usefulness in this setting still lacks systematic and rigorous evaluation.

4. Explainable artificial intelligence for precision medicine in acute myeloid leukemia

    In this article, the author discusses the limitations of using artificial intelligence (AI) for personalized treatment based on drug screening and whole-exome sequencing (WES) experiments, limitations that stem from the "black box" nature of AI decision-making. Explainable AI (XAI) is introduced as a potential solution for making AI results more understandable to humans. The article presents a new XAI method called multi-dimensional module optimization (MOM) that associates drug screening with genetic events to provide an interpretable and robust therapeutic strategy for acute myeloid leukemia (AML) patients. It highlights the success of MOM in predicting AML patients' response to several drugs based on FLT3, CBFβ-MYH11, and NRAS status, and it emphasizes the potential of XAI to help healthcare providers and drug regulators better understand AI-driven medical decisions.

5. Machine learning in postgenomic biology and personalized medicine

    In recent years, artificial intelligence in the form of machine learning has been revolutionizing biology, biomedical sciences, and gene-based agricultural technology. The massive datasets generated in the biological sciences by rapid, deep gene sequencing and by protein and other molecular structure determination require data-analysis capabilities that differ markedly from classical statistical methods; at the same time, these large datasets enable novel data-intensive machine learning algorithms to solve biological problems that until recently relied on computationally expensive, mechanistic model-based approaches. This review provides a bird's-eye view of the applications of machine learning in post-genomic biology and, as far as possible, indicates the areas of research poised for further impact, including the importance of explainable artificial intelligence (XAI) in human health. Further contributions of machine learning are expected to transform medicine, public health, and agricultural technology, as well as to provide invaluable gene-based guidance for managing complex environments in this age of global warming.

6. Deep Learning in Neuroimaging: Overcoming Challenges With Emerging Approaches

    This article discusses the potential of deep learning (DL) in psychiatry, particularly for using multidimensional datasets such as fMRI data to predict clinical outcomes. However, typical DL methods have limitations that make them less suitable for medical imaging, such as requiring large datasets and producing opaque models. The article introduces three relatively novel DL approaches that could help address these limitations and accelerate DL's incorporation into mainstream psychiatry research: transfer learning, data augmentation (via Mixup), and explainable artificial intelligence (XAI). Transfer learning and data augmentation reduce the amount of training data required to develop accurate models, while XAI can reveal the mechanisms that produce clinical outcomes and address the "black box" criticism of DL. Together, these techniques could enhance the applicability of DL to psychiatric research and help identify novel mechanisms and potential pathways for therapeutic intervention in mental illness.
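    To make the augmentation step concrete, below is a minimal Python sketch of Mixup. It assumes generic NumPy feature arrays with one-hot labels rather than the fMRI data discussed in the paper; the function and variable names are illustrative, not taken from the study.

    import numpy as np

    def mixup(x, y, alpha=0.2, rng=None):
        """Return convex combinations of randomly paired samples and labels."""
        if rng is None:
            rng = np.random.default_rng()
        lam = rng.beta(alpha, alpha)            # mixing coefficient in (0, 1)
        idx = rng.permutation(len(x))           # random re-pairing of samples
        x_mixed = lam * x + (1 - lam) * x[idx]
        y_mixed = lam * y + (1 - lam) * y[idx]  # labels must be one-hot or soft
        return x_mixed, y_mixed

    # Hypothetical usage with toy data: 8 samples, 16 features, 2 classes.
    x = np.random.rand(8, 16)
    y = np.eye(2)[np.random.randint(0, 2, 8)]
    x_aug, y_aug = mixup(x, y)

    Because the mixed labels are soft, the model trains on interpolated examples, which regularizes it and reduces the amount of real training data needed.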

7. Application of explainable artificial intelligence in the identification of Squamous Cell Carcinoma biomarkers

    Non-melanoma skin cancers (NMSCs), including squamous cell carcinoma (SCC), are a common type of cancer affecting both men and women worldwide. This study aimed to identify potential diagnostic biomarkers for SCC using explainable artificial intelligence (XAI) and XGBoost machine learning (ML) models trained on binary classification datasets of 40 SCC, 38 actinic keratosis (AK), and 46 healthy skin samples. By incorporating SHAP values into the ML models, the study identified 23 significant genes associated with the progression of SCC, which may serve as diagnostic and prognostic biomarkers for patients with SCC.
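    As a concrete illustration of the SHAP-over-XGBoost workflow described above, the following Python sketch ranks features by mean absolute SHAP value. The data shapes, gene counts, and labels are placeholders; the study's actual preprocessing and datasets are not reproduced here.

    import numpy as np
    import shap
    import xgboost as xgb

    rng = np.random.default_rng(0)
    X = rng.normal(size=(78, 200))          # placeholder: 78 samples x 200 genes
    y = rng.integers(0, 2, size=78)         # binary labels, e.g. SCC vs. healthy

    model = xgb.XGBClassifier(n_estimators=100, max_depth=3).fit(X, y)

    explainer = shap.TreeExplainer(model)   # efficient SHAP values for tree models
    shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

    # Rank genes by mean |SHAP| to propose candidate biomarkers.
    importance = np.abs(shap_values).mean(axis=0)
    top_genes = np.argsort(importance)[::-1][:23]
    print(top_genes)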

8. Physician Experience Design (PXD): More Usable Machine Learning Prediction for Clinical Decision Making

    Delirium is an acute neurocognitive disorder that is difficult to identify. Using data from Canada's largest hospital data and analytics study, the authors developed machine learning (ML) models to predict delirium. To increase physician trust and the uptake of the model results, they also developed an explainable artificial intelligence (XAI) framework for physician experience design (PXD). This framework improves the transparency of model results, allowing physicians to interact with the models and better understand their predictions. The authors used a participatory design process with physicians to develop a dashboard that presents ML delirium-identification results interactively based on physician inputs, ultimately allowing physicians to select their preferred ML model for clinical decision making through PXD evaluation.

9. ExAID: A multimodal explanation framework for computer-aided diagnosis of skin lesions

    ExAID (Explainable AI for Dermatology) is a novel explainable AI (XAI) framework designed to address the lack of transparent decision-making in AI-based computer-aided diagnosis (CAD) systems for dermatology. ExAID provides multi-modal, concept-based explanations, consisting of easy-to-understand textual explanations and visual maps, to justify predictions of the malignancy of skin lesions from dermoscopic images. The framework uses Concept Activation Vectors to map human-understandable concepts to directions in the model's activation space, and Concept Localisation Maps to highlight those concepts in the input space. The identified concepts are then used to construct fine-grained textual explanations, supplemented by concept-wise location information, yielding comprehensive and coherent multi-modal explanations. ExAID also includes an educational mode that provides dataset-level explanation statistics and tools for data and model exploration. The framework is evaluated on a range of publicly available dermoscopic image datasets and shows the utility of multi-modal explanations for CAD-assisted scenarios, even in cases of incorrect disease predictions. The authors believe that ExAID will accelerate the transition from AI research to practice by providing dermatologists and researchers with an effective tool that they can both understand and trust.
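    To illustrate the core idea behind a Concept Activation Vector, here is a minimal Python sketch: a linear classifier is trained to separate a network's internal activations for images showing a concept from activations for random images, and its weight vector gives the concept direction. The shapes and the placeholder gradient are hypothetical; ExAID's actual pipeline is considerably more involved.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    acts_concept = rng.normal(1.0, 1.0, size=(50, 512))  # layer activations, concept images
    acts_random = rng.normal(0.0, 1.0, size=(50, 512))   # layer activations, random images

    X = np.vstack([acts_concept, acts_random])
    y = np.array([1] * 50 + [0] * 50)

    clf = LogisticRegression(max_iter=1000).fit(X, y)
    cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])    # unit-norm concept direction

    # Concept sensitivity: project the gradient of the class score with respect
    # to the layer activations onto the CAV (here a random placeholder gradient).
    grad = rng.normal(size=512)
    sensitivity = float(grad @ cav)
    print(sensitivity)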

10. An Explainable AI Approach for the Rapid Diagnosis of COVID-19 Using Ensemble Learning Algorithms

    Artificial intelligence-based disease prediction models have greater potential to screen COVID-19 patients than conventional methods, but their application has been restricted by their underlying black-box nature. This study aimed to develop an explainable artificial intelligence (XAI) approach to screen patients for COVID-19 using blood test indices. The retrospective study included 1,737 participants (759 COVID-19 patients and 978 controls) admitted to San Raphael Hospital from February to May 2020. Four ensemble learning algorithms were used, and feature importance was illustrated using local interpretable model-agnostic explanations (LIME) plots. The results showed that gradient-boosted decision trees (GBDT) combined with LIME plots were efficient for screening patients with COVID-19, and that patients with a higher WBC count, higher LDH level, or higher EOT count were more likely to have COVID-19. This XAI approach could serve as a potential tool in the auxiliary diagnosis of COVID-19.
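    To make the LIME step concrete, below is a minimal Python sketch of a tabular LIME explanation over a gradient boosting classifier (a GBDT implementation from scikit-learn, standing in for whichever GBDT the study used). The synthetic data, toy labeling rule, and feature names are illustrative placeholders for the study's clinical blood-test data.

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from lime.lime_tabular import LimeTabularExplainer

    rng = np.random.default_rng(0)
    feature_names = ["WBC", "LDH", "EOT", "CRP"]  # illustrative blood indices
    X = rng.normal(size=(500, 4))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)       # toy rule standing in for labels

    model = GradientBoostingClassifier().fit(X, y)

    explainer = LimeTabularExplainer(
        X,
        feature_names=feature_names,
        class_names=["control", "COVID-19"],
        mode="classification",
    )
    exp = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
    print(exp.as_list())  # per-feature contributions for one patient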

[1] https://pubmed.ncbi.nlm.nih.gov/35084542/
[2] https://pubmed.ncbi.nlm.nih.gov/35204328/
[3] https://pubmed.ncbi.nlm.nih.gov/35390650/
[4] https://pubmed.ncbi.nlm.nih.gov/36248800/
[5] https://pubmed.ncbi.nlm.nih.gov/35966173/
[6] https://pubmed.ncbi.nlm.nih.gov/35722548/
[7] https://pubmed.ncbi.nlm.nih.gov/35477047/
[8] https://pubmed.ncbi.nlm.nih.gov/35854747/
[9] https://pubmed.ncbi.nlm.nih.gov/35033756/
[10] https://pubmed.ncbi.nlm.nih.gov/35801239/ 
Team
Raul SARBU
Bogdan TATU

