Sunday, March 26, 2023

XAI for Bioinformatics January - April 2021

DeepGS: Predicting phenotypes from genotypes using Deep Learning

    The article presents DeepGS, a deep learning-based method for predicting phenotypes from genotypes. The authors use convolutional neural networks (CNNs) to model complex gene interactions and improve the accuracy of phenotype prediction. Their approach relies on a sliding-window input representation that captures local genomic patterns and learns high-level representations of the genotype for the prediction task.

    The authors evaluate DeepGS on four diverse datasets, including wheat, maize, rice, and Arabidopsis thaliana, and report that their approach consistently outperforms traditional genomic prediction methods. The results indicate that DeepGS can effectively model complex genetic architectures, making it a promising tool for genomic prediction, genome-wide association studies (GWAS), and plant and animal breeding programs.
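    To make the idea concrete, below is a minimal sketch of a CNN-based genomic prediction model in PyTorch. The 0/1/2 genotype encoding, the single convolutional layer, and all layer sizes are illustrative assumptions, not the architecture reported by the authors.

```python
import torch
import torch.nn as nn

class GenotypeCNN(nn.Module):
    """Toy 1D CNN mapping a SNP genotype vector to a quantitative phenotype."""
    def __init__(self, n_snps: int, n_filters: int = 8, kernel_size: int = 18):
        super().__init__()
        self.conv = nn.Conv1d(1, n_filters, kernel_size)
        self.pool = nn.MaxPool1d(4)
        conv_len = (n_snps - kernel_size + 1) // 4
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.2),
            nn.Linear(n_filters * conv_len, 32),
            nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, x):
        # x: (batch, n_snps) with genotypes coded 0/1/2
        x = x.unsqueeze(1).float()      # -> (batch, 1, n_snps)
        x = torch.relu(self.conv(x))    # local "sliding window" filters over SNPs
        x = self.pool(x)
        return self.head(x).squeeze(-1)

# Tiny synthetic example: 64 individuals, 1,000 SNPs, one quantitative trait.
geno = torch.randint(0, 3, (64, 1000))
pheno = torch.randn(64)
model = GenotypeCNN(n_snps=1000)
loss = nn.MSELoss()(model(geno), pheno)
loss.backward()
```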


Deep Learning Enables Fast and Accurate Imputation of Gene Expression

    The article presents a deep learning-based approach for fast and accurate imputation of gene expression. The authors propose a method called DeepImpute, which employs a multi-layered deep neural network to predict gene expression values from single-cell RNA sequencing (scRNA-seq) data. The goal is to fill in missing data points and improve data quality, which can be crucial for downstream analyses. DeepImpute is trained on a large compendium of scRNA-seq datasets, enabling it to learn generalizable features and effectively impute gene expression across various cell types and species.

     The authors demonstrate that DeepImpute outperforms existing imputation methods in terms of both accuracy and computational efficiency. In addition, they show that their approach can improve the performance of downstream analyses, such as cell type identification and differential gene expression analysis.
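    As a rough illustration of neural-network imputation, the sketch below masks random entries of a toy expression matrix and trains a small autoencoder to reconstruct the observed values; the masked entries are then filled from the model's output. The matrix size, the 30% dropout rate, and the single-hidden-layer architecture are assumptions for illustration, not the authors' published design.

```python
import torch
import torch.nn as nn

# Toy expression matrix: 200 cells x 500 genes, log-normalised values.
expr = torch.rand(200, 500) * 5
# Simulate dropout: randomly zero out 30% of the entries.
mask = torch.rand_like(expr) < 0.3
observed = expr.clone()
observed[mask] = 0.0

# Simple autoencoder: compress each expression profile, then reconstruct it.
model = nn.Sequential(
    nn.Linear(500, 128), nn.ReLU(),
    nn.Linear(128, 500), nn.Softplus(),   # keeps imputed values non-negative
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(100):
    opt.zero_grad()
    recon = model(observed)
    # Train only on the entries that were actually observed.
    loss = ((recon - observed)[~mask] ** 2).mean()
    loss.backward()
    opt.step()

# Imputed matrix: keep observed values, fill the zeroed entries from the model.
imputed = torch.where(mask, model(observed).detach(), observed)
```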


DeepCOMBI: explainable artificial intelligence for the analysis and discovery in genome-wide association studies 

    DeepCOMBI uses a deep neural network (DNN) to model intricate SNP interactions and integrates Layer-wise Relevance Propagation (LRP) to generate explanations for its findings. This allows the detection and interpretation of significant SNPs, SNP-SNP interactions, and potential epistatic effects, deepening our understanding of complex genetic architectures.

    The authors assess DeepCOMBI on both simulated and real-world datasets, showing that their approach surpasses existing GWAS techniques in accuracy, interpretability, and computational efficiency. The outcomes emphasize DeepCOMBI's potential to further the genomics field and aid in uncovering new genetic factors related to complex traits and diseases.
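    The LRP step at the heart of this approach can be illustrated in a few lines of NumPy. The sketch below applies the epsilon rule to a toy two-layer ReLU network with random weights, only to show how an output score is redistributed back onto individual SNP inputs; the network size and the specific LRP rule are illustrative choices, not DeepCOMBI's trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dense network: 100 SNP features -> 16 hidden units (ReLU) -> 1 output score.
W1, b1 = rng.normal(size=(100, 16)) * 0.1, np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)) * 0.1, np.zeros(1)

def forward(x):
    z1 = x @ W1 + b1
    a1 = np.maximum(z1, 0.0)
    z2 = a1 @ W2 + b2
    return z1, a1, z2

def lrp_epsilon(x, eps=1e-6):
    """Redistribute the output score onto the inputs with the LRP epsilon rule."""
    z1, a1, z2 = forward(x)
    stab = lambda z: z + eps * np.where(z >= 0, 1.0, -1.0)   # avoids division by zero
    R2 = z2                                  # relevance starts as the output score
    s2 = R2 / stab(z2)
    R1 = a1 * (W2 @ s2.T).T                  # relevance of the hidden units
    s1 = R1 / stab(z1)
    R0 = x * (W1 @ s1.T).T                   # relevance of each input SNP
    return R0

x = rng.normal(size=(1, 100))                # one encoded individual
relevance = lrp_epsilon(x)
# Conservation check: input relevances should approximately sum to the output.
print(relevance.sum(), forward(x)[2].item())
```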


Explainable machine learning can outperform Cox regression predictions and provide insights in breast cancer survival

   The article investigates whether explainable machine learning can predict breast cancer survival more accurately than traditional Cox regression models while also providing insights. The authors develop a machine learning approach that uses the SHapley Additive exPlanations (SHAP) method to generate interpretable predictions. The study uses a large dataset of breast cancer patients from the Netherlands Cancer Registry and compares the performance of the machine learning model with that of a traditional Cox regression model.

    The results show that the machine learning approach outperforms the Cox regression model in predictive accuracy while also providing insight into the factors affecting breast cancer survival. SHAP values quantify the contribution of each feature to a prediction, helping to identify the most important factors influencing survival outcomes. These insights support a better understanding of breast cancer prognosis and can inform clinical decision-making, personalized treatment strategies, and ultimately improved patient care.
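    As an illustration of how SHAP attributes an individual prediction to clinical features, here is a small sketch using a gradient-boosted classifier and shap.TreeExplainer. The feature names, the simulated outcome, and the choice of model are invented stand-ins, not the registry data or the model compared against Cox regression in the study.

```python
import numpy as np
import shap                                    # pip install shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins for registry features.
feature_names = ["age", "tumour_size_mm", "positive_nodes", "grade", "er_status"]
X = np.column_stack([
    rng.normal(60, 12, 1000),                  # age
    rng.gamma(2.0, 10.0, 1000),                # tumour size (mm)
    rng.poisson(1.5, 1000),                    # positive lymph nodes
    rng.integers(1, 4, 1000),                  # grade 1-3
    rng.integers(0, 2, 1000),                  # ER status
])
# Toy survival label driven mostly by tumour size and nodal involvement.
y = (rng.random(1000) < 1 / (1 + np.exp(0.03 * X[:, 1] + 0.5 * X[:, 2] - 2.5))).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer returns one SHAP value per feature for each individual prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])
for name, value in zip(feature_names, shap_values[0]):
    print(f"{name:16s} {value:+.3f}")
```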


Learning the Mental Health Impact of COVID-19 in the United States With Explainable Artificial Intelligence: Observational Study 

    The article investigates the mental health impact of the COVID-19 pandemic in the United States using an Explainable Artificial Intelligence (XAI) approach. The authors analyze a large dataset of tweets collected from Twitter to explore the mental health consequences of the pandemic on the general population. The study employs various XAI techniques, such as LIME (Local Interpretable Model-Agnostic Explanations) and SHAP (SHapley Additive exPlanations), to develop interpretable machine learning models that can detect and predict mental health issues, including anxiety, depression, and stress, based on the content of the tweets. 

    The authors demonstrate that their XAI-driven approach can effectively identify and quantify the mental health impact of the COVID-19 pandemic, providing valuable insights into the factors contributing to the observed changes in mental health during this period. Moreover, the explainability of the models enables a better understanding of the underlying reasons for the detected mental health issues, which can inform targeted interventions and policies. 
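    A minimal sketch of LIME on short texts is shown below, using a TF-IDF plus logistic-regression classifier on a handful of invented sentences. The corpus, labels, and classifier are placeholders, not the study's data or models; only the LimeTextExplainer call pattern is the point.

```python
from lime.lime_text import LimeTextExplainer          # pip install lime
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented corpus standing in for labelled tweets (1 = distress, 0 = neutral).
texts = [
    "I can't sleep and I feel anxious about everything lately",
    "Feeling so alone and hopeless since the lockdown started",
    "Had a great walk in the park and a nice dinner with family",
    "Excited about the new season of my favourite show tonight",
    "Another panic attack today, this isolation is wearing me down",
    "Baked bread for the first time and it turned out great",
] * 20
labels = [1, 1, 0, 0, 1, 0] * 20

pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
pipeline.fit(texts, labels)

explainer = LimeTextExplainer(class_names=["neutral", "distress"])
explanation = explainer.explain_instance(
    "I feel anxious and alone every single night",
    pipeline.predict_proba,
    num_features=5,
)
print(explanation.as_list())   # words with their weight toward the 'distress' class
```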


    [Figure: LIME explanation for the prediction made by a custom CNN model on (a) COVID-19 positive and (b) COVID-19 negative chest X-ray scans.]


An Explainable Artificial Intelligence based Prospective Framework for COVID-19 Risk Prediction 

    The article presents an Explainable Artificial Intelligence (XAI) based framework for predicting the risk of COVID-19 infection in individuals. The authors develop a machine learning model that can estimate the likelihood of a person contracting the virus based on various factors, such as demographics, pre-existing health conditions, and exposure history. The proposed framework employs several XAI techniques, including Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), to provide interpretable and transparent predictions. This allows users to understand the factors contributing to the estimated risk and facilitates trust in the AI-driven decision-making process. 

    The authors evaluate their framework on a dataset of COVID-19 cases and demonstrate that it can effectively predict the risk of infection with a high degree of accuracy. Additionally, the explainability of the model enables the identification of the most important features affecting the risk, which can help inform targeted interventions and preventive measures. 
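    For a tabular risk model, a model-agnostic SHAP explainer can be set up as in the sketch below. The risk factors, the synthetic labels, and the logistic-regression model are assumptions made for illustration; the study's actual features and models are not reproduced here.

```python
import numpy as np
import shap
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
feature_names = ["age", "diabetes", "hypertension", "household_exposure", "vaccinated"]

# Synthetic stand-ins for individual-level risk factors.
X = np.column_stack([
    rng.normal(45, 15, 500),
    rng.integers(0, 2, 500),
    rng.integers(0, 2, 500),
    rng.integers(0, 2, 500),
    rng.integers(0, 2, 500),
])
y = (rng.random(500) < 0.1 + 0.3 * X[:, 3] - 0.05 * X[:, 4]).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# KernelExplainer is model-agnostic; a small background sample summarises the data.
background = shap.sample(X, 50)
explainer = shap.KernelExplainer(lambda d: model.predict_proba(d)[:, 1], background)
shap_values = explainer.shap_values(X[:1], nsamples=200)
for name, value in zip(feature_names, np.ravel(shap_values)):
    print(f"{name:20s} {value:+.3f}")
```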


Prediction of caregiver quality of life in amyotrophic lateral sclerosis using explainable machine learning

    The article presents a study on predicting caregiver quality of life (QoL) in amyotrophic lateral sclerosis (ALS) using explainable machine learning (ML) techniques. The authors develop a model that can estimate the QoL of caregivers for individuals with ALS based on various factors, such as caregiver demographics, patient characteristics, and clinical data. The study employs Explainable Artificial Intelligence (XAI) techniques, including SHapley Additive exPlanations (SHAP) and feature importance measures, to provide interpretable and transparent predictions. This allows users to understand the factors contributing to the estimated QoL and enables the identification of key variables affecting caregiver well-being. 

    The authors evaluate their model using a dataset of ALS patients and caregivers, demonstrating that the explainable ML approach can accurately predict caregiver QoL. Moreover, the explainability of the model provides valuable insights into the most important factors influencing caregiver well-being, which can help inform targeted interventions and support strategies. 
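    Alongside SHAP, a simple feature-importance measure such as permutation importance can rank the drivers of a QoL prediction, as in the sketch below. The caregiver and patient features, the synthetic QoL outcome, and the random-forest regressor are invented for illustration; the study's dataset and model are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
feature_names = ["caregiver_age", "hours_of_care_per_day", "patient_alsfrs_r",
                 "caregiver_depression_score", "months_since_diagnosis"]

# Synthetic (standardised) features and a synthetic QoL outcome.
X = rng.normal(size=(300, 5))
qol = 60 - 8 * X[:, 3] - 4 * X[:, 1] + rng.normal(0, 5, 300)

X_train, X_test, y_train, y_test = train_test_split(X, qol, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance: how much does the test score drop when a feature is shuffled?
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, mean_drop in sorted(zip(feature_names, result.importances_mean),
                              key=lambda t: -t[1]):
    print(f"{name:28s} {mean_drop:.3f}")
```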


Establishing Machine Learning Models to Predict Curative Resection in Early Gastric Cancer with Undifferentiated Histology: Development and Usability Study

    The article presents the development and usability study of machine learning (ML) models to predict curative resection in early gastric cancer (EGC) patients with undifferentiated histology. The authors focus on creating ML models that can estimate the likelihood of successful curative resection, which is crucial for optimizing treatment strategies and improving patient outcomes.

    The study analyzes a large dataset of EGC patients with undifferentiated histology using several machine learning techniques, including logistic regression, support vector machines, decision trees, and random forests, with the goal of identifying the most accurate and reliable model for predicting curative resection outcomes. The authors show that the developed models predict curative resection with high accuracy and outperform conventional statistical models.
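    A typical way to compare these model families is cross-validated discrimination, as in the sketch below. The synthetic dataset, the 5-fold split, and AUC as the comparison metric are assumptions made for illustration; the study's own features, preprocessing, and evaluation protocol are not reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for pre-resection clinical and endoscopic features.
X, y = make_classification(n_samples=600, n_features=12, n_informative=6,
                           weights=[0.35, 0.65], random_state=0)

models = {
    "logistic regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "SVM (RBF kernel)": make_pipeline(StandardScaler(), SVC()),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(n_estimators=300, random_state=0),
}

# 5-fold cross-validated AUC gives one comparable score per model family.
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name:20s} AUC = {auc.mean():.3f} +/- {auc.std():.3f}")
```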


The role of explainability in creating trustworthy artificial intelligence for health care: A comprehensive survey of the terminology, design choices, and evaluation strategies

    The article presents a comprehensive survey on the role of explainability in creating trustworthy artificial intelligence (AI) for health care. The authors focus on the terminology, design choices, and evaluation strategies related to explainable AI (XAI) in the health care domain. The survey aims to provide a clear understanding of XAI's potential and challenges in creating reliable and interpretable AI systems for medical applications.     

    The authors review and discuss several aspects of XAI:

    - Terminology: an overview of the key terms and concepts related to explainability in AI, such as interpretability, transparency, and trustworthiness.

    - Design choices: the methods, techniques, and approaches available for developing explainable models.

    - Evaluation strategies: the metrics and benchmarks used to assess the quality of explanations and the overall performance of AI systems.

    The survey highlights the growing importance of explainability for the adoption of AI in health care, emphasizing the need for transparent and interpretable models that can be trusted by both medical professionals and patients. The authors also identify challenges and future research directions, including the development of standardized evaluation methods and the integration of domain knowledge into XAI techniques.
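    One common evaluation strategy for explanations is a deletion-style fidelity check: replace the features an explanation ranks highest with a baseline value and see whether the prediction changes more than it does for randomly chosen features. The sketch below illustrates this on synthetic data with a random forest, using its impurity-based importances as the ranking under test; it is a generic example, not a protocol taken from the survey.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

X, y = make_classification(n_samples=500, n_features=10, n_informative=4, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

x = X[0].copy()
baseline = X.mean(axis=0)                     # "deleted" features take the dataset mean
p_full = model.predict_proba([x])[0, 1]

def prob_after_deleting(idx):
    z = x.copy()
    z[list(idx)] = baseline[list(idx)]
    return model.predict_proba([z])[0, 1]

# Ranking under test: here simply the forest's impurity-based feature importances.
ranked = np.argsort(model.feature_importances_)[::-1]
change_ranked = abs(p_full - prob_after_deleting(ranked[:3]))
change_random = abs(p_full - prob_after_deleting(rng.choice(10, 3, replace=False)))
print(f"change when deleting top-ranked features: {change_ranked:.3f}")
print(f"change when deleting random features:     {change_random:.3f}")
```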



Bibliography: 

1. https://www.biorxiv.org/content/10.1101/241414v1.full

2. https://www.frontiersin.org/articles/10.3389/fgene.2021.624128/full

3. https://academic.oup.com/nargab/article/3/3/lqab065/6324603?login=false

4. https://www.nature.com/articles/s41598-021-86327-7

5. https://mental.jmir.org/2021/4/e25097

6. https://www.medrxiv.org/content/10.1101/2021.03.02.21252269v1.full

7. https://link.springer.com/content/pdf/10.1038/s41598-021-91632-2.pdf

8. https://www.jmir.org/2021/4/e25053/

9. https://www.sciencedirect.com/science/article/pii/S1532046420302835



