
Tumour heterogeneity poses significant challenges to the accurate prediction of cancer stages, so diagnosis requires highly trained medical professionals. Over the past decade, the integration of deep learning into medical diagnostics, particularly for predicting cancer stages, has been hindered by the black-box nature of these algorithms, which complicates the interpretation of their decision-making processes.
This study seeks to mitigate these issues by leveraging the complementary attributes of functional genomics datasets (mRNA, miRNA, and DNA methylation) and stained histopathology images. We introduce the Extended Squeeze-and-Excitation Multiheaded Attention (ESEMA) model, designed to harness these modalities. The model efficiently integrates and enhances the multimodal features, capturing biologically pertinent patterns that improve both the accuracy and interpretability of cancer stage predictions.
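To make the fusion idea concrete, below is a minimal PyTorch sketch of the two mechanisms the model's name combines: squeeze-and-excitation recalibration of each modality's feature vector, followed by multi-head attention across the modalities. All layer sizes, the four-modality wiring, and the class names (SEBlock, MultimodalFusion) are illustrative assumptions, not the published ESEMA architecture.

```python
# Illustrative sketch only: SE recalibration per modality, then
# cross-modal multi-head attention. Not the published ESEMA model.
import torch
import torch.nn as nn


class SEBlock(nn.Module):
    """Squeeze-and-excitation over a feature vector (assumed form)."""

    def __init__(self, dim: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(dim, dim // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(dim // reduction, dim),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Rescale each feature channel by a learned excitation weight.
        return x * self.fc(x)


class MultimodalFusion(nn.Module):
    """Hypothetical fusion head: per-modality SE, then attention."""

    def __init__(self, dim: int = 256, heads: int = 4, n_classes: int = 4):
        super().__init__()
        # One SE block per modality (mRNA, miRNA, methylation, image).
        self.se_blocks = nn.ModuleList([SEBlock(dim) for _ in range(4)])
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(dim, n_classes)

    def forward(self, modalities: list[torch.Tensor]) -> torch.Tensor:
        # Recalibrate each modality, stack into a 4-token sequence,
        # and let the modalities attend to one another.
        tokens = torch.stack(
            [se(m) for se, m in zip(self.se_blocks, modalities)], dim=1
        )
        fused, _ = self.attn(tokens, tokens, tokens)
        # Pool over modalities and predict the cancer stage.
        return self.classifier(fused.mean(dim=1))


# Shape check with random embeddings for a batch of 8 samples.
model = MultimodalFusion()
feats = [torch.randn(8, 256) for _ in range(4)]
print(model(feats).shape)  # torch.Size([8, 4])
```

Treating each modality's embedding as one attention token is only one of several plausible ways to realise "multiheaded attention over modalities"; the paper itself should be consulted for the exact design.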
Our findings demonstrate that the explainable classifier utilised the salient features of the multimodal data to achieve an area under the curve (AUC) of 0.9985, significantly surpassing the unimodal baseline AUCs of 0.8676 for images and 0.995 for genomic data.
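For readers unfamiliar with the metric, the comparison above is the standard one computed from held-out predictions, for instance with scikit-learn's roc_auc_score. The arrays below are placeholders, not the study's data.

```python
# Minimal sketch of comparing multimodal vs. unimodal AUCs on a
# held-out set; labels and scores here are made-up placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 0, 1, 1, 1, 0])  # held-out stage labels (binary here)
p_multimodal = np.array([0.10, 0.20, 0.90, 0.80, 0.95, 0.15])
p_image_only = np.array([0.40, 0.30, 0.70, 0.35, 0.80, 0.45])

print("multimodal AUC:", roc_auc_score(y_true, p_multimodal))
print("image-only AUC:", roc_auc_score(y_true, p_image_only))
# For multi-class staging, pass class-probability matrices and set
# multi_class="ovr" (or "ovo") instead.
```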
Furthermore, the extracted genomic features were the most relevant for cancer stage prediction, suggesting that the identified genes are promising targets for further clinical investigation.
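One plausible way to read off such gene-level relevance, continuing the sketch above (and again an assumption, not the paper's stated attribution method), is to average the SE excitation weights of the genomics branch over a dataset and rank the input features.

```python
# Continues the MultimodalFusion sketch: rank features of the mRNA
# branch by their average SE excitation weight. Illustrative only.
import torch

se_mrna = model.se_blocks[0]        # SE block for the mRNA modality
mrna_batch = torch.randn(128, 256)  # placeholder mRNA embeddings

with torch.no_grad():
    weights = se_mrna.fc(mrna_batch)  # per-sample excitation weights
    importance = weights.mean(dim=0)  # mean relevance per feature

top10 = torch.topk(importance, k=10).indices
print("most-excited mRNA feature indices:", top10.tolist())
```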