Disentangling Neurodegeneration with Brain Age Gap Prediction Models: A Graph Signal Processing Perspective
Speaker: Saurabh Sihag
Abstract: Neurodegenerative disorders exhibit focal and correlated cortical atrophy patterns across the brain, where the amount of atrophy deviates from what is expected for a typical healthy individual. Brain age is a data-driven estimate of biological age derived from brain imaging datasets; an increasing brain age gap, characterized by an elevated brain age relative to chronological age, can reflect increased vulnerability to neurodegeneration and cognitive decline. Hence, the brain age gap is a promising machine-learning-derived biomarker for monitoring brain health. However, many machine learning approaches for this application exhibit narrow applicability (they are restricted to a family of health conditions), provide opaque decisions, and do not sufficiently accommodate the complexities of the statistical phenomena inherent to neurodegeneration, all of which hinder their adoption in clinical practice. In this context, this talk will discuss a principled deep learning framework, driven by coVariance neural networks (VNNs), for inferring the brain age gap. VNNs leverage advances in graph signal processing and are therefore adept at exploiting the network structure inherent to neuroimaging datasets. Pertinent to the application at hand, the decisions formed by VNNs exhibit stability, transferability across multi-scale datasets, and explainability, properties that underpin the reproducibility and transparency needed for principled applications of deep learning to brain age gap prediction in clinical settings. A pre-trained VNN model (pre-trained solely on a healthy population) will be presented to infer an anatomically interpretable and explainable brain age gap for health conditions that exhibit accelerated brain atrophy relative to healthy controls.
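To make the construct concrete, here is a minimal sketch (not the speaker's actual pipeline) of the brain age gap: a regression model is fit on healthy controls only, and the gap for a subject is the predicted brain age minus the chronological age. The synthetic features, the linear least-squares model, and the exaggerated "atrophy-like" feature shift are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "healthy" cohort: 200 subjects, 10 imaging features whose
# values drift linearly with chronological age (illustrative data only).
n, d = 200, 10
age = rng.uniform(40, 90, size=n)
w_true = rng.normal(size=d)
X = np.outer(age, w_true) / d + rng.normal(scale=0.5, size=(n, d))

# Fit a least-squares brain-age model on healthy controls only.
A = np.hstack([X, np.ones((n, 1))])          # add intercept column
coef, *_ = np.linalg.lstsq(A, age, rcond=None)

def brain_age_gap(features, chron_age):
    """Predicted brain age minus chronological age."""
    pred = np.hstack([features, np.ones((len(features), 1))]) @ coef
    return pred - chron_age

# Healthy subjects center near a zero gap by construction; a subject
# with atrophy-like feature shifts shows an elevated (positive) gap.
healthy_gap = brain_age_gap(X, age).mean()
shifted = X + 2.0 * w_true / d                # exaggerated feature drift
elevated_gap = brain_age_gap(shifted, age).mean()
print(healthy_gap, elevated_gap)
```

The key design point mirrored from the abstract is that the model never sees patient data during training, so a positive gap on a new subject flags deviation from the healthy aging trajectory.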
Robust Covariance Neural Networks
Speaker: Andrea Cavallo
Abstract: Learning deep representations from covariance information via coVariance Neural Networks (VNNs) has shown improved performance and insights relative to Principal Component Analysis (PCA)-based alternatives, as well as better stability in finite-sample regimes. VNNs extend the PCA transform by learning, end to end, the spectral processing function applied to the principal directions of the data in each layer. However, VNNs operate on a pre-computed sample covariance matrix, which is prone to estimation errors, sensitive to outliers, and not adapted to the task at hand. To overcome this limitation, we propose Robust coVariance Neural Networks (RVNNs), a framework that simultaneously learns a robust estimator of the covariance matrix and the VNN parameters in an end-to-end manner, leading to a fully task-aware pipeline. We prove that RVNNs combine robustness to outliers with the finite-sample stability of VNNs, and we show that their end-to-end robust covariance learning leads to better prediction performance than robust PCA-based approaches on simulated and real-world data from brain recordings and human motion sensor measurements.
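The covariance filtering that both abstracts build on can be sketched in a few lines. Below is a minimal, untrained single VNN-style layer, assuming the common graph-signal-processing formulation in which the sample covariance C acts as the shift operator and the layer applies a learnable polynomial filter sum_k h[k] C^k followed by a pointwise nonlinearity; the toy data and filter taps are illustrative, not taken from either talk.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy dataset: 100 samples of an 8-variable signal with correlated structure.
X = rng.normal(size=(100, 8)) @ rng.normal(size=(8, 8))
C = np.cov(X, rowvar=False)                  # sample covariance = the "graph"

def vnn_layer(x, C, h):
    """One coVariance filter: z = sum_k h[k] * C^k x, then ReLU.

    Because C's eigenvectors are the principal directions of the data,
    this polynomial in C acts as a learnable spectral function on those
    directions, which is how VNNs generalize the PCA transform."""
    z = np.zeros_like(x)
    Ck_x = x.copy()
    for hk in h:                              # accumulate h[k] * C^k x
        z += hk * Ck_x
        Ck_x = C @ Ck_x                       # next shift: multiply by C
    return np.maximum(z, 0.0)                 # pointwise nonlinearity

# Forward pass for one sample with illustrative filter taps h (K = 3).
h = np.array([0.5, 0.2, 0.1])
out = vnn_layer(X[0], C, h)
print(out.shape)                              # one output per variable
```

In a trained VNN the taps h (one set per layer and feature channel) are learned from data; the RVNN extension described above additionally learns the covariance estimate itself instead of fixing the pre-computed, outlier-sensitive sample covariance C used here.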