Signal Processing over Expanding Graphs
Speaker: Bishwadeep Das
Location: Building 36, 01.150 Lipkenszaal
Time: 10:30 - 11:15
Zoom: link
Abstract: This talk addresses the problem of processing data over dynamic graphs, with a particular focus on networks that grow in size. Such problems frequently arise in domains such as recommender systems, sensor networks, and social networks. Network, or graph, signal processing provides tools to process data defined over networks by incorporating the underlying topology. However, when networks expand over time, two key challenges emerge: scale, due to the increasing size and sequential nature of the data; and uncertainty, arising from the unknown connectivity of newly arriving nodes. The talk will begin by addressing this uncertainty in the context of designing data processing tools for dynamic networks, with an emphasis on graph convolutional filters and their design for a range of signal processing tasks. The second part of the talk will turn to two online data processing algorithms for expanding graphs: the first targets signal processing under topological uncertainty, and the second focuses on estimating the evolving graph structure itself. Theoretical results concerning the steady-state behavior of these approaches will also be presented.
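For readers unfamiliar with the graph convolutional filters mentioned in the abstract, the sketch below shows the basic operation: a polynomial in a graph shift operator S applied to a node signal x. The adjacency-matrix shift, the filter order, and the coefficients are illustrative choices, not the specific designs from the talk.

```python
import numpy as np

def graph_filter(S, x, h):
    """Graph convolutional filter: y = sum_k h[k] * S^k x.

    S : (N, N) graph shift operator (e.g., adjacency or Laplacian)
    x : (N,)   graph signal, one value per node
    h : (K+1,) filter coefficients
    """
    y = np.zeros(len(x))
    Skx = np.asarray(x, dtype=float)   # S^0 x
    for hk in h:
        y += hk * Skx                  # accumulate h_k * S^k x
        Skx = S @ Skx                  # next shift: S^{k+1} x
    return y

# Toy usage: an order-2 filter on a 4-node path graph.
S = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
y = graph_filter(S, np.random.randn(4), h=[0.5, 0.3, 0.2])
```

When a new node attaches, S gains a row and column whose entries may be unknown; this is exactly the uncertainty that filter designs for expanding graphs must cope with.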
Graph Topology Identification Based on Covariance Matching
Speaker: Yongsheng Han
Location: Building 36, 01.150 Lipkenszaal
Time: 11:15 - 11:35
Zoom: link
Abstract: This paper addresses graph topology identification for applications where the underlying structure of systems such as brain and social networks is not directly observable. Traditional approaches based on signal matching and spectral templates have limitations, particularly with respect to scale and their reliance on sparsity assumptions. We introduce a novel covariance matching methodology that efficiently reconstructs the graph topology from observable data. For the structural equation model (SEM) with an undirected graph, we demonstrate that our method converges to the correct topology under relatively mild conditions. Furthermore, we extend the methodology to polynomial models and to arbitrary known distributions of the latent variables, broadening its applicability and utility across diverse graph-based systems.
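To make the covariance-matching idea concrete, here is a minimal sketch under stated assumptions: a linear SEM x = Ax + e with symmetric A, zero diagonal, and white unit-variance noise e, so the model covariance is C(A) = (I - A)^{-1}(I - A)^{-T}. The least-squares objective and the L-BFGS-B solver are illustrative choices; the paper's actual estimator and convergence conditions may differ.

```python
import numpy as np
from scipy.optimize import minimize

def sem_covariance(a_upper, n):
    """Model covariance C(A) = (I - A)^{-1} (I - A)^{-T} for a symmetric
    adjacency A with zero diagonal (assumes unit-variance white noise)."""
    A = np.zeros((n, n))
    A[np.triu_indices(n, k=1)] = a_upper
    A = A + A.T
    M = np.linalg.inv(np.eye(n) - A)   # assumes I - A stays invertible
    return M @ M.T

def covariance_matching(C_hat):
    """Fit A by minimizing || C(A) - C_hat ||_F^2 over the free entries."""
    n = C_hat.shape[0]
    obj = lambda a: np.sum((sem_covariance(a, n) - C_hat) ** 2)
    res = minimize(obj, x0=np.zeros(n * (n - 1) // 2), method="L-BFGS-B")
    A = np.zeros((n, n))
    A[np.triu_indices(n, k=1)] = res.x
    return A + A.T

# Usage: A_hat = covariance_matching(np.cov(X, rowvar=False)),
# where the rows of X are observed signal realizations.
```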
Relational Learning via Covariance Information
Speaker: Andrea Cavallo
Location: Building 36, 01.150 Lipkenszaal
Time: 11:35 - 11:55
Zoom: link
Abstract: Statistical moments, especially covariance, are commonly used in machine learning to reveal data interdependencies. Covariance-based techniques are popular, for example, in network topology inference (e.g., graphical lasso) and dimensionality reduction (e.g., Principal Component Analysis (PCA)), but they typically constitute separate preprocessing steps in data-driven pipelines. This limits the ability to explore how neural networks interact with data statistics. Furthermore, PCA is unstable under covariance estimation errors: small data perturbations can lead to large changes in the principal directions.
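This instability is easy to reproduce numerically. The small experiment below (ours, not from the abstract) shows that when the leading covariance eigenvalues are close, two samples from the same distribution can yield very different leading principal directions.

```python
import numpy as np

rng = np.random.default_rng(0)

# A covariance whose two leading eigenvalues are nearly equal.
C_true = np.diag([1.00, 0.95, 0.10])

def leading_pc(X):
    """Leading eigenvector of the sample covariance (eigh sorts ascending)."""
    evals, evecs = np.linalg.eigh(np.cov(X, rowvar=False))
    return evecs[:, -1]

# Two independent samples from the same distribution can produce very
# different leading principal directions when the spectral gap is small.
X1 = rng.multivariate_normal(np.zeros(3), C_true, size=100)
X2 = rng.multivariate_normal(np.zeros(3), C_true, size=100)
print(abs(leading_pc(X1) @ leading_pc(X2)))  # can be far from 1
```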
To address this, coVariance Neural Networks (VNNs) were introduced. These networks perform graph convolutions on the sample covariance matrix, an operation that, like PCA, modulates the data's principal components, but with enhanced representational power and greater stability against covariance estimation errors.
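A minimal single-layer sketch of this idea follows, treating the sample covariance as the graph shift operator in a convolutional filter. The two-tap filter, the coefficient shapes, and the ReLU are illustrative choices, not the published architecture.

```python
import numpy as np

def vnn_layer(C, X, H):
    """One coVariance filter layer: Z = relu( sum_k C^k X H[k] ).

    C : (N, N) sample covariance matrix, used as the graph shift operator
    X : (N, F) input features, one row per variable (node)
    H : list of (F, G) coefficient matrices, one per filter tap
    """
    Z = np.zeros((X.shape[0], H[0].shape[1]))
    CkX = X
    for Hk in H:
        Z += CkX @ Hk              # k-th tap: C^k X H_k
        CkX = C @ CkX              # next power of the covariance
    return np.maximum(Z, 0.0)      # ReLU nonlinearity

# Toy usage: 5 variables, 3 input features, 4 output features, 2 taps.
rng = np.random.default_rng(1)
C = np.cov(rng.standard_normal((100, 5)), rowvar=False)
X = rng.standard_normal((5, 3))
H = [0.1 * rng.standard_normal((3, 4)) for _ in range(2)]
Z = vnn_layer(C, X, H)
```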
However, in sparse, high-dimensional settings with limited data, covariance estimation is particularly difficult, which hinders VNNs' performance despite their stability. Sparse VNNs overcome this through theoretically grounded covariance sparsification, which improves stability, reduces the impact of spurious correlations, and lowers computation and memory costs. The success of VNNs motivates their extension to other settings.
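One standard way to sparsify a covariance estimate is hard thresholding of small off-diagonal entries, sketched below. The abstract does not state that Sparse VNNs use exactly this scheme, so treat it as an assumption.

```python
import numpy as np

def sparsify_covariance(C, tau):
    """Zero out off-diagonal covariance entries with magnitude below tau,
    a common hard-thresholding covariance estimator (illustrative; the
    sparsification used by Sparse VNNs may differ)."""
    C_sparse = np.where(np.abs(C) >= tau, C, 0.0)
    np.fill_diagonal(C_sparse, np.diag(C))   # always keep the variances
    return C_sparse
```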
SpatioTemporal VNNs, for instance, process multivariate time series by applying graph convolutions on an online covariance estimate together with temporal convolutions over time, and they remain stable to the estimation errors, in both the covariance and the model parameters, that arise with streaming data.
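An online covariance estimate can be maintained with an exponentially weighted update, as in the sketch below; the forgetting factor and the zero-mean assumption are ours, not details from the abstract.

```python
import numpy as np

def update_covariance(C_prev, x_t, gamma=0.95):
    """Exponentially weighted online covariance update:
    C_t = gamma * C_{t-1} + (1 - gamma) * x_t x_t^T.
    Assumes zero-mean samples; gamma is a forgetting factor in (0, 1)."""
    return gamma * C_prev + (1.0 - gamma) * np.outer(x_t, x_t)
```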
Finally, VNNs' stability promotes fairness on datasets with underrepresented groups. Building on this, Fair VNNs combine equitable covariance estimates with fairness penalties in the loss function to ensure a more balanced treatment of these groups.
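As a hypothetical illustration only: an "equitable" covariance estimate could weight per-group covariances equally so that small groups count as much as large ones, and a fairness penalty could target the gap between per-group average losses. The actual Fair VNN constructions may differ from both sketches.

```python
import numpy as np

def balanced_covariance(X, groups):
    """Average per-group sample covariances with equal weight, so that
    underrepresented groups contribute as much as large ones (one plausible
    reading of an equitable covariance estimate; illustrative only)."""
    labels = np.unique(groups)
    covs = [np.cov(X[groups == g], rowvar=False) for g in labels]
    return sum(covs) / len(labels)

def fairness_penalty(loss_per_sample, groups):
    """Gap between the worst and best per-group average loss, to be added
    to the training objective (an illustrative fairness penalty)."""
    labels = np.unique(groups)
    group_losses = np.array([loss_per_sample[groups == g].mean()
                             for g in labels])
    return group_losses.max() - group_losses.min()

# Usage: C_fair = balanced_covariance(X, groups), with rows of X as samples
# and groups an array of group labels, one per row.
```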