The success of neural networks is rooted in their ability to approximate complex non-linear mappings, as demonstrated by the universal approximation theorem for feed-forward architectures. However, the expressive power of Graph Neural Networks (GNNs) remains less well understood, owing to the distinctive inductive biases imposed by graph-structured data. Unlike traditional neural networks, GNNs must simultaneously capture node features and the underlying graph topology, which makes their theoretical analysis more challenging.
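To make this coupling of features and topology concrete, here is a minimal sketch (not taken from the talk) of a single message-passing layer in the style of a graph convolution; the function name `gnn_layer` and the toy graph are illustrative assumptions.

```python
# A minimal sketch of one message-passing layer, illustrating how a GNN
# combines node features with graph topology in a single update.
# Names (gnn_layer, adjacency, features, weights) are illustrative.
import numpy as np

def gnn_layer(adjacency: np.ndarray, features: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Aggregate neighbour features over the graph, then apply a learned
    linear map and a non-linearity."""
    # Add self-loops so each node keeps its own features in the aggregate.
    a_hat = adjacency + np.eye(adjacency.shape[0])
    # Symmetric degree normalisation, as used in GCN-style layers.
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt
    # Topology (a_norm) and node features interact in one matrix product.
    return np.maximum(a_norm @ features @ weights, 0.0)  # ReLU

# Tiny example: a 3-node path graph with 2-dimensional node features.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
W = np.random.default_rng(0).normal(size=(2, 4))
print(gnn_layer(A, X, W).shape)  # (3, 4): new 4-dimensional node embeddings
```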
This talk explores the theoretical foundations of GNN expressivity, focusing on their ability to represent and approximate functions over graph-structured data. While GNNs have demonstrated practical success in node classification, link prediction, and graph classification tasks, their overall expressive power, understood as the capacity to model intricate relationships between node features and graph structure, remains difficult to quantify. Current evaluation methods often rely on indirect experimental metrics and lack a unified framework for assessing how well GNNs express structural and feature-based information.
We will discuss key challenges in measuring GNN expressivity, including the limitations imposed by their parameter space and the trade-offs between expressiveness and computational efficiency. Additionally, we will highlight recent advancements in theoretical frameworks, such as the connection between GNNs and the Weisfeiler-Lehman (WL) graph isomorphism test, which provides insights into their ability to distinguish graph structures. Finally, we will outline potential directions for developing more rigorous evaluation methods to comprehensively assess GNN expressivity, bridging the gap between theoretical understanding and practical applications.
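As a concrete illustration of the WL connection, the sketch below (my own illustration, not material from the talk) implements 1-dimensional WL colour refinement, which upper-bounds the distinguishing power of standard message-passing GNNs; the function name `wl_colours` and the example graphs are assumptions for demonstration.

```python
# A minimal sketch of 1-dimensional Weisfeiler-Lehman colour refinement.
# Standard message-passing GNNs can distinguish at most the graphs that
# this test distinguishes. Names here are illustrative, not from the talk.
from collections import Counter

def wl_colours(adjacency_list, num_rounds=3):
    """Iteratively refine node colours by hashing each node's colour
    together with the multiset of its neighbours' colours."""
    colours = {v: 0 for v in adjacency_list}  # uniform initial colouring
    for _ in range(num_rounds):
        signatures = {
            v: (colours[v], tuple(sorted(colours[u] for u in adjacency_list[v])))
            for v in adjacency_list
        }
        # Relabel signatures with small integers so colours stay comparable.
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        colours = {v: palette[signatures[v]] for v in adjacency_list}
    return Counter(colours.values())  # colour histogram of the graph

# Two non-isomorphic 6-node graphs that 1-WL cannot tell apart:
# a 6-cycle versus two disjoint triangles (both are 2-regular).
cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                 3: [4, 5], 4: [3, 5], 5: [3, 4]}
print(wl_colours(cycle6) == wl_colours(two_triangles))  # True: identical histograms
```

The final line shows the classic failure case: because both graphs are 2-regular, the colour histograms coincide, so neither 1-WL nor a standard message-passing GNN can separate them.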