Welcome!
Networks, or graphs, are a valuable tool for understanding complex systems -- they can represent a wide range of real-world phenomena such as social networks, biological networks, financial systems, communication networks, and more. Graphs allow us to model various aspects of these systems and to understand the intricate relationships and interactions between their entities. Beyond their value for the study of complex systems, applications of machine learning to graphs have enabled important advances -- from information systems and social media to drug discovery -- and have become one of the fastest-growing areas in artificial intelligence. However, standard graph representations of complex relational data are limited in their expressiveness: they capture only dyadic relationships and do not take higher-order properties of the analyzed systems into account.
In recent years, deep learning techniques combined with higher-order network models have gained significant attention because they can effectively capture the multi-relational and multi-dimensional characteristics of complex systems. Such models -- including simplicial complexes, manifolds, hypergraphs, De Bruijn graphs, and memory network representations -- provide a more faithful representation of the underlying system and enable a deeper understanding of the relationships within the data.
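To make this distinction concrete, the following minimal sketch (plain Python, standard library only; the actors and the interaction are purely illustrative) contrasts a dyadic edge list with a single hyperedge for the same group interaction:

```python
from itertools import combinations

# One hypothetical group interaction among three actors.
group = {"alice", "bob", "carol"}

# Dyadic graph view: the group interaction is flattened into pairwise edges.
dyadic_edges = set(combinations(sorted(group), 2))
print(dyadic_edges)  # {('alice', 'bob'), ('alice', 'carol'), ('bob', 'carol')}

# Hypergraph view: a single hyperedge records that all three interacted together.
hyperedges = [frozenset(group)]
print(hyperedges)    # [frozenset({'alice', 'bob', 'carol'})]

# The dyadic projection cannot distinguish one genuine three-way interaction
# from three independent pairwise interactions; the hyperedge preserves this
# higher-order information.
```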
Previous editions of HONS have been a valuable platform for scientific discussions between researchers who study the challenges and opportunities of higher-order network models. In particular, the previous two editions of HONS explored the connection between higher-order models and deep learning. Building on the discussions at these meetings, this year's edition of HONS will focus on the question of how higher-order network models can facilitate interpretability in machine learning applications. This includes two complementary perspectives, namely (i) how the intrinsic interpretability of higher-order network models can help us gain insights into complex networks, and (ii) how recent advances in higher-order modeling can facilitate the explainability of black-box models in deep learning. The purpose of this event is to create an environment for the exchange of ideas and for collaboration among participants on these challenges and opportunities. Topics to be discussed include:
- How can higher-order network models be applied for the mechanistic interpretability of large language models?
- How can higher-order network models help to encode topological inductive biases into deep learning models?
- How do inherently interpretable graphical models for sequential data compare to deep learning approaches such as LSTMs or transformers? (A minimal sketch of one such model follows this list.)
- How can higher-order network models help to find interpretable motifs and anomalies in large data sets?
- How can higher-order network models for time series data help us to discover causal mechanisms?
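As one concrete illustration of an inherently interpretable higher-order model for sequential data, the following sketch (plain Python, standard library only; the paths are hypothetical) builds first- and second-order transition counts from path data, in the spirit of De Bruijn graph or memory network representations:

```python
from collections import Counter

# Hypothetical click-stream paths through pages a, b, c, d, e.
paths = [
    ["a", "c", "d"],
    ["a", "c", "d"],
    ["b", "c", "e"],
    ["b", "c", "e"],
]

# First-order model: transition counts between individual nodes.
first_order = Counter((u, v) for p in paths for u, v in zip(p, p[1:]))

# Second-order (memory) model: transitions between pairs of consecutive nodes,
# i.e. edges of a De Bruijn-like graph that remembers one step of history.
second_order = Counter(
    ((p[i], p[i + 1]), (p[i + 1], p[i + 2]))
    for p in paths
    for i in range(len(p) - 2)
)

print(first_order)   # a first-order model would also allow paths such as a -> c -> e
print(second_order)  # the second-order model shows that c leads to d only after a
```

The second-order counts directly expose where a path goes next depending on where it came from, a pattern that a first-order graph averages away and that makes such models interpretable by inspection.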
We look forward to seeing you at the HONS satellite!