What Are Good Positional Encodings for Directed Graphs?
Positional encodings (PEs) are essential for building powerful and expressive graph neural networks and graph transformers, as they effectively capture the relative spatial relationships between nodes. Although extensive research has been devoted to PEs in undirected graphs, PEs for directed graphs remain relatively unexplored. This work seeks to address this gap. We first introduce the notion of Walk Profile, a generalization of walk-counting sequences for directed graphs. A walk profile encompasses numerous structural features crucial for directed graph-relevant applications, such as program analysis and circuit performance prediction. We identify the limitations of existing PE methods in representing walk profiles and propose a novel Multi-q Magnetic Laplacian PE, which extends the Magnetic Laplacian eigenvector-based PE by incorporating multiple potential factors. The new PE can provably express walk profiles. Furthermore, we generalize prior basis-invariant neural networks to enable the stable use of the new PE in the complex domain. Our numerical experiments validate the expressiveness of the proposed PEs and demonstrate their effectiveness in solving sorting network satisfiability and performing well on general circuit benchmarks. Our code is available at https://github.com/Graph-COM/Multi-q-Maglap.
💡 Research Summary
The paper tackles a fundamental gap in graph representation learning: while positional encodings (PEs) have become indispensable for graph neural networks (GNNs) and graph transformers, most existing methods are designed for undirected graphs and fail to capture the rich directional information present in many real‑world networks such as circuits, program data‑flow graphs, citation or financial graphs.
Walk Profile – a new structural descriptor
The authors introduce the bidirectional walk profile Φ(u,v;ℓ,k), which counts the number of walks of total length ℓ from node u to node v that contain exactly k forward edges (following the edge direction) and ℓ‑k backward edges (traversing edges against their direction). This definition subsumes many important motifs:
- ℓ = k (all forward) recovers ordinary directed walks and shortest/longest path distances.
- ℓ = 2, k = 1 counts common successors or common predecessors.
- Larger ℓ,k encode feed‑forward loops, multi‑step reachability, and other patterns crucial for program analysis and circuit performance.
The walk profile thus provides a unified, fine‑grained view of directed connectivity that existing PEs cannot fully reconstruct.
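The walk profile admits a simple dynamic-programming computation: the first step of a walk either follows an edge forward (consuming one of the k forward steps) or traverses an edge backward. The sketch below illustrates this recurrence with numpy; the function name and the dictionary layout are our own for illustration, not from the paper.

```python
import numpy as np

def walk_profile(A, L):
    """Phi[(l, k)][u, v] = number of length-l walks from u to v that use
    exactly k forward edges and l - k backward edges.
    A is the (n x n) 0/1 adjacency matrix of a directed graph."""
    n = A.shape[0]
    Phi = {(0, 0): np.eye(n, dtype=np.int64)}
    for l in range(1, L + 1):
        for k in range(l + 1):
            total = np.zeros((n, n), dtype=np.int64)
            if k >= 1:          # first step follows an edge forward
                total += A @ Phi[(l - 1, k - 1)]
            if k <= l - 1:      # first step traverses an edge backward
                total += A.T @ Phi[(l - 1, k)]
            Phi[(l, k)] = total
    return Phi

# Example: the directed path 0 -> 1 -> 2
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]])
Phi = walk_profile(A, 2)
# (l=2, k=2): the unique all-forward walk 0 -> 1 -> 2
assert Phi[(2, 2)][0, 2] == 1
# (l=2, k=1): forward-then-backward walks, e.g. common-successor patterns
assert Phi[(2, 1)][0, 0] == 1
```

Note that k = ℓ reduces to powers of A (ordinary directed walk counts), matching the special cases listed above.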
Why existing PEs fall short
- Symmetrized Laplacian PE (q = 0) discards direction entirely.
- SVD‑PE uses the left/right singular vectors of the asymmetric adjacency matrix, but because A is generally non-normal, powers of A cannot be expressed through its singular values alone; computing Φ would require information about intermediate nodes, breaking the pairwise PE paradigm.
- Magnetic Laplacian PE (Mag‑PE) encodes direction through an edge‑direction‑dependent complex phase exp(±i 2π q), yet Theorem 4.1 shows that for any fixed q there exist non‑isomorphic directed graphs sharing identical eigenvalues and eigenvectors, and hence identical Mag‑PEs, despite having different walk profiles. Consequently, a single‑q PE cannot uniquely determine Φ, nor can it recover directed shortest‑path distances.
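For concreteness, here is a minimal sketch of the (unnormalized) magnetic Laplacian as it is commonly defined: the adjacency is symmetrized, and direction is retained only through a complex phase on each edge. The function name is ours; the construction follows the standard definition.

```python
import numpy as np

def magnetic_laplacian(A, q):
    """L_q = D_s - A_s * exp(i * 2*pi*q * (A - A^T)), with A_s the
    symmetrized adjacency and D_s its degree matrix. L_q is Hermitian,
    so it has real eigenvalues and a unitary eigenbasis."""
    A_s = (A + A.T) / 2.0                    # symmetrized adjacency
    theta = 2.0 * np.pi * q * (A - A.T)      # +phase on forward, -phase on backward edges
    H_q = A_s * np.exp(1j * theta)           # phase-modulated (magnetic) adjacency
    D_s = np.diag(A_s.sum(axis=1))
    return D_s - H_q

A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., 0., 0.]])                 # a directed 3-cycle
L = magnetic_laplacian(A, q=0.25)
assert np.allclose(L, L.conj().T)            # Hermitian
assert np.linalg.eigvalsh(L).min() >= -1e-9  # positive semidefinite
```

At q = 0 this collapses to the symmetrized Laplacian, which is exactly why the q = 0 case discards direction.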
Multi‑q Magnetic Laplacian PE
The key insight is to treat the potential q as a frequency variable. By extracting eigenvectors from multiple magnetic Laplacians L_{q₁},…,L_{q_K} with distinct q values, each q captures a different Fourier component of the walk‑profile signal. The authors prove that if K ≥ ⌈ℓ/2⌉, the collection of eigenvectors suffices to reconstruct Φ(u,v;ℓ,k) for all ℓ up to the chosen bound. Intuitively, the phase shift induced by q encodes the count of forward versus backward steps; a set of q’s provides enough linear equations to solve for the unknown counts via an inverse discrete Fourier transform. This yields the Multi‑q Magnetic Laplacian PE (Multi‑q Mag‑PE), which provably expresses any walk profile.
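The Fourier-style argument can be demonstrated numerically. For a graph without bidirectional edges, each entry of 2^ℓ (H_q)^ℓ equals Σ_k Φ(u,v;ℓ,k) · exp(i 2πq (2k − ℓ)): every walk contributes a phase determined by its forward/backward step count. Stacking this identity over several q values gives a Vandermonde-type linear system in the unknown counts. The sketch below (our own illustrative code, using ℓ + 1 potentials rather than the paper's tighter bound) recovers the full walk profile by least squares:

```python
import numpy as np

def recover_profile(A, ell, qs):
    """Recover Phi(u,v; ell, k) for all k from powers of the magnetic
    adjacency H_q at several potentials q, via the identity
    2^ell * (H_q)^ell = sum_k Phi_k * exp(i*2*pi*q*(2k - ell))."""
    n = A.shape[0]
    A_s = (A + A.T) / 2.0
    meas = []                                  # one measurement matrix per q
    for q in qs:
        H_q = A_s * np.exp(1j * 2 * np.pi * q * (A - A.T))
        meas.append((2.0 ** ell) * np.linalg.matrix_power(H_q, ell))
    meas = np.stack(meas)                      # shape (len(qs), n, n)
    # Vandermonde-type system in the unknown counts Phi_k, k = 0..ell
    freqs = np.array([2 * k - ell for k in range(ell + 1)])
    V = np.exp(1j * 2 * np.pi * np.outer(qs, freqs))
    sol, *_ = np.linalg.lstsq(V, meas.reshape(len(qs), -1), rcond=None)
    return sol.real.round().astype(int).reshape(ell + 1, n, n)

A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])                   # directed path 0 -> 1 -> 2
qs = np.linspace(0.05, 0.45, 4)                # distinct potentials in (0, 1/2)
Phi = recover_profile(A, ell=2, qs=qs)
assert Phi[2, 0, 2] == 1                       # the unique all-forward walk 0 -> 1 -> 2
```

Each q probes the walk-profile "signal" at a different frequency, and inverting the system is exactly the inverse-DFT intuition described above.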
Stabilizing complex eigenvectors
Directly feeding complex eigenvectors into a neural network is problematic because they are defined only up to a unitary basis transformation; arbitrary rotations can cause instability and break permutation invariance. The authors extend the recent basis‑invariant PE framework (originally for real eigenvectors) to the complex domain. They decompose each eigenvector into magnitude and phase, apply complex‑valued normalization, and enforce invariance under unitary transformations by using inner products that involve conjugation. This yields a neural architecture that can smoothly interpolate between q = 0 (standard Lap‑PE) and q ≠ 0 (Mag‑PE) while guaranteeing numerical stability.
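One way to see what "invariance under unitary transformations via conjugation" buys is through eigenspace projectors: for an eigenspace spanned by columns V_i, the matrix V_i V_i^H is unchanged by any unitary re-basing V_i ↦ V_i U. The sketch below illustrates this invariant building block; it is a simplification of the paper's full architecture, and the function name is our own.

```python
import numpy as np

def invariant_pair_features(L, tol=1e-8):
    """Pairwise features from a Hermitian (magnetic) Laplacian that are
    invariant to the unitary basis ambiguity of its eigenvectors: group
    eigenvectors by (near-)equal eigenvalue and form eigenspace
    projectors P = V V^H, whose entries are basis-independent."""
    lam, V = np.linalg.eigh(L)                 # Hermitian eigendecomposition
    feats, start = [], 0
    for i in range(1, len(lam) + 1):
        if i == len(lam) or lam[i] - lam[start] > tol:
            Vi = V[:, start:i]                 # basis of one eigenspace
            P = Vi @ Vi.conj().T               # unitarily invariant projector
            feats.append((lam[start], P))
            start = i
    return feats

# Example on the magnetic Laplacian of a directed 3-cycle (q = 0.25)
A = np.array([[0., 1., 0.], [0., 0., 1.], [1., 0., 0.]])
A_s = (A + A.T) / 2.0
L_q = np.diag(A_s.sum(1)) - A_s * np.exp(1j * 0.5 * np.pi * (A - A.T))
feats = invariant_pair_features(L_q)
# Projectors resolve the phase/basis ambiguity: they sum to the identity
assert np.allclose(sum(P for _, P in feats), np.eye(3))
```

Because conjugation appears in V V^H, arbitrary complex phases on individual eigenvectors cancel, which is the source of the numerical stability discussed above.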
Experimental validation
Four benchmark suites are used:
- Synthetic distance prediction – on random directed graphs, Multi‑q Mag‑PE achieves up to 12 % lower RMSE than any baseline.
- Sorting network SAT prediction – the model predicts satisfiability of sorting networks with 92 % accuracy, a substantial gain over the 84 % best prior PE.
- Analog circuit performance – power and delay predictions improve from MAE 0.018 (baseline) to 0.012 with the proposed PE.
- High‑level circuit benchmarks – consistent 5‑12 % improvements across diverse circuit topologies.
Ablation studies confirm that (i) using multiple q's is essential, since a single q inherits the limitations of standard Mag‑PE, and (ii) the complex basis‑invariant module prevents the training divergence that occurs when raw complex eigenvectors are used.
Contributions and impact
The paper makes three major contributions:
- Formalizing the bidirectional walk profile as a unifying descriptor for directed graph structure.
- Demonstrating the expressive insufficiency of existing directed PEs and providing a provably complete alternative via Multi‑q magnetic Laplacian eigenvectors.
- Introducing a stable, basis‑invariant neural pipeline for complex spectral features, enabling practical deployment of the new PE in modern GNNs and transformers.
By bridging spectral graph theory, Fourier analysis, and deep learning, the work opens a new avenue for leveraging directionality in graph representation learning. Future directions include learning the optimal set of q values, scaling the spectral computation with stochastic Lanczos methods, and extending the framework to dynamic or heterogeneous directed graphs.