The GeometricKernels Package: Heat and Matérn Kernels for Geometric Learning on Manifolds, Meshes, and Graphs
Kernels are a fundamental technical primitive in machine learning. In recent years, kernel-based methods such as Gaussian processes have become increasingly important in applications where quantifying uncertainty is of key interest. In settings that involve structured data defined on graphs, meshes, manifolds, or other related spaces, defining kernels with good uncertainty-quantification behavior, and computing their values numerically, is less straightforward than in the Euclidean setting. To address this difficulty, we present GeometricKernels, a Python software package which implements the geometric analogs of the classical Euclidean squared exponential (also known as heat) and Matérn kernels, which are widely used in settings where uncertainty is of key interest. As a byproduct, we obtain the ability to compute Fourier-feature-type expansions, which are widely used in their own right, on a wide set of geometric spaces. Our implementation supports automatic differentiation in every major current framework simultaneously via a backend-agnostic design. In this companion paper to the package and its documentation, we outline the capabilities of the package and present an illustrated example of its interface. We also include a brief overview of the theory the package is built upon and provide some historic context in the appendix.
💡 Research Summary
The paper introduces GeometricKernels, a Python library that brings heat (squared‑exponential) and Matérn kernels to non‑Euclidean domains such as graphs, meshes, and Riemannian manifolds. The authors motivate the work by pointing out that, unlike in Euclidean space where the squared‑exponential kernel is automatically positive‑semi‑definite (PSD), naïve distance‑based kernels on structured spaces often fail to be PSD, making uncertainty quantification in Gaussian processes (GPs) problematic. To overcome this, the library defines kernels via the spectral decomposition of the Laplace–Beltrami operator (or graph Laplacian) on each space, guaranteeing PSD by construction.
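The spectral construction can be illustrated in a few lines of plain NumPy (a minimal sketch of the idea, not the library's implementation): on a graph, the heat kernel is built from the eigendecomposition of the graph Laplacian, with nonnegative spectral weights guaranteeing positive semi-definiteness by construction.

```python
import numpy as np

# A small undirected cycle graph on 6 nodes, given by its adjacency matrix.
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

# Graph Laplacian L = D - A and its eigendecomposition L = F diag(lam) F^T.
L = np.diag(A.sum(axis=1)) - A
lam, F = np.linalg.eigh(L)

# Heat kernel: K = F diag(exp(-t * lam)) F^T. Because every spectral weight
# exp(-t * lam_i) is nonnegative, K is PSD by construction -- in contrast to a
# naive kernel exp(-d(x, y)^2) built from graph distances, which may not be.
t = 0.5
K = F @ np.diag(np.exp(-t * lam)) @ F.T

print(np.linalg.eigvalsh(K).min())  # smallest eigenvalue of K, >= 0 up to round-off
```

The same recipe with the Laplace-Beltrami operator in place of the graph Laplacian yields the heat and Matérn kernels on manifolds and meshes.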
The package is organized into three main components. The spaces sub-package provides classes for a wide range of geometric objects: compact manifolds (the circle, hyperspheres, special orthogonal and unitary groups), non-compact manifolds (hyperbolic spaces, manifolds of symmetric positive-definite matrices), triangle meshes, generic undirected graphs, hypercube graphs, edge sets of simplicial complexes, and products of discrete-spectrum spaces. Each class exposes the spectral data of its Laplacian together with a convenient interface for representing points on the space.
The kernels sub-package implements the actual kernels. The central class, MaternGeometricKernel, automatically dispatches to the appropriate heat or Matérn formulation based on the supplied space object. Hyperparameters (the smoothness ν and the lengthscale) are stored in a Python dictionary initialized via init_params; taking ν → ∞ recovers the heat kernel as a limiting case of Matérn. For composite domains, ProductGeometricKernel builds a kernel on a Cartesian product by multiplying individual kernels, allowing each factor to retain its own hyperparameters and enabling automatic relevance determination across heterogeneous components.
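Why multiplying factor kernels is safe can be checked directly (a NumPy sketch of the underlying fact, not library code): the Gram matrix of the product kernel on paired inputs is the elementwise (Schur) product of the factor Gram matrices, which is again PSD by the Schur product theorem.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Gram matrices of two arbitrary kernels on the two factors, evaluated at
# paired inputs (x_i, y_i): any matrix of the form Z @ Z.T is PSD.
Z1 = rng.standard_normal((n, 3))
Z2 = rng.standard_normal((n, 4))
K1 = Z1 @ Z1.T
K2 = Z2 @ Z2.T

# Product-space kernel k((x, y), (x', y')) = k1(x, x') * k2(y, y'):
# its Gram matrix is the elementwise product, PSD by the Schur product theorem.
K_prod = K1 * K2

print(np.linalg.eigvalsh(K_prod).min())  # nonnegative up to round-off
```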
A notable feature is the provision of finite-dimensional approximate feature maps in the feature_maps sub-package. By default, default_feature_map selects a suitable approximation method (e.g., random Fourier features, Nyström, or eigenfunction truncation) tailored to the space and kernel. These maps satisfy \(k(x, x') \approx \phi(x)^\top \phi(x')\) and enable linear-time GP inference, sampling, and kernel PCA without the cubic cost of full kernel matrix inversion.
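The eigenfunction-truncation variant of such a feature map is easy to sketch in NumPy (an illustration of the principle on a small graph, not the library's code): keep the eigenpairs with the largest spectral weights, scale each eigenvector by the square root of its weight, and the resulting features reproduce the kernel matrix approximately, enabling cheap prior sampling.

```python
import numpy as np

# Heat kernel on a 6-node cycle graph, via the graph Laplacian eigensystem.
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
lam, F = np.linalg.eigh(np.diag(A.sum(axis=1)) - A)

t = 0.5
weights = np.exp(-t * lam)  # spectral weights of the heat kernel

# Truncated feature map: phi(x)_i = sqrt(w_i) * f_i(x), keeping the m
# eigenpairs with the largest weights. Then Phi @ Phi.T approximates K.
m = 4
top = np.argsort(weights)[::-1][:m]
Phi = F[:, top] * np.sqrt(weights[top])  # shape (n, m)

K_full = F @ np.diag(weights) @ F.T
K_approx = Phi @ Phi.T
print(np.abs(K_full - K_approx).max())  # small truncation error

# Approximate GP prior sample in O(n * m): f = Phi @ w with w ~ N(0, I_m)
# has covariance Phi @ Phi.T ~= K, without forming or factorizing K itself.
w = np.random.default_rng(0).standard_normal(m)
f_sample = Phi @ w
```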
GeometricKernels achieves backend‑agnostic operation through the LAB library, which supplies multiple dispatch based on the array type. Users can import geometric_kernels.torch, geometric_kernels.jax, geometric_kernels.tensorflow, or fall back to NumPy for debugging. The library preserves the input array type throughout, supports batching, and runs on GPUs when the selected backend does. This design eliminates the need for separate implementations for each deep‑learning framework and makes the kernels fully differentiable in all supported environments.
Integration with existing GP ecosystems is provided via thin wrappers: GPyTorchGeometricKernel, GPJaxGeometricKernel, and GPflowGeometricKernel. These allow practitioners to plug the geometric kernels directly into popular GP models, benefiting from advanced inference techniques (variational inference, stochastic training, etc.) while retaining the geometric fidelity of the kernel.
The paper includes a concise example: constructing a Matérn kernel on the 2‑sphere, initializing hyperparameters (ν=5/2, lengthscale=1), and evaluating the kernel matrix on three points. The same code works unchanged with NumPy, JAX, or PyTorch backends, demonstrating the library’s seamless interoperability.
In conclusion, GeometricKernels bridges a critical gap between rigorous geometric kernel theory and practical machine‑learning tooling. By guaranteeing PSD through spectral definitions, offering automatic differentiation across major frameworks, providing scalable feature‑map approximations, and supporting product spaces, the library enables uncertainty‑aware learning on complex structured data. The authors suggest future directions such as extending to additional non‑compact symmetric spaces, improving large‑scale spectral approximations for massive graphs, and tighter integration with deep neural architectures for hybrid geometric‑deep models.