Inverse problems in spin models
Many recent experiments in biology probe systems composed of many interacting elements, for example networks of neurons. Typically, measurements capture only the collective behavior of the system, whereas in most cases we would like to characterize how its different parts interact. The goal of this thesis is to extract information about the microscopic interactions from measurements of the collective behavior, in two different cases. First, we study a system described by a generalized Ising model, for which we find explicit formulas for the couplings as functions of the correlations and magnetizations. Second, we study a system described by a Hopfield model. In this case we find not only explicit formulas for inferring the patterns, but also an analytical result that allows one to estimate how much data is necessary for a good inference.
💡 Research Summary
This dissertation tackles the inverse problem of spin models, focusing on how to infer microscopic interaction parameters from macroscopic observations that are typically available in biological systems such as neuronal networks and families of homologous proteins. The work is divided into two main parts: (i) a generalized Ising model and (ii) a Hopfield model.
In the first part, the author derives explicit formulas for the couplings $J_{ij}$ and external fields $h_i$ in terms of the measured one‑point magnetizations $m_i$ and two‑point connected correlations $c_{ij}$. The derivation is based on a systematic high‑temperature (small‑correlation) expansion of the free energy. By expanding the free energy in powers of the inverse temperature $\beta$, each order corresponds to a set of diagrammatic contributions (loops) involving two, three, four, or more spins. The second‑order term reproduces the familiar mean‑field result $J_{ij} \approx -(C^{-1})_{ij}$, where $C$ is the connected correlation matrix. Higher‑order terms introduce corrections that account for multi‑spin loops (triangles, squares, etc.). The author develops a "loop‑summation" technique that resums an infinite class of such diagrams, thereby improving the accuracy of the inferred couplings beyond the standard mean‑field or TAP approximations. Numerical tests on a one‑dimensional chain, the Sherrington‑Kirkpatrick model, and random graphs demonstrate that the expansion converges rapidly and yields coupling estimates with significantly lower error than traditional methods.
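The lowest‑order step of this program, the naive mean‑field inversion, can be sketched in a few lines: sample a small Ising system exactly, then invert the connected correlation matrix. This is a minimal illustration, not the thesis's code; all parameter values and the choice of an exactly enumerable $N=8$ system are assumptions made for the sketch.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
N = 8  # small enough to enumerate all 2^N spin configurations exactly

# Hypothetical ground truth: weak random symmetric couplings and small fields
J = rng.normal(0.0, 0.1, (N, N))
J = np.triu(J, 1)
J = J + J.T
h = rng.normal(0.0, 0.05, N)

# Exact Boltzmann distribution over all configurations (beta absorbed into J, h)
states = np.array(list(itertools.product([-1, 1], repeat=N)), dtype=float)
E = -0.5 * np.einsum("si,ij,sj->s", states, J, states) - states @ h
p = np.exp(-E)
p /= p.sum()

# Draw samples, then measure magnetizations and connected correlations
samples = states[rng.choice(len(states), size=200_000, p=p)]
m = samples.mean(axis=0)
C = np.cov(samples.T)  # c_ij = <s_i s_j> - m_i m_j

# Naive mean-field inversion: J_ij ~ -(C^{-1})_ij; fields from arctanh(m_i)
J_nmf = -np.linalg.inv(C)
np.fill_diagonal(J_nmf, 0.0)
h_nmf = np.arctanh(m) - J_nmf @ m
```

At this weak-coupling point the inversion is already close to the ground truth; the loop corrections discussed above systematically reduce the remaining bias.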
The second part addresses the Hopfield model, where the energy is defined by a set of stored patterns $\{\xi_i^{\mu}\}$. The goal is to reconstruct these patterns from observed spin statistics. For a single pattern ($p=1$), the author distinguishes ferromagnetic and paramagnetic regimes and provides closed‑form expressions for the pattern components in terms of magnetizations and correlations. For multiple patterns ($p>1$), a gauge‑fixing procedure is introduced to enforce orthogonality among patterns, and the inference reduces to solving a linear system that can be efficiently performed via singular‑value decomposition of the empirical correlation matrix.
A particularly valuable contribution is the analytical estimation of the amount of data required for reliable inference. By evaluating the Fisher information matrix and the entropy of the posterior distribution, the author derives scaling laws for the number of samples $M$ as a function of system size $N$. In the paramagnetic phase, accurate reconstruction typically needs $M = O(N)$ samples, whereas in the ferromagnetic phase the requirement can increase to $M = O(N\log N)$ depending on pattern strength and external fields. These results give concrete guidelines for experimental design in neuroscience or protein‑sequence analysis, indicating how many recordings or sequence alignments are necessary to recover underlying interaction networks with a prescribed confidence level.
Overall, the thesis bridges classical statistical‑mechanical techniques (mean‑field, replica, TAP) with modern inverse‑problem methodology. The high‑order diagrammatic expansion and loop‑summation provide a systematic way to improve inference accuracy while retaining computational tractability. The Hopfield‑model analysis supplies both practical algorithms for pattern extraction and theoretical bounds on data requirements, making the work directly relevant to fields that aim to reconstruct hidden interaction structures from noisy, collective measurements. Future directions suggested include extensions to non‑equilibrium data, time‑dependent dynamics, and multi‑state (Potts‑type) variables.