Parsimonious module inference in large networks
We investigate the detectability of modules in large networks when the number of modules is not known in advance. We employ the minimum description length (MDL) principle, which seeks to minimize the total amount of information required to describe the network and thereby avoids overfitting. According to this criterion, we obtain general bounds on the detectability of any prescribed block structure, given the number of nodes and edges in the sampled network. We also show that the maximum number of detectable blocks scales as $\sqrt{N}$, where $N$ is the number of nodes in the network, for a fixed average degree $\langle k\rangle$.
💡 Research Summary
The paper addresses the fundamental problem of community (module) detection in large networks without prior knowledge of the number of groups. By applying the Minimum Description Length (MDL) principle, the authors formulate a model‑selection framework that balances data fit against model complexity, thereby avoiding over‑fitting.
Starting from the stochastic block model (SBM) and its degree‑corrected variant, the authors express the entropy (the log‑number of graph realizations) in closed form. Pure entropy minimization would always favor the trivial partition with as many blocks as nodes, so they augment the objective with a description‑length term that encodes the cost of specifying the block matrix and the node assignments. The total description length Σ = S + L can be written analytically; the model‑cost L grows with the number of blocks B as E·h(B(B+1)/(2E)) + N·ln B, where h(x) = (1+x)·ln(1+x) − x·ln x and E is the number of edges.
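The model-cost term above can be evaluated directly. The sketch below (a minimal illustration, not the authors' implementation; the network sizes are hypothetical) computes L(B) = E·h(B(B+1)/(2E)) + N·ln B and shows how the penalty for extra blocks grows, which is what caps the number of detectable modules:

```python
import math

def h(x):
    # h(x) = (1+x)·ln(1+x) − x·ln x, the function appearing in the
    # model description length; h(0) = 0 by continuity.
    return (1 + x) * math.log(1 + x) - x * math.log(x) if x > 0 else 0.0

def model_cost(B, N, E):
    # Description length L of the model: cost of the B×B block matrix
    # plus the cost of assigning each of the N nodes to one of B blocks.
    return E * h(B * (B + 1) / (2 * E)) + N * math.log(B)

# Hypothetical network: N nodes, average degree <k> = 10, so E = N<k>/2.
N = 10_000
E = N * 10 // 2
for B in (2, 10, 100, int(math.sqrt(N))):
    print(f"B = {B:4d}  L = {model_cost(B, N, E):10.1f} nats")
```

Fitting more blocks is only justified when the entropy S drops by more than this cost, which is the trade-off behind the √N bound quoted in the abstract.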