Tsallis Entropy derived from the Chaitin-Kolmogorov Informational Entropy


We provide a rigorous first-principles derivation of the non-additive Tsallis entropy by employing Chaitin-Kolmogorov algorithmic information theory. By applying non-local restrictive rules on string formation (a grammar), we show that the algorithmic cost follows a power law in the string length, instead of the linear behaviour obtained in the classical theory. As a result, the Tsallis entropy governs the increase of information. We explore the result by showing, through Landauer’s limit, that heat dissipation is diminished in systems with long-range correlations. The $Ω_q$ number, which remains incompressible, now offers the possibility of a continuous increase of complexity, measured by the parameter $q$. We show the consistency of the results with a numerical simulation, and discuss Zipf’s law in light of the new findings.


💡 Research Summary

The paper presents a first‑principles derivation of the non‑additive Tsallis entropy (S_q) by embedding Chaitin‑Kolmogorov algorithmic information theory within a framework that imposes grammatical constraints on string generation. In the classical algorithmic setting, where an alphabet of size M can produce arbitrary strings of length L, a typical string is incompressible, and the total algorithmic cost C grows exponentially with L (C ≈ M^L = e^{(ln M)L}). This yields an entropy H = k ln C proportional to L, reproducing the extensive Boltzmann‑Gibbs (BG) entropy and its additive composability.
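
As a quick numerical illustration of this classical baseline (a minimal sketch, with k = 1 and an arbitrary alphabet size M = 4, neither taken from the paper), the entropy H = k L ln M is additive under concatenation:

```python
import math

# Classical (unconstrained) case: every string of length L over an
# M-letter alphabet is admissible, so C = M**L and H = k*ln C = k*L*ln M.
def H_classical(L, M, k=1.0):
    return k * L * math.log(M)

M = 4  # illustrative alphabet size, not from the paper
for L1, L2 in [(10, 20), (50, 50)]:
    total = H_classical(L1 + L2, M)
    parts = H_classical(L1, M) + H_classical(L2, M)
    print(f"H({L1 + L2}) = {total:.3f}   H({L1}) + H({L2}) = {parts:.3f}")
```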

The authors introduce a “grammar” that restricts the set of admissible N‑symbol words. They model the resulting reduction of admissible strings by a power‑law factor ν(n) = n^{α}, where n = L/N is the number of N‑symbol words composing the string and α (> 0) quantifies the strength of the constraint. Under this restriction the cost becomes C ≈ ν(n) M^{N}. Minimizing ln C = α ln(L/N) + N ln M with respect to N yields N = α/ln M, and consequently C scales as (ln M)^{α} L^{α}. The associated entropy therefore follows H(L) = k (ln M)^{α} L^{α}, which is non‑additive. Defining the Tsallis index q = (α − 1)/α (equivalently q = 1 − 1/α), the growth law dH/dL ∝ (ln M) H^{q} reproduces the differential form that integrates to the Tsallis entropy S_q = k (W^{1−q} − 1)/(1 − q) for equiprobable microstates. Thus, the power‑law reduction of admissible strings directly generates the Tsallis functional form; the minimization step is checked numerically in the sketch below.
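
The sketch below (Python; the continuum treatment of N, the grid search, and the values M = 4 and α = 2 — chosen only so that the optimal word length exceeds one symbol — are illustrative assumptions, not the paper's simulation) minimizes ln C = α ln(L/N) + N ln M over N and verifies that ln C − α ln L is constant in L, i.e. C ∝ L^α:

```python
import math

def log_cost(L, N, M, alpha):
    # ln C for C = nu(n) * M**N, with nu(n) = n**alpha and n = L/N
    return alpha * math.log(L / N) + N * math.log(M)

M, alpha = 4, 2.0
N_star = alpha / math.log(M)  # stationary point: -alpha/N + ln M = 0

for L in [1e3, 1e4, 1e5]:
    # brute-force grid search confirms the closed-form minimizer
    grid = [0.01 * i for i in range(1, 2000)]
    N_min = min(grid, key=lambda N: log_cost(L, N, M, alpha))
    residual = log_cost(L, N_star, M, alpha) - alpha * math.log(L)
    print(f"L={L:>8.0f}  N*={N_star:.3f}  grid N={N_min:.2f}  "
          f"lnC - alpha*lnL = {residual:.4f}")  # constant => C ~ L**alpha
```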

Thermodynamic implications are explored via Landauer’s principle, which states that erasing one bit of information costs at least k T ln 2 of heat. Since the algorithmic cost now grows as L^{α} rather than linearly, the minimal heat dissipation scales as L^{α − 1}. For systems with long‑range correlations (α > 1) the heat required per added symbol is reduced, suggesting that non‑additive statistics naturally describe more thermodynamically efficient information processing in correlated media.
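
A back-of-the-envelope sketch of this scaling (illustrative assumptions: prefactors set to one, room temperature, and α = 0.5 as a representative sub-linear exponent):

```python
import math

kB, T = 1.380649e-23, 300.0  # J/K and K; room-temperature Landauer bound

def heat_per_symbol(L, alpha):
    # Marginal description cost dC/dL ~ alpha * L**(alpha - 1) (prefactor 1),
    # multiplied by the Landauer cost per unit of information, kB*T*ln 2.
    return alpha * L ** (alpha - 1) * kB * T * math.log(2)

alpha = 0.5  # long-range-correlated regime (alpha < 1)
for L in [10, 100, 1000]:
    print(f"L = {L:>5}: ~{heat_per_symbol(L, alpha):.3e} J per added symbol")
# For alpha = 1 (the classical case) this reduces to the constant kB*T*ln 2.
```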

The paper further connects the exponent α to empirical linguistic laws. Heaps’ law (vocabulary size V ∝ L^{β}) is interpreted as ν(L) ∝ L^{α}, giving β ≈ α. Empirical studies report 0.4 < β < 0.6, implying α in the same range. Zipf’s law for word frequencies (frequency ∝ rank^{−s}) is recovered by noting that the probability distribution maximizing S_q is the q‑exponential p(x) ∝ [1 − (1 − q)λx]^{1/(1−q)}, whose power‑law tail reproduces the rank‑frequency scaling of Zipf’s law.
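
For q > 1 the q‑exponential has a power‑law tail, p(x) ~ x^{−1/(q−1)}, so a rank distribution of this form is Zipf‑like with exponent s = 1/(q − 1). The sketch below (illustrative q = 1.5 and unit scale λ = 1; the precise q‑to‑s mapping used in the paper may differ) checks this numerically:

```python
import math

def q_exponential(x, q, lam=1.0):
    # e_q(-lam*x) = [1 + (q - 1)*lam*x]**(-1/(q - 1)); -> exp(-lam*x) as q -> 1
    if abs(q - 1.0) < 1e-12:
        return math.exp(-lam * x)
    return (1.0 + (q - 1.0) * lam * x) ** (-1.0 / (q - 1.0))

q = 1.5  # illustrative; predicted tail exponent s = 1/(q - 1) = 2
ranks = [10, 100, 1000, 10000]
probs = [q_exponential(r, q) for r in ranks]

# log-log slope between successive decades should approach -s
for (r1, p1), (r2, p2) in zip(zip(ranks, probs), zip(ranks[1:], probs[1:])):
    slope = (math.log(p2) - math.log(p1)) / (math.log(r2) - math.log(r1))
    print(f"slope between rank {r1} and {r2}: {slope:.3f}  (expected -> -2)")
```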

