Inequalities for the Tsallis q-entropy and Information Theory
Marco A. S. Trindade¹
Colegiado de Física, Departamento de Ciências Exatas e da Terra, Universidade do Estado da Bahia, Rua Silveira Martins, Cabula, 41150-000, Salvador, Bahia, Brazil

¹ matrindade@uneb.br

Abstract

In this work, we derive information-theoretic properties for a modified Tsallis entropy, hereinafter referred to as $q$-entropy. We introduce the notions of joint $q$-entropy, conditional $q$-entropy, relative $q$-entropy, and conditional mutual $q$-information, and establish several inequalities analogous to those of classical information theory. Within the context of Markov chains, these results are employed to prove a version of the second law of thermodynamics. Furthermore, we investigate the maximum entropy method in this setting. Finally, we prove a Tsallis version of the Shannon-McMillan-Breiman theorem.

Keywords: q-entropy; Tsallis entropy; information theory.
PACS: 05.20.-y; 89.70.Cf; 89.75.-k

1 Introduction

Shannon entropy is a crucial concept in information theory [1]. It was introduced by Claude Shannon in 1948 and is a measure of the average uncertainty in a random variable [2]. As highlighted by Nielsen [3], there are two complementary views: it quantifies how much information we acquire, on average, when we learn the value of a random variable, or, alternatively, it measures the uncertainty about the variable before we know its value. This definition is analogous to the statistical entropy of statistical mechanics, introduced by Boltzmann in 1870 [4, 5, 6]. A generalization of the Boltzmann-Gibbs entropy within the scenario of statistical mechanics was proposed by Tsallis in 1988 [7], motivated by the scaling properties observed in multifractal systems. Unlike the standard Boltzmann-Gibbs entropy, the Tsallis entropy is nonadditive, thereby violating the additivity property that constitutes one of the fundamental assumptions of Callen's third postulate of equilibrium thermodynamics. This generalized entropy is characterized by an entropic index $q$, which quantifies the degree of nonextensivity of the system. The classical Boltzmann-Gibbs entropy is recovered in the limiting case $q \to 1$, ensuring consistency with standard statistical mechanics.

Over the past decades, Tsallis statistics has been successfully applied to a broad class of physical systems in which long-range interactions, long-term memory effects, or multifractal phase-space structures play a relevant role [7, 8]. Such systems frequently arise in statistical physics, plasma physics, turbulence, gravitational systems, and other complex systems [9, 10, 11], where the assumptions underlying Boltzmann-Gibbs statistics are not fully satisfied [12]. In this scenario, the nonadditive nature of Tsallis entropy provides a theoretical framework for describing anomalous scaling behaviors and generalized thermodynamic relations, which are not adequately captured by conventional entropic measures [12]. On the other hand, information theory provides a set of statistical tools for the study of complex systems [13]. The maximum entropy method, for instance, has been regarded as a foundation for the construction of a theory of complex systems [14].
This leads to the question of whether Shannon's information theory admits a generalization based on Tsallis entropy and whether such a generalization can shed light on attempts to overcome limitations of information theory in the study of complex systems [15]. From this perspective, information-theoretic properties of Tsallis entropies were derived by Furuichi, with the aim of constructing a nonadditive information theory [16].

This work develops information-theoretic definitions and inequalities within the context of a modified Tsallis entropy, which can be regarded as a natural generalization of Shannon entropy. We introduce notions such as joint $q$-entropy, conditional $q$-entropy, relative $q$-entropy, and conditional mutual $q$-information, formulated in a manner distinct from the approach of Ref. [16], and establish several inequalities analogous to those of classical information theory. In the context of Markov chains, the derived inequalities are employed to prove a version of the second law of thermodynamics. Furthermore, the applicability of these results to the maximum entropy method is investigated. Finally, we prove a $q$-version of the Shannon-McMillan-Breiman theorem.

The paper is organized as follows. Section 2 contains the basic definitions and inequalities. In Section 3 we perform a stochastic analysis and derive a second law of thermodynamics in the context of Markov chains. In Section 4 we apply the maximum entropy method in this scenario. Section 5 is devoted to presenting a Tsallis version of the Shannon-McMillan-Breiman theorem. Section 6 is dedicated to conclusions and perspectives. Basic probabilistic concepts relevant to this study are summarized in Appendix A.

2 Basic Results

Let $X$ be a discrete random variable with alphabet $\chi$ and probability mass function $\{p(x)\}_{x\in\chi}$. The Tsallis entropy [7, 8, 12, 16]

$$ S_q(X) = -\sum_{x\in\chi} p^q(x)\,\ln_q p(x), \tag{1} $$

where

$$ \ln_q(x) = \frac{x^{1-q}-1}{1-q} \qquad (x>0,\ q\in\mathbb{R}), \tag{2} $$

is nonadditive: for independent systems $A$ and $B$,

$$ S_q(A+B) = S_q(A) + S_q(B) + \frac{(1-q)}{k}\,S_q(A)\,S_q(B). \tag{3} $$

In this work we consider the Tsallis $q$-entropy given by

$$ H_q(X) = -\sum_{x\in\chi} p(x)\,\ln_q p(x). \tag{4} $$

Furuichi [16] explores the Tsallis entropy (1) in the context of information theory in order to construct a nonadditive information theory. Here, we derive properties of the Tsallis $q$-entropy (4) analogous to those of classical information theory, and we obtain a $q$-version of the second law of thermodynamics and a $q$-version of the Shannon-McMillan-Breiman theorem. Along the way, we establish several results about the $q$-entropy that ensure the consistency of the formulation.

Similarly to the Shannon entropy, we have the following result.

Proposition 1. $H_q(X) \ge 0$ for $q \ne 1$.

Proof. Fix $q \ne 1$ and let $t \in (0,1]$. By definition, $\ln_q t = \frac{t^{1-q}-1}{1-q}$. If $q<1$, then $1-q>0$ and, since $t \le 1$, one has $t^{1-q} \le 1$, which implies $\ln_q t \le 0$. If $q>1$, then $1-q<0$ and, again using $t \le 1$, one has $t^{1-q} \ge 1$, which also yields $\ln_q t \le 0$. Hence, for all $q \ne 1$ and all $t \in (0,1]$,

$$ \ln_q t \le 0, $$

with equality if and only if $t = 1$. Since $p(x) \ge 0$ for all $x \in \chi$, it follows that $-p(x)\ln_q p(x) \ge 0$. Therefore, every term in the sum defining $H_q(X)$ is non-negative, and consequently $H_q(X) \ge 0$.
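As a quick numerical illustration, the following Python sketch implements the $q$-logarithm of Eq. (2) and the $q$-entropy of Eq. (4) and checks Proposition 1 on a random distribution. It is a minimal sketch, not part of the original derivation; the function names are ours.

```python
import numpy as np

def ln_q(x, q):
    # q-logarithm of Eq. (2): ln_q(x) = (x^(1-q) - 1) / (1 - q); recovers ln(x) as q -> 1
    x = np.asarray(x, dtype=float)
    if np.isclose(q, 1.0):
        return np.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def q_entropy(p, q):
    # Tsallis q-entropy of Eq. (4): H_q(X) = -sum_x p(x) ln_q p(x); zero-probability terms contribute 0
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * ln_q(p, q))

rng = np.random.default_rng(0)
p = rng.random(8)
p /= p.sum()                      # normalize to a probability vector
for q in (0.3, 0.7, 1.0, 1.5):
    H = q_entropy(p, q)
    assert H >= 0.0               # Proposition 1 (and the Shannon case q = 1)
    print(f"q = {q}: H_q = {H:.6f}")
```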
Definition 1. The conditional $q$-entropy $H_q(Y|X)$, with $(X,Y) \sim p(x,y)$, is defined by

$$ H_q(Y|X) = -\sum_{x\in\chi}\sum_{y\in\mathcal{Y}} p(x,y)\,\ln_q p(y|x). \tag{5} $$

Proposition 2. If $(X,Y) \sim p(x,y)$ and $0 \le q < 1$, we have the following inequality:

$$ H_q(X,Y) \ge H_q(X) + H_q(Y|X). \tag{6} $$

Proof. Using the property

$$ \ln_q(xy) = \ln_q(x) + \ln_q(y) + (1-q)\ln_q(x)\ln_q(y), \tag{7} $$

we have

$$ H_q(X,Y) = -\sum_{x\in\chi}\sum_{y\in\mathcal{Y}} p(x,y)\,\ln_q\!\left[p(x)\,p(y|x)\right] = H_q(X) + H_q(Y|X) + (1-q)\sum_{x\in\chi}\sum_{y\in\mathcal{Y}} p(x,y)\,\ln_q p(x)\,\ln_q p(y|x). $$

Since the last term is non-negative (the product of two non-positive $q$-logarithms is non-negative and $1-q>0$),

$$ H_q(X,Y) \ge H_q(X) + H_q(Y|X). \tag{8} $$

Proposition 3. If $X$ and $Y$ are independent and $0 \le q < 1$, then

$$ H_q(X,Y) \ge H_q(X) + H_q(Y). \tag{9} $$

Proof. Using (7) and the independence $p(x,y) = p(x)p(y)$, we have

$$ H_q(X,Y) = H_q(X) + H_q(Y) + (1-q)\sum_{x\in\chi}\sum_{y\in\mathcal{Y}} p(x,y)\,\ln_q p(x)\,\ln_q p(y) \ge H_q(X) + H_q(Y), \tag{10} $$

since the last term is non-negative for $0 \le q < 1$.

Proposition 4. For $0 \le q < 1$, we have

$$ H_q(X,Y|Z) \ge H_q(X|Z) + H_q(Y|X,Z). \tag{11} $$

Proof.

$$
\begin{aligned}
H_q(X,Y|Z) &= -\sum_{x\in\chi}\sum_{y\in\mathcal{Y}}\sum_{z\in\mathcal{Z}} p(x,y,z)\,\ln_q p(x,y|z) \\
&= -\sum_{x,y,z} p(x,y,z)\,\ln_q\!\left[p(y|x,z)\,p(x|z)\right] \\
&= -\sum_{x,y,z} p(x,y,z)\,\ln_q p(x|z) - \sum_{x,y,z} p(x,y,z)\,\ln_q p(y|x,z) \\
&\quad - (1-q)\sum_{x,y,z} p(x,y,z)\,\ln_q p(y|x,z)\,\ln_q p(x|z) \\
&\ge H_q(X|Z) + H_q(Y|X,Z),
\end{aligned} \tag{12}
$$

since

$$ (1-q)\sum_{x,y,z} p(x,y,z)\,\ln_q p(y|x,z)\,\ln_q p(x|z) \le 0, \tag{13} $$

and using equation (7).

The following lemma will be important in our stochastic analysis of the entropy rate.

Lemma 1. Let $X_1, X_2, \ldots, X_n$ be random variables with joint distribution $p(x_1, x_2, \ldots, x_n)$ and $0 \le q < 1$. Then

$$ H_q(X_1, X_2, \ldots, X_n) \ge \sum_{i=1}^n H_q(X_i | X_{i-1}, \ldots, X_1). \tag{14} $$

Proof. We have

$$
\begin{aligned}
H_q(X_1,\ldots,X_n) &= -\sum_{x_1,\ldots,x_n} p(x_1,\ldots,x_n)\,\ln_q p(x_1,\ldots,x_n) \\
&= -\sum_{x_1,\ldots,x_n} p(x_1,\ldots,x_n)\,\ln_q\!\left[\prod_{i=1}^n p(x_i|x_{i-1},\ldots,x_1)\right] \\
&\ge -\sum_{x_1,\ldots,x_n}\sum_{i=1}^n p(x_1,\ldots,x_n)\,\ln_q p(x_i|x_{i-1},\ldots,x_1) \\
&= -\sum_{i=1}^n \sum_{x_1,\ldots,x_n} p(x_1,\ldots,x_n)\,\ln_q p(x_i|x_{i-1},\ldots,x_1) \\
&= \sum_{i=1}^n H_q(X_i|X_{i-1},\ldots,X_1).
\end{aligned} \tag{15}
$$

The following definitions are similar to those for the Shannon entropy, and they are fundamental in our formulation.

Definition 2. The relative $q$-entropy between two probability functions $p(x)$ and $r(x)$ is defined by

$$ D_q(p\,\|\,r) = \sum_{x\in\chi} p(x)\,\ln_q \frac{p(x)}{r(x)}. \tag{16} $$

Definition 3. The mutual $q$-information $I_q(X;Y)$ is defined as the relative $q$-entropy between the joint distribution $p(x,y)$ and the product distribution $p(x)p(y)$:

$$ I_q(X;Y) = \sum_{x\in\chi}\sum_{y\in\mathcal{Y}} p(x,y)\,\ln_q \frac{p(x,y)}{p(x)\,p(y)}, \tag{17} $$

where $p(x)$ and $p(y)$ are the marginal probability functions.
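To make Definitions 1-3 concrete, here is a small Python sketch (ours, for illustration only) that evaluates the conditional $q$-entropy (5), the relative $q$-entropy (16), and the mutual $q$-information (17) from an explicit joint table; the toy numbers are arbitrary.

```python
import numpy as np

def ln_q(x, q):
    # q-logarithm of Eq. (2), valid for q != 1
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def conditional_q_entropy(pxy, q):
    # Eq. (5): H_q(Y|X) = -sum_{x,y} p(x,y) ln_q p(y|x)
    px = pxy.sum(axis=1, keepdims=True)
    pyx = pxy / px
    m = pxy > 0
    return -np.sum(pxy[m] * ln_q(pyx[m], q))

def relative_q_entropy(p, r, q):
    # Eq. (16): D_q(p||r) = sum_x p(x) ln_q(p(x)/r(x))
    m = p > 0
    return np.sum(p[m] * ln_q(p[m] / r[m], q))

def mutual_q_information(pxy, q):
    # Eq. (17): relative q-entropy between p(x,y) and p(x)p(y)
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    return relative_q_entropy(pxy.ravel(), np.outer(px, py).ravel(), q)

pxy = np.array([[0.3, 0.1],
                [0.2, 0.4]])   # a correlated toy joint distribution
for q in (0.3, 0.7):
    print(q, conditional_q_entropy(pxy, q), mutual_q_information(pxy, q))
```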
Definition 4. The conditional mutual $q$-information of random variables $X$, $Y$ and $Z$ is defined as

$$ I_q(X;Y|Z) = \sum_{x,y,z} p(x,y,z)\,\ln_q \frac{p(x,y|z)}{p(x|z)\,p(y|z)}. \tag{18} $$

We have the following chain rule for the $q$-information.

Proposition 5.

$$
\begin{aligned}
I_q(X_1, X_2, \ldots, X_n; Y) ={}& \sum_{i=1}^n I_q(X_i; Y | X_{i-1}, X_{i-2}, \ldots, X_1) \\
&+ (1-q)\sum_{x_1,\ldots,x_n,y} p(x_1,\ldots,x_n,y) \sum_{i=1}^n \ln_q \frac{p(x_i, y|x_{i-1},\ldots,x_1)}{p(x_i|x_{i-1},\ldots,x_1)\,p(y|x_{i-1},\ldots,x_1)} \\
&\qquad\times \ln_q\!\left[\prod_{i'=i+1}^n \frac{p(x_{i'}, y|x_{i'-1},\ldots,x_1)}{p(x_{i'}|x_{i'-1},\ldots,x_1)\,p(y|x_{i'-1},\ldots,x_1)}\right].
\end{aligned}
$$

Proof. We have

$$
\begin{aligned}
I_q(X_1,\ldots,X_n; Y) &= \sum_{x_1,\ldots,x_n,y} p(x_1,\ldots,x_n,y)\,\ln_q \frac{p(x_1,\ldots,x_n,y)}{p(x_1,\ldots,x_n)\,p(y)} \\
&= \sum_{x_1,\ldots,x_n,y} p(x_1,\ldots,x_n,y)\,\ln_q \prod_{i=1}^n \frac{p(x_i,y|x_{i-1},\ldots,x_1)}{p(x_i|x_{i-1},\ldots,x_1)\,p(y|x_{i-1},\ldots,x_1)} \\
&= \sum_{i=1}^n I_q(X_i;Y|X_{i-1},\ldots,X_1) \\
&\quad + (1-q)\sum_{x_1,\ldots,x_n,y} p(x_1,\ldots,x_n,y)\sum_{i=1}^n \ln_q \frac{p(x_i,y|x_{i-1},\ldots,x_1)}{p(x_i|x_{i-1},\ldots,x_1)\,p(y|x_{i-1},\ldots,x_1)} \\
&\qquad\times \ln_q\!\left[\prod_{i'=i+1}^n \frac{p(x_{i'},y|x_{i'-1},\ldots,x_1)}{p(x_{i'}|x_{i'-1},\ldots,x_1)\,p(y|x_{i'-1},\ldots,x_1)}\right],
\end{aligned} \tag{19}
$$

where the last step follows from the iterated application of Eq. (7).

The next lemma is a consequence of the convexity of the function $u \mapsto u\ln_q u$, and it is analogous to the classical log sum inequality [2]. A similar result was obtained by Furuichi [16].

Lemma 2 ($q$-ln sum inequality). Let $r_1, r_2, \ldots, r_n$ and $s_1, s_2, \ldots, s_n$ be nonnegative numbers. Then

$$ \sum_{i=1}^n r_i \ln_q \frac{r_i}{s_i} \ge \left(\sum_{i=1}^n r_i\right) \ln_q \frac{\sum_{i=1}^n r_i}{\sum_{i=1}^n s_i}, \tag{20} $$

with equality if and only if $r_i/s_i = c$, with $c$ constant.

Proof. Note that $f(u) = u\ln_q u$ is convex for $q \le 2$:

$$ f''(u) = \frac{(2-q)(1-q)\,u^{-q}}{1-q} = (2-q)\,u^{-q} \ge 0. \tag{21} $$

Therefore Jensen's inequality provides

$$ \sum_i \beta_i f(u_i) \ge f\!\left(\sum_i \beta_i u_i\right), \tag{22} $$

with $\sum_i \beta_i = 1$ and $\beta_i \ge 0$. The conclusion follows by setting $\beta_i = \frac{s_i}{\sum_j s_j}$ and $u_i = \frac{r_i}{s_i}$, so that

$$ \sum_{i=1}^n \frac{r_i}{\sum_j s_j} \ln_q \frac{r_i}{s_i} \ge \left(\sum_{i=1}^n \frac{r_i}{\sum_j s_j}\right) \ln_q\!\left(\sum_{i=1}^n \frac{r_i}{\sum_j s_j}\right); \tag{23} $$

multiplying both sides by $\sum_j s_j$ yields (20).

Next, we have the $q$-information inequality.

Theorem 1. For $q \le 2$, the relative $q$-entropy satisfies

$$ D_q(r(x)\,\|\,s(x)) \ge 0, \tag{24} $$

where $r(x)$ and $s(x)$ are two probability functions, with equality if and only if $r(x) = s(x)$ for all $x$.

Proof. Using the $q$-ln sum inequality,

$$ D_q(r(x)\,\|\,s(x)) = \sum_x r(x)\,\ln_q \frac{r(x)}{s(x)} \ge \left(\sum_x r(x)\right)\ln_q \frac{\sum_x r(x)}{\sum_x s(x)} = \ln_q 1 = 0, \tag{25} $$

where the equality occurs if and only if $r(x) = s(x)$, since $r(x)$ and $s(x)$ are probability functions.
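The following self-contained sketch (ours) spot-checks the $q$-ln sum inequality (20) and the nonnegativity of $D_q$ from Theorem 1 on randomly drawn vectors, for several values of $q \le 2$.

```python
import numpy as np

ln_q = lambda x, q: (x ** (1.0 - q) - 1.0) / (1.0 - q)  # Eq. (2), q != 1

rng = np.random.default_rng(1)
for q in (0.2, 0.8, 1.3, 2.0):
    for _ in range(1000):
        r, s = rng.random(6), rng.random(6)
        # Lemma 2: sum_i r_i ln_q(r_i/s_i) >= (sum r) ln_q(sum r / sum s)
        lhs = np.sum(r * ln_q(r / s, q))
        rhs = r.sum() * ln_q(r.sum() / s.sum(), q)
        assert lhs >= rhs - 1e-9
        # Theorem 1: D_q(p||u) >= 0 for probability vectors and q <= 2
        p, u = r / r.sum(), s / s.sum()
        assert np.sum(p * ln_q(p / u, q)) >= -1e-9
```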
Theorem 2. Let $X$ be a random variable with probability function $p(x)$ and $q \le 2$. Then

$$ H_q(X) \le \ln_q |\chi|, \tag{26} $$

where $|\chi|$ denotes the number of elements in the range of $X$, and the equality is satisfied if and only if $X$ has a uniform distribution.

Proof. Consider the uniform probability function $r(x) = |\chi|^{-1}$. Thus

$$ D_q(p(x)\,\|\,r(x)) = \sum_x p(x)\,\ln_q\!\left(p(x)\,|\chi|\right) = \sum_x p(x)\,\ln_q p(x)\left[1 + (1-q)\ln_q|\chi|\right] + \ln_q|\chi| \ge 0 \tag{27} $$

by the nonnegativity of the relative $q$-entropy. Therefore

$$ -\sum_x p(x)\,\ln_q p(x) \le \ln_q|\chi|, \tag{28} $$

as desired.

The next theorem is analogous to the data processing inequality in classical information theory [2].

Theorem 3. If $X \to Y \to Z$ is a Markov chain and $0 \le q < 1$, then

$$ I_q(X;Y) \ge I_q(X;Z) + (1-q)\sum_{x,y,z} p(x,y,z)\,\ln_q \frac{p(x,z)}{p(x)\,p(z)}\,\ln_q \frac{p(x,y|z)}{p(x|z)\,p(y|z)}. $$

Proof. We have that

$$
\begin{aligned}
I_q(X;Y,Z) &= \sum_{x,y,z} p(x,y,z)\,\ln_q \frac{p(x,y,z)}{p(x)\,p(y,z)} \\
&= \sum_{x,y,z} p(x,y,z)\,\ln_q\!\left[\frac{p(x,z)}{p(x)\,p(z)}\,\frac{p(x,y|z)}{p(x|z)\,p(y|z)}\right] \\
&= I_q(X;Z) + I_q(X;Y|Z) + (1-q)\sum_{x,y,z} p(x,y,z)\,\ln_q \frac{p(x,z)}{p(x)\,p(z)}\,\ln_q \frac{p(x,y|z)}{p(x|z)\,p(y|z)}.
\end{aligned}
$$

Similarly, one can prove that

$$ I_q(X;Y,Z) = I_q(X;Y) + I_q(X;Z|Y) + (1-q)\sum_{x,y,z} p(x,y,z)\,\ln_q \frac{p(x,y)}{p(x)\,p(y)}\,\ln_q \frac{p(x,z|y)}{p(x|y)\,p(z|y)}. $$

Note that $I_q(X;Z|Y) = 0$, since $X$ and $Z$ are conditionally independent given $Y$; for the same reason, the cross term in the second expansion vanishes, because its factor $\ln_q\frac{p(x,z|y)}{p(x|y)\,p(z|y)} = \ln_q 1 = 0$. Hence

$$ I_q(X;Y) = I_q(X;Z) + I_q(X;Y|Z) + (1-q)\sum_{x,y,z} p(x,y,z)\,\ln_q \frac{p(x,z)}{p(x)\,p(z)}\,\ln_q \frac{p(x,y|z)}{p(x|z)\,p(y|z)}. $$

The proof is finished since

$$ I_q(X;Y|Z) \ge 0 \tag{29} $$

and

$$ \sum_{x,y,z} p(x,y,z)\,\ln_q \frac{p(x,z)}{p(x)\,p(z)}\,\ln_q \frac{p(x,y|z)}{p(x|z)\,p(y|z)} \le 0. \tag{30} $$
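As an illustration of the quantities appearing in Theorem 3, the sketch below (ours; the binary alphabets and kernel values are arbitrary choices) builds a toy Markov chain $X \to Y \to Z$ and tabulates $I_q(X;Y)$ and $I_q(X;Z)$ via Definition 3, so the reader can compare them.

```python
import numpy as np

ln_q = lambda x, q: (x ** (1.0 - q) - 1.0) / (1.0 - q)  # Eq. (2), q != 1

def mutual_q_information(pxy, q):
    # Eq. (17) applied to a 2D joint table
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    m = pxy > 0
    return np.sum(pxy[m] * ln_q((pxy / np.outer(px, py))[m], q))

px = np.array([0.6, 0.4])                  # distribution of X
k_xy = np.array([[0.9, 0.1], [0.2, 0.8]])  # kernel p(y|x)
k_yz = np.array([[0.7, 0.3], [0.4, 0.6]])  # kernel p(z|y)

pxy = px[:, None] * k_xy                   # joint p(x,y)
pxz = pxy @ k_yz                           # p(x,z) = sum_y p(x,y) p(z|y), Markov property

for q in (0.1, 0.5, 0.9):
    print(q, mutual_q_information(pxy, q), mutual_q_information(pxz, q))
```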
3 Stochastic analysis and the second law of thermodynamics

The following chain rule for the relative $q$-entropy is used to examine a possible violation of the second law of thermodynamics, in analogy with the classical analysis based on the Shannon relative entropy, where the second law is derived [2] (and, in that case, is not violated).

Proposition 6. For $0 \le q < 1$, the relative $q$-entropy satisfies

$$
\begin{aligned}
D_q(p(x,y)\,\|\,r(x,y)) ={}& D_q(p(x)\,\|\,r(x)) + D_q(p(y|x)\,\|\,r(y|x)) \\
&+ (1-q)\sum_{x,y} p(x,y)\,\ln_q \frac{p(x)}{r(x)}\,\ln_q \frac{p(y|x)}{r(y|x)}.
\end{aligned} \tag{31}
$$

Proof.

$$
\begin{aligned}
D_q(p(x,y)\,\|\,r(x,y)) &= \sum_{x,y} p(x,y)\,\ln_q \frac{p(x,y)}{r(x,y)} = \sum_{x,y} p(x,y)\,\ln_q \frac{p(x)\,p(y|x)}{r(x)\,r(y|x)} \\
&= \sum_{x,y} p(x,y)\,\ln_q \frac{p(x)}{r(x)} + \sum_{x,y} p(x,y)\,\ln_q \frac{p(y|x)}{r(y|x)} \\
&\quad + (1-q)\sum_{x,y} p(x,y)\,\ln_q \frac{p(x)}{r(x)}\,\ln_q \frac{p(y|x)}{r(y|x)}.
\end{aligned} \tag{32}
$$

Now we show that our formulation can be extended to a stochastic context.

Definition 5. The $q$-entropy rate of a stochastic process $\{X_i\}$ is defined as

$$ H_q(\chi) = \lim_{n\to\infty} \frac{1}{n} H_q(X_1, X_2, \ldots, X_n), \tag{33} $$

when the limit exists. We also define the conditional $q$-entropy rate of the stochastic process $\{X_i\}$:

$$ H'_q(\chi) = \lim_{n\to\infty} \frac{1}{n} \sum_{i=1}^n H_q(X_i | X_{i-1}, X_{i-2}, \ldots, X_1), \tag{34} $$

when the limit exists.

Theorem 4. For an arbitrary stochastic process $\{X_i\}$, with $0 \le q < 1$, we have

$$ H'_q(\chi) \le H_q(\chi). \tag{35} $$

Proof. Using Lemma 1, we obtain

$$ \sum_{i=1}^n H_q(X_i|X_{i-1}, X_{i-2}, \ldots, X_1) \le H_q(X_1, X_2, \ldots, X_n), $$

so that

$$ \frac{1}{n}\sum_{i=1}^n H_q(X_i|X_{i-1}, X_{i-2}, \ldots, X_1) \le \frac{1}{n} H_q(X_1, X_2, \ldots, X_n). \tag{36} $$

Thus

$$ \lim_{n\to\infty} \frac{1}{n}\sum_{i=1}^n H_q(X_i|X_{i-1}, X_{i-2}, \ldots, X_1) \le \lim_{n\to\infty} \frac{1}{n} H_q(X_1, X_2, \ldots, X_n) = H_q(\chi). \tag{37} $$

In particular, for a stationary Markov chain,

$$ H'_q(\chi) = H_q(X_{i+1}|X_i) \le H_q(\chi). \tag{38} $$
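For a concrete feel for Eq. (38), the sketch below (ours; the transition matrix is an arbitrary choice) computes the stationary distribution of a two-state Markov chain and evaluates the conditional $q$-entropy $H_q(X_{n+1}|X_n)$ of Definition 1 under it.

```python
import numpy as np

ln_q = lambda x, q: (x ** (1.0 - q) - 1.0) / (1.0 - q)  # Eq. (2), q != 1

P = np.array([[0.9, 0.1],
              [0.3, 0.7]])                 # transition matrix r(x_{n+1}|x_n)

w, v = np.linalg.eig(P.T)                  # stationary pi solves pi P = pi
pi = np.real(v[:, np.argmax(np.real(w))])  # eigenvector for eigenvalue 1
pi /= pi.sum()

pxy = pi[:, None] * P                      # stationary joint p(x_n, x_{n+1})
for q in (0.2, 0.5, 0.8):
    # Definition 1 / Eq. (38): H_q(X_{n+1}|X_n) = -sum p(x_n, x_{n+1}) ln_q r(x_{n+1}|x_n)
    print(q, -np.sum(pxy * ln_q(P, q)))
```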
Although Maxwell's demon was proposed more than a century ago, it remains a conceptually relevant problem in the foundations of physics [17, 18, 19, 20, 21, 22]. Aquino [23] analyzes the implications of adopting a non-extensive thermodynamics, showing that, in this case, the effect of Maxwell's demon would be determined by the memory of the system and would therefore be temporary, in contrast with dynamical approaches based on Lévy statistics. We believe that the parameter $q$ of the non-extensive entropy may provide a theoretical perspective for investigating apparent deviations from the standard formulation of the second law of thermodynamics in generalized scenarios. The following theorem illustrates this possibility through an analysis analogous to the proof of the second law of thermodynamics presented by Cover [2] for an isolated system modeled by a Markov chain.

Theorem 5. Let $\psi_n$ and $\psi'_n$ be two probability distributions on the state space of a Markov chain (with probability functions $p(x_n, x_{n+1}) = p(x_n)\,r(x_{n+1}|x_n)$ and $s(x_n, x_{n+1}) = s(x_n)\,r(x_{n+1}|x_n)$, where $r$ is the transition probability function of the chain), let $\psi'_n = \psi$ be the uniform stationary distribution on $\chi$, and let $0 \le q < 1$. Then

$$ \left[H_q(\psi_{n+1}) - H_q(\psi_n)\right]\left[1+(1-q)\ln_q|\chi|\right] \ge T_q, $$

where

$$ T_q = (1-q)\sum_{x_n,x_{n+1}} p(x_{n+1},x_n)\,\ln_q\!\left[p(x_{n+1})\,|\chi|\right]\,\ln_q \frac{p(x_n|x_{n+1})}{s(x_n|x_{n+1})}. $$

Proof. By Proposition 6, the relative $q$-entropy can be expanded in two ways:

$$
\begin{aligned}
D_q(p(x_n,x_{n+1})\,\|\,s(x_n,x_{n+1})) ={}& D_q(p(x_n)\,\|\,s(x_n)) + D_q(p(x_{n+1}|x_n)\,\|\,s(x_{n+1}|x_n)) \\
&+ (1-q)\sum_{x_n,x_{n+1}} p(x_n,x_{n+1})\,\ln_q\frac{p(x_n)}{s(x_n)}\,\ln_q\frac{p(x_{n+1}|x_n)}{s(x_{n+1}|x_n)} \\
={}& D_q(p(x_{n+1})\,\|\,s(x_{n+1})) + D_q(p(x_n|x_{n+1})\,\|\,s(x_n|x_{n+1})) \\
&+ (1-q)\sum_{x_n,x_{n+1}} p(x_{n+1},x_n)\,\ln_q\frac{p(x_{n+1})}{s(x_{n+1})}\,\ln_q\frac{p(x_n|x_{n+1})}{s(x_n|x_{n+1})}.
\end{aligned}
$$

But

$$ p(x_{n+1}|x_n) = s(x_{n+1}|x_n) = r(x_{n+1}|x_n), \tag{39} $$

so the conditional term $D_q(p(x_{n+1}|x_n)\,\|\,s(x_{n+1}|x_n))$ and the corresponding cross term vanish. Thus, by Theorem 1 (nonnegativity of the relative $q$-entropy),

$$ D_q(p(x_n)\,\|\,s(x_n)) \ge D_q(p(x_{n+1})\,\|\,s(x_{n+1})) + (1-q)\sum_{x_n,x_{n+1}} p(x_{n+1},x_n)\,\ln_q\frac{p(x_{n+1})}{s(x_{n+1})}\,\ln_q\frac{p(x_n|x_{n+1})}{s(x_n|x_{n+1})}, $$

namely

$$ D_q(\psi_n\,\|\,\psi'_n) \ge D_q(\psi_{n+1}\,\|\,\psi'_{n+1}) + (1-q)\sum_{x_n,x_{n+1}} p(x_{n+1},x_n)\,\ln_q\frac{p(x_{n+1})}{s(x_{n+1})}\,\ln_q\frac{p(x_n|x_{n+1})}{s(x_n|x_{n+1})}. $$

If $\psi'_n = \psi$ is a stationary distribution, this reads

$$ D_q(\psi_n\,\|\,\psi) \ge D_q(\psi_{n+1}\,\|\,\psi) + (1-q)\sum_{x_n,x_{n+1}} p(x_{n+1},x_n)\,\ln_q\frac{p(x_{n+1})}{s(x_{n+1})}\,\ln_q\frac{p(x_n|x_{n+1})}{s(x_n|x_{n+1})}. $$

In particular, for a uniform stationary distribution a direct computation gives

$$ D_q(\psi_n\,\|\,\psi) = \ln_q|\chi| - \left[1+(1-q)\ln_q|\chi|\right] H_q(\psi_n), $$

so that

$$
\begin{aligned}
\ln_q|\chi| - \left[1+(1-q)\ln_q|\chi|\right] H_q(\psi_n) \ge{}& \ln_q|\chi| - \left[1+(1-q)\ln_q|\chi|\right] H_q(\psi_{n+1}) \\
&+ (1-q)\sum_{x_n,x_{n+1}} p(x_{n+1},x_n)\,\ln_q\frac{p(x_{n+1})}{s(x_{n+1})}\,\ln_q\frac{p(x_n|x_{n+1})}{s(x_n|x_{n+1})}.
\end{aligned}
$$

Therefore

$$ \left[H_q(\psi_{n+1}) - H_q(\psi_n)\right]\left[1+(1-q)\ln_q|\chi|\right] \ge T_q, \tag{40} $$

where

$$ T_q = (1-q)\sum_{x_n,x_{n+1}} p(x_{n+1},x_n)\,\ln_q\frac{p(x_{n+1})}{s(x_{n+1})}\,\ln_q\frac{p(x_n|x_{n+1})}{s(x_n|x_{n+1})}. \tag{41} $$

Notice that the term $T_q$ may be negative, allowing a decrease in entropy.

4 Maximum entropy method for the q-entropy

The maximum entropy method was originally formulated by Edwin T. Jaynes [24, 2]. Its central idea is to select, among all probability distributions compatible with a given set of macroscopic constraints (such as known expectation values), the one that maximizes the entropy. In the classical framework, the Shannon entropy is employed, and its maximization naturally leads to the familiar exponential distributions of the statistical mechanics developed by Boltzmann and Gibbs [24].

In this context, we can apply the maximum entropy method [24, 2] to the $q$-entropy with the constraints

$$ \sum_{i=1}^n p_i\,\epsilon_i = \hat{\epsilon} \tag{42} $$

and

$$ \sum_{i=1}^n p_i = 1. \tag{43} $$

The Lagrangian is given by

$$ \mathcal{L} = -\sum_{i=1}^n p_i \ln_q p_i - (\lambda-1)\left(\sum_{i=1}^n p_i - 1\right) - \mu\left(\sum_{i=1}^n p_i\,\epsilon_i - \hat{\epsilon}\right). \tag{44} $$

Thus we have the associated probability distribution

$$ p_i = \exp_q\!\left(\frac{-\lambda - \mu\,\epsilon_i}{2-q}\right). \tag{45} $$

The parameters $\lambda$ and $\mu$ can be obtained from the system of nonlinear equations

$$ \sum_{i=1}^n \exp_q\!\left(\frac{-\lambda-\mu\,\epsilon_i}{2-q}\right)\epsilon_i = \hat{\epsilon}, \qquad \sum_{i=1}^n \exp_q\!\left(\frac{-\lambda-\mu\,\epsilon_i}{2-q}\right) = 1. $$

It remains to prove that this is the maximum entropy distribution. Let

$$ H' = -\sum_{i=1}^n f_i \ln_q f_i \tag{46} $$

be the $q$-entropy of another arbitrary probability distribution $\{f_i\}$. Using (45), we have

$$ H - H' = -\sum_{i=1}^n p_i \ln_q p_i + \sum_{i=1}^n f_i \ln_q f_i = \sum_{i=1}^n (f_i - p_i)\,\frac{-\lambda-\mu\,\epsilon_i}{2-q} + \sum_{i=1}^n \left[1+(1-q)\ln_q p_i\right] f_i \ln_q \frac{f_i}{p_i}. \tag{47} $$

The first sum vanishes because both distributions satisfy the constraints (42) and (43). Then

$$ H - H' = \sum_{i=1}^n \left[1+(1-q)\ln_q p_i\right] f_i \ln_q \frac{f_i}{p_i} \ge \left(\sum_{i=1}^n f_i\right) \ln_q \frac{\sum_{i=1}^n f_i}{\sum_{i=1}^n p_i} = 0, \tag{48} $$

by the $q$-ln sum inequality (Lemma 2). Moreover, we have equality if and only if $f_i = p_i$.
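A minimal numerical sketch of this procedure (ours; the energy levels, the prescribed mean, and the use of `scipy.optimize.fsolve` are illustrative choices, and convergence depends on the starting point) solves the two constraint equations for $\lambda$ and $\mu$ and recovers the distribution (45).

```python
import numpy as np
from scipy.optimize import fsolve

def exp_q(x, q):
    # q-exponential, inverse of ln_q: [1 + (1-q) x]_+^{1/(1-q)}
    base = np.maximum(1.0 + (1.0 - q) * x, 0.0)
    return base ** (1.0 / (1.0 - q))

q = 0.7
eps = np.array([0.0, 1.0, 2.0, 3.0])   # illustrative energy levels
eps_hat = 1.2                          # prescribed mean energy, Eq. (42)

def constraints(lam_mu):
    lam, mu = lam_mu
    p = exp_q((-lam - mu * eps) / (2.0 - q), q)   # Eq. (45)
    return [p.sum() - 1.0, p @ eps - eps_hat]     # Eqs. (43) and (42)

lam, mu = fsolve(constraints, x0=[1.0, 0.1])
p = exp_q((-lam - mu * eps) / (2.0 - q), q)
print(p, p.sum(), p @ eps)             # normalized and matching the mean-energy constraint
```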
5 q-Version of the Shannon-McMillan-Breiman Theorem

In this section, we present a version of the Shannon-McMillan-Breiman theorem [2] within the nonadditive entropic scenario considered in this work. The result characterizes the asymptotic behavior of the information associated with stationary stochastic processes and establishes a $q$-version of the asymptotic equipartition property for general ergodic processes. Throughout this section, we consider $1/2 < q < 1$. Some basic probabilistic concepts relevant to this section are summarized in Appendix A.

Lemma 3. Let $\{X_n\}_{n\in\mathbb{Z}}$ be a stationary ergodic stochastic process over a finite alphabet $\mathcal{X}$. Fix $q < 1$ and define the $q$-logarithm

$$ \ln_q(x) = \frac{x^{1-q}-1}{1-q}. \tag{49} $$

Define the non-extensive conditional entropy rate

$$ H_{q,\infty} := \mathbb{E}\left[-\ln_q p(X_0 | X_{-\infty}^{-1})\right], \tag{50} $$

where the expectation is over the joint distribution of the past $X_{-\infty}^{-1}$ and the present $X_0$. Using the rule of probability conditioning,

$$ p(x_{-\infty}^{-1}, x_0) = p(x_{-\infty}^{-1})\,p(x_0|x_{-\infty}^{-1}), \tag{51} $$

we can expand $H_{q,\infty}$ as a double sum over the alphabet $\mathcal{X}$ and past sequences:

$$ H_{q,\infty} = -\sum_{x_{-\infty}^{-1}} p(x_{-\infty}^{-1}) \sum_{x_0\in\mathcal{X}} p(x_0|x_{-\infty}^{-1})\,\frac{p(x_0|x_{-\infty}^{-1})^{1-q}-1}{1-q}. \tag{52} $$

Then, almost surely,

$$ \limsup_{n\to\infty}\, -\frac{1}{n}\ln_q p(X_0^{n-1} | X_{-\infty}^{-1}) \le H_{q,\infty}. \tag{53} $$

Proof. Let

$$ Z_i := -\ln_q p(X_i | X_{-\infty}^{i-1}). \tag{54} $$

Since $\{X_i\}$ is stationary and ergodic, $\{Z_i\}$ is also stationary and ergodic. By the ergodic theorem [2],

$$ \frac{1}{n}\sum_{i=0}^{n-1} Z_i \xrightarrow{\text{a.s.}} \mathbb{E}[Z_0] = H_{q,\infty}. \tag{55} $$

We have

$$ \ln_q(xy) = \ln_q(x) + \ln_q(y) + (1-q)\ln_q(x)\ln_q(y), \tag{56} $$

so that, for any $n$ and $0 \le q < 1$,

$$ -\ln_q p(X_0^{n-1} | X_{-\infty}^{-1}) = -\ln_q \prod_{i=0}^{n-1} p\!\left(X_i | X_{-\infty}^{i-1}\right) \le \sum_{i=0}^{n-1} Z_i. \tag{57} $$

Dividing by $n$:

$$ -\frac{1}{n}\ln_q p(X_0^{n-1} | X_{-\infty}^{-1}) \le \frac{1}{n}\sum_{i=0}^{n-1} Z_i. \tag{58} $$

From (55), taking the $\limsup$ gives

$$ \limsup_{n\to\infty}\, -\frac{1}{n}\ln_q p(X_0^{n-1}|X_{-\infty}^{-1}) \le H_{q,\infty}. \tag{59} $$

Using the expanded form (52) of $H_{q,\infty}$:

$$ \limsup_{n\to\infty}\, -\frac{1}{n}\ln_q p(X_0^{n-1}|X_{-\infty}^{-1}) \le -\sum_{x_{-\infty}^{-1}} p(x_{-\infty}^{-1}) \sum_{x_0\in\mathcal{X}} p(x_0|x_{-\infty}^{-1})\,\frac{p(x_0|x_{-\infty}^{-1})^{1-q}-1}{1-q}. \tag{60} $$

Here the first $p$ comes from the distribution of the past, and the second from the conditional probability of $X_0$ given the past.

The $k$th-order Markov approximation to the probability is defined, for $n \ge k$, as [2]

$$ p^k(X_0^{n-1}) = p(X_0^{k-1}) \prod_{i=k}^{n-1} p\!\left(X_i | X_{i-k}^{i-1}\right). \tag{61} $$

Lemma 4. Let $\{X_n\}_{n\in\mathbb{Z}}$ be a stationary stochastic process with finite alphabet. Assume in addition that there exists a constant $C>0$ such that, for all blocks $x_0^{n-1}$,

$$ p(x_0^{n-1}) \le C\,p^k(x_0^{n-1}). \tag{62} $$

Then, almost surely,

$$ \limsup_{n\to\infty} \frac{1}{n}\ln_q \frac{p(X_0^{n-1}|X_{-\infty}^{-1})}{p(X_0^{n-1})} \le 0, \tag{63} $$

$$ \limsup_{n\to\infty} \frac{1}{n}\ln_q \frac{p(X_0^{n-1})}{p^k(X_0^{n-1})} \le 0. \tag{64} $$

Proof. Let $A$ be the support of $p(X_0^{n-1})$. Then

$$ \mathbb{E}\left[\frac{p(X_0^{n-1}|X_{-\infty}^{-1})}{p(X_0^{n-1})}\right] = \mathbb{E}\left[\mathbb{E}\left[\frac{p(X_0^{n-1}|X_{-\infty}^{-1})}{p(X_0^{n-1})} \,\middle|\, X_{-\infty}^{-1}\right]\right] = \mathbb{E}\left[\sum_{x_0^{n-1}\in A} p(x_0^{n-1}|X_{-\infty}^{-1})\right] \le 1. \tag{65-67} $$

Similarly, using the additional hypothesis (62), we have

$$ \mathbb{E}\left[\frac{p(X_0^{n-1})}{p^k(X_0^{n-1})}\right] = \sum_{x_0^{n-1}\in A} p(x_0^{n-1})\,\frac{p(x_0^{n-1})}{p^k(x_0^{n-1})} \le \sum_{x_0^{n-1}\in A} p(x_0^{n-1})\,C = C. \tag{68-70} $$

By Markov's inequality [25], for any $t_n > 0$,

$$ \Pr\left[\frac{p(X_0^{n-1}|X_{-\infty}^{-1})}{p(X_0^{n-1})} \ge t_n\right] \le \frac{1}{t_n}, \tag{71} $$

$$ \Pr\left[\frac{p(X_0^{n-1})}{p^k(X_0^{n-1})} \ge t_n\right] \le \frac{C}{t_n}. \tag{72} $$

Since $\frac{d}{dx}\ln_q(x) > 0$ for $x > 0$, we obtain

$$ \Pr\left[\frac{1}{n}\ln_q \frac{p(X_0^{n-1}|X_{-\infty}^{-1})}{p(X_0^{n-1})} \ge \frac{1}{n}\ln_q t_n\right] \le \frac{1}{t_n}, \tag{73} $$

$$ \Pr\left[\frac{1}{n}\ln_q \frac{p(X_0^{n-1})}{p^k(X_0^{n-1})} \ge \frac{1}{n}\ln_q t_n\right] \le \frac{C}{t_n}. \tag{74} $$

Choose $t_n = n^2$. Then $\sum_{n=1}^\infty 1/t_n < \infty$ and, for $q > 1/2$,

$$ \ln_q(t_n) = \ln_q(n^2) = \frac{n^{2(1-q)}-1}{1-q}, \tag{75} $$

$$ \frac{1}{n}\ln_q(t_n) \to 0. \tag{76} $$
By the Borel-Cantelli lemma [25],

$$ \limsup_{n\to\infty} \frac{1}{n}\ln_q \frac{p(X_0^{n-1}|X_{-\infty}^{-1})}{p(X_0^{n-1})} \le 0 \quad \text{a.s.}, \tag{77} $$

and

$$ \limsup_{n\to\infty} \frac{1}{n}\ln_q \frac{p(X_0^{n-1})}{p^k(X_0^{n-1})} \le 0 \quad \text{a.s.} \tag{78} $$

Lemma 5. Let $\{X_n\}_{n\in\mathbb{Z}}$ be a stationary ergodic stochastic process over a finite alphabet. Fix $k \ge 1$ and define

$$ p^k(X_0^{n-1}) = p(X_0^{k-1}) \prod_{i=k}^{n-1} p(X_i | X_{i-1}, \ldots, X_{i-k}). \tag{79} $$

For $\frac{1}{2} < q < 1$, define

$$ Z_i := -\ln_q p(X_i | X_{i-1}, \ldots, X_{i-k}) \tag{80} $$

and

$$ H_{q,k} := \mathbb{E}[Z_0]. \tag{81} $$

Define $T_3$ as the sum of all interaction terms of order greater than or equal to two arising from the iterative application of

$$ \ln_q(xy) = \ln_q(x) + \ln_q(y) + (1-q)\ln_q(x)\ln_q(y) \tag{82} $$

to the product $\prod_{i=k}^{n-1} p(X_i|X_{i-1},\ldots,X_{i-k})$, i.e.,

$$ \ln_q\!\left(\prod_{i=k}^{n-1} p_i\right) = \sum_{i=k}^{n-1} \ln_q(p_i) + T_3. \tag{83} $$

Assume that

$$ \frac{1}{n} T_3 \xrightarrow{\text{a.s.}} 0. \tag{84} $$

Then, almost surely,

$$ \liminf_{n\to\infty}\, -\frac{1}{n}\ln_q p^k(X_0^{n-1}) \ge H_{q,k}. \tag{85} $$

Proof. We write

$$ p^k(X_0^{n-1}) = p(X_0^{k-1}) \prod_{i=k}^{n-1} p_i, \tag{86} $$

where $p_i := p(X_i|X_{i-k}^{i-1})$. Applying iteratively the identity

$$ \ln_q(xy) = \ln_q(x) + \ln_q(y) + (1-q)\ln_q(x)\ln_q(y), \tag{87} $$

we obtain the decomposition

$$ -\ln_q p^k(X_0^{n-1}) = T_1 + T_2 + T_3, \tag{88} $$

where

$$ T_1 := -\ln_q p(X_0^{k-1}), \tag{89} $$

$$ T_2 := \sum_{i=k}^{n-1} Z_i. \tag{90} $$

Since $0 < p_i \le 1$ and $q < 1$, we have $\ln_q(p_i) \le 0$, hence

$$ Z_i = -\ln_q(p_i) \ge 0. \tag{91} $$

Thus

$$ -\ln_q p^k(X_0^{n-1}) = T_1 + T_2 + T_3 \ge T_2 - |T_1| + T_3. \tag{92} $$

Dividing by $n$, we obtain

$$ -\frac{1}{n}\ln_q p^k(X_0^{n-1}) \ge \frac{1}{n}T_2 - \frac{|T_1|}{n} + \frac{1}{n}T_3. \tag{93} $$

Since $T_1$ does not grow with $n$, we have

$$ \frac{|T_1|}{n} \to 0. \tag{94} $$

By assumption,

$$ \frac{1}{n}T_3 \to 0 \quad \text{a.s.} \tag{95} $$

Taking the limit inferior, it follows that

$$ \liminf_{n\to\infty}\, -\frac{1}{n}\ln_q p^k(X_0^{n-1}) \ge \lim_{n\to\infty} \frac{1}{n}T_2. \tag{96} $$

By the ergodic theorem, since $\{Z_i\}$ is stationary and ergodic,

$$ \frac{1}{n}T_2 = \frac{1}{n}\sum_{i=k}^{n-1} Z_i \xrightarrow{\text{a.s.}} \mathbb{E}[Z_0] = H_{q,k}. \tag{97} $$

Hence

$$ \liminf_{n\to\infty}\, -\frac{1}{n}\ln_q p^k(X_0^{n-1}) \ge H_{q,k}. \tag{98} $$

Lemma 6. Let $\{X_n\}_{n\in\mathbb{Z}}$ be a bilateral stochastic process with finite alphabet $\mathcal{X}$. For each $k \ge 1$, define the order-$k$ conditional Tsallis entropy by

$$ H_{q,k} := H_q(X_0 | X_{-1}, \ldots, X_{-k}). \tag{99} $$

We write

$$ H_{q,k} \searrow H_{q,\infty} \tag{100} $$

to indicate that $(H_{q,k})$ is monotone decreasing and convergent. Then, for $q \le 2$,

$$ H_{q,k} \searrow H_{q,\infty} = H_q(X_0 | X_{-\infty}^{-1}). \tag{101} $$

Proof. For $p \in [0,1]$ and $q \le 2$,

$$ \ln_q(p) \le 0. \tag{102} $$

Hence

$$ H_{q,k} \ge 0 \tag{103} $$

for all $k$. For $q \le 2$, define

$$ \phi(p) = -p\ln_q(p). \tag{104} $$

Then $\phi$ is concave on $[0,1]$. Let $U, V, W$ be finite-valued random variables. Fix $v$ in the range of $V$. By the law of total probability,

$$ p(u|V=v) = \sum_w p(u, W=w | V=v). \tag{105} $$

By the conditional product rule,

$$ p(u, W=w|V=v) = p(u|V=v, W=w)\,p(W=w|V=v). \tag{106} $$

Hence

$$ p(u|V=v) = \sum_w p(u|V=v, W=w)\,p(W=w|V=v). \tag{107} $$

Thus $p(u|V=v)$ is a convex combination of the numbers $p(u|V=v, W=w)$ with weights $p(W=w|V=v)$. Since $\phi$ is concave, Jensen's inequality gives

$$ \phi(p(u|V=v)) \ge \sum_w p(W=w|V=v)\,\phi(p(u|V=v, W=w)). \tag{108} $$

Summing over $u$,

$$ \sum_u \phi(p(u|V=v)) \ge \sum_w p(W=w|V=v) \sum_u \phi(p(u|V=v, W=w)). \tag{109} $$

Taking the expectation with respect to $V$ yields

$$ H_q(U|V) \ge H_q(U|V,W). \tag{110} $$

Hence

$$ H_q(U|V,W) \le H_q(U|V). \tag{111} $$
Applying this with

$$ U = X_0, \qquad V = (X_{-1}, \ldots, X_{-k}), \qquad W = X_{-(k+1)}, \tag{112} $$

we obtain

$$ H_{q,k+1} \le H_{q,k}. \tag{113} $$

Thus $(H_{q,k})$ is decreasing and bounded below by $0$, so the limit

$$ H_{q,\infty} := \lim_{k\to\infty} H_{q,k} \tag{114} $$

exists. Let

$$ \mathcal{F}_k = \sigma(X_{-1}, \ldots, X_{-k}). \tag{115} $$

Then $(\mathcal{F}_k)$ is an increasing sequence of $\sigma$-algebras and

$$ \bigvee_{k\ge 1} \mathcal{F}_k = \sigma(X_{-\infty}^{-1}). \tag{116} $$

For each fixed $x_0 \in \mathcal{X}$,

$$ p(x_0 | X_{-k}^{-1}) = \mathbb{E}\left[\mathbb{1}_{\{X_0 = x_0\}} \,\middle|\, \mathcal{F}_k\right]. \tag{117} $$

By Lévy's martingale convergence theorem [2],

$$ p(x_0 | X_{-k}^{-1}) \xrightarrow[k\to\infty]{\text{a.s.}} p(x_0 | X_{-\infty}^{-1}). \tag{118} $$

Since $\phi$ is continuous on $[0,1]$ and bounded for $q \le 2$, there exists $M > 0$ such that

$$ |\phi(p)| \le M \tag{119} $$

for all $p \in [0,1]$. Because $\mathcal{X}$ is finite,

$$ \sum_{x_0\in\mathcal{X}} \phi(p(x_0|X_{-k}^{-1})) \le |\mathcal{X}|\,M, \tag{120} $$

which is integrable. Therefore, by the dominated convergence theorem [25, 26],

$$ \lim_{k\to\infty} H_{q,k} = \mathbb{E}\left[-\sum_{x_0\in\mathcal{X}} p(x_0|X_{-\infty}^{-1})\,\ln_q p(x_0|X_{-\infty}^{-1})\right]. \tag{121} $$

By definition, the right-hand side equals

$$ H_q(X_0 | X_{-\infty}^{-1}), \tag{122} $$

and therefore

$$ H_{q,\infty} = H_q(X_0|X_{-\infty}^{-1}). \tag{123} $$

Theorem 6. Let $\{X_n\}_{n\in\mathbb{Z}}$ be a stationary ergodic stochastic process over a finite alphabet $\mathcal{X}$. Assume in addition that there exists a constant $C>0$ such that, for all blocks $x_0^{n-1}$,

$$ p(x_0^{n-1}) \le C\,p^k(x_0^{n-1}), \tag{124} $$

and that

$$ p(X_0^{n-1} | X_{-\infty}^{-1}) \ge p(X_0^{n-1}), \tag{125} $$

$$ p(X_0^{n-1}) \ge p^k(X_0^{n-1}) \tag{126} $$

for all $n$. Assume also that the conditions of Lemma 5 are satisfied. Define the non-extensive conditional entropy rate

$$ H_{q,\infty} := \mathbb{E}\left[-\ln_q p(X_0|X_{-\infty}^{-1})\right]. $$

Then, for $1/2 < q < 1$, almost surely,

$$ -\frac{1}{n}\ln_q p(X_0^{n-1}) \to H_{q,\infty}. $$

Proof. Using conditions (125) and (126), we have

$$ \ln_q p(X_0^{n-1}|X_{-\infty}^{-1}) - \ln_q p(X_0^{n-1}) \le \ln_q \frac{p(X_0^{n-1}|X_{-\infty}^{-1})}{p(X_0^{n-1})}, \tag{127} $$

$$ \ln_q p(X_0^{n-1}) - \ln_q p^k(X_0^{n-1}) \le \ln_q \frac{p(X_0^{n-1})}{p^k(X_0^{n-1})}. \tag{128} $$

Therefore, by Lemmas 3 and 4,

$$ \limsup_{n\to\infty}\, -\frac{1}{n}\ln_q p(X_0^{n-1}) \le \limsup_{n\to\infty}\, -\frac{1}{n}\ln_q p(X_0^{n-1}|X_{-\infty}^{-1}) \le H_{q,\infty}. \tag{129} $$

From Lemmas 4 and 5,

$$ \liminf_{n\to\infty}\, -\frac{1}{n}\ln_q p(X_0^{n-1}) \ge \liminf_{n\to\infty}\, -\frac{1}{n}\ln_q p^k(X_0^{n-1}) \ge H_{q,k}. \tag{130} $$

Combining,

$$ H_{q,k} \le \liminf_{n\to\infty}\, -\frac{1}{n}\ln_q p(X_0^{n-1}) \le \limsup_{n\to\infty}\, -\frac{1}{n}\ln_q p(X_0^{n-1}) \le H_{q,\infty}. \tag{131} $$

Hence, letting $k \to \infty$ and using Lemma 6, we obtain, almost surely,

$$ \lim_{n\to\infty}\, -\frac{1}{n}\ln_q p(X_0^{n-1}) = H_{q,\infty}. \tag{132} $$

Although the nonadditive parameter $q$ gives rise to further technical difficulties, the result holds rigorously for $1/2 < q < 1$, provided that condition (124) is satisfied.

6 Conclusions

In this work, we have introduced definitions and derived information-theoretic inequalities based on a modified Tsallis entropy, which, we argue, provides a more natural generalization of the Shannon entropy. We defined the notions of joint $q$-entropy, conditional $q$-entropy, relative $q$-entropy, and conditional mutual $q$-information, following an approach different from that of Ref. [16], and established several inequalities analogous to those of classical information theory. These results hold, in general, for $0 < q < 1$. This restriction may be relaxed, leading to a distinct set of inequalities that no longer preserve a direct analogy with Shannon's context. Within this scenario, the information-theoretic results developed here were employed, in the context of Markov chains, to prove a version of the second law of thermodynamics. We also applied the formalism to the maximum entropy method.
In addition, we proved a Tsallis version of the Shannon-McMillan-Breiman theorem for $1/2 < q < 1$. As perspectives for future work, potential applications in the context of fractals and multifractals may be explored through the information dimension

$$ \lim_{\epsilon\to 0} \frac{-\langle \ln_q p \rangle}{\ln_q (1/\epsilon)}, $$

where $\epsilon$ stands for the scaling factor.

A Probabilistic Tools for Finite Alphabet Sources

In this appendix, we summarize fundamental probabilistic concepts that are frequently used in information theory for processes over finite alphabets. We focus on ergodic transformations, the ergodic theorem for stationary sources, and almost-sure convergence results such as the Borel-Cantelli lemma and the dominated convergence theorem. These tools provide the theoretical foundation for results like the asymptotic equipartition property and the Shannon-McMillan-Breiman theorem. In the following, we closely follow the presentations in References [2, 25, 26].

A.1 Ergodic Transformations

Let $(\Omega, \mathcal{F}, P)$ be a probability space, and let $T: \Omega \to \Omega$ be a transformation such that

$$ P(TA) = P(A), \qquad \forall A \in \mathcal{F}. $$

Such a transformation is called measure-preserving. A measure-preserving transformation $T$ is ergodic if every $T$-invariant event $A$ (i.e., $TA = A$) satisfies $P(A) \in \{0,1\}$.

For a stationary stochastic process $\{X_n\}_{n\ge 1}$ over a finite alphabet $\chi$, the shift operator

$$ T((X_1, X_2, X_3, \ldots)) = (X_2, X_3, X_4, \ldots) $$

defines a measure-preserving transformation. The process is ergodic if this shift is ergodic.

A.2 Ergodic Theorem for Stationary Sources

Let $\{X_n\}_{n\ge 1}$ be a stationary ergodic source over a finite alphabet $\chi$, and let $T$ be the shift operator defined above. Then, for any integrable function $f$ defined on the source outputs (e.g., the self-information of blocks of length $n$), the time average converges almost surely:

$$ \lim_{n\to\infty} \frac{1}{n}\sum_{i=0}^{n-1} f(T^i X) = \mathbb{E}[f(X)] \quad \text{a.s.} $$

In particular, for the self-information of a block of length $n$,

$$ -\frac{1}{n}\log \Pr(X_1, X_2, \ldots, X_n) \to H(X) $$

almost surely, where $H(X)$ is the entropy rate of the stationary ergodic source.

A.3 Borel-Cantelli Lemma

The Borel-Cantelli lemma provides a criterion for the almost-sure occurrence of infinitely many events.

Lemma 7 (Borel-Cantelli). Let $\{A_n\}_{n\ge 1}$ be a sequence of events in $(\Omega, \mathcal{F}, P)$.

1. If $\sum_{n=1}^\infty P(A_n) < \infty$, then $P(A_n \text{ occurs infinitely often}) = 0$.
2. If the events $\{A_n\}$ are independent and $\sum_{n=1}^\infty P(A_n) = \infty$, then $P(A_n \text{ occurs infinitely often}) = 1$.

A.4 Dominated Convergence Theorem

An important tool from measure theory is the dominated convergence theorem, which allows the interchange of limits and expectations under certain conditions.

Theorem 7 (Dominated Convergence Theorem). Let $\{f_n\}_{n\ge 1}$ be a sequence of integrable functions on $(\Omega, \mathcal{F}, P)$ such that $f_n \to f$ almost surely and there exists an integrable function $g$ with $|f_n(\omega)| \le g(\omega)$ for all $n \ge 1$ and almost every $\omega \in \Omega$. Then $f$ is integrable and

$$ \lim_{n\to\infty} \mathbb{E}[f_n] = \mathbb{E}[f]. $$

References

[1] C. E. Shannon, A Mathematical Theory of Communication, Bell System Technical Journal 27, 3, 379-423 (1948).

[2] T. Cover and J. A. Thomas, Elements of Information Theory (Wiley-Interscience, New York, 2006).

[3] M. Nielsen and I. Chuang, Quantum Computation and Quantum Information (Cambridge University Press, Cambridge, 2000).
[4] K. Huang, Statistical Mechanics (John Wiley and Sons, New York, 1963).

[5] L. D. Landau and E. M. Lifshitz, Physique Statistique (MIR, Moscow, 1967).

[6] R. K. Pathria, Statistical Mechanics (Pergamon Press, Oxford, 1972).

[7] C. Tsallis, J. Stat. Phys. 52, 479 (1988).

[8] C. Tsallis, Introduction to Nonextensive Statistical Mechanics: Approaching a Complex World (Springer-Verlag, New York, 2009).

[9] A. J. da Silva, M. A. S. Trindade, D. O. C. Santos and R. F. Lima, Biol. Cybern. 110, 31 (2016).

[10] D. O. C. Santos, M. A. S. Trindade, A. J. da Silva, BioSystems 232, 105005 (2023).

[11] M. A. S. Trindade, S. Floquet, L. M. S. Filho, Physica A 541, 12377 (2020).

[12] M. Gell-Mann and C. Tsallis (Eds.), Nonextensive Entropy: Interdisciplinary Applications (Oxford University Press, Oxford, 2003).

[13] K. Lindgren, Information Theory for Complex Systems (Springer, Berlin, 2024).

[14] A. Golan and J. Harte, Information theory: a foundation for complexity science, Proc. Natl. Acad. Sci. U.S.A. 119, 33, e2119089119 (2022).

[15] T. F. Varley, Information Theory for Complex Systems Scientists: What, Why and How?, arXiv:23041248v2 (2023).

[16] S. Furuichi, Journal of Mathematical Physics 47, 023302 (2006).

[17] J. C. Maxwell, Theory of Heat (Longmans, Green and Co., London, 1871).

[18] L. Szilard, Zeitschrift für Physik 53, 840-856 (1929).

[19] R. P. Feynman, R. B. Leighton and M. Sands, The Feynman Lectures on Physics (Addison-Wesley, Reading, Mass., 1965).

[20] R. Landauer, IBM J. Res. Dev. 5, 183 (1961).

[21] C. H. Bennett, Int. J. Theor. Phys. 21, 12, 905-939 (1982).

[22] C. H. Bennett, Sci. Am. 295, 5, 108 (1987).

[23] G. Aquino, P. Grigolini, N. Scafetta, Chaos, Solitons and Fractals 12, 2023-2038 (2001).

[24] E. T. Jaynes, Physical Review, Series II 106, 4, 620-630 (1957).

[25] R. Durrett, Probability: Theory and Examples (Wadsworth and Brooks/Cole, Pacific Grove, California, 1991).

[26] J. L. Doob, Measure Theory (Springer-Verlag, New York and Berlin, 1991).