Stabilization of stochastic networks in Markovian environment


Authors: Robin Kaiser, Martin Klötzer, Ecaterina Sava-Huss

March 27, 2026

Abstract. We establish criteria under which stochastic networks in a Markovian environment stabilize, thus confirming [GL23, Conjecture 7.2]. The networks evolve on finite connected graphs $G=(V,E)$, and their dynamics are encoded by $V \times V$ toppling matrices $M$, whose columns record the expected number of topplings when the environment is in stationarity. Stabilization and non-stabilization are characterized by a parameter $\rho$ which depends on the largest eigenvalue of the matrix $M + \alpha I$, with $\alpha = 1 + \max\{-M(v,v) : v \in V\}$. The proofs rely on the toppling random walk, in which toppled vertices are sampled according to the eigenvector associated with the largest eigenvalue of $M$.

2020 Mathematics Subject Classification. 60J80, 60F05, 60F15.
Keywords: random walk, toppling, stabilization, phase transition, random environment.

1 Introduction

The term Abelian networks was coined by Levine [BL16a, BL16b, BL16c, CL22], but in physics Abelian networks were introduced by Dhar [Dha99], who called them Abelian distributed processors. In short, an Abelian network on a graph $G=(V,E)$ may be seen as a system of communicating automata $(P_v)_{v \in V}$ indexed by the vertices $V$ of the graph, such that each $P_v$ satisfies a commutativity condition. In [BL16a], the concept of Abelian networks is characterized via two simple axioms, and it is shown that such networks obey a least-action principle, which roughly means that each processor in the network performs the minimum amount of work in order to reach a halting state. In [BL16b], it is shown that, under natural assumptions, finite and irreducible Abelian networks halt for all inputs.
In [BL16c], the critical group of an Abelian network is introduced as a generalization of the sandpile group of the Abelian sandpile model, and several of its properties are investigated. The theory of networks that halt on all inputs is extended to non-halting networks in [CL22].

∗ Technische Universität München, Germany. ro.kaiser@tum.de
† Universität Innsbruck, Austria. Martin.Kloetzer@uibk.ac.at
‡ Universität Innsbruck, Austria. Ecaterina.Sava-Huss@uibk.ac.at

Examples of Abelian networks are: Abelian sandpiles [BTW87, Dha90], bootstrap percolation [Hol03, vE87], rotor networks [DF91], and the oil and water model [Tse90]. Allowing the transition function in an Abelian network to depend on a probability space, one obtains a larger class of processes called stochastic Abelian networks (shortly, stochastic networks), which include classical Markov chains, branching Markov chains, internal DLA, excited walks [BW03], activated random walks [RS12], locally Markov walks [KLSH26], and stochastic sandpiles [ST17] as special examples. Several of these systems of interacting particles can be characterized by an Abelian property, which means that changing the order of certain interactions has no effect on the final state of the system. This property turned out to be very useful when proving phase transitions for such systems. In the current work, we consider the stackwise representation of stochastic networks, in which vertices of $G$ communicate with each other by sending particles along adjacent edges. The number of particles sent out is randomly sampled and depends on the environment at the current state, while the environment is updated according to a transition matrix.
This work is motivated by Levine-Greco [GL23], who asked whether the survival/extinction of a multitype branching process in a Markovian environment is determined solely by the largest eigenvalue of the expected offspring distribution matrix. We confirm [GL23, Conjecture 7.2] and give a proof that extends beyond multitype branching processes to a broader class of stochastic networks.

To state the main result, we briefly introduce the model. Let $G=(V,E)$ be a finite connected graph. For every $v \in V$, we consider a finite set of environment states $S_v$ and a transition matrix $P_v$ that governs transitions among these states. Let $\pi_v$ be the stationary distribution of $P_v$. The expected toppling matrix $M : V \times V \to \mathbb{R}$ is defined as
$$M(v,w) := \sum_{s \in S_v} \pi_v(s)\, \mu_{v,s}(w), \qquad (1)$$
where for $v \in V$ and $s \in S_v$ we define $\mu_{v,s} : V \to \mathbb{R}$ as the expectation of a given probability measure $\nu_{v,s} \in \operatorname{Prob}(\mathbb{Z}^V)$; here $\nu_{v,s}$ describes how vertex $v$ interacts with the other vertices when the environment at $v$ is in state $s$. Starting from an initial configuration $\eta_0 : V \to \mathbb{N}_0$ and a threshold function $t : V \to \mathbb{N}_0$, define a Markov chain $(\eta_n)_{n \in \mathbb{N}}$ as follows: given $\eta_n$, select at random a vertex $v \in V$ with $\eta_n(v) \ge t(v)$, update the environment at $v$ according to $P_v$, and then add to $\eta_n$ a column sampled from $\nu_{v,s}$, where $s \in S_v$ is the updated environment. We call $(\eta_n)_{n \in \mathbb{N}}$ a stochastic network in Markovian environment. The network stabilizes if there exists $n \in \mathbb{N}$ such that $\eta_n(v) \le t(v)$ for all $v \in V$.

We introduce three assumptions under which the main result holds. The first one (MOLI) states that a toppling can reduce mass only at the toppled vertex. The second assumption (BFB) means that a toppling can reduce the mass at the toppled vertex by at most $t(v)$.
The last assumption (IRR) requires that there exists $k > 0$ such that $(M - \operatorname{Diag}(M))^k > 0$ on the off-diagonal entries, where $\operatorname{Diag}(M)$ denotes the matrix that coincides with $M$ on the diagonal and is zero off-diagonal.

Theorem 1.1. Let $G=(V,E)$ be a finite, connected, directed graph and let $(\eta_n)_{n\in\mathbb{N}}$ be a stochastic network in a Markovian environment on $G$ which satisfies (MOLI), (BFB) and (IRR). Let $M$ be the expected toppling matrix as in (1), and set
$$\alpha = \max\{-M(v,v) : v \in V\} + 1 \quad \text{and} \quad \rho = r(M + \alpha I) - \alpha,$$
where $r(M + \alpha I)$ is the Perron-Frobenius eigenvalue of $M + \alpha I$.
1. (Subcritical case) If $\rho < 0$, then for any initial state $\eta_0 : V \to \mathbb{N}_0$, the network stabilizes almost surely.
2. (Critical case) If $\rho = 0$, then either there exists a conserved quantity, or the network stabilizes almost surely for every initial state.
3. (Supercritical case) If $\rho > 0$, then there exists an initial state $\eta_0 : V \to \mathbb{N}_0$ for which the network does not stabilize with positive probability.

The subcritical case is established in Proposition 5.1, the supercritical case in Proposition 5.2, and the critical case follows from Propositions 5.3 and 5.4. The argument in all three cases is to sample toppled vertices according to the eigenvector $p$ corresponding to the spectral radius of $M + \alpha I$, while also allowing vertices below the threshold to topple. Stabilization under $p$-sampling is then equivalent to stabilization in the original dynamics, in which only toppling above threshold is allowed. This equivalence reduces the analysis of stabilization to standard methods from random walk theory.

Outline. Section 2 introduces stochastic networks and lays out the model. In Section 3, we use the least-action principle to show that stabilization can be reached by exhibiting any sequence of topplings, even if vertices below the threshold are allowed to topple.
Section 4 introduces the toppling random walk and its connection with the stochastic network. In Section 5 we prove Theorem 1.1.

2 Preliminaries

Let $G=(V,E)$ be a finite, connected and directed graph with vertex set $V$ and edge set $E \subset V \times V$. For each $v \in V$, fix a finite set $S_v$ of environments, and call $S := \prod_{v\in V} S_v$ the set of global environments.

Environment chain and toppling rules. For each $v \in V$, we are given a stochastic matrix $P_v \in \mathbb{R}^{S_v \times S_v}$, which defines a Markovian environment at $v$, namely a Markov chain $(Y^v_j)_{j\in\mathbb{N}}$ with state space $S_v$ and transition matrix $P_v$. We assume that for each $v$ the chain $(Y^v_j)$ is irreducible and aperiodic, and denote by $\pi_v$ the stationary distribution of $P_v$, that is, $\pi_v P_v = \pi_v$ for all $v \in V$. Fix also, for all $v \in V$ and $s \in S_v$, a probability distribution $\nu_{v,s} \in \operatorname{Prob}(\mathbb{Z}^V)$, where $\operatorname{Prob}(\mathbb{Z}^V)$ denotes the set of probability distributions on $\mathbb{Z}^V$, the space of functions from $V$ to $\mathbb{Z}$. A sample from $\nu_{v,s}$ is denoted by $\xi_{v,s}$ and its expectation is $\mathbb{E}[\xi_{v,s}] = \mu_{v,s} \in \mathbb{R}^V$. Finally, we define the expected toppling matrix in stationarity $M \in \mathbb{R}^{V\times V}$ entrywise as in equation (1). The matrix $M$ evaluated at $(v,w) \in V \times V$ gives the expected mass added to $w$ if $v$ has been toppled.

Stochastic networks in Markovian environments. Fix an initial configuration of particles $\eta_0 : V \to \mathbb{N}_0$, an initial global environment $q_0 \in S$, and a toppling threshold $t : V \to \mathbb{N}_0$. In order to guarantee that the configuration stays non-negative throughout the process, we assume that
$$0 \le t(v) + \xi_{v,s}(v) \quad \text{almost surely, for all } v \in V \text{ and } s \in S_v.$$
Define the Markov chain $(\eta_n, q_n)_{n\in\mathbb{N}}$, where $\eta_n$ represents the random particle configuration at time $n$ and $q_n$ the random global environment at time $n$, which evolves as follows: given that the state of the chain at time $n \in \mathbb{N}_0$ is $(\eta_n, q_n)$, we first choose a random vertex $v_n$ independently of the past with probability
$$\mathbb{P}(v_n = v) = \frac{\max\{\eta_n(v) - t(v), 0\}}{|\eta_n - t|_+}, \qquad (2)$$
where $|\eta_n - t|_+ = \sum_{v\in V} \max\{\eta_n(v) - t(v), 0\}$ is the total weight of the positive part of $\eta_n - t$. Then the environment at time $n+1$ is updated to
$$q_{n+1}(w) = q_n(w), \quad \text{for } w \in V \setminus \{v_n\},$$
and, for $s \in S_{v_n}$,
$$q_{n+1}(v_n) = s, \quad \text{with probability } P_{v_n}(q_n(v_n), s).$$
In the updated environment $s = q_{n+1}(v_n) \in S_{v_n}$ we sample a random toppling $\xi_{n+1}$ from the probability distribution $\nu_{v_n,s}$ and update the particle configuration $\eta_{n+1} = \eta_n + \xi_{n+1}$. The process terminates if $\eta_n(v) \le t(v)$ for all $v \in V$, in which case we say that $\eta_n$ is stable, or that it stabilizes. We refer to both $(\eta_n)_{n\in\mathbb{N}}$ and $(\eta_n, q_n)_{n\in\mathbb{N}}$ as stochastic networks in a Markovian environment. In words, at each time step one chooses a random vertex $v \in V$, whose current state is $s \in S_v$; the state of $v$ is updated to a new state $s' \in S_v$ according to the transition matrix $P_v$ of the Markov chain $(Y^v_j)_j$, and then a toppling is applied to the whole system by reducing/increasing the mass at the affected vertices by $\xi_{n+1}$. The process stabilizes if eventually the particle configuration drops below the given threshold $t$, and it stays active if it does not stabilize.

Stackwise representation of stochastic networks. We also define stochastic networks using the stackwise representation introduced in Diaconis-Fulton [DF91]. Let $I = (I^v_j)_{v\in V, j\in\mathbb{N}_0}$ be a stack of toppling instructions, either deterministic or random, where $I^v_j : V \to \mathbb{Z}$ for all $v \in V$ and $j \in \mathbb{N}_0$.
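The dynamics $(\eta_n, q_n)$ described above can be simulated directly. The following sketch is an illustration, not part of the paper: a two-vertex example with trivial (one-state) environments, in which a toppling removes one particle at the toppled vertex and sends one to the other vertex with probability $1/2$, so the dynamics are subcritical.

```python
import random

random.seed(0)

# Minimal 2-vertex sketch (all numbers are illustrative assumptions).
# Each vertex has a single environment state, so the update of q is trivial.
V = [0, 1]
t = {0: 1, 1: 1}  # threshold t(v)

def sample_toppling(v, s):
    """Sample xi ~ nu_{v,s}: remove one particle at v and send one particle
    to the other vertex with probability 1/2 (mass is lost otherwise)."""
    xi = {0: 0, 1: 0}
    xi[v] -= 1
    if random.random() < 0.5:
        xi[1 - v] += 1
    return xi

def step(eta, q):
    """One transition of (eta_n, q_n): pick v_n with probability proportional
    to max{eta(v) - t(v), 0} as in (2), update the environment at v_n
    (trivial here; in general q[v] ~ P_v(q[v], .)), then topple."""
    excess = {v: max(eta[v] - t[v], 0) for v in V}
    if sum(excess.values()) == 0:
        return eta, q, True  # stable: eta(v) <= t(v) everywhere
    v = random.choices(V, weights=[excess[w] for w in V])[0]
    xi = sample_toppling(v, q[v])
    eta = {w: eta[w] + xi[w] for w in V}
    return eta, q, False

eta, q = {0: 5, 1: 5}, {0: 0, 1: 0}
steps, stable = 0, False
while not stable and steps < 10_000:
    eta, q, stable = step(eta, q)
    steps += 1
print(stable, eta)
```

Since each toppling removes one unit of mass and returns half a unit in expectation, this example stabilizes almost surely, matching the subcritical case of Theorem 1.1.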
We assume again that $0 \le t(v) + I^v_j(v)$ for all $v \in V$ and all $j \in \mathbb{N}_0$. A stochastic network in stackwise representation is a Markov chain $(\eta_n, h_n)_{n\in\mathbb{N}}$, with initial states $\eta_0 : V \to \mathbb{N}_0$ and $h_0 : V \to \mathbb{N}_0$ with $h_0 = 0$. If the particle configuration $\eta_n$ at time $n$ is stable for the threshold $t$, then the process terminates and we set $(\eta_{n+1}, h_{n+1}) = (\eta_n, h_n)$. Otherwise, we choose a random vertex $v_n$ with probability given in (2), and define
$$\eta_{n+1} = \eta_n + I^{v_n}_{h_n(v_n)} \quad \text{and} \quad h_{n+1} = h_n + \delta_{v_n}.$$

Stacks sampled from the Markovian environment. A stochastic network in Markovian environment can also be seen as a special case of a stackwise represented stochastic network in the following way. For $q_0 \in S$ an initial global environment and the Markov chain $(Y^v_j)_{j\in\mathbb{N}}$ at $v$ started in state $q_0(v)$, we sample the instruction $I^v_j$ according to the law of the toppling rule $\nu_{v, Y^v_j}$, independently of all other instructions. The stochastic network in a Markovian environment is then given as the stackwise represented stochastic network with random stack $I = (I^v_j)_{v\in V, j\in\mathbb{N}}$.

Assumptions. Throughout this paper, we impose three assumptions on the toppling rules of the stochastic network that enable us to prove the main theorem. We state these assumptions in both of the stochastic network frameworks introduced above.

MOLI: mass only lost internally. First, a toppling at a vertex $v \in V$ may decrease mass only at $v$; all other vertices may only gain mass or remain unchanged: for every vertex $v \in V$, every state $s \in S_v$, and every vertex $w \neq v$,
$$\xi_{v,s}(w) \ge 0. \qquad \text{(MOLI)}$$
This assumption holds for the stack $I = (I^v_j)_{j\in\mathbb{N}_0, v\in V}$ if the inequality above holds for every $I^v_j$.

IRR: irreducibility. Every pair of vertices should be able to communicate: for any $v, w \in V$, there must exist a toppling sequence that transfers mass from $v$ to $w$.
That is, there exists $k \in \mathbb{N}_0$ such that for all distinct $v, w \in V$,
$$(M - \operatorname{Diag}(M))^k(v,w) > 0. \qquad \text{(IRR)}$$
In the stackwise representation this reads as: for vertices $v \neq w$ and $n \in \mathbb{N}$, there exists a directed path $(v = v_1, \ldots, v_k = w)$ from $v$ to $w$ such that $\mathbb{P}(I^{v_j}_{n+j}(v_{j+1}) = 0) < 1$ for every $j \in \{1, 2, \ldots, k\}$.

BFB: bounded from below. The stochastic network should remain nonnegative during its evolution, and the mass at any $v \in V$ may be decreased by at most $t(v)$. Thus for every $v \in V$ and $s \in S_v$,
$$-\xi_{v,s}(v) \le t(v). \qquad \text{(BFB)}$$
Consequently, the mass reduction is uniformly bounded by $K := \max\{t(v) : v \in V\}$. In the stackwise representation, this assumption holds if the same inequality is satisfied for every $I^v_j$.

We present two examples of stochastic networks in Markovian environment that fall into the framework studied in this paper.

Example 2.1 (Multitype branching process in a Markovian environment). Consider the multitype branching process in a Markovian environment as introduced in [GL23, Section 7.4]. For a finite, connected, and directed graph $G=(V,E)$, associate to each vertex $v \in V$ a finite set of environment states $S_v$, a stochastic matrix $P_v \in \mathbb{R}^{S_v \times S_v}$ (called the transition matrix) and a stochastic matrix $R_v \in \mathbb{R}^{S_v \times \prod_{w : v\to w} \mathbb{N}_0}$ (called the reproduction matrix); here $v \to w$ means that $w$ is an out-neighbour of $v$. Reproduction of an individual at $v$ results in an update of the environment at $v$ according to $P_v$, followed by replacing the individual with a random number of offspring sent out to the out-neighbours of $v$. The reproduction matrix $R_v$ dictates the distribution of the offspring vector, which depends on the current environment at $v$. If no more individuals are alive, then the branching process is said to halt.

Example 2.2 (Stochastic sandpiles).
Let $G=(V,E)$ be a finite, connected graph, and for each $v \in V$ let $\nu_v \in \operatorname{Prob}(\mathbb{Z}^V)$ be a probability distribution such that, if $I^v \sim \nu_v$, then almost surely $I^v(v) \le \deg(v)$ and $I^v(w) \le 0$ for all $w \in V \setminus \{v\}$. A sandpile configuration is a function $\eta : V \to \mathbb{N}_0$. We call a sandpile stable if $\eta(v) \le \deg_G(v)$ for every $v \in V$. If $\eta$ is unstable at some $v \in V$, we perform a legal toppling at $v$, defined by $T_v \eta = \eta - I^v$, where $I^v$ is sampled from $\nu_v$ independently of everything else. We stabilize $\eta$ by performing all possible legal topplings, and we say that $\eta$ stabilizes if only finitely many legal topplings occur during this procedure.

3 Stackwise representation

Deterministic stacks. We first consider the deterministic version of stackwise stochastic networks, where the toppling instruction stacks are fixed. Our first goal is to establish a least-action principle under additional assumptions on the toppling rules. As a consequence, toppling at a vertex $v$ only when the particle configuration exceeds its threshold always yields a shorter toppling sequence than one that ignores the threshold when selecting topplings.

Take an initial configuration of particles $\eta : V \to \mathbb{N}_0$, a threshold function $t : V \to \mathbb{N}_0$, and a fixed stack of instructions $I = (I^v_j)_{v\in V, j\in\mathbb{N}}$ with $I^v_j : V \to \mathbb{Z}$ for all $v \in V$ and $j \in \mathbb{N}$. For this set of initial data, define the toppling $\Psi_v$ at $v$ as follows: for a function $h : V \to \mathbb{N}_0$,
$$\Psi_v(\eta, h) := \big(\eta + I^v_{h(v)},\; h + \delta_v\big),$$
where $\delta_v : V \to \mathbb{Z}$ is the function that is constantly $0$, except for the value $1$ at position $v$. The toppling at $v$ is called legal if $\eta(v) > t(v)$. Informally, the operator $\Psi_v$ reads $h(v)$, the number of times $v$ has already toppled, and applies the $h(v)$-th instruction at $v$ from the stack $I$ to the configuration. Afterwards, it updates the counter by setting $h(v) \leftarrow h(v) + 1$, leaving all other coordinates of $h$ unchanged.
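The toppling operator $\Psi_v$ and the odometer can be sketched concretely. The two-vertex graph and the constant instruction stacks below are illustrative assumptions; with them, two sequences with the same odometer reach the same final configuration, as the Abelian property asserts.

```python
# Deterministic-stack sketch of the toppling operator Psi_v
# (the 2-vertex graph and instruction stacks are illustrative assumptions).
V = [0, 1]

# I[v][j] : V -> Z is the j-th instruction at v: remove two particles at v
# and send one to the other vertex. Constant stacks keep the demo simple.
I = {v: [{v: -2, 1 - v: 1} for _ in range(10)] for v in V}

def topple(eta, h, v):
    """Psi_v: apply instruction I[v][h[v]] to eta and advance the counter h(v)."""
    ins = I[v][h[v]]
    eta = {w: eta[w] + ins[w] for w in V}
    h = dict(h)
    h[v] += 1
    return eta, h

def apply_sequence(eta, seq):
    """Apply Psi along a vertex sequence; the sequence need not be legal."""
    h = {v: 0 for v in V}
    for v in seq:
        eta, h = topple(eta, h, v)
    return eta, h

eta0 = {0: 4, 1: 3}
# Two sequences with the same odometer (each vertex toppled twice)
a, ha = apply_sequence(eta0, [0, 0, 1, 1])
b, hb = apply_sequence(eta0, [1, 0, 1, 0])
print(a == b, a)
```

Both orders end in the configuration `{0: 2, 1: 1}` with odometer `{0: 2, 1: 2}`, illustrating that the final state depends on the stack and the odometer only.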
We will use the following shorthand notation:
$$\Psi_v(\eta) := \Psi_v(\eta, 0),$$
and, for a vertex sequence $(v_1, \ldots, v_n)$ with $n \ge 2$,
$$\Psi_{(v_1,\ldots,v_n)}(\eta) := \Psi_{v_n}\big(\Psi_{(v_1,\ldots,v_{n-1})}(\eta)\big).$$
We write $\psi_{(v_1,\ldots,v_n)}(\eta)$ for the first component of $\Psi_{(v_1,\ldots,v_n)}(\eta)$. A sequence $(v_1, \ldots, v_n)$ is called legal for $\eta$ if, for every $i \in \{1, \ldots, n\}$, the toppling at $v_i$ is legal for $\psi_{(v_1,\ldots,v_{i-1})}(\eta)$. For a toppling sequence $(v_1, \ldots, v_n)$, the odometer function $m_{(v_1,\ldots,v_n)}$ records the number of topplings:
$$m_{(v_1,\ldots,v_n)}(v) = \#\{1 \le i \le n : v_i = v\}.$$
We show that the final configuration is independent of the toppling order, i.e. the model is Abelian.

Lemma 3.1 (Abelian property). Assuming (MOLI), for any initial configuration $\eta$ and any two toppling sequences $\mathbf{v} = (v_1, \ldots, v_n)$ and $\mathbf{w} = (w_1, \ldots, w_m)$ with identical odometers $m_{\mathbf{v}} = m_{\mathbf{w}}$, we have $\Psi_{\mathbf{v}}(\eta) = \Psi_{\mathbf{w}}(\eta)$.

Proof. In view of the definition of the toppling operator, we have
$$\Psi_{\mathbf{v}}(\eta) = \Big(\eta + \sum_{v\in V} \sum_{i=1}^{m_{\mathbf{v}}(v)} I^v_i,\; m_{\mathbf{v}}\Big) = \Big(\eta + \sum_{v\in V} \sum_{i=1}^{m_{\mathbf{w}}(v)} I^v_i,\; m_{\mathbf{w}}\Big) = \Psi_{\mathbf{w}}(\eta).$$

The next result establishes that any sequence of potentially illegal topplings must always be longer than any legal sequence of topplings.

Lemma 3.2. Assuming (MOLI), let $\eta$ be any initial particle configuration and let $\mathbf{v} = (v_1, \ldots, v_n)$ and $\mathbf{w} = (w_1, \ldots, w_n)$ be toppling sequences such that $m_{\mathbf{v}}(v) = m_{\mathbf{w}}(v)$ for some $v \in V$ and $m_{\mathbf{v}}(w) \le m_{\mathbf{w}}(w)$ for all $w \in V \setminus \{v\}$. Then $\psi_{\mathbf{v}}(\eta)(v) \le \psi_{\mathbf{w}}(\eta)(v)$.

Proof.
We have
$$\psi_{\mathbf{v}}(\eta)(v) = \eta(v) + \sum_{i=1}^{m_{\mathbf{v}}(v)} I^v_i(v) + \sum_{w\in V\setminus\{v\}} \sum_{i=1}^{m_{\mathbf{v}}(w)} I^w_i(v) \le \eta(v) + \sum_{i=1}^{m_{\mathbf{w}}(v)} I^v_i(v) + \sum_{w\in V\setminus\{v\}} \sum_{i=1}^{m_{\mathbf{w}}(w)} I^w_i(v) = \psi_{\mathbf{w}}(\eta)(v),$$
where $m_{\mathbf{v}}(v) = m_{\mathbf{w}}(v)$ was used for $\sum_{i=1}^{m_{\mathbf{v}}(v)} I^v_i(v) = \sum_{i=1}^{m_{\mathbf{w}}(v)} I^v_i(v)$, and (MOLI) together with $m_{\mathbf{v}}(w) \le m_{\mathbf{w}}(w)$ for $w \in V\setminus\{v\}$ yields the upper bound
$$\sum_{w\in V\setminus\{v\}} \sum_{i=1}^{m_{\mathbf{v}}(w)} I^w_i(v) \le \sum_{w\in V\setminus\{v\}} \sum_{i=1}^{m_{\mathbf{w}}(w)} I^w_i(v).$$

Proposition 3.1 (Least-action principle). Assume (MOLI) holds. For any initial configuration $\eta$ and toppling sequences $\mathbf{v} = (v_1, \ldots, v_n)$ and $\mathbf{w} = (w_1, \ldots, w_m)$, if $\mathbf{w}$ is legal and $\psi_{\mathbf{v}}(\eta)$ is stable, then $m_{\mathbf{w}} \le m_{\mathbf{v}}$ (pointwise).

Proof. Assume the contrary, that is, there exists at least one $v \in V$ such that $m_{\mathbf{w}}(v) > m_{\mathbf{v}}(v)$. We define the sequences $\mathbf{w}^j := (w_1, \ldots, w_j)$ for all $j \le m$. Since $m_{\mathbf{w}}(v) > m_{\mathbf{v}}(v)$, there must exist $j \le m$ such that $m_{\mathbf{w}^j} \not\le m_{\mathbf{v}}$. Let $J \in \mathbb{N}$ be the smallest such number, and let $x \in V$ be the unique vertex such that $m_{\mathbf{w}^J}(x) > m_{\mathbf{v}}(x)$. We then have $m_{\mathbf{w}^{J-1}} \le m_{\mathbf{v}}$ and $m_{\mathbf{w}^{J-1}}(x) = m_{\mathbf{v}}(x)$. Since $\mathbf{w}$ is a legal toppling sequence, this implies $\psi_{\mathbf{w}^{J-1}}(\eta)(x) > t(x)$, which together with Lemma 3.2 yields
$$\psi_{\mathbf{v}}(\eta)(x) \ge \psi_{\mathbf{w}^{J-1}}(\eta)(x) > t(x),$$
and this contradicts the assumption that $\psi_{\mathbf{v}}(\eta)$ is stable.

Random stacks. We next apply Proposition 3.1 to show that, for stochastic networks with random stacks of toppling instructions, it suffices to find any stabilizing sequence; this ensures the existence of a legal stabilizing sequence. Consider now the random stack $I = (I^v_j)_{v\in V, j\in\mathbb{N}_0}$ of independent instructions distributed as $I^v_j \sim \xi_{v, Y^v_j}$. For any realization of the stack $I$, we are in the deterministic setup studied previously.

Proposition 3.2.
Let $I$ be a random instruction stack satisfying (MOLI), and let $\eta$ be any initial particle configuration. If there exists a random sequence $(v_i)_{i\in\mathbb{N}}$ and some $n \in \mathbb{N}$ such that $\psi_{(v_1,\ldots,v_n)}(\eta)$ is almost surely stable, then for any random sequence $(w_i)_{i\in\mathbb{N}}$ there exists $m \in \mathbb{N}$ such that $w_m$ is not a legal toppling for $\psi_{(w_1,\ldots,w_{m-1})}(\eta)$.

Proof. Define for the sequence $(v_i)_{i\in\mathbb{N}}$ the first time it stabilizes $\eta$ as
$$\tau := \inf\{n \in \mathbb{N} : \psi_{(v_1,\ldots,v_n)}(\eta) \text{ is stable}\},$$
which by assumption is finite almost surely. Assume there exists a random sequence $(w_i)_{i\in\mathbb{N}}$ for which all topplings are legal with positive probability. Then, on the event that all topplings in the sequence $(w_i)_{i\in\mathbb{N}}$ are legal, Proposition 3.1 gives $m_{(w_1,\ldots,w_n)} \le m_{(v_1,\ldots,v_\tau)}$ for all $n \in \mathbb{N}$. Thus
$$n = \sum_{v\in V} m_{(w_1,\ldots,w_n)}(v) \le \sum_{v\in V} m_{(v_1,\ldots,v_\tau)}(v) = \tau,$$
which implies that $\tau$ is infinite with positive probability, contradicting the assumption.

We emphasize the importance of Proposition 3.2. First, the existence of a stabilizing toppling sequence almost surely implies that every legal toppling sequence is finite almost surely. On the other hand, the existence of an infinite legal toppling sequence rules out the existence of a legal stabilizing sequence. Therefore, Proposition 3.2 allows the proof of Theorem 1.1 to be reduced to the construction of a toppling sequence, possibly non-legal, with the desired stabilization properties.

4 Toppling random walk

Recall the expected toppling matrix $M$ defined in (1), and
$$\alpha := \max\{-M(v,v) : v \in V\} + 1.$$
We construct a random toppling sequence by sampling it according to the Perron-Frobenius eigenvector of $M + \alpha I$. To do so, we show that this eigenvector, denoted by $p$, has all positive entries. A matrix $0 \le A \in \mathbb{R}^{V\times V}$ is called primitive if there exists some $k \in \mathbb{N}$ such that $A^k > 0$.

Lemma 4.1.
Assuming (IRR) and (MOLI), the matrix $M + \alpha I$ is primitive.

Proof. By the choice of $\alpha$ we have $(M + \alpha I)(v,v) > 0$ for all $v \in V$, and for $v \neq w$, by (MOLI) we have $(M + \alpha I)(v,w) \ge 0$. The irreducibility assumption (IRR) yields the existence of $k \in \mathbb{N}$ such that $(M - \operatorname{Diag}(M))^k(v,w) > 0$ for all $v \neq w$. Thus
$$(M + \alpha I)^k(v,w) \ge (M - \operatorname{Diag}(M))^k(v,w) > 0.$$
Furthermore, for all $v \in V$ we have
$$(M + \alpha I)^k(v,v) \ge \big((M + \alpha I)(v,v)\big)^k \ge 1 > 0,$$
and hence $(M + \alpha I)^k > 0$.

Thus, by the Perron-Frobenius theorem, there exist $r(M + \alpha I) > 0$ (the Perron-Frobenius eigenvalue of $M + \alpha I$) and a corresponding left eigenvector $p > 0$, normalized such that $\sum_{v\in V} p(v) = 1$. Write
$$\rho := r(M + \alpha I) - \alpha.$$
Consider the sequence $(w_i)_{i\in\mathbb{N}}$ obtained by sampling each $w_i$ independently and identically distributed as
$$\mathbb{P}(w_1 = v) = p(v), \quad \text{for all } v \in V.$$
Recall that $(Y^v_j)_{j\in\mathbb{N}}$ is the environment chain at $v$ with initial state $q_0(v) \in S_v$, where $q_0 = (q_0(v))_{v\in V}$ is the initial global environment, and the random stack is given by $I = (I^v_j)_{v\in V, j\in\mathbb{N}_0}$, where $I^v_j$ has the same distribution as $\xi_{v, Y^v_j}$. For the initial configuration of particles $\eta_0$, write
$$\hat{\eta}_n = \psi_{(w_1,\ldots,w_n)}(\eta_0),$$
and define the sequence of global environments as
$$\hat{q}_n = \big(Y^v_{m_{(w_1,\ldots,w_n)}(v)}\big)_{v\in V}.$$
The sequence $(\hat{\eta}_n, \hat{q}_n)_{n\in\mathbb{N}}$ is a Markov chain on $\mathbb{Z}^V \times S$, and also $(\hat{q}_n)_{n\in\mathbb{N}}$ and $(w_{n+1}, \hat{q}_n)_{n\in\mathbb{N}}$ are Markov chains with state spaces $S$ and $V \times S$, respectively. Define the sequence of stopping times $(\tau_j)_{j\in\mathbb{N}}$ as the consecutive return times to the initial environment $q_0$: set $\tau_0 = 0$ and, for $j \ge 1$,
$$\tau_j = \inf\{i > \tau_{j-1} : \hat{q}_i = q_0\}.$$
The sequence $(\hat{\eta}_{\tau_j}, \hat{q}_{\tau_j})_{j\in\mathbb{N}}$ is again a Markov chain and $(\hat{\eta}_{\tau_j})_{j\in\mathbb{N}}$ is a random walk on $\mathbb{Z}^V$.
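Numerically, $\alpha$, the Perron-Frobenius data $(r, p)$, and the phase parameter $\rho$ of Theorem 1.1 are straightforward to compute, and the eigenvector $p$ can then drive the $p$-sampling of toppled vertices. The matrix below is an illustrative assumption, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative expected toppling matrix M on two vertices: negative diagonal
# (mass is lost at the toppled vertex), nonnegative off-diagonal (MOLI).
M = np.array([[-1.0, 0.6],
              [0.7, -1.0]])

alpha = 1.0 + np.max(-np.diag(M))   # alpha = 1 + max{-M(v,v)}
A = M + alpha * np.eye(len(M))      # nonnegative, primitive under (IRR)

# Perron-Frobenius eigenvalue r and left eigenvector p of A; for a primitive
# matrix the eigenvalue of largest real part is the spectral radius.
vals, vecs = np.linalg.eig(A.T)     # eigenvectors of A^T are left eigenvectors of A
k = int(np.argmax(vals.real))
r = vals[k].real
p = np.abs(vecs[:, k].real)
p /= p.sum()                        # normalize so that sum_v p(v) = 1
rho = r - alpha                     # rho < 0: subcritical; rho > 0: supercritical

# i.i.d. toppled-vertex sequence (w_i) with P(w_1 = v) = p(v)
w = rng.choice(len(p), size=10, p=p)
print(r, rho, p, w)
```

For this particular $M$ one gets $r = 1 + \sqrt{0.42} \approx 1.648$ and $\rho \approx -0.352$, so the sketched network is subcritical.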
Define $Z_j = \hat{\eta}_{\tau_j}$ for $j \ge 0$; we call $(Z_j)_{j\in\mathbb{N}}$ the toppling random walk; it is a random walk on $\mathbb{Z}^V$ with independent increments.

Lemma 4.2. The stationary distribution of the Markov chain $(w_{n+1}, \hat{q}_n)_{n\in\mathbb{N}}$ is given by
$$\Pi(v, q) = p(v) \cdot \prod_{u\in V} \pi_u(q(u)).$$

Proof. For $q \in S$ and $w \in V$, write $q_{\neq w} = (q(v))_{v\neq w}$. It is easy to see that the Markov chain $(w_{n+1}, \hat{q}_n)_{n\in\mathbb{N}}$ has transition probabilities
$$Q\big((w,r),(v,q)\big) = p(v)\, P_w(r(w), q(w))\, \mathbb{1}_{\{r_{\neq w} = q_{\neq w}\}},$$
where $(w,r), (v,q) \in V \times S$. We have
$$\begin{aligned}
\sum_{w\in V}\sum_{r\in S} \Pi(w,r)\, Q\big((w,r),(v,q)\big)
&= \sum_{w\in V}\sum_{r\in S} p(w) \Big(\prod_{u\in V}\pi_u(r(u))\Big)\, p(v)\, P_w(r(w), q(w))\, \mathbb{1}_{\{r_{\neq w} = q_{\neq w}\}} \\
&= p(v) \sum_{w\in V} p(w) \sum_{r\in S} \Big(\pi_w(r(w))\, P_w(r(w), q(w)) \prod_{u\neq w} \pi_u(q(u))\Big)\, \mathbb{1}_{\{r_{\neq w} = q_{\neq w}\}} \\
&= p(v) \sum_{w\in V} p(w) \Big(\prod_{u\neq w} \pi_u(q(u))\Big) \sum_{s\in S_w} \pi_w(s)\, P_w(s, q(w)) \\
&= p(v) \sum_{w\in V} p(w) \Big(\prod_{u\neq w} \pi_u(q(u))\Big)\, \pi_w(q(w)) \\
&= p(v) \Big(\prod_{u\in V} \pi_u(q(u))\Big) \sum_{w\in V} p(w) = p(v) \prod_{u\in V} \pi_u(q(u)) = \Pi(v,q).
\end{aligned}$$

Lemma 4.3. For all $v \in V$ and $q \in S$,
$$\mathbb{E}\Big[\sum_{n=0}^{\tau-1} \mathbb{1}_{\{w_{n+1} = v,\, \hat{q}_n = q\}}\Big] = \mathbb{E}[\tau]\, \Pi(v,q),$$
where $\Pi$ is the stationary distribution from Lemma 4.2 and $\tau$ is the first return time to the initial environment $q_0$.

Proof. By the definition of $(\tau_j)_{j\in\mathbb{N}_0}$ together with the Markov property, the sequence of random variables $\big(\sum_{n=\tau_j}^{\tau_{j+1}-1} \mathbb{1}_{\{w_{n+1} = v,\, \hat{q}_n = q\}}\big)_{j\in\mathbb{N}_0}$ is i.i.d. with the same distribution as $\sum_{n=0}^{\tau-1} \mathbb{1}_{\{w_{n+1} = v,\, \hat{q}_n = q\}}$. The expectation of each sequence term exists, because $\sum_{n=0}^{\tau-1} \mathbb{1}_{\{w_{n+1} = v,\, \hat{q}_n = q\}} \le \tau$ and $\mathbb{E}[\tau] < \infty$.
Thus, the law of large numbers and the ergodic theorem for Markov chains yield
$$\mathbb{E}\Big[\sum_{n=0}^{\tau-1} \mathbb{1}_{\{w_{n+1}=v,\,\hat{q}_n=q\}}\Big] = \lim_{l\to\infty} \frac{1}{l} \sum_{j=1}^{l} \sum_{n=\tau_j}^{\tau_{j+1}-1} \mathbb{1}_{\{w_{n+1}=v,\,\hat{q}_n=q\}} = \lim_{l\to\infty} \frac{1}{l} \sum_{n=0}^{\tau_{l+1}-1} \mathbb{1}_{\{w_{n+1}=v,\,\hat{q}_n=q\}},$$
which in turn equals
$$\lim_{l\to\infty} \frac{\tau_{l+1}}{l} \cdot \frac{1}{\tau_{l+1}} \sum_{n=0}^{\tau_{l+1}-1} \mathbb{1}_{\{w_{n+1}=v,\,\hat{q}_n=q\}} = \lim_{l\to\infty} \frac{\tau_{l+1}}{l} \cdot \lim_{k\to\infty} \frac{1}{\tau_k} \sum_{n=0}^{\tau_k-1} \mathbb{1}_{\{w_{n+1}=v,\,\hat{q}_n=q\}} = \mathbb{E}[\tau]\, \Pi(v,q).$$

Combining Lemmas 4.2 and 4.3 yields the expected increments of the toppling random walk $(Z_j)_{j\in\mathbb{N}}$. The next result extends [GL23, Lemma 3.3] to higher dimensions, and the two proofs are similar.

Proposition 4.1. It holds that
$$\mathbb{E}[Z_1 - Z_0] = \mathbb{E}[\tau]\, \rho\, p.$$

Proof. Without loss of generality, assume $Z_0 = 0$. Then
$$Z_1 = \sum_{n=0}^{\tau-1} \sum_{v\in V} \sum_{q\in S} \mathbb{1}_{\{w_{n+1}=v,\,\hat{q}_n=q\}}\, I^v_{m_{(w_1,\ldots,w_n)}(v)}.$$
Taking the conditional expectation with respect to $\tau$ yields
$$\mathbb{E}[Z_1 \mid \tau] = \sum_{n=0}^{\tau-1} \sum_{v\in V} \sum_{q\in S} \mathbb{E}\big[\mathbb{1}_{\{w_{n+1}=v,\,\hat{q}_n=q\}}\, I^v_{m_{(w_1,\ldots,w_n)}(v)} \,\big|\, \tau\big] = \sum_{v\in V} \sum_{q\in S} \mathbb{E}\Big[\sum_{n=0}^{\tau-1} \mathbb{1}_{\{w_{n+1}=v,\,\hat{q}_n=q\}} \,\Big|\, \tau\Big]\, \mu_{v,q(v)}.$$
Setting $S_{\neq v} = \prod_{w\neq v} S_w$ for $v \in V$, and taking again the expectation on both sides above, yields
$$\begin{aligned}
\mathbb{E}[Z_1] &= \sum_{v\in V} \sum_{q\in S} \mathbb{E}\Big[\sum_{n=0}^{\tau-1} \mathbb{1}_{\{w_{n+1}=v,\,\hat{q}_n=q\}}\Big]\, \mu_{v,q(v)} = \sum_{v\in V} \sum_{q\in S} \mathbb{E}[\tau]\, \Pi(v,q)\, \mu_{v,q(v)} \\
&= \mathbb{E}[\tau] \sum_{v\in V} \sum_{q\in S} p(v) \Big(\prod_{u\in V} \pi_u(q(u))\Big)\, \mu_{v,q(v)} = \mathbb{E}[\tau] \sum_{v\in V} \sum_{s\in S_v} \sum_{r\in S_{\neq v}} p(v)\, \pi_v(s) \Big(\prod_{u\neq v} \pi_u(r(u))\Big)\, \mu_{v,s} \\
&= \mathbb{E}[\tau] \sum_{v\in V} p(v) \sum_{s\in S_v} \pi_v(s)\, \mu_{v,s} \sum_{r\in S_{\neq v}} \Big(\prod_{u\neq v} \pi_u(r(u))\Big) = \mathbb{E}[\tau] \sum_{v\in V} p(v) \sum_{s\in S_v} \pi_v(s)\, \mu_{v,s} \\
&= \mathbb{E}[\tau] \sum_{v\in V} p(v)\, M(v, \cdot) = \mathbb{E}[\tau]\, \rho\, p,
\end{aligned}$$
where the last equality holds because $p$ is the left Perron eigenvector: $pM = p(M + \alpha I) - \alpha p = \big(r(M + \alpha I) - \alpha\big) p = \rho\, p$.
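Proposition 4.1 can be checked by Monte Carlo on a toy instance. The one-vertex example below is an illustrative assumption: with two environment states, transition matrix $P = \begin{pmatrix} 1/2 & 1/2 \\ 1/2 & 1/2 \end{pmatrix}$ (so $\pi = (1/2, 1/2)$ and $\mathbb{E}[\tau] = 2$), and toppling means $\mu_0 = -1$, $\mu_1 = 0.2$, one has $M = -0.4$, $p = (1)$, and hence $\mathbb{E}[Z_1 - Z_0] = \mathbb{E}[\tau]\,\rho\,p = -0.8$.

```python
import random

random.seed(7)

# One-vertex illustration (all numbers are assumptions, not from the paper).
# Environment chain: two states, P(q -> 0) = P(q -> 1) = 1/2, started at 0.
# Toppling rules: state 0 removes one particle (mu_0 = -1); state 1 removes
# one and adds two back with probability 0.6 (mu_1 = 0.2). So M = -0.4.

def sample_xi(state):
    if state == 0:
        return -1
    return -1 + (2 if random.random() < 0.6 else 0)

def excursion_increment():
    """One excursion of the environment chain away from its initial state 0;
    returns the total mass increment Z_1 - Z_0 accumulated along it."""
    inc = 0
    while True:
        q = random.randrange(2)   # environment update, then toppling
        inc += sample_xi(q)
        if q == 0:                # first return to the initial environment
            return inc

n = 200_000
est = sum(excursion_increment() for _ in range(n)) / n
print(est)  # close to E[tau] * rho * p = 2 * (-0.4) * 1 = -0.8
```

The empirical mean concentrates around $-0.8$, in agreement with Proposition 4.1 for this instance.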
5 Proof of Theorem 1.1

Knowing the expected increments of the toppling random walk $(Z_j)_{j\in\mathbb{N}}$ allows us to prove the stabilization/explosion of the stochastic network in Markovian environment $(\eta_n)_{n\in\mathbb{N}}$ in the three regimes: subcritical ($\rho < 0$), supercritical ($\rho > 0$), and critical ($\rho = 0$).

The subcritical regime. This is the easiest case and is a direct consequence of the two previous sections.

Proposition 5.1 (Subcritical case). If $\rho < 0$, then for any initial configuration $\eta_0 : V \to \mathbb{N}_0$ the sequence $(\hat{\eta}_n)_{n\in\mathbb{N}}$ stabilizes almost surely in finite time.

Proof. Since $\rho < 0$, in view of Proposition 4.1 there almost surely exists some $j \in \mathbb{N}$ such that $\hat{\eta}_{\tau_j} = Z_j < t$, and this completes the proof.

The supercritical regime. The supercritical case needs extra care. Proposition 4.1 implies that, with positive probability, the toppling random walk $(Z_j)_{j\in\mathbb{N}}$ eventually crosses the threshold $t$ and stays above it forever. For the particle configuration, this only guarantees that $(\hat{\eta}_j)_{j\in\mathbb{N}}$ remains above $t$ at the stopping times $\tau_j$. Hence, we must additionally show that, with positive probability, the configuration does not fall below $t$ at any time between successive return times. We call an initial state $(\eta, q) \in \mathbb{Z}^V \times S$ viable if there exists $j \in \mathbb{N}$ such that
$$\mathbb{P}\big(\hat{\eta}_{\tau_j} - \eta \ge 1 \text{ and } \hat{\eta}_k \ge t \text{ with } \hat{\eta}_k \neq t \text{ for all } k \le \tau_j\big) > 0,$$
where $\tau_j$ is the $j$-th return time of the environment to $q$. Viable configurations are central to the supercritical case; we now establish that such configurations exist.

Lemma 5.1. If $\rho > 0$, then for any initial environment $q \in S$ there exists an initial configuration $\eta \in \mathbb{N}^V$ such that $(\eta, q)$ is viable.

Proof. Fix the initial environment $q \in S$, and call a sequence of environments $(q_0, \ldots, q_l)$ an excursion if $q_l = q_0 = q$ and $q_i \neq q$ for $i \in \{1, \ldots, l-1\}$. Define
$$\mathrm{Cyc} := \Big\{ \big((\eta_0,\ldots,\eta_l), (q_0,\ldots,q_l)\big) : l \in \mathbb{N},\ \eta_i \in \mathbb{Z}^V \text{ for all } i \le l,\ \eta_0 = 0,\ (q_0,\ldots,q_l) \text{ is an excursion} \Big\}.$$
For any $c = \big((\eta_0,\ldots,\eta_l), (q_0,\ldots,q_l)\big) \in \mathrm{Cyc}$, set $H(c) = \eta_l$; thus $H$ denotes the final particle configuration in an excursion. Recall that $(\hat{\eta}_{\tau_k})_{k\in\mathbb{N}}$ is a random walk with expected step size $\rho\, p\, \mathbb{E}[\tau] > 0$. Therefore, there exists $j \in \mathbb{N}$ such that
$$\mathbb{P}\big(\forall v \in V : \hat{\eta}_{\tau_j}(v) \ge 1\big) > 0.$$
This implies that there exist excursions $c_1, \ldots, c_j \in \mathrm{Cyc}$ with $\sum_{k=1}^{j} H(c_k) \ge 1$. Let $K \in \mathbb{N}$ be the constant from (BFB). Writing $c = \big((\eta_0,\ldots,\eta_l),(q_0,\ldots,q_l)\big) \in \mathrm{Cyc}$ with length $\operatorname{len}(c) = l$, the initial configuration $t + K \sum_{k=1}^{j} \operatorname{len}(c_k)$ is viable for the environment $q$.

Proposition 5.2 (Supercritical case). If $\rho > 0$, then for every environment $q \in S$ there exists an initial configuration $\eta \ge t$ such that
$$\mathbb{P}\big(\forall n \in \mathbb{N} : \hat{\eta}_n \ge t \text{ and } \hat{\eta}_n \neq t\big) > 0.$$

Proof. This is a generalization of the proof of [GL23, Theorem 3.7]. For the environment $q \in S$, let $\eta \ge t$ be a particle configuration such that $(\eta, q)$ is viable. Furthermore, let $j \in \mathbb{N}$ denote the number of returns to $q$ required for every component of $\eta$ to increase by $1$. Set
$$\delta := \mathbb{P}\big(\hat{\eta}_{\tau_j} \ge \eta + 1 \text{ and for all } k \le \tau_j \text{ it holds that } \hat{\eta}_k \ge t \text{ and } \hat{\eta}_k \neq t\big) > 0.$$
Using the strong Markov property we obtain, for every $n \in \mathbb{N}$,
$$\mathbb{P}\big(\hat{\eta}_{\tau_{jn}} \ge \eta + n \text{ and for all } k \le \tau_{jn} \text{ it holds that } \hat{\eta}_k \ge t \text{ and } \hat{\eta}_k \neq t\big) \ge \delta^n > 0.$$
For $E_n = \{\hat{\eta}_{\tau_{jn}} \ge \eta + n\}$, again by the strong Markov property for $\tau_{jn}$, we get
$$\begin{aligned}
\mathbb{P}\big(\hat{\eta}_k \ge t \text{ and } \hat{\eta}_k \neq t \text{ for all } k \in \mathbb{N}_0\big)
&\ge \mathbb{P}\big(\hat{\eta}_k \ge t \text{ and } \hat{\eta}_k \neq t \text{ for all } k > \tau_{jn} \,\big|\, E_n,\ \hat{\eta}_m \ge t,\ \hat{\eta}_m \neq t \text{ for all } m \le \tau_{jn}\big) \cdot \delta^n \\
&= \mathbb{P}\big(\hat{\eta}_k \ge t \text{ and } \hat{\eta}_k \neq t \text{ for all } k > \tau_{jn} \,\big|\, E_n\big) \cdot \delta^n.
\end{aligned}$$
So it suffices to find $n \in \mathbb{N}$ such that the right-hand side above is strictly positive. By considering the complementary event, we search for an $n \in \mathbb{N}$ such that $\mathbb{P}\big(\hat{\eta}_k \le t \text{ for some } k \ge \tau_{jn} \,\big|\, E_n\big) < 1$.
Set $p_{\min} := \min_{v \in V} p(v) > 0$ and consider the following events: for $k \in \mathbb{N}$,
$$A_k = \big\{\hat{\eta}_m \leq t \text{ for some } m \in (\tau_k, \tau_{k+1}]\big\}, \qquad B_k = \Big\{\hat{\eta}_{\tau_k} \leq t + \frac{\rho}{L}\, p\, \mathbb{E}[\tau]\, k\Big\}, \qquad C_k = \Big\{\tau_{k+1} - \tau_k \geq \frac{\rho\, p_{\min}}{LK}\, \mathbb{E}[\tau]\, k\Big\},$$
with $L \in \mathbb{N}$ chosen such that $j(\rho/L)\, p\, \mathbb{E}[\tau] < 1$. Observe that $\{\hat{\eta}_k \leq t \text{ for some } k > \tau_{jn}\} = \bigcup_{k=jn}^{\infty} A_k$, and for each $k \in \mathbb{N}_0$ it holds that $A_k \subset B_k \cup C_k$. This is due to (BFB): if there existed $v \in V$ with $\hat{\eta}_{\tau_k}(v) > t(v) + \frac{\rho}{L}\, p(v)\, \mathbb{E}[\tau]\, k$ such that $\tau_{k+1} - \tau_k < \frac{\rho\, p_{\min}}{LK}\, \mathbb{E}[\tau]\, k$, then for every $m \in (\tau_k, \tau_{k+1}]$ it would hold that
$$\hat{\eta}_m(v) > t(v) + \frac{\rho}{L}\, p(v)\, \mathbb{E}[\tau]\, k - K(m - \tau_k) \geq t(v) + \frac{\rho}{L}\, p(v)\, \mathbb{E}[\tau]\, k - K\, \frac{\rho\, p_{\min}}{LK}\, \mathbb{E}[\tau]\, k \geq t(v),$$
thus $B_k^c \cap C_k^c \subset A_k^c$. So we need to find an $n \in \mathbb{N}$ such that $\mathbb{P}\big(\bigcup_{k=jn}^{\infty} A_k \mid E_n\big) < 1$. We first show that
$$\mathbb{P}\Big(\bigcup_{k=jn}^{\infty} C_k \,\Big|\, E_n\Big) < \frac{1}{2} \qquad (3)$$
for $n$ big enough. For any $k \geq jn$ the event $C_k$ is independent of $E_n$ by the strong Markov property; hence, since all $\tau_{k+1} - \tau_k$ have the same distribution as $\tau$, it holds that $\sum_{k=1}^{\infty} \mathbb{P}(C_k) < \infty$, which together with a union bound implies the existence of $n_1 \in \mathbb{N}$ such that (3) holds. Next, we show that
$$\mathbb{P}\Big(\bigcup_{k=jn}^{\infty} B_k \,\Big|\, E_n\Big) < \frac{1}{2} \qquad (4)$$
for $n$ big enough. We have
$$\begin{aligned}
\mathbb{P}\Big(\exists k \geq jn : \hat{\eta}_{\tau_k} \leq \frac{\rho}{L}\, p\, \mathbb{E}[\tau]\, k + t \,\Big|\, \hat{\eta}_{\tau_{jn}} \geq \eta + n\Big)
&= \mathbb{P}\Big(\exists k \geq jn : (\hat{\eta}_{\tau_k} - \hat{\eta}_{\tau_{jn}}) + \hat{\eta}_{\tau_{jn}} \leq \frac{\rho}{L}\, p\, \mathbb{E}[\tau]\, k + t \,\Big|\, \hat{\eta}_{\tau_{jn}} \geq \eta + n\Big) \\
&\leq \mathbb{P}\Big(\exists k \geq jn : (\hat{\eta}_{\tau_k} - \hat{\eta}_{\tau_{jn}}) + \eta + n \leq \frac{\rho}{L}\, p\, \mathbb{E}[\tau]\, k + t \,\Big|\, \hat{\eta}_{\tau_{jn}} \geq \eta + n\Big) \\
&= \mathbb{P}\Big(\exists k \geq jn : \hat{\eta}_{\tau_k} - \hat{\eta}_{\tau_{jn}} \leq \frac{\rho}{L}\, p\, \mathbb{E}[\tau]\, k + (t - \eta) - n \,\Big|\, \hat{\eta}_{\tau_{jn}} \geq \eta + n\Big) \\
&= \mathbb{P}\Big(\exists k \geq 0 : Z_k - Z_0 \leq \frac{\rho}{L}\, p\, \mathbb{E}[\tau]\, k - n\Big(1 - j\, \frac{\rho}{L}\, p\, \mathbb{E}[\tau]\Big)\Big).
\end{aligned}$$
To show that (4) holds, it remains to bound the right-hand side above from above. Since $(Z_k)_k$ is a random walk on $\mathbb{Z}^V$, its projection onto the $v$-th component is a random walk on $\mathbb{Z}$, which we denote by $(R_k)_{k \in \mathbb{N}}$ for some fixed $v \in V$.
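The summability $\sum_{k} \mathbb{P}(C_k) < \infty$ uses that the return time $\tau$ of an irreducible finite-state Markov chain has exponentially decaying tails. A minimal numerical check, for an assumed $3$-state chain (any irreducible example works; this one is not from the paper): the tail $\mathbb{P}(\tau > m)$ is obtained from the substochastic matrix that deletes the return state, and it decays geometrically, so $\mathbb{P}(\tau \geq c\,k)$ is summable over $k$.

```python
def tail(m):
    """P(tau > m) for the first return time tau to state 0 of an assumed 3-state
    chain, via taboo probabilities: propagate the law of the chain killed upon
    hitting 0 through the substochastic block Q obtained by deleting state 0."""
    if m == 0:
        return 1.0
    q = [0.5, 0.3]                    # first step away from state 0 (no immediate return)
    Q = [[0.1, 0.5], [0.3, 0.1]]      # transitions within the taboo set {1, 2}
    for _ in range(m - 1):
        q = [q[0] * Q[0][0] + q[1] * Q[1][0],
             q[0] * Q[0][1] + q[1] * Q[1][1]]
    return q[0] + q[1]

# Successive tail ratios approach the spectral radius of Q (about 0.487 < 1):
# geometric decay, hence E[exp(t*tau)] < infinity for small t > 0.
ratios = [tail(m + 1) / tail(m) for m in range(5, 10)]
print([tail(m) for m in (1, 5, 10, 20)])
print(ratios)
```

The geometric tail is exactly what makes the exponential moment $\mathbb{E}[e^{\tilde{t}\tau}]$ finite in the estimate below.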
Note that $\mathbb{E}[R_1 - R_0] = \rho\, p(v)\, \mathbb{E}[\tau] > 0$. Assume $R_0 = 0$ and set $C = 1 - j(\rho/L)\, p(v)\, \mathbb{E}[\tau] > 0$. Then
$$\begin{aligned}
\mathbb{P}\Big(\exists k \geq 0 : Z_k \leq \frac{\rho}{L}\, p\, \mathbb{E}[\tau]\, k - n\Big(1 - j\, \frac{\rho}{L}\, p\, \mathbb{E}[\tau]\Big)\Big)
&\leq \mathbb{P}\Big(\exists k \geq 0 : R_k \leq \frac{\rho}{L}\, p(v)\, \mathbb{E}[\tau]\, k - Cn\Big) \\
&\leq \sum_{k=0}^{\infty} \mathbb{P}\Big(R_k - \rho\, p(v)\, \mathbb{E}[\tau]\, k \leq \Big(\frac{\rho}{L} - \rho\Big)\, p(v)\, \mathbb{E}[\tau]\, k - Cn\Big) \\
&= \sum_{k=0}^{\infty} \mathbb{P}\Big(-\big(R_k - k\, \mathbb{E}[R_1]\big) \geq \Big(\rho - \frac{\rho}{L}\Big)\, p(v)\, \mathbb{E}[\tau]\, k + Cn\Big).
\end{aligned}$$
Setting $\hat{R}_k := -R_k + k\, \mathbb{E}[R_1]$, then $(\hat{R}_k)_k$ is a centered random walk, and (BFB) implies $\hat{R}_1 \leq \tau K + \rho\, p(v)\, \mathbb{E}[\tau]$. Since $\tau$ is the first return time of a Markov chain on a finite state space, there exists $\tilde{t} > 0$ such that $\mathbb{E}\big[e^{\tilde{t}\tau}\big] < \infty$. Setting $\hat{t} := \tilde{t}/K > 0$ we get
$$\mathbb{E}\big[e^{\hat{t}\hat{R}_1}\big] \leq \mathbb{E}\big[e^{\tilde{t}\tau}\big]\, e^{\hat{t}\rho\, p(v)\, \mathbb{E}[\tau]} < \infty.$$
Thus the logarithmic moment generating function $\Lambda : [0, \hat{t}\,] \to \mathbb{R}$, $t \mapsto \log \mathbb{E}\big[e^{t\hat{R}_1}\big]$, is continuously differentiable on $(0, \hat{t}\,)$, fulfills $\Lambda'(0+) = \mathbb{E}[\hat{R}_1] = 0$, and is strictly convex. Set $c := (\rho - \rho/L)\, p(v)\, \mathbb{E}[\tau] > 0$, choose an arbitrary $t \in [0, \hat{t}\,]$, and apply the Markov inequality to obtain
$$\mathbb{P}\big(\hat{R}_k \geq ck + Cn\big) \leq \frac{\mathbb{E}\big[e^{t\hat{R}_k}\big]}{e^{t(ck + Cn)}} = e^{-Ctn} \Bigg(\frac{\mathbb{E}\big[e^{t\hat{R}_1}\big]}{e^{tc}}\Bigg)^{k} = e^{-Ctn}\, e^{-(ct - \Lambda(t))k}.$$
Since $\Lambda(0) = \Lambda'(0+) = 0$ and $\Lambda$ is strictly convex, there exists $t^* > 0$ with $I^* := ct^* - \Lambda(t^*) > 0$. Therefore
$$\mathbb{P}\Big(\exists k \geq 0 : Z_k \leq \frac{\rho}{L}\, p\, \mathbb{E}[\tau]\, k - n\Big(1 - j\, \frac{\rho}{L}\, p\, \mathbb{E}[\tau]\Big)\Big) \leq e^{-Ct^*n} \sum_{k=0}^{\infty} e^{-I^*k} = \frac{e^{-Ct^*n}}{1 - e^{-I^*}},$$
which implies the existence of $n_2 \in \mathbb{N}$ such that $\mathbb{P}\big(\bigcup_{k=jn_2}^{\infty} B_k \mid E_{n_2}\big) < \frac{1}{2}$. For $n_3 = \max\{n_1, n_2\}$,
$$\mathbb{P}\Big(\bigcup_{k=jn_3}^{\infty} A_k \,\Big|\, E_{n_3}\Big) \leq \mathbb{P}\Big(\bigcup_{k=jn_3}^{\infty} (B_k \cup C_k) \,\Big|\, E_{n_3}\Big) \leq \mathbb{P}\Big(\bigcup_{k=jn_3}^{\infty} B_k \,\Big|\, E_{n_3}\Big) + \mathbb{P}\Big(\bigcup_{k=jn_3}^{\infty} C_k \,\Big|\, E_{n_3}\Big) < \frac{1}{2} + \frac{1}{2} = 1.$$

The critical regime. When $\rho = 0$, the total mass may be conserved during topplings, which can cause non-stabilization for sufficiently large initial configurations.
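A minimal sketch of this mechanism, assuming classical deterministic sandpile topplings on a cycle (a special case chosen for illustration, not the paper's random toppling instructions): each toppling sends one particle to each of the two neighbors, so $a \equiv 1$ gives the conserved quantity $\sum_v \eta(v)$, and any configuration carrying more total mass than a stable configuration can hold stays unstable forever.

```python
import random

def topple(eta, v):
    """Deterministic sandpile toppling on a cycle: vertex v sends one particle to
    each of its two neighbors. In this special case the toppling matrix has zero
    column sums, so a = (1, ..., 1) gives the conserved quantity sum(eta)."""
    n = len(eta)
    eta[v] -= 2
    eta[(v - 1) % n] += 1
    eta[(v + 1) % n] += 1

rng = random.Random(1)
eta = [3, 0, 4, 1, 2]   # total mass 10 on 5 vertices; threshold t(v) = 2 everywhere
for _ in range(1000):
    unstable = [v for v in range(len(eta)) if eta[v] >= 2]
    # a stable configuration holds at most sum(t(v) - 1) = 5 < 10 particles,
    # so some vertex is always unstable: the network never stabilizes
    assert unstable
    topple(eta, rng.choice(unstable))
    assert sum(eta) == 10   # <a, eta> is constant along the whole evolution

print("mass after 1000 topplings:", sum(eta))
```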
To formalize this, we define conserved quantities: a function $a \in \mathbb{R}^V$ together with functions $\varphi(v, \cdot) : S_v \to \mathbb{R}$ constitutes a conserved quantity if, almost surely, for all initial configurations,
$$\sum_{v \in V} \big( a(v)\, \eta_n(v) + \varphi(v, q_n(v)) \big) = \mathrm{const.}$$
We first analyze the properties of $a$.

Lemma 5.2. If $a \in \mathbb{R}^V$ and $\varphi(v, \cdot) : S_v \to \mathbb{R}$ form a conserved quantity for the stochastic network $(\eta_n)_n$, then $a$ is entrywise positive; that is, $a(v) > 0$ for all $v \in V$.

Proof. Fix $w \in V$ and let $(s_0, \ldots, s_l) \in S_w^{l+1}$ be an excursion of $(Y_k^w)_{k \in \mathbb{N}}$. By choosing the initial configuration $\eta_0$ large enough at $w$, we can ensure that, with positive probability, the first $l$ topplings occur at $w$. By conservation,
$$\sum_{v \in V} \big( a(v)\, \eta_l(v) + \varphi(v, q_l(v)) \big) = \sum_{v \in V} \big( a(v)\, \eta_0(v) + \varphi(v, q_0(v)) \big).$$
Rearranging yields $\sum_{i=1}^{l} \sum_{v \in V} a(v)\, \tilde{I}_i^w(v) = 0$, where the instructions $\tilde{I}_i^w$ are independent with $\tilde{I}_i^w \sim \xi_{w, s_i}$. Here we also used that the environment returns to its initial state. If $\tau_w$ denotes the first return time of the environment at $w$ to its initial state, then $\sum_{i=1}^{\tau_w} \sum_{v \in V} a(v)\, I_i^w(v) = 0$, where the instruction stack $I = (I_j^v)_{v \in V, j \in \mathbb{N}_0}$ consists of independent instructions with $I_j^v \sim \xi_{v, Y_j^v}$. Taking conditional expectation given $\tau_w$ yields
$$\sum_{i=1}^{\tau_w} \sum_{v \in V} a(v)\, \mu_{w, Y_i^w}(v) = 0,$$
and by the ergodic theorem,
$$\sum_{s \in S_w} \pi_w(s) \sum_{v \in V} a(v)\, \mu_{w, s}(v) = 0.$$
Reordering the sums gives $M \cdot a = 0$. As in the analysis for $p$, this implies $a > 0$ entrywise (here $a$ is the unique right eigenvector corresponding to $\rho = 0$).

Using the entrywise positivity of $a$, we can now show that if a conserved quantity exists, then there are initial configurations that never stabilize, i.e., that remain unstable indefinitely.

Proposition 5.3.
If there exists a conserved quantity given by $a \in \mathbb{R}^V$ and $\varphi(v, \cdot) : S_v \to \mathbb{R}$, then there is an initial state $(\eta_0, q_0) \in \mathbb{N}_0^V \times S$ such that $\eta_n$ remains unstable for all $n \in \mathbb{N}$.

Proof. We choose $(\eta_0, q_0)$ such that
$$\sum_{v \in V} \big( a(v)\, \eta_0(v) + \varphi(v, q_0(v)) \big) \geq \sum_{v \in V} a(v)\, t(v) + |V| \cdot \max\{\varphi(v, s) : v \in V, s \in S_v\}.$$
Then for all $n \in \mathbb{N}$ we obtain
$$\begin{aligned}
\sum_{v \in V} a(v)\, \eta_n(v) &= \sum_{v \in V} \big( a(v)\, \eta_n(v) + \varphi(v, q_n(v)) - \varphi(v, q_n(v)) \big) \\
&\geq \sum_{v \in V} \big( a(v)\, \eta_n(v) + \varphi(v, q_n(v)) \big) - |V| \cdot \max\{\varphi(v, s) : v \in V, s \in S_v\} \\
&= \sum_{v \in V} \big( a(v)\, \eta_0(v) + \varphi(v, q_0(v)) \big) - |V| \cdot \max\{\varphi(v, s) : v \in V, s \in S_v\} \\
&\geq \sum_{v \in V} a(v)\, t(v),
\end{aligned}$$
which implies, in view of the positivity of $a$, that there exists $v \in V$ such that $\eta_n(v) \geq t(v)$.

To prove the converse, namely that non-stabilization entails the existence of a conserved quantity, we first identify when a $d$-dimensional random walk visits the orthant $O = \{x \in \mathbb{R}^d : x_i < 0 \text{ for all } i\}$. The next lemma shows that any random walk that never enters $O$ must be confined to a hyperplane in $\mathbb{R}^d$. Throughout, we use $\langle a, b \rangle = \sum_{i=1}^{d} a_i b_i$ for the inner product of $a, b \in \mathbb{R}^d$.

Lemma 5.3. Let $(X_i)_{i \in \mathbb{N}}$ be i.i.d. random vectors in $\mathbb{R}^d$ with mean zero and finite covariance matrix $\Sigma \in \mathbb{R}^{d \times d}$. Define the random walk $S_n = \sum_{i=1}^{n} X_i$. Then exactly one of the following holds:

(i) $\mathbb{P}(\exists n \in \mathbb{N} : S_n \in O) = 1$, where $O = \{x \in \mathbb{R}^d : x_i < 0 \text{ for all } i\}$.

(ii) There exists $a \in \mathbb{R}^d$ with $a > 0$ such that $\langle X_1, a \rangle = 0$ almost surely.

Proof. Note that if (ii) holds, then $(S_n)_n$ is almost surely confined to the hyperplane $\{x \in \mathbb{R}^d : \langle x, a \rangle = 0\}$, which does not intersect $O$. Thus, assume (ii) does not hold; we will then show that (i) must hold. For $a \in \mathbb{R}^d$ we have
$$\Sigma a = 0 \iff \mathbb{E}\big[ X_1(i)\, \langle X_1, a \rangle \big] = 0 \text{ for all } i \in \{1, 2, \ldots, d\}.$$
Multiplying $\mathbb{E}[X_1(i)\langle X_1, a \rangle]$ by $a_i$ and summing over $i = 1, \ldots, d$ yields $\mathbb{E}\big[\langle X_1, a \rangle^2\big] = 0$, which implies that the walk is almost surely confined to the hyperplane orthogonal to $a$. Since (ii) does not hold, it follows that for every $a \in \mathbb{R}^d$ with $a > 0$ we have $\Sigma a \neq 0$. By the central limit theorem, the normalized sums $S_n / \sqrt{n}$ converge in distribution to a mean-zero Gaussian with covariance $\Sigma$. Since $\Sigma a \neq 0$ for every $a > 0$, this Gaussian has nondegenerate mass in all directions and hence assigns positive probability to the orthant $O$. Hence, there exist $n_0 \in \mathbb{N}$ and a constant $C > 0$ such that for all $n \geq n_0$ it holds that $\mathbb{P}(S_n \in O) > C$. Let $\tau := \sup\{n \in \mathbb{N} : S_n \in O\}$ denote the last time the random walk $S_n$ visits the orthant $O$. For any $k \in \mathbb{N}$ and $n \geq \max\{k, n_0\}$,
$$C < \mathbb{P}(S_n \in O) = \mathbb{P}(S_n \in O, \tau > k) \leq \mathbb{P}(\tau > k),$$
so $\mathbb{P}(\tau > k) > C$ for all $k \in \mathbb{N}$. Hence, $C \leq \mathbb{P}(\tau = \infty) = \mathbb{P}(S_n \in O \text{ infinitely often})$. The Hewitt–Savage zero-one law implies
$$\mathbb{P}(S_n \in O \text{ infinitely often}) = 1,$$
and therefore $\mathbb{P}(\exists n \in \mathbb{N} : S_n \in O) = 1$.

Proposition 5.4 (Critical case). For $\rho = 0$, if there exists an initial configuration $\hat{\eta}_0 \in \mathbb{N}_0^V$ such that
$$\mathbb{P}\big(\forall n \in \mathbb{N}\ \exists v \in V \text{ with } \hat{\eta}_n(v) \geq t(v)\big) > 0,$$
i.e., the stochastic network $(\eta_n)_{n \in \mathbb{N}}$ fails to stabilize with positive probability, then a conserved quantity exists for the stochastic network.

Proof. Recall the toppling random walk $(Z_j)_{j \in \mathbb{N}}$ started at $\hat{\eta}_0$. Since $\rho = 0$, we have $\mathbb{E}[Z_1 - Z_0] = 0$, so $(Z_j)$ is a zero-drift random walk in $|V|$ dimensions. Moreover, by the assumption that the system does not stabilize from $\hat{\eta}_0$, the walk $(Z_j)$ avoids the set $Q_t = \{x \in \mathbb{R}^V : x(v) < t(v) \text{ for all } v \in V\}$ with positive probability. By Lemma 5.3, there exists a vector $a \in \mathbb{R}^V$ with strictly positive entries such that, almost surely, for every $j \in \mathbb{N}$ it holds that $\sum_{v \in V} a(v)\, Z_j(v) = \mathrm{const.}$
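The dichotomy of Lemma 5.3 can be observed numerically in a small toy case (a Monte Carlo illustration with assumed step distributions, not part of the proof): a centered two-dimensional walk with nondegenerate covariance enters the strictly negative orthant in essentially every run, while a walk whose steps satisfy $\langle X_1, (1,1) \rangle = 0$ almost surely is confined to the line $x + y = 0$ and never does.

```python
import random

def orthant_hit_fraction(step, runs=200, n_steps=5000, seed=3):
    """Fraction of simulated runs in which S_n enters O = {(x, y) : x < 0, y < 0}."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(runs):
        x = y = 0
        for _ in range(n_steps):
            dx, dy = step(rng)
            x += dx
            y += dy
            if x < 0 and y < 0:
                hits += 1
                break
    return hits / runs

def full_step(rng):   # independent +/-1 coordinates: Sigma nondegenerate, case (i)
    return rng.choice((-1, 1)), rng.choice((-1, 1))

def flat_step(rng):   # <X_1, (1, 1)> = 0 almost surely: case (ii) of the lemma
    s = rng.choice((-1, 1))
    return s, -s

full = orthant_hit_fraction(full_step)
flat = orthant_hit_fraction(flat_step)
print(full, flat)
```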
To find a conserved quantity, it remains to specify appropriate functions $\varphi(v, \cdot)$ for all $v \in V$. As a first step, we show that the inner product with $a$ is invariant under the toppling operations. Fix $v \in V$ and $s \in S_v$, and consider a random toppling $\xi_{v,s}$ with law $\nu_{v,s}$. We couple two steps of the toppling walk so that the first toppling at $v$ is independent across the two copies, while all subsequent topplings are identical. Start the toppling walk from the zero configuration and pick an initial environment state $\sigma \in S$ with $\sigma(v) = s$. Let $(\sigma, \sigma_1, \ldots, \sigma_n)$ be an excursion of the environment (so $\sigma_n = \sigma$), and let $(v, v_1, \ldots, v_n)$ be the associated sequence of toppled vertices, assumed to have positive probability. Sample topplings $(I_j)_{j=1}^{n}$ with $I_j \sim \nu_{v_j, \sigma_j(v_j)}$, and sample $I_0$ and $I_0'$ independently from $\nu_{v,s}$. Define
$$Z_1 = I_0 + \sum_{j=1}^{n} I_j, \qquad Z_1' = I_0' + \sum_{j=1}^{n} I_j.$$
Then both $Z_1$ and $Z_1'$ are distributed as a single step of the toppling random walk, and
$$\sum_{u \in V} a(u)\, I_0(u) + \sum_{j=1}^{n} \sum_{u \in V} a(u)\, I_j(u) = \sum_{u \in V} a(u)\, I_0'(u) + \sum_{j=1}^{n} \sum_{u \in V} a(u)\, I_j(u)$$
almost surely, which implies that $\sum_{u \in V} a(u)\, \xi_{v,s}(u) = \mathrm{const.}$ almost surely.

Next we construct the functions $\varphi(v, \cdot)$. Set
$$\Phi(v, s) = \sum_{u \in V} a(u)\, \xi_{v,s}(u)$$
for all $v \in V$ and $s \in S_v$. By the first step, $\Phi(v, s)$ is constant almost surely, so the map $\Phi : \bigcup_{v \in V} (\{v\} \times S_v) \to \mathbb{R}$ is deterministic. Moreover, for any excursion $(s_0, \ldots, s_n) \in S_v^{n+1}$ that occurs with positive probability for a given $v \in V$, we have $\sum_{i=1}^{n} \Phi(v, s_i) = 0$, since this sum corresponds to a single step of the toppling random walk. Consequently, for any $v \in V$, any two states $s, s' \in S_v$, and any two paths $(s, s_1, \ldots, s_n, s')$ and $(s, d_1, \ldots, d_m, s')$ that each occur with positive probability,
$$\sum_{j=1}^{n} \Phi(v, s_j) + \Phi(v, s') = \sum_{j=1}^{m} \Phi(v, d_j) + \Phi(v, s').$$
The quantity $\lambda(v, s, s') = \sum_{j=1}^{n} \Phi(v, s_j) + \Phi(v, s')$ is therefore well-defined: it does not depend on the particular path chosen from $s$ to $s'$. For each $v \in V$, select a reference state $s_v \in S_v$ and set
$$\varphi(v, s) = -\lambda(v, s_v, s).$$
Let $I = (I_j^v)_{v \in V, j \in \mathbb{N}_0}$ be a stack of toppling instructions. If $(v_1, \ldots, v_n) = \mathbf{v}$ denotes the sequence of vertices toppled in going from $\hat{\eta}_0$ to $\hat{\eta}_n$, then
$$\begin{aligned}
\sum_{u \in V} a(u) \big( \hat{\eta}_n(u) - \hat{\eta}_0(u) \big) &= \sum_{u \in V} \sum_{w \in V} \sum_{i=1}^{m_{\mathbf{v}}(w)} a(u)\, I_i^w(u) = \sum_{w \in V} \sum_{i=1}^{m_{\mathbf{v}}(w)} \Phi\big(w, Y_i^w\big) = \sum_{w \in V} \lambda\big(w, Y_0^w, Y_{m_{\mathbf{v}}(w)}^w\big) \\
&= \sum_{w \in V} \Big( \lambda\big(w, s_w, Y_{m_{\mathbf{v}}(w)}^w\big) - \lambda\big(w, s_w, Y_0^w\big) \Big) = \sum_{w \in V} \Big( -\varphi\big(w, Y_{m_{\mathbf{v}}(w)}^w\big) + \varphi\big(w, Y_0^w\big) \Big),
\end{aligned}$$
which completes the proof.

Questions and remarks

Infinite environments and infinite graphs $G$. In our model, each environment $S_v$ (for $v \in V$) is finite, and the chains $(Y_j^v)_{j \in \mathbb{N}}$ are irreducible and aperiodic. A natural question is whether the result holds when the environments are infinite and the environment chains are positive recurrent. This should be a straightforward extension of the finite case. The same question can be asked when the graph $G = (V, E)$ is infinite as well.

Driven dissipative system. Consider a subcritical stochastic network, or a critical one without a conserved quantity, on a finite graph $G = (V, E)$ with initial state $(\eta, q)$. Let the system stabilize, then pick a uniformly random vertex in $V$, add a particle, and stabilize again. The resulting sequence of stable configurations forms a Markov chain whose recurrent states are $R \subset \mathbb{Z}^V \times S = \mathbb{Z}^V \times \prod_{v \in V} S_v$. This chain admits a stationary distribution $\iota \in \mathrm{Prob}(R)$. What can be said about this stationary distribution?

Infinite volume limit.
Let $G_n = (V_n, E_n)$ be an increasing sequence of subgraphs exhausting an infinite graph $G = (V, E)$. Attach to each $v \in V$ an environment $S_v$ with chain $(Y_j^v)_{j \in \mathbb{N}}$, and for each $G_n$ consider the driven dissipative system with stationary distribution $\iota_n$. The question that arises is whether there exists a probability measure $\iota$ on $\mathbb{Z}^V \times \prod_{v \in V} S_v$ such that, for every configuration $(\eta, q)$ and every finite $A \subset V$,
$$\iota_n\big((\eta, q)|_A\big) \xrightarrow[n \to \infty]{} \iota\big((\eta, q)|_A\big),$$
where $(\eta, q)|_A$ denotes the cylinder event that configurations (on $G_n$ or $G$) agree with $(\eta, q)$ on all vertices of $A$.

Stabilization time. For a stochastic network in a Markovian environment on $G = (V, E)$ with initial state $(\eta, q)$, define the stabilization time $T := \inf\{k \geq 0 : \eta_k \leq t\}$. What can be said about $T$? For instance, if $T_n$ denotes the stabilization time on an increasing sequence of graphs $G_n = (V_n, E_n)$ exhausting an infinite graph $G = (V, E)$, with initial data $(\eta, q)|_{V_n}$ for some $(\eta, q) \in \mathbb{Z}^V \times \prod_{v \in V} S_v$, does there exist a scaling function $\Phi : \mathbb{N} \to \mathbb{N}$ and a constant $\gamma > 0$ such that
$$\lim_{n \to \infty} \frac{T_n}{\Phi(n)} = \gamma\,?$$

Funding information. The research of M. Klötzer and E. Sava-Huss was funded in part by the Austrian Science Fund (FWF), grant 10.55776/PAT3123425. For open access purposes, the authors have applied a CC BY public copyright license to any author-accepted manuscript version arising from this submission.

References

[BL16a] Benjamin Bond and Lionel Levine. Abelian networks I. Foundations and examples. SIAM J. Discrete Math., 30(2):856–874, 2016.

[BL16b] Benjamin Bond and Lionel Levine. Abelian networks II: Halting on all inputs. Selecta Math. (N.S.), 22(1):319–340, 2016.

[BL16c] Benjamin Bond and Lionel Levine. Abelian networks III: The critical group. J. Algebraic Combin., 43(3):635–663, 2016.

[BTW87] Per Bak, Chao Tang, and Kurt Wiesenfeld. Self-organized criticality: An explanation of the 1/f noise. Phys. Rev. Lett., 59:381–384, 1987.

[BW03] Itai Benjamini and David B. Wilson. Excited random walk. Electron. Comm. Probab., 8:86–92, 2003.

[CL22] Swee Hong Chan and Lionel Levine. Abelian networks IV. Dynamics of nonhalting networks. Mem. Amer. Math. Soc., 276(1358):vii+89, 2022.

[DF91] Persi Diaconis and William Fulton. A growth model, a game, an algebra, Lagrange inversion, and characteristic classes. Volume 49, pages 95–119, 1991. Commutative algebra and algebraic geometry, II (Italian) (Turin, 1990).

[Dha90] Deepak Dhar. Self-organized critical state of sandpile automaton models. Phys. Rev. Lett., 64(14):1613–1616, 1990.

[Dha99] Deepak Dhar. The abelian sandpile and related models. Physica A: Statistical Mechanics and its Applications, 263(1–4):4–25, 1999.

[GL23] Lila Greco and Lionel Levine. Branching in a Markovian environment. Markov Process. Related Fields, 29(1):1–33, 2023.

[Hol03] Alexander E. Holroyd. Sharp metastability threshold for two-dimensional bootstrap percolation. Probab. Theory Related Fields, 125(2):195–224, 2003.

[KLSH26] Robin Kaiser, Lionel Levine, and Ecaterina Sava-Huss. Locally Markov walks on finite graphs. Random Structures Algorithms, 68(1):Paper No. e70045, 2026.

[RS12] Leonardo T. Rolla and Vladas Sidoravicius. Absorbing-state phase transition for driven-dissipative stochastic dynamics on $\mathbb{Z}$. Invent. Math., 188(1):127–150, 2012.

[ST17] Vladas Sidoravicius and Augusto Teixeira. Absorbing-state transition for stochastic sandpiles and activated random walks. Electron. J. Probab., 22:paper no. 33, 35 pp., 2017.

[Tse90] Paul Tseng. Distributed computation for linear programming problems satisfying a certain diagonal dominance condition. Math. Oper. Res., 15(1):33–48, 1990.

[vE87] Aernout C. D. van Enter. Proof of Straley's argument for bootstrap percolation. J. Statist. Phys., 48(3-4):943–945, 1987.
