
The Law of Large Numbers for Time-inhomogeneous Markov Chains under General Conditions

Aaron Lau and Kouji Yano

Abstract

The weak and strong laws of large numbers for time-inhomogeneous Markov chains are studied under general conditions. First, under Drift Condition and Contraction Condition in total variation, we prove the weak law of large numbers. Then, assuming Drift Condition together with a time-inhomogeneous Doeblin minorization, we develop a Nummelin-type splitting and obtain a strong law of large numbers. Our results utilize the invariant measure family in the sense of Liu–Lu (2025), and extend the classical Harris-ergodic LLN to the time-inhomogeneous setting.

1 Introduction

The Law of Large Numbers (LLN) is one of the central results in probability theory. Beginning with the classical form for independent random variables (Borel, Kolmogorov), it was later extended to Markov chains. For finite-state regular Markov chains, Kemeny–Snell [5] showed that if $\pi_j$ denotes the limiting probability of being in state $j$, then $\pi_j$ also equals the long-run empirical frequency of visits to $j$; the LLN therefore connects invariant measures with time averages. Subsequent developments established links between LLNs for Markov chains and martingale convergence theorems (see Hall–Heyde [3]). For general state-space homogeneous Markov chains, the modern theory was developed by Meyn–Tweedie [8], utilizing Drift Conditions, small sets, and Harris recurrence. Their regeneration (or Nummelin splitting) method has since become a canonical tool for proving limit theorems such as the LLN and CLT in the presence of dependence. See also [2] for the standard theory of the Lyapunov function method in the study of ergodicity of homogeneous Markov chains.
In many applications of current interest (reinforcement learning, adaptive stochastic control, and stochastic optimization), the natural objects are no longer homogeneous Markov chains. Their transition kernels may change with time, giving rise to time-inhomogeneous Markov processes. In this setting, even basic ergodic principles are substantially more delicate. An invariant distribution is replaced by a family of time-indexed invariant measures $\{\mu_n\}_{n \in \mathbb{Z}}$ (see (2.4)).

Only recently has a general ergodic theory for such chains been developed. Liu–Lu [7] established existence and uniqueness of $\{\mu_n\}_{n \in \mathbb{Z}}$ and exponential ergodicity under Drift and Contraction Conditions, which are weaker than Doeblin's Condition. Note that Vassiliou [10] obtained the Law of Large Numbers for time-inhomogeneous Markov systems on finite state spaces in a setting different from ours.

Let $(X_n)_{n \in \mathbb{Z}}$ be a time-inhomogeneous Markov chain satisfying Drift Condition and Doeblin's Condition, and let $\{\mu_n\}_{n \in \mathbb{Z}}$ denote its unique invariant measure family. Using a Nummelin-type splitting construction combined with a maximal coupling argument exploiting the geometric forgetting rate of [7], we show that for any initial state $x$ with $V(x) < \infty$ and any bounded measurable function $g : \mathcal{X} \to \mathbb{R}$,

\[ \frac{1}{n} \sum_{k=0}^{n-1} g(X_k) - \frac{1}{n} \sum_{k=0}^{n-1} \mu_k(g) \xrightarrow[n \to \infty]{\text{a.s.}} 0. \]

That is, the empirical averages of the chain converge almost surely to the averages of $g$ with respect to the evolving invariant measures. This recovers the classical Harris-ergodic LLN for homogeneous Markov chains as a special case, and extends the "time average = space average" principle to evolving, nonstationary stochastic dynamics.
Consequently, the present results offer a theoretical foundation for Monte Carlo methods in nonstationary stochastic dynamics, where explicit invariant distributions are typically unavailable but long-time averages are of primary interest.

The organization of this paper is as follows. In Section 2, we introduce basic notions and earlier results about time-inhomogeneous Markov chains as preliminaries. In Section 3, we state the main results without proofs. In Section 4, we prove the weak law of large numbers based on the work of Liu–Lu [7], where Drift Condition and Contraction Condition are leveraged. Section 5 consists of three subsections. In Subsection 5.1, we construct a splitting chain of Nummelin type. In Subsection 5.2, we show that the Drift Condition guarantees that the Markov chain returns to the small set with a geometric tail probability. In Subsection 5.3, we prove the strong law of large numbers based on Drift Condition and Doeblin's Condition.

2 Preliminaries

We recall basic definitions and notation for time-inhomogeneous Markov chains and some results of Liu–Lu [7]. Let $\mathcal{X}$ be a Polish space equipped with its Borel $\sigma$-field $\mathcal{B}(\mathcal{X})$. Denote by $\mathcal{P}(\mathcal{X})$ the set of all probability measures on $\mathcal{X}$. Let $P$ be a family of transition probability kernels; that is, for $m \le n$ and $x \in \mathcal{X}$, we have

\[ P(m, x, n, \cdot) \in \mathcal{P}(\mathcal{X}), \]

with $P(m, x, m, \cdot) = \epsilon_x$, where $\epsilon_x$ denotes the Dirac mass at $x$, and the Chapman–Kolmogorov identity holds:

\[ P(m_1, x, m_3, A) = \int_{\mathcal{X}} P(m_1, x, m_2, dy)\, P(m_2, y, m_3, A) \]

for all $m_1 \le m_2 \le m_3$ in $\mathbb{Z}$, $x \in \mathcal{X}$ and $A \in \mathcal{B}(\mathcal{X})$. We consider an inhomogeneous Markov chain $\{X_n\}_{n \in \mathbb{Z}_+}$ with the transition probability kernels $P$, that is,

\[ P(X_{n+1} \in A \mid X_j : j \le n) = P(n, X_n, n+1, A) \quad \text{a.s.} \tag{2.1} \]

for $n \in \mathbb{Z}_+$ and $A \in \mathcal{B}(\mathcal{X})$.
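On a finite state space, the kernel family and the Chapman–Kolmogorov identity can be represented concretely by row-stochastic matrices and matrix products. The following sketch uses a hypothetical two-state time-varying kernel (the function `step_matrix` and its entries are our illustrative assumptions, not taken from the paper) and checks the identity numerically:

```python
import numpy as np

def step_matrix(n):
    # Illustrative one-step kernel P(n-1, ., n, .): any row-stochastic
    # 2x2 matrix varying with n would do.
    a = 0.3 + 0.1 * np.sin(n)
    b = 0.4 + 0.1 * np.cos(n)
    return np.array([[1 - a, a], [b, 1 - b]])

def transition(m, n):
    """P(m, ., n, .) as a product of one-step matrices; P(m, ., m, .) is
    the identity, i.e. the Dirac mass eps_x in matrix form."""
    P = np.eye(2)
    for k in range(m + 1, n + 1):
        P = P @ step_matrix(k)
    return P

# Chapman-Kolmogorov: composing m1 -> m2 -> m3 equals going m1 -> m3 directly.
m1, m2, m3 = 0, 3, 7
assert np.allclose(transition(m1, m3), transition(m1, m2) @ transition(m2, m3))
```

The identity holds exactly here because matrix multiplication is associative; the same bookkeeping underlies the general measure-theoretic statement.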
Given a probability measure $\nu \in \mathcal{P}(\mathcal{X})$, we define the push-forward of $\nu$ under the transition kernel from time $m$ to $n$ by

\[ P^*_{m,n}\nu(A) := \int_{\mathcal{X}} P(m, x, n, A)\, \nu(dx), \quad A \in \mathcal{B}(\mathcal{X}). \tag{2.2} \]

The associated transition semigroup acts by

\[ P_{m,n} f(x) := \int_{\mathcal{X}} f(y)\, P(m, x, n, dy) \tag{2.3} \]

for any bounded measurable function $f : \mathcal{X} \to \mathbb{R}$.

Definition 2.1 (Liu–Lu [7]). A family of probability measures $\{\mu_n\}_{n \in \mathbb{Z}} \subset \mathcal{P}(\mathcal{X})$ is called an invariant measure family for $P$ if

\[ P^*_{m,n}\mu_m = \mu_n \quad \text{for all } m \le n. \tag{2.4} \]

Although we are primarily interested in the forward chain $(X_n)_{n \ge 0}$, we assume the family of kernels $P(n, x, m, \cdot)$, $n \le m$, is defined for all $n \in \mathbb{Z}$. This allows us to use the invariant measure family framework of Liu–Lu [7], which is naturally formulated on $\mathbb{Z}$. The results then apply in particular to the restriction of the chain to $\mathbb{Z}_+$. If such a family $\{\mu_n\}_{n \in \mathbb{Z}}$ exists, then there exists an inhomogeneous Markov chain $\{X_n\}_{n \in \mathbb{Z}_+}$ such that the distribution of $X_n$ is $\mu_n$ for all $n \in \mathbb{Z}_+$.

The supremum norm $\|\cdot\|_\infty$ is defined by $\|\varphi\|_\infty := \sup_{x \in \mathcal{X}} |\varphi(x)|$ for a measurable function $\varphi : \mathcal{X} \to \mathbb{R}$. The total variation norm of a finite signed measure $\mu$ on $\mathcal{X}$ is defined by

\[ \|\mu\|_{\mathrm{TV}} := \sup_{\|\varphi\|_\infty \le 1} \left| \int_{\mathcal{X}} \varphi(x)\, \mu(dx) \right| = \sup_{A \in \mathcal{B}(\mathcal{X})} \mu(A) - \inf_{A \in \mathcal{B}(\mathcal{X})} \mu(A). \tag{2.5} \]

The difference of two probability measures is always a finite signed measure, hence the total variation distance $\|\nu_1 - \nu_2\|_{\mathrm{TV}}$ is well-defined on $\mathcal{P}(\mathcal{X})$.

Assumption 2.2 (Drift Condition). There exists a Borel measurable function $V : \mathcal{X} \to [0, \infty]$ such that

(i) the set $\{x : V(x) < \infty\}$ is nonempty;

(ii) there exist $0 < \gamma < 1$ and $C > 0$ such that for any $x \in \mathcal{X}$ and $n \in \mathbb{Z}$,

\[ \int_{\mathcal{X}} V(y)\, P(n-1, x, n, dy) \le \gamma V(x) + C. \]

For any $R > 0$, define the set $\mathcal{C}_R := \{(x, y) : V(x) + V(y) \le R\}$.

Assumption 2.3 (Contraction Condition).
For any $R > 0$, there exist constants $n_0 = n_0(R) \in \mathbb{N}$ and $0 < \delta = \delta(R) < 1$ such that for any $n \in \mathbb{Z}$,

\[ \sup_{(x,y) \in \mathcal{C}_R} \| P(n - n_0, x, n, \cdot) - P(n - n_0, y, n, \cdot) \|_{\mathrm{TV}} \le 2(1 - \delta), \]

which is equivalent to

\[ \left| \int_{\mathcal{X}} \varphi(z)\, P(n - n_0, x, n, dz) - \int_{\mathcal{X}} \varphi(z)\, P(n - n_0, y, n, dz) \right| \le 2(1 - \delta) \]

uniformly over all measurable functions $\varphi : \mathcal{X} \to \mathbb{R}$ with $\|\varphi\|_\infty \le 1$.

Denote the weighted supremum norm $\|\cdot\|$ with respect to the function $V : \mathcal{X} \to [0, \infty]$ by

\[ \|\varphi\| := \sup_{x \in \mathcal{X}} \frac{|\varphi(x)|}{1 + V(x)}. \]

Liu–Lu [7] showed the existence and uniqueness of the invariant measure family for $P$ and exponential ergodicity under the above assumptions.

Theorem 2.4 (Liu–Lu [7]). Suppose Assumption 2.2 and Assumption 2.3 hold. Then there exists a unique sequence $\{\mu_n\}_{n \in \mathbb{Z}}$ of probability measures satisfying

\[ \int_{\mathcal{X}} V(x)\, \mu_n(dx) < +\infty \quad \text{for any } n \in \mathbb{Z} \]

such that $P^*_{m,n}\mu_m = \mu_n$ for any $m \le n$. Moreover, the following assertions hold:

(E1) there exist constants $0 < \alpha < 1$ and $M > 0$ such that

\[ \left\| \int_{\mathcal{X}} \varphi(y)\, P(n-m, \cdot, n, dy) - \int_{\mathcal{X}} \varphi(y)\, \mu_n(dy) \right\| \le M \alpha^m \left\| \varphi - \int_{\mathcal{X}} \varphi(y)\, \mu_n(dy) \right\| \]

for any $n \in \mathbb{Z}$ and $m \in \mathbb{N}$;

(E2) there exist constants $0 < \alpha < 1$ and $\tilde{M} > 0$ such that

\[ \| P(n-m, x, n, \cdot) - \mu_n \|_{\mathrm{TV}} \le \tilde{M} \alpha^m (1 + V(x)) \]

for any $n \in \mathbb{Z}$, $m \in \mathbb{N}$ and $x \in \mathcal{X}$.

3 Main Results

In this section, we present the main results regarding the Weak Law of Large Numbers (WLLN) and the Strong Law of Large Numbers (SLLN). We first state the WLLN under Drift and Contraction Conditions.

Theorem 3.1. Suppose Assumptions 2.2 and 2.3 hold. For the invariant measure family $(\mu_n)_{n \in \mathbb{Z}}$ given by Theorem 2.4, assume the uniform finite $V$-moment:

\[ \sup_{n \in \mathbb{N}} \int_{\mathcal{X}} V(x)\, \mu_n(dx) < \infty. \]

Let $(X_n)_{n \in \mathbb{Z}_+}$ denote the time-inhomogeneous Markov chain with transition probability $P$ such that $X_0 \sim \mu_0$ (consequently, $X_n \sim \mu_n$ for all $n \in \mathbb{Z}_+$).
Let $g$ be a bounded measurable function on $\mathcal{X}$. Then the following convergence in probability holds:

\[ \frac{1}{n} \sum_{k=0}^{n-1} g(X_k) - \frac{1}{n} \sum_{k=0}^{n-1} \mu_k(g) \xrightarrow[n \to \infty]{\mathbb{P}} 0. \]

The proof will be given in Section 4. In order to study the SLLN, we use Doeblin's Condition to carry out Nummelin's splitting coupling [9]. As a small set, we define, for $R > 0$,

\[ C(R) := \{ x \in \mathcal{X} : V(x) \le R \}. \]

Assumption 3.2 (Doeblin's Condition). For any $R > 0$, there exist $0 < \beta < 1$ and a probability measure $\nu$ such that for any $n \in \mathbb{Z}$ and $x \in C(R)$ we have

\[ P(n-1, x, n, \cdot) \ge \beta \nu(\cdot). \]

We next state the SLLN under Drift and Doeblin's Conditions.

Theorem 3.3. Suppose Assumptions 2.2 and 3.2 hold. Let $x \in \mathcal{X}$ be such that $V(x) < \infty$. Let $g : \mathcal{X} \to \mathbb{R}$ be a bounded measurable function. Then, for the time-inhomogeneous Markov chain $(X_n)_{n \in \mathbb{Z}_+}$ with $X_0 \sim \epsilon_x$, we have

\[ \frac{1}{n} \sum_{k=0}^{n-1} g(X_k) - \frac{1}{n} \sum_{k=0}^{n-1} \mu_k(g) \xrightarrow[n \to \infty]{\text{a.s.}} 0. \]

The proof of Theorem 3.3 will be given in Section 5. We now consider a simple example of a time-dependent Markov chain satisfying the law of large numbers.

Example 3.4. Consider the transition probability on the space $\mathcal{X} = \{1, 2\}$:

\[ P_{n-1,n} = \begin{pmatrix} P(n-1, 1, n, 1) & P(n-1, 1, n, 2) \\ P(n-1, 2, n, 1) & P(n-1, 2, n, 2) \end{pmatrix} = \begin{pmatrix} 1 - a(n) & a(n) \\ b(n) & 1 - b(n) \end{pmatrix}, \]

where

\[ a(n) = \frac{1}{3} + \frac{\sin n}{6}, \quad b(n) = \frac{1}{4} + \frac{\cos n}{8}, \quad n \in \mathbb{Z}. \]

We see that $\frac{1}{6} \le a(n) \le \frac{1}{2}$ and $\frac{1}{8} \le b(n) \le \frac{3}{8}$; thus Doeblin's Condition is satisfied with $\beta = \frac{1}{4}$ and $\nu = (1/2, 1/2)$. The Drift Condition is easily verified if we take $V : \mathcal{X} \to \mathbb{R}$ with $V(1) = V(2) = 2$, for instance. Hence the corresponding invariant measure family exists, and we denote it by $(\mu_k)_{k \in \mathbb{Z}}$. From the exponential ergodicity of Theorem 2.4, we obtain

\[ P_{-n,k} = P_{-n,-n+1} P_{-n+1,-n+2} \cdots P_{k-1,k} \xrightarrow[n \to \infty]{} \begin{pmatrix} \mu_k(1) & \mu_k(2) \\ \mu_k(1) & \mu_k(2) \end{pmatrix} \]

for every $k \in \mathbb{Z}$.
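Both the Doeblin minorization and the convergence of the backward products $P_{-n,k}$ in Example 3.4 can be checked numerically. A minimal sketch (the function names are ours; the numbers come from the example):

```python
import numpy as np

def P_step(n):
    # One-step matrix P_{n-1,n} of Example 3.4.
    a = 1/3 + np.sin(n) / 6
    b = 1/4 + np.cos(n) / 8
    return np.array([[1 - a, a], [b, 1 - b]])

# Doeblin's Condition: each row of P_{n-1,n} dominates beta * nu with
# beta = 1/4 and nu = (1/2, 1/2), i.e. every entry is at least 1/8.
assert all(P_step(n).min() >= 1/8 - 1e-12 for n in range(-100, 100))

def backward_product(n, k):
    """P_{-n,k} = P_{-n,-n+1} ... P_{k-1,k}."""
    P = np.eye(2)
    for m in range(-n + 1, k + 1):
        P = P @ P_step(m)
    return P

# As n grows the rows of P_{-n,k} coincide, and the common row
# approximates the invariant measure mu_k = (mu_k(1), mu_k(2)).
Pk = backward_product(200, 0)
assert np.allclose(Pk[0], Pk[1], atol=1e-10)  # rank-one limit reached
mu_0 = Pk[0]                                   # numerical approximation of mu_0
```

Each one-step matrix contracts the difference between rows by at least a factor $1 - 2\beta \nu_{\min} = 3/4$, so 200 steps drive the rows together far below the tolerance used above.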
We consider the Markov chain $(X_k)_{k \in \mathbb{Z}_+}$ with initial value $X_0 = x_0 \in \{1, 2\}$ and set $g(x) = x$. By Theorem 3.3, we have the SLLN:

\[ \frac{1}{n} \sum_{k=0}^{n-1} X_k - \frac{1}{n} \sum_{k=0}^{n-1} (\mu_k(1) + 2\mu_k(2)) \xrightarrow[n \to \infty]{\text{a.s.}} 0. \]

In particular, if $\frac{1}{n} \sum_{k=0}^{n-1} (\mu_k(1) + 2\mu_k(2))$ converges to a constant, say $C$, we have

\[ \frac{1}{n} \sum_{k=0}^{n-1} X_k \xrightarrow[n \to \infty]{\text{a.s.}} C. \]

However, we do not know whether the limit $C$ exists or not.

4 Proof of WLLN

We give the proof of Theorem 3.1.

Proof of Theorem 3.1. Let us compute the covariance of $g(X_i)$ and $g(X_j)$ for $i < j$. By (E2) of Theorem 2.4, we have

\begin{align*}
|\mathrm{Cov}(g(X_i), g(X_j))| &= |E[g(X_i) g(X_j)] - E[g(X_i)]\, E[g(X_j)]| \\
&= \left| \int_{\mathcal{X}} g(x) \left( E[g(X_j) \mid X_i = x] - \mu_j(g) \right) P(X_i \in dx) \right| \\
&\le \int_{\mathcal{X}} |g(x)|\, \|g\|_\infty\, \| P(i, x, j, \cdot) - \mu_j \|_{\mathrm{TV}}\, P(X_i \in dx) \\
&\le \|g\|_\infty^2 \tilde{M} \alpha^{j-i} \int_{\mathcal{X}} (1 + V(x))\, P(X_i \in dx) \le C \alpha^{j-i},
\end{align*}

where $C := \|g\|_\infty^2 \tilde{M} \sup_{n \in \mathbb{N}} \int_{\mathcal{X}} (1 + V(x))\, \mu_n(dx) < \infty$. Then we have

\[ \mathrm{Var}\left( \sum_{k=0}^{n-1} g(X_k) \right) = \sum_{i,j=0}^{n-1} \mathrm{Cov}(g(X_i), g(X_j)) \le \sum_{k=0}^{n-1} \mathrm{Var}(g(X_k)) + 2 \sum_{0 \le i < j \le n-1} C \alpha^{j-i} = O(n). \]

Hence, by Chebyshev's inequality, for any $\varepsilon > 0$ we have

\[ P\left( \left| \frac{1}{n} \sum_{k=0}^{n-1} g(X_k) - \frac{1}{n} \sum_{k=0}^{n-1} \mu_k(g) \right| > \varepsilon \right) \le \frac{\mathrm{Var}\left( \sum_{k=0}^{n-1} g(X_k) \right)}{n^2 \varepsilon^2} = O\left( \frac{1}{n} \right). \]

Therefore, we obtain the desired convergence.

5 Proof of SLLN

5.1 Construction of splitting chains

It is easy to see that Doeblin's Condition (Assumption 3.2) implies the Contraction Condition (Assumption 2.3). In fact, we introduce probability measures $Q_x$ and $Q_y$ such that

\begin{align*}
P(n-1, x, n, \cdot) &= \beta \nu(\cdot) + (1 - \beta) Q_x, \\
P(n-1, y, n, \cdot) &= \beta \nu(\cdot) + (1 - \beta) Q_y.
\end{align*}

Then we have

\[ \| P(n-1, x, n, \cdot) - P(n-1, y, n, \cdot) \|_{\mathrm{TV}} \le (1 - \beta) \| Q_x - Q_y \|_{\mathrm{TV}} \le 2(1 - \beta), \]

which implies the Contraction Condition.

Remark 5.1. The Contraction Condition may not imply Doeblin's Condition.
A counterexample is given by $\mathcal{X} = \{1, 2, 3\}$, $C(R) = \mathcal{X}$, and

\[ P(1, \cdot) = \tfrac{1}{2}(\epsilon_1 + \epsilon_2), \quad P(2, \cdot) = \tfrac{1}{2}(\epsilon_2 + \epsilon_3), \quad P(3, \cdot) = \tfrac{1}{2}(\epsilon_3 + \epsilon_1). \]

Then $\| P(x, \cdot) - P(y, \cdot) \|_{\mathrm{TV}} = 1$ for $x \neq y$, so the contraction bound holds with $\delta = \frac{1}{2}$. However, no $\beta > 0$ and probability measure $\nu$ can satisfy $P(x, \cdot) \ge \beta \nu(\cdot)$ for all $x$. See Bansaye–Cloez–Gabriel [1] for other results about Doeblin's Condition.

From now on, we fix $R$ such that $R > \frac{C}{(1-\gamma)^2}$ and $C(R) \neq \emptyset$, and start to split the chain with Nummelin's splitting coupling method [9]. We first split the space $\mathcal{X}$ itself by writing $\check{\mathcal{X}} = \mathcal{X} \times \{0, 1\}$, where $\mathcal{X}_0 := \mathcal{X} \times \{0\}$ and $\mathcal{X}_1 := \mathcal{X} \times \{1\}$ are thought of as copies of $\mathcal{X}$ equipped with copies $\mathcal{B}(\mathcal{X}_0)$, $\mathcal{B}(\mathcal{X}_1)$ of the $\sigma$-field $\mathcal{B}(\mathcal{X})$. We let $\mathcal{B}(\check{\mathcal{X}})$ be the $\sigma$-field of subsets of $\check{\mathcal{X}}$ generated by $\mathcal{B}(\mathcal{X}_0)$ and $\mathcal{B}(\mathcal{X}_1)$: that is, $\mathcal{B}(\check{\mathcal{X}})$ is the smallest $\sigma$-field containing sets of the form $A_0 := A \times \{0\}$, $A_1 := A \times \{1\}$, $A \in \mathcal{B}(\mathcal{X})$. We write $x_i = (x, i)$, $i = 0, 1$, for elements of $\check{\mathcal{X}}$, with $x_0$ denoting members of $\mathcal{X}_0$ and $x_1$ denoting members of $\mathcal{X}_1$.

If $\lambda$ is any measure on $\mathcal{B}(\mathcal{X})$, then we split $\lambda$ into two measures on $\mathcal{X}_0$ and $\mathcal{X}_1$ by defining the measure $\lambda^*$ on $\mathcal{B}(\check{\mathcal{X}})$ through

\begin{align*}
\lambda^*(A_0) &= (1 - \beta) \lambda(A \cap C(R)) + \lambda(A \cap C(R)^c), \\
\lambda^*(A_1) &= \beta \lambda(A \cap C(R)).
\end{align*}

It is critical to note that $\lambda$ is the marginal measure induced by $\lambda^*$, in the sense that for any $A \in \mathcal{B}(\mathcal{X})$ we have

\[ \lambda^*(A_0 \cup A_1) = \lambda(A). \]

We also note that

\[ \lambda^*(C(R)^c \times \{1\}) = 0, \quad \lambda^*(C(R) \times \{1\}) = \beta \lambda(C(R)). \]
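The splitting $\lambda \mapsto \lambda^*$ is easy to check on a finite space. The following sketch uses an arbitrary three-point space, small set, and measure (all numbers are our illustrative assumptions) and verifies the defining formulas together with the marginal property:

```python
# Discrete sketch of the measure splitting lambda -> lambda^*.
beta = 0.25
C_R = {1, 2}                      # the small set C(R); state 3 lies outside
lam = {1: 0.5, 2: 0.3, 3: 0.2}    # a probability measure lambda on X = {1,2,3}

def lam_star(A, layer):
    """lambda^*(A x {layer}), following the two defining formulas."""
    in_C = sum(lam[x] for x in A if x in C_R)
    out_C = sum(lam[x] for x in A if x not in C_R)
    if layer == 0:
        return (1 - beta) * in_C + out_C
    return beta * in_C

A = {1, 3}
# Marginal property: lambda^*(A_0 u A_1) = lambda(A).
assert abs(lam_star(A, 0) + lam_star(A, 1) - sum(lam[x] for x in A)) < 1e-12
# The layer-1 copy carries no mass outside C(R) ...
assert lam_star({3}, 1) == 0.0
# ... and mass beta * lambda(C(R)) inside it.
assert abs(lam_star(C_R, 1) - beta * (lam[1] + lam[2])) < 1e-12
```

The marginal property is what later guarantees that projecting the split chain back onto the first coordinate recovers the original chain in law.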
Now we want to split the chain $(X_n)_{n \in \mathbb{N}}$ to form a chain $(\check{X}_n)_{n \in \mathbb{N}}$ with $\check{X}_n = (X_n, \delta_n)$, which lives on $(\check{\mathcal{X}}, \mathcal{B}(\check{\mathcal{X}}))$ and is a time-inhomogeneous Markov chain with transition probabilities given by the split kernels:

\begin{align}
\check{P}(n, x_0, n+1, \cdot) &= P(n, x, n+1, \cdot)^*, && \text{if } x_0 \in \mathcal{X}_0 \setminus C(R)_0; \tag{5.1} \\
\check{P}(n, x_0, n+1, \cdot) &= \frac{P(n, x, n+1, \cdot)^* - \beta \nu^*(\cdot)}{1 - \beta}, && \text{if } x_0 \in C(R)_0; \tag{5.2} \\
\check{P}(n, x_1, n+1, \cdot) &= \nu^*(\cdot), && \text{if } x_1 \in C(R)_1. \tag{5.3}
\end{align}

Outside $C(R)$ the chain $\{\check{X}_n\}$ behaves just like $\{X_n\}$, moving on $\mathcal{X}_0$ of the split space. Each time it arrives in $C(R)$, it is split: with probability $1 - \beta$ it remains in $C(R)_0$, and with probability $\beta$ it drops to $C(R)_1$.

We now introduce the $\sigma$-fields

\begin{align*}
\mathcal{G}_n &:= \sigma(X_0, X_1, \cdots, X_n, \delta_0, \delta_1, \cdots, \delta_{n-1}), \\
\mathcal{F}_n &:= \sigma(X_0, X_1, \cdots, X_n, \delta_0, \delta_1, \cdots, \delta_{n-1}, \delta_n).
\end{align*}

Proposition 5.2. The split chain $(\check{X}_n = (X_n, \delta_n) : n \in \mathbb{Z}_+)$ on the space $\mathcal{X} \times \{0, 1\}$ satisfies the following properties:

\begin{align}
\check{P}\{\delta_n = 1 \mid \mathcal{G}_n; X_n = x\} &= \begin{cases} \beta, & \text{if } x \in C(R), \\ 0, & \text{if } x \notin C(R); \end{cases} \tag{5.4} \\
\check{P}\{X_{n+1} \in dy \mid \mathcal{F}_n; \delta_n = 1\} &= \nu(dy); \tag{5.5} \\
\check{P}\{X_{n+1} \in dy \mid \mathcal{F}_n; X_n = x, \delta_n = 0\} &= \begin{cases} \dfrac{P(n, x, n+1, dy) - \beta \nu(dy)}{1 - \beta}, & \text{if } x \in C(R), \\ P(n, x, n+1, dy), & \text{if } x \notin C(R). \end{cases} \tag{5.6}
\end{align}

Moreover, given that $\delta_n = 1$, the pre-$n$ process $\{X_j, \delta_j : j \le n\}$ and the post-$n$ process $\{X_j, \delta_j : j \ge n+1\}$ are independent.

Proof. We construct the split chain explicitly, exhibiting the underlying randomness. First, we introduce auxiliary i.i.d. sequences $(U^X_n)_{n \ge 0}$ and $(U^\delta_n)_{n \ge 0}$ of Uniform$(0,1)$ random variables, independent of each other. According to Proposition 11.6 of Kallenberg [4], there exist measurable functions

$\varphi^1_n : \mathcal{X} \times (0,1) \to \mathcal{X}$ such that $\varphi^1_n(x, U^X_n) \sim P(n, x, n+1, \cdot)$;

$\varphi^2_n : \mathcal{X} \times (0,1) \to \mathcal{X}$ such that
$\varphi^2_n(x, U^X_n) \sim \dfrac{P(n, x, n+1, \cdot) - \beta \nu(\cdot)}{1 - \beta}$;

$\varphi^3 : (0,1) \to \mathcal{X}$ such that $\varphi^3(U^X_n) \sim \nu$.

Define $\varphi_n : \mathcal{X} \times \{0,1\} \times (0,1) \to \mathcal{X}$ by

\[ \varphi_n(x, i, u) := \begin{cases} \varphi^1_n(x, u), & \text{if } x \notin C(R), \\ \varphi^2_n(x, u), & \text{if } x \in C(R),\ i = 0, \\ \varphi^3(u), & \text{if } x \in C(R),\ i = 1. \end{cases} \]

Let $x_0$ be fixed and construct $(X_n, \delta_n)_{n=0}^\infty$ as follows:

• $X_0 = x_0$ and $\delta_0 = \begin{cases} 0, & \text{if } x_0 \notin C(R), \\ \mathbf{1}\{U^\delta_0 \le \beta\}, & \text{if } x_0 \in C(R); \end{cases}$

• if $(X_n, \delta_n)$ is given, then

\[ X_{n+1} = \varphi_n(X_n, \delta_n, U^X_{n+1}), \qquad \delta_{n+1} = \begin{cases} 0, & \text{if } X_{n+1} \notin C(R), \\ \mathbf{1}\{U^\delta_{n+1} \le \beta\}, & \text{if } X_{n+1} \in C(R). \end{cases} \]

Denote

\begin{align*}
\tilde{\mathcal{G}}_n &:= \sigma(U^X_0, U^X_1, \cdots, U^X_n, U^\delta_0, U^\delta_1, \cdots, U^\delta_{n-1}) \supset \mathcal{G}_n, \\
\tilde{\mathcal{F}}_n &:= \sigma(U^X_0, U^X_1, \cdots, U^X_n, U^\delta_0, U^\delta_1, \cdots, U^\delta_n) \supset \mathcal{F}_n.
\end{align*}

By this construction, we have

\begin{align*}
\check{P}\{\delta_n = 1 \mid \tilde{\mathcal{G}}_n, X_n = x\} &= \check{P}\{\delta_n = 1, X_n \notin C(R) \mid \tilde{\mathcal{G}}_n, X_n = x\} + \check{P}\{\delta_n = 1, X_n \in C(R) \mid \tilde{\mathcal{G}}_n, X_n = x\} \\
&= 0 + \check{P}\{U^\delta_n \le \beta \mid \tilde{\mathcal{G}}_n, X_n = x\}\, \mathbf{1}\{x \in C(R)\} = \beta\, \mathbf{1}\{x \in C(R)\},
\end{align*}

which proves (5.4). Then we obtain

\begin{align*}
\check{P}\{X_{n+1} \in dy \mid \tilde{\mathcal{F}}_n, \delta_n = 1\} &= \check{P}\{\varphi_n(X_n, \delta_n, U^X_{n+1}) \in dy, X_n \in C(R) \mid \tilde{\mathcal{F}}_n, \delta_n = 1\} \\
&= \check{P}\{\varphi^3(U^X_{n+1}) \in dy, X_n \in C(R) \mid \tilde{\mathcal{F}}_n, \delta_n = 1\} \\
&= P\{\varphi^3(U^X_{n+1}) \in dy\} = \nu(dy),
\end{align*}

which proves (5.5). When $x \in C(R)$, we have

\begin{align*}
\check{P}\{X_{n+1} \in dy \mid \tilde{\mathcal{F}}_n, X_n = x, \delta_n = 0\} &= \check{P}\{\varphi_n(X_n, \delta_n, U^X_{n+1}) \in dy, X_n \in C(R) \mid \tilde{\mathcal{F}}_n, X_n = x, \delta_n = 0\} \\
&= \check{P}\{\varphi^2_n(x, U^X_{n+1}) \in dy \mid \tilde{\mathcal{F}}_n, X_n = x, \delta_n = 0\} \\
&= P\{\varphi^2_n(x, U^X_{n+1}) \in dy\} = \frac{P(n, x, n+1, dy) - \beta \nu(dy)}{1 - \beta}.
\end{align*}

Similarly, when $x \notin C(R)$, we have

\[ \check{P}\{X_{n+1} \in dy \mid \tilde{\mathcal{F}}_n, X_n = x, \delta_n = 0\} = P(n, x, n+1, dy), \]

which proves (5.6).
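The properties (5.4)–(5.6) can be checked by direct simulation. The sketch below implements the $(U^X_n, U^\delta_n)$ construction for the two-state chain of Example 3.4, where $C(R) = \{1,2\}$ is the whole space, $\beta = 1/4$ and $\nu = (1/2, 1/2)$ (states are coded as 0 and 1; variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
beta, nu = 0.25, np.array([0.5, 0.5])

def P_row(n, x):
    # Row x of the one-step matrix P_{n,n+1} of Example 3.4 (states coded 0/1).
    a = 1/3 + np.sin(n + 1) / 6
    b = 1/4 + np.cos(n + 1) / 8
    return np.array([[1 - a, a], [b, 1 - b]])[x]

N = 200_000
X = np.zeros(N + 1, dtype=int)
delta = np.zeros(N + 1, dtype=int)
delta[0] = rng.random() < beta
for n in range(N):
    if delta[n] == 1:
        row = nu                                          # regeneration draw, cf. (5.5)
    else:
        row = (P_row(n, X[n]) - beta * nu) / (1 - beta)   # residual kernel, cf. (5.6)
    X[n + 1] = rng.random() < row[1]
    delta[n + 1] = rng.random() < beta                    # Bernoulli(beta) coin, cf. (5.4)

# (5.4): the bell delta_n rings with frequency ~ beta (here C(R) is everything).
assert abs(delta.mean() - beta) < 0.01
# (5.5): given delta_n = 1, the next state is drawn from nu = (1/2, 1/2).
freq_state2 = X[1:][delta[:-1] == 1].mean()
assert abs(freq_state2 - 0.5) < 0.01
```

The residual kernel is a genuine probability distribution here because every entry of $P_{n,n+1}$ is at least $1/8 = \beta \nu(j)$, which is exactly what Doeblin's Condition guarantees.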
We now prove that $(\check{X}_n)_{n \in \mathbb{N}} = (X_n, \delta_n)_{n \in \mathbb{N}}$ is a time-inhomogeneous Markov chain with transition probability $\check{P}$ given in (5.1)–(5.3); that is,

\[ \check{P}\{X_{n+1} \in B, \delta_{n+1} = j \mid \tilde{\mathcal{F}}_n, X_n = x, \delta_n = i\} = \check{P}(n, (x,i), n+1, B \times \{j\}). \]

When $i = 1$ and $j = 1$, we have

\begin{align*}
\check{P}\{X_{n+1} \in B, \delta_{n+1} = 1 \mid \tilde{\mathcal{F}}_n, X_n = x, \delta_n = 1\} &= \check{P}\{X_{n+1} \in B \cap C(R), U^\delta_{n+1} \le \beta \mid \tilde{\mathcal{F}}_n, X_n = x, \delta_n = 1\} \\
&= \beta\, \check{P}\{\varphi^3(U^X_{n+1}) \in B \cap C(R)\} \\
&= \beta \nu(B \cap C(R)) = \nu^*(B_1) = \check{P}(n, x_1, n+1, B_1),
\end{align*}

and when $i = 1$ and $j = 0$, we have

\begin{align*}
\check{P}\{X_{n+1} \in B, \delta_{n+1} = 0 \mid \tilde{\mathcal{F}}_n, X_n = x, \delta_n = 1\} &= \check{P}\{X_{n+1} \in B \cap C(R), U^\delta_{n+1} > \beta \mid \tilde{\mathcal{F}}_n, X_n = x, \delta_n = 1\} \\
&\quad + \check{P}\{X_{n+1} \in B \cap C(R)^c \mid \tilde{\mathcal{F}}_n, X_n = x, \delta_n = 1\} \\
&= (1 - \beta)\, \check{P}\{\varphi^3(U^X_{n+1}) \in B \cap C(R)\} + \check{P}\{\varphi^3(U^X_{n+1}) \in B \cap C(R)^c\} \\
&= (1 - \beta) \nu(B \cap C(R)) + \nu(B \cap C(R)^c) = \nu^*(B_0) = \check{P}(n, x_1, n+1, B_0).
\end{align*}

When $X_n = x \in C(R)$, $i = 0$ and $j = 0$, we have

\begin{align*}
\check{P}\{X_{n+1} \in B, \delta_{n+1} = 0 \mid \tilde{\mathcal{F}}_n, X_n = x, \delta_n = 0\} &= (1 - \beta)\, \check{P}\{\varphi^2_n(x, U^X_{n+1}) \in B \cap C(R)\} + \check{P}\{\varphi^2_n(x, U^X_{n+1}) \in B \cap C(R)^c\} \\
&= \frac{1}{1 - \beta} \big\{ (1 - \beta) P(n, x, n+1, B \cap C(R)) + P(n, x, n+1, B \cap C(R)^c) \\
&\qquad\qquad - \beta \left[ (1 - \beta) \nu(B \cap C(R)) + \nu(B \cap C(R)^c) \right] \big\} \\
&= \check{P}(n, x_0, n+1, B_0),
\end{align*}

and when $X_n = x \in C(R)$, $i = 0$ and $j = 1$, we have

\begin{align*}
\check{P}\{X_{n+1} \in B, \delta_{n+1} = 1 \mid \tilde{\mathcal{F}}_n, X_n = x, \delta_n = 0\} &= \check{P}\{X_{n+1} \in B \cap C(R), U^\delta_{n+1} \le \beta \mid \tilde{\mathcal{F}}_n, X_n = x, \delta_n = 0\} \\
&= \beta\, \check{P}\{\varphi^2_n(x, U^X_{n+1}) \in B \cap C(R)\} = \check{P}(n, x_0, n+1, B_1).
\end{align*}
Finally, when $X_n = x \in C(R)^c$ and $j = 1$, we have

\begin{align*}
\check{P}\{X_{n+1} \in B, \delta_{n+1} = 1 \mid \tilde{\mathcal{F}}_n, X_n = x, \delta_n = 0\} &= \check{P}\{X_{n+1} \in B \cap C(R), U^\delta_{n+1} \le \beta \mid \tilde{\mathcal{F}}_n, X_n = x, \delta_n = 0\} \\
&= \beta\, \check{P}\{\varphi^1_n(x, U^X_{n+1}) \in B \cap C(R)\} = \check{P}(n, x_0, n+1, B_1),
\end{align*}

and when $X_n = x \in C(R)^c$ and $j = 0$, we have

\begin{align*}
\check{P}\{X_{n+1} \in B, \delta_{n+1} = 0 \mid \tilde{\mathcal{F}}_n, X_n = x, \delta_n = 0\} &= \check{P}\{X_{n+1} \in B \cap C(R), U^\delta_{n+1} > \beta \mid \tilde{\mathcal{F}}_n, X_n = x, \delta_n = 0\} \\
&\quad + \check{P}\{X_{n+1} \in B \cap C(R)^c \mid \tilde{\mathcal{F}}_n, X_n = x, \delta_n = 0\} \\
&= (1 - \beta)\, \check{P}\{\varphi^1_n(x, U^X_{n+1}) \in B \cap C(R)\} + \check{P}\{\varphi^1_n(x, U^X_{n+1}) \in B \cap C(R)^c\} \\
&= \check{P}(n, x_0, n+1, B_0).
\end{align*}

By the above arguments, we obtain

\[ \check{P}\{\check{X}_{n+1} \in \cdot \mid \mathcal{F}_n, \delta_n = 1\} = \nu^*(\cdot), \]

which implies the independence between the pre-$n$ and the post-$n$ processes given $\delta_n = 1$.

Let us prove that the time-inhomogeneous Markov chain $X$ given in (2.1) is identical in law to the marginal chain of the split chain.

Proposition 5.3. The chain $(X_n)_{n \in \mathbb{Z}_+}$ is identical in law to the marginal chain $(X_n)_{n \in \mathbb{Z}_+}$ of the split chain $(\check{X}_n)_{n \in \mathbb{Z}_+}$: that is, for any initial distribution $\lambda$ on $\mathcal{B}(\mathcal{X})$, any $n \in \mathbb{N}$ and any $E_1, \cdots, E_n \in \mathcal{B}(\mathcal{X})$,

\[ \check{P}_{\lambda^*}(X_1 \in E_1, \cdots, X_n \in E_n) = P_\lambda(X_1 \in E_1, \cdots, X_n \in E_n). \tag{5.7} \]

Proof. First we let $n = 1$ and consider the case of the Dirac point mass $\lambda = \epsilon_x$. When $x \in C(R)^c$, we have, by (5.1),

\[ \int_{\check{\mathcal{X}}} \epsilon_x^*(dy_i)\, \check{P}(0, y_i, 1, A_0 \cup A_1) = \check{P}(0, x_0, 1, A_0 \cup A_1) = P(0, x, 1, \cdot)^*(A_0 \cup A_1) = P(0, x, 1, A). \]

On the other hand, when $x \in C(R)$, we have, from (5.2) and (5.3),

\begin{align*}
\int_{\check{\mathcal{X}}} \epsilon_x^*(dy_i)\, \check{P}(0, y_i, 1, A_0 \cup A_1) &= \beta\, \check{P}(0, x_1, 1, A_0 \cup A_1) + (1 - \beta)\, \check{P}(0, x_0, 1, A_0 \cup A_1) \\
&= \beta \nu(A) + (1 - \beta)\, \frac{P(0, x, 1, \cdot)^*(A_0 \cup A_1) - \beta \nu^*(A_0 \cup A_1)}{1 - \beta} = P(0, x, 1, A).
\end{align*}
Thus we have, for any initial distribution $\lambda$,

\[ \int_{\mathcal{X}} \lambda(dx)\, P(0, x, 1, A) = \int_{\check{\mathcal{X}}} \lambda^*(dy_i)\, \check{P}(0, y_i, 1, A_0 \cup A_1), \]

which implies $P_\lambda(X_1 \in E_1) = \check{P}_{\lambda^*}(X_1 \in E_1)$ for any $E_1 \in \mathcal{B}(\mathcal{X})$.

Suppose we have (5.7) for $n$ and $E_1, \cdots, E_n \in \mathcal{B}(\mathcal{X})$. Let $f : \mathcal{X} \to \mathbb{R}$ be any bounded measurable function. By the tower property we have

\[ \check{E}_{\lambda^*}[f(X_{n+1}) \mid X_1, \cdots, X_n] = \check{E}_{\lambda^*}[\check{E}_{\lambda^*}[f(X_{n+1}) \mid \mathcal{G}_n] \mid X_1, \cdots, X_n]. \]

On $\{X_n \in C(R)\}$, by (5.5) and (5.6) we have

\begin{align*}
\check{E}_{\lambda^*}[f(X_{n+1}) \mid \mathcal{G}_n; \delta_n = 1] &= \int_{\mathcal{X}} f(y)\, \nu(dy), \\
\check{E}_{\lambda^*}[f(X_{n+1}) \mid \mathcal{G}_n; \delta_n = 0] &= \int_{\mathcal{X}} f(y)\, \frac{P(n, X_n, n+1, dy) - \beta \nu(dy)}{1 - \beta}.
\end{align*}

Hence, by (5.4),

\begin{align*}
\check{E}_{\lambda^*}[f(X_{n+1}) \mid \mathcal{G}_n] &= \beta\, \check{E}_{\lambda^*}[f(X_{n+1}) \mid \mathcal{G}_n; \delta_n = 1] + (1 - \beta)\, \check{E}_{\lambda^*}[f(X_{n+1}) \mid \mathcal{G}_n; \delta_n = 0] \\
&= \int_{\mathcal{X}} P(n, X_n, n+1, dy)\, f(y).
\end{align*}

On $\{X_n \notin C(R)\}$, according to (5.4) and (5.6) we have

\[ \check{E}_{\lambda^*}[f(X_{n+1}) \mid \mathcal{G}_n] = \int_{\mathcal{X}} P(n, X_n, n+1, dy)\, f(y). \]

Combining the above, we have, on the whole sample space,

\[ \check{E}_{\lambda^*}[f(X_{n+1}) \mid \mathcal{G}_n] = \int_{\mathcal{X}} P(n, X_n, n+1, dy)\, f(y), \]

which implies

\[ \check{E}_{\lambda^*}[f(X_{n+1}) \mid X_1, \cdots, X_n] = E_\lambda[f(X_{n+1}) \mid X_1, \cdots, X_n]. \]

Thus we obtain (5.7) for $n + 1$ and $E_1, \cdots, E_{n+1} \in \mathcal{B}(\mathcal{X})$.

5.2 Return to the small set with geometric tail probability

We denote

\[ K_n(x, \cdot) := P(n-1, x, n, \cdot). \]

Then Doeblin's Condition becomes $K_n(x, \cdot) \ge \beta \nu(\cdot)$ for all $n \in \mathbb{Z}_+$ and $x \in C(R)$. By the Drift Condition, we have

\[ K_n V(x) = \int_{\mathcal{X}} P(n-1, x, n, dy)\, V(y) \le \gamma V(x) + C \le \gamma V(x) + C', \]

where we denote $C' := \frac{C}{1-\gamma}$. We will see that the Drift Condition yields a uniform return to $C(R)$ with geometric tails.

Lemma 5.4. Suppose Assumption 2.2 and Assumption 3.2 hold. Recall that $R > \frac{C'}{1-\gamma}$, so that

\[ \rho := \gamma + \frac{C'}{R} < 1. \]
Then for every starting time $s$ and every initial state $x$ (i.e., $X_s = x$), we have

\[ P_{s,x}(\tau > n) \le \frac{V(x)}{R} \rho^n, \quad \forall n \in \mathbb{Z}_+, \]

where $\tau := \inf\{k \in \mathbb{N} : X_{s+k} \in C(R)\}$. In particular, $P_{s,x}(\tau = \infty) = 0$.

Proof. When $y \notin C(R)$, we have $V(y) > R$. Then for any $t$, we have

\[ K_t V(y) \le \gamma V(y) + C' = \left( \gamma + \frac{C'}{V(y)} \right) V(y) \le \left( \gamma + \frac{C'}{R} \right) V(y) = \rho V(y). \]

For any $n \ge 0$, on $\{\tau > n\}$ we have $X_{s+n} \notin C(R)$, which implies $V(X_{s+n}) > R$. Therefore,

\[ R\, \mathbf{1}\{\tau > n\} \le V(X_{s+n})\, \mathbf{1}\{\tau > n\}. \]

Taking expectations, we obtain

\[ R\, P_{s,x}(\tau > n) \le E_{s,x}[V(X_{s+n})\, \mathbf{1}\{\tau > n\}]. \]

For $n = 0$, we have

\[ E_{s,x}[V(X_s)\, \mathbf{1}\{\tau > 0\}] = V(x)\, \mathbf{1}\{x \notin C(R)\} \le V(x). \]

Suppose for some $n \ge 0$ we have $E_{s,x}[V(X_{s+n})\, \mathbf{1}\{\tau > n\}] \le \rho^n V(x)$. Then, using the tower property and the fact that

\[ \mathbf{1}\{\tau > n+1\} = \mathbf{1}\{\tau > n\}\, \mathbf{1}\{X_{s+n+1} \notin C(R)\} \le \mathbf{1}\{\tau > n\}, \]

we obtain

\begin{align*}
E_{s,x}[V(X_{s+n+1})\, \mathbf{1}\{\tau > n+1\}] &= E_{s,x}[\mathbf{1}\{\tau > n\}\, E_{s,x}[V(X_{s+n+1})\, \mathbf{1}\{X_{s+n+1} \notin C(R)\} \mid \mathcal{F}^s_n]] \\
&\le E_{s,x}[\mathbf{1}\{\tau > n\}\, E_{s,x}[V(X_{s+n+1}) \mid \mathcal{F}^s_n]] \\
&= E_{s,x}[\mathbf{1}\{\tau > n\}\, K_{s+n+1} V(X_{s+n})] \\
&\le E_{s,x}[\mathbf{1}\{\tau > n\}\, \rho V(X_{s+n})] \\
&= \rho\, E_{s,x}[V(X_{s+n})\, \mathbf{1}\{\tau > n\}] \le \rho \cdot \rho^n V(x) = \rho^{n+1} V(x),
\end{align*}

where $\mathcal{F}^s_n := \sigma(X_s, X_{s+1}, \cdots, X_{s+n})$, and the second inequality uses $X_{s+n} \notin C(R)$ on $\{\tau > n\}$. Thus for all $n \ge 0$ we have $E_{s,x}[V(X_{s+n})\, \mathbf{1}\{\tau > n\}] \le \rho^n V(x)$. Finally, we obtain

\[ P_{s,x}(\tau > n) \le \frac{1}{R} E_{s,x}[V(X_{s+n})\, \mathbf{1}\{\tau > n\}] \le \frac{V(x)}{R} \rho^n. \]

Now we lift this exponential tail of the return time to the split chain.

Theorem 5.5. Suppose Assumption 2.2 and Assumption 3.2 hold. Then for every starting time $s$ and every initial state $x_i$ (i.e., $\check{X}_s = x_i$) with $V(x) < \infty$, there exist constants $K > 0$ and $0 < \zeta < 1$ such that

\[ \check{P}_{s,x_i}(\check{\tau} > n) \le K \zeta^n \quad \text{for all } n \in \mathbb{Z}_+, \]

where $\check{\tau} := \inf\{k \in \mathbb{Z}_+ : \check{X}_{s+k} \in C(R) \times \{1\}\}$. In particular, $\check{P}_{s,x_i}(\check{\tau} = \infty) = 0$.

Proof.
Let $\sigma_0 := \tau$ and

\[ \sigma_{k+1} := \inf\{t > \sigma_k : X_{s+t} \in C(R)\}, \quad k \ge 0, \]

denote the successive entrance times of the original chain into $C(R)$. By Lemma 5.4 and the strong Markov property, there exist constants $C_1 < \infty$ and $\rho \in (0,1)$, not depending on $k$, such that

\[ \sup_{s,x} P_{s,x}\left( \sigma_{k+1} - \sigma_k > n \mid \mathcal{F}^s_{\sigma_k} \right) \le C_1 \rho^n, \quad n \ge 0. \]

Hence there exist $\theta_0 > 0$ and $M < \infty$ such that

\[ \sup_{s,x} E_{s,x}\left[ e^{\theta_0 (\sigma_{k+1} - \sigma_k)} \,\middle|\, \mathcal{F}^s_{\sigma_k} \right] \le M. \tag{5.8} \]

At each visit time $\sigma_k$, the split construction tosses an independent Bernoulli($\beta$) coin (conditionally on the past): with probability $\beta$ the chain regenerates at layer 1, otherwise it stays in layer 0. Let

\[ G := \inf\{k \ge 0 : \delta_{s+\sigma_k} = 1\}, \]

so that $\check{\tau} = \sigma_G$. Given the past, $G$ is geometric($\beta$) and independent of the inter-visit increments. Conditioning on $G$ and using (5.8), for $0 < \theta \le \theta_0$,

\begin{align*}
\check{E}_{s,x_i}[e^{\theta \check{\tau}}] &= \check{E}_{s,x_i}\left[ e^{\theta \sigma_0} \prod_{j=0}^{G-1} e^{\theta(\sigma_{j+1} - \sigma_j)} \right] = \sum_{k=0}^{\infty} \check{E}_{s,x_i}\left[ e^{\theta \sigma_0} \left( \prod_{j=0}^{k-1} e^{\theta(\sigma_{j+1} - \sigma_j)} \right) \mathbf{1}\{G = k\} \right] \\
&= \sum_{k=0}^{\infty} \check{E}_{s,x_i}\left[ e^{\theta \sigma_0} W_{k-1} \right] \cdot \beta, \qquad \text{where } W_k := \prod_{j=0}^{k} e^{\theta(\sigma_{j+1} - \sigma_j)}\, \mathbf{1}\{\delta_{s+\sigma_j} = 0\}, \\
&= \sum_{k=0}^{\infty} \check{E}_{s,x_i}\left[ e^{\theta \sigma_0} W_{k-2} \cdot \check{E}_{s,x_i}\left[ e^{\theta(\sigma_k - \sigma_{k-1})} \,\middle|\, \mathcal{F}^s_{\sigma_{k-1}} \right] \right] (1 - \beta)\, \beta \\
&\le \sum_{k=0}^{\infty} \check{E}_{s,x_i}\left[ e^{\theta \sigma_0} W_{k-2} \right] \cdot M (1 - \beta)\, \beta \le \cdots \le \sum_{k=0}^{\infty} \check{E}_{s,x_i}[e^{\theta \sigma_0}]\, M^k (1 - \beta)^k \beta.
\end{align*}

From Lemma 5.4, $\sigma_0$ has a geometric tail when $V(x) < \infty$. Since $M = M(\theta)$ is continuous at $\theta = 0$ with $M(0) = 1$, we can choose $0 < \theta \le \theta_0$ so small that $(1 - \beta) M < 1$ and $\check{E}_{s,x_i}[e^{\theta \sigma_0}] < \infty$. Then the geometric series converges and

\[ \check{E}_{s,x_i}[e^{\theta \check{\tau}}] \le \frac{\check{E}_{s,x_i}[e^{\theta \sigma_0}]\, \beta}{1 - (1 - \beta) M} < \infty. \]

Thus, by Markov's inequality, for all $n \ge 0$,

\[ \check{P}_{s,x_i}(\check{\tau} > n) \le \check{E}_{s,x_i}[e^{\theta \check{\tau}}]\, e^{-\theta n} \le \frac{\check{E}_{s,x_i}[e^{\theta \sigma_0}]\, \beta}{1 - (1 - \beta) M}\, e^{-\theta n} =: K \zeta^n, \qquad \zeta := e^{-\theta} \in (0,1). \]

Letting $n \to \infty$, we have $\check{P}_{s,x_i}(\check{\tau} = \infty) = 0$.
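The geometric tail of $\check{\tau}$ is easy to see by Monte Carlo in the setting of Example 3.4. There the small set is the whole space, so every $\sigma_{k+1} - \sigma_k = 1$ and $\check{\tau}$ reduces to the first success of independent Bernoulli($\beta$) coins; the bound $K\zeta^n$ then holds with $K = 1$ and $\zeta = 1 - \beta$ (this reduction is our illustration, not the general case):

```python
import numpy as np

rng = np.random.default_rng(1)
beta = 0.25
samples = 20_000

def regeneration_time():
    # First k >= 0 with delta_{s+k} = 1; each coin is independent Bernoulli(beta).
    k = 0
    while rng.random() >= beta:
        k += 1
    return k

taus = np.array([regeneration_time() for _ in range(samples)])
n = 5
empirical_tail = (taus > n).mean()
exact_tail = (1 - beta) ** (n + 1)        # P(tau > n) for this geometric time
assert abs(empirical_tail - exact_tail) < 0.01
assert empirical_tail <= (1 - beta) ** n  # the bound K * zeta^n with K = 1
```

In the general case the inter-visit times $\sigma_{k+1} - \sigma_k$ are random, and the exponential-moment argument above is exactly what replaces this direct computation.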
5.3 Proof of the SLLN

We let $\tau_0$ denote the first entrance time of the split chain into the set $C(R) \times \{1\}$, and $\tau_k$ the $k$-th entrance time into $C(R) \times \{1\}$ subsequent to $\tau_0$. These random variables are defined inductively as

\[ \tau_0 = \min\{n \ge 0 : \delta_n = 1\}, \quad \tau_k = \min\{n > \tau_{k-1} : \delta_n = 1\}, \quad k \ge 1. \]

With all the arguments so far, we obtain, almost surely, a sequence of independent, but not identically distributed, regeneration cycles $(C_l)_{l \in \mathbb{Z}_+}$, where

\[ C_l := \{\tau_l + 1, \tau_l + 2, \cdots, \tau_{l+1}\}. \]

For each $k$ define

\[ D^0_k(g) := \sum_{j=\tau_k+1}^{\tau_{k+1}} g(X_j), \]

where $g : \mathcal{X} \to \mathbb{R}$ is a bounded function. From Proposition 5.2, every $X_{\tau_k+1}$ is regenerated by drawing from $\nu$ and so is independent of the past; hence $(D^0_k)_{k \in \mathbb{Z}_+}$ is an independent random process. Let $N(n)$ be the number of regenerations up to time $n$:

\[ N(n) := \max\{k : \tau_k \le n\}. \]

Then we have

\[ n = \tau_0 + \sum_{l=0}^{N(n)-1} L_l + r(n), \]

where $L_l := \tau_{l+1} - \tau_l$ is the length of the regeneration cycle $C_l$ and $r(n)$ is the remainder term. From Theorem 5.5, we have the following result.

Corollary 5.6. For any initial state $x_i$ with $V(x) < \infty$, we have $N(n) \uparrow \infty$ almost surely.

Proof. By Theorem 5.5, we have $\check{P}_{0,x_i}(\tau_0 < \infty) = 1$. Thus we may take $\tau_0$ as the initial time by the strong Markov property and modify $N(n) = \max\{k : \tau_k \le \tau_0 + n\}$ to prove the original result. For any $M \in \mathbb{N}$, we have

\begin{align*}
\check{P}_{\tau_0}\{N(n) \le M\} = \check{P}_{\tau_0}\{\tau_M \ge n\} &\le \check{P}_{\tau_0}\left\{ \exists\, 1 \le l \le M \text{ s.t. } \tau_l - \tau_{l-1} \ge \frac{n}{M} \right\} \\
&\le M\, \check{P}_{\nu^*}\left\{ \check{\tau} > \left\lfloor \frac{n}{M} \right\rfloor \right\} \le M K \zeta^{\lfloor n/M \rfloor} \xrightarrow[n \to \infty]{} 0.
\end{align*}

Thus the proof is complete.

We will see that the cycle lengths have uniformly bounded second moments.

Lemma 5.7. Under the conditions of Theorem 5.5, we have

\[ \sup_{l \in \mathbb{Z}_+} \check{E}[L_l^2] < \infty \quad \text{and} \quad \check{E}[\tau_0^2] < \infty. \]

Proof.
Since $\check{P}(L_l > m) = \check{P}_{\nu^*}(\check{\tau} > m) \le K \zeta^m$, we have

\[ \check{E}[L_l^2] = \sum_{k=0}^{\infty} \check{P}\{L_l^2 > k\} = \sum_{m=0}^{\infty} (2m + 1)\, \check{P}\{L_l > m\} \le \sum_{m=0}^{\infty} (2m + 1) K \zeta^m < \infty, \]

which implies $\sup_{l \in \mathbb{Z}_+} \check{E}[L_l^2] < \infty$. By a similar argument, we have $\check{E}[\tau_0^2] < \infty$.

The LLN for independent but not identically distributed random variables plays a key role; see, e.g., Corollary 5.22 of Kallenberg [4].

Lemma 5.8. Assume that $X_1, X_2, \cdots$ are independent with means $\mu_1, \mu_2, \cdots$ and variances $\sigma_1^2, \sigma_2^2, \cdots$ satisfying Kolmogorov's criterion

\[ \sum_{k=1}^{\infty} \frac{\sigma_k^2}{k^2} < \infty. \]

Then

\[ \frac{X_1 + \cdots + X_n - (\mu_1 + \cdots + \mu_n)}{n} \xrightarrow{\text{a.s.}} 0. \]

We now obtain the SLLN in the equilibrium case.

Theorem 5.9. Suppose Assumptions 2.2 and 3.2 hold. Let $g : \mathcal{X} \to \mathbb{R}$ be a bounded measurable function. Then, for the time-inhomogeneous Markov chain $(X_n)_{n \in \mathbb{Z}_+}$ with $X_0 \sim \mu_0$, we have

\[ \frac{1}{n} \sum_{k=0}^{n-1} g(X_k) - \frac{1}{n} \sum_{k=0}^{n-1} \mu_k(g) \xrightarrow[n \to \infty]{\text{a.s.}} 0. \]

Proof. We denote

\[ S_n = \sum_{i=0}^{n} g(X_i) - \sum_{i=0}^{n} \mu_i(g) = \sum_{i=0}^{\tau_0} (g(X_i) - \mu_i(g)) + \sum_{l=0}^{N(n)-1} D_l + R(n), \]

where

\[ D_l := D^0_l - \sum_{j=\tau_l+1}^{\tau_{l+1}} \mu_j(g) \]

and $R(n)$ is the remainder term. Then we have

\[ \frac{S_n}{n+1} = \frac{\displaystyle \frac{1}{N(n)} \sum_{i=0}^{\tau_0} (g(X_i) - \mu_i(g)) + \frac{1}{N(n)} \sum_{l=0}^{N(n)-1} D_l + \frac{R(n)}{N(n)}}{\displaystyle \frac{n+1}{N(n)}}. \]

From Lemma 5.7 and the boundedness of $g$, we have

\[ \sup_{l \in \mathbb{Z}_+} \check{E}[D_l^2] \le 4 \|g\|_\infty^2 \sup_{l \in \mathbb{Z}_+} \check{E}[L_l^2] < \infty, \]

which implies $\sup_{l \in \mathbb{Z}_+} \mathrm{Var}(D_l) < \infty$. Thus $(D_l)_{l \in \mathbb{Z}_+}$ satisfies Kolmogorov's criterion and, by Lemma 5.8, we obtain

\[ \frac{1}{N(n)} \sum_{l=0}^{N(n)-1} D_l \xrightarrow{\text{a.s.}} 0. \]

We claim that $\frac{R(n)}{N(n)} \xrightarrow{\text{a.s.}} 0$. For any $\varepsilon > 0$, by Chebyshev's inequality we have

\[ \check{P}(L_l > \varepsilon l) \le \frac{\sup_{l \in \mathbb{Z}_+} \check{E}[L_l^2]}{\varepsilon^2 l^2}, \]

which implies

\[ \sum_{l=0}^{\infty} \check{P}(L_l > \varepsilon l) < \infty. \]

By the Borel–Cantelli lemma, we have

\[ \frac{L_l}{l} \xrightarrow{\text{a.s.}} 0. \]
Since $r(n) \le L_{N(n)}$, we have

\[ \frac{r(n)}{N(n)} \le \frac{L_{N(n)}}{N(n)} \xrightarrow{\text{a.s.}} 0, \]

which implies

\[ \frac{|R(n)|}{N(n)} \le 2\|g\|_\infty\, \frac{r(n)}{N(n)} \xrightarrow{\text{a.s.}} 0. \]

Since $\check{P}(\tau_0 = \infty) = 0$ by Lemma 5.7, we obtain

\[ \limsup_{n \to \infty} \left| \frac{S_n}{n+1} \right| \le \lim_{n \to \infty} \frac{1}{N(n)} \left| \sum_{i=0}^{\tau_0} (g(X_i) - \mu_i(g)) \right| + \lim_{n \to \infty} \frac{1}{N(n)} \left| \sum_{j=0}^{N(n)-1} D_j \right| + \lim_{n \to \infty} \frac{|R(n)|}{N(n)} = 0. \]

The proof is complete.

We now proceed to the proof of Theorem 3.3 as a consequence of Theorem 5.9, using a coupling method based on the exponential ergodicity.

Proof of Theorem 3.3. From the exponential ergodicity of Theorem 2.4 and by Goldstein's theorem (see Theorem 14.10 of Lindvall [6]), there exists a maximal coupling of two chains,

\[ X^{(x)} \text{ with } X^{(x)}_0 = x, \qquad X^{(\mu)} \text{ with } X^{(\mu)}_0 \sim \mu_0, \]

such that the coupling time

\[ T := \inf\{N \in \mathbb{Z}_+ : X^{(x)}_n = X^{(\mu)}_n \text{ for all } n \ge N\} \]

is almost surely finite. Consequently, for any bounded function $g : \mathcal{X} \to \mathbb{R}$, we have

\[ \left| \frac{1}{n} \sum_{k=0}^{n-1} \left( g(X^{(x)}_k) - g(X^{(\mu)}_k) \right) \right| \xrightarrow[n \to \infty]{\text{a.s.}} 0, \]

and combining this with the SLLN under $X_0 \sim \mu_0$ (Theorem 5.9), we have

\[ \frac{1}{n} \sum_{k=0}^{n-1} g(X_k) - \frac{1}{n} \sum_{k=0}^{n-1} \mu_k(g) \xrightarrow[n \to \infty]{\text{a.s.}} 0, \]

which completes the proof.

Acknowledgments

A. Lau is grateful to the Graduate School of Science, The University of Osaka, for the scholarship for international students. This research was supported by ISM 2025-ISMCRP-5007. The research of K. Yano was supported by JSPS KAKENHI grant nos. JP24K06781, JP24K00526 and JP21H01002, and by JSPS Open Partnership Joint Research Projects grant no. JPJSBP120249936.

References

[1] V. Bansaye, B. Cloez, and P. Gabriel. Ergodic behavior of non-conservative semigroups via generalized Doeblin's conditions. Acta Appl. Math. 166, 29–72, 2020.
[2] M. Hairer. Ergodic Properties of Markov Processes. Lecture Notes, July 2018.
[3] P. Hall and C. C. Heyde. Martingale Limit Theory and Its Application. Academic Press, New York, 1980.
[4] O. Kallenberg. Foundations of Modern Probability, 3rd ed. Springer, New York, 2021.
[5] J. G. Kemeny and J. L. Snell. Finite Markov Chains. Springer, New York, 1976.
[6] T. Lindvall. Lectures on the Coupling Method. Dover Publications, 2002.
[7] Z. Liu and D. Lu. Ergodicity of inhomogeneous Markov processes under general criteria. Frontiers of Mathematics in China, to appear (2025). DOI: 10.1007/s11464-023-0102-1.
[8] S. P. Meyn and R. L. Tweedie. Markov Chains and Stochastic Stability, 2nd ed. Cambridge University Press, 2009.
[9] E. Nummelin. A splitting technique for Harris recurrent Markov chains. Z. Wahrscheinlichkeitstheorie verw. Gebiete, 43:309–318, 1978.
[10] P.-C. G. Vassiliou. Law of large numbers for non-homogeneous Markov systems. Methodol. Comput. Appl. Probab. 22, 1631–1658, 2020.
