Fast random sampling and small noise analysis for stochastic control models

In this paper, we study a linear control system with a given state feedback law. The system is influenced by rapid random sampling occurring at frequency $\frac{1}{n}$, $n \in \mathbb{N}$, as well as by white noise of small intensity $\varepsilon \in (0, 1]$. We study the behavior of the system as $n \to \infty$ and $\varepsilon \searrow 0$ jointly, and prove that it converges to its ideal deterministic analogue. For the random fluctuations around the analogous deterministic trajectory, we obtain either stochastic differential equations or an ordinary differential equation, depending on the joint behavior of $\varepsilon$ and $n$. Further, we extend this problem to a nonlinear system driven by multiplicative white noise, where the noise intensity is scaled by a small parameter. In this case, we again perform a similar analysis as in the linear case.

Authors: Sarvesh R. Iyer, Vivek Kumar

Sarvesh R. Iyer
Department of Mathematics, Ashoka University, Sonipat, Haryana, India-131029
sarveshiyer@gmail.com

Vivek Kumar*
Department of Mathematics and Statistics, IIT Kanpur, Kanpur, Uttar Pradesh, India-208016
vivekmsc118@gmail.com, vivekkumar@iitk.ac.in

* Corresponding author.
Key Words: Random Sampling, Renewal Theory, Stochastic Differential Equations, Law of Large Numbers, Central Limit Theorems.
Mathematics Subject Classification (2020): 60F17, 60K05, 60F05, 60H10, 60F25.

1 Introduction

In any model based on ODEs, it is important to keep systems stable, safe, and efficient. Control theory provides the tools to do this, and for this reason it has become a very popular subject, with applications in robotics, aircraft, cars, power plants, and many other fields. It helps design systems that work reliably under different conditions; see [5, 24, 31]. In practice, control is not always applied continuously. Most modern controllers are digital, and they compute and update the control only at discrete time instants.
Digital control uses a computer to control a system: it checks the system at fixed time intervals and makes adjustments based on those readings. This differs from continuous control, where the system is monitored and adjusted at all times. For instance, in an inverted pendulum, the computer checks the pendulum's angle and speed every second, decides how much force the motor should apply to keep it upright, and holds that force steady until the next check. Between two updates, the control input is kept fixed. This is known as the sample-and-hold method, ensuring the pendulum stays balanced (see [24, Chapter 1]). There are several applications of sample-and-hold control; for examples, see [5, 24, 29] and references therein. In many applications, such as seismic data acquisition or networked control, sampling is influenced by noise, delays, and scheduling uncertainties, making it inherently random rather than strictly periodic. This motivates modeling the sampling period as a random variable with a given probability distribution, which allows for more realistic analysis and reconstruction. For instance, in the case of an inverted pendulum controlled by a computer, the controller ideally samples the pendulum's position at fixed intervals to keep it upright. In practice, however, network delays, sensor glitches, or energy-saving strategies can cause measurements to arrive irregularly, so the controller holds the input over random time lengths while the plant evolves open-loop. Even with a high average sampling rate, these fluctuations affect performance and stability, highlighting the need to study random sampling. The study of control systems with random sampling has a long and interesting history [18, 20, 21]. Kalman [18] was the first to explore this idea, in the 1950s. Later, Leneman [21] and also Kushner and Tobias [20] extended his work during the 1960s.
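The sample-and-hold loop with random sampling intervals described above can be sketched in a few lines of code. The linearized pendulum model, the feedback gains, and the exponential gap distribution below are illustrative assumptions, not taken from the paper or its references:

```python
import random

# Linearized inverted pendulum: theta'' = a*theta + b*u.
# The coefficients and the gains u = -(k1*theta + k2*omega) are
# illustrative choices; the closed loop they produce is stable.
a, b = 9.8, 1.0
k1, k2 = 20.0, 6.0

def simulate(T=5.0, mean_gap=0.01, dt=1e-4, seed=0):
    """Sample-and-hold control: u is recomputed only at random sampling
    instants (exponential gaps) and held constant in between."""
    rng = random.Random(seed)
    theta, omega = 0.2, 0.0          # initial tilt of 0.2 rad
    t, next_sample, u = 0.0, 0.0, 0.0
    while t < T:
        if t >= next_sample:         # sampling instant: measure, update u
            u = -(k1 * theta + k2 * omega)
            next_sample = t + rng.expovariate(1.0 / mean_gap)
        # integrate the plant with the held input (explicit Euler)
        theta += dt * omega
        omega += dt * (a * theta + b * u)
        t += dt
    return theta, omega

theta_T, omega_T = simulate()
print(abs(theta_T))   # small: the held control keeps the pendulum upright
```

With a short mean gap the held control closely tracks the continuous feedback law; increasing `mean_gap` lets the open-loop instability act for longer between updates and degrades the regulation.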
A helpful review can be found in [26], which gives a clear overview of these developments. For more studies and recent advances on random sampling in control systems, see [7, 14, 22, 23, 25–27, 29] and references therein. Further, real systems are often affected by small external effects and uncertainties. To describe these effects in dynamical systems, stochastic models of differential equations are used. In continuous time, this leads to stochastic differential equations (SDEs), where a small random noise term represents these external influences in the system [6, 8, 9, 11, 33]. In this article, we study a framework with both random sampling mechanisms and small external stochastic effects. The important question in such situations is whether such sampled systems keep the important properties of their continuous deterministic counterparts. For instance, does stability remain intact? Does performance degrade? These issues are central in the study of computer-controlled and networked control systems. There has been limited research on the interaction between sampling and noise [6–9]. The present work aims to address these questions for a given dynamical problem. As mentioned earlier, several works on sample-and-hold methods are available in the literature (see [5–9, 11, 14, 18, 20–27, 31]). For broader survey discussions, we refer the reader to [26, 29, 32]. However, in the present paper, we focus only on the studies that are most closely related to our work. In particular, Pahlajani and Dhama have recently obtained several noteworthy results on sampled-data and hold control systems under small noise perturbations [6, 8, 9], as well as on systems with random sampling [7].
In their works [6, 8, 9], both authors mainly consider a controlled dynamical system governed by the deterministic ODE $\dot{y}_t = c(y_t, u_t)$, with initial condition $y_0 \in \mathbb{R}^d$, where $c$ is the drift and $u_t$ is the control. When the state is continuously observed, a feedback law $u_t = \kappa(y_t)$ is applied, leading to the closed-loop system $\dot{y}_t = \bar{c}(y_t, y_t)$, where $\bar{c}(x, z) = c(x, \kappa(z))$, which describes the ideal deterministic behavior. To model digital controllers, the control is updated only at discrete times and held constant between updates, resulting in the sample-and-hold system $\dot{y}^{\delta}_t = \bar{c}\big(y^{\delta}_t, y^{\delta}_{\delta \lfloor t/\delta \rfloor}\big)$ with sampling period $\delta > 0$. Finally, small external noise perturbations are incorporated, yielding $\dot{y}^{\varepsilon,\delta}_t = \bar{c}\big(y^{\varepsilon,\delta}_t, y^{\varepsilon,\delta}_{\delta \lfloor t/\delta \rfloor}\big) + \varepsilon \dot{F}_t$, where $\varepsilon > 0$ measures the noise intensity and $\dot{F}_t$ is a general stochastic forcing. Since their approaches and analytical settings are closely connected to ours, we review these works individually to highlight their main contributions. Subsequently, in the next subsection, we clarify how our results differ from and extend their findings. Pahlajani and Dhama first studied linear systems in this direction. In their first work [8], they considered a linear system given by
$$dX^{\varepsilon,\delta}_t = \big(A X^{\varepsilon,\delta}_t + B U^{\varepsilon,\delta}_k\big)\, dt + \varepsilon\, dW_t, \qquad U^{\varepsilon,\delta}_k = -K\big(X^{\varepsilon,\delta}_{k\delta-} + \varepsilon V_{k\delta}\big),$$
for $t \in [k\delta, (k+1)\delta)$, where $W_t$ and $V_{k\delta}$ are independent Brownian motions representing system and measurement noise. For positive integers $m$ and $d$, $A \in \mathbb{R}^{d\times d}$, $B \in \mathbb{R}^{d\times m}$, and $K \in \mathbb{R}^{m\times d}$ are constant matrices. The main objective of [8] was to characterize the limiting behavior of $X^{\varepsilon,\delta}_t$ as $\varepsilon, \delta \to 0$, both in terms of the mean dynamics, which follow a deterministic ODE, and the fluctuations, which are governed by SDEs.
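As a rough illustration of the sampled linear SDE studied in [8], the following scalar sketch holds the control over periodic intervals of length $\delta$ and adds small Brownian noise via Euler-Maruyama. All numerical values are illustrative, and the measurement-noise term $\varepsilon V_{k\delta}$ is replaced by a simplified i.i.d. Gaussian stand-in; this is a sketch under those assumptions, not the authors' implementation:

```python
import math, random

def sampled_linear_sde(A=1.0, B=1.0, K=2.0, x0=1.0, T=1.0,
                       delta=0.01, eps=0.05, dt=1e-4, seed=1):
    """Euler-Maruyama for a scalar analogue of the system in [8]:
    dX = (A X + B u) dt + eps dW, with u = -K * (noisy sample of X)
    recomputed at times k*delta and held constant in between.
    The i.i.d. Gaussian measurement noise is a simplified stand-in."""
    rng = random.Random(seed)
    x, t = x0, 0.0
    u = -K * (x + eps * rng.gauss(0.0, math.sqrt(delta)))
    next_update = delta
    while t < T:
        if t >= next_update:            # periodic sampling instant
            u = -K * (x + eps * rng.gauss(0.0, math.sqrt(delta)))
            next_update += delta
        x += (A * x + B * u) * dt + eps * rng.gauss(0.0, math.sqrt(dt))
        t += dt
    return x

x_T = sampled_linear_sde()
ideal = math.exp(-1.0)     # closed-loop ODE x' = (A - B K) x = -x at T = 1
print(abs(x_T - ideal))    # small for small eps and delta
```

For small $\varepsilon$ and $\delta$ the sampled noisy trajectory stays close to the ideal closed-loop solution $x_t = e^{(A-BK)t}x_0$, which is exactly the mean-dynamics statement described above.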
In [9], the authors generalized the previous work [8] by considering a nonlinear drift with multiplicative noise:
$$dX^{\varepsilon,\delta}_t = c\big(X^{\varepsilon,\delta}_t, X^{\varepsilon,\delta}_{\delta \lfloor t/\delta \rfloor}\big)\, dt + \varepsilon \sigma\big(X^{\varepsilon,\delta}_t\big)\, dW_t.$$
Here $c : \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}^d$ is a sufficiently regular mapping and takes the specific form
$$c(x, y) = f(x) + g(x)\kappa(y), \qquad x, y \in \mathbb{R}^d,$$
with the functions $f : \mathbb{R}^d \to \mathbb{R}^d$, $g : \mathbb{R}^d \to \mathbb{R}^{d\times m}$, $\kappa : \mathbb{R}^d \to \mathbb{R}^m$ satisfying certain regularity conditions. The authors obtain asymptotic approximations for both the mean dynamics and the fluctuations of $X^{\varepsilon,\delta}_t$ as $\varepsilon, \delta \searrow 0$. The mean behavior is described by a limiting ODE of the form
$$x_t = x_0 + \int_0^t \big[f(x_s) + g(x_s)\kappa(x_s)\big]\, ds$$
on the interval $[0, T]$, while the fluctuations are captured by a linear SDE whose form depends on the relative rates at which $\varepsilon$ and $\delta$ vanish. Further, in [6], Dhama studied the following nonlinear sampled-data system perturbed by both Brownian noise and small jumps via Poisson random measures:
$$dY^{\varepsilon,\delta}_t = c\big(Y_{t-}, Y^{\varepsilon,\delta}_{\pi_\delta(t)-}\big)\, dt + \varepsilon \sigma(Y_{t-})\, dW_t + \varepsilon \int_{0 < |x| < 1} F(Y_{t-}, x)\, \tilde{N}(dt, dx)$$
with $\pi_\delta(t) = \delta \lfloor t/\delta \rfloor$. Methodologically, that work performs a joint small-parameter expansion under three regimes for $\delta/\varepsilon$ and proves pathwise versions of Law of Large Numbers (LLN) and Central Limit Theorem (CLT) type results. The paper [7] differs from the above three studies. It considers a linear state feedback system implemented using a sample-and-hold mechanism (as in [8]), where the sampling times are random and follow a renewal process. Unlike the earlier works, randomness enters only through the sampling times, and no extra noise is added to the system. The primary objective is to understand how fast, yet finite-rate, random sampling alters the system dynamics when compared to the ideal continuous-time model.
Using law of large numbers and central limit theorem results for random matrix products, that study describes both the average behavior and the typical fluctuations caused purely by the randomness in sampling.

1.1 The Novelty and Methods

This work primarily deals with a linear control model subject to both rapidly increasing random sampling at rate $n \in \mathbb{N}$ and an external noise perturbation of very small intensity $\varepsilon$. The primary goal is to investigate law of large numbers (LLN) and central limit theorem (CLT) type results in the regime where the effects of random sampling and noise perturbations both vanish, i.e., when $(\frac{1}{n}, \varepsilon) \searrow 0$. One of the main differences between our work and that of [6–9] is the simultaneous consideration of both random sampling and an external noise term. In particular, our work extends [8] by replacing its periodic sampling with random sampling, and it extends [7] by introducing an external noise forcing term with very small intensity $\varepsilon$. To the best of our knowledge, this problem has not yet been treated in the literature. Further, we also generalize the results obtained in our linear setting to a broader framework that includes nonlinear systems and multiplicative white noise. This extension is inspired by [6], where the author works under deterministic sampling. In contrast, along with the nonlinearity, we also consider random sampling, which makes the problem more complex and interesting. In this context, Lemma 5.9 of the linear case plays a key role in establishing the CLT and simplifies the calculations. To the best of our knowledge, this setting is also new and has not yet been investigated in the existing literature. This work also covers the results obtained in [9] in the context of random sampling.
Earlier, we mentioned that this paper deals with two interacting sources of randomness, which makes the analysis considerably more challenging. This can be considered a multiscale problem [2–4, 15, 17, 19], where asymptotic behavior plays a key role. Multiscale systems of this kind were first investigated by Khasminskii in his seminal works [17, 19], where he developed the averaging method to derive simplified dynamics for the slow component by averaging over the fast one. This idea has since inspired extensive research; for example, see [2–4, 15] and references therein. Another technique for multiscale problems is the idea used by Freidlin and Sowers [12], who established a large deviation principle by applying first homogenization and then the averaging principle. In the present article, in the case of the linear system, our analysis is inspired by and partially follows the framework introduced in [8], where the authors have shown LLN and CLT type results for their linear control problems with periodic sampling. For the linear setting in our case, in the LLN result, we show that as $\varepsilon \searrow 0$ and $n \to \infty$, the sampled stochastic system behaves close to the ideal deterministic system. In the CLT part, we analyze the behavior of the fluctuations of the sampled stochastic system around the deterministic controlled trajectory. Depending on the balance between noise intensity and sampling frequency, these fluctuations converge either to a stochastic differential equation or to a deterministic equation. To derive these results, we encounter technical difficulties due to the combination of random sampling and external noise. This goal cannot be achieved by simply applying the techniques from [8]. Therefore, we have developed an extended framework beyond that method.
To overcome these challenges, we first perform a careful decomposition of the main expression into several components, each of which can be analyzed separately. In doing so, well-established methods from probability theory are employed: Wald's identity to handle sums of a random number of random variables, and the elementary renewal theorem to control renewal processes. Further, we use Donsker's theorem to approximate scaled processes by Brownian motion, and Doob's maximal inequality to bound the supremum of martingales. The contribution of the small noise term is treated using the Burkholder-Davis-Gundy (BDG) inequality in combination with Doob's maximal inequality, ensuring precise control over stochastic fluctuations. This systematic approach allows us to rigorously establish the limiting behavior of the system, despite the complexity introduced by the simultaneous presence of sampling and noise.

1.2 Plan of the paper

In the next section, we introduce the overall framework of our study and present the main results. We first formulate the problem rigorously and set out the notation and assumptions that will be used throughout the paper. We then outline the key ideas and techniques underlying our analysis. Section 3 contains the necessary definitions and auxiliary results that will be used in the subsequent sections. In Section 4, we derive LLN type results, which describe the average, or deterministic, behavior of the system. Section 5 is devoted to CLT type results. This section is divided into two subsections, in which the detailed proofs of the CLT are provided. Subsection 5.1 focuses on the decomposition of the random sampling component, whereas Subsection 5.2 is concerned with the analysis of the noise component.
Finally, in Section 6, we extend our framework to a more general class of nonlinear problems and obtain analogous results for this broader setting.

1.3 Notations

Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space, and let $\{\xi_i\}_{i=1}^{\infty}$ be a sequence of i.i.d. positive real-valued random variables defined on this space. Throughout the paper, $|\cdot|$ denotes the induced matrix norm, and for two matrices $A$ and $B$, we write $|A| \cdot |B|$ to denote the product of their norms. The symbol $C$ denotes a generic positive constant whose value may change from line to line. Whenever necessary, its specific dependence will be indicated explicitly in the relevant statement. If no dependence is specified, $C$ represents a universal positive constant. We denote the set of natural numbers by $\mathbb{N}$ and the set of non-negative integers by $\mathbb{Z}_+$. The notation $\mathrm{Var}$ is used to represent variance. Further, the $p$th power of an expectation is denoted by $\mathbb{E}[\cdot]^p := \big(\int_\Omega \cdot\, d\mathbb{P}\big)^p$.

2 Problem Formulation, Assumptions and Main Results

Before introducing the dynamical system of our problem, let us first discuss the random sampling setup that we are going to use in this paper. In our sample-and-hold setup, we replace deterministic discretizations by a renewal process, using the prescription from [7, page 360]. For this, we define the time process associated to the random variables $\{\xi_i\}_{i=1}^{\infty}$ as
$$\tau_k := \sum_{i=1}^{k} \xi_i, \qquad \tau_0 = 0,$$
and the renewal process
$$N_t := \sup\{k \in \mathbb{Z}_+ : \tau_k \le t\}.$$
Throughout the paper, we assume that the sequence $\{\xi_i\}_{i=1}^{\infty}$ has a finite moment generating function. The appropriate "fast sampling" takes place via a parameter $n \in \mathbb{N}$. That is, for $k \in \mathbb{Z}_+$, $n \in \mathbb{N}$, let $\xi^n_k := \frac{1}{n}\xi_k$, and let $\{\tau^n_k\}_{k \in \mathbb{Z}_+}$, $\{N^n_t\}_{t \ge 0}$ be the associated time and renewal processes respectively, given by
$$\tau^n_k := \sum_{i=1}^{k} \xi^n_i, \qquad \tau^n_0 = 0, \tag{2.1}$$
and
$$N^n_t := \sup\{k \in \mathbb{Z}_+ : \tau^n_k \le t\}.$$
(2.2)

By definition, $\xi^1_k = \xi_k$, $\tau^1_k = \tau_k$ for all $k \ge 1$, and $N^1_t = N_t$ for all $t \ge 0$; these will be used interchangeably. Given $n \in \mathbb{N}$, the appropriate discretization function at level $n$ in this setting is the map $t \mapsto \tau^n_{N^n_t}$, which gives the last sampling point smaller than $t$ at the scale $\frac{1}{n}$. Observe that
$$N^n_T = \sup\{k : \tau^n_k \le T\} = \sup\{k : \tau^1_k \le nT\} = N^1_{nT}. \tag{2.3}$$
Let us now turn our attention to the dynamical feedback control framework. We consider the linear differential equation
$$\dot{x}_t = A x_t + B u_t, \qquad x(0) = x_0 \in \mathbb{R}^d, \tag{2.4}$$
where $A \in \mathbb{R}^{d\times d}$, $B \in \mathbb{R}^{d\times m}$, $m, d$ are fixed integers, and $t \in [0, T]$ for some finite $T > 0$. We apply the feedback control law $u = -Kx$, where $K \in \mathbb{R}^{m\times d}$ is a suitable matrix. Throughout the paper, we assume that the matrix $A$ is invertible, the matrix $A - BK$ generates a strongly continuous semigroup $e^{t(A-BK)}$, and the matrices $A$ and $BK$ commute. Consequently, one obtains the following equivalent integral form of equation (2.4):
$$x_t = x_0 + \int_0^t (A - BK) x_s\, ds = e^{t(A-BK)} x_0. \tag{2.5}$$
For the sake of simplicity, we assume that the initial data $x_0$ is bounded and that its bound is incorporated into the generic constant $C$. Now, consider implementing the feedback control law described earlier using a sample-and-hold strategy in the dynamics (2.4). For any fixed $n \in \mathbb{N}$ and any $k \in \mathbb{Z}_+$, suppose that the system is sampled at the time instant $t = \tau^n_k$. At each sampling instant $t$, the current state $x_t = x_{\tau^n_k}$ is measured, and the control input is determined according to $u_{\tau^n_k} = -K x_{\tau^n_k}$; this control value is then kept constant over the interval $[\tau^n_k, \tau^n_{k+1})$. The trajectory $x^n_t$ over time $t \in [\tau^n_k, \tau^n_{k+1})$ can be obtained by solving the differential equation
$$\dot{x}^n_t = A x^n_t + B(-K x^n_{\tau^n_k}) = A x^n_t - BK x^n_{\tau^n_k}, \tag{2.6}$$
with initial condition $x^n_{\tau^n_k} = x^n_{\tau^n_k-}$.
On the interval $t \in [\tau^n_k, \tau^n_{k+1})$, $x^n_{\tau^n_k} = x^n_{\tau^n_{N^n_t}}$ is fixed and the system is linear with constant coefficients on each interval $[\tau^n_k, \tau^n_{k+1})$, $k \in \mathbb{Z}_+$. Hence, the solution is given by
$$x^n_t = \Big(e^{A(t - \tau^n_k)} - \int_{\tau^n_k}^{t} e^{A(t-s)} BK\, ds\Big) x^n_{\tau^n_k}, \qquad t \in [\tau^n_k, \tau^n_{k+1}). \tag{2.7}$$
As we can see, the trajectory depends on $n$ as well, and we can expect that as $n \to \infty$ it should converge to the continuous-time solution (2.5). We now consider the situation where the system is influenced by a small external random force. For this, let us consider a filtration $\{\mathcal{F}_t : t \ge 0\}$ on the probability space $(\Omega, \mathcal{F}, \mathbb{P})$ satisfying the standard conditions (see [33]). On this probability space, consider a $d$-dimensional Brownian motion $W = \{W_t\}_{t \ge 0}$, which represents the external noise in the system and is independent of the sequence $\{\xi_i\}_{i=1}^{\infty}$. We suppose the intensity of the noise is very small, say of order $\varepsilon \in (0, 1)$. Now, the system (2.6) evolves according to the following stochastic hybrid system with additive noise:
$$dX^{\varepsilon,n}_t = \big(A X^{\varepsilon,n}_t - BK X^{\varepsilon,n}_{\tau^n_k}\big)\, dt + \varepsilon\, dW_t, \qquad t \in [\tau^n_k, \tau^n_{k+1}), \tag{2.8}$$
with initial data $X^{\varepsilon,n}_0 = x_0$. Notice that on the interval $t \in [\tau^n_k, \tau^n_{k+1})$, $X^{\varepsilon,n}_{\tau^n_k} = X^{\varepsilon,n}_{\tau^n_{N^n_t}}$ is fixed. Following the representation (2.7), we have
$$X^{\varepsilon,n}_t = \Big[e^{(t-\tau^n_k)A} - \int_{\tau^n_k}^{t} e^{(t-s)A} BK\, ds\Big] X^{\varepsilon,n}_{\tau^n_k} + \varepsilon \int_{\tau^n_k}^{t} e^{(t-s)A}\, dW_s,$$
and thus for any $t \ge 0$,
$$X^{\varepsilon,n}_t = \sum_{k \ge 0} \mathbf{1}_{[\tau^n_k, \tau^n_{k+1})}(t) \Big\{\Big[e^{(t-\tau^n_k)A} - \int_{\tau^n_k}^{t} e^{(t-s)A} BK\, ds\Big] X^{\varepsilon,n}_{\tau^n_k} + \varepsilon \int_{\tau^n_k}^{t} e^{(t-s)A}\, dW_s\Big\}.$$
In (2.8), if we fix $\varepsilon \in (0, 1)$ and take $n \to \infty$, we can expect the limiting dynamics to be described by the process $X^\varepsilon_t$ of the following form:
$$dX^\varepsilon_t = \big[(A - BK) X^\varepsilon_t\big]\, dt + \varepsilon\, dW_t, \qquad X^\varepsilon_0 = x_0.$$
(2.9)

Further, we expect that, taking the limit $\varepsilon \searrow 0$, the convergence of the solution $X^\varepsilon_t$ given by (2.9) to the deterministic trajectory $x(t)$ solving (2.5) is straightforward. Our primary objective is to determine how the combined presence of random sampling and Brownian motion affects these classical limits. More specifically, we are interested in how the relative rates at which $\varepsilon \searrow 0$ and $n \to \infty$ simultaneously modify the convergence of $X^{\varepsilon,n}_t$ to $x(t)$. Considering the joint convergence of $\varepsilon \searrow 0$ and $n \to \infty$, we identify the following three asymptotic regimes:
$$c := \lim_{\varepsilon \searrow 0,\, n \to \infty} \frac{1}{n\varepsilon} \begin{cases} = 0 & \text{Regime 1} \\ \in (0, \infty) & \text{Regime 2} \\ = \infty & \text{Regime 3.} \end{cases} \tag{2.10}$$
In the first regime, the sampling process is very fast while the noise decreases slowly in comparison. Consequently, the system is already well sampled while the noise is still present. In this case, the main effect comes from small noise acting in a rapidly varying environment. In the second regime, sampling and noise evolve at almost the same speed, i.e., neither one dominates the other. Finally, in the third regime, the noise intensity $\varepsilon$ decreases much faster than the sampling parameter $n$ increases. As a result, the effect of the external noise becomes negligible at an early stage of the system's evolution. For the cases $c = 0$ and $c \in (0, \infty)$, we set
$$\kappa(\varepsilon) := \Big|\frac{1}{n\varepsilon} - c\Big|. \tag{2.11}$$
Note that $\lim_{\varepsilon \searrow 0,\, n \to \infty} \kappa(\varepsilon) = 0$. Now, for each $n \in \mathbb{N}$, define a time-discretization map $\pi_n : [0, \infty) \to \{\tau^n_k : k \in \mathbb{Z}_+\}$ by
$$\pi_n(t) := \tau^n_{N^n_t}, \qquad t \in [0, \infty).$$
That is, for each time $t \in [0, \infty)$, $\pi_n$ picks the closest previous sampling time in the random discretization grid. In particular, if $t \in [\tau^n_k, \tau^n_{k+1})$ for some $k \in \mathbb{Z}_+$, then $N^n_t = k$ by definition (2.2), and hence $\pi_n(t) = \tau^n_k$.
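The random sampling grid $\{\tau^n_k\}$ and the scaling identity (2.3) can be checked directly in simulation. The exponential(1) gap distribution below is an illustrative choice (it has a finite moment generating function near the origin, consistent with the standing assumption on $\{\xi_i\}$):

```python
import random

rng = random.Random(0)
xis = [rng.expovariate(1.0) for _ in range(10000)]  # i.i.d. gaps xi_i

def N(scale_n, T, gaps):
    """Renewal count N^n_T for the scaled gaps xi_i / n:
    the largest k with tau^n_k = (xi_1 + ... + xi_k) / n <= T."""
    s, k = 0.0, 0
    for xi in gaps:
        s += xi / scale_n
        if s > T:
            break
        k += 1
    return k

n, T = 50, 2.0
# Scaling identity (2.3): N^n_T = N^1_{nT} for the same underlying sequence.
assert N(n, T, xis) == N(1, n * T, xis)
print(N(n, T, xis))   # on average about n * T / E[xi_1] = 100 renewals
```

Because the same `xis` sequence realizes both sides, the identity holds pathwise, not merely in distribution, which is exactly how (2.3) is used in the analysis.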
With the help of $\pi_n$, we can rewrite equations (2.6) and (2.8) for $t \in [0, \infty)$ as follows:
$$\dot{x}^n_t = A x^n_t - BK x^n_{\pi_n(t)} \tag{2.12}$$
with $x^n_0 = x_0$, and
$$dX^{\varepsilon,n}_t = \big(A X^{\varepsilon,n}_t - BK X^{\varepsilon,n}_{\pi_n(t)}\big)\, dt + \varepsilon\, dW_t, \tag{2.13}$$
with $X^{\varepsilon,n}_0 = x_0$, or equivalently
$$X^{\varepsilon,n}_t = x_0 + \int_0^t \big(A X^{\varepsilon,n}_s - BK X^{\varepsilon,n}_{\pi_n(s)}\big)\, ds + \varepsilon \int_0^t dW_s. \tag{2.14}$$
We can now state the first main results of this paper. The first result is a "law of large numbers", guaranteeing that as $\varepsilon \searrow 0$, $n \to \infty$, $X^{\varepsilon,n}_t \to x_t$ uniformly in $L^p$, $p \ge 1$, in all three regimes.

Theorem 2.1. Let $x(t)$ denote the solution of (2.5) and let $X^{\varepsilon,n}_t$ be the solution to the SDE (2.8). Let $T \ge 0$ be fixed. Then, for arbitrary $\varepsilon > 0$, $n \in \mathbb{N}$, $p \ge 1$, there exists $C_{A,B,K,T,p} > 0$, depending only on $A$, $B$, $K$, $T$ and $p$, such that
$$\mathbb{E}\Big[\sup_{0 \le s \le T} |X^{\varepsilon,n}_s - x_s|^p\Big] \le C_{A,B,K,T,p}\big(\mathbb{E}[N_p] + \varepsilon^p\big),$$
where $N_p = \int_0^T (s - \pi_n(s))^p\, ds$ satisfies $\mathbb{E}[N_p] \le \frac{1}{n^p} C_{T,\xi_1}$ and hence tends to $0$ as $n \to \infty$ by Corollary 3.11 below.

Our next main aim is to study the fluctuations of $X^{\varepsilon,n}_t$ about $x(t)$ with respect to the parameters $\varepsilon$ and $n$. For this, we introduce the fluctuation processes under the different scaling regimes defined in (2.10). For Regimes 1 and 2, we define the fluctuations scaled by $\varepsilon$ as
$$Z^{\varepsilon,n}_t := \frac{X^{\varepsilon,n}_t - x_t}{\varepsilon},$$
while for Regime 3 we consider fluctuations under the scaling $1/n$:
$$Q^{\varepsilon,n}_t := \frac{X^{\varepsilon,n}_t - x_t}{1/n}.$$
With the help of (2.5) and (2.14), we obtain
$$X^{\varepsilon,n}_t - x_t = \int_0^t (A - BK)(X^{\varepsilon,n}_s - x_s)\, ds + BK \int_0^t \big(X^{\varepsilon,n}_s - X^{\varepsilon,n}_{\pi_n(s)}\big)\, ds + \varepsilon W_t.$$
Therefore,
$$Z^{\varepsilon,n}_t = \int_0^t (A - BK) Z^{\varepsilon,n}_s\, ds + BK \int_0^t \frac{X^{\varepsilon,n}_s - X^{\varepsilon,n}_{\pi_n(s)}}{\varepsilon}\, ds + W_t, \tag{2.15}$$
and similarly
$$Q^{\varepsilon,n}_t = \int_0^t (A - BK) Q^{\varepsilon,n}_s\, ds + BK \int_0^t \frac{X^{\varepsilon,n}_s - X^{\varepsilon,n}_{\pi_n(s)}}{1/n}\, ds + \varepsilon n W_t.$$
(2.16)

We expect that as $\varepsilon \searrow 0$, $n \to \infty$, the random fluctuations $Z^{\varepsilon,n}_t$ (in Regimes 1, 2) and $Q^{\varepsilon,n}_t$ (in Regime 3) converge to effective fluctuation processes $Z_t$ and $Q_t$ respectively, which are independent of $\varepsilon$, $n$ and $X^{\varepsilon,n}_t$. If such an approximation holds, then we can approximate
$$X^{\varepsilon,n}_t = x_t + \varepsilon Z_t + \gamma_{\varepsilon,n} \quad \text{in Regimes 1, 2,} \qquad X^{\varepsilon,n}_t = x_t + \tfrac{1}{n} Q_t + \lambda_{\varepsilon,n} \quad \text{in Regime 3,}$$
where $\gamma_{\varepsilon,n}$ and $\lambda_{\varepsilon,n}$ vanish as $\varepsilon \searrow 0$ and $n \to \infty$. We analyze the two cases separately, beginning with Regimes 1 and 2. In Regimes 1 and 2, since $\lim_{\varepsilon \searrow 0,\, n \to \infty} \frac{1}{n\varepsilon} = c \in [0, \infty)$, there exists $\varepsilon_0 \in (0, 1)$ such that $\kappa(\varepsilon) < 1$ whenever $0 < \varepsilon < \varepsilon_0$. In both regimes, one of the main challenges comes from the term
$$\int_0^t \frac{X^{\varepsilon,n}_s - X^{\varepsilon,n}_{\pi_n(s)}}{\varepsilon}\, ds,$$
which is central to understanding the behavior of the system $Z^{\varepsilon,n}_t$. It is not immediately clear how it evolves over time when $\varepsilon$ is very small and $n$ is very large. To get a handle on this, let us suppose that we can show that, in both regimes, the integral converges to a well-defined function. With this in mind, we introduce the function $\ell(t)$ given by
$$\ell(t) := c \int_0^t M (A - BK) x(s)\, ds, \tag{2.17}$$
where $M := \frac{\mathbb{E}[\xi_1^2]}{2\mathbb{E}[\xi_1]}$. Further, we define the process $Z = \{Z_t : t \ge 0\}$ as the unique strong solution of
$$Z_t := \int_0^t (A - BK) Z_s\, ds - cM\, BK \int_0^t (A - BK) x(s)\, ds + W_t. \tag{2.18}$$
Now, suppose we can establish that
$$\lim_{\substack{\varepsilon,\, 1/n \to 0 \\ 1/(n\varepsilon) \to c}} \int_0^t \frac{X^{\varepsilon,n}_s - X^{\varepsilon,n}_{\pi_n(s)}}{\varepsilon}\, ds = \ell(t).$$
Then, in Regimes 1 and 2, the fluctuation process $Z^{\varepsilon,n}_t$ can be approximated by $Z_t$ for small $\varepsilon$ and large $n$. This is the content of our next main result, presented below.

Theorem 2.2 (Central Limit Type Theorem). Let $x(t)$ denote the solution of (2.5) and let $X^{\varepsilon,n}_t$ be the solution to the SDE (2.8). Assume that the scaling parameters fall into Regime $i \in \{1, 2\}$, i.e., $c \in [0, \infty)$.
Then, there exists a number $\varepsilon_0 \in (0, 1)$ such that for every fixed $T > 0$ and for all $0 < \varepsilon < \varepsilon_0$,
$$\mathbb{E}\Big[\sup_{0 \le t \le T} |X^{\varepsilon,n}_t - x(t) - \varepsilon Z_t|\Big] \le \Big(c\big(n^{-1/2}(n^{-1} + \varepsilon) + n^{-1/4}\big) + M\kappa(\varepsilon) + n^{-1/2}\Big)\, C_{A,B,K,T,\xi_1}.$$

Remark 2.3. In Regime 1, $c = 0$ and $\varepsilon$ tends to zero slowly compared to $1/n$. Therefore, if we take $\varepsilon \approx \frac{1}{n^{1-\delta}}$ with $0 < \delta < 1$, then the rate becomes
$$M\kappa + n^{-1/2} = \frac{M}{n\varepsilon} + n^{-1/2} \approx M n^{-\delta} + n^{-1/2} \approx n^{-\min(\delta, 1/2)}.$$
In terms of $\varepsilon$, the rate is $\varepsilon^{\min\left(\frac{1}{2(1-\delta)}, \frac{\delta}{1-\delta}\right)}$. In Regime 2, $c$ is a fixed constant, and $\varepsilon$ and $1/n$ both converge to $0$ at approximately the same rate. Consequently, the dominant term in the overall rate is $(M\kappa + n^{-1/4})$. Here, the convergence rate is determined by how $\kappa \to 0$, i.e., by the rate at which $n\varepsilon \to 1/c$.

Further, in Regime 3, a similar result holds; however, in this case the process $Q^{\varepsilon,n}_t$ converges to a deterministic limit. This occurs because, in this regime, $\varepsilon$ decays far more rapidly than $n$ grows. The result can be formulated as follows.

Theorem 2.4. Let $x(t)$ denote the solution of (2.5) and let $X^{\varepsilon,n}_t$ be the solution to the SDE (2.8). Assume that the scaling parameters fall into Regime 3, i.e., $c = \infty$. Define the process $Q = \{Q_t : t \ge 0\}$ as the unique strong solution of
$$Q_t = \int_0^t (A - BK) Q_s\, ds - BK\, M \int_0^t (A - BK) x(s)\, ds.$$
Then, there exists a number $n_0$ such that for every fixed $T > 0$ and for all $n > n_0$,
$$\mathbb{E}\Big[\sup_{0 \le t \le T} \Big|X^{\varepsilon,n}_t - x(t) - \frac{1}{n} Q_t\Big|\Big] \le \Big(\frac{1}{n^{1/4}} + \varepsilon\sqrt{n}\Big)\, C_{A,B,K,T,\xi_1}.$$
The proof of this theorem follows from the proof of Theorem 2.2; therefore, we omit it here. We observe that, in this regime, $c = \infty$, meaning that $\varepsilon n \to 0$ as $\varepsilon \searrow 0$ and $n \to \infty$.

Remark 2.5.
The term $M$ contributing to the effective drift in the function (2.17) is the long-run time-averaged age (equivalently, the long-run residual life) of the renewal process $\{N^n_t\}_{t \ge 0}$ created by $\{\xi^n_i\}_{i=1}^{\infty}$ (see [13, Chapter 5]). In the present renewal process, the time between consecutive samplings is random and given by $\xi^n_1, \xi^n_2, \xi^n_3, \ldots$. The age at any time is how long it has been since the last sampling, and the residual life is how long until the next sampling. Within each interval between two samplings, the age increases linearly from $0$ to the length of the interval, while the residual life decreases linearly from the interval length to $0$. If we plot age or residual life versus time, each interval forms a right-angled triangle, with the area under the triangle representing the total accumulated age or residual life in that interval. Dividing this area by the interval length gives the average age or residual life in that interval. Summing over all intervals and averaging over time $[0, T]$, the long-run time average of the age or residual life converges to $\frac{\mathbb{E}[\xi_1^2]}{2\mathbb{E}[\xi_1]}$. The triangle picture is just a simple visual tool to understand how the average over time comes from the linear increase or decrease of age and residual life within each interval. For further details on the term $M$, the reader is referred to [13].

Remark 2.6. The trajectories $\{Z_t\}_{t \ge 0}$ and $\{Q_t\}_{t \ge 0}$ can be explicitly calculated. In fact, we can rewrite $Z_t$ in (2.18) as
$$dZ_t = (A - BK) Z_t\, dt - cM\, BK (A - BK) x(t)\, dt + dW_t, \qquad Z_0 = 0.$$
Defining $Y_t := e^{-(A - BK)t} Z_t$ and applying the Itô formula, we get
$$Z_t = -cM\, BK \int_0^t e^{(A - BK)(t-s)} (A - BK) x(s)\, ds + \int_0^t e^{(A - BK)(t-s)}\, dW_s. \tag{2.19}$$
Thus, the limiting fluctuation process $Z_t$ satisfies a linear SDE of Ornstein-Uhlenbeck (OU) type (see [33]) with additive noise.
In Regime 1, i.e., when $c = 0$, it is a pure OU process, while in Regime 2 it has an extra deterministic drift determined by $\ell(t)$. Since the drift is linear with deterministic coefficients and the noise is additive, equation (2.19) admits a unique strong solution, and in particular $Z_t$ is a Gaussian process. This justifies referring to Theorem 2.2 as a central limit type theorem. A similar explanation can be given for $Q_t$ and Theorem 2.4; in fact, it is more straightforward, since $Q_t$ is a deterministic function.

3 Preliminaries

In this section, we collect the preliminaries required to prove our main results. We start by presenting certain stochastic tools that will be used in the proofs.

3.1 Classical Results from the Literature

The first, Wald's identity, will be used to simplify the first two moments of stopped sums of i.i.d. random variables.

Lemma 3.1 (Wald's Identity). [13, Theorem 5.5.3] Let $X_0, X_1, X_2, \ldots$ be an i.i.d. sequence of random variables. If $N$ is a stopping time with respect to the filtration generated by $\{X_i\}_{i \ge 0}$, then
$$\mathbb{E}\Big[\sum_{i=0}^{N} X_i\Big] = \mathbb{E}[N + 1]\, \mathbb{E}[X_i].$$
Furthermore, suppose that $\mathbb{E}[X_i] = 0$. Then
$$\mathbb{E}\Big[\Big(\sum_{i=0}^{N} X_i\Big)^2\Big] = \mathbb{E}[N + 1]\, \mathrm{Var}(X_i) = \mathbb{E}[N + 1]\, \mathbb{E}[X_i^2].$$
We remark that our version of the identity is stated for the sequence $X_i$, $i \ge 0$. The cited version is for $X_i$, $i \ge 1$, and our statement can easily be derived from it.

The elementary renewal theorem gives an asymptotic rate for expected renewal times.

Theorem 3.2 (The elementary renewal theorem). [13, Theorem 5.6.2] Let $X_1, X_2, \ldots$ be an integrable, positive sequence of i.i.d. random variables. Suppose that $N(t) = \sup\big\{k : \sum_{i=1}^{k} X_i \le t\big\}$. Then
$$\lim_{t \to \infty} \frac{\mathbb{E}[N(t)]}{t} = \frac{1}{\mathbb{E}[X_1]}.$$
We also require Donsker's theorem, both for i.i.d. random variables and for renewal processes.
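Both classical results above lend themselves to a quick Monte Carlo sanity check; the exponential(1) interarrival distribution and all numerical parameters below are illustrative choices:

```python
import random

rng = random.Random(42)
t_horizon, trials = 50.0, 2000

lhs_sum, n_sum = 0.0, 0
for _ in range(trials):
    s, k = 0.0, 0
    while s <= t_horizon:          # stop at the first renewal past t_horizon
        s += rng.expovariate(1.0)  # X_i ~ Exp(1), so E[X_1] = 1
        k += 1
    # Here s = S_{N(t)+1} and k = N(t) + 1, which is a stopping time.
    lhs_sum += s
    n_sum += k

# Wald's identity: E[S_{N(t)+1}] = E[N(t) + 1] * E[X_1].
lhs, rhs = lhs_sum / trials, (n_sum / trials) * 1.0
print(abs(lhs - rhs) / rhs)                    # small relative error

# Elementary renewal theorem: E[N(t)] / t -> 1 / E[X_1] = 1 as t grows.
print(abs((n_sum / trials - 1) / t_horizon - 1.0))
```

The same runs estimate both sides of Wald's identity, mirroring how Lemma 3.1 is applied later with the stopping time $N^n_T + 1$ of Lemma 3.8.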
The latter is a functional central limit theorem for the renewal times as the intensity scales linearly.

Theorem 3.3 (Donsker's theorem). [1, Theorem 16.1] If $X_i$, $i \ge 1$, are i.i.d. positive random variables with finite variance, then for any $T > 0$, as random elements of the Skorokhod space $D[0,T]$ we have
$$ \sum_{i=1}^{\lfloor nt\rfloor} \frac{X_i - E[X_i]}{\sqrt{n\,\mathrm{Var}(X_1)}} \Rightarrow B_t $$
weakly as $n \to \infty$, where $(B_t)_{t\in[0,T]}$ is a Brownian motion.

Theorem 3.4 (Donsker's theorem for renewal processes). [1, Theorem 17.3] Suppose that $X_i$, $i = 1, 2, \dots$, are i.i.d. random variables with mean $\mu$ and variance $\sigma^2$. Let $S_m = X_1 + X_2 + \dots + X_m$ and $\tau_t = \sup\{m : S_m \le t\}$. Then,
$$ \frac{nt/\mu - \tau_{nt}}{\sqrt{n}} \Rightarrow \frac{\sigma}{\mu^{3/2}} B_t $$
weakly in the Skorokhod topology on càdlàg paths on $\mathbb{R}_+$, where $B_t$ is a standard Brownian motion.

We are now ready to prove some estimates which will be used to establish our main results.

3.2 Key estimates

We begin by establishing bounds on the solutions of equations (2.4) and (2.6). Recall the functions $x_r$ and $x^n_r$ defined in (2.4) and (2.6). Our next lemma shows that both functions grow no faster than a deterministic exponential in $t$. This is particularly striking for $x^n_r$, since it removes any stochastic involvement from the growth bound.

Lemma 3.5. Let $L(T) = \sup_{0\le r\le T} |x_r|$ and $L'(T) = \sup_{0\le r\le T} |x^n_r|$. There exists a constant $C^{ABK}_T > 0$ such that $\max\{L(T), L'(T)\} \le C^{ABK}_T$ for all $T > 0$.

Proof. For $x_t$, using (2.5),
$$ |x_t| \le |x_0| + \int_0^t |(A - BK) x_s|\, ds \le |x_0| + \int_0^t |A - BK| \cdot |x_s|\, ds. $$
Taking the supremum over $0 \le t \le T$,
$$ L(T) \le |x_0| + |A - BK| \int_0^T L(s)\, ds. $$
By Gronwall's lemma, $L(T) \le C^{ABK}_T$. For $x^n_t$, using (2.6),
$$ |x^n_t| \le |x_0| + \int_0^t \left|A x^n_s - BK x^n_{\pi_n(s)}\right| ds \le |x_0| + \int_0^t \left( |A|\cdot|x^n_s| + |B|\cdot|K|\cdot|x^n_{\pi_n(s)}| \right) ds. $$
Since $\pi_n(s) \le s$, $|x^n_{\pi_n(s)}| \le L'(s)$. Thus
$$ L'(T) \le |x_0| + (|A| + |B|\cdot|K|) \int_0^T L'(s)\, ds. $$
By Gronwall's lemma, $L'(T) \le |x_0| e^{(|A| + |B|\cdot|K|)T} \le C^{ABK}_T$. Combining constants, the result follows.

In order to bound the growth of the stochastic solution $X^{\varepsilon,n}_t$ in (2.14), we include the following lemma, a standard corollary of the BDG inequality.

Lemma 3.6. [33] For any $p \in [1,\infty)$, there exists a $C > 0$ such that for all $T \in [0, T_0]$,
$$ E\left[\sup_{0\le t\le T}\left|\int_0^t dW_s\right|^{2p}\right] \le C T^p. $$

The next estimate bounds the tail of a matrix exponential series. It will be used in bounds involving Taylor series expansions.

Lemma 3.7. For any $r \ge 0$, we have
$$ \left|\sum_{k=3}^{\infty} \frac{r^k A^k}{k!}\right| \le \frac{r^3|A|^3}{6} e^{r|A|}. $$

Proof. Since the norm is submultiplicative,
$$ \left|\sum_{k=3}^{\infty} \frac{r^k A^k}{k!}\right| \le \sum_{k=3}^{\infty} \frac{r^k|A|^k}{k!} = r^3|A|^3 \sum_{k=0}^{\infty} \frac{r^k|A|^k}{k!\,(k+1)(k+2)(k+3)} \le \frac{r^3|A|^3}{6} e^{r|A|}. $$

We shall repeatedly use Wald's identity, Lemma 3.1. Our next result establishes a stopping time to which it will be applied.

Lemma 3.8. The random time $N^n_T + 1$ is a stopping time with respect to the sequence $(\xi^n_{i+1})^p$, $i = 0, 1, 2, \dots$, for any $p \ge 1$.

Proof. Fix $k \in \mathbb{N}$ and consider the event $\{N^n_T + 1 \le k\}$. This is equivalent to the event $\{N^n_T < k\}$, since $N^n_T$ is integer-valued. On the other hand, by (2.2) and (2.1),
$$ \{N^n_T + 1 \le k\} = \{N^n_T < k\} = \{\tau^n_k > T\} = \left\{\sum_{i=0}^{k-1} \xi^n_{i+1} > T\right\} = \left\{\sum_{i=0}^{k-1}\left((\xi^n_{i+1})^p\right)^{1/p} > T\right\}. $$
Hence $\{N^n_T + 1 \le k\}$ depends only on $\xi^n_1, \dots, \xi^n_k$, which are the first $k$ terms of the sequence $(\xi^n_{i+1})^p$, $i = 0, 1, \dots, k-1$. This makes it a stopping time.

The next estimate, which is of independent interest as a general tool, states that we can estimate the expectations of suprema of i.i.d.
sets over various index sets, by bounding them above by sums whose expectations can be computed using Lemma 3.1.

Proposition 3.9. Let $X_i$, $i \ge 0$, be i.i.d. non-negative random variables, and let $\tau$ be a stopping time with respect to $\{X_i\}_{i\ge 0}$. Suppose that $E[X_i^q] < \infty$ for some $q > 1$. Then,
$$ E\left[\sup_{0\le i\le \tau} X_i\right] \le E[\tau+1]^{1/q}\, E[X_i^q]^{1/q}. $$

Proof. By Jensen's inequality,
$$ E\left[\sup_{0\le i\le \tau} X_i\right] \le E\left[\sup_{0\le i\le \tau} X_i^q\right]^{1/q} \le E\left[\sum_{i=0}^{\tau} X_i^q\right]^{1/q}, $$
where we used the fact that the $X_i$ are non-negative. Note that $\{X_i^q\}_{i\ge 0}$ is again an i.i.d. sequence generating the same filtration as $\{X_i\}_{i\ge 0}$. Thus $\tau$ remains a stopping time with respect to this sequence, and the proof follows directly from Lemma 3.1.

During our analysis, we shall repeatedly encounter a family of lower-order noise terms generated by the random sampling. We now establish the lower-order growth of these terms.

Lemma 3.10. Let
$$ \tilde N_p := \int_0^T \left(\xi^n_{N^n_s+1}\right)^p e^{\xi^n_{N^n_s+1}\tilde C}\, ds, $$
where $p > 0$ and $\tilde C$ is a constant depending on the matrices $A, B, K$, the time $T$ and $p$. Then we have
$$ E[\tilde N_p] \le \frac{1}{n^p}\,\frac{E[N^n_T]+2}{n}\, E\!\left[\xi_1^{p+1} e^{\xi^n_1\tilde C}\right]. $$

Proof. If the $\xi_i$ are i.i.d., then for any measurable function $f$, the $f(\xi_i)$ are also i.i.d. Thus,
$$ \int_0^T \left(\xi^n_{N^n_s+1}\right)^p e^{\xi^n_{N^n_s+1}\tilde C}\, ds = \sum_{k=0}^{N^n_T-1}\int_{\tau^n_k}^{\tau^n_{k+1}}\left(\xi^n_{k+1}\right)^p e^{\xi^n_{k+1}\tilde C}\, ds + \int_{\tau^n_{N^n_T}}^{T}\left(\xi^n_{N^n_s+1}\right)^p e^{\xi^n_{N^n_s+1}\tilde C}\, ds $$
$$ = \sum_{k=0}^{N^n_T-1}\left(\xi^n_{k+1}\right)^p e^{\xi^n_{k+1}\tilde C}\left(\tau^n_{k+1}-\tau^n_k\right) + \left(\xi^n_{N^n_T+1}\right)^p e^{\xi^n_{N^n_T+1}\tilde C}\left(T - \tau^n_{N^n_T}\right) $$
$$ \le \sum_{k=0}^{N^n_T}\left(\xi^n_{k+1}\right)^{p+1} e^{\xi^n_{k+1}\tilde C} \le \sum_{k=0}^{N^n_T+1}\left(\xi^n_{k+1}\right)^{p+1} e^{\xi^n_{k+1}\tilde C}. $$
Taking expectations and using Lemma 3.1, we obtain the required result.
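The $n^{-p}$ decay captured by this bound (with $\tilde C = 0$, i.e. for the integral $\int_0^T (s-\pi_n(s))^p\, ds$) can be checked numerically. The following sketch is illustrative only and not part of the paper's argument; the $\mathrm{Exp}(1)$ gap distribution and all names are chosen for the example. It estimates $E[N_1]$ at two sampling rates and confirms the predicted $1/n$ scaling for $p = 1$:

```python
import random

def n_p_integral(n, T, p=1):
    """One sample of N_p = integral over [0, T] of (s - pi_n(s))^p ds,
    where the sampling gaps are xi_i / n with xi_i ~ Exp(1)."""
    t, total = 0.0, 0.0
    while t < T:
        gap = random.expovariate(1.0) / n
        a = min(t + gap, T) - t          # truncate the final interval at T
        total += a ** (p + 1) / (p + 1)  # integral of s^p over [0, a]
        t += gap
    return total

random.seed(1)
T, runs = 10.0, 500
mean_n10 = sum(n_p_integral(10, T) for _ in range(runs)) / runs
mean_n100 = sum(n_p_integral(100, T) for _ in range(runs)) / runs
ratio = mean_n10 / mean_n100
print(ratio)  # roughly 10: tenfold faster sampling shrinks E[N_1] tenfold
```

For $\mathrm{Exp}(1)$ gaps the heuristic value is $E[N_1] \approx T\,E[\xi_1^2]/(2E[\xi_1]\,n) = T/n$, so the two estimates should differ by a factor of about ten.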
We remark that as $n \to \infty$, by Theorem 3.2 the right-hand side is asymptotically equivalent to a decay rate of $\frac{1}{n^p}\frac{E[\xi_1^{p+1}]}{E[\xi_1]}$. Taking $\tilde C = 0$ above, we have the following corollary.

Corollary 3.11. For any $p > 0$, define $N_p := \int_0^T (s - \pi_n(s))^p\, ds$. Then we have
$$ E[N_p] \le \frac{1}{n^p}\,\frac{E[N^n_T]+2}{n}\, E[\xi_1^{p+1}], $$
which likewise decays at the rate $\frac{1}{n^p}$, as in Lemma 3.10. Observe that in this case we use $(s - \pi_n(s)) \le \xi^n_{N^n_s+1}$.

Upgrading distributional convergence to convergence of moments (for instance, in Theorem 3.4) requires uniform integrability. The next two lemmas serve this purpose.

Lemma 3.12. Let $M_k := \sum_{i=1}^k (\xi_i - E[\xi_1]) = \tau^1_k - E[\xi_1]k$, $k = 1, 2, \dots$, be the discrete-time random walk associated to the increments $\xi_i - E[\xi_1]$. Then:
(a) For any stopping time $\tau'$ with respect to $M_k$,
$$ E\left[\sup_{k\le\tau'} |M_k|^6\right] \le C_3\, E[\tau'^4]^{1/2}\, E[\tau'^2]^{1/2}, $$
where $C_3$ depends only on $\xi_1$.
(b) For all even numbers $p$, we have $E[|M_k|^p] \le C_{p,\xi_1} k^{p/2}$ for all $k > 0$, with $C_{p,\xi_1}$ depending only on $p$ and $\xi_1$.

Proof. We will use the discrete-time BDG inequality (see [16, page 23] or [30, page 2]). Observe that $M_k$ is a sum of independent mean-zero random variables, and hence a martingale with respect to its own filtration. Applying the BDG inequality with $p = 6$, for any stopping time $\tau'$,
$$ E\left[\sup_{k\le\tau'} |M_k|^6\right] \le C_3\, E\left[\langle M, M\rangle^3_{\tau'}\right] \tag{3.1} $$
for a constant $C_3$ independent of $\xi_1$ and $\tau'$, where $\langle M, M\rangle$ is the quadratic variation of $M_k$. Let $t_i = M_i - M_{i-1} = \xi_i - E[\xi_i]$ be the martingale differences. The discrete-time quadratic variation (see [16, page 23]) satisfies
$$ E[\langle M, M\rangle_k] = E\left(\sum_{i\le k} t_i^2\right). $$
Therefore, we have
$$ E\left[\langle M, M\rangle^3_{\tau'}\right] = E\left[\left(\sum_{k\le\tau'}(\xi_k - E[\xi_k])^2\right)^3\right]. \tag{3.2} $$
We now expand and bound this expectation.
Note that b y Holder’s inequality , X k ≤ τ ′ ( ξ k − E [ ξ k ]) 2 ! 3 ≤ τ ′ 2 × X k ≤ τ ′ ( ξ k − E [ ξ k ]) 6 . Therefore, w e ha ve E   X k ≤ τ ′ ( ξ k − E [ ξ k ]) 2 ! 3   ≤ E " τ ′ 2 × X k ≤ τ ′ ( ξ k − E [ ξ k ]) 6 # ≤ E [ τ ′ 4 ] 1 2 × E   X k ≤ τ ′ ( ξ k − E [ ξ k ]) 6 ! 2   1 2 , (3.3) b y C-S inequalit y . W e are only left to b ound the final term ab o ve. Let a = X k ≤ τ ′ ( ξ k − E [ ξ k ]) 6 b = X k ≤ τ ′ E ( ξ k − E [ ξ k ]) 6 = τ ′ × E ( ξ k − E [ ξ k ]) 6 . 16 No w, observe that a 2 ≤ 4 b 2 + 4( a − b ) 2 , and that a − b = P k ≤ τ ′ [( ξ k − E [ ξ k ]) 6 − E ( ξ k − E [ ξ k ]) 6 ] is a stopp ed sum of i.i.d. zero mean random v ariables, on which W ald’s iden tity 3.1 can b e applied. Doing so giv es E   X k ≤ τ ′ ( ξ k − E [ ξ k ]) 6 ! 2   ≤ 4 E [ τ ′ 2 ][ E [( ξ k − E [ ξ k ]) 6 ]] 2 + 4 E [ τ ′ ]V ar[( ξ k − E [ ξ k ]) 6 ] ≤ C ξ 1 ( E [ τ ′ 2 ] + E [ τ ′ ]) ≤ E [ τ ′ 2 ] C ξ 1 . for a constant C ξ 1 dep ending only up on ξ 1 , where we used the fact that E [ τ ′ ] ≤ E [ τ ′ 2 ] since τ ′ is non-negativ e in teger v alued. Combining this with ( 3.1 ), ( 3.2 ) and ( 3.3 ) finishes the pro of. F or the other part, we first remark that for p = 2, the iden tity is immediate. F or p > 2 ev en, recall that M k is a martingale. Applying the discrete BDG inequality as w e did in ( 3.1 ), E [ | M k | p ] ≤ C p E [ ⟨ M k , M k ⟩ p/ 2 k ] . (3.4) T o b ound the righ t hand side, w e notice that by Holder’s inequalit y , E [ ⟨ M k , M k ⟩ p/ 2 k ] = E   X i ≤ k t 2 i ! p/ 2   ≤ E " k p/ 2 − 1 k X i =1 t p i # ≤ k p 2 C pξ 1 , where we used the linearit y of exp ectation in the last line. Com bining this with ( 3.4 ) completes the pro of. The preceding result on moments has an immediate application in the next Lemma. Lemma 3.13. F or any even numb er p , we have E [ | N 1 t | p ] ≤ C p t p for every t > 0 and a c onstant C p,ξ 1 dep ending only on p and ξ 1 . Pr o of. 
We use the tail-probability representation of moments:
$$ E[|N^1_t|^p] = p\int_0^\infty x^{p-1} P(N^1_t \ge x)\, dx = p\sum_{k=0}^\infty \int_k^{k+1} x^{p-1} P(N^1_t \ge k+1)\, dx = \sum_{k=0}^\infty P(N^1_t \ge k+1)\left((k+1)^p - k^p\right) \le C_p \sum_{k=0}^\infty P(N^1_t \ge k+1)\, k^{p-1} = C_p \sum_{k=0}^\infty P(\tau^1_{k+1} \le t)\, k^{p-1}. \tag{3.5} $$
It now suffices to bound $P(\tau^1_{k+1} \le t)$. We do so as follows: observe that if $t < (k+1)E[\xi_1]$ and $p$ is any natural number, then by Markov's inequality,
$$ P(\tau^1_{k+1} \le t) = P\left(\tau^1_{k+1} - (k+1)E[\xi_1] \le t - (k+1)E[\xi_1]\right) \le P\left(\left|\tau^1_{k+1} - (k+1)E[\xi_1]\right| \ge (k+1)E[\xi_1] - t\right) \le \frac{E\left[\left|\tau^1_{k+1} - (k+1)E[\xi_1]\right|^{2p+2}\right]}{\left((k+1)E[\xi_1] - t\right)^{2p+2}} = \frac{E\left[|M_{k+1}|^{2p+2}\right]}{\left((k+1)E[\xi_1] - t\right)^{2p+2}}, $$
where $M_k = \tau^1_k - kE[\xi_1]$. By Lemma 3.12 we have
$$ P(\tau^1_{k+1} \le t) \le \frac{C k^{p+1}}{\left|(k+1)E[\xi_1] - t\right|^{2p+2}} $$
for all $k$ such that $(k+1)E[\xi_1] > t$. In particular, if $(k+1) > \frac{2t}{E[\xi_1]}$, we have $(k+1)E[\xi_1] - t \ge \frac{1}{2}(k+1)E[\xi_1]$ and therefore
$$ P(\tau^1_{k+1} \le t) \le C_{p,\xi_1}\frac{k^{p+1}}{(k+1)^{2p+2}} \le C_{p,\xi_1} k^{-p-1}. $$
We apply this to (3.5) as follows: if $t \ge (k+1)E[\xi_1]$, we simply bound $P(\tau^1_{k+1} \le t) \le 1$. Therefore,
$$ E[|N^1_t|^p] \le C_p \sum_{k=0}^{\lceil t/E[\xi_1]-1\rceil} k^{p-1} + C_{p,\xi_1}\sum_{k>\lceil t/E[\xi_1]-1\rceil} k^{-p-1}\, k^{p-1} \le C_p \sum_{k=0}^{\lceil t/E[\xi_1]-1\rceil} k^{p-1} + C_{p,\xi_1}\sum_{k=1}^\infty k^{-2} \le C_{p,\xi_1} t^p, $$
where $C_{p,\xi_1}$ is independent of $t$. This completes the proof.

We are now ready to prove Theorem 2.1.

4 Proof of LLN-type Results

In this section, we prove Theorem 2.1. For this purpose, we first establish the following two supporting lemmas; the theorem will then be derived as a simple consequence.

Lemma 4.1. Let $Y_T = \sup_{0\le s\le T}|x^n_s - x_s|^p$.
Then for any $p \ge 1$, there exists $C^{ABK}_{Tp} > 0$, depending on $T$, $p$ and the matrices $A$, $B$ and $K$, such that $Y_T \le C^{ABK}_{Tp} N_p$, where $N_p = \int_0^T (s-\pi_n(s))^p\, ds$ is as in Corollary 3.11 and converges to zero.

Proof. Let $t \in [0,T]$ be arbitrary. Subtracting (2.4) from (2.6) and integrating over $[0,t]$, we get
$$ x^n_t - x_t = \int_0^t \left[A(x^n_s - x_s) + BK\left(x_s - x^n_{\pi_n(s)}\right)\right] ds. $$
Taking the $p$-th power of the absolute value of both sides, and applying first the triangle inequality and then the Cauchy-Schwarz inequality, we get
$$ |x^n_t - x_t|^p \le C_{Tp}\left(\int_0^t |A|^p |x^n_s - x_s|^p\, ds + \int_0^t |B|^p|K|^p\left|x_s - x^n_{\pi_n(s)}\right|^p ds\right). \tag{4.1} $$
For the second term,
$$ \left|x_s - x^n_{\pi_n(s)}\right|^p \le C_{Tp}\left(|x_s - x^n_s|^p + \left|x^n_s - x^n_{\pi_n(s)}\right|^p\right) \le C_{Tp}\left(Y_s + \left|x^n_s - x^n_{\pi_n(s)}\right|^p\right). $$
Using (2.6), we have
$$ x^n_s - x^n_{\pi_n(s)} = \int_{\pi_n(s)}^s \left(A x^n_r - BK x^n_{\pi_n(r)}\right) dr, $$
and thus
$$ \left|x^n_s - x^n_{\pi_n(s)}\right| \le \int_{\pi_n(s)}^s \left(|A||x^n_r| + |B|\cdot|K|\left|x^n_{\pi_n(r)}\right|\right) dr. $$
Since $\pi_n(r) \le r \le s$, Lemma 3.5 gives $|x^n_r| \le L'(s) \le C^{ABK}_T$ and $|x^n_{\pi_n(r)}| \le L'(s) \le C^{ABK}_T$. Hence
$$ \left|x^n_s - x^n_{\pi_n(s)}\right| \le C^{ABK}_T \int_{\pi_n(s)}^s dr = (s - \pi_n(s))\, C^{ABK}_T. $$
The second integral in (4.1) then becomes
$$ \int_0^t |B|^p|K|^p\left|x_s - x^n_{\pi_n(s)}\right|^p ds \le C_{Tp}\left(\int_0^t |B|^p|K|^p Y_s\, ds + \int_0^t |B|^p|K|^p (s-\pi_n(s))^p\, ds\right). $$
Plugging this estimate into (4.1) and taking the supremum over $t \in [0,T]$ on both sides, we get
$$ Y_T \le C^{ABK}_{Tp}\left(\int_0^T Y_s\, ds + \int_0^T (s-\pi_n(s))^p\, ds\right). $$
Finally, by Gronwall's lemma, $Y_T \le C^{ABK}_{Tp} N_p$.

Lemma 4.2. For any $p \ge 1$, there exists $C^{ABK}_{Tp} > 0$ such that
$$ E\left[\sup_{0\le t\le T}\left|X^{\varepsilon,n}_t - x^n_t\right|^p\right] \le C^{ABK}_{Tp}\varepsilon^p. $$

Proof. Subtracting (2.6) from (2.8),
$$ X^{\varepsilon,n}_t - x^n_t = \int_0^t A\left(X^{\varepsilon,n}_s - x^n_s\right) ds - \int_0^t BK\left(X^{\varepsilon,n}_{\pi_n(s)} - x^n_{\pi_n(s)}\right) ds + \varepsilon\int_0^t dW_s. $$
By the triangle inequality,
$$ \left|X^{\varepsilon,n}_t - x^n_t\right| \le \int_0^t |A|\left|X^{\varepsilon,n}_s - x^n_s\right| ds + \int_0^t |B|\cdot|K|\left|X^{\varepsilon,n}_{\pi_n(s)} - x^n_{\pi_n(s)}\right| ds + \varepsilon\left|\int_0^t dW_s\right|. $$
Define $S_t = \sup_{0\le s\le t}|X^{\varepsilon,n}_s - x^n_s|$. Since $\pi_n(s) \le s$, $|X^{\varepsilon,n}_{\pi_n(s)} - x^n_{\pi_n(s)}| \le S_s$. Thus
$$ S_T \le \int_0^T (|A| + |B|\cdot|K|)\, S_s\, ds + \varepsilon \sup_{0\le t\le T}\left|\int_0^t dW_s\right|. $$
Taking the $p$-th power on both sides and using Hölder's inequality, we get
$$ S_T^p \le C_{pT}\left(\int_0^T (|A| + |B|\cdot|K|)^p\, S_s^p\, ds + \varepsilon^p \sup_{0\le t\le T}\left|\int_0^t dW_s\right|^p\right). $$
Taking expectations and applying Lemma 3.6, we have
$$ E[S_T^p] \le C^{ABK}_{Tp}\left(\int_0^T E[S_s^p]\, ds + \varepsilon^p\right). $$
By Gronwall's lemma, we obtain $E[S_T^p] \le C^{ABK}_{Tp}\varepsilon^p$, i.e.,
$$ E\left[\sup_{0\le t\le T}\left|X^{\varepsilon,n}_t - x^n_t\right|^p\right] \le C^{ABK}_{Tp}\varepsilon^p. $$

We are now ready to prove Theorem 2.1 with the help of Lemmas 4.1 and 4.2.

Proof of Theorem 2.1. By the triangle inequality,
$$ E\left[\sup_{0\le s\le T}\left|X^{\varepsilon,n}_s - x_s\right|^p\right] \le C_p\left(E\left[\sup_{0\le s\le T}\left|X^{\varepsilon,n}_s - x^n_s\right|^p\right] + E\left[\sup_{0\le s\le T}|x^n_s - x_s|^p\right]\right). $$
From Lemmas 4.1 and 4.2, we obtain
$$ E\left[\sup_{0\le s\le T}\left|X^{\varepsilon,n}_s - x_s\right|^p\right] \le C^{ABK}_{Tp} E[N_p] + C^{ABK}_{Tp}\varepsilon^p \le C^{ABK}_{Tp}\left(E[N_p] + \varepsilon^p\right), $$
which is the desired result.

We now turn to the proof of the second principal result of this paper, rigorously stated in Theorem 2.2. The following section is devoted entirely to this proof.

5 Proof of the CLT, Theorem 2.2

The main focus of this section is to prove Theorem 2.2 with the help of various auxiliary lemmas. Recall that by (2.15), the key term to analyze is the rescaled fluctuation process
$$ \int_0^t \frac{X^{\varepsilon,n}_s - X^{\varepsilon,n}_{\pi_n(s)}}{\varepsilon}\, ds. $$
We first simplify this expression, decomposing it into a sampling component and a component driven by the white noise.
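Before carrying out this decomposition, we note that the LLN-type convergence just proved lends itself to a quick numerical sanity check. The sketch below is illustrative only and not part of the proof: it uses a scalar system with arbitrary values $A = 1$, $BK = 3$, no noise, $\mathrm{Exp}(1)$ sampling gaps scaled by $1/n$, and a simple Euler discretization, and checks that the randomly sampled trajectory $x^n$ tracks the ideal trajectory $x_t = e^{(A-BK)t}x_0$ more closely as $n$ grows.

```python
import math
import random

def sup_error(n, a=1.0, bk=3.0, x0=1.0, T=2.0, dt=1e-3):
    """Sup-norm gap between the randomly sampled trajectory x^n and the
    ideal trajectory x_t = exp((a - bk) t) x0, scalar case, Euler scheme."""
    random.seed(42)
    x, held, t, next_sample, err = x0, x0, 0.0, 0.0, 0.0
    while t < T:
        if t >= next_sample:
            held = x                                       # feedback samples the state
            next_sample = t + random.expovariate(1.0) / n  # gap xi / n, xi ~ Exp(1)
        x += (a * x - bk * held) * dt
        t += dt
        err = max(err, abs(x - x0 * math.exp((a - bk) * t)))
    return err

coarse, fine = sup_error(10), sup_error(200)
print(coarse, fine)  # the gap shrinks as the sampling rate n increases
```

All constants here are hypothetical choices for the illustration, not values from the paper; the point is only the qualitative shrinkage of the sup-norm error as the sampling frequency grows.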
To this end, let $\mathcal{M} = \{\mathcal{M}_t : 0 \le t < \infty\}$ be the process defined by
$$ \mathcal{M}_t := \int_0^t e^{-sA}\, dW_s = e^{-tA}W_t + \int_0^t e^{-sA}AW_s\, ds. \tag{5.1} $$
Note that $\mathcal{M}_t$ is an $\{\mathcal{F}_t\}$-martingale, being a stochastic integral.

Lemma 5.1. For $\varepsilon, 1/n \in (0,1)$ and $t \in [0,T]$, we have
$$ \frac{1}{\varepsilon}\int_0^t \left(X^{\varepsilon,n}_s - X^{\varepsilon,n}_{\pi_n(s)}\right) ds =: L^{\varepsilon,n}_1(t) + L^{\varepsilon,n}_2(t), \tag{5.2} $$
where
$$ L^{\varepsilon,n}_1(t) = \frac{1}{\varepsilon}\int_0^t \left(e^{(s-\pi_n(s))A} - I\right)\left(I - A^{-1}BK\right) X^{\varepsilon,n}_{\pi_n(s)}\, ds, \qquad L^{\varepsilon,n}_2(t) = \int_0^t e^{sA}\left(\mathcal{M}_s - \mathcal{M}_{\pi_n(s)}\right) ds. $$

Proof. Since $A$ is invertible, for any $a < b$ we have
$$ \int_a^b e^{-sA}\, ds = -\left(e^{-bA} - e^{-aA}\right)A^{-1}, $$
which gives
$$ \int_{\tau_k}^t e^{(t-s)A}BK\, ds = -e^{tA}\left(e^{-tA} - e^{-\tau_k A}\right)A^{-1}BK = \left(e^{(t-\tau_k)A} - I\right)A^{-1}BK. $$
Therefore,
$$ e^{(t-\tau_k)A} - \int_{\tau_k}^t e^{(t-s)A}BK\, ds - I = \left(e^{(t-\tau_k)A} - I\right)\left(I - A^{-1}BK\right). $$
Let $t \in [\tau^n_k, \tau^n_{k+1})$ for some $k \in \mathbb{N}$. On this interval, $X^{\varepsilon,n}_{\pi_n(s)} = X^{\varepsilon,n}_{\tau^n_k}$ is fixed. From equation (2.13), we have
$$ X^{\varepsilon,n}_t = e^{(t-\tau^n_k)A}X^{\varepsilon,n}_{\tau^n_k} - \int_{\tau^n_k}^t e^{(t-s)A}BK\, X^{\varepsilon,n}_{\tau^n_k}\, ds + \varepsilon\int_{\tau^n_k}^t e^{(t-s)A}\, dW_s. $$
Subtracting $X^{\varepsilon,n}_{\tau^n_k}$ from both sides, we get
$$ X^{\varepsilon,n}_t - X^{\varepsilon,n}_{\tau^n_k} = \left[e^{(t-\tau^n_k)A} - \int_{\tau^n_k}^t e^{(t-s)A}BK\, ds - I\right]X^{\varepsilon,n}_{\tau^n_k} + \varepsilon e^{tA}\int_{\tau^n_k}^t e^{-sA}\, dW_s = \left(e^{(t-\tau^n_k)A} - I\right)\left(I - A^{-1}BK\right)X^{\varepsilon,n}_{\tau^n_k} + \varepsilon e^{tA}\left(\mathcal{M}_t - \mathcal{M}_{\tau^n_k}\right). \tag{5.3} $$
Now, for any $t \ge 0$ we can write
$$ X^{\varepsilon,n}_t - X^{\varepsilon,n}_{\pi_n(t)} = \sum_{k\ge 0}\mathbf{1}_{[\tau^n_k,\tau^n_{k+1})}(t)\left(X^{\varepsilon,n}_t - X^{\varepsilon,n}_{\tau^n_k}\right), $$
and so
$$ \int_0^t \left(X^{\varepsilon,n}_s - X^{\varepsilon,n}_{\pi_n(s)}\right) ds = \sum_{k\ge 0}\int_0^t \mathbf{1}_{[\tau^n_k,\tau^n_{k+1})}(s)\left(X^{\varepsilon,n}_s - X^{\varepsilon,n}_{\tau^n_k}\right) ds. $$
Since $N^n_T < \infty$ almost surely for all $n$, by (2.3) and Theorem 3.2, the sum above is almost surely finite.
Therefore the sum can be exchanged with the integral, leading to
$$ \sum_{k\ge 0}\int_0^t \mathbf{1}_{[\tau^n_k,\tau^n_{k+1})}(s)\left[\left(e^{(s-\tau^n_k)A} - I\right)\left(I - A^{-1}BK\right)X^{\varepsilon,n}_{\tau^n_k} + \varepsilon e^{sA}\left(\mathcal{M}_s - \mathcal{M}_{\tau^n_k}\right)\right] ds = \int_0^t \left[\left(e^{(s-\pi_n(s))A} - I\right)\left(I - A^{-1}BK\right)X^{\varepsilon,n}_{\pi_n(s)} + \varepsilon e^{sA}\left(\mathcal{M}_s - \mathcal{M}_{\pi_n(s)}\right)\right] ds, $$
by applying (5.3), and the proof is complete.

We now provide two key propositions on $L^{\varepsilon,n}_1(t)$ and $L^{\varepsilon,n}_2(t)$. By combining the results obtained for $L^{\varepsilon,n}_1$ and $L^{\varepsilon,n}_2$, we will then be able to establish Theorem 2.2.

Proposition 5.2. There exists $\varepsilon_0$ such that for any $0 < \varepsilon < \varepsilon_0$, we have
$$ E\left[\sup_{0\le t\le T}\left|L^{\varepsilon,n}_1(t) - \ell(t)\right|\right] \le \left(c\left(n^{-1/2}(n^{-1}+\varepsilon) + n^{-1/4}\right) + M\kappa(\varepsilon)\right)C^{ABK}_{T\xi_1}, $$
where the function $\ell(t)$ is given by (2.17).

Proposition 5.3. For $0 < \varepsilon < \varepsilon_0$, the term $L^{\varepsilon,n}_2$ decays at the rate $n^{-1/2}$; that is, we have
$$ E\left[\sup_{0\le t\le T}\left|L^{\varepsilon,n}_2(t)\right|\right] \le \frac{1}{\sqrt n}\, C^{A}_{T\xi_1}. $$

Proof of Theorem 2.2. The theorem follows by combining Propositions 5.2 and 5.3.

In Subsections 5.1 and 5.2, we establish Propositions 5.2 and 5.3, respectively. We begin in the following subsection with the term $L^{\varepsilon,n}_1$, which represents the random sampling part and leads to Proposition 5.2. Since its limiting behavior cannot be derived in a straightforward manner, we decompose $L^{\varepsilon,n}_1$ into several auxiliary terms and then demonstrate that this sequence converges to the function $\ell$. In the subsequent subsection, we turn to $L^{\varepsilon,n}_2$ and establish that it vanishes in the limit, which yields Proposition 5.3. Accordingly, we now proceed with a detailed analysis of the random sampling component.
5.1 The Decomposition of the Random Sampling Part: Proof of Proposition 5.2

We decompose $L^{\varepsilon,n}_1$ in the following way:
$$ L^{\varepsilon,n}_1 = \frac{1}{\varepsilon n}\int_0^t \left(\frac{e^{(s-\pi_n(s))A} - I}{1/n}\right)(I - A^{-1}BK)\, X^{\varepsilon,n}_{\pi_n(s)}\, ds $$
$$ = \frac{1}{\varepsilon n}\int_0^t \left(\frac{e^{(s-\pi_n(s))A} - I}{1/n}\right)(I - A^{-1}BK)\left(X^{\varepsilon,n}_{\pi_n(s)} - x_{\pi_n(s)}\right) ds \tag{5.4} $$
$$ + \frac{1}{\varepsilon n}\int_0^t \left(\frac{e^{(s-\pi_n(s))A} - I - (s-\pi_n(s))A}{1/n}\right)(I - A^{-1}BK)\, x_{\pi_n(s)}\, ds \tag{5.5} $$
$$ + \frac{1}{\varepsilon n}\int_0^t \frac{(s-\pi_n(s))A}{1/n}(I - A^{-1}BK)\left(x_{\pi_n(s)} - x(s)\right) ds \tag{5.6} $$
$$ + \frac{1}{\varepsilon n}\int_0^t \left(\frac{(s-\pi_n(s))A}{1/n} - MA\right)(I - A^{-1}BK)\, x(s)\, ds \tag{5.7} $$
$$ + \left(\frac{1}{\varepsilon n} - c\right)\int_0^t M(A - BK)\, x(s)\, ds \tag{5.8} $$
$$ + c\int_0^t M(A - BK)\, x(s)\, ds =: \sum_{i=1}^5 G_i + c\int_0^t M(A - BK)\, x(s)\, ds = \sum_{i=1}^5 G_i + \ell(t). $$
That is,
$$ L^{\varepsilon,n}_1(t) - \ell(t) = \sum_{i=1}^5 G_i, \tag{5.9} $$
where the functions $G_i$, $i = 1, 2, \dots, 5$, are defined by the expressions (5.4)-(5.8), respectively. We now estimate each of the quantities $G_i$ individually and, through a careful and detailed analysis, derive the following results.

Lemma 5.4. The term $G_1$ decays at the rate $n^{-1/2}(n^{-1}+\varepsilon)$; i.e.,
$$ E\left[\sup_{0\le t\le T}|G_1(t)|\right] \le \frac{1}{\varepsilon}C^{ABK}_T\left(E[N_2] + \varepsilon^2\right)^{1/2}\left(E[\tilde N_3]\right)^{1/2} \le c\, n^{-1/2}(n^{-1}+\varepsilon)\, C^{ABK}_{T\xi_1}. $$

Lemma 5.5. The term $G_2$ decays at the rate $n^{-1}$; i.e.,
$$ E\left[\sup_{0\le t\le T}|G_2(t)|\right] \le \frac{n}{\varepsilon}C^{ABK}_T\, E[\tilde N_3] \le c\, n^{-1}\, C^{ABK}_{T\xi_1}. $$

Lemma 5.6. The term $G_3$ decays at the rate $n^{-1}$; i.e.,
$$ E\left[\sup_{0\le t\le T}|G_3(t)|\right] \le \frac{C^{ABK}_T}{\varepsilon}\, E[N_2] \le c\, n^{-1}\, C^{ABK}_{T\xi_1}. $$

Lemma 5.7. The term $G_4$ decays at the rate $n^{-1/4}$; i.e.,
$$ E\left[\sup_{0\le t\le T}|G_4(t)|\right] \le c\, n^{-1/4}\, C^{ABK}_{T\xi_1}. $$

Lemma 5.8. The term $G_5$ decays at the rate $\kappa(\varepsilon)$. That is, there exists an $\varepsilon_0 > 0$ such that for any $0 < \varepsilon < \varepsilon_0$, we have
$$ \sup_{0\le t\le T}|G_5(t)| \le M\kappa(\varepsilon)\, C^{ABK}_T. $$

Proof of Proposition 5.2. Combining the results obtained in Lemmas 5.4-5.8, we get the required result.
Of the aforementioned lemmas, Lemmas 5.4-5.6 are established by suitably adapting the analytical framework developed in [8]. Their proofs require careful modifications to accommodate the presence of the random sampling terms. This extension is feasible because the quantity $M$ associated with the random sampling mechanism remains unaffected in these estimates. In contrast, the proof of Lemma 5.7 is considerably more involved and is carried out through a sequence of carefully structured steps. The proof of Lemma 5.8 follows directly from (2.11) and (2.5).

We now proceed to prove Lemmas 5.4-5.8. Each of these lemmas provides the necessary estimate for one of the quantities $G_i$; we therefore treat the terms separately and derive the required estimates step by step. For clarity, the proofs are organized into distinct subsubsections, each devoted exclusively to the analysis of a single $G_i$. The proof of Lemma 5.7, being the most technically involved, is deferred until after the proof of Lemma 5.8.

5.1.1 The $G_1$ Term

We now prove Lemma 5.4. In a nutshell, the key observation is that $(X^{\varepsilon,n}_{\pi_n(s)} - x_{\pi_n(s)})$ decays fast enough, by Theorem 2.1, for the term to converge to $0$.

Proof of Lemma 5.4. Define
$$ f_n(s) := \frac{e^{(s-\pi_n(s))A} - I}{1/n}. $$
We have $s - \pi_n(s) = s - \tau^n_{N^n_s}$. Using the fact that $\tau^n_{N^n_s} \le s \le \tau^n_{N^n_s+1}$, we have $s - \tau^n_{N^n_s} \le \tau^n_{N^n_s+1} - \tau^n_{N^n_s} = \xi^n_{N^n_s+1}$, and therefore
$$ |f_n(s)| = n\left|\int_0^{s-\pi_n(s)} A e^{rA}\, dr\right| \le n|A|\int_0^{\xi^n_{N^n_s+1}}\left|e^{rA}\right| dr \le n\,\xi^n_{N^n_s+1}|A|\, e^{\xi^n_{N^n_s+1}|A|}. \tag{5.10} $$
Consider the term $G_1$ defined in equation (5.4) and let $z_s = X^{\varepsilon,n}_{\pi_n(s)} - x_{\pi_n(s)}$. Then $G_1$ becomes
$$ G_1(t) = \frac{1}{\varepsilon n}\int_0^t f_n(s)(I - A^{-1}BK)\, z_s\, ds. $$
Taking the supremum over $t \in [0,T]$ of the absolute value of $G_1$, and then taking expectations, we obtain
$$ E\left[\sup_{0\le t\le T}|G_1(t)|\right] \le \frac{1}{\varepsilon n}E\left[\sup_{0\le t\le T}\left|\int_0^t f_n(s)(I - A^{-1}BK)z_s\, ds\right|\right] \le \frac{1}{\varepsilon n}E\left[\sup_{0\le t\le T}\int_0^t |f_n(s)|\cdot|I - A^{-1}BK|\cdot|z_s|\, ds\right] $$
$$ \le \frac{1}{\varepsilon n}E\left[\sup_{0\le t\le T}\left(|I - A^{-1}BK|\sup_{0\le r\le t}|z_r|\int_0^t |f_n(s)|\, ds\right)\right] \le \frac{1}{\varepsilon}|I - A^{-1}BK|\cdot|A|\left[E\left(\sup_{0\le t\le T}|z_t|\right)^2\right]^{1/2}\left[E\left(\int_0^T \xi^n_{N^n_s+1} e^{\xi^n_{N^n_s+1}|A|}\, ds\right)^2\right]^{1/2}, $$
by the Cauchy-Schwarz inequality and (5.10). For the second factor, applying Cauchy-Schwarz again,
$$ \left(\int_0^T \xi^n_{N^n_s+1} e^{\xi^n_{N^n_s+1}|A|}\, ds\right)^2 \le T\int_0^T \left(\xi^n_{N^n_s+1}\right)^2 e^{2\xi^n_{N^n_s+1}|A|}\, ds. $$
For the first factor, Theorem 2.1 with $p = 2$ gives
$$ E\left[\sup_{0\le t\le T}|z_t|^2\right] = E\left[\sup_{0\le t\le T}\left|X^{\varepsilon,n}_{\pi_n(t)} - x_{\pi_n(t)}\right|^2\right] \le C^{ABK}_T\left(E[N_2] + \varepsilon^2\right). $$
Combining the above results and calculations, we have
$$ E\left[\sup_{0\le t\le T}|G_1(t)|\right] \le \frac{1}{\varepsilon}C^{ABK}_T\left(E[N_2] + \varepsilon^2\right)^{1/2}\left(E[\tilde N_3]\right)^{1/2} \le c\, n^{-1/2}(n^{-1}+\varepsilon)\, C^{ABK}_{T\xi_1}, $$
by Lemma 3.10 and its Corollary 3.11.

5.1.2 The $G_2$ Term

In this subsection we prove Lemma 5.5, which provides an estimate for the expression (5.5). The key idea is that the term
$$ \frac{e^{(s-\pi_n(s))A} - I - (s-\pi_n(s))A}{1/n}, $$
being the remainder of a Taylor series, decays fast enough in $n$ by Lemma 3.7.

Proof of Lemma 5.5. We have
$$ G_2(t) = \frac{1}{\varepsilon n}\int_0^t \left(\frac{e^{(s-\pi_n(s))A} - I - (s-\pi_n(s))A}{1/n}\right)(I - A^{-1}BK)\, x_{\pi_n(s)}\, ds. $$
Let
$$ g_n(s) := \left(\frac{e^{sA} - I - sA}{1/n}\right)(I - A^{-1}BK). $$
It is then straightforward to see that
$$ \int_0^t \left(\frac{e^{(s-\pi_n(s))A} - I - (s-\pi_n(s))A}{1/n}\right)(I - A^{-1}BK)\, x_{\pi_n(s)}\, ds = \sum_{k=0}^{N^n_t-1}\left(\int_0^{\xi^n_{k+1}} g_n(s)\, ds\right)x_{\tau^n_k} + \left(\int_0^{t-\pi_n(t)} g_n(s)\, ds\right)x_{\pi_n(t)} $$
$$ \le \sup_{0\le t\le T}\left[\sum_{k=0}^{N^n_t-1}\left|\int_0^{\xi^n_{k+1}} g_n(s)\, ds\right|\left|x_{\tau^n_k}\right| + \left|\int_0^{t-\pi_n(t)} g_n(s)\, ds\right|\left|x_{\pi_n(t)}\right|\right]. $$
A direct calculation yields
$$ \int_0^r g_n(s)\, ds = nA^{-1}\left(e^{rA} - I - rA - \frac{r^2A^2}{2}\right)(I - A^{-1}BK), $$
so
$$ \left|\int_0^r g_n(s)\, ds\right| \le n|A^{-1}|\cdot\left|e^{rA} - I - rA - \frac{r^2A^2}{2}\right|\cdot|I - A^{-1}BK| = n|A^{-1}|\cdot\left|\sum_{k=3}^\infty \frac{r^kA^k}{k!}\right|\cdot|I - A^{-1}BK| \le n\,\frac{r^3|A|^3}{6}\,|A^{-1}|\, e^{r|A|}\, |I - A^{-1}BK|. \tag{5.11} $$
In (5.11), we used Lemma 3.7. Putting $r = \xi^n_{k+1}$ gives
$$ \left|\int_0^{\xi^n_{k+1}} g_n(s)\, ds\right| \le nC^{ABK}\,\frac{(\xi^n_{k+1})^3}{6}\, e^{\xi^n_{k+1}|A|}, \tag{5.12} $$
and putting $r = t - \pi_n(t)$ gives
$$ \left|\int_0^{t-\pi_n(t)} g_n(s)\, ds\right| \le nC^{ABK}\,\frac{(t-\pi_n(t))^3}{6}\, e^{(t-\pi_n(t))|A|}. \tag{5.13} $$
Using the estimates (5.12) and (5.13),
$$ \sup_{0\le t\le T}\left|\int_0^t \left(\frac{e^{(s-\pi_n(s))A} - I - (s-\pi_n(s))A}{1/n}\right)(I - A^{-1}BK)\, x_{\pi_n(s)}\, ds\right| \le nC^{ABK}_T\sup_{0\le t\le T}\left[\sum_{k=0}^{N^n_t-1}\frac{(\xi^n_{k+1})^3}{6}e^{\xi^n_{k+1}|A|} + \frac{(t-\pi_n(t))^3}{6}e^{(t-\pi_n(t))|A|}\right]\sup_{0\le t\le T}|x(t)| \le nC^{ABK}_T\sum_{k=0}^{N^n_T}(\xi^n_{k+1})^3 e^{\xi^n_{k+1}|A|}, \tag{5.14} $$
by Lemma 3.5. Now, (5.14) and Lemma 3.10 ensure that
$$ E\left[\sup_{0\le t\le T}|G_2(t)|\right] \le \frac{n}{\varepsilon}C^{ABK}_T\, E\left[\sum_{k=0}^{N^n_T}(\xi^n_{k+1})^3 e^{\xi^n_{k+1}|A|}\right] \le \frac{n}{\varepsilon}C^{ABK}_T\, E[\tilde N_3] \le c\, n^{-1}\, C^{ABK}_{T\xi_1}. $$

5.1.3 The $G_3$ Term

In this subsection we prove Lemma 5.6, which provides an estimate for the expression (5.6). The key idea is to control the term $|x_{\pi_n(s)} - x_s|$ using Lemma 4.1.

Proof of Lemma 5.6.
From equation (5.6), we have
$$ E\left[\sup_{0\le t\le T}|G_3(t)|\right] \le \frac{|A-BK|}{\varepsilon n}E\left[\int_0^T \frac{s-\pi_n(s)}{1/n}\cdot\left|x_{\pi_n(s)} - x(s)\right| ds\right] \le \frac{|A-BK|}{\varepsilon}E\left[\sup_{0\le t\le T}\left|x_{\pi_n(t)} - x(t)\right|\int_0^T (s-\pi_n(s))\, ds\right] $$
$$ \le \frac{|A-BK|}{\varepsilon}\left(E\left[\sup_{0\le t\le T}\left|x_{\pi_n(t)} - x(t)\right|^2\right]\right)^{1/2}\left(E\left[\left(\int_0^T (s-\pi_n(s))\, ds\right)^2\right]\right)^{1/2} \le \frac{|A-BK|}{\varepsilon}\left(E\left[\sup_{0\le t\le T}\left|x_{\pi_n(t)} - x(t)\right|^2\right]\right)^{1/2}\left(E\left[T\int_0^T (s-\pi_n(s))^2\, ds\right]\right)^{1/2}, $$
by applying the Cauchy-Schwarz inequality twice. For the first factor, we can use Lemma 4.1 with $p = 2$, which gives
$$ E\left[\sup_{0\le t\le T}\left|x_{\pi_n(t)} - x(t)\right|^2\right] \le C^{ABK}_T\, E[N_2]. $$
For the second factor, by applying Corollary 3.11, we ultimately obtain
$$ E\left[\sup_{0\le t\le T}|G_3(t)|\right] \le \frac{C^{ABK}_T}{\varepsilon}\, E[N_2] \le c\, n^{-1}\, C^{ABK}_{T\xi_1}. $$

5.1.4 The $G_5$ Term

Proof of Lemma 5.8. We have
$$ G_5 = \left(\frac{1}{\varepsilon n} - c\right)\int_0^t M(A-BK)\, x(s)\, ds. $$
Therefore,
$$ \sup_{0\le t\le T}|G_5(t)| \le \left|\frac{1}{\varepsilon n} - c\right|\sup_{0\le t\le T}\left|\int_0^t M(A-BK)\, x(s)\, ds\right| \le \left|\frac{1}{\varepsilon n} - c\right| M|A-BK|\int_0^T |x(s)|\, ds \le \left|\frac{1}{\varepsilon n} - c\right| MT|A-BK|\sup_{0\le s\le T}|x(s)|. $$
There exists $\varepsilon_0 > 0$ such that for any $0 < \varepsilon < \varepsilon_0$, we obtain
$$ \sup_{0\le t\le T}|G_5(t)| \le C^{ABK}_T M\kappa(\varepsilon), $$
by applying Definition 2.11 together with Lemma 3.5.

We are now left only with Lemma 5.7, which we address in the following subsubsection.

5.1.5 The $G_4$ Term

The proof of Lemma 5.7 consists of several intermediate steps. To elaborate on these steps, it is convenient to recall the definition of $G_4$ from (5.7):
$$ G_4(t) = \frac{1}{\varepsilon n}\int_0^t \left(\frac{(s-\pi_n(s))A}{1/n} - MA\right)(I - A^{-1}BK)\, x_s\, ds. $$
The key idea is to replace $(I - A^{-1}BK)x_s$ by an arbitrary smooth bounded function and to obtain the following lemma.

Lemma 5.9. Let $f(x_s) := f(s)$ be a smooth, bounded function whose second derivative grows at most exponentially in time.
Then, for any finite $T < \infty$, we have
$$ E\left[\sup_{0\le t\le T}\left|\int_0^t \frac{s-\pi_n(s)}{1/n}\, f(s)\, ds - \int_0^t M f(s)\, ds\right|\right] \le n^{-1/4}\, C_{\xi_1 fT}. \tag{5.15} $$

Remark 5.10. Let $x_t$ denote the solution of system (2.4) with initial condition $x_0$. Then one example of such an $f(s) = f(x_s)$ is
$$ f(x_s) = (I - A^{-1}BK)x_s = (I - A^{-1}BK)e^{(A-BK)s}x_0 =: f(s). $$
In what follows, we apply Taylor's theorem to the mapping $s \mapsto f(x_s)$. For notational simplicity, we treat $f(x_s)$ as a function depending only on time; that is, we write $f(x_s) = f(s)$, with the understanding that the time dependence arises through the trajectory $x_s$. This convention will be used throughout without further mention.

The proof of Lemma 5.9 is based on a suitable decomposition of the integral
$$ \int_0^t n(s-\pi_n(s))\, f(s)\, ds $$
into several terms that will be treated separately. Instead of working with the full expression at this stage, we focus on outlining the main steps of the decomposition argument and the techniques involved. Each term in the decomposition is estimated separately; combining all the estimates obtained for the individual terms then concludes the proof of Lemma 5.9, and consequently the proof of Lemma 5.7.

Sketch of the proof of Lemma 5.9. We now present the sketch of the proof. Before explaining the argument, we point out that the first step involves a Taylor expansion. By the second-order mean value theorem, for each $n \in \mathbb{N}$, $k \in \mathbb{Z}_+$ and $r > 0$ there exists $\eta_k \in [\tau^n_k, \tau^n_{k+1}]$ such that
$$ \int_0^r ns f(\tau^n_k + s)\, ds = \frac{nr^2}{2}f(\tau^n_k) + \frac{nr^3}{6}\left(2f'(\eta_k) + \tilde\eta_k f''(\eta_k)\right), \tag{5.16} $$
where $\tilde\eta_k = \eta_k - \tau^n_k$, $f' = \frac{df}{ds}$ and $f'' = \frac{d^2f}{ds^2}$. The integral $\int_0^t n(s-\pi_n(s))f(s)\, ds$, for any $t \in [0,T]$, will asymptotically simplify in the following way.
Below, $\eta, \eta_k \in [0,T]$ are random variables arising from remainder terms in the respective Taylor expansions, while terms that asymptotically converge to $0$ are indicated in square brackets, along with the corresponding lemma establishing this. In the second line below, we used (5.16) for $r = \xi^n_{k+1}$ and $r = t - \pi_n(t)$.
$$ \int_0^t \frac{s-\pi_n(s)}{1/n}f(s)\, ds = \sum_{k=0}^{N^n_t-1}\int_0^{\xi^n_{k+1}} ns f(\tau^n_k+s)\, ds + \int_0^{t-\pi_n(t)} ns f(\pi_n(t)+s)\, ds $$
$$ = \sum_{k=0}^{N^n_t-1}\frac{n(\xi^n_{k+1})^2}{2}f(\tau^n_k) + \frac{n(t-\pi_n(t))^2}{2}f(\pi_n(t)) + \Big[\sum_{k=0}^{N^n_t-1}\frac{n(\xi^n_{k+1})^3}{6}\left(2f'(\eta_k)+\tilde\eta_k f''(\eta_k)\right) + \frac{n(t-\pi_n(t))^3}{6}\left(2f'(\eta)+(\eta-\pi_n(t))f''(\eta)\right)\Big]_{\to 0,\ \text{by Lemma 5.11}} $$
$$ \approx \sum_{k=0}^{N^n_t-1}\frac{n}{2}(\xi^n_{k+1})^2 f(\tau^n_k) + \Big[\frac{n}{2}(t-\pi_n(t))^2 f(\pi_n(t))\Big]_{\to 0,\ \text{by Lemma 5.12}} $$
$$ \approx \sum_{k=0}^{N^n_t-1}\frac{n}{2}(\xi^n_{k+1})^2 f(\tau^n_k) = \sum_{k=0}^{nt/E[\xi_1]-1}\frac{n}{2}(\xi^n_{k+1})^2 f(\tau^n_k) + \Big[\sum_{k\in(nt/E[\xi_1]-1,\,N^n_t-1)}\frac{n}{2}(\xi^n_{k+1})^2 f(\tau^n_k)\Big]_{\to 0,\ \text{by Lemma 5.16}} $$
$$ \approx \sum_{k=0}^{nt/E[\xi_1]-1}\frac{n}{2}(\xi^n_{k+1})^2 f\!\left(\frac{kE[\xi_1]}{n}\right) + \Big[\sum_{k=0}^{\lceil nt/E[\xi_1]\rceil-1}\frac{n}{2}(\xi^n_{k+1})^2\left(f(\tau^n_k) - f\!\left(\frac{kE[\xi_1]}{n}\right)\right)\Big]_{\to 0,\ \text{by Lemma 5.17}} $$
$$ \approx \sum_{k=0}^{nt/E[\xi_1]-1}\frac{n}{2}E\!\left[(\xi^n_{k+1})^2\right] f\!\left(\frac{kE[\xi_1]}{n}\right) + \Big[\sum_{k=0}^{\lceil nt/E[\xi_1]\rceil-1}\frac{n}{2}\left((\xi^n_{k+1})^2 - E[(\xi^n_{k+1})^2]\right) f\!\left(\frac{kE[\xi_1]}{n}\right)\Big]_{\to 0,\ \text{by Lemma 5.18}} $$
$$ \approx \sum_{k=0}^{nt/E[\xi_1]-1}\frac{n}{2}E\!\left[(\xi^n_{k+1})^2\right] f\!\left(\frac{kE[\xi_1]}{n}\right) \approx \int_0^t M f(s)\, ds, \quad \text{by Lemma 5.19}. $$
The next step is a detailed examination of the decomposition introduced above. Our objective is to derive precise estimates for each individual term and to demonstrate that the contributions appearing in the square brackets are negligible in the asymptotic regimes under consideration.
This is accomplished through a sequence of auxiliary results, namely Lemmas 5.11–5.19. Once these estimates are established and combined, Lemma 5.9 follows as a direct consequence. We now proceed by analyzing the bracketed terms one by one.
The first contribution is treated in Lemma 5.11. Fundamentally, the asymptotic decay of this term is due to the appearance of the cubes of the inter-renewal times $(\xi^n_{k+1})^3$. It arises as a Taylor remainder, which naturally leads to the following result.
Lemma 5.11. Let $t \le T$, $\eta_k \in (\tau^n_k, \tau^n_{k+1})$ for each $k \in \{0, 1, \ldots, N^n_t - 1\}$, and $\eta \in (t-\pi_n(t), t)$. Then,
$$\mathbb{E}\Big[\sup_{0\le t\le T}\Big|\sum_{k=0}^{N^n_t-1}\frac{n(\xi^n_{k+1})^3}{6}\big(2f'(\eta_k)+\tilde\eta_k f''(\eta_k)\big)+\frac{n(t-\pi_n(t))^3}{6}\big(2f'(\eta)+(\eta-\pi_n(t))f''(\eta)\big)\Big|\Big] \le \frac{C_{\xi_1 fT}}{n}.$$
Proof. First, note that $\eta_k, \eta \in [0,T]$. Therefore,
$$\max\big\{|2f'(\eta_k)+\tilde\eta_k f''(\eta_k)|,\ |2f'(\eta)+(\eta-\pi_n(t))f''(\eta)|\big\} \le C_{fT}$$
for some constant $C_{fT}$ depending only on $f$ and $T$. Using this, for any $t \in [0,T]$,
$$\Big|\sum_{k=0}^{N^n_t-1}\frac{n(\xi^n_{k+1})^3}{6}\big(2f'(\eta_k)+\tilde\eta_k f''(\eta_k)\big)+\frac{n(t-\pi_n(t))^3}{6}\big(2f'(\eta)+(\eta-\pi_n(t))f''(\eta)\big)\Big| \le C_{fT}\Big(\sum_{k=0}^{N^n_t-1}\frac{n(\xi^n_{k+1})^3}{6}+\frac{n(t-\pi_n(t))^3}{6}\Big) \le C_{fT}\sum_{k=0}^{N^n_t}\frac{n(\xi^n_{k+1})^3}{6} \le C_{fT}\sum_{k=0}^{N^n_T+1}\frac{n(\xi^n_{k+1})^3}{6}. \tag{5.17}$$
By Lemma 3.1 and the fact that $\xi^n_k = \xi_k/n$,
$$\mathbb{E}\Big[\sum_{k=0}^{N^n_T+1}\frac{n(\xi^n_{k+1})^3}{6}\Big] = \frac{\mathbb{E}[\xi_1^3]}{6n}\cdot\frac{\mathbb{E}[N^n_T]+2}{n}. \tag{5.18}$$
By Theorem 3.2, the second factor converges to $T/\mathbb{E}[\xi_1]$, while the first converges to $0$. When combined with (5.17) and (5.18), this completes the proof.
The next bound rests on the idea that, when $n$ is large, every inter-renewal interval is, on average, of size $\mathbb{E}[\xi_1]/n$. We will use Proposition 3.9.
Lemma 5.12. We have
$$\mathbb{E}\Big[\sup_{0\le t\le T}\frac{n}{2}(t-\pi_n(t))^2\,|f(\pi_n(t))|\Big] \le \frac{C_{\xi_1 fT}}{n^{1/2}}.$$
Proof. Since $t \le T$, $\pi_n(t) \le T$. Therefore, there is a constant $C_{fT} > 0$ such that
$$\sup_{0\le t\le T}\frac{n}{2}(t-\pi_n(t))^2\,|f(\pi_n(t))| \le C_{fT}\,\frac{n}{2}\sup_{0\le t\le T}(t-\pi_n(t))^2. \tag{5.19}$$
By (2.3),
$$\pi_n(t) = \frac{1}{n}\tau^1_{N^n_t} = \frac{1}{n}\tau^1_{N^1_{nt}} = \frac{1}{n}\pi_1(nt).$$
Now, for any $t \in [0,T]$,
$$\frac{n}{2}(t-\pi_n(t))^2 = \frac{(nt-n\pi_n(t))^2}{2n} = \frac{(nt-\pi_1(nt))^2}{2n}.$$
By the above equality, the Cauchy–Schwarz inequality and Proposition 3.9 with $X_i = \xi_i^2$ and $q = 2$,
$$\mathbb{E}\Big[\sup_{0\le t\le T}\frac{n}{2}(t-\pi_n(t))^2\Big] = \mathbb{E}\Big[\sup_{0\le t\le T}\frac{(nt-\pi_1(nt))^2}{2n}\Big] \le \frac{1}{2n}\,\mathbb{E}\Big[\sup_{k\le N^1_{nT}+1}(\xi_k)^2\Big] \le \frac{1}{2n}\sqrt{\mathbb{E}[\xi_1^4]}\sqrt{\mathbb{E}[N^1_{nT}]+2}.$$
By Theorem 3.2, $\sqrt{\mathbb{E}[N^1_{nT}]+2}$ grows at the rate $\sqrt n$, which ensures that the right-hand side decays to $0$ at the rate $1/\sqrt n$. Combining the above with (5.19), the result follows.
Lemmas 5.16 and 5.17 are slightly harder to prove; to ease their treatment we introduce the following auxiliary results, Lemmas 5.13–5.15. The first of these establishes a uniform integrability condition.
Lemma 5.13. The following bounds hold for the renewal process $N^1_t$ and the arrival process $\tau^n_k$:
(i) We have
$$\mathbb{E}\Big[\sup_{0\le t\le T}\Big|\frac{nt/\mathbb{E}[\xi_1]-N^1_{nt}}{\sqrt n}\Big|^6\Big] \le C_{T\xi_1}. \tag{5.20}$$
In particular, $\Big\{\sup_{0\le t\le T}\Big|\frac{nt/\mathbb{E}[\xi_1]-N^1_{nt}}{\sqrt n}\Big|^4 : n\in\mathbb{N}\Big\}$ is uniformly integrable.
(ii) We have
$$\mathbb{E}\Big[\Big(\frac{\mathbb{E}[\xi_1]}{n}\sum_{k=0}^{\lceil nT/\mathbb{E}[\xi_1]\rceil-1}\Big|\frac{\tau_k-k\mathbb{E}[\xi_1]}{\mathrm{Var}(\xi_1)\sqrt n}\Big|\Big)^6\Big] < C_{T\xi_1}. \tag{5.21}$$
In particular, $\Big\{\Big(\frac{\mathbb{E}[\xi_1]}{n}\sum_{k=0}^{\lceil nT/\mathbb{E}[\xi_1]\rceil-1}\Big|\frac{\tau_k-k\mathbb{E}[\xi_1]}{\mathrm{Var}(\xi_1)\sqrt n}\Big|\Big)^4 : n\in\mathbb{N}\Big\}$ is uniformly integrable.
Proof. We first prove part (i).
Observe that
$$\big|nt/\mathbb{E}[\xi_1]-N^1_{nt}\big| \le \big|nt/\mathbb{E}[\xi_1]-\tau_{N^1_{nt}}/\mathbb{E}[\xi_1]\big| + \big|\tau_{N^1_{nt}}/\mathbb{E}[\xi_1]-N^1_{nt}\big| \le \frac{1}{\mathbb{E}[\xi_1]}\Big(|\xi_{N^1_{nt}+1}| + |\tau_{N^1_{nt}}-\mathbb{E}[\xi_1]N^1_{nt}|\Big).$$
By the triangle inequality and the inequality $(a+b)^6 \le 64a^6 + 64b^6$, we have
$$\sup_{0\le t\le T}\big|nt/\mathbb{E}[\xi_1]-N^1_{nt}\big|^6 \le \frac{64}{\mathbb{E}[\xi_1]^6}\Big(\sup_{0\le k\le N^1_{nT}+1}|\xi_k|^6 + \sup_{0\le k\le N^1_{nT}}|\tau_k-k\mathbb{E}[\xi_1]|^6\Big).$$
Taking expectations on both sides and dividing by $n^3$, it suffices to show that
$$n^{-3}\,\mathbb{E}\sup_{0\le k\le N^1_{nT}+1}|\xi_k|^6 < C_{T\xi_1}, \tag{5.22}$$
$$n^{-3}\,\mathbb{E}\sup_{0\le k\le N^1_{nT}}|\tau_k-k\mathbb{E}[\xi_1]|^6 < C_{T\xi_1}, \tag{5.23}$$
for a constant $C_{T\xi_1}$ independent of $n$. For the first, we use Proposition 3.9 with $q = 1$ to get
$$n^{-3}\,\mathbb{E}\sup_{0\le k\le N^1_{nT}+1}|\xi_k|^6 \le n^{-3}\,\mathbb{E}[N^1_{nT}+2]\,\mathbb{E}[(\xi_1)^6] = n^{-2}\,T\,\mathbb{E}\big[(N^1_{nT}+2)/(nT)\big]\,\mathbb{E}[(\xi_1)^6].$$
By Theorem 3.2, this term decays to $0$ at the rate $n^{-2}$. For the second term, we use Lemma 3.12(a) with $M_k = \tau_k - k\mathbb{E}[\xi_1]$ and Lemma 3.13 to see that
$$n^{-3}\,\mathbb{E}\sup_{0\le k\le N^1_{nT}}|M_k|^6 \le n^{-3}\,\mathbb{E}[(N^1_{nT})^4]^{1/2}\,\mathbb{E}[(N^1_{nT})^2]^{1/2} \le C_{T\xi_1}$$
for some constant $C_{T\xi_1}$ independent of $n$. Thus, (5.22) and (5.23) are proved, showing that (5.20) holds.
Next, we prove part (ii). Recalling that $M_k = \tau^1_k - k\mathbb{E}[\xi_1]$, we have
$$\mathbb{E}\Big[\frac{\mathbb{E}[\xi_1]^6}{n^6}\Big(\sum_{k=0}^{\lceil nT/\mathbb{E}[\xi_1]\rceil-1}\Big|\frac{\tau_k-k\mathbb{E}[\xi_1]}{\mathrm{Var}(\xi_1)\sqrt n}\Big|\Big)^6\Big] = \frac{\mathbb{E}[\xi_1]^6}{\mathrm{Var}(\xi_1)^6\,n^9}\,\mathbb{E}\Big[\Big(\sum_{k=0}^{\lceil nT/\mathbb{E}[\xi_1]\rceil-1}|M_k|\Big)^6\Big].$$
It suffices to focus on the expectation. By Hölder's inequality,
$$\Big(\sum_{k=0}^{\lceil nT/\mathbb{E}[\xi_1]\rceil-1}|M_k|\Big)^6 \le \lceil nT/\mathbb{E}[\xi_1]\rceil^5\sum_{k=0}^{\lceil nT/\mathbb{E}[\xi_1]\rceil-1}|M_k|^6.$$
By Lemma 3.12(b), the expectation of the right-hand side is bounded by a constant times $n^5\sum_{k=0}^{\lceil nT/\mathbb{E}[\xi_1]\rceil-1}k^3$, which is at most a constant times $n^9$. Therefore, (5.21) follows.
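The boundedness in Lemma 5.13(i) can be illustrated at a fixed time. For Exp(1) inter-renewal times (an illustrative choice, not from the paper), $N^1_{nT}$ is Poisson($nT$) and $\mathbb{E}[\xi_1] = 1$, so the fourth moment of the normalized quantity can be estimated directly:

```python
import numpy as np

rng = np.random.default_rng(2)

# With Exp(1) inter-renewal times, the renewal count N^1_{nT} is Poisson(nT).
n, T = 400, 1.0
N = rng.poisson(n * T, size=100_000)
z = (n * T / 1.0 - N) / np.sqrt(n)   # the normalized quantity of Lemma 5.13(i) at t = T
m4 = np.mean(z ** 4)
print(m4)   # close to 3, the 4th moment of the Gaussian limit, uniformly in n
```

The sup over $t \in [0,T]$ in the lemma is of course stronger than this pointwise check; the sketch only illustrates that the fourth moment stays bounded independently of $n$, which is the content needed for uniform integrability.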
Next, we include the proof of Lemma 5.14, which will be used in both Lemmas 5.16 and 5.17.
Lemma 5.14. Let $\alpha \ge 0$ be a constant. Then there exists a constant $C_{\alpha T\xi_1} > 0$ independent of $n$ such that
$$\mathbb{E}\big[e^{2\alpha\tau^n_{\lceil nT/\mathbb{E}[\xi_1]\rceil}}\big]^{1/2} \le C_{\alpha T\xi_1}.$$
Proof. Recall that $\tau^n_{\lceil nT/\mathbb{E}[\xi_1]\rceil} = \frac1n\sum_{k=0}^{\lceil nT/\mathbb{E}[\xi_1]\rceil-1}\xi_k$ is a sum of $\lceil\frac{nT}{\mathbb{E}[\xi_1]}\rceil$ i.i.d. copies of $\frac{\xi_k}{n}$. By Jensen's inequality, $\mathbb{E}[X^{1/n}] \le (\mathbb{E}X)^{1/n}$ for all $n \ge 1$. Thus, we have
$$\mathbb{E}\big[e^{2\alpha\tau^n_{\lceil nT/\mathbb{E}[\xi_1]\rceil}}\big]^{1/2} = \mathbb{E}\big[e^{\frac{2\alpha}{n}\xi_1}\big]^{\frac12\lceil\frac{nT}{\mathbb{E}[\xi_1]}\rceil} \le \mathbb{E}\big[e^{2\alpha\xi_1}\big]^{\frac{1}{2n}\lceil\frac{nT}{\mathbb{E}[\xi_1]}\rceil} \le \mathbb{E}\big[e^{2\alpha\xi_1}\big]^{\frac{T}{2\mathbb{E}[\xi_1]}+\frac12}. \tag{5.24}$$
This completes the proof.
Finally, we include the proof of the rather technical Lemma 5.15, which will be used in Lemma 5.17. Let $D([0,T])$ be the Skorokhod space, i.e.,
$$D([0,T]) = \{x : [0,T]\to\mathbb{R} \mid x \text{ is right continuous and has left limits}\}.$$
Then we have the following result.
Lemma 5.15. Suppose that $X_n \Rightarrow X$ as random elements of $D([0,T])$. If $\big(\sup_{t\in[0,T]}X_n(t)\big)_{n\ge1}$ is uniformly integrable and $\mathbb{E}\big[\sup_{t\in[0,T]}X(t)\big] < \infty$, then
$$\limsup_{n\to\infty}\mathbb{E}\Big[\sup_{t\in[0,T]}X_n(t)\Big] \le \mathbb{E}\Big[\sup_{t\in[0,T]}X(t)\Big].$$
Proof. We start with the claim that the functional $\mathcal{T}y := \sup_{t\in[0,T]}y(t)$ is upper semicontinuous (usc) on $D([0,T])$. For this, it suffices to prove that if $y_n \to y$ in $D([0,T])$, then $\limsup_{n\to\infty}\mathcal{T}y_n \le \mathcal{T}y$. Following [1, page 112, second paragraph], $y_n \to y$ implies the existence of a sequence of strictly increasing continuous bijections $\lambda_n : [0,T]\to[0,T]$ such that $y_n(\lambda_n(t)) \to y(t)$ and $\lambda_n(t) \to t$ uniformly in $t$. Let $\eta > 0$ be arbitrary. By this uniform convergence, there exists $N$ such that $y_n(\lambda_n(t)) < y(t) + \eta$ for all $t\in[0,T]$ and $n > N$. In particular, since the $\lambda_n$ are bijections, we have
$$\mathcal{T}y_n = \mathcal{T}(y_n\circ\lambda_n) \le \mathcal{T}y + \eta \quad\text{whenever } n > N.$$
Since $\eta$ was arbitrary, this proves the claim. For an arbitrary $L > 0$, consider the truncated functional $\mathcal{T}_L(y) = \min\{L, \mathcal{T}y\}$. Since the pointwise minimum of usc functions remains usc, $\mathcal{T}_L$ is a bounded usc function on $D([0,T])$. By [1, Problem 7, Chapter 2], it follows that
$$\limsup_{n\to\infty}\mathbb{E}[\mathcal{T}_L X_n] \le \mathbb{E}[\mathcal{T}_L X]. \tag{5.25}$$
By the uniform integrability condition, for all $\eta > 0$ there exists $K \in \mathbb{N}$ such that $\mathbb{E}\big[|\mathcal{T}X_n|\mathbf{1}_{\{|\mathcal{T}X_n|>K\}}\big] < \eta$ for all $n\in\mathbb{N}$. This implies that
$$\mathbb{E}[\mathcal{T}X_n - \mathcal{T}_K X_n] \le \mathbb{E}\big[\mathcal{T}X_n\,\mathbf{1}_{\{\mathcal{T}X_n>K\}}\big] < \eta \tag{5.26}$$
for all $n\in\mathbb{N}$. Now, for any arbitrary $L > 0$ and $n\in\mathbb{N}$,
$$\mathbb{E}[\mathcal{T}X_n] = \mathbb{E}[\mathcal{T}X_n - \mathcal{T}_L X_n] + \mathbb{E}[\mathcal{T}_L X_n - \mathcal{T}_L X] + \mathbb{E}[\mathcal{T}_L X - \mathcal{T}X] + \mathbb{E}[\mathcal{T}X].$$
We take the limit superior on both sides as $n\to\infty$. For any $\eta > 0$, the first term can be made smaller than $\eta$ by taking $L$ large enough, as in (5.26). The second term has non-positive limit superior for any $L$, by (5.25). The third term is clearly non-positive. Thus, letting $L\to\infty$, we obtain $\limsup_{n\to\infty}\mathbb{E}[\mathcal{T}X_n] \le \mathbb{E}[\mathcal{T}X]$, as desired.
We are now ready to address Lemma 5.16, which begins a collection of approximation results for the sum $\sum_{k=0}^{N^n_t-1}\frac n2(\xi^n_{k+1})^2 f(\tau^n_k)$. This first lemma replaces the random sum by a deterministic sum, by noting that $N^n_t - 1 \approx \frac{nt}{\mathbb{E}[\xi_1]} - 1$ by Theorem 3.2. Therefore, the values of $k$ between $N^n_t-1$ and $\frac{nt}{\mathbb{E}[\xi_1]}-1$ are expected to contribute negligibly to the sum, since they are few in number. This is what Lemma 5.16 establishes. Throughout the next proofs, the interval $(a,b)$ denotes the set $\{x\in\mathbb{R} : \min(a,b)\le x\le\max(a,b)\}$.
Lemma 5.16. We have
$$\mathbb{E}\Big[\sup_{0\le t\le T}\Big|\sum_{k\in(nt/\mathbb{E}[\xi_1]-1,\,N^n_t-1)\cap\mathbb{Z}}\frac n2(\xi^n_{k+1})^2 f(\tau^n_k)\Big|\Big] \le C_{\xi_1 fT}\,n^{-\frac14}.$$
Proof. Note that if $k\in(nt/\mathbb{E}[\xi_1]-1,\,N^n_t-1)\cap\mathbb{Z}_+$, then $k \le \max\{\lceil nt/\mathbb{E}[\xi_1]\rceil-1,\,N^n_t-1\}$.
So, for $t\le T$, we have
$$\tau^n_k \le \max\{\tau^n_{\lceil nt/\mathbb{E}[\xi_1]\rceil},\,\pi_n(t)\} \le \max\{\tau^n_{\lceil nt/\mathbb{E}[\xi_1]\rceil},\,T\} \le \tau^n_{\lceil nT/\mathbb{E}[\xi_1]\rceil} + T.$$
Since $f''$ has at most exponential growth, so does $f$. Therefore, for some $\alpha > 0$,
$$|f(\tau^n_k)| \le C\,e^{\alpha\tau^n_k} \le C_T\,e^{\alpha\tau^n_{\lceil nT/\mathbb{E}[\xi_1]\rceil}}.$$
Using this, we get
$$\mathbb{E}\Big[\sup_{0\le t\le T}\Big|\sum_{k\in(nt/\mathbb{E}[\xi_1]-1,\,N^n_t-1)\cap\mathbb{Z}_+}\frac n2(\xi^n_{k+1})^2 f(\tau^n_k)\Big|\Big] \le \mathbb{E}\Big[C_T\,e^{\alpha\tau^n_{\lceil nT/\mathbb{E}[\xi_1]\rceil}}\sup_{0\le t\le T}\sum_{k\in(nt/\mathbb{E}[\xi_1]-1,\,N^n_t-1)\cap\mathbb{Z}_+}\frac n2(\xi^n_{k+1})^2\Big]$$
$$\le C_T\,\mathbb{E}\big[e^{2\alpha\tau^n_{\lceil nT/\mathbb{E}[\xi_1]\rceil}}\big]^{\frac12}\,\mathbb{E}\Big[\Big(\sup_{0\le t\le T}\sum_{k\in(nt/\mathbb{E}[\xi_1]-1,\,N^n_t-1)\cap\mathbb{Z}_+}\frac n2(\xi^n_{k+1})^2\Big)^2\Big]^{\frac12} \tag{5.27}$$
by the Cauchy–Schwarz inequality. The first factor is bounded independently of $n$, by Lemma 5.14:
$$\mathbb{E}\big[e^{2\alpha\tau^n_{\lceil nT/\mathbb{E}[\xi_1]\rceil}}\big]^{\frac12} \le C_{T\alpha}. \tag{5.28}$$
We now focus our attention on the other factor in (5.27), where we begin by noting that
$$\mathbb{E}\Big[\Big(\sup_{0\le t\le T}\sum_{k\in(nt/\mathbb{E}[\xi_1]-1,\,N^n_t-1)\cap\mathbb{Z}_+}\frac n2(\xi^n_{k+1})^2\Big)^2\Big]^{\frac12} \le \frac{1}{2n}\,\mathbb{E}\Big[\sup_{0\le t\le T}\big|nt/\mathbb{E}[\xi_1]-N^n_t\big|^2\sup_{k\in(nt/\mathbb{E}[\xi_1]-1,\,N^n_t-1)\cap\mathbb{Z}_+}(\xi_{k+1})^4\Big]^{\frac12}. \tag{5.29}$$
Now, suppose that $0\le t\le T$ and $k\in(nt/\mathbb{E}[\xi_1]-1,\,N^n_t-1)\cap\mathbb{Z}$. Then, observe that
$$k \le \max\{\lceil nt/\mathbb{E}[\xi_1]\rceil-1,\,N^1_{nt}-1\} \le \max\{\lceil nT/\mathbb{E}[\xi_1]\rceil-1,\,N^1_{nT}-1\},$$
where we used (2.3). Therefore,
$$\sup_{k\in(nt/\mathbb{E}[\xi_1]-1,\,N^1_{nt}-1)\cap\mathbb{Z}_+}(\xi_{k+1})^4 \le \sup_{k\in(0,\,\max\{\lceil nT/\mathbb{E}[\xi_1]\rceil-1,\,N^1_{nT}-1\})\cap\mathbb{Z}_+}(\xi_{k+1})^4.$$
Now, splitting the suprema and applying the Cauchy–Schwarz inequality in (5.29),
$$\frac{1}{2n}\,\mathbb{E}\Big[\sup_{0\le t\le T}\big|nt/\mathbb{E}[\xi_1]-N^1_{nt}\big|^2\sup_{k\in(nt/\mathbb{E}[\xi_1]-1,\,N^n_t-1)\cap\mathbb{Z}_+}(\xi_{k+1})^4\Big]^{\frac12} \le \frac{1}{2n}\,\mathbb{E}\Big[\Big(\sup_{0\le t\le T}\big|nt/\mathbb{E}[\xi_1]-N^1_{nt}\big|^2\Big)\Big(\sup_{k\in(0,\,\max\{\lceil nT/\mathbb{E}[\xi_1]\rceil-1,\,N^1_{nT}-1\})}(\xi_{k+1})^4\Big)\Big]^{\frac12}$$
$$\le \frac{1}{2n}\,\mathbb{E}\Big[\sup_{0\le t\le T}\big|nt/\mathbb{E}[\xi_1]-N^1_{nt}\big|^4\Big]^{\frac14}\,\mathbb{E}\Big[\sup_{k\in(0,\,\max\{\lceil nT/\mathbb{E}[\xi_1]\rceil-1,\,N^1_{nT}-1\})\cap\mathbb{Z}}(\xi_{k+1})^8\Big]^{\frac14}. \tag{5.30}$$
To bound the first factor in (5.30), we note that by Donsker's Theorem 3.4 and the continuous mapping theorem [1, Theorem 5.1],
$$\Big|\frac{nt/\mathbb{E}[\xi_1]-N^1_{nt}}{\sqrt n}\Big|^4 \Rightarrow \frac{\mathrm{Var}(\xi_1)^4}{\mathbb{E}[\xi_1]^{12}}\,|B_t|^4,$$
where $B_t$ is a Brownian motion (in distribution). By Lemma 5.13(i) and Lemma 5.15, we have
$$\limsup_{n\to\infty}\mathbb{E}\Big[\sup_{0\le t\le T}\Big|\frac{nt/\mathbb{E}[\xi_1]-N^1_{nt}}{\sqrt n}\Big|^4\Big]^{\frac14} \le \frac{\mathrm{Var}(\xi_1)}{\mathbb{E}[\xi_1]^3}\,\mathbb{E}\Big[\sup_{0\le t\le T}|B_t|^4\Big]^{\frac14}. \tag{5.31}$$
Thus, the first factor in (5.30) grows at the rate $\sqrt n$ as $n\to\infty$. For the other factor, we use Proposition 3.9 with $nT/\mathbb{E}[\xi_1]+N^1_{nT}+1$ terms and $p = 1$. Indeed,
$$\mathbb{E}\Big[\sup_{k\in(0,\,\max\{nT/\mathbb{E}[\xi_1],\,N^1_{nT}-1\})\cap\mathbb{Z}_+}(\xi_{k+1})^8\Big]^{\frac14} \le \mathbb{E}\Big[\sup_{k\le nT/\mathbb{E}[\xi_1]+N^1_{nT}+1}(\xi_{k+1})^8\Big]^{\frac14} \le \mathbb{E}[\xi_1^8]^{\frac14}\,\big(\mathbb{E}\big[nT/\mathbb{E}[\xi_1]+N^1_{nT}+1\big]\big)^{\frac14}. \tag{5.32}$$
This factor grows at the rate $n^{\frac14}$ as $n\to\infty$, by Theorem 3.2. Combining the numbered equations (5.27)–(5.32), we obtain
$$\mathbb{E}\Big[\sup_{0\le t\le T}\Big|\sum_{k\in(nt/\mathbb{E}[\xi_1]-1,\,N^n_t-1)\cap\mathbb{Z}}\frac n2(\xi^n_{k+1})^2 f(\tau^n_k)\Big|\Big] \le C_{\xi_1 fT}\,n^{-1+\frac12+\frac14} = C_{\xi_1 fT}\,n^{-\frac14},$$
from which the result follows.
The proof of the next Lemma 5.17 involves handling $f(\tau^n_k)$. Note that we expect $\tau^n_k \approx \frac{k\mathbb{E}[\xi_1]}{n}$. Thus, the mean value theorem can be used to bound $f(\tau^n_k)-f\big(\frac{k\mathbb{E}[\xi_1]}{n}\big)$ in terms of $\tau^n_k - k\mathbb{E}[\xi_1]/n$. These key ideas form the crux of the next bound.
Lemma 5.17.
We have
$$\mathbb{E}\Big[\sup_{0\le t\le T}\Big|\sum_{k=0}^{\lceil nt/\mathbb{E}[\xi_1]\rceil-1}\frac n2(\xi^n_{k+1})^2\big(f(\tau^n_k)-f(k\mathbb{E}[\xi_1]/n)\big)\Big|\Big] \le C_{T\xi_1}\,n^{-1/4}.$$
Proof. For every $k = 0, 1, \ldots, \lceil nt/\mathbb{E}[\xi_1]\rceil-1$ and $0\le t\le T$, observe that
$$\max\{\tau^n_k,\,k\mathbb{E}[\xi_1]/n\} \le \max\{\tau^n_{\lceil nt/\mathbb{E}[\xi_1]\rceil},\,t\} \le T + \tau^n_{\lceil nT/\mathbb{E}[\xi_1]\rceil}.$$
Recall that $f'$ grows at most exponentially, because $f''$ does so. By the mean value inequality,
$$|f(\tau^n_k)-f(k\mathbb{E}[\xi_1]/n)| \le C\,|\tau^n_k-k\mathbb{E}[\xi_1]/n|\,e^{\alpha\tau^n_{\lceil nT/\mathbb{E}[\xi_1]\rceil}}$$
for some $\alpha > 0$. This implies that
$$\sup_{0\le t\le T}\Big|\sum_{k=0}^{\lceil nt/\mathbb{E}[\xi_1]\rceil-1}\frac n2(\xi^n_{k+1})^2\big(f(\tau^n_k)-f(k\mathbb{E}[\xi_1]/n)\big)\Big| \le C\sup_{0\le t\le T}\Big(\sum_{k=0}^{\lceil nt/\mathbb{E}[\xi_1]\rceil-1}\frac n2(\xi^n_{k+1})^2|\tau^n_k-k\mathbb{E}[\xi_1]/n|\Big)e^{\alpha\tau^n_{\lceil nT/\mathbb{E}[\xi_1]\rceil}} \le C\Big(\sum_{k=0}^{\lceil nT/\mathbb{E}[\xi_1]\rceil-1}\frac n2(\xi^n_{k+1})^2|\tau^n_k-k\mathbb{E}[\xi_1]/n|\Big)e^{\alpha\tau^n_{\lceil nT/\mathbb{E}[\xi_1]\rceil}}.$$
We take the expectation above and apply the Cauchy–Schwarz inequality to separate the factors:
$$\mathbb{E}\Big[\Big(\sum_{k=0}^{\lceil nT/\mathbb{E}[\xi_1]\rceil-1}\frac n2(\xi^n_{k+1})^2|\tau^n_k-k\mathbb{E}[\xi_1]/n|\Big)e^{\alpha\tau^n_{\lceil nT/\mathbb{E}[\xi_1]\rceil}}\Big] \le C\,\mathbb{E}\Big[\Big(\sum_{k=0}^{\lceil nT/\mathbb{E}[\xi_1]\rceil-1}\frac n2(\xi^n_{k+1})^2|\tau^n_k-k\mathbb{E}[\xi_1]/n|\Big)^2\Big]^{\frac12}\mathbb{E}\big[e^{2\alpha\tau^n_{\lceil nT/\mathbb{E}[\xi_1]\rceil}}\big]^{\frac12}. \tag{5.33}$$
By Lemma 5.14,
$$\mathbb{E}\big[e^{2\alpha\tau^n_{\lceil nT/\mathbb{E}[\xi_1]\rceil}}\big]^{\frac12} \le C_{\alpha T} \tag{5.34}$$
is bounded independently of $n$ (where we note that $\alpha = |A-BK|$ in the linear case). To bound the first factor in (5.33), we extract the factor $\sup_{k\le\lceil nT/\mathbb{E}[\xi_1]\rceil-1}\xi_{k+1}^2$ from the sum:
$$\mathbb{E}\Big[\Big(\sum_{k=0}^{\lceil nT/\mathbb{E}[\xi_1]\rceil-1}\frac n2(\xi^n_{k+1})^2|\tau^n_k-k\mathbb{E}[\xi_1]/n|\Big)^2\Big]^{\frac12} = \frac{1}{2n}\,\mathbb{E}\Big[\Big(\sum_{k=0}^{\lceil nT/\mathbb{E}[\xi_1]\rceil-1}(\xi_{k+1})^2|\tau^n_k-k\mathbb{E}[\xi_1]/n|\Big)^2\Big]^{\frac12}$$
$$\le \frac{1}{2n}\,\mathbb{E}\Big[\Big(\sum_{k=0}^{\lceil nT/\mathbb{E}[\xi_1]\rceil-1}|\tau^n_k-k\mathbb{E}[\xi_1]/n|\Big)^2\Big(\sup_{k\le\lceil nT/\mathbb{E}[\xi_1]\rceil-1}\xi_{k+1}\Big)^4\Big]^{\frac12} \le \frac{1}{2n}\,\mathbb{E}\Big[\Big(\sum_{k=0}^{\lceil nT/\mathbb{E}[\xi_1]\rceil-1}|\tau^n_k-k\mathbb{E}[\xi_1]/n|\Big)^4\Big]^{\frac14}\mathbb{E}\Big[\sup_{k\le\lceil nT/\mathbb{E}[\xi_1]\rceil-1}(\xi_{k+1})^8\Big]^{\frac14}, \tag{5.35}$$
where we applied the Cauchy–Schwarz inequality in the last step. The second factor is controlled by applying Proposition 3.9 with $q = 1$, $X_i = \xi_{k+1}^8$, and noticing that $\lceil nT/\mathbb{E}[\xi_1]\rceil+1$ is deterministic:
$$\mathbb{E}\Big[\sup_{k\le\lceil nT/\mathbb{E}[\xi_1]\rceil-1}(\xi_{k+1})^8\Big]^{\frac14} \le \mathbb{E}\Big[\sum_{k=0}^{\lceil nT/\mathbb{E}[\xi_1]\rceil+1}(\xi_{k+1})^8\Big]^{\frac14} \le C_T\,\mathbb{E}\big[(\xi_1)^8\big]^{\frac14}\,n^{\frac14}. \tag{5.36}$$
The first factor is handled by the usual Donsker Theorem 3.3, from which we know that
$$\frac{\tau_{\lfloor nt\rfloor}-\lfloor nt\rfloor\mathbb{E}[\xi_1]}{\mathrm{Var}(\xi_1)\sqrt n} \Rightarrow B_t$$
on $D([0,T])$, where $B_t$ is a Brownian motion (in distribution). We now use the "generalized" continuous mapping theorem; see [28, Theorem 1.11.1]. Note that every function in $D([0,T])$ is Riemann integrable, since it has only countably many discontinuities. Therefore, any sequence of Riemann sums of such a function converges to the integral of the function. That is, for any $g\in D([0,T])$,
$$\frac{\mathbb{E}[\xi_1]}{n}\sum_{k=0}^{\lceil nT/\mathbb{E}[\xi_1]\rceil}\Big|g\Big(\frac{k\mathbb{E}[\xi_1]}{n}\Big)\Big| \to \int_0^T|g(t)|\,dt.$$
Applying the generalized continuous mapping theorem,
$$\Big(\frac{\mathbb{E}[\xi_1]}{n}\sum_{k=0}^{\lfloor nT/\mathbb{E}[\xi_1]\rfloor}\Big|\frac{\tau_k-k\mathbb{E}[\xi_1]}{\mathrm{Var}(\xi_1)\sqrt n}\Big|\Big)^4 \Rightarrow \Big(\int_0^T|B_t|\,dt\Big)^4. \tag{5.37}$$
To strengthen this result into convergence of expectations, observe that
$$\mathbb{E}\Big[\Big(\sum_{k=0}^{\lceil nT/\mathbb{E}[\xi_1]\rceil-1}|\tau^n_k-k\mathbb{E}[\xi_1]/n|\Big)^4\Big]^{\frac14} = \frac{\sqrt n\,\mathrm{Var}(\xi_1)}{\mathbb{E}[\xi_1]}\,\mathbb{E}\Big[\Big(\frac{\mathbb{E}[\xi_1]}{n}\sum_{k=0}^{\lceil nT/\mathbb{E}[\xi_1]\rceil-1}\Big|\frac{\tau_k-k\mathbb{E}[\xi_1]}{\mathrm{Var}(\xi_1)\sqrt n}\Big|\Big)^4\Big]^{\frac14}.$$
By Lemma 5.13(ii), (5.37) can be upgraded to convergence of moments. In particular,
$$\mathbb{E}\Big[\Big(\frac{\mathbb{E}[\xi_1]}{n}\sum_{k=0}^{\lceil nT/\mathbb{E}[\xi_1]\rceil-1}\Big|\frac{\tau_k-k\mathbb{E}[\xi_1]}{\mathrm{Var}(\xi_1)\sqrt n}\Big|\Big)^4\Big]^{\frac14} \to \mathbb{E}\Big[\Big(\int_0^T|B_t|\,dt\Big)^4\Big]^{\frac14}.$$
Therefore, the first factor in (5.35) grows at the rate $\sqrt n$ as $n\to\infty$.
Thus
$$\mathbb{E}\Big[\Big(\sum_{k=0}^{\lceil nT/\mathbb{E}[\xi_1]\rceil-1}\frac n2(\xi^n_{k+1})^2|\tau^n_k-k\mathbb{E}[\xi_1]/n|\Big)^2\Big]^{\frac12} \le n^{-1/4}\,C_{T\xi_1}.$$
Combining the above equation with the numbered equations (5.33)–(5.37), the result follows.
We shall now resolve the final step of this transition, which is the replacement of $(\xi^n_{k+1})^2$ by its expectation $\mathbb{E}[(\xi^n_1)^2]$. Once this is complete, we will have a deterministic Riemann integral to work with.
Lemma 5.18. We have, for $\sigma = \mathrm{Var}(\xi_1^2)$, that
$$\mathbb{E}\Big[\sup_{0\le t\le T}\Big|\sum_{k=0}^{\lceil nt/\mathbb{E}[\xi_1]\rceil-1}\frac n2\big((\xi^n_{k+1})^2-\mathbb{E}[(\xi^n_{k+1})^2]\big)f\Big(\frac{k\mathbb{E}[\xi_1]}{n}\Big)\Big|\Big] \le \frac{C_{\sigma_1}}{n}\Big(1+\frac{nT}{\mathbb{E}[\xi_1]}\Big)^{\frac12}.$$
Proof. Define
$$S_m = \sum_{k=1}^m\big((\xi_k)^2-\mathbb{E}[(\xi_k)^2]\big)f\Big(\frac{(k-1)\mathbb{E}[\xi_1]}{n}\Big) \tag{5.38}$$
and $S_0 = 0$. Then $S_m$ is a martingale with respect to $\mathcal{F}_m = \sigma(\xi_1, \xi_2, \ldots, \xi_m)$. By the Cauchy–Schwarz inequality and Doob's maximal inequality,
$$\mathbb{E}\Big[\sup_{0\le m\le\lceil\frac{nT}{\mathbb{E}[\xi_1]}\rceil}|S_m|\Big] \le \sqrt{\mathbb{E}\Big[\sup_{0\le m\le\lceil\frac{nT}{\mathbb{E}[\xi_1]}\rceil}|S_m|^2\Big]} \le 2\sqrt{\mathbb{E}\big[|S_{\lceil\frac{nT}{\mathbb{E}[\xi_1]}\rceil}|^2\big]}. \tag{5.39}$$
Let $X_k = (\xi_k)^2-\mathbb{E}[(\xi_k)^2]$. For any $1\le m\le\lceil\frac{nT}{\mathbb{E}[\xi_1]}\rceil$, by (5.38) we have
$$\mathbb{E}|S_m|^2 = \sum_{k=1}^m\Big|f\Big(\frac{(k-1)\mathbb{E}[\xi_1]}{n}\Big)\Big|^2\,\mathbb{E}[X_1^2] \le m\,C_{ABKT}\,\mathrm{Var}(\xi_1^2). \tag{5.40}$$
Combining equations (5.38)–(5.40),
$$\frac{1}{2n}\,\mathbb{E}\Big[\sup_{0\le t\le T}\Big|\sum_{k=0}^{\lceil nt/\mathbb{E}[\xi_1]\rceil-1}\big((\xi_{k+1})^2-\mathbb{E}[(\xi_{k+1})^2]\big)f\Big(\frac{k\mathbb{E}[\xi_1]}{n}\Big)\Big|\Big] = \frac{1}{2n}\,\mathbb{E}\Big[\sup_{0\le m\le\lceil\frac{nT}{\mathbb{E}[\xi_1]}\rceil}|S_m|\Big] \le \frac{C_{ABKT}\,\sigma_1\sqrt{\lceil\frac{nT}{\mathbb{E}[\xi_1]}\rceil}}{n} \le \frac{C_{\sigma_1}}{n}\Big(1+\frac{nT}{\mathbb{E}[\xi_1]}\Big)^{\frac12},$$
whence the result follows. Thus, this term decays at rate $1/\sqrt n$.
The final lemma establishes the accuracy of the Riemann integral approximation. Although the argument is standard, we include the proof to maintain completeness of the presentation.
Lemma 5.19.
As $n\to\infty$, we have
$$\mathbb{E}\Big[\sup_{0\le t\le T}\Big|\sum_{k=0}^{nt/\mathbb{E}[\xi_1]}\frac n2\,\mathbb{E}(\xi^n_{k+1})^2\,f(\mathbb{E}[\xi_1]k/n) - \int_0^t Mf(s)\,ds\Big|\Big] \le \frac1n\,C_{\xi_1 MT}.$$
Note that the expectation here is over a deterministic quantity, hence superfluous.
Proof. We have
$$\sum_{k=0}^{nt/\mathbb{E}[\xi_1]}\frac n2\,\mathbb{E}(\xi^n_{k+1})^2\,f(\mathbb{E}[\xi_1]k/n) = \sum_{k=0}^{nt/\mathbb{E}[\xi_1]}\frac{\mathbb{E}(\xi_1)^2}{2n}\,f(\mathbb{E}[\xi_1]k/n).$$
Let $\Delta s = \mathbb{E}[\xi_1]/n$ and $s_k = \mathbb{E}[\xi_1]k/n$. Then
$$\frac{\mathbb{E}[\xi_1^2]}{2n} = \frac{\mathbb{E}[\xi_1^2]}{2\mathbb{E}[\xi_1]}\,\Delta s = M\,\Delta s,$$
and so
$$\sum_{k=0}^{nt/\mathbb{E}[\xi_1]}\frac{\mathbb{E}(\xi_1)^2}{2n}\,f(\mathbb{E}[\xi_1]k/n) = \sum_{k=0}^{nt/\mathbb{E}[\xi_1]}M f(s_k)\,\Delta s.$$
Let $g(s) = \int_{s_k}^s f(r)\,dr \in C^2[s_k, s_{k+1}]$, with $g'' = f'$. Since $\sup_{t\in[0,T]}\|f'\| < \infty$, we have
$$\Big|f(s_k)\,\Delta s - \int_{s_k}^{s_{k+1}}f(s)\,ds\Big| \le \|f'\|_{L^\infty}\,(\Delta s)^2/2$$
using the mean value inequality. Summing over $0\le k\le nt/\mathbb{E}[\xi_1]$, we have
$$\Big|\sum_{k=0}^{nt/\mathbb{E}[\xi_1]}Mf(s_k)\,\Delta s - M\int_0^t f(s)\,ds\Big| \le \|f'\|_{L^\infty}\,M\sum_{k=0}^{nt/\mathbb{E}[\xi_1]}(\Delta s)^2/2 \le M\,\|f'\|_{L^\infty}\,\frac{nt}{\mathbb{E}[\xi_1]}\Big(\frac{\mathbb{E}[\xi_1]}{n}\Big)^2 \le \frac{Mt\,\|f'\|_{L^\infty}\,\mathbb{E}[\xi_1]}{n}.$$
Taking the supremum over $t\in[0,T]$, we get
$$\sup_{t\in[0,T]}\Big|\sum_{k=0}^{nt/\mathbb{E}[\xi_1]}Mf(s_k)\,\Delta s - M\int_0^t f(s)\,ds\Big| \le \frac1n\,C_{\xi_1 MT},$$
completing the proof.
Finally, the proofs of Lemmas 5.9 and 5.7 are straightforward.
Proof of Lemmas 5.9 and 5.7. Combining the results of Lemmas 5.11–5.19, we obtain Lemma 5.9. Finally, observe that for $f(t) = (A-BK)x(t) = (A-BK)e^{(A-BK)t}x_0$, all the assumptions of Lemma 5.9 are satisfied. Hence, Lemma 5.9 applies to this function $f$, with constants depending on the matrices $A, B, K$ and on the time $T$. Invoking the estimate from Lemma 5.9, we obtain Lemma 5.7 as a direct corollary.
5.2 The Noise Part: Proposition 5.3
In this section, we prove Proposition 5.3.
The proof proceeds by bounding $L^{\varepsilon,n}_2(t)$ using functionals of the Brownian motion $W_s$.
Proof of Proposition 5.3. We have
$$L^{\varepsilon,n}_2(t) = \int_0^t e^{sA}\big(M_s - M_{\pi_n(s)}\big)\,ds.$$
For any $t\ge0$, we have
$$M_t - M_{\pi_n(t)} = \int_{\pi_n(t)}^t e^{-sA}\,dW_s.$$
From (5.1), we have
$$e^{tA}\big(M_t - M_{\pi_n(t)}\big) = W_t - e^{(t-\pi_n(t))A}W_{\pi_n(t)} + e^{tA}\int_{\pi_n(t)}^t e^{-sA}AW_s\,ds.$$
By adding and subtracting $e^{(t-\pi_n(t))A}W_t$ on the right-hand side, we have
$$\big|e^{tA}\big(M_t - M_{\pi_n(t)}\big)\big| \le \big\|I - e^{(t-\pi_n(t))A}\big\|\sup_{0\le s\le t}|W_s| + e^{(t-\pi_n(t))|A|}\,|W_t - W_{\pi_n(t)}| + e^{t|A|}\Big(\int_{\pi_n(t)}^t e^{-s|A|}|A|\,ds\Big)\sup_{0\le s\le t}|W_s|. \tag{5.41}$$
We now estimate each term on the right-hand side of (5.41). Let us focus on the first term:
$$\big\|I - e^{(t-\pi_n(t))A}\big\| = \Big\|\int_0^{t-\pi_n(t)}e^{sA}A\,ds\Big\| \le (t-\pi_n(t))\,e^{(t-\pi_n(t))|A|}\,|A| \le \xi^n_{N^n_t+1}\,e^{\xi^n_{N^n_t+1}|A|}\,|A|. \tag{5.42}$$
For the next term, a simple bound holds:
$$e^{(t-\pi_n(t))|A|}\,|W_t - W_{\pi_n(t)}| \le e^{\xi^n_{N^n_t+1}|A|}\,|W_t - W_{\pi_n(t)}|. \tag{5.43}$$
For the last term, we have
$$\int_{\pi_n(t)}^t e^{s|A|}|A|\,ds = e^{t|A|} - e^{\pi_n(t)|A|} = e^{t|A|}\big(1 - e^{-(t-\pi_n(t))|A|}\big) \le e^{t|A|}(t-\pi_n(t))|A| \le \xi^n_{N^n_t+1}\,e^{t|A|}\,|A|. \tag{5.44}$$
Using the estimates (5.42), (5.43) and (5.44) in (5.41), we have
$$\big|e^{tA}\big(M_t - M_{\pi_n(t)}\big)\big| \le \xi^n_{N^n_t+1}\,e^{\xi^n_{N^n_t+1}|A|}\,|A|\sup_{0\le s\le t}|W_s| + e^{\xi^n_{N^n_t+1}|A|}\,|W_t - W_{\pi_n(t)}| + \xi^n_{N^n_t+1}\,e^{2t|A|}\,|A|\sup_{0\le s\le t}|W_s|. \tag{5.45}$$
Integrating with respect to time $0\le s\le t$,
$$|L^{\varepsilon,n}_2(t)| \le |A|\int_0^t\xi^n_{N^n_s+1}\,e^{\xi^n_{N^n_s+1}|A|}\sup_{0\le r\le s}|W_r|\,ds + \int_0^t e^{\xi^n_{N^n_s+1}|A|}\,|W_s - W_{\pi_n(s)}|\,ds + e^{2t|A|}\,|A|\int_0^t\xi^n_{N^n_s+1}\sup_{0\le r\le s}|W_r|\,ds.$$
Taking the supremum over $t\in[0,T]$ and then the expectation,
$$\mathbb{E}\Big[\sup_{0\le t\le T}|L^{\varepsilon,n}_2(t)|\Big] \le |A|\int_0^T\mathbb{E}\big[\xi^n_{N^n_s+1}\,e^{\xi^n_{N^n_s+1}|A|}\big]\,\mathbb{E}\Big[\sup_{0\le r\le s}|W_r|\Big]\,ds + \int_0^T\mathbb{E}\big[e^{\xi^n_{N^n_s+1}|A|}\big]\,\mathbb{E}\big[|W_s - W_{\pi_n(s)}|\big]\,ds + e^{2T|A|}\,|A|\int_0^T\mathbb{E}\big[\xi^n_{N^n_s+1}\big]\,\mathbb{E}\Big[\sup_{0\le r\le s}|W_r|\Big]\,ds$$
$$\le \frac{C_{AT\xi_1}}{n}\Big(\int_0^T\sqrt s\,ds + n\int_0^T\mathbb{E}\sqrt{s-\pi_n(s)}\,ds + \int_0^T\sqrt s\,ds\Big) \le C_{AT\xi_1}\,\mathbb{E}[N^{1/2}].$$
Since $\mathbb{E}[N^{1/2}] \le C_{\xi_1}/\sqrt n$ by Corollary 3.11, we have
$$\mathbb{E}\Big[\sup_{0\le t\le T}|L^{\varepsilon,n}_2(t)|\Big] \le \frac{C_{AT\xi_1}}{\sqrt n}.$$
6 Generalization to the Nonlinear Case
This section is motivated by the work presented in [6], where the author studied fast periodic sampling perturbed by a Poisson random measure. In the present section, with the help of Lemma 5.9, we extend the result of [6] to a more general sampling setting by considering a nonlinear hybrid system subject to random sampling but driven by white noise. More precisely, we consider
$$dY^{\varepsilon,n}_t = c\big(Y^{\varepsilon,n}_t, Y^{\varepsilon,n}_{\pi_n(t)}\big)\,dt + \varepsilon\,\sigma(Y^{\varepsilon,n}_t)\,dW_t, \qquad Y^{\varepsilon,n}_0 = y_0, \tag{6.1}$$
where the functions $c:\mathbb{R}^d\times\mathbb{R}^d\to\mathbb{R}^d$ and $\sigma:\mathbb{R}^d\to\mathbb{R}^{d\times d}$ are assumed to be measurable and to satisfy appropriate regularity conditions. The process $W$ is an independent Brownian motion. Observe that equation (6.1) can naturally be interpreted as a random perturbation of an underlying nonlinear control system
$$\dot y = c'(y, u), \qquad y(0) = y_0\in\mathbb{R}^d, \tag{6.2}$$
where $c':\mathbb{R}^d\times\mathbb{R}^d\to\mathbb{R}^d$, with a feedback control law $u = \kappa(y)$ for some appropriate function $\kappa$. In this case, the drift function $c$ in (6.1) is obtained from $c'$ by absorbing the feedback control into the dynamics, namely $c(y,z) := c'(y, \kappa(z))$. The argument $z = Y^{\varepsilon,n}_{\pi_n(t)}$ in (6.1) reflects a sample-and-hold implementation of the feedback control, where the control is updated only at the random time instants discussed in Section 2.
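Before turning to the hypotheses, the sample-and-hold mechanism itself can be made concrete. The sketch below simulates the noiseless randomly sampled system (6.3)-type dynamics against the ideal system (6.2) for the illustrative feedback $c(y,z) = -z$ with $y_0 = 1$ (so the ideal solution is $y_t = e^{-t}$); the Exp(1) inter-sampling law and all parameter values are illustrative choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

def sampled_solution_error(n, T=1.0, dt=1e-3):
    # random sampling instants tau^n_k with Exp(1)/n inter-arrival times
    taus = np.cumsum(rng.exponential(1.0 / n, size=5 * n))
    y, held, next_i, err = 1.0, 1.0, 0, 0.0
    for step in range(int(T / dt)):
        t = step * dt
        while next_i < len(taus) and taus[next_i] <= t:
            held = y            # control refreshed at the sampling instant pi_n(t)
            next_i += 1
        y += -held * dt         # Euler step of y' = c(y, y_{pi_n(t)}) = -y_{pi_n(t)}
        err = max(err, abs(y - np.exp(-(t + dt))))
    return err

e_small, e_large = sampled_solution_error(50), sampled_solution_error(1000)
print(e_small, e_large)   # the gap shrinks as the sampling rate n grows
```

The observed decay of the sup-norm gap with $n$ matches the $O(1/n)$ behavior of the deterministic sampling error established in Section 5.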
Therefore, in equation (6.1), the sampling effect enters through the term $Y^{\varepsilon,n}_{\pi_n(t)}$. We also note that the randomly sampled counterpart of equation (6.2) can be expressed as
$$\dot y^n_t = c\big(y^n_t, y^n_{\pi_n(t)}\big), \qquad y^n_0 = y_0\in\mathbb{R}^d, \tag{6.3}$$
which is again a fully nonlinear equation. Our aim in this section is to analyze the hybrid system (6.1) and to understand how the presence of random sampling and small external noise affects the behavior of its solution. We now state the hypotheses that will be used in the subsequent analysis.
Hypotheses 6.1 (Lipschitz continuity). There exists a positive constant $C$ such that for any $x_1, x_2, z_1, z_2\in\mathbb{R}^d$, we have
$$|c(x_1, x_2) - c(z_1, z_2)| \le C\big(|x_1 - z_1| + |x_2 - z_2|\big), \quad\text{and}\quad |\sigma(x_1) - \sigma(x_2)| \le C\,|x_1 - x_2|.$$
From Hypothesis 6.1, we observe that there exists a positive constant $C$ such that for any $x, z\in\mathbb{R}^d$, we have
$$|c(x, z)| \le C\,(1 + |x| + |z|), \quad\text{and}\quad |\sigma(z)| \le C\,(1 + |z|). \tag{6.4}$$
Hypotheses 6.2 (Boundedness and linear growth of derivatives). For vectors $x = (x_1, \ldots, x_d),\ y = (y_1, \ldots, y_d)\in\mathbb{R}^d$ and $c:\mathbb{R}^d\times\mathbb{R}^d\to\mathbb{R}^d$, define
$$D_1c(x,y) := \Big(\frac{\partial c_i}{\partial x_k}(x,y)\Big)_{1\le i,k\le d} \quad\text{and}\quad D_2c(x,y) := \Big(\frac{\partial c_i}{\partial y_k}(x,y)\Big)_{1\le i,k\le d},$$
the Jacobian matrices of $c$ with respect to the first and second variables, respectively. Second-order derivatives are defined componentwise by
$$D_1^2c := D_{11}c := D_1(D_1c), \quad D_{12}c := D_1(D_2c), \quad D_2^2c := D_{22}c := D_2(D_2c),$$
and higher-order derivatives, such as $D_2^3c$, are understood in the same manner. We assume that
$$|D_1c(x,y)| \le C(1+|y|), \quad |D_2c(x,y)| \le C, \quad |D_1D_2c(x,y)| \le C, \quad |D_1^2c(x,y)| \le C(1+|y|), \quad\text{and}\quad |D_2^2c(x,y)| \le C.$$
We establish the law of large numbers and the central limit theorem in the presence of random sampling for this setting as well. The proof of the LLN follows the general strategy of the linear case, but requires additional arguments to control the contributions arising from the multiplicative noise term. For the CLT, we build on the framework introduced in [6], suitably adapted to accommodate the random sampling terms. In particular, to deal with the main complicated fluctuation term in the CLT (Proposition 6.8), we use Lemma 5.9, which simplifies the computations and makes them comparatively easier than the approach in [6]. We are now ready to establish the first main result of this section. But before that, we make a small yet important remark.
Remark 6.3. We observe that by using an argument analogous to that in Lemma 3.5 and applying (6.4), one can get that
$$\sup_{0\le t\le T}|y_t| < C_T, \qquad \sup_{0\le t\le T}|y^n_t| < C_T. \tag{6.5}$$
6.1 LLN Result for the Nonlinear Case
The first main result shows that when $\varepsilon$ is small and $n$ is large, the stochastic process $Y^{\varepsilon,n}_t$ behaves deterministically. In particular, $Y^{\varepsilon,n}_t \to y_t$ uniformly in $L^p(\Omega)$, $p\ge1$, in all the regimes.
Theorem 6.4 (Law of Large Numbers Type Result). Let $Y^{\varepsilon,n}_t$ and $y_t$ denote the respective solutions of equations (6.1) and (6.2). Then, for any fixed $0 < T < \infty$ and any $1\le p < \infty$, there exists a positive constant $C$, depending on $T$ and $\xi_1$ only, such that for any $\varepsilon > 0$ and $n\in\mathbb{N}$, we have
$$\mathbb{E}\Big[\sup_{0\le t\le T}|Y^{\varepsilon,n}_t - y_t|^p\Big] \le (\varepsilon^p + n^{-p})\,C_{T\xi_1}.$$
Proof. In order to tackle the multiplicative noise term, we need to first establish moment bounds for the process $Y^{\varepsilon,n}_t$.
These can be established for any $p\ge1$ by standard techniques for estimating moments of solutions to stochastic differential equations (see [33]) together with the linear growth conditions (6.4), yielding
$$\mathbb{E}\Big[\sup_{0\le s\le T}|Y^{\varepsilon,n}_s|^p\Big] \le C_{pT}, \qquad \mathbb{E}\Big[\sup_{t\in[0,T]}\Big|\int_0^t\sigma(Y^{\varepsilon,n}_s)\,dW_s\Big|^p\Big] \le C_{pT}. \tag{6.6}$$
We shall now proceed as in the proof of Theorem 2.1 by using the triangle inequality. That is, from equations (6.1) and (6.3), for any $p\ge1$ we have
$$|Y^{\varepsilon,n}_t - y^n_t|^p \le 2^{p-1}\Big|\int_0^t c\big(Y^{\varepsilon,n}_s, Y^{\varepsilon,n}_{\pi_n(s)}\big) - c\big(y^n_s, y^n_{\pi_n(s)}\big)\,ds\Big|^p + 2^{p-1}\varepsilon^p\Big|\int_0^t\sigma(Y^{\varepsilon,n}_s)\,dW_s\Big|^p.$$
Using Hypothesis 6.1 and conditions (6.4), taking the supremum over time and the expectation on both sides, and using (6.6), we get
$$\mathbb{E}\sup_{t\in[0,T]}|Y^{\varepsilon,n}_t - y^n_t|^p \le C_p\,\mathbb{E}\sup_{t\in[0,T]}\int_0^t\Big(|Y^{\varepsilon,n}_s - y^n_s|^p + |Y^{\varepsilon,n}_{\pi_n(s)} - y^n_{\pi_n(s)}|^p\Big)\,ds + \varepsilon^p\,C_{pT}. \tag{6.7}$$
Using the fact that $\pi_n(t) \le t$, we have
$$\mathbb{E}\Big[\sup_{0\le t\le T}|Y^{\varepsilon,n}_{\pi_n(t)} - y^n_{\pi_n(t)}|^p\Big] \le \mathbb{E}\Big[\sup_{0\le t\le T}|Y^{\varepsilon,n}_t - y^n_t|^p\Big].$$
Finally, applying Gronwall's inequality to (6.7),
$$\mathbb{E}\Big[\sup_{0\le t\le T}|Y^{\varepsilon,n}_t - y^n_t|^p\Big] \le \varepsilon^p\,C_{pT}. \tag{6.8}$$
Next, using Hypothesis 6.1, we have for any $1\le p < \infty$ that
$$|y^n_t - y_t|^p \le \Big|\int_0^t c\big(y^n_s, y^n_{\pi_n(s)}\big) - c(y_s, y_s)\,ds\Big|^p \le C_{Tp}\int_0^t\big(|V_s|^p + (s-\pi_n(s))^p\,C_T\big)\,ds,$$
where $V_s = \sup_{0\le r\le s}|y^n_r - y_r|$. Taking the supremum and expectation on both sides, then applying Gronwall's inequality, for any $1\le p < \infty$ we get
$$\mathbb{E}\Big[\sup_{0\le t\le T}|y^n_t - y_t|^p\Big] \le \mathbb{E}[N^p]\,C_T. \tag{6.9}$$
Combining estimates (6.8) and (6.9) using the triangle inequality,
$$\mathbb{E}\Big[\sup_{0\le t\le T}|Y^{\varepsilon,n}_t - y_t|^p\Big] \le C_p\,\mathbb{E}\Big[\sup_{0\le t\le T}|Y^{\varepsilon,n}_t - y^n_t|^p\Big] + C_p\,\mathbb{E}\Big[\sup_{0\le t\le T}|y^n_t - y_t|^p\Big] \le C_{Tp}\big(\mathbb{E}[N^p] + \varepsilon^p\big) \le (\varepsilon^p + n^{-p})\,C_{Tp\xi_1}$$
by Corollary 3.11, for any $1\le p < \infty$.
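The LLN above can be illustrated numerically. Below is a minimal Euler–Maruyama sketch of (6.1) for the illustrative choices $c(x,z) = -z$, $\sigma(x) = 0.3x$, $y_0 = 1$ (so the deterministic limit is $y_t = e^{-t}$), with Exp(1)/n random inter-sampling times; every parameter value is an assumption for the experiment, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

T, dt = 1.0, 1e-3
n_samp, eps = 200, 0.02
# random sampling instants tau^n_k with Exp(1)/n_samp inter-arrival times
taus = np.cumsum(rng.exponential(1.0 / n_samp, size=5 * n_samp))

Y, held, next_i, sup_err = 1.0, 1.0, 0, 0.0
for step in range(int(T / dt)):
    t = step * dt
    while next_i < len(taus) and taus[next_i] <= t:
        held = Y                                  # Y at the latest sampling instant pi_n(t)
        next_i += 1
    dW = rng.normal(0.0, np.sqrt(dt))
    Y = Y + (-held) * dt + eps * 0.3 * Y * dW     # Euler-Maruyama step of (6.1)
    sup_err = max(sup_err, abs(Y - np.exp(-(t + dt))))
print(sup_err)
```

For small $\varepsilon$ and large $n$, the sup-norm distance between $Y^{\varepsilon,n}$ and $y$ stays of the predicted order $\varepsilon + n^{-1}$, in line with Theorem 6.4.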
6.2 CLT Result for the Nonlinear Case
This subsection is devoted to establishing CLT-type results for the generalized setting. Our analysis covers Regimes 1 and 2, while the corresponding result for Regime 3 can be obtained by arguments similar to those in the linear case. In Regimes 1 and 2, we define the rescaled fluctuation process
$$Z^{\varepsilon,n}_t := \frac{Y^{\varepsilon,n}_t - y_t}{\varepsilon}.$$
Here, we note that the coarser parameter $\varepsilon$ is used to rescale the stochastic quantity $Y^{\varepsilon,n}_t - y_t$. To get more insight into the rescaled process $Z^{\varepsilon,n}_t$, we get by (6.1) and (6.2) that
$$Z^{\varepsilon,n}_t = \frac1\varepsilon\int_0^t\Big\{c\big(Y^{\varepsilon,n}_s, Y^{\varepsilon,n}_{\pi_n(s)}\big) - c(y_s, y_s)\Big\}\,ds + \int_0^t\sigma(Y^{\varepsilon,n}_s)\,dW_s. \tag{6.10}$$
Applying Taylor's theorem (see [10]), we obtain
$$Z^{\varepsilon,n}_t = \int_0^t\big\{D_1c(y_s,y_s) + D_2c(y_s,y_s)\big\}Z^{\varepsilon,n}_s\,ds + \int_0^t D_2c(y_s,y_s)\Big(\frac{Y^{\varepsilon,n}_{\pi_n(s)} - Y^{\varepsilon,n}_s}{\varepsilon}\Big)\,ds + \int_0^t\sigma(Y^{\varepsilon,n}_s)\,dW_s + R^{\varepsilon,n}_t,$$
where
$$R^{\varepsilon,n}_t := \int_0^t\Big[\frac{c\big(Y^{\varepsilon,n}_s, Y^{\varepsilon,n}_{\pi_n(s)}\big) - c(y_s, y_s)}{\varepsilon} - D_1c(y_s,y_s)Z^{\varepsilon,n}_s - D_2c(y_s,y_s)Z^{\varepsilon,n}_s - D_2c(y_s,y_s)\Big(\frac{Y^{\varepsilon,n}_{\pi_n(s)} - Y^{\varepsilon,n}_s}{\varepsilon}\Big)\Big]\,ds. \tag{6.11}$$
Our goal is to describe the limiting behavior of the fluctuation process $Z^{\varepsilon,n}_t$ as $\varepsilon\searrow0$ and $n\to\infty$. For this purpose, we define a function
$$\ell_g(t) := cM\int_0^t D_2c(y_s,y_s)\cdot c(y_s,y_s)\,ds. \tag{6.12}$$
Suppose we are able to show that $R^{\varepsilon,n}_t = O(\varepsilon^2 + n^{-2})$, and
$$\int_0^t D_2c(y_s,y_s)\Big(\frac{Y^{\varepsilon,n}_{\pi_n(s)} - Y^{\varepsilon,n}_s}{\varepsilon}\Big)\,ds \to \ell_g(t)$$
as $\varepsilon\searrow0$ and $n\to\infty$. Then the process $Z^{\varepsilon,n}_t$ converges to a limiting process $Z = \{Z_t : t\ge0\}$, which is uniquely defined as the solution of the stochastic differential equation
$$Z_t := \int_0^t\big\{D_1c(y_s,y_s) + D_2c(y_s,y_s)\big\}Z_s\,ds + cM\int_0^t D_2c(y_s,y_s)\cdot c(y_s,y_s)\,ds + \int_0^t\sigma(y_s)\,dW_s. \tag{6.13}$$
This argument is the central focus of this subsection.
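The limit equation (6.13) is a linear SDE and is easy to simulate. The sketch below uses the illustrative choices $c(x,z) = -z$, $\sigma(x) = 0.3x$, $y_0 = 1$ (so $y_s = e^{-s}$, $D_1c = 0$, $D_2c = -1$) in the regime where $c = \lim 1/(n\varepsilon) = 0$, so the correction term involving $cM$ vanishes; all of these choices are assumptions for the experiment, not taken from the paper. For this example the variance of $Z_t$ is available in closed form, $\mathrm{Var}(Z_t) = 0.09\,t\,e^{-2t}$, which gives a simple sanity check on the scheme.

```python
import numpy as np

rng = np.random.default_rng(4)

# Euler-Maruyama simulation of the limiting SDE (6.13):
# dZ = -Z dt + 0.3 * exp(-s) dW   (drift {D1c + D2c} Z = -Z, diffusion sigma(y_s))
T, dt, paths = 1.0, 0.002, 20_000
Z = np.zeros(paths)
for step in range(int(T / dt)):
    s = step * dt
    dW = rng.normal(0.0, np.sqrt(dt), size=paths)
    Z = Z - Z * dt + 0.3 * np.exp(-s) * dW
var_hat = np.var(Z)
print(var_hat, 0.09 * T * np.exp(-2 * T))   # Monte Carlo vs. analytic variance
```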
We analyze this approximation in Regimes 1 and 2 in the following theorem, which constitutes the second main result of this section.

Theorem 6.5 (Central Limit Theorem Type Result). Let $y_t$ and $Y^{\varepsilon,n}_t$ denote the solutions of (6.2) and (6.1), respectively. Furthermore, let $Z^{\varepsilon,n}_t$ and $Z_t$ be defined by (6.10) and (6.13), respectively. Suppose that we are in Regime $i \in \{1, 2\}$, i.e., $\lim_{\varepsilon \searrow 0,\, n \to \infty} 1/(n\varepsilon) = c \in [0, \infty)$. Then, for any fixed $T \in (0, \infty)$, there exists a positive constant $C$ independent of $\varepsilon$ and $n$, and there exists $\varepsilon_0 > 0$, such that for $0 < \varepsilon < \varepsilon_0$ we have
$$\mathbb{E}\Big[\sup_{0 \le t \le T} |Z^{\varepsilon,n}_t - Z_t|\Big] \le \big[c(n^{-1/4} + \varepsilon) + \kappa(\varepsilon) + \varepsilon + n^{-1/2}\big]\, C_{MT\xi_1}.$$

The proof of Theorem 6.5 is based on a collection of intermediate results. We begin by presenting the necessary propositions and lemmas; once these are stated, we combine them to conclude the proof of the theorem at the end of this subsection. The proofs of the intermediate results are provided in the next subsection.

Proposition 6.6. We have
$$\mathbb{E}\Big[\sup_{0 \le t \le T} \Big| \int_0^t \{\sigma(Y^{\varepsilon,n}_s) - \sigma(y_s)\}\, dW_s \Big|\Big] \le C_T \big[\mathbb{E}[N^2] + \varepsilon^2\big]^{1/2} \le [\varepsilon + n^{-1}]\, C_{\xi_1 T}.$$

Proposition 6.7. Let $R^{\varepsilon,n}_t$ be defined as in (6.11). Then, for any fixed $T > 0$, there exists $C_T > 0$ such that
$$\mathbb{E}\Big[\sup_{0 \le t \le T} |R^{\varepsilon,n}_t|\Big] \le C_T \big(\varepsilon^4 + \mathbb{E}[N^4]\big)^{1/2} \le (\varepsilon^2 + n^{-2})\, C_{T\xi_1}.$$

Proposition 6.8. Let $y_t$ and $Y^{\varepsilon,n}_t$ solve (6.2) and (6.1), respectively. Then, for any fixed $T > 0$, there exists a positive constant $C_T$ such that for any $0 < \varepsilon < \varepsilon_0$,
$$\mathbb{E}\left[\sup_{0 \le t \le T} \left| \int_0^t D_2 c(y_s, y_s)\, \frac{Y^{\varepsilon,n}_s - Y^{\varepsilon,n}_{\pi_n(s)}}{\varepsilon}\, ds - \ell_g(t) \right|\right] \le \big[c(n^{-1/4} + \varepsilon) + \kappa(\varepsilon) + n^{-1/2}\big]\, C_{MT\xi_1},$$
which converges to zero as $\varepsilon \searrow 0$ and $n \to \infty$, with the rate depending on the value of $c \in [0, \infty)$.
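A small numerical sanity check of the renewal-averaging ingredient behind Proposition 6.8 may be helpful. The sketch below is illustrative only: it assumes a hypothetical Poisson sampling scheme with unit-mean exponential interarrivals, for which we take the averaging constant to be $1$ (an assumption of this sketch, not taken from the paper), and estimates the mean of the rescaled age $n(s - \pi_n(s))$ by Monte Carlo.

```python
import numpy as np

# Illustrative Monte Carlo check (not from the paper): for Poisson sampling at
# rate n with unit-mean exponential interarrivals, the rescaled age
# n*(s - pi_n(s)) at a fixed time s has mean close to 1 in this instance.

def mean_rescaled_age(n, s=1.0, reps=20000, seed=1):
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(reps):
        t, last = 0.0, 0.0
        while t <= s:
            last = t                        # last sampling time <= s
            t += rng.exponential(1.0 / n)   # interarrival gaps with mean 1/n
        total += n * (s - last)             # rescaled age at time s
    return total / reps

print(mean_rescaled_age(n=5), mean_rescaled_age(n=200))
```

As $n$ grows the estimate stabilizes near $1$, consistent with the averaging of $n(s - \pi_n(s))$ used repeatedly in the proofs below.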
We establish Proposition 6.8 by decomposing the proof into the following sequence of intermediate lemmas, each addressing a key component of the argument.

Lemma 6.9. Let $Y^{\varepsilon,n}_t$ be the solution of equation (6.1). Then, for any fixed $T > 0$, $t \in [0, T]$, and $\varepsilon, n > 0$, we have
$$\int_0^t D_2 c(y_s, y_s)\, \frac{Y^{\varepsilon,n}_s - Y^{\varepsilon,n}_{\pi_n(s)}}{\varepsilon}\, ds = \sum_{i=1}^3 J^{\varepsilon,n}_i(t), \tag{6.14}$$
where
$$J^{\varepsilon,n}_1(t) := \int_0^t D_2 c(y_s, y_s) \int_{\pi_n(s)}^s \frac{c\big(Y^{\varepsilon,n}_r, Y^{\varepsilon,n}_{\pi_n(r)}\big) - c\big(Y^{\varepsilon,n}_{\pi_n(r)}, Y^{\varepsilon,n}_{\pi_n(r)}\big)}{\varepsilon}\, dr\, ds + \int_0^t D_2 c(y_s, y_s) \int_{\pi_n(s)}^s \frac{c\big(Y^{\varepsilon,n}_{\pi_n(r)}, Y^{\varepsilon,n}_{\pi_n(r)}\big) - c\big(y_{\pi_n(r)}, y_{\pi_n(r)}\big)}{\varepsilon}\, dr\, ds,$$
$$J^{\varepsilon,n}_2(t) := \int_0^t D_2 c(y_s, y_s) \int_{\pi_n(s)}^s \frac{c\big(y_{\pi_n(r)}, y_{\pi_n(r)}\big)}{\varepsilon}\, dr\, ds,$$
$$J^{\varepsilon,n}_3(t) := \int_0^t D_2 c(y_s, y_s) \int_{\pi_n(s)}^s \sigma(Y^{\varepsilon,n}_r)\, dW_r\, ds.$$

Now, our next step is to show that the terms $\mathbb{E}[\sup_{0 \le t \le T} |J^{\varepsilon,n}_1(t)|]$, $\sup_{0 \le t \le T} |J^{\varepsilon,n}_2(t) - \ell_g(t)|$, and $\mathbb{E}[\sup_{0 \le t \le T} |J^{\varepsilon,n}_3(t)|]$ are small.

Lemma 6.10. Let $J^{\varepsilon,n}_1(t)$ be defined as in equation (6.14). Then, for any fixed $T > 0$, there exists a positive constant $C_{T\xi_1}$ such that for any $\varepsilon \in (0, \varepsilon_0)$, we have
$$\mathbb{E}\Big[\sup_{0 \le t \le T} |J^{\varepsilon,n}_1(t)|\Big] \le c\big(\varepsilon + n^{-1}\big)\, C_{T\xi_1}.$$

We next decompose $J^{\varepsilon,n}_2(t)$, defined in (6.14), as
$$J^{\varepsilon,n}_2(t) = I^{\varepsilon,n}_1(t) + I^{\varepsilon,n}_2(t),$$
where
$$I^{\varepsilon,n}_1(t) := \int_0^t \big\{ D_2 c(y_s, y_s) - D_2 c\big(Y_{\pi_n(s)}, Y_{\pi_n(s)}\big) \big\} \int_{\pi_n(s)}^s \frac{c\big(y_{\pi_n(r)}, y_{\pi_n(r)}\big)}{\varepsilon}\, dr\, ds,$$
$$I^{\varepsilon,n}_2(t) := \int_0^t D_2 c\big(Y_{\pi_n(s)}, Y_{\pi_n(s)}\big) \int_{\pi_n(s)}^s \frac{c\big(y_{\pi_n(r)}, y_{\pi_n(r)}\big)}{\varepsilon}\, dr\, ds. \tag{6.15}$$

Lemma 6.11. Let $I^{\varepsilon,n}_1(t)$ be defined as in equation (6.15). Then
$$\mathbb{E}\Big[\sup_{0 \le t \le T} |I^{\varepsilon,n}_1(t)|\Big] \le c\big[\varepsilon + n^{-1}\big]\, C_{T\xi_1}.$$

Lemma 6.12. Let $I^{\varepsilon,n}_2(t)$ be defined as in equation (6.15), with $\ell_g(t)$ as given in (6.12).
Then, for any T > 0 , ther e exists a p ositive c onstant C T > 0 such that for any 0 < ε < ε 0 , we have sup 0 ≤ t ≤ T | I ε,n 2 ( t ) − ℓ g ( t ) | ≤ [ c ( n − 1 / 4 + ε ) + κ ( ε )] C T M ξ 1 . Lemma 6.13. L et J ε,n 3 ( t ) b e define d as in e quation ( 6.14 ) . Then, for any fixe d T > 0 , ther e exists a p ositive c onstant C T such that for any ε, n > 0 , we have E  sup 0 ≤ t ≤ T | J ε,n 3 ( t ) |  ≤ n − 1 / 2 C T ξ 1 . Pr o of of The or em 6.5 . By combining the conclusions of Prop ositions 6.6 – 6.8 , and using the estimates established therein, we obtain the desired result. 48 6.2.1 Pro ofs of Prop ostions 6.6 - 6.8 and Lemmas 6.9 - 6.13 . Pr o of of Pr op osition 6.6 . Using C-S inequalit y , BDG inequality , Hypothesis 6.1 and Theorem 6.4 , w e get E  sup 0 ≤ t ≤ T     Z t 0 { σ ( Y ε,n s ) − σ ( y s ) } dW s      ≤ C  E sup 0 ≤ t ≤ T | Y ε,n t − y t | 2  1 2 ≤ C T  E [ N 2 ] + ε 2  1 2 ≤ [ ε + n − 1 ] C T ξ 1 . Pr o of of Pr op osition 6.7 . The proof can b e obtained by directly following [ 6 , Prop osition 4.4] b y using T aylor’s form ula and Hyp otheses 6.2 . Pr o of of Pr op osition 6.8 . By putting together the results prov ed in Lemmas 6.9 – 6.13 , w e obtain the required result. Pr o of of L emma 6.9 . F rom ( 6.1 ) w e ha ve Z t 0 D 2 c ( y s , y s ) Y ε,n s − Y ε,n π n ( s ) ε ds = Z t 0 D 2 c ( y s , y s ) Z s π n ( s ) c ( Y ε,n r , Y ε,n π n ( r ) ) ε dr ds + Z t 0 D 2 c ( y s , y s ) Z s π n ( s ) σ ( Y ε,n r ) dW r ds. 
Writing
$$c\big(Y^{\varepsilon,n}_r, Y^{\varepsilon,n}_{\pi_n(r)}\big) = \Big[c\big(Y^{\varepsilon,n}_r, Y^{\varepsilon,n}_{\pi_n(r)}\big) - c\big(Y^{\varepsilon,n}_{\pi_n(r)}, Y^{\varepsilon,n}_{\pi_n(r)}\big)\Big] + \Big[c\big(Y^{\varepsilon,n}_{\pi_n(r)}, Y^{\varepsilon,n}_{\pi_n(r)}\big) - c\big(y_{\pi_n(r)}, y_{\pi_n(r)}\big)\Big] + c\big(y_{\pi_n(r)}, y_{\pi_n(r)}\big)$$
in the first term on the right-hand side of the above equation, we get
$$\frac{1}{\varepsilon} \int_0^t D_2 c(y_s, y_s)\big(Y^{\varepsilon,n}_s - Y^{\varepsilon,n}_{\pi_n(s)}\big)\, ds = \int_0^t D_2 c(y_s, y_s) \int_{\pi_n(s)}^s \frac{c\big(Y^{\varepsilon,n}_r, Y^{\varepsilon,n}_{\pi_n(r)}\big) - c\big(Y^{\varepsilon,n}_{\pi_n(r)}, Y^{\varepsilon,n}_{\pi_n(r)}\big)}{\varepsilon}\, dr\, ds + \int_0^t D_2 c(y_s, y_s) \int_{\pi_n(s)}^s \frac{c\big(Y^{\varepsilon,n}_{\pi_n(r)}, Y^{\varepsilon,n}_{\pi_n(r)}\big) - c\big(y_{\pi_n(r)}, y_{\pi_n(r)}\big)}{\varepsilon}\, dr\, ds + \int_0^t D_2 c(y_s, y_s) \int_{\pi_n(s)}^s \frac{c\big(y_{\pi_n(r)}, y_{\pi_n(r)}\big)}{\varepsilon}\, dr\, ds + \int_0^t D_2 c(y_s, y_s) \int_{\pi_n(s)}^s \sigma(Y^{\varepsilon,n}_r)\, dW_r\, ds.$$
The right-hand side of the above equation is easily recognized as the sum of the $J^{\varepsilon,n}_i(t)$, $1 \le i \le 3$.

Proof of Lemma 6.10. Recalling the definition of $J^{\varepsilon,n}_1(t)$ from (6.14), we write
$$J^{\varepsilon,n}_1(t) = \int_0^t D_2 c(y_s, y_s) \int_{\pi_n(s)}^s \frac{c\big(Y^{\varepsilon,n}_r, Y^{\varepsilon,n}_{\pi_n(r)}\big) - c\big(Y^{\varepsilon,n}_{\pi_n(r)}, Y^{\varepsilon,n}_{\pi_n(r)}\big)}{\varepsilon}\, dr\, ds + \int_0^t D_2 c(y_s, y_s) \int_{\pi_n(s)}^s \frac{c\big(Y^{\varepsilon,n}_{\pi_n(r)}, Y^{\varepsilon,n}_{\pi_n(r)}\big) - c\big(y_{\pi_n(r)}, y_{\pi_n(r)}\big)}{\varepsilon}\, dr\, ds =: J^{\varepsilon,n}_{1,1}(t) + J^{\varepsilon,n}_{1,2}(t).$$
For $J^{\varepsilon,n}_{1,1}(t)$, using Hypothesis 6.1, we have
$$|J^{\varepsilon,n}_{1,1}(t)| \lesssim \frac{1}{\varepsilon} \int_0^t \int_{\pi_n(s)}^s \sup_{0 \le r \le s} \big|Y^{\varepsilon,n}_r - y_r + y_r - y_{\pi_n(r)} + y_{\pi_n(r)} - Y^{\varepsilon,n}_{\pi_n(r)}\big|\, dr\, ds \lesssim \frac{1}{\varepsilon} \int_0^t \int_{\pi_n(s)}^s \sup_{0 \le r \le s} |Y^{\varepsilon,n}_r - y_r|\, dr\, ds + \frac{1}{\varepsilon} \int_0^t \int_{\pi_n(s)}^s \sup_{0 \le r \le s} |y_r - y_{\pi_n(r)}|\, dr\, ds.$$
Taking the supremum over time and then expectation on both sides, we get
$$\mathbb{E}\Big[\sup_{0 \le t \le T} |J^{\varepsilon,n}_{1,1}(t)|\Big] \lesssim \frac{1}{\varepsilon}\, \mathbb{E}\left[\int_0^T \sup_{0 \le r \le s} \big(|Y^{\varepsilon,n}_r - y_r| + |y_r - y_{\pi_n(r)}|\big)(s - \pi_n(s))\, ds\right].$$
Applying the Cauchy–Schwarz inequality, we get
$$\mathbb{E}\Big[\sup_{0 \le t \le T} |J^{\varepsilon,n}_{1,1}(t)|\Big] \lesssim \frac{1}{\varepsilon} \Big[\mathbb{E}\sup_{0 \le r \le T} |Y^{\varepsilon,n}_r - y_r|^2\Big]^{1/2} \Big[\mathbb{E}\int_0^T (s - \pi_n(s))^2\, ds\Big]^{1/2} + \frac{1}{\varepsilon} \Big[\mathbb{E}\sup_{0 \le r \le T} |y_r - y_{\pi_n(r)}|^2\Big]^{1/2} \Big[\mathbb{E}\int_0^T (s - \pi_n(s))^2\, ds\Big]^{1/2} \le \frac{1}{\varepsilon} \Big[\big(\varepsilon^2 + \mathbb{E}[N^2]\big)^{1/2} + \big(\mathbb{E}[N^2]\big)^{1/2}\Big] \big(\mathbb{E}[N^2]\big)^{1/2}\, C_T \le c\big(\varepsilon + n^{-1}\big)\, C_{T\xi_1}.$$
Similarly, for $J^{\varepsilon,n}_{1,2}(t)$, by calculations analogous to those above, we obtain
$$\mathbb{E}\Big[\sup_{0 \le t \le T} |J^{\varepsilon,n}_{1,2}(t)|\Big] \le c\big(\varepsilon + n^{-1}\big)\, C_{T\xi_1}.$$

Before proving Lemmas 6.11 and 6.12, we need the following fluctuation bound, which can be established by using the mean value theorem and Hypothesis 6.2.

Lemma 6.14. Let the function $c$ satisfy Hypothesis 6.2. Then there exists a constant $K > 0$ such that
$$\big\| D_2 c\big(Y_{\pi_n(s)}, Y_{\pi_n(s)}\big) - D_2 c(y_s, y_s) \big\| \le K\, |Y_{\pi_n(s)} - y_s|.$$

Proof of Lemma 6.11. We have
$$I^{\varepsilon,n}_1(t) = \int_0^t \big\{ D_2 c(y_s, y_s) - D_2 c\big(Y_{\pi_n(s)}, Y_{\pi_n(s)}\big) \big\} \int_{\pi_n(s)}^s \frac{c\big(y_{\pi_n(r)}, y_{\pi_n(r)}\big)}{\varepsilon}\, dr\, ds.$$
Applying the Cauchy–Schwarz inequality repeatedly, we get for any $t \in [0, T]$ that
$$|I^{\varepsilon,n}_1(t)| \le \frac{1}{\varepsilon} \left( \int_0^t \big\| D_2 c(y_s, y_s) - D_2 c\big(Y_{\pi_n(s)}, Y_{\pi_n(s)}\big) \big\|^2\, ds \right)^{1/2} \left( \int_0^t \big|c\big(y_{\pi_n(s)}, y_{\pi_n(s)}\big)\big|^4\, ds \right)^{1/4} \left( \int_0^t (s - \pi_n(s))^4\, ds \right)^{1/4} \lesssim \frac{1}{\varepsilon} \left( \int_0^T |Y_{\pi_n(s)} - y_s|^2\, ds \right)^{1/2} \left( \int_0^T \big(1 + |y_{\pi_n(s)}|^4\big)\, ds \right)^{1/4} \left( \int_0^T (s - \pi_n(s))^4\, ds \right)^{1/4},$$
by using Lemma 6.14 and (6.4). Taking the supremum over $t \in [0, T]$ on the left, taking expectation on both sides, using Jensen's inequality, and applying Theorem 6.4, we get
$$\mathbb{E}\Big[\sup_{0 \le t \le T} |I^{\varepsilon,n}_1(t)|\Big] \le \frac{1}{\varepsilon} \big(\varepsilon^2 + \mathbb{E}[N^2]\big)^{1/2} \big(\mathbb{E}[N^4]\big)^{1/4}\, C_T \le c\big(\varepsilon + n^{-1}\big)\, C_{\xi_1 T}.$$

Proof of Lemma 6.12.
Recalling the definitions of $\ell_g(t)$ and $I^{\varepsilon,n}_2(t)$ from (6.12) and (6.15), respectively, we have
$$I^{\varepsilon,n}_2(t) - \ell_g(t) = \int_0^t D_2 c\big(Y_{\pi_n(s)}, Y_{\pi_n(s)}\big) \int_{\pi_n(s)}^s \frac{c\big(y_{\pi_n(r)}, y_{\pi_n(r)}\big)}{\varepsilon}\, dr\, ds - cM \int_0^t D_2 c(y_s, y_s) \cdot c(y_s, y_s)\, ds$$
$$= \Big(\frac{1}{n\varepsilon} - c\Big) \int_0^t D_2 c\big(Y_{\pi_n(s)}, Y_{\pi_n(s)}\big) \cdot c\big(y_{\pi_n(s)}, y_{\pi_n(s)}\big)\, \frac{s - \pi_n(s)}{1/n}\, ds + c \int_0^t \Big\{ D_2 c\big(Y_{\pi_n(s)}, Y_{\pi_n(s)}\big) \cdot c\big(y_{\pi_n(s)}, y_{\pi_n(s)}\big) - D_2 c(y_s, y_s) \cdot c(y_s, y_s) \Big\}\, \frac{s - \pi_n(s)}{1/n}\, ds + c \int_0^t \Big(\frac{s - \pi_n(s)}{1/n} - M\Big) D_2 c(y_s, y_s) \cdot c(y_s, y_s)\, ds.$$
Therefore,
$$\big|I^{\varepsilon,n}_2(t) - \ell_g(t)\big| \le \Big|\frac{1}{n\varepsilon} - c\Big| \left| \int_0^t D_2 c\big(Y_{\pi_n(s)}, Y_{\pi_n(s)}\big) \cdot c\big(y_{\pi_n(s)}, y_{\pi_n(s)}\big)\, \frac{s - \pi_n(s)}{1/n}\, ds \right| + c \left| \int_0^t \Big\{ D_2 c\big(Y_{\pi_n(s)}, Y_{\pi_n(s)}\big) \cdot c\big(y_{\pi_n(s)}, y_{\pi_n(s)}\big) - D_2 c(y_s, y_s) \cdot c(y_s, y_s) \Big\}\, \frac{s - \pi_n(s)}{1/n}\, ds \right| + c \left| \int_0^t \Big(\frac{s - \pi_n(s)}{1/n} - M\Big) D_2 c(y_s, y_s) \cdot c(y_s, y_s)\, ds \right| =: I^1_2 + I^2_2 + I^3_2.$$
Noting that $\kappa(\varepsilon) = \big|\frac{1}{n\varepsilon} - c\big|$, we have
$$I^1_2 = \kappa(\varepsilon) \left| \int_0^t D_2 c\big(Y_{\pi_n(s)}, Y_{\pi_n(s)}\big) \cdot c\big(y_{\pi_n(s)}, y_{\pi_n(s)}\big)\, \frac{s - \pi_n(s)}{1/n}\, ds \right| \le \kappa(\varepsilon)\, K \int_0^t \big(1 + |y_{\pi_n(s)}|\big)\, \frac{s - \pi_n(s)}{1/n}\, ds,$$
which gives
$$\mathbb{E}\Big[\sup_{0 \le t \le T} |I^1_2(t)|\Big] \le \kappa(\varepsilon)\, nK \Big[\mathbb{E}\int_0^T (s - \pi_n(s))^2\, ds\Big]^{1/2} C_T = \kappa(\varepsilon)\, nK \big(\mathbb{E}[N^2]\big)^{1/2} C_T \le \kappa(\varepsilon)\, C_{T\xi_1}.$$
Now, by the Cauchy–Schwarz inequality, we obtain
$$\mathbb{E}\Big[\sup_{0 \le t \le T} |I^2_2|\Big] \le nc \left[ \mathbb{E}\int_0^T \Big\{ D_2 c\big(Y_{\pi_n(s)}, Y_{\pi_n(s)}\big) \cdot c\big(y_{\pi_n(s)}, y_{\pi_n(s)}\big) - D_2 c(y_s, y_s) \cdot c(y_s, y_s) \Big\}^2\, ds \right]^{1/2} \left[ \mathbb{E}\int_0^T (s - \pi_n(s))^2\, ds \right]^{1/2}.$$
The last integral is $\big(\mathbb{E}[N^2]\big)^{1/2}$. Now let us focus on the first integral.
We can write
$$\mathbb{E}\int_0^T \big| D_2 c\big(Y_{\pi_n(s)}, Y_{\pi_n(s)}\big) \cdot c\big(y_{\pi_n(s)}, y_{\pi_n(s)}\big) - D_2 c(y_s, y_s) \cdot c(y_s, y_s) \big|^2\, ds \lesssim \mathbb{E}\int_0^T \big| D_2 c\big(Y_{\pi_n(s)}, Y_{\pi_n(s)}\big) \cdot \big( c\big(y_{\pi_n(s)}, y_{\pi_n(s)}\big) - c(y_s, y_s) \big) \big|^2\, ds + \mathbb{E}\int_0^T \big| \big( D_2 c\big(Y_{\pi_n(s)}, Y_{\pi_n(s)}\big) - D_2 c(y_s, y_s) \big) \cdot c(y_s, y_s) \big|^2\, ds \le K\, \mathbb{E}\int_0^T |y_{\pi_n(s)} - y_s|^2\, ds + \left[ \mathbb{E}\int_0^T \big|Y_{\pi_n(s)} - y_s\big|^4\, ds \right]^{1/2} \left[ \mathbb{E}\int_0^T \big(1 + |y_s|^4\big)\, ds \right]^{1/2},$$
where we have used Hypotheses 6.1 and 6.2 and Lemma 6.14. Thus
$$\mathbb{E}\Big[\sup_{0 \le t \le T} |I^2_2|\Big] \le nc \Big[ \mathbb{E}[N^2] + \big(\varepsilon^4 + \mathbb{E}[N^4]\big)^{1/2} \Big]^{1/2} \big(\mathbb{E}[N^2]\big)^{1/2}\, C_T \le c\big[n^{-1} + \varepsilon\big]\, C_{T\xi_1}.$$
Next, we have
$$\mathbb{E}\Big[\sup_{0 \le t \le T} |I^3_2|\Big] = c\, \mathbb{E}\sup_{0 \le t \le T} \left| \int_0^t \Big(\frac{s - \pi_n(s)}{1/n} - M\Big) D_2 c(y_s, y_s) \cdot c(y_s, y_s)\, ds \right|.$$
By Hypothesis 6.2, the term $D_2 c(y_s, y_s) \cdot c(y_s, y_s)$ is continuous. Moreover, by (6.4) together with estimate (6.5), we have $|c(y_s, y_s)| \le C_T(1 + |y_s|) \le C_T$, and from Hypothesis 6.2 we also obtain $|D_2 c(y_s, y_s)| < K$. Hence the product $D_2 c(y_s, y_s) \cdot c(y_s, y_s)$ is not only continuous but also bounded. Therefore, there exists a mollifier $\rho_\varepsilon(y_s)$ such that
$$\big| D_2 c(y_s, y_s) \cdot c(y_s, y_s) - \rho_\varepsilon(y_s) \big| \le C\varepsilon.$$
Now take $f$ to be a function satisfying the assumptions of Lemma 5.9.
Then we have
$$\mathbb{E}\sup_{0 \le t \le T} \left| \int_0^t \Big(\frac{s - \pi_n(s)}{1/n} - M\Big) D_2 c(y_s, y_s) \cdot c(y_s, y_s)\, ds \right| \le \mathbb{E}\int_0^T \left| \Big(\frac{s - \pi_n(s)}{1/n} - M\Big) \big( D_2 c(y_s, y_s) \cdot c(y_s, y_s) - \rho_\varepsilon(y_s) \big) \right| ds + \mathbb{E}\int_0^T \left| \Big(\frac{s - \pi_n(s)}{1/n} - M\Big) \big( \rho_\varepsilon(y_s) - f(y_s) \big) \right| ds + \mathbb{E}\sup_{0 \le t \le T} \left| \int_0^t \Big(\frac{s - \pi_n(s)}{1/n} - M\Big) f(y_s)\, ds \right|$$
$$\le \sup_{s \in [0,T]} \big( |D_2 c(y_s, y_s) \cdot c(y_s, y_s) - \rho_\varepsilon(y_s)| + |\rho_\varepsilon(y_s) - f(y_s)| \big) \int_0^T \mathbb{E}\big| n(s - \pi_n(s)) - M \big|\, ds + n^{-1/4}\, C_{\xi_1 T} \le C\varepsilon \int_0^T \mathbb{E}\big( \xi_{N^n_s + 1} + M \big)\, ds + n^{-1/4}\, C_{\xi_1 MT} \le \big(\varepsilon + n^{-1/4}\big)\, C_{\xi_1 MT},$$
where in the second-to-last inequality we have used Lemma 5.9. So, finally, we have
$$\mathbb{E}\Big[\sup_{0 \le t \le T} |I^3_2|\Big] \le c\big(\varepsilon + n^{-1/4}\big)\, C_{\xi_1 MT}.$$

Proof of Lemma 6.13. For $t \in [0, T]$, recalling the definition of $J^{\varepsilon,n}_3(t)$ from equation (6.14) and using the Cauchy–Schwarz inequality, we obtain
$$|J^{\varepsilon,n}_3(t)| \le \left( \int_0^t |D_2 c(y_s, y_s)|^2\, ds \right)^{1/2} \left( \int_0^t \left| \int_{\pi_n(s)}^s \sigma(Y^{\varepsilon,n}_r)\, dW_r \right|^2 ds \right)^{1/2}.$$
Taking the supremum over $[0, T]$ followed by expectation, applying Jensen's inequality ($\mathbb{E}[X^{1/2}] \le [\mathbb{E}X]^{1/2}$), and then using Hypothesis 6.2 for the boundedness of $D_2 c(y_s, y_s)$, we have
$$\mathbb{E}\Big[\sup_{0 \le t \le T} |J^{\varepsilon,n}_3(t)|\Big] \le C_T \left( \int_0^T \mathbb{E}\left| \int_{\pi_n(s)}^s \sigma(Y^{\varepsilon,n}_r)\, dW_r \right|^2 ds \right)^{1/2}. \tag{6.16}$$
Let $\sigma_i \in \mathbb{R}^d$, $1 \le i \le n$, denote the columns of the matrix $\sigma$. Then, using the Itô isometry and the Cauchy–Schwarz inequality, we get
$$\int_0^T \mathbb{E}\left| \int_{\pi_n(s)}^s \sigma(Y^{\varepsilon,n}_r)\, dW_r \right|^2 ds = \int_0^T \mathbb{E}\left| \sum_{i=1}^n \int_{\pi_n(s)}^s \sigma_i(Y^{\varepsilon,n}_r)\, dW^i_r \right|^2 ds \lesssim \int_0^T \mathbb{E}\sum_{i=1}^n \left| \int_0^T \sigma_i(Y^{\varepsilon,n}_r)\, \mathbf{1}_{[\pi_n(s), s]}(r)\, dW^i_r \right|^2 ds \lesssim \int_0^T \sum_{i,j=1}^n \mathbb{E}\int_0^T \sigma_{ji}^2(Y^{\varepsilon,n}_r)\, \mathbf{1}_{[\pi_n(s), s]}(r)\, dr\, ds = \mathbb{E}\int_0^T \int_{\pi_n(s)}^s |\sigma(Y^{\varepsilon,n}_r)|^2\, dr\, ds \le \left[ \mathbb{E}\int_0^T \sup_{0 \le r \le s} |\sigma(Y^{\varepsilon,n}_r)|^4\, ds \right]^{1/2} \left[ \mathbb{E}\int_0^T (s - \pi_n(s))^2\, ds \right]^{1/2},$$
by the Cauchy–Schwarz inequality. Now, using (6.4), and then the moment bound (6.6) and Corollary 3.11, we get
$$\int_0^T \mathbb{E}\left| \int_{\pi_n(s)}^s \sigma(Y^{\varepsilon,n}_r)\, dW_r \right|^2 ds \le C \left[ \mathbb{E}\int_0^T \Big( 1 + \sup_{0 \le r \le s} |Y^{\varepsilon,n}_r|^4 \Big)\, ds \right]^{1/2} \big(\mathbb{E}[N^2]\big)^{1/2} \le C_T \big(\mathbb{E}[N^2]\big)^{1/2}.$$
Substituting this last expression into equation (6.16), we get
$$\mathbb{E}\Big[\sup_{0 \le t \le T} |J^{\varepsilon,n}_3(t)|\Big] \le C_T \big(\mathbb{E}[N^2]\big)^{1/4} \le \frac{C_{T\xi_1}}{\sqrt{n}}.$$

7 Acknowledgment

The first author acknowledges the support of the Institute Postdoctoral Fellowship at Ashoka University, India. The second author acknowledges the support of the Institute Postdoctoral Fellowship at IIT Kanpur, India. He would also like to thank Prof. Suprio Bhar (IIT Kanpur) for his support during the initial stage of this work through his CRG grant with file number IITK/RD/AD/28487.

References

[1] Patrick Billingsley. Convergence of Probability Measures. Wiley Series in Probability and Mathematical Statistics. John Wiley & Sons, New York, 1968.

[2] Charles-Edouard Bréhier. The averaging principle for stochastic differential equations driven by a Wiener process revisited. Comptes Rendus. Mathématique, 360(G3):265–273, 2022.

[3] Sandra Cerrai and Mark Freidlin. Averaging principle for a class of stochastic reaction–diffusion equations. Probability Theory and Related Fields, 144(1):137–177, 2009.

[4] Sandra Cerrai and Yichun Zhu. Averaging principle for slow–fast systems of stochastic PDEs with rough coefficients. Stochastic Processes and their Applications, 185:104618, 2025.

[5] Tongwen Chen and Bruce A. Francis. Optimal Sampled-Data Control Systems. Springer Science & Business Media, 2012.

[6] Shivam Dhama. Asymptotic analysis of dynamical systems driven by Poisson random measures with periodic sampling. Multiscale Modeling & Simulation, 23(2):959–984, 2025.

[7] Shivam Dhama and Chetan D. Pahlajani.
Asymptotic analysis of discrete-time models for linear control systems with fast random sampling. In 2021 Seventh Indian Control Conference (ICC), pages 359–364. IEEE, 2021.

[8] Shivam Dhama and Chetan D. Pahlajani. Approximation of linear controlled dynamical systems with small random noise and fast periodic sampling. Mathematical Control and Related Fields, 13(3):852–872, 2023.

[9] Shivam Dhama and Chetan D. Pahlajani. Fluctuation analysis for a class of nonlinear systems with fast periodic sampling and small state-dependent white noise. Journal of Differential Equations, 362:438–483, 2023.

[10] Charles Henry Edwards. Advanced Calculus of Several Variables. Courier Corporation, 2012.

[11] Lawrence C. Evans. An Introduction to Stochastic Differential Equations, volume 82. American Mathematical Society, 2012.

[12] Mark I. Freidlin and Richard B. Sowers. A comparison of homogenization and large deviations, with applications to wavefront propagation. Stochastic Processes and their Applications, 82(1):23–52, 1999.

[13] Robert G. Gallager. Stochastic Processes: Theory for Applications. Cambridge University Press, 2013.

[14] Huijun Gao, Junli Wu, and Peng Shi. Robust sampled-data $H_\infty$ control with stochastic sampling. Automatica, 45(7):1729–1736, 2009.

[15] Dror Givon. Strong convergence rate for two-time-scale jump-diffusion stochastic differential systems. Multiscale Modeling & Simulation, 6(2):577–594, 2007.

[16] Peter Hall and Christopher C. Heyde. Martingale Limit Theory and Its Application. Academic Press, 1980.

[17] R. Z. Has'minskii. On stochastic processes defined by differential equations with a small parameter. Theory of Probability & Its Applications, 11(2):211–228, 1966.

[18] Rudolf Emil Kalman. Analysis and Synthesis of Linear Systems Operating on Randomly Sampled Data.
PhD thesis, Department of Electrical Engineering, Electronics Research Laboratories, Columbia University Engineering Center, 1957.

[19] R. Z. Khasminskij. On the principle of averaging the Itô's stochastic differential equations. Kybernetika, 4(3):260–279, 1968.

[20] H. Kushner and Leonard Tobias. On the stability of randomly sampled systems. IEEE Transactions on Automatic Control, 14(4):319–324, 1969.

[21] O. Leneman. Random sampling of random processes: mean-square behavior of a first order closed-loop system. IEEE Transactions on Automatic Control, 13(4):429–432, 1968.

[22] Johan Nilsson, Bo Bernhardsson, and Björn Wittenmark. Stochastic analysis and control of real-time systems with random time delays. Automatica, 34(1):57–64, 1998.

[23] Bo Shen, Zidong Wang, and Xiaohui Liu. Sampled-data synchronization control of dynamical networks with stochastic sampling. IEEE Transactions on Automatic Control, 57(10):2644–2650, 2012.

[24] Eduardo D. Sontag. Mathematical Control Theory: Deterministic Finite Dimensional Systems, volume 6. Springer Science & Business Media, 2013.

[25] Aneel Tanwani, Debasish Chatterjee, and Lars Grüne. Performance bounds for stochastic receding horizon control with randomly sampled measurements. In 2019 IEEE 58th Conference on Decision and Control (CDC), pages 2330–2335. IEEE, 2019.

[26] Aneel Tanwani, Debasish Chatterjee, and Daniel Liberzon. Stabilization of deterministic control systems under random sampling: Overview and recent developments. In Uncertainty in Complex Networked Systems: In Honor of Roberto Tempo, pages 209–246, 2018.

[27] Aneel Tanwani and Olga Yufereva. Error covariance bounds for suboptimal filters with Lipschitzian drift and Poisson-sampled measurements. Automatica, 122:109280, 2020.

[28] Aad van der Vaart and Jon Wellner. Weak Convergence and Empirical Processes: With Applications to Statistics.
Springer Science & Business Media, 2013.

[29] Ye Wang, Hongjian Liu, and Hailong Tan. An overview of filtering for sampled-data systems under communication constraints. International Journal of Network Dynamics and Intelligence, pages 100011–100011, 2023.

[30] Andrew T. A. Wood. Rosenthal's inequality for point process martingales. Stochastic Processes and their Applications, 81(2):231–246, 1999.

[31] Juan I. Yuz and Graham C. Goodwin. Sampled-Data Models for Linear and Nonlinear Systems. Springer, 2014.

[32] Xian-Ming Zhang, Qing-Long Han, Xiaohua Ge, Boda Ning, and Bao-Lin Zhang. Sampled-data control systems with non-uniform sampling: A survey of methods and trends. Annual Reviews in Control, 55:70–91, 2023.

[33] Bernt Øksendal. Stochastic differential equations. In Stochastic Differential Equations: An Introduction with Applications, pages 38–50. Springer, Berlin, Heidelberg, 2003.
