Corrected diffusion approximation for random walks conditioned to stay positive

Let $S_n$ be a random walk with i.i.d. increments which have zero mean and finite variance. For every $x\ge0$ we define the stopping time $\tau_x:=\inf\{n\ge1:x+S_n\le0\}$ and consider the probabilities $\mathbb{P}(x+S_n\ge y,\tau_x>n)$. We study the quality of the normal approximation for these probabilities and derive a Berry-Esseen-type inequality for $\mathbb{P}(x+S_n\ge y\mid\tau_x>n)$. Our Theorem 1 extends the results in [5], where we considered the special case $x=0$. It also complements the results of Siegmund and Yuh [10] on the corrected diffusion approximation.

Authors: Denis Denisov, Alexander Tarasov, Vitali Wachtel

2020 Mathematics Subject Classification: Primary 60G50; Secondary 60G40, 60F17.
Key words and phrases: Random walk, exit time, Rayleigh distribution, diffusion approximation, Berry-Esseen inequality.

1. Introduction

Let $\{X_k\}$ be a sequence of independent, identically distributed random variables with zero mean $\mathbb{E}X_1=0$ and finite variance $\mathbb{E}X_1^2=:\sigma^2\in(0,\infty)$. Consider a random walk $\{S_n;\,n\ge0\}$ defined as follows: $S_0=0$ and

$$S_n:=X_1+X_2+\dots+X_n,\quad n\ge1.$$

For every $x\ge0$ we define the stopping time

$$\tau_x:=\inf\{n\ge1:\,x+S_n\le0\}.$$

The main purpose of the present paper is to study the quality of the normal approximation for the probabilities $\mathbb{P}(x+S_n\ge y,\tau_x>n)$ and $\mathbb{P}(x+S_n\ge y\mid\tau_x>n)$. In [5] we considered the case $x=0$ and proved that there exists an absolute constant $A_0$ such that

$$\sup_{y\ge0}\left|\mathbb{P}(S_n\ge y\mid\tau_0>n)-e^{-y^2/2\sigma^2n}\right|\le\frac{A_0(\mathbb{E}|X_1|^3)^3}{\sigma^9\sqrt n}$$

and

$$\left|\frac{\mathbb{P}(\tau_0>n)}{\sqrt{\tfrac{2}{\sigma^2\pi}}\,\mathbb{E}|S_{\tau_0}|\,n^{-1/2}}-1\right|\le\frac{A_0(\mathbb{E}|X_1|^3)^3}{\sigma^9\sqrt n}$$

for all $n\ge1$. These estimates can be seen as an analogue of the classical Berry-Esseen inequality, which says that

$$\left|\mathbb{P}\left(\frac{S_n}{\sigma\sqrt n}\le x\right)-\Phi(x)\right|\le\frac{\gamma_0\,\mathbb{E}|X_1|^3}{\sigma^3\sqrt n},\tag{1}$$

where $\Phi$ stands for the standard normal distribution function and one can take $\gamma_0=0.4785$.

In the present note we are going to generalize the results from [5] to the case of an arbitrary starting point $x\ge0$. It turns out that in this case it is more convenient to consider the probabilities $\mathbb{P}(x+S_n\ge y,\tau_x>n)$ than the conditioned probabilities $\mathbb{P}(x+S_n\ge y\mid\tau_x>n)$. Here is our main result.

Theorem 1. Assume that $\mathbb{E}X_1=0$, $\mathbb{E}|X_1|^2=\sigma^2$ and $\mathbb{E}|X_1|^3<\infty$. Then there exists an absolute constant $A_1$ such that

$$\left|\mathbb{P}(x+S_n\ge y,\tau_x>n)-\left(\Phi\left(\frac{y+x}{\sigma\sqrt n}\right)-\Phi\left(\frac{y-x}{\sigma\sqrt n}\right)\right)-\frac{2}{\sqrt{2\sigma^2\pi n}}e^{-\frac{y^2}{2\sigma^2n}}\mathbb{E}|x+S_{\tau_x}|\right|\le\frac{A_1(\mathbb{E}|X_1|^3)^3\,\mathbb{E}|S_{\tau_x}|}{\sigma^9\sqrt n\,(x+\sqrt n)}.\tag{2}$$

Let $B_t$ denote the standard Brownian motion. For every fixed $x$ we set $\tau^{bm}_x:=\inf\{t>0:\,x+B_t\le0\}$. Then, by the reflection principle for Brownian motion,

$$\mathbb{P}(x+B_t\ge y,\tau^{bm}_x>t)=\mathbb{P}(x+B_t\ge y)-\mathbb{P}(-x+B_t\ge y)=\Phi\left(\frac{y+x}{\sqrt t}\right)-\Phi\left(\frac{y-x}{\sqrt t}\right).$$

Thus, the estimate (2) can be seen as a corrected diffusion approximation for random walks conditioned to stay positive; the correction is given by the term

$$\frac{2}{\sqrt{2\sigma^2\pi n}}e^{-\frac{y^2}{2\sigma^2n}}\mathbb{E}|x+S_{\tau_x}|.$$

This term is bigger than the right-hand side in (2) in the case when $x=o(n^{1/2})$.

To illustrate the effect of the correction we consider the probability $\mathbb{P}(\tau_x>n)$. Putting $y=0$ in (2), we obtain

$$\left|\mathbb{P}(\tau_x>n)-\left(\Phi\left(\frac{x}{\sigma\sqrt n}\right)-\Phi\left(-\frac{x}{\sigma\sqrt n}\right)\right)-\frac{2}{\sqrt{2\sigma^2\pi n}}\mathbb{E}|x+S_{\tau_x}|\right|\le\frac{A_1(\mathbb{E}|X_1|^3)^3\,\mathbb{E}|S_{\tau_x}|}{\sigma^9 n}.$$

If we assume now that $x=x_n\to\infty$ and $x_n=o(n^{1/2})$ then, using the fact that $\lim_{x\to\infty}\mathbb{E}|x+S_{\tau_x}|=:\mathcal{E}\in(0,\infty)$ under the assumption $\mathbb{E}|X_1|^3<\infty$, we conclude that

$$\left|\mathbb{P}(\tau_x>n)-\left(\Phi\left(\frac{x}{\sigma\sqrt n}\right)-\Phi\left(-\frac{x}{\sigma\sqrt n}\right)\right)-\frac{2\mathcal{E}}{\sqrt{2\sigma^2\pi n}}\right|=o(n^{-1/2}).$$
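The reflection-principle formula is easy to probe numerically. The following sketch is our own illustration and is not part of the paper: it assumes standard normal increments ($\sigma^2=1$) and the hypothetical parameters $x=y=10$, $n=100$, estimates $\mathbb{P}(x+S_n\ge y,\tau_x>n)$ by Monte Carlo, and compares it with the uncorrected diffusion approximation $\Phi((y+x)/\sqrt n)-\Phi((y-x)/\sqrt n)$; the residual gap is of the order of the correction term in (2).

```python
import math
import random

def phi(t):
    """Standard normal distribution function, via math.erf."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def estimate(x, y, n, paths, seed=1):
    """Monte Carlo estimate of P(x + S_n >= y, tau_x > n)
    for a walk with standard normal increments."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(paths):
        s = x
        alive = True
        for _ in range(n):
            s += rng.gauss(0.0, 1.0)
            if s <= 0.0:          # the walk has left the positive half-line
                alive = False
                break
        if alive and s >= y:
            hits += 1
    return hits / paths

x, y, n = 10.0, 10.0, 100
est = estimate(x, y, n, paths=20000)
# plain (Brownian) approximation from the reflection principle
approx = phi((y + x) / math.sqrt(n)) - phi((y - x) / math.sqrt(n))
print(est, approx)  # the two agree up to the O(1/sqrt(n)) correction term
```

With these parameters the simulated probability sits slightly above the plain diffusion value, which is consistent with the positive correction term in (2).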
This simple argument shows that Theorem 1 gives, for $x=o(\sqrt n)$, a better result than the bound

$$\sup_{x\ge0}\left|\mathbb{P}(\tau_x>n)-\left(\Phi\left(\frac{x}{\sigma\sqrt n}\right)-\Phi\left(-\frac{x}{\sigma\sqrt n}\right)\right)\right|\le\frac{A\,\mathbb{E}|X_1|^3}{n^{1/2}},\tag{3}$$

which has been obtained in [1]. If $x=O(n^{3/4})$ then the result in Theorem 1 gives a better rate of convergence than the rate obtained very recently by Grama and Xiao [7]. We notice also that our Theorem 1 can be seen as a complement to the corrected diffusion approximation obtained by Siegmund and Yuh in [10] in the case when $x=a\sqrt n$ with some $a>0$. They have obtained a short asymptotic expansion for the probability $\mathbb{P}(x+S_n\ge y,\tau_x\le n)$, which can be transferred, by using classical expansions in the CLT, into expansions for $\mathbb{P}(x+S_n\ge y,\tau_x>n)$.

It is well known that if $x=o(\sqrt n)$ then the distribution of $\frac{x+S_n}{\sigma\sqrt n}$ conditioned on $\tau_x>n$ converges towards the Rayleigh distribution. Theorem 1 allows one to obtain a rate of convergence in this limit theorem.

Corollary 2. Under the assumptions of Theorem 1, there exist absolute constants $A_2$, $A_3$ such that, for all $x\le\sqrt n$,

$$\left|\mathbb{P}(x+S_n\ge y\mid\tau_x>n)-e^{-\frac{y^2}{2\sigma^2n}}\right|\le\frac{A_2(\mathbb{E}|X_1|^3)^3}{\sigma^9\sqrt n}+A_3\frac{x^2}{\sigma^2n}\tag{4}$$

and

$$\left|\frac{\mathbb{P}(\tau_x>n)}{\sqrt{\tfrac{2}{\sigma^2\pi}}\,\mathbb{E}|S_{\tau_x}|\,n^{-1/2}}-1\right|\le\frac{A_2(\mathbb{E}|X_1|^3)^3}{\sigma^9\sqrt n}+A_3\frac{x^2}{\sigma^2n}.\tag{5}$$

This corollary implies a uniform rate of convergence of order $n^{-1/2}$ when $x\le n^{1/4}$. The term $\frac{x^2}{n}$ on the right-hand sides of (4) and (5) is caused by the approximation of $\Phi\left(\frac{y+x}{\sigma\sqrt n}\right)-\Phi\left(\frac{y-x}{\sigma\sqrt n}\right)$ by $\frac{2x}{\sqrt{2\sigma^2\pi n}}e^{-\frac{y^2}{2\sigma^2n}}$ and appears naturally in the asymptotic expansions for

$$\frac{\Phi\left(\frac{y+x}{\sigma\sqrt n}\right)-\Phi\left(\frac{y-x}{\sigma\sqrt n}\right)}{\Phi\left(\frac{x}{\sigma\sqrt n}\right)-\Phi\left(-\frac{x}{\sigma\sqrt n}\right)}$$

in the case when $x=o(\sqrt n)$.
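The Rayleigh limit behind Corollary 2 can also be checked by simulation. The sketch below is our own illustration, not part of the paper; it assumes standard normal increments ($\sigma^2=1$) and the hypothetical choice $x=2$, $y=10$, $n=100$ (so $x\le n^{1/4}$), and compares the empirical conditional probability $\mathbb{P}(x+S_n\ge y\mid\tau_x>n)$ with the Rayleigh tail $e^{-y^2/2n}$.

```python
import math
import random

def rayleigh_check(x, y, n, paths, seed=7):
    """Estimate P(x + S_n >= y | tau_x > n) for standard normal
    increments by Monte Carlo over the surviving paths."""
    rng = random.Random(seed)
    survived = 0
    above = 0
    for _ in range(paths):
        s = x
        ok = True
        for _ in range(n):
            s += rng.gauss(0.0, 1.0)
            if s <= 0.0:      # killed: the walk did not stay positive
                ok = False
                break
        if ok:
            survived += 1
            if s >= y:
                above += 1
    return above / survived

x, y, n = 2.0, 10.0, 100
cond = rayleigh_check(x, y, n, paths=20000)
target = math.exp(-y * y / (2.0 * n))  # Rayleigh tail e^{-y^2/(2 sigma^2 n)}
print(cond, target)
```

The discrepancy observed here is compatible with the $n^{-1/2}$ and $x^2/n$ error terms of (4); the tolerances are deliberately loose since this is only a sanity check.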
Comparing (2) with the classical Berry-Esseen inequality (1) and with Aleshkyavichene's bound (3), we see that the right-hand side of (2) contains the third power of the Lyapunov ratio $\frac{\mathbb{E}|X_1|^3}{\sigma^3}$. We believe that the optimal bound should be linear in the Lyapunov ratio and that the third power is caused by our approach, which uses the Berry-Esseen inequality to bound local probabilities; see (6) below. We next show that imposing some structural properties on the distribution of the increments $\{X_k\}$ allows one to improve the bound in Theorem 1.

2. Preliminary estimates

In what follows we shall assume, without loss of generality, that $\sigma^2=1$.

We first extend the upper bounds obtained in [5] to the case of an arbitrary starting point $x$. Using the classical Berry-Esseen inequality (1), we conclude that, uniformly in $x$ and $y$,

$$\mathbb{P}(x+S_n\in[y,y+z])\le\frac{2\gamma_0\,\mathbb{E}|X_1|^3}{\sqrt n}+\frac{1}{\sqrt{2\pi}}\int_{(y-x)/\sqrt n}^{(y+z-x)/\sqrt n}e^{-u^2/2}\,du\le\frac{\gamma_1(z)}{\sqrt{2n}},\tag{6}$$

where $\gamma_1(z):=\sqrt2\,\mathbb{E}|X_1|^3+z\pi^{-1/2}$. This implies that

$$\mathbb{P}(x+S_n\in[y,y+z],\tau_x>n)\le\int_0^\infty\mathbb{P}(x+S_{\lfloor n/2\rfloor}\in dw,\tau_x>n/2)\,\mathbb{P}(S_{n-\lfloor n/2\rfloor}\in[y-w,y-w+z])\le\frac{\gamma_1(z)}{\sqrt n}\,\mathbb{P}(\tau_x>n/2).\tag{7}$$

Lemma 3. For all $n\ge8(\mathbb{E}|X_1|^3)^2$ one has

$$\mathbb{P}(\tau_x>n)\le6\,\frac{x+|\mathbb{E}[x+S_{\tau_x}]|}{x+\sqrt n}=\frac{6\,\mathbb{E}|S_{\tau_x}|}{x+\sqrt n}.\tag{8}$$

Proof. Applying Lemma 25 in [4] to the stopping time $\tau_x$, we conclude that

$$\mathbb{P}(\tau_x>n)\le\frac{\mathbb{E}[x+S_n;\tau_x>n]}{\mathbb{E}[(x+S_n)^+]}$$

for every $x\ge0$. (As usual, $z^+$ denotes the positive part of $z$, i.e. $z^+=\max\{z,0\}$.) Applying the optional stopping theorem to the martingale $x+S_n$ with the stopping time $\tau_x\wedge n$, we infer that

$$x=\mathbb{E}[x+S_n;\tau_x>n]+\mathbb{E}[x+S_{\tau_x};\tau_x\le n]$$

and hence

$$\mathbb{E}[x+S_n;\tau_x>n]=x-\mathbb{E}[x+S_{\tau_x};\tau_x\le n]\le x+|\mathbb{E}[x+S_{\tau_x}]|.$$

Consequently,

$$\mathbb{P}(\tau_x>n)\le\frac{x+|\mathbb{E}[x+S_{\tau_x}]|}{\mathbb{E}[(x+S_n)^+]}.\tag{9}$$

Using the classical Berry-Esseen bound (1), we obtain

$$\mathbb{E}[(x+S_n)^+]=\int_0^\infty\mathbb{P}(x+S_n>y)\,dy=\int_{-x}^\infty\mathbb{P}(S_n>y)\,dy\ge x\,\mathbb{P}(S_n>0)+\int_0^{\sqrt n}\mathbb{P}(S_n>y)\,dy$$

$$=x\,\mathbb{P}(S_n>0)+\sqrt n\int_0^1\mathbb{P}(S_n>y\sqrt n)\,dy\ge x\left(\frac12-\frac{\gamma_0\,\mathbb{E}|X_1|^3}{\sqrt n}\right)+\sqrt n\int_0^1\Phi(-y)\,dy-\gamma_0\,\mathbb{E}|X_1|^3.$$

Recalling that $\gamma_0\le0.4785$ and noticing that $\int_0^1\Phi(-y)\,dy\ge0.31$, we conclude that if $n\ge8(\mathbb{E}|X_1|^3)^2$ then

$$\mathbb{E}[(x+S_n)^+]\ge\frac16(x+\sqrt n).$$

This completes the proof of the lemma. □

Lemma 4. There exists an absolute constant $C$ such that

$$\mathbb{E}|u+S_{\tau_u}|^2\le C\,\mathbb{E}|S_{\tau_u}|\,(\mathbb{E}|X_1|^3)^2,\quad u\ge0.\tag{10}$$

Furthermore,

$$\mathbb{E}|u+S_{\tau_u}|\le\frac{3\,\mathbb{E}|X_1|^3}{\mathbb{E}|X_1|^2},\quad u\ge0.\tag{11}$$

Proof. The second inequality is proved by Mogulskii, see Theorem 2 in [8]. To prove the first bound we define

$$\varphi(x):=\mathbb{E}\left[\sum_{k=0}^{\tau_0-1}\mathbb{1}\{S_k\in[x,x+1)\}\right]=\sum_{k=0}^\infty\mathbb{P}(S_k\in[x,x+1),\tau_0>k).$$

By (39) in [11],

$$\varphi(x)=H(x+1)-H(x),$$

where $H(x)$ denotes the renewal function corresponding to strict ascending ladder heights. By subadditivity of the renewal function, $H(x+1)-H(x)\le H(1)$ for all $x$. Therefore,

$$\sup_{x\in\mathbb{R}}\varphi(x)=\varphi(0)\le H(1).$$

In [5, Corollary 8] we have shown that

$$H(x)\le2\,\mathbb{E}|S_{\tau_0}|\,(x+c_2\,\mathbb{E}|X_1|^3),\quad x\ge0,$$

where $c_2$ is an absolute constant. Consequently,

$$\sup_{x\in\mathbb{R}}\varphi(x)\le2(1+c_2)\,\mathbb{E}|X_1|^3\,\mathbb{E}|S_{\tau_0}|.\tag{12}$$

Splitting the path of the walk into independent cycles by descending ladder epochs, we obtain

$$\sum_{k=0}^\infty\mathbb{P}(u+S_k\in[w,w+1),\tau_u>k)=\mathbb{E}\left[\sum_{k=0}^{\tau_u-1}\mathbb{1}\{u+S_k\in[w,w+1)\}\right]=\varphi(w-u)+\mathbb{E}\left[\sum_{j=1}^{\theta(u)-1}\varphi(w-u+\chi^-_1+\dots+\chi^-_j)\right],$$

where $\theta(u):=\inf\{j\ge1:\chi^-_1+\dots+\chi^-_j\ge u\}$ and $\chi^-_i$ is the $i$-th weak descending ladder height; thus the $\chi^-_i$ are independent copies of $|S_{\tau_0}|$.
Combining this representation with (12) and noting that, due to the Wald identity, $\mathbb{E}|S_{\tau_0}|\,\mathbb{E}\theta(u)=\mathbb{E}|S_{\tau_u}|$, we conclude that

$$\sum_{k=0}^\infty\mathbb{P}(u+S_k\in[w,w+1),\tau_u>k)\le\sup_{x\in\mathbb{R}}\varphi(x)\,\mathbb{E}\theta(u)\le c_3\,\mathbb{E}|X_1|^3\,\mathbb{E}|S_{\tau_u}|,\tag{13}$$

where $c_3$ is an absolute constant. By the total probability law,

$$\mathbb{E}|u+S_{\tau_u}|^2=\sum_{k=0}^\infty\int_0^\infty\mathbb{P}(u+S_k\in dw,\tau_u>k)\,\mathbb{E}[(X+w)^2;X\le-w]\le\sum_{w=0}^\infty\sum_{k=0}^\infty\mathbb{P}(u+S_k\in[w,w+1),\tau_u>k)\,\mathbb{E}[X^2;X\le-w].$$

Applying now (13) we conclude that

$$\mathbb{E}|u+S_{\tau_u}|^2\le c_3\,\mathbb{E}|X_1|^3\,\mathbb{E}|S_{\tau_u}|\sum_{w=0}^\infty\mathbb{E}[X^2;X\le-w]\le c_3\,\mathbb{E}|X_1|^3\,\mathbb{E}|S_{\tau_u}|\sum_{w=0}^\infty\mathbb{E}[X^2;|X|\ge w]\le c_3\,\mathbb{E}|X_1|^3\,\mathbb{E}|S_{\tau_u}|\,\mathbb{E}[X^2(|X|+1)]\le2c_3(\mathbb{E}|X_1|^3)^2\,\mathbb{E}|S_{\tau_u}|,$$

using that $\mathbb{E}|X|^3\ge1$ because of $\mathbb{E}[X^2]=1$. This completes the proof of the lemma. □

The next lemma provides an upper bound for conditional local probabilities and is the main difference to the approach used in [5]. Lemma 7 there gives a similar bound for the particular case $x=0$ and is based on a representation for the local probabilities which follows from the Wiener-Hopf factorisation. Since the factorisation is not directly applicable to a positive starting point $x$, we use a different, even simpler, approach based on time reversal.

Lemma 5. For all $n\ge32(\mathbb{E}|X_1|^3)^2+4$ one has

$$\mathbb{P}(x+S_n\in[y,y+1),\tau_x>n)\le288\gamma_1(1)\frac{\mathbb{E}|S_{\tau_x}|\left(y+4\,\mathbb{E}|X_1|^3\right)}{n(x+\sqrt n)}\le288\gamma_1(1)\frac{\left(x+3\,\mathbb{E}|X_1|^3\right)\left(y+4\,\mathbb{E}|X_1|^3\right)}{n(x+\sqrt n)}.$$

Proof. Consider a random walk $\{S'_n\}_{n\ge0}\stackrel{d}{=}\{-S_n\}_{n\ge0}$. Define $\tau'_y:=\inf\{n:y+S'_n\le0\}$ and let $m=\lfloor n/2\rfloor$. Due to Lemma 3, for all $n\ge16(\mathbb{E}|X_1|^3)^2+2$ and all $x,y\ge0$, we have the bounds

$$\mathbb{P}(\tau_x>m)\le\frac{6\,\mathbb{E}|S_{\tau_x}|}{x+\sqrt m}\le\frac{12\,\mathbb{E}|S_{\tau_x}|}{x+\sqrt n}\tag{14}$$

and

$$\mathbb{P}(\tau'_y>m)\le\frac{12\,\mathbb{E}|S'_{\tau'_y}|}{\sqrt n}.\tag{15}$$

Applying the Markov property at time $m$, we obtain

$$\mathbb{P}(y+S'_n\in[z,z+1),\tau'_y>n)\le\int_0^\infty\mathbb{P}(y+S'_m\in du,\tau'_y>m)\,\mathbb{P}(S'_{n-m}\in[z-u,z-u+1)).$$

Noting that $X'_1$ has zero mean and unit variance and $\mathbb{E}|X'_1|^3=\mathbb{E}|X_1|^3$, we infer that (6) can be applied to $\mathbb{P}(S'_{n-m}\in[z-u,z-u+1))$. As a result we have

$$\mathbb{P}(y+S'_n\in[z,z+1),\tau'_y>n)\le\mathbb{P}(\tau'_y>m)\frac{\gamma_1(1)}{\sqrt{2(n-m)}}\le\mathbb{P}(\tau'_y>m)\frac{\gamma_1(1)}{\sqrt n},$$

where the last inequality holds since $2(n-m)\ge n$. Applying now (15), one gets

$$\mathbb{P}(y+S'_n\in[z,z+1),\tau'_y>n)\le12\gamma_1(1)\frac{\mathbb{E}|S'_{\tau'_y}|}{n}.\tag{16}$$

Applying the same argument to the walk $\{S_n\}$ and using (14) instead of (15), we obtain

$$\mathbb{P}(x+S_n\in[z,z+1),\tau_x>n)\le12\gamma_1(1)\frac{\mathbb{E}|S_{\tau_x}|}{\sqrt n(x+\sqrt n)}.\tag{17}$$

Using once again the Markov property, we have

$$\mathbb{P}(x+S_n\in[y,y+1),\tau_x>n)=\int_0^\infty\mathbb{P}(x+S_m\in dz,\tau_x>m)\,\mathbb{P}(z+S_{n-m}\in[y,y+1),\tau_z>n-m).\tag{18}$$

Reversing time, one easily gets

$$\mathbb{P}(z+S_k\in[y,y+1),\tau_z>k)\le\mathbb{P}(y+1+S'_k\in[z,z+1),\tau'_{y+1}>k),\quad k\ge1.$$

Combining this bound with (16), we conclude that

$$\mathbb{P}(z+S_{n-m}\in[y,y+1),\tau_z>n-m)\le12\gamma_1(1)\frac{\mathbb{E}|S'_{\tau'_{y+1}}|}{n-m}\le24\gamma_1(1)\frac{\mathbb{E}|S'_{\tau'_{y+1}}|}{n}$$

for all $n\ge32(\mathbb{E}|X_1|^3)^2+4$. Substituting this into (18) and applying (14), we finally obtain

$$\mathbb{P}(x+S_n\in[y,y+1),\tau_x>n)\le24\gamma_1(1)\frac{\mathbb{E}|S'_{\tau'_{y+1}}|}{n}\,\mathbb{P}(\tau_x>m)\le288\gamma_1(1)\frac{\mathbb{E}|S_{\tau_x}|\,\mathbb{E}|S'_{\tau'_{y+1}}|}{n(x+\sqrt n)}.$$

Using (11) and recalling that $\sigma^2=1$, we get

$$\mathbb{E}|S_{\tau_x}|\,\mathbb{E}|S'_{\tau'_{y+1}}|\le\mathbb{E}|S_{\tau_x}|\left(y+1+3\,\mathbb{E}|X_1|^3\right)\le\left(x+3\,\mathbb{E}|X_1|^3\right)\left(y+4\,\mathbb{E}|X_1|^3\right).$$

This completes the proof of the lemma. □

Lemma 6. There exists an absolute constant $C_1$ such that, for all $k\ge32(\mathbb{E}|X_1|^3)^2+5$, one has the bounds

$$\mathbb{P}(\tau_x=k)\le C_1\frac{\mathbb{E}|S_{\tau_x}|}{k(x+\sqrt k)}(\mathbb{E}|X_1|^3)^2,$$

$$\mathbb{E}[|x+S_{\tau_x}|;\tau_x=k]\le C_1\frac{\mathbb{E}|S_{\tau_x}|}{k(x+\sqrt k)}(\mathbb{E}|X_1|^3)^2,$$

$$\mathbb{E}[\gamma_1(|x+S_{\tau_x}|);\tau_x=k]\le C_1\frac{\mathbb{E}|S_{\tau_x}|}{k(x+\sqrt k)}(\mathbb{E}|X_1|^3)^3$$

and

$$\mathbb{E}[|x+S_{\tau_x}|^2;\tau_x=k]\le C_1\frac{\mathbb{E}|S_{\tau_x}|}{\sqrt k(x+\sqrt k)}(\mathbb{E}|X_1|^3)^2.$$

Proof. Fix some $a,b\ge0$ and consider the expected value $\mathbb{E}[(a|x+S_{\tau_x}|+b);\tau_x=k]$. By the total probability law,

$$\mathbb{E}[(a|x+S_{\tau_x}|+b);\tau_x=k]\le\int_0^\infty\mathbb{P}(x+S_{k-1}\in dy,\tau_x>k-1)\,\mathbb{E}[(-aX+b);X\le-y]\le\sum_{j=0}^\infty\mathbb{P}(x+S_{k-1}\in[j,j+1),\tau_x>k-1)\,\mathbb{E}[(-aX+b);X\le-j].$$

Applying now Lemma 5 with $n=k-1$, we get

$$\mathbb{E}[(a|x+S_{\tau_x}|+b);\tau_x=k]\le288\gamma_1(1)\frac{\mathbb{E}|S_{\tau_x}|}{(k-1)(x+\sqrt{k-1})}\sum_{j=0}^\infty\left(j+4\,\mathbb{E}|X_1|^3\right)\mathbb{E}[(-aX+b);X\le-j]$$

$$\le815\gamma_1(1)\frac{\mathbb{E}|S_{\tau_x}|}{k(x+\sqrt k)}\,\mathbb{E}\left[(-aX+b)\sum_{j\in[0,-X]}\left(j+4\,\mathbb{E}|X_1|^3\right);X\le0\right]\le815\gamma_1(1)\frac{\mathbb{E}|S_{\tau_x}|}{k(x+\sqrt k)}\,\mathbb{E}\left[(-aX+b)(-X+1)\left(\frac{-X}{2}+4\,\mathbb{E}|X_1|^3\right);X\le0\right].$$

Taking here $a=0$ and $b=1$, we get

$$\mathbb{P}(\tau_x=k)\le815\gamma_1(1)\frac{\mathbb{E}|S_{\tau_x}|}{k(x+\sqrt k)}\,\mathbb{E}\left[(-X+1)\left(\frac{-X}{2}+4\,\mathbb{E}|X_1|^3\right);X\le0\right].$$

Using next the Jensen inequality and noting that $\sigma^2=1$ implies $\mathbb{E}|X_1|^3\ge1$, we conclude that

$$\mathbb{P}(\tau_x=k)\le C_1\frac{\mathbb{E}|S_{\tau_x}|}{k(x+\sqrt k)}(\mathbb{E}|X_1|^3)^2.$$

Choosing $a=1$ and $b=0$ and applying once again the Jensen inequality, we infer that the same inequality is valid for $\mathbb{E}[|x+S_{\tau_x}|;\tau_x=k]$. Further, taking $a=\pi^{-1/2}$ and $b=\sqrt2\,\mathbb{E}|X_1|^3$, we obtain the third claim. Finally,

$$\mathbb{E}[|x+S_{\tau_x}|^2;\tau_x=k]=\int_0^\infty\mathbb{P}(x+S_{k-1}\in dy,\tau_x>k-1)\,\mathbb{E}[(X+y)^2;X\le-y]\le\sum_{j=0}^\infty\mathbb{P}(x+S_{k-1}\in[j,j+1),\tau_x>k-1)\,\mathbb{E}[(X+j)^2;X\le-j].$$
Applying now (17), we get

$$\mathbb{E}[|x+S_{\tau_x}|^2;\tau_x=k]\le12\gamma_1(1)\frac{\mathbb{E}|S_{\tau_x}|}{\sqrt{k-1}(x+\sqrt{k-1})}\sum_{j=0}^\infty\mathbb{E}[(X+j)^2;X\le-j]\le24\gamma_1(1)\frac{\mathbb{E}|S_{\tau_x}|}{\sqrt k(x+\sqrt k)}\,\mathbb{E}|X_1|^3.$$

This finishes the proof of the lemma. □

Lemma 7. For all $n\ge8(\mathbb{E}|X_1|^3)^2$ one has

$$\sum_{k=1}^n k\,\mathbb{P}(\tau_x=k)\le\mathbb{E}[\tau_x\wedge n]\le C\,\mathbb{E}|X_1|^3\frac{n\,\mathbb{E}|S_{\tau_x}|}{x+\sqrt n}.\tag{19}$$

Proof. Let us first show that the desired inequality holds in the case $x\ge\sqrt n$. Indeed, in this case, noting that $\mathbb{E}|S_{\tau_x}|\ge x$, we have

$$\mathbb{E}[\tau_x\wedge n]\le n\le\frac{n\,\mathbb{E}|S_{\tau_x}|}{x}\le\frac{2n\,\mathbb{E}|S_{\tau_x}|}{x+\sqrt n}.$$

Noting that $\sigma^2=1$ implies $\mathbb{E}|X_1|^3\ge1$, we conclude that if $x\ge\sqrt n$ then (19) holds with $C=2$.

We now assume that $x\le\sqrt n$. Applying the optional stopping theorem to the martingale $S_n^2-n$, we get

$$\mathbb{E}[\tau_x\wedge n]=\mathbb{E}[S^2_{\tau_x\wedge n}]=\mathbb{E}[S^2_{\tau_x};\tau_x\le n]+\mathbb{E}[S^2_n;\tau_x>n]\le\mathbb{E}[S^2_{\tau_x}]+\mathbb{E}[S^2_n;\tau_x>n]\le2x^2+2\,\mathbb{E}[|x+S_{\tau_x}|^2]+\mathbb{E}[S^2_n;\tau_x>n].$$

By the assumptions $x\le\sqrt n$, $n\ge8(\mathbb{E}|X_1|^3)^2$ and by Lemma 4,

$$2x^2+2\,\mathbb{E}[|x+S_{\tau_x}|^2]\le\frac{4n\,\mathbb{E}|S_{\tau_x}|}{x+\sqrt n}+C_0(\mathbb{E}|X_1|^3)^2\,\mathbb{E}|S_{\tau_x}|\le C_1\,\mathbb{E}|X_1|^3\frac{n\,\mathbb{E}|S_{\tau_x}|}{x+\sqrt n}.$$

Thus, it remains to show that

$$\mathbb{E}[S^2_n;\tau_x>n]\le C\,\mathbb{E}|X_1|^3\frac{n\,\mathbb{E}|S_{\tau_x}|}{x+\sqrt n}+\frac12\,\mathbb{E}[\tau_x\wedge n].\tag{20}$$

To this end we first estimate the probability $\mathbb{P}(S_n\ge y,\tau_x>n)$. We shall follow the strategy of the proof of Lemma 1.2 in Doney and Jones [6] and define two auxiliary stopping times

$$T_y:=\inf\{k\ge1:S_k\ge y/2\}\quad\text{and}\quad\eta_y:=\inf\{k\ge1:X_k\ge y/4\}.$$

Then we have

$$\mathbb{P}(S_n\ge y,\tau_x>n)=\mathbb{P}(S_n\ge y,\eta_y>n,\tau_x>n)+\mathbb{P}(S_n\ge y,\eta_y\le n,\tau_x>n)\le\mathbb{P}(S_n\ge y,\eta_y>n,\tau_x>n)+\mathbb{P}(\eta_y\le n,\tau_x>n).\tag{21}$$

We also have

$$\mathbb{P}(\eta_y\le n,\tau_x>n)\le\sum_{k=1}^n\mathbb{P}(\tau_x>k-1)\,\mathbb{P}(X>y/4)=\mathbb{E}[\tau_x\wedge n]\,\mathbb{P}(X>y/4).\tag{22}$$

Noting that $S_{T_y}<3y/4$ on the event $\{\eta_y>n\}$, we obtain

$$\mathbb{P}(S_n\ge y,\eta_y>n,\tau_x>n)\le\sum_{k=0}^{n-1}\mathbb{P}(\tau_x>k,T_y=k)\,\mathbb{P}(S_{n-k}>y/4)\le\max_{k\le n}\mathbb{P}(S_{n-k}>y/4)\sum_{k=0}^{n-1}\mathbb{P}(\tau_x>k,T_y=k).$$

If $y\ge\sqrt{8n}$ then, due to the Doob inequality,

$$\mathbb{P}(\tau_x>n)\ge\sum_{k=0}^{n-1}\mathbb{P}(\tau_x>k,T_y=k)\,\mathbb{P}\left(\min_{j\le n-k}S_j>-y/2\right)\ge\frac12\sum_{k=0}^{n-1}\mathbb{P}(\tau_x>k,T_y=k).$$

Consequently,

$$\mathbb{P}(S_n\ge y,\eta_y>n,\tau_x>n)\le2\,\mathbb{P}(\tau_x>n)\,\mathbb{P}\left(\max_{k\le n}S_k\ge y/4\right).$$

Using once again the Doob inequality, we conclude that

$$\mathbb{P}(S_n\ge y,\eta_y>n,\tau_x>n)\le2^7\,\mathbb{P}(\tau_x>n)\frac{\mathbb{E}|S_n|^3}{y^3}.\tag{23}$$

Plugging (22) and (23) into (21), we conclude that

$$\mathbb{P}(S_n\ge y,\tau_x>n)\le2^7\,\mathbb{P}(\tau_x>n)\frac{\mathbb{E}|S_n|^3}{y^3}+\mathbb{E}[\tau_x\wedge n]\,\mathbb{P}(X>y/4)$$

for all $y\ge\sqrt{8n}$. This implies that

$$\mathbb{E}[S^2_n;\tau_x>n]\le8n\,\mathbb{P}(\tau_x>n)+\mathbb{E}[S^2_n;S_n>\sqrt{8n},\tau_x>n]\le C\,\mathbb{P}(\tau_x>n)\left(n+\frac{\mathbb{E}|S_n|^3}{\sqrt n}\right)+\mathbb{E}[\tau_x\wedge n]\,\mathbb{E}[X^2;X>\sqrt{n/2}].$$

According to Theorem 2 in [9],

$$\mathbb{E}|S_n|^3\le C(\mathbb{E}|X_1|^3\,n+n^{3/2}).\tag{24}$$

Combining this inequality with (8) and noting that, by the Markov inequality,

$$\mathbb{E}\left[X^2;X>\sqrt{n/2}\right]\le\sqrt{\frac2n}\,\mathbb{E}|X_1|^3\le\frac12$$

for $n\ge8(\mathbb{E}|X_1|^3)^2$, we conclude that (20) holds. Thus, the proof is complete. □

Besides the bound for the truncated expectation of $\tau_x$, we shall need estimates for some truncated moments of $x+S_{\tau_x}$, which will be proved in the subsequent lemmata.

Lemma 8. For all $n$ and $x$ we have

$$\mathbb{E}[|x+S_{\tau_x}|;\tau_x\le n]\le\frac{8\sqrt n}{x+\sqrt n}\,\mathbb{E}[|S_{\tau_x}|].$$

Proof. If $x\le\sqrt n$ then the inequality is immediate from

$$\mathbb{E}[|x+S_{\tau_x}|;\tau_x\le n]\le\mathbb{E}[|x+S_{\tau_x}|]\le\mathbb{E}[|S_{\tau_x}|].$$

In the case $x\ge\sqrt n$, using the Markov and Doob inequalities, we have

$$\mathbb{E}[|x+S_{\tau_x}|;\tau_x\le n]\le\mathbb{E}[|S_{\tau_x}|;\tau_x\le n]\le\mathbb{E}\left[\max_{k\le n}|S_k|;\max_{k\le n}|S_k|\ge x\right]\le\frac1x\,\mathbb{E}\left[\max_{k\le n}|S_k|^2\right]\le\frac4x\,\mathbb{E}[S^2_n]=\frac{4n}{x}\le\frac{8x\sqrt n}{x+\sqrt n}.$$
Combining this with the observation $|S_{\tau_x}|\ge x$, we complete the proof. □

Lemma 9. There exists an absolute constant $C$ such that

$$\sum_{k=1}^n k\,\mathbb{E}[|x+S_{\tau_x}|;\tau_x=k]\le C(\mathbb{E}|X_1|^3)^2\frac{n}{x+\sqrt n}\,\mathbb{E}[|S_{\tau_x}|]$$

for all $n\ge32(\mathbb{E}|X_1|^3)^2+5$.

Proof. We start by noting that

$$\sum_{k=1}^{\lfloor32(\mathbb{E}|X_1|^3)^2+5\rfloor}k\,\mathbb{E}[|x+S_{\tau_x}|;\tau_x=k]\le\left(32(\mathbb{E}|X_1|^3)^2+5\right)\mathbb{E}[|x+S_{\tau_x}|;\tau_x\le32(\mathbb{E}|X_1|^3)^2+5]\le37(\mathbb{E}|X_1|^3)^2\,\mathbb{E}[|x+S_{\tau_x}|;\tau_x\le n].$$

Taking into account Lemma 8, we conclude that

$$\sum_{k=1}^{\lfloor32(\mathbb{E}|X_1|^3)^2+5\rfloor}k\,\mathbb{E}[|x+S_{\tau_x}|;\tau_x=k]\le C_1(\mathbb{E}|X_1|^3)^2\frac{\sqrt n}{x+\sqrt n}\,\mathbb{E}[|S_{\tau_x}|]\le C_1\,\mathbb{E}|X_1|^3\frac{n}{x+\sqrt n}\,\mathbb{E}[|S_{\tau_x}|]\tag{25}$$

for all $n\ge32(\mathbb{E}|X_1|^3)^2+5$. Furthermore, the second bound in Lemma 6 leads to

$$\sum_{k=\lfloor32(\mathbb{E}|X_1|^3)^2+5\rfloor+1}^n k\,\mathbb{E}[|x+S_{\tau_x}|;\tau_x=k]\le C_2(\mathbb{E}|X_1|^3)^2\,\mathbb{E}[|S_{\tau_x}|]\sum_{k=1}^n\frac{1}{x+\sqrt k}\le C_3(\mathbb{E}|X_1|^3)^2\,\mathbb{E}[|S_{\tau_x}|]\frac{n}{x+\sqrt n}.$$

Combining this with (25), we get the desired estimate. □

Lemma 10. There exists an absolute constant $C$ such that

$$\mathbb{E}[|x+S_{\tau_x}|^2;\tau_x\le n]\le C(\mathbb{E}|X_1|^3)^2\frac{\sqrt n}{x+\sqrt n}\,\mathbb{E}[|S_{\tau_x}|]$$

for all $n\ge32(\mathbb{E}|X_1|^3)^2+5$.

Proof. In the case $x\le\sqrt n$ we apply (10) to get

$$\mathbb{E}[|x+S_{\tau_x}|^2;\tau_x\le n]\le\mathbb{E}|x+S_{\tau_x}|^2\le C(\mathbb{E}|X_1|^3)^2\,\mathbb{E}|S_{\tau_x}|\le C_1(\mathbb{E}|X_1|^3)^2\frac{\sqrt n}{x+\sqrt n}\,\mathbb{E}|S_{\tau_x}|.$$

Assume now that $x>\sqrt n$. Similarly to the proof of Lemma 8,

$$\mathbb{E}[|x+S_{\tau_x}|^2;\tau_x\le32(\mathbb{E}|X_1|^3)^2+5]\le\mathbb{E}\left[\max_{k\le32(\mathbb{E}|X_1|^3)^2+5}|S_k|^2\right]\le148(\mathbb{E}|X_1|^3)^2\le296(\mathbb{E}|X_1|^3)^2\frac{1}{x+\sqrt n}\,\mathbb{E}|S_{\tau_x}|\le296\,\mathbb{E}|X_1|^3\frac{\sqrt n}{x+\sqrt n}\,\mathbb{E}|S_{\tau_x}|.$$
Furthermore, using the last bound in Lemma 6, we get

$$\mathbb{E}[|x+S_{\tau_x}|^2;\tau_x\in(32(\mathbb{E}|X_1|^3)^2+5,n]]\le C_1(\mathbb{E}|X_1|^3)^2\,\mathbb{E}|S_{\tau_x}|\sum_{k=1}^n\frac{1}{\sqrt k(x+\sqrt k)}\le C_1(\mathbb{E}|X_1|^3)^2\,\mathbb{E}|S_{\tau_x}|\,\frac1x\sum_{k=1}^n\frac{1}{\sqrt k}\le C_1(\mathbb{E}|X_1|^3)^2\,\mathbb{E}|S_{\tau_x}|\,\frac{\sqrt n}{x+\sqrt n}.$$

This completes the proof. □

3. Proof of Theorem 1

As in the previous section, we shall always assume that $\sigma^2=1$. The strategy of the proof of Theorem 1 is the same as in the proof of the main result in [5]. As in that paper, we shall use a smoothing with a random variable $U$ which has the density

$$g_A(x)=\frac{3}{\pi A}\left(\frac{1-\cos(Ax)}{Ax^2}\right)^2,\quad x\in\mathbb{R},$$

where $A=(8\,\mathbb{E}|X_1|^3)^{-1}$. The first step in our proof is a comparison of the probabilities $\mathbb{P}(x+S_n\ge y,\tau_x>n)$ and $\mathbb{P}(x+S_n+U\ge y,\tau_x>n)$.

Lemma 11. For all $n\ge1$ and all $x,y\ge0$ we have

$$|\mathbb{P}(x+S_n+U\ge y,\tau_x>n)-\mathbb{P}(x+S_n\ge y,\tau_x>n)|\le\frac{\gamma_1(\mathbb{E}|U|)}{\sqrt n}\,\mathbb{P}(\tau_x>n/2)\tag{26}$$

and

$$\mathbb{P}(x+S_n+U\le-y,\tau_x>n)\le\frac{\gamma_1(\mathbb{E}|U|)}{\sqrt n}\,\mathbb{P}(\tau_x>n/2).\tag{27}$$

The proof of this lemma is almost a verbatim repetition of Lemma 4 in [5], and we give it just to be self-contained.

Proof. Using (7), we obtain

$$\int_0^\infty\mathbb{P}(U\in-dz)\,|\mathbb{P}(x+S_n-z\ge y,\tau_x>n)-\mathbb{P}(x+S_n\ge y,\tau_x>n)|=\int_0^\infty\mathbb{P}(U\in-dz)\,\mathbb{P}(x+S_n\in[y,y+z),\tau_x>n)$$

$$\le\int_0^\infty\mathbb{P}(U\in-dz)\,\frac{\sqrt2\,\mathbb{E}|X_1|^3+\pi^{-1/2}z}{\sqrt n}\,\mathbb{P}(\tau_x>n/2)=\frac{\sqrt2\,\mathbb{E}|X_1|^3\,\mathbb{P}(U<0)+\pi^{-1/2}\,\mathbb{E}U^-}{\sqrt n}\,\mathbb{P}(\tau_x>n/2)$$

and

$$\int_0^\infty\mathbb{P}(U\in dz)\,|\mathbb{P}(x+S_n+z\ge y,\tau_x>n)-\mathbb{P}(x+S_n\ge y,\tau_x>n)|$$

$$=\int_0^y\mathbb{P}(U\in dz)\,\mathbb{P}(x+S_n\in[y-z,y),\tau_x>n)+\mathbb{P}(U>y)\,\mathbb{P}(x+S_n\in(0,y),\tau_x>n)$$

$$\le\int_0^y\mathbb{P}(U\in dz)\,\frac{\sqrt2\,\mathbb{E}|X_1|^3+\pi^{-1/2}z}{\sqrt n}\,\mathbb{P}(\tau_x>n/2)+\mathbb{P}(U>y)\,\frac{\sqrt2\,\mathbb{E}|X_1|^3+\pi^{-1/2}y}{\sqrt n}\,\mathbb{P}(\tau_x>n/2)$$

$$\le\frac{\sqrt2\,\mathbb{E}|X_1|^3\,\mathbb{P}(U>0)+\pi^{-1/2}\,\mathbb{E}U^+}{\sqrt n}\,\mathbb{P}(\tau_x>n/2).$$
Combining these two inequalities we obtain (26). The second claim follows again from (7):

$$\mathbb{P}(x+S_n+U\le-y,\tau_x>n)=\int_y^\infty\mathbb{P}(U\in-dz)\,\mathbb{P}(x+S_n\in(0,z-y],\tau_x>n)\le\int_y^\infty\mathbb{P}(U\in-dz)\,\frac{\sqrt2\,\mathbb{E}|X_1|^3+\pi^{-1/2}(z-y)}{\sqrt n}\,\mathbb{P}(\tau_x>n/2)$$

$$\le\frac{\sqrt2\,\mathbb{E}|X_1|^3\,\mathbb{P}(U<0)+\pi^{-1/2}\,\mathbb{E}U^-}{\sqrt n}\,\mathbb{P}(\tau_x>n/2).$$

Thus, the proof of the lemma is complete. □

By the total probability law,

$$\mathbb{P}(x+S_n+U\ge y,\tau_x>n)=\mathbb{P}(x+S_n+U\ge y)-\mathbb{P}(x+S_n+U\ge y,\tau_x\le n)$$

$$=\mathbb{P}(S_n+U\ge y-x)-\sum_{k=1}^n\int_0^\infty\mathbb{P}(\tau_x=k,x+S_k\in-dz)\,\mathbb{P}(S_{n-k}+U\ge y+z)$$

$$=\mathbb{P}(S_n+U\ge y-x)-\sum_{k=1}^n\mathbb{P}(\tau_x=k)\,\mathbb{P}(S_{n-k}+U\ge y)+\sum_{k=1}^n\int_0^\infty\mathbb{P}(\tau_x=k,x+S_k\in-dz)\,\mathbb{P}(S_{n-k}+U\in[y,y+z))$$

and

$$\mathbb{P}(x+S_n+U\le-y,\tau_x>n)=\mathbb{P}(x+S_n+U\le-y)-\mathbb{P}(x+S_n+U\le-y,\tau_x\le n)$$

$$=\mathbb{P}(S_n+U\le-y-x)-\sum_{k=1}^n\int_0^\infty\mathbb{P}(\tau_x=k,x+S_k\in-dz)\,\mathbb{P}(S_{n-k}+U\le z-y)$$

$$=\mathbb{P}(S_n+U\le-y-x)-\sum_{k=1}^n\mathbb{P}(\tau_x=k)\,\mathbb{P}(S_{n-k}+U\le-y)-\sum_{k=1}^n\int_0^\infty\mathbb{P}(\tau_x=k,x+S_k\in-dz)\,\mathbb{P}(S_{n-k}+U\in(-y,-y+z]).$$

Set

$$P_n(x,y):=\mathbb{P}(x+S_n+U\ge y,\tau_x>n)-\mathbb{P}(x+S_n+U\le-y,\tau_x>n)$$

$$=\mathbb{P}(S_n+U\ge y-x)-\mathbb{P}(S_n+U\le-y-x)-\sum_{k=1}^n\mathbb{P}(\tau_x=k)\left(\mathbb{P}(S_{n-k}+U\ge y)-\mathbb{P}(S_{n-k}+U\le-y)\right)\tag{28}$$

$$+\sum_{k=1}^n\int_0^\infty\mathbb{P}(\tau_x=k,x+S_k\in-dz)\,\mathbb{P}(S_{n-k}+U\in(-y,-y+z]\cup[y,y+z)).$$

It is immediate from Lemma 11 that

$$\sup_{y\ge0}\left|\mathbb{P}(x+S_n\ge y,\tau_x>n)-P_n(x,y)\right|\le\frac{2\gamma_1(\mathbb{E}|U|)}{\sqrt n}\,\mathbb{P}(\tau_x>n/2).\tag{29}$$

We now estimate the second half of the last sum in (28). Writing the convolution with $U$ as an integral, we have

$$\mathbb{P}(S_{n-k}+U\in(-y,-y+z]\cup[y,y+z))=\int_{-\infty}^\infty\mathbb{P}(U\in du)\,\mathbb{P}(S_{n-k}+u\in(-y,-y+z]\cup[y,y+z))=\int_{-\infty}^\infty\mathbb{P}(U\in du)\,\mathbb{P}(S_{n-k}\in(-y-u,-y-u+z]\cup[y-u,y-u+z)).$$
Applying now (6), we obtain

$$\mathbb{P}(S_{n-k}+U\in(-y,-y+z]\cup[y,y+z))\le\frac{\sqrt2\,\gamma_1(z)}{\sqrt{n-k}}.$$

This implies that

$$\sum_{k=\lfloor n/2\rfloor+1}^{n-1}\int_0^\infty\mathbb{P}(\tau_x=k,x+S_k\in-dz)\,\mathbb{P}(S_{n-k}+U\in(-y,-y+z]\cup[y,y+z))\le\sqrt2\sum_{k=\lfloor n/2\rfloor+1}^{n-1}\frac{1}{\sqrt{n-k}}\,\mathbb{E}[\gamma_1(|x+S_{\tau_x}|);\tau_x=k].$$

Applying now the third claim in Lemma 6, we obtain

$$\sum_{k=\lfloor n/2\rfloor+1}^{n-1}\int_0^\infty\mathbb{P}(\tau_x=k,x+S_k\in-dz)\,\mathbb{P}(S_{n-k}+U\in(-y,-y+z]\cup[y,y+z))$$

$$\le\sqrt2\,C_1\,\mathbb{E}|S_{\tau_x}|\,(\mathbb{E}|X_1|^3)^3\sum_{k=\lfloor n/2\rfloor+1}^{n-1}\frac{1}{k(x+\sqrt k)(n-k)^{1/2}}\le\frac{4C_1\,\mathbb{E}|S_{\tau_x}|}{n(x+\sqrt n)}(\mathbb{E}|X_1|^3)^3\sum_{k=\lfloor n/2\rfloor+1}^{n-1}\frac{1}{(n-k)^{1/2}}\le\frac{8C_1\,\mathbb{E}|S_{\tau_x}|}{\sqrt n(x+\sqrt n)}(\mathbb{E}|X_1|^3)^3.\tag{30}$$

To obtain an appropriate estimate for the sum over $k\le\lfloor n/2\rfloor$ we shall use the following estimate for the density $u\mapsto f_{S_n+U}(u)$ of the random variable $U+S_n$. Due to Lemma 9 in [5], uniformly in $u\in\mathbb{R}$,

$$\left|f_{S_n+U}(u)-\frac{1}{\sqrt{2\pi n}}e^{-\frac{u^2}{2n}}\right|\le\left(\frac{72\,\mathbb{E}|X_1|^3}{\pi}+\frac{\mathbb{E}|U|}{\sqrt{2\pi e}}\right)\frac1n=:C_2\frac{\mathbb{E}|X_1|^3}{n}.\tag{31}$$

Hence, for every $k\le n/2$,

$$\left|\mathbb{P}(S_{n-k}+U\in(-y,-y+z]\cup[y,y+z))-\frac{1}{\sqrt{2\pi(n-k)}}\int_{(-y,-y+z]\cup[y,y+z)}e^{-\frac{u^2}{2(n-k)}}\,du\right|\le4C_2\frac{\mathbb{E}|X_1|^3}{n}z.$$

Furthermore, by (32) in [5] we have

$$\left|\frac{1}{\sqrt{n-k}}e^{-\frac{u^2}{2(n-k)}}-\frac{1}{\sqrt n}e^{-\frac{u^2}{2n}}\right|\le\frac{2^{3/2}}{e}\frac{k}{n^{3/2}}\tag{32}$$

uniformly in $u\in\mathbb{R}$ and in $k\le n/2$. Applying this bound, we get

$$\left|\mathbb{P}(S_{n-k}+U\in(-y,-y+z]\cup[y,y+z))-\frac{1}{\sqrt{2\pi n}}\int_{(-y,-y+z]\cup[y,y+z)}e^{-\frac{u^2}{2n}}\,du\right|\le4C_2\frac{\mathbb{E}|X_1|^3}{n}z+\frac{2^{3/2}}{e}\frac{kz}{n^{3/2}}.$$
Upper bound (31) in [5] implies that

$$\left|\int_{(-y,-y+z]\cup[y,y+z)}e^{-\frac{u^2}{2n}}\,du-2ze^{-\frac{y^2}{2n}}\right|\le\frac{2z^2}{e^{1/2}\sqrt n}$$

and, consequently,

$$\left|\sum_{k=1}^{\lfloor n/2\rfloor}\int_0^\infty\mathbb{P}(x+S_k\in-dz,\tau_x=k)\,\mathbb{P}(S_{n-k}+U\in(-y,-y+z]\cup[y,y+z))-\frac{2}{\sqrt{2\pi n}}e^{-\frac{y^2}{2n}}\sum_{k=1}^{\lfloor n/2\rfloor}\int_0^\infty z\,\mathbb{P}(x+S_k\in-dz,\tau_x=k)\right|$$

$$\le\frac{\sqrt2}{\sqrt{e\pi}\,n}\,\mathbb{E}[|x+S_{\tau_x}|^2;\tau_x\le n/2]+4C_2\frac{\mathbb{E}|X_1|^3}{n}\,\mathbb{E}[|x+S_{\tau_x}|;\tau_x\le n/2]+\frac{2^{3/2}}{e\,n^{3/2}}\sum_{k=1}^{\lfloor n/2\rfloor}k\,\mathbb{E}[|x+S_{\tau_x}|;\tau_x=k].\tag{33}$$

Applying Lemmata 8, 9 and 10 to the corresponding terms on the right-hand side of (33), we obtain

$$\left|\sum_{k=1}^{\lfloor n/2\rfloor}\int_0^\infty\mathbb{P}(x+S_k\in-dz,\tau_x=k)\,\mathbb{P}(S_{n-k}+U\in(-y,-y+z]\cup[y,y+z))-\frac{2}{\sqrt{2\pi n}}e^{-\frac{y^2}{2n}}\sum_{k=1}^{\lfloor n/2\rfloor}\int_0^\infty z\,\mathbb{P}(x+S_k\in-dz,\tau_x=k)\right|\le C_3\frac{\mathbb{E}|S_{\tau_x}|\,(\mathbb{E}|X_1|^3)^2}{\sqrt n(x+\sqrt n)}.\tag{34}$$

Combining (34) and (30), and applying the second inequality from Lemma 6 for all $k$ between $\lfloor n/2\rfloor$ and $n$, we conclude that

$$\left|\sum_{k=1}^n\int_0^\infty\mathbb{P}(\tau_x=k,x+S_k\in-dz)\,\mathbb{P}(S_{n-k}+U\in(-y,-y+z]\cup[y,y+z))-\frac{2}{\sqrt{2\pi n}}e^{-\frac{y^2}{2n}}\,\mathbb{E}|x+S_{\tau_x}|\right|\le C_4\frac{\mathbb{E}|S_{\tau_x}|\,(\mathbb{E}|X_1|^3)^3}{\sqrt n(x+\sqrt n)},\tag{35}$$

with some absolute constant $C_4$.

To estimate the other terms in (28) we first notice that, due to the Berry-Esseen inequality (1), we have, uniformly in $y\in\mathbb{R}$,

$$\left|\mathbb{P}(S_{n-k}\ge y)-\mathbb{P}(S_{n-k}\le-y)\right|\le\frac{\gamma_0\,\mathbb{E}|X_1|^3}{\sqrt{n-k}}.$$

Then, convolving with $U$ and taking into account the symmetry of $U$, we conclude that

$$\sup_{y\in\mathbb{R}}\left|\mathbb{P}(S_{n-k}+U\ge y)-\mathbb{P}(S_{n-k}+U\le-y)\right|\le\frac{\gamma_0\,\mathbb{E}|X_1|^3}{\sqrt{n-k}}.\tag{36}$$

Combining this with the estimate

$$\mathbb{P}(\tau_x=k)\le C_1\frac{\mathbb{E}|S_{\tau_x}|}{k(x+\sqrt k)}(\mathbb{E}|X_1|^3)^2$$

from Lemma 6, we get

$$\left|\sum_{k=\lfloor n/2\rfloor}^{n-1}\mathbb{P}(\tau_x=k)\left(\mathbb{P}(S_{n-k}+U\ge y)-\mathbb{P}(S_{n-k}+U\le-y)\right)\right|\le\frac{2^{3/2}C_1\,\mathbb{E}|S_{\tau_x}|}{n(x+\sqrt n)}(\mathbb{E}|X_1|^3)^3\sum_{k=\lfloor n/2\rfloor}^{n-1}\frac{1}{\sqrt{n-k}}\le\frac{2^{5/2}C_1\,\mathbb{E}|S_{\tau_x}|}{\sqrt n(x+\sqrt n)}(\mathbb{E}|X_1|^3)^3.\tag{37}$$

Following [5] we introduce

$$Q_n(x):=\mathbb{P}(S_n+U\ge x)-\mathbb{P}(S_n+U\le-x).$$

Then we have

$$\sum_{k=1}^{\lfloor n/2\rfloor}\mathbb{P}(\tau_x=k)\left(\mathbb{P}(S_{n-k}+U\ge y)-\mathbb{P}(S_{n-k}+U\le-y)\right)=\sum_{k=1}^{\lfloor n/2\rfloor}\mathbb{P}(\tau_x=k)\,Q_{n-k}(y)=Q_n(y)\,\mathbb{P}(\tau_x\le\lfloor n/2\rfloor)+\sum_{k=1}^{\lfloor n/2\rfloor}\mathbb{P}(\tau_x=k)\left[Q_{n-k}(y)-Q_n(y)\right].\tag{38}$$

According to Lemma 11 in [5],

$$\sup_y\left|Q_{n-k}(y)-Q_n(y)\right|\le109\sqrt{\frac3\pi}\,2^{3/2}\,\mathbb{E}|X_1|^3\,\frac{k}{n^{3/2}}$$

uniformly in $k\le n/2$. Combining this with Lemma 7, we infer that

$$\left|\sum_{k=1}^{\lfloor n/2\rfloor}\mathbb{P}(\tau_x=k)\left[Q_{n-k}(y)-Q_n(y)\right]\right|\le C\,\mathbb{E}|X_1|^3\,\frac{1}{n^{3/2}}\sum_{k=1}^n k\,\mathbb{P}(\tau_x=k)\le C(\mathbb{E}|X_1|^3)^2\frac{\mathbb{E}|S_{\tau_x}|}{\sqrt n(x+\sqrt n)}.\tag{39}$$

Furthermore, we know from (36) that $|Q_n(y)|\le\frac{\gamma_0\,\mathbb{E}|X_1|^3}{\sqrt n}$. Combining this with (14), we have

$$|Q_n(y)\,\mathbb{P}(\tau_x\le\lfloor n/2\rfloor)-Q_n(y)|\le\frac{6\,\mathbb{E}|X_1|^3\,\mathbb{E}|S_{\tau_x}|}{\sqrt n(x+\sqrt n)}.\tag{40}$$

Applying (39) and (40) to the corresponding terms in (38), we conclude that

$$\left|\sum_{k=1}^{\lfloor n/2\rfloor}\mathbb{P}(\tau_x=k)\left(\mathbb{P}(S_{n-k}+U\ge y)-\mathbb{P}(S_{n-k}+U\le-y)\right)-Q_n(y)\right|\le C(\mathbb{E}|X_1|^3)^2\frac{\mathbb{E}|S_{\tau_x}|}{\sqrt n(x+\sqrt n)}.\tag{41}$$

Plugging (35), (37) and (41) into (28), we obtain

$$\left|P_n(x,y)-\mathbb{P}(S_n+U\ge y-x)+\mathbb{P}(S_n+U\le-y-x)+Q_n(y)-\frac{2}{\sqrt{2\pi n}}e^{-\frac{y^2}{2n}}\,\mathbb{E}|x+S_{\tau_x}|\right|\le C\frac{(\mathbb{E}|X_1|^3)^3\,\mathbb{E}|S_{\tau_x}|}{\sqrt n(x+\sqrt n)}.\tag{42}$$

We next notice that

$$\mathbb{P}(S_n+U\ge y-x)-\mathbb{P}(S_n+U\le-y-x)-Q_n(y)=\mathbb{P}(S_n+U\ge y-x)-\mathbb{P}(S_n+U\ge y+x)+Q_n(x+y)-Q_n(y)=\mathbb{P}(S_n+U\in[y-x,y+x))+Q_n(x+y)-Q_n(y).\tag{43}$$

Using (31) for $x\le\sqrt n$ and (36) for $x>\sqrt n$, we conclude that

$$|Q_n(y+x)-Q_n(y)|=|\mathbb{P}(S_n+U\in(-x-y,-y])-\mathbb{P}(S_n+U\in[y,y+x))|\le2C_2\frac{\mathbb{E}|X_1|^3\,x}{\sqrt n(x+\sqrt n)}\tag{44}$$

and, by similar arguments,

$$\left|\mathbb{P}(S_n+U\in[y-x,y+x))-\frac{1}{\sqrt{2\pi n}}\int_{y-x}^{y+x}e^{-u^2/2n}\,du\right|\le2C_2\frac{\mathbb{E}|X_1|^3\,x}{\sqrt n(x+\sqrt n)}.\tag{45}$$

Applying (44) and (45) to the corresponding summands in (43), we conclude that

$$\left|\mathbb{P}(S_n+U\ge y-x)-\mathbb{P}(S_n+U\le-y-x)-Q_n(y)-\frac{1}{\sqrt{2\pi n}}\int_{y-x}^{y+x}e^{-u^2/2n}\,du\right|\le4C_2\frac{\mathbb{E}|X_1|^3\,x}{\sqrt n(x+\sqrt n)}.$$

Combining this with (42), we obtain

$$\left|P_n(x,y)-\frac{1}{\sqrt{2\pi n}}\int_{y-x}^{y+x}e^{-u^2/2n}\,du-\frac{2}{\sqrt{2\pi n}}e^{-\frac{y^2}{2n}}\,\mathbb{E}|x+S_{\tau_x}|\right|\le C\frac{(\mathbb{E}|X_1|^3)^3\,\mathbb{E}|S_{\tau_x}|}{\sqrt n(x+\sqrt n)}.$$

This bound, in combination with (29) and (14), implies that

$$\left|\mathbb{P}(x+S_n\ge y,\tau_x>n)-\frac{1}{\sqrt{2\pi n}}\int_{y-x}^{y+x}e^{-u^2/2n}\,du-\frac{2}{\sqrt{2\pi n}}e^{-\frac{y^2}{2n}}\,\mathbb{E}|x+S_{\tau_x}|\right|\le C\frac{(\mathbb{E}|X_1|^3)^3\,\mathbb{E}|S_{\tau_x}|}{\sqrt n(x+\sqrt n)}.\tag{46}$$

Thus, Theorem 1 is proved.

In order to prove Corollary 2 we consider the function $\varphi(t)=e^{-t^2/2}$ and use a Taylor expansion. Then we have, for some $\theta=\theta(u)\in[0,1]$,

$$\varphi((y+u)/\sqrt n)-\varphi(y/\sqrt n)=\frac{u}{\sqrt n}\varphi'(y/\sqrt n)+\frac{u^2}{2n}\varphi''((y+\theta u)/\sqrt n).$$

Integrating this from $-x$ to $x$ and multiplying by $\frac{1}{\sqrt{2\pi n}}$, we get

$$\left|\frac{1}{\sqrt{2\pi n}}\int_{y-x}^{y+x}e^{-u^2/2n}\,du-\frac{2x}{\sqrt{2\pi n}}e^{-y^2/2n}\right|\le\sup_{t\in\mathbb{R}}|\varphi''(t)|\,\frac{1}{\sqrt{2\pi}\,n^{3/2}}\int_{-x}^x\frac{u^2}{2}\,du\le\frac{x^3}{3\sqrt{2\pi}\,n^{3/2}}.$$
Combining this with (46) and noting that $x \le x + \mathbb{E}|x+S_{\tau_x}| = \mathbb{E}|S_{\tau_x}|$, we obtain
$$\left|\mathbb{P}(x+S_n\ge y,\,\tau_x>n) - \frac{2}{\sqrt{2\pi n}}\,e^{-\frac{y^2}{2n}}\,\mathbb{E}|S_{\tau_x}|\right| \le C\,(\mathbb{E}|X_1|^3)^3\,\frac{\mathbb{E}|S_{\tau_x}|}{n} + \frac{x^2\,\mathbb{E}|S_{\tau_x}|}{3\sqrt{2\pi}\,n^{3/2}}.$$
Taking $y=0$, we obtain
$$\left|\frac{\mathbb{P}(\tau_x>n)}{\sqrt{\frac{2}{\sigma^2\pi}}\,\mathbb{E}|S_{\tau_x}|\,n^{-1/2}} - 1\right| \le A_2\,\frac{(\mathbb{E}|X_1|^3)^3}{\sigma^9\sqrt{n}} + A_3\,\frac{x^2}{\sigma^2 n}$$
for some absolute constants $A_2$ and $A_3$. Hence
$$\left|\mathbb{P}(x+S_n\ge y\mid\tau_x>n) - e^{-\frac{y^2}{2\sigma^2 n}}\right| \le A_2\,\frac{(\mathbb{E}|X_1|^3)^3}{\sigma^9\sqrt{n}} + A_3\,\frac{x^2}{\sigma^2 n}.$$
Thus Corollary 2 is proved.

4. Improvement of Theorem 1 for lattice and absolutely continuous random walks

As we have already mentioned in the introduction, the only reason for the appearance of the third power of the Lyapunov ratio in Theorem 1 is the bound (6), where we have used the classical Berry–Esseen inequality to bound local probabilities for $S_n$. However, local central limit theorems suggest that there should exist an upper bound for $\mathbb{P}(S_n \in [y, y+z])$ which does not contain $\mathbb{E}|X_1|^3$ in the leading term. To obtain such estimates, however, one needs to introduce some assumptions on the 'local structure' of the distribution of the increments $\{X_k\}$. In this section we show how such alternative estimates can be obtained in the case when the distribution of $X_1$ is either absolutely continuous with respect to the Lebesgue measure or lattice with maximal span 1. Using these estimates, we explain how to reduce the power of the Lyapunov ratio in Theorem 1.

Let us start by mentioning a version of the Berry–Esseen inequality for the local central limit theorem for densities.
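Before turning to these local estimates, the normal approximation of Corollary 2 is easy to see empirically. The following Monte Carlo sketch is an illustration added here, not part of the original argument; the $\pm 1$ step law (so $\sigma=1$), the values $n=100$, $x=2$, the grid of $y$ values and the tolerance are our own choices. It conditions simulated walks on staying positive and compares the conditional tail with $e^{-y^2/(2\sigma^2 n)}$:

```python
import numpy as np

# Monte Carlo check of Corollary 2: conditionally on tau_x > n, the tail
# P(x + S_n >= y | tau_x > n) should be close to exp(-y^2 / (2 sigma^2 n)).
# Illustrative parameters (not from the paper): +-1 steps, n = 100, x = 2.
rng = np.random.default_rng(0)
n, x, num_paths = 100, 2, 100_000
steps = rng.choice([-1, 1], size=(num_paths, n))
walk = x + np.cumsum(steps, axis=1)      # x + S_1, ..., x + S_n for each path
alive = (walk > 0).all(axis=1)           # the event {tau_x > n}: x + S_k > 0 for all k <= n
final = walk[alive, -1]                  # x + S_n on the surviving paths
# Even y values avoid lattice-parity artifacts (final shares the parity of x + n).
max_err = max(
    abs(np.mean(final >= y) - np.exp(-y**2 / (2 * n)))
    for y in (0, 6, 10, 16, 20)
)
print(f"survivors: {alive.sum()}, max deviation from exp(-y^2/2n): {max_err:.3f}")
```

The observed deviation is of the order promised by Corollary 2 for these parameters (roughly $x^2/n$ plus a $1/\sqrt{n}$ term), far smaller than the unconditional Gaussian tail error.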
If the distribution of $X_1$ is absolutely continuous with a bounded density $p(x)$, then there exists an absolute constant $A$ such that (see the paper by Bobkov and Götze [2])
$$\sup_{x\in\mathbb{R}}\left|p_n(x) - \frac{1}{\sqrt{2\pi}}\,e^{-x^2/2}\right| \le A\,\frac{\mathbb{E}|X_1|^3}{\sqrt{n}}\,\|p\|_\infty^2, \tag{47}$$
where $p_n$ denotes the density of $S_n/\sqrt{n}$. We next derive an analogue of this inequality for lattice random walks.

Lemma 12. Assume that all the conditions of Theorem 1 are valid. If the distribution of $X_1$ is lattice with maximal span 1, then
$$\sup_{x\in\mathbb{Z}}\left|\sqrt{n}\,\mathbb{P}(S_n=x) - \frac{1}{\sqrt{2\pi}}\,e^{-x^2/2n}\right| \le \left(\frac{76}{\pi} + \frac{24}{\pi V}\right)\frac{\beta_3}{\sqrt{n}},$$
where
$$V = V(X_1) = -\sup_{t\in(0,2\pi)}\frac{\log|\varphi(t)|}{1-\cos t}.$$

Remark 13. The quantity $V(X_1)$ was introduced by Bobkov and Ulyanov in [3] and can be seen as a quantitative characteristic of the assumption on the maximal span of $X_1$. ⋄

Proof of Lemma 12. By the inversion formula for lattice random variables,
$$\mathbb{P}(S_n=x) = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{-itx}\,\varphi^n(t)\,dt = \frac{1}{2\pi\sqrt{n}}\int_{-\pi\sqrt{n}}^{\pi\sqrt{n}} e^{-itx/\sqrt{n}}\,\varphi^n\!\left(\frac{t}{\sqrt{n}}\right)dt.$$
Consequently,
$$\sqrt{n}\,\mathbb{P}(S_n=x) - \frac{1}{\sqrt{2\pi}}\,e^{-x^2/2n} = \frac{1}{2\pi}\int_{-\pi\sqrt{n}}^{\pi\sqrt{n}} e^{-itx/\sqrt{n}}\,\varphi^n\!\left(\frac{t}{\sqrt{n}}\right)dt - \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-itx/\sqrt{n}}\,e^{-t^2/2}\,dt.$$
Splitting the integrals at $\frac{1}{4L_n} = \frac{\sqrt{n}}{4\beta_3}$ (with $\beta_3 = \mathbb{E}|X_1|^3$), we obtain
$$\sup_{x\in\mathbb{Z}}\left|\sqrt{n}\,\mathbb{P}(S_n=x) - \frac{1}{\sqrt{2\pi}}\,e^{-x^2/2n}\right| \le \frac{1}{2\pi}\int_{-1/4L_n}^{1/4L_n}\left|\varphi^n\!\left(\frac{t}{\sqrt{n}}\right) - e^{-t^2/2}\right| dt + \frac{1}{2\pi}\int_{|t|\in[\frac{1}{4L_n},\,\pi\sqrt{n}]}\left|\varphi^n(t/\sqrt{n})\right| dt + \frac{1}{\pi}\int_{1/4L_n}^{\infty} e^{-t^2/2}\,dt =: I_1 + I_2 + I_3. \tag{48}$$
Using the bound
$$\left|\varphi^n\!\left(\frac{t}{\sqrt{n}}\right) - e^{-t^2/2}\right| \le 16\,L_n\,|t|^3\,e^{-t^2/3}, \qquad |t| \le \frac{1}{4L_n},$$
we obtain
$$I_1 \le \frac{16\,L_n}{\pi}\int_0^{1/4L_n} t^3\,e^{-t^2/3}\,dt \le \frac{16}{\pi}\,\frac{\beta_3}{\sqrt{n}}\int_0^\infty t^3\,e^{-t^2/3}\,dt = \frac{16}{\pi}\,\frac{\beta_3}{\sqrt{n}}\,\frac{9}{2}\int_0^\infty z\,e^{-z}\,dz = \frac{72}{\pi}\,\frac{\beta_3}{\sqrt{n}}. \tag{49}$$
For $I_3$ we have
$$I_3 \le \frac{4\beta_3}{\pi\sqrt{n}}\int_{1/4L_n}^{\infty} t\,e^{-t^2/2}\,dt = \frac{4\beta_3}{\pi\sqrt{n}}\,e^{-n/32\beta_3^2} \le \frac{4\beta_3}{\pi\sqrt{n}}.$$
(50)

In order to bound $I_2$ we notice that our assumption on the maximal span of $X_1$ implies that $V > 0$. Moreover, we have the bound
$$\log|\varphi(t)| \le -V(1-\cos t), \qquad t\in[-\pi,\pi].$$
Therefore,
$$I_2 \le \frac{1}{\pi}\int_{1/4L_n}^{\pi\sqrt{n}} e^{-nV(1-\cos(t/\sqrt{n}))}\,dt \le \frac{1}{\pi}\int_{1/4L_n}^{\pi\sqrt{n}} \exp\left\{-\frac{nV}{2}\left(\frac{t}{\sqrt{n}}\right)^2\left(1-\frac{\pi^2}{12}\right)\right\}dt \le \frac{1}{\pi}\int_{1/4L_n}^{\infty} e^{-\widetilde{V} t^2/2}\,dt,$$
where $\widetilde{V} = V\left(1-\frac{\pi^2}{12}\right)$. Then we have
$$I_2 \le \frac{4\beta_3}{\pi\sqrt{n}}\int_{1/4L_n}^{\infty} t\,e^{-\widetilde{V} t^2/2}\,dt = \frac{4\beta_3}{\pi\widetilde{V}\sqrt{n}}\,e^{-n\widetilde{V}/32\beta_3^2} \le \frac{4\beta_3}{\pi\widetilde{V}\sqrt{n}}. \tag{51}$$
Plugging (49)–(51) into (48), we obtain
$$\sup_{x\in\mathbb{Z}}\left|\sqrt{n}\,\mathbb{P}(S_n=x) - \frac{1}{\sqrt{2\pi}}\,e^{-x^2/2n}\right| \le \left(\frac{76}{\pi} + \frac{4}{\pi(1-\pi^2/12)V}\right)\frac{\beta_3}{\sqrt{n}} \le \left(\frac{76}{\pi} + \frac{24}{\pi V}\right)\frac{\beta_3}{\sqrt{n}}.$$
Thus, the proof of the lemma is complete. □

Using (47) in the absolutely continuous case and Lemma 12 in the lattice case, we can sharpen the bound (6) for local probabilities. Indeed, (47) implies that
$$\mathbb{P}(x+S_n\in[y,y+z]) \le \left(\frac{1}{\sqrt{2\pi n}} + A\,\|p\|_\infty^2\,\frac{\mathbb{E}|X_1|^3}{n}\right) z.$$
Under the assumptions of Lemma 12 one has
$$\mathbb{P}(x+S_n\in[y,y+z]) \le \left(\frac{1}{\sqrt{2\pi n}} + \left(\frac{76}{\pi}+\frac{24}{\pi V}\right)\frac{\mathbb{E}|X_1|^3}{n}\right)(z+1).$$
Letting $z=1$ in these inequalities, we conclude that if the distribution of $X_1$ is absolutely continuous or lattice with maximal span 1, then there exists an absolute constant $A$ such that
$$\mathbb{P}(x+S_n\in[y,y+1]) \le \frac{A}{\sqrt{2n}}\left(1+\frac{R\,\mathbb{E}|X_1|^3}{\sqrt{2n}}\right), \tag{52}$$
where $R = \|p\|_\infty^2$ in the absolutely continuous case and $R = V^{-1}$ in the lattice case. Replacing (6) by (52) in the proof of Lemma 5, we obtain
$$\mathbb{P}(x+S_n\in[y,y+1),\,\tau_x>n) \le C\left(1+\frac{R\,\mathbb{E}|X_1|^3}{\sqrt{n}}\right)\frac{\mathbb{E}|S_{\tau_x}|\,(y+\mathbb{E}|X_1|^3)}{n(x+\sqrt{n})}.$$
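The local limit bound of Lemma 12 can be sanity-checked numerically by computing the exact distribution of $S_n$ through repeated convolution. The sketch below is an added illustration, not part of the paper: the uniform step on $\{-1,0,1\}$ (a span-1 lattice law) and $n=400$ are our own choices, and since this step has $\sigma^2 = 2/3$ rather than $1$, we compare against the $\sigma$-scaled Gaussian density.

```python
import numpy as np

# Exact local CLT check for a span-1 lattice walk: X uniform on {-1, 0, 1}.
# Here sigma^2 = 2/3, so sqrt(n) * P(S_n = k) is compared with the scaled
# Gaussian density exp(-k^2 / (2 sigma^2 n)) / sqrt(2 pi sigma^2 n) * sqrt(n).
step = np.full(3, 1/3)            # pmf of X on {-1, 0, 1}
n, sigma2 = 400, 2/3
pmf = np.array([1.0])
for _ in range(n):                # pmf of S_n by n-fold convolution of the step pmf
    pmf = np.convolve(pmf, step)
k = np.arange(-n, n + 1)          # support of S_n, aligned with pmf's entries
gauss = np.exp(-k**2 / (2 * sigma2 * n)) / np.sqrt(2 * np.pi * sigma2 * n)
max_err = np.max(np.sqrt(n) * np.abs(pmf - gauss))
print(f"sup_k sqrt(n) |P(S_n = k) - phi_sigma(k)| = {max_err:.5f}")
```

For this symmetric step the observed discrepancy is far below the explicit constant of Lemma 12 (the third cumulant vanishes, so the true error is of order $1/n$ rather than $\beta_3/\sqrt{n}$).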
This, in its turn, leads to the following improvements of the bounds in Lemma 6:
$$\mathbb{P}(\tau_x=k) \le C_1\left(1+\frac{R\,\mathbb{E}|X_1|^3}{\sqrt{k}}\right)\frac{\mathbb{E}|S_{\tau_x}|}{k(x+\sqrt{k})}\,\mathbb{E}|X_1|^3,$$
$$\mathbb{E}\left[|x+S_{\tau_x}|;\,\tau_x=k\right] \le C_1\left(1+\frac{R\,\mathbb{E}|X_1|^3}{\sqrt{k}}\right)\frac{\mathbb{E}|S_{\tau_x}|}{k(x+\sqrt{k})}\,\mathbb{E}|X_1|^3$$
and
$$\mathbb{E}\left[|x+S_{\tau_x}|^2;\,\tau_x=k\right] \le C_1\left(1+\frac{R\,\mathbb{E}|X_1|^3}{\sqrt{k}}\right)\frac{\mathbb{E}|S_{\tau_x}|}{\sqrt{k}(x+\sqrt{k})}\,\mathbb{E}|X_1|^3$$
for all $k \ge 32(\mathbb{E}|X_1|^3)^2 + 5$. Furthermore, using (52) in the arguments leading to (30), we can see that the right-hand side in this estimate changes to
$$C\,\frac{\mathbb{E}|S_{\tau_x}|}{\sqrt{n}(x+\sqrt{n})}\,\mathbb{E}|X_1|^3\left(1+\frac{R\,\mathbb{E}|X_1|^3}{\sqrt{n}}\right).$$
Since this was the only place where the third power of $\mathbb{E}|X_1|^3$ arises, we conclude that the right-hand side in (2) can be replaced by
$$C\,\frac{(\mathbb{E}|X_1|^3)^2\,\mathbb{E}|S_{\tau_x}|}{\sigma^6\,\sqrt{n}(x+\sqrt{n})}\left(1+\frac{R\,\mathbb{E}|X_1|^3}{\sqrt{n}}\right),$$
provided that the distribution of $X_1$ is either absolutely continuous or lattice with maximal span 1.

References

[1] A. K. Aleshkyavichene. Nonuniform estimate of the speed of convergence of the distribution of the maxima of sequences of sums of independent random variables. Mathematical Transactions of the Academy of Sciences of the Lithuanian SSR, 13(3):356–378, 1973.
[2] S. G. Bobkov and F. Götze. Berry–Esseen bounds in the local limit theorems. Lith. Math. J., 65(1):50–66, 2025.
[3] S. G. Bobkov and V. V. Ulyanov. The Chebyshev–Edgeworth correction in the central limit theorem for integer-valued independent summands. Theory Probab. Appl., 66(4):537–549, 2022.
[4] D. Denisov, A. Sakhanenko, and V. Wachtel. First-passage times for random walks with nonidentically distributed increments. Ann. Probab., 46(6):3313–3350, 2018.
[5] D. Denisov, A. Tarasov, and V. Wachtel. Berry–Esseen inequality for random walks conditioned to stay positive. arXiv preprint 2412.08502, 2024.
[6] R. A. Doney and E. M. Jones. Large deviation results for random walks conditioned to stay positive.
Electron. Commun. Probab., 17, no. 38, 11 pp., 2012.
[7] I. Grama and H. Xiao. Gaussian heat kernel asymptotics for conditioned random walks. arXiv preprint 2412.08932, 2024.
[8] A. A. Mogul'skii. Absolute estimates for moments of certain boundary functionals. Theory Probab. Appl., 18:340–347, 1974.
[9] S. V. Nagaev and I. F. Pinelis. Some inequalities for the distributions of sums of independent random variables. Teor. Verojatnost. i Primenen., 22(2):254–263, 1977.
[10] D. Siegmund and Y. S. Yuh. Brownian approximations to first passage probabilities. Z. Wahrsch. Verw. Gebiete, 59(2):239–248, 1982.
[11] V. A. Vatutin and V. Wachtel. Local probabilities for random walks conditioned to stay positive. Probab. Theory Related Fields, 143(1-2):177–217, 2009.

Department of Mathematics, University of Manchester, UK
Email address: denis.denisov@manchester.ac.uk

Faculty of Mathematics, Bielefeld University, Germany
Email address: atarasov@math.uni-bielefeld.de

Faculty of Mathematics, Bielefeld University, Germany
Email address: wachtel@math.uni-bielefeld.de
