An Addendum to "How Good is PSK for Peak-Limited Fading Channels in the Low-SNR Regime?"

A proof is provided of the operational achievability of $R_\mathrm{rt}$ by the recursive training scheme in \cite{zhang07:it}, for general wide-sense stationary and ergodic Rayleigh fading processes.

Authors: Wenyi Zhang

*The author is with the Ming Hsieh Department of Electrical Engineering, University of Southern California, Los Angeles, CA. Email: wenyizha@usc.edu. This work has been supported in part by NSF OCE-0520324.

1 Introduction

In [1], an issue remaining open is whether the recursive training scheme interpretation [1, Sec. III.B] is operational for fading processes with infinitely long memory. As commented in [1], the main difficulty in analysis pertains to the effect of error propagation, which cannot be readily circumvented in order to rigorously establish a channel coding theorem. In this addendum, we provide a proof of the operational achievability of $R_\mathrm{rt}$ by the recursive training scheme, for general wide-sense stationary and ergodic Rayleigh fading processes, without the $m$-dependence constraint further imposed in the partial proof in [1]. The present proof exploits certain techniques in mismatched decoding, and its basic idea differs considerably from that in the previous proof for $m$-dependent fading processes.

Before proceeding to our proof, we briefly sketch a heuristic reasoning which has often been invoked to argue that the recursive training scheme is operational for channels with memory, and point out why such a heuristic reasoning is insufficient to lead to a mathematically rigorous proof. Consider the recursive training scheme as illustrated in [1, Fig. 1]. The heuristic reasoning is as follows. For general fading processes that possess infinite memory, some residual correlation remains within each parallel sub-channel (PSC) among its $K$ symbols. Due to the ergodicity of the fading process, as we let the interleaving depth $L$ grow to infinity, this correlation asymptotically vanishes and thus each PSC can then be viewed as essentially memoryless. Meanwhile, as we let the coding blocklength $K$ grow to infinity, the channel coding theorem (e.g., [2]) for the $L$ asymptotically memoryless PSCs ensures that each PSC can achieve a rate arbitrarily close to the input-constrained capacity of a true memoryless fading channel, and consequently the arbitrarily reliably decoded symbols can be used as essentially error-free training pilots for the recursive channel estimation/decoding procedure.

Unfortunately, the outlined heuristic reasoning is inherently flawed: it does not properly resolve a tension between channel memory decorrelation and error propagation. The essentially memoryless PSCs are obtained at the cost of indefinitely increasing the interleaving depth $L$. As $L$ increases, the block error probability of decoding each PSC is required to decrease inversely proportionally with $L$, in order to prevent catastrophic error propagation, and thus the coding blocklength $K$ is required to increase correspondingly. However, it is unclear how valid the "essentially memoryless" property of the PSCs remains as $K$ scales with $L$ asymptotically, because a formal mathematical characterization of such a property would involve a limiting argument that fixes a sufficiently large $K$ and subsequently lets $L$ grow large. At this point, we have observed a logical loop between the scaling of $K$ and $L$. So even if the heuristic reasoning were indeed correct, it lacks a mathematical confirmation, except for fading processes with finite memory, namely, the so-called $m$-dependent fading processes.
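The decorrelation half of this heuristic is easy to see numerically. The sketch below is an illustration under assumptions of our own (not part of the addendum, which treats general wide-sense stationary ergodic processes): it uses a first-order Gauss-Markov (AR(1)) fading process as a stand-in for a process with memory, interleaves it with depth $L$, and measures the residual lag-1 correlation within one PSC, which decays as $a^L$. The function name `psc_lag1_corr` and the parameter values are hypothetical choices for illustration; the error-propagation tension just described is exactly what such a sketch does not capture.

```python
import numpy as np

rng = np.random.default_rng(2)

def psc_lag1_corr(a, L, K=10_000):
    """Empirical lag-1 correlation within one PSC after interleaving an
    AR(1) fading process h[t] = a*h[t-1] + sqrt(1-a^2)*w[t] (unit-variance,
    circular complex Gaussian innovations) with depth L.
    PSC 0 sees the subsampled sequence h[0], h[L], h[2L], ...; theory: a^L."""
    t = L * K
    w = (rng.standard_normal(t) + 1j * rng.standard_normal(t)) / np.sqrt(2)
    h = np.empty(t, dtype=complex)
    h[0] = w[0]                                   # stationary unit-variance start
    for i in range(1, t):
        h[i] = a * h[i - 1] + np.sqrt(1 - a**2) * w[i]
    psc = h[::L]                                  # the K symbols of PSC 0
    return float(np.abs(np.mean(psc[1:] * np.conj(psc[:-1]))))

for L in (1, 2, 4, 8):
    print(L, round(psc_lag1_corr(0.9, L), 3))
```

For $a = 0.9$ the measured correlation drops from about $0.9$ at $L = 1$ toward $0.9^L$, consistent with the claim that each PSC becomes essentially memoryless only in the limit of large interleaving depth.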
In our proof in this addendum, we break the logical loop by not requiring the PSCs to be essentially memoryless, and hence relieve the tension between channel memory decorrelation and error propagation, since $K$ and $L$ do not need to scale with each other. However, without the essentially memoryless condition, we can no longer ignore the residual correlation within each PSC. Fortunately, the generalized mutual information (GMI) in the theory of mismatched decoding (see, e.g., [3] and references therein) provides us an alternative way of establishing the achievability of information rates (at least) up to the input-constrained capacity of an associated memoryless fading channel, as will be elaborated shortly.

2 Proof

Consider the recursive training scheme as described in [1, Sec. III.B]. We seek to prove that, for an arbitrarily small decoding error rate $\delta > 0$ and an arbitrarily small overall rate loss factor $\lambda > 0$, there exists a code such that the recursive coding scheme achieves the rate $(1 - \lambda) \cdot R_\mathrm{rt}$ with an average decoding error probability no greater than $\delta$, where $R_\mathrm{rt}$ is defined by [1, Eqn. (23)].

For our purpose, we also consider a virtual system consisting of infinitely many PSCs. Among those PSCs, PSC $l$ is a Rayleigh fading channel with perfect receive channel state information (CSI), and with independent and identically distributed (i.i.d.) phase-shift keying (PSK) inputs of average SNR $\rho[l]$ as defined by [1, Eqn. (19)]. Furthermore, the fading process within each PSC is memoryless, and the noise samples are also i.i.d. zero-mean circular complex Gaussian. By such a construction, the $L$-average capacity, namely the average input-constrained capacity of the first $L$ PSCs in this virtual system, approaches $R_\mathrm{rt}$ as $L$ grows asymptotically large (cf. [1, Sec. III.A]).
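The per-PSC input-constrained capacity of this virtual system, $I(S; X \mid \hat{H}_d)$, is straightforward to estimate by Monte Carlo, since both conditional densities are known in closed form for the memoryless coherent model. The sketch below is an illustration under assumptions of our own, not part of the proof: the function name `psk_coherent_mi`, the choice of QPSK ($J = 4$), the SNR $\rho[l] = 1$, and the use of NumPy are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def psk_coherent_mi(rho, J=4, n=200_000):
    """Monte Carlo estimate, in nats, of I(S; X | H) for J-ary PSK over
    memoryless Rayleigh fading with perfect receive CSI:
    X = sqrt(rho)*H*S + Z, with H, Z ~ CN(0,1) independent."""
    theta = np.exp(2j * np.pi * np.arange(J) / J)      # unit-energy PSK points
    s = theta[rng.integers(J, size=n)]                 # i.i.d. equiprobable inputs
    h = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    z = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    x = np.sqrt(rho) * h * s + z
    # I(S; X | H) = E[ log( p(X|S,H) / p(X|H) ) ]; the 1/pi factors cancel.
    num = np.exp(-np.abs(x - np.sqrt(rho) * h * s) ** 2)
    den = np.mean(np.exp(-np.abs(x[:, None]
                  - np.sqrt(rho) * h[:, None] * theta[None, :]) ** 2), axis=1)
    return float(np.mean(np.log(num / den)))

print(f"coherent QPSK at 0 dB: {psk_coherent_mi(1.0):.3f} nats")
```

Averaging such estimates across the first $L$ PSCs, with $\rho[l]$ per [1, Eqn. (19)], would give a numerical approximation of the $L$-average capacity that approaches $R_\mathrm{rt}$.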
In our proof, we hence fix a sufficiently large $L$ such that the $L$-average capacity of the virtual system exceeds $(1 - \lambda/2) \cdot R_\mathrm{rt}$. In order to complete the achievability proof, it suffices to show that PSC $l$ in the recursive training scheme achieves at least the input-constrained capacity of memoryless PSC $l$ in the virtual system, for every $l$ from 0 to $(L-1)$.

In light of [1, Eqns. (18)-(19)] and the one-step MMSE prediction in recursive training, we can write the channel equation of PSC $l$ as
$$X[l;k] = \sqrt{\rho[l]} \cdot \hat{H}_d[l;k] \cdot S[l;k] + \bar{Z}[l;k], \quad k = 1, \ldots, K, \qquad (1)$$
where the index $[l;k]$ denotes the $k$th channel use of PSC $l$. Due to the wide-sense stationarity and ergodicity of the original fading process, both the $\{\hat{H}_d[l;:]\}$ and $\{\bar{Z}[l;:]\}$ sequences are wide-sense stationary and ergodic with respect to the index $k$. In this addendum, we further normalize them to be of unit variance, so that $\rho[l]$ is the average SNR. However, due to the non-negligible residual correlation among channel uses within PSC $l$, in general neither the $\{\hat{H}_d[l;:]\}$ nor the $\{\bar{Z}[l;:]\}$ sequence can be a circular complex Gaussian process. Fortunately, we have and shall utilize a "locally" Gaussian property of (1). That is, for each $k$, $\hat{H}_d[l;k]$ and $\bar{Z}[l;k]$ are jointly Gaussian and in fact independent, as shown in [1, Sec. III.A].

Let us evaluate the GMI of (1) following [3, Sec. III]. For a given codebook $\{s_m[l;:]\}_{m=1}^M$ and a given realization of received channel outputs $x[l;:]$ along with the channel fading process $\hat{h}_d[l;:]$, a mismatched channel decoder chooses the message $m$ that minimizes the metric
$$D(m) = \frac{1}{K} \sum_{k=1}^{K} \left| x[l;k] - \sqrt{\rho[l]} \cdot \hat{h}_d[l;k] \cdot s_m[l;k] \right|^2. \qquad (2)$$
That is, the decoder treats the PSC as if it were memoryless Rayleigh fading with i.i.d.
circular complex Gaussian noise.

Consider the ensemble of all codebooks generated i.i.d. by a prescribed PSK constellation, and without loss of generality assume that the first message is selected, with its corresponding codeword transmitted. First, due to ergodicity, the metric $D(1)$ converges almost surely to unity, since
$$T = \lim_{K \to \infty} D(1) = \lim_{K \to \infty} \frac{1}{K} \sum_{k=1}^{K} \left| \bar{Z}[l;k] \right|^2 = \mathbf{E}\left\{ \left| \bar{Z}[l;k] \right|^2 \right\} = 1, \quad \text{a.s.} \qquad (3)$$
The probability that an incorrect codeword accumulates a metric smaller than unity decays exponentially in $K$, and the GMI is just the exponent, given by
$$I_\mathrm{GMI} = \sup_{\mu < 0} \left( \mu - \Lambda(\mu) \right). \qquad (4)$$
The limiting log-moment generating function $\Lambda(\mu) = \lim_{K \to \infty} \frac{1}{K} \Lambda_K(K\mu)$ is induced via
$$\Lambda_K(\mu) = \log \mathbf{E}\left\{ e^{\mu D(m')} \,\middle|\, X[l;:], \hat{H}_d[l;:] \right\}, \qquad (5)$$
for $m' > 1$. Because the inputs within each codeword are i.i.d. PSK symbols, conditioning upon $X[l;:]$ and $\hat{H}_d[l;:]$, we have
$$\frac{1}{K} \Lambda_K(K\mu) = \frac{1}{K} \sum_{k=1}^{K} \log \mathbf{E}\left\{ e^{\mu \left| X[l;k] - \sqrt{\rho[l]} \cdot \hat{H}_d[l;k] \cdot S \right|^2} \,\middle|\, X[l;k], \hat{H}_d[l;k] \right\}, \qquad (6)$$
where the expectation is with respect to the PSK constellation $S$. For concreteness, let us denote the PSK constellation by $S \in \{\theta_j\}_{j=1}^J$ with each constellation point selected equiprobably. Hence we can further write (6) as
$$\frac{1}{K} \Lambda_K(K\mu) = \frac{1}{K} \sum_{k=1}^{K} \log \sum_{j=1}^{J} e^{\mu \left| X[l;k] - \sqrt{\rho[l]} \cdot \theta_j \cdot \hat{H}_d[l;k] \right|^2} - \log J. \qquad (7)$$
Due to ergodicity, for every $\mu < 0$ within a certain finite neighborhood of zero, $\Lambda(\mu)$ converges almost surely to an expectation, as
$$\Lambda(\mu) = \lim_{K \to \infty} \frac{1}{K} \Lambda_K(K\mu) = \mathbf{E}\left\{ \log \sum_{j=1}^{J} e^{\mu \left| X - \sqrt{\rho[l]} \cdot \theta_j \cdot \hat{H}_d \right|^2} \right\} - \log J, \quad \text{a.s.}, \qquad (8)$$
where the expectation is with respect to the fading coefficient $\hat{H}_d \sim \mathcal{CN}(0,1)$ and the channel output corresponding to the first message, $X = \sqrt{\rho[l]} \cdot \hat{H}_d \cdot S_1 + \bar{Z}$ with $\bar{Z} \sim \mathcal{CN}(0,1)$ independent of $\hat{H}_d$.
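The quantities (4) and (8) are amenable to direct Monte Carlo evaluation. The sketch below, an illustration of our own rather than part of the proof, estimates $\Lambda(\mu)$ over a grid of $\mu < 0$ and returns the resulting GMI; it assumes the matched memoryless model of the virtual system, for which the supremum should be attained near $\mu = -1$. The function name `gmi_lower_bound`, the QPSK constellation, and the grid of $\mu$ values are hypothetical choices, not from [1] or [3].

```python
import numpy as np

rng = np.random.default_rng(1)

def gmi_lower_bound(rho, J=4, n=100_000):
    """Monte Carlo sketch of (8) and (4): estimate Lambda(mu) on a grid of
    mu < 0 and return (sup over the grid of mu - Lambda(mu), maximizing mu).
    Assumes the memoryless model X = sqrt(rho)*H*S1 + Z with H, Z ~ CN(0,1)
    independent and i.i.d. equiprobable J-ary PSK inputs."""
    theta = np.exp(2j * np.pi * np.arange(J) / J)
    s1 = theta[rng.integers(J, size=n)]                # transmitted codeword symbols
    h = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    z = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    x = np.sqrt(rho) * h * s1 + z
    # Squared metric distances to every constellation point, per channel use.
    d2 = np.abs(x[:, None] - np.sqrt(rho) * h[:, None] * theta[None, :]) ** 2
    best_val, best_mu = -np.inf, 0.0
    for mu in np.linspace(-2.0, -0.05, 40):
        # Inner mean over j implements (8); the 1/J absorbs the -log J term.
        lam = np.mean(np.log(np.mean(np.exp(mu * d2), axis=1)))
        if mu - lam > best_val:
            best_val, best_mu = mu - lam, mu
    return float(best_val), float(best_mu)

gmi, mu_star = gmi_lower_bound(1.0)
print(f"GMI ~ {gmi:.3f} nats, attained at mu ~ {mu_star:.2f}")
```

In this matched memoryless setting the maximizing $\mu$ lands near $-1$, and the value agrees with the input-constrained mutual information, consistent with the bound (10) derived next.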
Note that the probability density function of $X$ conditioned upon $\hat{H}_d$ is
$$p_{X|\hat{H}_d}(x|\hat{h}_d) = \frac{1}{J} \sum_{j=1}^{J} \frac{1}{\pi} e^{-\left| x - \sqrt{\rho[l]} \cdot \theta_j \cdot \hat{h}_d \right|^2}. \qquad (9)$$
So by letting $\mu = -1$, we have
$$I_\mathrm{GMI} \geq -1 - \Lambda(-1) = -1 - \log \pi - \mathbf{E}\left\{ \log p_{X|\hat{H}_d}(X|\hat{H}_d) \right\} = h(X|\hat{H}_d) - h(\bar{Z}) = h(X|\hat{H}_d) - h(X|S_1, \hat{H}_d) = I(S_1; X|\hat{H}_d), \qquad (10)$$
which is precisely the input-constrained capacity of memoryless PSC $l$ in the virtual system. Hence, as we choose a sufficiently large coding blocklength $K$ such that for every $l$ from 0 to $(L-1)$ there exist a codebook and an associated mismatched channel decoder that achieve a fraction $(1-\lambda)/(1-\lambda/2)$ of the GMI of PSC $l$ with an average decoding error probability no greater than $\delta/L$, the entire recursive training scheme achieves the rate $(1-\lambda) \cdot R_\mathrm{rt}$ with an average decoding error probability no greater than $\delta$, following a standard union upper bounding argument. This concludes our proof.

References

[1] W. Zhang and J. N. Laneman, "How Good is PSK for Peak-Limited Fading Channels in the Low-SNR Regime?" IEEE Trans. Inform. Theory, Vol. 53, No. 1, pp. 236-251, Jan. 2007.

[2] R. G. Gallager, Information Theory and Reliable Communication, John Wiley & Sons, Inc., New York, NY, 1968.

[3] A. Lapidoth and S. Shamai (Shitz), "Fading Channels: How Perfect Need 'Perfect Side Information' Be?" IEEE Trans. Inform. Theory, Vol. 48, No. 5, pp. 1118-1134, May 2002.
