University of Bern
Institute of Mathematical Statistics and Actuarial Science
Technical Report 71

On an Auxiliary Function for Log-Density Estimation

Madeleine L. Cule and Lutz Dümbgen
(University of Cambridge and University of Bern)

July 2008, minor revisions in January 2016

Abstract. In this note we provide explicit expressions and expansions for a special function $J$ which appears in nonparametric estimation of log-densities. This function returns the integral of a log-linear function on a simplex of arbitrary dimension. In particular it is used in the R package LogConcDEAD by Cule et al. (2007).

Contents

1 Introduction
2 The special function J(·)
  2.1 Definition of J(·)
  2.2 A first recursion formula
  2.3 Another recursion formula
3 An expansion for J(·)
4 A recursive implementation of J(·) and its partial derivatives
5 The special cases d = 1 and d = 2
  5.1 General considerations about a bivariate function
  5.2 More details for the case d = 1
  5.3 The case d = 2
6 Gamma and multivariate beta (Dirichlet) distributions

1 Introduction

Suppose one wants to estimate a probability density $f$ on a certain compact region $C \subset \mathbb{R}^d$, based on an empirical distribution $\hat P$ of a sample from $f$. One possibility is to embed $C$ into a union $S = \bigcup_{j=1}^m S_j$ of simplices $S_j \subset \mathbb{R}^d$ with pairwise disjoint interiors. By a simplex in $\mathbb{R}^d$ we mean the convex hull of $d+1$ points. Then we consider the family $\mathcal{G} = \mathcal{G}(S_1, \dots, S_m)$ of all continuous functions $\psi : S \to \mathbb{R}$ which are linear on each simplex $S_j$.
Now
$$\hat\psi := \arg\max_{\psi \in \mathcal G} \Big( \int_S \psi \, d\hat P - \int_S \exp(\psi(x)) \, dx \Big) \tag{1}$$
defines a maximum likelihood estimator $\hat f := \exp(\hat\psi)$ of a probability density on $S$, based on $\hat P$. For existence and uniqueness of this estimator see, for instance, Cule et al. (2008). To compute $\hat\psi$ explicitly, note that $\psi \in \mathcal G$ is uniquely determined by its values at the corners (extremal points) of all simplices $S_j$, and $\int \psi \, d\hat P$ is a linear function of these values. The second integral in (1) may be represented as follows: Let $S_j$ be the convex hull of $x_{0j}, x_{1j}, \dots, x_{dj} \in \mathbb R^d$, and set $y_{ij} := \psi(x_{ij})$. Then
$$\int_S \exp(\psi(x)) \, dx = \sum_{j=1}^m \int_{S_j} \exp(\psi(x)) \, dx = \sum_{j=1}^m D_j \cdot J(y_{0j}, y_{1j}, \dots, y_{dj}),$$
where $D_j := \bigl|\det\bigl( x_{1j} - x_{0j}, \, x_{2j} - x_{0j}, \, \dots, \, x_{dj} - x_{0j} \bigr)\bigr|$, while $J(\cdot)$ is an auxiliary function defined and analyzed subsequently.

2 The special function J(·)

2.1 Definition of J(·)

For $d \in \mathbb N$ let
$$T_d := \Big\{ u \in (0,1)^d : \sum_{i=1}^d u_i < 1 \Big\}.$$
Then for $y_0, y_1, \dots, y_d \in \mathbb R$ we define
$$J(y_0, y_1, \dots, y_d) := \int_{T_d} \exp\Big( (1 - u_+) y_0 + \sum_{i=1}^d u_i y_i \Big) \, du \quad\text{with } u_+ := \sum_{i=1}^d u_i.$$
Standard considerations in connection with beta and gamma distributions, as described in Section 6, reveal the following alternative representation:
$$J(y_0, y_1, \dots, y_d) = \frac{1}{d!} \, \mathbb E \exp\Big( \sum_{i=0}^d B_i y_i \Big) \quad\text{with } B_i = B_{d,i} := \frac{E_i}{\sum_{s=0}^d E_s}$$
and stochastically independent, standard exponential random variables $E_0, E_1, \dots, E_d$. This representation shows clearly that $J(\cdot)$ is symmetric in its arguments. An often useful identity is
$$J(y_0, y_1, \dots, y_d) = \exp(y_*) \, J(y_0 - y_*, y_1 - y_*, \dots, y_d - y_*) \quad\text{for any } y_* \in \mathbb R. \tag{2}$$

2.2 A first recursion formula

For $d = 1$ one can compute $J(y_0, y_1)$ explicitly:
$$J(y_0, y_1) = \int_0^1 \exp\big( (1-u) y_0 + u y_1 \big) \, du = \begin{cases} \dfrac{\exp(y_1) - \exp(y_0)}{y_1 - y_0} & \text{if } y_0 \ne y_1, \\[1ex] \exp(y_0) & \text{if } y_0 = y_1. \end{cases}$$
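The two-branch formula above translates directly into code. The following is a minimal sketch; the function name `J1` and the switching tolerance `tol` are our own choices, not from the report. Near $y_0 = y_1$ the closed form cancels catastrophically, so the sketch switches to a short expansion around the midpoint (consistent with the Taylor expansion derived in Section 3):

```python
import math

def J1(y0, y1, tol=1e-8):
    """J(y0, y1) = integral_0^1 exp((1-u)*y0 + u*y1) du.

    Uses the closed form (exp(y1) - exp(y0)) / (y1 - y0) when the
    arguments are well separated, and a short Taylor expansion around
    the midpoint otherwise to avoid catastrophic cancellation.
    """
    if abs(y1 - y0) > tol:
        return (math.exp(y1) - math.exp(y0)) / (y1 - y0)
    # Near the diagonal: J(y0, y1) = exp(ybar) * (1 + z^2/6 + O(z^4))
    # with ybar the midpoint and z = (y1 - y0)/2.
    ybar = 0.5 * (y0 + y1)
    z = 0.5 * (y1 - y0)
    return math.exp(ybar) * (1.0 + z * z / 6.0)
```

The tolerance trades truncation error in the series against cancellation error in the closed form; for double precision a threshold around $10^{-8}$ to $10^{-4}$ is a reasonable compromise.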
For $d \ge 2$ one may use the following recursion formula:
$$J(y_0, y_1, \dots, y_d) = \begin{cases} \dfrac{J(y_1, y_2, \dots, y_d) - J(y_0, y_2, \dots, y_d)}{y_1 - y_0} & \text{if } y_0 \ne y_1, \\[1.5ex] \dfrac{\partial}{\partial y_1} J(y_1, y_2, \dots, y_d) & \text{if } y_0 = y_1. \end{cases} \tag{3}$$
Since $J(y_0, y_1, \dots, y_d)$ is continuous in $y_0, y_1, \dots, y_d$, it suffices to verify (3) in case of $y_0 \ne y_1$. We may identify $T_d$ with the set $\{ (v, u) : u \in T_{d-1}, \, v \in (0, 1 - u_+) \}$. Then it follows from Fubini's theorem that
$$\begin{aligned}
J(y_0, y_1, \dots, y_d)
&= \int_{T_{d-1}} \int_0^{1-u_+} \exp\Big( (1 - u_+ - v) y_0 + v y_1 + \sum_{i=2}^d u_{i-1} y_i \Big) \, dv \, du \\
&= \int_{T_{d-1}} \left[ \frac{\exp\big( (1 - u_+ - v) y_0 + v y_1 + \sum_{i=2}^d u_{i-1} y_i \big)}{y_1 - y_0} \right]_{v=0}^{1-u_+} du \\
&= \int_{T_{d-1}} \frac{\exp\big( (1 - u_+) y_1 + \sum_{i=2}^d u_{i-1} y_i \big) - \exp\big( (1 - u_+) y_0 + \sum_{i=2}^d u_{i-1} y_i \big)}{y_1 - y_0} \, du \\
&= \frac{J(y_1, y_2, \dots, y_d) - J(y_0, y_2, \dots, y_d)}{y_1 - y_0}.
\end{aligned}$$

2.3 Another recursion formula

It is well known that for any integer $0 \le j < d$, the random objects
$$\Big( \frac{E_i}{\sum_{s=0}^j E_s} \Big)_{i=0}^j, \qquad B := \frac{\sum_{i=0}^j E_i}{\sum_{s=0}^d E_s}, \qquad \Big( \frac{E_i}{\sum_{s=j+1}^d E_s} \Big)_{i=j+1}^d$$
are stochastically independent with $B \sim \mathrm{Beta}(j+1, d-j)$; see also Section 6. Hence we end up with the following recursive identity:
$$\begin{aligned}
J(y_0, y_1, \dots, y_d)
&= \frac{j! \, (d-j-1)!}{d!} \, \mathbb E\big[ J(B y_0, \dots, B y_j) \, J((1-B) y_{j+1}, \dots, (1-B) y_d) \big] \\
&= \int_0^1 u^j (1-u)^{d-j-1} J(u y_0, \dots, u y_j) \, J((1-u) y_{j+1}, \dots, (1-u) y_d) \, du
\end{aligned}$$
with $J(r) := \exp(r)$. Here we used the well-known identity
$$\int_0^1 (1-u)^\ell u^m \, du = \frac{\ell! \, m!}{(\ell + m + 1)!} \quad\text{for integers } \ell, m \ge 0. \tag{4}$$
Plugging $j = d-1$ into the previous recursive equation leads to
$$J(y_0, y_1, \dots, y_d) = \int_0^1 u^{d-1} J(u y_0, \dots, u y_{d-1}) \exp((1-u) y_d) \, du. \tag{5}$$

3 An expansion for J(·)

With $\bar y := (d+1)^{-1} \sum_{i=0}^d y_i$ and $z_i := y_i - \bar y$ one may write $J(y_0, y_1, \dots, y_d) = \exp(\bar y) \, J(z_0, z_1, \dots, z_d)$ by virtue of (2). Note that $z_+ := \sum_{i=0}^d z_i = 0$. As $z := (z_i)_{i=0}^d \to 0$,
$$d! \, J(z_0, z_1, \dots, z_d) = 1 + \sum_{i=0}^d \mathbb E(B_i) z_i + \frac12 \sum_{i,j=0}^d \mathbb E(B_i B_j) z_i z_j + \frac16 \sum_{i,j,k=0}^d \mathbb E(B_i B_j B_k) z_i z_j z_k + O(\|z\|^4).$$
It follows from Lemma 6.1 that
$$\mathbb E\Big( \prod_{i=0}^d B_i^{k_i} \Big) = \frac{\prod_{i=0}^d k_i!}{[d + k_+]_{k_+}} \quad\text{for integers } k_0, k_1, \dots, k_d \ge 0.$$
In particular,
$$\mathbb E(B_0) = \frac{1}{d+1}, \qquad \mathbb E(B_0^2) = \frac{2}{[d+2]_2}, \qquad \mathbb E(B_0 B_1) = \frac{1}{[d+2]_2},$$
$$\mathbb E(B_0^3) = \frac{6}{[d+3]_3}, \qquad \mathbb E(B_0^2 B_1) = \frac{2}{[d+3]_3}, \qquad \mathbb E(B_0 B_1 B_2) = \frac{1}{[d+3]_3}.$$
Consequently, $\sum_{i=0}^d \mathbb E(B_i) z_i = \mathbb E(B_0) z_+ = 0$,
$$[d+2]_2 \sum_{i,j=0}^d \mathbb E(B_i B_j) z_i z_j = \sum_{i,j=0}^d \big( 1_{[i=j]} \cdot 2 + 1_{[i \ne j]} \big) z_i z_j = \sum_{i,j=0}^d \big( 1_{[i=j]} + 1 \big) z_i z_j = \sum_{i=0}^d z_i^2 + z_+^2 = \sum_{i=0}^d z_i^2,$$
and
$$\begin{aligned}
[d+3]_3 \sum_{i,j,k=0}^d \mathbb E(B_i B_j B_k) z_i z_j z_k
&= \sum_{i,j,k=0}^d \big( 1_{[i=j=k]} \cdot 6 + 1_{[\#\{i,j,k\}=2]} \cdot 2 + 1_{[\#\{i,j,k\}=3]} \big) z_i z_j z_k \\
&= \sum_{i,j,k=0}^d \big( 1_{[i=j=k]} \cdot 5 + 1_{[\#\{i,j,k\}=2]} + 1 \big) z_i z_j z_k \\
&= 5 \sum_{i=0}^d z_i^3 + 3 \sum_{s,t=0}^d 1_{[s \ne t]} z_s^2 z_t + z_+^3 \\
&= 5 \sum_{i=0}^d z_i^3 + 3 \sum_{s=0}^d z_s^2 z_+ - 3 \sum_{s=0}^d z_s^3 + z_+^3 = 2 \sum_{i=0}^d z_i^3.
\end{aligned}$$
Consequently,
$$J(y_0, y_1, \dots, y_d) = \exp(\bar y) \Big( \frac{1}{d!} + \frac{1}{2(d+2)!} \sum_{i=0}^d z_i^2 + \frac{1}{3(d+3)!} \sum_{i=0}^d z_i^3 + O\big( \|z\|^4 \big) \Big). \tag{6}$$

4 A recursive implementation of J(·) and its partial derivatives

By means of (3) and the Taylor expansion (6) one can implement the function $J(\cdot)$ in a recursive fashion. In what follows we use the abbreviation
$$y_{a:b} = \begin{cases} (y_a, \dots, y_b) & \text{if } a \le b, \\ () & \text{if } a > b. \end{cases}$$
To compute $J(y_{0:d})$ we assume without loss of generality that $y_0 \le y_1 \le \dots \le y_d$. It follows from (3) and the symmetry of $J(\cdot)$ that
$$J(y_{0:d}) = \frac{J(y_{1:d}) - J(y_{0:d-1})}{y_d - y_0} \quad\text{if } y_0 \ne y_d.$$
This formula is numerically sound if $y_d - y_0$ is not too small. Otherwise one should use (6).
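Expansion (6) can be sanity-checked numerically against the exact $d = 1$ formula from Section 2.2. The helper names below are our own, not from the report:

```python
import math

def J_exact_d1(y0, y1):
    # closed form from Section 2.2
    if y0 == y1:
        return math.exp(y0)
    return (math.exp(y1) - math.exp(y0)) / (y1 - y0)

def J_series(y):
    # truncated expansion (6):
    # exp(ybar) * (1/d! + sum z_i^2 / (2 (d+2)!) + sum z_i^3 / (3 (d+3)!))
    d = len(y) - 1
    ybar = sum(y) / (d + 1)
    z = [yi - ybar for yi in y]
    s2 = sum(zi ** 2 for zi in z)
    s3 = sum(zi ** 3 for zi in z)
    return math.exp(ybar) * (1.0 / math.factorial(d)
                             + s2 / (2.0 * math.factorial(d + 2))
                             + s3 / (3.0 * math.factorial(d + 3)))
```

For nearly equal arguments the truncation error is $O(\|z\|^4)$, so the two values agree to many digits; for equal arguments the series is exact, since then all $z_i = 0$ and $J(y, \dots, y) = \exp(y)/d!$.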
This leads to the pseudo-code in Table 1.

    Algorithm J ← J(y, d, ε)
        if y_d − y_0 < ε then
            ȳ  ← (Σ_{i=0}^d y_i) / (d + 1)
            z2 ← Σ_{i=0}^d (y_i − ȳ)² / 2
            z3 ← Σ_{i=0}^d (y_i − ȳ)³ / 3
            J  ← exp(ȳ) · ( 1/d! + z2/(d+2)! + z3/(d+3)! )
        else
            J  ← ( J(y_{1:d}, d−1, ε) − J(y_{0:d−1}, d−1, ε) ) / (y_d − y_0)
        end if

    Table 1: Pseudo-code for J(y) with ordered input vector y.

To avoid messy formulae, one can express partial derivatives of $J(\cdot)$ in terms of higher-order versions of $J(\cdot)$ by means of the recursion (3). For instance,
$$\frac{\partial J(y_{0:d})}{\partial y_0} = \lim_{\epsilon \to 0} \frac{J(y_0 + \epsilon, y_{1:d}) - J(y_0, y_{1:d})}{\epsilon} = \lim_{\epsilon \to 0} J(y_0, y_0 + \epsilon, y_{1:d}) = J(y_0, y_0, y_{1:d}).$$
Similarly,
$$\begin{aligned}
\frac{\partial^2 J(y_{0:d})}{\partial y_0^2}
&= \lim_{\epsilon \to 0} \Big( \frac{J(y_0 + \epsilon, y_{1:d}) - J(y_0, y_{1:d})}{\epsilon} - \frac{J(y_0, y_{1:d}) - J(y_0 - \epsilon, y_{1:d})}{\epsilon} \Big) \Big/ \epsilon \\
&= 2 \lim_{\epsilon \to 0} \frac{J(y_0, y_0 + \epsilon, y_{1:d}) - J(y_0, y_0 - \epsilon, y_{1:d})}{2\epsilon} \\
&= 2 \lim_{\epsilon \to 0} J(y_0, y_0 - \epsilon, y_0 + \epsilon, y_{1:d}) = 2 \, J(y_0, y_0, y_0, y_{1:d}),
\end{aligned}$$
while
$$\begin{aligned}
\frac{\partial^2 J(y_{0:d})}{\partial y_0 \, \partial y_1}
&= \lim_{\epsilon \to 0} \Big( \frac{J(y_0 + \epsilon, y_1 + \epsilon, y_{2:d}) - J(y_0, y_1 + \epsilon, y_{2:d})}{\epsilon} - \frac{J(y_0 + \epsilon, y_1, y_{2:d}) - J(y_0, y_1, y_{2:d})}{\epsilon} \Big) \Big/ \epsilon \\
&= \lim_{\epsilon \to 0} \frac{J(y_0, y_0 + \epsilon, y_1 + \epsilon, y_{2:d}) - J(y_0, y_0 + \epsilon, y_1, y_{2:d})}{\epsilon} \\
&= \lim_{\epsilon \to 0} J(y_0, y_0 + \epsilon, y_1, y_1 + \epsilon, y_{2:d}) = J(y_0, y_0, y_1, y_1, y_{2:d}).
\end{aligned}$$

5 The special cases d = 1 and d = 2

For small dimension $d$ it may be worthwhile to work with non-recursive implementations of the function $J(\cdot)$. Here we collect and extend some results of Dümbgen et al. (2007).

5.1 General considerations about a bivariate function

In view of (3) we consider an arbitrary function $f : \mathbb R \to \mathbb R$ which is infinitely often differentiable.
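A direct Python transcription of Table 1 might look as follows. This is a sketch: the function name `J_rec` and the default threshold `eps` are our own choices, and the input list is assumed sorted in increasing order (each recursive call passes sorted subvectors, so the ordering is preserved automatically):

```python
import math

def J_rec(y, eps=1e-3):
    """Recursive evaluation of J(y_0, ..., y_d) following Table 1.

    `y` must be sorted in increasing order.
    """
    d = len(y) - 1
    if y[d] - y[0] < eps:
        # Taylor expansion (6) around the mean of the arguments
        ybar = sum(y) / (d + 1)
        z2 = sum((yi - ybar) ** 2 for yi in y) / 2.0
        z3 = sum((yi - ybar) ** 3 for yi in y) / 3.0
        return math.exp(ybar) * (1.0 / math.factorial(d)
                                 + z2 / math.factorial(d + 2)
                                 + z3 / math.factorial(d + 3))
    # recursion (3): drop the smallest resp. largest entry
    return (J_rec(y[1:], eps) - J_rec(y[:-1], eps)) / (y[d] - y[0])
```

Note that the base case also covers $d = 0$, where the expansion collapses to $J(y_0) = \exp(y_0)$, so the recursion bottoms out correctly.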
Then
$$h(r, s) := \begin{cases} \dfrac{f(s) - f(r)}{s - r} & \text{if } s \ne r, \\[1ex] f'(r) & \text{if } s = r \end{cases}$$
defines a smooth and symmetric function $h : \mathbb R^2 \to \mathbb R$ such that
$$h(r, s) = f'(r) + \frac{f''(r)}{2} (s - r) + O\big( (s-r)^2 \big) \quad\text{as } s \to r.$$
Its partial derivatives of order one and two are given by
$$\frac{\partial h(r, s)}{\partial r} = \begin{cases} \dfrac{f(s) - f(r) - f'(r)(s - r)}{(s - r)^2} & \text{if } s \ne r, \\[1.5ex] \dfrac{f''(r)}{2} + \dfrac{f'''(r)}{6} (s - r) + O\big( (s-r)^2 \big) & \text{as } s \to r, \end{cases}$$
$$\frac{\partial^2 h(r, s)}{\partial r^2} = \begin{cases} \dfrac{2 \big( f(s) - f(r) - f'(r)(s - r) \big) - (s - r)^2 f''(r)}{(s - r)^3} & \text{if } s \ne r, \\[1.5ex] \dfrac{f'''(r)}{3} + \dfrac{f''''(r)}{12} (s - r) + O\big( (s-r)^2 \big) & \text{as } s \to r, \end{cases}$$
$$\frac{\partial^2 h(r, s)}{\partial r \, \partial s} = \begin{cases} \dfrac{(s - r) \big( f'(r) + f'(s) \big) - 2 \big( f(s) - f(r) \big)}{(s - r)^3} & \text{if } s \ne r, \\[1.5ex] \dfrac{f'''(r)}{6} + \dfrac{f''''(r)}{12} (s - r) + O\big( (s-r)^2 \big) & \text{as } s \to r. \end{cases}$$
The other partial derivatives of order one and two follow via symmetry considerations.

5.2 More details for the case d = 1

Recall that
$$J(r, s) = \int_0^1 \exp\big( (1-u) r + u s \big) \, du = \begin{cases} \dfrac{\exp(s) - \exp(r)}{s - r} & \text{if } r \ne s, \\[1ex] \exp(r) & \text{if } r = s. \end{cases}$$
This is just the function introduced by Dümbgen, Hüsler and Rufibach (2007). Let us recall some properties and formulae for the corresponding partial derivatives
$$J_{a,b}(r, s) := \frac{\partial^{a+b}}{\partial r^a \, \partial s^b} J(r, s) = \int_0^1 (1-u)^a u^b \exp\big( (1-u) r + u s \big) \, du.$$
Note first that $J_{a,b}(r, s) = J_{b,a}(s, r) = \exp(r) \, J_{a,b}(0, s - r)$. Thus it suffices to derive formulae for $(r, s) = (0, y)$ and $b \le a$. It follows from (4) that
$$J_{a,0}(0, y) = \int_0^1 (1-u)^a \sum_{k=0}^\infty \frac{u^k}{k!} y^k \, du = \sum_{k=0}^\infty \frac{y^k}{k!} \int_0^1 (1-u)^a u^k \, du = \sum_{k=0}^\infty \frac{a!}{(k+a+1)!} y^k = \frac{a!}{y^{a+1}} \Big( \exp(y) - \sum_{\ell=0}^a \frac{y^\ell}{\ell!} \Big).$$
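The closed form and the series for $J_{a,0}(0, y)$ complement each other numerically: the closed form suffers catastrophic cancellation as $y \to 0$, while the truncated series converges fast precisely there. A minimal sketch, with function name, threshold, and truncation length chosen by us:

```python
import math

def J_a0(a, y, tol=0.1, nterms=20):
    """J_{a,0}(0, y) = integral_0^1 (1-u)^a exp(u*y) du.

    Closed form a!/y^(a+1) * (exp(y) - sum_{l<=a} y^l/l!) for |y| not
    too small; otherwise the series sum_k a! y^k / (k+a+1)!.
    """
    if abs(y) > tol:
        partial = sum(y ** l / math.factorial(l) for l in range(a + 1))
        return math.factorial(a) / y ** (a + 1) * (math.exp(y) - partial)
    return sum(math.factorial(a) * y ** k / math.factorial(k + a + 1)
               for k in range(nterms))
```

Both branches should agree to near machine precision on the overlap of their comfortable ranges, which is a useful self-test of any implementation.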
In particular, as $y \to 0$,
$$J_{1,0}(0, y) = \frac{\exp(y) - 1 - y}{y^2} = \frac12 + \frac{y}{6} + \frac{y^2}{24} + \frac{y^3}{120} + O(y^4),$$
$$J_{2,0}(0, y) = \frac{2 \big( \exp(y) - 1 - y - y^2/2 \big)}{y^3} = \frac13 + \frac{y}{12} + \frac{y^2}{60} + \frac{y^3}{360} + O(y^4),$$
$$J_{3,0}(0, y) = \frac{6 \big( \exp(y) - 1 - y - y^2/2 - y^3/6 \big)}{y^4} = \frac14 + \frac{y}{20} + \frac{y^2}{120} + \frac{y^3}{840} + O(y^4),$$
$$J_{4,0}(0, y) = \frac{24 \big( \exp(y) - 1 - y - y^2/2 - y^3/6 - y^4/24 \big)}{y^5} = \frac15 + \frac{y}{30} + \frac{y^2}{210} + \frac{y^3}{1680} + O(y^4).$$
Another general observation is that
$$J_{a,b}(r, s) = \int_0^1 (1-u)^a \big( 1 - (1-u) \big)^b \exp\big( (1-u) r + u s \big) \, du = \sum_{i=0}^b \binom{b}{i} (-1)^i J_{a+i,0}(r, s).$$
In particular,
$$J_{a,1}(r, s) = J_{a,0}(r, s) - J_{a+1,0}(r, s), \qquad J_{a,2}(r, s) = J_{a,0}(r, s) - 2 J_{a+1,0}(r, s) + J_{a+2,0}(r, s).$$
On the other hand,
$$J_{a,b}(0, y) = \sum_{k=0}^\infty \frac{y^k}{k!} \int_0^1 (1-u)^a u^{k+b} \, du = \sum_{k=0}^\infty \frac{a! \, [k+b]_b}{(k+a+b+1)!} \, y^k$$
with $[r]_0 := 1$ and $[r]_m := \prod_{i=0}^{m-1} (r - i)$ for integers $m > 0$. In particular,
$$J_{1,1}(0, y) = \frac{\exp(y)(y - 2) + 2 + y}{y^3} = \frac16 + \frac{y}{12} + \frac{y^2}{40} + \frac{y^3}{180} + O(y^4) \quad (y \to 0).$$

5.3 The case d = 2

Our recursion formula (3) yields
$$J(r, s, t) = \begin{cases} \dfrac{J(s, t) - J(r, t)}{s - r} & \text{if } r \ne s, \\[1ex] J_{1,0}(r, t) & \text{if } r = s. \end{cases}$$
Because of $J$'s symmetry we may rewrite this in terms of the order statistics $y_{(0)} \le y_{(1)} \le y_{(2)}$ of $(y_i)_{i=0}^2$ as
$$J(r, s, t) = \begin{cases} \dfrac{J(y_{(1)}, y_{(2)}) - J(y_{(0)}, y_{(1)})}{y_{(2)} - y_{(0)}} & \text{if } y_{(0)} < y_{(2)}, \\[1.5ex] \dfrac{\exp(y_{(0)})}{2} & \text{if } y_{(0)} = y_{(2)}. \end{cases}$$
For fixed third argument $t$, the function $J(r, s, t)$ corresponds to $h(r, s)$ in Section 5.1 with $f(x) := J(x, t)$. Thus
$$\frac{\partial J(r, s, t)}{\partial r} = \begin{cases} \dfrac{J(s, t) - J(r, t) - J_{1,0}(r, t)(s - r)}{(s - r)^2} & \text{if } r \ne s, \\[1.5ex] \dfrac{J_{2,0}(r, t)}{2} + \dfrac{J_{3,0}(r, t)}{6} (s - r) + O\big( (s-r)^2 \big) & \text{as } s \to r. \end{cases}$$
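The closed form for $J_{1,1}(0, y)$ above can be checked against direct numerical quadrature of $\int_0^1 (1-u) u \exp(uy) \, du$. This is only an illustrative sketch; the composite midpoint rule and the names below are our own choices:

```python
import math

def J11_closed(y):
    # J_{1,1}(0, y) = (exp(y)*(y - 2) + 2 + y) / y^3  for y != 0
    return (math.exp(y) * (y - 2.0) + 2.0 + y) / y ** 3

def J11_quad(y, n=10_000):
    # composite midpoint rule for integral_0^1 (1-u) * u * exp(u*y) du
    h = 1.0 / n
    return h * sum((1.0 - (i + 0.5) * h) * (i + 0.5) * h
                   * math.exp((i + 0.5) * h * y)
                   for i in range(n))
```

For small $y$ the closed form loses accuracy through cancellation, which is exactly why the series coefficients $1/6 + y/12 + y^2/40 + \dots$ are recorded above.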
Moreover,
$$\frac{\partial^2 J(r, s, t)}{\partial r^2} = \begin{cases} \dfrac{2 \big( J(s, t) - J(r, t) - J_{1,0}(r, t)(s - r) \big) - (s - r)^2 J_{2,0}(r, t)}{(s - r)^3} & \text{if } r \ne s, \\[1.5ex] \dfrac{J_{3,0}(r, t)}{3} + \dfrac{J_{4,0}(r, t)}{12} (s - r) + O\big( (s-r)^2 \big) & \text{as } s \to r, \end{cases}$$
$$\frac{\partial^2 J(r, s, t)}{\partial r \, \partial s} = \begin{cases} \dfrac{\big( J_{1,0}(r, t) + J_{1,0}(s, t) \big)(s - r) - 2 \big( J(s, t) - J(r, t) \big)}{(s - r)^3} & \text{if } r \ne s, \\[1.5ex] \dfrac{J_{3,0}(r, t)}{6} + \dfrac{J_{4,0}(r, t)}{12} (s - r) + O\big( (s-r)^2 \big) & \text{as } s \to r. \end{cases}$$

6 Gamma and multivariate beta (Dirichlet) distributions

Let $G_0, G_1, \dots, G_m$ be stochastically independent random variables with $G_i \sim \mathrm{Gamma}(a_i)$ for certain parameters $a_i > 0$. That means, for any Borel set $A \subset (0, \infty)$,
$$\mathbb P(G_i \in A) = \int_A \Gamma(a_i)^{-1} y^{a_i - 1} \exp(-y) \, dy.$$
Now we define $a_+ := \sum_{i=0}^m a_i$, $G_+ := \sum_{i=0}^m G_i$ and
$$\tilde B := (G_i / G_+)_{i=0}^m, \qquad B := (G_i / G_+)_{i=1}^m.$$
Note that $\tilde B$ is contained in the unit simplex in $\mathbb R^{m+1}$, while $B$ is contained in the open set $T_m = \{ u \in (0,1)^m : u_+ < 1 \}$ with $u_+ := \sum_{i=1}^m u_i$. We also define $u_0 := 1 - u_+$ for any $u \in T_m$.

Lemma 6.1. The random vector $B$ and the random variable $G_+$ are stochastically independent. Moreover, $G_+ \sim \mathrm{Gamma}(a_+)$ while $B$ is distributed according to the Lebesgue density
$$f(u) := \frac{\Gamma(a_+)}{\prod_{i=0}^m \Gamma(a_i)} \prod_{i=0}^m u_i^{a_i - 1}$$
on $T_m$. For arbitrary numbers $k_0, k_1, \dots, k_m \ge 0$ and $k_+ := \sum_{i=0}^m k_i$,
$$\mathbb E\Big( \prod_{i=0}^m B_i^{k_i} \Big) = \frac{\Gamma(a_+)}{\Gamma(a_+ + k_+)} \prod_{i=0}^m \frac{\Gamma(a_i + k_i)}{\Gamma(a_i)}.$$

As a by-product of this lemma we obtain the following formula:

Corollary 6.2. For arbitrary numbers $a_0, a_1, \dots, a_m > 0$,
$$\int_{T_m} \prod_{i=0}^m u_i^{a_i - 1} \, du = \Gamma(a_+)^{-1} \prod_{i=0}^m \Gamma(a_i).$$

Proof of Lemma 6.1. Note that $G = (G_i)_{i=0}^m$ may be written as $\Xi(G_+, B)$ with the bijective mapping
$$\Xi : (0, \infty) \times T_m \to (0, \infty)^{m+1}, \qquad \Xi(s, u) := (s u_i)_{i=0}^m.$$
Note also that
$$\det D\Xi(s, u) = \det \begin{pmatrix} u_0 & -s & -s & \cdots & -s \\ u_1 & s & 0 & \cdots & 0 \\ u_2 & 0 & s & \ddots & \vdots \\ \vdots & \vdots & \ddots & \ddots & 0 \\ u_m & 0 & \cdots & 0 & s \end{pmatrix} = \det \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 \\ u_1 & s & 0 & \cdots & 0 \\ u_2 & 0 & s & \ddots & \vdots \\ \vdots & \vdots & \ddots & \ddots & 0 \\ u_m & 0 & \cdots & 0 & s \end{pmatrix} = s^m,$$
where the second equality follows from adding all other rows to the first row, since $u_0 + u_1 + \dots + u_m = 1$. Thus the distribution of $(G_+, B)$ has a Lebesgue density $h$ on $(0, \infty) \times T_m$ which is given by
$$\begin{aligned}
h(s, u) &= \prod_{i=0}^m \Big( \Gamma(a_i)^{-1} \Xi(s, u)_i^{a_i - 1} \exp\big( -\Xi(s, u)_i \big) \Big) \cdot \big| \det D\Xi(s, u) \big| \\
&= \prod_{i=0}^m \Big( \Gamma(a_i)^{-1} (s u_i)^{a_i - 1} \exp(-s u_i) \Big) \cdot s^m \\
&= s^{a_+ - 1} \exp(-s) \prod_{i=0}^m \Big( \Gamma(a_i)^{-1} u_i^{a_i - 1} \Big) \\
&= \Gamma(a_+)^{-1} s^{a_+ - 1} \exp(-s) \cdot f(u).
\end{aligned}$$
Since this is the density of $\mathrm{Gamma}(a_+)$ at $s$ times $f(u)$, we see that $G_+$ and $B$ are stochastically independent, where $G_+$ has distribution $\mathrm{Gamma}(a_+)$, and that $f$ is indeed a probability density on $T_m$ describing the distribution of $B$.

The fact that $f$ integrates to one over $T_m$ entails Corollary 6.2. But then we can conclude that
$$\mathbb E\Big( \prod_{i=0}^m B_i^{k_i} \Big) = \int_{T_m} \prod_{i=0}^m u_i^{a_i + k_i - 1} \, du \Big/ \int_{T_m} \prod_{i=0}^m u_i^{a_i - 1} \, du = \frac{\Gamma(a_+)}{\Gamma(a_+ + k_+)} \prod_{i=0}^m \frac{\Gamma(a_i + k_i)}{\Gamma(a_i)}.$$

References

[1] M. L. Cule, R. B. Gramacy, and R. J. Samworth (2007). LogConcDEAD: An R package for log-concave density estimation in arbitrary dimensions. Available from http://cran.r-project.org/.

[2] M. L. Cule, R. J. Samworth and M. I. Stewart (2008). Maximum likelihood estimation of a multidimensional log-concave density. Preprint.

[3] L. Dümbgen, A. Hüsler and K. Rufibach (2007). Active set and EM algorithms for log-concave densities based on complete and censored data. Technical Report 61, IMSV, University of Bern (arXiv:0707.4643).
