On Estimation and Optimization of Mean Values of Bounded Variables

In this paper, we develop a general approach for probabilistic estimation and optimization. An explicit formula and a computational approach are established for controlling the reliability of probabilistic estimation based on a mixed criterion of absolute and relative errors. By employing the Chernoff-Hoeffding bound and the concept of sampling, the minimization of a probabilistic function is transformed into an optimization problem amenable to gradient descent algorithms.

Authors: Xinjia Chen

First submitted in February 2008.

The author is currently with the Department of Electrical Engineering, Louisiana State University at Baton Rouge, LA 70803, USA, and the Department of Electrical Engineering, Southern University and A&M College, Baton Rouge, LA 70813, USA. Email: chenxinjia@gmail.com

1  Analytical Sample Size Formula for Estimation of Mean Values

Let $X$ be a random variable bounded in the interval $[0, 1]$ with mean $\mathbb{E}[X] = \mu \in (0, 1)$, defined on a probability space $(\Omega, \mathscr{F}, \Pr)$. In many areas of science and engineering, it is desired to estimate $\mu$ based on samples $X_1, X_2, \cdots, X_n$ of $X$. Frequently, the samples $X_1, X_2, \cdots, X_n$ may not be independent and identically distributed (i.i.d.). Thus, it is a significant problem to estimate $\mu$ under the assumptions that

$$0 \leq X_k \leq 1 \quad \text{almost surely for any positive integer } k, \qquad (1)$$

$$\mathbb{E}[X_k \mid \mathscr{F}_{k-1}] = \mu \quad \text{almost surely for any positive integer } k, \qquad (2)$$

where $\{\mathscr{F}_k, \ k = 0, 1, \cdots, \infty\}$ is a sequence of $\sigma$-subalgebras such that $\{\emptyset, \Omega\} = \mathscr{F}_0 \subset \mathscr{F}_1 \subset \mathscr{F}_2 \subset \cdots \subset \mathscr{F}$, with $\mathscr{F}_k$ generated by $X_1, \cdots, X_k$. Naturally, an estimator for $\mu$ is taken as

$$\widehat{\mu} = \frac{\sum_{i=1}^n X_i}{n}. \qquad (3)$$

Since $\widehat{\mu}$ is of random nature, it is crucial to control the statistical error. For this purpose, we have established the following result.

Theorem 1  Let $\delta \in (0, 1)$.
Let $\varepsilon_a \in (0, 1)$ and $\varepsilon_r \in (0, 1)$ be real numbers such that $\frac{\varepsilon_a}{\varepsilon_r} + \varepsilon_a \leq \frac{1}{2}$. Assume that (1) and (2) are true. Then,

$$\Pr\left\{ |\widehat{\mu} - \mu| < \varepsilon_a \ \text{ or } \ \left|\frac{\widehat{\mu} - \mu}{\mu}\right| < \varepsilon_r \right\} > 1 - \delta \qquad (4)$$

for any $\mu \in (0, 1)$, provided that

$$n > \frac{\varepsilon_r \ln\frac{2}{\delta}}{(\varepsilon_a + \varepsilon_a \varepsilon_r)\ln(1 + \varepsilon_r) + (\varepsilon_r - \varepsilon_a - \varepsilon_a \varepsilon_r)\ln\left(1 - \frac{\varepsilon_a \varepsilon_r}{\varepsilon_r - \varepsilon_a}\right)}. \qquad (5)$$

It should be noted that conventional methods for determining sample sizes are based on normal approximation; see [4] and the references therein. In contrast, Theorem 1 offers a rigorous method for determining sample sizes. In the special case that $X$ is a Bernoulli random variable, a numerical approach has been developed by Chen [2] which permits exact computation of the minimum sample size.

2  A Computational Approach for the General Case

In this section, we shall investigate an exact computational sample size method for the case that $X \in [a, b]$ with $\mathbb{E}[X] = \mu$. Assume that

$$a \leq X_k \leq b \quad \text{almost surely for any positive integer } k, \qquad (6)$$

$$\mathbb{E}[X_k \mid \mathscr{F}_{k-1}] = \mu \quad \text{almost surely for any positive integer } k, \qquad (7)$$

where $\{\mathscr{F}_k, \ k = 0, 1, \cdots, \infty\}$ is a sequence of $\sigma$-subalgebras such that $\{\emptyset, \Omega\} = \mathscr{F}_0 \subset \mathscr{F}_1 \subset \mathscr{F}_2 \subset \cdots \subset \mathscr{F}$, with $\mathscr{F}_k$ generated by $X_1, \cdots, X_k$. We wish to determine the minimum sample size $n$ such that

$$\Pr\{ |\widehat{\mu} - \mu| < \varepsilon_a \ \text{ or } \ |\widehat{\mu} - \mu| < \varepsilon_r |\mu| \} > 1 - \delta \qquad (8)$$

for any $\mu \in [a, b]$, where $\widehat{\mu}$ is defined by (3). Unlike the special case that $X$ is bounded in the interval $[0, 1]$, there is no explicit formula for the general case that $X$ is bounded in the interval $[a, b]$. We will employ the branch and bound technique of global optimization. For this purpose, we need to derive a sample size formula and the associated bounding method.
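As an illustration, the sample size bound (5) is straightforward to evaluate numerically. The following sketch (the function name and test values are our own choices, not from the paper) computes the smallest integer $n$ satisfying (5):

```python
import math

def sample_size(delta, eps_a, eps_r):
    """Smallest integer n satisfying inequality (5) of Theorem 1."""
    # Theorem 1 requires eps_a/eps_r + eps_a <= 1/2, which also
    # guarantees eps_r - eps_a - eps_a*eps_r > 0 below.
    if eps_a / eps_r + eps_a > 0.5:
        raise ValueError("require eps_a/eps_r + eps_a <= 1/2")
    denom = ((eps_a + eps_a * eps_r) * math.log(1 + eps_r)
             + (eps_r - eps_a - eps_a * eps_r)
             * math.log(1 - eps_a * eps_r / (eps_r - eps_a)))
    # n must strictly exceed the right-hand side of (5)
    return math.floor(eps_r * math.log(2 / delta) / denom) + 1
```

For instance, `sample_size(0.05, 0.05, 0.2)` returns 577: with 577 samples, the estimate is within absolute error 0.05 or relative error 20% of $\mu$ with probability above 0.95.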
To describe the relevant theory for computing sample sizes, define the function

$$M(z, \theta) = \begin{cases} z \ln\frac{\theta}{z} + (1 - z)\ln\frac{1 - \theta}{1 - z} & \text{for } z \in (0, 1) \text{ and } \theta \in (0, 1), \\ \ln(1 - \theta) & \text{for } z = 0 \text{ and } \theta \in (0, 1), \\ \ln\theta & \text{for } z = 1 \text{ and } \theta \in (0, 1), \\ -\infty & \text{for } z \in [0, 1] \text{ and } \theta \notin (0, 1). \end{cases}$$

Define

$$\vartheta(\mu) = \frac{\mu - a}{b - a}, \qquad g(\mu) = \vartheta(\mu) - \frac{\max\{\varepsilon_a, \varepsilon_r |\mu|\}}{b - a}, \qquad h(\mu) = \vartheta(\mu) + \frac{\max\{\varepsilon_a, \varepsilon_r |\mu|\}}{b - a},$$

$$W(\mu) = \max\{ M(g(\mu), \vartheta(\mu)), \ M(h(\mu), \vartheta(\mu)) \}$$

for $\mu \in [a, b]$. By virtue of these functions, we have established the following theoretical results, which are essential for the exact computation of sample sizes.

Theorem 2  Assume that (6) and (7) are satisfied. Then, (8) holds for any $\mu \in [a, b]$ provided that

$$n \geq \frac{\ln\frac{\delta}{2}}{\max_{\nu \in [a, b]} W(\nu)}. \qquad (9)$$

Moreover,

$$W(\nu) \leq \max\{ M(g(d), \vartheta(c)), \ M(h(c), \vartheta(d)) \}, \qquad (10)$$

$$W(\nu) \geq \max\{ M(g(c), \vartheta(d)), \ M(h(d), \vartheta(c)) \} \qquad (11)$$

for $\nu \in [c, d] \subseteq [a, b]$ such that $g(d) \leq \vartheta(c) \leq \vartheta(d) \leq h(c)$.

See Section 5 for a proof. Since (10) and (11) of Theorem 2 provide computable upper and lower bounds of $W(\nu)$, the maximum of $W(\nu)$ over $[a, b]$ can be exactly computed with the branch and bound method proposed by Land and Doig [6].

3  Optimization of Probability

In many applications, it is desirable to find a vector of real numbers $\theta$ to minimize a probability $p(\theta)$ which can be expressed as

$$p(\theta) = \Pr\{ Y(\theta, \Delta) \leq 0 \},$$

where $Y(\theta, \Delta)$ is piecewise continuous with respect to $\theta$ and $\Delta$ is a random vector. If we define

$$\mu(\lambda, \theta) = \mathbb{E}\left[ e^{-\lambda Y(\theta, \Delta)} \right],$$

then, applying the Chernoff bound [3], we have

$$p(\theta) \leq \inf_{\lambda > 0} \mu(\lambda, \theta).$$

This indicates that we can make $p(\theta)$ small by making $\mu(\lambda, \theta)$ small. Hence, we shall attempt to minimize $\mu(\lambda, \theta)$ with respect to $\lambda > 0$ and $\theta$.
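To make the branch and bound computation concrete, the following sketch (our own illustrative implementation, not the author's code) maximizes $W$ over $[a, b]$ using the interval bound (10), seeding with subintervals of width at most $\varepsilon_a$ so that the condition $g(d) \leq \vartheta(c) \leq \vartheta(d) \leq h(c)$ holds; values of $z$ outside $[0, 1]$ are assigned $M = -\infty$, an assumption extending the paper's definition that corresponds to a probability-zero tail:

```python
import math

def M(z, t):
    """M(z, theta) as defined above; z outside [0, 1] maps to -inf
    (an extension: such tail events have probability zero)."""
    if not 0 < t < 1 or not 0 <= z <= 1:
        return -math.inf
    if z == 0:
        return math.log(1 - t)
    if z == 1:
        return math.log(t)
    return z * math.log(t / z) + (1 - z) * math.log((1 - t) / (1 - z))

def max_W(a, b, eps_a, eps_r, tol=1e-6):
    """Branch and bound for the maximum of W over [a, b]."""
    span = b - a
    theta = lambda mu: (mu - a) / span
    g = lambda mu: theta(mu) - max(eps_a, eps_r * abs(mu)) / span
    h = lambda mu: theta(mu) + max(eps_a, eps_r * abs(mu)) / span
    W = lambda mu: max(M(g(mu), theta(mu)), M(h(mu), theta(mu)))
    # seed with widths <= eps_a so that g(d) <= theta(c) <= theta(d) <= h(c)
    k = max(1, math.ceil(span / eps_a))
    stack = [(a + span * i / k, a + span * (i + 1) / k) for i in range(k)]
    best = max(W((c + d) / 2) for c, d in stack)  # lower bound on the max
    steps = 0
    while stack and steps < 10**6:  # guard against pathological inputs
        steps += 1
        c, d = stack.pop()
        upper = max(M(g(d), theta(c)), M(h(c), theta(d)))  # bound (10)
        if upper <= best + tol:
            continue  # this subinterval cannot beat the current best
        m = (c + d) / 2
        best = max(best, W(m))  # improve the lower bound, then split
        stack += [(c, m), (m, d)]
    return best  # within tol of the true maximum over [a, b]
```

The sample size of Theorem 2 is then `math.ceil(math.log(delta / 2) / max_W(a, b, eps_a, eps_r))`; on $[a, b] = [0, 1]$ this reproduces the sample size of Theorem 1, a useful consistency check.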
To make the new objective function $\mu(\lambda, \theta)$ more tractable, we take a sampling approach. Specifically, we obtain $n$ i.i.d. samples $\Delta_1, \cdots, \Delta_n$ of $\Delta$ and approximate $\mu(\lambda, \theta)$ as

$$g(\lambda, \theta) = \frac{\sum_{i=1}^n e^{-\lambda Y(\theta, \Delta_i)}}{n}.$$

A critical step is the determination of the sample size $n$ so that $g(\lambda, \theta)$ is sufficiently close to $\mu(\lambda, \theta)$. Since $0 < e^{-\lambda Y(\theta, \Delta)} < 1$, an appropriate value of $n$ can be computed based on (5) of Theorem 1. Finally, we have transformed the problem of minimizing the probability function $p(\theta)$ into the problem of minimizing a piecewise continuous function $g(\lambda, \theta)$. Since $g(\lambda, \theta)$ is a smoother function, we can bring all the power of nonlinear programming to bear on the problem. An extremely useful tool is the gradient descent algorithm; see, e.g., [1] and the references therein.

4  Proof of Theorem 1

To prove the theorem, we shall introduce the function

$$\psi(\varepsilon, \mu) = (\mu + \varepsilon)\ln\frac{\mu}{\mu + \varepsilon} + (1 - \mu - \varepsilon)\ln\frac{1 - \mu}{1 - \mu - \varepsilon},$$

where $0 < \varepsilon < 1 - \mu$. We need some preliminary results. The following lemma is due to Hoeffding [5].

Lemma 1  Assume that (1) and (2) hold for any positive integer $k$. Then,

$$\Pr\{\widehat{\mu} \geq \mu + \varepsilon\} \leq \exp(n \psi(\varepsilon, \mu)) \quad \text{for } 0 < \varepsilon < 1 - \mu < 1,$$

$$\Pr\{\widehat{\mu} \leq \mu - \varepsilon\} \leq \exp(n \psi(-\varepsilon, \mu)) \quad \text{for } 0 < \varepsilon < \mu < 1.$$

Lemma 2  Let $0 < \varepsilon < \frac{1}{2}$. Then, $\psi(\varepsilon, \mu)$ is monotonically increasing with respect to $\mu \in (0, \frac{1}{2} - \varepsilon)$ and monotonically decreasing with respect to $\mu \in (\frac{1}{2}, 1 - \varepsilon)$. Similarly, $\psi(-\varepsilon, \mu)$ is monotonically increasing with respect to $\mu \in (\varepsilon, \frac{1}{2})$ and monotonically decreasing with respect to $\mu \in (\frac{1}{2} + \varepsilon, 1)$.

Proof.  Tedious computation shows that

$$\frac{\partial \psi(\varepsilon, \mu)}{\partial \mu} = \ln\frac{\mu(1 - \mu - \varepsilon)}{(\mu + \varepsilon)(1 - \mu)} + \frac{\varepsilon}{\mu} + \frac{\varepsilon}{1 - \mu}$$

and

$$\frac{\partial^2 \psi(\varepsilon, \mu)}{\partial \mu^2} = -\frac{\varepsilon^2}{\mu^2 (\mu + \varepsilon)} - \frac{\varepsilon^2}{(1 - \mu)^2 (1 - \mu - \varepsilon)} < 0$$

for $0 < \varepsilon < 1 - \mu < 1$.
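The sampling-and-descent recipe above can be illustrated on a toy problem (the model, distributions, and step sizes below are our own assumptions for illustration, not from the paper). Let $Y(\theta, \Delta) = (1 - \theta)\Delta_1 + \theta\Delta_2$ be the return of a mix of two independent assets and $p(\theta) = \Pr\{Y \leq 0\}$ the shortfall probability; gradient descent on the empirical surrogate $g(\lambda, \theta)$ recovers the variance-minimizing mix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (illustrative assumption): two independent assets with
# mean 1 and variances 1 and 2; Y(theta, Delta) is the mixed return.
n = 20000
D1 = rng.normal(1.0, 1.0, n)
D2 = rng.normal(1.0, np.sqrt(2.0), n)

def Y(theta):
    return (1 - theta) * D1 + theta * D2

def g(lam, theta):
    # empirical surrogate for mu(lambda, theta) = E[exp(-lambda * Y)]
    return np.mean(np.exp(-lam * Y(theta)))

# gradient descent on (lambda, theta), keeping lambda positive
lam, theta, lr = 1.0, 0.9, 0.05
for _ in range(2000):
    e = np.exp(-lam * Y(theta))
    d_lam = np.mean(-Y(theta) * e)           # dg/dlambda
    d_theta = np.mean(-lam * (D2 - D1) * e)  # dg/dtheta
    lam = max(lam - lr * d_lam, 1e-6)
    theta -= lr * d_theta
```

For this model the exact minimizer is $\theta = 1/3$ (the variance-minimizing mix) with $\lambda = 3/2$, giving the Chernoff bound $e^{-3/4} \approx 0.47$ on $p(\theta)$; the sampled surrogate lands close to these values, and the sample size $n$ needed for a prescribed accuracy of $g$ follows from (5) of Theorem 1.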
Note that

$$\frac{\partial \psi(\varepsilon, \mu)}{\partial \mu}\Big|_{\mu = \frac{1}{2}} = \ln\frac{1 - 2\varepsilon}{1 + 2\varepsilon} + 4\varepsilon < 0$$

because this expression vanishes at $\varepsilon = 0$ and

$$\frac{d\left[\ln\frac{1 - 2\varepsilon}{1 + 2\varepsilon} + 4\varepsilon\right]}{d\varepsilon} = -\frac{4}{1 - 4\varepsilon^2} + 4 = -\frac{16\varepsilon^2}{1 - 4\varepsilon^2} < 0.$$

Moreover,

$$\frac{\partial \psi(\varepsilon, \mu)}{\partial \mu}\Big|_{\mu = \frac{1}{2} - \varepsilon} = \ln\frac{1 - 2\varepsilon}{1 + 2\varepsilon} + \frac{4\varepsilon}{1 - 4\varepsilon^2} > 0$$

because this expression vanishes at $\varepsilon = 0$ and

$$\frac{d\left[\ln\frac{1 - 2\varepsilon}{1 + 2\varepsilon} + \frac{4\varepsilon}{1 - 4\varepsilon^2}\right]}{d\varepsilon} = \frac{32\varepsilon^2}{(1 - 4\varepsilon^2)^2} > 0.$$

Similarly,

$$\frac{\partial \psi(-\varepsilon, \mu)}{\partial \mu} = \ln\frac{\mu(1 - \mu + \varepsilon)}{(\mu - \varepsilon)(1 - \mu)} - \frac{\varepsilon}{\mu} - \frac{\varepsilon}{1 - \mu}$$

and

$$\frac{\partial^2 \psi(-\varepsilon, \mu)}{\partial \mu^2} = -\frac{\varepsilon^2}{\mu^2 (\mu - \varepsilon)} - \frac{\varepsilon^2}{(1 - \mu)^2 (1 - \mu + \varepsilon)} < 0$$

for $0 < \varepsilon < \mu < 1$. Hence,

$$\frac{\partial \psi(-\varepsilon, \mu)}{\partial \mu}\Big|_{\mu = \frac{1}{2}} = \ln\frac{1 + 2\varepsilon}{1 - 2\varepsilon} - 4\varepsilon > 0$$

because this expression vanishes at $\varepsilon = 0$ and

$$\frac{d\left[\ln\frac{1 + 2\varepsilon}{1 - 2\varepsilon} - 4\varepsilon\right]}{d\varepsilon} = \frac{4}{1 - 4\varepsilon^2} - 4 = \frac{16\varepsilon^2}{1 - 4\varepsilon^2} > 0;$$

and

$$\frac{\partial \psi(-\varepsilon, \mu)}{\partial \mu}\Big|_{\mu = \frac{1}{2} + \varepsilon} = \ln\frac{1 + 2\varepsilon}{1 - 2\varepsilon} - \frac{4\varepsilon}{1 - 4\varepsilon^2} < 0$$

as a result of the fact that this expression vanishes at $\varepsilon = 0$ and

$$\frac{d\left[\ln\frac{1 + 2\varepsilon}{1 - 2\varepsilon} - \frac{4\varepsilon}{1 - 4\varepsilon^2}\right]}{d\varepsilon} = -\frac{32\varepsilon^2}{(1 - 4\varepsilon^2)^2} < 0.$$

Since $\frac{\partial \psi(\varepsilon, \mu)}{\partial \mu}|_{\mu = \frac{1}{2}} < 0$, $\frac{\partial \psi(\varepsilon, \mu)}{\partial \mu}|_{\mu = \frac{1}{2} - \varepsilon} > 0$ and $\psi(\varepsilon, \mu)$ is concave with respect to $\mu$, it must be true that $\psi(\varepsilon, \mu)$ is monotonically increasing with respect to $\mu \in (0, \frac{1}{2} - \varepsilon)$ and monotonically decreasing with respect to $\mu \in (\frac{1}{2}, 1 - \varepsilon)$. Since $\frac{\partial \psi(-\varepsilon, \mu)}{\partial \mu}|_{\mu = \frac{1}{2}} > 0$, $\frac{\partial \psi(-\varepsilon, \mu)}{\partial \mu}|_{\mu = \frac{1}{2} + \varepsilon} < 0$ and $\psi(-\varepsilon, \mu)$ is concave with respect to $\mu$, it must be true that $\psi(-\varepsilon, \mu)$ is monotonically increasing with respect to $\mu \in (\varepsilon, \frac{1}{2})$ and monotonically decreasing with respect to $\mu \in (\frac{1}{2} + \varepsilon, 1)$. □

Lemma 3  Let $0 < \varepsilon < \frac{1}{2}$. Then,

$$\psi(\varepsilon, \mu) > \psi(-\varepsilon, \mu) \quad \forall \mu \in \left(\varepsilon, \tfrac{1}{2}\right), \qquad \psi(\varepsilon, \mu) < \psi(-\varepsilon, \mu) \quad \forall \mu \in \left(\tfrac{1}{2}, 1 - \varepsilon\right).$$

Proof.  It can be shown that

$$\frac{\partial[\psi(\varepsilon, \mu) - \psi(-\varepsilon, \mu)]}{\partial \varepsilon} = \ln\left[1 + \frac{\varepsilon^2 (1 - 2\mu)}{(\mu^2 - \varepsilon^2)(1 - \mu)^2}\right]$$

for $0 < \varepsilon < \min(\mu, 1 - \mu)$. Note that

$$\frac{\varepsilon^2 (1 - 2\mu)}{(\mu^2 - \varepsilon^2)(1 - \mu)^2} > 0 \quad \text{for } \varepsilon < \mu < \frac{1}{2}$$

and

$$\frac{\varepsilon^2 (1 - 2\mu)}{(\mu^2 - \varepsilon^2)(1 - \mu)^2} < 0 \quad \text{for } \varepsilon < \frac{1}{2} < \mu < 1 - \varepsilon.$$

Therefore,

$$\frac{\partial[\psi(\varepsilon, \mu) - \psi(-\varepsilon, \mu)]}{\partial \varepsilon} > 0 \quad \text{for } \varepsilon < \mu < \frac{1}{2}$$

and

$$\frac{\partial[\psi(\varepsilon, \mu) - \psi(-\varepsilon, \mu)]}{\partial \varepsilon} < 0 \quad \text{for } \varepsilon < \frac{1}{2} < \mu < 1 - \varepsilon.$$
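The monotonicity conclusions of Lemma 2, and the comparison asserted in Lemma 3, can be spot-checked numerically. A quick sketch (the grid resolution and the value $\varepsilon = 0.1$ are arbitrary choices):

```python
import math

def psi(eps, mu):
    # psi(eps, mu) as defined at the beginning of this section
    return ((mu + eps) * math.log(mu / (mu + eps))
            + (1 - mu - eps) * math.log((1 - mu) / (1 - mu - eps)))

eps = 0.1
grid = [i / 1000 for i in range(1, 1000)]
# Lemma 2: psi(eps, .) increases on (0, 1/2 - eps), decreases on (1/2, 1 - eps)
inc = [m for m in grid if m < 0.5 - eps]
dec = [m for m in grid if 0.5 < m < 1 - eps]
assert all(psi(eps, x) < psi(eps, y) for x, y in zip(inc, inc[1:]))
assert all(psi(eps, x) > psi(eps, y) for x, y in zip(dec, dec[1:]))
# Lemma 3: psi(eps, .) > psi(-eps, .) on (eps, 1/2)
mid = [m for m in grid if eps < m < 0.5]
assert all(psi(eps, m) > psi(-eps, m) for m in mid)
```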
So, we can complete the proof of the lemma by observing the sign of the partial derivative $\frac{\partial[\psi(\varepsilon, \mu) - \psi(-\varepsilon, \mu)]}{\partial \varepsilon}$ and the fact that $\psi(\varepsilon, \mu) - \psi(-\varepsilon, \mu) = 0$ for $\varepsilon = 0$. □

Lemma 4  Let $0 < \varepsilon < 1$. Then, $\psi(\varepsilon\mu, \mu)$ is monotonically decreasing with respect to $\mu \in \left(0, \frac{1}{1 + \varepsilon}\right)$. Similarly, $\psi(-\varepsilon\mu, \mu)$ is monotonically decreasing with respect to $\mu \in (0, 1)$.

Proof.  Note that

$$\frac{\partial \psi(\varepsilon\mu, \mu)}{\partial \mu} = (1 + \varepsilon)\ln\frac{1 - (1 + \varepsilon)\mu}{1 - \mu} - (1 + \varepsilon)\ln(1 + \varepsilon) + \frac{\varepsilon}{1 - \mu}$$

and

$$\frac{\partial^2 \psi(\varepsilon\mu, \mu)}{\partial \mu^2} = -\frac{\varepsilon^2}{(1 - \mu)^2 [1 - (1 + \varepsilon)\mu]} < 0$$

for any $\mu \in \left(0, \frac{1}{1 + \varepsilon}\right)$. Since $\frac{\partial \psi(\varepsilon\mu, \mu)}{\partial \mu}|_{\mu = 0} = \varepsilon - (1 + \varepsilon)\ln(1 + \varepsilon) < 0$, we have

$$\frac{\partial \psi(\varepsilon\mu, \mu)}{\partial \mu} < 0, \quad \forall \mu \in \left(0, \frac{1}{1 + \varepsilon}\right)$$

and it follows that $\psi(\varepsilon\mu, \mu)$ is monotonically decreasing with respect to $\mu \in \left(0, \frac{1}{1 + \varepsilon}\right)$. Similarly, since

$$\frac{\partial \psi(-\varepsilon\mu, \mu)}{\partial \mu}\Big|_{\mu = 0} = -\varepsilon - (1 - \varepsilon)\ln(1 - \varepsilon) < 0$$

and

$$\frac{\partial^2 \psi(-\varepsilon\mu, \mu)}{\partial \mu^2} = -\frac{\varepsilon^2}{(1 - \mu)^2 [1 - (1 - \varepsilon)\mu]} < 0, \quad \forall \mu \in (0, 1),$$

we have

$$\frac{\partial \psi(-\varepsilon\mu, \mu)}{\partial \mu} < 0, \quad \forall \mu \in (0, 1)$$

and, consequently, $\psi(-\varepsilon\mu, \mu)$ is monotonically decreasing with respect to $\mu \in (0, 1)$. □

Lemma 5  Suppose $0 < \varepsilon_r < 1$ and $0 < \frac{\varepsilon_a}{\varepsilon_r} + \varepsilon_a \leq \frac{1}{2}$. Then,

$$\Pr\{\widehat{\mu} \leq \mu - \varepsilon_a\} \leq \exp\left(n \psi\left(-\varepsilon_a, \frac{\varepsilon_a}{\varepsilon_r}\right)\right) \qquad (12)$$

for $0 < \mu \leq \frac{\varepsilon_a}{\varepsilon_r}$.

Proof.  We shall show (12) by investigating three cases as follows. In the case of $\mu < \varepsilon_a$, it is clear that

$$\Pr\{\widehat{\mu} \leq \mu - \varepsilon_a\} = 0 < \exp\left(n \psi\left(-\varepsilon_a, \frac{\varepsilon_a}{\varepsilon_r}\right)\right).$$

In the case of $\mu = \varepsilon_a$, we have

$$\Pr\{\widehat{\mu} \leq \mu - \varepsilon_a\} = \lim_{\eta \uparrow \varepsilon_a} \Pr\{\widehat{\mu} \leq \mu - \eta\} \leq \lim_{\eta \uparrow \varepsilon_a} \exp(n \psi(-\eta, \mu)) = \exp(n \psi(-\varepsilon_a, \varepsilon_a)) < \exp\left(n \psi\left(-\varepsilon_a, \frac{\varepsilon_a}{\varepsilon_r}\right)\right),$$

where the last inequality follows from Lemma 2 and the fact that $\varepsilon_a < \frac{\varepsilon_a}{\varepsilon_r} \leq \frac{1}{2} - \varepsilon_a$.
In the case of $\varepsilon_a < \mu \leq \frac{\varepsilon_a}{\varepsilon_r}$, we have

$$\Pr\{\widehat{\mu} \leq \mu - \varepsilon_a\} \leq \exp(n \psi(-\varepsilon_a, \mu)) \leq \exp\left(n \psi\left(-\varepsilon_a, \frac{\varepsilon_a}{\varepsilon_r}\right)\right),$$

where the first inequality follows from Lemma 1 and the second inequality follows from Lemma 2 and the fact that $\varepsilon_a < \frac{\varepsilon_a}{\varepsilon_r} \leq \frac{1}{2} - \varepsilon_a$. So, (12) is established. □

Lemma 6  Suppose $0 < \varepsilon_r < 1$ and $0 < \frac{\varepsilon_a}{\varepsilon_r} + \varepsilon_a \leq \frac{1}{2}$. Then,

$$\Pr\{\widehat{\mu} \geq (1 + \varepsilon_r)\mu\} \leq \exp\left(n \psi\left(\varepsilon_a, \frac{\varepsilon_a}{\varepsilon_r}\right)\right) \qquad (13)$$

for $\frac{\varepsilon_a}{\varepsilon_r} < \mu < 1$.

Proof.  We shall show (13) by investigating three cases as follows. In the case of $\mu > \frac{1}{1 + \varepsilon_r}$, it is clear that

$$\Pr\{\widehat{\mu} \geq (1 + \varepsilon_r)\mu\} = 0 < \exp\left(n \psi\left(\varepsilon_a, \frac{\varepsilon_a}{\varepsilon_r}\right)\right).$$

In the case of $\mu = \frac{1}{1 + \varepsilon_r}$, we have

$$\Pr\{\widehat{\mu} \geq (1 + \varepsilon_r)\mu\} = \lim_{\eta \uparrow \varepsilon_r} \Pr\{\widehat{\mu} \geq (1 + \eta)\mu\} \leq \lim_{\eta \uparrow \varepsilon_r} \exp(n \psi(\eta\mu, \mu)) = \exp(n \psi(\varepsilon_r \mu, \mu)) < \exp\left(n \psi\left(\varepsilon_a, \frac{\varepsilon_a}{\varepsilon_r}\right)\right),$$

where the last inequality follows from Lemma 4 and the fact that $\frac{\varepsilon_a}{\varepsilon_r} \leq \frac{1}{2(1 + \varepsilon_r)} < \frac{1}{1 + \varepsilon_r}$ as a result of $0 < \frac{\varepsilon_a}{\varepsilon_r} + \varepsilon_a \leq \frac{1}{2}$. In the case of $\frac{\varepsilon_a}{\varepsilon_r} < \mu < \frac{1}{1 + \varepsilon_r}$, we have

$$\Pr\{\widehat{\mu} \geq (1 + \varepsilon_r)\mu\} \leq \exp(n \psi(\varepsilon_r \mu, \mu)) < \exp\left(n \psi\left(\varepsilon_a, \frac{\varepsilon_a}{\varepsilon_r}\right)\right),$$

where the first inequality follows from Lemma 1 and the second inequality follows from Lemma 4. So, (13) is established. □

We are now in a position to prove the theorem. We shall assume that (5) is satisfied and show that (4) is true. It suffices to show that $\Pr\{|\widehat{\mu} - \mu| \geq \varepsilon_a, \ |\widehat{\mu} - \mu| \geq \varepsilon_r \mu\} < \delta$. For $0 < \mu \leq \frac{\varepsilon_a}{\varepsilon_r}$, we have $\varepsilon_r \mu \leq \varepsilon_a$ and thus

$$\Pr\{|\widehat{\mu} - \mu| \geq \varepsilon_a, \ |\widehat{\mu} - \mu| \geq \varepsilon_r \mu\} = \Pr\{|\widehat{\mu} - \mu| \geq \varepsilon_a\} = \Pr\{\widehat{\mu} \geq \mu + \varepsilon_a\} + \Pr\{\widehat{\mu} \leq \mu - \varepsilon_a\}. \qquad (14)$$

Noting that $0 < \mu + \varepsilon_a \leq \frac{\varepsilon_a}{\varepsilon_r} + \varepsilon_a \leq \frac{1}{2}$, we have

$$\Pr\{\widehat{\mu} \geq \mu + \varepsilon_a\} \leq \exp(n \psi(\varepsilon_a, \mu)) \leq \exp\left(n \psi\left(\varepsilon_a, \frac{\varepsilon_a}{\varepsilon_r}\right)\right),$$

where the first inequality follows from Lemma 1 and the second inequality follows from Lemma 2. It can be checked that (5) is equivalent to $\exp(n \psi(\varepsilon_a, \frac{\varepsilon_a}{\varepsilon_r})) < \frac{\delta}{2}$. Therefore,

$$\Pr\{\widehat{\mu} \geq \mu + \varepsilon_a\} < \frac{\delta}{2}$$

for $0 < \mu \leq \frac{\varepsilon_a}{\varepsilon_r}$.
On the other hand, since $\varepsilon_a < \frac{\varepsilon_a}{\varepsilon_r} < \frac{1}{2}$, by Lemma 5 and Lemma 3, we have

$$\Pr\{\widehat{\mu} \leq \mu - \varepsilon_a\} \leq \exp\left(n \psi\left(-\varepsilon_a, \frac{\varepsilon_a}{\varepsilon_r}\right)\right) \leq \exp\left(n \psi\left(\varepsilon_a, \frac{\varepsilon_a}{\varepsilon_r}\right)\right) < \frac{\delta}{2}$$

for $0 < \mu \leq \frac{\varepsilon_a}{\varepsilon_r}$. Hence, by (14),

$$\Pr\{|\widehat{\mu} - \mu| \geq \varepsilon_a, \ |\widehat{\mu} - \mu| \geq \varepsilon_r \mu\} < \frac{\delta}{2} + \frac{\delta}{2} = \delta.$$

This proves (4) for $0 < \mu \leq \frac{\varepsilon_a}{\varepsilon_r}$.

For $\frac{\varepsilon_a}{\varepsilon_r} < \mu < 1$, we have $\varepsilon_r \mu > \varepsilon_a$ and thus

$$\Pr\{|\widehat{\mu} - \mu| \geq \varepsilon_a, \ |\widehat{\mu} - \mu| \geq \varepsilon_r \mu\} = \Pr\{|\widehat{\mu} - \mu| \geq \varepsilon_r \mu\} = \Pr\{\widehat{\mu} \geq \mu + \varepsilon_r \mu\} + \Pr\{\widehat{\mu} \leq \mu - \varepsilon_r \mu\}.$$

Invoking Lemma 6, we have

$$\Pr\{\widehat{\mu} \geq \mu + \varepsilon_r \mu\} \leq \exp\left(n \psi\left(\varepsilon_a, \frac{\varepsilon_a}{\varepsilon_r}\right)\right).$$

On the other hand,

$$\Pr\{\widehat{\mu} \leq \mu - \varepsilon_r \mu\} \leq \exp(n \psi(-\varepsilon_r \mu, \mu)) \leq \exp\left(n \psi\left(-\varepsilon_a, \frac{\varepsilon_a}{\varepsilon_r}\right)\right) \leq \exp\left(n \psi\left(\varepsilon_a, \frac{\varepsilon_a}{\varepsilon_r}\right)\right),$$

where the first inequality follows from Lemma 1, the second inequality follows from Lemma 4, and the last inequality follows from Lemma 3. Hence,

$$\Pr\{|\widehat{\mu} - \mu| \geq \varepsilon_a, \ |\widehat{\mu} - \mu| \geq \varepsilon_r \mu\} \leq 2\exp\left(n \psi\left(\varepsilon_a, \frac{\varepsilon_a}{\varepsilon_r}\right)\right) < \delta.$$

This proves (4) for $\frac{\varepsilon_a}{\varepsilon_r} < \mu < 1$. The proof of Theorem 1 is thus completed.

5  Proof of Theorem 2

Define $\overline{X}_n = \frac{1}{n}\sum_{i=1}^n X_i = \widehat{\mu}$ and $\overline{Y}_n = \frac{1}{n}\sum_{i=1}^n Y_i$ with $Y_i = \frac{X_i - a}{b - a}$ for $i = 1, \cdots, n$. Then, $\mathbb{E}[Y_i] = \vartheta(\mu)$ for $i = 1, \cdots, n$. Moreover,

$$\Pr\{|\overline{X}_n - \mu| \geq \varepsilon_a, \ |\overline{X}_n - \mu| \geq \varepsilon_r |\mu|\} = \Pr\{\overline{X}_n \leq \mu - \max(\varepsilon_a, \varepsilon_r |\mu|)\} + \Pr\{\overline{X}_n \geq \mu + \max(\varepsilon_a, \varepsilon_r |\mu|)\} = \Pr\{\overline{Y}_n \leq g(\mu)\} + \Pr\{\overline{Y}_n \geq h(\mu)\}. \qquad (15)$$

It follows from (15) and Lemma 1 that

$$\Pr\{|\overline{X}_n - \mu| \geq \varepsilon_a, \ |\overline{X}_n - \mu| \geq \varepsilon_r |\mu|\} \leq \exp(n M(g(\mu), \vartheta(\mu))) + \exp(n M(h(\mu), \vartheta(\mu))) \leq 2\exp(n W(\mu)),$$

from which it follows immediately that (8) holds for any $\mu \in [a, b]$ provided that (9) is true.

Now we shall show (10) and (11). For $\nu \in [c, d] \subseteq [a, b]$ with $g(d) \leq \vartheta(c) \leq \vartheta(d) \leq h(c)$, it can be shown that

$$g(c) \leq g(\nu) \leq g(d) \leq \vartheta(c) \leq \vartheta(\nu) \leq \vartheta(d) \leq h(c) \leq h(\nu) \leq h(d).$$
By differentiation, it can be shown that for any fixed $\mu \in (0, 1)$, $M(z, \mu)$ is monotonically increasing with respect to $z \in (0, \mu)$. Since $g(\nu) \leq g(d) \leq \vartheta(\nu)$ for all $\nu \in [c, d]$, it follows that

$$M(g(\nu), \vartheta(\nu)) \leq M(g(d), \vartheta(\nu)), \quad \forall \nu \in [c, d]. \qquad (16)$$

By differentiation, it can be shown that for any fixed $z \in (0, 1)$, $M(z, \mu)$ is monotonically decreasing with respect to $\mu \in (z, 1)$. Since $g(d) \leq \vartheta(c) \leq \vartheta(\nu) \leq 1$ for all $\nu \in [c, d]$, we have

$$M(g(d), \vartheta(\nu)) \leq M(g(d), \vartheta(c)), \quad \forall \nu \in [c, d]. \qquad (17)$$

By virtue of (16) and (17), we have

$$M(g(\nu), \vartheta(\nu)) \leq M(g(d), \vartheta(c)), \quad \forall \nu \in [c, d]. \qquad (18)$$

Similarly, it can be shown that

$$M(h(\nu), \vartheta(\nu)) \leq M(h(c), \vartheta(d)), \qquad (19)$$

$$M(g(\nu), \vartheta(\nu)) \geq M(g(c), \vartheta(d)), \qquad (20)$$

$$M(h(\nu), \vartheta(\nu)) \geq M(h(d), \vartheta(c)) \qquad (21)$$

for all $\nu \in [c, d]$. Combining (18), (19), (20) and (21) yields (10) and (11). Theorem 2 is thus established.

References

[1] M. S. Bazaraa, H. D. Sherali and C. M. Shetty, Nonlinear Programming: Theory and Algorithms, Wiley, 1993.

[2] X. Chen, "Exact computation of minimum sample size for estimation of binomial parameters," arXiv:0707.2113 [math.ST], July 2007.

[3] H. Chernoff, "A measure of asymptotic efficiency for tests of a hypothesis based on the sum of observations," Annals of Mathematical Statistics, vol. 23, pp. 493-507, 1952.

[4] M. M. Desu and D. Raghavarao, Sample Size Methodology, Academic Press, 1990.

[5] W. Hoeffding, "Probability inequalities for sums of bounded random variables," Journal of the American Statistical Association, vol. 58, pp. 13-30, 1963.

[6] A. H. Land and A. G. Doig, "An automatic method of solving discrete programming problems," Econometrica, vol. 28, no. 3, pp. 497-520, 1960.
