A NONPARAMETRIC APPROACH TO 3D SHAPE ANALYSIS FROM DIGITAL CAMERA IMAGES - I.

In Memory of W. P. Dayawansa

V. Patrangenaru^{1*}, X. Liu^{1†}, S. Sugathadasa^{2}
^1 Florida State University
^2 Texas Tech University

June 5, 2008

Abstract

In this article, for the first time, one develops a nonparametric methodology for an analysis of shapes of configurations of landmarks on real 3D objects from regular camera photographs, thus making 3D shape analysis very accessible. A fundamental result in computer vision, due to Faugeras (1992) and Hartley, Gupta and Chang (1992), is that generically a finite 3D configuration of points can be retrieved, up to a projective transformation, from corresponding configurations in a pair of camera images. Consequently, the projective shape of a 3D configuration can be retrieved from two of its planar views. Given the inherent registration errors, the 3D projective shape can be estimated from a sample of photos of the scene containing that configuration. Projective shapes are here regarded as points on projective shape manifolds. Using large sample and nonparametric bootstrap methodology for extrinsic means on manifolds, one gives confidence regions and tests for the mean projective shape of a 3D configuration from its 2D camera images.

* Research supported by National Science Foundation Grant DMS-0652353 and by National Security Agency Research Grant H98230-08-1-0058.
† Research supported by National Science Foundation Grants CCF-0514743 and DMS-0713012.

Keywords: pinhole camera images, high level image analysis, 3D reconstruction, projective shape, extrinsic means, asymptotic distributions on manifolds, nonparametric bootstrap, confidence regions.
AMS subject classification: Primary 62H11; Secondary 62H10, 62H35.

1 Introduction

Until now, statistical analysis of similarity shapes from images was restricted to a small amount of data, since similarity shape appearance is relative to the position of the camera with respect to the scene pictured. In this paper, for the first time, we study the shape of a 3D configuration from its 2D images in photographs of the configuration, without any camera positioning restriction relative to the scene pictured. Our nonparametric methodology is manifold based, and uses standard reconstruction methods in computer vision. In the absence of occlusions, a set of point correspondences in two views can be used to retrieve the 3D configuration of points. Faugeras (1992) and Hartley et al. (1992) state that two such reconstructions differ by a projective transformation in 3D. Sugathadasa (2006) and Patrangenaru and Sugathadasa (2006) noticed that the object which is actually recovered without ambiguity is the projective shape of the configuration, which casts a new light on the role of projective shape in the identification of a spatial configuration. Projective shape is the natural approach to shape analysis from digital images, since the vast majority of libraries of images are acquired via a central projection from the scene pictured to the black box recording plane. Hartley and Zisserman (2004, p. 1) note that "this often renders classical shape analysis of a spatial scene impossible, since similarity is not preserved when a camera is moving." Advances in statistical analysis of projective shape have been slowed down by overemphasis on the importance of similarity shape in image analysis, with little focus on the principles of image acquisition or binocular vision.
Progress was also affected by the lack of a geometric model for the space of projective shapes, and ultimately probably by insufficient dialogue between researchers in geometry, computer vision and statistical shape analysis. For the reasons presented above, projective shapes have been studied only recently, and except for one concrete 3D example due to Sugathadasa (2006), to be found in Liu et al. (2007), the literature was bound to linear or planar projective shape analyses. Examples of 2D projective shape analysis can be found in Maybank (1994), Mardia et al. (1996), Goodall and Mardia (1999), Patrangenaru (2001), Lee et al. (2004), Paige et al. (2005), Mardia and Patrangenaru (2005), Kent and Mardia (2006, 2007) and Munk et al. (2007). Our main goal here is to derive a natural concept of 3D shape that can be extracted from data recorded from camera images. The statistical methodology for estimation of a mean 3D projective shape is nonparametric, based on large sample theory and on Efron's bootstrap (Efron (1979, 1982)). In this paper, a 3D projective shape is regarded as a random object on a projective shape space. Since typically samples of images are small, in order to estimate the mean projective shape we use the nonparametric bootstrap for the studentized sample mean projective shape on a manifold, as shown in Bhattacharya and Patrangenaru (2005). This bootstrap distribution was essentially presented in Mardia and Patrangenaru (2005). While running the projective shape estimation algorithm of Mardia and Patrangenaru (2005) on a concrete data set, Liu et al. (2007) found typos in some formulas; in this paper we make the necessary corrections. In Section 2 we present projective geometry concepts and facts that are needed in Section 3, such as projective space, projective frames, and projective coordinates.
We also introduce computer vision concepts, such as the essential matrix and the fundamental matrix, associated with a pair of camera views of a 3D scene, that are needed in the reconstruction of that scene from 2D calibrated, respectively non-calibrated, camera images. We then state the Faugeras-Hartley-Gupta-Chang projective ambiguity theorem for the scene reconstructed from two non-calibrated camera views. For the reconstruction of a configuration of points in space from its views in a pair of images, we refer to a computational algorithm in Ma et al. (2006). In Section 3 we introduce projective shapes of configurations of points in R^m or in RP^m, and the multivariate axial geometric model for the projective shape space, which is our choice for a statistical study of projective shape. The Faugeras-Hartley-Gupta-Chang theorem is reformulated in Theorem 3.1 in terms of projective shapes: if R is a 3D reconstruction of a spatial configuration C from two of its uncalibrated camera views, then R and C have the same projective shape. This is the key result for our projective shape analysis of spatial configurations, which opens the Statistical Shape Analysis door to Computer Vision and Pattern Recognition of 3D scenes. Since projective shape spaces are identified via projective frames with products of axial spaces, in Section 4 we approach multivariate axial distributions via a quadratic equivariant embedding of a product of q copies of RP^m in products of spaces of symmetric matrices. A theorem on the asymptotic distributions of extrinsic sample means of multivariate axes, stated without proof and with some minor typos in Mardia and Patrangenaru (2005), is given in this section (Theorem 4.1) with a full proof. The corrections to Mardia and Patrangenaru (2005) are listed in Remark 4.3.
The asymptotic and nonparametric bootstrap distribution results are used to derive confidence regions for extrinsic mean projective shapes. If a random projective shape has a nondegenerate extrinsic covariance matrix, one may studentize the extrinsic sample mean to generate asymptotically chi-square distributions that are useful for large sample confidence regions in Corollary 4.2, or nonparametric bootstrap confidence regions if the sample is small in Corollary 4.4. If the extrinsic covariance matrix is degenerate, and the axial marginals have nondegenerate extrinsic covariance matrices, one gives a Bonferroni type of argument for axial marginals to derive confidence regions for the mean projective shape in Corollary 4.5.

2 Basic Projective Geometry for Ideal Pinhole Camera Image Acquisition

Pinhole camera image acquisition is based on a central projection from the 3D world to the 2D photograph. Distances between observed points are not proportional to distances between their corresponding points in the photograph, and Euclidean geometry is inappropriate to model the relationship between a 3D object and its picture, even if the object is flat. The natural approach to ideal pinhole camera image acquisition is via projective geometry, which also provides a logical justification for the mental reconstruction of a spatial scene from binocular retinal images, playing a central role in human vision. In this section we review some of the basics of projective geometry that are useful in understanding image formation and scene retrieval from ideal pinhole camera images.

2.1 Basics of Projective Geometry

Consider a real vector space V. Two vectors x, y ∈ V \ {0_V} are equivalent if they differ by a scalar multiple.
The equivalence class of x ∈ V \ {0_V} is labeled [x], and the set of all such equivalence classes is the projective space P(V) associated with V, P(V) = {[x], x ∈ V \ {0_V}}. The real projective space RP^m is P(R^{m+1}). Another notation for a projective point p = [x] ∈ RP^m, the equivalence class of x = (x^1, ..., x^{m+1}) ∈ R^{m+1}, is p = [x^1 : x^2 : ... : x^{m+1}]; it features the homogeneous coordinates (x^1, ..., x^{m+1}) of p, which are determined up to a multiplicative constant. A projective point p also admits a spherical representation, when thought of as a pair of antipodal points on the m-dimensional unit sphere, p = {z, -z}, z = (z^1, z^2, ..., z^{m+1}), (z^1)^2 + ... + (z^{m+1})^2 = 1. A d-dimensional projective subspace of RP^m is a projective space P(V), where V is a (d+1)-dimensional vector subspace of R^{m+1}. A codimension one projective subspace of RP^m is also called a hyperplane. The linear span of a subset D of RP^m is the smallest projective subspace of RP^m containing D. We say that k points in RP^m are in general position if their linear span is RP^m. If k points in RP^m are in general position, then k ≥ m + 1. The numerical space R^m can be embedded in RP^m, preserving collinearity. An example of such an affine embedding is

(2.1)    h((u^1, ..., u^m)) = [u^1 : ... : u^m : 1] = [ũ],

where ũ = (u^1, ..., u^m, 1)^T, and in general an affine embedding is given, for any A ∈ GL(m+1, R), by h_A(u) = [Aũ]. The complement of the range of the embedding h in (2.1) is the hyperplane RP^{m-1}, the set of points [x^1 : ... : x^m : 0] ∈ RP^m. Conversely, the inhomogeneous (affine) coordinates (u^1, ..., u^m) of a point p = [x^1 : x^2 : ... : x^{m+1}] ∈ RP^m \ RP^{m-1} are given by

(2.2)    u^j = x^j / x^{m+1}, ∀ j = 1, ..., m.
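As a quick illustration of (2.1) and (2.2), the passage between affine and homogeneous coordinates can be coded as follows (a NumPy sketch of our own, with hypothetical helper names, not part of the paper's methodology):

```python
import numpy as np

def affine_to_homogeneous(u):
    """Affine embedding (2.1): u in R^m -> homogeneous coordinates of [u^1:...:u^m:1]."""
    return np.append(np.asarray(u, dtype=float), 1.0)

def homogeneous_to_affine(x):
    """Inhomogeneous coordinates (2.2) of p = [x^1:...:x^{m+1}], assuming x^{m+1} != 0."""
    x = np.asarray(x, dtype=float)
    return x[:-1] / x[-1]
```

Since homogeneous coordinates are determined only up to a multiplicative constant, rescaling the input of `homogeneous_to_affine` leaves its output unchanged.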
Consider now the linear transformation from R^{m'+1} to R^{m+1} defined by the matrix B ∈ M(m+1, m'+1; R) and its kernel K = {x ∈ R^{m'+1}, Bx = 0}. The projective map β : RP^{m'} \ P(K) → RP^m associated with B is defined by

(2.3)    β([x]) = [Bx].

In particular, a projective transformation β of RP^m is the projective map associated with a nonsingular matrix B ∈ GL(m+1, R) and its action on RP^m:

(2.4)    β([x^1 : ... : x^{m+1}]) = [B(x^1, ..., x^{m+1})^T].

In affine coordinates (inverse of the affine embedding (2.1)), the projective transformation (2.4) is given by v = f(u), with

(2.5)    v^j = (a^j_{m+1} + Σ_{i=1}^m a^j_i u^i) / (a^{m+1}_{m+1} + Σ_{i=1}^m a^{m+1}_i u^i), ∀ j = 1, ..., m,

where det B = det((a^j_i)_{i,j=1,...,m+1}) ≠ 0. An affine transformation of RP^m, v = Au + b, A ∈ GL(m, R), b ∈ R^m, is the particular case of a projective transformation α associated with the matrix B ∈ GL(m+1, R) given by

(2.6)    B = ( A  b ; 0_m^T  1 ).

A projective frame in an m-dimensional projective space (or projective basis in the computer vision literature, see e.g. Hartley (1993)) is an ordered set of m+2 projective points in general position. An example of a projective frame in RP^m is the standard projective frame ([e_1], ..., [e_{m+1}], [e_1 + ... + e_{m+1}]). In projective shape analysis it is preferable to employ coordinates invariant with respect to the group PGL(m) of projective transformations. A projective transformation takes a projective frame to a projective frame, and its action on RP^m is determined by its action on a projective frame; therefore we define the projective coordinate(s) of a point p ∈ RP^m w.r.t. a projective frame π = (p_1, ..., p_{m+2}) by

(2.7)    p^π = β^{-1}(p),

where β ∈ PGL(m) is the projective transformation taking the standard projective frame to π.
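The agreement between the homogeneous action (2.4) and its affine-coordinate form (2.5) can be checked numerically. The following Python sketch (purely illustrative; the randomly drawn matrix B is assumed nonsingular, which holds for a generic draw) computes the image of a point both ways:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 2
B = rng.standard_normal((m + 1, m + 1))   # assumed nonsingular (generic draw)

u = np.array([0.3, -1.2])                 # a point of R^m in affine coordinates

# Action (2.4) in homogeneous coordinates, then back via (2.2).
x = np.append(u, 1.0)
v_homogeneous = B @ x
v = v_homogeneous[:m] / v_homogeneous[m]

# Formula (2.5) directly, with a^j_i = B[j-1, i-1].
den = B[m, m] + B[m, :m] @ u
v_formula = (B[:m, m] + B[:m, :m] @ u) / den
```

The two computations agree exactly, since (2.5) is just (2.4) read through the affine chart (2.1).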
These coordinates automatically have the invariance property.

REMARK 2.1. Assume u, u_1, ..., u_{m+2} are points in R^m such that π = ([ũ_1], ..., [ũ_{m+2}]) is a projective frame. If we consider the (m+1) × (m+1) matrix U_m = [ũ_1, ..., ũ_{m+1}] with columns ũ_1, ..., ũ_{m+1}, the projective coordinates of p = [ũ] w.r.t. π are given by

(2.8)    p^π = [y^1(u) : ... : y^{m+1}(u)],

where

(2.9)    v(u) = U_m^{-1} ũ

and

(2.10)    y^j(u) = v^j(u) / v^j(u_{m+2}), ∀ j = 1, ..., m+1.

Note that in our notation the superscripts are reserved for the components of a point, whereas the subscripts are for the labels of points.

2.2 Projective geometry and image acquisition in ideal digital cameras

An introduction to the geometry of the pinhole camera principle can be found in 3D vision texts, including Ma et al. (2006), Hartley and Zisserman (2004) and Birchfeld (1998). In this section we give such a description in our projective geometry notation. Ideal pinhole camera image acquisition can be thought of in terms of a central projection β : RP^3 \ RP^2 → RP^2, whose representation in conveniently selected affine coordinates (x, y, z) ∈ R^3, (u, v) ∈ R^2 is given by

(2.11)    u = f x / z,    v = f y / z,

where f is the focal length, i.e. the distance from the image sensor or film to the pinhole, or principal plane of the lens, RP^2 being the complement of the domain of β in RP^3. In homogeneous coordinates [x : y : z : w], [u : v : t], the perspective projective map β can be represented by the matrix B ∈ M(3, 4; R) given by

(2.12)    B = ( f 0 0 0 ; 0 f 0 0 ; 0 0 1 0 ).
Digital camera image acquisition is based on a slightly different projective transformation that in addition takes into account internal camera parameters, such as the pixel aspect ratio, the skewness parameter and the principal point (the origin of the image coordinates in the principal plane). For such cameras, the projective map (2.12) is altered by composition with a matrix accounting for the camera internal calibration parameters. If we also take into consideration the change of coordinates between the initial and current camera position, involving a roto-translation (R, t) ∈ SO(3) × R^3, the projective map ˜β of a pinhole camera image acquisition is associated with the matrix

(2.13)    ˜B = C_int B G = ( k_u k_c u_0 ; 0 k_v v_0 ; 0 0 1 ) ( f 0 0 0 ; 0 f 0 0 ; 0 0 1 0 ) ( R t ; 0_3^T 1 ) = AG,

where k_u and k_v are scale factors of the image plane in units of the focal length f, θ = cot^{-1} k_c is the skew, and (u_0, v_0) is the principal point. The matrix A contains the internal parameters and the projection map (2.12), while G contains the external parameters. The columns of the matrix ˜B are the columns of a 3 × 3 matrix P followed by a 3 × 1 vector p:

(2.14)    ˜B = (P | p),

so that

(2.15)    P = AR and p = At.

2.3 Essential and fundamental matrices

Consider now two positions of a camera directed at a point [u] ∈ RP^3, and the projective points associated with its images taken at these locations of the camera, m_a = [u_a] ∈ RP^2, a = 1, 2, where u_1, u_2 ∈ R^3 \ {0}. If we assume the camera's internal parameters are known (the camera is calibrated), then, with respect to the camera's coordinate frame at each position, we may assume C_int = I_3. Since the lines joining the two locations of the camera optical center with the image points meet at [u], these two lines and the line joining the two locations of the camera optical center are coplanar.
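To make the factorization (2.13)-(2.15) concrete, here is a small NumPy sketch; the numerical values of the internal and external parameters are hypothetical, chosen only for illustration:

```python
import numpy as np

# Hypothetical internal parameters (illustrative values, not from the paper).
f, k_u, k_v, k_c, u0, v0 = 1.0, 800.0, 820.0, 0.01, 320.0, 240.0

C_int = np.array([[k_u, k_c, u0],
                  [0.0, k_v, v0],
                  [0.0, 0.0, 1.0]])
B = np.array([[f, 0, 0, 0],
              [0, f, 0, 0],
              [0, 0, 1, 0]], dtype=float)

# External parameters: a roto-translation (R, t), here a rotation about the z-axis.
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([0.2, -0.1, 1.5])
G = np.vstack([np.hstack([R, t[:, None]]), [0.0, 0.0, 0.0, 1.0]])

B_tilde = C_int @ B @ G        # the 3x4 camera matrix of (2.13)
A = C_int @ B[:, :3]           # so that B_tilde = (AR | At), as in (2.14)-(2.15)
```

One can check directly that the columns of `B_tilde` split into the 3 × 3 block AR and the vector At, as (2.15) asserts.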
The plane containing these lines is the epipolar plane associated with the point [u]. Assume we refer all the points to one coordinate system, say the coordinate system of the second position of the camera. The position vectors of the first and second image points are t + Ru_1 and u_2, respectively, and the vector from one optical center to the other is t. Here the change of coordinates between the Euclidean frames corresponding to the two camera positions is given by a roto-translation (R, t) ∈ SO(3) × R^3. The three vectors above are directions of lines in the epipolar plane, therefore

(2.16)    u_2^T (t × (Ru_1)) = 0.

By defining t_× as the matrix associated with the linear operator y → t × y, we can rewrite equation (2.16) as follows:

(2.17)    u_2^T (t_× R u_1) = u_2^T E u_1 = 0,

where E = t_× R is the so-called essential matrix. If the camera is uncalibrated, then the matrices A_1 = A_2 = A in (2.15), containing the camera internal parameters, yield the homogeneous pixel coordinates

(2.18)    v_1 = A u_1,
(2.19)    v_2 = A u_2.

Thus

(2.20)    (A^{-1} v_2)^T (t_× R A^{-1} v_1) = v_2^T (A^{-1})^T (t_× R A^{-1} v_1) = 0,

and we obtain

(2.21)    v_2^T F v_1 = 0,

where F = (A^{-1})^T E A^{-1}, with E the essential matrix in (2.17), is the so-called fundamental matrix. The fundamental matrix depends only on the relative position of the two cameras and on their internal parameters. It has rank two, depending on seven real constants.

2.4 Reconstruction of a 3D scene from two of its 2D images

If we conveniently select the coordinates for the first camera position, also incorporating the internal parameters, we may assume that the matrix associated with ˜β_1 in equations (2.3) and (2.13) is B_1 = (I | 0), and the fundamental matrix factors as follows: F = t_× R, where B_2 = (R | t) is the matrix defining ˜β_2 (see equations (2.3) and (2.13)).
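The constructions of this subsection can be verified numerically. The sketch below (our own illustration; the rotation, translation and calibration matrix are synthetic) builds E = t_× R and F = (A^{-1})^T E A^{-1} and checks the epipolar constraints (2.17) and (2.21) on a simulated matched pair:

```python
import numpy as np

def cross_matrix(t):
    """The matrix t_x of the linear operator y -> t x y."""
    return np.array([[0.0, -t[2],  t[1]],
                     [t[2],  0.0, -t[0]],
                     [-t[1], t[0],  0.0]])

rng = np.random.default_rng(1)

# A synthetic roto-translation between the two camera positions.
theta = 0.3
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([1.0, 0.2, -0.5])

E = cross_matrix(t) @ R                    # essential matrix (2.17)

# A generic 3D point; with B1 = (I|0), B2 = (R|t), its two calibrated images are:
X = rng.standard_normal(3) + np.array([0.0, 0.0, 5.0])
u1, u2 = X, R @ X + t

# Pixel coordinates (2.18)-(2.19) for a hypothetical calibration matrix A,
# and the fundamental matrix (2.21).
A = np.array([[800.0, 0.0, 320.0],
              [0.0, 820.0, 240.0],
              [0.0, 0.0, 1.0]])
F = np.linalg.inv(A).T @ E @ np.linalg.inv(A)
v1, v2 = A @ u1, A @ u2
```

Both u_2^T E u_1 and v_2^T F v_1 vanish up to rounding error, and F inherits rank two from t_×.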
Note that here R is nonsingular, and it does not necessarily represent the matrix of a rotation. Let [v_1], [v_2] ∈ RP^2 be given by (2.18) and (2.19), associated with a pair [u_1], [u_2] ∈ RP^2 corresponding to matched points in two images. We seek a point [u] ∈ RP^3 such that [v_i] = ˜β_i([u]), i = 1, 2. From the relation v_2^T F v_1 = v_2^T t_× R v_1 = v_2^T (t × Rv_1) = 0 it follows that v_2, Rv_1, t are linearly dependent, and we may assume that Rv_1 = b v_2 - a t. Moreover, since v_1 is defined up to a scalar multiple, we may assume that Rv_1 = v_2 - a t, and define [u] ∈ RP^3 by u = (v_1^T, a)^T. Now B_1 u = (I | 0) u = v_1, and B_2 u = (R | t) u = Rv_1 + a t = v_2; therefore [u] is a desired solution to the reconstruction problem. As shown, [u] is determined by the two camera projection matrices B_1 and B_2. If we choose a different pair of camera matrices B_1 H and B_2 H yielding the same fundamental matrix F, then, in order to preserve the same pair of matched image points, the point [u] must be replaced by [H^{-1} u].

PROBLEM 2.1. The problem of the reconstruction of a configuration of points in 3D from two ideal uncalibrated camera images is equivalent to the following: given two camera images RP^2_1, RP^2_2 of unknown relative position and unknown internal camera parameters, and two matching sets of labelled points {p_{a,1}, ..., p_{a,k}} ⊂ RP^2_a, a = 1, 2, find a configuration of points p_1, ..., p_k ∈ RP^3 such that there exist two positions of the camera RP^2_1, RP^2_2 for which ˜β_a(p_j) = p_{a,j}, ∀ a = 1, 2, j = 1, ..., k.

The above discussion proves the following theorem (Faugeras (1992), Hartley et al. (1992)):

THEOREM 2.2. The reconstruction problem for two non-calibrated camera images has a solution in terms of the fundamental matrix F = t_× R.
Any two solutions can be obtained from each other by a projective transformation in RP^3.

REMARK 2.2. Note that, although the configurations in correspondence are finite, their size is arbitrarily large, and the assumption of finitely many matched labelled pairs can be replaced by an assumption of parameterized sets in correspondence. Therefore, in the absence of occlusions, a 3D configuration can be reconstructed from 2D images, and this reconstruction is unique up to a projective transformation.

2.5 Estimation of the fundamental matrix

Since equation (2.21) is homogeneous as a linear equation in F, and F has rank two, this matrix depends on seven independent parameters. Therefore, in principle, F can be recovered from corresponding configurations of seven points. Since the nature of digital imaging data is inherently discrete and errors also occur in landmark registration, F is estimated using configurations of eight or more points p_{a,i}, a = 1, 2, i = 1, ..., k, k ≥ 8, whose stacked homogeneous coordinates are the k × 3 matrices y_a, a = 1, 2. The linear system for F is

(2.22)    y_2^T F y_1 = 0,

and can be written as

(2.23)    f^T Y = 0,

where f is a vectorized form of F. A refined eight point algorithm for the estimate F̂ of the fundamental matrix F can be found in Ma et al. (2006, pp. 188, 395).

3 Projective Shape and 3D Reconstruction

DEFINITION 3.1. Two configurations of points in R^m have the same projective shape if they differ by a projective transformation of R^m.

Unlike similarities or affine transformations, projective transformations of R^m do not have a group structure under composition of maps (the domain of definition of the composition of two such maps is smaller than the maximal domain of a projective transformation in R^m).
To avoid this complication, rather than considering the projective shapes of configurations in R^m, we consider projective shapes of configurations in RP^m. A projective shape of a k-ad (configuration of k landmarks or labelled points) is the orbit of that k-ad under projective transformations acting diagonally:

(3.1)    α_k(p_1, ..., p_k) = (α(p_1), ..., α(p_k)).

Since the action (2.4) of β ∈ PGL(m) on [x] ∈ RP^m, when expressed in inhomogeneous coordinates (2.2), reduces to (2.5), if two configurations Γ_1, Γ_2 of points in R^m have the same projective shape, then h(Γ_1), h(Γ_2) have the same projective shape in RP^m (h is the affine embedding given by (2.1)). Patrangenaru (1999, 2001) considered the set G(k, m) of k-ads (p_1, ..., p_k), k > m+2, for which π = (p_1, ..., p_{m+2}) is a projective frame. PGL(m) acts simply transitively on G(k, m), and the projective shape space P Σ^k_m is the quotient G(k, m)/PGL(m). Using the projective coordinates (p^π_{m+3}, ..., p^π_k) given by (2.7), one can show that P Σ^k_m is a manifold diffeomorphic with (RP^m)^{k-m-2}. The projective frame representation has two useful features. Firstly, the projective shape space has a manifold structure, thus allowing the use of the asymptotic theory for means on manifolds in Bhattacharya and Patrangenaru (2003, 2005). Secondly, it can be extended to infinite dimensional projective shape spaces, such as projective shapes of curves, as shown in Munk et al. (2007). This approach has the advantage of being inductive, in the sense that each new landmark of a configuration adds an extra marginal axial coordinate, thus allowing one to detect its overall contribution to the variability of the configuration, as well as its correlation with the other landmarks.
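A minimal computational sketch of the projective frame representation (our own Python illustration, with hypothetical helper names; each coordinate is normalized to a unit representative of an axis) computes the projective coordinates (2.8)-(2.10) of the landmarks beyond the frame:

```python
import numpy as np

def projective_coordinates(p, frame):
    """Projective coordinate of a point w.r.t. a frame of m+2 points,
    following (2.8)-(2.10); all points in homogeneous coordinates."""
    U = np.column_stack(frame[:-1])          # (m+1) x (m+1), columns u_1, ..., u_{m+1}
    v = np.linalg.solve(U, p)                # v(u) of (2.9)
    v_last = np.linalg.solve(U, frame[-1])   # v(u_{m+2})
    y = v / v_last                           # (2.10)
    return y / np.linalg.norm(y)             # unit representative of an axis in RP^m

def projective_shape(kad, m):
    """Projective shape of a k-ad in RP^m: the q = k - m - 2 projective
    coordinates of the landmarks beyond the frame (p_1, ..., p_{m+2})."""
    frame = kad[:m + 2]
    return [projective_coordinates(p, frame) for p in kad[m + 2:]]
```

The defining invariance can be checked numerically: applying one projective transformation to all landmarks (and rescaling each homogeneous representative arbitrarily) leaves the computed shape unchanged up to the sign ambiguity of each axis.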
The effect of a change of projective coordinates due to projective frame selection can be understood via a group of projective transformations, but is beyond the scope of this paper. We return to the reconstruction of a spatial configuration. Having in view Definition 3.1 of the projective shape of a configuration, Theorem 2.2 can be stated as follows:

THEOREM 3.1. A spatial reconstruction R of a 3D configuration C can be obtained, in the absence of occlusions, from two of its ideal camera views. Any such 3D reconstruction R of C has the same projective shape as C.

REMARK 3.1. Since the projective shape of the 3D reconstruction of a configuration from a pair of images is uniquely determined, and since multiplying by an imposed internal camera parameters matrix keeps the projective shape of the reconstruction unchanged, one may also fix the internal camera parameters conveniently and estimate the essential matrix instead of the fundamental matrix. An eight point algorithm for estimation of the essential matrix, for given internal parameters, is given in Ma et al. (2004, p. 121).

REMARK 3.2. Another approach to projective shape has been recently initiated by Kent and Mardia (2006, 2007). This approach has the advantage of being invariant with respect to the group of permutations of landmark indices; however, it involves a nonlinear approximation to the matrix solution of a data driven equation in an m × m matrix, and has not yet been applied in projective shape analysis for m > 1.

4 Nonparametric Estimation and Testing for the Projective Shape of a 3D Configuration

Assume J : M → R^N is an embedding of the d-dimensional complete manifold M. Bhattacharya and Patrangenaru (2003) defined the extrinsic mean µ_J of a J-nonfocal random object (r.o.)
Y on M by

(4.1)    µ_J := J^{-1}(P_J(µ)),

where µ = E(J(Y)) is the mean vector of J(Y) and P_J : F^c → J(M) is the ortho-projection on J(M), defined on the complement of the set F of focal points of J(M). The extrinsic covariance matrix of Y with respect to a local frame field y → (f_1(y), ..., f_d(y)), for which (dJ(f_1(y)), ..., dJ(f_d(y))) are orthonormal vectors in R^N, was defined in Bhattacharya and Patrangenaru (2005). If Σ is the covariance matrix of J(Y) (regarded as a random vector on R^N), then P_J is differentiable at µ. In order to evaluate the differential d_µ P_J, one considers a special orthonormal frame field to ease the computations. A local ortho-frame field (e_1(p), e_2(p), ..., e_N(p)) defined on an open neighborhood U ⊆ R^N of P_J(M) is adapted to the embedding J if ∀ y ∈ J^{-1}(U), e_r(J(y)) = d_y J(f_r(y)), r = 1, ..., d. Let e_1, e_2, ..., e_N be the canonical basis of R^N and assume (e_1(p), e_2(p), ..., e_N(p)) is an adapted frame field around P_J(µ) = J(µ_J). Then

(4.2)    Σ_E = [ Σ_{a=1}^d d_µP_J(e_b) · e_a(P_J(µ)) e_a(P_J(µ)) ]_{b=1,...,N} Σ [ Σ_{a=1}^d d_µP_J(e_b) · e_a(P_J(µ)) e_a(P_J(µ)) ]^T_{b=1,...,N}

is the extrinsic covariance matrix of Y with respect to (f_1(µ_J), ..., f_d(µ_J)). The projective shape space P Σ^k_m is homeomorphic to M = (RP^m)^q, q = k - m - 2. RP^m, as a particular case of a Grassmann manifold, is equivariantly embedded in the space S(m+1) of (m+1) × (m+1) symmetric matrices (Dimitric (1996)) via j : RP^m → S(m+1), given by

(4.3)    j([x]) = x x^T.
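The embedding (4.3) underlies Watson's method of moments for axial data; a brief NumPy sketch (ours, for illustration only) computes j and the extrinsic sample mean axis as the unit eigenvector for the largest eigenvalue of the averaged embedded sample:

```python
import numpy as np

def embed(x):
    """The equivariant embedding (4.3): j([x]) = x x^T for a unit vector x."""
    return np.outer(x, x)

def extrinsic_mean_axis(X):
    """Extrinsic sample mean of axes [X_1], ..., [X_n] in RP^m: the unit
    eigenvector for the largest eigenvalue of n^{-1} sum_r x_r x_r^T."""
    J = np.mean([embed(x) for x in X], axis=0)
    eigvals, eigvecs = np.linalg.eigh(J)     # eigenvalues in increasing order
    return eigvecs[:, -1]
```

Because x x^T is unchanged under x → -x, the computation is well defined on axes: flipping the signs of any of the sample representatives leaves the mean axis unchanged.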
Patrangenaru (2001) and Mardia and Patrangenaru (2005) considered the resulting equivariant embedding of the projective shape space, J = j_k : P Σ^k_m = (RP^m)^q → (S(m+1))^q, defined by

(4.4)    j_k([x_1], ..., [x_q]) = (j[x_1], ..., j[x_q]),

where x_s ∈ R^{m+1}, x_s^T x_s = 1, ∀ s = 1, ..., q.

REMARK 4.1. The embedding j_k in (4.4) yields the fastest known computational algorithms in projective shape analysis. Basic axial statistics related to Watson's method of moments, such as the sample mean axis (Watson (1983)) and the extrinsic sample covariance matrix (Prentice (1984)), can be expressed in terms of j_{m+3} = j.

A random projective shape Y of a k-ad in RP^m is given in axial representation by the multivariate random axes

(4.5)    (Y^1, ..., Y^q), Y^s = [X^s], (X^s)^T X^s = 1, ∀ s = 1, ..., q = k - m - 2.

From Bhattacharya and Patrangenaru (2003) or Mardia and Patrangenaru (2005) it follows that in this multivariate axial representation of projective shapes, the extrinsic mean projective shape of (Y^1, ..., Y^q) exists if ∀ s = 1, ..., q the largest eigenvalue of E(X^s (X^s)^T) is simple. In this case µ_{j_k} is given by

(4.6)    µ_{j_k} = ([γ_1(m+1)], ..., [γ_q(m+1)]),

where λ_s(a) and γ_s(a), a = 1, ..., m+1, are the eigenvalues in increasing order and the corresponding unit eigenvectors of E(X^s (X^s)^T). If Y_r, r = 1, ..., n, are i.i.d.r.o.'s from a population of projective shapes (in its multi-axial representation) for which the mean shape µ_{j_k} exists, then from a general consistency theorem for extrinsic means on manifolds in Bhattacharya and Patrangenaru (2003) it follows that the extrinsic sample mean [Y]_{j_k,n} is a strongly consistent estimator of µ_{j_k}. In the multivariate axial representation, Y_r is given by

(4.7)    Y_r = ([X^1_r], ..., [X^q_r]), (X^s_r)^T X^s_r = 1, s = 1, ..., q.
Let J_s be the random symmetric matrix given by

(4.8)    J_s = n^{-1} Σ_{r=1}^n X^s_r (X^s_r)^T, s = 1, ..., q,

and let d_s(a) and g_s(a), a = 1, ..., m+1, be the eigenvalues in increasing order and the corresponding unit eigenvectors of J_s. Then the sample mean projective shape in its multi-axial representation is given by

(4.9)    Ȳ_{j_k,n} = ([g_1(m+1)], ..., [g_q(m+1)]).

REMARK 4.2. Some of the results in this section are given without a proof in Mardia and Patrangenaru (2005). For reasons presented in Remark 4.3 we give full proofs of these results.

To determine the extrinsic covariance matrix (4.2) of (4.5), we note that the vectors

(4.10)    f_{(s,a)} = (0, ..., 0, γ_s(a), 0, ..., 0),

with the only nonzero term in position s, s ∈ {1, ..., q}, a ∈ {1, ..., m}, yield a basis in the tangent space at the extrinsic mean, T_{µ_{j_k}}(RP^m)^q, that is orthonormal with respect to the scalar product induced by the embedding j_k. The vectors e_{(s,a)}, ∀ s ∈ {1, ..., q}, ∀ a ∈ {1, ..., m}, defined by

(4.11)    e_{(s,a)} := d_{µ_{j_k}} j_k (f_{(s,a)}),

form an orthobasis of T_{j_k(µ_{j_k})}(RP^m)^q. We complete this orthobasis to an orthobasis of q-tuples of matrices (e_i)_{i∈I} for (S(m+1))^q, indexed by the set I, the first indices of which are the pairs (s, a), s = 1, ..., q; a = 1, ..., m, in their lexicographic order. Let E^b_a be the (m+1) × (m+1) matrix with all entries zero, except for an entry 1 in position (a, b). The standard basis of S(m+1) is given by e^b_a = E^b_a + E^a_b, 1 ≤ a ≤ b ≤ m+1. For each s = 1, ..., q, the vector

(s e^b_a) = (0_{m+1}, ..., 0_{m+1}, e^b_a, 0_{m+1}, ..., 0_{m+1})

has all components equal to the zero matrix 0_{m+1} ∈ S(m+1), except for the s-th component, which is the matrix e^b_a of the standard basis of S(m+1, R); the vectors (s e^b_a), s = 1, ...
, q, 1 ≤ a ≤ b ≤ m + 1 listed in the lexicog raphic order of their indices ( s, a , b ) give a basis of S ( m + 1) q . Let Σ b e the covariance matrix of j k ( Y 1 , . . . , Y q ) regarde d as a rand om vector in ( S ( m + 1 )) q , with r espect to this standard basis, and let P =: P j k : ( S ( m + 1)) q → j k (( R P m ) q ) be the projectio n o n j k (( R P m ) q ) . Fr om (4.2) it follows that the extrinsic c ov ariance matrix of ( Y 1 , . . . , Y q ) with respect to the basis (4 .10) of T µ j k ( R P m ) q is g iv en by Σ E = e ( s,a ) ( P ( µ )) · d µ P ( r e b a ) ( s =1 ,...,q ) , ( a =1 ,...,m ) · Σ · e ( s,a ) ( P ( µ )) · d µ P ( r e b a ) T ( s =1 ,...,q ) , ( a =1 ,...,m ) . (4.12) Assume Y 1 , . . . , Y n are i.i. d.r .o. ’ s (in depend ent ide ntically d istributed random objects) from a j k -nonf ocal probab i- lity measure on ( R P m ) q , and µ j k in (4.6) is the extrinsic mean of Y 1 . W e arr ange the pairs of in dices ( s, a ) , s = 1 , . . . , q ; a = 1 , . . . , m , in their lexicograp hic order, and define the ( mq ) × ( mq ) symmetric matrix G n , with the entries G n ( s,a ) , ( t,b ) = n − 1 ( d s ( m + 1) − d s ( a )) − 1 ( d t ( m + 1) − d t ( b )) − 1 · · n X r =1 ( g s ( a ) T X s r )( g t ( b ) T X t r )( g s ( m + 1) T X s r )( g t ( m + 1) T X t r ) . (4.13) LEMMA 4.1. G n is the extrinsic sample covariance matrix estimator of Σ E . Proof . The proof o f lemma 4 .1 is b ased on th e equiv ariance of the embe dding j k . As a p reliminary step n ote that the grou p S O ( m + 1) acts as a group of isom etries of R P m . If R ∈ S O ( m + 1) an d [ x ] ∈ R P m then the action R ([ x ]) = [ Rx ] is well d efined. S O ( m + 1) acts by isometries also on S + ( m + 1 , R ) via R ( A ) = RAR T . Note that the map j ( x ) = xx T is equiv ariant si nce j ( R [ x ]) = j ([ R x ]) = ( Rx )( Rx ) T = Rj ([ x ]) R T = R ( j ([ x ])) . 
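As an illustration (our own numpy sketch, not code from the paper; `extrinsic_sample_cov` is a hypothetical name), (4.13) can be computed at once as $G_n = n^{-1}U^T U$, where $U$ is the $n \times mq$ matrix whose $(r,(s,a))$ entry is $(d_s(m+1)-d_s(a))^{-1}(g_s(a)^T X^s_r)(g_s(m+1)^T X^s_r)$:

```python
import numpy as np

def extrinsic_sample_cov(samples):
    """G_n of (4.13). samples: list of q arrays, each n x (m+1), rows = unit vectors.
    Returns the (mq) x (mq) matrix, pairs (s,a) in lexicographic order."""
    n = samples[0].shape[0]
    m = samples[0].shape[1] - 1
    cols = []
    for X in samples:
        d, g = np.linalg.eigh((X.T @ X) / n)   # eigenvalues of J_s, increasing order
        proj = X @ g                           # row r, column a: g_s(a)^T X_r^s
        # column block s: (d_s(m+1)-d_s(a))^{-1} (g_s(a)^T X)(g_s(m+1)^T X), a = 1..m
        cols.append(proj[:, :m] * proj[:, [m]] / (d[m] - d[:m]))
    U = np.hstack(cols)                        # n x (mq), blocks in lexicographic order
    return (U.T @ U) / n
```

By construction the result is symmetric positive semi-definite, as a sample covariance should be.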
Therefore, for $q \ge 1$, the group $(SO(m+1))^q$ acts as a group of isometries of $(\mathbb{R}P^m)^q$, and also on $(S_+(m+1,\mathbb{R}))^q$ via

(4.14) $(R_1,\dots,R_q)\cdot(A_1,\dots,A_q) = (R_1 A_1 R_1^T,\dots,R_q A_q R_q^T)$, $R_j \in SO(m+1)$, $j = 1,\dots,q$.

The map $j_k$ is equivariant with respect to this action, since

(4.15) $j_k((R_1,\dots,R_q)\cdot([x_1],\dots,[x_q])) = (R_1,\dots,R_q)\cdot j_k([x_1],\dots,[x_q])$, $\forall (R_1,\dots,R_q) \in (SO(m+1))^q$, $\forall ([x_1],\dots,[x_q]) \in (\mathbb{R}P^m)^q$.

We set $M = j_k((\mathbb{R}P^m)^q)$. Let $M^{m+1}_1$ be the set of all matrices of rank $1$ in $S_+(m+1)$. Note that $M$ is the direct product of $q$ copies of $M^{m+1}_1$. Recall that

(4.16) $P : (S_+(m+1,\mathbb{R}))^q \to M$

is the projection onto $M$. If $Y_r = ([X^1_r],\dots,[X^q_r])$, $r = 1,\dots,n$, are i.i.d.r.o.'s from a probability distribution on $(\mathbb{R}P^m)^q$, we set $V_r = j_k(Y_r)$ and let $\bar V$ be the sample mean of $V_r$, $r = 1,\dots,n$. By the equivariance of $j_k$, w.l.o.g. (without loss of generality) we may assume that $\bar V = \tilde D = (\tilde D_1,\dots,\tilde D_q)$, where $\tilde D_s \in S_+(m+1,\mathbb{R})$ is a diagonal matrix, $s = 1,\dots,q$. Therefore $\bar Y_{j_k,n} = ([g_1(m+1)],\dots,[g_q(m+1)])$, where $g_s(a) = e_a$, $\forall s = 1,\dots,q$, $\forall a = 1,\dots,m+1$, are the eigenvectors of $\tilde D_s$, and it is obvious that

(4.17) $j_k(\bar Y_{j_k,n}) = P(\bar V) = P(\tilde D)$.

Therefore w.l.o.g. we may assume that

(4.18) $g_s(a) = e_a$, $\forall s = 1,\dots,q$, $\forall a = 1,\dots,m+1$,

and that $j_k(p) = P(\bar V)$, with $p = ([e_{m+1}],\dots,[e_{m+1}])$. The tangent space $T_p(\mathbb{R}P^m)^q$ can be identified with $(\mathbb{R}^m)^q$, and with this identification $f_{(s,a)}$ in (4.10) is given by $f_{(s,a)} = (0,\dots,0,e_a,0,\dots,0)$, which has all vector components zero except for position $s$, which is the vector $e_a$ of the standard basis of $\mathbb{R}^m$. We may then assume that $e_{(s,a)}(\tilde D) := d_p j_k(f_{(s,a)})$.
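The equivariance $j(R[x]) = R\,j([x])\,R^T$ that drives this proof can be verified numerically; the following is merely a sanity check of ours, not an argument from the paper:

```python
import numpy as np

def j(x):
    """The embedding j([x]) = x x^T of an axis; note j(-x) = j(x)."""
    return np.outer(x, x)

rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # a random orthogonal matrix
R = Q if np.linalg.det(Q) > 0 else -Q              # force det = +1, so R is in SO(3)
x = rng.standard_normal(3)
x /= np.linalg.norm(x)
# equivariance j(R[x]) = R j([x]) R^T, and sign-independence of the axis
assert np.allclose(j(R @ x), R @ j(x) @ R.T)
assert np.allclose(j(-x), j(x))
```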
From a straightforward computation, which can be found in Bhattacharya and Patrangenaru (2005), it follows that $d_{\tilde D}P(^s e^b_a) = 0$, except for

(4.19) $d_{\tilde D}P(^s e^{m+1}_a) = \{d_s(m+1) - d_s(a)\}^{-1}\, e_{(s,a)}(P(\tilde D))$.

If $Y_r$, $r = 1,\dots,n$, is given by (4.7), from (4.19) and (4.2) we obtain

(4.20) $(G_n)_{(i,a),(j,b)} = n^{-1}\{d_i(m+1)-d_i(a)\}^{-1}\{d_j(m+1)-d_j(b)\}^{-1} \sum_{r=1}^n {}^i X^a_r\, {}^j X^b_r\, {}^i X^{m+1}_r\, {}^j X^{m+1}_r$,

which is (4.13) expressed in the selected basis, thus proving the Lemma.

The proof of Theorem 4.1 is elementary, following from Lemma 4.1 and from the observation that $V_1$ has a multivariate distribution with a finite covariance matrix $\Sigma$, since $(\mathbb{R}P^m)^q$ is compact. For $n$ large enough, $\bar V$ has approximately a multivariate normal distribution $N(\mu, \frac{1}{n}\Sigma)$, and from the delta method (Ferguson 1996, p. 45) it follows that

(4.21) $P(\bar V) \sim N\big(P(\mu) = j_k(\mu_{j_k}),\ \tfrac{1}{n}\, d_\mu P\, \Sigma\, d_\mu P^T\big)$.

The range of the differential $d_\mu P$ is a subspace of $T_{P(\mu)} j_k((\mathbb{R}P^m)^q)$; therefore the asymptotic distribution of $P(\bar V)$ is degenerate. If we decompose $(S(m+1))^q = T_{P(\mu)} j_k((\mathbb{R}P^m)^q) \oplus T_{P(\mu)} j_k((\mathbb{R}P^m)^q)^\perp$ into tangent and normal subspaces, then the covariance matrix of the tangential marginal distribution of $\tan P(\bar V)$ is $\frac{1}{n}\Sigma_E$, which is nondegenerate, because the determinant of the generalized extrinsic covariance, $\det(\Sigma_E) = \prod_{s=1}^q \prod_{a=1}^m \lambda_s(a)$, is positive. Because $\bar V$ is a strongly consistent estimator of $\mu$, and the sample covariance matrix $S_n$ is a strongly consistent estimator of $\Sigma$, from Slutsky's theorems (Ferguson, 1996, p. 42) it follows that $G_n$ in (4.13) is a strongly consistent estimator of $\Sigma_E$. Let $U = [({}^s U_1,\dots,{}^s U_m)_{s=1,\dots,q}]^T$ be the random vector whose components are the components of $\tan P(\bar V)$ w.r.t. the basis $e_{(s,a)}(\tilde D)$ given in the proof of Lemma 4.1.
Since $G_n$ is a consistent estimator of $\Sigma_E$, it follows that $Z_n = \sqrt{n}\, G_n^{-1/2} U$ converges to an $N(0, I_{mq})$-distributed random vector, and $Z_n^T Z_n$ converges to a random variable with a chi-square distribution with $mq$ degrees of freedom. Using the equivariance once again, one gets $Z_n^T Z_n = T(\bar Y_{j_k,n}; \mu)$ in (4.23), which completes the proof of Theorem 4.1.

In preparation for an asymptotic distribution of $\bar Y_{j_k,n}$, we set

(4.22) $D_s = (g_s(1),\dots,g_s(m)) \in M(m+1, m; \mathbb{R})$, $s = 1,\dots,q$.

If $\mu = ([\gamma_1],\dots,[\gamma_q])$, where $\gamma_s \in \mathbb{R}^{m+1}$, $\gamma_s^T \gamma_s = 1$ for $s = 1,\dots,q$, we define a Hotelling's $T^2$-type statistic

(4.23) $T(\bar Y_{j_k,n}; \mu) = n\, (\gamma_1^T D_1,\dots,\gamma_q^T D_q)\, G_n^{-1}\, (\gamma_1^T D_1,\dots,\gamma_q^T D_q)^T$.

THEOREM 4.1. Assume $(Y_r)_{r=1,\dots,n}$ are i.i.d.r.o.'s on $(\mathbb{R}P^m)^q$, and $Y_1$ is $j_k$-nonfocal. Let $\lambda_s(a)$ and $\gamma_s(a)$ be the eigenvalues in increasing order, respectively the corresponding unit eigenvectors, of $E[X^s_1 (X^s_1)^T]$. If $\lambda_s(1) > 0$ for $s = 1,\dots,q$, then $T(\bar Y_{j_k,n}; \mu_{j_k})$ converges weakly to a $\chi^2_{mq}$-distributed random variable.

If $Y_1$ is a $j_k$-nonfocal population on $(\mathbb{R}P^m)^q$, then, since $(\mathbb{R}P^m)^q$ is compact, $j_k(Y_1)$ has finite moments of sufficiently high order. According to Bhattacharya and Ghosh (1978), this, along with an assumption of a nonzero absolutely continuous component, suffices to ensure an Edgeworth expansion up to order $O(n^{-2})$ of the pivotal statistic $T(\bar Y_{j_k,n}; \mu_{j_k})$, and implicitly the bootstrap approximation of this statistic.

COROLLARY 4.1. Let $Y_r = ([X^1_r],\dots,[X^q_r])$, $(X^s_r)^T X^s_r = 1$, $s = 1,\dots,q$, $r = 1,\dots,n$, be i.i.d.r.o.'s from a $j_k$-nonfocal distribution on $(\mathbb{R}P^m)^q$ which has a nonzero absolutely continuous component, and with $\Sigma_E > 0$.
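Putting (4.13), (4.22) and (4.23) together, the statistic can be sketched as follows (again our own numpy transcription, with a hypothetical name `hotelling_T`; the helper recomputes $G_n$ internally so the block is self-contained):

```python
import numpy as np

def hotelling_T(samples, mu_axes):
    """T(Y_bar_{j_k,n}; mu) of (4.23); asymptotically chi^2_{mq} under H0.
    samples: list of q arrays, each n x (m+1); mu_axes: q unit vectors gamma_s."""
    n = samples[0].shape[0]
    m = samples[0].shape[1] - 1
    v, cols = [], []
    for X, gamma in zip(samples, mu_axes):
        d, g = np.linalg.eigh((X.T @ X) / n)   # spectrum of J_s, increasing order
        v.append(gamma @ g[:, :m])             # gamma_s^T D_s, with D_s as in (4.22)
        proj = X @ g
        cols.append(proj[:, :m] * proj[:, [m]] / (d[m] - d[:m]))
    U = np.hstack(cols)
    G = (U.T @ U) / n                          # G_n of (4.13)
    v = np.concatenate(v)
    return n * (v @ np.linalg.solve(G, v))
```

Under $H_0$ the value is compared with $\chi^2_{mq}$ quantiles; for hypothesized mean axes far from the sample mean axes the statistic is orders of magnitude larger.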
For a random resample with repetition $(Y^*_1,\dots,Y^*_n)$ from $(Y_1,\dots,Y_n)$, consider the eigenvalues $d^*_s(a)$, $a = 1,\dots,m+1$, of $\frac{1}{n}\sum_{r=1}^n X^{s*}_r (X^{s*}_r)^T$ in their increasing order, and the corresponding unit eigenvectors $g^*_s(a)$, $a = 1,\dots,m+1$. Let $G^*_n$ be the matrix obtained from $G_n$ by substituting all the entries with $*$-entries. Then the bootstrap distribution function of the statistic

(4.24) $T(\bar Y^*_{j_k}; \bar Y_{j_k}) = n\, (g_1(m+1)^T D^*_1,\dots,g_q(m+1)^T D^*_q)\, G^{*-1}_n\, (g_1(m+1)^T D^*_1,\dots,g_q(m+1)^T D^*_q)^T$

approximates the true distribution of $T(\bar Y_{j_k,n}; \mu_{j_k})$ given by (4.23), with an error of order $O_P(n^{-2})$.

REMARK 4.3. The above corollary is from Mardia and Patrangenaru (2005). Formula (4.24) in that paper has unnecessary asterisks for $g_s(m+1)$, a typo that is corrected here. Also, the condition $\Sigma_E > 0$ is missing there, as well as in their Theorem 5.1. Another typo in Mardia and Patrangenaru (2005) is in their definition of $\tilde D_s$: the last column of $\tilde D_s$ should not be there. The correct formula is (4.22). Note that $\tilde D_s = (D_s \mid g_s(m+1))$.

Theorem 4.1 and Corollary 4.1 are useful in estimation and testing for mean projective shapes. We may derive from (4.1) the following large sample confidence region for an extrinsic mean projective shape.

COROLLARY 4.2. Assume $(Y_r)_{r=1,\dots,n}$ are i.i.d.r.o.'s from a $j_k$-nonfocal probability distribution on $(\mathbb{R}P^m)^q$, and $\Sigma_E > 0$. An asymptotic $(1-\alpha)$-confidence region for $\mu_{j_k} = [\nu]$ is given by

$R_\alpha(Y) = \{[\nu] : T(\bar Y_{j_k,n}; [\nu]) \le \chi^2_{mq,\alpha}\}$,

where $T(\bar Y_{j_k,n}; [\nu])$ is given in (4.23). If the probability measure of $Y_1$ has a nonzero absolutely continuous component w.r.t. the volume measure on $(\mathbb{R}P^m)^q$, then the coverage error of $R_\alpha(Y)$ is of order $O_P(n^{-1})$.
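The bootstrap cutoff $c^*_{1-\alpha}$ used below in Corollary 4.4 can be approximated by Monte Carlo, as in the following sketch (our own naming; `B` is the number of bootstrap resamples, and the whole $k$-ad is resampled, so one index draw is shared across the $q$ axial coordinates):

```python
import numpy as np

def T_stat(samples, mu_axes):
    """The quadratic form of (4.23)/(4.24) evaluated at the axes mu_axes."""
    n = samples[0].shape[0]
    m = samples[0].shape[1] - 1
    v, cols = [], []
    for X, gamma in zip(samples, mu_axes):
        d, g = np.linalg.eigh((X.T @ X) / n)
        v.append(gamma @ g[:, :m])
        proj = X @ g
        cols.append(proj[:, :m] * proj[:, [m]] / (d[m] - d[:m]))
    U = np.hstack(cols)
    G = (U.T @ U) / n
    v = np.concatenate(v)
    return n * (v @ np.linalg.solve(G, v))

def bootstrap_cutoff(samples, B=400, alpha=0.05, rng=None):
    """Upper 100(1-alpha)% point c*_{1-alpha} of the values T(Y*; Y_bar) of (4.24)."""
    if rng is None:
        rng = np.random.default_rng()
    n = samples[0].shape[0]
    # the sample mean axes g_s(m+1) play the role of mu in (4.24)
    means = [np.linalg.eigh((X.T @ X) / n)[1][:, -1] for X in samples]
    Ts = []
    for _ in range(B):
        idx = rng.integers(0, n, n)        # resample with repetition, shared across axes
        Ts.append(T_stat([X[idx] for X in samples], means))
    return float(np.quantile(Ts, 1.0 - alpha))
```

Note that $T$ evaluated at the sample mean axes themselves vanishes, since $g_s(m+1)$ is orthogonal to the columns of $D_s$; the bootstrap values fluctuate around the $\chi^2_{mq}$ scale.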
For small samples the coverage error could be quite large, and the bootstrap analogue in Corollary 4.1 is preferable. Consider, for example, the one-sample testing problem for mean projective shapes:

(4.25) $H_0 : \mu_{j_k} = \mu_0$ vs. $H_1 : \mu_{j_k} \ne \mu_0$.

COROLLARY 4.3. The large sample p-value for the testing problem (4.25) is $p = \Pr(T > T(\bar Y_{j_k,n}; \mu_0))$, where $T$ is a $\chi^2_{mq}$-distributed random variable and $T(\bar Y_{j_k,n}; \mu)$ is given by (4.23).

In the small sample case, problem (4.25) can be answered based on Corollary 4.1, to obtain the following $100(1-\alpha)\%$ bootstrap confidence region for $\mu_{j_k}$:

COROLLARY 4.4. Under the hypotheses of Corollary 4.1, the corresponding $100(1-\alpha)\%$ confidence region for $\mu_{j_k}$ is

(4.26) $C^*_{n,\alpha} := j_k^{-1}(U^*_{n,\alpha})$,

with $U^*_{n,\alpha}$ given by

(4.27) $U^*_{n,\alpha} = \{\mu \in j_k((\mathbb{R}P^m)^q) : T(\bar Y_{j_k,n}; \mu) \le c^*_{1-\alpha}\}$,

where $c^*_{1-\alpha}$ is the upper $100(1-\alpha)\%$ point of the values of $T(\bar Y^*_{j_k}; \bar Y_{j_k})$ given by (4.24). The region given by (4.26)-(4.27) has coverage error $O_P(n^{-2})$.

If $\Sigma_E$ is singular and all the marginal axial distributions have positive definite extrinsic covariance matrices, one may use simultaneous confidence ellipsoids to estimate $\mu_{j_k}$. Assume $(Y_r)_{r=1,\dots,n}$ are i.i.d.r.o.'s from a $j_k$-nonfocal probability distribution on $(\mathbb{R}P^m)^q$. For each $s = 1,\dots,q$, let $\Sigma_s$ be the extrinsic covariance matrix of $Y^s_1$, and let $\bar Y^s_{j,n}$ and $G_{s,n}$ be the extrinsic sample mean and the extrinsic sample covariance matrix of the $s$-th marginal axial distribution. If the probability measure of $Y^s_1$ has a nonzero absolutely continuous component w.r.t. the volume measure on $\mathbb{R}P^m$, then for $s = 1,\dots,q$ and for $[\gamma_s] \in \mathbb{R}P^m$, $\gamma_s^T\gamma_s = 1$, we consider the statistics

(4.28) $T_s = T_s(\bar Y^s_{j,n}; [\gamma_s]) = n\, \gamma_s^T D_s\, G_{s,n}^{-1}\, D_s^T \gamma_s$,

and the corresponding bootstrap distributions

(4.29) $T^*_s = T_s(\bar Y^{s*}_{j,n}; \bar Y^s_{j,n}) = n\, g_s(m+1)^T D^*_s\, G^{*-1}_{s,n}\, D^{*T}_s\, g_s(m+1)$.

Since by Corollary 4.1 $T_s$ has asymptotically a $\chi^2_m$ distribution, we obtain the following:

COROLLARY 4.5. For $s = 1,\dots,q$, let $c^*_{s,1-\alpha}$ be the upper $100(1-\alpha)\%$ point of the values of $T^*_s$ given by (4.29). We set

(4.30) $C^*_{s,n,\beta} := j^{-1}(U^*_{s,n,\beta})$,

with $U^*_{s,n,\beta}$ given by

(4.31) $U^*_{s,n,\beta} = \{\mu \in j(\mathbb{R}P^m) : T_s(\bar Y^s_{j,n}; \mu) \le c^*_{s,1-\beta}\}$.

If

(4.32) $R^*_{n,\alpha} = \bigcap_{s=1}^q C^*_{s,n,\alpha/q}$,

with $C^*_{s,n,\beta}$, $U^*_{s,n,\beta}$ given by (4.30)-(4.31), then $R^*_{n,\alpha}$ is a region of at least $100(1-\alpha)\%$ confidence for $\mu_{j_k}$. The coverage error is of order $O_P(n^{-2})$.

REMARK 4.4. If $\Sigma_E$ is singular, one may also use a method for constructing nonpivotal bootstrap confidence regions for $\mu_{j_k}$ using Corollary 5.1 of Bhattacharya and Patrangenaru (2003).

ACKNOWLEDGEMENT. The authors wish to thank the National Security Agency and the National Science Foundation for their generous support. We would also like to thank Rabi N. Bhattacharya and Adina Patrangenaru for their suggestions that helped improve the manuscript.

References

[1] Bhattacharya, R.N. and Ghosh, J.K. (1978). On the validity of the formal Edgeworth expansion. Ann. Statist. 6, 434-451.

[2] Bhattacharya, R.N. and Patrangenaru, V. (2003). Large sample theory of intrinsic and extrinsic sample means on manifolds - I. Ann. Statist. 31, no. 1, 1-29.

[3] Bhattacharya, R.N. and Patrangenaru, V. (2005). Large sample theory of intrinsic and extrinsic sample means on manifolds - II. Ann. Statist. 33, no. 3, 1211-1245.

[4] http://vision.stanford.edu/~birch/projective

[5] Dimitric, I. (1996). A note on equivariant embeddings of Grassmannians. Publ. Inst. Math. (Beograd) (N.S.) 59, 131-137.

[6] Efron, B. (1979). Bootstrap methods: another look at the jackknife. Ann. Statist. 7, no. 1, 1-26.

[7] Efron, B. (1982). The Jackknife, the Bootstrap and Other Resampling Plans. CBMS-NSF Regional Conference Series in Applied Mathematics, 38. SIAM.

[8] Faugeras, O.D. (1992). What can be seen in three dimensions with an uncalibrated stereo rig? In Proc. European Conference on Computer Vision, LNCS 588, pp. 563-578.

[9] Ferguson, T. (1996). A Course in Large Sample Theory. Chapman & Hall.

[10] Goodall, C. and Mardia, K.V. (1999). Projective shape analysis. J. Graphical and Computational Statist. 8, 143-168.

[11] Hartley, R.I., Gupta, R. and Chang, T. (1992). Stereo from uncalibrated cameras. In Proc. IEEE Conference on Computer Vision and Pattern Recognition.

[12] Hartley, R.I. (1993). Projective reconstruction and invariants from multiple images. Preprint.

[13] Hartley, R. and Zisserman, A. (2004). Multiple View Geometry in Computer Vision, 2nd edition. Cambridge University Press.

[14] Kent, J.T. and Mardia, K.V. (2006). A new representation for projective shape. In S. Barber, P.D. Baxter, K.V. Mardia and R.E. Walls (eds.), Interdisciplinary Statistics and Bioinformatics, pp. 75-78. Leeds, Leeds University Press. http://www.maths.leeds.ac.uk/lasr2006/proceedings/

[15] Kent, J.T. and Mardia, K.V. (2007). Procrustes methods for projective shape. In S. Barber, P.D. Baxter and K.V. Mardia (eds.), Systems Biology and Statistical Bioinformatics, pp. 37-40. Leeds, Leeds University Press. http://www.maths.leeds.ac.uk/lasr2007/proceedings/

[16] Lee, J.L., Paige, R., Patrangenaru, V. and Ruymgaart, F. (2004). Nonparametric density estimation on homogeneous spaces in high level image analysis. In R.G. Aykroyd, S. Barber and K.V. Mardia (eds.), Bioinformatics, Images, and Wavelets, pp. 37-40. Department of Statistics, University of Leeds. http://www.maths.leeds.ac.uk/Statistics/workshop/leeds2004/temp

[17] Liu, X., Patrangenaru, V. and Sugathadasa, S. (2007). Projective Shape Analysis for Noncalibrated Pinhole Camera Views. To the Memory of W.P. Dayawansa. Florida State University, Department of Statistics, Technical Report M983.

[18] Ma, Y., Soatto, S., Kosecka, J. and Sastry, S.S. (2006). An Invitation to 3-D Vision. Springer, New York.

[19] Mardia, K.V. and Patrangenaru, V. (2005). Directions and projective shapes. Ann. Statist. 33, no. 4, 1666-1699.

[20] Mardia, K.V., Goodall, C. and Walder, A.N. (1996). Distributions of projective invariants and model-based machine vision. Adv. in Appl. Probab. 28, 641-661.

[21] Maybank, S.J. (1994). Classification based on the cross ratio. In Applications of Invariance in Computer Vision. Lecture Notes in Comput. Sci. 825 (J.L. Mundy, A. Zisserman and D. Forsyth, eds.), 433-472. Springer, Berlin.

[22] Munk, A., Paige, R., Pang, J., Patrangenaru, V. and Ruymgaart, F. (2007). The one- and multi-sample problem for functional data with applications to projective shape analysis. To appear in Journal of Multivariate Analysis.

[23] Paige, R., Patrangenaru, V., Ruymgaart, F. and Wang, W. (2005). Analysis of projective shapes of curves using projective frames. In S. Barber, P.D. Baxter, K.V. Mardia and R.E. Walls (eds.), Quantitative Biology, Shape Analysis, and Wavelets, pp. 71-74. Leeds, Leeds University Press. http://www.maths.leeds.ac.uk/Statistics/workshop/leeds2005/temp

[24] Patrangenaru, V. (1999). Moving projective frames and spatial scene identification. In Proceedings in Spatial-Temporal Modeling and Applications (K.V. Mardia, R.G. Aykroyd and I.L. Dryden, eds.), pp. 53-57. Leeds University Press.

[25] Patrangenaru, V. (2001). New large sample and bootstrap methods on shape spaces in high level analysis of natural images. Commun. Statist. 30, 1675-1693.

[26] Patrangenaru, V. and Sughatadasa, S. (2006). Reconstruction of 3D scenes and projective shape analysis. In Program of the 34th Annual Meeting of the Statistical Society of Canada, May 28-31, 2006, University of Western Ontario. Abstracts, p. 168. http://www.ssc.ca/2006/documents/meeting.pdf

[27] Prentice, M.J. (1984). A distribution-free method of interval estimation for unsigned directional data. Biometrika 71, 147-154.

[28] Sughatadasa, S. (2006). Affine and Projective Shape Analysis with Applications. Ph.D. Thesis, Texas Tech University.

[29] Watson, G.S. (1983). Statistics on Spheres. University of Arkansas Lecture Notes in the Mathematical Sciences, 6. Wiley-Interscience, John Wiley and Sons, New York.