On the Schoenberg Transformations in Data Analysis: Theory and Illustrations

The class of Schoenberg transformations, embedding Euclidean distances into higher-dimensional Euclidean spaces, is presented and derived from theorems on positive definite and conditionally negative definite matrices. Original results on the arc le…

Authors: François Bavaud

Schoenberg transformations are elementwise mappings of Euclidean distances into new Euclidean distances, embeddable in a higher-dimensional space. Their potential in Data Analysis seems evident in view of the omnipresence of Euclidean dissimilarities in Multidimensional Scaling (MDS), Factor Analysis, Correspondence Analysis or Clustering. Yet, despite its respectable age (Schoenberg 1938a), the properties and the very existence of this class of transformations appear to be little known in the Data Analytic community. Non-linear embeddings of original data into higher-dimensional feature spaces are familiar in the Machine Learning community, which however bases its formalism upon kernels, which are positive definite (p.d.) matrices, rather than upon squared Euclidean distances, which are conditionally negative definite (c.n.d.) matrices with a null diagonal. Some aspects of the correspondence between p.d. and c.n.d. matrices are well known in Data Analysis and lie at the core of classical MDS (Theorems 1 and 2). Other aspects (Theorem 4), central to the derivation of Schoenberg transformations (Definition 2), are less well known. Section 2 is a self-contained review of all those results, scattered in the literature, together with their proofs. Section 3 analyses some of the general properties of Schoenberg transformations and yields original results about angles, arc lengths and curvatures. Section 4 illustrates the non-linear and spectral properties of the transformations on two artificial data sets, the grid and the rod. An elementary yet efficient distance-based linear discriminant algorithm is presented in Section 5. Section 6 proposes in conclusion to revisit the Machine Learning formalism in terms of Euclidean distances rather than in terms of kernels.

2 Definitions and Theorems

Classical multidimensional scaling (MDS) (e.g. Borg and Groenen 1997) can be performed iff the eigenvalues of the so-called matrix of scalar products are non-negative. For the sake of concision, we shall refer to such a matrix as positive definite (instead of "positive semi-definite"), while a strictly positive definite matrix will be characterized by strictly positive eigenvalues. Vectors are meant as column vectors. I denotes the identity matrix, and 1 the unit vector, all of whose components are unity. Depending upon the context, the "prime" denotes either the transpose of a matrix or the derivative of a scalar function.

Consider a signed distribution a on n objects, that is a vector obeying Σ_{i=1}^n a_i = 1, where some components are possibly negative. Consider also the n × n centering matrix H(a) = I − 1a′, with components δ_ij − a_j. Let C be a symmetric n × n matrix, and define the matrix

B(a) := −(1/2) H(a) C H′(a).   (1)

Theorem 1: B(a) is p.d. iff C is c.n.d., that is iff (y, Cy) ≤ 0 for every y with Σ_i y_i = 0.

Proof: first observe that if B(a) is p.d., then B(ã) is also p.d. for any other signed distribution ã, in view of the identity B(ã) = H(ã) B(a) H′(ã), itself a consequence of H(ã) H(a) = H(ã). Also, for any z, (z, B(a)z) = −(1/2) (y, Cy), where the vector y = H′(a)z obeys Σ_i y_i = 0 for any z, showing "⇐". Also, y = H′(a)y whenever Σ_i y_i = 0, and hence (y, B(a)y) = −(1/2) (y, Cy), thus demonstrating "⇒".

Theorem 2 (classical MDS): define the matrix Ĉ with components ĉ_ij := c_ij − (1/2) c_ii − (1/2) c_jj. Then

B̂(a) = B(a)   and   ĉ_ij = b_ii(a) + b_jj(a) − 2 b_ij(a),   (2)

where B̂(a) is built from Ĉ as B(a) is built from C in (1). Moreover, C is c.n.d. iff Ĉ is c.n.d. In this case, the components ĉ_ij are "isometrically embeddable in l²", that is representable as squared Euclidean distances D_ij between n objects as

ĉ_ij = D_ij = Σ_{α=1}^p (x_iα − x_jα)²,   (3)

where the object coordinates can be chosen as

x_iα = √(λ_α(a)) u_iα(a),   (4)

where the λ_α(a) are the diagonal components of the diagonal matrix Λ(a) and the u_iα(a) are the components of the orthogonal matrix U(a) occurring in the spectral decomposition B(a) = U(a) Λ(a) U′(a). Proof: the first identity in (2) follows from H(a)1 = 0, and the second one from the identity H(a) C H′(a) = C − 1γ′ − γ1′, valid for some vector γ.
The next assertion follows from (y, Cy) = (y, Ĉy) whenever Σ_i y_i = 0, and identity (3) can be shown to amount to the second identity in (2) by direct substitution of the coordinates (4). The p.d. nature of B(a) (Theorem 1) is crucial to ensure the non-negativity of the eigenvalues λ_α. The identity H′(a)a = 0 yields B(a)a = 0. Hence, at least one eigenvalue is zero and p ≤ n − 1 in (3).

Theorems 1 and 2 show that any p.d. matrix B, or equivalently any c.n.d. matrix C, defines a unique set of squared Euclidean distances D between objects (Torgerson 1958; Gower 1966). The latter can be shown (e.g. from (4)) to obey the celebrated Huygens principle, namely

Σ_{j=1}^n a_j D_ij = D_ia + Δ_a,   with   Δ_a := (1/2) Σ_{jk} a_j a_k D_jk,   (5)

where D_ia denotes the squared distance between object i (with coordinates x_i) and the a-barycenter defined by the coordinates x̄_a = Σ_j a_j x_j. Also, Δ_a ≥ 0 can be interpreted as the average dispersion of the cloud, provided a is a non-negative distribution representing the relative weights of the objects. In the general case of a signed distribution, Δ_a is still well defined, but can be negative. The squared Euclidean distance between the barycenters x̄_a and x̄_b associated to two signed distributions a and b can also be shown to satisfy

D_ab := ‖x̄_a − x̄_b‖² = −(1/2) Σ_{ij} (a_i − b_i)(a_j − b_j) D_ij,   (6)

which directly demonstrates the c.n.d. nature of D (since z_i = a_i − b_i obeys Σ_i z_i = 0). Also, (6) entails (5) with the choice b_j = δ_jk for some k. Note also that B_ij(a) = (1/2)(D_ia + D_ja − D_ij), which, by the cosine theorem, is the matrix of the scalar products between x_i and x_j as measured from the origin x̄_a.

Low-dimensional factorial reconstructions (that is, limiting the sum in (3) to the largest eigenvalues) express a maximum amount of tr(B(a)) = Σ_i D_ia. This quantity, without direct interpretation, is proportional to the uniform dispersion of the coordinates cloud with respect to the point x̄_a. The dispersion tr(B(a)) is minimum when a is the uniform distribution, a standard choice in classical MDS (see e.g. Mardia et al. 1979). Concentrating the mass of a on a single existing object, typically the last one, is often proposed for computational convenience. Other prescriptions consider a_i as proportional to the precision of measurement of object i (see e.g. Borg and Groenen 1997), or set a_i = 0 for objects whose behavior might influence excessively the overall configuration, as in the treatment of "supplementary elements" in Correspondence Analysis (see e.g. Benzécri 1992; Lebart, Morineau and Piron 1998; Meulman, van der Kooij and Heiser 2004; Greenacre and Blasius 2006). Other choices, such as the circumcenter or the incenter, are discussed in Gower (1982). Note that the signed nature of a allows one to define an external origin x̄_a lying outside the convex hull of the n points, resulting in B_ij ≥ 0 for all pairs.

As a matter of fact, the choice of the origin a and the choice of the object weights f constitute two distinct operations, as made explicit by the following generalization of classical MDS (Cuadras and Fortiana 1996; Bavaud 2006, 2009):

Theorem 3 (weighted MDS): let f be a distribution of positive object weights and a a signed distribution. Then D is squared Euclidean iff the matrix of weighted scalar products K(a), with components K_ij(a) = √(f_i f_j) B_ij(a), is p.d. The object coordinates can be chosen as

x_iα = √(λ_α(a) / f_i) u_iα(a),   (7)

where the eigenvalues λ_α(a) and eigenvectors u_iα(a) are obtained from the spectral decomposition K(a) = U(a) Λ(a) U′(a). Moreover, the corresponding low-dimensional factorial reconstruction, retaining in (7) only the components α associated with the largest eigenvalues, expresses a maximum proportion of the total inertia relative to a, namely

Σ_i f_i D_ia = Σ_α λ_α(a) = Δ_f + D_fa.   (8)

The proof follows from the definitions and Theorem 2 by direct substitution.
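As a concrete numerical illustration of Theorems 1 and 2, the following minimal sketch (in Python with numpy; function and variable names are ours, not the paper's) reconstructs coordinates from a matrix of squared Euclidean distances via the spectral decomposition of B(a), here with the uniform distribution a, and checks that the distances are recovered as in (3).

```python
import numpy as np

def mds_coordinates(D, a=None):
    """Classical MDS (Theorems 1-2): coordinates x_{i,alpha} = sqrt(lambda_alpha) u_{i,alpha}
    obtained from the spectral decomposition of B(a) = -1/2 H(a) D H(a)'."""
    n = D.shape[0]
    a = np.full(n, 1.0 / n) if a is None else np.asarray(a, dtype=float)  # uniform a by default
    H = np.eye(n) - np.outer(np.ones(n), a)            # centering matrix H(a) = I - 1a'
    B = -0.5 * H @ D @ H.T                             # scalar products B(a), p.d. iff D is squared Euclidean
    lam, U = np.linalg.eigh(B)
    lam, U = lam[::-1], U[:, ::-1]                     # eigenvalues in decreasing order
    keep = lam > 1e-10                                 # at most n-1 strictly positive eigenvalues
    return U[:, keep] * np.sqrt(lam[keep])

# check on a small random configuration: the squared distances are recovered exactly
rng = np.random.default_rng(0)
Y = rng.normal(size=(6, 3))
D = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
X = mds_coordinates(D)
assert np.allclose(D, ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1))
```

The same computation with K_ij(a) = √(f_i f_j) B_ij(a) and coordinates (7) would implement the weighted variant of Theorem 3.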
The last identity in (8) is a consequence of (5), and shows in particular the total inertia to be minimum for a = f, as expected. When f is uniform, the eigenvalues in Theorems 2 and 3 coincide up to a factor n.

If A = (a_ij) and B = (b_ij) are p.d. matrices of the same order, so are cA for c ≥ 0, (t_i a_ij t_j) for any vector t, the sum A + B, the elementwise (Schur) product A ∘ B = (a_ij b_ij), as well as elementwise limits of p.d. matrices. These closure properties yield the following classical results:

Theorem 4: C is c.n.d. iff the matrix with components exp(−λ c_ij) is p.d. for every λ ≥ 0.

Theorem 5: let D be a matrix of squared Euclidean distances and λ ≥ 0. Then the matrix with components exp(−λ D_ij) is p.d., and the matrix D̃(λ) with components D̃_ij(λ) = (1 − exp(−λ D_ij))/λ is again a matrix of squared Euclidean distances.

Proof: the first assertion follows from Theorem 4, and the second from Theorem 2, together with the fact that D̃_ij(λ) can easily be shown to be c.n.d. with a zero diagonal.

More generally, any mixture of D̃(λ) over λ ≥ 0 is a squared Euclidean distance, yielding the following definition and theorem:

Definition 2 (Schoenberg transformations): a Schoenberg transformation is a function ϕ(D), defined for D ≥ 0, of the form

ϕ(D) = ∫_0^∞ (1 − exp(−λD)) g(λ)/λ dλ,   (9)

where g(λ) dλ is a non-negative measure on [0, ∞) such that ∫_1^∞ (g(λ)/λ) dλ < ∞. If D is a matrix of squared Euclidean distances, then so is the componentwise transformed matrix ϕ(D).

Note that (9) entails ϕ(D) ≥ 0 and ϕ(0) = 0, together with

ϕ′(D) = ∫_0^∞ exp(−λD) g(λ) dλ,   (10)

where ϕ′(D) denotes the derivative of ϕ(D).

3 Some properties of the Schoenberg transformations

By construction, ϕ′(D) in (10) coincides with the class of completely monotonic functions f(D) obeying (−1)^n f^(n)(D) ≥ 0 (Bernstein 1929). Hence Schoenberg transformations are characterized by ϕ(D) ≥ 0 with ϕ(0) = 0, with positive odd derivatives ϕ′(D), ϕ′′′(D), etc., and negative even derivatives ϕ″(D), ϕ⁗(D), etc. (see Table 1). In particular, √D is Euclidean whenever D is Euclidean. Also, the identity transformation ϕ(D) = D obtains from g(λ) = δ(λ). The latter contribution can be made explicit in the following variant, equivalent to Definition 2 (see e.g. Berg et al. 2008):

ϕ(D) = b D + ∫_(0,∞) (1 − exp(−λD)) μ(dλ),   with b ≥ 0,

where μ is a non-negative measure on (0, ∞) such that ∫_(0,∞) min(1, λ) μ(dλ) < ∞. There exists an extensive literature on Bernstein functions (see e.g. Berg et al. 2008; Schilling et al. 2010; and references therein), defined as the smooth non-negative functions whose first derivative is completely monotonic. Hence, Schoenberg transformations coincide with the class of Bernstein functions which are zero at the origin, in the same way that squared Euclidean distances are c.n.d. matrices with zero diagonal (Theorem 2). By construction, Schoenberg transformations are closed under composition, as exemplified by ϕ_6 = ϕ_4 ∘ ϕ_5 in Table 1.

A Schoenberg transformation acts as an anamorphosis between Euclidean spaces: to any initial configuration of points X, with mutual squared Euclidean distances D(X), corresponds a transformed configuration X̃, reconstructible by MDS from D̃ = ϕ(D). By construction, the mapping X̃(X) is unique up to a translation and a rotation.

Consider a smooth curve C whose arc length is parameterized by s, containing two close points at mutual distance Δs. The corresponding distance on the transformed curve C̃ is Δs̃ = √(ϕ((Δs)²)). By l'Hospital's rule, the ratio of the infinitesimal arc lengths is

ds̃/ds = lim_{Δs→0} √(ϕ((Δs)²))/Δs = √(ϕ′(0)),

which might be finite or not. On the other hand, infinitely distant points in the original space might be infinitely distant or not in the transformed space:

Definition 3: the transformation ϕ(D) is said to be rectifiable if ϕ′(0) = ∫_0^∞ g(λ) dλ < ∞, and bounded if lim_{D→∞} ϕ(D) = ∫_0^∞ (g(λ)/λ) dλ < ∞.

Consider a triangle ijk with a right angle in k, so that D_ij = D_ik + D_jk by Pythagoras' theorem. Yet, in the transformed space, D̃_ij ≤ D̃_ik + D̃_jk, since ϕ(D_1 + D_2) ≤ ϕ(D_1) + ϕ(D_2), which can be demonstrated by integrating (1 − exp(−λD_1))(1 − exp(−λD_2)) ≥ 0 as in (9). That is, the Schoenberg transformation α̃ of a right angle α = π/2 is in general acute. By the cosine theorem,

cos α̃ = (D̃_ik + D̃_jk − D̃_ij) / (2 √(D̃_ik D̃_jk)).   (11)

Under uniform linear dilatation of the original right-angled triangle by a factor ε > 0, (11) readily yields that lim_{ε→∞} α̃(ε) = π/3 whenever ϕ is bounded, and lim_{ε→0} α̃(ε) = π/2 whenever ϕ is rectifiable.
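Before turning to the illustrations of Section 4, the defining property of Schoenberg transformations can be checked numerically. The short sketch below (Python with numpy; the helper name and the sample configuration are ours) verifies that transformations such as D^0.4 or the "Gaussian" transformation 1 − exp(−aD), applied elementwise to squared Euclidean distances, again yield squared Euclidean distances (B(a) remains p.d., Theorem 1), whereas an arbitrary increasing transformation generally does not.

```python
import numpy as np

def is_squared_euclidean(D, tol=1e-8):
    """Theorems 1-2: D is squared Euclidean iff B = -1/2 H D H' is p.d. (uniform a)."""
    n = D.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.linalg.eigvalsh(-0.5 * H @ D @ H.T).min() > -tol

rng = np.random.default_rng(1)
Y = rng.normal(size=(30, 2))
D = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)

print(is_squared_euclidean(D))                   # True: original squared distances
print(is_squared_euclidean(D ** 0.4))            # True: power transformation (Schoenberg)
print(is_squared_euclidean(1 - np.exp(-2 * D)))  # True: Gaussian transformation (Schoenberg)
print(is_squared_euclidean(D ** 3))              # typically False: D^3 is not a Schoenberg transformation
```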
Straight lines are bent by Schoenberg transformations: think of a rod whose linear distances d between constituents are contracted as, say, √d. The curvature in the transformed space can be measured as follows: consider in the original space three aligned points i, k, j with d_ik = d_kj = ε and d_ij = 2ε. The Menger curvature κ̃ is defined as the limit (Blumenthal 1953, p. 75)

κ̃ = lim_{ε→0} 4 Ã_ijk / (d̃_ik d̃_kj d̃_ij),

where Ã_ijk is the area of the triangle ijk in the transformed space and d̃ denotes the length of the corresponding sides. Heron's formula yields after simplification

κ̃ = lim_{ε→0} √(4 ϕ(ε²) − ϕ(4ε²)) / ϕ(ε²) = √(−6 ϕ″(0)) / ϕ′(0),

where l'Hospital's rule has been used twice in the last equality, under the assumption of rectifiability.

Consider n = 100 points forming the bidimensional grid of Figure 1a), on which the transformation ϕ(D) = D^0.4 is applied. Figures 1b) and 1c) depict the first four dimensions of the transformed configuration, expressing altogether 62% of the total inertia.

Figure 2 depicts the low-order projections (b, c, d, e and f) of the non-rectifiable square root transformation D̃ = √D of a quasi-unidimensional rod of n = 1000 points, uniformly generated as X_1 ∼ U(0, 1000) and X_2 ∼ U(0, 1) (a). As expected, the transformed rod is bent, although the curvature formula of Section 3.4 does not apply here (ϕ′(0) = ∞). The transformation of a line is called a "screw line" by von Neumann and Schoenberg (1941), and a "helix" by Kolmogorov (1940), an adequate terminology in view of Figure 2. The first MDS dimensions turn out to express 61.0%, respectively 15.1%, of the relative inertia. Analytic arguments, to be developed in a forthcoming publication, demonstrate the corresponding exact quantities to be 6/π² ≈ 60.8%, respectively 3/(2π²) ≈ 15.2%, for a line.

Consider a collection of objects i = 1, ..., n endowed with p-dimensional features, yielding squared Euclidean distances D_ij between objects, possibly after standardization and/or orthogonalization of the features (Mahalanobis distances). Suppose also that each object belongs to a group g = 1, ..., m. An elementary discriminant strategy consists in assigning each object i to the group g whose centroid is closest to i, that is, to assign i to arg min_g D_ig: this is the linear discriminant prescription of Fisher (1936), successfully applied to the Iris data (n = 150, p = 4, m = 3) with a percentage of well-classified individuals as high as 97%. The same strategy is bound to fail with the data of Figure 3 (n = 150, p = 2, m = 3), reaching a percentage of well-classified individuals of 35%, close to the expected value of 33% under random attribution. However, linear discrimination can be attempted on Schoenberg transformations of the original distances, resulting in the following algorithm (see (5)):

Distance-based discriminant algorithm:
1) compute the transformed distances D̃_ij = ϕ(D_ij), together with the squared distances D̃_ig = Σ_j a_j^g D̃_ij − (1/2) Σ_jk a_j^g a_k^g D̃_jk from each object i to the centroid of each group g, as in (5), where a^g (uniform on the members of group g and zero elsewhere) is the distribution in group g;
2) assign object i to group arg min_g D̃_ig.

Figure 4 shows the resulting proportion of well-classified individuals for various one-parameter families of transformations ϕ(D|a). In this data set, the maximum proportion of well-classified individuals reaches 100% for the Gaussian transformation (for a ≥ 0.65). That is, a sufficiently vigorous Schoenberg transformation succeeds in mapping the initial configuration of Figure 3 in such a way that the three groups can be enclosed in three associated disjoint hyperspheres.
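The algorithm above fits in a few lines of code. The following sketch (Python with numpy; function names, parameter value and the synthetic three-ring data are ours, not the configuration of Figure 3) applies the Gaussian transformation and assigns each object to the nearest transformed group centroid, using only the distance matrix through the Huygens principle (5).

```python
import numpy as np

def group_assignments(D, labels, phi=lambda d: 1 - np.exp(-0.65 * d)):
    """Distance-based discriminant (Section 5): transform the squared distances with a
    Schoenberg transformation phi, compute the squared distances to each group centroid
    through the Huygens principle (5), and assign each object to the closest centroid."""
    Dt = phi(D)                                    # step 1: transformed distances
    groups = np.unique(labels)
    D_ig = np.empty((D.shape[0], groups.size))
    for k, g in enumerate(groups):
        a = (labels == g) / np.sum(labels == g)    # uniform distribution a^g on group g
        D_ig[:, k] = Dt @ a - 0.5 * (a @ Dt @ a)   # (5): D~_ig = sum_j a_j D~_ij - Delta_a
    return groups[np.argmin(D_ig, axis=1)]         # step 2: nearest transformed centroid

# toy data: three concentric rings, whose centroids nearly coincide in the original plane
rng = np.random.default_rng(2)
r = np.repeat([1.0, 3.0, 5.0], 50)
t = rng.uniform(0, 2 * np.pi, 150)
Y = np.c_[r * np.cos(t), r * np.sin(t)] + 0.1 * rng.normal(size=(150, 2))
labels = np.repeat([0, 1, 2], 50)
D = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)

print("plain rule:      ", np.mean(group_assignments(D, labels, phi=lambda d: d) == labels))
print("Gaussian, a=0.65:", np.mean(group_assignments(D, labels) == labels))
```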
On the one hand, such a result is completely expected: mapping the data into a high-dimensional feature space, in which they become linearly separable, is a routine strategy in the Machine Learning community, developed ever since the nineties (see e.g. Chen et al. 2007 and references therein). On the other hand, the conceptual, formal and computational simplicity of the above, presumably new, algorithm should be emphasized. The Machine Learning literature contains innumerable algorithms based upon Gaussian and other radial kernels: the procedure presented in Section 5 is indeed just one among many possible applications, aimed at illustrating the operational content of the theory.

Higher-order "principled" embeddings, pioneered by the work of Vapnik (1995) and embodied in this article by the class of Schoenberg transformations, are arguably about to be incorporated into standard Data Analysis, to be routinely used in applications and taught to graduate and undergraduate non-specialized audiences. Recasting the whole Machine Learning formalism in terms of Euclidean distances, rather than in terms of kernels, could efficiently contribute towards this assimilation: first, the statements in either formalism can be translated into the other, as granted by the theorems of Section 2. In particular, to the "kernel trick", stating that all the quantities of interest depend upon kernels only (and not upon the object features themselves), corresponds an equally efficient "distance trick", stating that Euclidean distances themselves (and not their underlying coordinates) permit one to express all the quantities of interest, as in (5), (6), or Section 5; see also Schölkopf (2000) and Williams (2002). Furthermore, Euclidean distances are arguably more intuitive than kernels, as attested by the development of Geometry and Data Analysis (including their non-Euclidean extensions; see e.g. Critchley and Fichet (1994) for a review). In that respect, such a revisitation could prove beneficial, both from the prospect of future scientific developments and from a pedagogical point of view.
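To make the "distance trick" concrete, here is a minimal sketch (Python with numpy; helper names are ours) of formulas (5) and (6): squared distances to and between barycenters are computed from the distance matrix D alone, then checked against an explicit coordinate representation.

```python
import numpy as np

def dispersion(D, a):
    """Delta_a = 1/2 sum_{jk} a_j a_k D_jk, as in (5)."""
    return 0.5 * (a @ D @ a)

def dist_to_barycenter(D, a):
    """Huygens principle (5): D_ia = sum_j a_j D_ij - Delta_a, from distances alone."""
    return D @ a - dispersion(D, a)

def dist_between_barycenters(D, a, b):
    """Formula (6): squared distance between the a- and b-barycenters, from distances alone."""
    z = a - b
    return -0.5 * (z @ D @ z)

# consistency check against explicit coordinates
rng = np.random.default_rng(3)
Y = rng.normal(size=(5, 2))
D = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
a = np.array([0.4, 0.3, 0.1, 0.1, 0.1])          # weights summing to 1
b = np.full(5, 0.2)                               # uniform distribution
assert np.allclose(dist_to_barycenter(D, a), ((Y - a @ Y) ** 2).sum(axis=-1))
assert np.isclose(dist_between_barycenters(D, a, b), ((a @ Y - b @ Y) ** 2).sum())
```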
