Fast space-variant elliptical filtering using box splines


Authors: Kunal Narayan Chaudhury, Arrate Muñoz-Barrutia, Michael Unser

Abstract

The efficient realization of linear space-variant (non-convolution) filters is a challenging computational problem in image processing. In this paper, we demonstrate that it is possible to filter an image with a Gaussian-like elliptic window of varying size, elongation and orientation using a fixed number of computations per pixel. The associated algorithm, which is based on a family of smooth compactly supported piecewise polynomials, the radially-uniform box splines, is realized using pre-integration and local finite-differences. The radially-uniform box splines are constructed through the repeated convolution of a fixed number of box distributions, which have been suitably scaled and distributed radially in a uniform fashion. The attractive features of these box splines are their asymptotic behavior, their simple covariance structure, and their quasi-separability. They converge to Gaussians as their order increases, and are used to approximate anisotropic Gaussians of varying covariance simply by controlling the scales of the constituent box distributions. Based on the second feature, we develop a technique for continuously controlling the size, elongation and orientation of these Gaussian-like functions. Finally, the quasi-separable structure, along with a certain scaling property of box distributions, is used to efficiently realize the associated space-variant elliptical filtering, which requires O(1) computations per pixel irrespective of the shape and size of the filter.

Index Terms — Space-variant filter, Finite-differences, Running-sums, Anisotropic Gaussian, Box spline, Zwart-Powell (ZP) element.

1 INTRODUCTION

The most widely used smoothing operator in image processing is the Gaussian filter.
As far as isotropic Gaussians are concerned, a fast implementation is achievable simply by decomposing the filter into two orthogonal 1-D Gaussians operating along the image axes. The 1-D filters are in turn implemented using efficient recursive algorithms, e.g., the ones proposed by Deriche [5] and Young et al. [20]. We refer the readers to the survey article [16] for an exhaustive account of such recursive schemes. A fundamental limitation of isotropic filtering is that it does not take into account the anisotropic nature of image features, which results in the blurring of oriented patterns and textures. The development of fast anisotropic filtering in general, and anisotropic Gaussian filtering in particular, has therefore gained momentum over the past decade. Worth mentioning in this regard is the work of Geusebroek et al. [7], who developed an efficient recursive technique based on the factorization of the 2-D anisotropic Gaussian into two 1-D Gaussians, one along an image axis and the other along a generally off-grid direction. A drawback of this technique is that one has to interpolate the image along the off-grid direction to implement the corresponding 1-D filter. To avoid interpolation and, in effect, to improve the spatial homogeneity and the Gaussian-like structure of the filters in [7], Lam et al. came up with an alternative "triple-axis" solution: instead of using two directions, they chose to decompose the anisotropic Gaussian into three 1-D Gaussians, each operating along one of the four cardinal directions: the horizontal, the vertical, and the two diagonals [10]. The focus of these papers has largely been on space-invariant filtering, where the entire image is convolved with a single anisotropic Gaussian.

∗ kchaudhu@math.princeton.edu
On the other hand, a variety of space-variant filtering strategies have also been developed, including image-statistics-driven filtering [11], non-linear diffusion filtering [14, 19] and gradient inverse-weighted filtering [18], to name a few.

1.1 Linear space-variant filtering

In this paper, we focus on the paradigm of linear space-variant filtering using Gaussian-like kernels of different shapes and sizes. From a purely discrete perspective¹, this calls for the design of a family of Gaussian filters {g_λ[n]}_λ, so that, given an input image f[n], one can evaluate the filtered samples \bar{f}[n] through the averaging

    \bar{f}[n] = \sum_k f[k] \, g_{\lambda(n)}[n - k].   (1)

¹ We associate the term "discrete" with functions defined on the Cartesian lattice Z^d, where d is the dimensionality of the function.

The parameter λ(n), which specifies the covariance of the filter applied at location n, allows one to continuously adjust the scale, orientation and elongation of the filter in keeping with the anisotropy of the local image features. There are, however, certain practical problems involved in an efficient realization of (1). It is obvious that (1) cannot be written as a convolution, and hence cannot be realized using the FFT algorithm. In fact, the available options are either (i) to compute the filters g_λ[n] by sampling the continuous Gaussian on-the-fly, or (ii) to discretize λ a priori, and to store the pre-computed filters in a look-up table. The problem with the former is that it proves to be extremely slow for wide kernels, while the latter restricts the control on the anisotropy of the filters. By appropriately modifying the recursive filtering strategies in [5, 20], Tan et al. developed an algorithm for realizing (1) for the particular case where {g_λ[n]}_λ are isotropic [16].
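For concreteness, the direct evaluation of (1) can be sketched as follows. This is the naive baseline, not the algorithm developed in this paper; the function names and the truncation radius are our own choices:

```python
import numpy as np

def gaussian_kernel(cov, radius):
    """Sample a centered 2-D Gaussian with covariance `cov`, normalized to unit sum."""
    r = np.arange(-radius, radius + 1)
    X, Y = np.meshgrid(r, r, indexing="ij")
    P = np.linalg.inv(cov)
    g = np.exp(-0.5 * (P[0, 0] * X**2 + 2 * P[0, 1] * X * Y + P[1, 1] * Y**2))
    return g / g.sum()

def direct_space_variant_filter(f, cov_map, radius=8):
    """Direct evaluation of (1): a freshly sampled kernel at every pixel,
    hence O(radius^2) work per pixel -- the cost this paper avoids."""
    out = np.zeros(f.shape, dtype=float)
    fp = np.pad(f.astype(float), radius)           # zero boundary handling
    w = 2 * radius + 1
    for i in range(f.shape[0]):
        for j in range(f.shape[1]):
            g = gaussian_kernel(cov_map[i, j], radius)
            out[i, j] = np.sum(fp[i:i + w, j:j + w] * g)
    return out
```

The per-pixel cost grows with the kernel support, which is exactly option (i) above; the method developed in the sequel replaces this with a fixed number of operations per pixel.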
Spline kernels can also yield efficient algorithms for space-variant filtering, particularly when the space-variance is in terms of the scale (or size) of the kernel. For instance, Heckbert proposed an algorithm for adaptive image blurring using tensor-products of polynomial splines, where the image is filtered with kernels of various scales using repeated integrations and finite-differences [8]. Based on similar principles, namely, the scaling properties of B-splines, Muñoz-Barrutia et al. have developed an algorithm for the fast computation of the continuous wavelet transform of 1-D signals [13]. Recently, the method was extended to perform space-variant filtering using Gaussian-like functions of arbitrary size, which can be elongated along the image axes [12]. To achieve this, the authors chose to approximate the Gaussian using separable B-splines. We propose to take this approach one step further. In particular, we overcome the limited steerability and ellipticity of the separable B-splines by considering certain quasi-separable analogues of B-splines, the so-called box splines [2]. As was demonstrated for the separable filters in [12], we show that these quasi-separable box splines can also be used to approximate the Gaussian, and that the associated space-variant filter can be decomposed into recursive pre-filters and scale-dependent finite-difference filters. Together, these allow us to achieve fast space-variant filtering of images using elliptical Gaussian-like filters. To date, there have been only a few applications of such multivariate splines in the fields of image processing and computer graphics. Noteworthy among them are the works of Richter [15] and Asahi et al. [1], concerning the development of image approximation and reconstruction algorithms, and those of Condat et al. [4] and Entezari et al. [6], concerning the development of interpolation formulas for hexagonal and BCC lattices.
1.2 Main idea

In this contribution, we propose a fast space-variant filtering algorithm using a family of Gaussian-like box splines whose size, elongation and orientation can be continuously controlled. The attractive feature of our approach is that we use a continuous-discrete formalism which avoids the necessity of sampling a continuously-defined filter on-the-fly, or of storing a discrete set of pre-computed kernels. The developments in the paper are centered around two main ideas, which are as follows:

(1) The use of quasi-separable box splines. The construction of bivariate box splines, conceived as the "shadow" of N-dimensional (N ≥ 2) polytopes in 2 dimensions, often turns out to be rather intricate (see [2] for instance). In this paper, we consider an alternative straightforward recipe for constructing box splines, namely, through repeated convolutions of dilated and rotated box distributions (see Fig. 3). In particular, we realize the so-called radially-uniform box spline β^N_a(x) through the convolution of N rotated box distributions, where a = (a_1, ..., a_N) is a scale-vector with a_k being the scale of the box distribution positioned along the direction (k − 1)π/N (see § 2.2 for a precise definition). The radially-uniform box splines are of interest in the current context for two reasons. The first is that we can make them arbitrarily close to a Gaussian by increasing N (see § 2.2.3). The second, which has more practical significance, is that we can continuously control their size, elongation, and orientation simply by acting on the scales.

(2) An efficient strategy for space-variant averaging. To convey this idea, we examine the following formula for computing the space-variant averages of a discrete 1-dimensional signal f[n]:

    \bar{f}[n] = \frac{1}{2W(n)+1} \sum_{k=-W(n)}^{W(n)} f[n-k].   (2)
We interpret the factor W(n), which controls the amount of smoothing, as the size of the "discrete box filter" applied at location n. The disadvantage of using (2) is that its computational cost scales linearly with W(n), which gets even worse in higher dimensions. This can be circumvented (with a mild interpolation cost) by considering instead the space-variant averaging

    \bar{f}[n] = \frac{1}{a(n)} \int_{n-a(n)/2}^{n+a(n)/2} f(y)\, dy = \int f(y)\, \beta_{a(n)}(n-y)\, dy,   (3)

where we have replaced the discrete signal f[n] by its interpolated version f(x), and the discrete box filter by the box function

    \beta_a(x) = \begin{cases} 1/a & \text{for } -a/2 < x \le a/2, \\ 0 & \text{otherwise.} \end{cases}

[Figure 1: Computation of the space-variant average \bar{f}[n]: the signal f(x) is first localized (hatched zone) using the shifted box function β_{a(n)}(n − x), and then the area is computed. The central idea of our algorithm is to determine this area by taking the finite-difference of the primitive of f(x).]

The main advantage of this formulation is that we can realize (3) using O(1) computations per position, independent of the size of a(n). This is based on the observation that (3) can be computed by first evaluating the primitive

    F(x) = \int_{-\infty}^{x} f(y)\, dy,

and then using the formula

    \bar{f}[n] = \frac{1}{a(n)} \left( F(n + a(n)/2) - F(n - a(n)/2) \right),   (4)

which requires one addition and one multiplication per position. This idea is illustrated in Figure 1. The other advantage is that, as opposed to the integer-valued window W(n) in (2), this gives access to the real-valued scale a(n) for continuously controlling the smoothing.
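The pre-integration idea of (3)-(4) translates directly into code. The following is a minimal sketch, assuming a piecewise-constant interpolation of the samples; the function name is ours:

```python
import numpy as np

def space_variant_box_average(f, a):
    """Space-variant box averaging (3)-(4): one primitive, then two lookups
    and one division per sample, independent of the window size a(n).

    f : 1-D samples, read as a piecewise-constant signal f(x).
    a : real-valued window sizes a(n) > 0, one per position n.
    """
    f = np.asarray(f, dtype=float)
    # Primitive F(x) of the piecewise-constant signal; F is piecewise linear
    # with knots at the half-integers, so np.interp evaluates it exactly.
    F = np.concatenate(([0.0], np.cumsum(f)))
    knots = np.arange(len(F)) - 0.5
    n = np.arange(len(f))
    # (4): (F(n + a/2) - F(n - a/2)) / a   (boundaries are clamped by interp)
    return (np.interp(n + a / 2, knots, F) - np.interp(n - a / 2, knots, F)) / a
```

Note that a(n) enters only through the two lookup positions, so the cost per sample is fixed no matter how wide the window is.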
Indeed, if f(x) is integrable (at least locally), then it can be shown that the use of small scales results in less smoothing, namely that \bar{f}[n] → f[n] as a(n) → 0, whereas \bar{f}[n] can be made negligibly small by making a(n) sufficiently large.

The contribution of this paper is the generalization of the filtering strategy in (3) to the bivariate setting using the radially-uniform box splines. In particular, given a discrete image f[n], we consider the space-variant filtering

    \bar{f}[n] = \int f(y)\, \beta^N_{a(n)}(n - y)\, dy,   (5)

where f(x) represents a suitable interpolation of f[n]. The significance of the quasi-separable characterization of β^N_a(x) in terms of the box distributions is that it allows us to relate (5) to the 1-D problem in (3). Indeed, we demonstrate in § 2.2 that (5) can be implemented using an appropriate bivariate extension of pre-integrations and finite-differences, together with a few evaluations of a fixed piecewise polynomial function (the coefficients of which are pre-computed). Although the derivation of the algorithm is rather involved, the final solution turns out to be remarkably simple (see § 3, Algorithm 1), and easy to implement.

1.3 Notations

We use \hat{f}(ω) to denote the Fourier transform of a function f(x) on R^d, specified by \hat{f}(ω) = \int_{R^d} f(x) \exp(-j ω^T x)\, dx. We use f(· − s) to denote the function obtained by translating f(x) by s. The convolution of two functions f(x) and g(x) is given by (f ∗ g)(x) = \int_{R^d} f(s)\, g(x − s)\, ds. The notation \circledast_{k=1}^{N} f_k(x) is used to denote the convolution of a collection of functions f_1(x), ..., f_N(x); the order of the convolutions is immaterial. We suppress the domain of an integral (or summation) if it is obvious from the context. For a bivariate function f(x) = f(x_1, x_2), the partial derivative along x_i is denoted by ∂_i f(x).
Given operators T_1 and T_2 on a domain D, we use T_1 ∘ T_2 (often simply T_1 T_2) to denote their composition: (T_1 ∘ T_2)(f) = T_1(T_2(f)) for every f in D. We use M^n to denote the (n − 1)-fold matrix multiplication of M with itself. The integral \int M(x) f(x)\, dx, corresponding to a real-valued function f(x) and a matrix-valued function M(x) on R², denotes a matrix of the same dimension as M(x), whose (i, j)-th component is given by \int M_{i,j}(x) f(x)\, dx. If P and Q are constant matrices, we then have \int P M(x) Q f(x)\, dx = P (\int M(x) f(x)\, dx) Q. The notation f(x) = O(g(x)), x → 0, signifies that there exists a constant C (independent of x) such that |f(x)| ≤ C g(x) for sufficiently small x. The space of bivariate finite-energy signals is denoted by L_2(R²), or simply L_2; it is equipped with the norm ‖f‖_{L_2} = [\int |f(x)|² dx]^{1/2}. The Dirac distribution is denoted by δ(x).

[Figure 2: Box function rescaling through "addition and subtraction" of the unit box function: the step function is first reproduced from the unit box using the running-sum, and then the appropriate finite-difference is applied to recover the rescaled box function.]

2 SPACE-VARIANT AVERAGING

We now derive (4) using an operator-based formalism. This helps set up the framework needed for the subsequent generalization of the idea to higher dimensions and multiple orders in § 2.2.2.

2.1 Realization of (3)

We consider the finite-difference (FD) and the running-sum (RS) of a function f(x), which are respectively defined by

    \Delta_a f(x) = \frac{1}{a} \left( f(x) - f(x - a) \right),   (6)

and

    \Delta^{-1}_b f(x) = b \sum_{k=0}^{\infty} f(x - bk).   (7)

The positive real numbers a and b are the scales of the operators².
We note that the operators Δ_a and Δ^{-1}_b, which take f(x) into Δ_a f(x) and Δ^{-1}_b f(x) respectively, are linear and translation-invariant, and that when b is an integer, Δ^{-1}_b can also be applied to a sequence f[n] through the well-defined operation Δ^{-1}_b f[n] = b \sum_{k=0}^{\infty} f[n - bk]. In particular, g[n] = Δ^{-1}_1 f[n] can be implemented efficiently using the simple recursion g[n] = g[n − 1] + f[n], under appropriate boundary conditions [13].

² The notation Δ^{-1}_b is justified by the fact that Δ_a Δ^{-1}_b acts as the identity operator when a = b.

The significance of these operators is that we can relate the variable-size box functions in (3) to the unit-width box function using the transformation

    \beta_a(x) = \Delta_a \Delta^{-1}_1 \beta_1(x + \tau),   (8)

where τ = (a − 1)/2. In particular, this means that box functions of variable widths can be derived from a fixed box function through the successive application of running-sums and finite-differences (see Fig. 2). To derive (8), we note that Δ^{-1}_1 β_1(x) = \sum_{k=0}^{\infty} β_1(x − k) = u(x + 1/2), where the step function u(x) equals 1 for x > 0 and zero otherwise. The desired result follows immediately:

    \Delta_a \Delta^{-1}_1 \beta_1(x + \tau) = \frac{1}{a} \left( u(x + a/2) - u(x - a/2) \right) = \beta_a(x).

We use (8) to derive the algorithm for computing (3) as follows: fix an arbitrary position n and the corresponding a(n) in (3), and consider the function

    s(x) = \int f(y)\, \beta_1(x - y)\, dy.

We claim that \bar{f}[n] = Δ_{a(n)} Δ^{-1}_1 s(n + τ). Indeed, following the linearity and translation-invariance of Δ_{a(n)} Δ^{-1}_1, and using (8), we can write

    \Delta_{a(n)} \Delta^{-1}_1 s(x + \tau) = \int f(y) \left[ \Delta_{a(n)} \Delta^{-1}_1 \beta_1(x + \tau - y) \right] dy = \int f(y)\, \beta_{a(n)}(x - y)\, dy,

which establishes our claim. Now, if the input signal is discrete, of the form f(x) = \sum_{n \in Z} f[n]\, δ(x − n), then s(x) can simply be written as s(x) = \sum f[n]\, β_1(x − n).
A simple manipulation then shows that Δ^{-1}_1 s(x) = \sum g[n]\, β_1(x − n), where g[n] is the running-sum of f[n]. Thus, denoting the piecewise-constant interpolation of g[n] by F(x), we obtain \bar{f}[n] = Δ_{a(n)} F(n + τ). This leads us to the following two-step algorithm for realizing (3):

(1) (Space-invariant step) Compute g[n] = Δ^{-1}_1 f[n] using the recursion g[n] = g[n − 1] + f[n];

(2) (Space-variant step) For every position n, set \bar{f}[n] = Δ_{a(n)} F(n + τ), where τ = (a(n) − 1)/2.

The steps of the algorithm can be visualized using Fig. 2 for the particular case when the input is an impulse and a(n) = a for every n. The second and third plots in this figure then correspond to steps (1) and (2) of the algorithm, respectively. The remarkable fact about the algorithm is that the space-variant aspect of the transformation f[n] ↦ \bar{f}[n] gets transferred to the scale-dependent operator Δ_a, which in turn is implemented at a fixed computational cost per pixel, namely, one addition and one multiplication per position. We would also like to note that (8) can more generally be expressed as

    \beta_a(x) = \Delta_a \Delta^{-1}_b \beta_b(x + \tau), \qquad \tau = (a - b)/2.   (9)

The significance of this relation is that, if the lattice spacing b is different from unity, one can still realize the running-sum (without interpolation) by replacing the operator Δ^{-1}_1 by Δ^{-1}_b. We will use this in the sequel.

2.2 Bivariate extension

2.2.1 Radially-uniform box splines

We now extend the space-variant filtering strategy discussed in the previous section to the bivariate setting, where the additional notion of directionality is appropriately addressed. As a first step, we devise an appropriate directional extension of the box function.
In particular, corresponding to a real-valued scale a and direction θ, we define the directional box distribution φ_{a,θ}(x) as the tensor product of the box function β_a(x) and the Dirac distribution δ(x) operating along orthogonal directions,

    \varphi_{a,\theta}(x) = \beta_a\!\left( u_\theta^T x \right) \delta\!\left( u_{\theta^\perp}^T x \right).

Here the orthogonal directions are specified by the unit vectors u_θ = (cos θ, sin θ) and u_{θ⊥} = (−sin θ, cos θ). The scale a controls the amount of smoothing applied along the orientation of the box distribution, whereas no smoothing is applied along the transverse direction. The idea then is to construct the box spline by convolving an arbitrary number of such directional box distributions (cf. Fig. 3).

[Figure 3: Construction of the radially-uniform box spline through the convolution of four directional box distributions. (A) The four box distributions, distributed uniformly over [0, π), were assigned equal scales in this example; (B) scan profile along θ = π/8.]

Thus, corresponding to an integer N > 1, a set of real-valued scales a_1, ..., a_N, and uniform rotation-angles θ_k = (k − 1)π/N, k = 1, ..., N, we specify the radially-uniform box spline through the convolution

    \beta^N_a(x) = (\varphi_{a_1,\theta_1} \ast \cdots \ast \varphi_{a_N,\theta_N})(x).   (10)

We shall refer to N and the tuple a = (a_1, ..., a_N) as the directional-order and the scale-vector of the box spline, respectively. Following definition (10), it can be verified that β^N_a(x) is a piecewise polynomial of degree ≤ N − 2, where the partitions are specified by lines running along the directions θ_1, ..., θ_N. Moreover, β^N_a(x) is symmetric with respect to the origin, and is compactly supported on a convex 2N-sided polygon consisting of the points

    \left\{ \sum_{k=1}^{N} t_k a_k u_{\theta_k} : -1/2 \le t_k \le 1/2 \right\}.
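Definition (10) can be checked numerically by rasterizing the directional box distributions on a fine grid and convolving them via the FFT. The following is an illustrative sketch, not the paper's algorithm; the grid size, spacing and function names are our own choices:

```python
import numpy as np

def directional_box(a, theta, grid, h):
    """Rasterize the directional box distribution phi_{a,theta} on a grid of
    spacing h: unit mass spread over a segment of length a along direction
    theta (a discrete approximation of the ideal line distribution)."""
    img = np.zeros((grid, grid))
    c = grid // 2
    n = max(int(np.ceil(a / h)) * 8, 8)            # dense sampling of the segment
    t = np.linspace(-a / 2, a / 2, n)
    ix = c + np.round(t * np.cos(theta) / h).astype(int)
    iy = c + np.round(t * np.sin(theta) / h).astype(int)
    np.add.at(img, (ix, iy), 1.0 / n)              # accumulate repeated pixels
    return img

def radially_uniform_box_spline(scales, grid=257, h=0.05):
    """Sample beta^N_a of (10) by repeated FFT convolution of the N
    directional boxes, theta_k = (k-1)pi/N; normalized to unit integral."""
    N = len(scales)
    spline = None
    for k, a in enumerate(scales):
        phi = directional_box(a, k * np.pi / N, grid, h)
        if spline is None:
            spline = phi
        else:
            # circular convolution; ifftshift keeps the result centered
            S = np.fft.rfft2(spline) * np.fft.rfft2(np.fft.ifftshift(phi))
            spline = np.fft.irfft2(S, s=(grid, grid))
    return spline / (spline.sum() * h * h)
```

For four equal scales this reproduces the octagonally supported, Gaussian-like profile of Fig. 3.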
The radially-uniform box splines are non-separable for N > 2. However, in keeping with the spirit of the underlying tensor construction, the term quasi-separable is more appropriate. The scale-vector a plays a vital role in determining the size and shape of the box spline. It is clear that the box spline can be arbitrarily elongated along any of the principal directions θ_k (1 ≤ k ≤ N) simply by rescaling the box distribution φ_{a_k,θ_k}, that is, by making a_k large compared to the other scales. Moreover, we will demonstrate in the sequel that one can elongate the box spline along any arbitrary direction by appropriately acting on the scale-vector. The role of the directional-order is more subtle; it determines the degrees of freedom available for controlling the geometry of the box spline, and also its regularity (smoothness). We will discuss these aspects in detail for the particular four-directional box spline (N = 4) in § 2.2.3.

2.2.2 Realization of (5)

We now formulate the algorithm for realizing (5) by appropriately extending the domain of definition of the FD and RS operators to bivariate functions. The main idea is to derive a relation similar to (8) for the radially-uniform box splines. Thus, corresponding to positive real-valued scales a and b, and a direction 0 ≤ θ < π, we consider the directional finite-difference

    \Delta_{a,\theta} f(x) = \frac{1}{a} \left( f(x) - f(x - a u_\theta) \right),   (11)

and the directional running-sum

    \Delta^{-1}_{b,\theta} f(x) = b \sum_{k=0}^{\infty} f(x - k b u_\theta).   (12)

In keeping with the definition of the box spline, the radially-uniform FD and RS operators, Δ^N_a and Δ^{-N}_b, are then specified by the combined action of (11) and (12) along the directions θ_k = (k − 1)π/N. In particular, we set

    \Delta^N_a = \Delta_{a_1,\theta_1} \circ \cdots \circ \Delta_{a_N,\theta_N},   (13)

and

    \Delta^{-N}_b = \Delta^{-1}_{b_1,\theta_1} \circ \cdots \circ \Delta^{-1}_{b_N,\theta_N},   (14)

where the scale-vectors a = (a_1, ..., a_N) and b = (b_1, ..., b_N) specify the scale along each direction.
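The directional finite-difference (11) and the composition (13) can be sketched as operators acting on callables; a minimal illustration, with function names of our own choosing:

```python
import numpy as np

def directional_fd(f, a, theta):
    """Directional finite-difference (11): x |-> (f(x) - f(x - a u_theta)) / a,
    where x is a length-2 array."""
    u = np.array([np.cos(theta), np.sin(theta)])
    return lambda x: (f(x) - f(x - a * u)) / a

def radially_uniform_fd(f, scales):
    """Composition (13): Delta^N_a = Delta_{a1,th1} o ... o Delta_{aN,thN},
    with the uniform directions th_k = (k-1) pi / N."""
    N = len(scales)
    g = f
    for k, a in enumerate(scales):
        g = directional_fd(g, a, k * np.pi / N)
    return g
```

For N = 2 and unit scales (directions 0 and π/2), applying Δ²_a to f(x) = x_1 x_2 gives the constant 1, mirroring the familiar action of a mixed finite-difference on a bilinear function.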
The operators Δ^N_a and Δ^{-N}_b are closely related to the radially-uniform box splines. Indeed, it can readily be verified that

    \Delta^N_a \Delta^{-N}_b = \Delta_{a_1,\theta_1} \Delta^{-1}_{b_1,\theta_1} \circ \cdots \circ \Delta_{a_N,\theta_N} \Delta^{-1}_{b_N,\theta_N},

and that, using (9), we can write Δ_{a,θ} Δ^{-1}_{b,θ} φ_{b,θ}(x + τ u_θ) = φ_{a,θ}(x). Thus, if we let τ = \sum τ_k u_{θ_k}, where τ_k = (a_k − b_k)/2, then following definitions (10), (13), and (14), we see that

    \Delta^N_a \Delta^{-N}_b \beta^N_b(x + \tau)
    = \Delta_{a_1,\theta_1} \Delta^{-1}_{b_1,\theta_1} \circ \cdots \circ \Delta_{a_N,\theta_N} \Delta^{-1}_{b_N,\theta_N} \left[ \circledast_{k=1}^{N} \varphi_{b_k,\theta_k}(x + \tau_k u_{\theta_k}) \right]
    = \circledast_{k=1}^{N} \Delta_{a_k,\theta_k} \Delta^{-1}_{b_k,\theta_k} \varphi_{b_k,\theta_k}(x + \tau_k u_{\theta_k})
    = \circledast_{k=1}^{N} \varphi_{a_k,\theta_k}(x).

This provides the following crucial connection between the box splines and the 2-D operators.

Proposition 2.1. The box spline β^N_a(x) can be expressed as

    \beta^N_a(x) = \Delta^N_a \Delta^{-N}_b \beta^N_b(x + \tau).   (15)

Before discussing the filtering algorithm, we briefly elaborate on the implementation of Δ^N_a and Δ^{-N}_b. We can show that (13) can be written as

    \Delta^N_a f(x) = \sum_{i=0}^{2^N - 1} w_i f(x - x_i),   (16)

where w_i = (−1)^{q_1 + \cdots + q_N} (a_1 \cdots a_N)^{-1} and x_i = \sum_{k=1}^{N} q_k a_k u_{\theta_k}, the index i, taking values between 0 and 2^N − 1, being the decimal counterpart of the binary number (q_N, q_{N−1}, ..., q_1), which takes values from (0, ..., 0) to (1, ..., 1). We identify the points x_i with the vertices of an affine mesh, and the w_i with the corresponding mesh taps (cf. Fig. 5). As far as the application of Δ^{-N}_b to a discrete sequence f[n] is concerned, the unfortunate difficulty is that the vectors b_k u_{θ_k} must necessarily lie on the lattice for (12) to be well-defined. In fact, it is easily seen that one cannot associate a digital filter with the RS operators in general.
However, the good news is that, when N equals 2 or 4, the transformation f[n] ↦ Δ^{-N}_b f[n] can be exactly realized, without the need for interpolation, by appropriately setting the scales b_k of the directional running-sums. We will discuss the latter case in detail in § 3. The algorithm for realizing (5) corresponding to a specified scale-vector map a(n) is based on relation (15). In particular, by considering the function f(x) = \sum_n f[n]\, δ(x − n), and by proceeding exactly along the lines of the 1-D derivation, we express the filtered samples in (5) as

    \bar{f}[n] = \sum_{i=0}^{2^N - 1} w_i F(n + \tau - x_i),   (17)

where F(x) = \sum g_b[n]\, β^N_b(x − n) denotes the interpolated version of the pre-integrated signal g_b[n] = Δ^{-N}_b f[n];

    \tau = \frac{1}{2} \left( \sum (a_k(n) - b_k) \cos\theta_k, \; \sum (a_k(n) - b_k) \sin\theta_k \right);

and the pairs (x_i, w_i) are the vertices and taps of the affine FD mesh in (16). Note that τ, w_i and x_i are defined pointwise in (17); we have dropped the index n to simplify the notation. We will discuss the implementation aspects of the algorithm, particularly the computation of g_b[n] and its interpolated form F(x), for the case N = 4 in § 3.

2.2.3 Characterization of the radially-uniform box splines

The motivation behind introducing the radially-uniform box splines was to develop elliptical Gaussian-like filters whose shape (size, elongation and orientation) can be continuously controlled, along with a fast space-variant algorithm using such filters. Indeed, it turns out that the radially-uniform box splines (and their iterated versions) form close approximations of the Gaussian. To substantiate our claim, we present the following result (proof in Appendix § 6.1), which can well be seen as a "radial" version of the central limit theorem.

Theorem 2.2. Consider the sequence of box splines β²_{a(2)}(x), β³_{a(3)}(x), ... corresponding to the scale-vectors a(2), a(3), . . .
where a_k(N) = σ√(24/N) for 1 ≤ k ≤ N. Then the following holds:

    \lim_{N \to \infty} \beta^N_{a(N)}(x) = \frac{1}{2\pi\sigma^2} \exp\left( -\frac{\|x\|^2}{2\sigma^2} \right).   (18)

In fact, the radially-uniform box splines constructed using uniform scale-vectors are supported on a uniform 2N-sided polygon, and it can be shown that they have continuous derivatives of order N − 3. The above result is then consistent with the fact that the isotropy and smoothness of such box splines progressively improve with the increase in the directional-order. Moreover, it is also possible to mimic certain anisotropic Gaussians by using a sequence of non-uniform scale-vectors. Indeed, as a direct extension of Theorem 2.2, one can construct sequences of box splines which converge to anisotropic Gaussians as N increases.

Yet another useful form of anisotropic convergence is achievable based on the serial convolution of a radially-uniform box spline, of a fixed directional-order, with itself. In particular, corresponding to fixed integers N and m (m ≥ 1), and a scale-vector a = (a_1, ..., a_N), we consider the iterated radially-uniform box spline

    \beta^{N,m}_a(x) = (\beta^N_a \ast \cdots \ast \beta^N_a)(x)   (19)

obtained through the (m − 1)-fold convolution of β^N_a(x) with itself. Then, for the particular sequence of box splines {β^{N,m}_{a(m)}(x)} corresponding to the scale-vectors a(m) = (a_1/√m, ..., a_N/√m), we have the result

    \lim_{m \to \infty} \beta^{N,m}_{a(m)}(x) = \frac{1}{2\pi |\det(C)|^{1/2}} \exp\left( -\frac{1}{2} x^T C^{-1} x \right),   (20)

where

    C = \frac{1}{12} \begin{pmatrix} \sum a_k^2 \cos^2\theta_k & \frac{1}{2} \sum a_k^2 \sin 2\theta_k \\ \frac{1}{2} \sum a_k^2 \sin 2\theta_k & \sum a_k^2 \sin^2\theta_k \end{pmatrix}.   (21)

Indeed, this follows directly from a certain version of the central limit theorem, which tells us that each of the components \circledast_{j=1}^{m} \varphi_{a_k/\sqrt{m},\theta_k}(x) (1 ≤ k ≤ N) converges to a "directional" Gaussian distribution as m → ∞. The covariance C is then given by the limiting sum of the covariances C_k of the constituent box distributions.
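Since (21) is just the sum of the covariances (a_k²/12) u_{θ_k} u_{θ_k}^T of the constituent box distributions, it is straightforward to compute from the scale-vector; a small sketch, with a function name of our own choosing:

```python
import numpy as np

def box_spline_covariance(scales):
    """Covariance matrix of the radially-uniform box spline, eq. (21):
    the sum over k of (a_k^2 / 12) * u_k u_k^T, with theta_k = (k-1) pi / N."""
    N = len(scales)
    C = np.zeros((2, 2))
    for k, a in enumerate(scales):
        t = k * np.pi / N
        u = np.array([np.cos(t), np.sin(t)])
        C += (a * a / 12.0) * np.outer(u, u)
    return C
```

With the uniform scales a_k = σ√(24/N) of Theorem 2.2 this yields exactly σ²I, the covariance of the limiting isotropic Gaussian in (18); elongating one scale skews the covariance along the corresponding direction.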
The utility of such iterated box splines will be discussed in § 3.3. Having characterized the asymptotic behavior of the box splines, we now focus on the problem of approximating an anisotropic Gaussian using a box spline of a fixed directional-order. Since a centered Gaussian is uniquely specified by its covariance, we propose a finite-order box spline approximation of the same based on its covariance. This, in fact, amounts to a moment-based approximation of the Gaussian; that is, the box spline resembles the Gaussian up to its second-order moments. Moreover, since the level-sets of Gaussians are ellipses, this equivalently amounts to constructing elliptical filters of different size, elongation, and orientation.

[Figure 4: Intensity distribution of (a) the radially-uniform box spline, and (b) the separable B-spline, of order four. The respective scan profiles along π/8 are shown in (c) and (d).]

The covariance of the radially-uniform box spline, namely,

    C^N_a = \int x x^T \beta^N_a(x)\, dx,

can be expressed as the sum of the covariances of the box distributions (cf. Appendix § 6.2 for details) as follows:

    C^N_a = \frac{1}{12} \begin{pmatrix} \sum a_k^2 \cos^2\theta_k & \frac{1}{2} \sum a_k^2 \sin 2\theta_k \\ \frac{1}{2} \sum a_k^2 \sin 2\theta_k & \sum a_k^2 \sin^2\theta_k \end{pmatrix}.   (22)

This provides the explicit dependence of C^N_a on the scale-vector. In particular, based on the eigendecomposition of C^N_a, we propose the following characterization of the elliptical parameters of the box spline: let λ_max and λ_min denote the largest and smallest eigenvalues of C^N_a, and (v_1, v_2) the eigenvector corresponding to the eigenvalue λ_max.
The size s^N_a, elongation ϱ^N_a, and orientation θ^N_a of the radially-uniform box spline β^N_a(x) are then defined as

    s^N_a = \lambda_{\max} + \lambda_{\min} = \frac{1}{12} \sum a_k^2,

    \varrho^N_a = \frac{\lambda_{\max}}{\lambda_{\min}} = \frac{\sum a_k^2 + \sqrt{D}}{\sum a_k^2 - \sqrt{D}},

    \theta^N_a = \tan^{-1}\left( \frac{v_2}{v_1} \right) = \tan^{-1}\left( \frac{-\sum a_k^2 \cos 2\theta_k + \sqrt{D}}{\sum a_k^2 \sin 2\theta_k} \right),   (23)

where D = \left( \sum a_k^2 \cos 2\theta_k \right)^2 + \left( \sum a_k^2 \sin 2\theta_k \right)^2. Since C^N_a is strictly positive (see Appendix § 6.2), all of the above parameters are indeed well-defined. Note that the covariance matrix in (22) and the triple in (23) provide equivalent descriptions of the box spline geometry. The motivation behind introducing the latter is its convenient rotation-invariant nature: while C^N_a changes with rotations of a given box spline, s^N_a and ϱ^N_a remain fixed. It is for this reason that we use the latter description for studying the dependence of the elliptical geometry on the scale-vector in the next section.

3 FOUR-DIRECTIONAL BOX SPLINES

We now study the particular four-directional box spline

    \beta^4_a(x) = (\varphi_{a_1,0} \ast \varphi_{a_2,\pi/4} \ast \varphi_{a_3,\pi/2} \ast \varphi_{a_4,3\pi/4})(x),

and the corresponding implementation aspects. This particular box spline is composed of patches of quadratic polynomials (degree ≤ 2), is continuously differentiable, and is compactly supported on a convex octagon (cf. Fig. 3).

[Figure 5: The distribution of the taps of the FD mesh. The pairs (u, v) denote the positions (u) and the corresponding weights (v) of the taps of the FD mesh.]

We note that in [12] the authors have used separable B-splines to approximate the Gaussian.
Although these functions are built from the same constituent box distributions, the advantage of the four-directional box spline over its separable counterpart is that it is more isotropic. As seen in Fig. 4, the basic four-directional box spline, besides having a smaller support, exhibits a more Gaussian-like profile than the separable counterpart of identical order. In addition, the anisotropic four-directional box spline can be rotated to arbitrary orientations, while the separable ones are constrained to the image axes.

3.1 Fast space-variant elliptical filtering

The four-directional box spline is of particular interest in the context of space-variant filtering, owing to the fact that $\Delta^{-4}_{\mathbf{b}}$ can be implemented without interpolation when $\mathbf{b} = (1, \sqrt{2}, 1, \sqrt{2})$. The corresponding interpolating function, $\beta^4_{\mathbf{b}}(\mathbf{x})$, turns out to be well-known in the box spline community, and is popularly referred to as the Zwart-Powell (ZP) element [2, 21]. The steps for realizing (5) using the four-directional box spline are as follows:

(1) (Pre-integration) The crucial point is the choice of the scale-vector $\mathbf{b} = (1, \sqrt{2}, 1, \sqrt{2})$, for which the RS operator

$$\Delta^{-4}_{\mathbf{b}} = \Delta^{-1}_{1,0} \circ \Delta^{-1}_{\sqrt{2},\pi/4} \circ \Delta^{-1}_{1,\pi/2} \circ \Delta^{-1}_{\sqrt{2},3\pi/4}$$

can be applied directly to $f[\mathbf{n}]$. In particular, the running-sum $g_{\mathbf{b}}[\mathbf{n}] = \Delta^{-4}_{\mathbf{b}} f[\mathbf{n}]$ can be computed using the following four steps.

(RS1) Horizontal running-sum:
$$g_0[n_1, n_2] = \Delta^{-1}_{1,0} f[n_1, n_2] = \sum_{k=0}^{\infty} f[n_1 - k, n_2].$$

(RS2) First-diagonal running-sum:
$$g_{\pi/4}[n_1, n_2] = \Delta^{-1}_{\sqrt{2},\pi/4}\, g_0[n_1, n_2] = \sqrt{2}\sum_{k=0}^{\infty} g_0[n_1 - k, n_2 - k].$$

(RS3) Vertical running-sum:
$$g_{\pi/2}[n_1, n_2] = \Delta^{-1}_{1,\pi/2}\, g_{\pi/4}[n_1, n_2] = \sum_{k=0}^{\infty} g_{\pi/4}[n_1, n_2 - k].$$

(RS4) Second-diagonal running-sum:
$$g_{\mathbf{b}}[n_1, n_2] = \Delta^{-1}_{\sqrt{2},3\pi/4}\, g_{\pi/2}[n_1, n_2] = \sqrt{2}\sum_{k=0}^{\infty} g_{\pi/2}[n_1 + k, n_2 - k].$$
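The four passes (RS1)-(RS4) can be sketched directly. The following is a minimal illustration, assuming NumPy; samples outside the finite image are treated as zero, so each infinite sum truncates at the image border. The diagonal passes use the one-tap recursions (they are the same recursions as in Algorithm 1 below).

```python
import numpy as np

def running_sums(f):
    """Pre-integration g_b = Delta^{-4}_b f for b = (1, sqrt(2), 1, sqrt(2)),
    realized as the four passes (RS1)-(RS4). Axis 0 plays the role of n1."""
    n1, n2 = f.shape
    g0 = np.cumsum(f, axis=0)                              # (RS1) horizontal
    g45 = np.zeros_like(g0)                                # (RS2) first diagonal
    for i in range(n1):
        for j in range(n2):
            prev = g45[i - 1, j - 1] if i > 0 and j > 0 else 0.0
            g45[i, j] = np.sqrt(2.0) * g0[i, j] + prev
    g90 = np.cumsum(g45, axis=1)                           # (RS3) vertical
    gb = np.zeros_like(g90)                                # (RS4) second diagonal
    for i in range(n1 - 1, -1, -1):                        # runs along (n1+k, n2-k)
        for j in range(n2):
            prev = gb[i + 1, j - 1] if i < n1 - 1 and j > 0 else 0.0
            gb[i, j] = np.sqrt(2.0) * g90[i, j] + prev
    return gb
```

The explicit Python loops are for clarity only; in practice each pass is a simple in-place recursion with a fixed cost per pixel.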
(2) (Finite-difference) At each position $\mathbf{n}$, the FD mesh is computed using the scale-vector $\mathbf{a}(\mathbf{n})$. The weights $w_i$ and the vertices $\mathbf{x}_i$ are listed in Table 1, with the convention that $a'_k = a_k/\sqrt{2}$ for $k = 2, 4$. The mesh has a total of $4 \times 4 = 16$ vertices; in particular, there are 4 clusters corresponding to the four boxes, with 4 vertices per cluster, as shown in Fig. 5. The shift $\boldsymbol{\tau} = (\tau_1, \tau_2)$ is given by

$$\tau_1 = \frac{\sqrt{2}\,a_1 + a_2 - a_4 - \sqrt{2}}{2\sqrt{2}}, \qquad \tau_2 = \frac{a_2 + \sqrt{2}\,a_3 + a_4 - 3\sqrt{2}}{2\sqrt{2}}.$$

The filtered sample is then computed using the formula

$$\bar{f}[\mathbf{n}] = \sum_{i=0}^{15} w_i\, F(\mathbf{n} + \boldsymbol{\tau} - \mathbf{x}_i). \tag{24}$$

The interpolation samples $F(\mathbf{x}) = \sum g_{\mathbf{b}}[\mathbf{n}]\, \beta^4_{\mathbf{b}}(\mathbf{x} - \mathbf{n})$ in (24) are computed efficiently by taking advantage of the piecewise polynomial structure of the compactly supported ZP element (see Appendix 6.6).

Table 1: Specification of the taps of the FD mesh associated with the operator $\Delta^4_{\mathbf{a}}$. The weight $w$ is given by $(a_1 a_2 a_3 a_4)^{-1}$, where $\mathbf{a} = (a_1, a_2, a_3, a_4)$ is the corresponding scale-vector.

  i | x_i                       | w_i  ||  i  | x_i                                     | w_i
  0 | (0, 0)                    | +w   ||  8  | (-a'_4, a'_4)                           | -w
  1 | (a_1, 0)                  | -w   ||  9  | (a_1 - a'_4, a'_4)                      | +w
  2 | (a'_2, a'_2)              | -w   ||  10 | (a'_2 - a'_4, a'_2 + a'_4)              | +w
  3 | (a_1 + a'_2, a'_2)        | +w   ||  11 | (a_1 + a'_2 - a'_4, a'_2 + a'_4)        | -w
  4 | (0, a_3)                  | -w   ||  12 | (-a'_4, a_3 + a'_4)                     | +w
  5 | (a_1, a_3)                | +w   ||  13 | (a_1 - a'_4, a_3 + a'_4)                | -w
  6 | (a'_2, a_3 + a'_2)        | +w   ||  14 | (a'_2 - a'_4, a_3 + a'_2 + a'_4)        | -w
  7 | (a_1 + a'_2, a_3 + a'_2)  | -w   ||  15 | (a_1 + a'_2 - a'_4, a_3 + a'_2 + a'_4)  | +w

As in the 1-D setting, the running-sums are efficiently evaluated using recursions, as summarized in Algorithm 1. The decisive computational advantage, especially for wider kernels, derives from the fact that the number of vertices of the FD mesh is completely independent of the scale-vector.
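The taps of Table 1 arise as the tensor product of four two-tap differences, one along each direction $\theta_k$ with step $a_k \mathbf{u}_{\theta_k}$. A small sketch under that reading, assuming NumPy (the sign of each tap is the parity of the number of unit shifts it accumulates):

```python
import itertools
import numpy as np

def fd_mesh(a):
    """The 16 taps (x_i, w_i) of the FD operator Delta^4_a, built as the tensor
    product of two-tap differences along the four directions; a'_k = a_k/sqrt(2)
    for the diagonal directions k = 2, 4, as in Table 1."""
    a1, a2, a3, a4 = a
    steps = [np.array([a1, 0.0]),                    # theta = 0
             np.array([a2, a2]) / np.sqrt(2.0),      # theta = pi/4
             np.array([0.0, a3]),                    # theta = pi/2
             np.array([-a4, a4]) / np.sqrt(2.0)]     # theta = 3*pi/4
    w = 1.0 / (a1 * a2 * a3 * a4)
    taps = []
    for eps in itertools.product((0, 1), repeat=4):  # 2^4 = 16 vertices
        x = sum((e * s for e, s in zip(eps, steps)), np.zeros(2))
        taps.append((x, (-1) ** sum(eps) * w))
    return taps
```

Note that the weights necessarily sum to zero, as must be the case for a finite-difference operator.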
As a result, the algorithm has a fixed computational cost per pixel, modulo the cost of the running-sums and the interpolations (see Table 2).

Algorithm 1: Space-variant elliptical filtering
1. Input: $f[\mathbf{n}]$ and $\mathbf{a}(\mathbf{n})$.
2. Perform the recursions:
   $g_0[n_1, n_2] \leftarrow f[n_1, n_2] + g_0[n_1 - 1, n_2]$
   $g_{\pi/4}[n_1, n_2] \leftarrow \sqrt{2}\, g_0[n_1, n_2] + g_{\pi/4}[n_1 - 1, n_2 - 1]$
   $g_{\pi/2}[n_1, n_2] \leftarrow g_{\pi/4}[n_1, n_2] + g_{\pi/2}[n_1, n_2 - 1]$
   $g_{\mathbf{b}}[n_1, n_2] \leftarrow \sqrt{2}\, g_{\pi/2}[n_1, n_2] + g_{\mathbf{b}}[n_1 + 1, n_2 - 1]$
3. Local FD operation:
   for each position $\mathbf{n}$ do
     compute $w_i$, $\mathbf{x}_i$ and $\boldsymbol{\tau}$ using $\mathbf{a}(\mathbf{n})$
     evaluate $F(\mathbf{n} + \boldsymbol{\tau} - \mathbf{x}_i)$ using ZP interpolation
     $\bar{f}[\mathbf{n}] \leftarrow \sum_i w_i\, F(\mathbf{n} + \boldsymbol{\tau} - \mathbf{x}_i)$
   end for
4. Return $\bar{f}[\mathbf{n}]$.

3.2 Size, elongation and orientation of the box splines

As mentioned earlier, the size and shape of the radially-uniform box spline can be controlled by appropriately adjusting the scales of the constituent box distributions. In this regard, we now discuss the following: (i) the forward problem of controlling the anisotropy of the four-directional box spline by acting on the scale-vector, and (ii) the inverse problem of uniquely specifying the scale-vector of the box spline corresponding to a given covariance (geometry). For notational ease, we shall henceforth drop the superscript $N = 4$ when referring to the four-directional box spline and its related parameters.

3.2.1 Control on the anisotropy

The elliptical geometry of this box spline is specified using the parameters defined in (23), namely,

$$s_{\mathbf{a}} = \frac{1}{12}\sum a_k^2, \qquad \theta_{\mathbf{a}} = \tan^{-1}\left(\frac{a_3^2 - a_1^2 + \sqrt{D}}{a_2^2 - a_4^2}\right), \qquad \varrho_{\mathbf{a}} = \frac{\sum a_k^2 + \sqrt{D}}{\sum a_k^2 - \sqrt{D}},$$

where $D = (a_3^2 - a_1^2)^2 + (a_2^2 - a_4^2)^2$. It turns out that the size and orientation can be arbitrarily controlled by adjusting the scale-vector. Indeed, the size can be easily manipulated by multiplying the scale-vector $\mathbf{a}$ by a constant factor, since this leaves both the orientation and elongation unchanged.
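The parameter formulas above can be checked against a direct eigendecomposition of the covariance matrix. A small numerical sketch, assuming NumPy; the covariance expression used here is the specialization of (22) to the four directions $0, \pi/4, \pi/2, 3\pi/4$:

```python
import numpy as np

def covariance(a):
    """Covariance of the four-directional box spline, i.e. Eq. (22)
    specialized to theta_k = 0, pi/4, pi/2, 3*pi/4, in terms of p_k = a_k^2."""
    p = np.asarray(a, float) ** 2
    return np.array([[2 * p[0] + p[1] + p[3], p[1] - p[3]],
                     [p[1] - p[3], 2 * p[2] + p[1] + p[3]]]) / 24.0

def elliptical_parameters(a):
    """Size, elongation and orientation per the closed forms of Sec. 3.2.1."""
    p = np.asarray(a, float) ** 2
    D = (p[2] - p[0]) ** 2 + (p[1] - p[3]) ** 2
    size = np.sum(p) / 12.0
    elong = (np.sum(p) + np.sqrt(D)) / (np.sum(p) - np.sqrt(D))
    # tan(theta) = (a3^2 - a1^2 + sqrt(D)) / (a2^2 - a4^2); folded into [0, pi)
    theta = np.arctan2(p[2] - p[0] + np.sqrt(D), p[1] - p[3]) % np.pi
    return size, elong, theta
```

The closed forms agree with the spectrum of the covariance: the size is the trace, the elongation the eigenvalue ratio, and the orientation the angle of the dominant eigenvector.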
The elongation can be arbitrarily controlled in the neighborhood of the four principal directions. However, there exists a finite upper bound on the elongation along the other directions (cf. Appendix § 6.3).

Proposition 3.1. For every $\phi$ in $[0, \pi)$, there exists a scale-vector $\mathbf{a}$ such that $\theta_{\mathbf{a}} = \phi$. There is, however, a finite bound on the elongation, given by

$$\sup \varrho_{\mathbf{a}} < U(\phi) = \frac{1 + |\nu_\phi| + \sqrt{1 + \nu_\phi^2}}{1 + |\nu_\phi| - \sqrt{1 + \nu_\phi^2}}, \tag{25}$$

where $\nu_\phi = \frac{1}{2}(\tan\phi - \cot\phi)\,\mathrm{sign}\!\left(\frac{\pi}{2} - \phi\right)$. The supremum is over the set of $\mathbf{a}$ for which $\theta_{\mathbf{a}} = \phi$.

Fig. 6 illustrates the variation of $1/U(\phi)$ as a function of the orientation; the rationale behind showing the inverse plot is to avoid the blowups $U(\phi) \to +\infty$ as $\phi \to \theta_k$. In particular, a bound of $3 + 2\sqrt{2} \approx 5.8$ is attained along the orientations $\phi = (2k - 1)\pi/8$, $1 \leq k \leq 4$, exactly mid-way between two adjacent principal directions. This is perfectly reasonable, since the control on the geometry of the box spline is minimal along these directions.

Figure 6: Polar plot of the symmetric variation of $1/U(\phi)$ as a function of the filter orientation $\phi$, where $U(\phi)$ is the bound on the elongation. The bound reaches its minimum when the orientation of the filter is exactly midway between two principal axes, whereas arbitrary elongation is achievable in the neighborhood of the four principal directions $\phi = 0, \pi/4, \pi/2$ and $3\pi/4$.

In order to specify the elliptical geometry of the box spline, we use either of the following equivalent descriptors, as convenient:

(D1) Size, elongation and orientation $(s, \varrho, \theta)$.
(D2) Lengths of the major and minor axes, and the orientation $(\sqrt{\lambda_{\max}}, \sqrt{\lambda_{\min}}, \theta)$.
(D3) Covariance matrix $C$.
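The bound (25) is straightforward to evaluate; a small sketch assuming NumPy:

```python
import numpy as np

def elongation_bound(phi):
    """Upper bound U(phi) on the achievable elongation, Eq. (25).
    Valid for phi strictly between the principal directions."""
    nu = 0.5 * (np.tan(phi) - 1.0 / np.tan(phi)) * np.sign(np.pi / 2 - phi)
    r = np.sqrt(1.0 + nu ** 2)
    return (1.0 + abs(nu) + r) / (1.0 + abs(nu) - r)
```

At the critical mid-way orientation $\phi = \pi/8$ one recovers the value $3 + 2\sqrt{2} \approx 5.8$ quoted above, and the bound grows without limit as $\phi$ approaches a principal direction.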
Descriptor (D1) determines the eigenvalues (the squared axis lengths) as $\lambda_{\max} = s\varrho/(1 + \varrho)$ and $\lambda_{\min} = s/(1 + \varrho)$, respectively, whereas (D2) gives the corresponding covariance as

$$C = \begin{pmatrix} \lambda_{\max}\cos^2\theta + \lambda_{\min}\sin^2\theta & \frac{1}{2}(\lambda_{\max} - \lambda_{\min})\sin 2\theta \\[2pt] \frac{1}{2}(\lambda_{\max} - \lambda_{\min})\sin 2\theta & \lambda_{\min}\cos^2\theta + \lambda_{\max}\sin^2\theta \end{pmatrix}.$$

3.2.2 Optimal scale-vector for a given anisotropy

Since the covariance matrix of the four-directional box spline (cf. Eq. (22)) is given by

$$C_{\mathbf{a}} = \frac{1}{24}\begin{pmatrix} 2a_1^2 + a_2^2 + a_4^2 & a_2^2 - a_4^2 \\ a_2^2 - a_4^2 & 2a_3^2 + a_2^2 + a_4^2 \end{pmatrix}, \tag{26}$$

the inverse problem is that of specifying a scale-vector $\mathbf{a}$ such that $C_{\mathbf{a}} = C$. By introducing the positive vector $\mathbf{p} = (a_1^2, a_2^2, a_3^2, a_4^2)$, the problem can be reformulated as: find $\mathbf{p} > 0$ such that $M\mathbf{p} = \mathbf{c}$, where

$$M = \begin{pmatrix} 2 & 1 & 0 & 1 \\ 0 & 1 & 0 & -1 \\ 0 & 1 & 2 & 1 \end{pmatrix}, \qquad \mathbf{c} = 24\,\big(C(1,1),\, C(1,2),\, C(2,2)\big).$$

The scale-vector solution is then given by $\mathbf{a} = \sqrt{\mathbf{p}}$. As far as the existence of solutions is concerned, Proposition 3.1 ensures that the linear system $M\mathbf{p} = \mathbf{c}$, $\mathbf{p} > 0$, corresponding to a given geometry $(\lambda_{\min}, \lambda_{\max}, \theta)$, is always solvable provided that the elongation $\varrho < U(\theta)$. Moreover, as it turns out, the linear system is under-determined and has infinitely many solutions. The idea then is to use a scale-vector that is "optimal" in some sense. But first, we characterize the solution space of the system $M\mathbf{p} = \mathbf{c}$, $\mathbf{p} \geq \varepsilon\mathbf{1}$. For reasons that will soon be obvious, we modify the positivity constraint to $\mathbf{p} \geq \varepsilon\mathbf{1}$, where $\varepsilon$ is some arbitrarily small positive number. We observe that $M$ is full-rank, and hence its null-space is of dimension $4 - 3 = 1$. In particular, this signifies that the solutions of $M\mathbf{p} = \mathbf{c}$ lie on the affine subspace $\{\bar{\mathbf{p}} + t\mathbf{e} : t \in \mathbb{R}\}$, where $\bar{\mathbf{p}}$ is a particular solution ($M\bar{\mathbf{p}} = \mathbf{c}$) and $\mathbf{e}$ is in the null-space ($M\mathbf{e} = \mathbf{0}$).
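Putting the inverse problem together, the following sketch (assuming NumPy) computes a particular solution by pivoting $p_4 = 1$, and then selects the free parameter $t$ by minimizing the quartic kurtosis criterion $\zeta(t)$ that is derived in the remainder of this section; the feasibility margin $\varepsilon$ is an arbitrary small constant:

```python
import numpy as np

def optimal_scale_vector(C, eps=1e-6):
    """Most-Gaussian scale-vector for a feasible target covariance C:
    solutions of Mp = c form the line p = p_bar + t*e; the kurtosis
    criterion zeta(t) is minimized over the feasible interval [t_l, t_r]."""
    M = np.array([[2.0, 1.0, 0.0, 1.0],
                  [0.0, 1.0, 0.0, -1.0],
                  [0.0, 1.0, 2.0, 1.0]])
    c = 24.0 * np.array([C[0, 0], C[0, 1], C[1, 1]])
    e = np.array([1.0, -1.0, 1.0, -1.0])            # null-space of M
    p123 = np.linalg.solve(M[:, :3], c - M[:, 3])   # particular solution, p4 = 1
    pb = np.append(p123, 1.0)
    tl = max(-pb[0], -pb[2]) + eps                  # feasibility: p >= eps * 1
    tr = min(pb[1], pb[3]) - eps
    def zeta(t):
        p = pb + t * e
        return np.sum(p ** 4) + (p[0]**2 + p[2]**2) * (p[1]**2 + p[3]**2)
    # stationary points: real roots of the cubic zeta'(t) = 0 (coefficients
    # as listed in step (iii) of the design algorithm)
    z1 = 32.0
    z2 = 24.0 * (pb[0] - pb[1] + pb[2] - pb[3])
    z3 = 16.0 * np.sum(pb ** 2) - 8.0 * (pb[0] + pb[2]) * (pb[1] + pb[3])
    z4 = (4.0 * (pb[0]**3 - pb[1]**3 + pb[2]**3 - pb[3]**3)
          + 2.0 * (pb[0] + pb[2]) * (pb[1]**2 + pb[3]**2)
          - 2.0 * (pb[1] + pb[3]) * (pb[0]**2 + pb[2]**2))
    roots = np.roots([z1, z2, z3, z4])
    cands = [r.real for r in roots if abs(r.imag) < 1e-9 and tl < r.real < tr]
    t0 = min(cands + [tl, tr], key=zeta)
    return np.sqrt(pb + t0 * e)
```

Whatever $t$ is selected, the covariance constraint $C_{\mathbf{a}} = C$ holds exactly, since every point on the line solves the linear system; the minimization only picks the most Gaussian-like member.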
Moreover, one can easily verify that, in order to adhere to the positivity constraint

$$\bar{\mathbf{p}} + t\mathbf{e} = (\bar{p}_1 + t,\, \bar{p}_2 - t,\, \bar{p}_3 + t,\, \bar{p}_4 - t) \geq \varepsilon\mathbf{1},$$

it is both necessary and sufficient that $t$ lie in the closed interval $[t_\ell, t_r]$, where $t_\ell = \max(-\bar{p}_1 + \varepsilon, -\bar{p}_3 + \varepsilon)$ and $t_r = \min(\bar{p}_2 - \varepsilon, \bar{p}_4 - \varepsilon)$. In particular, we set $\mathbf{e} = (1, -1, 1, -1)$, which is in the null-space of $M$. Note that one can easily compute $\bar{\mathbf{p}}$ by pivoting one of its components and solving for the remaining three; since $M$ is of full rank, the reduced system is always solvable.

We now use the available degree of freedom to select the solution that maximizes a certain measure of Gaussianity. A classical measure of the Gaussianity of a 1-D function is its kurtosis (the fourth-order cumulant). For a centered function $f(x)$, this is defined as $\kappa = \mu_4 - 3\mu_2^2$, where $\mu_4$ and $\mu_2$ are the fourth-order and second-order moments of $f(x)$, respectively.

Figure 7: Intensity distributions of the four-directional box splines of identical size ($s = 1$) and elongation ($\varrho = 2.5$), but with different orientations ($\theta$ ranging over $0, \pi/9, 7\pi/18, \pi/2, 23\pi/36, 3\pi/4$ and $8\pi/9$). The ellipse in each figure represents a level-set of the Gaussian having the same covariance as the corresponding box spline.

The central property of this measure is that $\kappa = 0$ for a true Gaussian function, and as a result, the absolute value of the kurtosis provides a measure of Gaussianity of the function. In particular, smaller absolute values correspond to more Gaussian-like functions. As for a bivariate function $f(\mathbf{x})$, we shall use the following matrix-valued extension:

$$K = L - \mathrm{tr}(C)\,C - 2C^2, \tag{27}$$

where $C = \int (\mathbf{x}\mathbf{x}^T) f(\mathbf{x})\, d\mathbf{x}$ and $L = \int (\mathbf{x}\mathbf{x}^T)^2 f(\mathbf{x})\, d\mathbf{x}$ are the second-order and fourth-order moment matrices of $f(\mathbf{x})$, respectively [17].
This constitutes a valid extension of the 1-D kurtosis, since (27) reduces to $\kappa = \mu_4 - 3\mu_2^2$ when $d = 1$. Moreover, we also have the following desirable properties:

(i) If $f(\mathbf{x})$ is a multivariate Gaussian, then $K = 0$ (cf. [17] for a proof).
(ii) The Frobenius norm of $K$, namely,

$$\|K\| = \Big(\sum_{i,j} |K(i,j)|^2\Big)^{1/2},$$

is rotation-invariant, i.e., the kurtosis matrices of the rotations of $f(\mathbf{x})$ all have the same Frobenius norm (proof in § 6.4).

Following the above arguments, we propose to solve the optimization problem

$$\mathbf{p}_0 = \arg\min_{\mathbf{p}} \|K_{\mathbf{p}}\|^2, \qquad M\mathbf{p} = \mathbf{c}, \quad \mathbf{p} \geq \varepsilon\mathbf{1}. \tag{28}$$

This yields the optimal scale-vector $\mathbf{a}_0 = \sqrt{\mathbf{p}_0}$ corresponding to the most Gaussian-like box spline. The rotation-invariance property ensures that box splines of identical size and elongation but different orientation, obtained via the solutions of the above optimization problem, are as homogeneous as possible.

The norm of the kurtosis matrix of $\beta_{\mathbf{a}}(\mathbf{x})$ turns out to be $\|K_{\mathbf{p}}\|^2 = \sum_k p_k^4 + (p_1^2 + p_3^2)(p_2^2 + p_4^2)$, up to a multiplicative constant that does not affect the minimization (see Appendix § 6.5). Substituting $p_k = \bar{p}_k + t e_k$ into this expression, we arrive at the quartic polynomial

$$\zeta(t) = \sum_k (\bar{p}_k + e_k t)^4 + \big((\bar{p}_1 + t)^2 + (\bar{p}_3 + t)^2\big)\big((\bar{p}_2 - t)^2 + (\bar{p}_4 - t)^2\big),$$

which, together with the parameterization $\mathbf{p} = \bar{\mathbf{p}} + t\mathbf{e}$, reduces the problem to one of finding

$$t_0 = \arg\min_t \zeta(t), \qquad t \in [t_\ell, t_r]. \tag{29}$$

The optimal solution is then given by $\mathbf{a}_0 = \sqrt{\bar{\mathbf{p}} + t_0\mathbf{e}}$. This problem is easily solved, since the minimum is attained either at an interior point of $(t_\ell, t_r)$ where $\zeta'(t) = 0$, or at one of the boundary points. In particular, we have the following simple algorithm for designing optimized Gaussian-like box splines of a specified covariance:

(i) Set $p_4 = 1$, and compute $\bar{\mathbf{p}}$ by solving the system $M\bar{\mathbf{p}} = \mathbf{c}$.
(ii) Use $\bar{\mathbf{p}}$ to compute $t_\ell$, $t_r$ and the coefficients of $\zeta'(t)$.
(iii) Find the real roots of $\zeta'(t) = 0$ over the interval $(t_\ell, t_r)$; denote the set of real roots by $R$. Then $\mathbf{a}_0 = (\bar{\mathbf{p}} + t_0\mathbf{e})^{1/2}$, where

$$t_0 = \arg\min_t \zeta(t), \qquad t \in R \cup \{t_\ell, t_r\}.$$

(The tie is broken at random if $\zeta(t)$ has multiple minimizers over $[t_\ell, t_r]$; this was rarely observed in practice.)

In particular, the coefficients of the cubic equation $\zeta'(t) = \zeta_1 t^3 + \zeta_2 t^2 + \zeta_3 t + \zeta_4 = 0$ in (iii) are given by

$$\zeta_1 = 32, \qquad \zeta_2 = 24(\bar{p}_1 - \bar{p}_2 + \bar{p}_3 - \bar{p}_4), \qquad \zeta_3 = 16\sum \bar{p}_k^2 - 8(\bar{p}_1 + \bar{p}_3)(\bar{p}_2 + \bar{p}_4),$$
$$\zeta_4 = 4(\bar{p}_1^3 - \bar{p}_2^3 + \bar{p}_3^3 - \bar{p}_4^3) + 2(\bar{p}_1 + \bar{p}_3)(\bar{p}_2^2 + \bar{p}_4^2) - 2(\bar{p}_2 + \bar{p}_4)(\bar{p}_1^2 + \bar{p}_3^2).$$

The box splines obtained using the above optimization at various orientations are shown in Fig. 7. The quality of the Gaussian approximation under different practical settings of the orientation and the elongation is quantified in Fig. 8.

The correspondences $(1, \varrho, \theta) \leftrightarrow (a_1, a_2, a_3, a_4)$ for $0 < \theta < \pi$ and $1 \leq \varrho < U(\theta)$ can be pre-computed and stored in a look-up table (LUT). Note that, for a given $\varrho$, the set of correspondences $(1, \varrho, \theta) \leftrightarrow (a_1, a_2, a_3, a_4)$ has an inherent four-fold symmetry in $\theta$, owing to the presence of the four principal directions. Hence, it suffices to store the scale-vector correspondences for $0 < \theta < \pi/4$, which reduces the size of the LUT by a factor of four. Indeed, for any arbitrary size $s > 1$, orientation $0 < \theta < \pi$, and elongation $1 \leq \varrho < U(\theta)$, the corresponding scale-vector is then obtained through the following operations:

(O1) Rotation:
$$\phi = \begin{cases} \theta & \text{for } 0 < \theta < \pi/4, \\ \theta - \pi/4 & \text{for } \pi/4 < \theta < \pi/2, \\ \theta - \pi/2 & \text{for } \pi/2 < \theta < 3\pi/4, \\ \theta - 3\pi/4 & \text{for } 3\pi/4 < \theta < \pi. \end{cases}$$

(O2) Find $(a_1, a_2, a_3, a_4)$ corresponding to $(1, \varrho, \phi)$ using the LUT. The desired
scale-vector is then given by the following permutation and rescaling:

$$(a_1, a_2, a_3, a_4) \mapsto \begin{cases} \sqrt{s}\,(a_1, a_2, a_3, a_4) & \text{for } 0 < \theta < \pi/4, \\ \sqrt{s}\,(a_2, a_3, a_4, a_1) & \text{for } \pi/4 < \theta < \pi/2, \\ \sqrt{s}\,(a_3, a_4, a_1, a_2) & \text{for } \pi/2 < \theta < 3\pi/4, \\ \sqrt{s}\,(a_4, a_1, a_2, a_3) & \text{for } 3\pi/4 < \theta < \pi. \end{cases}$$

Figure 8: Normalized correlation between the optimal four-directional box spline and the target Gaussian at different elongations ($\varrho = 1, 3, 5$) and orientations. For a fixed elongation, the correlation is minimum at the critical orientation $\theta = 22.5°$, and improves symmetrically as $\theta$ approaches the principal orientations (cf. Fig. 6).

3.3 Higher-order box splines

As suggested by the convergence result (18), the Gaussian-like nature of the four-directional box splines can be improved by using more directions. Implementing the corresponding space-variant filtering using the algorithm in § 2.2.2, however, turns out to be challenging and not very practical: the principal axes of these box splines are generally along off-grid directions, and one needs to interpolate the image to implement the associated running-sums. The iterated four-directional box splines $\beta^{4,m}_{\mathbf{a}}(\mathbf{x})$ introduced in § 2.2.3 provide a practical alternative. These box splines rapidly converge to a Gaussian as $m$ increases.

Figure 9: Higher-order box splines through iterative convolutions, in isotropic ($s = 1$, $\varrho = 1$) and anisotropic ($s = 1$, $\varrho = 3$, $\theta = \pi/8$) forms. Left: the reference four-directional box spline; Center: the iterated box spline obtained by convolving the (rescaled) four-directional box spline with itself; Right: the target Gaussian having identical covariance.

Also, note that the four-directional box spline and its iterates have identical covariances. This implies that the
algorithm in § 3.2.2 can be used for optimizing the iterated box splines as well. The first two iterates of the four-directional box spline, along with the target Gaussian, are shown in Fig. 9. It is seen that $\beta^{4,2}_{\mathbf{a}}(\mathbf{x})$ resembles the target Gaussian very closely. In fact, the minimum correlation coefficient rises from 95% to 99% for $m = 2$ (cf. Fig. 8). In practice, we can thus implement a higher-order Gaussian-like filtering by simple iterations of the algorithm in § 3.1. It suffices to set the scale-vector in the algorithm to $\mathbf{a}/\sqrt{m}$, where $m$ is the number of iterations.

4 EXPERIMENTAL RESULTS

4.1 Computation time

The space-variant filtering using the four-directional box spline was implemented in Java on a 2.66 GHz Intel system. The typical execution times required for convolving a 512 × 512 image with kernels of various sizes are shown in Table 2. It is clear that the run time is independent of the size of the kernel.

Table 2: Average computation time for box splines of different sizes.

  Size (s)         | 1   | 2   | 4   | 8   | 16
  Time (millisec.) | 101 | 100 | 103 | 101 | 100

4.2 Application: Feature-preserving smoothing

We now present an application to demonstrate the space-variant algorithm described in § 3.1. Filtering of noisy images using isotropic Gaussian filters often results in excessive blurring of the anisotropic image features. Diffusion filters are known to perform better in such cases [19]. As an alternative, we propose to filter the corrupted image using our anisotropic Gaussian-like filters, adapting the size, elongation and orientation of the filter to the local image features. The main idea is to locally average the image using elliptical windows that are elongated along the image feature (orthogonal to the local gradient). This induces more smoothing along the direction of minimal intensity variation, suppressing the ambient noise while preserving the sharpness of the image features.
To derive an estimate of the local image anisotropy, we use the paradigm of structure tensors [9], where the local orientation $\theta(\mathbf{x})$ is estimated through the minimization of a certain weighted norm of the directional derivative. In particular, if we denote the directional derivative of $f(\mathbf{x})$ along $\mathbf{u}_\theta = (\cos\theta, \sin\theta)$ by $D_\theta f(\mathbf{x})$, then $\theta(\mathbf{x})$ is given by the minimizer of

$$\int_\Omega w(\mathbf{s})\, |(D_\theta f)(\mathbf{x} - \mathbf{s})|^2\, d\mathbf{s}, \tag{30}$$

where $\Omega$ is the support of the isotropic averaging window $w(\mathbf{x})$. The solvability of the above optimization problem follows from the observation that it can be recast as an eigenvalue problem. In particular, by expressing the directional derivative in terms of the gradient $\mathbf{g}(\mathbf{x})$, namely as $D_\theta f(\mathbf{x}) = \mathbf{u}_\theta^T \mathbf{g}(\mathbf{x})$, one can rewrite (30) as

$$\mathbf{u}_\theta^T J(\mathbf{x})\, \mathbf{u}_\theta, \tag{31}$$

where the structure tensor $J(\mathbf{x})$ is the $2 \times 2$ positive-semidefinite matrix

$$J(\mathbf{x}) = \int_\Omega w(\mathbf{s})\, \big(\mathbf{g}\mathbf{g}^T\big)(\mathbf{x} - \mathbf{s})\, d\mathbf{s}.$$

The local orientation $\theta^\star(\mathbf{x})$ is then given by the minimizer of (31), associated with the minimum eigenvalue of $J(\mathbf{x})$. In view of the definitions in (23), and the fact that the eigenvalues of $J(\mathbf{x})$ are always non-negative, we propose to estimate the elongation as follows: we set $\varrho^\star = \lambda_{\max}/\lambda_{\min}$ if both eigenvalues are non-zero, equal to $1$ if both are zero (locally isotropic intensity), and equal to $\max(1, \lambda)$ if only one of the eigenvalues $\lambda$ is non-zero. Finally, we estimate the size of the box spline as $s^\star = \lambda_{\max} + \lambda_{\min}$. The triple $(s^\star, \varrho^\star, \theta^\star)$ is then used to compute the optimal scale-vector using the algorithm described in § 3.2. The components of $J$ can be efficiently computed using simple convolutions and pointwise operations; we refer the reader to [9, Chapter 13] for implementation details. The main steps of the proposed smoothing algorithm are:

• Computation of the structure-tensor.
• Pre-integration of the corrupted image using the running-sum filters.
• Computation of the triple ($s^\star$, $\varrho^\star$, $\theta^\star$
) at every feature location using the structure tensor. This is used to compute the scale-vector $\mathbf{a}(\mathbf{n})$ of the optimal Gaussian-like box spline using the algorithm in § 3.2. Isotropic box splines are used in the uniform-intensity regions; we set $\mathbf{a}(\mathbf{n}) = (\sigma, \sigma, \sigma, \sigma)$, where $\sigma$ is proportional to the noise variance.

• Computation of the FD mesh using $\mathbf{a}(\mathbf{n})$, and its application to the pre-integrated image.

Figure 10: Results on a test image. (A) Barbara corrupted with additive Gaussian noise, PSNR = 18.0 dB; (B) isotropic smoothing, PSNR = 23.10 dB; (C) diffusion filtering, PSNR = 23.25 dB; (D) our algorithm, PSNR = 23.58 dB.

Figure 11: Results on a real image. (A) Noise-free immunofluorescence image of actin fibres (courtesy of C. Aemisegger, CMIA, University of Zürich); (B) image corrupted with additive Gaussian noise, PSNR = 12.20 dB; (C) isotropic smoothing, PSNR = 15.38 dB; (D) diffusion filtering, PSNR = 15.50 dB; (E) our algorithm, PSNR = 15.80 dB.

To demonstrate the effectiveness of our strategy in preserving oriented patterns in noisy images, we compare the results obtained from our algorithm with those obtained using the (fixed-scale) isotropic Gaussian filter and the Perona-Malik diffusion filter [14]. We use the standard Barbara test image and corrupt it with additive Gaussian noise. The variance of the noise is used to set the size of the Gaussian for the isotropic smoothing. The parameters used for the Perona-Malik filter were typical: a time step of 0.1, a conductance in the range 10-30, and a total of 15-30 iterations. The parameters were manually tuned to optimize the PSNR, and also to avoid blocking artifacts. Fig. 10 shows the results obtained from the different filters.
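The structure-tensor orientation estimate used above can be sketched as follows, assuming NumPy. The averaging window $w$ is simplified to a plain $n \times n$ box average, and the gradient is taken with finite differences; both are purely illustrative choices.

```python
import numpy as np

def structure_tensor_orientation(f, n=5):
    """Local feature orientation from the structure tensor J of Eq. (31).
    The minimizer of u^T J u is the eigenvector of J with the smallest
    eigenvalue, i.e. the dominant (gradient) direction rotated by 90 degrees."""
    def smooth(u):
        k = np.ones(n) / n
        u = np.apply_along_axis(lambda v: np.convolve(v, k, mode='same'), 0, u)
        return np.apply_along_axis(lambda v: np.convolve(v, k, mode='same'), 1, u)
    gy, gx = np.gradient(f)                          # axis 0 = y, axis 1 = x
    J11, J12, J22 = smooth(gx * gx), smooth(gx * gy), smooth(gy * gy)
    # closed-form angle of the dominant eigenvector of the 2x2 tensor,
    # then rotate by pi/2 to get the direction of minimal intensity variation
    theta_max = 0.5 * np.arctan2(2.0 * J12, J11 - J22)
    return (theta_max + np.pi / 2) % np.pi
```

On an image of vertical stripes (intensity varying only along $x$), the estimate returns $\theta^\star = \pi/2$, i.e. the stripe direction, as expected.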
As far as the quantitative evaluation of the filters is concerned, our algorithm clearly outperforms both the isotropic and diffusion filters in terms of the Peak-Signal-to-Noise-Ratio (PSNR). Moreover, as shown in the zoomed-in sections of the respective images, the oriented stripes on the clothes are quite faithfully restored by our algorithm, whereas a significant amount of blurring of the stripes is seen in the results obtained using isotropic and diffusion filtering. The non-linear diffusion filter, however, tends to perform better at low PSNRs in the range of 5-10 dB (cf. Table 3).

Next, we compare the results on a real biological image at a much lower PSNR of around 12 dB. We consider the fluorescence image shown in Fig. 11, which exhibits numerous elongated fiber-like structures. The parameters of the isotropic filter and the diffusion filter are set as in the previous case, except that the iteration count for the latter is increased to 15. As before, the improvement in PSNR obtained using our filter is the highest. Importantly, as seen from the zooms, our algorithm results in significantly less merging of the close fibers and less blurring of the finer ones. The average execution time of our algorithm is 0.6 seconds for a 512 × 512 image, which includes the computation of the structure-tensor, the running-sums, the optimal scale-vector, the interpolated samples and the finite-differences.

The four-directional box splines can also be used to derive fast space-variant detectors based on Gaussian forms, e.g., the Laplacian-of-Gaussian (LoG) or the so-called Mexican-hat detector. We refer the interested reader to [3], where the isotropic forms of the four-directional box spline were used to realize a fast and scalable Mexican-hat-like detector.
In particular, a modified version of the space-variant algorithm described in § 3.1 is used to design an efficient coarse-to-fine strategy for the detection of the centers and radii of cells/nuclei in fluorescence images.

Table 3: Comparison of the filters at different noise levels using the Barbara test image. The table shows the PSNR of the outputs.

  Input PSNR (dB)  | 10.0  | 12.0  | 14.0  | 16.0  | 18.0  | 20.0
  Isotropic filter | 15.38 | 18.20 | 20.20 | 21.65 | 23.10 | 24.30
  Diffusion filter | 15.48 | 18.31 | 20.30 | 21.70 | 23.25 | 24.35
  Our filter       | 15.45 | 18.38 | 20.57 | 21.94 | 23.58 | 24.56

5 CONCLUSION

In this paper, we presented a framework for elliptical filtering using the radially-uniform box splines. The associated space-variant filtering was efficiently realized using running-sums and local finite-differences. The attractive features of our algorithm are:

• the O(1) computational complexity per pixel, and
• the use of real-valued parameters for continuously controlling the shape and size of the filter.

Our filtering paradigm offers a nice trade-off between the quality of approximation of Gaussians and the computational complexity of linear space-variant filtering. We also presented a closed-form solution to the problem of constructing four-directional box splines with given covariances. The scope of our algorithm was demonstrated through the realization of a smoothing filter that adapts to the local image characteristics.

6 APPENDIX

6.1 Proof of Theorem 2.2

We first establish that the Fourier sequence $\hat{\beta}^2_{\mathbf{a}(2)}(\boldsymbol{\omega}), \hat{\beta}^3_{\mathbf{a}(3)}(\boldsymbol{\omega}), \ldots$ converges pointwise to a Gaussian:

$$\lim_{N \to \infty} \hat{\beta}^N_{\mathbf{a}(N)}(\boldsymbol{\omega}) = \exp\left(-\frac{\sigma^2}{2}\|\boldsymbol{\omega}\|^2\right). \tag{32}$$

We then show that the above convergence also holds in the $L^2(\mathbb{R}^2)$ norm. This establishes the theorem, since it is well-known that the Fourier transform of a Gaussian is a Gaussian, and that $f_n \to g$ in $L^2$ if $\hat{f}_n \to \hat{g}$ in $L^2$.
To derive (32), we note that $\hat{\varphi}_{a,\theta}(\boldsymbol{\omega}) = \hat{\beta}_a(\mathbf{u}_\theta^T\boldsymbol{\omega}) = \mathrm{sinc}(a\,\mathbf{u}_\theta^T\boldsymbol{\omega}/2)$, where $\mathrm{sinc}(x) = \sin(x)/x$ for $x \neq 0$, and equals $1$ at the origin. The convolution-multiplication rule then gives

$$\hat{\beta}^N_{\mathbf{a}(N)}(\boldsymbol{\omega}) = \prod_{k=1}^{N} \hat{\varphi}_{a_k(N),\theta_k}(\boldsymbol{\omega}) = \prod_{k=1}^{N} \mathrm{sinc}\left(\frac{a_k(N)}{2}\,\mathbf{u}_{\theta_k}^T\boldsymbol{\omega}\right). \tag{33}$$

Using the estimate $\mathrm{sinc}(x) = 1 - x^2/6 + O(x^4)$ for $|x| < 1$, and substituting $a_k(N) = \sigma\sqrt{24/N}$ into (33), we have

$$\hat{\beta}^N_{\mathbf{a}(N)}(\boldsymbol{\omega}) = \prod_{k=1}^{N}\left(1 - \frac{\sigma^2}{N}\big(\mathbf{u}_{\theta_k}^T\boldsymbol{\omega}\big)^2 + O\big(N^{-2}\big)\right) \qquad (\|\boldsymbol{\omega}\| < cN), \tag{34}$$

where $c$ is some positive constant. By developing the quadratic factor $(\mathbf{u}_{\theta_k}^T\boldsymbol{\omega})^2$ and the product in (34), we arrive at the estimate

$$\begin{aligned}
\hat{\beta}^N_{\mathbf{a}(N)}(\boldsymbol{\omega}) &= \prod_{k=1}^{N}\left(1 - \frac{\sigma^2}{2N}\|\boldsymbol{\omega}\|^2 + \frac{\sigma^2}{2N}(\omega_1^2 - \omega_2^2)\cos 2\theta_k + \frac{\sigma^2}{N}\omega_1\omega_2\sin 2\theta_k + O\big(N^{-2}\big)\right)\\
&= \left(1 - \frac{\sigma^2}{2N}\|\boldsymbol{\omega}\|^2\right)^N + \frac{\sigma^2}{2N}(\omega_1^2 - \omega_2^2)\left(1 - \frac{\sigma^2}{2N}\|\boldsymbol{\omega}\|^2\right)^{N-1}\sum_{k=1}^{N}\cos 2\theta_k\\
&\qquad + \frac{\sigma^2}{N}\omega_1\omega_2\left(1 - \frac{\sigma^2}{2N}\|\boldsymbol{\omega}\|^2\right)^{N-1}\sum_{k=1}^{N}\sin 2\theta_k + O\big(N^{-2}\big)\\
&= \left(1 - \frac{\sigma^2}{2N}\|\boldsymbol{\omega}\|^2\right)^N + O\big(N^{-2}\big) \qquad (\|\boldsymbol{\omega}\| < cN). \tag{35}
\end{aligned}$$

This is exactly where the fact that the $\theta_k$ are uniformly distributed over $[0, \pi)$ is invoked: the cancellation of the linear factors in the second step is based on the identities $\sum_{k=1}^{N}\cos 2\theta_k = 0$ and $\sum_{k=1}^{N}\sin 2\theta_k = 0$, where $\theta_k = (k-1)\pi/N$. Since $(1 - x/m)^m$ converges to $\exp(-x)$ as $m \to \infty$, it can now be readily seen that (32) follows as the limiting case of (35).

To demonstrate that (32) holds in the $L^2$ norm sense, it suffices to show that the sequence of error functions $\mathcal{E}_N(\boldsymbol{\omega}) = \hat{\beta}^N_{\mathbf{a}(N)}(\boldsymbol{\omega}) - \exp(-\sigma^2\|\boldsymbol{\omega}\|^2/2)$ converges to zero in that norm, i.e., $\|\mathcal{E}_N\|_{L^2} \to 0$ as $N \to \infty$. Since we have already demonstrated that $\mathcal{E}_N(\boldsymbol{\omega}) \to 0$ pointwise, all we need to show in order to invoke the dominated convergence theorem is that the sequence $|\mathcal{E}_2(\boldsymbol{\omega})|, |\mathcal{E}_3(\boldsymbol{\omega})|, \ldots$ is uniformly bounded by an $L^2$ function.
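The pointwise limit (32) is easy to check numerically from the product form (33). A small sketch, assuming NumPy (note that np.sinc is the normalized sinc $\sin(\pi x)/(\pi x)$, hence the division by $\pi$):

```python
import numpy as np

def box_spline_ft(omega, sigma, N):
    """Fourier transform (33) of the radially-uniform box spline with the
    scales a_k(N) = sigma * sqrt(24/N) used in the proof of Theorem 2.2."""
    thetas = np.arange(N) * np.pi / N                 # theta_k = (k-1)*pi/N
    a = sigma * np.sqrt(24.0 / N)
    u = np.stack([np.cos(thetas), np.sin(thetas)])    # directions u_theta_k
    # sin(y)/y with y = (a/2) * u^T omega, via the normalized np.sinc
    return np.prod(np.sinc(0.5 * a * (u.T @ omega) / np.pi))
```

For large $N$, the product is numerically indistinguishable from the Gaussian $\exp(-\sigma^2\|\boldsymbol{\omega}\|^2/2)$, in agreement with (35).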
Moreover, since $|\mathcal{E}_N(\boldsymbol{\omega})| \leq |\hat{\beta}^N_{\mathbf{a}(N)}(\boldsymbol{\omega})| + \exp(-\sigma^2\|\boldsymbol{\omega}\|^2/2)$, it in fact suffices to show that each $|\hat{\beta}^N_{\mathbf{a}(N)}(\boldsymbol{\omega})|$ admits such a bound. The main idea behind establishing the bound is that the above-mentioned sequence can be covered by a Gaussian in a neighborhood of the origin, and by a function with sufficient decay at the tails, both of which are independent of $N$. Indeed, using the estimate $\mathrm{sinc}(u) \leq 1 - u^2/\pi^2$ for $|u| \leq \pi$, one can verify that

$$\left|\hat{\beta}^N_{\mathbf{a}(N)}(\boldsymbol{\omega})\right| = \prod_{k=1}^{N}\left|\mathrm{sinc}\left(\frac{\sqrt{6}\,\sigma}{\sqrt{N}}\,\mathbf{u}_{\theta_k}^T\boldsymbol{\omega}\right)\right| \leq \prod_{k=1}^{N}\left(1 - \frac{6\sigma^2\,|\mathbf{u}_{\theta_k}^T\boldsymbol{\omega}|^2}{\pi^2 N}\right) \leq \exp\big(-C_1\|\boldsymbol{\omega}\|^2\big) \qquad (\|\boldsymbol{\omega}\| < \delta).$$

As far as the tail is concerned, the Cauchy-Schwarz inequality, $|\mathbf{u}_{\theta_k}^T\boldsymbol{\omega}| \leq \|\mathbf{u}_{\theta_k}\|\cdot\|\boldsymbol{\omega}\| = \|\boldsymbol{\omega}\|$, gives

$$\left|\hat{\beta}^N_{\mathbf{a}(N)}(\boldsymbol{\omega})\right| = \prod_{k=1}^{N}\left|\mathrm{sinc}\left(\frac{\sqrt{6}\,\sigma}{\sqrt{N}}\,\mathbf{u}_{\theta_k}^T\boldsymbol{\omega}\right)\right| \leq \frac{C_2}{\|\boldsymbol{\omega}\|^2} \qquad (\|\boldsymbol{\omega}\| \geq \delta).$$

Here $C_1$, $C_2$ and $\delta$ are appropriate positive constants that are independent of $N$. Combining the above estimates, we see that

$$\left|\hat{\beta}^N_{\mathbf{a}(N)}(\boldsymbol{\omega})\right| \leq \exp\big(-C_1\|\boldsymbol{\omega}\|^2\big) + \frac{C_2}{\|\boldsymbol{\omega}\|^2}\left(1 - \mathrm{rect}\left(\frac{\|\boldsymbol{\omega}\|}{\delta}\right)\right)$$

for all $\boldsymbol{\omega}$. Since the function on the right is indeed in $L^2(\mathbb{R}^2)$, this establishes the desired bound, and consequently, the norm convergence.

6.2 Covariance matrix

We begin with the observation that if $f(\mathbf{x})$ and $g(\mathbf{x})$ are symmetric (about the origin) and have a total mass of unity, then $C_{f*g} = C_f + C_g$, where $C_f$ denotes the covariance matrix of $f(\mathbf{x})$. Indeed, noting that $\hat{f}(\mathbf{0}) = \hat{g}(\mathbf{0}) = 1$ (unit mass) and $\partial_i\hat{f}(\mathbf{0}) = \partial_i\hat{g}(\mathbf{0}) = 0$ (by symmetry), and recalling the multiplication-differentiation rule $\int x_i x_j f(\mathbf{x})\,d\mathbf{x} = -\partial_i\partial_j\hat{f}(\mathbf{0})$, we have

$$\begin{aligned}
C_{f*g}(i,j) &= \int x_i x_j\, (f*g)(\mathbf{x})\, d\mathbf{x}\\
&= -\hat{g}(\mathbf{0})\,\partial_i\partial_j\hat{f}(\mathbf{0}) - \hat{f}(\mathbf{0})\,\partial_i\partial_j\hat{g}(\mathbf{0}) - \partial_i\hat{f}(\mathbf{0})\,\partial_j\hat{g}(\mathbf{0}) - \partial_i\hat{g}(\mathbf{0})\,\partial_j\hat{f}(\mathbf{0})\\
&= -\partial_i\partial_j\hat{f}(\mathbf{0}) - \partial_i\partial_j\hat{g}(\mathbf{0}) = C_f(i,j) + C_g(i,j).
\end{aligned}$$
Since the directional box distributions $\varphi_{a_k,\theta_k}(\mathbf{x})$ satisfy these criteria, we have $C^N_{\mathbf{a}} = \sum_k C_k$, where $C_k$ is the covariance matrix of $\varphi_{a_k,\theta_k}(\mathbf{x})$. We explicitly compute the component $C_k(1,2)$; the remaining components can be derived similarly. Using the multiplication-differentiation rule again, we have

$$C_k(1,2) = \int x_1 x_2\, \varphi_{a_k,\theta_k}(\mathbf{x})\, d\mathbf{x} = -\partial_1\partial_2\,\hat{\beta}_{a_k}\big(\mathbf{u}_{\theta_k}^T\boldsymbol{\omega}\big)\Big|_{\boldsymbol{\omega}=\mathbf{0}} = \frac{a_k^2}{24}\sin 2\theta_k.$$

Therefore, $C^N_{\mathbf{a}}(1,2) = \sum_k C_k(1,2) = \sum_k a_k^2 \sin 2\theta_k/24$. As far as the positive-definite nature of $C^N_{\mathbf{a}}$ is concerned, it suffices to show that the eigenvalues (up to the immaterial positive factor $1/12$)

$$\lambda_{\max} = \frac{1}{2}\left(\sum a_k^2 + \sqrt{D}\right) \qquad \text{and} \qquad \lambda_{\min} = \frac{1}{2}\left(\sum a_k^2 - \sqrt{D}\right),$$

where $D = \left(\sum a_k^2\cos 2\theta_k\right)^2 + \left(\sum a_k^2\sin 2\theta_k\right)^2$, are strictly positive. This is obviously the case for $\lambda_{\max}$. Moreover, the inequality

$$\left(\sum_k a_k^2\right)^2 - D = \left(\sum_k a_k^2\right)^2 - \left(\sum_k a_k^2\cos 2\theta_k\right)^2 - \left(\sum_k a_k^2\sin 2\theta_k\right)^2 = 2\sum_{k < \ell} a_k^2 a_\ell^2\big(1 - \cos(2\theta_k - 2\theta_\ell)\big) > 0$$

tells us that $\sum a_k^2 > \sqrt{D}$. Hence, $\lambda_{\min}$ is strictly positive as well.

6.3 Proof of Proposition 3.1

Following definition (23), the dependence of the orientation of the box spline $\beta^4_{\mathbf{a}}(\mathbf{x})$ on the scale-vector can be expressed as

$$\tan\theta_{\mathbf{a}} = \nu + \mathrm{sign}(a_2 - a_4)\sqrt{1 + \nu^2} \qquad (0 < \theta_{\mathbf{a}} < \pi), \tag{36}$$

where $\nu = (a_3^2 - a_1^2)/(a_2^2 - a_4^2)$. We note the following: it is both necessary and sufficient that $a_2 > a_4$ (resp. $a_2 < a_4$) for the box spline to be oriented within $0 < \theta_{\mathbf{a}} < \pi/2$ (resp. $\pi/2 < \theta_{\mathbf{a}} < \pi$), and it is the map $(a_1, a_2, a_3, a_4) \mapsto (\nu, \mathrm{sign}(a_2 - a_4))$ that uniquely determines the orientation of the box spline. Indeed, the uniqueness aspect is based on the argument that, for $0 < \theta_{\mathbf{a}} < \pi/2$, (36) reduces to $\tan\theta_{\mathbf{a}} = \nu + \sqrt{1 + \nu^2}$ as a consequence of the necessary condition $a_2 > a_4$.
This implicitly represents a one-to-one correspondence between $\theta_a$ and $\nu$ over the domains $(0, \pi/2)$ and $(-\infty, \infty)$, since the map $\theta_a \mapsto \tan\theta_a$ from $(0,\pi/2)$ onto $(0,\infty)$ and the map $\nu \mapsto \nu + \sqrt{1+\nu^2}$ from $(-\infty,\infty)$ onto $(0,\infty)$ are both strictly monotonic. In a similar vein, a one-to-one correspondence between $\theta_a$ and $\nu$ over the domains $(\pi/2, \pi)$ and $(-\infty, \infty)$ can be established. In particular, we have

$$\nu = \frac{1}{2}\big(\tan\theta_a - \cot\theta_a\big)\,\mathrm{sign}\Big(\frac{\pi}{2} - \theta_a\Big). \qquad (37)$$

That is, given any orientation $\theta_a = \phi$, the corresponding $\nu_\phi$ is uniquely specified by (37). This establishes the first part of the proposition, since there trivially exists some positive vector $(a_1, \ldots, a_4)$ such that $(a_3^2 - a_1^2)/(a_2^2 - a_4^2) = \nu_\phi$.

As far as the bound is concerned, we observe that the elongation can be expressed as $\varrho_a = 1 + 2/(\gamma - 1)$, where $\gamma = \sum a_k^2/\sqrt{D_a} \geq 1$. For a given orientation $\theta_a = \phi$, the components of the feasible scale-vectors bear the relation $(a_1^2 - a_3^2) = \nu_\phi (a_4^2 - a_2^2)$, and thus we have that

$$\gamma = \frac{\sum a_k^2}{\sqrt{(a_3^2 - a_1^2)^2 + (a_2^2 - a_4^2)^2}} = \frac{a_1^2 + a_3^2}{\sqrt{(a_3^2 - a_1^2)^2 + (a_2^2 - a_4^2)^2}} + \frac{a_2^2 + a_4^2}{\sqrt{(a_3^2 - a_1^2)^2 + (a_2^2 - a_4^2)^2}}$$
$$= \frac{|\nu_\phi|}{\sqrt{1+\nu_\phi^2}}\, \frac{a_1^2 + a_3^2}{|a_1^2 - a_3^2|} + \frac{1}{\sqrt{1+\nu_\phi^2}}\, \frac{a_2^2 + a_4^2}{|a_2^2 - a_4^2|} > \frac{1 + |\nu_\phi|}{\sqrt{1+\nu_\phi^2}},$$

following the trivial inequalities $a_1^2 + a_3^2 > |a_1^2 - a_3^2|$ and $a_2^2 + a_4^2 > |a_2^2 - a_4^2|$. Consequently,

$$\varrho_a = 1 + \frac{2}{\gamma - 1} < \frac{1 + |\nu_\phi| + \sqrt{1+\nu_\phi^2}}{1 + |\nu_\phi| - \sqrt{1+\nu_\phi^2}}. \qquad (38)$$

The above bound is tight, since it can be approached arbitrarily closely by making the scales $a_\ell$ and $a_k$ ($\theta_\ell < \phi < \theta_k$) arbitrarily large.

6.4 Rotation-invariance

Let $K$ and $K_\theta$ denote the kurtosis matrices of $f(x)$ and its rotation $f(R_\theta^T x)$, respectively, where $R_\theta$ is the rotation matrix.
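The transformation law established in the derivation that follows, $K_\theta = R_\theta K R_\theta^T$, and the resulting invariance of $\|K\|$, can be sanity-checked numerically. A minimal sketch with an arbitrary discrete symmetric distribution (the point set, weights and angle are illustrative, not from the paper):

```python
import numpy as np

def kurtosis_matrix(points, weights):
    # K = L - tr(C) C - 2 C^2, where C is the covariance matrix and
    # L = integral of ||x||^2 x x^T f(x) dx  (since (x x^T)^2 = ||x||^2 x x^T)
    C = np.einsum('n,ni,nj->ij', weights, points, points)
    L = np.einsum('n,n,ni,nj->ij', weights, (points**2).sum(axis=1), points, points)
    return L - np.trace(C) * C - 2 * C @ C

# an arbitrary symmetric unit-mass point distribution, and a rotation of it
pts = np.array([[1.5, 0.2], [-1.5, -0.2], [0.3, -1.0], [-0.3, 1.0]])
w = np.full(4, 0.25)
th = 0.7  # arbitrary rotation angle
R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])

K = kurtosis_matrix(pts, w)
K_rot = kurtosis_matrix(pts @ R.T, w)  # kurtosis matrix of f(R^T x)

assert np.allclose(K_rot, R @ K @ R.T)                       # K_theta = R K R^T
assert np.isclose(np.linalg.norm(K_rot), np.linalg.norm(K))  # norm invariance
```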
Observe that the matrices $L_\theta$ and $L$ are related as

$$L_\theta = \int (xx^T)^2 f(R_\theta^T x)\, dx = \int R_\theta (yy^T)^2 R_\theta^T f(y)\, dy \qquad (y = R_\theta^T x)$$
$$= R_\theta \Big(\int (yy^T)^2 f(y)\, dy\Big) R_\theta^T = R_\theta L R_\theta^T.$$

Similarly, we have $C_\theta = R_\theta C R_\theta^T$. This also gives us the equivalence

$$\mathrm{tr}(C_\theta) = \mathrm{tr}\big(R_\theta C R_\theta^T\big) = \mathrm{tr}\big(C R_\theta^T R_\theta\big) = \mathrm{tr}(C),$$

following the identities $\mathrm{tr}(AB) = \mathrm{tr}(BA)$ and $R_\theta^T R_\theta = I$. We can then write

$$K_\theta = L_\theta - \mathrm{tr}(C_\theta)\, C_\theta - 2 C_\theta^2 = R_\theta L R_\theta^T - \mathrm{tr}(C)\, R_\theta C R_\theta^T - 2 R_\theta C^2 R_\theta^T = R_\theta \big(L - \mathrm{tr}(C)\, C - 2C^2\big) R_\theta^T = R_\theta K R_\theta^T.$$

Our claim follows immediately, since

$$\|K_\theta\|^2 = \mathrm{tr}\big(K_\theta^T K_\theta\big) = \mathrm{tr}\big(R_\theta K^T K R_\theta^T\big) = \mathrm{tr}\big(K^T K\big) = \|K\|^2.$$

6.5 Kurtosis measure

In order to compute the kurtosis matrix, we only need to evaluate the fourth-order moments; the second-order moments are already known. In particular, using Fourier identities similar to the ones used in §6.2, one can derive the following expressions:

$$\int x_1^4\, \beta_a(x)\, dx = \frac{\mu_4}{4}\big(4a_1^4 + a_2^4 + a_4^4\big) + \frac{\mu_2^2}{2}\big(6a_1^2 a_2^2 + 6a_1^2 a_4^2 + 3a_2^2 a_4^2\big),$$
$$\int x_1^3 x_2\, \beta_a(x)\, dx = \frac{\mu_4}{4}\big(a_2^4 - a_4^4\big) + \frac{3\mu_2^2}{2}\, a_1^2\big(a_2^2 - a_4^2\big),$$
$$\int x_1^2 x_2^2\, \beta_a(x)\, dx = \frac{\mu_4}{4}\big(a_2^4 + a_4^4\big) + \frac{\mu_2^2}{2}\big(a_1^2 a_2^2 + a_1^2 a_4^2 + a_2^2 a_3^2 + a_3^2 a_4^2 - a_2^2 a_4^2 + 2a_1^2 a_3^2\big),$$
$$\int x_1 x_2^3\, \beta_a(x)\, dx = \frac{\mu_4}{4}\big(a_2^4 - a_4^4\big) + \frac{3\mu_2^2}{2}\, a_3^2\big(a_2^2 - a_4^2\big),$$
$$\int x_2^4\, \beta_a(x)\, dx = \frac{\mu_4}{4}\big(4a_3^4 + a_2^4 + a_4^4\big) + \frac{\mu_2^2}{2}\big(6a_2^2 a_3^2 + 6a_3^2 a_4^2 + 3a_2^2 a_4^2\big),$$

where $\mu_4 = 1/80$ and $\mu_2 = 1/12$ denote the fourth- and second-order moments of $\beta^1(x)$, respectively. These provide the components of the matrix $L_a$, which in turn leads to the following simple expression for the kurtosis matrix:

$$K_a = L_a - \mathrm{tr}(C_a)\, C_a - 2C_a^2 = \big(\mu_4 - 3\mu_2^2\big) \begin{pmatrix} a_1^4 + \frac{1}{2}(a_2^4 + a_4^4) & \frac{1}{2}(a_2^4 - a_4^4) \\ \frac{1}{2}(a_2^4 - a_4^4) & a_3^4 + \frac{1}{2}(a_2^4 + a_4^4) \end{pmatrix}.$$
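The closed form above, and the norm expression derived from it, can be verified numerically for an example scale-vector. The scales below are arbitrary; the $(\mu_4 - 3\mu_2^2)^2$ factor in the squared norm follows from factoring the scalar out of the matrix:

```python
import numpy as np

a1, a2, a3, a4 = 1.0, 1.5, 0.8, 1.2    # arbitrary example scale-vector
mu2, mu4 = 1/12, 1/80                  # moments of the unit box distribution
c = mu4 - 3 * mu2**2                   # kurtosis of rect(x); equals -1/120

# kurtosis matrix K_a in closed form
K = c * np.array([[a1**4 + 0.5*(a2**4 + a4**4), 0.5*(a2**4 - a4**4)],
                  [0.5*(a2**4 - a4**4), a3**4 + 0.5*(a2**4 + a4**4)]])

# closed-form squared Frobenius norm of K_a
a8 = a1**8 + a2**8 + a3**8 + a4**8
norm2 = c**2 * (a8 + (a1**4 + a3**4) * (a2**4 + a4**4))

assert np.isclose(np.linalg.norm(K)**2, norm2)
assert np.all(np.linalg.eigvalsh(K) < 0)   # negative-definite: sub-Gaussian
```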
(39)

Finally, from (39), we get $\|K_a\|^2 = \big(\mu_4 - 3\mu_2^2\big)^2 \Big(\sum_{k=1}^4 a_k^8 + \big(a_1^4 + a_3^4\big)\big(a_2^4 + a_4^4\big)\Big)$. We note that the negative factor $(\mu_4 - 3\mu_2^2)$ in (39) is in fact the kurtosis of $\mathrm{rect}(x)$, the sub-Gaussian constituent of the box spline. The fact that $K_a$ is negative-definite is thus consistent with the sub-Gaussian nature of the resulting box spline.

6.6 Fast ZP interpolation

Given a discrete function $c[n]$ and a point $x$ on $\mathbf{R}^2$, we outline a technique for the fast evaluation of the sum

$$\sum_{n \in \mathbf{Z}^2} c[n]\, \beta^4_b(n - x) \qquad (40)$$

where $b = (1, \sqrt{2}, 1, \sqrt{2})$. A sketch of the partitions of the piecewise polynomial $\beta^4_b(x)$ is provided in Fig. 12. The exact functional forms of the ZP box spline $\beta^4_b(x)$ corresponding to these partitions can be found in [21]. Since $\beta^4_b(x)$ has compact support, (40) is in fact a finite sum, and requires at most seven evaluations of the function $\beta^4_b(\cdot - x)$ for any arbitrary translation $x$. This is illustrated in Fig. 12, where the red dots $x_0, \ldots, x_6$ denote the lattice points that intersect the support of $\beta^4_b(\cdot - x)$. Thus, one needs to evaluate the translated ZP at the points $x_0, \ldots, x_6$ in order to compute the sum. The drawback here is that a naive evaluation of the spline at every $x_j$ requires a decision step to determine the associated partition before computing the corresponding polynomial.

The redundancy that we exploit is as follows. Consider the triangular regions $P_0, \ldots, P_3$, marked with blue dashed lines in Fig. 12, corresponding to four different partitions of the ZP. These together constitute a unit cell of the lattice, and hence only one lattice point intersects them. The figure shows a particular instance where this point, denoted by $x_0$, lies in $P_0$. This clearly fixes the partitions of the remaining lattice points $x_1, \ldots, x_6$. Thus, if we denote the polynomials corresponding to these partitions by $\rho_{0,0}(x), \ldots$
$, \rho_{0,6}(x)$, then the sum in (40) is simply given by $\sum_{j=0}^{6} c[x_j]\, \rho_{0,j}(x_j)$. More generally, if $x_0$ intersects the partition $P_i$ $(0 \leq i \leq 3)$, and if we denote the corresponding polynomials by $\rho_{i,j}(x)$, then the sum is given by $\sum_{j=0}^{6} c[x_j]\, \rho_{i,j}(x_j)$. Thus, we have the computational advantage that at most two binary decisions are required to simultaneously determine the ZP partitions corresponding to the points $x_j$, while the coefficients of the polynomials $\rho_{i,j}(x)$ can be pre-computed.

Figure 12: The translated ZP box spline $\beta^4_b(\cdot - x)$, $b = (1, \sqrt{2}, 1, \sqrt{2})$. The red dots $x_0, \ldots, x_6$ correspond to the points on the Cartesian lattice, and the triangular regions $P_0, \ldots, P_3$ are different partitions of the ZP, which together constitute a unit cell of the lattice.

Acknowledgements

This work was supported by the Swiss National Science Foundation under grant 200020-109415, and by the Ramón y Cajal program of the Spanish Ministry of Science and Innovation.

References

[1] T. Asahi, K. Ichige, and R. Ishii, An efficient algorithm for decomposition and reconstruction of images by box splines, IEICE Trans. on Fundamentals of Electronics, Commun. and Comp. Sc. E84 (2001), no. 8, 1883–1891.

[2] C. de Boor, K. Höllig, and S. Riemenschneider, Box Splines, Springer-Verlag, 1993.

[3] K. N. Chaudhury, Zs. Püspöki, A. Muñoz-Barrutia, D. Sage, and M. Unser, Fast detection of cells using a continuously scalable Mexican-hat-like template, IEEE International Symposium on Biomedical Imaging: From Nano to Macro (2010), in press.

[4] L. Condat and D. Van De Ville, Three-directional box-splines: Characterization and efficient evaluation, IEEE Signal Process. Lett. 13 (2006), no. 7, 417–420.

[5] R. Deriche, Fast algorithms for low-level vision, IEEE Trans. Pattern Anal. Mach. Intell. 12 (1990), no. 1, 78–87.

[6] A.
Entezari, D. Van De Ville, and T. Möller, Practical box splines for reconstruction on the body centered cubic lattice, IEEE Trans. Vis. Comput. Graph. 14 (2008), no. 2, 313–328.

[7] J. M. Geusebroek, A. W. M. Smeulders, and J. van de Weijer, Fast anisotropic Gaussian filtering, IEEE Trans. Image Process. 12 (2003), no. 8, 938–943.

[8] P. S. Heckbert, Filtering by repeated integration, Intl. Conf. on Computer Graphics and Interactive Techniques 20 (1986), no. 4, 315–321.

[9] B. Jähne, Digital Image Processing, Springer, 1997.

[10] S. Y. M. Lam and B. E. Shi, Recursive anisotropic 2-D Gaussian filtering based on a triple-axis decomposition, IEEE Trans. Image Process. 16 (2007), no. 7, 1925–1930.

[11] J.-S. Lee, Digital image enhancement and noise filtering by use of local statistics, IEEE Trans. Pattern Anal. Mach. Intell. (1980), 165–168.

[12] A. Muñoz-Barrutia, X. Artaechevarria, and C. Ortiz-de Solorzano, Spatially variant convolution with scaled B-splines, IEEE Trans. Image Process. 19 (2010), 11–24.

[13] A. Muñoz-Barrutia, R. Ertlé, and M. Unser, Continuous wavelet transform with arbitrary scales and O(N) complexity, Signal Processing 82 (2002), no. 5, 749–757.

[14] P. Perona and J. Malik, Scale-space and edge detection using anisotropic diffusion, IEEE Trans. Pattern Anal. Mach. Intell. 12 (1990), no. 7, 629–639.

[15] M. Richter, Use of box splines in computer tomography, Computing 61 (1998), no. 2, 133–150.

[16] S. Tan, J. L. Dale, and A. Johnston, Performance of three recursive algorithms for fast space-variant Gaussian filtering, Real-Time Imaging 9 (2003), no. 3, 215–228.

[17] A. Tkacenko and P. P. Vaidyanathan, Generalized kurtosis and applications in blind equalization of MIMO channels, Conference Record of the Thirty-Fifth Asilomar Conference on Signals, Systems and Computers 1 (2001), 742–746.

[18] X. Wang, On the gradient inverse weighted filters, IEEE Trans. Signal Process.
40 (1992), no. 2, 482–484.

[19] J. Weickert, Theoretical foundations of anisotropic diffusion in image processing, Theoretical Foundations of Computer Vision. Computing 11 (1994), 221–236.

[20] I. T. Young and L. J. van Vliet, Recursive implementation of the Gaussian filter, Signal Processing 44 (1995), no. 2, 139–151.

[21] P. B. Zwart, Multivariate splines with nondegenerate partitions, SIAM Journal on Numerical Analysis 10 (1973), no. 4, 665–673.
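As an illustration of the two-binary-decision partition lookup of §6.6, the following Python sketch locates the triangle of the unit cell containing a fractional offset. The geometry and labeling here are hypothetical assumptions for illustration (we assume the unit cell is cut into four triangles by its two diagonals; the actual regions $P_0, \ldots, P_3$ are defined by Fig. 12), and the pre-computed polynomials $\rho_{i,j}$ are not reproduced:

```python
def zp_partition(fx, fy):
    """Two binary decisions locate the triangle P0..P3 of the unit cell
    containing the fractional offset (fx, fy) in [0, 1)^2.

    The diagonal split and the labels are illustrative assumptions; the
    actual partitions P0..P3 are those shown in Fig. 12 of the paper."""
    below_main = fy < fx        # decision 1: below the main diagonal y = x?
    below_anti = fx + fy < 1.0  # decision 2: below the anti-diagonal y = 1 - x?
    if below_main:
        return 0 if below_anti else 1
    return 3 if below_anti else 2

# One such lookup fixes the polynomial pieces rho[i][0..6] for all seven
# lattice points at once, so evaluating the sum (40) costs seven evaluations
# of pre-computed polynomials per output point.
```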
