Constant-time filtering using shiftable kernels


Authors: Kunal Narayan Chaudhury (kchaudhu@math.princeton.edu)

Abstract

It was recently demonstrated in [5] that the non-linear bilateral filter [14] can be efficiently implemented using a constant-time or O(1) algorithm. At the heart of this algorithm was the idea of approximating the Gaussian range kernel of the bilateral filter using trigonometric functions. In this letter, we explain how the idea in [5] can be extended to a few other linear and non-linear filters [14, 17, 2]. While some of these filters have received a lot of attention in recent years, they are known to be computationally intensive. To extend the idea in [5], we identify a central property of trigonometric functions, called shiftability, that allows us to exploit the redundancy inherent in the filtering operations. In particular, using shiftable kernels, we show how certain complex filtering operations can be reduced to simply computing the moving sum of a stack of images. Each image in the stack is obtained through an elementary pointwise transform of the input image. This has a two-fold advantage. First, we can use fast recursive algorithms for computing the moving sum [15, 6], and, secondly, we can use parallel computation to further speed up the processing. We also show how shiftable kernels can be used to approximate the (non-shiftable) Gaussian kernel that is ubiquitously used in image filtering.

Keywords: Filtering, shiftability, kernel, O(1) complexity, approximation, constant-time algorithm, moving sum, neighborhood filter, spatial filter, bilateral filter, non-local means.

1 Introduction

The function $\cos(x)$ has the remarkable property that, for any translation $\tau$,

$$\cos(x - \tau) = \cos(\tau)\cos(x) + \sin(\tau)\sin(x).$$

That is, we can express the translate of a sinusoid as a linear combination of two fixed sinusoids.
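This two-term decomposition is easy to verify numerically. The following minimal Python sketch (the function name is ours) checks the identity on a few points:

```python
import math

def shifted_cos(x, tau):
    # Translate of the cosine expressed through the two fixed basis
    # functions cos(x) and sin(x); the translation tau enters only
    # through the coefficients cos(tau) and sin(tau).
    return math.cos(tau) * math.cos(x) + math.sin(tau) * math.sin(x)

# agreement with the direct evaluation of cos(x - tau)
for x in (0.0, 0.7, 2.1):
    for tau in (-1.3, 0.4, 3.0):
        assert abs(shifted_cos(x, tau) - math.cos(x - tau)) < 1e-12
```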
More generally, this holds true for any function of the form $\varphi(x) = c_1 \exp(\alpha_1 x) + \cdots + c_N \exp(\alpha_N x)$. This follows from the addition-multiplication property of the exponential. As special cases, we have the pure exponentials when the $\alpha_i$ are real, and the trigonometric functions when the $\alpha_i$ are imaginary. Not every function can be translated in this way; consider, e.g., the functions $\exp(-x^2)$ and $(1 + x^2)^{-1}$. The other class of functions that have this property are the polynomials, $\varphi(x) = c_0 + c_1 x + \cdots + c_N x^N$. Functions with this property can be realized in higher dimensions using higher-dimensional polynomials and exponentials, or simply by taking the tensor product of the one-dimensional functions.

More generally, we say that a function $\varphi(x)$ is shiftable in $\mathbf{R}^d$ if there exists a fixed (finite) collection of functions $\varphi_1(x), \ldots, \varphi_N(x)$ such that, for every translation $\tau$ in $\mathbf{R}^d$, we can write

$$\varphi(x - \tau) = c_1(\tau)\, \varphi_1(x) + \cdots + c_N(\tau)\, \varphi_N(x). \qquad (1)$$

We call the fixed functions $\varphi_1(x), \ldots, \varphi_N(x)$ the basis functions, $c_1(\tau), \ldots, c_N(\tau)$ the interpolating coefficients, and $N$ the order of shiftability. Note that the coefficients depend on $\tau$, and are responsible for capturing the translation action.

Shiftable functions, and more generally, steerable functions [11], have a long history in signal and image processing. Over the last few decades, researchers have found applications of these special functions in various domains, such as image analysis [7, 9], motion estimation [1], and pattern recognition [16], to name a few. In several of these applications, the role of steerability and its formal connection with the theory of Lie groups was only recognized much later.
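Returning to the definition (1), the polynomial case can be illustrated in the same spirit: by the binomial theorem, the translate $(x - \tau)^N$ lies in the span of the fixed basis $1, x, \ldots, x^N$, so a polynomial of degree $N$ has order of shiftability at most $N + 1$. A small Python sketch (the function names are ours):

```python
import math

def shifted_monomial_coeffs(N, tau):
    # (x - tau)^N = sum_k C(N, k) (-tau)^(N - k) x^k : the basis functions
    # are 1, x, ..., x^N, and the interpolating coefficients c_k(tau)
    # carry the entire translation action.
    return [math.comb(N, k) * (-tau) ** (N - k) for k in range(N + 1)]

def eval_shifted(N, tau, x):
    # evaluate the shiftable decomposition at x
    return sum(c * x ** k for k, c in enumerate(shifted_monomial_coeffs(N, tau)))

# the decomposition reproduces the directly translated monomial
assert abs(eval_shifted(3, 0.5, 2.0) - (2.0 - 0.5) ** 3) < 1e-12
```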
We refer the readers to the exposition of Teo [13] for an excellent account of the theory and practice of steerable functions.

In a recent paper [5], we showed how specialized trigonometric kernels could be used for fast bilateral filtering. This work was inspired by the work of Porikli [10], who had earlier shown how polynomials could be used for the same purpose. We now realize that it is the shiftability of the kernel that is at the heart of the matter, and that this idea can be applied to other forms of filtering and to more general kernels. We provide a general theory of this in Section 2, where we also propose some algorithms that have a constant-time complexity per pixel, independent of the size of the filtering kernels. To the best of our knowledge, such algorithms have not been reported in the community in full generality. The problem of designing shiftable kernels is addressed in Section 3, where we also propose a scheme for approximating the ubiquitous Gaussian kernel using shiftable functions. Finally, in Section 4, we present some thoughts on how shiftability could be used for reducing the complexity of the non-local means filter [2].

2 Filtering using moving sums

We now show how certain constant-time algorithms for image filtering can be obtained using shiftable kernels. The idea is that, using shiftability, we can decompose the local kernels (obtained through translations) in terms of "global" functions, namely the basis functions. The local information gets encoded in the interpolating coefficients. This allows us to explicitly take advantage of the redundancy in the filtering operation.

To begin with, we consider the simplest neighborhood filter, namely the spatial filter. This is given by

$$\bar f(x) = \frac{1}{\eta} \int_\Omega \varphi(y)\, f(x - y)\, dy \qquad (2)$$

where $\eta = \int_\Omega \varphi(y)\, dy$. Here $\varphi(x)$ is the kernel, and $\Omega$ is the neighborhood over which the integral (sum) is taken.
Note that, henceforth, we will use the term kernel to specifically mean a function that is symmetric, non-negative, and unimodal over its support (peak at the origin). It is not immediately clear that one can construct such kernels using shiftable functions. We will address this problem in the sequel. For the moment, suppose that the kernel $\varphi(x)$ is shiftable, so that (1) holds. We use this, along with symmetry, to center the kernel at $x$. In particular, we write

$$\varphi(y) = \varphi\big((x - y) - x\big) = \sum_{n=1}^N c_n(x)\, \varphi_n(x - y).$$

Using linearity, we have

$$\int_\Omega \varphi(y)\, f(x - y)\, dy = \sum_{n=1}^N c_n(x) \int_\Omega \varphi_n(x - y)\, f(x - y)\, dy.$$

Similarly, $\eta = \sum_{n=1}^N c_n(x) \int_\Omega \varphi_n(x - y)\, dy$. Now consider the case when $\Omega$ is a square, $\Omega = [-T, T]^2$. Then the above integrals are of the form

$$\int_{[-T, T]^2} F(x - y)\, dy.$$

This is easily recognized as the moving sum of $F(\cdot)$ evaluated at $x$. We will use the notation $\mathrm{Sum}(F, x, T)$ to denote this integral (this is simply the "moving average", but without the normalization). As is well-known, this can be efficiently computed using recursion [15, 6].

The main idea here is that, by using shiftable kernels, we can express (2) using moving sums, and these in turn can be computed efficiently. In particular, note that the number of computations required for the moving sum is independent of $T$, that is, of the size of the neighborhood $\Omega$. Such algorithms are referred to as constant-time or O(1) algorithms in the image processing community. The main steps of our algorithm are summarized in Algorithm 1.

Algorithm 1 (Constant-time spatial filtering)
Input: $f(x)$, $\varphi(x)$ as in (1), and $T$.
1. For $1 \le n \le N$, use recursion to compute $\mathrm{Sum}(f \varphi_n, x, T)$ and $\mathrm{Sum}(\varphi_n, x, T)$.
2. Set $\bar f(x)$ as the ratio of $\sum_{n=1}^N c_n(x)\, \mathrm{Sum}(f \varphi_n, x, T)$ and $\sum_{n=1}^N c_n(x)\, \mathrm{Sum}(\varphi_n, x, T)$.
Return: Filtered image $\bar f(x)$.
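To make Algorithm 1 concrete, here is a one-dimensional Python sketch (our own illustration, not the implementation used in the paper), using the order-2 shiftable kernel $\varphi(t) = \cos(\gamma t)$ with $\gamma = \pi/2T$. The moving sums are computed with a prefix-sum recursion, and the window is simply clipped at the signal boundaries:

```python
import math

def moving_sum(F, T):
    # Sum(F, x, T): windowed sum of F over [x - T, x + T], computed in
    # O(1) per sample from a running (prefix) sum.
    n = len(F)
    P = [0.0]
    for v in F:
        P.append(P[-1] + v)
    return [P[min(x + T + 1, n)] - P[max(x - T, 0)] for x in range(n)]

def spatial_filter_cosine(f, T):
    # Algorithm 1 with phi(t) = cos(g t), g = pi/(2T). Shiftability gives
    # cos(g y) = cos(g x) cos(g (x - y)) + sin(g x) sin(g (x - y)), so the
    # filter reduces to four moving sums of pointwise-transformed signals.
    n, g = len(f), math.pi / (2 * T)
    c = [math.cos(g * x) for x in range(n)]   # basis image phi_1
    s = [math.sin(g * x) for x in range(n)]   # basis image phi_2
    Sc = moving_sum([f[x] * c[x] for x in range(n)], T)
    Ss = moving_sum([f[x] * s[x] for x in range(n)], T)
    Kc, Ks = moving_sum(c, T), moving_sum(s, T)
    return [(c[x] * Sc[x] + s[x] * Ss[x]) / (c[x] * Kc[x] + s[x] * Ks[x])
            for x in range(n)]
```

The cost per sample is a fixed number of operations, independent of the window size $T$, which is precisely the O(1) behavior described above.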
The above idea can also be extended to non-linear filters, such as the edge-preserving bilateral filter [14, 17, 12]. The bilateral filtering of an image $f(x)$ is given by the formula

$$\tilde f(x) = \frac{1}{\eta(x)} \int_\Omega \varphi(y)\, \phi\big(f(x - y) - f(x)\big)\, f(x - y)\, dy \qquad (3)$$

where $\eta(x) = \int_\Omega \varphi(y)\, \phi(f(x - y) - f(x))\, dy$. In this formula, the bivariate function $\varphi(x)$ is called the spatial kernel, and the one-dimensional function $\phi(t)$ is called the range kernel.

Suppose that both these kernels are shiftable. In particular, let $\varphi(x - \tau) = \sum_{m=1}^M c_m(\tau)\, \varphi_m(x)$, and $\phi(t - \tau) = \sum_{n=1}^N d_n(\tau)\, \phi_n(t)$. Plugging these into (3) and using linearity, we can write the numerator as

$$\sum_{m,n} c_m(x)\, d_n(f(x)) \int_\Omega \varphi_m(x - y)\, \phi_n(f(x - y))\, f(x - y)\, dy.$$

Similarly,

$$\eta(x) = \sum_{m,n} c_m(x)\, d_n(f(x)) \int_\Omega \varphi_m(x - y)\, \phi_n(f(x - y))\, dy.$$

As before, we again recognize the moving sums when $\Omega = [-T, T]^2$. This gives us a new constant-time algorithm for bilateral filtering, which is summarized in Algorithm 2. We note that this algorithm is an extension of the one in [5], where we used shiftable kernels only for the range filter.

Algorithm 2 (Constant-time bilateral filtering)
Input: $f(x)$, $\varphi(x)$ and $\phi(t)$ as in (3), and $T$.
1. For $1 \le m \le M$ and $1 \le n \le N$, set $a_{m,n}(x) = c_m(x)\, d_n(f(x))$, $g_{m,n}(x) = \varphi_m(x)\, \phi_n(f(x))\, f(x)$, and $h_{m,n}(x) = \varphi_m(x)\, \phi_n(f(x))$.
2. Use recursion to compute $\mathrm{Sum}(g_{m,n}, x, T)$ and $\mathrm{Sum}(h_{m,n}, x, T)$.
3. Set $\tilde f(x)$ as the ratio of $\sum_{m,n} a_{m,n}(x)\, \mathrm{Sum}(g_{m,n}, x, T)$ and $\sum_{m,n} a_{m,n}(x)\, \mathrm{Sum}(h_{m,n}, x, T)$.
Return: Filtered image $\tilde f(x)$.

The main computational advantage of Algorithms 1 and 2 is that the number of computations per pixel does not depend on the size of the spatial or the range kernel. It only depends on the order of shiftability.
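Here is a one-dimensional sketch of Algorithm 2 (again our own illustration), with a box spatial window, so $M = 1$, and the shiftable range kernel $\phi(t) = \cos(\gamma t)$, $\gamma = \pi/2R$, which is valid as long as the intensity differences stay within $[-R, R]$:

```python
import math

def moving_sum(F, T):
    # prefix-sum recursion for the windowed sum over [x - T, x + T]
    n = len(F)
    P = [0.0]
    for v in F:
        P.append(P[-1] + v)
    return [P[min(x + T + 1, n)] - P[max(x - T, 0)] for x in range(n)]

def bilateral_cosine(f, T, R):
    # Range kernel phi(t) = cos(g t), g = pi/(2R). With tau = f(x),
    # cos(g (t - tau)) = cos(g tau) cos(g t) + sin(g tau) sin(g t),
    # so the data-dependent weights split into moving sums of the
    # pointwise-transformed images h1, h2, f*h1, f*h2.
    n, g = len(f), math.pi / (2 * R)
    h1 = [math.cos(g * v) for v in f]   # phi_1(f(x))
    h2 = [math.sin(g * v) for v in f]   # phi_2(f(x))
    S1 = moving_sum([f[x] * h1[x] for x in range(n)], T)
    S2 = moving_sum([f[x] * h2[x] for x in range(n)], T)
    H1, H2 = moving_sum(h1, T), moving_sum(h2, T)
    return [(h1[x] * S1[x] + h2[x] * S2[x]) / (h1[x] * H1[x] + h2[x] * H2[x])
            for x in range(n)]
```

This reproduces the brute-force bilateral filter exactly (up to rounding), while the work per pixel stays independent of $T$.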
More precisely, the computation time scales linearly with the number of basis functions. For example, in the case of the bilateral filter, the computational cost is roughly $MN$ times that of a single moving sum of the image, plus the overhead of initializing the basis images and recombining the outputs. To get an estimate of the running time, we implemented Algorithm 2 in MATLAB on an Intel machine with a quad-core 2.83 GHz processor. We found that a single recursive implementation of the moving sum requires roughly 10 milliseconds on a 512 x 512 image when T = 5. For an instance where M x N = 32 x 5, the total computation time was roughly 2.5 seconds: 1 second to initialize the basis images and combine the results, and 1.5 seconds for the moving sums. This looks promising, since we can bring down the time considerably using a Java or C implementation. Moreover, we can also use multithreading (parallel computation) to further accelerate the implementation.

3 Shiftable kernels

We now address the problem of designing kernels that are shiftable. The theory of Lie groups can be used to study the class of shiftable functions; see, e.g., the discussions in [13, 8]. It is well-known that the polynomials and the exponentials are essentially the only shiftable functions.

Theorem 3.1 (The class of shiftable functions, [13]). The only smooth functions that are shiftable are the polynomials and the exponentials, and their sums and products.

We recall that a kernel is a smooth function that is symmetric, non-negative, and unimodal. A priori, it is not at all obvious that there exists a kernel that is a polynomial or an exponential. Indeed, the real exponentials cannot form valid kernels, since they are neither symmetric nor unimodal.
On the other hand, while there are plenty of polynomials and trigonometric functions that are both symmetric and non-negative, it is impossible to find one that is unimodal over the entire real line. This is simply because the polynomials blow up at infinity, while the trigonometric functions are oscillatory.

Proposition 3.2 (Conflicting properties). There is no kernel that is shiftable on the entire real line.

Nevertheless, unimodality can be achieved at least on a bounded interval, if not the entire real line, using polynomial and trigonometric functions. Note that, in practice, a priori bounds on the data are almost always available. For the rest of the paper, and without loss of generality, we fix the bounded interval to be $[-T, T]$.

We now give two concrete instances of shiftable kernels on $[-T, T]$. The reason for these particular choices will become clear in the sequel. For every integer $N = 0, 1, 2, \ldots$, consider the functions

$$p_N(t) = \left(1 - \frac{t^2}{T^2}\right)^{\!N} \quad \text{and} \quad q_N(t) = \left(\cos \frac{\pi t}{2T}\right)^{\!N},$$

where $t$ takes values in $[-T, T]$. It is easily verified that these are valid kernels on $[-T, T]$. The crucial difference between the above kernels is that the order of shiftability of $p_N(t)$ is $2N + 1$, while that of $q_N(t)$ is much lower, namely $N + 1$. We note that it is the kernel $q_N(t)$ that was used in [5].

The fact that the sum and product of shiftable kernels is also shiftable can be used to construct kernels in higher dimensions. For example, we can set $\varphi(x_1, x_2) = p_N(x_1)\, p_N(x_2)$, or $\varphi(x_1, x_2) = q_N(x_1)\, q_N(x_2)$. We call these the separable kernels. In higher dimensions, one often requires the kernel to have the added property of isotropy. In two dimensions, we can achieve near-isotropy using these separable kernels, provided that $N$ is sufficiently large. We will give a precise reason later in the section.
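The order $N + 1$ shiftability of $q_N(t)$ can be seen by expanding $\cos^N\!\big(a(t - \tau)\big)$, $a = \pi/2T$, with the binomial theorem: the basis functions are $\cos^k(at)\sin^{N-k}(at)$ for $k = 0, \ldots, N$. A short numerical check (our own sketch):

```python
import math

def qN(t, T, N):
    # the shiftable kernel q_N(t) = cos(pi t / (2T))^N on [-T, T]
    return math.cos(math.pi * t / (2 * T)) ** N

def qN_shifted(t, tau, T, N):
    # order-(N + 1) decomposition of q_N(t - tau): expand
    # (cos(a tau) cos(a t) + sin(a tau) sin(a t))^N by the binomial
    # theorem, with basis functions cos(a t)^k sin(a t)^(N - k), k = 0..N.
    a = math.pi / (2 * T)
    return sum(
        math.comb(N, k)
        * math.cos(a * tau) ** k * math.sin(a * tau) ** (N - k)
        * math.cos(a * t) ** k * math.sin(a * t) ** (N - k)
        for k in range(N + 1)
    )

# the N + 1 terms reproduce the translated kernel exactly
assert abs(qN_shifted(0.3, 0.9, 1.0, 4) - qN(0.3 - 0.9, 1.0, 4)) < 1e-12
```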
However, as shown in a different context in [3], it is worth noting that kernels other than the standard separable ones (of the same order) can provide better isotropy, particularly for low orders. Indeed, consider the following kernel on the square $[-T, T]^2$:

$$\phi(x_1, x_2) = q_1(x_1)\, q_1\!\left(\frac{x_1 + x_2}{\sqrt 2}\right) q_1(x_2)\, q_1\!\left(\frac{x_1 - x_2}{\sqrt 2}\right). \qquad (4)$$

The kernel is composed of four cosines distributed uniformly over the circle. It can be verified that $\phi(x_1, x_2)$ is more isotropic than the separable kernel of the same order, $q_2(x_1)\, q_2(x_2)$. Note, however, that the non-negativity constraint is violated at the corners of the square $[-T, T]^2$ in this case. This is unavoidable, since we are trying to approximate a circle within a square. Nevertheless, this does not create much of a problem, since the negative overshoot is well within 2% of the peak value. Following the same argument, the polynomial

$$\phi(x_1, x_2) = p_1(x_1)\, p_1\!\left(\frac{x_1 + x_2}{\sqrt 2}\right) p_1(x_2)\, p_1\!\left(\frac{x_1 - x_2}{\sqrt 2}\right)$$

tends to be more isotropic on $[-T, T]^2$ than $p_2(x_1)\, p_2(x_2)$.

3.1 Approximation of Gaussian kernels

The most commonly used kernel in image processing is the Gaussian kernel. This kernel, however, is not shiftable. As a result, we cannot directly apply our algorithm to the Gaussian kernel. One straightforward option is to instead approximate the Gaussian using its Fourier series or its Taylor polynomial, both of which are shiftable. The difficulty with either of these is that they do not yield valid kernels. That is, it is not guaranteed that the resulting approximation is non-negative or unimodal; see [5, Fig. 1]. This is exactly the problem with the polynomial approximation used for the bilateral filter in [10]. Instead, we can use the specialized shiftable kernels $p_N(t)$ and $q_N(t)$ to approximate the Gaussian to arbitrary precision, based on the following results.
Theorem 3.3 (Gaussian approximation). For every $-T \le t \le T$,

$$\lim_{N \to \infty} p_N\!\left(\frac{t}{\sqrt N}\right) = \exp\!\left(-\frac{t^2}{T^2}\right), \qquad (5)$$

and

$$\lim_{N \to \infty} q_N\!\left(\frac{t}{\sqrt N}\right) = \exp\!\left(-\frac{\pi^2 t^2}{8 T^2}\right). \qquad (6)$$

In either case, the convergence takes place quite rapidly. The first of these results is well-known. For a proof of the second result, we refer the readers to [5, Sec. II-D]. The added utility of these asymptotic formulas is that we can control the variance of these kernels using the variance of the target Gaussian. This is particularly useful because no simple closed-form expressions for the variances of $p_N(t)$ and $q_N(t)$ are available. We refer the readers to [5, Sec. II-E] for details on how one can exactly control the variance of $q_N(t)$; the same idea applies to $p_N(t)$.

We now discuss how to approximate isotropic Gaussians in two dimensions using shiftable kernels. Doing this using separable kernels is straightforward. For example,

$$\lim_{N \to \infty} q_N\!\left(\frac{x_1}{\sqrt N}\right) q_N\!\left(\frac{x_2}{\sqrt N}\right) = \exp\!\left(-\frac{\pi^2 (x_1^2 + x_2^2)}{8 T^2}\right). \qquad (7)$$

There is yet another form of convergence which is worth mentioning. Consider the following kernel defined on $[-T, T]^2$:

$$\phi_N(x_1, x_2) = \prod_{k=1}^N q_1\!\left(\sqrt{\frac{2}{N}}\, (x_1 \cos\theta_k + x_2 \sin\theta_k)\right),$$

where $\theta_k = (k - 1)\pi/N$. The kernel $\phi_4(x_1, x_2)$ is simply the (rescaled) kernel $\phi(x_1, x_2)$ in (4). By slightly adapting the proof of Theorem 2.2 in [3, Appendix A], we can show the following.

Theorem 3.4 (Approximation of isotropic Gaussian). For every $(x_1, x_2)$ in $[-T, T]^2$,

$$\lim_{N \to \infty} \phi_N(x_1, x_2) = \exp\!\left(-\frac{\pi^2 (x_1^2 + x_2^2)}{8 T^2}\right).$$

Note that the target Gaussian in this case is the same as in (7). The key difference, however, is that for low orders $N$, e.g., $N = 4$, $\phi_N(x_1, x_2)$ looks more isotropic than $q_N(x_1/\sqrt N)\, q_N(x_2/\sqrt N)$.
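The rate of convergence in (6) is easy to probe numerically. The sketch below (our own, with our own function names) checks that the error of $q_N(t/\sqrt N)$ against the target Gaussian shrinks as $N$ grows:

```python
import math

def qN_scaled(t, T, N):
    # q_N(t / sqrt(N)) = cos(pi t / (2 T sqrt(N)))^N
    return math.cos(math.pi * t / (2 * T * math.sqrt(N))) ** N

def gauss_limit(t, T):
    # the limiting Gaussian in (6)
    return math.exp(-math.pi ** 2 * t ** 2 / (8 * T ** 2))

# the approximation error shrinks monotonically along N = 4, 16, 64
t, T = 0.5, 1.0
errs = [abs(qN_scaled(t, T, N) - gauss_limit(t, T)) for N in (4, 16, 64)]
assert errs[0] > errs[1] > errs[2]
```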
However, as noted earlier, the non-negativity requirement of a kernel is mildly violated by $\phi_N(x_1, x_2)$ at the corners of the square domain.

The significance of the order $N$ is that it allows one to arbitrarily control the accuracy of the Gaussian approximation. As discussed in [5, Sec. II-E], $N$ has to be greater than a threshold $N_0$ for the approximating kernels in (5) and (6) to be non-negative and unimodal. In particular, if $\sigma^2$ is the variance of the target Gaussian, then $N_0$ is of the order $O(T^2/\sigma^2)$. Thus, a large $N_0$ is required to approximate a narrow Gaussian on a large interval. For the spatial kernel, the ratio $T^2/\sigma^2$ is usually small, since $T$ is small in this case. This is, however, not the case for the range kernel of the bilateral filter, where $T$ can be almost as large as the dynamic range of the image. The good news is that, by allowing for mild (and controlled) violations of the non-negativity constraint, one can closely approximate the Gaussian using a significantly smaller number of terms. This is done by discarding the less significant basis functions in (1); see [4] for a detailed discussion and results. The same idea can also be applied to the polynomials.

4 Discussion

We have shown how, using shiftable kernels, we can express two popular forms of image filters (and possibly many more) in terms of simple moving sums. We note that we can speed up the implementation of Algorithms 1 and 2 using parallel computation or multithreading. In future work, we plan to implement these algorithms in C or Java, and make extensive comparisons of the results and execution times with the state-of-the-art algorithms. To conclude, we note that the main idea can also be extended to some other forms of neighborhood filtering, e.g., the ones in [17, 12]. What is even more interesting is that we can extend the idea to approximate the non-local means filter [2].
The non-local means filter is a higher-order generalization of the bilateral filter, where one works with patches instead of single pixels. The filtered image $\hat f(x)$ in this case is given by

$$\hat f(x) = \frac{\int f(x - y)\, w(x, y)\, dy}{\int w(x, y)\, dy} \qquad (8)$$

where

$$w(x, y) = \exp\!\left(-h^{-2} \int g(u)\, \big(f(x + u) - f(x - y + u)\big)^2\, du\right).$$

Here $g(u)$ is a two-dimensional Gaussian, and the integrals (sums) are taken over the entire image domain. In practice, though, the sum is performed locally [2].

It is possible to express (8) in terms of moving sums, using shiftable approximations of the Gaussian. First, we approximate the domain of the outer integral by a sufficiently large square $[-T, T]^2$, and the inner integral by a finite sum over $p$ neighborhood pixels $u_1, \ldots, u_p$, where, say, $u_1 = 0$. In this case, $\hat f(x)$ is given by

$$\frac{1}{\eta(x)} \int_{[-T, T]^2} f(x - y)\, \varphi\big(f(x + u_1) - f(x - y + u_1), \ldots, f(x + u_p) - f(x - y + u_p)\big)\, dy \qquad (9)$$

where $\varphi(t_1, \ldots, t_p) = \exp\!\big(-h^{-2} \sum_{k=1}^p g(u_k)\, t_k^2\big)$, and where $\eta(x)$ is given by

$$\int_{[-T, T]^2} \varphi\big(f(x + u_1) - f(x - y + u_1), \ldots, f(x + u_p) - f(x - y + u_p)\big)\, dy. \qquad (10)$$

Note that $\varphi(t_1, \ldots, t_p)$ is an anisotropic Gaussian in $p$ variables with covariance $\mathrm{diag}\big(h^2/2g(u_1), \ldots, h^2/2g(u_p)\big)$. Now, using a shiftable approximation of $\varphi(t_1, \ldots, t_p)$ as in (1) (we continue using the same symbol), we can write the integrand in (10) as

$$\sum_{n=1}^N c_n\big(f(x + u_1), \ldots, f(x + u_p)\big)\, \varphi_n\big(f(x - y + u_1), \ldots, f(x - y + u_p)\big).$$

We can then write the numerator in (9) as $\sum_n c_n(f(x + u_1), \ldots, f(x + u_p))\, \mathrm{Sum}(G_n, x, T)$, where we set $G_n(x) = f(x)\, \varphi_n(f(x + u_1), \ldots, f(x + u_p))$. Similarly, letting $H_n(x) = \varphi_n(f(x + u_1), \ldots, f(x + u_p))$, we have $\eta(x) = \sum_n c_n(f(x + u_1), \ldots, f(x + u_p))\, \mathrm{Sum}(H_n, x, T)$.

The catch here is that, to get a good approximation of non-local means, we need to make both $T$ and $p$ large. While there is no problem in making $T$ large (the cost of the moving sum is independent of $T$), it is rather difficult to make $p$ large. For example, with a separable approximation of $\varphi(t_1, \ldots, t_p)$, the overall order $N$ would scale as $n^p$, where $n$ is the approximation order of the Gaussian along each dimension. This limits the scheme to coarse approximations and to small neighborhoods. To make this practical, we need a polynomial or trigonometric approximation of $\varphi(t_1, \ldots, t_p)$ whose order grows slowly with $p$. It would indeed be interesting to see if such approximations exist at all.

5 Acknowledgment

The author would like to thank Michael Unser for reading the manuscript and for his useful comments. The author was supported by a fellowship from the Swiss National Science Foundation under grant PBELP2-135867.

References

[1] E. Adelson and J. Bergen. Spatiotemporal energy models for the perception of motion. J. Optical Society of America, 2(2):284-299, 1985.
[2] A. Buades, B. Coll, and J.M. Morel. A review of image denoising algorithms, with a new one. Multiscale Modeling and Simulation, 4:490-530, 2005.
[3] K.N. Chaudhury, A. Muñoz-Barrutia, and M. Unser. Fast space-variant elliptical filtering using box splines. IEEE Trans. Image Process., 19:2290-2306, 2010.
[4] K.N. Chaudhury, D. Sage, and M. Unser. Appendix to "Fast O(1) bilateral filtering using trigonometric range kernels". Technical report, Ecole Polytechnique Fédérale de Lausanne, http://bigwww.epfl.ch/preprints/chaudhury1101pdoc01.pdf, 2011.
[5] K.N. Chaudhury, D. Sage, and M. Unser. Fast O(1) bilateral filtering using trigonometric range kernels. IEEE Trans. Image Process., 2011.
[6] F.C. Crow. Summed-area tables for texture mapping. ACM Siggraph, 18:207-212, 1984.
[7] W. Freeman and E. Adelson. The design and use of steerable filters. IEEE Trans. Pattern Anal. Mach. Intell., 13(9):891-906, 1991.
[8] Y. Hel-Or and P.C. Teo. Canonical decomposition of steerable functions. J. Math. Imaging Vision, 9(1):83-95, 1998.
[9] P. Perona. Deformable kernels for early vision. IEEE Trans. Pattern Anal. Mach. Intell., 17(5):488-499, 1995.
[10] F. Porikli. Constant time O(1) bilateral filtering. IEEE CVPR, pages 1-8, 2008.
[11] E. Simoncelli, W. Freeman, E. Adelson, and D. Heeger. Shiftable multiscale transforms. IEEE Trans. Information Theory, 38(2):587-607, 1992.
[12] S.M. Smith and J.M. Brady. SUSAN: A new approach to low level image processing. Int. J. Comp. Vision, 23:45-78, 1997.
[13] P.C. Teo. Theory and applications of steerable functions. Technical report, Stanford University, 1998.
[14] C. Tomasi and R. Manduchi. Bilateral filtering for gray and color images. IEEE ICCV, pages 839-846, 1998.
[15] P. Viola and M. Jones. Rapid object detection using a boosted cascade of simple features. IEEE CVPR, pages 511-518, 2001.
[16] A. Watson and A. Ahumada. Models of human visual-motion sensing. J. Optical Society of America, 2(2):322-342, 1985.
[17] L.P. Yaroslavsky. Digital Picture Processing: An Introduction. Springer-Verlag, Berlin, 1985.
