An algebra for signal processing
Authors: Henning Thielemann
Henning Thielemann
Institut für Informatik, Martin-Luther-Universität Halle-Wittenberg, Germany
henning.thielemann@informatik.uni-halle.de

Abstract. Our paper presents an attempt to axiomatise signal processing. Our long-term goal is to formulate signal processing algorithms for an ideal world of exact computation and prove properties about them, then interpret these ideal formulations and apply them without change to real-world discrete data. We give models of the axioms, based on Gaussian functions, that allow for exact computations and automated tests of signal algorithm properties.

Keywords: Algebra, Signal processing, Gaussian function

1 Introduction

1.1 Motivation

In signal processing we consider real or complex valued functions. These functions represent signals or frequency spectra, and their arguments are considered to be time values or frequency values, respectively. There are some fundamental operations like pointwise multiplication "·" and convolution "∗" of signals. The Fourier transform F converts between signals and frequency spectra. For precise definitions of these operations see Section 2. Additionally there are some essential laws that every signal processing scientist is familiar with, such as the law that the Fourier transform is a homomorphism mapping multiplication to convolution:

    F(x · y) = F x ∗ F y .

This is an important connection and it is amazingly simple, but it is not quite true.

– The first problem is that, depending on the precise definitions of the involved operations, there may be a factor 2π or √(2π) required to make the above identity correct. This is like working in the imperial system, where a gallon is not just the cubic power of a length unit but 231 cubic inches.
This is at least cumbersome and error-prone when done manually, but it also complicates computer implementations. With the transcendental factors we have to work in fields like Q(√(2π)) or Q(π), or we would have to resort to approximations, whereas working with only rational numbers would still be exact and more efficient. However, if we succeed in suppressing the transcendental factor in the above identity by changing a definition, it will show up in another place.

Thus our first contribution is to select a set of operations that are common in signal processing and to list identities that show their interrelation in principle. In Section 2 we use the degrees of freedom in the operation definitions to simplify the laws as much as possible, that is, to avoid constant factors and assert maximum symmetry. Metaphorically speaking, we try to define something like a metric system for signal processing.

– The second problem is that the convolution and the Fourier transform are not always defined. For instance the straightforward definitions of the Fourier transform and its inverse as given in Section 2 map from absolutely integrable functions to bounded functions, i.e. F, F⁻¹ : L¹(R) → L∞(R). That is, in general you cannot apply the inverse Fourier transform to the result of a Fourier transform. There are even more such difficulties, but they can be resolved by restriction to the Schwartz space S of arbitrarily smooth and rapidly decaying functions.

– The third problem is that commonly signal processing in a computer is performed on discretised functions with finite precision numbers. That is, laws like the above one do not hold exactly. Our contribution concerning this problem is in the subsections of Section 3. There we take the presented laws as axioms and develop models in terms of extended Gaussian functions.
All of these functions are in S. The more we extend those functions by factors and parameters, the more operations we can support. Our goal is to give definitions of functions such that we only need to cope with rational parameters. This way we can represent a wide range of functions, we can compute exactly, and we could even generalise these models to any algebraic field.

1.2 Basics

The theory of algebra provides many axiomatically described structures that generalise common mathematical objects and the operations on them. E.g. groups abstract permutations, lattices abstract Boolean logic, rings abstract integer arithmetic, fields abstract rational arithmetic. If we are able to perform a proof with the axioms of a particular algebraic structure, then our proof automatically applies to every such structure.

In this paper we would like to abstract from what is commonly called signal processing. Signal processing is the research area of construction, transformation and analysis of oscillation functions in one variable (the time), approximation of those functions, and related operations with discretised data and efficient implementations on digital computers. An important view on an audio signal is the frequency spectrum, because that is closer to the way humans hear than the time-domain representation of a signal. We obtain a frequency spectrum by the Fourier transform of a signal. Closely related to signal processing is image processing, with the main difference being that functions in two variables are studied. Another related area are random distributions in stochastics. The convolution, a common operation in signal processing, of two random distributions yields the distribution of the sum of two random variables.
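This last connection can be made concrete with exact rational arithmetic: convolving two finitely supported distributions yields the distribution of the sum of two independent random variables, so in particular means and variances add. A minimal Python sketch (ours, not part of the paper; the paper's own implementation is in Haskell):

```python
from fractions import Fraction

def convolve(x, y):
    """Convolution of two finitely supported distributions (value -> mass)."""
    out = {}
    for s, ps in x.items():
        for t, pt in y.items():
            out[s + t] = out.get(s + t, Fraction(0)) + ps * pt
    return out

def mean(x):
    return sum(v * p for v, p in x.items())

def variance(x):
    m = mean(x)
    return sum((v - m) ** 2 * p for v, p in x.items())

die = {v: Fraction(1, 6) for v in range(1, 7)}   # a fair die
two = convolve(die, die)                         # distribution of the sum of two dice

assert mean(two) == 2 * mean(die)                # means add: 7
assert variance(two) == 2 * variance(die)        # variances add: 35/6
assert two[7] == Fraction(6, 36)                 # six of 36 outcomes sum to 7
```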
The Fourier transform yields the characteristic function of a random distribution, and there are more connections (see Section 3).

However, signal processing, image processing and random distributions are relatively seldom viewed from an algebraic point of view. This may have to do with the tradition to focus on signal values (e.g. f(t)) rather than on larger objects like signals (e.g. f). E.g. it is common to express the delay of a signal by f(t − d) rather than to use an operator like in f → d. On the one hand this has the advantage that some properties are obvious (e.g. f(t − (d₀ + d₁)) = f((t − d₀) − d₁)), since they are a consequence of simple arithmetic, but are not obvious at the higher level (i.e. f → (d₀ + d₁) = (f → d₁) → d₀). On the other hand we cannot cleanly express identities involving intrinsically functional transforms like the Fourier transform, which has led to custom notations like F(ω) ↔ f(t), which could be expressed the functional way as F F = f.

2 Finding a set of fundamental operations

We like to start this section by listing the operations that are commonly used in signal processing, together with typical applications. We begin with the operations with indisputable definitions and continue with the ones where differing definitions are around in the literature. For the variant definitions we show what laws they imply. Then we choose the definitions that make the laws most simple. We close the section with a comprehensive list of laws that hold for our definitions.

In Table 1 we list the signal operations together with their definitions, and in Figure 1 we illustrate them using example signals.

– Shrinking a signal as in (1) means to increase all contained frequencies proportionally and to shorten time accordingly.
– Translating a signal as in (2) means to delay it.
– Summing two signals as in (3) means to superpose them, that is, to play them simultaneously.
– Multiplying two signals as in (4) means to control the amplitude of one signal by the other one. For certain choices of signals this is also known as ring modulation.
– Convolving two signals as in (5) means to apply the sound characteristics of one signal to the other one. It may be used for suppressing or emphasising certain frequencies, for smoothing or for application of reverb.

We continue with the controversial definitions. Actually, there is essentially one operation that is defined differently throughout the signal processing literature: the Fourier transform. It is the transform that computes the frequency spectrum for a signal in the time domain (backward or analysis transform) and vice versa (forward or synthesis transform). For an example see (6) in Figure 1. The other definition with varying instances is the one for functions that are eigenfunctions of the Fourier transform. Obviously it depends entirely on the definition of the Fourier transform.

Table 1. Basic signal processing operations

    operation      | application          | definition
    shrinking      | alter pitch and time | (x ↓ k)(t) = x(k · t)
    translation    | delay                | (x → k)(t) = x(t − k)
    adjoint        |                      | x* = conj(x ↓ −1)
    sum            | mixing               | (x + y)(t) = x(t) + y(t)
    multiplication | envelope             | (x · y)(t) = x(t) · y(t)
    convolution    | frequency filter     | (x ∗ y)(t) = ∫_R x(τ) · y(t − τ) dτ
    modulation     | frequency shift      | x · cis1, where cis1(t) = exp(2πi·t)

Fig. 1. Illustration of the basic signal processing operations: shrinking ↓ 2 (1), translation → 1/2 (2), sum (3), multiplication (4), convolution (5), Fourier transform F (6). [Example signal plots not reproduced.]
1. Oscillations with period 1

    Fourier forward:      F₁x(τ) = ∫_R exp(2πi·τ·t) · x(t) dt
    Fourier backward:     F₁⁻¹x(τ) = ∫_R exp(−2πi·τ·t) · x(t) dt
    duality:              F₁(F₁x) = x ↓ −1
    eigenfunction:        g(t) = exp(−π·t²)    (7)
    eigenvalue is 1:      F₁g = g
    unitarity:            ⟨x, y⟩ = ⟨F₁x, F₁y⟩
    convolution theorem:  F₁(x ∗ y) = F₁x · F₁y
                          F₁(x · y) = F₁x ∗ F₁y
    derivative:           F₁(x′) = −2πi · id · F₁x    (8)

2. Oscillations with period 2π

    Fourier forward:      F₂x(τ) = (1/√(2π)) · ∫_R exp(i·τ·t) · x(t) dt
    Fourier backward:     F₂⁻¹x(τ) = (1/√(2π)) · ∫_R exp(−i·τ·t) · x(t) dt
    duality:              F₂(F₂x) = x ↓ −1
    eigenfunction:        g(t) = exp(−t²/2)    (9)
    eigenvalue is 1:      F₂g = g
    unitarity:            ⟨x, y⟩ = ⟨F₂x, F₂y⟩
    convolution theorem:  F₂(x ∗ y) = √(2π) · F₂x · F₂y    (10)
                          F₂(x · y) = (1/√(2π)) · F₂x ∗ F₂y    (11)
    derivative:           F₂(x′) = −i · id · F₂x    (12)

3. Oscillations with period 2π and no roots of π

    Fourier forward:      F₃x(τ) = ∫_R exp(i·τ·t) · x(t) dt
    Fourier backward:     F₃⁻¹x(τ) = (1/(2π)) · ∫_R exp(−i·τ·t) · x(t) dt

We do not further consider definition 3, because its forward and backward Fourier transforms have different factors, so the laws for the forward and the backward transform differ in factors. For definitions 1 and 2 many laws are equal up to the choice of the transform direction.

At first glance, definitions 1 and 2 of the Fourier transform seem to be equally convenient or equally inconvenient. Thus many textbooks just introduce a definition and do not explain why they prefer the one over the other possible ones. However, we think that factors in laws are bad, and especially worse than factors that can be hidden in a function definition. E.g. compare the eigenfunction property of the Gaussian function in (7) and (9): we can define the Gaussian bell curve in both ways and call it just g.
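The eigenfunction property F₁g = g is easy to check numerically: approximate ∫ exp(2πi·τ·t)·g(t) dt by a midpoint sum over a truncated domain, which suffices here because the integrand decays rapidly. A small Python sketch (ours, not part of the paper):

```python
import cmath
import math

def fourier1(x, tau, lo=-10.0, hi=10.0, steps=20000):
    """Midpoint-rule approximation of F1 x(tau) = integral exp(2*pi*i*tau*t) * x(t) dt."""
    h = (hi - lo) / steps
    acc = 0.0 + 0.0j
    for k in range(steps):
        t = lo + (k + 0.5) * h
        acc += cmath.exp(2j * math.pi * tau * t) * x(t)
    return acc * h

def g(t):
    """Eigenfunction of F1 with eigenvalue 1: g(t) = exp(-pi * t^2)."""
    return math.exp(-math.pi * t * t)

for tau in (0.0, 0.25, 1.0, 2.0):
    assert abs(fourier1(g, tau) - g(tau)) < 1e-9
```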
The factor π then no longer gets in the way when transforming equations containing F and g. The same applies to the laws on derivatives in (8) and (12), where we can consider 2πi·id as one function. In contrast to that, the convolution theorems in (10) and (11) have constant factors, even different ones. This makes manual equation manipulation cumbersome and error-prone. Even more, it makes computations exclusively with rational numbers impossible, due to the irrational factor √(2π). We could suppress the factors in (10) and (11) by defining convolution or multiplication with a built-in constant factor, but we think this is not natural. For these reasons we will stick to definition 1 and omit the index of F in the rest of this paper.

Below we list the laws that follow from these definitions and that we want as axioms for our algebraic structure. Be aware that we have omitted convergence conditions for the operations that involve integration. The laws marked with # can be derived from the remaining laws and are given just for the purpose of completeness.
    x + y = y + x
    x + (y + z) = (x + y) + z
    x · y = y · x
    x · (y · z) = (x · y) · z
    x · (y + z) = x · y + x · z
    x ∗ y = y ∗ x    #
    x ∗ (y ∗ z) = (x ∗ y) ∗ z    #
    x ∗ (y + z) = x ∗ y + x ∗ z    #
    x → 0 = x
    (x → a) → b = x → (a + b)
    x ↓ 1 = x
    (x ↓ a) ↓ b = x ↓ (a · b)
    (x ↓ b) → a = (x → (a · b)) ↓ b
    (x + y) → a = (x → a) + (y → a)
    (x + y) ↓ a = (x ↓ a) + (y ↓ a)
    (x · y) → a = (x → a) · (y → a)
    (x · y) ↓ a = (x ↓ a) · (y ↓ a)
    (x ∗ y) → a = x ∗ (y → a)
    (x ∗ y) ↓ a = |a| · (x ↓ a) ∗ (y ↓ a)
    (x · y)′ = x′ · y + x · y′
    (x ∗ y)′ = x ∗ y′
    F(x + y) = F x + F y
    F(k · x) = k · F x
    F(x ∗ y) = F x · F y
    F(x · y) = F x ∗ F y    #
    ⟨x, y⟩ = ⟨F x, F y⟩
    ‖x‖₂ = ‖F x‖₂    #
    F(x ↓ a) = (1/|a|) · (F x ↓ (1/a))
    F(x → a) = (cis1 ↓ a) · F x
    F(F x) = x ↓ −1
    F(x*) = conj(F x)
    F(x′) = −2πi · id · F x

3 Development of Gaussian models

Now that we have stated some axioms, we want to construct an ideal world, that is, a class of functions (or signals) where they hold. This class of functions shall allow for simple constraints of the laws, for exact and efficient computation (more precisely: operations in a field), and shall allow to represent many mathematically important objects exactly and real-world signals approximately.

Since Gaussian functions are eigenfunctions of the Fourier transform, they are perfect for representing signals in both the time and the frequency domain. They can be extended in a relatively simple way such that all of the operations we listed initially can be performed.

We like to stress that Gaussian functions are not only interesting because they allow us to perform the operations we want; Gaussian functions are central to signal processing and stochastics.

– Gaussian functions are used as filter windows for smoothing.
– Gaussian functions "minimise uncertainty", that is, they are the functions for which Heisenberg's uncertainty relation becomes an equation. [7]
– With complex modulation, Gaussian functions are called Gabor atoms or time-frequency atoms in the Gabor transform. The Gabor transform is a windowed Fourier transform that shows how frequency components evolve over time.
– Complex modulated Gaussian functions with a correction offset are called Morlet wavelets and are used in the Continuous Wavelet Transform. This transform is also intended for showing the evolution of frequency components over time, but it has higher time resolution and lower frequency resolution for high frequencies.
– Best basis pursuits and matching pursuits are techniques for decomposing a signal into a finite number of irregularly located time-frequency atoms. [6]

Since the Gabor transform, the Morlet wavelet transform, as well as best basis and matching pursuits aim at decomposition of a signal into time-frequency atoms, we have several tools for approximating real-world signals using those atoms as building blocks.

In stochastics the density of the normal distribution is a Gaussian function. The Central Limit Theorem states that adding more and more random variables (and dividing by the square root of the number of added variables) in most practical cases approaches a normally distributed random variable. Translated to signal processing this means that smoothing a signal again and again with practically any filter window approximates a Gaussian filter.

The derivation of the representations below is not particularly difficult, but we focus on finding representations that simplify most operations in terms of the use of transcendental constants and irrational algebraic functions. We have implemented these function classes in Haskell using the NumericPrelude type class hierarchy.
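Laws like those of Section 2 lend themselves to the automated tests mentioned in the abstract. As a flavour, here is a Python spot-check of three translation/shrinking laws, with signals interpreted as plain functions (a sketch of ours, not the paper's Haskell code; `shrink` and `translate` are our names for ↓ and →):

```python
def shrink(x, k):
    """(x ↓ k)(t) = x(k·t)"""
    return lambda t: x(k * t)

def translate(x, k):
    """(x → k)(t) = x(t − k)"""
    return lambda t: x(t - k)

x = lambda t: t ** 3 - 2 * t + 1      # an arbitrary test signal

# dyadic test values, so all float arithmetic below is exact
for t in (-2.0, -0.5, 0.0, 1.0, 3.0):
    for a in (-1.5, 0.5, 2.0):
        for b in (-2.0, 0.25, 3.0):
            # (x → a) → b = x → (a + b)
            assert translate(translate(x, a), b)(t) == translate(x, a + b)(t)
            # (x ↓ a) ↓ b = x ↓ (a · b)
            assert shrink(shrink(x, a), b)(t) == shrink(x, a * b)(t)
            # (x ↓ b) → a = (x → (a · b)) ↓ b
            assert translate(shrink(x, b), a)(t) == shrink(translate(x, a * b), b)(t)
```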
You find our implementation at http://code.haskell.org/numeric-prelude/src/MathObj/Gaussian/.

3.1 Simple Gaussians

Simple Gaussian functions shall be the first class of functions that we want to consider. In order to avoid a transcendental factor containing π in any of our function parameters when applying the Fourier transform F, we cannot choose f(t) = exp(−t²) but have to choose its eigenfunction f(t) = exp(−π·t²). We also want to support shrinking of a function, which requires adding a shrinking parameter c that we like to write in curried form: f(c)(t) = exp(−π·(c·t)²). However, this would yield a square root in the convolution of two such functions. It can be prevented by choosing the form f(c)(t) = exp(−π·c·t²) with c ∈ Q. The Fourier transform of a shrunk function leads to another factor, the amplitude y of the function. We end up with the function form:

    f(y, c)(t) = √y · exp(−π·c·t²)    c ∈ Q,  y ∈ [0, ∞) ∩ Q .

This simple function class already allows for several operations, where convolution and Fourier transform have constraints that assert convergence:

    scaling:                      k · f(y, c) = f(y · k², c)
    shrinking:                    f(y, c) ↓ k = f(y, c · k²)
    conjugate:                    conj(f(y, c)) = f(y, conj(c))
    multiplication:               f(y₀, c₀) · f(y₁, c₁) = f(y₀ · y₁, c₀ + c₁)
    power with r ≥ 0:             f(y, c)^r = f(y^r, r · c)
    convolution, c₀ + c₁ > 0:     f(y₀, c₀) ∗ f(y₁, c₁) = f(y₀·y₁/(c₀+c₁), c₀·c₁/(c₀+c₁))
    Fourier transform, c > 0:     F⁻¹(f(y, c)) = f(y/c, 1/c) .

Typical functionals like function norms do not easily satisfy our goal of using the most simple algebraic structures.
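The operation table above turns exact signal processing into plain parameter arithmetic in Q. A minimal Python sketch of the same idea using `fractions.Fraction` (ours; the paper's implementation is the Haskell code referenced above):

```python
from dataclasses import dataclass
from fractions import Fraction as Q

@dataclass(frozen=True)
class Gauss:
    """f(y, c)(t) = sqrt(y) * exp(-pi * c * t^2), with y and c rational."""
    y: Q
    c: Q

    def __mul__(self, other):
        return Gauss(self.y * other.y, self.c + other.c)

    def convolve(self, other):
        s = self.c + other.c
        assert s > 0, "convolution requires c0 + c1 > 0"
        return Gauss(self.y * other.y / s, self.c * other.c / s)

    def fourier(self):
        assert self.c > 0, "Fourier transform requires c > 0"
        return Gauss(self.y / self.c, 1 / self.c)

# the normalised Gaussian f(1, 1) is a fixed point of the Fourier transform:
assert Gauss(Q(1), Q(1)).fourier() == Gauss(Q(1), Q(1))
# convolution stays rational: f(1,1) * f(1,1) has parameters (1/2, 1/2)
assert Gauss(Q(1), Q(1)).convolve(Gauss(Q(1), Q(1))) == Gauss(Q(1, 2), Q(1, 2))
```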
These functionals need roots of parameters or constant transcendental factors, and thus require special treatment:

    L¹ norm:    ‖f(y, c)‖₁ = √(y/c)
    L² norm:    ‖f(y, c)‖₂ = √(y/√(2·c))
    L∞ norm:    ‖f(y, c)‖∞ = √y
    Lᵖ norm:    ‖f(y, c)‖_p = √(y / ᵖ√(p·c))
    variance:   ‖t ↦ t² · f(y, c)(t)‖₁ / ‖f(y, c)‖₁ = 1/(2π·c) .

3.2 Translated and modulated Gaussians

In the next step we want to translate and modulate the Gaussian function. The most simple function class that allows this seems to be:

    f(y, a, b, c)(t) = √y · exp(−π·(a + b·t + c·t²))    (13)
    y ∈ [0, ∞) ∩ Q,  c ∈ Q,  {a, b} ⊂ Q + iQ .

With this representation we can perform the following operations:

    translation:      f(y, a, b, c) → k = f(y, a − b·k + c·k², b − 2·c·k, c)
    modulation:       f(y, a, b, c) · (cis1 ↓ k) = f(y, a, b − 2·i·k, c)
    scaling:          k · f(y, a, b, c) = f(y·k², a, b, c)
    shrinking:        f(y, a, b, c) ↓ k = f(y, a, b·k, c·k²)
    conjugate:        conj(f(y, a, b, c)) = f(y, conj(a), conj(b), conj(c))
    power with n ≥ 0: f(y, a, b, c)^n = f(y^n, n·a, n·b, n·c)
    multiplication:   f(y₀, a₀, b₀, c₀) · f(y₁, a₁, b₁, c₁) = f(y₀·y₁, a₀+a₁, b₀+b₁, c₀+c₁)
    convolution, c₀ + c₁ > 0:
        f(y₀, a₀, b₀, c₀) ∗ f(y₁, a₁, b₁, c₁)
            = f(y₀·y₁/(c₀+c₁), a₀+a₁ − (b₀−b₁)²/(4·(c₀+c₁)), (b₀·c₁ + b₁·c₀)/(c₀+c₁), c₀·c₁/(c₀+c₁))
    Fourier transform, c > 0:
        F⁻¹(f(y, a, b, c)) = f(y/c, a − b²/(4·c), −i·b/c, 1/c) .

The correctness of the equations for the Fourier transform and the convolution is not so obvious. The Fourier transform can be derived by translating the function to the origin and demodulating it such that it becomes real. Then perform the Fourier transform and translate and modulate the result corresponding to the normalisations that we performed in the time domain.
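The translation rule of this representation can be cross-checked exactly against the axioms (x → a) → b = x → (a + b) and x → 0 = x, since it acts on the parameter tuple by pure rational arithmetic. A Python sketch with real rational parameters (ours, not the paper's code):

```python
from fractions import Fraction as Q

def translate(params, k):
    """f(y,a,b,c) -> k  =  f(y, a - b*k + c*k^2, b - 2*c*k, c), all parameters rational here."""
    y, a, b, c = params
    return (y, a - b * k + c * k * k, b - 2 * c * k, c)

p = (Q(1), Q(3), Q(-2), Q(5))
k0, k1 = Q(1, 3), Q(-7, 4)

# (x -> k0) -> k1  =  x -> (k0 + k1), verified exactly on the parameter tuple
assert translate(translate(p, k0), k1) == translate(p, k0 + k1)
# translating by 0 is the identity
assert translate(p, Q(0)) == p
```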
The convolution can also be derived from such a normalisation. An alternative is to multiply in the frequency domain.

We could also employ the definition

    f(y, a, b, c)(t) = √y · exp(−(a + b·√π·t + c·π·t²))    (14)

and the formulas for most operations would remain the same, but translation, shrinking and modulation would have to be interpreted with respect to the unit √π.

With the considered representation we can also represent two other kinds of functions that are important to signal processing: A decaying exponential curve can be obtained with c = 0 ∧ b > 0. It is frequently encountered as the envelope of percussive sounds. Unfortunately that curve is unrestricted in time, whereas in natural sounds the envelope usually starts suddenly somewhere in time.

The other important signal is a tone of linearly changing frequency, a chirp. In order to represent it, we have to generalise the real parameter c to a complex parameter. Setting ℑ(c) ≠ 0, ℜ(c) = 0 yields a pure chirp, whereas ℜ(c) > 0 yields a chirplet, that is, a chirp that is localised in time. The name is chosen analogously to wavelets. The Fourier transforms and norm computations converge if and only if ℜc > 0 (i.e. only for chirplets), and the convolution exists if and only if ℜ(c₀ + c₁) > 0.

The Bluestein transform allows us to write a Fourier transform in terms of a convolution with a chirp:

    F x = ((x · f(1, 0, 0, −i)) ∗ f(1, 0, 0, i)) · f(1, 0, 0, −i) .

The downside of this generalisation is that we need to maintain a complex amplification factor y. This involves a complex square root, and we must choose the right branch. For a single convolution or Fourier transform, choosing the branch with positive real part is just the right thing. The real part cannot vanish, since then the integrals do not converge at all.
But when multiplying square roots with the convention of positive real parts, we must respect

    √c₀ · √c₁ = (−1)^k · √(c₀·c₁)

    k = 1 if (upper c₀ ∧ upper c₁ ∧ ¬upper(c₀·c₁)) ∨ (¬upper c₀ ∧ ¬upper c₁ ∧ upper(c₀·c₁))
    k = 0 otherwise

    upper c = (ℑc > 0) ∨ (ℑc = 0 ∧ ℜc < 0) .

That is, we must maintain the sign of the real part separately. In the representation (13) we can implement a flipped sign by adding i to a. However, maintaining this sign means comparisons and thus would not work for generalisations from Q to other fields, e.g. finite fields.

3.3 Gaussians multiplied with polynomials

Another important operation in signal processing is derivation, be it in differential equations like oscillation equations, for the representation of an eigenbasis of the Fourier transform, or as a highpass filter, that is, a frequency filter that emphasises high frequencies and suppresses low frequencies. Derivation requires that we extend our representation to a product of a Gaussian function and a polynomial function. However, using the simple representation f(y, a, b, c)(t) = √y·exp(−π·(a + b·t + c·t²)) from (13), this would mean maintaining polynomial expressions of π as coefficients of the polynomial factor. We like to avoid that and instead extend (14) to:

    f(y, a, b, c)(t) = √y · exp(−(a + b·√π·t + c·π·t²))
    φ((y, a, b, c), p)(t) = f(y, a, b, c)(t) · p̂(t)
    y ∈ [0, ∞) ∩ Q,  c ∈ Q,  {a, b} ⊂ Q + iQ
    p̂(t) ∈ (Q + iQ)[√π·t],  p̂(t) = Σ_{j=0}^n p_j · (√π·t)^j .

As mentioned above, translation and modulation have to be interpreted with respect to the unit √π, and derivation must contain a factor of √π. However, in order to get the usual units we can still replace Q by Q(√π).
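Returning to the square-root branch bookkeeping: the predicate `upper` and the exponent k above can be validated against a principal square root, which likewise picks the branch with positive real part. A randomised Python spot-check (ours, not part of the paper):

```python
import cmath
import random

def upper(c):
    """Upper half plane, with the negative real axis included."""
    return c.imag > 0 or (c.imag == 0 and c.real < 0)

def branch_flip(c0, c1):
    """Exponent k in sqrt(c0) * sqrt(c1) = (-1)**k * sqrt(c0 * c1)."""
    p = c0 * c1
    if upper(c0) and upper(c1) and not upper(p):
        return 1
    if not upper(c0) and not upper(c1) and upper(p):
        return 1
    return 0

assert branch_flip(complex(-1, 0), complex(-1, 0)) == 1   # i * i = -1 = -sqrt(1)

random.seed(7)
for _ in range(2000):
    c0 = complex(random.uniform(-3, 3), random.uniform(-3, 3))
    c1 = complex(random.uniform(-3, 3), random.uniform(-3, 3))
    lhs = cmath.sqrt(c0) * cmath.sqrt(c1)
    rhs = (-1) ** branch_flip(c0, c1) * cmath.sqrt(c0 * c1)
    assert abs(lhs - rhs) < 1e-9
```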
For the following list of instantiations of signal processing transforms we like to subsume the parameters of f in a parameter tuple α.

    translation:      φ(α, p) → √π·k = (f α → √π·k) · (p̂ → √π·k)
    modulation:       φ(α, p) · (cis1 ↓ (k/√π)) = (f α · (cis1 ↓ (k/√π))) · p̂
    scaling:          k · φ(α, p) = f α · (k · p̂)
    shrinking:        φ(α, p) ↓ k = (f α ↓ k) · (p̂ ↓ k)
    conjugate:        conj(φ(α, p)) = conj(f α) · conj(p̂)
    multiplication:   φ(α₀, p₀) · φ(α₁, p₁) = (f(α₀) · f(α₁)) · q̂  where q = p₀ · p₁
    power with n ∈ N: φ(α, p)^n = (f α)^n · q̂  where q = p^n
    convolution:      φ(α₀, p₀) ∗ φ(α₁, p₁) = F(F⁻¹(φ(α₀, p₀)) · F⁻¹(φ(α₁, p₁)))
    Fourier transform:
        F⁻¹(φ(α, s:p)) = s · F⁻¹(f α) + (i/(2·√π)) · (F⁻¹(φ(α, p)))′
        where s:p prepends the coefficient s to p, i.e. (s:p)̂(t) = s + √π·t·p̂(t)
    differentiation:
        (1/√π) · (φ((y, a, b, c), p))′
            = f(y, a, b, c) · ((1/√π)·p̂′ − (t ↦ b + 2·c·√π·t) · p̂)    (15)
    integration:
        √π · ∫_{−∞}^T φ((y, a, b, c), p)(t) dt
            = s · √π · ∫_{−∞}^T f(y, a, b, c)(t) dt + f(y, a, b, c)(T) · q̂(T)
            = s · √(π·y) · exp(−(a − b²/(4·c))) · (1 + erf(b/(2·√c) + √(c·π)·T)) / (2·√c)
              + f(y, a, b, c)(T) · q̂(T)
        where q̂(t) = ((1/√π)·q̂′(t) − (p̂(t) − s)) / (b + 2·c·√π·t)    (16)
    eigenfunctions of the Fourier transform:
        e_n = f(1, 0, 0, 2)^(n) · f(1, 0, 0, −1)    (17)
        where ^(n) denotes the n-th derivative.

The equation (16) is the inverse of (15). This implies that in (16) the polynomial q depends recursively on itself. However, because the degree of p is one more than that of q, the leading term of q only depends on the leading term of p. Thus we can successively determine the terms of q, starting at the highest one. The equation can be translated almost literally to a polynomial division with remainder in our Haskell implementation and be solved using lazy evaluation.
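In the basis u = √π·t the differentiation rule (15) becomes pure coefficient arithmetic: with p̂(t) = Σⱼ pⱼ·uʲ, the derivative is φ(α, p)′ = √π·φ(α, r) with rⱼ = (j+1)·p_{j+1} − b·pⱼ − 2·c·p_{j−1}. The following Python sketch (ours; parameters restricted to reals) checks this against a central finite difference:

```python
import math

def phi(y, a, b, c, p, t):
    """phi((y,a,b,c), p)(t) = sqrt(y)*exp(-(a + b*sqrt(pi)*t + c*pi*t^2)) * sum_j p_j*(sqrt(pi)*t)^j"""
    u = math.sqrt(math.pi) * t
    poly = sum(pj * u ** j for j, pj in enumerate(p))
    return math.sqrt(y) * math.exp(-(a + b * u + c * math.pi * t * t)) * poly

def derive(b, c, p):
    """Coefficient form of rule (15): phi(alpha, p)' = sqrt(pi) * phi(alpha, r)."""
    n = len(p)
    r = [0.0] * (n + 1)
    for j in range(n + 1):
        r[j] = (j + 1) * p[j + 1] if j + 1 < n else 0.0
        r[j] -= b * p[j] if j < n else 0.0
        r[j] -= 2 * c * p[j - 1] if j >= 1 else 0.0
    return r

y, a, b, c = 1.0, 0.25, -0.5, 1.0
p = [1.0, 2.0, -1.0]
r = derive(b, c, p)
for t in (-1.0, 0.0, 0.7):
    h = 1e-6
    numeric = (phi(y, a, b, c, p, t + h) - phi(y, a, b, c, p, t - h)) / (2 * h)
    exact = math.sqrt(math.pi) * phi(y, a, b, c, r, t)
    assert abs(numeric - exact) < 1e-4
```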
In (17) we have used the definition of Hermite polynomials and the known fact that the Gaussian function multiplied with a Hermite polynomial is an eigenfunction of the Fourier transform.

3.4 Mixed Gaussians

In order to support sums of signals, we must maintain a set A of parameters for the Gaussians and a map P from the Gaussian parameters α to the associated polynomial factor:

    Σ_{α ∈ A} f α · q̂_α  where q_α = P α .

Eventually, this representation is general enough to be the target of a windowed Fourier transform or a best basis or matching pursuit. That is, we can approximate real-world signals in a natural way and perform exact signal processing operations on them.

4 Related work

With our paper we wanted to draw a connection between computer algebra on the one side and signal processing and stochastics on the other side. With "computer algebra" we mean exact computations involving complex mathematical objects like polynomials, polynomial ideals and groups, that is, at a higher level than computing with individual numbers but at a lower level than computing with general mathematical expressions as in symbolic manipulations. Although signal processing is certainly not the most prominent application of computer algebra, there are many problems that were solved using computer algebra methods. [4] The most famous application is probably the Discrete Fourier Transform, which can be performed in log-linear time with respect to the length of the input data using techniques from number theory, finite fields, polynomial rings and automated code generation. [1,3] Closely related is the fast convolution of discrete signals, which uses the Fast Fourier Transform; by the chirp transform it is also possible to express a fast Fourier transform in terms of fast convolution algorithms.
Another computer algebra application in signal processing is the design of frequency filters, where we have to construct rational functions given conditions for the location of their zeros and poles. [5]

In [2] the authors develop signal processing the algebraic way, as we do in our paper. That is, opposed to the sample value focus of most signal processing literature, they treat signals as objects, define operations on them, and propose and prove laws. The book is concerned with two-dimensional signals, but the difference to one-dimensional signals is not essential in this approach. Unlike our consideration of real functions and especially modified Gaussian functions, the authors stick to discrete signals.

Compared to established computer algebra systems and their symbolic integration machineries, our framework provides no new class of functions integrable in closed form. All of Maple 10, Mathematica 5.2 and Maxima 5.20.1 can integrate the integrals occurring in convolution, Fourier transform, norm and scalar product of our products of Gaussian function and polynomial in closed form. MuPAD 4.0.6 cannot integrate the more complicated convolutions, and Axiom 20091101 cannot cope with the integrals at all, since it does not yet support assumptions. We need an assertion for the coefficient c of the quadratic term in the exponent of the Gaussian function: it must have positive real part, or must at least be a positive real number. That is, with an assertion we must exclude signals with constant amplitude or even unbounded amplitude.

5 Outlook

5.1 Dirac impulses

The Dirac impulse δ is a virtual function that is infinitely high at the origin and zero elsewhere, enclosing an infinitely high and infinitely narrow rectangle of area 1. If this function existed, it would be the identity element of convolution.
Its Fourier transform is the function that is constant 1, because this is the identity element of pointwise function multiplication. In stochastics the Dirac impulse is needed for representing mixed discrete/continuous probability distributions. Existence of derivatives of the Dirac impulse would eliminate the need for a distinct differentiation operation, because x′ = x ∗ δ′. We could also more easily hide the factor √π in the differentiation operation, and we could represent frequency spectra of polynomial functions.

Several approaches like Schwartz distributions and non-standard analysis were developed in order to get a strictly founded notion of a Dirac impulse. However, none of them is completely satisfying: Schwartz distributions no longer have a notion of function application, and they cannot be multiplied pointwise. Non-standard analysis allows us to define infinitely high and infinitely narrow functions that actually leave real functions unaltered at the "coarse" scale of real numbers after convolution. However, when considering a non-standard function on all scales, convolution with the non-standard Dirac impulse does change the convolution partner. It is not known to us whether an approach can exist at all that fulfils all expectations.

All the more it is interesting whether we can have an object that behaves exactly like a Dirac impulse in our theory. Since our theory is abstracted from, but not bound to, real functions, we could check this way whether a Dirac impulse makes sense at all. Formally, in our approach a Dirac impulse could be represented by f(1, 0, 0, +∞). The term +∞ could be made precise by using projective geometry, i.e. by allowing an object like 1/0. But then it is open whether we should use

1. exp(−π·(a + b·t + c·t²)/d),
2. exp(−π·(a + b·t + (c/d)·t²)),
exp − π · ( a + b · t d + c · t 2 d 2 ) or 4. exp − π · a 0 a 1 + b 0 b 1 · t + c 0 c 1 · t 2 with a pro jective in terpretation of the fractions, a nd how to cop e with the am- plitude parameter . 5.2 Discrete si gnal pro cessing W e would lik e to hav e the s ame set of o pe rations and laws for discre te signa ls that we a lready have for real signals. In orde r to have dual time and frequenc y domains, w e need to conten t ourselves with p erio dic discrete signals. F o r instance for disc rete p er io dic sig nals x a nd y of p erio d length n conv olution and Discrete F ourier T ransform (DFT) a r e usually defined as: ( x ∗ y ) k = X j ∈ Z n x j · y k − j (DFT x ) k = 1 √ n · X j ∈ Z n exp 2 π i n · j · k · x j (DFT − 1 x ) k = 1 √ n · X j ∈ Z n exp − 2 π i n · j · k · x j . The factor 1 √ n is c hosen, such that DFT b ecomes unitar y . How ever with this definition it do es not ho ld DFT( x · y ) = DFT x ∗ DFT y , but instead √ n · DFT( x · y ) = DFT x ∗ DFT y . One solution would b e to add the factor 1 √ n to the definition of the conv o lution. This is at leas t very uncommon. An alterna tive is to tr eat discrete signals as piecewise cons tant functions. The sums a re turned to integrals and thus need a step width. T o this end we equip every signal with a sampling ra te and denote it with r ate. It holds rate(DFT x ) = n rate x . W e obtain the definitions ( x ∗ y ) k = 1 rate x · X j ∈ Z n x j · y k − j for { ra te x, rate y } = { r ate( x ∗ y ) } (DFT x ) k = 1 rate x · X j ∈ Z n exp 2 π i n · j · k · x j (DFT − 1 x ) k = 1 rate x · X j ∈ Z n exp − 2 π i n · j · k · x j . An algebra for signal pro cessing 15 F ollowing the reas oning of r eal signals, w e need eigenfunctions of the Fourier transform. 
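The √n mismatch in the unitary convention can be checked numerically. The following sketch uses NumPy, whose numpy.fft.fft is unnormalised (and uses the opposite exponent sign, which does not affect the normalisation check), so the 1/√n factor is added by hand; the library and the helper names are our choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
x = rng.standard_normal(n)
y = rng.standard_normal(n)

def dft(v):
    # Unitary DFT: divide numpy's unnormalised FFT by sqrt(n).
    return np.fft.fft(v) / np.sqrt(len(v))

def cyclic_conv(u, v):
    # Cyclic convolution (u * v)_k = sum_j u_j * v_{(k-j) mod n}
    m = len(u)
    return np.array([sum(u[j] * v[(k - j) % m] for j in range(m))
                     for k in range(m)])

lhs = np.sqrt(n) * dft(x * y)      # sqrt(n) * DFT(x . y)
rhs = cyclic_conv(dft(x), dft(y))  # DFT x  *  DFT y
print(np.allclose(lhs, rhs))       # True: the identity needs the sqrt(n)
```

Without the extra √n factor on the left-hand side the two sides disagree, which is exactly the blemish that the rate-equipped definitions above are meant to remove.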
Generic, simply representable eigenbases of the Discrete Fourier Transform are currently not known, but according to the Poisson summation formula, the discretised and periodically summed eigenfunctions of the continuous Fourier transform are eigenvectors of the discrete transform. However, discretising an eigenbasis of the continuous Fourier transform may not yield a discrete eigenbasis.

In discrete signal processing the identity element of convolution is simple to get: it is the signal that is 1 at index 0 and zero elsewhere. In contrast to that, the operation of signal dilation may lead to undefined and multiply defined elements in the resulting vector. The natural solution is to set undefined elements to zero and to sum up all candidates for multiply defined output elements. This can be written generally in the following way, where n is the signal period length and the empty sum is zero:

    (x ↑ k)_j = Σ_{l ∈ Z_n : l·k = j} x_l .

This definition matches shrinking the vector in the frequency domain. Nonetheless, we have to drop invertibility of dilation from the list of laws that hold for real signals.

Another problem is the definition of differentiation. We could replace it by centred discrete differences. Via the Fourier transform this would also yield a notion of periodic polynomials, namely polynomials in sin(2π·t/n) instead of t. But this interpretation of differentiation differs from differentiation of a continuous function with subsequent discretisation and periodic summation. Thus it cannot be used for eigenvector computation in the same way we used it for continuous signals.

Summarised, we cannot simply take the operations on our parameter tuples that we developed for continuous signals and use them for discrete signals by just interpreting them in a discrete way.

References

1. Clausen, M., Baum, U.: Fast Fourier Transforms.
   BI Wissenschaftsverlag (1993)
2. D'Alotto, L.A., Giardina, C.R., Luo, H.: A unified signal algebra approach to two-dimensional parallel digital signal processing. Monographs and Textbooks in Pure and Applied Mathematics, Marcel Dekker (1998)
3. Frigo, M., Johnson, S.G.: FFTW: The Fastest Fourier Transform in the West, version 3. http://www.fftw.org/
4. Grabmeier, J., Kaltofen, E., Weispfenning, V. (eds.): Computer Algebra Handbook. Springer (2003)
5. Hamming, R.W.: Digital Filters. Signal Processing Series, Prentice Hall (January 1989)
6. Mallat, S.G., Zhang, Z.: Matching pursuits with time-frequency dictionaries. IEEE Transactions on Signal Processing 41(12), 3397–3415 (December 1993), http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=258082
7. Triebel, H.: Höhere Analysis. Harri Deutsch (1980)