The Berlekamp-Massey Algorithm via Minimal Polynomials

We present a recursive minimal polynomial theorem for finite sequences over a commutative integral domain $D$. This theorem is relative to any element of $D$. The ingredients are: the arithmetic of Laurent polynomials over $D$, a recursive 'index function' and simple mathematical induction. Taking reciprocals gives a 'Berlekamp-Massey theorem', i.e. a recursive construction of the polynomials arising in the Berlekamp-Massey algorithm, relative to any element of $D$. The recursive theorem readily yields the iterative minimal polynomial algorithm due to the author and a transparent derivation of the iterative Berlekamp-Massey algorithm. We give an upper bound for the sum of the linear complexities of $s$ which is tight if $s$ has a perfect linear complexity profile. This implies that over a field, both iterative algorithms require at most $2\lfloor n^2/4\rfloor$ multiplications.

Authors: Graham H. Norton

G. H. Norton, Department of Mathematics, University of Queensland. November 26, 2024.

Keywords: Berlekamp-Massey algorithm; Laurent polynomial; minimal polynomial; recursive function.

1 Introduction

1.1 The Berlekamp-Massey (BM) Algorithm

The BM algorithm determines a linear recurrence of least order $L \geq 0$ which generates a given (finite) sequence $s$ of length $n \geq 1$ over a field $F$, [13]. It is widely used in Coding Theory, Cryptography and Symbolic Computation. There are also connections with partial realization in Mathematical Systems Theory; see [18, Introduction] and the references cited there. For an exposition based on [13], see [22, Section 9.5]. However, 'The inner workings of the Berlekamp-Massey algorithm can appear somewhat mysterious', [3, p. 187], and the extended Euclidean algorithm is usually preferred as 'It is much easier to understand', [14, p. 355].
For a recent example where the extended Euclidean algorithm is regarded as 'simpler to understand, to implement and to prove', see [2]. There have been a number of derivations of the BM algorithm for sequences over a field, such as [3, Chapter 7] and [9], which uses Hankel matrices. A similar approach to [9] appeared in [11]; this uses Kronecker's Theorem on the rank of Hankel matrices and the Iohvidov index of a Hankel matrix. We do not know if [11] applies to finite fields. Another approach [8] uses the Feng-Tzeng algorithm [4]. For references relating the BM and Euclidean algorithms, see [18, Introduction] and the references cited there. A recursive version of the BM algorithm (based on splitting a sequence and recombining the results) appeared in [3, p. 336].

1.2 Linear Recurring Sequences via Laurent Series

The conventional approach to linear recurring sequences indexes them by the non-negative integers and uses reciprocals of polynomials as characteristic functions; see [12], [13] or any of the standard texts. This complicates their theory. We took a non-standard, algebraic approach in [18], [20] (an expository version of [18]): use the field $F[[x^{-1},x]$ of $F$-Laurent series in $x^{-1}$ (the case $F = \mathbb{R}$ is widely used in Mathematical Systems Theory) to study linear recurring sequences. For us, a sequence is indexed by $1, 2, \ldots$ We began with $F[[x^{-1},x]$ as standard $F[[x^{-1},x]$-module. Later we realized that it was enough for $F$ to be a commutative unital integral domain $D$ and used the decomposition $D[[x^{-1},x] = x^{-1}D[[x^{-1}]] \oplus D[x]$. The action of $D[x]$ on $x^{-1}D[[x^{-1}]]$ is obtained as follows: project multiplication (in $D[[x^{-1},x]$) of an element of $D[x]$ and an element of $x^{-1}D[[x^{-1}]]$ onto the first summand. One checks that this makes $x^{-1}D[[x^{-1}]]$ into a $D[x]$-module.
Linear recurring sequences are then the torsion elements in a natural $D[x]$-module. For any sequence $s$, we have its annihilator ideal $\mathrm{Ann}(s)$; its elements are the 'annihilating polynomials' of $s$ and are defined by Equation (2). Strictly speaking, $s$ satisfies a linear recurrence relation if $\mathrm{Ann}(s) \neq \{0\}$ and is a linear recurring sequence if it has a monic annihilating polynomial. When $D$ is a field, $\mathrm{Ann}(s)$ is generated by a unique monic annihilating polynomial of $s$, the minimal polynomial of $s$ (rather than the conventional reciprocal of a certain characteristic polynomial multiplied by a power of $x$). In [25, Section IIA] (where an element of $\mathrm{Ann}(s)$ with minimal degree was called 'a characteristic polynomial' of $s$), [1, Definition 2.1] and [26, Definition 2.1], the definition of a linear recurring sequence $s_0, s_1, \ldots$ is equivalent to expanding the left-hand side of Equation (2) and replacing $d+1 \leq j$ by $d \leq j$. We note that [18] and [20] were referred to in [21].

1.3 Finite Sequences via Laurent Polynomials

To study finite sequences, we replaced Laurent series in $x^{-1}$ by Laurent polynomials $D[x^{-1},x]$ in [18], [20]; for a succinct overview of [20], see [16]. Unfortunately, $x^{-1}D[x^{-1}]$ does not become a $D[x]$-module, but we can still define the notions of annihilating and minimal polynomials; see Definitions 2.1, 3.1. In this paper, we present a recursive minimal polynomial function, see Section 3. We replace the key definition of '$m$' of [13, Equation (11), p. 123] by a recursively defined 'index function'; see Definition 4.1. We then derive a recursive theorem for minimal polynomials. Taking reciprocals (see Corollary 4.10) leads to a recursive BM theorem (see Theorem 5.4). Our proofs use no more than the absence of zero-divisors, the arithmetic of Laurent polynomials and simple induction.
1.4 The Iterative Algorithms

Our iterative minimal polynomial algorithm (Algorithm 4.12) and version of the BM algorithm (Algorithm 5.6) follow immediately. Both are relative to any scalar $\varepsilon \in D$ ($\varepsilon = 1$ was used in [13] whereas $\varepsilon = 0$ was used in [18], [20]). Algorithm 5.6 is simpler than [9, p. 148] (see Remark 5.8) and, unlike the classical BM algorithm, it is division-free, cf. [23]. The last section discusses the complexity of these two algorithms and does not depend on any aspects of the classical BM algorithm. We give an upper bound for the sum of the linear complexities of $s$, which is tight if $s$ has a perfect linear complexity profile, Corollary 6.3. This implies that the number of multiplications for Algorithms 4.12 and 5.6 is at most $3\lfloor n^2/4\rfloor$ (Theorem 6.5) and improves the bound of $\lfloor 3n^2/2\rfloor$ given in [18, Proposition 3.23]. Over a field $F$, this reduces to $2\lfloor n^2/4\rfloor$ (if we ignore divisions in $F$). We also include some remarks on the average complexity.

1.5 Extensions and Rational Approximation

Let $s = (s_1, \ldots, s_n) \in D^n$ be a finite sequence and $s^{(j)} = (s_1, \ldots, s_j)$ have 'generating function' $\bar{s}^{(j)} = s_1x^{-1} + \cdots + s_jx^{-j}$ for $1 \leq j \leq n$. We write (i) $\mu^{(j)}$ for the minimal polynomial of $s^{(j)}$ of Theorem 4.5 with degree $\mathrm{L}_j$ (ii) $\nu^{(j)}$ for the 'polynomial part' of $\mu^{(j)} \cdot \bar{s}$, which was evaluated in [18]. Then $\deg(\nu^{(j)}) < \mathrm{L}_j$ and

$$\mu^{(j)} \cdot \bar{s} \equiv \nu^{(j)} \bmod x^{-j-1} \quad (1)$$

for $1 \leq j \leq n$. Remarkably, our formula for $\nu^{(j)}$,

$$\nu^{(j)} = \Delta'_j \cdot x^{\max\{e,0\}}\,\nu^{(j-1)} - \Delta_j \cdot x^{\max\{-e,0\}}\,\nu'^{(j-1)},$$

is identical to Theorem 4.5 with $\mu$ replaced by $\nu$, where $e = \mathrm{e}_{j-1} = j - 2\mathrm{L}_{j-1}$; the only difference is that $\nu$ is initialised differently. It is well-known that the BM algorithm also computes rational approximations.
We could also extend Algorithm 5.6 to compute $\nu^{(j)*}$ iteratively, obtaining $\deg(\nu^{(j)})$ from Equation (1) and $\mathrm{L}_j$ (when $\deg(\nu^{(j)}) \neq 0$). In this way, Algorithm 5.6 could also be used to decode not just binary BCH codes, but Reed-Solomon codes, errors and erasures, classical Goppa codes, negacyclic codes, and can be simplified in characteristic two. As this has already been done more simply using rational approximation via minimal polynomials in [19] and [20, Section 8], we will not compute $\nu^{(n)*}$ iteratively here. An extension of Theorem 4.5 to rational approximation will appear in [17].

We thank an anonymous referee for a simpler proof of Lemma 6.3. A preliminary version of this work was presented in May 2010 at Equipe SECRET, Centre de Recherche, INRIA Paris-Rocquencourt, whom the author thanks for their hospitality.

2 Preliminaries

2.1 Notation

Let $\mathbb{N} = \{1, 2, \ldots\}$, $n \in \mathbb{N}$ and let $D$ denote a commutative, unital integral domain with $1 \neq 0$. For any set $S$ containing 0, $S^\times = S \setminus \{0\}$. We say that $f \in D[x]^\times$ is monic if its leading term is 1. The reciprocal of 0 is 0 and for $f \in D[x]^\times$, its reciprocal is $f^*(x) = x^{\deg(f)}f(x^{-1})$. We often write $f = x^eg + h$ for $f(x) = x^eg(x) + h(x)$, where $e \in \mathbb{N}$ and $g, h \in D[x]$.

2.2 Linear Recurring Sequences

By an infinite sequence $s = (s_1, s_2, \ldots)$ over $D$, we mean a function $s : \mathbb{N} \to D$, i.e. an element of the abelian group $D^{\mathbb{N}}$. The standard algebraic approach to 'linear recurring sequences' is to study $D^{\mathbb{N}}$ using $s(x) = \sum_{j \geq 1} s_jx^j \in D[[x]]$ as in [12], [24], which requires reciprocal polynomials and complicates their underlying theory. We recall the approach of [18]. We begin with the standard $D[[x^{-1},x]$-module, i.e. $D[[x^{-1},x]$ acting on itself via multiplication. (This also makes $D[[x^{-1},x]$ a $D[x]$-module.)
Next we let $D[x]$ act on $x^{-1}D[[x^{-1}]]$ by projecting the product (in $D[[x^{-1},x]$) of $f \in D[x]$ and $\bar{s} = \sum_{j \geq 1} s_jx^{-j}$ onto the first summand of $D[[x^{-1},x] = x^{-1}D[[x^{-1}]] \oplus D[x]$, i.e.

$$f \circ \bar{s} = \sum_{j \geq 1} (f \cdot \bar{s})_{-j}\, x^{-j}.$$

One checks that this makes $x^{-1}D[[x^{-1}]]$ into a $D[x]$-module. Let $\mathrm{Ann}(\bar{s}) = \{f \in D[x] : f \circ \bar{s} = 0\}$ denote the annihilator ideal of $\bar{s}$; $f$ is an annihilating polynomial or an annihilator of $\bar{s}$ if $f \in \mathrm{Ann}(\bar{s})$. We will often write $f \circ s$ for $f \circ \bar{s}$ and $\mathrm{Ann}(s)$ for $\mathrm{Ann}(\bar{s})$. We say that $s$ satisfies a linear recurrence relation if it is a torsion element, i.e. if $\mathrm{Ann}(s) \neq (0)$ [18, Section 2]. In other words, $s$ satisfies a linear recurrence relation if for some $f \in D[x]$ with $d = \deg(f) \geq 0$

$$(f \cdot \bar{s})_{d-j} = 0 \text{ for } d+1 \leq j. \quad (2)$$

In this case, $f \in \mathrm{Ann}(s)^\times$. If we expand the left-hand side of Equation (2) we obtain $f_0s_{j-d} + \cdots + f_ds_j = 0$ for $d+1 \leq j$. When $f_d = 1$, we can write $s_j = -(f_0s_{j-d} + \cdots + f_{d-1}s_{j-1})$ for $j \geq d+1$ and $s$ is a linear recurring sequence. For the Fibonacci sequence $s = 1, 1, 2, \ldots$ for example, $x^2 - x - 1 \in \mathrm{Ann}(s)$.

We say that $f \in \mathrm{Ann}(s)^\times$ is a minimal polynomial of $s$ if $\deg(f) = \min\{\deg(g) : g \in \mathrm{Ann}(s)^\times\}$. As $\mathrm{Ann}(s)$ is an ideal, we easily see that $s$ has a unique monic minimal polynomial which generates $\mathrm{Ann}(s)$ when $D$ is a field. More generally, it was shown in [6] that if $\mathrm{Ann}(s) \neq \{0\}$ then (i) if $D$ is factorial then $\mathrm{Ann}(s)$ is principal and has a primitive generator (ii) if $D$ is potential, then $\mathrm{Ann}(s)$ has a unique monic generator. In [6], we called $D$ potential if $D[[x]]$ is factorial. It is known that principal ideal domains and $F[x_1, \ldots, x_k]$ are potential, but not all factorial domains are potential; see [6, Introduction] and the references cited there.
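The defining condition of Equation (2), in its expanded form, is easy to test on examples. The following Python sketch (ours, not part of the paper) checks that $x^2 - x - 1$ annihilates a prefix of the Fibonacci sequence mentioned above:

```python
# A minimal sketch (ours, not the paper's) of the expanded form of Equation (2):
# f annihilates s when f_0 s_{j-d} + ... + f_d s_j = 0 for all d+1 <= j <= n.
# Coefficient lists are low degree first, so [-1, -1, 1] is x^2 - x - 1.

def is_annihilator(f, s):
    """Check the linear recurrence f_0 s_{j-d} + ... + f_d s_j = 0 (1-based j)."""
    d = len(f) - 1
    return all(
        sum(f[k] * s[j - d + k - 1] for k in range(d + 1)) == 0
        for j in range(d + 1, len(s) + 1)
    )

fib = [1, 1, 2, 3, 5, 8, 13, 21]          # a Fibonacci prefix
print(is_annihilator([-1, -1, 1], fib))   # x^2 - x - 1 annihilates it: True
print(is_annihilator([-1, 1], fib))       # x - 1 does not: False
```

Of course a polynomial of degree at least the length of the prefix annihilates it vacuously, which is why minimality matters below.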
2.3 Finite Sequences

We now adapt the preceding definition of $\mathrm{Ann}(s)$ to finite sequences $s \in D^n$ by using Laurent polynomials. This also leads to a less complicated theory of their annihilating and minimal polynomials. First, let $s = (s_1, \ldots, s_n)$ and $s(x) \in D[x]$ be $s(x) = s_1x + \cdots + s_nx^n$. We will also abbreviate $s(x^{-1}) = s_1x^{-1} + \cdots + s_nx^{-n}$ to $\bar{s}$, so that $\bar{s}_j = s_{-j}$ for $-n \leq j \leq -1$. In the following definition, multiplication of $f \in D[x]$ and $\bar{s} \in D[x^{-1}]$ is in the domain of $D$-Laurent polynomials $D[x^{-1},x]$.

Definition 2.1 (Annihilator, annihilating polynomial) ([18, Definition 2.7, Proposition 2.8]) If $s \in D^n$, then $f \in D[x]$ is an annihilator (or a characteristic polynomial) of $s$ if $f = 0$, or $d = \deg(f) \geq 0$ and

$$(f \cdot \bar{s})_{d-j} = 0 \text{ for } d+1 \leq j \leq n, \quad (3)$$

written $f \in \mathrm{Ann}(s)$.

If we expand the left-hand side of Equation (3), we obtain $f_0s_{j-d} + \cdots + f_ds_j = 0$ for $d+1 \leq j \leq n$. Any polynomial of degree at least $n$ is vacuously an annihilator of $s$. For $1 \leq i \leq n$, we write $s^{(i)}$ for $(s_1, \ldots, s_i)$. If $n \geq 2$, then $\mathrm{Ann}(s) \subseteq \mathrm{Ann}(s^{(n-1)})$. If $d \leq n-1$ and the leading term of $f$ is a unit, we can make $f$ monic and generate the last $n-d$ terms of $s$ recursively from the first $d$ terms. The following definition is a functional version of [18, Definition 2.10].

Definition 2.2 (Discrepancy Function) We define $\Delta : D[x]^\times \times D^n \to D$ by $\Delta(f, s) = (f \cdot \bar{s})_{\deg(f)-n}$.

Thus $\Delta(f, s) = \sum_{k=0}^{d} f_ks_{n-d+k}$ where $d = \deg(f)$. Clearly for $n \geq 2$, $f \in \mathrm{Ann}(s)^\times$ if and only if $f \in \mathrm{Ann}(s^{(n-1)})^\times$ and $\Delta(f, s) = 0$. For any $s_1 \in D^\times$ and constant polynomial $f \in D[x]^\times$, $\Delta(f, (s_1)) = f_0s_1 \neq 0$. If $s$ has exactly $n-1 \geq 1$ leading zeroes, $s_n \neq 0$ and $f = 1$, then $f \in \mathrm{Ann}(s^{(n-1)})$, but $\Delta(f, s) = s_n \neq 0$. Let $s$ be such that $s^{(n-1)}$ is geometric with common ratio $r \in D^\times$, but $s$ is not geometric.
In this case, we have $x - r \in \mathrm{Ann}(s^{(n-1)})$ but $\Delta(x-r, s) \neq 0$. If $s \in D^n$ is understood, we write $\Delta_n(f)$ for $\Delta(f, s)$; if $f$ is also understood, we simply write $\Delta_n$. It is elementary that if $1 \leq i \leq n-1$, then $(f \cdot \bar{s}^{(i)})_{\deg(f)-i} = (f \cdot \bar{s})_{\deg(f)-i}$.

3 Minimal Polynomials

A notion of a 'minimal polynomial' of a finite sequence over a field seems to have first appeared in [24, Equation (3.16)], where the minimal polynomial of a finite sequence was defined in terms of the output of the BM algorithm of [13]. We were unaware of [24] and adopted a more basic and more general approach which is independent of the BM algorithm. In particular, the approach introduced in [18] is independent of linear feedback shift registers and connection polynomials. For us, a sequence may have more than one minimal polynomial.

Definition 3.1 (Minimal Polynomial) ([18, Definition 3.1]) We say that $f \in \mathrm{Ann}(s)$ is a minimal polynomial of $s \in D^n$ if $\deg(f) = \min\{\deg(g) : g \in \mathrm{Ann}(s)^\times\}$ and let $\mathrm{MP}(s)$ denote the set of minimal polynomials of $s$.

As any $f \in D[x]$ of degree at least $n$ annihilates $s \in D^n$, $\mathrm{MP}(s) \neq \emptyset$. We do not require minimal polynomials to be monic. For any $d \in D^\times$, $d \in \mathrm{MP}(0, \ldots, 0)$; if $s_1 \neq 0$ and $\deg(f) = 1$ then $f \in \mathrm{MP}((s_1))$ since $D$ has no zero divisors. The linear complexity function $\mathrm{L} : D^n \to \{0\} \cup \mathbb{N}$ is $\mathrm{L}(s) = \deg(f)$ where $f \in \mathrm{MP}(s)$. We will also write $\mathrm{L}_n$ for $\mathrm{L}(s)$ when $s$ is understood and similarly $\mathrm{L}_j = \mathrm{L}(s^{(j)})$ for $1 \leq j \leq n$. For fixed $s$, $\mathrm{L}_i$ is clearly a non-decreasing function of $i$. It is trivial that if $s$ is infinite and satisfies a linear recurrence relation, then

$$\mathrm{Ann}(s) \subseteq \bigcap_{n \geq 1} \mathrm{Ann}(s^{(n)}).$$

When $D$ is a field, a minimal polynomial of a linear recurring sequence $t$ is usually defined as a generator of the ideal $\mathrm{Ann}(t)$; see [12, Chapter 8].

Proposition 3.2 (Cf.
[25]) Let $n \geq 1$, $s \in D^n$ and $f \in \mathrm{MP}(s)$ be monic. Define $t \in D^{\mathbb{N}}$ to be the extension of $s$ by $f$. If $\mathrm{Ann}(t)$ is principal then $\mathrm{Ann}(t) = (f)$.

Proof. Let $\mathrm{Ann}(t) = (g)$ say. As $f \in \mathrm{Ann}(t)^\times$, $\mathrm{Ann}(t) \neq (0)$. If $g \neq 0$ generates $\mathrm{Ann}(t)$ then $g \mid f$ and $\deg(f) \geq \deg(g)$. Since $g \in \mathrm{Ann}(s)$, we cannot have $\deg(g) < \deg(f)$, for then $f \notin \mathrm{MP}(s)$. So $\deg(g) = \deg(f) = d$ say. Equating leading coefficients shows that $g_d$ is a unit of $D$ and so we can also assume that $g$ is monic. We conclude that $f = g$ and that $\mathrm{Ann}(t) = (f)$. □

It will follow from Proposition 5.2 below that the (unique) minimal polynomial of [24] obtained from the output of the BM algorithm is an example of a minimal polynomial as per Definition 3.1.

3.1 Exponents

The following definition will play a key role in defining our recursive minimal polynomial function. The reason for choosing the term 'exponent' will become clear below.

Definition 3.3 (Exponent Function) For $n \geq 1$, let the $n$th exponent function $\mathrm{e}_n : D[x]^\times \to \mathbb{Z}$ be given by $\mathrm{e}_n(f) = n + 1 - 2\deg(f)$.

The following lemma is the annihilator analogue of [13, Lemma 1] and will be used for proving minimality. We include a short proof to keep the presentation self-contained. Commutativity and the absence of zero-divisors are essential here.

Lemma 3.4 ([18, Lemma 5.2]) Let $n \geq 2$, $f \in \mathrm{Ann}(s^{(n-1)})^\times$ and $\Delta_n(f) \neq 0$. (i) For any $g \in \mathrm{Ann}(s)^\times$, $\deg(g) \geq n - \deg(f) = \mathrm{e}_{n-1}(f) + \deg(f)$. (ii) If further $f \in \mathrm{MP}(s^{(n-1)})$, then $\deg(g) \geq \max\{\mathrm{e}_{n-1}(f), 0\} + \deg(f)$.

Proof. Put $\Delta = \Delta_n(f)$. We can write $f \cdot \bar{s} = N + \Delta \cdot x^{d-n} + P$ where $d = \deg(f)$, $N_i = 0$ for $d-n \leq i \leq -1$ and $P \in D[x]$. Likewise, write $g \cdot \bar{s} = M + Q$ with $e = \deg(g)$, $M_i = 0$ for $e-n \leq i \leq -1$ and $Q \in D[x]$. Let $h \in D[x]$ be

$$h = f \cdot Q - g \cdot P = g \cdot N - f \cdot M + g \cdot \Delta \cdot x^{d-n}.$$
By construction $(g \cdot N - f \cdot M)_{d+e-n} = 0$, so $h_{d+e-n} = g_e \cdot \Delta \neq 0$ and $d+e-n \geq 0$. The last sentence is immediate since $\mathrm{Ann}(s) \subseteq \mathrm{Ann}(s^{(n-1)})$ and $\max\{\mathrm{e}_{n-1}(f), 0\} + \deg(f) = \max\{n - \deg(f), \deg(f)\}$. □

If $s$ has exactly $n-1 \geq 1$ leading zeroes and $s_n \neq 0$, then $1 \in \mathrm{MP}(s^{(n-1)})$ and so $\mathrm{L}(s^{(n-1)}) = \mathrm{L}(s^{(1)}) = 0$. Lemma 3.4 implies that $\mathrm{L}(s) \geq n$ and since any polynomial of degree $n$ is an annihilator, $\mathrm{L}(s) = n$. For a geometric sequence $s^{(n-1)}$ over $D$ with common ratio $r \in D^\times$ such that $s$ is not geometric, we have $x - r \in \mathrm{MP}(s^{(n-1)})$ and $\Delta(x-r, s) \neq 0$. By Lemma 3.4, we have $\mathrm{L}(s) \geq n-1$. We will see that $\mathrm{L}(s) = n-1$.

If $f^{(j)} \in \mathrm{MP}(s^{(j)})$ for $1 \leq j \leq n-1$ and $\mathrm{e}_{n-1} = \mathrm{e}_{n-1}(f^{(n-1)}) > 0$ then $\mathrm{L}_n \geq \mathrm{L}_{n-1} + \mathrm{e}_{n-1}$ by Lemma 3.4, and inductively,

$$\mathrm{L}_n \geq \mathrm{L}_1 + \sum_{\mathrm{e}_{j-1}(f^{(j-1)}) > 0} \mathrm{e}_{j-1}(f^{(j-1)}).$$

Theorem 4.5 will imply that this is actually an equality.

4 A Recursive Minimal Polynomial Function

We will define a recursive minimal polynomial function $\mu : D^n \to D[x]$. But first we need the following function (which assumes that $\mu : D^{n-1} \to D[x]$ has been defined). We also set $\Delta_0 = 1$.

4.1 The Index Function

Definition 4.1 (Index Function) Let $n \geq 1$ and $s \in D^n$. We set $\mu^{(0)} = 1$ (so that $\Delta_1 = \Delta_1(\mu^{(0)}) = s_1$) and $\mathrm{e}_0 = 1$. Suppose that for $1 \leq j \leq n-1$, $\mu^{(j)} \in \mathrm{MP}(s^{(j)})$ has discrepancy $\Delta_{j+1}$ and exponent $\mathrm{e}_j$. We define the index function $' : \{0, \ldots, n\} \to \{-1, \ldots, n-1\}$ by $0' = -1$ and for $1 \leq j \leq n$

$$j' = \begin{cases} (j-1)' & \text{if } \Delta_j = 0 \text{ or } (\Delta_j \neq 0 \text{ and } \mathrm{e}_{j-1} \leq 0) \\ j-1 & \text{if } \Delta_j \neq 0 \text{ and } \mathrm{e}_{j-1} > 0. \end{cases}$$

Thus for example, $1' = -1$ if $s_1 = 0$ and $1' = 0$ when $s_1 \neq 0$ (since $\mathrm{e}_0 > 0$). More generally, if $s$ has exactly $n-1 \geq 0$ leading zeroes, then $(n-1)' = \cdots = 0' = -1$ and $n' = n-1$.
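The index function is determined step by step from the flags '$\Delta_j \neq 0$' and the signs of the exponents. The following small Python sketch (ours; the flag data is read off Table 1 below, for $s = (1,0,1,0,0)$ over $\mathrm{GF}(2)$) makes the recursion concrete:

```python
# A small sketch (ours, not the paper's) of Definition 4.1: given, for each
# j = 1..n, whether Delta_j != 0 and the exponent e_{j-1}, compute 0', 1', ..., n'.

def index_function(steps):
    """steps[j-1] = (delta_j_nonzero, e_{j-1}); returns the list [0', 1', ..., n']."""
    idx = [-1]                                 # 0' = -1
    for j, (delta_nonzero, e_prev) in enumerate(steps, start=1):
        if delta_nonzero and e_prev > 0:
            idx.append(j - 1)                  # j' = j - 1
        else:
            idx.append(idx[j - 1])             # j' = (j - 1)'
    return idx

# Flags read off Table 1 (s = (1,0,1,0,0) over GF(2)): (Delta_j != 0?, e_{j-1}).
table1 = [(True, 1), (False, 0), (True, 1), (False, 0), (True, 1)]
print(index_function(table1))   # [-1, 0, 0, 2, 2, 4], matching Example 4.2
```

In the full algorithm the flags are of course computed on the fly, not tabulated in advance.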
Example 4.2 In Table 1, $2' = 1' = 0$, $4' = 3' = 2$ and $5' = 4$; in Table 2, $1' = -1$ and $4' = 3' = 2' = 1$.

It is trivial that $j' \leq j-1$ for $0 \leq j \leq n$. We will see that the $j$ for which $\Delta_j \neq 0$ and $\mathrm{e}_{j-1} > 0$ are precisely those $j$ for which $\mathrm{L}_j = \mathrm{L}_{j-1} + \mathrm{e}_{j-1}$; the linear complexity has increased by $\mathrm{e}_{j-1}$. The next result is essential.

Proposition 4.3 For $0 \leq j \leq n$, $\Delta_{j'+1} \neq 0$.

Proof. We have $\Delta_0 = 1$. Inductively, assume that $\Delta_{k'+1} \neq 0$ for all $k$, $0 \leq k \leq j-1$. If $\Delta_j = 0$, then $\Delta_{j'+1} = \Delta_{(j-1)'+1} \neq 0$ by the inductive hypothesis. But if $\Delta_j \neq 0$ and $\mathrm{e}_{j-1} \leq 0$, then $\Delta_{j'+1} = \Delta_{(j-1)'+1} \neq 0$ by the inductive hypothesis. Otherwise $\Delta_{j'+1} = \Delta_j$ since $j' = j-1$ and we are done. □

The definition of $j'$ as a maximum $a_j$ in [18], [20] required $j \geq 3$ and $\mathrm{L}_{j-1} > \mathrm{L}_1$. This in turn necessitated (i) defining $a_j$ separately when $n = 1$ or ($n \geq 2$ and $\mathrm{L}_{j-1} = \mathrm{L}_1$ for $1 \leq j-1 \leq n-1$) and (ii) merging the separate constructions of minimal polynomials into a single construction. Further, [18, Proposition 4.1] showed that the two notions coincide, and required that $\mathrm{L}_{-1} = \mathrm{L}_0 = 0$.

4.2 The Recursive Theorem

Our goal in this subsection is to define a recursive function $\mu : D^n \to D[x]$ such that for all $s \in D^n$, $\mu(s) \in \mathrm{MP}(s)$. When $s$ is understood, we will write $\mu^{(j)}$ for $\mu(s^{(j)})$. A minimal polynomial of $s^{(1)}$ is clear by inspection, so we could use $n = 1$ as the basis of the recursion, but with slightly more work, we will see that we can use $n = 0$ as the basis.

Definition 4.4 (Basis of the Recursion) Recall that $0' = -1$ and $\Delta_0 = 1$. Let $\varepsilon \in D$ be arbitrary but fixed and $s \in D^n$. We put $\mu^{(-1)} = \mu(s, -1) = \varepsilon$ and $\mu^{(0)} = \mu(s, 0) = 1$.

Thus the exponent of $\mu^{(0)}$ is $\mathrm{e}_0 = 1$ and $\Delta_1 = s_1$. It follows from Proposition 4.3 and Lemma 3.4 that

$$\mathrm{L}_j \geq \mathrm{L}_{j'+1} \geq \max\{j' + 1 - \mathrm{L}_{j'}, \mathrm{L}_{j'}\} \geq j' + 1 - \mathrm{L}_{j'}.$$
A key step in the proof of Theorem 4.5 is that the first and last inequalities are actually equalities.

Notation. To simplify Theorem 4.5, we will use the following notation: (i) $\mu' = \mu \circ {}'$ (where $\circ$ denotes composition) (ii) $\mathrm{L}' = \deg \circ \mu'$ (iii) $\Delta' = \Delta \circ (+1) \circ {}' \circ (-1)$, where $\pm 1$ have the obvious meanings. Thus $\mu'^{(j)} = \mu^{(j')}$, $\mathrm{L}'_j = \mathrm{L}_{j'}$ and $\Delta'_j = \Delta(\mu^{(k)}, s^{(k+1)}) = (\mu^{(k)} \cdot \bar{s})_{\mathrm{L}_k - k - 1}$ where $k = (j-1)'$.

The definition of $\mu : D^n \to D[x]$ in the following theorem was motivated in [20]: given a minimal polynomial function $\mu : D^{n-1} \to D[x]$, the theorem constructs $\mu : D^n \to D[x]$ such that for all $s \in D^n$, $\mu(s) \in \mathrm{MP}(s)$. We note that to verify that $\mu(s) \in \mathrm{Ann}(s)$, we first need $\deg(\mu(s))$.

Theorem 4.5 (Cf. [13], [22, Section 9.6]) Let $n \geq 1$ and $s \in D^n$ and assume the initial values of Definition 4.4. Define $\mu^{(n)}$ recursively by

$$\mu^{(n)} = \begin{cases} \mu^{(n-1)} & \text{if } \Delta_n = 0 \\ \Delta'_n \cdot x^{\max\{\mathrm{e}_{n-1},0\}}\mu^{(n-1)} - \Delta_n \cdot x^{\max\{-\mathrm{e}_{n-1},0\}}\mu'^{(n-1)} & \text{otherwise.} \end{cases}$$

If $\Delta_n = 0$, clearly $\mu^{(n)} \in \mathrm{MP}(s)$, $\mathrm{L}_n = \mathrm{L}_{n-1}$ and $\mathrm{e}_n = \mathrm{e}_{n-1} + 1$. If $\Delta_n \neq 0$ then

(i) $\deg(\mu^{(n)}) = \max\{\mathrm{e}_{n-1}, 0\} + \mathrm{L}_{n-1} = n' + 1 - \mathrm{L}'_n$
(ii) $\mu^{(n)} \in \mathrm{MP}(s)$
(iii) $\mathrm{e}_n = -|\mathrm{e}_{n-1}| + 1$.

Proof. We prove (i) by induction on $n$. For $n = 1$, $\mu^{(1)} = x - \Delta_1 \cdot \varepsilon$ and $\max\{\mathrm{e}_0, 0\} + \mathrm{L}_0 = 1 = \deg(\mu^{(1)})$. As for the second equality, $1' = 0$ since $\mathrm{e}_0 = 1 > 0$ and $1' + 1 - \mathrm{L}'_1 = 1 = \deg(\mu^{(1)})$. Suppose inductively that $n \geq 2$ and that (i) is true for $1 \leq j \leq n-1$. If $\mathrm{e}_{n-1} \leq 0$ then $\mu^{(n)} = \Delta'_n \cdot \mu^{(n-1)} - \Delta_n \cdot x^{-\mathrm{e}_{n-1}}\mu'^{(n-1)}$. We have to show that $-\mathrm{e}_{n-1} + \mathrm{L}'_{n-1} < \mathrm{L}_{n-1}$. But $-\mathrm{e}_{n-1} + \mathrm{L}'_{n-1}$ is

$$-(n - 2\mathrm{L}_{n-1}) + \mathrm{L}'_{n-1} = -n + 2\mathrm{L}_{n-1} + (n-1)' + 1 - \mathrm{L}_{n-1} = \mathrm{L}_{n-1} + (n-1)' + 1 - n$$

by the inductive hypothesis and we know that $(n-1)' \leq n-2$ for all $n \geq 1$.
Thus $-\mathrm{e}_{n-1} + \mathrm{L}'_{n-1} < \mathrm{L}_{n-1}$ and $\mathrm{e}_{n-1} \leq 0$ implies that $\deg(\mu^{(n)}) = \mathrm{L}_{n-1}$. Suppose now that $\mathrm{e}_{n-1} > 0$. We have to show that $\deg(\mu^{(n)}) = \mathrm{e}_{n-1} + \mathrm{L}_{n-1}$, i.e. that $\mathrm{e}_{n-1} + \mathrm{L}_{n-1} > \mathrm{L}'_{n-1}$. But $\mathrm{e}_{n-1} + \mathrm{L}_{n-1} = n - \mathrm{L}_{n-1} > \mathrm{L}_{n-1}$ since $\mathrm{e}_{n-1} > 0$, and $\mathrm{L}_{n-1} \geq \mathrm{L}'_{n-1}$ as $\mathrm{L}$ is non-decreasing. Hence $\mathrm{e}_{n-1} + \mathrm{L}_{n-1} > \mathrm{L}'_{n-1}$ and $\deg(\mu^{(n)}) = \max\{\mathrm{e}_{n-1}, 0\} + \mathrm{L}_{n-1}$.

To complete (i), we have to show that $\deg(\mu^{(n)}) = n' + 1 - \mathrm{L}'_n$ if $\Delta_n \neq 0$. But if $\mathrm{e}_{n-1} \leq 0$, then $n' = (n-1)'$ by definition and we have seen that $\deg(\mu^{(n)}) = \mathrm{L}_{n-1}$, so the result is trivially true in this case. If $\mathrm{e}_{n-1} > 0$, then $\deg(\mu^{(n)}) = n - \mathrm{L}_{n-1}$ and $n' = n-1$ by definition. Hence $\deg(\mu^{(n)}) = n' + 1 - \mathrm{L}'_n$ and the induction is complete.

(ii) We first show inductively that $\mu^{(n)} \in \mathrm{Ann}(s)$. If $n = 1$ and $\Delta_1 \neq 0$, then $\mu^{(1)} = x - \Delta_1 \cdot \varepsilon \in \mathrm{MP}(s^{(1)})$. Suppose inductively that $n \geq 2$, (ii) is true for $1 \leq j \leq n-1$ and $\Delta_n \neq 0$. From Part (i), $d = \deg(\mu^{(n)}) = \max\{\mathrm{e}_{n-1}, 0\} + \mathrm{L}_{n-1} \geq 0$. In particular, $\mu^{(n)} \neq 0$. We omit the proof that $\mu^{(n)} \in \mathrm{Ann}(s^{(n-1)})$, showing only that $(\mu^{(n)} \cdot \bar{s})_{d-n} = 0$. Put $\mu = \mu^{(n-1)}$, $\mu' = \mu'^{(n-1)}$, $\mathrm{e} = \mathrm{e}_{n-1}$, $\mathrm{L} = \mathrm{L}_{n-1}$ and $\mathrm{L}' = \mathrm{L}'_{n-1}$. If $\mathrm{e} \leq 0$, then $d = \mathrm{L}$ and

$$(\mu^{(n)} \cdot \bar{s})_{\mathrm{L}-n} = \Delta'_n \cdot (\mu \cdot \bar{s})_{\mathrm{L}-n} - \Delta_n \cdot (x^{-\mathrm{e}}\mu' \cdot \bar{s})_{\mathrm{L}-n} = \Delta'_n \cdot \Delta_n - \Delta_n \cdot \Delta'_n = 0$$

since $\mathrm{L} - n + \mathrm{e} = -\mathrm{L} = \mathrm{L}' - (n-1)' - 1$ and so $(\mu' \cdot \bar{s})_{\mathrm{L}-n+\mathrm{e}} = \Delta'_n$. If $\mathrm{e} > 0$, $d = n - \mathrm{L}$ and

$$(\mu^{(n)} \cdot \bar{s})_{d-n} = (\mu^{(n)} \cdot \bar{s})_{-\mathrm{L}} = \Delta'_n \cdot (x^{\mathrm{e}}\mu \cdot \bar{s})_{-\mathrm{L}} - \Delta_n \cdot (\mu' \cdot \bar{s})_{-\mathrm{L}} = \Delta'_n \cdot \Delta_n - \Delta_n \cdot \Delta'_n = 0$$

since $-\mathrm{L} - \mathrm{e} = \mathrm{L} - n$ and $-\mathrm{L} = \mathrm{L}' - (n-1)' - 1$. Thus $\mu^{(n)} \in \mathrm{Ann}(s)$. We complete the proof of (ii) by showing that $\mu^{(n)} \in \mathrm{MP}(s)$. We know that $\mu^{(1)} \in \mathrm{MP}(s^{(1)})$.
For $n \geq 2$, we know from (i) that $\deg(\mu^{(n)}) = \max\{\mathrm{e}_{n-1}, 0\} + \mathrm{L}_{n-1}$, which is $\max\{\mathrm{e}_{n-1}(\mu^{(n-1)}), 0\} + \deg(\mu^{(n-1)})$, and therefore $\mu^{(n)} \in \mathrm{MP}(s)$ by Lemma 3.4.

(iii) We also prove this inductively. Suppose first that $n = 1$ and $\Delta_1 \neq 0$. Then $\mathrm{e}_1(\mu^{(1)}) = 2 - 2 \cdot 1 = 0$ and since $\mathrm{e}_0 > 0$, $\mathrm{e}_1 = -\mathrm{e}_0 + 1 = 0$. Let $n \geq 2$ and $\Delta_n \neq 0$. If $\mathrm{e}_{n-1} \leq 0$, then $\mathrm{e}_n(\mu^{(n)}) = n + 1 - 2\mathrm{L}_n = n + 1 - 2\mathrm{L}_{n-1} = \mathrm{e}_{n-1} + 1 = -|\mathrm{e}_{n-1}| + 1 = \mathrm{e}_n$, and if $\mathrm{e}_{n-1} > 0$ then $\mathrm{e}_n(\mu^{(n)}) = n + 1 - 2(n - \mathrm{L}_{n-1}) = 1 - n + 2\mathrm{L}_{n-1} = 1 - \mathrm{e}_{n-1} = -|\mathrm{e}_{n-1}| + 1 = \mathrm{e}_n$. □

Remarks 4.6 1. For $\Delta_n \neq 0$ and $\mathrm{e} = \mathrm{e}_{n-1}$

$$\mu^{(n)} = \begin{cases} \Delta'_n \cdot \mu^{(n-1)} - \Delta_n \cdot x^{-\mathrm{e}}\mu'^{(n-1)} & \text{if } \mathrm{e} \leq 0 \\ \Delta'_n \cdot x^{\mathrm{e}}\mu^{(n-1)} - \Delta_n \cdot \mu'^{(n-1)} & \text{if } \mathrm{e} \geq 0. \end{cases}$$

2. If $s$ has precisely $n-1 \geq 0$ leading zeroes, Theorem 4.5 yields $\mu^{(n)} = x^n - s_n\varepsilon$.

3. We note that $\deg(\mu^{(n)}) = n' + 1 - \mathrm{L}'_n$ is trivially true if $\Delta_n = 0$ or if $n = 0$ (if we set $\mathrm{L}'_0 = 0$). We can also prove that $\deg(\mu^{(n)}) = n' + 1 - \mathrm{L}'_n$ using Lemma 3.4 (as in [3], [13] and [18]) but prefer the simpler, direct argument used in Theorem 4.5.

4. As noted in [18], we can use any $\mu^{(k)}$ instead of $\mu^{((n-1)')}$ (with appropriate powers of $x$) as long as $\Delta_{k+1} \neq 0$ and $k < n-1$, but minimality is not guaranteed.

4.3 Some Corollaries

Let $n \geq 2$, $s \in D^n$ and $2 \leq j \leq n$. Then $j$ is a jump point of $s$ if $\mathrm{L}_j > \mathrm{L}_{j-1}$. We write $\mathrm{J}(s)$ for the set of jump points of $s$. We do not assume that $\mathrm{J}(s) \neq \emptyset$. Evidently, the following are equivalent: (i) $j \in \mathrm{J}(s)$ (ii) $\Delta_j \neq 0$ and $\mathrm{e}_{j-1} > 0$ (iii) $j' = j-1$ (iv) $\mathrm{L}_j = j - \mathrm{L}_{j-1} > \mathrm{L}_{j-1}$. The following is clear.

Proposition 4.7 For all $s \in D^n$, $\mathrm{L}_n = \mathrm{L}_1 + \sum_{j \in \mathrm{J}(s)} \mathrm{e}_{j-1}$.

Proof. A simple inductive consequence of Theorem 4.5(i). □

An important consequence of Theorem 4.5(i) is the following well-known result.

Corollary 4.8 For any $s \in D^n$, $\mathrm{L}_n = n' + 1 - \mathrm{L}'_n$.
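The recursion of Theorem 4.5 and the identity of Corollary 4.8 can be exercised numerically. The following Python sketch (ours, not part of the paper; all helper names are our own) computes every $\mu^{(j)}$ over $\mathbb{Z}$ with $\varepsilon = 0$, tracking the index function as it goes:

```python
# Our sketch of the recursion of Theorem 4.5 over Z with eps = 0, keeping every
# mu^{(j)} and the index function j' so that Corollary 4.8 (L_n = n' + 1 - L'_n)
# can be checked numerically. Polynomials are coefficient lists, low degree first.

def xshift(f, k):                 # multiply f by x^k
    return [0] * k + f

def combine(c1, f, c2, g):        # c1*f - c2*g, padding with zeros
    m = max(len(f), len(g))
    f = f + [0] * (m - len(f))
    g = g + [0] * (m - len(g))
    return [c1 * a - c2 * b for a, b in zip(f, g)]

def minimal_polys(s, eps=0):
    mus = {-1: [eps], 0: [1]}     # mu^{(-1)} = eps, mu^{(0)} = 1
    idx = {0: -1}                 # 0' = -1
    dprime, e = 1, 1              # Delta'_1 = Delta_0 = 1, e_0 = 1
    for j in range(1, len(s) + 1):
        mu = mus[j - 1]
        L = len(mu) - 1           # L_{j-1} = deg mu^{(j-1)}
        delta = sum(mu[k] * s[k + j - L - 1] for k in range(L + 1))
        if delta == 0:
            mus[j], idx[j] = mu, idx[j - 1]
        elif e <= 0:
            mus[j] = combine(dprime, mu, delta, xshift(mus[idx[j - 1]], -e))
            idx[j] = idx[j - 1]
        else:
            mus[j] = combine(dprime, xshift(mu, e), delta, mus[idx[j - 1]])
            idx[j], dprime, e = j - 1, delta, -e
        e += 1
    return mus, idx

mus, idx = minimal_polys([0, 1, 1, 2])
print(mus[4])                                  # [-1, -1, 1], i.e. x^2 - x - 1
L4, Lp4 = len(mus[4]) - 1, len(mus[idx[4]]) - 1
print(L4 == idx[4] + 1 - Lp4)                  # Corollary 4.8 holds: True
```

Since $D$ has no zero-divisors, leading coefficients never cancel in the two cases of the recursion, so the list length always equals $\deg(\mu^{(j)}) + 1$.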
Next we use the index function to simplify the proof of [18, Proposition 4.13].

Proposition 4.9 (Cf. [13]) Let $s \in D^n$. If $f' \in D[x]$ and $\deg(f') \leq -\mathrm{e}_n$, then $\mu^{(n)} + f'\mu'^{(n)} \in \mathrm{MP}(s)$. In particular, if $\mathrm{e}_n \leq 0$ then $|\mathrm{MP}(s)| > 1$.

Proof. We will omit scripts. By Corollary 4.8, $\mathrm{L} = n' + 1 - \mathrm{L}'$, so that

$$\deg(f'\mu') \leq -\mathrm{e} + \mathrm{L}' = 2\mathrm{L} - n - 1 + (n' + 1 - \mathrm{L}) = \mathrm{L} + n' - n \leq \mathrm{L} - 1 \quad (4)$$

since $n' \leq n-1$, so $\deg(\mu + f'\mu') = \mathrm{L}$. Let $\mathrm{L} - n \leq j \leq -1$. Now $((\mu + f'\mu') \cdot \bar{s})_j = (\mu \cdot \bar{s})_j + (f'\mu' \cdot \bar{s})_j = (f'\mu' \cdot \bar{s})_j$. Inequality (4) gives $\deg(f'\mu') - n' \leq \mathrm{L} - n$, so we are done. □

It is convenient to introduce $\mathrm{p} : \{0, \ldots, n\} \to \mathbb{N}$ given by

$$\mathrm{p}(j) = \begin{cases} 1 & \text{if } j = 0 \\ j - j' & \text{otherwise.} \end{cases}$$

It is clear that if $\Delta_n = 0$, then $\mathrm{p}(n) = \mathrm{p}(n-1) + 1$. We set $\mu^{(n)*} = * \circ \mu(s, n)$ where $*$ denotes the reciprocal function and similarly, for fixed $s$, $\mu'^{(n)*} = * \circ \mu(s, n')$.

Corollary 4.10 If $\Delta_n \neq 0$ then (i) $\mu^{(n)*} = \Delta'_n \cdot \mu^{(n-1)*} - \Delta_n \cdot x^{\mathrm{p}(n-1)}\mu'^{(n-1)*}$ (ii) $\mathrm{p}(n) = \mathrm{p}(n-1) + 1$ if $\mathrm{e}_{n-1} \leq 0$ and $\mathrm{p}(n) = 1$ otherwise.

Proof. Put $\mu = \mu^{(n-1)}$, $\mathrm{e} = \mathrm{e}_{n-1}$, $\mathrm{L} = \mathrm{L}_{n-1}$, $\mathrm{p} = \mathrm{p}(n-1)$, $\mu' = \mu'^{(n-1)}$ and $\mathrm{L}' = \mathrm{L}'_{n-1}$. If $\mathrm{e} \leq 0$, then $\mu^{(n)} = \Delta'_n \cdot \mu - \Delta_n \cdot x^{-\mathrm{e}}\mu'$ and $\mathrm{L}_n = \mathrm{L}$. Then

$$\mu^{(n)*} = x^{\mathrm{L}_n}\mu^{(n)}(x^{-1}) = \Delta'_n \cdot x^{\mathrm{L}}\mu(x^{-1}) - \Delta_n \cdot x^{\mathrm{L}+\mathrm{e}}\mu'(x^{-1}) = \Delta'_n \cdot \mu^* - \Delta_n \cdot x^{\mathrm{L}+\mathrm{e}-\mathrm{L}'}\mu'^* = \Delta'_n \cdot \mu^* - \Delta_n \cdot x^{\mathrm{p}}\mu'^*$$

since $\mathrm{L} + \mathrm{e} - \mathrm{L}' = n - \mathrm{L} - \mathrm{L}' = n - 1 - (n-1)' = \mathrm{p}$ by Corollary 4.8. If $\mathrm{e} > 0$, $\mu^{(n)} = \Delta'_n \cdot x^{\mathrm{e}}\mu - \Delta_n \cdot \mu'$ and $\mathrm{L}_n = n - \mathrm{L}$, so

$$\mu^{(n)*} = x^{\mathrm{L}_n}\mu^{(n)}(x^{-1}) = \Delta'_n \cdot x^{n-\mathrm{L}-\mathrm{e}}\mu(x^{-1}) - \Delta_n \cdot x^{n-\mathrm{L}}\mu'(x^{-1}) = \Delta'_n \cdot \mu^* - \Delta_n \cdot x^{n-\mathrm{L}-\mathrm{L}'}\mu'^* = \Delta'_n \cdot \mu^* - \Delta_n \cdot x^{\mathrm{p}}\mu'^*$$

since $n - \mathrm{L} - \mathrm{e} = \mathrm{L}$ and $n - \mathrm{L} - \mathrm{L}' = n - (n-1)' - 1 = \mathrm{p}$. The value of $\mathrm{p}(n)$ is immediate from Corollary 4.8.
□

4.4 The Iterative Version

We could obtain $\mu^{(n)}$ recursively using Theorem 4.5, but it is more efficient to obtain it iteratively.

Corollary 4.11 (Iterative Form of $\mu$) Let $n \geq 1$, $s \in D^n$ and $\varepsilon \in D$. Assume the initial values of Definition 4.4. For $1 \leq j \leq n$, let

$$\mu^{(j)} = \begin{cases} \mu^{(j-1)} & \text{if } \Delta_j = 0 \\ \Delta'_j \cdot x^{\max\{\mathrm{e}_{j-1},0\}}\mu^{(j-1)} - \Delta_j \cdot x^{\max\{-\mathrm{e}_{j-1},0\}}\mu'^{(j-1)} & \text{otherwise.} \end{cases}$$

Then $\mu^{(j)} \in \mathrm{MP}(s^{(j)})$. Further, if $\Delta_j = 0$, then $\mu'^{(j)} = \mu'^{(j-1)}$, $\Delta'_{j+1} = \Delta'_j$ and $\mathrm{e}_j = \mathrm{e}_{j-1} + 1$. If $\Delta_j \neq 0$ then (a) if $\mathrm{e}_{j-1} \leq 0$ then $\mu'^{(j)} = \mu'^{(j-1)}$ and $\Delta'_{j+1} = \Delta'_j$ (b) but if $\mathrm{e}_{j-1} > 0$ then $\mu'^{(j)} = \mu^{(j-1)}$ and $\Delta'_{j+1} = \Delta_j$ (c) $\mathrm{e}_j = -|\mathrm{e}_{j-1}| + 1$.

In other words, when $\Delta_j \neq 0$,

$$\mu^{(j)} = \begin{cases} \Delta'_j \cdot \mu^{(j-1)} - \Delta_j \cdot x^{-\mathrm{e}}\mu'^{(j-1)} & \text{if } \mathrm{e} = \mathrm{e}_{j-1} \leq 0 \\ \Delta'_j \cdot x^{\mathrm{e}}\mu^{(j-1)} - \Delta_j \cdot \mu'^{(j-1)} & \text{otherwise.} \end{cases}$$

We are now ready to derive an algorithm to compute a minimal polynomial for $s \in D^n$ from Corollary 4.11. The initialisation is clear. Let $1 \leq j \leq n$. From the definition of $\mathrm{e}_{j-1} = \mathrm{e}_{j-1}(\mu^{(j-1)})$, we have $\mathrm{L}_{j-1} = \frac{j - \mathrm{e}_{j-1}}{2}$ and so

$$\Delta_j = (\mu^{(j-1)} \cdot \bar{s}^{(j)})_{\mathrm{L}_{j-1} - j} = \sum_{k=0}^{(j-\mathrm{e}_{j-1})/2} \mu^{(j-1)}_k\, s_{k + (j+\mathrm{e}_{j-1})/2}.$$

For the body of the loop, we next show how to suppress $j-1$ and $j$. When $\Delta_j = 0$, we ignore the updating of $\mu^{(j-1)}$, $\mu'^{(j-1)}$ and $\Delta'_j$, but $\mathrm{e}_j = \mathrm{e}_{j-1} + 1$. But when $\Delta_j \neq 0$:

(a)
$$\mu^{(j)} = \begin{cases} \Delta'_j \cdot \mu^{(j-1)} - \Delta_j \cdot x^{-\mathrm{e}}\mu'^{(j-1)} & \text{if } \mathrm{e} \leq 0 \\ \Delta'_j \cdot x^{\mathrm{e}}\mu^{(j-1)} - \Delta_j \cdot \mu'^{(j-1)} & \text{otherwise,} \end{cases}$$

and (b) we need to update $\mu'^{(j-1)}$ and $\Delta'_j$ when $\mathrm{e}_{j-1} > 0$; since (a) will overwrite $\mu^{(j-1)}$ once we have suppressed $j$, we keep a copy $t$ of $\mu^{(j-1)}$ when $\mathrm{e}_{j-1} > 0$, so that the updating is $\mu'^{(j)} = t$ and $\Delta'_{j+1} = \Delta_j$. For (c), we have $\mathrm{e}_j = \mathrm{e}_{j-1} + 1$ if $\mathrm{e}_{j-1} \leq 0$ and $\mathrm{e}_j = -\mathrm{e}_{j-1} + 1$ otherwise.
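The updates (a)-(c) just derived can be transcribed directly into code. The following Python sketch is our illustration (not the paper's program): polynomials are coefficient lists stored low degree first, and the optional modulus `p` (assumed prime, so that $\mathbb{Z}/p$ is a domain) lets the same division-free loop run over $\mathrm{GF}(p)$:

```python
# A direct transcription of updates (a)-(c) above; our illustrative sketch,
# not the paper's code. Coefficient lists are low degree first.

def zip_pad(f, g):
    """Pair up coefficients of f and g, padding the shorter list with zeros."""
    m = max(len(f), len(g))
    return zip(f + [0] * (m - len(f)), g + [0] * (m - len(g)))

def algorithm_mp(s, eps=0, p=None):
    """Iterative minimal polynomial: returns mu in MP(s), relative to eps."""
    red = (lambda c: c % p) if p is not None else (lambda c: c)
    e, mu_prime, delta_prime, mu = 1, [eps], 1, [1]
    for j in range(1, len(s) + 1):
        d = len(mu) - 1                      # d = L_{j-1} = (j - e)/2
        delta = red(sum(mu[k] * s[k + j - d - 1] for k in range(d + 1)))
        if delta != 0:
            if e <= 0:
                a, b = mu, [0] * (-e) + mu_prime           # mu and x^{-e} mu'
                mu = [red(delta_prime * u - delta * v) for u, v in zip_pad(a, b)]
            else:
                t = mu                                     # keep a copy of mu
                a, b = [0] * e + mu, mu_prime              # x^{e} mu and mu'
                mu = [red(delta_prime * u - delta * v) for u, v in zip_pad(a, b)]
                mu_prime, delta_prime, e = t, delta, -e
        e += 1
    return mu

print(algorithm_mp([1, 0, 1, 0, 0], eps=0, p=2))   # [0, 0, 0, 1], i.e. x^3 (Table 1)
print(algorithm_mp([0, 1, 1, 2]))                  # [-1, -1, 1], i.e. x^2 - x - 1 (Table 2)
```

The two printed runs reproduce the final rows of Tables 1 and 2 below.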
Now only the current values of the variables appear and so we can suppress scripts. The following algorithm (written in the style of [10]) is now immediate.

Algorithm 4.12 (Iterative minimal polynomial)

Input: $n \geq 1$, $\varepsilon \in D$ and $s = (s_1, \ldots, s_n) \in D^n$.
Output: $\mu \in \mathrm{MP}(s)$.

{ e := 1; µ′ := ε; ∆′ := 1; µ := 1;
  FOR j = 1 TO n
  { ∆ := Σ_{k=0}^{(j−e)/2} µ_k s_{k+(j+e)/2};
    IF ∆ ≠ 0 THEN
    { IF e ≤ 0 THEN µ := ∆′·µ − ∆·x^{−e}·µ′;
      ELSE { t := µ; µ := ∆′·x^{e}·µ − ∆·µ′; µ′ := t; ∆′ := ∆; e := −e } }
    e := e + 1 }
  RETURN(µ) }

Example 4.13 Tables 1 and 2 give the values of e and ∆, and outputs µ, µ′ for the binary sequence (1,0,1,0,0) of [13] and for the integer sequence (0,1,1,2), with ε = 0 in both cases.

Table 1: Algorithm MP with ε = 0, input (1,0,1,0,0) ∈ GF(2)^5

j  e_{j−1}  ∆_j  µ^{(j)}      µ′^{(j)}
1  1        1    x            1
2  0        0    x            1
3  1        1    x² + 1       x
4  0        0    x² + 1       x
5  1        1    x³           x² + 1

Table 2: Algorithm MP with ε = 0, input (0,1,1,2) ∈ Z^4

j  e_{j−1}  ∆_j  µ^{(j)}      µ′^{(j)}
1  1        0    1            0
2  2        1    x²           1
3  −1       1    x² − x       1
4  0        1    x² − x − 1   1

5 A Recursive BM Theorem

5.1 Reciprocal Pairs

Definition 5.1 (Reciprocal Pair) Let $n \geq 1$ and $s \in D^n$. We say that $(g, \ell) \in D[x] \times [0, n]$ is a reciprocal pair for $s$, written $(g, \ell) \in \mathrm{RP}(s)$, if $g_0 \neq 0$, $d = \deg(g) \leq \ell$ and $\ell + 1 \leq j \leq n$ implies that

$$(g \cdot s)_j = g_0s_j + g_1s_{j-1} + \cdots + g_ds_{j-d} = 0. \quad (5)$$

For $n \geq 2$, the $n$th discrepancy of $(g, \ell) \in \mathrm{RP}(s^{(n-1)})$ is $\Delta_n(g, \ell) = (g \cdot s)_n = \sum_{k=0}^{d} g_ks_{n-k}$, and $(g, \ell) \in \mathrm{RP}(s)$ if and only if $\Delta_n(g, \ell) = 0$. Note that $\ell$ is often used instead of $\deg(g)$ in the sum of Equation (5) and in the discrepancy [13]; we prefer to use $\deg(g)$ since $g$ is then a genuine polynomial.

Proposition 5.2 Let $s \in D^n$, $f \in D[x]$ and $d = \deg(f) \geq 0$. Then for $d+1 \leq j \leq n$, $(f \cdot \bar{s})_{d-j} = (f^* \cdot s)_j$.
Thus if $f \in \mathrm{Ann}(s)^\times$, then $(f^\ast, d) \in \mathrm{RP}(s)$, and if $(g, \ell) \in \mathrm{RP}(s)$, then $x^{\ell - \deg(g)} g^\ast \in \mathrm{Ann}(s)^\times$.

Proof. We have $f^\ast(x) = x^d f(x^{-1})$, so $f(x^{-1}) = x^{-d} f^\ast(x)$ and $f(x) = x^d f^\ast(x^{-1})$. Hence $(f(x) \cdot s)_{d-j} = (f^\ast(x^{-1}) \cdot s)_{-j} = (f^\ast \cdot s)_j$. □

In particular, $\Delta_n(\mu^{(n-1)}) = \Delta_n(\mu^{(n-1)\ast}, \deg(\mu^{(n-1)}))$, so using $\Delta_n$ in two ways causes no confusion.

5.2 Shortest Reciprocal Pairs

Definition 5.3 (Shortest Reciprocal Pair) Let $s \in D^n$. We say that a reciprocal pair $(g, \ell)$ for $s$ is shortest, written $(g, \ell) \in \mathrm{SRP}(s)$, if $x^{\ell - \deg(g)} g^\ast \in \mathrm{MP}(s)$.

Note that when $x^{\ell - \deg(g)} g^\ast \in \mathrm{MP}(s)$, $\ell = \deg(x^{\ell - \deg(g)} g^\ast) = \mathrm{L}(s)$ since $g_0 \neq 0$. We define the index function exactly as in the minimal polynomial case and set $e_n(\beta^{(n)}, L_n) = n + 1 - 2L_n$.

Theorem 5.4 (Recursive BM) Let $n \geq 1$ and $s \in D^n$. Put $\beta^{(-1)} = \varepsilon$ and $\Delta_0 = 1$. Define $\beta^{(n)}$ recursively by
$$\beta^{(n)} = \begin{cases} \beta^{(n-1)} & \text{if } \Delta_n = 0\\ \Delta'_n \cdot x^{\max\{e_{n-1},0\}}\,\beta^{(n-1)} - \Delta_n \cdot x^{\max\{-e_{n-1},0\}}\,\beta'^{(n-1)} & \text{otherwise.}\end{cases}$$
If $\Delta_n = 0$, clearly $\beta^{(n)} \in \mathrm{SRP}(s)$, $L_n = L_{n-1}$ and $e_n = e_{n-1} + 1$. If $\Delta_n \neq 0$ then
(i) $L_n = \max\{e_{n-1}, 0\} + L_{n-1} = n' + 1 - L'_n$
(ii) $\beta^{(n)} \in \mathrm{SRP}(s)$
(iii) $e_n = -|e_{n-1}| + 1$.

Proof. We suppose that $\Delta_n \neq 0$. Let $\mu^{(n-1)} = x^{L_{n-1} - \deg(\beta^{(n-1)})}\,\beta^{(n-1)\ast} \in \mathrm{MP}(s^{(n-1)})$ by Proposition 5.2. Now let $\mu^{(n)} \in \mathrm{MP}(s)$ be as in Theorem 4.5. Further, $L_n = \max\{e_{n-1}, 0\} + L_{n-1}$ and $e_n = n + 1 - 2L_n = -|e_{n-1}| + 1$. By Corollary 4.10, $\mu^{(n)\ast} = \beta^{(n)}$ and $(\beta^{(n)}, L_n) \in \mathrm{SRP}(s)$, which completes the proof. □

5.3 Iterative BM

As before, it is convenient to write $\beta'^{(n)} = \beta^{(n')}$.

Corollary 5.5 (Iterative BM) Let $n \geq 1$, $s \in D^n$ and $\varepsilon \in D$. Put $\beta^{(0)} = 1$, $e_0 = 1$, $\beta'^{(0)} = \varepsilon$ and $\Delta'_0 = 1$.
For $1 \leq j \leq n$, let
$$\beta^{(j)} = \begin{cases} \beta^{(j-1)} & \text{if } \Delta_j = 0\\ \Delta'_j \cdot \beta^{(j-1)} - \Delta_j \cdot x^{p(j-1)}\,\beta'^{(j-1)} & \text{otherwise.}\end{cases}$$
Then $\beta^{(j)} \in \mathrm{SRP}(s^{(j)})$. Further, if $\Delta_j = 0$ then $\beta'^{(j)} = \beta'^{(j-1)}$, $\Delta'_{j+1} = \Delta'_j$, $e_j = e_{j-1} + 1$ and $p(j) = p(j-1) + 1$. If $\Delta_j \neq 0$ then (a) if $e_{j-1} \leq 0$ then $\beta'^{(j)} = \beta'^{(j-1)}$, $\Delta'_{j+1} = \Delta'_j$ and $p(j) = p(j-1) + 1$ (b) but if $e_{j-1} > 0$ then $\beta'^{(j)} = \beta^{(j-1)}$, $\Delta'_{j+1} = \Delta_j$ and $p(j) = 1$ (c) $e_j = -|e_{j-1}| + 1$.

Table 3: Algorithm BM with ε = 0, input (1,0,1,0,0) ∈ GF(2)^5

  j   e_{j-1}   Δ_j   p_{j-1}   β^(j)       β'^(j)
  1      1       1       1      1           1
  2      0       0       1      1           1
  3      1       1       2      1 + x^2     1
  4      0       0       1      1 + x^2     1
  5      1       1       2      1           1 + x^2

As for the minimal polynomial case, Corollary 5.5 immediately yields an algorithm. The only difference is that we now have a single expression for $\beta^{(n)}$ (which we can factor out), we begin with $p = 1$ and we set $p = 0$ if ($\Delta_n \neq 0$ and $e > 0$), so that we always increment $p$ by 1.

Algorithm 5.6 (Iterative BM) (Cf. [13, Algorithm 2.2])
Input: $n \geq 1$, $\varepsilon \in D$, and $s = (s_1, \ldots, s_n) \in D^n$.
Output: $(\beta, \mathrm{L}) \in \mathrm{SRP}(s)$, i.e. $\beta_0 \neq 0$ and $x^{\mathrm{L} - \deg(\beta)}\,\beta^\ast \in \mathrm{MP}(s)$.
{ e := 1; β′ := ε; Δ′ := 1; p := 1; β := 1;
  FOR j = 1 TO n
  { Δ := Σ_{k=0}^{deg(β)} β_k · s_{j−k};
    IF Δ ≠ 0 THEN
    { t := β; β := Δ′·β − Δ·x^{p}·β′;
      IF e > 0 THEN { β′ := t; Δ′ := Δ; p := 0; e := −e } }
    p := p + 1; e := e + 1 }
  RETURN(β, (n+1−e)/2) }

If $D$ is a field, $\varepsilon = 1$ and we make each $\beta$ monic, then Algorithm 5.6 is equivalent to the LFSR synthesis algorithm of [13, p. 124] (replace $e$ by $j - 2\mathrm{L}$, $\deg(\beta)$ by $\mathrm{L}$ and relabel the variables).

Example 5.7 Table 3 gives the values of $e$, $\Delta$, $p$ and the outputs $\beta$, $\beta'$ for the binary sequence $(1,0,1,0,0)$ of [13]. Table 4 gives similar information for the integer sequence $(0,1,1,2)$. In both cases $\varepsilon = 0$.
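Algorithm 5.6 also translates directly into code. The following Python sketch is an illustrative transcription, not part of the paper: $\beta$ is a coefficient list (lowest degree first), the sequence is a 0-indexed list, the paper's variable $p$ is renamed `pw` to avoid clashing with the added optional prime `p` selecting $D = \mathrm{GF}(p)$ (with `p=None`, $D = \mathbb{Z}$), and trailing zero coefficients are trimmed so that `len(b) - 1` is the exact degree used in the discrepancy sum.

```python
def iterative_bm(s, eps=0, p=None):
    """Sketch of Algorithm 5.6 (iterative BM).  s holds (s_1, ..., s_n)
    as a 0-indexed list, eps is epsilon in D, and p is an optional
    prime so that D = GF(p); with p = None, D = Z.  Returns (b, L)
    where b is the coefficient list of beta, lowest degree first."""
    red = (lambda c: c % p) if p else (lambda c: c)

    def sub(f, g):
        # f - g coefficientwise, reduced into D
        m = max(len(f), len(g))
        f, g = f + [0] * (m - len(f)), g + [0] * (m - len(g))
        return [red(a - b) for a, b in zip(f, g)]

    n = len(s)
    e, b1, d1, pw, b = 1, [eps], 1, 1, [1]   # e, beta', Delta', p, beta
    for j in range(1, n + 1):
        # Delta = sum_{k=0}^{deg(beta)} beta_k s_{j-k}  (s 1-indexed)
        delta = red(sum(b[k] * s[j - k - 1] for k in range(len(b))
                        if 1 <= j - k <= n))
        if delta != 0:
            t = b
            b = sub([d1 * c for c in b],
                    [0] * pw + [delta * c for c in b1])
            if e > 0:
                b1, d1, pw, e = t, delta, 0, -e
        while len(b) > 1 and b[-1] == 0:     # keep deg(beta) exact
            b.pop()
        pw += 1
        e += 1
    return b, (n + 1 - e) // 2
```

This reproduces the examples of Tables 3 and 4: `iterative_bm([1, 0, 1, 0, 0], p=2)` returns `([1], 3)`, i.e. $\beta = 1$ with $\mathrm{L} = 3$, and `iterative_bm([0, 1, 1, 2])` returns `([1, -1, -1], 2)`, i.e. $\beta = 1 - x - x^2$ with $\mathrm{L} = 2$.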
Remark 5.8 A more complicated BM algorithm over a field (derived from properties of Hankel matrices) appears in [9, p. 148]. Indeed, the algorithm of [9] (i) does not use the initial values of Corollary 5.5, but has several initialization steps (ii) uses a variable called ∆L which equals $e$, but ∆L is not updated incrementally (iii) uses a variable $k(j)$ defined as in [13] rather than using $j'$ and $p_j = j - j'$ (iv) does not maintain the variables $\beta'$ and $\Delta'$.

Table 4: Algorithm BM with ε = 0, input (0,1,1,2) ∈ Z^4

  j   e_{j-1}   Δ_j   p_{j-1}   β^(j)           β'^(j)
  1      1       0       1      1               0
  2      2       1       2      1               1
  3     -1       1       1      1 - x           1
  4      0       1       2      1 - x - x^2     1 - x

6 Complexity of the Iterative Algorithms

It is straightforward to show that at most $\frac{n(3n+1)}{2}$ multiplications in $D$ are required for Algorithm 4.12, [18, Proposition 3.23]. In this section we show that this can be replaced by $3\lfloor \frac{n^2}{4}\rfloor$.

6.1 The Linear Complexity Sum

We continue the previous notation: $\mu^{(0)} = 1$, $L_0 = 0$, $e_0 = 1$ and for $0 \leq j \leq n$, $\Delta_{j+1} = \Delta(\mu^{(j)})$ and $e_j = j + 1 - 2L_j$. The main result of this subsection uses the following lemma.

Lemma 6.1 For integers $u \geq 0$ and $t \geq 1$, $\sum_{j=2u+1}^{2u+2t} \lfloor \frac{j+1}{2}\rfloor = 2tu + t^2$.

Proof. Put $w = 2u + t + 1$. The sum is
$$\sum_{k=0}^{t-1} \left( \left\lfloor \frac{w-k}{2}\right\rfloor + \left\lfloor \frac{w+k+1}{2}\right\rfloor \right) = \sum_{k=0}^{t-1} \left( \frac{w-k}{2} + \frac{w+k+1}{2} - \frac{1}{2} \right) = tw$$
since $w - k$ and $w + k + 1$ have opposite parity. □

Lemma 6.2 $\sum_{i=1}^n L_i \leq \sum_{i=1}^n \lfloor \frac{i+1}{2}\rfloor$.

Proof. Let us call $j \geq 0$ stable if it is even, $L_j = \frac{j}{2}$ and $\sum_{i=1}^j L_i \leq \sum_{i=1}^j \lfloor \frac{i+1}{2}\rfloor$. Clearly 0 is stable, so suppose inductively that $2u \geq 0$ is stable. In particular, $L_{2u} = u$ and $L_{2u+1} = u$ independently of $\Delta_{2u+1}$. If $\Delta_{2u+2} \neq 0$ then $L_{2u+2} = u + 1 = \lfloor \frac{2u+3}{2}\rfloor$ and we can replace $u$ by $u + 1$. Hence we can assume that $\Delta_{2u+2} = 0$, and that $L_{2u+1} = \cdots = L_{2u+t} = u$ for some maximal $t$ such that $2u + 2 \leq 2u + t \leq n$.
If $2u + t = n$, we are done since the result holds by the inductive hypothesis. If $2u + t < n$, we show that there is a maximal stable $j_M \leq n$. First we show that if $v = 2u + 2t \leq n$, then $v$ is stable. We have $L_{2u+t+1} \neq u$ and so $\Delta_{2u+t+1} \neq 0$ since $t$ is maximal. Hence $L_{2u+t+1} = u + t$. An easy induction shows that $L_{2u+t+j} = L_{2u+t+j+1}$ for $1 \leq j \leq t$, i.e. that $L_v = L_{2u+2t} = L_{2u+t+1} = u + t = \lfloor \frac{v+1}{2}\rfloor$. Since $2u$ is stable, it is enough to show that $\sum_{j=2u+1}^{v} L_j = \sum_{j=2u+1}^{v} \lfloor \frac{j+1}{2}\rfloor$. The left-hand side is $tu + t(u+t)$, which equals the right-hand side by Lemma 6.1. So $v$ is stable.

By induction there is a maximal stable $j_M \leq n$. If $j_M = n$, we are done. If $j_M < n$, write $n = 2u + t + 1 + m$ for $0 \leq m < t - 1$. It is enough to show that $\sum_{i=2u+1}^{n} L_i \leq \sum_{i=2u+1}^{n} \lfloor \frac{i+1}{2}\rfloor$ since $2u$ is stable. Write the left-hand side as
$$\sum_{k=0}^{m} L_{2u+t-k} + \sum_{k=0}^{m} L_{2u+t+k+1} + \sum_{i=2u+1}^{2u+t-m-1} L_i.$$
The first summand is $(m+1)u$ and the second is $(m+1)(u+t)$. For $\sum_{i=2u+1}^{n} \lfloor \frac{i+1}{2}\rfloor$, we proceed as in Lemma 6.1 using the pairs with indices $2u+t-k$, $2u+t+k+1$ for $k = 0, \ldots, m$, while each of the terms in the third summand has $L_i = u$, which is less than or equal to the corresponding $\lfloor \frac{i+1}{2}\rfloor$. □

The following corollary appeared in [5] for $n$ even.

Corollary 6.3 $\sum_{i=1}^n L_i \leq \lfloor (n+1)^2/4 \rfloor$.

Proof. We have $\sum_{j=1}^n \lfloor \frac{j+1}{2}\rfloor = \lfloor (n+1)^2/4 \rfloor$. □

It turns out that sequences with a perfect linear complexity profile show that the upper bound of Corollary 6.3 is tight. Recall that $s$ has a perfect linear complexity profile (PLCP) if $L_j = \lfloor \frac{j+1}{2}\rfloor$ for $1 \leq j \leq n$ [24]. (This definition was initially given for binary sequences, but by Theorem 4.5, it extends to sequences over $D$.)

Proposition 6.4 The following are equivalent.
(i) $s$ has a PLCP
(ii) for $1 \leq j \leq n$, $e_j = 1$ if $j$ is even and $e_j = 0$ otherwise
(iii) $\Delta_j \neq 0$ for all odd $j$, $1 \leq j \leq n + 1$.

Proof. (i) ⇔ (ii): An easy consequence of the definitions. (i) ⇒ (iii): If $j \leq n + 1$ is odd then $\Delta_j \neq 0$, for otherwise $\frac{j-1}{2} + 1 = \frac{j+1}{2} = L_j = L_{j-1} = \frac{j-1}{2}$. (iii) ⇒ (i): Let $\Delta_j \neq 0$ for all odd $j$, $1 \leq j \leq n + 1$. Then $s_1 \neq 0$, $L_1 = 1$ and $e_1 = 0$. If $\Delta_2 = 0$, then $L_2 = L_1 = 1$, otherwise $L_2 = \max\{e_1, 0\} + 1 = 1$, so that $L_2$ is as required. Suppose that $j \leq n$ is odd and $L_k = \lfloor \frac{k+1}{2}\rfloor$ for all $k$, $1 \leq k \leq j - 1$. We have $L_j = j - L_{j-1} = j - \frac{j-1}{2} = \lfloor \frac{j+1}{2}\rfloor$. If $j = n + 1$, we are done. Otherwise, if $\Delta_{j+1} = 0$, we have $L_{j+1} = L_j = \lfloor \frac{j+1}{2}\rfloor = \lfloor \frac{j+2}{2}\rfloor$, whereas if $\Delta_{j+1} \neq 0$, $L_{j+1} = j + 1 - L_j = j + 1 - \lfloor \frac{j+1}{2}\rfloor = \lfloor \frac{j+2}{2}\rfloor$. □

It follows that if $s$ has a PLCP, then $\sum_{j=1}^n L_j = \lfloor (n+1)^2/4 \rfloor$. In particular, this is true if $\Delta_j$ is always non-zero. Note that $L_j \leq \lfloor \frac{j+1}{2}\rfloor$ does not hold in general: consider $(0, \ldots, 0, 1) \in D^n$ where $n \geq 2$ for example. We do not know if $\sum_{j=1}^n L_j = \lfloor (n+1)^2/4 \rfloor$ implies that $s$ has a PLCP.

6.2 Worst-case Analysis

It is now immediate that

Theorem 6.5 For a sequence of $n$ terms from $D$, Algorithms 4.12 and 5.6 require at most $3\lfloor \frac{n^2}{4}\rfloor$ multiplications in $D$.

As remarked above, if $D$ is a field then we can divide $\mu$ in Algorithm 4.12 and $\beta$ in Algorithm 5.6 by $\Delta'$, making each polynomial monic. If we ignore the number of field divisions, this gives at most $2\lfloor \frac{n^2}{4}\rfloor$ multiplications. We note that an upper bound of $\frac{n(n+1)}{2}$ for the maximum number of multiplications in the BM algorithm appeared in [7, p. 209].

6.3 Average Analysis

An average analysis of the BM algorithm appeared in [7, Equation (15), p. 209] and was based on Proposition 1, loc. cit., which was proved using the BM algorithm under the hypothesis that 'there is one formula for a sequence of length zero'.
Another proof, derived from the number of sequences with prescribed linear complexity and prescribed jump complexity, appeared in [15, Corollary 1]. We give a direct inductive proof of [7, Proposition 1] which is independent of any particular algorithm. In particular, Theorem 6.6 applies to Algorithm 4.12 and to Algorithm 5.6. One could in principle set up and solve recurrence equations similar to [7, Equations (9), (10), (11)] to carry out an average analysis of Algorithms 4.12 and 5.6, but we do not do this here.

Theorem 6.6 Let $D = \mathbb{F}_q$. The number of sequences of length $n$ with linear complexity $\ell$ is
$$\begin{cases} 0 & \text{if } \ell < 0\\ 1 & \text{if } \ell = 0\\ q^{2\ell - 1}(q - 1) & \text{if } 1 \leq \ell \leq \lfloor n/2 \rfloor\\ q^{2n - 2\ell}(q - 1) & \text{if } \lfloor n/2 \rfloor < \ell \leq n\\ 0 & \text{if } \ell > n. \end{cases}$$

Proof. Put $N(n, \ell) = |\{ s \in D^n : L_n = \ell \}|$. It is clear that $N(n, \ell)$ is as stated for $\ell < 0$ or $\ell > n$. We will show by induction on $n$ that $N(n, \ell)$ is as claimed. Let $0$ denote an all-zero sequence and let $n = 1$. It is clear that $0$ is the unique sequence with $L_1 = 0$ and that there are $q - 1$ sequences $(s_1)$ of complexity 1. Suppose inductively that the result is true for sequences of length $n - 1 \geq 1$. We consider three cases.

(a) $\ell = 0, n$. Let $\ell = 0$. Then clearly $N(n, \ell) \geq 1$. If $L_n(s) = 0$ then $s^{(n-1)} = 0$ by the inductive hypothesis since $0 \leq L_{n-1} \leq L_n = \ell$, and so $N(n, \ell) = 1$. Suppose now that $\ell = n$. We show that $N(n, \ell) = q - 1$. If $s^{(n-1)} = 0$ and $\Delta_n = s_n \neq 0$ then $L_n = n$, so $N(n, \ell) \geq q - 1$. Moreover, $L_{n-1} \leq n - 1$ and $n = L_n = \max\{L_{n-1}, n - L_{n-1}\}$ forces $L_{n-1} = 0$, so $s^{(n-1)} = 0$ and thus $N(n, n) = q - 1$.

(b) $1 \leq \ell \leq \lfloor n/2 \rfloor$. Suppose first that $2\ell \leq n - 1$. Then $L_{n-1} \leq L_n = \ell \leq \lfloor (n-1)/2 \rfloor$ and we can apply the inductive hypothesis to any $s^{(n-1)}$.
If $s_n$ is such that $\Delta_n = 0$ for some $s^{(n-1)}$, then $1 \leq \ell = L_n = L_{n-1} \leq \lfloor (n-1)/2 \rfloor$, and we obtain $N(n-1, \ell) = q^{2\ell - 1}(q - 1)$ sequences in this way. We also have $\ell < n - \ell$, so $\ell$ cannot result from some $s^{(n-1)}$ with $\Delta_n \neq 0$. Thus $N(n, \ell) = N(n-1, \ell) = q^{2\ell - 1}(q - 1)$ as required. Suppose now that $2\ell = n$. Then $\ell > \lfloor (n-1)/2 \rfloor$. If $s_n$ is such that $L_n = \ell$ and $\Delta_n = 0$, the inductive hypothesis yields $N(n-1, \ell) = q^{2(n-1-\ell)}(q - 1)$ sequences. There are also $(q - 1)N(n-1, \ell) = q^{2(n-1-\ell)}(q - 1)^2$ sequences resulting from $\Delta_n \neq 0$. Thus $N(n, \ell) = N(n-1, \ell) + (q - 1)N(n-1, \ell)$, and substituting the inductive values and $n = 2\ell$ yields the result.

(c) $\lfloor n/2 \rfloor < \ell \leq n$. Then $(n-1)/2 < \ell$ and $\max\{\ell, n - \ell\} = \ell$. If $s_n$ is such that $\Delta_n = 0$, then $\lfloor (n-1)/2 \rfloor < \ell = L_{n-1} \leq n - 1$ and we can apply the inductive hypothesis to $s^{(n-1)}$, giving $N(n-1, \ell) = q^{2(n-1-\ell)}(q - 1)$ sequences. We also get a sequence of complexity $\ell$ if $\Delta_n \neq 0$ and either (i) $L_{n-1} = \ell$ or (ii) $L_{n-1} = n - \ell$. Since $\lfloor (n-1)/2 \rfloor < \ell = L_{n-1} \leq n - 1$, (i) gives $(q - 1)N(n-1, \ell) = q^{2(n-1-\ell)}(q - 1)^2$ sequences. For (ii), we have $1 \leq n - \ell \leq \lfloor (n-1)/2 \rfloor$ and so we obtain an additional $(q - 1)N(n-1, n-\ell) = q^{2(n-\ell)-1}(q - 1)^2$ sequences. Thus
$$N(n, \ell) = N(n-1, \ell) + (q - 1)N(n-1, \ell) + (q - 1)N(n-1, n - \ell)$$
and on substituting the inductive values, we easily get $N(n, \ell) = q^{2(n-\ell)}(q - 1)$ as required. □

Corrigenda

We take this opportunity to correct some typographical errors in [20]:
p. 335, l. 6: delete $\varepsilon(g) + \deg g \leq m$.
p. 336, l. 2: should read $O(X^2 - X) = ((X^2 - X) \circ F')_{-3+2} = F'_{-3} - F'_{-2} = 1$.
p. 336, l. 13: $n < m$ should be $m \leq n$.
p. 343, table for 1,1,2 iterations: $O\mu_{-1} = -1$, $O\mu_{-2} = +1$.

References

[1] A. Alecu and A.
Salagean. Modified Berlekamp-Massey Algorithm for Approximating the k-Error Linear Complexity of Binary Sequences. In IMA Conference on Cryptography and Coding (S.D. Galbraith, Ed.), Springer LNCS vol. 4887, pages 220–232, 2007.
[2] F. Arnault, T.P. Berger, and A. Necer. Feedback with Carry Shift Registers Synthesis With the Euclidean Algorithm. IEEE Trans. on Information Theory, 50:910–916, 2004.
[3] R. Blahut. Theory and Practice of Error Control Codes. Addison-Wesley, 1983.
[4] G. L. Feng and K. K. Tzeng. A generalization of the Berlekamp-Massey algorithm for multisequence shift register sequence synthesis with applications to decoding cyclic codes. IEEE Trans. Inform. Theory, 37:1274–1287, 1991.
[5] P. Fitzpatrick and S. Jennings. Comparison of two algorithms for decoding alternant codes. Applicable Algebra in Engineering, Communication and Computing, 9:211–220, 1998.
[6] P. Fitzpatrick and G.H. Norton. The Berlekamp-Massey algorithm and linear recurring sequences over a factorial domain. Applicable Algebra in Engineering, Communication and Computing, 6:309–323, 1995.
[7] F.G. Gustavson. Analysis of the Berlekamp-Massey linear feedback shift-register synthesis algorithm. IBM J. Res. Dev., 20:204–212, 1976.
[8] A. E. Heydtmann and J.M. Jensen. On the Equivalence of the Berlekamp-Massey and the Euclidean Algorithms for Decoding. IEEE Trans. on Information Theory, 46:2614–2624, 2000.
[9] K. Imamura and W. Yoshida. A Simple Derivation of the Berlekamp-Massey Algorithm and Some Applications. IEEE Trans. on Information Theory, 33:146–150, 1987.
[10] K. Jensen and N. Wirth. Pascal: User Manual and Report (2nd Edition). Springer, 1978.
[11] E. Jonckheere and C. Ma. A Simple Hankel Interpretation of the Berlekamp-Massey Algorithm. Linear Algebra and its Applications, 125:65–76, 1989.
[12] R. Lidl and H.
Niederreiter. Finite Fields, volume 20 of Encyclopedia of Mathematics and its Applications. Addison-Wesley, Reading, 1983.
[13] J. L. Massey. Shift-register synthesis and BCH decoding. IEEE Trans. Inform. Theory, 15:122–127, 1969.
[14] R. McEliece. The Theory of Information and Coding, volume 3 of Encyclopedia of Mathematics and its Applications. Cambridge University Press, 2002.
[15] H. Niederreiter. The linear complexity profile and the jump complexity of keystream sequences. Lecture Notes in Computer Science, 473:174–188, 1990.
[16] G. H. Norton. Minimal Polynomial Algorithms for Finite Sequences. IEEE Trans. on Information Theory, 56:4643–4645, 2010.
[17] G. H. Norton. On Minimal Polynomial Identities for Finite Sequences. Submitted, pages 1–25, 2010.
[18] G.H. Norton. On the Minimal Realizations of a Finite Sequence. J. Symbolic Computation, 20:93–115, 1995.
[19] G.H. Norton. Some decoding applications of minimal realization. In Cryptography and Coding, volume 1025 of Lecture Notes in Computer Science, pages 53–62. Springer, 1995.
[20] G.H. Norton. On shortest linear recurrences. J. Symbolic Computation, 27:323–347, 1999.
[21] G. H. Norton and A. Salagean. On the key equation over a commutative ring. Designs, Codes and Cryptography, 20:125–141, 2000.
[22] W. W. Peterson and E.J. Weldon. Error-Correcting Codes. MIT Press, 1972.
[23] I.S. Reed, M.T. Shih, and T.K. Truong. VLSI design of inverse-free Berlekamp-Massey algorithm. IEE Proc. E, Computers and Digital Techniques, 138(5):295–298, 1991.
[24] R.A. Rueppel. Analysis and Design of Stream Ciphers. Springer, 1986.
[25] A. Salagean. On the Computation of the Linear Complexity and the k-Error Linear Complexity of Binary Sequences With Period a Power of 2. IEEE Trans. Inform. Theory, 51:1145–1150, 2005.
[26] A. Salagean.
An Algorithm for Computing Minimal Bidirectional Linear Recurrence Relations. IEEE Trans. Inform. Theory, 55:4695–4700, 2009.
