Generating All Partitions: A Comparison of Two Encodings
Authors: Jerome Kelleher, Barry O'Sullivan
May 5, 2014

Abstract

Integer partitions may be encoded as either ascending or descending compositions for the purposes of systematic generation. Many algorithms exist to generate all descending compositions, yet none have previously been published to generate all ascending compositions. We develop three new algorithms to generate all ascending compositions and compare these with descending composition generators from the literature. We analyse the new algorithms and provide new and more precise analyses for the descending composition generators. In each case, the ascending composition generation algorithm is substantially more efficient than its descending composition counterpart. We develop a new formula for the partition function p(n) as part of our analysis of the lexicographic succession rule for ascending compositions.

1 Introduction

A partition of a positive integer n is an unordered collection of positive integers whose sum is n. Partitions have been the subject of extensive study for many years and the theory of partitions is a large and diverse body of knowledge. Partitions are a fundamental mathematical concept and have connections with number theory [4], elliptic modular functions [48, p.224], Schur algebras and representation theory [33, p.13], derivatives [61], symmetric groups [10, 7], Gaussian polynomials [6, ch.7] and much else [2]. The theory of partitions also has many and varied applications [12, 41, 30, 59, 18, 1, 57].

Combinatorial generation algorithms allow us to systematically traverse all possibilities in some combinatorial universe, and have been the subject of sustained interest for many years [28].
∗ Institute of Evolutionary Biology, University of Edinburgh, King's Buildings, West Mains Road, EH9 3JT U.K. jerome.kelleher@ed.ac.uk
† Cork Constraint Computation Centre, Western Gateway Building, University College Cork, Ireland. b.osullivan@cs.ucc.ie

Many algorithms are known to generate fundamental combinatorial objects; for example, in 1977 Sedgewick reviewed more than thirty permutation generation algorithms [49]. Many different orders have been proposed for generating combinatorial objects, the most common being lexicographic [23] and minimal-change order [46]. The choice of encoding, the representation of the objects we are interested in as simpler structures, is of critical importance to the efficiency of combinatorial generation.

In this paper we demonstrate that by changing the encoding for partitions from descending compositions to ascending compositions we obtain significantly more efficient generation algorithms. We develop three new algorithms under the most common generation idioms: recursion (Section 2), succession rules (Section 3), and efficient sequential generation (Section 4). In each case we rigorously analyse the new algorithm and use this analysis to compare with a commensurable algorithm from the literature, for which we provide a new and more precise analysis. These analyses are performed using a novel application of Kemp's abstraction of counting read and write operations [23] and this approach is validated in an empirical study (Section 4.3). In all three cases the new ascending composition generation algorithm is substantially more efficient than the algorithm from the literature. As part of our study of partition generation algorithms we provide a new proof of a partition identity in Section 4.1.1. We also develop a new formula for the partition function p(n), one of the most important functions in the theory of partitions [5], in Section 3.4.
1.1 Related Work

A composition of a positive integer n is an expression of n as an ordered sum of positive integers [52, p.14], and a composition a_1 + · · · + a_k = n can be represented by the sequence a_1 . . . a_k. Since there is a unique way of expressing each partition of n as a composition of n in either ascending or descending order¹, we can generate either the set of ascending or descending compositions of n in order to obtain the set of partitions. More precisely, we can say that we are encoding partitions as either ascending or descending compositions for the purposes of systematic generation.

Although partitions are fundamentally unordered they have come to be defined in more and more concrete terms as descending compositions. This trend can be clearly seen in the works of Sylvester [56], MacMahon [32, Vol.II p.91] and finally Andrews [4, p.1]. Sylvester's "constructive theory of partitions", based on the idea of treating a partition as a "definite thing" [56] (in contrast to Euler's algebraical identities [20]), has been extremely successful [39]. As a result of this, partitions are now often defined as descending compositions [4, p.1]; thus, algorithms to generate all partitions have naturally followed the prevailing definition and generated descending compositions.

It is widely accepted that the most efficient means of generating descending compositions is in reverse lexicographic order: see Andrews [4, p.230], Knuth [27, p.1], Nijenhuis & Wilf [36, p.65–68], Page & Wilson [38, §5.5], Skiena [50, p.52], Stanton & White [53, p.13], Wells [60, p.150] or Zoghbi & Stojmenović [62].

¹ For our purposes the terms 'ascending' and 'descending' are synonymous with 'nondecreasing' and 'nonincreasing', respectively.
Several different representations (concrete data structures) have been used for generating descending compositions: namely the sequence [34], multiplicity [36, ch.9] and part-count [55] representations. Although the lexicographic succession rules for descending compositions in the multiplicity or part-count representations can be implemented looplessly [14], they tend to be less efficient than their sequence representation counterparts [27, ex.5]. In an empirical analysis, Zoghbi & Stojmenović [62] demonstrated that their sequence representation algorithms are significantly more efficient than all known multiplicity and part-count representation algorithms.

Algorithms to generate descending compositions in lexicographic order have also been published. See Knuth [25, p.147] and Zoghbi & Stojmenović [62] for implementations using the sequence representation; Reingold, Nievergelt & Deo [42, p.193] and Fenner & Loizou [16] for implementations using the multiplicity representation; and Klimko [24] for an implementation using the part-count representation. Fenner & Loizou's tree construction operations [15] can be used to generate descending compositions in several other orders.

Several algorithms are known to generate descending k-compositions in lexicographic [19, 58], reverse lexicographic [43], and minimal-change [45] order. Hindenburg's eighteenth century algorithm [13, p.106] generates ascending k-compositions in lexicographic order and is regarded as the canonical method to generate partitions into a fixed number of parts: see Knuth [27, p.2], Andrews [4, p.232] or Reingold, Nievergelt & Deo [42, p.191]. Algorithms due to Stockmal [54], Lehmer [31, p.26], Narayana, Mathsen & Sarangi [35], and Boyer [9] also generate ascending k-compositions in lexicographic order.
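To make the three representations concrete, the following Python sketch shows the partition 5 = 3 + 1 + 1 in each encoding. This is our own illustration: the function names and exact data layouts are assumptions, not code from the cited algorithms.

```python
from collections import Counter

def to_multiplicity(seq):
    # Multiplicity representation: (part, multiplicity) pairs, largest part first.
    return sorted(Counter(seq).items(), reverse=True)

def to_part_count(seq, n):
    # Part-count representation: counts[i] = number of parts equal to i, 0 <= i <= n.
    counts = [0] * (n + 1)
    for part in seq:
        counts[part] += 1
    return counts

seq = [3, 1, 1]  # sequence representation: 5 = 3 + 1 + 1 as a descending composition
print(to_multiplicity(seq))   # [(3, 1), (1, 2)]
print(to_part_count(seq, 5))  # [0, 2, 0, 1, 0, 0]
```

The sequence form stores one entry per part, while the other two collapse repeated parts, which is why their succession rules admit loopless implementations.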
Algorithms to generate all ascending compositions, however, have not been considered.

1.2 Notation

In general we use the notation and conventions advocated by Knuth [26, p.1], using the term visit to refer to the process of making a complete object available to some consuming procedure. Thus, any combinatorial generation algorithm must visit every element in the combinatorial universe in question exactly once. In discussing the efficiency of combinatorial generation, we say that an algorithm is constant amortised time [44, §1.7] if the average amount of time required to generate an object is bounded, from above, by some constant.

Ordinarily, we denote a sequence of k integers as a_1 . . . a_k, with elements indexed a_1, a_2, etc. When referring to short specific sequences it is convenient to enclose each element using ⟨ and ⟩. Thus, if we let a_1 . . . a_k = ⟨3⟩⟨23⟩, we have k = 2, a_1 = 3 and a_2 = 23. We will also use the idea of prepending a particular value to the head of a sequence: thus, the notation 3 · ⟨23⟩ is the same sequence as given in the preceding example.

Definition 1.1. A sequence of positive integers a_1 . . . a_k is an ascending composition of the positive integer n if a_1 + · · · + a_k = n and a_1 ≤ · · · ≤ a_k.

Definition 1.2. Let 𝒜(n) be the set of all ascending compositions of n for some n ≥ 1, and let 𝒜(n, m) ⊆ 𝒜(n) be defined for 1 ≤ m ≤ n as

    𝒜(n, m) = { a_1 . . . a_k | a_1 . . . a_k ∈ 𝒜(n) and a_1 ≥ m }.

Also, let A(n) = |𝒜(n)| and A(n, m) = |𝒜(n, m)|.

Definition 1.3. A sequence of positive integers d_1 . . . d_k is a descending composition of the positive integer n if d_1 + · · · + d_k = n and d_1 ≥ · · · ≥ d_k.

Definition 1.4.
Let 𝒟(n) be the set of all descending compositions of n for some n ≥ 1, and let 𝒟*(n, m) ⊆ 𝒟(n) be defined for 1 ≤ m ≤ n as

    𝒟*(n, m) = { d_1 . . . d_k | d_1 . . . d_k ∈ 𝒟(n) and d_1 = m }.

Also, let D(n) = |𝒟(n)| and D*(n, m) = |𝒟*(n, m)|.

There is an asymmetry between the functions used to enumerate the ascending compositions and the descending compositions: A(n, m) counts the ascending compositions of n where the first part is at least m, whereas D*(n, m) counts the number of descending compositions of n where the first part is exactly m. This asymmetry is necessary as we require A(n, m) in our analysis of ascending composition generation algorithms and D*(n, m) is essential for the analysis of the recursive descending composition generation algorithm of Section 2.2.

2 Recursive Algorithms

In this section we examine recursive algorithms to generate ascending and descending compositions. Recursion is a popular technique in combinatorial generation as it leads to elegant and concise generation procedures [44]. In Section 2.1 we develop and analyse a simple constant amortised time recursive algorithm to generate all ascending compositions of n. Then, in Section 2.2 we study Ruskey's descending composition generator [44, §4.8], and provide a new analysis of this algorithm. We compare these algorithms in Section 2.3 in terms of the total number of recursive invocations required to generate all p(n) partitions of n.

2.1 Ascending Compositions

The only recursive algorithm to generate all ascending compositions available in the literature is de Moivre's method [11]. In de Moivre's method we generate the ascending compositions of n by prepending m to the (previously listed) ascending compositions of n − m, for m = 1, . . . , n [28, p.20].
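De Moivre's prepending principle translates directly into a short recursive generator. The following Python sketch is our own illustration (the names are ours, not taken from the paper); it yields the set A(n, m), the ascending compositions of n with smallest part at least m, in lexicographic order:

```python
def asc_compositions(n, m=1):
    """Yield the ascending compositions of n whose parts are all >= m,
    i.e. the set A(n, m), in lexicographic order."""
    if n == 0:
        yield []          # the empty composition completes a prefix
        return
    for x in range(m, n + 1):
        # prepend x to every ascending composition of n - x with parts >= x
        for tail in asc_compositions(n - x, x):
            yield [x] + tail
```

For example, `list(asc_compositions(5))` produces the p(5) = 7 partitions of 5, starting with [1, 1, 1, 1, 1] and ending with [5]. Unlike the in-place approach developed next, this sketch builds a fresh list for every composition.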
Our new recursive algorithm to generate all ascending compositions of n operates on a similar principle, but does not require us to have large sets of partitions in memory. We first note that we can generate all ascending compositions of n, with smallest part at least m, by prepending m to all ascending compositions of n − m. We then observe that m can range from 1 to ⌊n/2⌋, since the smallest part in a partition of n (with more than one part) cannot be less than 1 or greater than ⌊n/2⌋; and we complete the process by visiting the singleton composition ⟨n⟩. This provides sufficient information for us to derive a recursive generation procedure, Algorithm 2.1, in the idiom of Page & Wilson [38]. This algorithm generates all ascending compositions of n where the first part is at least m in lexicographic order. See Kelleher [22, §5.2.1] for a complete discussion and proof of correctness of Algorithm 2.1.

Algorithm 2.1 RecAsc(n, m, k)
Require: 1 ≤ m ≤ n
 1: x ← m
 2: while 2x ≤ n do
 3:   a_k ← x
 4:   RecAsc(n − x, x, k + 1)
 5:   x ← x + 1
 6: end while
 7: a_k ← n
 8: visit a_1 . . . a_k

Following the standard practice for the analysis of recursive generation algorithms, we count the number of recursive calls required to generate the set of combinatorial objects in question (e.g. Sawada [47]). By counting the total number of recursive invocations required, we obtain a bound on the total time required, as each invocation, discounting the time spent in recursive calls, requires constant time. To establish that Algorithm 2.1 generates the set 𝒜(n) in constant amortised time we must count the total number of invocations, I_{A2.1}(n), and show that this value is proportional to p(n).

Theorem 2.1. For all positive integers n, I_{A2.1}(n) = p(n).

Proof. Each invocation of Algorithm 2.1 visits exactly one composition (line 8).
The invocation RecAsc(n, m, 1) correctly visits all p(n) ascending compositions of n [22, p.78] and it immediately follows, therefore, that there must be p(n) invocations. Hence, I_{A2.1}(n) = p(n).

Theorem 2.1 gives us an asymptotic measure of the total computational effort required to generate all partitions of n using Algorithm 2.1. It is also useful to know the average amount of effort that this total implies per partition. Therefore, we let Ī_{A2.1}(n) denote the average number of invocations of RecAsc required to generate an ascending composition of n. We then trivially get

    Ī_{A2.1}(n) = 1    (1)

from Theorem 2.1, and we can see that Algorithm 2.1 is obviously constant amortised time.

In this subsection we have developed a new algorithm to generate all ascending compositions of n. This algorithm, although concise and simple, can be easily shown to be constant amortised time. In the next subsection we examine the most efficient known algorithm to generate descending compositions, which we subsequently compare to the ascending composition generator of this subsection.

Algorithm 2.2 RecDesc(n, m, k)
Require: 1 ≤ m ≤ n and d_j = 1 for j > k
 1: d_k ← m
 2: if n = m or m = 1 then
 3:   visit d_1 . . . d_{k + n − m}
 4: else
 5:   for x ← 1 to min(m, n − m) do
 6:     RecDesc(n − m, x, k + 1)
 7:   end for
 8:   d_k ← 1
 9: end if

2.2 Descending Compositions

Two recursive algorithms are available to generate all descending compositions of n: Page & Wilson's [38, §5.5] generator (variants of which have appeared in several texts, including Kreher & Stinson [29, p.68], Skiena [50, p.51] and Pemmaraju & Skiena [40, p.136]) and Ruskey's improvement thereof [44, §4.8].
Ruskey's algorithm, given in Algorithm 2.2, generates all descending compositions of n in which the first (and largest) part is exactly m; thus RecDesc(8, 4, 1) visits the compositions 41111, 4211, 422, 431, 44. RecDesc uses what Ruskey refers to as a 'path elimination technique' [44, §4.3] to attain constant amortised time performance.

A slight complication arises when we wish to use RecDesc to generate all descending compositions. As the algorithm generates all descending compositions where the first part is exactly m, we must iterate through all j ∈ {1, . . . , n} and invoke RecDesc(n, j, 1). Following Ruskey's recommendations [44, §4.3], we consider instead the invocation RecDesc(2n, n, 1). This invocation will generate all descending compositions of 2n where the first part is exactly n; therefore the remaining parts will be a descending composition of n. Thus, if we alter line 3 to ignore the first part in d (i.e. visit d_2 . . . d_{k + n − m}), we will visit all descending compositions of n in lexicographic order.

Ruskey's algorithm generates descending compositions where the largest part is exactly m, and so we require a recurrence relation to count objects of this type. Ruskey [44, §4.8] provides a recurrence relation to compute D*(n, m), which we shall use for our analysis. Thus, we define D*(n, n) = D*(n, 1) = 1, and in general,

    D*(n, m) = Σ_{x=1}^{min(m, n−m)} D*(n − m, x).    (2)

Recurrence (2) is useful here because it is the recurrence relation upon which RecDesc is based. Using this recurrence we can then easily count the number of invocations of RecDesc required to generate the descending compositions of n. Let us define I′_{A2.2}(n, m) as the number of invocations of RecDesc required to generate all descending compositions of n where the first part is exactly m. Then, I′_{A2.2}(n, n) = I′_{A2.2}(n, 1) = 1, and

    I′_{A2.2}(n, m) = 1 + Σ_{x=1}^{min(m, n−m)} I′_{A2.2}(n − m, x).    (3)

Recurrence (3) computes the number of invocations of Algorithm 2.2 required to generate all descending compositions of n with first part exactly m, but tells us little about the actual magnitude of this value. As a step towards solving this recurrence in terms of the partition function p(n) we require the following lemma, in which we relate the I′_{A2.2}(n, m) numbers to the D*(n, m) numbers.

Lemma 2.1. If 1 < m ≤ n then I′_{A2.2}(n, m) = D*(n, m) + D*(n − 1, m).

Proof. Proceed by strong induction on n.

Base case: n = 2. Suppose 1 < m ≤ 2; it follows immediately that m = 2. Thus, by recurrence (3) we compute I′_{A2.2}(2, 2) = 1 and by recurrence (2) compute D*(2, 2) = 1 and D*(1, 2) = 0. Therefore, I′_{A2.2}(2, 2) = D*(2, 2) + D*(1, 2), and so the inductive basis holds.

Induction step. Suppose, for some positive integer n, that I′_{A2.2}(n′, m′) = D*(n′, m′) + D*(n′ − 1, m′) for all positive integers 1 < m′ ≤ n′ < n. Then, suppose m is an arbitrary positive integer such that 1 < m ≤ n.

Now, suppose m = n. By (3) we know that I′_{A2.2}(n, m) = 1 since m = n. Also, D*(n, m) = 1 as m = n, and D*(n − 1, m) = 0 as n − 1 ≠ m, m ≠ 1 and min(m, n − m − 1) = −1, ensuring that the sum in (2) is empty. Therefore, I′_{A2.2}(n, m) = D*(n, m) + D*(n − 1, m).

Suppose, on the other hand, that 1 < m < n. We can see immediately that min(m, n − m) ≥ 1, and so there must be at least one term in the sum of (3). Extracting the first term, where x = 1, from (3) we get

    I′_{A2.2}(n, m) = 1 + I′_{A2.2}(n − m, 1) + Σ_{x=2}^{min(m, n−m)} I′_{A2.2}(n − m, x),

and furthermore, as I′_{A2.2}(n − m, 1) = 1, we obtain

    I′_{A2.2}(n, m) = 2 + Σ_{x=2}^{min(m, n−m)} I′_{A2.2}(n − m, x).    (4)

We are assured that 1 < x ≤ n − m by the upper and lower bounds of the summation in (4), and so we can apply the inductive hypothesis to get

    I′_{A2.2}(n, m) = 2 + Σ_{x=2}^{min(m, n−m)} (D*(n − m, x) + D*(n − m − 1, x))
                    = 2 + Σ_{x=2}^{min(m, n−m)} D*(n − m, x) + Σ_{x=2}^{min(m, n−m)} D*(n − m − 1, x).

By the definition of D* we know that D*(n, 1) = 1, and so D*(n − m, 1) + D*(n − m − 1, 1) = 2. Replacing the leading 2 above with this expression, and inserting the terms D*(n − m, 1) and D*(n − m − 1, 1) into the appropriate summations, we find that

    I′_{A2.2}(n, m) = Σ_{x=1}^{min(m, n−m)} D*(n − m, x) + Σ_{x=1}^{min(m, n−m)} D*(n − m − 1, x).    (5)

By (2), the first sum of (5) is equal to D*(n, m); it therefore remains to show that

    D*(n − 1, m) = Σ_{x=1}^{min(m, n−m)} D*(n − m − 1, x),

or equivalently, that

    Σ_{x=1}^{min(m, n−m−1)} D*(n − m − 1, x) = Σ_{x=1}^{min(m, n−m)} D*(n − m − 1, x).    (6)

Suppose m ≤ n − m − 1. Then, min(m, n − m − 1) = min(m, n − m), and so the left and right-hand sides of (6) are equal. Suppose, alternatively, that m > n − m − 1. Hence, min(m, n − m − 1) = n − m − 1 and min(m, n − m) = n − m, and so we get

    Σ_{x=1}^{n−m} D*(n − m − 1, x) = Σ_{x=1}^{n−m−1} D*(n − m − 1, x) + D*(n − m − 1, n − m).

Since n − m − 1 < n − m we know that D*(n − m − 1, n − m) = 0, and therefore (6) is verified. Therefore, by (5) and (6) we know that I′_{A2.2}(n, m) = D*(n, m) + D*(n − 1, m), as required.

Lemma 2.1 is a crucial step in our analysis of Algorithm 2.2 as it relates the number of invocations required to generate a given set of descending compositions to the function D*(n, m).
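Lemma 2.1 is easy to sanity-check numerically. The Python sketch below is our own illustration (the function names are ours): it implements recurrences (2) and (3) directly and confirms the identity for small n.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def d_star(n, m):
    # Recurrence (2): D*(n, m), descending compositions of n with first part exactly m.
    if n < m or m < 1:
        return 0
    if n == m or m == 1:
        return 1
    return sum(d_star(n - m, x) for x in range(1, min(m, n - m) + 1))

@lru_cache(maxsize=None)
def invocations(n, m):
    # Recurrence (3): I'(n, m), invocations of RecDesc needed for the same set.
    if n == m or m == 1:
        return 1
    return 1 + sum(invocations(n - m, x) for x in range(1, min(m, n - m) + 1))

# Lemma 2.1: I'(n, m) = D*(n, m) + D*(n - 1, m) for all 1 < m <= n.
for n in range(2, 40):
    for m in range(2, n + 1):
        assert invocations(n, m) == d_star(n, m) + d_star(n - 1, m)
```

As a cross-check against the text, `d_star(8, 4)` evaluates to 5, matching the five compositions 41111, 4211, 422, 431, 44 visited by RecDesc(8, 4, 1).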
Much is known about the D*(n, m) numbers, as they count the partitions of n where the largest part is m; thus, we can relate the number of invocations required to the partition numbers, p(n). Therefore, let us formally define I_{A2.2}(n) to be the number of invocations of Algorithm 2.2 required to generate all p(n) descending compositions of n. We then get the following result.

Theorem 2.2. If n > 1 then I_{A2.2}(n) = p(n) + p(n − 1).

Proof. Suppose n > 1. To generate all descending compositions of n we invoke RecDesc(2n, n, 1) (see discussion above), and as n > 1 we can apply Lemma 2.1 to obtain I′_{A2.2}(2n, n) = D*(2n, n) + D*(2n − 1, n), and thus I_{A2.2}(n) = D*(2n, n) + D*(2n − 1, n). We know that D*(2n, n) = p(n), as we can clearly obtain a descending composition of n from a descending composition of 2n where the first part is exactly n by removing that first part. Similarly, D*(2n − 1, n) = p(n − 1), as we can remove the first part of size n from any descending composition of 2n − 1 with first part equal to n, obtaining a descending composition of n − 1. Thus, the compositions counted by D*(2n, n) and D*(2n − 1, n) are in bijection with the partitions of n and of n − 1, respectively. Hence, I_{A2.2}(n) = p(n) + p(n − 1), completing the proof.

Note that in Theorem 2.2, and in many of the following analyses, we restrict our attention to values n > 1. This is to avoid unnecessary complication of the relevant formulas in accounting for the case where n = 1. In the above, if we compute I_{A2.2}(n) = p(n) + p(n − 1) for n = 1, we arrive at the conclusion that the number of invocations required is 2, as p(0) = 1 by convention. In the interest of clarity we shall ignore such contingencies, as they do not affect the general conclusions we draw.
Using Theorem 2.2 it is now straightforward to show that RecDesc generates all descending compositions of n in constant amortised time. To show that the algorithm is constant amortised time we must demonstrate that the average number of invocations of the algorithm per object generated is bounded, from above, by some constant. To do this, let us formally define Ī_{A2.2}(n) as the average number of invocations of RecDesc required to generate a descending composition of n. Clearly, as the total number of invocations is I_{A2.2}(n) and the number of objects generated is p(n), we have Ī_{A2.2}(n) = I_{A2.2}(n)/p(n). Since I_{A2.2}(n) = p(n) + p(n − 1) by Theorem 2.2, we have Ī_{A2.2}(n) = 1 + p(n − 1)/p(n). It is well known that p(n) > p(n − 1) for all n > 1, and therefore p(n − 1)/p(n) < 1. From this inequality we can then deduce that Ī_{A2.2}(n) < 2, proving that Algorithm 2.2 is constant amortised time.

It is useful to have a more precise asymptotic expression for the average number of invocations required to generate a descending composition using RecDesc, Ī_{A2.2}(n). By the asymptotic estimate for p(n − t)/p(n) [27, p.11] we then get

    Ī_{A2.2}(n) = 1 + e^{−C/√n} (1 + O(n^{−1/6})),

with C = π/√6. Simplifying this expression we get

    Ī_{A2.2}(n) = 1 + e^{−π/√(6n)} (1 + O(n^{−1/6})).    (7)

In this subsection we have described and provided a new analysis for the most efficient known recursive descending composition generation algorithm, which is due to Ruskey [44, §4.8]. Ruskey demonstrates that RecDesc is constant amortised time by reasoning about the number of children each node in the computation tree has, but does not derive the precise number of invocations involved. In this section we rigorously counted the number of invocations required to generate all descending compositions of n using this algorithm, and related the recurrence involved to the partition numbers. We then used an asymptotic formula for p(n) to derive the number of invocations required to generate each partition, on average. In the next subsection we use this analysis to compare Ruskey's descending composition generator with our new ascending composition generator.

2.3 Comparison

Performing the comparison between the recursive algorithms to generate all ascending compositions and to generate all descending compositions of n is a simple procedure. RecAsc requires p(n) invocations to generate all p(n) partitions of n whereas RecDesc requires p(n) + p(n − 1) invocations. The asymptotics of p(n) show that, as n becomes large, p(n − 1)/p(n) approaches 1. Thus, we can reasonably expect the descending composition generator to require approximately twice as long as the ascending composition generator to generate all partitions of n.

Table 1: A comparison of recursive partition generators. The ratio of the time required by our ascending composition generation algorithm and Ruskey's algorithm in the Java and C languages is shown.

    n            61         72         77         90         95         109
    p(n)         1.12×10^6  5.39×10^6  1.06×10^7  5.66×10^7  1.05×10^8  5.42×10^8
    Java         0.56       0.56       0.56       0.55       0.55       0.55
    C            0.40       0.48       0.49       0.50       0.50       0.50
    Theoretical  0.54       0.54       0.53       0.53       0.53       0.53

In Table 1 we see a comparison of the actual time spent in generating partitions of n using Ruskey's algorithm, Algorithm 2.2, and our ascending composition generator, Algorithm 2.1.
In this table we report the time spent by Algorithm 2.1 in generating all ascending compositions of n, divided by the time required by Ruskey's algorithm (we report these ratios as the actual durations are of little interest). Several steps were taken in an effort to address Sedgewick's concerns about the empirical comparison of algorithms [49]. Direct and literal implementations of the algorithms concerned were written in the C and Java languages and compiled in the simplest possible manner (i.e., without the use of compiler 'optimisations'). Execution times were measured as accurately as possible and the minimum value over five runs used. The C programs were compiled using GCC version 3.3.4 and the Java programs compiled and run on the Java 2 Standard Edition, version 1.4.2. All programs were executed on an Intel Pentium 4 processor running Linux kernel 2.6.8. See Kelleher [22, p.111–114] for a full discussion of the methodology adopted in making these observations.

The values of n are selected such that n is the smallest integer where p(n) > 1 × 10^x and p(n) > 5 × 10^x for 6 ≤ x ≤ 8. Orders of magnitude larger than these values proved to be infeasible on the experimental platform; similarly, the time elapsed in generating fewer than a million partitions was too brief to measure accurately. Along with the observed ratios of the time required by RecAsc and RecDesc we also report the theoretically predicted ratio of the running times: p(n)/(p(n) + p(n − 1)). We can see from Table 1 that these theoretically predicted ratios agree well with the empirical evidence. We can also see that as n becomes larger, Ruskey's algorithm tends towards taking twice as long as RecAsc to generate the same partitions.
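The invocation counts behind the predicted ratio can also be checked directly by instrumenting straightforward Python transcriptions of the two recursions. This is a sketch under our own naming; the C and Java programs used for Table 1 are not reproduced here.

```python
def count_rec_asc(n):
    # Invocations of a RecAsc-style recursion (Algorithm 2.1, visits elided).
    calls = 0
    def rec(n, m):
        nonlocal calls
        calls += 1
        x = m
        while 2 * x <= n:
            rec(n - x, x)
            x += 1
    rec(n, 1)
    return calls

def count_rec_desc(n):
    # Invocations of a RecDesc-style recursion, driven as RecDesc(2n, n, 1).
    calls = 0
    def rec(n, m):
        nonlocal calls
        calls += 1
        if n != m and m != 1:
            for x in range(1, min(m, n - m) + 1):
                rec(n - m, x)
    rec(2 * n, n)
    return calls

def partitions(n):
    # p(n), counted via the ascending-composition recursion A(n, 1).
    def a(n, m):
        if n == 0:
            return 1
        return sum(a(n - x, x) for x in range(m, n + 1))
    return a(n, 1)

# Theorems 2.1 and 2.2 in miniature: p(n) versus p(n) + p(n - 1).
for n in range(2, 16):
    assert count_rec_asc(n) == partitions(n)
    assert count_rec_desc(n) == partitions(n) + partitions(n - 1)
```

By the two theorems, the ratio count_rec_asc(n) / count_rec_desc(n) is exactly the quantity reported in the 'Theoretical' row of Table 1.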
3 Succession Rules

In this section we consider algorithms of the form studied by Kemp in his general treatment of the problem of generating combinatorial objects [23]. Kemp reduced the problem of generating combinatorial objects to the generation of all words in a formal language L, and developed powerful general techniques to analyse such algorithms. Specifically, Kemp studied "direct generation algorithms" that obey a simple two step procedure: (1) scan the current word from right-to-left until we find the end of the common prefix shared by the current word and its immediate successor; and (2) attach the new suffix to the end of this shared prefix. The cost of this process can be easily quantified by counting the number of 'read' operations required in step (1), and the number of 'write' operations in step (2). To determine the complexity of generating a given language, we can count the number of these operations incurred in the process of generating all words in the language.

The section proceeds as follows. In Section 3.1 we derive a new succession rule for ascending compositions. We then use this succession rule to develop a generation algorithm, which we subsequently analyse. Then, in Section 3.2 we examine the well-known succession rule for generating descending compositions in reverse lexicographic order, and analyse the resulting algorithm. Following this, Section 3.3 compares the two algorithms in terms of Kemp's read and write operations. Finally, in Section 3.4 we develop a new formula for p(n) using our analysis of the succession rule for ascending compositions.

3.1 Ascending Compositions

We are concerned here with developing a simple succession rule that will allow us to generate the lexicographic successor of a given ascending composition, and using this rule to develop a generation algorithm.
To do this it is convenient to define the following notation.

Definition 3.1 (Lexicographic Minimum). For some positive integers m ≤ n, the function M_A(n, m) computes the lexicographically least element of the set 𝒜(n, m).

Definition 3.2 (Lexicographic Successor). For any a_1 . . . a_k ∈ 𝒜(n) \ ⟨n⟩ the function S_A(a_1 . . . a_k) computes the immediate lexicographic successor of a_1 . . . a_k.

The succession rule for ascending compositions is then stated simply. We obtain the lexicographically least composition in 𝒜(n, m) by prepending m to the lexicographically least composition in 𝒜(n − m, m). If 2m > n then there are no compositions in 𝒜(n, m) with more than one part, leading us to conclude that there is only one possible composition; and this must be the lexicographically least. This leads us to the following recurrence:

    M_A(n, m) = m · M_A(n − m, m),    (8)

where M_A(n, m) = ⟨n⟩ if 2m > n. See Kelleher [22, p.84] for a proof of (8). We can also derive a nonrecursive succession rule for S_A, which we develop in the following sequence of results.

Lemma 3.1. For all positive integers m ≤ n, the lexicographically least element of the set 𝒜(n, m) is given by

    M_A(n, m) = m . . . m ⟨n − µm⟩,    (9)

where the prefix consists of µ copies of m and µ = ⌊n/m⌋ − 1.

Proof. Proceed by strong induction on n.

Base case: n = 1. Since 1 ≤ m ≤ n and n = 1, then m = 1, and so 2m > n. Then, by (8), we know that M_A(n, m) = ⟨n⟩. Thus, as µ = 0 when n = 1, (9) correctly computes M_A(n, m) when n = 1.

Induction step. Suppose, for some positive integer n, that (9) holds true for all positive integers m′ ≤ n′ < n. Suppose m is an arbitrary positive integer such that m ≤ n.

Suppose then that 2m > n. By dividing both sides of this inequality by m, we see that n/m < 2, and so ⌊n/m⌋ ≤ 1. Similarly, as m ≤ n, it follows that 1 ≤ n/m, and so 1 ≤ ⌊n/m⌋.
Thus, $1 \leq \lfloor n/m \rfloor \leq 1$, and so $\lfloor n/m \rfloor = 1$; hence $\mu = 0$. By (8), $M_A(n, m) = \langle n \rangle$, and as $\mu = 0$, zero copies of $m$ are concatenated with $\langle n - \mu m \rangle$, and so (9) correctly computes $M_A(n, m)$.

Suppose then that $2m \leq n$. By the inductive hypothesis and (8) we have

  $M_A(n, m) = m \cdot \overbrace{m \ldots m}^{\mu'} \langle n - m - \mu' m \rangle$.

Clearly, if $\mu = \mu' + 1$, then (9) correctly computes the lexicographically least element of $A(n, m)$. We know that $\mu' = \lfloor (n - m)/m \rfloor - 1$, which gives us $\mu' = \lfloor n/m - 1 \rfloor - 1$. It follows that $\mu' = \lfloor n/m \rfloor - 2$, and, as $\mu = \lfloor n/m \rfloor - 1$ from (9), we have $\mu = \mu' + 1$, completing the proof.

Algorithm 3.1 RuleAsc(n)
Require: n > 0
 1: k ← 2
 2: a_1 ← 0
 3: a_2 ← n
 4: while k ≠ 1 do
 5:   y ← a_k − 1
 6:   k ← k − 1
 7:   x ← a_k + 1
 8:   while x ≤ y do
 9:     a_k ← x
10:     y ← y − x
11:     k ← k + 1
12:   end while
13:   a_k ← x + y
14:   visit a_1 . . . a_k
15: end while

Theorem 3.1 (Lexicographic Successor). If $a_1 \ldots a_k \in A(n) \setminus \{\langle n \rangle\}$ then

  $S_A(a_1 \ldots a_k) = a_1 \ldots a_{k-2} \overbrace{m \ldots m}^{\mu} \langle n' - \mu m \rangle$,   (10)

where $m = a_{k-1} + 1$, $n' = a_{k-1} + a_k$, and $\mu = \lfloor n'/m \rfloor - 1$.

Proof. Suppose $n$ is an arbitrary positive integer. Let $a_1 \ldots a_k$ be an arbitrary element of $A(n) \setminus \{\langle n \rangle\}$. Clearly, there is no positive integer $x$ such that $a_1 \ldots a_{k-1} \langle a_k + x \rangle \in A(n)$. The initial part of $M_A(a_{k-1} + a_k, a_{k-1} + 1)$ is the least possible value we can assign to $a_{k-1}$; and the remaining parts (if any) are the lexicographically least way to extend $a_1 \ldots a_{k-1}$ to a complete ascending composition of $n$. Therefore, $S_A(a_1 \ldots a_k) = a_1 \ldots a_{k-2} M_A(a_{k-1} + a_k, a_{k-1} + 1)$. Then, using Lemma 3.1 we get (10) as required.

The succession rule (10) is implemented in RuleAsc (Algorithm 3.1).
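For concreteness, RuleAsc transcribes almost line for line into a Python generator. The sketch below is an illustrative 0-indexed transcription (the sentinel $a_1 = 0$ of the pseudocode becomes `a[0]`); the function name is ours:

```python
def rule_asc(n):
    """Visit all ascending compositions (partitions) of n in
    lexicographic order, following Algorithm 3.1 (RuleAsc)."""
    a = [0] * (n + 1)   # a[0] = 0 plays the role of the sentinel a_1
    k = 1
    a[1] = n
    while k != 0:
        y = a[k] - 1    # line 5: largest value the final part may yield up
        k -= 1
        x = a[k] + 1    # line 7: smallest value the next part may take
        while x <= y:   # lines 8-12: spread the remainder as parts of size x
            a[k] = x
            y -= x
            k += 1
        a[k] = x + y    # line 13: the final part absorbs the remainder
        yield a[:k + 1]
```

Each `yield` corresponds to a visit on line 14; for example, `rule_asc(4)` visits $\langle 1,1,1,1 \rangle$, $\langle 1,1,2 \rangle$, $\langle 1,3 \rangle$, $\langle 2,2 \rangle$, $\langle 4 \rangle$ in lexicographic order.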
Each iteration of the main loop visits exactly one composition, and the internal loop generates any sequences of parts required to find the lexicographic successor. We concentrate here on analysis of this algorithm; see Kelleher [22, §5.3.1] for a full discussion and proof of correctness.

The goal of our analysis is to derive a simple expression, in terms of the number of partitions of $n$, for the total number of read and write operations [23] made in the process of generating all ascending compositions of $n$. We do this by first determining the frequency of certain key instructions and using this information to determine the number of read and write operations involved.

Lemma 3.2. The number of times line 6 is executed during the execution of Algorithm 3.1 is given by $t_6(n) = p(n)$.

Proof. As Algorithm 3.1 correctly visits all $p(n)$ ascending compositions of $n$, we know that line 14 is executed exactly $p(n)$ times. Clearly line 6 is executed precisely the same number of times as line 14, and so we have $t_6(n) = p(n)$, as required.

Lemma 3.3. The number of times line 11 is executed during the execution of Algorithm 3.1 is given by $t_{11}(n) = p(n) - 1$.

Proof. The variable $k$ is used to control termination of the algorithm. From line 1 we know that $k$ is initially 2, and from line 4 we know that the algorithm terminates when $k = 1$. Furthermore, the value of $k$ is modified only on lines 6 and 11. By Lemma 3.2 we know that $k$ is decremented $p(n)$ times; it then follows immediately that $k$ must be incremented $p(n) - 1$ times, and so we have $t_{11}(n) = p(n) - 1$, as required.

Theorem 3.2. Algorithm 3.1 requires $R_{A3.1}(n) = 2p(n)$ read operations to generate the set $A(n)$.

Proof. Read operations are carried out on lines 5 and 7, which are executed $p(n)$ times each by Lemma 3.2.
Thus, the total number of read operations is $R_{A3.1}(n) = 2p(n)$.

Theorem 3.3. Algorithm 3.1 requires $W_{A3.1}(n) = 2p(n) - 1$ write operations to generate the set $A(n)$, excluding initialisation.

Proof. After initialisation, write operations are carried out in Algorithm 3.1 only on lines 9 and 13. Line 13 is executed $p(n)$ times by Lemma 3.2. We can also see that line 9 is executed exactly as many times as line 11, and by Lemma 3.3 we know that this value is $p(n) - 1$. Therefore, summing these contributions, we get $W_{A3.1}(n) = 2p(n) - 1$, completing the proof.

From Theorems 3.2 and 3.3 it is easy to see that we require an average of two read and two write operations per partition generated, as we required $2p(n)$ of both operations to generate all $p(n)$ partitions of $n$. Thus, for any value of $n$ we are assured that the total time required to generate all partitions of $n$ will be proportional to the number of partitions generated, implying that the algorithm is constant amortised time.

3.2 Descending Compositions

Up to this point we have considered only algorithms that generate compositions in lexicographic order. The majority of descending composition generation algorithms, however, visit compositions in reverse lexicographic order (McKay [34] refers to it as the 'natural order' for partitions). There are many different presentations of the succession rule required to transform a descending composition from this list into its immediate successor: see Andrews [4, p.230], Knuth [27, p.1], Nijenhuis & Wilf [36, p.65–68], Page & Wilson [38, §5.5], Skiena [50, p.52], Stanton & White [53, p.13], Wells [60, p.150] or Zoghbi & Stojmenović [62].
Algorithm 3.2 RuleDesc(n)
Require: n > 0
 1: d_1 ← n
 2: k ← 1
 3: visit d_1
 4: while k ≠ n do
 5:   ℓ ← k
 6:   m ← d_k
 7:   while m = 1 do
 8:     k ← k − 1
 9:     m ← d_k
10:   end while
11:   n′ ← m + ℓ − k
12:   m ← m − 1
13:   while m < n′ do
14:     d_k ← m
15:     n′ ← n′ − m
16:     k ← k + 1
17:   end while
18:   d_k ← n′
19:   visit d_1 . . . d_k
20: end while

No analysis of this succession rule in terms of the number of read and write operations [23] involved has been published, however, and in this section we analyse a basic implementation of the rule (we study more sophisticated techniques in Section 4.2).

If we formally define $S_D(d_1 \ldots d_k)$ to be the immediate lexicographic predecessor of a $d_1 \ldots d_k \in D(n) \setminus \{\langle 1 \ldots 1 \rangle\}$, the succession rule can be formulated as follows. Given a descending composition $d_1 \ldots d_k$ where $d_1 \neq 1$, we obtain the next composition in the ordering by applying the transformation

  $S_D(d_1 \ldots d_k) = d_1 \ldots d_{q-1} \overbrace{m \ldots m}^{\mu} \langle n' - \mu m \rangle$,   (11)

where $q$ is the index of the rightmost non-1 value (i.e., $d_j > 1$ for $1 \leq j \leq q$ and $d_j = 1$ for $q < j \leq k$), $m = d_q - 1$, $n' = d_q + k - q$ and $\mu = \lfloor n'/m \rfloor - [n' \bmod m = 0]$, the final term being an Iverson bracket. This presentation can readily be derived from the treatments cited in the previous paragraph.

The succession rule (11) is implemented in RuleDesc (Algorithm 3.2), where each iteration of the main loop implements a single application of the rule. The internal loop of lines 7–9 implements a right-to-left scan for the largest index $q$ such that $d_q > 1$, and the loop of lines 13–17 inserts $\mu$ copies of $m$ into the array. We analyse the algorithm by first determining the frequency of certain key statements, and using this information to derive the number of read and write operations needed to generate all descending compositions of $n$.

Lemma 3.4.
The number of times line 8 is executed during the execution of Algorithm 3.2 is given by $t_8(n) = 1 - n + \sum_{x=1}^{n-1} p(x)$.

Proof. As exactly one descending composition is visited per iteration of the outer while loop, we know that upon reaching line 6 there is a complete descending composition of $n$ contained in $d_1 \ldots d_k$. Furthermore, as $d_1 \geq \cdots \geq d_k$, we know that all parts of size 1 are at the end of the composition, and so it is clear that line 8 will be executed exactly once for each part of size 1 in any given composition. As we visit the compositions at the end of the loop and we terminate when $k = n$, we will not reach line 5 when the composition in question consists of $n$ copies of 1 (as this is the lexicographically least, and hence the last descending composition in reverse lexicographic order). Thus, line 8 will be executed exactly as many times as there are parts of size 1 in all partitions of $n$, minus the $n$ 1s contained in the last composition. It is well known [21, p.8] that the number of 1s in all partitions of $n$ is $1 + p(1) + \cdots + p(n-1)$, and therefore we see that line 8 is executed exactly $1 - n + \sum_{x=1}^{n-1} p(x)$ times, as required.

Lemma 3.5. The number of times line 16 is executed during the execution of Algorithm 3.2 is given by $t_{16}(n) = \sum_{x=1}^{n-1} p(x)$.

Proof. The variable $k$ is used to control termination of Algorithm 3.2: the algorithm begins with $k = 1$ and terminates when $k = n$. Examining Algorithm 3.2 we see that $k$ is modified on only two lines: it is incremented on line 16 and decremented on line 8. Thus, we must have $n - 1$ more increment operations than decrements; by Lemma 3.4 there are exactly $1 - n + \sum_{x=1}^{n-1} p(x)$ decrement operations, and so we see that line 16 is executed $\sum_{x=1}^{n-1} p(x)$ times, as required.

Theorem 3.4. Algorithm 3.2 requires $R_{A3.2}(n) = \sum_{x=1}^{n} p(x) - n$ read operations to generate the set $D(n)$.

Proof. Read operations are performed on lines 6 and 9 of Algorithm 3.2. By Lemma 3.4 we know that line 8 is executed $1 - n + \sum_{x=1}^{n-1} p(x)$ times, and so line 9 is executed an equal number of times. Clearly line 6 is executed $p(n) - 1$ times, and so we get a total of $R_{A3.2}(n) = \sum_{x=1}^{n} p(x) - n$, as required.

Theorem 3.5. Algorithm 3.2 requires $W_{A3.2}(n) = \sum_{x=1}^{n} p(x) - 1$ write operations to generate the set $D(n)$, excluding initialisation.

Proof. The only occasions in Algorithm 3.2 where a value is written to the array $d$ are lines 14 and 18. By Lemma 3.5 we know that line 16 is executed exactly $\sum_{x=1}^{n-1} p(x)$ times, and it is straightforward to see that line 14 is executed precisely the same number of times. As we visit exactly one composition per iteration of the outer while loop, and all descending compositions except the composition $\langle n \rangle$ are visited within this loop, we then see that line 18 is executed $p(n) - 1$ times in all. Therefore, summing these contributions we get $W_{A3.2}(n) = \sum_{x=1}^{n-1} p(x) + p(n) - 1 = \sum_{x=1}^{n} p(x) - 1$, as required.

Theorems 3.4 and 3.5 derive the precise number of read and write operations required to generate all descending compositions of $n$ using Algorithm 3.2, and this completes our analysis of the algorithm. We discuss the implications of these results in the next subsection, where we compare the total number of read and write operations required by RuleAsc(n) and RuleDesc(n).

3.3 Comparison

In this section we developed two algorithms. The first algorithm we considered, RuleAsc (Algorithm 3.1), generates ascending compositions of $n$; the second algorithm, RuleDesc (Algorithm 3.2), generates descending compositions of $n$.
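For concreteness, RuleDesc admits an equally direct Python transcription. The sketch below is our own illustrative 0-indexed version (`d[0]` plays the role of $d_1$; the function name is ours):

```python
def rule_desc(n):
    """Visit all descending compositions of n in reverse lexicographic
    order, following Algorithm 3.2 (RuleDesc)."""
    d = [0] * n
    d[0] = n
    k = 0                  # index of the last part, so d[0..k] is d_1..d_{k+1}
    yield d[:1]
    while k != n - 1:
        l = k
        m = d[k]
        while m == 1:      # lines 7-9: scan leftwards past the trailing 1s
            k -= 1
            m = d[k]
        n2 = m + l - k     # line 11: the value n' to be redistributed
        m -= 1
        while m < n2:      # lines 13-17: insert copies of m
            d[k] = m
            n2 -= m
            k += 1
        d[k] = n2          # line 18: the remainder becomes the final part
        yield d[:k + 1]
```

For example, `rule_desc(4)` visits $\langle 4 \rangle$, $\langle 3,1 \rangle$, $\langle 2,2 \rangle$, $\langle 2,1,1 \rangle$, $\langle 1,1,1,1 \rangle$, i.e. the descending compositions in reverse lexicographic order.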
We analysed the total number of read and write operations required by these algorithms to generate all partitions of $n$ by iteratively applying the succession rule involved. The totals obtained, disregarding unimportant (i.e. $O(1)$ or $O(n)$) trailing terms, for the ascending composition generator are summarised as follows:

  $R_{A3.1}(n) \approx 2p(n)$ and $W_{A3.1}(n) \approx 2p(n)$.   (12)

That is, we require approximately $2p(n)$ operations of the form $x \leftarrow a_j$ and approximately $2p(n)$ operations of the form $a_j \leftarrow x$ to generate all partitions of $n$ using the ascending composition generator. Turning then to the descending composition generator, we obtained the following totals, again removing insignificant trailing terms:

  $R_{A3.2}(n) \approx \sum_{x=1}^{n} p(x)$ and $W_{A3.2}(n) \approx \sum_{x=1}^{n} p(x)$.   (13)

These totals would appear to indicate a large disparity between the algorithms, but we must examine the asymptotics of $\sum_{x=1}^{n} p(x)$ to determine whether this is significant. We shall do this in terms of the average number of read and write operations per partition which is implied by these totals.

We know the total number of read and write operations required to generate all $p(n)$ partitions of $n$ using both algorithms. Thus, to determine the expected number of read and write operations required to transform the average partition into its immediate successor we must divide these totals by $p(n)$. In the case of the ascending composition generation algorithm this is trivial, as both expressions are of the form $2p(n)$, and so dividing by $p(n)$ plainly yields the value 2. Determining the average number of read and write operations using the succession rule for descending compositions is more difficult, however, as both expressions involve a factor of the form $\sum_{x=1}^{n} p(x)$.
Using the asymptotic expression for $p(n)$ we can get a qualitative estimate of these functions. Odlyzko [37, p.1083] derived an estimate for the value of sums of partition numbers which can be stated as follows:

  $\sum_{x=1}^{n} p(x) = \frac{e^{\pi\sqrt{2n/3}}}{2\pi\sqrt{2n}}\left(1 + O(n^{-1/6})\right)$.

Then, dividing this by the asymptotic expression for $p(n)$ we get the following approximation:

  $\frac{1}{p(n)} \sum_{x=1}^{n} p(x) \approx 1 + \frac{\sqrt{6n}}{\pi}$,   (14)

which, although crude, is sufficient for our purposes. The key feature of (14) is that the value is not constant: it is $O(\sqrt{n})$. Using this approximation we obtain the following values for the number of read and write operations expected to transform a random partition of $n$ into its successor.

              Reads               Writes
  Ascending   2                   2
  Descending  $1 + 0.78\sqrt{n}$  $1 + 0.78\sqrt{n}$

We can see the qualitative difference between the algorithms by examining their read and write tapes in Figure 1. The tapes in question are generated by imagining that read and write heads mark a tape each time one of these operations is made. The horizontal position of each head is determined by the index of the array element involved. The tape is advanced one unit each time a composition is visited, and so we can see the number of read and write operations required for each individual partition generated.

Regarding Figure 1 then, and examining the read tape for RuleAsc, we can see that every partition requires exactly 2 reads; in contrast, the read tape for RuleDesc shows a maximum of $n - 1$ read operations per partition, and this oscillates rapidly as we move along the tape. Similarly, the write tape for RuleAsc shows that we sometimes need to make a long sequence of write operations to make the transition in question, but that these are compensated for, as our analysis has shown, by the occasions where we need only one write.
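The $O(\sqrt{n})$ growth of (14) is easy to observe numerically. The following sketch (illustrative code of ours, using a standard dynamic-programming table for $p(x)$ rather than the generators themselves) tabulates the exact average $\frac{1}{p(n)}\sum_{x=1}^{n} p(x)$ alongside the approximation $1 + \sqrt{6n}/\pi$:

```python
import math

def partition_counts(n):
    """Table p[0..n] of partition numbers, built by the standard
    dynamic programme over allowed part sizes."""
    p = [1] + [0] * n
    for part in range(1, n + 1):
        for total in range(part, n + 1):
            p[total] += p[total - part]
    return p

def avg_desc_ops(n):
    """Exact average reads (or writes) per partition for RuleDesc:
    (1/p(n)) * sum_{x=1}^{n} p(x), per (13)."""
    p = partition_counts(n)
    return sum(p[1:]) / p[n]

for n in (25, 50, 100):
    print(n, round(avg_desc_ops(n), 2), round(1 + math.sqrt(6 * n) / math.pi, 2))
```

The exact averages grow like $\sqrt{n}$, whereas the corresponding average for RuleAsc is the constant 2 for every $n$.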
The behaviour of the write head in RuleDesc is very similar to that of its read head, and we again see many transitions where a large number of writes are required.

The difference between RuleAsc and RuleDesc is not due to some algorithmic nuance; rather, it reflects a structural property of the objects in question. The total suffix length [23] of descending compositions is much greater than that of ascending compositions, because in many descending compositions the suffix consists of a sequence of 1s; and we know that the total number of 1s in all partitions of $n$ is $1 + \sum_{x=1}^{n-1} p(x)$. In this well-defined way, it is more efficient to generate all ascending compositions than it is to generate all descending compositions.

3.4 A new formula for p(n)

Although not strictly relevant to our analyses of ascending and descending composition generation algorithms, another result follows directly from the analysis of Algorithm 3.1. If we compare the lexicographic succession rule (10) and Algorithm 3.1 carefully, we realise that the $\mu$ copies of $m$ must be inserted into the array within the inner loop of lines 8–12; and our analysis has given us the precise number of times that this happens.
Therefore, we know that the sum of $\mu$ values over all ascending compositions of $n$ (except the last composition, $\langle n \rangle$) must equal the number of write operations made in the inner loop. Using this observation we then get the following theorem.

Figure 1: Read and write tapes for the direct implementations of the succession rules to generate ascending and descending compositions. On the left are the read and write tapes for the ascending composition generator, Algorithm 3.1; on the right are the corresponding tapes for the descending composition generator, Algorithm 3.2. In both cases, the traces correspond to the read and write operations carried out in generating all $p(12) = 77$ partitions of 12.

Theorem 3.6. For all $n \geq 1$,

  $p(n) = \frac{1}{2}\left(1 + n + \sum_{a_1 \ldots a_k \in A(n) \setminus \{\langle n \rangle\}} \left\lfloor \frac{a_{k-1} + a_k}{a_{k-1} + 1} \right\rfloor\right)$.   (15)

Proof. We know from Lemma 3.3 that the total number of write operations made by Algorithm 3.1 in the inner loop of lines 8–12 is given by $p(n) - 1$. Algorithm 3.1 applies the lexicographic succession rule above to all elements of $A(n) \setminus \{\langle n \rangle\}$, as well as one extra composition, which we refer to as the 'initialisation composition'. The initialisation composition is not in the set $A(n)$ as $a_1 = 0$, and so we must discount the number of writes incurred by applying the succession rule to this composition. The composition visited immediately after $0\,n$ is $1 \ldots 1$, and so $n - 1$ copies of 1 must have been inserted into the array in the inner loop during this transition. Therefore, the total number of writes made within the inner loop in applying the succession rule to all elements of $A(n) \setminus \{\langle n \rangle\}$ is given by $p(n) - 1 - (n - 1) = p(n) - n$. Therefore, from this result and the succession rule of Theorem 3.1 we get

  $p(n) - n = \sum_{a_1 \ldots a_k \in A(n) \setminus \{\langle n \rangle\}} \left(\left\lfloor \frac{a_{k-1} + a_k}{a_{k-1} + 1} \right\rfloor - 1\right)$,

from which it is easy to derive (15), completing the proof.

We can simplify (15) if we suppose that all $a_1 \ldots a_k \in A(n)$ are prefixed by a value 0.
More formally, a direct consequence of Theorem 3.6 is that

  $p(n) = \frac{1}{2}\left(1 + \sum_{a_1 \ldots a_k \in A'(n)} \left\lfloor \frac{a_{k-1} + a_k}{a_{k-1} + 1} \right\rfloor\right)$,   (16)

where $A'(n) = \{0 \cdot a_1 \ldots a_k \mid a_1 \ldots a_k \in A(n)\}$. Fundamentally, what Theorem 3.6 shows us is that if we let $y$ be the largest part and $x$ the second largest part in an arbitrary partition of $n$, we can count the partitions of $n$ by summing $\lfloor (x + y)/(x + 1) \rfloor$ over all partitions of $n$.

The partition function $p(n)$ is one of the most important functions in the theory of partitions and has been studied for several centuries [5]. The asymptotic [20] and arithmetic [2] properties of $p(n)$ have been very thoroughly examined. While (16) is clearly not an efficient means of computing $p(n)$, it may provide some new insight into this celebrated function.

4 Accelerated Algorithms

In this section we examine algorithms that use structural properties of the sets of ascending and descending compositions to reduce the number of read and write operations required. The algorithms presented are the most efficient known examples of ascending and descending composition generators, ensuring that we have a fair comparison of the algorithms arising from the two candidate encodings for partitions. In Section 4.1 we develop a new ascending composition generator that requires fewer read operations than RuleAsc. Then, in Section 4.2 we study the most efficient known descending composition generation algorithm, due to Zoghbi & Stojmenović [62], which requires far fewer read and write operations than RuleDesc. In Section 4.3, we compare these two algorithms to determine which of the two is more efficient.

4.1 Ascending Compositions

In this subsection we improve on RuleAsc (Algorithm 3.1) by applying the theory of 'terminal' and 'nonterminal' compositions.
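Identity (15) also lends itself to a brute-force numerical check. The sketch below is illustrative code of ours (the function names are not from the original analysis): it enumerates ascending compositions recursively and evaluates the sum of Theorem 3.6, which is an inefficient check rather than a practical way to compute $p(n)$:

```python
def ascending_compositions(n, m=1):
    """Yield every ascending composition of n whose first part is >= m."""
    if n == 0:
        yield []
        return
    for first in range(m, n + 1):
        for rest in ascending_compositions(n - first, first):
            yield [first] + rest

def p_via_identity(n):
    """Evaluate (15): p(n) = (1 + n + sum floor((a_{k-1}+a_k)/(a_{k-1}+1))) / 2,
    the sum taken over ascending compositions of n with at least two parts."""
    total = sum((a[-2] + a[-1]) // (a[-2] + 1)
                for a in ascending_compositions(n) if len(a) >= 2)
    return (1 + n + total) // 2
```

Prefixing each composition with a zero before taking the last two parts gives the simplified form (16) instead.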
To enable us to fully analyse the resulting algorithm we require an expression to enumerate terminal ascending compositions in terms of $p(n)$. In the opening part of this subsection we develop the theory of terminal and nonterminal compositions. A byproduct of this analysis is a new proof of a partition identity on the number of partitions where the largest part is less than twice the second largest part. After developing this necessary theory, we move on to the description of the algorithm itself, and its subsequent analysis.

4.1.1 Terminal and Nonterminal Compositions

The algorithm that we shall examine shortly uses some structure within the set of ascending compositions to make many transitions very efficient. This structure is based on the ideas of 'terminal' and 'nonterminal' compositions. We now define these concepts and derive some basic enumerative results to aid us in our analysis.

Definition 4.1 (Terminal Ascending Composition). For some positive integer $n$, an ascending composition $a_1 \ldots a_k \in A(n)$ is terminal if $k = 1$ or $2a_{k-1} \leq a_k$. Let $\mathcal{T}_A(n, m)$ denote the set of terminal compositions in $A(n, m)$, and $T_A(n, m)$ denote the cardinality of this set (i.e. $T_A(n, m) = |\mathcal{T}_A(n, m)|$).

Definition 4.2 (Nonterminal Ascending Composition). For some positive integer $n$, $a_1 \ldots a_k \in A(n)$ is nonterminal if $k > 1$ and $2a_{k-1} > a_k$. Let $\mathcal{N}_A(n, m)$ denote the set of nonterminal compositions in $A(n, m)$, and let $N_A(n, m)$ denote the cardinality of this set (i.e. $N_A(n, m) = |\mathcal{N}_A(n, m)|$).

If we let $A(n, m)$ denote the number of ascending compositions of $n$ where the initial part is at least $m$, it can be shown [22, ch.3] that

  $A(n, m) = 1 + \sum_{x=m}^{\lfloor n/2 \rfloor} A(n - x, x)$   (17)

holds for all positive integers $m \leq n$.
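Recurrence (17) is straightforward to check against the partition numbers, since $A(n, 1) = p(n)$. A memoized sketch (illustrative code of ours):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def A(n, m):
    """Number of ascending compositions of n with first part at least m,
    via recurrence (17); the leading '1 +' counts <n> itself, and the
    summation is empty whenever 2m > n."""
    return 1 + sum(A(n - x, x) for x in range(m, n // 2 + 1))
```

For instance, $A(6, 3) = 2$, counting $\langle 3,3 \rangle$ and $\langle 6 \rangle$.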
We require a similar recurrence to enumerate the terminal ascending compositions, and so we let $T_A(n, m)$ denote the number of terminal compositions in the set $A(n, m)$. The terminal ascending compositions are a subset of the ascending compositions, and the construction rule implied is the same: the number of terminal ascending compositions of $n$ where the initial part is exactly $m$ is equal to the number of terminal compositions of $n - m$ with initial part at least $m$. The only difference, then, between the recurrences for ascending compositions and terminal ascending compositions occurs in the boundary conditions. The recurrence can be stated as follows: for all positive integers $m \leq n$, $T_A(n, m)$ satisfies

  $T_A(n, m) = 1 + \sum_{x=m}^{\lfloor n/3 \rfloor} T_A(n - x, x)$.   (18)

See Kelleher [22, p.160–161] for the proofs of recurrences (17) and (18).

Before we move on to the main result, where we prove that $T_A(n, m) = A(n, m) - A(n - 2, m)$, we require some auxiliary results which simplify the proof of this assertion. In Lemma 4.1 we prove an equivalence between logical statements of a particular form involving the floor function, which is useful in Lemma 4.2; the latter lemma then provides the main inductive step in our proof of the central theorem of this section. In the interest of brevity, we limit our proofs to values of $n > 3$, since $n \leq 3$ can be easily demonstrated and would unnecessarily complicate the proofs.

Lemma 4.1. If $x$, $m$ and $n$ are positive integers then $x \leq \lfloor (n - x)/m \rfloor \iff x \leq \lfloor n/(m + 1) \rfloor$.

Proof. Suppose $x$, $m$ and $n$ are positive integers. Suppose $x \leq \lfloor (n - x)/m \rfloor$. Thus, $x \leq (n - x)/m$, and so $x \leq n/(m + 1)$. Then, as $x$ is an integer, we know that $x \leq \lfloor n/(m + 1) \rfloor$, and so $x \leq \lfloor (n - x)/m \rfloor \implies x \leq \lfloor n/(m + 1) \rfloor$. Suppose now that $x \leq \lfloor n/(m + 1) \rfloor$.
Then, $x \leq n/(m + 1)$, and so $x \leq (n - x)/m$. Once again, as $x$ is an integer it is apparent that $x \leq \lfloor (n - x)/m \rfloor$, and so $x \leq \lfloor n/(m + 1) \rfloor \implies x \leq \lfloor (n - x)/m \rfloor$. Therefore, as the implication holds in both directions, we see that $x \leq \lfloor (n - x)/m \rfloor \iff x \leq \lfloor n/(m + 1) \rfloor$, as required.

Lemma 4.2. For all positive integers $n > 3$,

  $\sum_{x=\lfloor n/3 \rfloor + 1}^{\lfloor n/2 \rfloor} A(n - x, x) = 1 + \sum_{x=\lfloor n/3 \rfloor + 1}^{\lfloor (n-2)/2 \rfloor} A(n - 2 - x, x)$.   (19)

Proof. Suppose $n > 3$, and consider the left-hand side of (19). We know that $A(n, m) = 1$ if $m > \lfloor n/2 \rfloor$, as the summation in recurrence (17) will be empty. By the contrapositive of Lemma 4.1 we know that $x > \lfloor (n - x)/2 \rfloor \iff x > \lfloor n/3 \rfloor$, and we therefore know that each term in the summation of the left-hand side of (19) is equal to 1. Thus, we see that

  $\sum_{x=\lfloor n/3 \rfloor + 1}^{\lfloor n/2 \rfloor} A(n - x, x) = \lfloor n/2 \rfloor - \lfloor n/3 \rfloor$.   (20)

Similarly, as $x > \lfloor n/3 \rfloor \implies x > \lfloor (n - x)/2 \rfloor$, it clearly follows that $x > \lfloor n/3 \rfloor \implies x > \lfloor (n - 2 - x)/2 \rfloor$. Thus, each term in the summation on the right-hand side of (19) must also equal 1, and so we get

  $1 + \sum_{x=\lfloor n/3 \rfloor + 1}^{\lfloor (n-2)/2 \rfloor} A(n - 2 - x, x) = 1 + \lfloor (n - 2)/2 \rfloor - \lfloor n/3 \rfloor = \lfloor n/2 \rfloor - \lfloor n/3 \rfloor$.   (21)

Therefore, as (20) and (21) show that the left-hand and right-hand sides of (19) are equal, the proof is complete.

Theorem 4.1. If $n \geq 3$, then $T_A(n, m) = A(n, m) - A(n - 2, m)$ for all $1 \leq m \leq \lfloor n/2 \rfloor$.

Proof. Proceed by strong induction on $n$.

Base case: $n = 3$. As $1 \leq m \leq \lfloor n/2 \rfloor$ and $n = 3$, we know that $m = 1$. Computing $T_A(3, 1)$, we get $1 + T_A(2, 1) = 2$. We also find $A(3, 1) = 3$ and $A(1, 1) = 1$, and so the base case of the induction holds.
Induction step. Suppose $T_A(n', m) = A(n', m) - A(n' - 2, m)$ when $1 \leq m \leq \lfloor n'/2 \rfloor$, for all $3 \leq n' < n$. Then, as $x \leq \lfloor (n - x)/2 \rfloor \iff x \leq \lfloor n/3 \rfloor$ by Lemma 4.1, we can apply this inductive hypothesis to each term $T_A(n - x, x)$ in (18), giving us

  $T_A(n, m) = 1 + \sum_{x=m}^{\lfloor n/3 \rfloor} \left(A(n - x, x) - A(n - 2 - x, x)\right) = 1 + \sum_{x=m}^{\lfloor n/3 \rfloor} A(n - x, x) - \sum_{x=m}^{\lfloor n/3 \rfloor} A(n - 2 - x, x)$.   (22)

By Lemma 4.2 we know that

  $\sum_{x=\lfloor n/3 \rfloor + 1}^{\lfloor n/2 \rfloor} A(n - x, x) - \sum_{x=\lfloor n/3 \rfloor + 1}^{\lfloor (n-2)/2 \rfloor} A(n - 2 - x, x) - 1 = 0$,

and so we can add the left-hand side of this equation to the right-hand side of (22), to get

  $T_A(n, m) = 1 + \sum_{x=m}^{\lfloor n/3 \rfloor} A(n - x, x) - \sum_{x=m}^{\lfloor n/3 \rfloor} A(n - 2 - x, x) + \sum_{x=\lfloor n/3 \rfloor + 1}^{\lfloor n/2 \rfloor} A(n - x, x) - \sum_{x=\lfloor n/3 \rfloor + 1}^{\lfloor (n-2)/2 \rfloor} A(n - 2 - x, x) - 1$.

Then, gathering the terms $A(n - x, x)$ and $A(n - 2 - x, x)$ into the appropriate summations we get

  $T_A(n, m) = 1 + \sum_{x=m}^{\lfloor n/2 \rfloor} A(n - x, x) - 1 - \sum_{x=m}^{\lfloor (n-2)/2 \rfloor} A(n - 2 - x, x)$,

which by (17) gives us $T_A(n, m) = A(n, m) - A(n - 2, m)$, as required.

For the purposes of our analysis it is useful to know the total number of terminal and nonterminal compositions of $n$, and it is worthwhile formalising the results here for reference. Therefore, letting $T_A(n) = T_A(n, 1)$ and $N_A(n) = N_A(n, 1)$, we get the following corollaries defined in terms of the partition function $p(n)$.

Corollary 4.1. For all positive integers $n$, $T_A(n) = p(n) - p(n - 2)$.

Proof. As $T_A(n) = T_A(n, 1)$ and $A(n, 1) = p(n)$, the proof is immediate by Theorem 4.1 for all $n \geq 3$. Since $p(n) = 0$ for all $n < 0$ and $p(0) = 1$, we can readily verify that $T_A(2) = T_A(1) = 1$, as required.

Corollary 4.2. If $n$ is a positive integer then $N_A(n) = p(n - 2)$.

Proof. An ascending composition is either terminal or nonterminal.
As the total number of ascending compositions of $n$ is given by $p(n)$, we get $N_A(n) = p(n) - (p(n) - p(n - 2)) = p(n - 2)$, as required.

Corollaries 4.1 and 4.2 prove a nontrivial structural property of the set of all ascending compositions, and can be phrased in more conventional partition-theoretic language. Consider an arbitrary partition of $n$, and let $y$ be the largest part in this partition. We then let $x$ be the second largest part ($x \leq y$). Corollary 4.2 then shows that the number of partitions of $n$ where $2x > y$ is equal to the number of partitions of $n - 2$. This result is known, and has been reported by Adams-Watters [51, Seq. A027336]. The preceding treatment, however, would appear to be the first published proof of the identity.

Algorithm 4.1 AccelAsc(n)
Require: n ≥ 1
 1: k ← 2
 2: a_1 ← 0
 3: y ← n − 1
 4: while k ≠ 1 do
 5:   k ← k − 1
 6:   x ← a_k + 1
 7:   while 2x ≤ y do
 8:     a_k ← x
 9:     y ← y − x
10:     k ← k + 1
11:   end while
12:   ℓ ← k + 1
13:   while x ≤ y do
14:     a_k ← x
15:     a_ℓ ← y
16:     visit a_1 . . . a_ℓ
17:     x ← x + 1
18:     y ← y − 1
19:   end while
20:   y ← y + x − 1
21:   a_k ← y + 1
22:   visit a_1 . . . a_k
23: end while

4.1.2 Algorithm

Having derived some theoretical results about the terminal and nonterminal ascending compositions of $n$, we are now in a position to exploit those properties in a generation algorithm. In the direct implementation of the lexicographic succession rule for ascending compositions, RuleAsc, we generate the successor of $a_1 \ldots a_k$ by computing the lexicographically least element of the set $A(a_{k-1} + a_k, a_{k-1} + 1)$, and visit the resulting composition. The algorithm operates by implementing exactly one transition per iteration of the main loop.
The accelerated algorithm, AccelAsc, developed here operates on a slightly different principle: we compute the lexicographically least composition of $A(a_{k-1} + a_k, a_{k-1} + 1)$, as before, but we now keep a watchful eye to see if the resulting composition is nonterminal. If it is, we can compute the lexicographic successor simply by incrementing $a_{k-1}$ and decrementing $a_k$. Otherwise, we revert to the standard means of computing the lexicographic successor. By analysing this algorithm, we shall see that this approach provides significant gains. We concentrate on the analysis of Algorithm 4.1 here; see Kelleher [22, §4.4.2] for further discussion and proof of correctness.

Lemma 4.3. The number of times line 16 is executed during the execution of Algorithm 4.1 is given by $t_{16}(n) = p(n - 2)$.

Proof. Compositions visited on line 16 must be nonterminal because upon reaching line 12, the condition $2x > y$ must hold. As $x$ and $y$ are the second-last and last parts, respectively, of the composition visited on line 16, this composition must be nonterminal by definition. Subsequent operations on $x$ and $y$ within this loop do not alter the property that $2x > y$, and so all compositions visited on line 16 must be nonterminal.

Furthermore, we also know that all compositions visited on line 22 must be terminal. To demonstrate this fact, we note that if $a_1 \ldots a_k$ is the last composition visited before we arrive at line 20, the composition visited on line 22 must be $a_1 \ldots a_{k-2} \langle a_{k-1} + a_k \rangle$. Therefore, to demonstrate that this composition is terminal, we must show that $2a_{k-2} \leq a_{k-1} + a_k$. We know that $a_{k-2} \leq a_{k-1} \leq a_k$. It follows that $2a_{k-2} \leq 2a_{k-1}$, and also that $2a_{k-1} \leq a_{k-1} + a_k$. Combining these two inequalities, we see that $2a_{k-2} \leq 2a_{k-1} \leq a_{k-1} + a_k$, and so $2a_{k-2} \leq a_{k-1} + a_k$.
Thus all compos itions visited on line 22 m ust be terminal. Then, as Algorithm 4 .1 correc tly vis its all p ( n ) as cending comp ositions of n [22, p.105], since all comp ositions visited on line 22 are terminal and as all comp ositions visited on line 16 ar e nonterminal, we know that all nonterminal comp ositions of n m ust b e visited on line 16. By Corollary 4.2 there are p ( n − 2) nonterminal comp ositions o f n , and hence t 16 = p ( n − 2 ), as requir ed. Lemma 4.4. The numb er of times line 5 is ex e cu t e d du ring t he exe cution of Algo rithm 4.1 is given by t 5 ( n ) = p ( n ) − p ( n − 2) . Pr o of. By L emma 4.3 we know that the v isit s tatement on line 16 is executed p ( n − 2) times. As Algorithm 4.1 correctly visits all p ( n ) ascending comp ositions of n , then the remaining p ( n ) − p ( n − 2) compo sitions must be visited on line 22. Clearly then, line 22 (and hence line 5 ) is executed p ( n ) − p ( n − 2) times. Therefore, t 5 = p ( n ) − p ( n − 2 ), as required. Lemma 4.5. The numb er of times line 10 is ex e cu t e d during the ex e cut ion of Algo rithm 4.1 is given by t 10 ( n ) = p ( n ) − p ( n − 2) − 1 . Pr o of. The v a riable k is assigned the v alue 2 up o n initialis ation, and the algo - rithm terminates when k = 1 . As the v ariable is only updated via increment (line 10) and decrement (line 5) op eratio ns, we know that there must b e one more dec rement op eratio n tha n incr ements. By Lemma 4.4 we know that there are p ( n ) − p ( n − 2) decrements, a nd so ther e must b e p ( n ) − p ( n − 2) − 1 increment s on the v ariable. Therefo re, t 10 = p ( n ) − p ( n − 2) − 1. Theorem 4.2 . Algorithm 4.1 r e quir es R A 4 . 1 ( n ) = p ( n ) − p ( n − 2) r e ad op er a- tions t o gener ate the set A ( n ) . Pr o of. Only one read oper ation o ccurs Algorithm 4.1, and this is done on line 6. 
By Lemma 4.4 we know that line 5 is executed p(n) − p(n − 2) times, and it immediately follows that line 6 is executed the same number of times. Therefore, R_{A4.1}(n) = p(n) − p(n − 2), as required.

Theorem 4.3. Algorithm 4.1 requires W_{A4.1}(n) = 2p(n) − 1 write operations to generate the set A(n), excluding initialisation.

Proof. Write operations are performed on lines 8, 14, 15 and 21. Lemma 4.4 shows that line 21 is executed p(n) − p(n − 2) times. From Lemma 4.5 we know that line 8 is executed p(n) − p(n − 2) − 1 times. Then, by Lemma 4.3 we know that lines 14 and 15 are executed p(n − 2) times each. Summing these contributions we get

W_{A4.1}(n) = p(n) − p(n − 2) + p(n) − p(n − 2) − 1 + 2p(n − 2) = 2p(n) − 1,

as required.

Theorems 4.2 and 4.3 derive the precise number of read and write operations required to generate all partitions of n using Algorithm 4.1. This algorithm is a considerable improvement over our basic implementation of the succession rule, Algorithm 3.1, in two ways. Firstly, by keeping p(n − 2) of the visit operations within the loop of lines 13–19, we significantly reduce the average cost of a write operation. Thus, although we do not appreciably reduce the total number of write operations involved, we ensure that 2p(n − 2) of those writes are executed at the cost of an increment and decrement on a local variable plus a ≤ comparison of two local variables; in short, very cheaply. The second improvement is that we dramatically reduce the total number of read operations involved. Recall that RuleAsc required 2p(n) read operations to generate all ascending compositions of n; Theorem 4.2 shows that AccelAsc requires only p(n) − p(n − 2) read operations.
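These counts are easy to confirm empirically. The sketch below instruments a 0-indexed Python transcription of Algorithm 4.1 with counters for the two visit statements and for every write to the array, then checks them against Lemmas 4.3 and 4.4 and Theorem 4.3; the counter and function names are ours.

```python
def accel_asc_stats(n):
    """Run Algorithm 4.1 (0-indexed) and tally line-16 visits,
    line-22 visits, and writes to the array a."""
    a = [0] * (n + 1)
    k, y = 1, n - 1
    inner = outer = writes = 0
    while k != 0:
        x = a[k - 1] + 1
        k -= 1
        while 2 * x <= y:
            a[k] = x                  # write on line 8
            writes += 1
            y -= x
            k += 1
        l = k + 1
        while x <= y:
            a[k], a[l] = x, y         # writes on lines 14-15
            writes += 2
            inner += 1                # visit on line 16 (nonterminal)
            x += 1
            y -= 1
        a[k] = x + y                  # write on line 21
        writes += 1
        y = x + y - 1
        outer += 1                    # visit on line 22 (terminal)
    return inner, outer, writes

def p(n):
    """Partition function, recovered from the generator's own visit totals."""
    if n < 2:
        return 1
    inner, outer, _ = accel_asc_stats(n)
    return inner + outer

for n in range(2, 25):
    inner, outer, writes = accel_asc_stats(n)
    assert inner == p(n - 2)               # Lemma 4.3
    assert outer == p(n) - p(n - 2)        # Lemma 4.4 (line-22 count)
    assert writes == 2 * p(n) - 1          # Theorem 4.3
```

For n = 12, for instance, the tallies are p(10) = 42 nonterminal visits, p(12) − p(10) = 35 terminal visits, and 2p(12) − 1 = 153 writes.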
We also reduced the number of read operations by a factor of 2 by maintaining the value of y between iterations of the main while loop, but this could equally be applied to RuleAsc, and is only a minor improvement at any rate. The real gain here is obtained from exploiting the block-based nature of the set of ascending compositions, as we do not need to perform any read operations once we have begun iterating through the nonterminal compositions within a block.

4.2 Descending Compositions

In Section 3.2 we derived a direct implementation of the succession rule for descending compositions. We then analysed the cost of using this direct implementation to generate all descending compositions of n, and found that it implied an average of O(√n) read and write operations per partition. There are, however, several constant amortised time algorithms to generate descending compositions, and in this section we study the most efficient example.

There is one basic problem with the direct implementation of the succession rule for descending compositions (RuleDesc): most of the read and write operations it makes are redundant. To begin with, the read operations incurred by RuleDesc in scanning the current composition to find the rightmost non-1 value are unnecessary. As McKay [34] noted, we can easily keep track of the index of the rightmost non-1 value between iterations, and thereby eliminate the right-to-left scan altogether. The means by which we can avoid the majority of the write operations is a little more subtle, and was first noted by Zoghbi & Stojmenović [62]. For instance, consider the transition

3321111 → 33111111.    (23)

RuleDesc implements the transition from 3321111 to 33111111 by finding the prefix 33 and writing six copies of 1 after it, oblivious to the fact that 4 of the array indices already contain 1.
Thus, a more reasonable approach is to make a special case in the succession rule so that if d_q = 2, we simply set d_q ← 1 and append 1 to the end of the composition. This observation proves to be sufficient to remove the worst excesses of RuleDesc, as 1s are by far the most numerous part in the partitions of n. Zoghbi & Stojmenović's algorithm implements both of these ideas, and makes one further innovation to reduce the number of write operations required. By initialising the array to hold n copies of 1, we know that any index > k must contain the value 1, and so we can save another write operation in the special case of d_q = 2 outlined above. Thus, Zoghbi & Stojmenović's algorithm is the most efficient example, and consequently it is the algorithm that we shall use for our comparative analysis. Knuth developed a similar algorithm [27, p.2]: he also noted the necessity of keeping track of the value of q between iterations, and also implemented the special case for d_q outlined above. Knuth's algorithm, however, does not contain the further improvement included by Zoghbi & Stojmenović (i.e. initialising the array to 1 . . . 1 and avoiding the second write operation in the d_q = 2 special case), and therefore requires strictly more write operations than Zoghbi & Stojmenović's. Zoghbi & Stojmenović's algorithm also consistently outperforms Knuth's algorithm in empirical tests.

Zoghbi & Stojmenović's algorithm is presented in Algorithm 4.2, which we shall also refer to as AccelDesc. Each iteration of the main loop implements a single transition, and two cases are identified for performing the transition. In the conditional block of lines 8–10 we implement the special case for d_q = 2: we can see that the length of the composition is incremented, d_q is assigned the value 1, and the value of q is updated to point to the new rightmost non-1 part.
The general case is dealt with in the block of lines 11–29; the approach is much the same as that of RuleDesc, except in this case we have the additional complexity of maintaining the value of q between iterations.

Algorithm 4.2 AccelDesc(n)
Require: n ≥ 1
1: k ← 1
2: q ← 1
3: d_2 . . . d_n ← 1 . . . 1
4: d_1 ← n
5: visit d_1
6: while q ≠ 0 do
7:   if d_q = 2 then
8:     k ← k + 1
9:     d_q ← 1
10:    q ← q − 1
11:  else
12:    m ← d_q − 1
13:    n′ ← k − q + 1
14:    d_q ← m
15:    while n′ ≥ m do
16:      q ← q + 1
17:      d_q ← m
18:      n′ ← n′ − m
19:    end while
20:    if n′ = 0 then
21:      k ← q
22:    else
23:      k ← q + 1
24:      if n′ > 1 then
25:        q ← q + 1
26:        d_q ← n′
27:      end if
28:    end if
29:  end if
30: visit d_1 . . . d_k
31: end while

Lemma 4.6. The number of times line 10 is executed during the execution of Algorithm 4.2 is given by t_10(n) = p(n − 2).

Proof. The variable q points to the smallest non-1 value in d_1 . . . d_k, and we have a complete descending composition in the array each time we reach line 7. Therefore, line 10 will be executed once for every descending composition of n which contains at least one 2; and it is well known that this is p(n − 2). Therefore, t_10(n) = p(n − 2), as required.

Lemma 4.7. The combined number of times lines 16 and 25 are executed during the execution of Algorithm 4.2 is given by t_16(n) + t_25(n) = p(n − 2) − 1.

Proof. The variable q controls the termination of the algorithm. It is initialised to 1 on line 2, and the algorithm terminates when q = 0. We modify q via increment operations on lines 16 and 25, and decrement operations on line 10 only. Therefore, there must be one more decrement operation than increments on q. By Lemma 4.6 there are p(n − 2) decrements performed on q, and there must therefore be p(n − 2) − 1 increments. Therefore, t_16(n) + t_25(n) = p(n − 2) − 1, as required.

Theorem 4.4. Algorithm 4.2 requires R_{A4.2}(n) = 2p(n) − p(n − 2) − 2 read operations to generate the set D(n).

Proof. Read operations are performed on lines 7 and 12 of Algorithm 4.2. Clearly, as all but the composition ⟨n⟩ are visited on line 30, line 7 is executed p(n) − 1 times. Then, as a consequence of Lemma 4.6, we know that line 12 is executed p(n) − p(n − 2) − 1 times. Therefore, the total number of read operations is given by R_{A4.2}(n) = 2p(n) − p(n − 2) − 2, as required.

Theorem 4.5. Algorithm 4.2 requires W_{A4.2}(n) = p(n) + p(n − 2) − 2 write operations to generate the set D(n), excluding initialisation.

Proof. After initialisation, write operations are performed on lines 9, 14, 17 and 26 of Algorithm 4.2. Line 9 contributes p(n − 2) writes by Lemma 4.6; and similarly, line 14 is executed p(n) − p(n − 2) − 1 times. By Lemma 4.7 we know that the total number of write operations incurred by lines 17 and 26 is p(n − 2) − 1. Therefore, summing these contributions we get W_{A4.2}(n) = p(n) + p(n − 2) − 2, as required.

Theorems 4.4 and 4.5 show that Zoghbi & Stojmenović's algorithm is a vast improvement on RuleDesc. Recall that RuleDesc(n) requires roughly Σ_{x=1}^{n} p(x) read and Σ_{x=1}^{n} p(x) write operations; and we have seen that AccelDesc(n) requires only 2p(n) − p(n − 2) read and p(n) + p(n − 2) write operations.

Zoghbi & Stojmenović [62] also provided an analysis of AccelDesc, and proved that it generates partitions in constant amortised time. We briefly summarise this analysis to provide some perspective on the approach we have taken. Zoghbi & Stojmenović begin their analysis by demonstrating that D(n, m) ≥ n²/12 for all m > 2, where D(n, m) enumerates the descending compositions of n in which the initial part is no more than m.
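For reference, Algorithm 4.2 admits the same kind of transcription into Python as Algorithm 4.1. In the sketch below (0-indexed, with names of our choosing), q indexes the rightmost part larger than 1 and is −1 when there is none; the guard for n = 1, which the pseudocode's q ← 1 initialisation does not cover, is our addition.

```python
def accel_desc(n):
    """Visit every partition of n >= 1 exactly once, as a descending
    composition in reverse lexicographic order (Algorithm 4.2, 0-indexed)."""
    d = [1] * n                    # the all-1s initialisation trick
    d[0] = n
    k = 1                          # current length of the composition
    q = 0 if n > 1 else -1         # rightmost part larger than 1
    yield d[:k]
    while q >= 0:
        if d[q] == 2:              # lines 8-10: replace a 2 by two 1s
            k += 1
            d[q] = 1
            q -= 1
        else:                      # lines 12-28: split d[q] into parts of size m
            m = d[q] - 1
            rem = k - q            # n' in the pseudocode
            d[q] = m
            while rem >= m:
                q += 1
                d[q] = m
                rem -= m
            if rem == 0:
                k = q + 1
            else:
                k = q + 2          # the trailing 1 is already in the array
                if rem > 1:
                    q += 1
                    d[q] = rem
        yield d[:k]
```

For example, list(accel_desc(4)) produces [4], [3, 1], [2, 2], [2, 1, 1], [1, 1, 1, 1].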
Zoghbi & Stojmenović use this bound to reason that, for each d_q > 2 encountered, the total number of iterations of the internal while loop is < 2c, for some constant c. Thus, since the number of iterations of the internal loop is constant whenever d_q ≥ 3 (the case for d_q = 2 obviously requires constant time), the algorithm generates descending compositions in constant amortised time.

The preceding paragraph is not a rigorous argument proving that AccelDesc is constant amortised time. It is intended only to illustrate the difference between the approach we have taken in this section and Zoghbi & Stojmenović's analysis, and perhaps highlight some of the advantages of using Kemp's abstract model of counting read and write operations [23]. By using Kemp's model we were able to ignore irrelevant details regarding the algorithm's implementation, and concentrate instead on the algorithm's effect: reading and writing parts in compositions.

4.3 Comparison

Considering AccelAsc (Algorithm 4.1) first, we derived the following numbers of read and write operations required to generate all ascending compositions of n, ignoring inconsequential trailing terms:

R_{A4.1}(n) ≈ p(n) − p(n − 2)  and  W_{A4.1}(n) ≈ 2p(n).    (24)

We can see that the total number of write operations is 2p(n); i.e., the total number of write operations is twice the total number of partitions generated. On the other hand, the total number of read operations required is only p(n) − p(n − 2), which, as we shall see presently, is asymptotically negligible in comparison to p(n). The number of read operations is small because we only require one read operation per iteration of the outer loop.
Once we have stored a_{k−1} in a local variable, we can then extend the composition as necessary and visit all of the following nonterminal compositions without needing to perform a read operation. Thus, it is the write operations that dominate the cost of generation with this algorithm and, as we noted earlier, the average cost of a write operation in this algorithm is quite small.

For the descending composition generator, AccelDesc (Algorithm 4.2), the following read and write totals were derived (we ignore the insignificant trailing terms in both cases):

R_{A4.2}(n) ≈ 2p(n) − p(n − 2)  and  W_{A4.2}(n) ≈ p(n) + p(n − 2).    (25)

The total number of write operations required by this algorithm to generate all partitions of n is p(n) + p(n − 2). Although this value is strictly less than the write total for AccelAsc, the difference is not asymptotically significant, as p(n − 2)/p(n) tends towards 1 as n becomes large. Therefore, we should not expect any appreciable difference between the performances of the two algorithms in terms of the number of write operations involved. There is, however, an asymptotically significant difference in the number of read operations performed by the algorithms. The total number of read operations required by AccelDesc is 2p(n) − p(n − 2). This expression is complicated by an algorithmic consideration, where it proved to be more efficient to perform p(n) − p(n − 2) extra read operations than to save the relevant value in a local variable. Essentially, AccelDesc needs to perform one read operation for every iteration of the external loop, to determine the value of d_q. If d_q = 2 we execute the special case and quickly generate the next descending composition; otherwise, we apply the general case.
We cannot keep the value of d_q locally because the value of q changes constantly, and so we do not spend significant periods of time operating on the same array indices, as we do in AccelAsc. Thus, we must read the value of d_q for every transition, and we can therefore simplify by saying that AccelDesc(n) requires p(n) read operations. In the interest of the fairest possible comparison between ascending and descending composition generation algorithms, let us therefore simplify, and assume that any descending composition generation algorithm utilising the same properties as AccelDesc requires p(n) read operations. We know from (24) that our ascending composition generation algorithm requires only p(n) − p(n − 2) reads. We can therefore expect that an ascending composition generator will require p(n − 2) fewer read operations than a descending composition generator similar to AccelDesc. Other things being equal, we should expect a significant difference between the total time required to generate all partitions using an ascending composition generation algorithm and a commensurable descending composition generator.

We can gain a qualitative idea of the differences involved if we examine the average numbers of read and write operations using the asymptotic values of p(n). Again, to determine the average number of read and write operations required per partition generated we must divide the totals involved by p(n). We stated earlier that the value of p(n) − p(n − 2) is asymptotically negligible compared to p(n); we can quantify this statement using the asymptotic formulas for p(n). Knuth [27, p.11] provides an approximation of p(n − 2)/p(n), which can be expressed as follows:

p(n − 2)/p(n) ≈ e^{−2π/√(6n)}.    (26)

Using this approximation, we obtain the following estimates for the average number of read and write operations required to generate each ascending and descending composition of n.

             Reads                 Writes
Ascending    1 − e^{−2π/√(6n)}     2
Descending   1                     1 + e^{−2π/√(6n)}

Suppose we wished to generate all partitions of 1000. Then, using the best known descending composition generation algorithm, we would expect to make 1 read and 1.92 write operations per partition generated. On the other hand, if we used AccelAsc, we would expect to make only 0.08 read and 2 write operations per partition.

The qualitative behaviour of AccelAsc and AccelDesc can be seen from their read and write tapes (Figure 2). Comparing the write tapes for the algorithms, we can see that the total number of write operations is roughly equal in both algorithms, although they follow an altogether different spatial pattern. The read tapes for the algorithms, however, demonstrate the essential difference between the algorithms: AccelDesc makes one read operation for every partition generated, while the read operations for AccelAsc are sparsely distributed across the tape.

We have derived expressions to count the total number of read and write operations required to generate all partitions of n using AccelAsc and AccelDesc. We can now use these expressions to make some quantitative predictions about the relative efficiencies of the algorithms. If we assume that the costs of read and write operations are equal, we can then derive a prediction for the ratio of the total time elapsed using both algorithms. Therefore, let E_{4.1}(n) be the expected total running time of AccelAsc(n), and similarly define E_{4.2}(n) for AccelDesc(n). We can then predict that the ratio of the running times should be equal to the ratio of their total read and write counts.
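The n = 1000 figures above are easy to reproduce. The sketch below computes p(n) exactly with Euler's pentagonal-number recurrence (a standard identity, not one of the paper's algorithms) and compares the true ratio p(n − 2)/p(n) with approximation (26).

```python
import math

def partition_numbers(n):
    """Return the list p(0), ..., p(n) via Euler's pentagonal-number
    recurrence: p(i) = sum_k (-1)^(k+1) [p(i - k(3k-1)/2) + p(i - k(3k+1)/2)]."""
    p = [1] + [0] * n
    for i in range(1, n + 1):
        k, sign = 1, 1
        while k * (3 * k - 1) // 2 <= i:
            p[i] += sign * p[i - k * (3 * k - 1) // 2]
            if k * (3 * k + 1) // 2 <= i:
                p[i] += sign * p[i - k * (3 * k + 1) // 2]
            k += 1
            sign = -sign
    return p

n = 1000
p = partition_numbers(n)
exact = p[n - 2] / p[n]                             # true ratio p(998)/p(1000)
approx = math.exp(-2 * math.pi / math.sqrt(6 * n))  # approximation (26)
asc_reads = 1 - approx                              # rounds to 0.08 reads/partition
desc_writes = 1 + approx                            # rounds to 1.92 writes/partition
```

At n = 1000 the exact and approximate ratios agree to roughly two decimal places, which is all the per-partition estimates above require.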
Using the totals in (24) and (25), we get

E_{4.1}(n) / E_{4.2}(n) = (3p(n) − p(n − 2)) / 3p(n).    (27)

[Figure 2 tape plots omitted; panel labels in the original: Read (35), Write (154), (77), Read (119), Write, AccelAsc(12), p(12) = 77, AccelDesc(12).]

Figure 2: Read and write tapes for the accelerated algorithms to generate ascending and descending compositions. On the left we have the read and write tapes for the ascending composition generator, Algorithm 4.1; on the right, then, are the corresponding tapes for the descending composition generator, Algorithm 4.2. In both cases, the traces correspond to the read and write operations carried out in generating all partitions of 12.

Table 2: Empirical analysis of accelerated ascending and descending composition generation algorithms. The ratio of the time required to generate all partitions of n using AccelAsc and AccelDesc is given: measured ratios for implementations in the Java and C languages as well as the theoretically predicted ratio are shown.

n     p(n)          Java   C      Theoretical
100   1.91 × 10^8   0.85   0.77   0.74
105   3.42 × 10^8   0.85   0.77   0.74
110   6.07 × 10^8   0.84   0.75   0.74
115   1.06 × 10^9   0.84   0.75   0.73
120   1.84 × 10^9   0.83   0.75   0.73
125   3.16 × 10^9   0.83   0.74   0.73
130   5.37 × 10^9   0.83   0.74   0.73
135   9.04 × 10^9   0.82   0.74   0.73

r_Java = 0.9891    r_C = 0.9321

Consequently, we expect that the total amount of time required to generate all ascending compositions of n should be a factor of p(n − 2)/3p(n) less than that required to generate all descending compositions of n. To test this hypothesis we measured the total elapsed time required to generate all partitions of n using AccelAsc and AccelDesc, using the methodology outlined in Section 2.3. We report the ratio of these times in Table 2, for both the C and Java implementations of the algorithms.

Table 2 supports our qualitative predictions well. The theoretical analysis of ascending and descending composition generation algorithms in this section suggests that the ascending composition generator should require significantly less time to generate all partitions of n than its descending composition counterpart; and the data of Table 2 supports this prediction. In the Java implementations, the ascending composition generator requires 15% less time to generate all partitions of 100 than the descending composition generation algorithm; in the C version, the difference is around 23%. These differences increase as the value of n increases: when n = 135, we see that AccelAsc requires 18% and 26% less time than AccelDesc in the Java and C implementations, respectively.

We also made a quantitative prediction about the ratio of the time required to generate all partitions of n using AccelAsc and AccelDesc. Using the theoretical analysis, where we counted the total number of read and write operations required by these algorithms, we can predict the expected ratio of the time required by both algorithms.
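The Theoretical column of Table 2 can be recomputed directly from equation (27). The sketch below does so, using Euler's pentagonal-number recurrence (a standard identity, not one of the paper's algorithms) to obtain the exact values of p(n).

```python
def partition_numbers(n):
    """Return the list p(0), ..., p(n) via Euler's pentagonal-number
    recurrence."""
    p = [1] + [0] * n
    for i in range(1, n + 1):
        k, sign = 1, 1
        while k * (3 * k - 1) // 2 <= i:
            p[i] += sign * p[i - k * (3 * k - 1) // 2]
            if k * (3 * k + 1) // 2 <= i:
                p[i] += sign * p[i - k * (3 * k + 1) // 2]
            k += 1
            sign = -sign
    return p

p = partition_numbers(135)
# Predicted ratio E_4.1(n)/E_4.2(n) of equation (27) for each n in Table 2
ratios = {n: (3 * p[n] - p[n - 2]) / (3 * p[n]) for n in range(100, 140, 5)}
# e.g. n = 100 gives about 0.737, which rounds to the tabulated 0.74
```

The ratios decrease slowly with n, exactly as the Theoretical column does, because p(n − 2)/p(n) tends towards 1.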
The theoretically predicted ratio is also reported in Table 2, and we can see that it is consistent with the measured ratios for the Java and C implementations of the algorithms. In the case of the Java implementation, the theoretically predicted ratios are too optimistic, suggesting that the model of counting only read and write operations is somewhat simplistic in this case. The correspondence between the measured and predicted ratios in the C implementation is much closer, as we can see from Table 2. In both cases there is a strong positive correlation between the predicted and measured ratios.

5 Conclusion

In this paper we have systematically compared algorithms to generate all ascending and descending compositions, two possible encodings for integer partitions. In Section 2 we compared two recursive algorithms: our new ascending composition generator, and Ruskey's descending composition generator. By analysing these algorithms we were able to show that although both algorithms are constant amortised time, the descending composition generator requires approximately twice as long to generate all partitions of n. In Section 3 we compared two generators in Kemp's idiom: succession rules that require no state to be maintained between transitions. We developed a new succession rule for ascending compositions in lexicographic order, and implemented the well-known succession rule for descending compositions in reverse lexicographic order. The analyses of these algorithms showed that the ascending composition generator required constant time, on average, to make each transition, whereas the descending composition generator required O(√n) time. Section 4 then compared the most efficient known algorithms to generate all ascending and descending compositions.
We developed a new generation algorithm for the ascending compositions by utilising structure within the set of ascending compositions. We also analysed Zoghbi & Stojmenović's algorithm and compared these two algorithms theoretically and empirically. As a result of this analysis, we showed that the ascending composition generator requires roughly three quarters of the time required by the descending composition generator. These three comparisons of algorithms show that ascending compositions are a superior encoding for generating all partitions.

Generation efficiency is not the only advantage of encoding partitions as ascending compositions. As part of our analysis of the succession rule for ascending compositions in Section 3 we proved a new formula for computing the number of partitions of n in terms of the largest and second largest parts. In Section 4.1 we developed a new proof for a combinatorial identity, showing that the number of partitions of n where the largest part is less than twice the second largest part is equal to the number of partitions of n − 2. These mathematical results were motivated by studying algorithms to generate ascending compositions. Another advantage of using ascending compositions to encode partitions, not mentioned here, is the possibility of developing algorithms to generate a very flexible class of restricted partitions. By generalising the algorithms developed in this paper it is possible to generate (and enumerate) combinatorially important classes of partition such as the partitions into distinct parts [8, §2], Rogers-Ramanujan partitions [17] and Göllnitz-Gordon partitions [3]. The framework for describing these restrictions and developing generation and enumeration algorithms is described by Kelleher [22, ch.3–4].

References

[1] Alfred Arthur Actor. Infinite products, partition functions, and the Meinardus theorem. Journal of Mathematical Physics, 35(11):5749–5764, November 1994.

[2] Scott Ahlgren and Ken Ono. Addition and counting: The arithmetic of partitions. Notices of the AMS, 48(9):978–984, October 2001.

[3] Krishnaswami Alladi and Alexander Berkovich. Göllnitz-Gordon partitions with weights and parity conditions. In T. Aoki, S. Kanemitsu, M. Nakahara, and Y. Ohno, editors, Zeta Functions, Topology and Quantum Physics, pages 1–18. Springer Verlag, US, 2005.

[4] George E. Andrews. The Theory of Partitions. Encyclopedia of Mathematics and its Applications. Addison-Wesley, London, 1976.

[5] George E. Andrews. Partitions. In History of Combinatorics, volume 1. To appear, 2005.

[6] George E. Andrews and Kimmo Eriksson. Integer Partitions. Cambridge University Press, Cambridge, 2004.

[7] Robert L. Bivins, N. Metropolis, Paul R. Stein, and Mark B. Wells. Characters of the symmetric groups of degree 15 and 16. Mathematical Tables and Other Aids to Computation, 8(48):212–216, October 1954.

[8] Anders Björner and Richard P. Stanley. A combinatorial miscellany. http://www.math.kth.se/~bjorner/files/CUP.ps, 2005. To appear in L'Enseignement Mathématique.

[9] John M. Boyer. Simple constant amortized time generation of fixed length numeric partitions. Journal of Algorithms, 54(1):31–39, January 2005.

[10] Stig Comét. Notations for partitions. Mathematical Tables and Other Aids to Computation, 9(52):143–146, October 1955.

[11] Abraham de Moivre. A method of raising an infinite multinomial to any given power, or extracting any given root of the same. Philosophical Transactions, 19(230):619–625, 1697.

[12] P. Désesquelles. Calculation of the number of partitions with constraints on the fragment size. Physical Review C (Nuclear Physics), 65(3):034603, March 2002.
[13] Leonard E. Dickson. History of the Theory of Numbers, volume II, Diophantine Analysis, chapter 3, pages 101–164. Chelsea, New York, 1952.

[14] Gideon Ehrlich. Loopless algorithms for generating permutations, combinations, and other combinatorial configurations. Journal of the ACM, 20(3):500–513, July 1973.

[15] Trevor I. Fenner and Georghois Loizou. A binary tree representation and related algorithms for generating integer partitions. The Computer Journal, 23(4):332–337, 1980.

[16] Trevor I. Fenner and Georghois Loizou. An analysis of two related loop-free algorithms for generating integer partitions. Acta Informatica, 16:237–252, 1981.

[17] Jason Fulman. The Rogers-Ramanujan identities, the finite general linear groups, and the Hall-Littlewood polynomials. Proceedings of the American Mathematical Society, 128(1):17–25, 2000.

[18] Siegfried Grossmann and Martin Holthaus. From number theory to statistical mechanics: Bose–Einstein condensation in isolated traps. Chaos, Solitons & Fractals, 10(4–5):795–804, April 1999.

[19] Udai I. Gupta, D. T. Lee, and C. K. Wong. Ranking and unranking of B-trees. Journal of Algorithms, 4(1):51–60, March 1983.

[20] Godfrey H. Hardy. Asymptotic formulae in combinatory analysis. In Collected Papers of G.H. Hardy: including joint papers with J.E. Littlewood and others, edited by a committee appointed by the London Mathematical Society, volume 1, pages 265–273. Clarendon Press, Oxford, 1966.

[21] Ross Honsberger. Mathematical Gems III. Number 9 in Dolciani Mathematical Expositions. Mathematical Association of America, 1985.

[22] Jerome Kelleher. Encoding Partitions as Ascending Compositions. PhD thesis, University College Cork, 2006.

[23] Rainer Kemp. Generating words lexicographically: An average-case analysis. Acta Informatica, 35(1):17–89, January 1998.

[24] Eugene M. Klimko. An algorithm for calculating indices in Faà di Bruno's formula. BIT, 13(1):38–49, 1973.

[25] Donald E. Knuth. The Stanford GraphBase: a platform for combinatorial computing. Addison-Wesley, 1994.

[26] Donald E. Knuth. Generating all n-tuples, 2004. Pre-fascicle 2A of The Art of Computer Programming, a draft of section 7.2.1.1. http://www-cs-faculty.stanford.edu/~knuth/fasc2a.ps.gz.

[27] Donald E. Knuth. Generating all partitions, 2004. Pre-fascicle 3B of The Art of Computer Programming, a draft of sections 7.2.1.4–5. http://www-cs-faculty.stanford.edu/~knuth/fasc3b.ps.gz.

[28] Donald E. Knuth. History of combinatorial generation, 2004. Pre-fascicle 4B of The Art of Computer Programming, a draft of section 7.2.1.7. http://www-cs-faculty.stanford.edu/~knuth/fasc4b.ps.gz.

[29] Donald L. Kreher and Douglas R. Stinson. Combinatorial Algorithms: Generation, Enumeration and Search. CRC Press LTC, Boca Raton, Florida, 1998.

[30] Anna Kubasiak, Jaroslaw K. Korbicz, Jakub Zakrzewski, and Maciej Lewenstein. Fermi-Dirac statistics and the number theory. Europhysics Letters, 72(4):506–512, 2005.

[31] Derrick H. Lehmer. The machine tools of combinatorics. In Edwin F. Beckenbach, editor, Applied Combinatorial Mathematics, chapter 1, pages 5–31. Wiley, New York, 1964.

[32] Percy A. MacMahon. Combinatory Analysis. Cambridge University Press, 1915.

[33] Stuart Martin. Schur Algebras and Representation Theory. Cambridge University Press, 1999.

[34] J. K. S. McKay. Algorithm 371: Partitions in natural order. Communications of the ACM, 13(1):52, January 1970.

[35] T. V. Narayana, R. M. Mathsen, and J. Saranji. An algorithm for generating partitions and its applications. Journal of Combinatorial Theory, Series A, 11(1):54–61, July 1971.

[36] Albert Nijenhuis and Herbert S. Wilf. Combinatorial Algorithms for Computers and Calculators. Academic Press, New York, second edition, 1978.

[37] Andrew M. Odlyzko. Asymptotic enumeration methods. In R. L. Graham, M. Grötschel, and L. Lovász, editors, Handbook of Combinatorics, volume II, pages 1063–1229. MIT Press, Cambridge, MA, USA, 1996.

[38] E. S. Page and L. B. Wilson. An Introduction to Computational Combinatorics. Cambridge University Press, Cambridge, 1979.

[39] Igor Pak. Partition bijections, a survey. The Ramanujan Journal, 12(1):5–75, 2006.

[40] Sriram Pemmaraju and Steven S. Skiena. Computational Discrete Mathematics: Combinatorics and Graph Theory With Mathematica. Cambridge University Press, 2003.

[41] Michel Planat. Thermal 1/f noise from the theory of partitions: application to a quartz resonator. Physica A: Statistical Mechanics and its Applications, 318(3–4):371–386, February 2003.

[42] Edward M. Reingold, Jurg Nievergelt, and Narsingh Deo. Combinatorial Algorithms: Theory and Practice. Ridge Press/Random House, 1977.

[43] W. Riha and K. R. James. Algorithm 29: Efficient algorithms for doubly and multiply restricted partitions. Computing, 16:163–168, 1976.

[44] Frank Ruskey. Combinatorial Generation. Working version 1i, http://www.cs.usyd.edu.au/~algo4301/Book.ps, 2001.

[45] Carla D. Savage. Gray code sequences of partitions. Journal of Algorithms, 10(4):577–595, 1989.

[46] Carla D. Savage. A survey of combinatorial Gray codes. SIAM Review, 39(4):605–629, December 1997.

[47] Joe Sawada. Generating bracelets in constant amortized time. SIAM Journal on Computing, 31(1):259–268, 2001.

[48] Manfred R. Schroeder. Number Theory in Science and Communication: with Applications in Cryptography, Physics, Digital Information, Computing, and Self-Similarity. Springer-Verlag, Berlin, second enlarged edition, 1986.

[49] Robert Sedgewick. Permutation generation methods. ACM Computing Surveys, 9(2):137–164, June 1977.

[50] Steven S. Skiena. Implementing Discrete Mathematics: Combinatorics and Graph Theory with Mathematica. Addison-Wesley, Redwood City, California, 1990.

[51] Neil J. A. Sloane. The On-Line Encyclopedia of Integer Sequences. http://www.research.att.com/~njas/sequences/, 2009.

[52] Richard P. Stanley. Enumerative Combinatorics. Wadsworth, Belmont, California, 1986.

[53] Dennis Stanton and Dennis White. Constructive Combinatorics. Springer-Verlag, Berlin, 1986.

[54] Frank Stockmal. Algorithm 114: Generation of partitions with constraints. Communications of the ACM, 5(8):434, August 1962.

[55] Frank Stockmal. Algorithm 95: Generation of partitions in part-count form. Communications of the ACM, 5(6):344, June 1962.

[56] James J. Sylvester. A constructive theory of partitions, arranged in three acts, an interact and an exodion. American Journal of Mathematics, 5(1/4):251–330, 1882.

[57] H. N. V. Temperley. Statistical mechanics and the partition of numbers. I. The transition of liquid helium. Proceedings of the Royal Society of London, Series A, Mathematical and Physical Sciences, 199(1058):361–375, November 1949.

[58] C. Tomasi. Two simple algorithms for the generation of partitions of an integer. Alta Frequenza, 51(6):352–356, 1982.

[59] Muoi N. Tran, M. V. N. Murthy, and Rajat K. Bhaduri. On the quantum density of states and partitioning an integer. Annals of Physics, 311(1):204–219, May 2004.

[60] Mark B. Wells. Elements of Combinatorial Computing. Pergamon Press, Oxford, 1971.

[61] Winston C. Yang. Derivatives are essentially integer partitions. Discrete Mathematics, 222(1–3):235–245, July 2000.

[62] Antoine Zoghbi and Ivan Stojmenović. Fast algorithms for generating integer partitions.
Int ernational Journal of Computer Math , 70 :319–3 32, 1998. 40