Scheduling partially ordered jobs faster than 2^n *

Marek Cygan†    Marcin Pilipczuk‡    Michał Pilipczuk§    Jakub Onufry Wojtaszczyk¶

Abstract

In a scheduling problem, denoted by 1|prec|∑C_i in the Graham notation, we are given a set of n jobs, together with their processing times and precedence constraints. The task is to order the jobs so that their total completion time is minimized. 1|prec|∑C_i is a special case of the Traveling Repairman Problem with precedences. A natural dynamic programming algorithm solves both these problems in 2^n n^O(1) time, and whether there exists an algorithm solving 1|prec|∑C_i in O(c^n) time for some constant c < 2 was an open problem posed in 2004 by Woeginger. In this paper we answer this question positively.

1 Introduction

It is commonly believed that no NP-hard problem is solvable in polynomial time. However, while all NP-complete problems are equivalent with respect to polynomial-time reductions, they appear to be very different with respect to the best exponential-time exact solutions. In particular, most NP-complete problems can be solved significantly faster than by the (generic for the NP class) obvious brute-force algorithm that checks all possible solutions; examples are Independent Set [11], Dominating Set [11, 23], Chromatic Number [4] and Bandwidth [8]. The area of moderately exponential time algorithms studies upper and lower bounds for exact solutions of hard problems. The race for the fastest exact algorithm inspired several very interesting tools and techniques, such as Fast Subset Convolution [3] and Measure&Conquer [11] (for an overview of the field we refer the reader to a recent book by Fomin and Kratsch [10]).
For several problems, including TSP, Chromatic Number, Permanent, Set Cover, #Hamiltonian Cycles and SAT, the currently best known time complexity is of the form¹ O*(2^n), which is a result of applying dynamic programming over subsets, the inclusion-exclusion principle or a brute-force search. The question remains, however, which of those problems are inherently so hard that it is not possible to break the 2^n barrier, and which are just waiting for new tools and techniques still to be discovered. In particular, the hardness of the k-SAT problem is the starting point for the Strong Exponential Time Hypothesis of Impagliazzo and Paturi [15], which is used as an argument that other problems are hard [7, 19, 22]. Recently, on the positive side, O(c^n) time algorithms for a constant c < 2 have been developed for Capacitated Domination [9], Irredundance [1], Maximum Induced Planar Subgraph [12] and (a major breakthrough in the field) for the undirected version of the Hamiltonian Cycle problem [2]. In this paper we extend this list by one important scheduling problem.

The area of scheduling algorithms originates from practical questions regarding scheduling jobs on single- or multiple-processor machines or scheduling I/O requests. It has quickly become one of the most important areas in algorithmics, with significant influence on other branches of computer science. For example, the research on the job-shop scheduling problem in the 1960s resulted in designing competitive analysis [13], initiating the research of

∗ An extended abstract of this paper appears at the 19th European Symposium on Algorithms, Saarbrücken, Germany, 2011.
† Institute of Informatics, University of Warsaw, Poland, cygan@mimuw.edu.pl. Supported by Polish Ministry of Science grant no. N206 355 636 and Foundation for Polish Science.
‡ Institute of Informatics, University of Warsaw, Poland, malcin@mimuw.edu.pl. Supported by Polish Ministry of Science grant no. N206 355 636 and Foundation for Polish Science.
§ Faculty of Mathematics, Informatics and Mechanics, University of Warsaw, Poland, michal.pilipczuk@students.mimuw.edu.pl
¶ Google Inc., Cracow, Poland, onufry@google.com
¹ The O*() notation suppresses factors polynomial in the input size.

online algorithms. Up to today, the scheduling literature consists of thousands of research publications. We refer the reader to the classical textbook of Brucker [5]. Among scheduling problems one may find a bunch of problems solvable in polynomial time, as well as many NP-hard ones. For example, the aforementioned job-shop problem is NP-complete on at least three machines [17], but polynomial on two machines with unitary processing times [14].

Scheduling problems come in numerous variants. For example, one may consider scheduling on one machine, or on many uniform or non-uniform machines. The jobs can have different attributes: they may arrive at different times, may have deadlines or precedence constraints, preemption may or may not be allowed. There are also many objective functions, for example the makespan of the computation, total completion time, total lateness (in case of deadlines for jobs), etc. Let us focus on the case of a single machine. Assume we are given a set of jobs V, and each job v has its processing time t(v) ∈ [0, +∞). For a job v, its completion time is the total amount of time that this job waited to be finished; formally, the completion time of a job v is defined as the sum of the processing times of v and all jobs scheduled earlier. If we are to minimize the total completion time (i.e., the sum of completion times over all jobs), it is clear that the jobs should be scheduled in order of increasing processing times.
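The claim above is the classical shortest-processing-time rule. A minimal sanity check in Python (the three processing times are invented for the example):

```python
from itertools import permutations

def total_completion_time(times):
    """Sum of completion times when jobs run in the given order."""
    total, clock = 0.0, 0.0
    for t in times:
        clock += t      # the job finishes at this moment
        total += clock  # and contributes its completion time
    return total

jobs = [3.0, 1.0, 2.0]
spt = sorted(jobs)  # schedule shortest first
brute = min(total_completion_time(p) for p in permutations(jobs))
assert total_completion_time(spt) == brute  # both equal 10.0
```

The exchange argument behind the rule is the same flavor as the exchange arguments used later in the paper: swapping two adjacent jobs with the longer one first can only decrease the total completion time.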
The question of minimizing the makespan of the computation (i.e., the maximum completion time) is obvious in this setting, but we note that minimizing the makespan is polynomially solvable even if we are given precedence constraints on the jobs (i.e., a partial order on the set of jobs is given, and a job cannot be scheduled before all its predecessors in the partial order are finished) and the jobs arrive at different times (i.e., each job has its arrival time, before which it cannot be scheduled) [16]. Lenstra and Rinnooy Kan [18] in 1978 proved that the question of minimizing the total completion time on one machine becomes NP-complete if we are given precedence constraints on the set of jobs. To the best of our knowledge, the currently smallest approximation ratio for this case equals 2, due to independently discovered algorithms by Chekuri and Motwani [6] as well as Margot et al. [20]. The problem of minimizing the total completion time on one machine, given precedence constraints on the set of jobs, can be solved by a standard dynamic programming algorithm in time O*(2^n), where n denotes the number of jobs. In this paper we break the 2^n barrier for this problem.

Before we start, let us define the considered problem formally. As we focus on a single scheduling problem, for brevity we denote it by SCHED. We note that the proper name of this problem in the Graham notation is 1|prec|∑C_i.

SCHED
Input: A partially ordered set of jobs (V, ≤), together with a nonnegative processing time t(v) ∈ [0, +∞) for each job v ∈ V.
Task: Compute a bijection σ : V → {1, 2, ..., |V|} (called an ordering) that satisfies the precedence constraints (i.e., if u < v, then σ(u) < σ(v)) and minimizes the total completion time of all jobs, defined as

    T(σ) = ∑_{v∈V} ∑_{u: σ(u)≤σ(v)} t(u) = ∑_{v∈V} (|V| − σ(v) + 1)·t(v).
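The two expressions for T(σ) agree because the job at position i is counted once in the completion time of each of the |V| − i + 1 jobs at positions i, ..., |V|. A quick numerical check of this identity on a random instance (the instance itself is made up):

```python
import random

random.seed(0)
n = 6
t = [random.uniform(0.0, 5.0) for _ in range(n)]  # processing times
sigma = list(range(n))
random.shuffle(sigma)                             # a random ordering
pos = {v: i + 1 for i, v in enumerate(sigma)}     # sigma(v), 1-indexed

# First form: sum over jobs of their completion time.
lhs = sum(sum(t[u] for u in range(n) if pos[u] <= pos[v]) for v in range(n))
# Second form: each job's time weighted by how many completion times it enters.
rhs = sum((n - pos[v] + 1) * t[v] for v in range(n))
assert abs(lhs - rhs) < 1e-9
```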
If u < v for u, v ∈ V (i.e., u ≤ v and u ≠ v), we say that u precedes v, u is a predecessor or prerequisite of v, u is required for v, or that v is a successor of u. We denote |V| by n.

SCHED is a special case of the precedence constrained Travelling Repairman Problem (prec-TRP), defined as follows. A repairman needs to visit all vertices of a (directed or undirected) graph G = (V, E) with distances d : E → [0, ∞) on the edges. At each vertex the repairman is supposed to repair a broken machine; the cost of a machine v is the time C_v that it waited before being repaired. Thus, the goal is to minimize the total repair time, that is, ∑_{v∈V} C_v. Additionally, in the precedence constrained case, we are given a partial order (V, ≤) on the set of vertices of G; a machine can be repaired only if all its predecessors are already repaired. Note that, given an instance (V, ≤, t) of SCHED, we may construct an equivalent prec-TRP instance by taking G to be a complete directed graph on the vertex set V, keeping the precedence constraints unmodified, and setting d(u, v) = t(v).

The TRP problem is closely related to the Traveling Salesman Problem (TSP). All these problems are NP-complete and solvable in O*(2^n) time by an easy application of the dynamic programming approach (here n stands for the number of vertices in the input graph). In 2010, Björklund [2] discovered a genuine way to solve probably the easiest NP-complete version of the TSP problem (the question of deciding whether a given undirected graph is Hamiltonian) in randomized O(1.66^n) time. However, his approach does not extend to directed graphs, not even mentioning graphs with distances defined on edges.
Björklund's approach is based on purely graph-theoretical and combinatorial reasonings, and seems unable to cope with arbitrary (large, real) weights (distances, processing times). This is also the case with many other combinatorial approaches. Probably motivated by this, Woeginger at the International Workshop on Parameterized and Exact Computation (IWPEC) in 2004 [24] posed the question (repeated in 2008 [25]) whether it is possible to construct an O((2 − ε)^n) time algorithm for the SCHED problem². This problem seems to be the easiest case of the aforementioned family of TSP-related problems with arbitrary weights. In this paper we present such an algorithm, thus affirmatively answering Woeginger's question. Woeginger also asked [24, 25] whether an O((2 − ε)^n) time algorithm for one of the problems TRP, TSP, prec-TRP, SCHED implies O((2 − ε)^n) time algorithms for the other problems. This problem is still open.

The most important ingredient of our algorithm is a combinatorial lemma (Lemma 2.6) which allows us to investigate the structure of the SCHED problem. We heavily use the fact that we are solving the SCHED problem and not its more general TSP-related version, and for this reason we believe that obtaining O((2 − ε)^n) time algorithms for the other problems listed by Woeginger is much harder.

2 The algorithm

2.1 High-level overview — part 1

Let us recall that our task in the SCHED problem is to compute an ordering σ : V → {1, 2, ..., n} that satisfies the precedence constraints (i.e., if u < v then σ(u) < σ(v)) and minimizes the total completion time of all jobs, defined as

    T(σ) = ∑_{v∈V} ∑_{u: σ(u)≤σ(v)} t(u) = ∑_{v∈V} (n − σ(v) + 1)·t(v).

We define the cost of job v at position i to be T(v, i) = (n − i + 1)·t(v).
Thus, the total completion time is the total cost of all jobs at their respective positions in the ordering σ.

We begin by describing the algorithm that solves SCHED in O*(2^n) time, which we call the DP algorithm; this will be the basis for our further work. The idea (a standard dynamic programming over subsets) is that if we decide that a particular set X ⊆ V will (in some order) form the prefix of our optimal σ, then the order in which we take the elements of X does not affect the choices we make regarding the ordering of the remaining V \ X; the only thing that matters are the precedence constraints imposed by X on V \ X. Thus, for each candidate set X ⊆ V to form a prefix, the algorithm computes a bijection σ[X] : X → {1, 2, ..., |X|} that minimizes the cost of the jobs from X, i.e., it minimizes T(σ[X]) = ∑_{v∈X} T(v, σ[X](v)). The value of T(σ[X]) is computed using the following easy-to-check recursive formula:

    T(σ[X]) = min_{v ∈ max(X)} [ T(σ[X \ {v}]) + T(v, |X|) ].    (1)

Here, by max(X) we mean the set of maximum elements of X, i.e., those which do not precede any element of X. The bijection σ[X] is constructed by prolonging σ[X \ {v}] by v, where v is the job at which the minimum is attained. Notice that σ[V] is exactly the ordering we are looking for. We calculate σ[V] recursively, using formula (1), storing all computed values σ[X] in memory to avoid recomputation. Thus, as the computation of a single σ[X] value given all the smaller values takes polynomial time, while σ[X] for each X is computed at most once, the whole algorithm indeed runs in O*(2^n) time.
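The DP algorithm above can be sketched directly with memoization over bitmask-encoded subsets. The encoding (jobs 0..n−1, succ[v] a bitmask of the successors of v) is ours, not the paper's:

```python
from functools import lru_cache

def sched_dp(t, succ):
    """Minimum total completion time of (V, <, t), via formula (1).

    Jobs are 0..n-1; succ[v] is a bitmask of the successors of v.
    """
    n = len(t)

    @lru_cache(maxsize=None)
    def T(X):
        if X == 0:
            return 0.0
        size = bin(X).count("1")  # |X|: the position being filled
        best = float("inf")
        for v in range(n):
            # v ∈ max(X): v lies in X and no element of X succeeds v,
            # so v may be scheduled last among the jobs of X.
            if X >> v & 1 and succ[v] & X == 0:
                # cost of v at position |X| is (n - |X| + 1) * t(v)
                best = min(best, T(X & ~(1 << v)) + (n - size + 1) * t[v])
        return best

    return T((1 << n) - 1)

# Three jobs: job 0 must precede job 2, job 1 is unconstrained.
print(sched_dp([2.0, 1.0, 3.0], [0b100, 0, 0]))  # prints 10.0
```

The optimal ordering here is (1, 0, 2) with completion times 1, 3, 6, matching the printed total.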
The overall idea of our algorithm is to identify a family of sets X ⊆ V that, for some reason, are not reasonable prefix candidates, so that we can skip them in the computations of the DP algorithm; we will call these sets unfeasible. If the number of feasible sets is not larger than c^n for some c < 2, we will be done: our recursion will visit only feasible sets, assuming T(σ[X]) to be ∞ for unfeasible X in formula (1), and the running time will be O*(c^n). This is formalized in the following proposition.

² Although Woeginger in his papers asks for an O(1.99^n) algorithm, the intention is clearly to ask for an O((2 − ε)^n) algorithm.

Proposition 2.1. Assume we are given a polynomial-time algorithm R that, given a set X ⊆ V, either accepts it or rejects it. Moreover, assume that the number of sets accepted by R is bounded by O(c^n) for some constant c. Then one can find in time O*(c^n) an optimal ordering of the jobs in V among those orderings σ where σ^{-1}({1, 2, ..., i}) is accepted by R for all 1 ≤ i ≤ n, whenever such an ordering exists.

Proof. Consider the following recursive procedure to compute the optimal T(σ[X]) for a given set X ⊆ V:
1. if X is rejected by R, return T(σ[X]) = ∞;
2. if X = ∅, return T(σ[X]) = 0;
3. if T(σ[X]) has already been computed, return the stored value of T(σ[X]);
4. otherwise, compute T(σ[X]) using formula (1), calling the procedure itself recursively to obtain the values T(σ[X \ {v}]) for v ∈ max(X), and store the computed value for further use.

Clearly, the above procedure, invoked on X = V, computes the optimal T(σ[V]) among those orderings σ where σ^{-1}({1, 2, ..., i}) is accepted by R for all 1 ≤ i ≤ n. It is straightforward to augment this procedure to return the ordering σ itself, instead of only its cost.
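A sketch of the pruned DP from Proposition 2.1: identical to the plain subset DP, except that sets rejected by the oracle R cost ∞ and are never expanded. The encoding (bitmask sets, prec[v] the predecessor mask of v) is ours; as a concrete R we accept exactly the downward closed sets, which is a valid choice because every genuine prefix of a feasible ordering is downward closed:

```python
from functools import lru_cache

def sched_pruned(t, prec, accepts):
    """Proposition 2.1: subset DP expanded only on sets accepted by R."""
    n = len(t)
    succ = [0] * n                     # derive successor masks from prec
    for u in range(n):
        for v in range(n):
            if prec[u] >> v & 1:
                succ[v] |= 1 << u

    @lru_cache(maxsize=None)
    def T(X):
        if not accepts(X):             # step 1: rejected sets cost infinity
            return float("inf")
        if X == 0:                     # step 2: empty prefix
            return 0.0
        size = bin(X).count("1")
        best = float("inf")
        for v in range(n):             # step 4: formula (1) over v in max(X)
            if X >> v & 1 and succ[v] & X == 0:
                best = min(best, T(X & ~(1 << v)) + (n - size + 1) * t[v])
        return best

    return T((1 << n) - 1)

def make_downward_closed(prec):
    """R accepting X iff every job in X has all its predecessors in X."""
    def accepts(X):
        return all(not (X >> v & 1) or prec[v] & ~X == 0
                   for v in range(len(prec)))
    return accepts

t = [2.0, 1.0, 3.0]
prec = [0, 0, 0b001]                   # job 0 precedes job 2
print(sched_pruned(t, prec, make_downward_closed(prec)))  # prints 10.0
```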
If we use a balanced search tree to store the computed values of σ[X], each recursive call of the described procedure runs in polynomial time. Note that the last step of the procedure is invoked at most once for each set X accepted by R and never for a set X rejected by R. As an application of this step results in at most |X| ≤ n recursive calls, we obtain that a computation of σ[V] using this procedure results in a number of recursive calls bounded by n times the number of sets accepted by R. The time bound follows.

2.2 The large matching case

We begin by noticing that the DP algorithm needs to compute σ[X] only for those X ⊆ V that are downward closed, i.e., such that if v ∈ X and u < v then u ∈ X. If there are many constraints in our problem, this alone will suffice to limit the number of feasible sets considerably, as follows. Construct an undirected graph G with the vertex set V and the edge set E = {uv : u < v ∨ v < u}. Let 𝓜 be a maximum matching³ in G, which can be found in polynomial time [21]. If X ⊆ V is downward closed, and uv ∈ 𝓜 with u < v, then it is not possible that u ∉ X and v ∈ X. Obviously, checking whether a subset is downward closed can be performed in polynomial time, thus we can apply Proposition 2.1, accepting only the downward closed subsets of V. This leads to the following lemma:

Lemma 2.2. The number of downward closed subsets of V is bounded by 2^{n−2|𝓜|}·3^{|𝓜|}. If |𝓜| ≥ ε₁n, then we can solve the SCHED problem in time T₁(n) = O*((3/4)^{ε₁n}·2^n).

Note that for any small positive constant ε₁ the complexity T₁(n) is of the required order, i.e., T₁(n) = O(c^n) for some c < 2 that depends on ε₁. Thus, we only have to deal with the case where |𝓜| < ε₁n. Let us fix a maximum matching 𝓜, let M ⊆ V be the set of endpoints of 𝓜, and let I₁ = V \ M.
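Behind Lemma 2.2: each matched pair contributes at most 3 admissible configurations to a downward closed set (neither endpoint, the predecessor alone, or both) and each unmatched job contributes 2, giving the 2^{n−2|𝓜|}·3^{|𝓜|} bound. A small experiment on a made-up instance, using a greedily found inclusion-maximal matching:

```python
from itertools import combinations

def downward_closed_count(n, prec):
    """Count downward closed subsets; prec[v] = set of predecessors of v."""
    count = 0
    for r in range(n + 1):
        for X in combinations(range(n), r):
            S = set(X)
            if all(prec[v] <= S for v in S):
                count += 1
    return count

def greedy_matching(n, prec):
    """Inclusion-maximal matching in the graph {uv : u < v or v < u}."""
    used, matching = set(), []
    for u in range(n):
        for v in range(u + 1, n):
            if u in used or v in used:
                continue
            if u in prec[v] or v in prec[u]:
                used |= {u, v}
                matching.append((u, v))
    return matching

n = 6
# chain 0 < 1 < 2, pair 3 < 4, job 5 isolated (transitively closed)
prec = [set(), {0}, {0, 1}, set(), {3}, set()]
matching = greedy_matching(n, prec)
bound = 2 ** (n - 2 * len(matching)) * 3 ** len(matching)
print(downward_closed_count(n, prec), "<=", bound)  # prints 24 <= 36
```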
Note that, as 𝓜 is a maximum matching in G, no two jobs in I₁ are bound by a precedence constraint, and |M| ≤ 2ε₁n, |I₁| ≥ (1 − 2ε₁)n. See Figure 1 for an illustration.

³ Even an inclusion-maximal matching, which can be found greedily, is enough.

Figure 1: An illustration of the case left after Lemma 2.2. In this and all further figures, an arrow points from the successor job to the predecessor one.

2.3 High-level overview — part 2

We are left in the situation where there is a small number of "special" elements (M), and the bulk remainder (I₁), consisting of elements that are tied by precedence constraints only to M and not to each other. First notice that if M were empty, the problem would be trivial: with no precedence constraints we should simply order the tasks from the shortest to the longest. Now let us consider what would happen if all the constraints between any u ∈ I₁ and w ∈ M were of the form u < w, that is, if the jobs from I₁ had no predecessors. For any prefix set candidate X we consider X_I = X ∩ I₁. Now, for any x ∈ X_I, y ∈ I₁ \ X_I we have an alternative prefix candidate: the set X′ = (X ∪ {y}) \ {x}. If t(y) < t(x), there has to be a reason why X′ is not a strictly better prefix candidate than X; namely, there has to exist w ∈ M such that x < w, but y ≮ w. A similar reasoning would hold even if not all of I₁ had no predecessors, but just some constant fraction J of I₁; again, the only feasible prefix candidates would be those in which for every x ∈ X_I ∩ J and y ∈ J \ X_I there is a reason (either t(x) < t(y) or an element w ∈ M which requires x, but not y) not to exchange them. It turns out that if |J| > ε₂n, where ε₂ > 2ε₁, this observation suffices to prove that the number of possible intersections of feasible sets with J is exponentially smaller than 2^{|J|}.
This is formalized and proved in Lemma 2.6, and is the cornerstone of the whole result. A typical application of this lemma is as follows: say we have a set K ⊆ I₁ of cardinality |K| > 2j, while we know for some reason that all the predecessors of elements of K appear on positions j and earlier. If K is large (a constant fraction of n), this is enough to limit the number of feasible sets to (2 − ε)^n. To this end it suffices to show that there are exponentially fewer than 2^{|K|} possible intersections of a feasible set with K. Each such intersection consists of a set of at most j elements (that will be put on positions 1 through j), and then a set in which every element has a reason not to be exchanged with something from outside the set (and there are relatively few of those by Lemma 2.6); when we do the calculations, it turns out that the resulting number of possibilities is exponentially smaller than 2^{|K|}.

To apply this reasoning, we need to be able to tell that all the prerequisites of a given element appear at some position or earlier. To achieve this, we need to know the approximate positions of the elements of M. We achieve this by branching into 4^{|M|} cases, for each element w ∈ M choosing to which of the four quarters of the set {1, ..., n} the value σ_opt(w) will belong. This incurs a multiplicative cost⁴ of 4^{|M|}, which will be offset by the gains from applying Lemma 2.6.

We will now repeatedly apply Lemma 2.6 to obtain information about the positions of various elements of I₁. We will repeatedly say that if "many" elements (by which we always mean more than εn for some ε) do not satisfy something, we can bound the number of feasible sets, and thus finish the algorithm. For instance, look at those elements of I₁ which can appear in the first quarter, i.e., none of their prerequisites appear in quarters two, three and four.
If there are more than (1/2 + δ)n of them for some constant δ > 0, we can apply the above reasoning for j = n/4 (Lemma 2.10). Subsequent lemmata bound the number of feasible sets if there are many elements that cannot appear in any of the two first quarters (Lemma 2.8), if fewer than (1/2 − δ)n elements can appear in the first quarter (Lemma 2.10), and if a constant fraction of the elements in the second quarter could actually appear in the first quarter (Lemma 2.11). We also apply similar reasoning to elements that can or cannot appear in the last quarter. We end up in a situation where we have four groups of elements, each of size roughly n/4, split upon whether they can appear in the first quarter and whether they can appear in the last one; moreover, those that can appear in the first quarter will not appear in the second, and those that can appear in the fourth will not appear in the third. This means that there are two pairs of parts which do not interact, as the sets of places in which they can appear are disjoint. We use this independence of sorts to construct a different algorithm than the DP we used so far, which solves our problem in this specific case in time O*(2^{3n/4+ε}) (Lemma 2.12).

⁴ Actually, this bound can be improved to 10^{|M|/2}, as M is the set of endpoints of a matching in the graph corresponding to the set of precedences.

As can be gathered from this overview, there are many technical details we will have to navigate in the algorithm. This is made more precarious by the need to carefully select all the epsilons. We decided to use symbolic values for them in the main proof, describing their relationships appropriately, using four constants ε_k, k = 1, 2, 3, 4. The constants ε_k are very small positive reals, and additionally ε_k is much smaller than ε_{k+1} for k = 1, 2, 3. At each step, we shortly discuss the existence of such constants.
We discuss the choice of optimal values of these constants in Section 2.9, although the value we perceive in our algorithm lies rather in the existence of an O*((2 − ε)^n) algorithm than in the value of ε (which is admittedly very small).

2.4 Technical preliminaries

We start with a few simplifications. First, we add a few dummy jobs with no precedence constraints and zero processing times, so that n is divisible by four. Second, by slightly perturbing the jobs' processing times, we can assume that all processing times are pairwise different and, moreover, each ordering has a different total completion time. This can be done, for instance, by replacing the time t(v) with the pair (t(v), (n + 1)^{π(v)−1}), where π : V → {1, 2, ..., n} is an arbitrary numbering of V. The addition of pairs is performed coordinatewise, whereas comparison is performed lexicographically. Note that this in particular implies that the optimal solution is unique; we denote it by σ_opt. Third, at the cost of an n² multiplicative overhead, we guess the jobs v_begin = σ_opt^{-1}(1) and v_end = σ_opt^{-1}(n), and we add precedence constraints v_begin < v < v_end for each v ≠ v_begin, v_end. If v_begin or v_end were not in M to begin with, we add them there.

A number of times our algorithm branches into several subcases, in each branch assuming some property of the optimal solution σ_opt. Formally speaking, in each branch we seek the optimal ordering among those that satisfy the assumed property. We somewhat abuse the notation and denote by σ_opt the optimal solution in the currently considered subcase. Note that σ_opt is always unique within any subcase, as each ordering has a different total completion time.

For v ∈ V, by pred(v) we denote the set {u ∈ V : u < v} of predecessors of v, and by succ(v) we denote the set {u ∈ V : v < u} of successors of v.
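The perturbation works because each job contributes a distinct power of n + 1 to the second coordinate, and completion times combine these powers with coefficients at most n, so the second coordinates of two distinct orderings can never coincide (they are distinct base-(n+1) representations). A minimal sketch, with helper names of our own choosing and π(v) = v + 1:

```python
from itertools import permutations

def total_completion_time(order, t):
    """Sum of pair-valued completion times; pairs add coordinatewise."""
    total, clock = (0, 0), (0, 0)
    for v in order:
        clock = (clock[0] + t[v][0], clock[1] + t[v][1])
        total = (total[0] + clock[0], total[1] + clock[1])
    return total

n = 3
raw = [1.0, 1.0, 1.0]                            # all processing times equal
t = [(raw[v], (n + 1) ** v) for v in range(n)]   # (t(v), (n+1)^(pi(v)-1))

costs = [total_completion_time(p, t) for p in permutations(range(n))]
# Compared lexicographically, all 6 orderings now have distinct costs,
# while the first coordinates (the unperturbed costs) all coincide.
assert len(set(costs)) == 6
assert len({c[0] for c in costs}) == 1
```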
We extend this notation to subsets of V: pred(U) = ⋃_{v∈U} pred(v) and succ(U) = ⋃_{v∈U} succ(v). Note that for any set U ⊆ I₁, both pred(U) and succ(U) are subsets of M.

In a few places in this paper we use the following simple bound on binomial coefficients, which can be easily proven using Stirling's formula.

Lemma 2.3. Let 0 < α < 1 be a constant. Then

    (n choose αn) = O*( (1 / (α^α (1 − α)^{1−α}))^n ).

In particular, if α ≠ 1/2, then there exists a constant c_α < 2 that depends only on α such that (n choose αn) = O*(c_α^n).

2.5 The core lemma

We now formalize the idea of exchanges presented at the beginning of Section 2.3.

Definition 2.4. Consider some set K ⊆ I₁ and its subset L ⊆ K. If there exists u ∈ L such that for every w ∈ succ(u) we can find v_w ∈ (K ∩ pred(w)) \ L with t(v_w) < t(u), then we say that L is succ-exchangeable with respect to K; otherwise we say that L is non-succ-exchangeable with respect to K. Similarly, if there exists v ∈ (K \ L) such that for every w ∈ pred(v) we can find u_w ∈ L ∩ succ(w) with t(u_w) > t(v), we call L pred-exchangeable with respect to K; otherwise we call it non-pred-exchangeable with respect to K.

Whenever it is clear from the context, we omit the set K with respect to which a subset is or is not pred- or succ-exchangeable. Let us now give some more intuition on the exchangeable sets. Let L be a non-succ-exchangeable set with respect to K ⊆ I₁ and let u ∈ L. By the definition, there exists w ∈ succ(u) such that for all v_w ∈ (K ∩ pred(w)) \ L we have t(v_w) ≥ t(u); in other words, all predecessors of w in K that are scheduled after L have larger processing times than u, which seems like a "correct" choice if we are to optimize the total completion time. On the other hand, let L = σ_opt^{-1}({1, 2, ...
, i}) ∩ K for some 1 ≤ i ≤ n and assume that L is a succ-exchangeable set with respect to K, with a job u ∈ L witnessing this fact. Let w be the job in succ(u) that is scheduled first in the optimal ordering σ_opt. By the definition, there exists v_w ∈ (K ∩ pred(w)) \ L with t(v_w) < t(u). It is tempting to decrease the total completion time of σ_opt by swapping the jobs v_w and u in σ_opt: by the choice of w, no precedence constraint involving u will be violated by such an exchange, so we need to care only about the predecessors of v_w.

We formalize the aforementioned applicability of the definition of pred- and succ-exchangeable sets in the following lemma:

Lemma 2.5. Let K ⊆ I₁. If for all v ∈ K, x ∈ pred(K) we have σ_opt(v) > σ_opt(x), then for any 1 ≤ i ≤ n the set K ∩ σ_opt^{-1}({1, 2, ..., i}) is non-succ-exchangeable with respect to K. Similarly, if for all v ∈ K, x ∈ succ(K) we have σ_opt(v) < σ_opt(x), then the sets K ∩ σ_opt^{-1}({1, 2, ..., i}) are non-pred-exchangeable with respect to K.

Proof. The proofs for the first and the second case are analogous. However, to help the reader get intuition on exchangeable sets, we provide them both in full detail. See Figure 2 for an illustration of the succ-exchangeable case.

Non-succ-exchangeable sets. Assume, by contradiction, that for some i the set L = K ∩ σ_opt^{-1}({1, 2, ..., i}) is succ-exchangeable. Let u ∈ L be a job witnessing it. Let w be the successor of u with minimum σ_opt(w) (there exists one, as v_end ∈ succ(u)). By Definition 2.4, we have v_w ∈ (K ∩ pred(w)) \ L with t(v_w) < t(u). As v_w ∈ K \ L, we have σ_opt(v_w) > σ_opt(u). As v_w ∈ pred(w), we have σ_opt(v_w) < σ_opt(w).
Consider an ordering σ′ defined as σ′(u) = σ_opt(v_w), σ′(v_w) = σ_opt(u) and σ′(x) = σ_opt(x) for x ∉ {u, v_w}; in other words, we swap the positions of u and v_w in the ordering σ_opt. We claim that σ′ satisfies all the precedence constraints. As σ_opt(u) < σ_opt(v_w), σ′ may only violate constraints of the form x < v_w and u < y. However, if x < v_w, then x ∈ pred(K) and σ′(v_w) = σ_opt(u) > σ_opt(x) = σ′(x) by the assumptions of the lemma. If u < y, then σ′(y) = σ_opt(y) ≥ σ_opt(w) > σ_opt(v_w) = σ′(u), by the choice of w. Thus σ′ is a feasible solution to the considered SCHED instance. Since t(v_w) < t(u), we have T(σ′) < T(σ_opt), a contradiction.

Non-pred-exchangeable sets. Assume, by contradiction, that for some i the set L = K ∩ σ_opt^{-1}({1, 2, ..., i}) is pred-exchangeable. Let v ∈ (K \ L) be a job witnessing it. Let w be the predecessor of v with maximum σ_opt(w) (there exists one, as v_begin ∈ pred(v)). By Definition 2.4, we have u_w ∈ L ∩ succ(w) with t(u_w) > t(v). As u_w ∈ L, we have σ_opt(u_w) < σ_opt(v). As u_w ∈ succ(w), we have σ_opt(u_w) > σ_opt(w). Consider an ordering σ′ defined as σ′(v) = σ_opt(u_w), σ′(u_w) = σ_opt(v) and σ′(x) = σ_opt(x) for x ∉ {v, u_w}; in other words, we swap the positions of v and u_w in the ordering σ_opt. We claim that σ′ satisfies all the precedence constraints. As σ_opt(u_w) < σ_opt(v), σ′ may only violate constraints of the form x > u_w and v > y. However, if x > u_w, then x ∈ succ(K) and σ′(u_w) = σ_opt(v) < σ_opt(x) = σ′(x) by the assumptions of the lemma. If v > y, then σ′(y) = σ_opt(y) ≤ σ_opt(w) < σ_opt(u_w) = σ′(v), by the choice of w. Thus σ′ is a feasible solution to the considered SCHED instance.
Since t(u_w) > t(v), we have T(σ′) < T(σ_opt), a contradiction.

Figure 2: Figure illustrating the succ-exchangeable case of Lemma 2.5. Gray circles indicate positions of elements of K; a black contour indicates that an element is also in L. Black squares indicate positions of elements from pred(K), and black circles indicate positions of other elements of M.

Lemma 2.5 means that if we manage to identify a set K satisfying the assumptions of the lemma, the only sets the DP algorithm has to consider are the non-exchangeable ones. The following core lemma proves that there are few of those (provided that K is big enough), and that we can identify them easily.

Lemma 2.6. For any set K ⊆ I₁, the number of non-succ-exchangeable (non-pred-exchangeable) subsets with regard to K is at most ∑_{l ≤ |M|} (|K| choose l). Moreover, there exists an algorithm which checks whether a set is succ-exchangeable (pred-exchangeable) in polynomial time.

The idea of the proof is to construct a function f that encodes each non-exchangeable set by a subset of K no larger than M. To show that this encoding is injective, we provide a decoding function g and show that g ∘ f is the identity on non-exchangeable sets.

Proof. As in Lemma 2.5, the proofs for succ- and pred-exchangeable sets are analogous, but for the sake of clarity we include both proofs in full detail.

Non-succ-exchangeable sets. For any set Y ⊆ K we define the function f_Y : M → K ∪ {nil} as follows: for any element w ∈ M we define f_Y(w) (the least expensive predecessor of w outside Y) to be the element of (K \ Y) ∩ pred(w) which has the smallest processing time, or nil if (K \ Y) ∩ pred(w) is empty. We now take f(Y) (the set of the least expensive predecessors outside Y) to be the set {f_Y(w) : w ∈ M} \ {nil}.
We see that $f(Y)$ is indeed a set of cardinality at most $|M|$. Now we aim to prove that $f$ is injective on the family of non-$succ$-exchangeable sets. To this end we define the reverse function $g$. For a set $Z \subseteq K$ (which we think of as the set of the least expensive predecessors outside some $Y$) let $g(Z)$ be the set of such elements $v$ of $K$ that there exists $w \in succ(v)$ such that for any $z_w \in Z \cap pred(w)$ we have $t(z_w) > t(v)$. Notice, in particular, that $g(Z) \cap Z = \emptyset$, as for $v \in Z$ and $w \in succ(v)$ we have $v \in Z \cap pred(w)$.

First we prove $g(f(Y)) \subseteq Y$ for any $Y \subseteq K$. Take any $v \in K \setminus Y$ and consider any $w \in succ(v)$. Then $f_Y(w) \neq nil$ and $t(f_Y(w)) \leq t(v)$, as $v \in (K \setminus Y) \cap pred(w)$. Thus $v \notin g(f(Y))$, as for any $w \in succ(v)$ we can take a witness $z_w = f_Y(w)$ in the definition of $g(f(Y))$.

In the other direction, let us assume that $Y$ does not satisfy $Y \subseteq g(f(Y))$. This means we have $u \in Y \setminus g(f(Y))$. Then we show that $Y$ is $succ$-exchangeable. Consider any $w \in succ(u)$. As $u \notin g(f(Y))$, by the definition of the function $g$ applied to the set $f(Y)$, there exists $z_w \in f(Y) \cap pred(w)$ with $t(z_w) \leq t(u)$. But $f(Y) \cap Y = \emptyset$, while $u \in Y$; and as all the values of $t$ are distinct, $t(z_w) < t(u)$ and $z_w$ satisfies the condition for $v_w$ in the definition of $succ$-exchangeability.

Non-$pred$-exchangeable sets. For any set $Y \subseteq K$ we define the function $f_Y : M \to K \cup \{nil\}$ as follows: for any element $w \in M$ we define $f_Y(w)$ (the most expensive successor of $w$ in $Y$) to be the element of $Y \cap succ(w)$ which has the largest processing time, or $nil$ if $Y \cap succ(w)$ is empty. We now take $f(Y)$ (the set of the most expensive successors in $Y$) to be the set $\{f_Y(w) : w \in M\} \setminus \{nil\}$. We see that $f(Y)$ is indeed a set of cardinality at most $|M|$.
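For illustration, the $succ$-case encoding $f$ and decoding $g$ can be sketched on a small randomly generated instance (the sets $K$, $M$ and the relation $pred$ below are invented for the demonstration); the inclusion $g(f(Y)) \subseteq Y$ proved above holds for every $Y \subseteq K$:

```python
# A sketch of the succ-case encoding/decoding on a small random bipartite
# instance: K with distinct processing times, witness set M, and pred(w) in K
# for each w in M. The inclusion g(f(Y)) <= Y holds for every Y; equality
# characterizes the non-succ-exchangeable sets.
import random

random.seed(0)
K = list(range(8))
t = {v: v + 1 for v in K}                        # distinct processing times
M = ['w0', 'w1', 'w2']
pred = {w: set(random.sample(K, 4)) for w in M}  # pred(w) subset of K
succ = {v: {w for w in M if v in pred[w]} for v in K}

def f(Y):
    """Least expensive predecessor outside Y, per witness in M; drop nil."""
    out = set()
    for w in M:
        cand = (set(K) - Y) & pred[w]
        if cand:
            out.add(min(cand, key=lambda v: t[v]))
    return out

def g(Z):
    """Elements v with some witness w in succ(v) whose Z-predecessors all cost more."""
    return {v for v in K
            if any(all(t[z] > t[v] for z in Z & pred[w]) for w in succ[v])}

for _ in range(200):
    Y = {v for v in K if random.random() < 0.5}
    assert g(f(Y)) <= Y      # the first inclusion proved above
```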
Now we aim to prove that $f$ is injective on the family of non-$pred$-exchangeable sets. To this end we define the reverse function $g$. For a set $Z \subseteq K$ (which we think of as the set of most expensive successors in some $Y$) let $g(Z)$ be the set of such elements $v$ of $K$ that for any $w \in pred(v)$ there exists a $z_w \in Z \cap succ(w)$ with $t(z_w) \geq t(v)$. Notice, in particular, that $Z \subseteq g(Z)$, as for $v \in Z$ the job $z_w = v$ is a good witness for any $w \in pred(v)$.

First we prove $Y \subseteq g(f(Y))$ for any $Y \subseteq K$. Take any $v \in Y$ and consider any $w \in pred(v)$. Then $f_Y(w) \neq nil$ and $t(f_Y(w)) \geq t(v)$, as $v \in Y \cap succ(w)$. Thus $v \in g(f(Y))$, as for any $w \in pred(v)$ we can take $z_w = f_Y(w)$ in the definition of $g(f(Y))$.

In the other direction, let us assume that $Y$ does not satisfy $g(f(Y)) \subseteq Y$. This means we have $v \in g(f(Y)) \setminus Y$. Then we show that $Y$ is $pred$-exchangeable. Consider any $w \in pred(v)$. As $v \in g(f(Y))$, by the definition of the function $g$ applied to the set $f(Y)$, there exists $z_w \in f(Y) \cap succ(w)$ with $t(z_w) \geq t(v)$. But $f(Y) \subseteq Y$, while $v \notin Y$; and as all the values of $t$ are distinct, $t(z_w) > t(v)$ and $z_w$ satisfies the condition for $u_w$ in the definition of $pred$-exchangeability.

Thus, in both cases, if $Y$ is non-exchangeable then $g(f(Y)) = Y$ (in fact it is possible to prove in both cases that $Y$ is non-exchangeable iff $g(f(Y)) = Y$). As there are $\sum_{l=0}^{|M|} \binom{|K|}{l}$ possible values of $f(Y)$, the first part of the lemma is proven. For the second, it suffices to notice that $succ$- and $pred$-exchangeability can be checked in time $O(|K|^2 |M|)$ directly from the definition.

Example 2.7. To illustrate the applicability of Lemma 2.6, we analyze the following very simple case: assume the whole set $M \setminus \{v_{begin}\}$ succeeds $I_1$, i.e., for every $w \in M \setminus \{v_{begin}\}$ and $v \in I_1$ we have $w \not< v$.
If $\varepsilon_1$ is small, then we can use the first case of Lemma 2.5 for the whole set $K = I_1$: we have $pred(K) = \{v_{begin}\}$ and we only look for orderings that put $v_{begin}$ as the first processed job. Thus, we can apply Proposition 2.1 with an algorithm $R$ that rejects sets $X \subseteq V$ where $X \cap I_1$ is $succ$-exchangeable with respect to $I_1$. By Lemma 2.6, the number of sets accepted by $R$ is bounded by $2^{|M|} \sum_{l \leq |M|} \binom{|I_1|}{l}$, which is small if $|M| \leq \varepsilon_1 n$.

2.6 Important jobs at $n/2$

As was already mentioned in the overview, the assumptions of Lemma 2.5 are quite strict; therefore, we need to learn a bit more about how $\sigma_{opt}$ behaves on $M$ in order to identify a suitable place for an application. As $|M| \leq 2\varepsilon_1 n$, we can afford branching into a few subcases for every job in $M$.

Let $A = \{1, 2, \ldots, n/4\}$, $B = \{n/4 + 1, \ldots, n/2\}$, $C = \{n/2 + 1, \ldots, 3n/4\}$, $D = \{3n/4 + 1, \ldots, n\}$, i.e., we split $\{1, 2, \ldots, n\}$ into quarters. For each $w \in M \setminus \{v_{begin}, v_{end}\}$ we branch into two cases: whether $\sigma_{opt}(w)$ belongs to $A \cup B$ or to $C \cup D$; however, if some predecessor (successor) of $w$ has already been assigned to $C \cup D$ ($A \cup B$), we do not allow $w$ to be placed in $A \cup B$ ($C \cup D$). Of course, we already know that $\sigma_{opt}(v_{begin}) \in A$ and $\sigma_{opt}(v_{end}) \in D$. Recall that the vertices of $M$ can be paired into a matching; since for each $w_1 < w_2$, $w_1, w_2 \in M$, we cannot have $w_1$ placed in $C \cup D$ and $w_2$ placed in $A \cup B$, this branching leads to at most $3^{|M|/2} \leq 3^{\varepsilon_1 n}$ subcases, and thus the same overhead in the time complexity. By the above procedure, in all branches the guesses about the alignment of jobs from $M$ satisfy the precedence constraints inside $M$. Now consider a fixed branch. Let $M^{AB}$ and $M^{CD}$ be the sets of elements of $M$ to be placed in $A \cup B$ and $C \cup D$, respectively. Let us now see what we can learn in a fixed branch about the behaviour of $\sigma_{opt}$ on $I_1$.
Let
$$W^{AB}_{half} = \{ v \in I_1 : \exists w \in M^{AB},\ v < w \}, \qquad W^{CD}_{half} = \{ v \in I_1 : \exists w \in M^{CD},\ w < v \},$$
that is, $W^{AB}_{half}$ (resp. $W^{CD}_{half}$) are those elements of $I_1$ which are forced into the first (resp. second) half of $\sigma_{opt}$ by the choices we made about $M$ (see Figure 3 for an illustration). If one of the $W_{half}$ sets is much larger than $M$, we have obtained a gain: by branching into at most $3^{\varepsilon_1 n}$ branches we gained additional information about a significant (much larger than $(\log_2 3)\varepsilon_1 n$) number of other elements (and so we will be able to avoid considering a significant number of sets in the DP algorithm). This is formalized in the following lemma:

Figure 3: An illustration of the sets $M^{AB}$, $M^{CD}$, $W^{AB}_{half}$ and $W^{CD}_{half}$.

Lemma 2.8. Consider a fixed branch. If $W^{AB}_{half}$ or $W^{CD}_{half}$ has at least $\varepsilon_2 n$ elements, then the DP algorithm can be augmented to solve the instance in the considered branch in time
$$T_2(n) = \left( 2^{(1-\varepsilon_2)n} + \binom{n}{(1/2-\varepsilon_2)n} + 2^{\varepsilon_2 n} \binom{(1-\varepsilon_2)n}{n/2} \right) n^{O(1)}.$$

Proof. We describe here only the case $|W^{AB}_{half}| \geq \varepsilon_2 n$. The second case is symmetrical. Recall that the set $W^{AB}_{half}$ needs to be placed in $A \cup B$ by the optimal ordering $\sigma_{opt}$. We use Proposition 2.1 with an algorithm $R$ that accepts sets $X \subseteq V$ such that the set $W^{AB}_{half} \setminus X$ (the elements of $W^{AB}_{half}$ not scheduled in $X$) is of size at most $\max(0, n/2 - |X|)$ (the number of jobs to be scheduled after $X$ in the first half of the jobs). Moreover, the algorithm $R$ tests if the set $X$ conforms with the guessed sets $M^{AB}$ and $M^{CD}$, i.e.:
$$|X| \leq n/2 \Rightarrow M^{CD} \cap X = \emptyset, \qquad |X| \geq n/2 \Rightarrow M^{AB} \subseteq X.$$
Clearly, for any $1 \leq i \leq n$, the set $\sigma_{opt}^{-1}(\{1, 2, \ldots, i\})$ is accepted by $R$, as $\sigma_{opt}$ places $M^{AB} \cup W^{AB}_{half}$ in $A \cup B$ and $M^{CD}$ in $C \cup D$. Let us now estimate the number of sets $X$ accepted by $R$.
Any set $X$ of size larger than $n/2$ needs to contain $W^{AB}_{half}$; there are at most $2^{n - |W^{AB}_{half}|} \leq 2^{(1-\varepsilon_2)n}$ such sets. All sets of size at most $n/2 - |W^{AB}_{half}|$ are accepted by $R$; there are at most $n \binom{n}{(1/2-\varepsilon_2)n}$ such sets. Consider now a set $X$ of size $n/2 - \alpha$ for some $0 \leq \alpha \leq |W^{AB}_{half}|$. Such a set needs to contain $|W^{AB}_{half}| - \beta$ elements of $W^{AB}_{half}$ for some $0 \leq \beta \leq \alpha$, and $n/2 - |W^{AB}_{half}| - (\alpha - \beta)$ elements of $V \setminus W^{AB}_{half}$. Therefore the number of such sets (for all possible $\alpha$) is bounded by:
$$\sum_{\alpha=0}^{|W^{AB}_{half}|} \sum_{\beta=0}^{\alpha} \binom{|W^{AB}_{half}|}{|W^{AB}_{half}| - \beta} \binom{n - |W^{AB}_{half}|}{n/2 - |W^{AB}_{half}| - (\alpha - \beta)} \leq n^2 \max_{0 \leq \beta \leq \alpha \leq |W^{AB}_{half}|} \binom{|W^{AB}_{half}|}{\beta} \binom{n - |W^{AB}_{half}|}{n/2 + (\alpha - \beta)} \leq n^2 \, 2^{|W^{AB}_{half}|} \binom{n - |W^{AB}_{half}|}{n/2} \leq n^2 \, 2^{\varepsilon_2 n} \binom{(1-\varepsilon_2)n}{n/2}.$$
The last inequality follows from the fact that the function $x \mapsto 2^x \binom{n-x}{n/2}$ is decreasing for $x \in [0, n/2]$. The bound $T_2(n)$ follows.

Note that we have a $3^{\varepsilon_1 n}$ overhead so far, due to guessing the placement of the jobs from $M$. By Lemma 2.3, $\binom{(1-\varepsilon_2)n}{n/2} = O((2 - c(\varepsilon_2))^{(1-\varepsilon_2)n})$ and $\binom{n}{(1/2-\varepsilon_2)n} = O((2 - c'(\varepsilon_2))^n)$ for some positive constants $c(\varepsilon_2)$ and $c'(\varepsilon_2)$ that depend only on $\varepsilon_2$. Thus, for any small fixed $\varepsilon_2$ we can choose $\varepsilon_1$ sufficiently small so that $3^{\varepsilon_1 n} T_2(n) = O(c^n)$ for some $c < 2$. Note that $3^{\varepsilon_1 n} T_2(n)$ is an upper bound on the total time spent on processing all the considered subcases.

Let $W_{half} = W^{AB}_{half} \cup W^{CD}_{half}$ and $I_2 = I_1 \setminus W_{half}$. From this point we assume that $|W^{AB}_{half}|, |W^{CD}_{half}| \leq \varepsilon_2 n$, hence $|W_{half}| \leq 2\varepsilon_2 n$ and $|I_2| \geq (1 - 2\varepsilon_1 - 2\varepsilon_2)n$. For each $v \in M^{AB} \cup W^{AB}_{half}$ we branch into two subcases, whether $\sigma_{opt}(v)$ belongs to $A$ or to $B$. Similarly, for each $v \in M^{CD} \cup W^{CD}_{half}$ we guess whether $\sigma_{opt}(v)$ belongs to $C$ or to $D$.
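The monotonicity fact invoked in the proof of Lemma 2.8 can also be checked numerically; the following quick sanity check (with an arbitrary even $n$, chosen here for illustration) verifies that $x \mapsto 2^x \binom{n-x}{n/2}$ is non-increasing on $[0, n/2]$:

```python
# Numerical sanity check (not part of the proof) of the fact used above:
# x -> 2^x * C(n - x, n/2) is non-increasing on [0, n/2].
from math import comb

n = 40  # any even n works; 40 is arbitrary

def h(x):
    return 2**x * comb(n - x, n // 2)

values = [h(x) for x in range(n // 2 + 1)]
assert all(a >= b for a, b in zip(values, values[1:]))
```

The ratio $h(x+1)/h(x) = 2(n/2 - x)/(n - x) \leq 1$ for $x \geq 0$, which is the reason the check succeeds for every $n$.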
Moreover, we terminate branches which trivially contradict the constraints. Let us now estimate the number of subcases created by this branching. Recall that the vertices of $M$ can be paired into a matching; since for each $w_1 < w_2$, $w_1, w_2 \in M$, we cannot have $w_1$ placed in a later segment than $w_2$, this gives us 10 options for each pair $w_1 < w_2$. Thus, in total there are at most $10^{|M|/2} \leq 10^{\varepsilon_1 n}$ ways of placing the vertices of $M$ into quarters without contradicting the constraints. Moreover, this step gives us an additional $2^{|W_{half}|} \leq 2^{2\varepsilon_2 n}$ overhead in the time complexity for the vertices in $W_{half}$. Overall, at this point we are considering at most $10^{\varepsilon_1 n} 2^{2\varepsilon_2 n} n^{O(1)}$ subcases. We denote the set of elements of $M$ and $W_{half}$ assigned to quarter $\Gamma \in \{A, B, C, D\}$ by $M^\Gamma$ and $W^\Gamma_{half}$, respectively.

2.7 Quarters and applications of the core lemma

In this section we try to apply Lemma 2.6 as follows: we look at which elements of $I_2$ can be placed in $A$ (the set $P_A$) and which cannot (the set $P_{\neg A}$). Similarly, we define the set $P_D$ (can be placed in $D$) and $P_{\neg D}$ (cannot be placed in $D$). For each of these sets, we try to apply Lemma 2.6 to some subset of it. If we fail, then in the next subsection we infer that the solutions in the quarters are partially independent of each other, and we can solve the problem in time roughly $O(2^{3n/4})$.

Let us now proceed with a more detailed argumentation. We define the following two partitions of $I_2$:
$$P_{\neg A} = \{ v \in I_2 : \exists w \in M^B,\ w < v \}, \qquad P_A = I_2 \setminus P_{\neg A} = \{ v \in I_2 : \forall w,\ w < v \Rightarrow w \in M^A \},$$
$$P_{\neg D} = \{ v \in I_2 : \exists w \in M^C,\ w > v \}, \qquad P_D = I_2 \setminus P_{\neg D} = \{ v \in I_2 : \forall w,\ w > v \Rightarrow w \in M^D \}.$$
In other words, the elements of $P_{\neg A}$ cannot be placed in $A$ because some of their requirements are in $M^B$, and the elements of $P_{\neg D}$ cannot be placed in $D$ because they are required by some elements of $M^C$ (see Figure 4 for an illustration).
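As a small aside, the per-pair counts behind the $3^{|M|/2}$ and $10^{|M|/2}$ branching bounds above come from counting, for a matched pair $w_1 < w_2$, the assignments in which $w_1$'s segment is no later than $w_2$'s. A quick enumeration confirms the counts 3 (two halves) and 10 (four quarters):

```python
# For a matched pair w1 < w2, count assignments to ordered segments with
# w1's segment no later than w2's: 3 options over {AB, CD}, 10 over
# {A, B, C, D}, matching the branching bounds in the text.
from itertools import product

def count_valid(segments):
    order = {s: i for i, s in enumerate(segments)}
    return sum(1 for s1, s2 in product(segments, repeat=2)
               if order[s1] <= order[s2])

assert count_valid(['AB', 'CD']) == 3
assert count_valid(['A', 'B', 'C', 'D']) == 10
```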
Note that these definitions are independent of $\sigma_{opt}$, so the sets $P_\Delta$ for $\Delta \in \{A, \neg A, \neg D, D\}$ can be computed in polynomial time. Let
$$p_A = |\sigma_{opt}(P_A) \cap A|, \quad p_B = |\sigma_{opt}(P_{\neg A}) \cap B|, \quad p_C = |\sigma_{opt}(P_{\neg D}) \cap C|, \quad p_D = |\sigma_{opt}(P_D) \cap D|.$$
Note that $p_\Gamma \leq n/4$ for every $\Gamma \in \{A, B, C, D\}$. As $p_A = n/4 - |M^A \cup W^A_{half}|$ and $p_D = n/4 - |M^D \cup W^D_{half}|$, these values can be computed by the algorithm. We branch into $(1 + n/4)^2$ further subcases, guessing the (still unknown) values $p_B$ and $p_C$.

Figure 4: An illustration of the sets $P_\Delta$ for $\Delta \in \{A, \neg A, \neg D, D\}$ and their relation to the sets $M^\Gamma$ for $\Gamma \in \{A, B, C, D\}$.

Let us focus on the quarter $A$ and assume that $p_A$ is significantly smaller than $|P_A|/2$ (i.e., $|P_A|/2 - p_A$ is a constant fraction of $n$). We claim that we can apply Lemma 2.6 as follows. While computing $\sigma[X]$, if $|X| \geq n/4$, we can represent $X \cap P_A$ as a disjoint sum of two subsets $X_A^A, X_A^{BCD} \subseteq P_A$. The first one is of size $p_A$ and represents the elements of $X \cap P_A$ placed in quarter $A$, and the second represents the elements of $X \cap P_A$ placed in quarters $B \cup C \cup D$. Note that the elements of $X_A^{BCD}$ have all their predecessors in the quarter $A$, so by Lemma 2.5 the set $X_A^{BCD}$ has to be non-$succ$-exchangeable with respect to $P_A \setminus X_A^A$; therefore, by Lemma 2.6, we can consider only a very narrow choice of $X_A^{BCD}$. Thus, the whole part $X \cap P_A$ can be represented by its subset of cardinality at most $p_A$ plus some small information about the rest. If $p_A$ is significantly smaller than $|P_A|/2$, this representation is more concise than simply remembering a subset of $P_A$. Thus we obtain a better bound on the number of feasible sets.
A symmetric situation arises when $p_D$ is significantly smaller than $|P_D|/2$; moreover, we can similarly use Lemma 2.6 if $p_B$ is significantly smaller than $|P_{\neg A}|/2$, or $p_C$ than $|P_{\neg D}|/2$. This is formalized by the following lemma.

Lemma 2.9. If $p_\Gamma < |P_\Delta|/2$ for some $(\Gamma, \Delta) \in \{(A, A), (B, \neg A), (C, \neg D), (D, D)\}$ and $\varepsilon_1 \leq 1/4$, then the DP algorithm can be augmented to solve the remaining instance in time bounded by
$$T_p(n) = 2^{n - |P_\Delta|} \binom{|P_\Delta|}{p_\Gamma} \binom{n}{|M|} n^{O(1)}.$$

Figure 5: An illustration of the proof of Lemma 2.9 for $(\Gamma, \Delta) = (A, A)$.

Proof. We first describe in detail the case $\Delta = \Gamma = A$, and, later, we shortly describe the other cases, which are proven analogously. An illustration of the proof is depicted in Figure 5.

On a high level, we want to proceed as in Proposition 2.1, i.e., use the standard DP algorithm described in Section 2.1, while terminating the computation for some unfeasible subsets of $V$. However, in this case we need to slightly modify the recursive formula used in the computations, and we compute $\sigma[X, L]$ for $X \subseteq V$, $L \subseteq X \cap P_A$. Intuitively, the set $X$ plays the same role as before, whereas $L$ is the subset of $X \cap P_A$ that was placed in the quarter $A$. Formally, $\sigma[X, L]$ is the ordering of $X$ that attains the minimum total cost among those orderings $\sigma$ for which $L = P_A \cap \sigma^{-1}(A)$. Thus, in the DP algorithm we use the following recursive formula:
$$T(\sigma[X, L]) = \begin{cases} \min_{v \in \max(X)} \left[ T(\sigma[X \setminus \{v\}, L \setminus \{v\}]) + T(v, |X|) \right] & \text{if } |X| \leq n/4 \text{ and } L = X \cap P_A, \\ +\infty & \text{if } |X| \leq n/4 \text{ and } L \neq X \cap P_A, \\ \min_{v \in \max(X) \setminus L} \left[ T(\sigma[X \setminus \{v\}, L]) + T(v, |X|) \right] & \text{otherwise.} \end{cases}$$
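For comparison, the baseline $2^n n^{O(1)}$ dynamic program of Section 2.1, which the two-parameter recurrence above refines, can be sketched as follows. This is a simplified illustration on an invented instance (without the pruning algorithm $R$ or the extra parameter $L$), cross-checked against brute force; a job placed at position $i$ contributes $T(v, i) = t(v) \cdot (n - i + 1)$ to the total completion time.

```python
# A minimal sketch of the baseline subset DP for 1|prec|sum C_i, cross-checked
# against brute force over all feasible permutations. Instance data is made up.
from itertools import permutations

def sched_dp(t, prec):
    jobs = sorted(t)
    n = len(jobs)
    idx = {v: i for i, v in enumerate(jobs)}
    T = [float('inf')] * (1 << n)
    T[0] = 0
    for X in range(1, 1 << n):
        size = bin(X).count('1')
        for v in jobs:
            bit = 1 << idx[v]
            if not X & bit:
                continue
            # v may be the |X|-th job only if all its predecessors are in X
            # and none of its successors is in X
            if any(b == v and not X & (1 << idx[a]) for a, b in prec):
                continue
            if any(a == v and X & (1 << idx[b]) for a, b in prec):
                continue
            # job at position |X| delays itself and the n - |X| later jobs
            T[X] = min(T[X], T[X ^ bit] + t[v] * (n - size + 1))
    return T[(1 << n) - 1]

def sched_brute(t, prec):
    jobs, n = sorted(t), len(t)
    best = float('inf')
    for order in permutations(jobs):
        pos = {v: i for i, v in enumerate(order)}
        if all(pos[a] < pos[b] for a, b in prec):
            best = min(best, sum(t[v] * (n - i) for i, v in enumerate(order)))
    return best

t = {'a': 3, 'b': 1, 'c': 2, 'd': 5}
prec = [('a', 'c'), ('b', 'd')]
assert sched_dp(t, prec) == sched_brute(t, prec)
```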
In the next paragraphs we describe a polynomial-time algorithm $R$ that accepts or rejects pairs of subsets $(X, L)$, $X \subseteq V$, $L \subseteq X \cap P_A$; we terminate the computation on rejected pairs $(X, L)$. As each single calculation of $\sigma[X, L]$ uses at most $|X|$ recursive calls, the time complexity of the algorithm is bounded by the number of accepted pairs, up to a polynomial multiplicative factor.

We now describe the algorithm $R$. First, given a pair $(X, L)$, we ensure that we fulfill the guessed sets $M^\Gamma$ and $W^\Gamma_{half}$, $\Gamma \in \{A, B, C, D\}$; e.g., we require $M^B, W^B_{half} \subseteq X$ if $|X| \geq n/2$ and $(M^B \cup W^B_{half}) \cap X = \emptyset$ if $|X| \leq n/4$, with similar conditions for the other quarters $A$, $C$ and $D$. Moreover, we require that $X$ is downward closed. Note that this implies $X \cap P_{\neg A} = \emptyset$ if $|X| \leq n/4$ and $P_{\neg D} \subseteq X$ if $|X| \geq 3n/4$. Second, we require the following:

1. If $|X| \leq n/4$, we require that $L = X \cap P_A$ and $|L| \leq p_A$; as $p_A \leq |P_A|/2$, there are at most $2^{n - |P_A|} \binom{|P_A|}{p_A} n$ such pairs $(X, L)$;

2. Otherwise, we require that $|L| = p_A$ and that the set $X \cap (P_A \setminus L)$ is non-$succ$-exchangeable with respect to $P_A \setminus L$; by Lemma 2.6 there are at most $\sum_{l \leq |M|} \binom{|P_A \setminus L|}{l} \leq n \binom{n}{|M|}$ (since $|M| \leq 2\varepsilon_1 n \leq n/2$) non-$succ$-exchangeable sets with respect to $P_A \setminus L$, thus there are at most $2^{n - |P_A|} \binom{|P_A|}{p_A} \binom{n}{|M|} n$ such pairs $(X, L)$.

Let us now check the correctness of the above pruning. Let $0 \leq i \leq n$, and let $X = \sigma_{opt}^{-1}(\{1, 2, \ldots, i\})$ and $L = \sigma_{opt}^{-1}(A) \cap X \cap P_A$. It is easy to see that Lemma 2.5 implies that in the case $i \geq n/4$ the set $X \cap (P_A \setminus L)$ is non-$succ$-exchangeable, and the pair $(X, L)$ is accepted.

Let us now shortly discuss the case $\Gamma = B$ and $\Delta = \neg A$. Recall that, due to the precedence constraints between $P_{\neg A}$ and $M^B$, the jobs from $P_{\neg A}$ cannot be scheduled in the segment $A$.
Therefore, while computing $\sigma[X]$ for $|X| \geq n/2$, we can represent $X \cap P_{\neg A}$ as a disjoint sum of two subsets $X_{\neg A}^B, X_{\neg A}^{CD}$: the first one, of size $p_B$, to be placed in $B$, and the second one to be placed in $C \cup D$. Recall that in Section 2.6 we have ensured that for any $v \in I_2$, all predecessors of $v$ appear in $M^{AB}$ and all successors of $v$ appear in $M^{CD}$. We infer that all predecessors of jobs in $X_{\neg A}^{CD}$ appear in segments $A$ and $B$ and, by Lemma 2.5, in the optimal solution the set $X_{\neg A}^{CD}$ is non-$succ$-exchangeable with respect to $P_{\neg A} \setminus X_{\neg A}^B$. Therefore we may proceed as in the case of $(\Gamma, \Delta) = (A, A)$; in particular, while computing $\sigma[X, L]$:

1. If $|X| \leq n/4$, we require that $L = X \cap P_{\neg A} = \emptyset$;

2. If $n/4 < |X| \leq n/2$, we require that $L = X \cap P_{\neg A}$ and $|L| \leq p_B$;

3. Otherwise, we require that $|L| = p_B$ and that the set $X \cap (P_{\neg A} \setminus L)$ is non-$succ$-exchangeable with respect to $P_{\neg A} \setminus L$.

The cases $(\Gamma, \Delta) \in \{(C, \neg D), (D, D)\}$ are symmetrical: $L$ corresponds to jobs from $P_\Delta$ scheduled to be done in segment $\Gamma$, and we require that $X \cap (P_\Delta \setminus L)$ is non-$pred$-exchangeable (instead of non-$succ$-exchangeable) with respect to $P_\Delta \setminus L$. The recursive definition of $T(\sigma[X, L])$ should also be adjusted.

Observe that if any of the sets $P_\Delta$ for $\Delta \in \{A, \neg A, \neg D, D\}$ is significantly larger than $n/2$ (i.e., larger than $(\frac12 + \delta)n$ for some $\delta > 0$), one of the situations in Lemma 2.9 indeed occurs, since $p_\Gamma \leq n/4$ for $\Gamma \in \{A, B, C, D\}$ and $|M|$ is small.

Lemma 2.10. If $2\varepsilon_1 < 1/4 + \varepsilon_3/2$ and at least one of the sets $P_A$, $P_{\neg A}$, $P_{\neg D}$ and $P_D$ is of size at least $(1/2 + \varepsilon_3)n$, then the DP algorithm can be augmented to solve the remaining instance in time bounded by
$$T_3(n) = 2^{(1/2 - \varepsilon_3)n} \binom{(1/2 + \varepsilon_3)n}{n/4} \binom{n}{2\varepsilon_1 n} n^{O(1)}.$$

Proof.
The claim is straightforward; note only that the term $2^{n - |P_\Delta|} \binom{|P_\Delta|}{p_\Gamma}$ for $p_\Gamma < |P_\Delta|/2$ is a decreasing function of $|P_\Delta|$.

Note that we have a $10^{\varepsilon_1 n} 2^{2\varepsilon_2 n} n^{O(1)}$ overhead so far. As $\binom{(1/2+\varepsilon_3)n}{n/4} = O((2 - c(\varepsilon_3))^{(1/2+\varepsilon_3)n})$ for some constant $c(\varepsilon_3) > 0$, for any small fixed $\varepsilon_3$ we can choose sufficiently small $\varepsilon_2$ and $\varepsilon_1$ to have $10^{\varepsilon_1 n} 2^{2\varepsilon_2 n} n^{O(1)} T_3(n) = O(c^n)$ for some $c < 2$.

From this point we assume that $|P_A|, |P_{\neg A}|, |P_{\neg D}|, |P_D| \leq (1/2 + \varepsilon_3)n$. As $P_A \cup P_{\neg A} = I_2 = P_{\neg D} \cup P_D$ and $|I_2| \geq (1 - 2\varepsilon_1 - 2\varepsilon_2)n$, this implies that these four sets are of size at least $(1/2 - 2\varepsilon_1 - 2\varepsilon_2 - \varepsilon_3)n$, i.e., they are of size roughly $n/2$. Having bounded the sizes of the sets $P_\Delta$ from below, we are able to use Lemma 2.9 again: if any of the numbers $p_A$, $p_B$, $p_C$, $p_D$ is significantly smaller than $n/4$ (i.e., smaller than $(\frac14 - \delta)n$ for some $\delta > 0$), then it is also significantly smaller than half of the cardinality of the corresponding set $P_\Delta$.

Lemma 2.11. Let $\varepsilon_{123} = 2\varepsilon_1 + 2\varepsilon_2 + \varepsilon_3$. If at least one of the numbers $p_A$, $p_B$, $p_C$ and $p_D$ is smaller than $(1/4 - \varepsilon_4)n$ and $\varepsilon_4 > \varepsilon_{123}/2$, then the DP algorithm can be augmented to solve the remaining instance in time bounded by
$$T_4(n) = 2^{(1/2 + \varepsilon_{123})n} \binom{(1/2 - \varepsilon_{123})n}{(1/4 - \varepsilon_4)n} \binom{n}{2\varepsilon_1 n} n^{O(1)}.$$

Proof. As before, the claim is a straightforward application of Lemma 2.9 and of the fact that the term $2^{n - |P_\Delta|} \binom{|P_\Delta|}{p_\Gamma}$ for $p_\Gamma < |P_\Delta|/2$ is a decreasing function of $|P_\Delta|$.

So far we have a $10^{\varepsilon_1 n} 2^{2\varepsilon_2 n} n^{O(1)}$ overhead. Similarly as before, for any small fixed $\varepsilon_4$, if we choose $\varepsilon_1, \varepsilon_2, \varepsilon_3$ sufficiently small, we have $\binom{(1/2-\varepsilon_{123})n}{(1/4-\varepsilon_4)n} = O((2 - c(\varepsilon_4))^{(1/2-\varepsilon_{123})n})$ and $10^{\varepsilon_1 n} 2^{2\varepsilon_2 n} n^{O(1)} T_4(n) = O(c^n)$ for some $c < 2$.
Thus we are left with the case when $p_A, p_B, p_C, p_D \geq (1/4 - \varepsilon_4)n$.

2.8 The remaining case

In this subsection we infer that in the remaining case the quarters $A$, $B$, $C$ and $D$ are somewhat independent, which allows us to develop a faster algorithm. More precisely, note that $p_\Gamma \geq (1/4 - \varepsilon_4)n$, $\Gamma \in \{A, B, C, D\}$, means that almost all elements that are placed in $A$ by $\sigma_{opt}$ belong to $P_A$, while almost all elements placed in $B$ belong to $P_{\neg A}$. Similarly, almost all elements placed in $D$ belong to $P_D$ and almost all elements placed in $C$ belong to $P_{\neg D}$. As $P_A \cap P_{\neg A} = \emptyset$ and $P_{\neg D} \cap P_D = \emptyset$, this implies that what happens in the quarters $A$ and $B$, as well as $C$ and $D$, is (almost) independent. This key observation can be used to develop an algorithm that solves this special case in time roughly $O(2^{3n/4})$.

Let $W^B_{quarter} = I_2 \cap (\sigma_{opt}^{-1}(B) \setminus P_{\neg A})$ and $W^C_{quarter} = I_2 \cap (\sigma_{opt}^{-1}(C) \setminus P_{\neg D})$. As $p_B, p_C \geq (1/4 - \varepsilon_4)n$, we have $|W^B_{quarter}|, |W^C_{quarter}| \leq \varepsilon_4 n$. We branch into at most $n^2 \binom{n}{\varepsilon_4 n}^2$ subcases, guessing the sets $W^B_{quarter}$ and $W^C_{quarter}$. Let $W_{quarter} = W^B_{quarter} \cup W^C_{quarter}$, $I_3 = I_2 \setminus W_{quarter}$, and $Q_\Delta = P_\Delta \setminus W_{quarter}$ for $\Delta \in \{A, \neg A, \neg D, D\}$. Moreover, let $W^\Gamma = M^\Gamma \cup W^\Gamma_{half} \cup W^\Gamma_{quarter}$ for $\Gamma \in \{A, B, C, D\}$, using the convention $W^A_{quarter} = W^D_{quarter} = \emptyset$.

Note that in the current branch, for any ordering and any $\Gamma \in \{A, B, C, D\}$, the segment $\Gamma$ gets all the jobs from $W^\Gamma$ and $q_\Gamma = n/4 - |W^\Gamma|$ jobs from the appropriate $Q_\Delta$ ($\Delta = A, \neg A, \neg D, D$ for $\Gamma = A, B, C, D$, respectively). Thus, the behaviour of an ordering $\sigma$ in $A$ influences the behaviour of $\sigma$ in $C$ by the choice of which elements of $Q_A \cap Q_{\neg D}$ are placed in $A$, and which in $C$. Similar dependencies are between $A$ and $D$, $B$ and $C$, as well as $B$ and $D$ (see Figure 6).
In particular, there are no dependencies between $A$ and $B$, as well as $C$ and $D$, and we can compute the optimal arrangement by keeping track of only three out of four dependencies at once, leading us to an algorithm running in time roughly $O(2^{3n/4})$. This is formalized in the following lemma:

Lemma 2.12. If $2\varepsilon_1 + 2\varepsilon_2 + \varepsilon_4 < 1/4$ and the assumptions of Lemmata 2.2 and 2.8–2.11 are not satisfied, the instance can be solved by an algorithm running in time bounded by
$$T_5(n) = \binom{n}{\varepsilon_4 n}^2 2^{(3/4 + \varepsilon_3)n} n^{O(1)}.$$

Figure 6: Dependencies between quarters and the sets $Q_\Delta$. The left part of the figure illustrates where the jobs from $Q_{\Delta_1} \cap Q_{\Delta_2}$ may be placed. The right part of the figure illustrates the dependencies between the quarters.

Proof. Let $(\Gamma, \Delta) \in \{(A, A), (B, \neg A), (C, \neg D), (D, D)\}$. For each set $Y \subseteq Q_\Delta$ of size $q_\Gamma$, for each bijection (partial ordering) $\sigma^\Gamma(Y) : Y \cup W^\Gamma \to \Gamma$, let us define its cost as
$$T(\sigma^\Gamma(Y)) = \sum_{v \in Y \cup W^\Gamma} T(v, \sigma^\Gamma(Y)(v)).$$
Let $\sigma^\Gamma_{opt}(Y)$ be the partial ordering that minimizes the cost (recall that it is unique due to the initial steps in Section 2.4). Note that if we define $Y^\Gamma_{opt} = \sigma_{opt}^{-1}(\Gamma) \cap Q_\Delta$ for $(\Gamma, \Delta) \in \{(A, A), (B, \neg A), (C, \neg D), (D, D)\}$, then the ordering $\sigma_{opt}$ consists of the partial orderings $\sigma^\Gamma_{opt}(Y^\Gamma_{opt})$.

We first compute the values $\sigma^\Gamma_{opt}(Y)$ for all $(\Gamma, \Delta) \in \{(A, A), (B, \neg A), (C, \neg D), (D, D)\}$ and $Y \subseteq Q_\Delta$, $|Y| = q_\Gamma$, by a straightforward modification of the DP algorithm. For a fixed pair $(\Gamma, \Delta)$, the DP algorithm computes $\sigma^\Gamma_{opt}(Y)$ for all $Y$ in time
$$2^{|W^\Gamma| + |Q_\Delta|} n^{O(1)} \leq 2^{(2\varepsilon_1 + 2\varepsilon_2 + \varepsilon_4)n + (1/2 + \varepsilon_3)n} n^{O(1)} = O(2^{(3/4 + \varepsilon_3)n}).$$
The last inequality follows from the assumption $2\varepsilon_1 + 2\varepsilon_2 + \varepsilon_4 < 1/4$.
Let us focus on the sets $Q_A \cap Q_{\neg D}$, $Q_A \cap Q_D$, $Q_{\neg A} \cap Q_{\neg D}$ and $Q_{\neg A} \cap Q_D$. Without loss of generality we assume that $Q_A \cap Q_{\neg D}$ is the smallest among those. As they are all pairwise disjoint and sum up to $I_3$, we have $|Q_A \cap Q_{\neg D}| \leq n/4$. We branch into at most $2^{|Q_A \cap Q_{\neg D}| + |Q_{\neg A} \cap Q_D|}$ subcases, guessing the sets
$$Y^{AC}_{opt} = Y^A_{opt} \cap (Q_A \cap Q_{\neg D}) = (Q_A \cap Q_{\neg D}) \setminus Y^C_{opt} \quad \text{and} \quad Y^{BD}_{opt} = Y^B_{opt} \cap (Q_{\neg A} \cap Q_D) = (Q_{\neg A} \cap Q_D) \setminus Y^D_{opt}.$$
Then, we choose the set $Y^{AD}_{opt} = Y^A_{opt} \cap (Q_A \cap Q_D) = (Q_A \cap Q_D) \setminus Y^D_{opt}$ that optimizes
$$T(\sigma^A_{opt}(Y^{AC}_{opt} \cup Y^{AD}_{opt})) + T(\sigma^D_{opt}(Q_D \setminus (Y^{AD}_{opt} \cup Y^{BD}_{opt}))).$$
Independently, we choose the set $Y^{BC}_{opt} = Y^B_{opt} \cap (Q_{\neg A} \cap Q_{\neg D}) = (Q_{\neg A} \cap Q_{\neg D}) \setminus Y^C_{opt}$ that optimizes
$$T(\sigma^B_{opt}(Y^{BC}_{opt} \cup Y^{BD}_{opt})) + T(\sigma^C_{opt}(Q_{\neg D} \setminus (Y^{BC}_{opt} \cup Y^{AC}_{opt}))).$$
To see the correctness of the above step, note that $Y^A_{opt} = Y^{AC}_{opt} \cup Y^{AD}_{opt}$, and similarly for the other quarters. The time complexity of the above step is bounded by
$$2^{|Q_A \cap Q_{\neg D}| + |Q_{\neg A} \cap Q_D|} \left( 2^{|Q_A \cap Q_D|} + 2^{|Q_{\neg A} \cap Q_{\neg D}|} \right) n^{O(1)} = 2^{|Q_A \cap Q_{\neg D}|} \left( 2^{|Q_D|} + 2^{|Q_{\neg A}|} \right) n^{O(1)} \leq 2^{(3/4 + \varepsilon_3)n} n^{O(1)},$$
and the bound $T_5(n)$ follows.

So far we have a $10^{\varepsilon_1 n} 2^{2\varepsilon_2 n} n^{O(1)}$ overhead. For sufficiently small $\varepsilon_4$ we have $\binom{n}{\varepsilon_4 n} = O(2^{n/16})$, and then for sufficiently small constants $\varepsilon_k$, $k = 1, 2, 3$, we have $10^{\varepsilon_1 n} 2^{2\varepsilon_2 n} n^{O(1)} T_5(n) = O(c^n)$ for some $c < 2$.
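The correctness of optimizing $Y^{AD}_{opt}$ and $Y^{BC}_{opt}$ independently rests on the objective splitting into two sums with no shared variables. The principle can be illustrated on a toy example (the cost tables below are invented):

```python
# When the objective splits into a part depending only on one choice and a
# part depending only on another, the joint minimum equals the sum of the
# two separate minima, so the choices can be optimized independently.
from itertools import product

cost_AD = {0: 7, 1: 3, 2: 9}   # hypothetical costs over choices of Y_AD
cost_BC = {0: 4, 1: 6}         # hypothetical costs over choices of Y_BC

joint = min(cost_AD[x] + cost_BC[y]
            for x, y in product(cost_AD, cost_BC))
separate = min(cost_AD.values()) + min(cost_BC.values())
assert joint == separate == 7
```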
2.9 Numerical values of the constants

Lemma 2.2: $T_1(n) = O^\star((3/4)^{\varepsilon_1 n} 2^n)$
Lemma 2.8: $3^{\varepsilon_1 n} T_2(n) n^{O(1)} = 3^{\varepsilon_1 n} \left( 2^{(1-\varepsilon_2)n} + \binom{n}{(1/2-\varepsilon_2)n} + 2^{\varepsilon_2 n} \binom{(1-\varepsilon_2)n}{n/2} \right) n^{O(1)}$
Lemma 2.10: $10^{\varepsilon_1 n} 2^{2\varepsilon_2 n} T_3(n) n^{O(1)} = 10^{\varepsilon_1 n} 2^{2\varepsilon_2 n} 2^{(1/2-\varepsilon_3)n} \binom{(1/2+\varepsilon_3)n}{n/4} \binom{n}{2\varepsilon_1 n} n^{O(1)}$
Lemma 2.11: $10^{\varepsilon_1 n} 2^{2\varepsilon_2 n} T_4(n) n^{O(1)} = 10^{\varepsilon_1 n} 2^{2\varepsilon_2 n} 2^{(1/2+2\varepsilon_1+2\varepsilon_2+\varepsilon_3)n} \binom{(1/2-2\varepsilon_1-2\varepsilon_2-\varepsilon_3)n}{(1/4-\varepsilon_4)n} \binom{n}{2\varepsilon_1 n} n^{O(1)}$
Lemma 2.12: $10^{\varepsilon_1 n} 2^{2\varepsilon_2 n} T_5(n) n^{O(1)} = 10^{\varepsilon_1 n} 2^{2\varepsilon_2 n} \binom{n}{\varepsilon_4 n}^2 2^{(3/4+\varepsilon_3)n} n^{O(1)}$

Table 1: Summary of running times of all cases of the algorithm.

Table 1 summarizes the running times of all cases of the algorithm. Using the following values of the constants:
$$\varepsilon_1 = 2.677001953125 \cdot 10^{-10}$$
$$\varepsilon_2 = 0.00002724628851234912872314453125$$
$$\varepsilon_3 = 0.007010121770270753069780766963958740234375$$
$$\varepsilon_4 = 0.016526753505895047409353537659626454114913940429688$$
we get that the running time of our algorithm is bounded by $O\left(2^{(1 - 10^{-10})n}\right)$.

3 Conclusion

We presented an algorithm that solves SCHED in $O((2 - \varepsilon)^n)$ time for some small $\varepsilon$. This shows that in some sense SCHED appears to be easier than resolving CNF-SAT formulae, which is conjectured to need $2^n$ time (the so-called Strong Exponential Time Hypothesis). Our algorithm is based on an interesting property of the optimal solution expressed in Lemma 2.6, which can be of independent interest. However, our best efforts to numerically compute an optimal choice of values of the constants $\varepsilon_k$, $k = 1, 2, 3, 4$, lead us to an $\varepsilon$ of the order of $10^{-10}$. Although Lemma 2.6 seems powerful, we lost a lot while applying it. In particular, the worst trade-off seems to happen in Section 2.6, where $\varepsilon_1$ needs to be chosen much smaller than $\varepsilon_2$. The natural question is: can the base of the exponent be significantly improved?
Acknowledgements

We thank Dominik Scheder for very useful discussions on the SCHED problem during his stay in Warsaw. Moreover, we greatly appreciate the detailed comments of anonymous reviewers, especially regarding presentation issues and minor optimizations in our algorithm.

References

[1] Daniel Binkele-Raible, Ljiljana Brankovic, Marek Cygan, Henning Fernau, Joachim Kneis, Dieter Kratsch, Alexander Langer, Mathieu Liedloff, Marcin Pilipczuk, Peter Rossmanith, and Jakub Onufry Wojtaszczyk. Breaking the $2^n$-barrier for irredundance: Two lines of attack. Journal of Discrete Algorithms, 9(3):214–230, 2011.

[2] Andreas Björklund. Determinant sums for undirected hamiltonicity. In 51st Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 173–182. IEEE Computer Society, 2010.

[3] Andreas Björklund, Thore Husfeldt, Petteri Kaski, and Mikko Koivisto. Fourier meets Möbius: fast subset convolution. In 39th Annual ACM Symposium on Theory of Computing (STOC), pages 67–74, 2007.

[4] Andreas Björklund, Thore Husfeldt, and Mikko Koivisto. Set partitioning via inclusion-exclusion. SIAM Journal on Computing, 39(2):546–563, 2009.

[5] Peter Brucker. Scheduling Algorithms. Springer, Heidelberg, 2nd edition, 1998.

[6] Chandra Chekuri and Rajeev Motwani. Precedence constrained scheduling to minimize sum of weighted completion times on a single machine. Discrete Applied Mathematics, 98(1-2):29–38, 1999.

[7] Marek Cygan, Jesper Nederlof, Marcin Pilipczuk, Michal Pilipczuk, Johan M. M. van Rooij, and Jakub Onufry Wojtaszczyk. Solving connectivity problems parameterized by treewidth in single exponential time. In 52nd Annual Symposium on Foundations of Computer Science (FOCS), pages 150–159. IEEE, 2011.

[8] Marek Cygan and Marcin Pilipczuk. Exact and approximate bandwidth. Theoretical Computer Science, 411(40-42):3701–3713, 2010.
[9] Marek Cygan, Marcin Pilipczuk, and Jakub Onufry Wojtaszczyk. Capacitated domination faster than $O(2^n)$. Information Processing Letters, 111:1099–1103, 2011.

[10] Fedor Fomin and Dieter Kratsch. Exact Exponential Algorithms. Springer, 2010.

[11] Fedor V. Fomin, Fabrizio Grandoni, and Dieter Kratsch. A measure & conquer approach for the analysis of exact algorithms. Journal of the ACM, 56(5):1–32, 2009.

[12] Fedor V. Fomin, Ioan Todinca, and Yngve Villanger. Exact algorithm for the maximum induced planar subgraph problem. In 19th Annual European Symposium on Algorithms (ESA), volume 6942 of Lecture Notes in Computer Science, pages 287–298. Springer, 2011.

[13] R. Graham. Bounds for certain multiprocessing anomalies. Bell System Technical Journal, 45:1563–1581, 1966.

[14] N. Hefetz and I. Adiri. An efficient optimal algorithm for the two-machines unit-time jobshop schedule-length problem. Mathematics of Operations Research, 7:354–360, 1982.

[15] Russell Impagliazzo and Ramamohan Paturi. On the complexity of $k$-SAT. Journal of Computer and System Sciences, 62(2):367–375, 2001.

[16] E. L. Lawler. Optimal sequencing of a single machine subject to precedence constraints. Management Science, 19:544–546, 1973.

[17] J. K. Lenstra, A. H. G. Rinnooy Kan, and P. Brucker. Complexity of machine scheduling problems. Annals of Discrete Mathematics, 1:343–362, 1977.

[18] J. K. Lenstra and A. H. G. Rinnooy Kan. Complexity of scheduling under precedence constraints. Operations Research, 26:22–35, 1978.

[19] Daniel Lokshtanov, Daniel Marx, and Saket Saurabh. Known algorithms on graphs of bounded treewidth are probably optimal. In Twenty-Second Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 777–789, 2011.

[20] François Margot, Maurice Queyranne, and Yaoguang Wang.
Decompositions, network flows, and a precedence constrained single-machine scheduling problem. Operations Research, 51(6):981–992, 2003.

[21] Marcin Mucha and Piotr Sankowski. Maximum matchings via Gaussian elimination. In 45th Symposium on Foundations of Computer Science (FOCS), pages 248–255. IEEE Computer Society, 2004.

[22] Mihai Patrascu and Ryan Williams. On the possibility of faster SAT algorithms. In Twenty-First Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1065–1075, 2010.

[23] Johan M. M. van Rooij, Jesper Nederlof, and Thomas C. van Dijk. Inclusion/exclusion meets measure and conquer. In 17th Annual European Symposium on Algorithms (ESA), volume 5757 of Lecture Notes in Computer Science, pages 554–565. Springer, 2009.

[24] Gerhard J. Woeginger. Space and time complexity of exact algorithms: Some open problems (invited talk). In First International Workshop on Parameterized and Exact Computation (IWPEC), volume 3162 of Lecture Notes in Computer Science, pages 281–290. Springer, 2004.

[25] Gerhard J. Woeginger. Open problems around exact algorithms. Discrete Applied Mathematics, 156(3):397–405, 2008.