Deterministic Algorithm for Non-monotone Submodular Maximization under Matroid and Knapsack Constraints

Authors: Shengminjie Chen, Yiwei Gao, Kaifeng Lin, Xiaoming Sun, and Jialin Zhang (alphabetical order; corresponding author: Shengminjie Chen, csmj@ict.ac.cn)
Affiliations: State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing 100049, China
March 17, 2026

Abstract. Submodular maximization constitutes a prominent research topic in combinatorial optimization and theoretical computer science, with extensive applications across diverse domains. While substantial advancements have been achieved in approximation algorithms for submodular maximization, the majority of algorithms yielding high approximation guarantees are randomized. In this work, we investigate deterministic approximation algorithms for maximizing non-monotone submodular functions subject to matroid and knapsack constraints. For the two distinct constraint settings, we propose novel deterministic algorithms grounded in an extended multilinear extension framework. For matroid constraints, our algorithm achieves an approximation ratio of $(0.385 - \varepsilon)$, while for knapsack constraints, the proposed algorithm attains an approximation ratio of $(0.367 - \varepsilon)$. Both algorithms run in $\mathrm{poly}(n)$ value queries, where $n$ is the size of the ground set, and improve upon the state-of-the-art deterministic approximation ratios of $(0.367 - \varepsilon)$ for matroid constraints and $0.25$ for knapsack constraints.
1 Introduction

Submodularity, capturing the principle of diminishing returns, naturally arises in many functions of theoretical interest and provides a powerful abstraction for many subset selection problems such as game theory [20, 21, 19], maximal social welfare [26, 35, 32], influence maximization [36, 16, 31], facility location [2, 1], data summarization [40, 41, 45], etc. Maximizing submodular functions subject to various constraints is NP-hard, and it is a fundamental problem that has been extensively studied in both operations research and theoretical computer science, playing a central role in combinatorial optimization and approximation algorithms. Formally, for a ground set $\mathcal{N}$, a submodular function $f: 2^{\mathcal{N}} \to \mathbb{R}$ satisfies the following property: for any subsets $A \subseteq B \subseteq \mathcal{N}$ and any element $u \in \mathcal{N} \setminus B$,
$$f(A \cup \{u\}) - f(A) \ge f(B \cup \{u\}) - f(B).$$
Equivalently, a function $f$ is submodular if and only if for any sets $S, T \subseteq \mathcal{N}$, $f(S) + f(T) \ge f(S \cup T) + f(S \cap T)$.

The systematic study of submodular maximization dates back to 1978, when Nemhauser et al. [38] abstracted the concept of submodularity from various combinatorial problems and established the $1 - 1/e$ approximation ratio for monotone functions via a greedy algorithm. Since then, numerous combinatorial algorithms have been developed, particularly for cardinality constraints [42, 37, 11], knapsack constraints [44, 34, 3, 25, 33], matroid constraints [27, 10, 30], and the unconstrained case [12, 39]. A significant advancement came with the introduction of the multilinear extension framework by Vondrák [46].
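To make the diminishing-returns property concrete, it can be checked by brute force on a small instance. The sketch below (illustrative only, not part of the paper's algorithms; `coverage` and `is_submodular` are hypothetical helper names) verifies the marginal-gain inequality for a toy coverage function, a standard example of a submodular function.

```python
from itertools import combinations

def coverage(sets):
    """Value oracle for the coverage function f(S) = |union of the covered items|."""
    def f(S):
        covered = set()
        for i in S:
            covered |= sets[i]
        return len(covered)
    return f

def is_submodular(f, ground):
    """Brute-force check of f(A+u) - f(A) >= f(B+u) - f(B) for all A ⊆ B, u ∉ B."""
    subsets = [frozenset(c) for r in range(len(ground) + 1)
               for c in combinations(ground, r)]
    for A in subsets:
        for B in subsets:
            if not A <= B:
                continue
            for u in ground - B:
                if f(A | {u}) - f(A) < f(B | {u}) - f(B) - 1e-9:
                    return False
    return True

sets = {0: {1, 2}, 1: {2, 3}, 2: {3, 4, 5}}
f = coverage(sets)
print(is_submodular(f, set(sets)))  # coverage functions are submodular
```

The check is exponential in the ground set size, so it is only a sanity check for tiny instances.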
This framework extends a discrete submodular function to a continuous multilinear function $F_{ME}: [0,1]^{\mathcal{N}} \to \mathbb{R}$,
$$F_{ME}(\mathbf{x}) = \mathbb{E}[f(R(\mathbf{x}))] = \sum_{S \subseteq \mathcal{N}} f(S) \prod_{i \in S} x_i \prod_{j \notin S} (1 - x_j),$$
where $R(\mathbf{x})$ is a random subset of $\mathcal{N}$ in which each element $i \in \mathcal{N}$ is included independently with probability $x_i$. By applying a Frank-Wolfe-like continuous optimization algorithm, a fractional solution is obtained, which is then converted into a discrete solution via a rounding procedure such as Pipage Rounding [18] or Swap Rounding [13]. This approach achieves an approximation ratio of $1 - 1/e$ for monotone functions under matroid constraints. Crucially, the optimization component of this framework generally requires only that the constraint be a down-closed convex polytope. Given that lossless rounding techniques exist for cardinality and matroid constraints, and that knapsack constraints admit nearly lossless rounding via contention resolution schemes [14], the multilinear extension framework supports a versatile algorithmic design that is not tied to specific constraint types. Subsequent research has continuously advanced the approximation ratio for non-monotone submodular maximization, from the long-standing $1/e \ (\approx 0.367)$ [46] to $(0.372 - \varepsilon)$ [22], $(0.385 - \varepsilon)$ [6], and $(0.401 - \varepsilon)$ [7]. Closing the remaining gap between these results and the theoretical upper bound of $0.478$ [28] remains a significant open challenge in the field. Computing the value of the multilinear extension exactly typically requires exponentially many function evaluations, and random sampling is widely regarded as the only polynomial-query approach for estimating its value. However, such randomness can be undesirable in applications where deterministic outcomes are required, such as safety-critical or reproducibility-sensitive environments.
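The exponential cost mentioned above is visible directly from the definition: evaluating $F_{ME}$ exactly requires a sum over all $2^n$ subsets. The sketch below (illustrative, not from the paper) enumerates this sum for a tiny ground set, using a toy non-monotone submodular function that depends only on cardinality.

```python
from itertools import combinations

def multilinear_extension(f, ground, x):
    """Exact F_ME(x) = sum_S f(S) * prod_{i in S} x_i * prod_{j not in S} (1 - x_j).
    Exponential in |ground|, which is why sampling is used in practice."""
    total = 0.0
    elems = sorted(ground)
    for r in range(len(elems) + 1):
        for S in combinations(elems, r):
            Sset = set(S)
            p = 1.0
            for i in elems:
                p *= x[i] if i in Sset else 1.0 - x[i]
            total += f(Sset) * p
    return total

# Concave function of |S|, hence submodular; decreasing beyond |S| = 2, hence non-monotone.
f = lambda S: len(S) - 0.25 * len(S) ** 2
ground = {0, 1}
x = {0: 0.5, 1: 0.5}
# Equals E[f(R(x))] with |R(x)| ~ Binomial(2, 0.5):
# 0.25*f(0-set) + 0.5*f(1-set) + 0.25*f(2-set) = 0 + 0.375 + 0.25
print(multilinear_extension(f, ground, x))  # → 0.625
```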
Randomized algorithms provide guarantees in expectation but not necessarily in the worst case, making them less predictable in certain scenarios. Designing deterministic algorithms for submodular maximization, which return the same solution on every run and guarantee an approximation ratio in the worst case rather than only in expectation, has emerged as an important and active research direction in recent years.

Broadly speaking, deterministic algorithms for submodular maximization follow two main paradigms: (i) designing algorithms that are deterministic by construction, and (ii) derandomizing existing randomized algorithms while preserving their approximation guarantees. Under the first paradigm, many classical algorithms for monotone submodular maximization are inherently deterministic. Representative examples include greedy-like algorithms, threshold-based methods, and discrete local search algorithms. The standard greedy algorithm achieves a $(1 - 1/e)$-approximation under a cardinality constraint [38] and a $1/2$-approximation under matroid constraints [27], while Sviridenko proposed a $(1 - 1/e)$-approximation under a knapsack constraint [44]. In addition, threshold decreasing algorithms, which iteratively lower a marginal-gain threshold and select elements exceeding the current threshold, provide a deterministic and oracle-efficient alternative to the standard greedy approach, and achieve near-optimal approximation guarantees with significantly improved running time [4]. Buchbinder et al. [10] showed that for monotone submodular maximization under a matroid constraint, the split-and-grow algorithm enables deterministic algorithms to surpass the $1/2$ barrier, achieving a $(1/2 + \varepsilon)$-approximation. For non-monotone objectives, deterministic local search algorithms were systematically studied by Feige et al.
[23], establishing constant-factor guarantees for unconstrained submodular maximization. Beyond these general frameworks, several algorithms have been explicitly designed to achieve strong deterministic guarantees. Notably, the twin greedy algorithm proposed by Han et al. [29] achieves a $1/4$-approximation for non-monotone submodular maximization under matroid constraints, which was later extended to knapsack constraints by Sun et al. [43] while preserving the same approximation ratio. To the best of our knowledge, this remains the strongest known deterministic guarantee for non-monotone submodular maximization under knapsack constraints.

The second paradigm focuses on derandomizing previously randomized algorithms, and has witnessed substantial progress in recent years. Early examples include the random greedy algorithm for cardinality constraints [11] and the double greedy algorithm for unconstrained non-monotone maximization [12], both of which were later systematically derandomized by Buchbinder and Feldman [5]. More recently, Buchbinder and Feldman [8] derandomized the non-oblivious local search algorithm, obtaining a deterministic $(1 - 1/e - \varepsilon)$-approximation for monotone submodular maximization under matroid constraints. In addition, Chen et al. [17] derandomized the measured continuous greedy algorithm [6], achieving approximation ratios of $0.385 - \varepsilon$ under cardinality constraints and $0.305 - \varepsilon$ under matroid constraints. Very recently, Buchbinder et al. [9] introduced the Extended Multilinear Extension (EME), a novel derandomization framework that maintains a distribution with constant-size support and enables exact evaluation without random sampling. Using this framework, they further obtained a deterministic $(1/e - \varepsilon)$-approximation for non-monotone submodular maximization under matroid constraints.
While the Extended Multilinear Extension framework offers a promising avenue for derandomizing submodular maximization algorithms, exploiting this framework in broader settings presents substantial technical challenges. At a high level, the power of EME critically relies on maintaining a distribution whose support size remains constant throughout the algorithm. This structural requirement fundamentally alters the nature of the optimization process: unlike classical multilinear extension methods, the optimization under EME can no longer be carried out via purely continuous procedures. Instead, each update step must carefully combine continuous adjustments with embedded combinatorial subroutines that preserve the bounded-support property. Designing such hybrid steps is nontrivial, as standard continuous optimization techniques are no longer applicable. Moreover, the existing derandomization approach for EME is tightly coupled with the exchange properties of matroids, and the resulting algorithmic framework relies heavily on matroid-specific combinatorial structures. As a consequence, these techniques do not readily extend to other algorithms and constraint families. For more general combinatorial constraints, additional difficulties arise: one must design new rounding procedures that are compatible with the EME framework while simultaneously guaranteeing that the support size remains constant. Balancing these requirements poses a central obstacle to extending EME-based derandomization beyond the matroid setting. These challenges naturally raise the following question:

Can we leverage the EME framework to derandomize continuous optimization algorithms and achieve improved approximation ratios under different combinatorial constraints?

Our Contribution.
In our work, we consider the problem of maximizing a non-monotone submodular function subject to constraints of the form $\max\{f(S) : S \in \mathcal{C}\}$, where $\mathcal{C}$ denotes the constraint of the submodular maximization problem, that is, the collection of feasible sets:

• Matroid constraints. A matroid $\mathcal{M} = (\mathcal{N}, \mathcal{I})$ is a set system, where $\mathcal{I} \subseteq 2^{\mathcal{N}}$ is a family of subsets of the ground set $\mathcal{N}$ satisfying the following properties: (i) $\emptyset \in \mathcal{I}$; (ii) if $A \subseteq B$ and $B \in \mathcal{I}$, then $A \in \mathcal{I}$; (iii) if $A, B \in \mathcal{I}$ with $|A| < |B|$, there exists an element $u \in B \setminus A$ such that $A + u \in \mathcal{I}$. The members of $\mathcal{I}$ are called independent sets, and the maximal independent sets are called bases. All bases of a matroid have the same cardinality, which is referred to as the rank of $\mathcal{M}$. For a subset $S \subseteq \mathcal{N}$ (not necessarily independent), its rank is defined as the size of its maximum independent subset, i.e., $\mathrm{rank}(S) = \max_{T \subseteq S,\, T \in \mathcal{I}} |T|$. The span of a set $S \subseteq \mathcal{N}$ is defined as the set of elements $u \in \mathcal{N}$ such that $\mathrm{rank}(S \cup \{u\}) = \mathrm{rank}(S)$.

• Knapsack constraints. Each element $u \in \mathcal{N}$ is associated with a non-negative cost $w(u)$, and a set $S \subseteq \mathcal{N}$ is feasible if $\sum_{u \in S} w(u) \le B$, where $B > 0$ denotes the knapsack budget. For simplicity, we denote the weight of a set $S$ by $w(S) = \sum_{u \in S} w(u)$.

We first address these challenges by derandomizing the aided continuous greedy algorithm under matroid constraints. Our approach exploits a discrete stationary point to aid the optimization trajectory and performs a piecewise optimization over the EME, thereby maintaining a constant-size support throughout the process. However, extending the convergence analysis to the EME setting is nontrivial. Existing convergence analyses of the aided measured continuous greedy algorithm rely on the Lovász extension as a lower bound of the multilinear extension [6] or on the directional concavity of the multilinear extension of a submodular function [15].
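The two constraint families above can be made concrete with small feasibility oracles. The sketch below (illustrative only; the paper works with arbitrary matroids, while here a partition matroid serves as a simple instance) implements an independence oracle and a knapsack feasibility test.

```python
def partition_matroid_oracle(blocks, capacities):
    """Independence oracle for a partition matroid: S is independent iff it
    contains at most capacities[b] elements from each block b."""
    def is_independent(S):
        return all(len(S & block) <= cap
                   for block, cap in zip(blocks, capacities))
    return is_independent

def knapsack_feasible(S, w, budget):
    """Knapsack constraint: the total weight of S must not exceed the budget."""
    return sum(w[u] for u in S) <= budget

blocks = [{0, 1, 2}, {3, 4}]
indep = partition_matroid_oracle(blocks, [1, 2])
print(indep({0, 3, 4}))   # → True  (one element from the first block, two from the second)
print(indep({0, 1}))      # → False (two elements from a block of capacity 1)

w = {0: 2.0, 1: 3.0, 2: 5.0}
print(knapsack_feasible({0, 1}, w, budget=5.0))     # → True  (total weight 5 <= 5)
print(knapsack_feasible({0, 1, 2}, w, budget=5.0))  # → False (total weight 10 > 5)
```

Note how the partition matroid oracle satisfies the downward-closed property (ii) by construction: removing elements from $S$ can only decrease each block count.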
These properties do not carry over to the EME, which requires us to adopt a new analysis. When combined with the rounding scheme of Buchbinder et al. [9], our framework yields a deterministic $(0.385 - \varepsilon)$-approximation. This strictly improves upon the previously best-known deterministic guarantee of $(1/e - \varepsilon)$ under matroid constraints, which was also achieved in [9]. The formal statement is as follows:

Table 1: Comparison of algorithms for non-monotone submodular maximization under matroid and knapsack constraints. The table includes state-of-the-art results, our deterministic algorithms, and algorithms we derandomize (marked with †).

    Constraint | Approximation Ratio   | Query Complexity                    | Type          | Reference
    Matroid    | $0.385 - \varepsilon$ | $O_\varepsilon(n^{11})$             | Random        | [6]†
    Matroid    | $0.401 - \varepsilon$ | $\mathrm{poly}(n, 1/\varepsilon)$   | Random        | [7]
    Matroid    | $0.305 - \varepsilon$ | $O_\varepsilon(nk)$                 | Deterministic | [17]
    Matroid    | $0.367 - \varepsilon$ | $O_\varepsilon(n^5)$                | Deterministic | [9]
    Matroid    | $0.385 - \varepsilon$ | $O_\varepsilon(n^5)$                | Deterministic | Ours
    Knapsack   | $0.367 - \varepsilon$ | $O_\varepsilon(n\varepsilon^{-2})$  | Random        | [24]†
    Knapsack   | $0.401 - \varepsilon$ | $O_\varepsilon(n\varepsilon^{-2})$  | Random        | [7]
    Knapsack   | $0.25$                | $O(n^4)$                            | Deterministic | [43]
    Knapsack   | $0.367 - \varepsilon$ | $O_\varepsilon(n\varepsilon^{-2})$  | Deterministic | Ours

Theorem 1. Given a non-negative submodular function $f: 2^{\mathcal{N}} \to \mathbb{R}_{\ge 0}$ and a matroid $\mathcal{M} = (\mathcal{N}, \mathcal{I})$, Algorithm 1 is a deterministic algorithm that uses $O_\varepsilon(n^5)$ queries, returns a set $S$ satisfying the matroid constraint $\mathcal{M} = (\mathcal{N}, \mathcal{I})$, and achieves $f(S) \ge (0.385 - O(\varepsilon)) \cdot f(\mathrm{OPT})$.

For knapsack constraints, the lack of matroid exchange properties necessitates a different and more delicate approach. We begin by enumerating a set of $O(1/\varepsilon^2)$ elements, which allows us to assume without loss of generality that all remaining elements are small enough. Under this small-element assumption, we develop an optimization algorithm over the EME that is guided by element densities. This density-based optimization admits a provable approximation guarantee on the EME solution and can be implemented while maintaining a constant-size support.
We then apply a rounding procedure inspired by Pipage Rounding, which repeatedly selects two non-integral elements in the support of the EME and moves along a suitable convex direction to integralize one of them. Unlike the matroid setting, this process may leave at most one element in a fractional state. However, since this element is guaranteed to be small, feasibility can be ensured by executing the optimization step with a knapsack capacity reduced by a factor of $(1 - \varepsilon)$, thereby reserving sufficient slack to accommodate the final rounding. Overall, this framework yields a deterministic $(1/e - \varepsilon)$-approximation for submodular maximization under a knapsack constraint, substantially improving upon the $1/4$ approximation ratio of twin greedy methods.

Theorem 2. Given a non-negative submodular function $f: 2^{\mathcal{N}} \to \mathbb{R}_{\ge 0}$ and a weight function $w: \mathcal{N} \to \mathbb{R}_{\ge 0}$, Algorithm 5 is a deterministic algorithm that uses $O_\varepsilon(n\varepsilon^{-2})$ queries, returns a set $S$ satisfying the knapsack constraint $\sum_{u \in S} w(u) \le B$, and achieves $f(S) \ge (1/e - O(\varepsilon)) \cdot f(\mathrm{OPT})$.

Organization. Our paper is organized as follows: in Section 2, we introduce basic notation and review the definition and fundamental properties of the extended multilinear extension. In Section 3, we present our algorithm for the matroid constraint: Section 3.1 describes the discrete local search procedure, Section 3.2 explains how a local search stationary point guides the Deterministic aided Continuous Greedy algorithm, and Section 3.3 analyzes the performance of the overall algorithm. In Section 4, we present our algorithm for the knapsack constraint: Section 4.1 begins with the enumeration of elements, Section 4.2 introduces a deterministic continuous greedy algorithm for the knapsack constraint, Section 4.3 presents our rounding procedure, and Section 4.4 analyzes the performance of the entire algorithm.
Finally, in Section 5, we summarize our work and discuss possible directions for future research.

2 Preliminaries

In this section, we first introduce some basic definitions and essential lemmas adopted in our algorithm design. Without ambiguity, we denote $n = |\mathcal{N}|$ and use $\mathrm{OPT}$ to represent the optimal feasible solution. We do not explicitly consider how the submodular function $f$ is computed; instead, we assume access to a value oracle that returns $f(S)$ for any set $S \subseteq \mathcal{N}$ in $O(1)$ time. In addition, for problems under matroid constraints, we assume access to a membership (or independence) oracle that determines whether a given set is independent in $O(1)$ time. The total number of calls to the value oracle and the membership oracle made by an algorithm is referred to as its query complexity.

Polytope of Constraints. Algorithms based on continuous optimization tools are typically executed over a feasible region, whose form depends on the underlying constraints and is referred to as the polytope associated with the corresponding constraint. The matroid polytope $P(\mathcal{M}) \subseteq [0,1]^{\mathcal{N}}$ is defined as
$$P(\mathcal{M}) = \mathrm{conv}\{\mathbf{1}_S \mid S \in \mathcal{I}\},$$
where $\mathbf{1}_S \in [0,1]^{\mathcal{N}}$ is the vector such that $(\mathbf{1}_S)_u = 1$ for any $u \in S$ and $(\mathbf{1}_S)_u = 0$ for any $u \in \mathcal{N} \setminus S$. The knapsack polytope is defined as
$$P(B) = \Big\{ \mathbf{x} \in [0,1]^{\mathcal{N}} : \sum_{u \in \mathcal{N}} w(u) \cdot x_u \le B \Big\}.$$

Extended Multilinear Extension. The paper [9] proposed an extension of the multilinear extension to address the issue that the standard multilinear extension cannot yield deterministic algorithms. For a submodular function $f: 2^{\mathcal{N}} \to \mathbb{R}$, its extended multilinear extension $F: [0,1]^{2^{\mathcal{N}}} \to \mathbb{R}$ is defined as:
$$F(\mathbf{y}) = \sum_{\mathcal{J} \subseteq 2^{\mathcal{N}}} \Big[ f\Big( \bigcup_{S \in \mathcal{J}} S \Big) \cdot \prod_{S \in \mathcal{J}} y_S \cdot \prod_{S \notin \mathcal{J}} (1 - y_S) \Big]$$

Below we introduce some notation and properties related to the extended multilinear extension. All of these properties can be found in [9].

Definition 3 (Random set).
For a vector $\mathbf{y} \in [0,1]^{2^{\mathcal{N}}}$, define the random set $R(\mathbf{y}, S)$ by $\Pr[R(\mathbf{y}, S) = S] = y_S$ and $R(\mathbf{y}, S) = \emptyset$ otherwise. Furthermore, define $R(\mathbf{y}) = \bigcup_{S \subseteq \mathcal{N}} R(\mathbf{y}, S)$.

Observation 4. For any $\mathbf{y} \in [0,1]^{2^{\mathcal{N}}}$, $F(\mathbf{y}) = \mathbb{E}[f(R(\mathbf{y}))]$.

Definition 5 (Coordinate-wise probabilistic sum). The coordinate-wise probabilistic sum of $m$ vectors $\mathbf{a}^{(1)}, \mathbf{a}^{(2)}, \ldots, \mathbf{a}^{(m)} \in [0,1]^n$ is a vector $\mathbf{c}$ where each entry $k \in \{1, \ldots, n\}$ is defined as:
$$\Big( \bigoplus_{i=1}^{m} \mathbf{a}^{(i)} \Big)_k = 1 - \prod_{i=1}^{m} \big(1 - a^{(i)}_k\big)$$

Definition 6 (Marginal vector of $\mathbf{y}$). Let $\mathbf{y} \in [0,1]^{2^{\mathcal{N}}}$; then the marginal vector $\mathrm{Mar}(\mathbf{y}) \in [0,1]^{\mathcal{N}}$ is defined by:
$$\mathrm{Mar}_u(\mathbf{y}) \triangleq \Pr[u \in R(\mathbf{y})] = 1 - \prod_{S \subseteq \mathcal{N} :\, u \in S} (1 - y_S) \qquad \forall u \in \mathcal{N}$$

Observation 7. Let $\mathbf{y}_1, \mathbf{y}_2 \in [0,1]^{2^{\mathcal{N}}}$ be vectors such that $\mathbf{y}_1 + \mathbf{y}_2 \le \mathbf{1}$. Then:
$$\mathrm{Mar}(\mathbf{y}_1 \oplus \mathbf{y}_2) = \mathrm{Mar}(\mathbf{y}_1) \oplus \mathrm{Mar}(\mathbf{y}_2)$$

Observation 8. Given a set function $f: 2^{\mathcal{N}} \to \mathbb{R}$ and a vector $\mathbf{y} \in [0,1]^{2^{\mathcal{N}}}$, we define the set function $g_{\mathbf{y}}: 2^{\mathcal{N}} \to \mathbb{R}$ by
$$g_{\mathbf{y}}(A) \triangleq F(\mathbf{e}_A \vee \mathbf{y}) \qquad \forall A \subseteq \mathcal{N}, \tag{1}$$
where $\mathbf{e}_A \in [0,1]^{2^{\mathcal{N}}}$ is the vector such that $(\mathbf{e}_A)_A = 1$ and $(\mathbf{e}_A)_S = 0$ for any $S \subseteq \mathcal{N}$ with $S \neq A$. If $f$ is a non-negative submodular function, then $g_{\mathbf{y}}$ is also a non-negative submodular function.

Observation 9. Let $F$ be the extended multilinear extension of an arbitrary set function $f: 2^{\mathcal{N}} \to \mathbb{R}$. Then, for every vector $\mathbf{y} \in [0,1]^{2^{\mathcal{N}}}$:
$$F(\mathbf{e}_S \vee \mathbf{y}) \ge (1 - \|\mathrm{Mar}(\mathbf{y})\|_\infty) \cdot f(S) \qquad \forall S \subseteq \mathcal{N}$$
$$\frac{\partial^2 F}{(\partial y_S)^2}(\mathbf{y}) = 0 \qquad \forall S \subseteq \mathcal{N}$$
$$\frac{\partial^2 F}{\partial y_S \, \partial y_T}(\mathbf{y}) \le 0 \qquad \forall S, T \subseteq \mathcal{N},\ S \cap T = \emptyset$$
$$(1 - y_S) \cdot \frac{\partial F(\mathbf{y})}{\partial y_S} = F(\mathbf{e}_S \vee \mathbf{y}) - F(\mathbf{y}) \qquad \forall S \subseteq \mathcal{N},\ y_S \in [0,1]$$
$$F(\mathbf{y} \oplus \mathbf{z}) = \sum_{\mathcal{J} \subseteq 2^{\mathcal{N}}} \Big[ F\Big(\mathbf{y} \vee \sum_{S \in \mathcal{J}} \mathbf{e}_S\Big) \cdot \prod_{S \in \mathcal{J}} z_S \cdot \prod_{S \in 2^{\mathcal{N}} \setminus \mathcal{J}} (1 - z_S) \Big] \qquad \forall \mathbf{z} \in [0,1]^{2^{\mathcal{N}}}$$

Lovász Extension. The Lovász extension is another important extension for set functions. However, in this paper it is used only to state auxiliary proof conclusions, so its properties will not be covered extensively.
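The key computational point behind the EME is that the sum ranges only over subsets of the support of $\mathbf{y}$, so exact evaluation costs $2^{|\mathrm{supp}(\mathbf{y})|}$ function queries; a constant-size support therefore gives exact, sample-free evaluation. The sketch below (illustrative names, and a modular $f$ chosen so the answer is easy to verify against the marginal vector) enumerates this sum for a vector given as a dict from support sets to probabilities.

```python
from itertools import combinations

def eme_value(f, y):
    """Exact extended multilinear extension F(y), where y is a dict mapping
    each frozenset in its support to a probability.  Cost is 2^{|support(y)|},
    so a constant-size support permits exact deterministic evaluation."""
    support = list(y)
    total = 0.0
    for r in range(len(support) + 1):
        for J in combinations(support, r):
            p = 1.0
            union = frozenset()
            for S in support:
                if S in J:
                    p *= y[S]
                    union |= S
                else:
                    p *= 1.0 - y[S]
            total += f(union) * p
    return total

f = lambda S: len(S)  # modular f, so F(y) = sum of the marginals
y = {frozenset({0, 1}): 0.5, frozenset({2}): 0.25}
# Mar(y) = (0.5, 0.5, 0.25), and indeed 0.5 + 0.5 + 0.25 = 1.25:
print(eme_value(f, y))  # → 1.25
```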
For a submodular function $f: 2^{\mathcal{N}} \to \mathbb{R}$, its Lovász extension $\hat{f}: [0,1]^{\mathcal{N}} \to \mathbb{R}$ is defined as:
$$\hat{f}(\mathbf{x}) = \int_0^1 f(T_\lambda(\mathbf{x})) \, d\lambda, \quad \text{where } T_\lambda(\mathbf{x}) = \{u \in \mathcal{N} : x_u > \lambda\}.$$

Lemma 10. Given $\mathbf{x} \in [0,1]^{\mathcal{N}}$ and a submodular function $f: 2^{\mathcal{N}} \to \mathbb{R}$, let $\hat{f}: [0,1]^{\mathcal{N}} \to \mathbb{R}$ be its Lovász extension, and let $A(\mathbf{x}) \subseteq \mathcal{N}$ be a random set satisfying $\Pr[u \in A(\mathbf{x})] = x_u$ for all $u \in \mathcal{N}$. Then:
$$\mathbb{E}[f(A(\mathbf{x}))] \ge \hat{f}(\mathbf{x})$$

Proof. Let $\{u_1, u_2, \ldots, u_n\} = \mathcal{N}$ be ordered such that $x_{u_i} \ge x_{u_j}$ for all $i < j$. Let the event $X_i$ indicate $u_i \in A(\mathbf{x})$, and let $A_i = \{u_j \mid j \le i\}$. Then
$$\begin{aligned}
\mathbb{E}[f(A(\mathbf{x}))] &= \mathbb{E}\Big[ f(\emptyset) + \sum_{i=1}^{n} f\big(A(\mathbf{x}) \cap A_i \mid A(\mathbf{x}) \cap A_{i-1}\big) \Big] \\
&= \mathbb{E}\Big[ f(\emptyset) + \sum_{i=1}^{n} X_i \cdot f\big(u_i \mid A_{i-1} \cap A(\mathbf{x})\big) \Big] \\
&\ge \mathbb{E}\Big[ f(\emptyset) + \sum_{i=1}^{n} X_i \cdot f(u_i \mid A_{i-1}) \Big] \\
&= f(\emptyset) + \sum_{i=1}^{n} \mathbb{E}[X_i] \cdot f(u_i \mid A_{i-1}) = f(\emptyset) + \sum_{i=1}^{n} x_{u_i} \cdot f(u_i \mid A_{i-1}) \\
&= f(\emptyset) + \int_0^1 \sum_{i :\, \lambda < x_{u_i}} f(u_i \mid A_{i-1}) \, d\lambda = \hat{f}(\mathbf{x})
\end{aligned}$$
The first equality decomposes the function value into the incremental contribution of each element. The inequality follows from submodularity. The final equality holds because, due to the descending order of the $x_{u_i}$, the inner sum (including the preceding empty-set term) telescopes to $f(T_\lambda(\mathbf{x}))$, thus equaling $\hat{f}(\mathbf{x})$.

3 Matroid Constraints

In this section, we introduce the deterministic aided measured continuous greedy algorithm based on the extended multilinear extension. In [9], the authors provided a rounding method for EME vectors whose support set is sufficiently small. Consequently, utilizing this rounding algorithm, our algorithm only needs to obtain a vector with a sufficiently small support set to provide a solution for maximizing submodular functions.
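Since the level set $T_\lambda(\mathbf{x})$ only changes when $\lambda$ crosses a coordinate of $\mathbf{x}$, the integral defining $\hat{f}$ is a finite sum and can be computed exactly. The sketch below (illustrative, not part of the paper's algorithms) evaluates it at the breakpoints of $\mathbf{x}$.

```python
def lovasz_extension(f, x):
    """Lovász extension f_hat(x) = ∫_0^1 f({u : x_u > λ}) dλ, computed exactly:
    the level set T_λ(x) is constant between consecutive distinct values of x,
    so the integral reduces to a weighted sum over those intervals."""
    breaks = sorted(set(list(x.values()) + [0.0, 1.0]))
    total = 0.0
    for lo, hi in zip(breaks, breaks[1:]):
        mid = (lo + hi) / 2.0                      # any λ inside (lo, hi) works
        level_set = {u for u, v in x.items() if v > mid}
        total += (hi - lo) * f(level_set)
    return total

f = lambda S: min(len(S), 2)   # concave in |S|, hence submodular
x = {0: 1.0, 1: 0.5, 2: 0.5}
# T_λ = {0,1,2} for λ in (0, 0.5) and {0} for λ in (0.5, 1): 0.5*2 + 0.5*1
print(lovasz_extension(f, x))  # → 1.5
```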
Additionally, we introduce a dummy element set $D$, whose elements only occupy the constraint without changing the function value, and the definition of a discrete stationary point; both are essential bridges in our theoretical analysis.

Deterministic-Pipage. In [9], the authors provide a deterministic pipage rounding procedure that converts an EME vector with sufficiently small support into a feasible set in the matroid. Therefore, our algorithm only needs to compute a vector whose support size is sufficiently small in order to obtain a solution for the submodular maximization problem.

Theorem 11 (Deterministic-Pipage, Theorem 5.6 in [9]). For a vector $\mathbf{y} \in [0,1]^{2^{\mathcal{N}}}$, if $\mathrm{Mar}(\mathbf{y}) \in P(\mathcal{M})$, there exists an algorithm that, within $O(n^2 \cdot 2^{\mathrm{frac}(\mathbf{y})})$ value queries and $O(n^5 \log^2 n)$ independence queries, returns a set $S \in \mathcal{I}$ satisfying $f(S) \ge F(\mathbf{y})$.

Dummy elements. We can make the subsequent analysis more convenient by adding at most $n$ dummy elements to the original ground set $\mathcal{N}$. Specifically, let $\bar{\mathcal{N}} = \{\bar{u}_1, \ldots, \bar{u}_n\}$, $\bar{\mathcal{I}} = \{S \subseteq \mathcal{N} \cup \bar{\mathcal{N}} \mid S \cap \mathcal{N} \in \mathcal{I},\ |S| \le \mathrm{rank}(\mathcal{M})\}$, and $\bar{f}: 2^{\mathcal{N} \cup \bar{\mathcal{N}}} \to \mathbb{R}$, $\bar{f}(S) = f(S \setminus \bar{\mathcal{N}})$. A solution to the new problem $\big((\mathcal{N} \cup \bar{\mathcal{N}}, \bar{\mathcal{I}}), \bar{f}\big)$ is consistent with a solution to the original problem. Additionally, we can always add dummy elements to a set $S \in \mathcal{I}$ to make $S$ a basis of the matroid $\mathcal{M}$ without changing its function value. The computed result will inevitably contain some dummy elements; we simply remove them from the solution set to obtain a solution to the original problem with the same function value. In the subsequent analysis, we will assume by default that the problem includes a sufficient number of dummy elements.

Lemma 12. $\bar{\mathcal{M}} = (\mathcal{N} \cup \bar{\mathcal{N}}, \bar{\mathcal{I}})$ is a matroid, and $\bar{f}$ is a submodular function.

Proof. For any $A \subseteq B \subseteq \mathcal{N} \cup \bar{\mathcal{N}}$ and $u \in (\mathcal{N} \cup \bar{\mathcal{N}}) \setminus B$: if $u \in \bar{\mathcal{N}}$, then $\bar{f}(A + u) - \bar{f}(A) = \bar{f}(B + u) - \bar{f}(B) = 0$.
Otherwise,
$$\bar{f}(A + u) - \bar{f}(A) = f\big((A \setminus \bar{\mathcal{N}}) + u\big) - f(A \setminus \bar{\mathcal{N}}) \ge f\big((B \setminus \bar{\mathcal{N}}) + u\big) - f(B \setminus \bar{\mathcal{N}}) = \bar{f}(B + u) - \bar{f}(B).$$
Therefore, $\bar{f}$ is submodular. To prove that $\bar{\mathcal{M}}$ is a matroid, we only need to prove that $\bar{\mathcal{I}}$ satisfies the exchange property. Consider any $A, B \in \bar{\mathcal{I}}$ with $|A| < |B|$. If $|A \cap \bar{\mathcal{N}}| < |B \cap \bar{\mathcal{N}}|$, then there exists a dummy element $u \in (B \cap \bar{\mathcal{N}}) \setminus A$, so $A + u \in \bar{\mathcal{I}}$. Otherwise, consider $|A \setminus \bar{\mathcal{N}}| = |A| - |A \cap \bar{\mathcal{N}}| < |B| - |B \cap \bar{\mathcal{N}}| = |B \setminus \bar{\mathcal{N}}|$. By the definition of $\bar{\mathcal{I}}$, $A \in \bar{\mathcal{I}}$ implies $A \setminus \bar{\mathcal{N}} \in \mathcal{I}$, and likewise $B \setminus \bar{\mathcal{N}} \in \mathcal{I}$. By the matroid exchange property, there exists $u \in (B \setminus \bar{\mathcal{N}}) \setminus A \subseteq B$ such that $(A \setminus \bar{\mathcal{N}}) + u \in \mathcal{I}$, implying $A + u \in \bar{\mathcal{I}}$.

The main idea of the algorithm on the extended multilinear extension is similar to that on the multilinear extension. First, the main algorithm finds a reference solution $Z$ through the local search algorithm. Then, using the information from $Z$, the main algorithm performs continuous greedy in a direction orthogonal to $Z$ before time $t_s$ and performs continuous greedy in all directions after time $t_s$, obtaining a fractional solution $\mathbf{y}$ negatively correlated with $f(Z)$. Finally, the better of Deterministic-Pipage($\mathbf{y}$) and $Z$ is a $(0.385 - O(\varepsilon))$-approximate solution. The formal statement of the deterministic $(0.385 - O(\varepsilon))$-approximation algorithm for non-monotone submodular maximization subject to matroid constraints is as follows:

Algorithm 1: MAIN($\mathcal{M} = (\mathcal{N}, \mathcal{I})$, $f$, $t_s \in [0,1]$, $\varepsilon \in (0,1)$)
1: $Z \leftarrow$ LocalSearch($\mathcal{M}$, $f$, $\varepsilon$)
2: $\mathbf{y}_1 \leftarrow$ AidedContinuousGreedy($\mathcal{M}$, $f$, $Z$, $t_s$, $\varepsilon$)
3: return $\arg\max_{Y \in \{\text{Deterministic-Pipage}(\mathbf{y}_1),\, Z\}} f(Y)$

3.1 Discrete Local Search

In this subsection, we introduce a discrete local search algorithm to obtain the reference solution $Z$.
Although this algorithm originates from Algorithm 3 in [8], we restate and reprove it so that it can be adapted to the non-monotone submodular maximization problem we study. The algorithm calls a deterministic combinatorial algorithm to obtain a constant-factor approximation of $f(\mathrm{OPT})$ as its initial solution and continuously swaps elements so that the current solution remains a basis of the matroid $\mathcal{M} = (\mathcal{N}, \mathcal{I})$ while the objective function value increases. Essentially, the solution returned by Algorithm 2 is a discrete stationary point. The formal statement is as follows:

Algorithm 2: LocalSearch($\mathcal{M} = (\mathcal{N}, \mathcal{I})$, $f$, $\varepsilon \in (0,1)$)
1: Use the algorithm in Lemma 13 to find $S_0$ such that $f(S_0) \ge 0.305 \cdot f(\mathrm{OPT})$; set $S \leftarrow S_0$
2: Add elements until $S$ becomes a basis of $\mathcal{M}$
3: while there exist $u \in S$, $v \in \mathcal{N} \setminus S$ such that $S - u + v \in \mathcal{I}$ and $f(v \mid S) - f(u \mid S - u) \ge \frac{\varepsilon}{r} \cdot f(S_0)$ do
4:     $S \leftarrow S + v - u$
5: return $S$

Lemma 13 (Theorem 2.4 in [17]). For a non-monotone submodular maximization problem $(\mathcal{M}, f)$, there exists a deterministic algorithm that returns a $(0.305 - \varepsilon)$-approximate solution with at most $O_\varepsilon(nr)$ value and independence queries.

Discrete Stationary Points. Intuitively, we call a solution stationary if no local exchange can significantly improve the objective value, meaning the algorithm has reached a stable state with respect to 1-exchange moves. We formalize the notion of a discrete stationary point under 1-exchange. Let $\mathcal{M} = (\mathcal{N}, \mathcal{I})$ be a matroid of rank $r$ and $f: 2^{\mathcal{N}} \to \mathbb{R}$ be submodular. A basis $S \in \mathcal{I}$ is called an $\alpha$-approximate discrete stationary point if for every pair $u \in S$ and $v \in \mathcal{N} \setminus S$ such that $S - u + v \in \mathcal{I}$,
$$f(v \mid S) - f(u \mid S - u) \le \alpha.$$
When $\alpha = 0$, this coincides with the standard notion of a 1-exchange local optimum.
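The 1-exchange loop can be sketched on a toy partition matroid with a modular objective. This is illustrative only (the paper's Algorithm 2 works with a general matroid and a stronger initial solution): the sketch tests the swap's direct gain $f(S+v-u) - f(S)$, which by the submodularity argument used later in the analysis is at least the marginal-gain quantity $f(v \mid S) - f(u \mid S-u)$ appearing in the pseudocode, so termination still yields an approximate stationary point.

```python
def local_search_1exchange(f, ground, is_independent, S0, eps, r):
    """Sketch in the spirit of Algorithm 2: repeatedly perform a 1-exchange
    u -> v that keeps S independent and improves f(S) by at least
    (eps / r) * f(S0); stop when no such swap exists."""
    S = set(S0)
    threshold = (eps / r) * f(S0)
    improved = True
    while improved:
        improved = False
        for u in sorted(S):
            for v in sorted(ground - S):
                T = (S - {u}) | {v}
                if is_independent(T) and f(T) - f(S) >= threshold:
                    S, improved = T, True
                    break
            if improved:
                break
    return S

# Toy instance: partition matroid (one pick per block), modular f of weights.
blocks = [{0, 1}, {2, 3}]
indep = lambda S: all(len(S & b) <= 1 for b in blocks)
weight = {0: 1, 1: 5, 2: 1, 3: 3}
f = lambda S: sum(weight[u] for u in S)     # modular, hence submodular
print(local_search_1exchange(f, {0, 1, 2, 3}, indep, {0, 2}, 0.1, 2))  # → {1, 3}
```

Starting from the basis $\{0, 2\}$, the sketch swaps $0 \to 1$ and then $2 \to 3$, reaching the local (here also global) optimum $\{1, 3\}$.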
Algorithm 2 terminates exactly when no feasible exchange improves the objective by more than $\frac{\varepsilon}{r} \cdot f(S_0)$; hence the returned solution is an $\alpha$-approximate discrete stationary point with $\alpha = \frac{\varepsilon}{r} \cdot f(S_0)$. For notational simplicity, throughout the sequel we refer to such $\alpha$-approximate discrete stationary points simply as stationary points, as the additive error will be explicitly accounted for in the analysis. Next, we state the theoretical guarantee for Algorithm 2:

Theorem 14. Algorithm 2 terminates within $O(\varepsilon^{-1} nr)$ value queries and $O(\varepsilon^{-1} nr^2)$ independence queries, and the returned set $S$ satisfies $S \in \mathcal{I}$ and, for any $T \in \mathcal{I}$ and $\varepsilon \in (0,1)$,
$$f(S) \ge \frac{1}{2} \big( f(S \cap T) + f(S \cup T) \big) - \varepsilon \cdot f(\mathrm{OPT}) \tag{2}$$

Proof. Due to dummy elements, we can assume $|S| = |T| = r$, both being bases of $\mathcal{M}$. Let $\{u_1, \ldots, u_k\}$ and $\{v_1, \ldots, v_k\}$ denote the elements in $S \setminus T$ and $T \setminus S$, respectively. By the matroid exchange property, we can find a bijection $h: S \setminus T \to T \setminus S$ such that $S - u_i + h(u_i) \in \mathcal{I}$ for each $u_i$. Then
$$\begin{aligned}
f(S \cup T) + f(S \cap T) - 2f(S) &= \big(f(S \cup T) - f(S)\big) - \big(f(S) - f(S \cap T)\big) \\
&\le \Big( \sum_{i=1}^{k} f(v_i \mid S) \Big) - \Big( \sum_{i=1}^{k} f(u_i \mid S - u_i) \Big) \quad \text{(by submodularity)} \\
&= \sum_{i=1}^{k} \big[ f(h(u_i) \mid S) - f(u_i \mid S - u_i) \big] \\
&\le k \cdot \frac{\varepsilon}{r} \cdot f(S_0) \le \varepsilon \cdot f(\mathrm{OPT}) \quad \text{(by the loop condition)}
\end{aligned}$$
Consequently, the solution returned by Algorithm 2 satisfies inequality (2). In each iteration, we have:
$$\begin{aligned}
f(S + v - u) - f(S) &= \big(f(S + v - u) - f(S - u)\big) + \big(f(S - u) - f(S)\big) \\
&\ge \big(f(S + v) - f(S)\big) + \big(f(S - u) - f(S)\big) \quad \text{(by submodularity)} \\
&= f(v \mid S) - f(u \mid S - u) \ge \frac{\varepsilon}{r} \cdot f(S_0) > \frac{\varepsilon}{4r} \cdot f(\mathrm{OPT})
\end{aligned}$$
The loop condition of the algorithm ensures the penultimate inequality, and the last step uses $f(S_0) \ge 0.305 \cdot f(\mathrm{OPT}) > \frac{1}{4} f(\mathrm{OPT})$.
Since the total improvement is bounded by $f(S) - f(S_0) \le f(\mathrm{OPT}) - f(S_0) \le f(\mathrm{OPT})$ while each iteration gains more than $\frac{\varepsilon}{4r} \cdot f(\mathrm{OPT})$, the loop must terminate after at most $\frac{4r}{\varepsilon}$ iterations; otherwise, it would contradict $f(S) - f(S_0) \le f(\mathrm{OPT})$. In each iteration, the algorithm requires $O(n)$ value oracle queries and $O(nr)$ independence oracle queries. In total, Algorithm 2 needs $O(\varepsilon^{-1} nr)$ value oracle queries and $O(\varepsilon^{-1} nr^2)$ independence oracle queries.

3.2 Deterministic Aided Continuous Greedy

In this subsection, we introduce the aided continuous greedy algorithm, which leverages the discrete stationary point returned by the local search algorithm to guide the forward direction of the continuous greedy algorithm. First, we introduce a combinatorial algorithm for obtaining a forward direction. The continuous greedy process then proceeds by taking a small step along this direction. By construction, the direction obtained at each step yields a sufficiently large marginal gain, and thus the algorithm advances by updating the current solution by a small constant multiple of this vector.

Split Algorithm. The split algorithm serves to extract a forward direction with the largest marginal gain at the current point, under the constraint that the support size remains bounded. This constraint is crucial from a computational perspective: evaluating the extended multilinear extension incurs a cost that scales with the size of its support. The split algorithm returns a partition of an independent set. As the number of parts is bounded, the size of the support set of the forward direction is also bounded by $\ell$. Although the theoretical guarantee for Algorithm 3 was proposed in [9], for completeness we restate the proof based on the Lovász extension, which differs in some subtle points from [9].

Algorithm 3: Split($\mathcal{M} = (\mathcal{N}, \mathcal{I})$, $f$, $\ell$)
1: Initialize $T_1 \leftarrow \emptyset$, $T_2 \leftarrow \emptyset$, $\ldots$, $T_\ell \leftarrow \emptyset$
2: Let $T \leftarrow \bigcup_{i=1}^{\ell} T_i$
3: while $T$ is not a basis of $\mathcal{M}$ do
4:     $\mathcal{N}' \leftarrow \{u \in \mathcal{N} \setminus T \mid T + u \in \mathcal{I}\}$
5:     $(u, j) \leftarrow \arg\max_{(u, j) \in \mathcal{N}' \times [\ell]} f(u \mid T_j)$
6:     $T_j \leftarrow T_j + u$
7: return $(T_1, T_2, \ldots, T_\ell)$

Lemma 15 (Lemma 4.2 in [9]). Given a matroid $\mathcal{M}$ on the ground set $\mathcal{N}$, a submodular function $f$, and an integer $\ell$ as input, Algorithm 3 outputs disjoint sets $(T_1, T_2, \ldots, T_\ell)$ whose union is a basis of $\mathcal{M}$, satisfying, for any set $O \subseteq \mathcal{N}$:
$$\sum_{j=1}^{\ell} f(T_j \mid \emptyset) \ge \Big(1 - \frac{1}{\ell}\Big) \cdot f(O) - \frac{1}{\ell} \sum_{j \in [\ell]} f(T_j) \tag{3}$$

Proof. Let $T = \bigcup_{j=1}^{\ell} T_j$. Due to dummy elements, we can assume $|T| = |O| = r$. By the exchange property of matroids, there exists a bijection $h: O \to T$ such that $(T - h(u)) + u \in \mathcal{I}$ for any $u \in O \setminus T$; furthermore, $h(u) = u$ for $u \in O \cap T$. Let $u_i$ be the $i$-th element added to $T$ in Algorithm 3. Define $T^i_j = T_j \setminus \{u_k \mid k > i\}$, the content of the $j$-th set after step $i$, and let $j_i$ denote the index of the set to which $u_i$ belongs, i.e., $u_i \in T_{j_i}$. According to the algorithm, we have:
$$f(u_i \mid T^{i-1}_{j_i}) \ge \frac{1}{\ell} \sum_{j=1}^{\ell} f\big(h^{-1}(u_i) \mid T^{i-1}_j\big) \ge \frac{1}{\ell} \sum_{j=1}^{\ell} f\big(h^{-1}(u_i) \mid T_j\big)$$
The first inequality holds because the exchange property guarantees $(T - u_i) + h^{-1}(u_i) \in \mathcal{I}$, so the greedy strategy of the algorithm ensures $f(u_i \mid T^{i-1}_{j_i}) \ge f(h^{-1}(u_i) \mid T^{i-1}_j)$ for any $j$ (otherwise $h^{-1}(u_i)$ would have been added to some $T_j$ at this step). The second inequality follows from the submodularity of $f$. Summing over all $i$ gives:
$$\begin{aligned}
\sum_{j=1}^{\ell} f(T_j \mid \emptyset) = \sum_{i=1}^{r} f(u_i \mid T^{i-1}_{j_i}) &\ge \frac{1}{\ell} \sum_{j=1}^{\ell} \sum_{i=1}^{r} f\big(h^{-1}(u_i) \mid T_j\big) \\
&\ge \frac{1}{\ell} \sum_{j=1}^{\ell} f\Big( \bigcup_{i=1}^{r} \{h^{-1}(u_i)\} \,\Big|\, T_j \Big) \quad \text{(by submodularity)} \\
&= \frac{1}{\ell} \sum_{j=1}^{\ell} \big( f(T_j \cup O) - f(T_j) \big)
\end{aligned}$$
Let $A$ be a random set where, for each $1 \le j \le \ell$, $\Pr[A = T_j \cup O] = \frac{1}{\ell}$. Then $\frac{1}{\ell} \sum_{j=1}^{\ell} f(T_j \cup O) = \mathbb{E}[f(A)]$. Let $p_u = \Pr[u \in A]$.
Since the $T_j$ are disjoint, $p_u \le \frac{1}{\ell}$ for all $u \in \mathcal{N} \setminus O$, and $p_u = 1$ for all $u \in O$. By Lemma 10,
\[
\mathbb{E}[f(A)] \ge \hat{f}(p) \ge \int_{1/\ell}^{1} f(T_\lambda(p))\, d\lambda \ge \Big(1 - \frac{1}{\ell}\Big) f(O).
\]
The first inequality is from Lemma 10, the second from the definition of the Lovász extension, and the third follows from the property of $p$: only elements in $O$ have probability greater than $\frac{1}{\ell}$ of being in the random set. Combining these inequalities yields (3).

Corollary 16 specializes Lemma 15 to the regime $\ell = 1/\varepsilon$ and rewrites the inequality in terms of the total marginal gain with respect to the empty set. As we will show later, this form allows us to establish a lower bound on the improvement obtained by the forward direction induced by the split algorithm. Essentially, Corollary 16 provides a quantitative guarantee on the quality of the direction induced by the split algorithm. When the function $f$ is instantiated as the objective evaluated at the current fractional solution, the left-hand side represents the total marginal gain contributed by the components of the forward direction. The inequality then implies that this direction achieves a sufficiently large increase relative to any independent set $O$, and therefore constitutes a sufficiently good forward direction for the continuous greedy process.

Corollary 16. In Algorithm 3, for any $O \in \mathcal{I}$, if $\ell \ge \frac{1}{\varepsilon}$, then:
\[
\sum_{j=1}^{\ell} f(T_j \mid \emptyset) \ge (1 - 2\varepsilon) \cdot f(O) - (1 - \varepsilon) \cdot f(\emptyset). \tag{4}
\]

Proof.
\begin{align*}
&\sum_{j=1}^{\ell} \big(f(T_j) - f(\emptyset)\big) \ge \Big(1 - \frac{1}{\ell}\Big) f(O) - \frac{1}{\ell} \sum_{j=1}^{\ell} f(T_j) \\
\Updownarrow\ & \Big(1 + \frac{1}{\ell}\Big) \sum_{j=1}^{\ell} \big(f(T_j) - f(\emptyset)\big) \ge \Big(1 - \frac{1}{\ell}\Big) f(O) - f(\emptyset) \\
\Updownarrow\ & \sum_{j=1}^{\ell} \big(f(T_j) - f(\emptyset)\big) \ge \frac{1 - \frac{1}{\ell}}{1 + \frac{1}{\ell}} f(O) - \frac{1}{1 + \frac{1}{\ell}} f(\emptyset) \ge (1 - 2\varepsilon) f(O) - (1 - \varepsilon) f(\emptyset).
\end{align*}
The last inequality uses $\frac{1-\varepsilon}{1+\varepsilon} \ge 1 - 2\varepsilon + \varepsilon^2$ and $\frac{1}{1+\varepsilon} \ge 1 - \varepsilon - \varepsilon^2$ for $\varepsilon \in (0, 1)$, and assumes $f(O) \ge f(\emptyset)$ for a simpler coefficient form. If $f(O) < f(\emptyset)$, the right-hand side is negative.
However, dummy elements ensure $f(T_j) \ge f(\emptyset)$, since $f(T_j)$ is non-decreasing when elements are added.

Continuous Greedy Algorithm. As illustrated in [15], a stationary point for non-monotone submodular maximization can be arbitrarily bad, which implies that a solution far away from a bad stationary point has the potential to be a good solution. Leveraging this insight, we propose a deterministic continuous greedy algorithm that incorporates the information of the discrete stationary point through an auxiliary function. In Algorithm 4, we adopt the parameter $t_s$ to control the distance to the stationary point. Before time $t_s$, the feasible ascent direction is always orthogonal to the stationary point, i.e., the algorithm always moves away from the stationary point. After $t_s$, we release this limitation on the ascent direction, i.e., the algorithm may move in any feasible direction. Additionally, the concrete direction in each iteration is returned by Algorithm 3 applied to the auxiliary function, which simultaneously limits the number of coordinates involved. The formal statement is as follows:

Algorithm 4: ContinuousGreedy($\mathcal{M} = (\mathcal{N}, \mathcal{I})$, $f$, $Z \subseteq \mathcal{N}$, $t_s \in [0, 1]$, $\varepsilon \in (0, 1)$)
1 $\delta \leftarrow \varepsilon^3$, $y_0 \leftarrow 0$;
2 for $i = 1$ to $1/\delta$ do
3   $Z_i \leftarrow Z$ if $i \in [1, t_s/\delta]$, and $Z_i \leftarrow \emptyset$ if $i \in (t_s/\delta, 1/\delta]$;
4   Define $f_{-Z}(S) \triangleq f(S) - \sum_{u \in Z \cap S} \big(f(\{u\}) - f(\emptyset) + 1\big)$;
5   Define $g_{-Z}(S) \triangleq \mathbb{E}\big[f_{-Z}\big(R(e_S \vee y_{i-1})\big)\big]$;
6   $(T_1, \dots, T_{1/\varepsilon}) \leftarrow$ Split($\mathcal{M}$, $g_{-Z_i}$, $1/\varepsilon$);
7   $x_i \leftarrow \delta \cdot \sum_{j=1}^{1/\varepsilon} e_{T_j}$;
8   $y_i \leftarrow y_{i-1} \oplus x_i$;
9 return $y_{1/\delta}$;

We briefly explain the role of the modified set functions $f_{-Z}$ and $g_{-Z}$ used in Algorithm 4. The purpose of these functions is purely technical: they ensure that elements in $Z$ are never selected by the split procedure, while preserving the relative marginal values of all other elements.
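As a small illustration of this penalization (our own sketch, not the paper's implementation; the helper `coverage` and the instance below are hypothetical stand-ins for $f$):

```python
def coverage(sets, S):
    # f(S) = size of the union of the chosen subsets (submodular)
    covered = set()
    for i in S:
        covered |= sets[i]
    return len(covered)

def make_f_minus_Z(f, Z):
    """Build f_{-Z}: every selected u in Z is charged f({u}) - f(empty) + 1,
    which by submodularity forces its marginal gain to be at most -1, while
    the marginals of elements outside Z are left untouched."""
    def f_minus_Z(S):
        return f(S) - sum(f({u}) - f(set()) + 1 for u in Z & S)
    return f_minus_Z
```

Any greedy rule applied to such a penalized function therefore skips the elements of $Z$ whenever a non-negative-marginal alternative (e.g., a dummy element) is available.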
Specifically, for any $u \in Z$, the definition of $f_{-Z}$ enforces a strictly negative marginal contribution for adding $u$ to any set, and hence $u$ is dominated by every element outside $Z$ in the greedy selection. For elements $v \notin Z$, the marginal gains $f_{-Z}(v \mid S)$ coincide with $f(v \mid S)$ for all $S \subseteq \mathcal{N}$, so the behavior of the objective function on $\mathcal{N} \setminus Z$ remains unchanged. The function $g_{-Z}$ is the corresponding extended multilinear extension evaluated at the current fractional point. Importantly, the support of $f_{-Z}$ and $g_{-Z}$ differs from that of $f$ only by the elements in $Z$, and evaluating $g_{-Z}$ incurs the same asymptotic computational cost as evaluating the original function, up to a constant-factor overhead.

Lemma 17. The set functions $f_{-Z}(S)$ and $g_{-Z}(S)$ defined in Algorithm 4 are submodular. Note that we do not require these functions to be non-negative.

Proof. For any $A \subseteq B \subseteq \mathcal{N}$ and $u \in \mathcal{N} \setminus B$:
\begin{align*}
f_{-Z}(u \mid B) &= f(u \mid B) - [u \in Z]\big(f(\{u\}) - f(\emptyset) + 1\big) \\
&\le f(u \mid A) - [u \in Z]\big(f(\{u\}) - f(\emptyset) + 1\big) = f_{-Z}(u \mid A).
\end{align*}
Thus $f_{-Z}$ is submodular. Next,
\begin{align*}
g_{-Z}(u \mid B) &= \mathbb{E}\big[f_{-Z}(R(e_{B+u} \vee y_{i-1}))\big] - \mathbb{E}\big[f_{-Z}(R(e_B \vee y_{i-1}))\big] \\
&= \mathbb{E}\big[f_{-Z}(R(y_{i-1}) \cup (B + u)) - f_{-Z}(R(y_{i-1}) \cup B)\big] \\
&\le \mathbb{E}\big[f_{-Z}(R(y_{i-1}) \cup (A + u)) - f_{-Z}(R(y_{i-1}) \cup A)\big] && \text{(submodularity)} \\
&= g_{-Z}(u \mid A).
\end{align*}
The second equality treats both occurrences of $R(y_{i-1})$ as the same random variable; the inequality uses the submodularity of $f_{-Z}$. Thus $g_{-Z}$ is submodular.

Lemma 18. For any $u \in Z$, the function $g_{-Z}(S)$ defined in Algorithm 4 satisfies $g_{-Z}(S) > g_{-Z}(S + u)$ for any set $S \subseteq \mathcal{N} - u$.

Proof. First, we claim: for any $T \subseteq \mathcal{N} - u$, $f_{-Z}(T) > f_{-Z}(T + u)$.
This follows from the definition of $f_{-Z}$ and submodularity:
\begin{align*}
f_{-Z}(T + u) - f_{-Z}(T) &= \big(f(T + u) - f(T)\big) - \big(f(\{u\}) - f(\emptyset) + 1\big) \\
&\le \big(f(\{u\}) - f(\emptyset)\big) - \big(f(\{u\}) - f(\emptyset) + 1\big) = -1.
\end{align*}
Now, note that $R(e_S \vee y) = R(y) \cup S$, because elements in $S$ are always included in $R(e_S \vee y)$, and the values at indices corresponding to $S$ do not affect the distribution of the other elements of $R(y)$. Therefore,
\begin{align*}
g_{-Z}(S) = \mathbb{E}[f_{-Z}(R(y) \cup S)] &= \sum_{T \subseteq \mathcal{N}} \Pr[R(y) = T] \cdot f_{-Z}(T \cup S) \\
&> \sum_{T \subseteq \mathcal{N}} \Pr[R(y) = T] \cdot f_{-Z}(T \cup (S + u)) && \text{(by the claim above)} \\
&= \mathbb{E}[f_{-Z}(R(y) \cup (S + u))] = g_{-Z}(S + u).
\end{align*}

Due to the negative incremental gain of elements $u \in Z$, and the fact that dummy elements have zero incremental gain and are sufficiently numerous, such $u$ will never be selected by Split (Algorithm 3) into any set.

Corollary 19. In the $i$-th iteration of Algorithm 4, for any $j \in [1/\varepsilon]$, $T_j \cap Z_i = \emptyset$.

Next, we establish several structural properties of the sequence $\{y_i\}_{i=0}^{1/\delta}$ generated by Algorithm 4. First, the solution constructed by the algorithm is feasible; in particular, the final marginal vector lies in the matroid polytope.

Lemma 20. $\mathrm{Mar}(y_{1/\delta}) \in P(\mathcal{M})$.

Proof. Since the sets returned by Split form a basis of $\mathcal{M}$, $\frac{1}{\delta} \mathrm{Mar}(x_i) \in P(\mathcal{M})$. Therefore, their convex combination satisfies $\sum_{i \in [1/\delta]} x_i \in P(\mathcal{M})$. Thus,
\[
\mathrm{Mar}(y_{1/\delta}) = \mathrm{Mar}\Big(\bigoplus_{i \in [1/\delta]} x_i\Big) = \bigoplus_{i \in [1/\delta]} \mathrm{Mar}(x_i).
\]
For any $u \in \mathcal{N}$, $\mathrm{Mar}(y_{1/\delta})_u = \bigoplus_{i \in [1/\delta]} \mathrm{Mar}(x_i)_u \le \big(\sum_{i \in [1/\delta]} x_i\big)_u$. By the downward-closed property of $P(\mathcal{M})$, it follows that $\mathrm{Mar}(y_{1/\delta}) \in P(\mathcal{M})$.

Additionally, we show that the support of $y_i$ grows in a controlled manner, that all marginal values remain bounded, and that the contribution of elements in $Z$ is appropriately delayed.
These bounds allow us to interpret the update rule as a valid continuous greedy process, and they will be repeatedly invoked in the proof of the main approximation result.

Lemma 21. For any integer $0 \le i \le 1/\delta$, $\mathrm{supp}(y_i) \le (1/\varepsilon) \cdot i \le 1/\varepsilon^4$, $\|\mathrm{Mar}(y_i)\|_\infty \le 1 - (1 - \delta)^i$, and $\|\mathrm{Mar}(y_i) \wedge \mathbf{1}_Z\|_\infty \le 1 - (1 - \delta)^{\max\{0,\, i - t_s/\delta\}}$.

Proof. Since each update from $y_i$ to $y_{i+1}$ affects only $1/\varepsilon$ positions, the number of non-zero positions in $y_i$ is at most $(1/\varepsilon) \cdot i$. Additionally, the sets $(T_1, T_2, \dots, T_{1/\varepsilon})$ returned by Split are disjoint, i.e., $\|\mathrm{Mar}(x_i)\|_\infty \le \delta$. Using the properties of $\mathrm{Mar}$ and $\oplus$ (Observation 7), for any $u \in \mathcal{N}$:
\[
\mathrm{Mar}(y_i)_u = \mathrm{Mar}(y_{i-1})_u \oplus \mathrm{Mar}(x_i)_u \le 1 - \big(1 - \mathrm{Mar}(y_{i-1})_u\big)(1 - \delta).
\]
Applying this recursively gives $\mathrm{Mar}(y_i)_u \le 1 - (1 - \delta)^i$. By Corollary 19, for any $u \in Z$ and $i \le t_s/\delta$, we have $u \notin T_1 \cup T_2 \cup \dots \cup T_{1/\varepsilon}$, so $\mathrm{Mar}(y_i)_u = 0$. Then, applying the above inequality yields, for $u \in Z$ and $i \ge t_s/\delta$: $\mathrm{Mar}(y_i)_u \le 1 - (1 - \delta)^{i - t_s/\delta}$.

Lemma 22. For any $i \le 1/\delta$, we have $1 - (1 - \delta)^i \le 1 - e^{-i\delta} + i\delta^2$, $\|\mathrm{Mar}(y_i)\|_\infty \le 1 - e^{-i\delta} + \delta$, and $\|\mathrm{Mar}(y_i) \wedge \mathbf{1}_Z\|_\infty \le 1 - e^{-\max\{0,\, i\delta - t_s\}} + \delta$.

Proof. We prove the first bound by induction on $i$. For $i = 0$, it holds trivially. For $i = 1$, $1 - (1 - \delta) \le 1 - e^{-\delta} + \delta^2$. Expanding $e^{-\delta}$:
\[
e^{-\delta} - (1 - \delta + \delta^2) = \sum_{k \ge 0} \frac{(-\delta)^k}{k!} - (1 - \delta + \delta^2) = -\frac{\delta^2}{2} - \sum_{k \ge 2} \Big(\frac{\delta^{2k-1}}{(2k-1)!} - \frac{\delta^{2k}}{(2k)!}\Big) \le 0.
\]
So it holds for $i = 1$. Assume it holds for all integers in $[0, i]$; we prove it for $i + 1$:
\begin{align*}
1 - (1 - \delta)^{i+1} &= \big(1 - (1 - \delta)^i\big)(1 - \delta) + \delta \\
&\le (1 - e^{-i\delta} + i\delta^2)(1 - \delta) + \delta = 1 + i\delta^2 - i\delta^3 - e^{-i\delta}(1 - \delta) \\
&\le 1 + i\delta^2 - i\delta^3 - e^{-i\delta}(e^{-\delta} - \delta^2) && \text{(using the $i = 1$ case)} \\
&= 1 + (i\delta^2 + e^{-i\delta}\delta^2) - e^{-(i+1)\delta} - i\delta^3 \\
&\le 1 + (i + 1)\delta^2 - e^{-(i+1)\delta}. && \text{(since $e^{-i\delta} \le 1$)}
\end{align*}

Lemma 23. Algorithm 4 requires a total of $O_\varepsilon(nr)$ queries.

Proof.
Since $R(y)$ has at most $2^{\mathrm{frac}(y)}$ distinct possible values, and by Lemma 21, $\mathrm{frac}(y_i) \le \mathrm{supp}(y_i) \le 1/\varepsilon^4$ (a constant depending only on $\varepsilon$), computing $g_{-Z}(S)$ requires a constant number of value queries. In Algorithm 4, only the Split subroutine requires query complexity beyond the dependence on $\varepsilon$. One execution of Split requires $O(nr)$ computations of $g_{-Z}(S)$ and independence queries. Therefore, Algorithm 4 requires $O_\varepsilon(nr)$ queries in total.

Leveraging the above lemmas for the sequence $\{y_i\}_{i=0}^{1/\delta}$, we can obtain a lower bound on the growth of the function value at each step.

Lemma 24. Let $OPT_i = OPT \setminus Z_i$. In Algorithm 4, for any $i \in [0, 1/\delta)$,
\[
\frac{1}{\delta}\big[F(y_i) - F(y_{i-1})\big] \ge (1 - 3\varepsilon) \cdot \big(F(e_{OPT_i} \vee y_{i-1}) - F(y_{i-1})\big).
\]

Proof. Note that for $i \le t_s/\delta$, the algorithm does not involve any elements of $Z$; it can be viewed as executing on the ground set $\mathcal{N} \setminus Z$. Since $T_j \cap Z_i = \emptyset$, $OPT_i \cap Z_i = \emptyset$, and $R(y_i) \cap Z_i = \emptyset$, we easily get $F(e_{OPT_i} \vee y_i) = g_{-Z_i}(OPT_i)$ and $F(e_{T_j} \vee y_i) = g_{-Z_i}(T_j)$. Using the guarantee of the Split algorithm (Corollary 16), we have:
\[
\sum_{j=1}^{1/\varepsilon} \big(F(e_{T_j} \vee y_{i-1}) - F(y_{i-1})\big) \ge (1 - 2\varepsilon) F(e_{OPT_i} \vee y_{i-1}) - (1 - \varepsilon) F(y_{i-1}). \tag{5}
\]
Now note:
\begin{align*}
F(y_i) &= \mathbb{E}[f(R(y_i))] = \mathbb{E}[f(R(y_{i-1} \oplus x_i))] = \mathbb{E}[f(R(y_{i-1}) \cup R(x_i))] \\
&= \sum_{S \subseteq \mathcal{N}} \Pr[R(x_i) = S] \cdot F(e_S \vee y_{i-1}) \\
&\ge (1 - \delta)^{1/\varepsilon} F(y_{i-1}) + \sum_{j=1}^{1/\varepsilon} \delta (1 - \delta)^{1/\varepsilon - 1} F(e_{T_j} \vee y_{i-1}) && \text{(keeping only $S = \emptyset$ or $S = T_j$)} \\
&\ge \Big(1 - \frac{\delta}{\varepsilon}\Big) F(y_{i-1}) + \delta \Big(1 - \frac{\delta}{\varepsilon}\Big) \sum_{j=1}^{1/\varepsilon} F(e_{T_j} \vee y_{i-1}) && \text{(since $(1 - \delta)^k \ge 1 - k\delta$)} \\
&\ge \Big(1 - \frac{\delta}{\varepsilon}\Big) \Big(F(y_{i-1}) + \delta \Big[(1 - 2\varepsilon) F(e_{OPT_i} \vee y_{i-1}) + \Big(\frac{1}{\varepsilon} - 1 + \varepsilon\Big) F(y_{i-1})\Big]\Big) && \text{(using (5))} \\
&\ge (1 - \delta) F(y_{i-1}) + \delta (1 - 3\varepsilon) F(e_{OPT_i} \vee y_{i-1}). && \text{(using $\delta = \varepsilon^3$ and simplifying)}
\end{align*}
Rearranging gives the result.
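As a quick numeric sanity check (ours, not part of the paper's argument) of the elementary estimate $1 - (1-\delta)^i \le 1 - e^{-i\delta} + i\delta^2$ from Lemma 22, which the step-wise analysis above relies on:

```python
import math

def lemma22_bound_holds(delta):
    """Verify 1 - (1 - delta)**i <= 1 - exp(-i * delta) + i * delta**2
    for every integer 0 <= i <= 1/delta (small tolerance for float error)."""
    return all(
        1 - (1 - delta) ** i <= 1 - math.exp(-i * delta) + i * delta ** 2 + 1e-12
        for i in range(int(round(1 / delta)) + 1)
    )
```

The check passes for representative step sizes such as $\delta = 0.1$ and $\delta = 0.001$.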
Note that the lower bound in the above lemma involves $F(e_{OPT_i} \vee y_{i-1})$. In the next lemma, we analyze this quantity further to relate it to the optimal value $f(OPT)$.

Lemma 25. In Algorithm 4, for any $i \in [1, 1/\delta]$ and $A \subseteq \mathcal{N}$, we have:
\[
F(y_i \vee e_A) \ge \big(e^{-\max\{0,\, t - t_s\}} - e^{-t}\big) \cdot \max\{0,\, f(A) - f(A \cup Z)\} + (e^{-t} - \delta) \cdot f(A),
\]
where $t = i\delta$.

Proof. By Lemma 10, $F(y_i \vee e_A) \ge \hat{f}(\mathrm{Mar}(y_i \vee e_A))$. Let $\theta_Z = e^{-\max\{0,\, t - t_s\}} - \delta$ and $\theta = e^{-t} - \delta$. By Lemma 22, $\mathrm{Mar}(y_i)_u \le 1 - \theta_Z$ for $u \in Z$, and $\mathrm{Mar}(y_i)_u \le 1 - \theta$ for $u \notin Z$. Note that $\mathrm{Mar}(y_i \vee e_A) = \mathrm{Mar}(y_i) \vee \mathbf{1}_A$, because $R(y_i \vee e_A) = R(y_i) \cup A$. Using these observations:
\begin{align*}
F(y_i \vee e_A) \ge \hat{f}(\mathrm{Mar}(y_i) \vee \mathbf{1}_A) &= \int_0^1 f\big(T_\lambda(\mathrm{Mar}(y_i)) \cup A\big)\, d\lambda \\
&\ge \int_{1 - \theta_Z}^{1 - \theta} f\big(T_\lambda(\mathrm{Mar}(y_i)) \cup A\big)\, d\lambda + \int_{1 - \theta}^{1} f\big(T_\lambda(\mathrm{Mar}(y_i)) \cup A\big)\, d\lambda \\
&\ge \int_{1 - \theta_Z}^{1 - \theta} f\big(T_\lambda(\mathrm{Mar}(y_i)) \cup A\big)\, d\lambda + \theta \cdot f(A) && \text{(since $T_\lambda = \emptyset$ for $\lambda > 1 - \theta$)} \\
&\ge (\theta_Z - \theta) \cdot \max\{f(A) - f(Z \cup A),\, 0\} + \theta \cdot f(A).
\end{align*}
The last inequality holds because for $1 - \theta_Z < \lambda \le 1 - \theta$ we have $T_\lambda(\mathrm{Mar}(y_i)) \subseteq \mathcal{N} \setminus Z$. By submodularity and non-negativity of $f$, for any $B \subseteq \mathcal{N} \setminus Z$:
\[
f(B \cup (Z \setminus A) \cup A) - f(B \cup A) \le f((Z \setminus A) \cup A) - f(A)
\ \Rightarrow\ f(B \cup A) \ge f(A) - f(Z \cup A)
\]
\[
\Rightarrow\ \int_{1 - \theta_Z}^{1 - \theta} f\big(T_\lambda(\mathrm{Mar}(y_i)) \cup A\big)\, d\lambda \ge (\theta_Z - \theta) \cdot \big(f(A) - f(Z \cup A)\big),
\]
and this value is also at least $0$ by non-negativity.

To translate the per-iteration improvement bounds into a bound on the final value $F(y_{1/\delta})$, we compare the discrete evolution of $F(y_i)$ with an auxiliary function. Since $A$ is arbitrary in the above lemma, we can finally express our solution $F(y_{1/\delta})$ in terms of $OPT$ and $Z$, and then complete the entire algorithm via Theorem 11.
Define $g(0) = 0$ and
\[
g((i+1)\delta) =
\begin{cases}
g(i\delta) + \delta\big[f(OPT \setminus Z) - (1 - e^{-i\delta}) \cdot f(Z \cup OPT) - g(i\delta)\big] & \text{if } i\delta < t_s, \\[2pt]
g(i\delta) + \delta\big[e^{-i\delta} \cdot f(OPT) + (e^{t_s - i\delta} - e^{-i\delta}) \cdot \max\{f(OPT) - f(Z \cup OPT),\, 0\} - g(i\delta)\big] & \text{if } i\delta \ge t_s.
\end{cases}
\]

Lemma 26. For any $i \in [0, 1/\delta]$, $g(i\delta) \le F(y_i) + 4i\delta\varepsilon \cdot f(OPT)$.

Proof. We prove by induction on $i$. For $i = 0$, $g(0) = 0 \le F(y_0)$. Assume the lemma holds for $i$; we prove it for $i + 1$. Define the increment of $g$ as:
\[
g'(i\delta) =
\begin{cases}
f(OPT \setminus Z) - (1 - e^{-i\delta}) \cdot f(Z \cup OPT) & \text{if } i\delta < t_s, \\[2pt]
e^{-i\delta} \cdot f(OPT) + (e^{t_s - i\delta} - e^{-i\delta}) \cdot \max\{f(OPT) - f(Z \cup OPT),\, 0\} & \text{if } i\delta \ge t_s.
\end{cases}
\]
By definition, $g'(i\delta) \le f(OPT)$ and $g((i+1)\delta) = \delta[g'(i\delta) - g(i\delta)] + g(i\delta)$. By Lemmas 24 and 25, we have:
\begin{align}
F(y_{i+1}) &\ge F(y_i) + \delta\big[(1 - 3\varepsilon)\big(g'(i\delta) - \delta \cdot f(OPT)\big) - F(y_i)\big] \tag{6} \\
&\ge F(y_i) + \delta\big[(1 - 3\varepsilon) g'(i\delta) - \delta \cdot f(OPT) - F(y_i)\big]. \tag{7}
\end{align}
Therefore:
\begin{align*}
g((i+1)\delta) &= g(i\delta) + \delta\big(g'(i\delta) - g(i\delta)\big) = (1 - \delta) g(i\delta) + \delta g'(i\delta) \\
&\le (1 - \delta)\big[F(y_i) + 4i\delta\varepsilon \cdot f(OPT)\big] + \delta g'(i\delta) && \text{(induction hypothesis)} \\
&\le F(y_{i+1}) - \delta(1 - 3\varepsilon) g'(i\delta) + \delta^2 f(OPT) + (1 - \delta) \cdot 4i\delta\varepsilon \cdot f(OPT) + \delta g'(i\delta) && \text{(using (7))} \\
&= F(y_{i+1}) + (1 - \delta) 4i\delta\varepsilon \cdot f(OPT) + 3\varepsilon\delta g'(i\delta) + \delta^2 f(OPT) \\
&\le F(y_{i+1}) + 4(i+1)\delta\varepsilon \cdot f(OPT). && \text{(since $g'(i\delta) \le f(OPT)$ and $\delta < \varepsilon$)}
\end{align*}

Theorem 27. The solution returned by Algorithm 4 satisfies:
\[
F(y_{1/\delta}) \ge e^{t_s - 1}\big[(2 - t_s - e^{-t_s} - 4\varepsilon) f(OPT) - (1 - e^{-t_s}) f(Z \cap OPT) - (2 - t_s - 2e^{-t_s}) f(Z \cup OPT)\big].
\]

Proof. The definition of $g(i\delta)$ is consistent with that in [7].
Its Corollary A.5 proves:
\[
g(1) \ge e^{t_s - 1}\big[(2 - t_s - e^{-t_s}) f(OPT) - (1 - e^{-t_s}) f(Z \cap OPT) - (2 - t_s - 2e^{-t_s}) f(Z \cup OPT)\big].
\]
By Lemma 26,
\[
F(y_{1/\delta}) \ge g(1) - 4\varepsilon f(OPT) \ge e^{t_s - 1}\big[(2 - t_s - e^{-t_s} - 4\varepsilon) f(OPT) - (1 - e^{-t_s}) f(Z \cap OPT) - (2 - t_s - 2e^{-t_s}) f(Z \cup OPT)\big].
\]

3.3 Proof of the Main Result

In the previous subsections, we established the theoretical guarantees for the two main components of our algorithm. First, the local search algorithm returns a set $Z$ satisfying the structural inequalities in Theorem 14. Second, the continuous greedy phase produces a fractional vector whose value satisfies the bound in Theorem 27. Finally, by Theorem 11, the Deterministic-Pipage procedure converts any fractional vector $y$ with $\mathrm{Mar}(y) \in P(\mathcal{M})$ into a feasible set $S \in \mathcal{I}$ such that $f(S) \ge F(y)$. Therefore, to prove the approximation guarantee of our algorithm, it suffices to establish a lower bound on the value returned by Algorithm 1. We now prove the main theorem.

Proof of Theorem 1. By Theorem 11, the Deterministic-Pipage algorithm converts a fractional vector $y$ with $\mathrm{Mar}(y) \in P(\mathcal{M})$ into a feasible set $S \in \mathcal{I}$ satisfying $f(S) \ge F(y)$; hence it suffices to prove a lower bound on the fractional value returned by Algorithm 1. The feasibility of the solutions produced by the algorithm and the query complexity bound follow from Theorem 14, Lemma 20, and Lemma 23, so we only need to analyze the approximation ratio. The lower bound for the solution of our algorithm follows essentially the same analysis as in [6]. The algorithm returns the maximum of two candidate solutions; we first list the inequalities satisfied by these two quantities. By Theorem 14, for every $A \in \mathcal{I}$,
\[
f(Z) \ge \frac{1}{2} f(Z \cup A) + \frac{1}{2} f(Z \cap A) - \varepsilon \cdot f(OPT).
\]
Substituting $A = OPT$ and $A = OPT \cap Z$ gives
\begin{align}
f(Z) &\ge \frac{1}{2} f(Z \cup OPT) + \frac{1}{2} f(Z \cap OPT) - \varepsilon \cdot f(OPT), \tag{8} \\
f(Z) &\ge f(Z \cap OPT) - \varepsilon \cdot f(OPT). \tag{9}
\end{align}
By Theorem 27,
\[
F(y_{1/\delta}) \ge e^{t_s - 1}\big[(2 - t_s - e^{-t_s} - 4\varepsilon) f(OPT) - (1 - e^{-t_s}) f(Z \cap OPT) - (2 - t_s - 2e^{-t_s}) f(Z \cup OPT)\big]. \tag{10}
\]
Inequalities (8), (9), and (10) are all valid lower bounds on the objective value returned by the algorithm, so any convex combination of them is also a valid lower bound. Let $ALG$ denote the fractional solution returned by the algorithm. For any $p_1, p_2, p_3 \ge 0$ with $p_1 + p_2 + p_3 = 1$, we obtain
\begin{align*}
F(ALG) \ge\ & p_1 \Big(\frac{1}{2} f(Z \cup OPT) + \frac{1}{2} f(Z \cap OPT) - \varepsilon f(OPT)\Big) + p_2 \big(f(Z \cap OPT) - \varepsilon f(OPT)\big) \\
&+ p_3\, e^{t_s - 1}\big[(2 - t_s - e^{-t_s} - 4\varepsilon) f(OPT) - (1 - e^{-t_s}) f(Z \cap OPT) - (2 - t_s - 2e^{-t_s}) f(Z \cup OPT)\big].
\end{align*}
We choose $p_1, p_2, p_3$ and $t_s$ so that the coefficients of $f(Z \cup OPT)$ and $f(Z \cap OPT)$ are non-negative while maximizing the coefficient of $f(OPT)$. As shown in Section 3.1 of [6], the optimal parameters are $p_1 = 0.205$, $p_2 = 0.025$, $p_3 = 0.070$, $t_s = 0.372$, which yields
\[
F(ALG) \ge (0.385 - O(\varepsilon)) f(OPT).
\]
Applying Theorem 11 completes the proof.

4 Knapsack Constraints

In this section, we present our deterministic algorithm for the knapsack constraint and analyze its approximation ratio and query complexity. Our algorithm consists of three components: element enumeration, optimization, and rounding. The algorithm first enumerates all subsets $E_i$ of the ground set of size at most $\varepsilon^{-2}$. Then, for each $E_i$, we assume $E_i$ has been selected; that is, we consider $f(\cdot \mid E_i)$ as the new objective and $(1 - \varepsilon)(B - w(E_i))$ as the remaining capacity. We apply a measured continuous greedy algorithm on the extended multilinear extension to obtain a vector $y_i \in [0, 1]^{2^{\mathcal{N}}}$.
Next, we apply a rounding algorithm to $y_i$ to obtain a set $S_i$. Finally, we select the set with the largest value among all $S_i$ as our output. The above procedure is summarized in Algorithm 5.

Algorithm 5: DeterministicKnapsack($f$, $w$, $B$)
1 for each set $E_i \subseteq \mathcal{N}$ with $|E_i| \le \varepsilon^{-2}$ do
2   $y_i \leftarrow \mathrm{DMCG}\big(f(\cdot \cup E_i), w, (1 - \varepsilon)(B - w(E_i)), \varepsilon\big)$
3   $S_i \leftarrow \mathrm{Rounding}(f, w, y_i)$
4 return $\arg\max_i f(S_i \cup E_i)$

4.1 Enumerate Elements

We begin our analysis with the element enumeration step. Following the approach in [14], by enumerating all subsets of size at most $O(\varepsilon^{-2})$, we effectively fix the heavy elements and reduce the remaining problem to one consisting only of lightweight elements, each contributing at most an $\varepsilon$ fraction of the total weight. We then show that, in this reduced instance, the knapsack capacity can be safely scaled down by a factor of $\alpha$, and under this reduced capacity the optimal value remains at least an $\alpha - O(\varepsilon)$ fraction of the original one. This reduction has two purposes: first, it makes the knapsack constraint behave more like a cardinality constraint during the optimization phase; second, it provides slack for the subsequent rounding procedure to ensure feasibility.

Lemma 28 (Corollary 4.16 in [14]). For maximizing a submodular function $f \colon 2^{\mathcal{N}} \to \mathbb{R}_+$ under a knapsack constraint with capacity $B$ and a weight function $w \colon \mathcal{N} \to \mathbb{R}_+$, there exists a subset $E \subseteq \mathcal{N}$ satisfying the following properties:
1. Its cardinality is bounded by $|E| \le \varepsilon^{-2}$, and hence $E$ can be identified by enumerating at most $\varepsilon^{-2}$ elements.
2. For every element $o \in OPT \setminus E$, the marginal contribution satisfies $f(o \mid E) \le \varepsilon^2 f(OPT)$.
3. Let $Big = \{u \in OPT \mid w(u) \ge \varepsilon(B - w(E))\} \setminus E$; then $f(OPT \setminus Big) \ge (1 - \varepsilon) f(OPT)$.

Proof. We consider a subset $E$ constructed as follows. Initially, $E$ is empty. We then iterate over the elements of the optimal solution $OPT$.
For each element $o \in OPT$, if $f(o \mid E) \ge \varepsilon^2 f(OPT)$, then we add $o$ to $E$. We first observe that if $|E| > \varepsilon^{-2}$, then by the construction rule of $E$ we would have
\[
f(E) > |E| \cdot \varepsilon^2 f(OPT) \ge \varepsilon^{-2} \cdot \varepsilon^2 f(OPT) = f(OPT).
\]
This is impossible, since $E \subseteq OPT$ implies $f(E) \le f(OPT)$, the optimal value. Therefore, we must have $|E| \le \varepsilon^{-2}$.

Next, consider the second property. Since $f$ is submodular, the marginal contribution of any element can only decrease as the set $E$ grows. Consequently, once every element of $OPT$ has been examined, submodularity guarantees that its marginal contribution cannot increase in any subsequent step, and hence no remaining element can satisfy the threshold inequality above. Therefore, for all $o \in OPT \setminus E$, we have $f(o \mid E) \le \varepsilon^2 f(OPT)$.

Finally, consider the third property. Let $Big = \{b_1, b_2, \dots, b_t\}$ denote the subset of elements in $OPT$ with weights at least $\varepsilon(B - w(E))$. We analyze the process of removing these elements from $OPT$ one by one. For each $b_i \in Big$, the loss incurred is bounded by
\[
f\big(b_i \mid OPT \setminus Big \cup \{b_{i+1}, \dots, b_t\}\big) \le f(b_i \mid OPT \setminus Big) \le f(b_i \mid E) \le \varepsilon^2 f(OPT),
\]
where the last inequality follows from the property that no element outside $E$ has marginal contribution larger than $\varepsilon^2 f(OPT)$. Since there are at most $\varepsilon^{-1}$ such elements, the total loss is bounded by $\varepsilon f(OPT)$.

Moreover, we prove that if such a set $E$ is assumed to have already been selected and the remaining capacity is further reduced to an $\alpha$ fraction of the original, then the optimal value remains at least an $(\alpha - \varepsilon^2)$ fraction.

Lemma 29. Consider the problem of maximizing a submodular function $f \colon 2^{\mathcal{N}} \to \mathbb{R}_+$ under a knapsack constraint with capacity $B$ and a weight function $w \colon \mathcal{N} \to \mathbb{R}_+$. If for any $u \in \mathcal{N}$ we have $f(u \mid \emptyset) \le \varepsilon^2 f(OPT)$ and $w(u) \le \varepsilon B$, then
\[
\max_{w(S) \le \alpha B} f(S) \ge (\alpha - \varepsilon^2) f(OPT).
\]
Proof. We consider the following iterative procedure. Let $O_0 = OPT$. In the $i$-th round, we select the element $o_i \in O_{i-1}$ with the lowest current density, i.e.,
\[
o_i = \arg\min_{u \in O_{i-1}} \frac{f(u \mid O_{i-1} - u)}{w(u)},
\]
and remove it to obtain $O_i = O_{i-1} \setminus \{o_i\}$, continuing until $w(O_t) \le \alpha B$ for some $t$. Since $o_i$ has the lowest density, it follows that
\[
f(o_i \mid O_{i-1} - o_i) \le f(O_{i-1}) \cdot \frac{w(o_i)}{w(O_{i-1})}.
\]
Otherwise, summing over all $u \in O_{i-1}$, we would obtain
\[
f(O_{i-1}) \ge \sum_{u \in O_{i-1}} f(u \mid O_{i-1} - u) > f(O_{i-1}) \sum_{u \in O_{i-1}} \frac{w(u)}{w(O_{i-1})} = f(O_{i-1}),
\]
which is a contradiction. Rearranging terms, we then have
\[
\frac{f(O_i)}{w(O_i)} \ge \frac{f(O_{i-1})}{w(O_{i-1})}.
\]
Applying this inequality iteratively, we obtain
\[
\frac{f(O_{t-1})}{w(O_{t-1})} \ge \frac{f(O_{t-2})}{w(O_{t-2})} \ge \dots \ge \frac{f(OPT)}{w(OPT)}.
\]
It follows that
\[
f(O_t) \ge f(O_{t-1}) - f(o_t \mid O_t) \ge f(OPT) \frac{w(O_{t-1})}{w(OPT)} - f(o_t \mid O_t) \ge \alpha f(OPT) - f(o_t \mid O_t) \ge \alpha f(OPT) - f(o_t \mid \emptyset) \ge (\alpha - \varepsilon^2) \cdot f(OPT).
\]

Consequently, our algorithm first enumerates all subsets $E_i$ of size at most $\varepsilon^{-2}$. This guarantees that the desired set $E$ will be included among the enumerated candidates. For each such set $E_i$, we solve the corresponding derived optimization problem defined above using an appropriate optimization algorithm. If this algorithm returns an approximate fractional solution for the derived problem, then, when applied to the correct set $E$, it yields a fractional solution that achieves the desired approximation guarantee.

4.2 Deterministic Measured Continuous Greedy

We now describe our optimization component, which relies on a novel split algorithm, namely the KnapsackSplit subroutine. Different from the matroid split algorithm, we replace the marginal gain $f(u \mid T_j)$ with the marginal density $f(u \mid T_j)/w(u)$.
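A minimal sketch of this density rule (our illustrative code, not the paper's KnapsackSplit verbatim: it checks the budget before each insertion, and the function `knapsack_split` and the toy coverage instance used to exercise it are assumptions of this sketch):

```python
def knapsack_split(f, w, B, items, ell):
    """Greedy density-based split: repeatedly add the feasible (element, part)
    pair maximizing the marginal density f(u | T_j) / w(u), stopping when no
    remaining element fits within the budget B."""
    parts = [set() for _ in range(ell)]
    used, total_w = set(), 0.0
    while True:
        best = None
        for u in items:
            if u in used or total_w + w[u] > B:
                continue
            for j in range(ell):
                density = (f(parts[j] | {u}) - f(parts[j])) / w[u]
                if best is None or density > best[0]:
                    best = (density, u, j)
        if best is None:
            break                      # nothing feasible remains
        _, u, j = best
        parts[j].add(u)
        used.add(u)
        total_w += w[u]
    return parts
```

The returned parts are disjoint by construction, and their union always respects the budget, mirroring the two structural guarantees stated for KnapsackSplit below.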
Throughout the remainder of this section, we set $\ell = 1/\varepsilon$ and, without loss of generality, assume that $1/\varepsilon$ is an integer. For the KnapsackSplit algorithm, we establish the following result:

Lemma 30. Given a submodular function $f \colon 2^{\mathcal{N}} \to \mathbb{R}_+$, a knapsack constraint with capacity $B$, and a weight function $w \colon \mathcal{N} \to \mathbb{R}_+$, for any feasible set $Q$, if $f(u \mid \emptyset) \le \varepsilon^2 f(Q)$ for all $u \in \mathcal{N}$, the KnapsackSplit algorithm produces disjoint sets $T_1, T_2, \dots, T_\ell \subseteq \mathcal{N}$ whose union $T = \bigcup_{j=1}^{\ell} T_j$ satisfies $w(T) \le B$, and
\[
\sum_{j=1}^{\ell} f(T_j \mid \emptyset) \ge \max\Big\{\frac{1}{\ell} \sum_{j=1}^{\ell} \big(f(Q \cup T_j) - f(T_j)\big) - \varepsilon^2 f(Q),\ 0\Big\}.
\]

This result is analogous to the one proved for the Split algorithm in [9], except for the additional $\varepsilon^2 f(Q)$ term. The proof strategy also differs from that of Split. In the original analysis, the argument relies on the basis exchange property of matroids, which establishes a bijection between the selected set $T$ and any feasible set $Q$. Under a knapsack constraint, such a bijection no longer exists. Instead, we order the elements into a sequence and partition them into several segments so that they can be paired accordingly. As a result, at most one element $q \in Q$ with $f(q \mid T) \ge 0$ may remain unpaired. As shown above, the marginal contribution of such an element is at most $\varepsilon^2 f(Q)$. We further show that this additional term is negligible in the overall analysis of the optimization algorithm.

Algorithm 6: KnapsackSplit($f$, $w$, $B$, $\ell$)
1 Initialize $T_1 \leftarrow \emptyset, T_2 \leftarrow \emptyset, \dots, T_\ell \leftarrow \emptyset$.
2 Use $T$ to denote the union $\bigcup_{j=1}^{\ell} T_j$.
3 while $w(T) \le B$ do
4   Let $\mathcal{N}' \leftarrow \{u \in \mathcal{N} \setminus T \mid w(T \cup \{u\}) \le B\}$.
5   Let $(u, j) \in \arg\max_{(u, j) \in \mathcal{N}' \times [\ell]} \frac{f(u \mid T_j)}{w(u)}$.
6   $T_j \leftarrow T_j \cup \{u\}$.
7 return $(T_1, \dots, T_\ell)$.

Algorithm 7: KnapsackDMCG($f$, $w$, $B$, $\varepsilon$)
1 Let $\delta \leftarrow \varepsilon^3$.
2 Let $y(0) \leftarrow 0$.
3 for $t = 0, \delta, \dots, 1 - \delta$ do
4   Define $g \colon 2^{\mathcal{N}} \to \mathbb{R}_{\ge 0}$ by $g(S) = F(e_S \vee y(t))$.
5   $(T_1, \dots, T_{1/\varepsilon}) \leftarrow \mathrm{KnapsackSplit}(g, w, B, 1/\varepsilon)$
6   $y(t + \delta) \leftarrow y(t) \oplus \big(\delta \cdot \sum_{j=1}^{1/\varepsilon} e_{T_j}\big)$.
7 return $y(1)$.

Proof. We first establish a correspondence between the elements of $Q$ and those of $T = \bigcup_j T_j$. We arbitrarily order the elements of $Q$ as $q_1, q_2, \dots, q_x$, and denote the elements of $T$ according to the order in which they were added as $t_1, t_2, \dots, t_y$, where $h_i$ indicates that the element $t_i$ was added to the set $T_{h_i}$. We additionally denote by $T^{(i)}_j$ the set of elements in $T_j$ at the moment when the element $t_i$ is added. We represent each element as an interval on the real line $[0, B]$, with length equal to its weight. We first place the elements of $Q$ consecutively according to the fixed order: the element $q_1$ corresponds to the interval $[0, w(q_1)]$, $q_2$ corresponds to the interval $[w(q_1), w(q_1) + w(q_2)]$, and in general $q_i$ corresponds to the interval
\[
I^Q_i = \Big[\sum_{k=1}^{i-1} w(q_k),\ \sum_{k=1}^{i} w(q_k)\Big].
\]
Similarly, we place the elements of $T$ on the same interval $[0, B]$ according to their order of insertion: the element $t_1$ corresponds to the interval $[0, w(t_1)]$, the element $t_2$ corresponds to the interval $[w(t_1), w(t_1) + w(t_2)]$, and in general the element $t_i$ corresponds to the interval
\[
I^T_i = \Big[\sum_{k=1}^{i-1} w(t_k),\ \sum_{k=1}^{i} w(t_k)\Big].
\]
Based on the interval representations constructed above, we consider the pairwise intersections between intervals associated with elements of $Q$ and those associated with elements of $T$. Specifically, each point in $[0, B]$ lies in at most one interval corresponding to an element of $Q$ and in at most one interval corresponding to an element of $T$, which naturally induces a pairing between elements of $Q$ and elements of $T$ through their overlapping intervals.
For $\alpha$ indexing elements of $Q$ and $\beta$ indexing elements of $T$, we denote by $w_{\alpha,\beta}$ the length of the intersection of the corresponding intervals,
\[
w_{\alpha,\beta} = \big|I^Q_\alpha \cap I^T_\beta\big|.
\]

(Figure: the elements $q_1, q_2, \dots, q_x$ of $Q$ and $t_1, t_2, \dots, t_y$ of $T$ laid out as two rows of consecutive intervals on $[0, B]$.)

For each $\alpha \in \{1, 2, \dots, x\}$ and $\beta \in \{1, 2, \dots, y\}$ whose intervals overlap, we have
\[
\frac{f(t_\beta \mid T^{(\beta - 1)}_{h_\beta})}{w(t_\beta)} \ge \frac{1}{\ell} \sum_{j=1}^{\ell} \frac{f(q_\alpha \mid T^{(\beta - 1)}_j)}{w(q_\alpha)} \ge \frac{1}{\ell} \sum_{j=1}^{\ell} \frac{f(q_\alpha \mid T_j \cup Q_{\alpha - 1})}{w(q_\alpha)}.
\]
Multiplying both sides by $w_{\alpha,\beta}$ and summing over all $\alpha \in \{1, \dots, x\}$ and $\beta \in \{1, \dots, y\}$, we obtain
\[
\sum_{\beta=1}^{y} \sum_{\alpha=1}^{x} \frac{w_{\alpha,\beta}}{w(t_\beta)} f(t_\beta \mid T^{(\beta - 1)}_{h_\beta}) \ge \frac{1}{\ell} \sum_{j=1}^{\ell} \sum_{\alpha=1}^{x} \sum_{\beta=1}^{y} \frac{w_{\alpha,\beta}}{w(q_\alpha)} f(q_\alpha \mid T_j \cup Q_{\alpha - 1}).
\]
On the left-hand side, by construction we have $\sum_{\alpha=1}^{x} w_{\alpha,\beta} \le w(t_\beta)$ for each $\beta$, so the left-hand side is at most
\[
\sum_{\beta=1}^{y} f(t_\beta \mid T^{(\beta - 1)}_{h_\beta}) = \sum_{j=1}^{\ell} f(T_j).
\]
On the right-hand side, the term $\sum_{\beta=1}^{y} w_{\alpha,\beta}$ admits a simple characterization depending on the position of $\alpha$ relative to $\gamma$, where $q_\gamma$ denotes the element of $Q$ whose interval contains the right endpoint $w(T)$ of the intervals of $T$. For all $\alpha < \gamma$, the interval $I^Q_\alpha$ is fully covered by the union of the intervals corresponding to $T$, and hence $\sum_{\beta=1}^{y} w_{\alpha,\beta} = w(q_\alpha)$. For $\alpha = \gamma$, the interval $I^Q_\alpha$ is only partially covered, which implies $\sum_{\beta=1}^{y} w_{\alpha,\beta} \le w(q_\alpha)$. For $\alpha > \gamma$, the interval $I^Q_\alpha$ has empty intersection with the intervals corresponding to $T$ and lies entirely to their right, implying $w(q_\alpha) + w(T) \le B$. Thus,
\[
f(q_\alpha \mid T_i) \le 0, \quad \forall\, i = 1, 2, \dots, \ell,
\]
since otherwise this element could still be added to $T_i$.
Summing and scaling, we obtain
$$\sum_{\alpha=\gamma+1}^{x}\sum_{j=1}^{\ell} f(q_\alpha \mid T_j \cup Q_{\alpha-1}) \;\le\; \sum_{\alpha=\gamma+1}^{x}\sum_{j=1}^{\ell} f(q_\alpha \mid T_j) \;\le\; 0.$$
Consequently, we have
$$\begin{aligned}
\frac{1}{\ell}\sum_{j=1}^{\ell}\sum_{\alpha=1}^{x}\sum_{\beta=1}^{y}\frac{w_{\alpha,\beta}}{w(q_\alpha)}\, f(q_\alpha \mid T_j \cup Q_{\alpha-1})
&= \frac{1}{\ell}\sum_{\alpha=1}^{\gamma-1}\sum_{j=1}^{\ell}\sum_{\beta=1}^{y}\frac{w_{\alpha,\beta}}{w(q_\alpha)}\, f(q_\alpha \mid T_j \cup Q_{\alpha-1}) + \frac{1}{\ell}\sum_{j=1}^{\ell}\sum_{\beta=1}^{y}\frac{w_{\gamma,\beta}}{w(q_\gamma)}\, f(q_\gamma \mid T_j \cup Q_{\gamma-1}) \\
&= \frac{1}{\ell}\sum_{\alpha=1}^{\gamma}\sum_{j=1}^{\ell} f(q_\alpha \mid T_j \cup Q_{\alpha-1}) - \Bigg(1 - \sum_{\beta=1}^{y}\frac{w_{\gamma,\beta}}{w(q_\gamma)}\Bigg)\frac{1}{\ell}\sum_{j=1}^{\ell} f(q_\gamma \mid T_j \cup Q_{\gamma-1}) \\
&\ge \frac{1}{\ell}\sum_{\alpha=1}^{x}\sum_{j=1}^{\ell} f(q_\alpha \mid T_j \cup Q_{\alpha-1}) - \Bigg(1 - \sum_{\beta=1}^{y}\frac{w_{\gamma,\beta}}{w(q_\gamma)}\Bigg)\frac{1}{\ell}\sum_{j=1}^{\ell} f(q_\gamma \mid T_j \cup Q_{\gamma-1}) \\
&\ge \frac{1}{\ell}\sum_{\alpha=1}^{x}\sum_{j=1}^{\ell} f(q_\alpha \mid T_j \cup Q_{\alpha-1}) - \Bigg(1 - \sum_{\beta=1}^{y}\frac{w_{\gamma,\beta}}{w(q_\gamma)}\Bigg)\frac{1}{\ell}\sum_{j=1}^{\ell} f(q_\gamma \mid \emptyset) \\
&= \frac{1}{\ell}\sum_{j=1}^{\ell}\big(f(Q \cup T_j) - f(T_j)\big) - \varepsilon^2 f(Q).
\end{aligned}$$

With the per-iteration progress guarantee in place, we can prove that the algorithm returns a continuous solution with a guaranteed approximation ratio.

Theorem 31. Given a submodular function $f: 2^N \to \mathbb{R}_+$, a knapsack constraint with capacity $B$ and a weight function $w: N \to \mathbb{R}_+$, the KnapsackDMCG algorithm returns a fractional solution $y \in [0,1]^{2^N}$ with $\sum_{u \in N} \mathrm{Mar}_u(y)\, w(u) \le B$, and
$$F(y(1)) \ge \Big(\frac{1}{e} - O(\varepsilon)\Big) f(\mathrm{OPT}) \qquad (11)$$
where $F$ is the extended multilinear extension of $f$. Besides, for all $i \in [0, 1/\delta]$, we have $\mathrm{frac}(y(i\delta)) \le (1/\varepsilon) \cdot i \le \varepsilon^{-4}$.

Proof. Firstly, we briefly explain why (11) holds, as this part of the proof is identical to the argument in Section 3 (the matroid case) and can be directly derived from [9]. By submodularity and the choice $\ell = 1/\varepsilon$, we obtain
$$\sum_{j=1}^{1/\varepsilon} g(T_j \mid \emptyset) \ge (1 - O(\varepsilon))\, g(\mathrm{OPT}) - \varepsilon \sum_{j=1}^{1/\varepsilon} g(T_j).$$
Plugging this inequality into the iterative process yields
$$\sum_{j=1}^{1/\varepsilon} \Big(F(e_{T_j} \vee y((i-1)\delta)) - F(y((i-1)\delta))\Big) \ge (1 - O(\varepsilon)) \cdot F(e_{\mathrm{OPT}} \vee y((i-1)\delta)) - (1 - \varepsilon) \cdot F(y((i-1)\delta)).$$
Expanding the update rule of the algorithm gives
$$\frac{1}{\delta}\big(F(y(i\delta)) - F(y((i-1)\delta))\big) \ge (1 - O(\varepsilon)) \cdot F(e_{\mathrm{OPT}} \vee y((i-1)\delta)) - F(y((i-1)\delta)).$$
Finally, using $\|\mathrm{Mar}(y(i\delta))\| \le 1 - (1-\delta)^i$ together with Observation 9, we can acquire
$$F(e_{\mathrm{OPT}} \vee y((i-1)\delta)) \ge (1-\delta)^{i-1} \cdot f(\mathrm{OPT}).$$
Rearranging gives $F(y(i\delta)) \ge (1-\delta)\,F(y((i-1)\delta)) + \delta(1 - O(\varepsilon))(1-\delta)^{i-1} f(\mathrm{OPT})$, and by induction $F(y(i\delta)) \ge (1 - O(\varepsilon))\, i\delta\, (1-\delta)^{i-1} f(\mathrm{OPT})$. Solving this recurrence over the interval $[0, 1]$, i.e., taking $i = 1/\delta$ and using $(1-\delta)^{1/\delta - 1} \ge 1/e$, yields
$$F(y(1)) \ge \Big(\frac{1}{e} - O(\varepsilon)\Big) f(\mathrm{OPT}),$$
which establishes (11).

Besides, it is easy to see that in each iteration, when the algorithm updates $y(t)$ to $y(t+\delta)$, it increases at most $1/\varepsilon$ coordinates. Since the algorithm performs at most $1/\varepsilon^3$ iterations in total, we have $\mathrm{frac}(y(i\delta)) \le (1/\varepsilon) \cdot i$.

Now it suffices to show that
$$\sum_{u \in N} \mathrm{Mar}_u(y)\, w(u) \le B.$$
Let $T^{(i)}_j$ denote the set $T_i$ produced in the $j$-th iteration of the algorithm. The final vector $y$ can be written as
$$y = \bigoplus_{j=1}^{1/\delta}\Big(\delta \cdot \sum_{i=1}^{1/\varepsilon} e_{T^{(i)}_j}\Big).$$
For each fixed $j$, the KnapsackSplit algorithm guarantees that the total weight of the selected sets does not exceed the capacity, implying
$$\sum_{u \in N} \mathrm{Mar}_u\Big(\delta \cdot \sum_{i=1}^{1/\varepsilon} e_{T^{(i)}_j}\Big)\, w(u) = \delta \sum_{i=1}^{1/\varepsilon} w(T^{(i)}_j) \le \delta B.$$
By the additivity of the operator $\mathrm{Mar}(\cdot)$ over the direct sum, we obtain
$$\sum_{u \in N} \mathrm{Mar}_u(y)\, w(u) = \sum_{j=1}^{1/\delta} \sum_{u \in N} \mathrm{Mar}_u\Big(\delta \cdot \sum_{i=1}^{1/\varepsilon} e_{T^{(i)}_j}\Big)\, w(u) \le \sum_{j=1}^{1/\delta} \delta B = B.$$

Algorithm 8: Relax(y, u)
1  Update $y_{\{u\}} \leftarrow \mathrm{Mar}_u(y)$.
2  for every non-empty set $S \subseteq N - u$ do
3      $y_S \leftarrow 1 - (1 - y_{S+u})(1 - y_S)$.
4      $y_{S+u} \leftarrow 0$.
5  return $y$.

4.3 Rounding

Finally, we apply a rounding algorithm to the fractional solution $y$ obtained by running the optimization algorithm on the function $f(\cdot \cup E)$.

Algorithm 9: Rounding(f, w, y)
1  while $R \ne N$ do
2      $S \leftarrow R \cap \{u \in N' \mid y_{\{u\}} \in (0,1)\}$.
3      while $|S| < 2$ do
4          Choose $u \in N' \setminus R$.
5          $y \leftarrow \mathrm{Relax}(y, u)$.
6          $R \leftarrow R \cup \{u\}$.
7          Update $S \leftarrow R \cap \{u \in N' \mid y_{\{u\}} \in (0,1)\}$.
8      Pick distinct $u, v \in S$.
9      Define $g(t) := F\Big(y + t\Big(\frac{e_{\{u\}}}{w(u)} - \frac{e_{\{v\}}}{w(v)}\Big)\Big)$.
10      Let $t_{\max} = \min\{(1 - y_{\{u\}})\, w(u),\; y_{\{v\}}\, w(v)\}$ and $t_{\min} = \max\{-y_{\{u\}}\, w(u),\; (y_{\{v\}} - 1)\, w(v)\}$.
11      Let $t^\star \in \arg\max_{t \in \{t_{\min}, t_{\max}\}} g(t)$.
12      $y \leftarrow y + t^\star\Big(\frac{e_{\{u\}}}{w(u)} - \frac{e_{\{v\}}}{w(v)}\Big)$.
13  return $\arg\max\Big\{f(R),\; f\big(R \setminus \{u \in N \mid y_{\{u\}} \in (0,1)\}\big)\Big\}$.

Our rounding algorithm is inspired by the Pipage Rounding paradigm and employs the Relax algorithm of [9] as a key subroutine. The Relax algorithm transforms a vector $x \in [0,1]^{2^N}$ by aggregating all components corresponding to sets that contain a given element $u$ into the coordinate $e_{\{u\}}$. This transformation preserves the marginal vector $\mathrm{Mar}(x)$, increases the support size by at most one, and does not decrease the objective value.

Lemma 32 (Lemma 5.1 in [9]). Let $x \in [0,1]^{2^N}$, $u \in N$ and $z = \mathrm{Relax}(x, u) \in [0,1]^{2^N}$. Computing $z$ requires $O(\mathrm{frac}(x))$ time, and the new vector satisfies:
• $\mathrm{frac}(z) \le 1 + \mathrm{frac}(x)$.
• $\mathrm{Mar}(x) = \mathrm{Mar}(z)$, and $z_S = 0$ for all sets $S$ that contain $u$ but are not the singleton set $\{u\}$.
• $F(z) \ge F(x)$.

Our rounding procedure iteratively reduces the number of fractional elements. At each step, we consider the elements whose corresponding coordinates in $y$ are fractional after applying Relax. If fewer than two such elements remain, we Relax additional elements so that two fractional elements are present, denoted by $u$ and $v$. We then examine the extended multilinear extension along the direction
$$g(t) = F\Big(y + t\Big(\frac{e_{\{u\}}}{w(u)} - \frac{e_{\{v\}}}{w(v)}\Big)\Big).$$
We can see that $g$ is convex.

Lemma 33 (Convexity along exchange directions). Let $f: 2^N \to \mathbb{R}$ be a submodular function and let $F$ denote its extended multilinear extension. For any $y \in [0,1]^{2^N}$ and any two distinct elements $u, v \in N$, the function
$$g(t) = F\Big(y + t\Big(\frac{e_{\{u\}}}{w(u)} - \frac{e_{\{v\}}}{w(v)}\Big)\Big)$$
is convex in $t$ over its feasible interval.

Proof.
Direct differentiation gives
$$g''(t) = \frac{1}{w(u)^2}\,\frac{\partial^2 F(z)}{\partial z_{\{u\}}^2} + \frac{1}{w(v)^2}\,\frac{\partial^2 F(z)}{\partial z_{\{v\}}^2} - \frac{2}{w(u)\, w(v)}\,\frac{\partial^2 F(z)}{\partial z_{\{u\}}\, \partial z_{\{v\}}},$$
where all derivatives are evaluated at $z = y + t\big(\frac{e_{\{u\}}}{w(u)} - \frac{e_{\{v\}}}{w(v)}\big)$. Since $F$ is multilinear in each coordinate, the pure second-order partial derivatives vanish. Hence, by Observation 8, we have
$$g''(t) = -\frac{2}{w(u)\, w(v)}\,\frac{\partial^2 F(z)}{\partial z_{\{u\}}\, \partial z_{\{v\}}} \ge 0.$$

Our algorithm then updates $y$ to the vector corresponding to the endpoint of $g$ that achieves the larger function value. This procedure is repeated iteratively until all elements have been processed by Relax, leaving at most one fractional coordinate. At this point, the algorithm compares the two candidate sets: one consisting of all elements rounded to 1, and the other consisting of these elements together with the remaining fractional element, and selects the set with the larger function value.

The rounding algorithm comes with the following performance guarantee:

Theorem 34. Given a submodular function $f: 2^N \to \mathbb{R}_+$, a knapsack constraint with $w(u) \le \varepsilon B$ for all $u \in N$, and a fractional solution $y$ satisfying $\sum_{u \in N} \mathrm{Mar}_u(y)\, w(u) \le B$, our Rounding algorithm returns a discrete solution $S \subseteq N$ such that $f(S) \ge F(y)$ and $w(S) \le (1+\varepsilon) B$, and the algorithm uses $O(n \cdot 2^{\mathrm{frac}(y)})$ queries.

Proof. We first analyze the behavior of the algorithm when $y$ is updated, specifically during lines 9–12. By convexity, one of the two endpoints of this interval achieves a value at least as large as the current one, allowing us to round either $u$ or $v$ to an integral value without decreasing the objective. Moreover, moving along this direction preserves the weighted sum
$$\sum_{u \in N} \mathrm{Mar}_u(y) \cdot w(u);$$
therefore, the knapsack capacity is maintained.
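To illustrate lines 10–12 of the Rounding algorithm, the following sketch performs a single exchange step on two fractional singleton coordinates. The oracle `g` here is only a stand-in for $t \mapsto F\big(y + t(e_{\{u\}}/w(u) - e_{\{v\}}/w(v))\big)$ (the sketch does not implement the extended multilinear extension, and the numbers are illustrative, not from the paper); by Lemma 33 any convex `g` behaves the same way.

```python
# One pipage-style exchange step (cf. lines 10-12 of Rounding): move y
# along the direction e_u / w(u) - e_v / w(v) to one of the two endpoints
# of the feasible interval.  This makes u or v integral while preserving
# the weighted sum y_u * w(u) + y_v * w(v).

def exchange_step(y_u, y_v, w_u, w_v, g):
    """Return the updated pair (y_u, y_v) after one exchange step.
    `g` is an oracle for the objective along the direction; it is
    convex, so one of the two endpoints is at least as good as t = 0."""
    t_max = min((1 - y_u) * w_u, y_v * w_v)      # largest feasible step up
    t_min = max(-y_u * w_u, (y_v - 1) * w_v)     # largest feasible step down
    t_star = max((t_min, t_max), key=g)          # better endpoint by convexity
    return y_u + t_star / w_u, y_v - t_star / w_v

# Illustrative numbers; g = t^2 is just some convex function.
new_u, new_v = exchange_step(0.4, 0.7, w_u=2.0, w_v=3.0, g=lambda t: t * t)
# After the step, one of the two coordinates is 0 or 1, and
# 2.0 * new_u + 3.0 * new_v equals 2.0 * 0.4 + 3.0 * 0.7 up to
# floating-point error, so the knapsack load is unchanged.
```

At $t = t_{\max}$ either $y_{\{u\}}$ reaches 1 or $y_{\{v\}}$ reaches 0 (symmetrically for $t_{\min}$), which is why each step reduces the number of fractional coordinates by one.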
During the execution, applying Relax may increase $\mathrm{frac}(y)$ by 1, which also increases $|S|$ by 1. Each step along a convex exchange direction decreases both $\mathrm{frac}(y)$ and $|S|$ by 1. Since $|S| \le 2$ is always maintained, it follows that $\mathrm{frac}(y)$ can increase by at most 2 relative to its initial value.

At the final iteration, it is possible that only a single element remains fractional. In this case, the current objective is a convex combination of the values obtained by either selecting or discarding this element. Hence, we can choose the better integral option, ensuring $f(S) \ge F(y)$. Finally, since each rounding step preserves the weighted sum and only one element may remain fractional at the end, we have $w(S) \le (1+\varepsilon) B$. The total number of queries is $O(n \cdot 2^{\mathrm{frac}(y)})$, as each evaluation of $F$ requires $2^{\mathrm{frac}(y)}$ queries and there are at most $n$ elements to process.

This final decision may cause the knapsack capacity to be exceeded; however, the violation is bounded by at most $\varepsilon B$. Since the rounding procedure is executed with an initial capacity of $(1-\varepsilon) B$, such a bounded capacity violation can be safely tolerated.

4.4 Proof of the Main Result

In light of the above descriptions, we are now ready to combine all the preceding results to prove Theorem 2.

Proof of Theorem 2. Algorithm 5 first enumerates all sets $E_i$ with $|E_i| \le \varepsilon^{-2}$, which ensures that the set constructed in Lemma 2, denoted by $E$, is included among the enumerated candidates. Our algorithm then applies an optimization algorithm to the resulting problem instance with objective function $g(S) = f(S \cup E_i)$ and knapsack capacity $(1-\varepsilon)(B - w(E))$. We thereby obtain a vector $y$ for $G(\cdot)$, where $G$ is the extended multilinear extension of $g$.
Due to the property of $E$, this guarantees that
$$G(y) \ge \Big(\frac{1}{e} - O(\varepsilon)\Big) g(\mathrm{OPT}) \ge \Big(\frac{1}{e} - \varepsilon\Big) f(\mathrm{OPT}),$$
and
$$\sum_{u \in N} \mathrm{Mar}_u(y) \cdot w(u) \le (1-\varepsilon)(B - w(E)).$$
We then apply a rounding procedure, which returns two discrete solutions $R$ and $R + u$ such that $\max\{g(R), g(R+u)\} \ge G(y)$ and $w(R+u) \le B - w(E)$. We select the one with the larger value and denote it by $S$.

Finally, our algorithm outputs the set $S_i \cup E_i$ with the largest function value among all candidates, which in particular includes the set $S \cup E$ discussed above. Since $w(S \cup E) \le B$ and
$$f(S \cup E) = g(S) \ge G(y) \ge \Big(\frac{1}{e} - \varepsilon\Big) f(\mathrm{OPT}),$$
both the feasibility and the approximation ratio are satisfied.

For the query complexity, the enumeration step first incurs a multiplicative overhead of $n^{O(\varepsilon^{-2})}$. In the optimization phase, we always maintain $\mathrm{frac}(y) \le \varepsilon^{-4}$. The process runs for $\varepsilon^{-3}$ rounds, and in each round the algorithm evaluates the function for $n^2$ candidate directions. Since querying the value of $G(y)$ requires $2^{\mathrm{frac}(y)}$ value queries to $g$, and $\mathrm{frac}(y) \le \varepsilon^{-4}$ throughout the algorithm, each such evaluation costs at most $2^{\varepsilon^{-4}}$ queries. Therefore, the total number of queries in this phase is $O(n^2 \cdot 2^{\varepsilon^{-4}})$. The rounding algorithm then takes a fractional solution with $\mathrm{frac}(y) \le \varepsilon^{-4}$ as input; each evaluation requires $2^{\mathrm{frac}(y)} \le 2^{\varepsilon^{-4}}$ queries, resulting in $O(n \cdot 2^{\varepsilon^{-4}})$ queries in total. Combining all parts, the overall number of value queries is
$$n^{O(\varepsilon^{-2})} \cdot \Big(O(n^2 \cdot 2^{\varepsilon^{-4}}) + O(n \cdot 2^{\varepsilon^{-4}})\Big) = O_\varepsilon\big(n^{O(\varepsilon^{-2})}\big).$$

5 Conclusion

In this work, based on the optimization-then-rounding paradigm over the extended multilinear extension, we design deterministic algorithms for maximizing non-monotone submodular functions under matroid and knapsack constraints. Our algorithms achieve improved approximation ratios compared with the best previously known deterministic results.
Looking to the future, the extended multilinear extension framework still has broader potential for further study, although many related questions remain challenging. One possible direction is to derandomize the aided continuous greedy algorithm for knapsack constraints, which would lead to a deterministic algorithm with an approximation ratio of 0.385. Another direction is to extend the constraint to a multi-dimensional knapsack. We have verified that, in the optimization phase, the multiplicative-updates technique can be applied within the extended multilinear extension framework. Nevertheless, a deterministic rounding algorithm tailored to the multi-dimensional knapsack constraint is still required. Such a rounding procedure must simultaneously maintain feasibility for all knapsack constraints while carefully controlling the number of fractional coordinates in the vector.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grants Nos. 62325210, 12501450, and 62272441.

References

[1] A. A. Ageev. Improved approximation algorithms for multilevel facility location problems. Oper. Res. Lett., 30(5):327–332, 2002.

[2] A. A. Ageev and M. Sviridenko. An 0.828-approximation algorithm for the uncapacitated facility location problem. Discret. Appl. Math., 93(2-3):149–156, 1999.

[3] G. Amanatidis, F. Fusco, P. Lazos, S. Leonardi, and R. Reiffenhäuser. Fast adaptive non-monotone submodular maximization subject to a knapsack constraint. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems, 2020.

[4] A. Badanidiyuru and J. Vondrák. Fast algorithms for maximizing submodular functions. In Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA, pages 1497–1514. SIAM, 2014.

[5] N. Buchbinder and M. Feldman.
Deterministic algorithms for submodular maximization problems. In Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms, SODA, pages 392–403. SIAM, 2016.

[6] N. Buchbinder and M. Feldman. Constrained submodular maximization via a nonsymmetric technique. Math. Oper. Res., 44(3):988–1005, 2019.

[7] N. Buchbinder and M. Feldman. Constrained submodular maximization via new bounds for DR-submodular functions. In Proceedings of the 56th Annual ACM Symposium on Theory of Computing, STOC, pages 1820–1831. ACM, 2024.

[8] N. Buchbinder and M. Feldman. Deterministic algorithm and faster algorithm for submodular maximization subject to a matroid constraint. In 65th IEEE Annual Symposium on Foundations of Computer Science, FOCS 2024, pages 700–712. IEEE, 2024.

[9] N. Buchbinder and M. Feldman. Extending the extension: Deterministic algorithm for non-monotone submodular maximization. In Proceedings of the 57th Annual ACM Symposium on Theory of Computing, STOC 2025, pages 1130–1141. ACM, 2025.

[10] N. Buchbinder, M. Feldman, and M. Garg. Deterministic (1/2 + ε)-approximation for submodular maximization over a matroid. SIAM J. Comput., 52(4):945–967, 2023.

[11] N. Buchbinder, M. Feldman, J. Naor, and R. Schwartz. Submodular maximization with cardinality constraints. In Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA, pages 1433–1452. SIAM, 2014.

[12] N. Buchbinder, M. Feldman, J. Naor, and R. Schwartz. A tight linear time (1/2)-approximation for unconstrained submodular maximization. SIAM J. Comput., 44(5):1384–1402, 2015.

[13] C. Chekuri, J. Vondrák, and R. Zenklusen. Dependent randomized rounding via exchange properties of combinatorial structures. In 51st Annual IEEE Symposium on Foundations of Computer Science, FOCS, pages 575–584. IEEE Computer Society, 2010.

[14] C. Chekuri, J. Vondrák, and R.
Zenklusen. Submodular function maximization via the multilinear relaxation and contention resolution schemes. SIAM J. Comput., 43(6):1831–1879, 2014.

[15] S. Chen, D. Du, W. Yang, D. Xu, and S. Gao. Continuous non-monotone DR-submodular maximization with down-closed convex constraint. CoRR, abs/2307.09616, 2023.

[16] W. Chen, X. Sun, J. Zhang, and Z. Zhang. Network inference and influence maximization from samples. In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, volume 139 of Proceedings of Machine Learning Research, pages 1707–1716. PMLR, 2021.

[17] Y. Chen, A. Nath, C. Peng, and A. Kuhnle. Discretely beyond 1/e: Guided combinatorial algorithms for submodular maximization. In Advances in Neural Information Processing Systems, volume 37, pages 108929–108973. Curran Associates, Inc., 2024.

[18] G. Călinescu, C. Chekuri, M. Pál, and J. Vondrák. Maximizing a monotone submodular function subject to a matroid constraint. SIAM J. Comput., 40(6):1740–1766, 2011.

[19] J. Dianetti and G. Ferrari. Nonzero-sum submodular monotone-follower games: Existence and approximation of Nash equilibria. SIAM J. Control. Optim., 58(3):1257–1288, 2020.

[20] S. Dobzinski. An impossibility result for truthful combinatorial auctions with submodular valuations. In Proceedings of the 43rd ACM Symposium on Theory of Computing, STOC, pages 139–148. ACM, 2011.

[21] S. Dobzinski and M. Schapira. An improved approximation algorithm for combinatorial auctions with submodular bidders. In Proceedings of the Seventeenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA, pages 1064–1073. ACM Press, 2006.

[22] A. Ene and H. L. Nguyen. Constrained submodular maximization: Beyond 1/e. In IEEE 57th Annual Symposium on Foundations of Computer Science, FOCS, pages 248–257. IEEE Computer Society, 2016.

[23] U. Feige, V. S. Mirrokni, and J. Vondrák.
Maximizing non-monotone submodular functions. In 48th Annual IEEE Symposium on Foundations of Computer Science, FOCS, pages 461–471. IEEE Computer Society, 2007.

[24] M. Feldman, J. Naor, and R. Schwartz. A unified continuous greedy algorithm for submodular maximization. In IEEE 52nd Annual Symposium on Foundations of Computer Science, FOCS, pages 570–579. IEEE Computer Society, 2011.

[25] M. Feldman, Z. Nutov, and E. Shoham. Practical budgeted submodular maximization. Algorithmica, 85(5):1332–1371, 2023.

[26] Y. Feng, Y. Hu, S. Li, and R. Zhang. Constant approximation for weighted Nash social welfare with submodular valuations. In Proceedings of the 57th Annual ACM Symposium on Theory of Computing, STOC, pages 1395–1405. ACM, 2025.

[27] M. L. Fisher, G. L. Nemhauser, and L. A. Wolsey. An analysis of approximations for maximizing submodular set functions - II, pages 73–87. Springer Berlin Heidelberg, Berlin, Heidelberg, 1978.

[28] S. O. Gharan and J. Vondrák. Submodular maximization by simulated annealing. In Proceedings of the Twenty-Second Annual ACM-SIAM Symposium on Discrete Algorithms, SODA, pages 1098–1116. SIAM, 2011.

[29] K. Han, Z. Cao, S. Cui, and B. Wu. Deterministic approximation for submodular maximization over a matroid in nearly linear time. In Advances in Neural Information Processing Systems, volume 33, pages 430–441. Curran Associates, Inc., 2020.

[30] M. Henzinger, P. Liu, J. Vondrák, and D. W. Zheng. Faster submodular maximization for several classes of matroids. In 50th International Colloquium on Automata, Languages, and Programming, ICALP, volume 261 of LIPIcs, pages 74:1–74:18. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2023.

[31] D. Kempe, J. M. Kleinberg, and É. Tardos. Maximizing the spread of influence through a social network.
In Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Washington, DC, USA, August 24-27, 2003, pages 137–146. ACM, 2003.

[32] N. Korula, V. S. Mirrokni, and M. Zadimoghaddam. Online submodular welfare maximization: Greedy beats 1/2 in random order. SIAM J. Comput., 47(3):1056–1086, 2018.

[33] A. Kulik, R. Schwartz, and H. Shachnai. A refined analysis of submodular greedy. Oper. Res. Lett., 49(4):507–514, 2021.

[34] A. Kulik, H. Shachnai, and T. Tamir. Approximations for monotone and nonmonotone submodular maximization with knapsack constraints. Math. Oper. Res., 38(4):729–739, 2013.

[35] W. Li and J. Vondrák. A constant-factor approximation algorithm for Nash social welfare with submodular valuations. In 62nd IEEE Annual Symposium on Foundations of Computer Science, FOCS, pages 25–36. IEEE, 2021.

[36] E. Mossel and S. Roch. On the submodularity of influence in social networks. In Proceedings of the 39th Annual ACM Symposium on Theory of Computing, San Diego, California, USA, June 11-13, 2007, pages 128–134. ACM, 2007.

[37] G. L. Nemhauser and L. A. Wolsey. Best algorithms for approximating the maximum of a submodular set function. Math. Oper. Res., 3(3):177–188, 1978.

[38] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher. An analysis of approximations for maximizing submodular set functions - I. Math. Program., 14(1):265–294, 1978.

[39] X. Pan, S. Jegelka, J. E. Gonzalez, J. K. Bradley, and M. I. Jordan. Parallel double greedy submodular maximization. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems, pages 118–126, 2014.

[40] I. Simon, N. Snavely, and S. M. Seitz. Scene summarization for online image collections. In IEEE 11th International Conference on Computer Vision, ICCV 2007, pages 1–8. IEEE Computer Society, 2007.

[41] R. Sipos, A.
Swaminathan, P. Shivaswamy, and T. Joachims. Temporal corpus summarization using submodular word coverage. In 21st ACM International Conference on Information and Knowledge Management, CIKM'12, pages 754–763. ACM, 2012.

[42] P. Skowron. FPT approximation schemes for maximizing submodular functions. Inf. Comput., 257:65–78, 2017.

[43] X. Sun, J. Zhang, S. Zhang, and Z. Zhang. Improved deterministic algorithms for non-monotone submodular maximization. Theor. Comput. Sci., 984:114293, 2024.

[44] M. Sviridenko. A note on maximizing a submodular set function subject to a knapsack constraint. Oper. Res. Lett., 32(1):41–43, 2004.

[45] S. Tschiatschek, R. K. Iyer, H. Wei, and J. A. Bilmes. Learning mixtures of submodular functions for image collection summarization. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, pages 1413–1421, 2014.

[46] J. Vondrák. Optimal approximation for the submodular welfare problem in the value oracle model. In Proceedings of the 40th Annual ACM Symposium on Theory of Computing, pages 67–74. ACM, 2008.
