The Geometry of Efficient Nonconvex Sampling

Santosh S. Vempala*    Andre Wibisono†

March 27, 2026

Abstract

We present an efficient algorithm for uniformly sampling from an arbitrary compact body X ⊂ R^n from a warm start under isoperimetry and a natural volume growth condition. Our result provides a substantial common generalization of known results for convex bodies and star-shaped bodies. The complexity of the algorithm is polynomial in the dimension, the Poincaré constant of the uniform distribution on X, and the volume growth constant of the set X.

* Georgia Institute of Technology, College of Computing. Email: vempala@gatech.edu. This work was supported in part by NSF award CCF-2504994 and a Simons Investigator award.
† Yale University, Department of Computer Science. Email: andre.wibisono@yale.edu. This work was supported by NSF awards CCF-2403391 and CAREER CCF-2443097.

Contents

1 Introduction
  1.1 Algorithm: In-and-Out
  1.2 Main result: Iteration complexity of In-and-Out with a warm start
2 Preliminaries
  2.1 Geometry of the support set
  2.2 Isoperimetry: Poincaré inequality
  2.3 Volume growth condition
  2.4 Rényi divergence and other statistical divergences
  2.5 Review: Outer convergence of In-and-Out under isoperimetry
3 Analysis of the In-and-Out algorithm
  3.1 Bound on stationary failure probability under volume growth condition
  3.2 Per-iteration success probability and expected number of trials
  3.3 Proof of Theorem 1
  3.4 Discussion and future work
A Additional discussion
  A.1 A review of the Proximal Sampler
  A.2 The analogy with optimization
B Proofs for the Analysis of In-and-Out
  B.1 Proof of Lemma 5
  B.2 Proof of Lemma 6
  B.3 Bound on expected failure probability under stationarity
  B.4 Bound on the expected number of trials under stationarity
  B.5 Proof of Lemma 7
  B.6 Details for the proof of Theorem 1
C Details for the volume growth condition
  C.1 Volume growth condition for convex bodies
  C.2 Volume growth condition for star-shaped bodies
  C.3 Volume growth condition under set union
  C.4 Volume growth condition under set exclusion
References

1 Introduction

Sampling from a bounded set X in a high-dimensional space R^n is a classical problem with connections to many topics in mathematics and theoretical computer science. The literature on the problem is largely based on the theory of sampling from a convex body X (see Figure 1(a) for an example). The celebrated work of Dyer, Frieze and Kannan [Dyer et al., 1991] showed that convex bodies X can be sampled efficiently, i.e., in time polynomial in the dimension n, with only membership-oracle access to the body X and a starting point inside the body. Applegate and Kannan [1991b] extended the frontier of polynomial-time sampling to the class of logconcave probability distributions, which can be viewed as the functional generalization of convex sets. Subsequent works [Lovász and Simonovits, 1990, Dyer and Frieze, 1991, Lovász and Simonovits, 1993, Kannan et al., 1997, Lovász and Vempala, 2007, 2006b, Kook et al., 2024] improved the complexity by introducing new ideas and refining the analyses. The state-of-the-art result by Kook, Vempala and Zhang [Kook et al., 2024, Theorem 3] provides an algorithm (In-and-Out) with an Õ(n²) iteration complexity for sampling from a (near-)isotropic convex body X ⊂ R^n with a warm start, where each iteration uses one call to the membership oracle for X and O(n) arithmetic operations, with a guarantee in Rényi divergence. See also the related work of [Jia et al., 2026] on isotropic transformation, [Kook and Vempala, 2025b] on sampling from general logconcave distributions, and [Kook and Vempala, 2025a] on improved complexity of generating a warm start.

However, few results are known about the algorithmic complexity of sampling from nonconvex bodies. The work of Chandrasekaran, Dadush and Vempala [Chandrasekaran et al., 2010] showed that star-shaped bodies X (see Figure 1(b) for an example) can be sampled with iteration complexity polynomial in the dimension n, in the inverse fraction of volume taken up by the convex core (the non-empty subset of the star-shaped body that can "see" all points in the body), and in ε^{-1}, where ε > 0 is the final error parameter in total variation distance specified in the input.
Beyond this result, we are not aware of any rigorous results on the algorithmic complexity of uniformly sampling nonconvex bodies. Indeed, sampling from nonconvex sets is intractable in the worst case [Koutis, 2003]. Nevertheless, there are many "reasonable" nonconvex sets, such as those depicted in Figures 1(c) and 1(d), that one should be able to sample in polynomial time, but such a statement does not follow from the existing theory, since the sets are not star-shaped.

In this paper, we provide an efficient algorithm for uniformly sampling from an arbitrary compact set X ⊂ R^n under two minimal assumptions: (1) the uniform distribution on X satisfies isoperimetry, namely a Poincaré inequality; and (2) the set X satisfies a natural volume growth condition, which we define below. These conditions substantially generalize convexity and star-shapedness, the two broad families previously known to be efficiently sampleable (and they cover the examples in Figures 1(c) and 1(d); hence our results show that we can indeed sample them in polynomial time). Here, by efficient or polynomial-time, we mean polynomial in the dimension n, the Poincaré constant of the target distribution, the volume growth constants of X, and log ε^{-1}, where ε > 0 is the final error parameter in Rényi divergence specified in the input.

Figure 1: Examples of bodies X ⊂ R^n that current theory covers ((a) convex and (b) star-shaped), and examples of X that current theory does not cover ((c) nonconvex with a hole and (d) not star-shaped).

Therefore, our result recovers the classical polynomial-time sampling from convex bodies, and it improves the result of Chandrasekaran et al. [2010] to polynomial-time sampling from star-shaped bodies (improving the dependence from poly(ε^{-1}) to poly(log ε^{-1})), with stronger error guarantees in Rényi divergence. We discuss these assumptions and the algorithm in more detail.

Isoperimetry. The analysis of sampling for convex bodies and logconcave densities reveals that the isoperimetry of the target distribution is a crucial ingredient for efficient sampling. For example, if a distribution has poor Cheeger isoperimetry (i.e., a large Poincaré constant), then the domain can be partitioned into two subsets with a small separating boundary (see Figure 2(a) for an example), and any "local" process, such as one based on diffusion, without global knowledge, would need many steps even to cross from such a subset to its complement. Our main question is thus the following:

Does isoperimetry (Poincaré inequality) suffice for polynomial-time sampling?

We recall that from the perspective of sampling via continuous-time diffusion processes such as the Langevin dynamics, isoperimetry (Poincaré inequality) is a natural sufficient condition for efficient sampling; see for example [Vempala and Wibisono, 2023, Theorem 3] for the convergence rate of the continuous-time Langevin dynamics under a Poincaré inequality. In discrete time, the Proximal Sampler algorithm [Lee et al., 2021] has a convergence guarantee under a Poincaré inequality analogous to the continuous-time Langevin dynamics; see [Chen et al., 2022, Theorem 4] for the case of an unconstrained target distribution, and [Kook et al., 2024, Theorem 14] for the case of the uniform distribution on X (see also Lemma 8). However, the Proximal Sampler is an idealized algorithm, since it requires sampling from a regularized distribution in each iteration (see Section A.1 for a review). The work [Chen et al., 2022] shows how to implement the Proximal Sampler as a concrete algorithm using rejection sampling when the target distribution π ∝ exp(−f) has full support on R^n and f is smooth (has a bounded second derivative). Subsequent works [Liang and Chen, 2022, 2023, Fan et al., 2023] show how to obtain an improved complexity using approximate rejection sampling under weaker smoothness assumptions, such as Hölder continuity of ∇f or Lipschitzness of f. However, for our setting of the uniform target distribution π ∝ 1_X, the potential function f is not even continuous: f(x) = 0 if x ∈ X, and f(x) = +∞ if x ∉ X; therefore, none of the results above apply. When π ∝ 1_X and X is convex, the work of Kook et al. [2024] shows how to implement the Proximal Sampler via rejection sampling with a threshold on the number of trials, resulting in the In-and-Out algorithm, which they analyze and prove has Õ(n²) complexity from a warm start in an isotropic convex body. However, when X is not convex (or star-shaped with a large convex core), there is currently no such algorithmic guarantee; we address this gap in this paper.

Figure 2: A body with (a) poor isoperimetry (large Poincaré constant; a dumbbell-shaped body), and (b) good isoperimetry (small Poincaré constant) but large volume growth rate (a cylinder).

Volume growth condition. For sampling from X ⊂ R^n, it turns out that isoperimetry (Poincaré inequality) alone is not sufficient, and we need another condition. To illustrate, consider a cylinder in R^n of fixed axis length (say 1) and base radius ϵ > 0 (see Figure 2(b)). The uniform distribution on this cylinder has good isoperimetry, i.e., its Poincaré constant is O(1). However, as the radius shrinks (ϵ → 0), any known Markov chain would need more membership queries to go from one end of the cylinder to the other.
Intuitively, any process that only has a local picture of the body to be sampled either makes very small steps or has a large probability of stepping out. We can capture this consideration via the volume growth rate of X (see Section 2.3 for a precise definition), which is how fast the volume of X ⊕ B_t grows as a function of t, where B_t ≡ B(0, t) is the ℓ2-ball of radius t > 0 and ⊕ is the Minkowski sum between sets. When X is convex, the volume growth rate can be controlled by the external isoperimetry of X, which is the ratio of the surface area to the volume of X (see Lemma 1); the external isoperimetry can in turn be controlled by the reciprocal of the radius of the largest ℓ2-ball contained in X (the "inner radius" of X). In the cylinder example above (Figure 2(b)), which is convex, the volume growth rate of X scales as 1/ϵ, which tends to +∞ as ϵ → 0, suggesting that sampling from X in this case is difficult. This leads us to a more precise question:

Can the complexity of sampling from X ⊂ R^n be bounded by a polynomial in the dimension n, the Poincaré constant of π ∝ 1_X, and the volume growth rate of X?

In this paper, we answer this question affirmatively. We analyze the same In-and-Out algorithm studied by Kook et al. [2024], which only requires a membership oracle for X. Whereas Kook et al. [2024] proved their result assuming X is convex, we prove our result without assuming convexity, only isoperimetry and volume growth as stated above. We know that isoperimetry holds for a large class of nonconvex sets. Similarly, we show that the volume growth condition is preserved under operations including union and set subtraction, and so it captures a large class of nonconvex sets.
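The 1/ϵ scaling in the cylinder example above can be checked in closed form in a minimal two-dimensional analogue of our own choosing: a thin slab X = [−ϵ, ϵ] × [0, 1], using the planar Steiner formula Vol(X_t) = Vol(X) + t · perim(X) + πt², which is exact for convex sets. This is an illustrative sketch, not a computation from the paper.

```python
import math

def growth_ratio_thin_slab(eps, t):
    """Vol(X_t)/Vol(X) for the 2-D convex body X = [-eps, eps] x [0, 1],
    via the exact Steiner formula Area(X + B_t) = Area + t*perimeter + pi*t^2."""
    area = 2.0 * eps * 1.0
    perim = 2.0 * (2.0 * eps) + 2.0 * 1.0
    return (area + t * perim + math.pi * t * t) / area

# As the slab gets thinner (eps -> 0), the growth ratio at a fixed scale t
# blows up like t/eps, mirroring the 1/eps volume growth rate of the cylinder.
print(growth_ratio_thin_slab(0.5, 0.1))    # moderate growth for a "fat" body
print(growth_ratio_thin_slab(0.001, 0.1))  # much larger growth for a thin body
```

The dominant term in the ratio is t · perim/area ≈ t/ϵ, which is exactly the 1/ϵ blow-up that the volume growth condition is designed to exclude.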
Therefore, our main result shows that a large class of nonconvex sets can be sampled in polynomial time, substantially generalizing previous results for convex and star-shaped sets.

1.1 Algorithm: In-and-Out

The In-and-Out algorithm [Kook et al., 2024] is the following iteration.

Algorithm 1: In-and-Out for sampling from π ∝ 1_X.
Input: initial point x_0 ∈ X, step size h > 0, number of steps T ∈ N, threshold N ∈ N.
Output: x_T ∼ ρ_T.
1: for i = 0, ..., T − 1 do
2:   Sample y_i ∼ N(x_i, h I_n).
3:   Repeat: sample x_{i+1} ∼ N(y_i, h I_n) until x_{i+1} ∈ X. If #attempts ≥ N, declare Failure.
4: end for

The algorithm has three parameters: (1) the step size h > 0, (2) the number of steps (or outer iterations) T ∈ N, and (3) the threshold N ∈ N on the number of trials in the rejection sampling (Step 3) in each iteration. An outer iteration of the In-and-Out algorithm is one iteration (corresponding to one value of i ∈ {0, 1, ..., T − 1}) of Steps 2–3 in Algorithm 1. We call Step 2 the forward step, and Step 3 the backward step.

We note the In-and-Out algorithm is an implementation of the idealized Proximal Sampler scheme (described in Section A.1) via rejection sampling with a threshold N on the number of trials in Step 3. Without the threshold N (equivalently, when N = ∞), the In-and-Out algorithm becomes exactly the Proximal Sampler scheme. However, as explained in [Kook et al., 2024, Section 3.3], without the threshold N, the expected number of trials in the rejection sampling in Step 3 (when x_i ∼ π) is infinite. Therefore, following [Kook et al., 2024], we introduce the threshold N < ∞ to ensure the expected number of trials in Step 3 is finite. With the threshold N < ∞, the In-and-Out algorithm can fail when the rejection sampling in Step 3 exceeds N trials in any iteration.
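As a concrete illustration, here is a minimal Python sketch of Algorithm 1 with the membership oracle abstracted as a boolean function. The unit-ball oracle and the parameter values in the usage example are illustrative choices of ours, not the tuned parameters from Theorem 1.

```python
import math
import random

def in_and_out(membership, x0, h, T, N, rng):
    """Sketch of In-and-Out (Algorithm 1): a forward Gaussian step, then a
    backward step implemented by rejection sampling with threshold N."""
    x = list(x0)
    for _ in range(T):
        # Forward step: y_i ~ N(x_i, h I_n).
        y = [xi + math.sqrt(h) * rng.gauss(0.0, 1.0) for xi in x]
        # Backward step: resample x_{i+1} ~ N(y_i, h I_n) until it lands in X.
        for _ in range(N):
            cand = [yi + math.sqrt(h) * rng.gauss(0.0, 1.0) for yi in y]
            if membership(cand):
                x = cand
                break
        else:
            raise RuntimeError("Failure: backward step exceeded N trials")
    return x

# Illustrative run: X = unit ball in R^3, started at the center.
def in_ball(p):
    return sum(c * c for c in p) <= 1.0

sample = in_and_out(in_ball, [0.0, 0.0, 0.0], h=0.01, T=50, N=1000, rng=random.Random(0))
assert in_ball(sample)  # on success, the output is supported on X
```

Note that each backward-step trial costs one membership query, matching the complexity accounting in the paper.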
When X satisfies a volume growth condition, we can control the failure probability of the In-and-Out algorithm; see Lemma 7 in Section 3.2. When it succeeds for T iterations, In-and-Out returns a point x_T ∼ ρ_T, a random variable with a probability distribution ρ_T supported on X, i.e., x_T ∈ X almost surely. In this case, the In-and-Out algorithm has a convergence guarantee (inherited from the Proximal Sampler, with a small bias introduced by the threshold N) in Rényi divergence between the output distribution ρ_T and the target uniform distribution π ∝ 1_X, assuming that π satisfies a Poincaré inequality; see Lemma 5 in Section 2.5.

1.2 Main result: Iteration complexity of In-and-Out with a warm start

Our main result is the following guarantee for In-and-Out for sampling from π ∝ 1_X under isoperimetry and volume growth. Below, we assume β ≥ 1/n without loss of generality; if β is smaller than 1/n, we can always replace it by 1/n. We provide the proof of Theorem 1 in Section 3.3.

Theorem 1. Let π ∝ 1_X, where X ⊂ R^n is a compact body, n ≥ 2. Assume:

1. X satisfies an (α, β)-volume growth condition for some α ∈ [1, ∞) and β ∈ [1/n, ∞);
2. π satisfies a C_PI-Poincaré inequality for some C_PI ∈ [1, ∞).

Let q ∈ [2, ∞), ε ∈ (0, 1/2), and M ∈ [1, ∞) be arbitrary. Suppose x_0 ∼ ρ_0 is M-warm with respect to π. Then with a suitable choice of parameters (see (3) for T, (5) for h, (6) for N), with probability at least 1 − ε, In-and-Out succeeds for T iterations and outputs x_T ∼ ρ_T satisfying R_q(ρ_T ∥ π) ≤ ε, with the total number of trials over all T iterations bounded in expectation by

E[# of trials in In-and-Out] = Õ( q C_PI α β² M n³ log⁴(1/ε) ),

where each trial requires one query to the membership oracle 1_X and O(n) arithmetic operations.
The Õ notation above hides logarithmic dependencies on the leading parameters q, n, C_PI, α, β, M, and log(1/ε). See the proof of Theorem 1 in Section 3.3 for a precise bound (in equation (13)) on the expected total number of trials of In-and-Out. We also note that the output guarantee in Rényi divergence R_q of order q ≥ 2 implies guarantees in KL divergence and total variation distance, since 2 TV² ≤ KL ≤ R_2 ≤ R_q (see Section 2.4).

We recall that convex bodies and star-shaped bodies have bounded Poincaré constants (see Section 2.2), and they also have bounded volume growth constants (see Section 2.3). Thus, our result recovers polynomial-time sampling for convex bodies, albeit at a higher iteration complexity of Õ(n³) compared to Õ(n²) from [Kook et al., 2024]. Our result provides polynomial-time sampling (with polynomial dependence on log ε^{-1}) for star-shaped bodies with a guarantee in Rényi divergence; this strengthens the prior result of [Chandrasekaran et al., 2010], which has a polynomial dependence on ε^{-1} and only provides guarantees in total variation distance. Perhaps most importantly, our result vastly generalizes the scope of polynomial-time uniform sampling to a large class of nonconvex bodies: those satisfying the volume growth condition and whose uniform distributions satisfy a Poincaré inequality.

We remark on the differences compared to the convex case from [Kook et al., 2024]. A key part of the analysis of In-and-Out is showing that the forward step does not step too far away from X, and that the backward step has a good chance of landing back in X. In previous work [Kook et al., 2024], both these steps were analyzed by crucially assuming convexity; see for example [Kook et al., 2024, Lemma 25].
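The chain 2 TV² ≤ KL ≤ R_2 ≤ R_q is easy to sanity-check numerically. The sketch below evaluates the discrete analogues of these divergences on a made-up two-point example; the distributions are ours, chosen purely for illustration.

```python
import math

def renyi(q, rho, pi):
    # R_q(rho || pi) = 1/(q-1) * log E_pi[(rho/pi)^q], discrete version.
    return math.log(sum(p * (r / p) ** q for r, p in zip(rho, pi))) / (q - 1)

def kl(rho, pi):
    # KL(rho || pi) = E_rho[log(rho/pi)].
    return sum(r * math.log(r / p) for r, p in zip(rho, pi) if r > 0)

def tv(rho, pi):
    # TV(rho, pi) = sup_A |rho(A) - pi(A)| = (1/2) * l1 distance.
    return 0.5 * sum(abs(r - p) for r, p in zip(rho, pi))

rho, pi = [0.7, 0.3], [0.5, 0.5]
assert 2 * tv(rho, pi) ** 2 <= kl(rho, pi) <= renyi(2, rho, pi) <= renyi(3, rho, pi)
```

By Pinsker's inequality and the monotonicity of q ↦ R_q, the chain holds for any pair ρ ≪ π; the assertion above merely checks one instance.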
In our work, we analyze these steps without assuming convexity, only assuming that X satisfies a volume growth condition (Definition 1). We provide further discussion of the differences with the convex setting in a remark following Lemma 6.

2 Preliminaries

Notation and definitions. Let R^n be the Euclidean space of dimension n ≥ 2. Let ∥x∥ = (Σ_{i=1}^n x_i²)^{1/2} be the ℓ2-norm of a vector x = (x_1, ..., x_n) ∈ R^n. Let I_n denote the n × n identity matrix. Let N(µ, Σ) denote the Gaussian distribution with mean vector µ ∈ R^n and covariance matrix Σ ∈ R^{n×n}; in particular, N(0, I_n) is the standard Gaussian distribution on R^n.

Given a set X ⊆ R^n, its interior X° is the set of points x ∈ X such that a sufficiently small ball around x is still contained in X. Recall X ⊆ R^n is open if X = X°, and X ⊆ R^n is closed if its complement X^∁ = R^n \ X is open. Recall a body X ⊆ R^n is a set that is closed and has a non-empty interior: X° ≠ ∅. Recall a set X ⊂ R^n is compact if and only if it is closed and bounded. Given a closed set X ⊆ R^n, recall the boundary ∂X of X is the set of points in X that are not in the interior: ∂X = X \ X°. Recall the volume of a body X ⊆ R^n is the integral over the body, Vol(X) = ∫_X dx, where dx is the Lebesgue measure on R^n. The surface area of a body X ⊂ R^n is the integral over the boundary ∂X: area(∂X) = ∫_{∂X} dH^{n−1}(x), where dH^{n−1} is the (n−1)-dimensional Hausdorff measure.

Given distributions ρ, π on R^n, recall we say ρ is absolutely continuous with respect to π, denoted ρ ≪ π, if for any measurable set A ⊆ R^n, π(A) = 0 implies ρ(A) = 0.
If a probability distribution π is absolutely continuous with respect to the Lebesgue measure dx, then we can write π in terms of its probability density function (Radon-Nikodym derivative), which we also denote by π : R^n → [0, ∞), with ∫_{R^n} π(x) dx = 1. If ρ and π are both absolutely continuous with respect to the Lebesgue measure and represented by their density functions, then ρ ≪ π means that π(x) = 0 for some x ∈ R^n implies ρ(x) = 0.

2.1 Geometry of the support set

Throughout, let X ⊂ R^n be a compact body; this means X is closed, bounded, and has a non-empty interior, so it has a finite volume Vol(X) ∈ (0, +∞). Note we do not assume X is convex. We assume X has a sufficiently regular surface, so X has a finite surface area area(∂X) ∈ (0, +∞). We assume we have access to a membership oracle for X, which is a function 1_X : R^n → {0, 1} given by 1_X(x) = 1 if x ∈ X, and 1_X(x) = 0 if x ∉ X. We measure the complexity of our algorithm by the number of queries to the membership oracle 1_X.

Our goal is to sample from the uniform probability distribution π ∝ 1_X supported on X. Explicitly, π has probability density function π : R^n → [0, ∞) given by

π(x) = (1 / Vol(X)) · 1_X(x)   for all x ∈ R^n.

We also define the following notions for convenience. Given a closed set Y ⊆ R^n, we define the distance function dist(·, Y) : R^n → [0, ∞) of x ∈ R^n to Y by

dist(x, Y) = min_{y ∈ Y} ∥x − y∥.

Note that by definition, dist(x, Y) = 0 if and only if x ∈ Y. For t ≥ 0, let B_t ≡ B(0, t) be the ℓ2-ball of radius t centered at 0 ∈ R^n:

B_t = {x ∈ R^n : ∥x∥ ≤ t} = {x ∈ R^n : dist(x, {0}) ≤ t}.

Given X ⊂ R^n, for t ≥ 0, we define the enlarged set X_t ⊂ R^n by

X_t = X ⊕ B_t = {x ∈ R^n : dist(x, X) ≤ t},

where ⊕ is the Minkowski sum between sets.
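For bodies built from simple primitives, the distance function, and hence membership in the enlarged set X_t, is directly computable. The sketch below does this for a hypothetical nonconvex X given as a union of Euclidean balls; the centers and radii are made-up illustrations of ours.

```python
import math

def dist_to_union_of_balls(x, balls):
    """dist(x, X) for X the union of balls (center, radius); equals 0 inside X."""
    return max(0.0, min(math.dist(x, c) - r for c, r in balls))

def in_enlarged_set(x, balls, t):
    """Membership oracle for X_t = X ⊕ B_t = {x : dist(x, X) <= t}."""
    return dist_to_union_of_balls(x, balls) <= t

# A nonconvex body: two disjoint unit balls in R^2.
X = [((0.0, 0.0), 1.0), ((3.0, 0.0), 1.0)]
assert dist_to_union_of_balls((1.5, 0.0), X) == 0.5  # midpoint lies outside X
assert in_enlarged_set((1.5, 0.0), X, t=0.5)          # but inside X_{1/2}
assert not in_enlarged_set((1.5, 0.0), X, t=0.4)
```

The same pattern extends to any primitive with a computable distance function, since dist(x, ∪_i X_i) = min_i dist(x, X_i).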
We note the following relation, which we can also take as the definition of surface area:

area(∂X) = lim_{t → 0} (1/t) (Vol(X_t) − Vol(X)).

Given a compact body X ⊂ R^n, we define the outer isoperimetry of X to be the ratio of the surface area to the volume:

ξ(X) = area(∂X) / Vol(X) ∈ (0, ∞).

2.2 Isoperimetry: Poincaré inequality

Recall we say a probability distribution π on R^n satisfies a Poincaré inequality (PI) with constant C_PI(π) ∈ (0, ∞) if for all smooth functions ϕ : R^n → R, the following holds:

Var_π(ϕ) ≤ C_PI(π) · E_π[∥∇ϕ∥²],

where Var_π(ϕ) = E_π[(ϕ − E_π[ϕ])²] is the variance of ϕ under π.

We recall that if π is logconcave (i.e., −log π is a convex function on R^n), then it satisfies a Poincaré inequality. The Kannan-Lovász-Simonovits (KLS) conjecture [Kannan et al., 1995] states that if π is logconcave, then C_PI(π) = O(∥Σ∥_op), where ∥Σ∥_op is the largest eigenvalue of the covariance matrix Σ = Cov_π(X) of X ∼ π. The current best result [Klartag, 2023] is C_PI(π) = O(∥Σ∥_op · log n).

We recall that the Poincaré inequality is stable under some types of perturbations of the distribution (including bounded perturbations of the density, and pushforward by a Lipschitz mapping), while these perturbations easily destroy logconcavity. Therefore, the class of distributions satisfying a Poincaré inequality is larger than the class of logconcave distributions.

In this work, we assume that the target uniform distribution π ∝ 1_X satisfies a Poincaré inequality with some constant C_PI ≡ C_PI(π) ∈ (0, +∞). Note that while X may be nonconvex, this assumption implies X cannot be too bad; e.g., X must be connected and cannot have a "bottleneck". If X is a convex body with diameter D > 0, then C_PI(π) = O(D²). We also recall from [Chandrasekaran et al., 2010] that if X is a star-shaped body with diameter D > 0 and the volume of its convex core is a fraction γ ∈ (0, 1] of the total volume, then C_PI(π) = O(D²/γ²).

2.3 Volume growth condition

We introduce the following key definition on the volume growth of the enlarged set X_t = X ⊕ B_t, where recall B_t ≡ B(0, t) is the ℓ2-ball of radius t ≥ 0.

Definition 1. We say a compact body X ⊂ R^n satisfies an (α, β)-volume growth condition for some constants α ∈ [1, ∞) and β ∈ (0, ∞) if for all t > 0:

Vol(X_t) / Vol(X) ≤ α · (1 + tβ)^n.   (1)

We recall that convex sets satisfy the volume growth condition where the constant is determined by the outer isoperimetry; see Lemma 1. We can show that a star-shaped body satisfies the volume growth condition with a constant inherited from its convex core; see Lemma 2. Moreover, we show that the volume growth condition is preserved under some operations, including set union and set exclusion; see Lemma 3 and Lemma 4. Therefore, the volume growth condition captures convex bodies and a wide class of nonconvex sets. Later, we will see that the volume growth condition allows us to control the failure probability of the In-and-Out algorithm (Lemma 6).

Remark 1. Any compact body X trivially satisfies the volume growth condition, but with a naive estimate of α that may be exponentially large, so that it is not useful for our algorithmic purpose. Concretely, since X is a compact set with nonempty interior, it contains a ball of radius r and is contained in a larger ball of radius R, so B(x, r) ⊆ X ⊆ B(x, R) for some x ∈ X and R ≥ r > 0. Then X_t ⊆ B(x, R + t) for all t ≥ 0, and we can estimate:

Vol(X_t) / Vol(X) ≤ Vol(B(x, R + t)) / Vol(B(x, r)) = (R/r)^n · (1 + t/R)^n.

This shows X satisfies the volume growth condition with α = (R/r)^n and β = 1/R.
If we have additional structure on X, such as being convex or star-shaped, or being a union or difference of sets satisfying the volume growth condition, then we may obtain better estimates on α and β; see Lemmas 1, 2, 3, and 4 below.

Remark 2. The volume growth property has been studied in prior work. Notably, the Brunn-Minkowski theorem implies that convex bodies satisfy the volume growth condition with α = 1; see Lemma 1. Moreover, [Fradelizi and Marsiglietti, 2014, Theorem 3.7] show that if X ⊂ R^n is compact with a regularity condition,¹ then t ↦ Vol(X_t) is eventually (1/n)-concave, i.e., there exists T_0 ∈ [0, ∞) such that t ↦ Vol(X_t)^{1/n} is a concave function for all t ≥ T_0. This implies Vol(X_t) ≤ Vol(X_{T_0}) (1 + (t − T_0)β)^n for all t ≥ T_0, where β = (1/n) ξ(X_{T_0}); this shows the enlarged body X_{T_0} satisfies the volume growth condition with α = 1.

¹ Namely, ϵ ↦ Vol(ϵX ⊕ B_1) is twice-differentiable in a neighborhood of 0, with a continuous second derivative at 0.

2.3.1 Volume growth for convex bodies

When X is convex, it satisfies the volume growth condition with a constant bounded by the outer isoperimetry ratio ξ(X) = area(∂X)/Vol(X); this is a consequence of the Brunn-Minkowski theorem. We provide the proof of Lemma 1 in Section C.1.

Lemma 1. If X ⊂ R^n is a convex body, then it satisfies the (1, (1/n) ξ(X))-volume growth condition.

We also recall that if X is convex and contains a ball of radius r > 0, then we can bound (1/n) ξ(X) ≤ 1/r; see e.g. [Belkin et al., 2013, Lemma 2.1].

2.3.2 Volume growth for star-shaped bodies

We recall from [Chandrasekaran et al., 2010] that a body X ⊂ R^n is star-shaped if it is a union of convex sets, all of which have a common (necessarily convex) intersection called the core of X. We can bound the volume growth constant of star-shaped bodies similarly to convex bodies. We provide the proof of Lemma 2 in Section C.2.

Lemma 2. Let X ⊂ R^n be a star-shaped body, so X = ∪_{i∈I} X_i, where X_i is a convex body for each i ∈ I in a finite index set I, and they share a common intersection Y = X_i ∩ X_j ≠ ∅ for all i ≠ j. Assume Y contains a ball of radius r > 0 centered at 0, i.e., B_r ⊆ Y. Then X satisfies the (1, 1/r)-volume growth condition.

2.3.3 Volume growth under union

We show that the volume growth condition is preserved under set union. We provide the proof of Lemma 3 in Section C.3.

Lemma 3. Suppose X_i ⊂ R^n is a compact body that satisfies the (α_i, β_i)-volume growth condition for some α_i ∈ [1, ∞) and β_i ∈ (0, ∞), for each i ∈ I in some finite index set I. Let q_I be the probability distribution supported on I with density q_I(i) = Vol(X_i) / Σ_{j∈I} Vol(X_j), for i ∈ I. Then the union X = ∪_{i∈I} X_i satisfies the (A, B)-volume growth condition, where:

A = (max_{i∈I} α_i) · (Σ_{i∈I} Vol(X_i)) / Vol(X),
B = (E_{I∼q_I}[β_I^n])^{1/n} = ( Σ_{i∈I} Vol(X_i) · β_i^n / Σ_{j∈I} Vol(X_j) )^{1/n} ≤ max_{i∈I} β_i.

2.3.4 Volume growth under set exclusion

We also show that the volume growth condition is preserved under set exclusion. We provide the proof of Lemma 4 in Section C.4.

Lemma 4. Let Y ⊂ R^n be a compact body that satisfies the (α, β)-volume growth condition for some α ∈ [1, ∞) and β ∈ (0, ∞). Let X = Y \ Z, where Z ⊂ Y is an open set with Vol(Z) < Vol(Y), and assume X is compact. Then X satisfies the (A, β)-volume growth condition, where A = α · Vol(Y)/Vol(X).

2.4 Rényi divergence and other statistical divergences

Let ρ and π be probability distributions on R^n, absolutely continuous with respect to the Lebesgue measure, so we can represent them by their density functions. Assume ρ ≪ π.
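The union constants in Lemma 3 above can be verified exactly in one dimension, where bodies are unions of intervals and Vol(X_t) is computable by enlarging each interval by t and merging overlaps. The two intervals below are an illustrative example of ours; each interval of half-length r_i is a 1-D convex body satisfying the (1, 1/r_i)-volume growth condition.

```python
def vol_enlarged(intervals, t):
    """Exact Vol(X_t) for X a union of 1-D intervals: enlarge each by t, merge."""
    enlarged = sorted((a - t, b + t) for a, b in intervals)
    total, (lo, hi) = 0.0, enlarged[0]
    for a, b in enlarged[1:]:
        if a <= hi:
            hi = max(hi, b)  # overlapping after enlargement: merge
        else:
            total += hi - lo
            lo, hi = a, b
    return total + (hi - lo)

# X = [-1, 1] ∪ [8, 12]: half-lengths r = (1, 2), so (alpha_i, beta_i) = (1, 1/r_i).
intervals = [(-1.0, 1.0), (8.0, 12.0)]
vols = [b - a for a, b in intervals]        # (2, 4); disjoint, so Vol(X) = 6
vol_x = sum(vols)
A = 1.0 * sum(vols) / vol_x                  # max_i alpha_i * (sum_i Vol_i)/Vol(X) = 1
B = (vols[0] * 1.0 + vols[1] * 0.5) / vol_x  # volume-weighted mean of beta_i (n = 1)

for t in [0.1, 1.0, 3.5, 5.0, 20.0]:
    # Lemma 3's bound Vol(X_t)/Vol(X) <= A * (1 + t*B)^n with n = 1.
    assert vol_enlarged(intervals, t) / vol_x <= A * (1 + t * B) + 1e-12
```

Before the enlarged intervals merge, the bound holds with equality; after they merge, the union grows strictly more slowly than the bound, which is where the slack in Lemma 3 comes from.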
The R ´ enyi diver genc e of order q ∈ (1 , ∞ ) b etw een ρ and π is: R q ( ρ ∥ π ) = 1 q − 1 log E π h ρ π q i . Recall R q ( ρ ∥ π ) ≥ 0 for all ρ and π , and R q ( ρ ∥ π ) = 0 if and only if ρ = π . W e also recall the map q 7→ R q ( ρ ∥ π ) is increasing. The limit q → 1 is the Kul lb ack-L eibler (KL) diver genc e or relativ e en tropy: lim q → 1 R q ( ρ ∥ π ) = KL ( ρ ∥ π ) = E ρ h log ρ π i . Therefore, KL ( ρ ∥ π ) ≤ R q ( ρ ∥ π ) for all q > 1. The total variation distanc e b et ween ρ and π is: TV ( ρ, π ) = sup A ⊆ R n | ρ ( A ) − π ( A ) | . Recall b y Pinsker’s ine quality , w e hav e 2 TV ( ρ, π ) 2 ≤ KL ( ρ ∥ π ) for an y ρ ≪ π . Supp ose ρ and π b oth hav e supp ort X ⊂ R n , and ρ ≪ π . W e say that ρ is M -w arm with resp ect to π for some M ∈ [1 , ∞ ) if sup x ∈X ρ ( x ) π ( x ) ≤ M . 12 Note that if ρ is M -warm with resp ect to π , then KL ( ρ ∥ π ) ≤ log M and R q ( ρ ∥ π ) ≤ log M for all q ∈ (1 , ∞ ); in particular, R ∞ ( ρ ∥ π ) := lim q →∞ R q ( ρ ∥ π ) ≤ log M . 2.5 Review: Outer conv ergence of In-and-Out under isop erimetry W e recall that when it succeeds, In-and-Out has a rapid mixing guarantee to π in R´ en yi div er- gence under Poincar ´ e inequality; see Lemma 5 b elow. This follows from com bining the conv ergence guaran tee of the ideal Pr oximal Sampler algorithm under Poincar ´ e inequality (see Lemma 8 in Section A.1 for a review), together with a con trol on the bias of the In-and-Out algorithm condi- tioned on the success even t (which w as sho wn in [ Ko ok et al. , 2024 , Lemma 16]). F or completeness, w e provide the proof of Lemma 5 in Section B.1 . Lemma 5. Assume π ∝ 1 X satisfies a Poinc ar´ e ine quality with c onstant C PI ∈ [1 , ∞ ) . L et q ∈ [2 , ∞ ) and h > 0 . L et ρ 0 b e a pr ob ability distribution on X with ρ 0 ≪ π and R q ( ρ 0 ∥ π ) < ∞ . L et T 0 := max ( 0 , & q ( R q ( ρ 0 ∥ π ) − 1) 2 log 1+ h C PI ') , and let T ≥ T 0 b e the desir e d numb er of iter ations. 
Let Succ be the success event that In-and-Out (Algorithm 1) runs without failure for $T$ iterations. Assume $\Pr(\mathrm{Succ}) \ge 1 - \eta$ for some $\eta \in [0, \frac12]$. Then, conditioned on Succ, the output $x_T \sim \rho_T$ of In-and-Out satisfies:
$$R_q(\rho_T \| \pi) \le \left(1 + \frac{h}{C_{\mathsf{PI}}}\right)^{-\frac{1}{q}(T - T_0)} + 4\eta.$$

Lemma 5 gives the number of outer iterations of the In-and-Out algorithm needed to achieve a desired error threshold. Then it remains to control the complexity of implementing each outer iteration via rejection sampling; we can do this via the notion of the volume growth condition.

3 Analysis of the In-and-Out algorithm

3.1 Bound on stationary failure probability under volume growth condition

Let $X \sim \pi \propto 1_X$. For $h > 0$, define the random variable $Y_h := X + \sqrt{h}\, Z$ where $Z \sim \mathcal{N}(0, I_n)$ is independent, so $Y_h \sim \pi_h$ where $\pi_h := \pi * \mathcal{N}(0, h I_n)$. In this section, we study the following "stationary failure probability" at scale $r > 0$ and $h > 0$:
$$\Pr(Y_h \notin X_r) = \pi_h\left(X_r^\complement\right).$$
(Here "stationary" means that $X \sim \pi$, so $Y_h \sim \pi_h$.) For $m \in \mathbb{N}$, we define the Gaussian tail probability $Q_m : [0, \infty) \to [0, 1]$ by:
$$Q_m(r) := \Pr(\|Z\| \ge r)$$
where $Z \sim \mathcal{N}(0, I_m)$ is a standard Gaussian random variable in $\mathbb{R}^m$.

We show the following bound on the stationary failure probability under the volume growth condition. This will allow us to show that a typical step stays "close" to $X$. We provide the proof of Lemma 6 in Section B.2.

Lemma 6. Assume $X \subset \mathbb{R}^n$ satisfies the $(\alpha, \beta)$-volume growth condition for some $\alpha \in [1, \infty)$ and $\beta \in (0, \infty)$. If $0 < h \le \frac{1}{2 n^3 \beta^2}$, then for all $r \ge 0$:
$$\pi_h\left(X_r^\complement\right) \le \alpha (n+1) \cdot Q_{2n}\left(\frac{r}{\sqrt{h}}\right). \tag{2}$$

Comparison with the convex case: Let us compare this result with the convex case from Kook et al. [2024]. When $X$ is convex and contains a ball of radius 1, [Kook et al., 2024, Lemma 25] shows that:
$$\pi_h\left(X_r^\complement\right) \le \exp\left(\frac{h n^2}{2}\right) \cdot Q_1\left(\frac{r - hn}{\sqrt{h}}\right) \le \exp\left(-\frac{r^2}{2h} + rn\right);$$
this involves the one-dimensional Gaussian tail $Q_1$, and thus has an exponential decay for all $r > 0$. When $X$ is nonconvex, our bound (2) in Lemma 6 involves the $2n$-dimensional Gaussian tail $Q_{2n}\left(\frac{r}{\sqrt{h}}\right)$, which only provides an exponential decay after $r > \sqrt{hn}$. This difference results in a smaller step size $h = \tilde{O}(n^{-3})$ in the nonconvex case, compared to $h = \tilde{O}(n^{-2})$ in the convex case [Kook et al., 2024]; this manifests as the $\tilde{O}(n^3)$ iteration complexity of In-and-Out in the nonconvex case (Theorem 1), compared to $\tilde{O}(n^2)$ in the convex case [Kook et al., 2024].

Let us illustrate where the difference above comes from. A key step of the proof of Lemma 6 is to consider $y \in X_r^\complement$, which lies at distance $u(y) > 0$ from $X$, and we want to bound the probability $\Pr(y + \sqrt{h}\, Z \in X)$ where $Z \sim \mathcal{N}(0, I_n)$.

• When $X$ is convex, there is a halfspace $H_y$ containing $y$ that is contained in $X^\complement$ (see Figure 3(a)), so we can bound $\Pr(y + \sqrt{h}\, Z \in X) \le \Pr(y + \sqrt{h}\, Z \in H_y^\complement)$, which is equal to the one-dimensional Gaussian tail probability $Q_1\left(\frac{u(y)}{\sqrt{h}}\right)$.

• When $X$ is nonconvex, such a halfspace containment may not hold; however, a weaker ball containment still holds. Note the ball of radius $u(y) = \mathrm{dist}(y, X)$ centered at $y$ does not intersect $X$ (see Figure 3(b)): $B(y, u(y)) \subseteq X^\complement$, so $X \subseteq B(y, u(y))^\complement$. Then we can bound $\Pr(y + \sqrt{h}\, Z \in X) \le \Pr\left(y + \sqrt{h}\, Z \in B(y, u(y))^\complement\right)$, which is equal to the $n$-dimensional Gaussian tail probability $Q_n\left(\frac{u(y)}{\sqrt{h}}\right)$. Subsequent steps of the proof result in a bound involving the $2n$-dimensional Gaussian tail probability $Q_{2n}$. See Section B.2 for details.

(a) $X$ convex: there is a halfspace containing $y$ that does not intersect $X$. (b) $X$ nonconvex: there is a ball containing $y$ that does not intersect $X$.
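The gap described in the comparison above can be checked numerically. For even $m$, $Q_m(t) = \Pr(\chi^2_m \ge t^2)$ has the closed form $e^{-t^2/2}\sum_{j=0}^{m/2-1} (t^2/2)^j/j!$, while $Q_1(t) = \operatorname{erfc}(t/\sqrt{2})$. The following sketch (our own numerical illustration, not code from the paper) evaluates both at the threshold $t = \sqrt{2n}$, i.e., $r = \sqrt{2hn}$, where the $2n$-dimensional tail is still of constant order but the one-dimensional tail is already negligible:

```python
import math

def Q_even(m, t):
    """Exact Q_m(t) = Pr(||Z|| >= t) for Z ~ N(0, I_m) with m even,
    via the closed-form chi-squared survival function with m degrees of freedom."""
    x = t * t / 2.0
    return math.exp(-x) * sum(x**j / math.factorial(j) for j in range(m // 2))

def Q_1(t):
    """Q_1(t) = Pr(|Z| >= t) for a scalar standard Gaussian Z."""
    return math.erfc(t / math.sqrt(2.0))

n = 10
t = math.sqrt(2 * n)   # the onset threshold: r = sqrt(2 h n) in units of sqrt(h)
```

At this threshold $Q_{2n}(t)$ is still close to $\tfrac12$ while $Q_1(t)$ is below $10^{-4}$: the $2n$-dimensional tail only begins its exponential decay once $r$ exceeds roughly $\sqrt{2hn}$, whereas the one-dimensional tail decays for all $r > 0$.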
Figure 3: Difference of the containment guarantee in the (a) convex case and (b) nonconvex case.

3.2 Per-iteration success probability and expected number of trials

Recall that an iteration of In-and-Out is one forward step and one backward step, which involves up to $N$ rejection sampling trials. We have the following guarantee on the success probability and the expected number of trials of the rejection sampling step in the In-and-Out algorithm. We provide the proof of Lemma 7 in Section B.5.

Lemma 7. Let $\pi \propto 1_X$ where $X \subset \mathbb{R}^n$ satisfies the $(\alpha, \beta)$-volume growth condition for some $\alpha \in [1, \infty)$ and $\beta \in (0, \infty)$. Let $x_0 \sim \rho_0$ where $\rho_0$ is $M$-warm with respect to $\pi$ for some $M \in [1, \infty)$. Assume:
$$h \le \frac{1}{2 \beta^2 n^3 \cdot \max\left\{1, \frac{1}{n} \log((n+1)\alpha S)\right\}}, \qquad N = 8 \alpha S \log S$$
where $S \ge 3$ is arbitrary. Assume In-and-Out succeeds for $m \ge 0$ iterations (i.e., conditioned on the event "reach iteration $m$"), and is currently at $x_m \sim \rho_m$ which is $M$-warm with respect to $\pi$. Then the following hold:

1. The next iteration (to compute $x_{m+1}$) succeeds with probability:
$$\Pr_{x_m \sim \rho_m}(\text{next iteration succeeds} \mid \text{reach iteration } m) \ge 1 - \frac{3M}{S}.$$

2. The expected number of trials in the next iteration is
$$\mathbb{E}_{x_m \sim \rho_m}[\#\text{trials until success in next iteration} \mid \text{reach iteration } m] \le 16 M \alpha \log S.$$

3. Upon accepting $x_{m+1} \sim \rho_{m+1}$, the next distribution $\rho_{m+1}$ is $M$-warm with respect to $\pi$.

The lemma above allows us to induct over the iterations and bound the failure probability of In-and-Out over $T$ iterations.

3.3 Proof of Theorem 1

We are now ready to prove Theorem 1.

Proof of Theorem 1. Given the target error $\varepsilon \in (0, \frac12)$, we define $\varepsilon' := \frac12 \varepsilon$ and $\eta := \frac18 \varepsilon$, so $\varepsilon' < \frac14$ and $\eta < \frac{1}{16}$. By assumption, $x_0 \sim \rho_0$ is $M$-warm with respect to $\pi \propto 1_X$. In particular, $R_q(\rho_0\|\pi) \le \log M$.
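(As an aside, the warmness bound just invoked, together with the ordering $\mathrm{KL} \le R_q \le \log M$ and Pinsker's inequality from Section 2.4, is easy to verify numerically on a toy discrete example. The sketch below is our own illustration, not part of the proof; the distributions are hypothetical.)

```python
import math

def renyi(rho, pi, q):
    """Renyi divergence R_q(rho || pi), q > 1, for discrete distributions
    given as aligned probability vectors with rho << pi."""
    return math.log(sum(p * (r / p) ** q for r, p in zip(rho, pi))) / (q - 1)

def kl(rho, pi):
    """KL divergence, the q -> 1 limit of R_q."""
    return sum(r * math.log(r / p) for r, p in zip(rho, pi) if r > 0)

rho = [0.5, 0.3, 0.2]
pi = [1 / 3, 1 / 3, 1 / 3]
M = max(r / p for r, p in zip(rho, pi))        # rho is M-warm w.r.t. pi
tv = 0.5 * sum(abs(r - p) for r, p in zip(rho, pi))
```

On this example one can confirm $\mathrm{KL}(\rho\|\pi) \le R_2(\rho\|\pi) \le R_4(\rho\|\pi) \le \log M$ and $2\,\mathrm{TV}(\rho,\pi)^2 \le \mathrm{KL}(\rho\|\pi)$.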
We choose the number of iterations $T$ to be:
$$T := 8 q C_{\mathsf{PI}} \beta^2 n^2 \left(n + \log \frac{3(n+1)\alpha M}{\eta}\right) \log \frac{M}{\varepsilon'} \cdot \log\left(4 q C_{\mathsf{PI}} \beta^2 n^2 \left(n + \log \frac{3(n+1)\alpha M}{\eta}\right) \log \frac{M}{\varepsilon'}\right). \tag{3}$$
We define an auxiliary parameter $S$ to be:
$$S := \frac{3 T M}{\eta}. \tag{4}$$
We also define the step size $h$ and the threshold on the number of trials $N$ in each iteration to be:
$$h := \frac{1}{2 \beta^2 n^3 \left(1 + \frac{1}{n} \log((n+1)\alpha S)\right)} \tag{5}$$
$$N := 8 \alpha S \log S. \tag{6}$$
We note $S \ge 3$, $h \le \frac{1}{2n} \le \frac12$, and our choices of $h$ and $N$ above satisfy the assumptions in Lemma 7. By Lemma 7 applied inductively for $m \ge 0$, conditioned on the event that In-and-Out reaches iteration $m$, the iterate $x_m \sim \rho_m$ of In-and-Out remains $M$-warm. Then we can do the following analysis.

(1) Failure probability: For each $m \ge 0$, let $E_m$ denote the event that In-and-Out reaches iteration $m$ (i.e., the first $m$ iterations succeed) starting from $x_0 \sim \rho_0$. Then $\Pr(E_0) = 1$, and we want to bound $\Pr(E_T)$. By Lemma 7 and the warmness guarantee, for each $m \in \{0, 1, \dots, T-1\}$ we have
$$\Pr\left(E_{m+1}^\complement \mid E_m\right) \le \frac{3M}{S} \overset{(4)}{=} \frac{\eta}{T}.$$
Since the events $E_m$ are nested ($E_{m+1} \subseteq E_m$ for $m \ge 0$), we can decompose:
$$\Pr\left(E_T^\complement\right) = \sum_{m=0}^{T-1} \Pr\left(E_m \cap E_{m+1}^\complement\right) = \sum_{m=0}^{T-1} \Pr(E_m) \cdot \Pr\left(E_{m+1}^\complement \mid E_m\right) \le \sum_{m=0}^{T-1} 1 \cdot \frac{\eta}{T} = \eta.$$
Therefore, the probability that In-and-Out runs successfully for $T$ iterations is:
$$\Pr(E_T) = 1 - \Pr\left(E_T^\complement\right) \ge 1 - \eta = 1 - \frac{\varepsilon}{8}.$$

(2) Error guarantee: In view of Lemma 5, we define the initial duration during which the Rényi divergence decreases slowly along In-and-Out:
$$T_0 := \max\left(0, \frac{q\,(R_q(\rho_0\|\pi) - 1)}{2 \log\left(1 + \frac{h}{C_{\mathsf{PI}}}\right)}\right). \tag{7}$$
After $T_0$ iterations, the Rényi divergence decreases exponentially fast. We define:
$$\tilde{T} := T_0 + \frac{q \log \frac{1}{\varepsilon'}}{\log\left(1 + \frac{h}{C_{\mathsf{PI}}}\right)}. \tag{8}$$
We claim that our choice of $T$ in (3) satisfies $T \ge \tilde{T}$; we show this below.
Assuming this claim, if In-and-Out runs successfully for $T \ge \tilde{T}$ iterations (which holds with probability at least $1 - \eta$ by part (1)), then by Lemma 5, the output $x_T \sim \rho_T$ satisfies:
$$R_q(\rho_T\|\pi) \le \left(1 + \frac{h}{C_{\mathsf{PI}}}\right)^{-\frac{1}{q}(T - T_0)} + 4\eta \le \left(1 + \frac{h}{C_{\mathsf{PI}}}\right)^{-\frac{1}{q}(\tilde{T} - T_0)} + 4\eta \overset{(8)}{\le} \varepsilon' + 4\eta = \varepsilon.$$
Thus, if In-and-Out succeeds for $T$ iterations, then its output satisfies the desired error guarantee.

We now show $T \ge \tilde{T}$. Since $h \le \frac12$ and $C_{\mathsf{PI}} \ge 1$, we have $\frac{h}{C_{\mathsf{PI}}} \le \frac12$. Using the inequality $\log(1 + t) \ge \frac{t}{2}$ for $0 < t = \frac{h}{C_{\mathsf{PI}}} \le \frac12$, we have $\log\left(1 + \frac{h}{C_{\mathsf{PI}}}\right) \ge \frac{h}{2 C_{\mathsf{PI}}}$. Using $R_q(\rho_0\|\pi) \le \log M$, we can estimate (7) by:
$$T_0 \le \frac{q C_{\mathsf{PI}}}{h} \log M.$$
Furthermore, we can estimate (8) by:
$$\tilde{T} \le \frac{q C_{\mathsf{PI}}}{h} \log M + \frac{2 q C_{\mathsf{PI}}}{h} \log \frac{1}{\varepsilon'} \le \frac{2 q C_{\mathsf{PI}}}{h} \log \frac{M}{\varepsilon'}.$$
It remains to show that with our choices of $T$ in (3) and $h$ in (5), we have $T \ge \frac{2 q C_{\mathsf{PI}}}{h} \log \frac{M}{\varepsilon'}$. This follows by a routine computation, which we provide in Lemma 16.

(3) Expected number of trials: For each $m \ge 0$, let $N_m$ denote the number of trials in iteration $m$ of In-and-Out to compute $x_{m+1} \sim \rho_{m+1}$. Recall from part (1) the event $E_m$ that In-and-Out reaches iteration $m$. If In-and-Out fails before reaching iteration $m$ (i.e., on the event $E_m^\complement$), then $N_m = 0$. By Lemma 7, if In-and-Out reaches iteration $m$ (i.e., on the event $E_m$), then the expected number of trials in the next iteration is:
$$\mathbb{E}[N_m \mid E_m] \le 16 M \alpha \log S \overset{(4)}{=} 16 M \alpha \log \frac{3 M T}{\eta}. \tag{9}$$
Therefore, the expected number of trials over $T$ iterations of In-and-Out is:
$$\mathbb{E}[\#\text{ of trials in } T \text{ iterations of In-and-Out}] = \sum_{m=0}^{T-1} \mathbb{E}[N_m] = \sum_{m=0}^{T-1} \left( \Pr\left(E_m^\complement\right) \mathbb{E}\left[N_m \mid E_m^\complement\right] + \Pr(E_m)\, \mathbb{E}[N_m \mid E_m] \right) = \sum_{m=0}^{T-1} \Pr(E_m)\, \mathbb{E}[N_m \mid E_m] \overset{(9)}{\le} 16 M \alpha T \log \frac{3 M T}{\eta}. \tag{10}$$
To simplify this, we plug in our choice of $T$ from (3), which is of the form $T = 2 z \log z$, where
$$z := 4 q C_{\mathsf{PI}} \beta^2 n^2 \left(n + \log \frac{3(n+1)\alpha M}{\eta}\right) \cdot \log \frac{M}{\varepsilon'}. \tag{11}$$
Using $\log z \le z - 1 \le z$, we have
$$\log \frac{3 M T}{\eta} = \log \frac{6 M z}{\eta} + \log(\log z) \le \log \frac{6 M z}{\eta} + \log z \le 2 \log \frac{6 M z}{\eta}. \tag{12}$$
Therefore, the expected number of trials over $T$ iterations of In-and-Out is bounded by:
$$\mathbb{E}[\#\text{ of trials in } T \text{ iterations of In-and-Out}] \overset{(12)}{\le} 64 M \alpha z (\log z) \cdot \log \frac{6 M z}{\eta} \le 64 M \alpha z \cdot \left(\log \frac{6 M z}{\eta}\right)^2. \tag{13}$$
The expression (13) is a precise bound on the expected number of total trials of In-and-Out, where $z$ is defined in (11). To simplify this further, we use the $\tilde{O}$ notation to hide constants and logarithmic dependencies on the parameters, so for example, $M \log M = \tilde{O}(M)$ and $z \log z = \tilde{O}(z)$. Recalling $\varepsilon' = \frac12 \varepsilon$ and $\eta = \frac18 \varepsilon$, we can simplify the expression in (13) as:
$$64 M \alpha z \cdot \left(\log \frac{6 M z}{\eta}\right)^2 = \tilde{O}\left(M \alpha z \left(\log \frac{1}{\eta}\right)^2\right) \overset{(11)}{=} \tilde{O}\left(M \alpha \cdot q C_{\mathsf{PI}} \beta^2 n^3 \left(1 + \frac{1}{n} \log \frac{3(n+1)\alpha M}{\eta}\right) \log \frac{M}{\varepsilon'} \cdot \left(\log \frac{1}{\eta}\right)^2\right)$$
$$= \tilde{O}\left(q C_{\mathsf{PI}} \alpha \beta^2 M n^3 \left(1 + \frac{1}{n} \log \frac{1}{\eta}\right) \log \frac{1}{\varepsilon'} \cdot \left(\log \frac{1}{\eta}\right)^2\right) = \tilde{O}\left(q C_{\mathsf{PI}} \alpha \beta^2 M n^3 \cdot \left(\log \frac{1}{\varepsilon}\right)^4\right)$$
as claimed in the theorem. Finally, inspecting the algorithm, we see that each trial requires one call to the membership oracle $1_X$ and $O(n)$ arithmetic operations.

3.4 Discussion and future work

In this paper, we have provided an efficient algorithm (In-and-Out) to sample from a large family of compact, nonconvex sets. Our result significantly advances the frontier of provably efficient uniform sampling; such results were previously known only for the special cases of convex bodies and star-shaped bodies. We show it for nonconvex sets satisfying isoperimetry (a Poincaré inequality) and a volume growth condition; these conditions are preserved under natural set operations, and thereby capture a much wider class beyond convexity.
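To make the algorithmic template concrete, here is a minimal sketch of the In-and-Out scheme: a forward Gaussian step, then a backward step implemented by rejection sampling against a membership oracle, capped at $N$ trials. This is our own illustrative sketch rather than the paper's implementation, and the two-disk support below is a hypothetical example of ours:

```python
import math
import random

def in_and_out(in_X, x0, h, T, N):
    """Forward step: y ~ N(x, h I). Backward step: x' ~ N(y, h I) restricted
    to X, implemented by rejection sampling with at most N trials.
    Returns the final iterate, or None if some backward step fails."""
    x = list(x0)
    s = math.sqrt(h)
    for _ in range(T):
        y = [xi + s * random.gauss(0.0, 1.0) for xi in x]       # forward step
        for _ in range(N):                                       # backward step
            cand = [yi + s * random.gauss(0.0, 1.0) for yi in y]
            if in_X(cand):
                x = cand
                break
        else:
            return None  # rejection sampling exhausted its N trials
    return x

def in_two_disks(p):
    """Hypothetical nonconvex (star-shaped) support: union of two unit disks."""
    return math.dist(p, (0.0, 0.0)) <= 1.0 or math.dist(p, (1.2, 0.0)) <= 1.0

random.seed(0)
out = in_and_out(in_two_disks, (0.0, 0.0), h=0.0025, T=30, N=500)
```

Each trial uses one membership-oracle call and $O(n)$ arithmetic, matching the per-trial cost noted in the proof of Theorem 1; every accepted iterate lies in $X$ by construction.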
The In-and-Out algorithm is an implementation of the ideal Proximal Sampler scheme, which can be viewed as an approximate proximal discretization of the Langevin dynamics; this explains why isoperimetry in the form of a Poincaré inequality is a natural condition for fast mixing. The difficulty is in controlling the complexity of implementing the Proximal Sampler scheme using rejection sampling. We showed how to control the failure probability under the volume growth condition. We conjecture that a Poincaré inequality and a bounded outer isoperimetry $\xi(X) = \mathrm{area}(\partial X)/\mathrm{Vol}(X)$ (which is even weaker than volume growth) are sufficient for efficient sampling.

Our result in this paper assumes a warm start initialization. How to generate a warm start efficiently for a nonconvex set is an open problem. It would be interesting to study whether other algorithms, such as the Ball Walk or the Metropolis Random Walk, also work for sampling from this class of nonconvex sets; all existing proofs still assume convexity or star-shapedness. Beyond uniform sampling, it would be interesting to study how to efficiently sample more general distributions under isoperimetry and without smoothness, e.g., extending the works of Applegate and Kannan [1991a], Frieze and Kannan [1999], Lovász and Vempala [2006a], and Kook and Vempala [2025b] beyond logconcavity.

A Additional discussion

A.1 A review of the Proximal Sampler

The ideal Proximal Sampler algorithm is the following (Algorithm 2). Here ideal means that it assumes we can implement both the forward step 2 and the backward step 3 in Algorithm 2. If we implement the backward step 3 via rejection sampling with a threshold on the number of trials, then we obtain the In-and-Out algorithm (Algorithm 1). Below, $\mathcal{N}(y, h I_n) \cdot 1_X$ denotes the Gaussian distribution $\mathcal{N}(y, h I_n)$ restricted to $X$.
Algorithm 2: Proximal Sampler for sampling from $\pi \propto 1_X$
Input: Initial point $x_0 \sim \rho_0 \in \mathcal{P}(X)$, step size $h > 0$, number of steps $T \in \mathbb{N}$
Output: $x_T \sim \rho_T \in \mathcal{P}(X)$
1: for $i = 0, \dots, T-1$ do
2:   Sample $y_i \sim \mathcal{N}(x_i, h I_n)$
3:   Sample $x_{i+1} \sim \mathcal{N}(y_i, h I_n) \cdot 1_X$
4: end for

Proximal Sampler as Gibbs sampling. As described in [Lee et al., 2021], we can derive the Proximal Sampler algorithm as an alternating Gibbs sampling algorithm from a joint target distribution $\pi^{XY} \in \mathcal{P}(\mathbb{R}^{2n})$ with density function:
$$\pi^{XY}(x, y) \propto 1_X(x) \cdot \exp\left(-\frac{1}{2h}\|y - x\|^2\right).$$
Note the $X$-marginal of $\pi^{XY}$ is $\pi \propto 1_X$. If we can sample from $\pi^{XY}$, then we can return the $X$-component as a sample from $\pi$. We apply Gibbs sampling to sample from $\pi^{XY}$, which results in the following reversible Markov chain $(x_i, y_i) \mapsto (x_{i+1}, y_{i+1})$ via the alternating update:
$$x_{i+1} \mid y_i \sim \pi^{X \mid Y = y_i} = \mathcal{N}(y_i, h I_n) \cdot 1_X$$
$$y_{i+1} \mid x_{i+1} \sim \pi^{Y \mid X = x_{i+1}} = \mathcal{N}(x_{i+1}, h I_n),$$
which is precisely the Proximal Sampler (Algorithm 2). See also Chen et al. [2022] for a further optimization interpretation of the Proximal Sampler as an approximate proximal discretization of the continuous-time Langevin dynamics.

A review of the convergence guarantee of the Proximal Sampler. We recall the following result on the convergence guarantee of the Proximal Sampler under a Poincaré inequality. This result was shown in [Chen et al., 2022, Theorem 4] for target distributions $\pi$ with full support on $\mathbb{R}^n$, and was extended to the case of the uniform distribution $\pi \propto 1_X$ in [Kook et al., 2024, Theorem 14]. We refer the reader to [Kook et al., 2024, Theorem 14] for the proof.

Lemma 8 ([Kook et al., 2024, Theorem 14]). Assume $\pi \propto 1_X$ satisfies a Poincaré inequality with constant $C_{\mathsf{PI}} \in [1, \infty)$. Let $q \in [2, \infty)$. Let $\rho_0$ be a probability distribution on $X$ with $\rho_0 \ll \pi$ and $R_q(\rho_0\|\pi) < \infty$.
Along the Proximal Sampler algorithm from $x_0 \sim \rho_0$ with step size $h > 0$ for $T \in \mathbb{N}$ iterations, the output $x_T \sim \rho_T$ satisfies:
$$R_q(\rho_T\|\pi) \le \begin{cases} R_q(\rho_0\|\pi) - \dfrac{T}{q} \log\left(1 + \dfrac{h}{C_{\mathsf{PI}}}\right) & \text{if } T \le \dfrac{q\,(R_q(\rho_0\|\pi) - 1)}{2 \log\left(1 + \frac{h}{C_{\mathsf{PI}}}\right)}, \\[2ex] \left(1 + \dfrac{h}{C_{\mathsf{PI}}}\right)^{-\frac{1}{q}(T - T_0)} & \text{if } T \ge T_0 := \max\left(0, \left\lceil \dfrac{q\,(R_q(\rho_0\|\pi) - 1)}{2 \log\left(1 + \frac{h}{C_{\mathsf{PI}}}\right)} \right\rceil\right). \end{cases}$$

A.2 The analogy with optimization

The results for sampling logconcave densities can be viewed as parallels to classical results for optimization, where again, there are polynomial-time algorithms for convex optimization (when the objective function to be minimized and the feasible region of solutions are both convex) [Grötschel et al., 1993, Vaidya, 1996, Lee et al., 2018]. Since a logconcave density $\pi$ has a convex potential, i.e., $\pi(x) \propto e^{-f(x)}$ for some convex function $f$, this family is the natural analog of convex functions in optimization.

Beyond convexity/logconcavity. For optimization, $\min_{x \in X} f(x)$, there are mild deviations from convexity that still allow efficient algorithms; notably, when the objective function satisfies a gradient-domination condition (the Polyak-Łojasiewicz/PL inequality), simple algorithms such as the proximal gradient method or gradient descent (under smoothness) have exponential convergence guarantees in objective function value to the minimum [Karimi et al., 2016]. However, even under the PL condition, it is crucial that any local minimum is also a global minimum. On the other hand, if the function $f$ is allowed to have spurious local (non-global) minima, then the task of finding the global minimizer is NP-hard in the worst case.

For sampling, there are more significant extensions beyond logconcavity that still allow for efficient algorithms. We mention two directions of existing results for efficient sampling beyond logconcavity.
First, star-shaped bodies can be efficiently sampled with complexity polynomial in the dimension and the inverse fraction of volume taken up by the convex core (the non-empty subset of the star-shaped body that can "see" all points in the body) [Chandrasekaran et al., 2010]; as shown in [Chandrasekaran et al., 2010], linear optimization over star-shaped bodies is hard, even to solve approximately, and even when the convex core takes up a constant fraction of the star-shaped body. Second, distributions satisfying a Poincaré or log-Sobolev inequality can be efficiently sampled under smoothness, i.e., with a bound on the Lipschitz constant of the logarithm of the target density [Vempala and Wibisono, 2023, Chen et al., 2022]; however, the iteration complexity scales with the smoothness constant, and in any case this does not apply to our setting of the uniform distribution on $X$.

Although only such modest deviations from convexity are known to be tractable, the existing results above demonstrate that unimodality, or local minima being global minima, is not essential for sampling. In fact, intuition suggests that sampling from a substantially more general class of distributions should be tractable. For optimization, the optimal solution can be hidden in a very small part of the feasible region of a slightly nonconvex domain; such a bottleneck appears unlikely for sampling, since small parts of the domain can be effectively ignored as they take up small measure. Moreover, assuming isoperimetry suggests that all parts are "reachable". This leads us to our first motivating question as described in Section 1, which we answer in this paper via the additional notion of the volume growth condition.

B Proofs for the Analysis of In-and-Out

B.1 Proof of Lemma 5

Lemma 5. Assume $\pi \propto 1_X$ satisfies a Poincaré inequality with constant $C_{\mathsf{PI}} \in [1, \infty)$. Let $q \in [2, \infty)$ and $h > 0$.
Let $\rho_0$ be a probability distribution on $X$ with $\rho_0 \ll \pi$ and $R_q(\rho_0\|\pi) < \infty$. Let
$$T_0 := \max\left(0, \left\lceil \frac{q\,(R_q(\rho_0\|\pi) - 1)}{2 \log\left(1 + \frac{h}{C_{\mathsf{PI}}}\right)} \right\rceil\right),$$
and let $T \ge T_0$ be the desired number of iterations. Let Succ be the success event that In-and-Out (Algorithm 1) runs without failure for $T$ iterations. Assume $\Pr(\mathrm{Succ}) \ge 1 - \eta$ for some $\eta \in [0, \frac12]$. Then, conditioned on Succ, the output $x_T \sim \rho_T$ of In-and-Out satisfies:
$$R_q(\rho_T\|\pi) \le \left(1 + \frac{h}{C_{\mathsf{PI}}}\right)^{-\frac{1}{q}(T - T_0)} + 4\eta.$$

Proof. Let $\tilde{\rho}_T$ be the output of the ideal Proximal Sampler algorithm with step size $h$ for $T$ iterations, starting from the same initialization $x_0 \sim \rho_0$ as In-and-Out. By Lemma 8, we have the guarantee
$$R_q(\tilde{\rho}_T\|\pi) \le \left(1 + \frac{h}{C_{\mathsf{PI}}}\right)^{-\frac{1}{q}(T - T_0)}. \tag{14}$$
By [Kook et al., 2024, Lemma 16], the output $x_T \sim \rho_T$ of In-and-Out conditioned on the success event Succ satisfies:
$$R_q(\rho_T\|\pi) \le R_q(\tilde{\rho}_T\|\pi) + \frac{q}{q-1} \log \frac{1}{1 - \eta} \overset{(14)}{\le} \left(1 + \frac{h}{C_{\mathsf{PI}}}\right)^{-\frac{1}{q}(T - T_0)} + 4\eta,$$
where the last inequality follows since $q \ge 2$ so $\frac{q}{q-1} \le 2$, and $\eta \le \frac12$ so $\log \frac{1}{1-\eta} \le 2\eta$.

B.2 Proof of Lemma 6

We introduce a few definitions. For $h > 0$, let $N_h : \mathbb{R}^n \to \mathbb{R}$ denote the probability density function of the Gaussian distribution $\mathcal{N}(0, h I_n)$:
$$N_h(y) = (2\pi h)^{-\frac{n}{2}} \exp\left(-\frac{\|y\|^2}{2h}\right).$$
For $h > 0$, let $\ell_h : \mathbb{R}^n \to [0, 1]$ denote the local conductance [Kook et al., 2024]:
$$\ell_h(y) = \int_X N_h(y - x)\, dx = \Pr_{Z \sim \mathcal{N}(0, I_n)}\left(y + \sqrt{h}\, Z \in X\right). \tag{15}$$
For $h > 0$, define $\pi_h = \pi * \mathcal{N}(0, h I_n)$, so $\pi_h$ has density function at any point $y \in \mathbb{R}^n$:
$$\pi_h(y) = \frac{1}{\mathrm{Vol}(X)} \int_X N_h(y - x)\, dx = \frac{\ell_h(y)}{\mathrm{Vol}(X)}. \tag{16}$$

Lemma 6. Assume $X \subset \mathbb{R}^n$ satisfies the $(\alpha, \beta)$-volume growth condition for some $\alpha \in [1, \infty)$ and $\beta \in (0, \infty)$. If $0 < h \le \frac{1}{2 n^3 \beta^2}$, then for all $r \ge 0$:
$$\pi_h\left(X_r^\complement\right) \le \alpha (n+1) \cdot Q_{2n}\left(\frac{r}{\sqrt{h}}\right).$$

Proof of Lemma 6.
Let $\pi_h = \pi * \mathcal{N}(0, h I_n)$, so it has density function (see (16)):
$$\pi_h(y) = \frac{1}{\mathrm{Vol}(X)} \int_X N_h(y - x)\, dx = \frac{\Pr_{Z \sim \mathcal{N}(0, I_n)}(y + \sqrt{h}\, Z \in X)}{\mathrm{Vol}(X)}.$$
Recall the distance function $u(y) := \mathrm{dist}(y, X) = \min_{x \in X} \|x - y\|$. If $y \in X^\complement$, then $u(y) > 0$ and $B(y, u(y)) \subseteq X^\complement$, where $B(y, u(y))$ is the $\ell_2$-ball of radius $u(y)$ centered at $y$. Therefore:
$$X \subseteq B(y, u(y))^\complement. \tag{17}$$
Then for $y \in X^\complement$:
$$\Pr(y + \sqrt{h}\, Z \in X) \le \Pr\left(y + \sqrt{h}\, Z \in B(y, u(y))^\complement\right) = \Pr\left(\|Z\| \ge \frac{u(y)}{\sqrt{h}}\right) = Q_n\left(\frac{u(y)}{\sqrt{h}}\right).$$
This means for any $y \in X^\complement$ we have the bound:
$$\pi_h(y) = \frac{\Pr(y + \sqrt{h}\, Z \in X)}{\mathrm{Vol}(X)} \le \frac{1}{\mathrm{Vol}(X)}\, Q_n\left(\frac{u(y)}{\sqrt{h}}\right). \tag{18}$$
Fix $r_0 \ge 0$ (this is $r$ in the statement of the lemma); we want to bound $\pi_h\left(X_{r_0}^\complement\right)$. By the co-area formula, we can write:
$$\pi_h\left(X_{r_0}^\complement\right) = \int_{X_{r_0}^\complement} \pi_h(y)\, dy \overset{(18)}{\le} \frac{1}{\mathrm{Vol}(X)} \int_{X_{r_0}^\complement} Q_n\left(\frac{u(y)}{\sqrt{h}}\right) dy = \frac{1}{\mathrm{Vol}(X)} \int_{r_0}^\infty \int_{\{y \in \mathbb{R}^n : u(y) = r\}} Q_n\left(\frac{r}{\sqrt{h}}\right) d\mathcal{H}^{n-1}(y)\, dr = \frac{1}{\mathrm{Vol}(X)} \int_{r_0}^\infty Q_n\left(\frac{r}{\sqrt{h}}\right) A(r)\, dr. \tag{19}$$
In the third expression above, we use the co-area formula to write the integration in terms of the level sets of the distance function $u(y)$. We also use the fact that the distance function $u(y) = \mathrm{dist}(y, X)$ satisfies $\|\nabla u(y)\| = 1$ for almost every $y \in X^\complement$, which was shown in [Belkin et al., 2013, Proof of Lemma 5.4] without assuming convexity; see also Lemma 9 for a self-contained proof. In the last expression, we define $A(r)$ to be the surface area of $X_r$, which is equal to the integral of the $(n-1)$-dimensional Hausdorff measure $d\mathcal{H}^{n-1}$ over the level set of constant distance:
$$A(r) := \int_{\{y \in \mathbb{R}^n : u(y) = r\}} d\mathcal{H}^{n-1}(y) = \mathrm{area}(\partial(X_r)).$$
Next, similar to the convex case [Kook et al., 2024], we do integration by parts. Let $Z \sim \mathcal{N}(0, I_n)$ in $\mathbb{R}^n$.
Recall $Y = \|Z\|$ has the chi distribution with density function $\gamma_n(r)$ at $r \ge 0$ given by:
$$\gamma_n(r) = \frac{1}{N_n}\, r^{n-1} \exp\left(-\frac{r^2}{2}\right),$$
where $N_n$ is the normalizing constant $N_n := 2^{\frac{n}{2} - 1}\, \Gamma\left(\frac{n}{2}\right)$, and recall the Gamma function $\Gamma(m) = \int_0^\infty e^{-t} t^{m-1}\, dt$ for $m > 0$. Then by definition of $Q_n$,
$$Q_n'(r) = \frac{d}{dr} Q_n(r) = -\gamma_n(r).$$
For $h > 0$, define $G_h : [0, \infty) \to [0, 1]$ by, for all $r \ge 0$:
$$G_h(r) := \Pr\left(\|Z\| \ge \frac{r}{\sqrt{h}}\right) = Q_n\left(\frac{r}{\sqrt{h}}\right).$$
We note the property:
$$G_h'(r) = \frac{d}{dr} G_h(r) = \frac{1}{\sqrt{h}}\, Q_n'\left(\frac{r}{\sqrt{h}}\right) = -\frac{1}{\sqrt{h}}\, \gamma_n\left(\frac{r}{\sqrt{h}}\right) = -\frac{1}{h^{n/2} N_n}\, r^{n-1} \exp\left(-\frac{r^2}{2h}\right).$$
Define the volume $V(r) = \mathrm{Vol}(X_r)$, and recall the relation (or definition):
$$V'(r) = \frac{d}{dr} V(r) = \mathrm{area}(\partial(X_r)) = A(r).$$
Then by integration by parts:
$$\int_{r_0}^\infty Q_n\left(\frac{r}{\sqrt{h}}\right) A(r)\, dr = \int_{r_0}^\infty G_h(r)\, V'(r)\, dr = \Big[G_h(r)\, V(r)\Big]_{r = r_0}^\infty - \int_{r_0}^\infty G_h'(r)\, V(r)\, dr = \underbrace{\lim_{r \to \infty} G_h(r) V(r)}_{= 0} - \underbrace{G_h(r_0) V(r_0)}_{\ge 0} + \frac{1}{h^{n/2} N_n} \int_{r_0}^\infty r^{n-1} \exp\left(-\frac{r^2}{2h}\right) V(r)\, dr$$
$$\le \frac{1}{h^{n/2} N_n} \int_{r_0}^\infty r^{n-1} \exp\left(-\frac{r^2}{2h}\right) V(r)\, dr = \frac{1}{N_n} \int_{r_0/\sqrt{h}}^\infty u^{n-1} \exp\left(-\frac{u^2}{2}\right) V\left(u\sqrt{h}\right) du, \tag{20}$$
where the inequality holds because $\lim_{r\to\infty} G_h(r) V(r) = 0$ (since $V(r)$ has polynomial growth by the volume growth assumption, while $G_h(r)$ decays exponentially), and because trivially $G_h(r_0) V(r_0) \ge 0$. In the last equality we use the change of variable $u = \frac{r}{\sqrt{h}}$, so $dr = \sqrt{h}\, du$, and note that the $h^{n/2}$ term disappears in the last expression.

By the volume growth condition, we have for all $u, h > 0$:
$$\frac{V(u\sqrt{h})}{V(0)} \le \alpha \left(1 + u\sqrt{h}\,\beta\right)^n = \alpha \sum_{i=0}^n \binom{n}{i} \left(u\sqrt{h}\,\beta\right)^i \le \alpha \sum_{i=0}^n \left(u\sqrt{h}\,\beta n\right)^i,$$
where in the last step we use the bound $\binom{n}{i} = \frac{n(n-1)\cdots(n-i+1)}{i!} \le \frac{n^i}{i!} \le n^i$. Plugging this into (20), we obtain:
$$\int_{r_0}^\infty Q_n\left(\frac{r}{\sqrt{h}}\right) A(r)\, dr \le V(0) \cdot \frac{\alpha}{N_n} \int_{r_0/\sqrt{h}}^\infty u^{n-1} \exp\left(-\frac{u^2}{2}\right) \sum_{i=0}^n \left(u\sqrt{h}\,\beta n\right)^i du = V(0) \cdot \frac{\alpha}{N_n} \cdot \sum_{i=0}^n \int_{r_0/\sqrt{h}}^\infty u^{n-1} \exp\left(-\frac{u^2}{2}\right) \left(u\sqrt{h}\,\beta n\right)^i du.$$
For each $i \in \{0, 1, \dots, n\}$, we have:
$$\int_{r_0/\sqrt{h}}^\infty u^{n-1} \exp\left(-\frac{u^2}{2}\right) \left(u\sqrt{h}\,\beta n\right)^i du = \left(\sqrt{h}\,\beta n\right)^i \int_{r_0/\sqrt{h}}^\infty u^{n+i-1} \exp\left(-\frac{u^2}{2}\right) du = N_{n+i} \left(\sqrt{h}\,\beta n\right)^i \int_{r_0/\sqrt{h}}^\infty \gamma_{n+i}(u)\, du \le N_n \int_{r_0/\sqrt{h}}^\infty \gamma_{n+i}(u)\, du = N_n \cdot Q_{n+i}\left(\frac{r_0}{\sqrt{h}}\right),$$
where the inequality follows from our assumption $h \le 1/(2n^3\beta^2)$ (see Lemma 10). Combining the two parts above and plugging the result back into (19) and (20), we obtain:
$$\pi_h\left(X_{r_0}^\complement\right) \le \frac{1}{V(0)} \int_{r_0}^\infty Q_n\left(\frac{r}{\sqrt{h}}\right) A(r)\, dr \le \frac{\alpha}{N_n} \sum_{i=0}^n \int_{r_0/\sqrt{h}}^\infty u^{n-1} \exp\left(-\frac{u^2}{2}\right)\left(u\sqrt{h}\,\beta n\right)^i du \le \alpha \sum_{i=0}^n Q_{n+i}\left(\frac{r_0}{\sqrt{h}}\right) \le \alpha (n+1) \cdot Q_{2n}\left(\frac{r_0}{\sqrt{h}}\right),$$
where in the last inequality we use the trivial bound $Q_{n+i}(r) \le Q_{2n}(r)$ for $i \le n$ and $r \ge 0$.

B.2.1 Helper lemma on the gradient of the distance function

Lemma 9. Let $X \subset \mathbb{R}^n$ be a closed set with non-empty interior. Define $u : \mathbb{R}^n \to [0, \infty)$ by:
$$u(y) := \mathrm{dist}(y, X) = \min_{x \in X} \|x - y\|.$$
Then $u$ is differentiable almost everywhere, and if $u$ is differentiable at $y \in X^\complement$, then $\|\nabla u(y)\| = 1$.

Proof. This follows the argument in [Belkin et al., 2013, Proof of Lemma 5.4]. For any $x, y \in \mathbb{R}^n$, by the triangle inequality we have:
$$u(y) = \min_{z \in X} \|y - z\| \le \min_{z \in X} \left( \|y - x\| + \|x - z\| \right) = \|y - x\| + u(x).$$
Exchanging $x$ and $y$ gives $|u(x) - u(y)| \le \|x - y\|$, which shows that $u$ is 1-Lipschitz on $\mathbb{R}^n$. Then by Rademacher's theorem, $u$ is differentiable almost everywhere on $\mathbb{R}^n$.

Now fix $y \in X^\complement$ such that $u$ is differentiable at $y$. Since $u$ is 1-Lipschitz, we know $\|\nabla u(y)\| \le 1$. We will show $\|\nabla u(y)\| \ge 1$, which implies the claim $\|\nabla u(y)\| = 1$. Since $X$ is closed, there exists $x \in X$ such that $u(y) = \|y - x\| > 0$. Define the unit vector $v := \frac{y - x}{\|y - x\|}$. Since $x \in X$, for all $t \in (0, u(y))$ we have
$$u(y - tv) \le \|y - tv - x\| = \left\|(y - x)\left(1 - \frac{t}{\|y - x\|}\right)\right\| = \|y - x\| - t = u(y) - t.$$
On the other hand, since $u$ is 1-Lipschitz, we also have
$$u(y - tv) \ge u(y) - \|tv\| = u(y) - t.$$
Therefore, for all $t \in (0, u(y))$, we have $u(y - tv) = u(y) - t$. Hence,
$$\langle \nabla u(y), -v \rangle = \lim_{t \downarrow 0} \frac{u(y - tv) - u(y)}{t} = -1,$$
or equivalently, $\langle \nabla u(y), v \rangle = 1$. By the Cauchy-Schwarz inequality, this implies
$$1 = \langle \nabla u(y), v \rangle \le \|\nabla u(y)\| \cdot \|v\| = \|\nabla u(y)\|.$$
This completes the proof.

B.2.2 Helper lemma on the normalizing constant

Recall the normalizing constant $N_n = 2^{\frac{n}{2} - 1}\, \Gamma\left(\frac{n}{2}\right)$ for the chi distribution, where $\Gamma(m) = \int_0^\infty e^{-t} t^{m-1}\, dt$ is the Gamma function. We have the following estimate.

Lemma 10. Assume $0 \le h \le 1/(2n^3\beta^2)$ for some $n \ge 1$ and $\beta > 0$. Then for all $i \in \{0, 1, \dots, n\}$,
$$\left(\sqrt{h}\, n\beta\right)^i \cdot \frac{N_{n+i}}{N_n} \le 1.$$

Proof. We recall a classical bound on the Gamma function [Wendel, 1948]: for any $x > 0$ and $0 < s < 1$,
$$\Gamma(x + s) \le x^s \cdot \Gamma(x).$$
Applying this bound with $s = \frac12$, for each $i \in \{1, \dots, n\}$ we have:
$$\frac{\Gamma\left(\frac{n+i}{2}\right)}{\Gamma\left(\frac{n}{2}\right)} = \prod_{k=0}^{i-1} \frac{\Gamma\left(\frac{n+k}{2} + \frac12\right)}{\Gamma\left(\frac{n+k}{2}\right)} \le \prod_{k=0}^{i-1} \left(\frac{n+k}{2}\right)^{1/2} \le \left(\frac{n+i-1}{2}\right)^{i/2} \le n^{i/2}.$$
Therefore, for $i \in \{1, \dots, n\}$ we can bound:
$$\frac{N_{n+i}}{N_n} = 2^{i/2} \cdot \frac{\Gamma\left(\frac{n+i}{2}\right)}{\Gamma\left(\frac{n}{2}\right)} \le (2n)^{i/2}.$$
Therefore,
$$\left(\sqrt{h}\, n\beta\right)^i \cdot \frac{N_{n+i}}{N_n} \le \left(\sqrt{2h}\, n^{3/2} \beta\right)^i \le 1,$$
since we assume $h \le 1/(2n^3\beta^2)$.

B.3 Bound on expected failure probability under stationarity

Recall the local conductance at $y \in \mathbb{R}^n$ is $\ell_h(y) = \Pr(y + \sqrt{h}\, Z \in X)$ for $h > 0$ and $Z \sim \mathcal{N}(0, I_n)$. For $h > 0$ and $N \in \mathbb{N}$, we define the failure probability under stationarity (where "stationarity" means the expectation is under $\pi_h$) to be:
$$F(h, N) := \mathbb{E}_{Y \sim \pi_h}\left[(1 - \ell_h(Y))^N\right]. \tag{21}$$

Lemma 11. Let $X \subset \mathbb{R}^n$ be a compact body which satisfies the $(\alpha, \beta)$-volume growth condition. Assume:
$$h \le \frac{1}{2 n^2 \beta^2 \cdot \max\{n, \log((n+1)\alpha S)\}}, \qquad N = 8 \alpha S \log S$$
where $S \ge 3$ is arbitrary. Then we have:
$$F(h, N) \le \frac{3}{S}.$$

Proof. We will use Lemma 12 below.
Let $r = \sqrt{8h \cdot \max\{n, \log((n+1)\alpha S)\}}$. Note this choice satisfies the constraint (23) for $r$. The assumption on $h$ satisfies the constraint (22). Combining our choice of $r$ with the assumption on $h$:
$$r \le \sqrt{\frac{8 \cdot \max\{n, \log((n+1)\alpha S)\}}{2 n^2 \beta^2 \cdot \max\{n, \log((n+1)\alpha S)\}}} = \frac{2}{n\beta}.$$
Therefore, $(1 + r\beta)^n \le \exp(rn\beta) \le e^2 < 8$. Therefore, the choice $N = 8\alpha S \log S$ also satisfies the constraint (24) for $N$. With these choices, the bound from Lemma 12 yields: $F(h, N) \le \frac{3}{S}$.

B.3.1 Helper lemma on a more general bound

We can bound the expected failure probability under stationarity as follows. We specialize this in Lemma 11 by choosing specific values for $r$ and $N$.

Lemma 12. Let $X \subset \mathbb{R}^n$ be a compact body which satisfies the $(\alpha, \beta)$-volume growth condition. Let $h, r > 0$ and $N \in \mathbb{N}$ satisfy:
$$h \le \frac{1}{2 n^3 \beta^2} \tag{22}$$
$$r \ge \sqrt{8h \cdot \max\{n, \log((n+1)\alpha S)\}} \tag{23}$$
$$N \ge \alpha S \log S \cdot (1 + r\beta)^n \tag{24}$$
where $S \ge 3$ is arbitrary. Then we have: $F(h, N) \le \frac{3}{S}$.

Proof. Let $A_{h,N} : \mathbb{R}^n \to \mathbb{R}$ be the function:
$$A_{h,N}(y) = (1 - \ell_h(y))^N.$$
We partition $\mathbb{R}^n = X_r^\complement \cup B_1 \cup B_2$ where
$$B_1 := X_r \cap \left\{y \in \mathbb{R}^n : \ell_h(y) \ge \frac{1}{N} \log S\right\}, \qquad B_2 := X_r \cap \left\{y \in \mathbb{R}^n : \ell_h(y) < \frac{1}{N} \log S\right\}.$$
For simplicity, we suppress the dependence on the argument $y$ in the integrals below. We can split:
$$F(h, N) = \int_{\mathbb{R}^n} A_{h,N}\, d\pi_h = \int_{X_r^\complement} A_{h,N}\, d\pi_h + \int_{B_1} A_{h,N}\, d\pi_h + \int_{B_2} A_{h,N}\, d\pi_h.$$
We control each part separately:

1. Integral over $X_r^\complement$: By the trivial bound $A_{h,N}(y) = (1 - \ell_h(y))^N \le 1$ for all $y \in \mathbb{R}^n$, and using the bound from Lemma 6 (which applies since we assume $h \le 1/(2n^3\beta^2)$), we have:
$$\int_{X_r^\complement} A_{h,N}\, d\pi_h \le \int_{X_r^\complement} d\pi_h = \pi_h\left(X_r^\complement\right) \le \alpha (n+1) \cdot Q_{2n}\left(\frac{r}{\sqrt{h}}\right).$$
Since $r \ge \sqrt{8hn}$, we have $r - \sqrt{2hn} \ge \frac{r}{2}$, so by the Gaussian tail bound (25) from Lemma 13:
$$Q_{2n}\left(\frac{r}{\sqrt{h}}\right) \le \exp\left(-\frac12 \left(\frac{r}{\sqrt{h}} - \sqrt{2n}\right)^2\right) = \exp\left(-\frac{\left(r - \sqrt{2hn}\right)^2}{2h}\right) \le \exp\left(-\frac{r^2}{8h}\right).$$
Therefore, since we also assume $\frac{r^2}{8h} \ge \log((n+1)\alpha S)$, we can bound the integral above by:
$$\int_{X_r^\complement} A_{h,N}\, d\pi_h \le \alpha (n+1) \cdot \exp\left(-\frac{r^2}{8h}\right) \le \frac{1}{S}.$$

2. Integral over $B_1$: By definition, for $y \in B_1$ we have $\ell_h(y) \ge \frac{1}{N} \log S$, so
$$A_{h,N}(y) = (1 - \ell_h(y))^N \le \exp(-\ell_h(y)\, N) \le \frac{1}{S}.$$
Therefore, since we also have $\pi_h(B_1) \le \pi_h(\mathbb{R}^n) = 1$, we get
$$\int_{B_1} A_{h,N}\, d\pi_h \le \int_{B_1} \frac{1}{S}\, d\pi_h = \frac{1}{S}\, \pi_h(B_1) \le \frac{1}{S}.$$

3. Integral over $B_2$: We use the trivial bound $A_{h,N}(y) \le 1$ for all $y \in \mathbb{R}^n$, the formula (16) for the density of $\pi_h$, and the bound $\ell_h(y) \le \frac{1}{N} \log S$ for $y \in B_2$ by the definition of $B_2$. We also use the inclusion $B_2 \subseteq X_r$, and the bound on $\mathrm{Vol}(X_r)$ from the volume growth condition, to get:
$$\int_{B_2} A_{h,N}\, d\pi_h \le \int_{B_2} 1 \cdot d\pi_h(y) = \int_{B_2} \frac{\ell_h(y)}{\mathrm{Vol}(X)}\, dy \le \frac{\log S}{N} \int_{B_2} \frac{1}{\mathrm{Vol}(X)}\, dy \le \frac{\log S}{N} \int_{X_r} \frac{1}{\mathrm{Vol}(X)}\, dy = \frac{\log S}{N} \cdot \frac{\mathrm{Vol}(X_r)}{\mathrm{Vol}(X)} \le \frac{\log S}{N} \cdot \alpha\, (1 + r\beta)^n \le \frac{1}{S},$$
where the last inequality follows from our assumption on $N$.

Combining the three parts above, we get the desired bound:
$$F(h, N) \le \frac{1}{S} + \frac{1}{S} + \frac{1}{S} = \frac{3}{S}.$$

B.3.2 Helper lemma on a Gaussian tail bound

Recall $Q_n(r) = \Pr(\|Z\| \ge r)$ where $Z \sim \mathcal{N}(0, I_n)$ in $\mathbb{R}^n$.

Lemma 13. For all $r \ge \sqrt{n}$:
$$Q_n(r) \le \exp\left(-\frac{(r - \sqrt{n})^2}{2}\right). \tag{25}$$

Proof. Note $x \mapsto \|x\|$ is 1-Lipschitz, since by the triangle inequality, for any $x, y \in \mathbb{R}^n$: $\|x\| - \|y\| \le \|x - y\|$. Let $Z \sim \mathcal{N}(0, I_n)$ in $\mathbb{R}^n$. By the Gaussian concentration inequality for Lipschitz functions [Boucheron et al., 2013, Theorem 5.6], for any $t \ge 0$ we have:
$$\Pr(\|Z\| - \mathbb{E}[\|Z\|] \ge t) \le \exp\left(-\frac{t^2}{2}\right).$$
Let $r \ge \sqrt{n}$ and set $t := r - \mathbb{E}[\|Z\|] \ge 0$. Then
$$Q_n(r) = \Pr(\|Z\| \ge r) = \Pr(\|Z\| - \mathbb{E}[\|Z\|] \ge r - \mathbb{E}[\|Z\|]) \le \exp\left(-\frac{(r - \mathbb{E}[\|Z\|])^2}{2}\right).$$
By the Cauchy-Schwarz inequality, $\mathbb{E}[\|Z\|] \le \sqrt{\mathbb{E}[\|Z\|^2]} = \sqrt{n}$, so $r - \mathbb{E}[\|Z\|] \ge r - \sqrt{n}$.
Since $a \mapsto e^{-a^2/2}$ is decreasing for $a \ge 0$, we have
$$ \exp\!\left(-\frac{(r - \mathbb{E}[\|Z\|])^2}{2}\right) \le \exp\!\left(-\frac{(r - \sqrt{n})^2}{2}\right), $$
which proves the claim.

B.4 Bound on the expected number of trials under stationarity

For $h > 0$, $N \in \mathbb{N}$, and $y \in \mathbb{R}^n$, let $M_{h,N}(y)$ be a random variable that denotes the number of trials in the rejection sampling for implementing one backward step of the In-and-Out algorithm with step size $h$ starting from $y$ and limited to $N$ trials, where each trial is independent with success probability given by the local conductance $\ell_h(y)$. By definition, we can write
$$ M_{h,N}(y) \stackrel{d}{=} \min\{G_h(y), N\} \quad (26) $$
where $\stackrel{d}{=}$ denotes equality in distribution, and $G_h(y) \in \mathbb{N}$ is a geometric random variable with success probability $\ell_h(y)$.

Define the expected number of trials in the rejection sampling (for implementing one backward step) starting from $y$:
$$ \mu_{h,N}(y) = \mathbb{E}[M_{h,N}(y)] \quad (27) $$
where the expectation is over the geometric random variable $G_h(y)$. Note for all $h > 0$, $N \in \mathbb{N}$, and $y \in \mathbb{R}^n$:
$$ \mu_{h,N}(y) \le \min\left\{ \frac{1}{\ell_h(y)},\, N \right\}. \quad (28) $$
(Indeed, since $M_{h,N}(y) \le N$, we have $\mu_{h,N}(y) = \mathbb{E}[M_{h,N}(y)] \le N$. And since $M_{h,N}(y) \le G_h(y)$, we also have $\mu_{h,N}(y) = \mathbb{E}[M_{h,N}(y)] \le \mathbb{E}[G_h(y)] = 1/\ell_h(y)$.)

Lemma 14. Let $\mathcal{X} \subset \mathbb{R}^n$ be a compact body that satisfies the $(\alpha, \beta)$-volume growth condition for some $\alpha \in [1, \infty)$ and $\beta \in (0, \infty)$. Assume:
$$ h \le \frac{1}{2 n^2 \beta^2 \cdot \max\{n, \log((n+1)\alpha S)\}}, \qquad N = 8 \alpha S \log S, $$
where $S \ge 3$ is arbitrary. Then:
$$ \mathbb{E}_{\pi_h}[\mu_{h,N}] \le 16 \alpha \log S. $$

Proof. We will use Lemma 15 below. Let $r = \sqrt{8h \cdot \max\{n, \log((n+1)\alpha S)\}}$. Note this choice satisfies the constraint (31) for $r$. The assumption on $h$ satisfies the constraint (29). Combining our choice of $r$ with the assumption on $h$:
$$ r \le \sqrt{\frac{8 \cdot \max\{n, \log((n+1)\alpha S)\}}{2 n^2 \beta^2 \cdot \max\{n, \log((n+1)\alpha S)\}}} = \frac{2}{n\beta}. $$
Therefore, $(1 + r\beta)^n \le \exp(rn\beta) \le e^2 < 8$.
Therefore, the choice $N = 8\alpha S \log S$ also satisfies the constraint (30) for $N$. With these choices, the bound from Lemma 15 yields:
$$ \mathbb{E}_{\pi_h}[\mu_{h,N}] \le \frac{2N}{S} = 16 \alpha \log S. $$

B.4.1 Helper lemma on a more general bound

We can bound the expected number of trials under stationarity. We specialize this in Lemma 14 by choosing specific values for $r$ and $N$.

Lemma 15. Let $\mathcal{X} \subset \mathbb{R}^n$ be a compact body that satisfies the $(\alpha, \beta)$-volume growth condition for some $\alpha \in [1, \infty)$ and $\beta \in (0, \infty)$. Assume $h, r > 0$ and $N \in \mathbb{N}$ satisfy:
$$ h \le \frac{1}{2 n^3 \beta^2} \quad (29) $$
$$ N \ge \alpha S \log S \cdot (1 + r\beta)^n \quad (30) $$
$$ r \ge \sqrt{8h \cdot \max\{n, \log((n+1)\alpha S)\}} \quad (31) $$
where $S \ge 3$ is arbitrary. Then:
$$ \mathbb{E}_{\pi_h}[\mu_{h,N}] \le \frac{2N}{S}. $$

Proof. We split the integral into two parts:
$$ \mathbb{E}_{\pi_h}[\mu_{h,N}] = \int_{\mathbb{R}^n} \mu_{h,N} \, d\pi_h = \int_{\mathcal{X}_r} \mu_{h,N} \, d\pi_h + \int_{\mathcal{X}_r^\complement} \mu_{h,N} \, d\pi_h. $$
We can bound each integral above as follows:

1. Integral over $\mathcal{X}_r$: Using the bound $\mu_{h,N}(y) \le \frac{1}{\ell_h(y)}$ from (28), the formula (16) for the density of $\pi_h$, and the volume growth bound, we get:
$$ \int_{\mathcal{X}_r} \mu_{h,N} \, d\pi_h \le \int_{\mathcal{X}_r} \frac{1}{\ell_h(y)} \cdot \pi_h(y) \, dy = \int_{\mathcal{X}_r} \frac{dy}{\mathrm{Vol}(\mathcal{X})} = \frac{\mathrm{Vol}(\mathcal{X}_r)}{\mathrm{Vol}(\mathcal{X})} \le \alpha \cdot (1 + r\beta)^n \le \frac{N}{S \log S}. $$
The last inequality follows from our assumption on $N$. Since we assume $S \ge 3$, $\log S \ge 1$, so we can further bound $\frac{N}{S \log S} \le \frac{N}{S}$.

2. Integral over $\mathcal{X}_r^\complement$: Using the bound $\mu_{h,N}(y) \le N$ from (28), and the bound from Lemma 6 (which applies since we assume $h \le 1/(2n^3\beta^2)$), we can bound:
$$ \int_{\mathcal{X}_r^\complement} \mu_{h,N} \, d\pi_h \le N \cdot \pi_h(\mathcal{X}_r^\complement) \le N \cdot \alpha(n+1) \cdot Q_{2n}\!\left(\frac{r}{\sqrt{h}}\right) \le \frac{N}{S}. $$
The last inequality follows since we assume $r \ge \sqrt{8hn}$, so $\frac{r}{\sqrt{h}} - \sqrt{2n} \ge \frac{r}{2\sqrt{h}}$; by the Gaussian tail bound (see Lemma 13):
$$ Q_{2n}\!\left(\frac{r}{\sqrt{h}}\right) \le \exp\!\left( -\frac{1}{2}\left(\frac{r}{\sqrt{h}} - \sqrt{2n}\right)^{\!2} \right) \le \exp\!\left(-\frac{r^2}{8h}\right); $$
and since we assume $r \ge \sqrt{8h \cdot \log((n+1)\alpha S)}$, so that $\exp(-\frac{r^2}{8h}) \le \frac{1}{(n+1)\alpha S}$.

Combining the two estimates above, we have the bound:
$$ \mathbb{E}_{\pi_h}[\mu_{h,N}] \le \frac{N}{S} + \frac{N}{S} = \frac{2N}{S}. $$

B.5 Proof of Lemma 7

Lemma 7.
Let $\pi \propto \mathbf{1}_{\mathcal{X}}$ where $\mathcal{X} \subset \mathbb{R}^n$ satisfies the $(\alpha, \beta)$-volume growth condition for some $\alpha \in [1, \infty)$ and $\beta \in (0, \infty)$. Let $x_0 \sim \rho_0$ where $\rho_0$ is $M$-warm with respect to $\pi$ for some $M \in [1, \infty)$. Assume:
$$ h \le \frac{1}{2 \beta^2 n^3 \cdot \max\{1, \frac{1}{n}\log((n+1)\alpha S)\}}, \qquad N = 8 \alpha S \log S, $$
where $S \ge 3$ is arbitrary. Assume In-and-Out succeeds for $m \ge 0$ iterations (i.e., conditioned on the event "reaches iteration $m$"), and is currently at $x_m \sim \rho_m$ which is $M$-warm with respect to $\pi$. Then the following hold:

1. The next iteration (to compute $x_{m+1}$) succeeds with probability:
$$ \Pr_{x_m \sim \rho_m}(\text{next iteration succeeds} \mid \text{reach iteration } m) \ge 1 - \frac{3M}{S}. $$

2. The expected number of trials in the next iteration is
$$ \mathbb{E}_{x_m \sim \rho_m}[\#\,\text{trials until success in next iteration} \mid \text{reach iteration } m] \le 16 M \alpha \log S. $$

3. Upon accepting $x_{m+1} \sim \rho_{m+1}$, the next distribution $\rho_{m+1}$ is $M$-warm with respect to $\pi$.

Proof. Suppose we are at $x_m \sim \rho_m$ in the $m$-th iteration. When $m = 0$, we know $\rho_0$ is $M$-warm by assumption. For $m > 0$, we assume $\rho_m$ is $M$-warm as an inductive hypothesis.

In the forward step of the algorithm, we draw $y_m \sim \mathcal{N}(x_m, h I_n)$ to obtain $y_m$ with marginal distribution $\rho_m^Y := \rho_m * \mathcal{N}(0, h I_n)$. Note $\rho_m^Y$ is $M$-warm with respect to $\pi_h = \pi * \mathcal{N}(0, h I_n)$. Indeed, letting $N_h(y)$ denote the density of $\mathcal{N}(0, h I_n)$ at $y \in \mathbb{R}^n$, we have by definition:
$$ \rho_m^Y(y) = \int_{\mathcal{X}} \rho_m(x)\, N_h(y - x) \, dx \le M \int_{\mathcal{X}} \pi(x)\, N_h(y - x) \, dx = M \cdot \pi_h(y). $$
In the backward step, we try to draw $x'_{m+1} \sim \mathcal{N}(y_m, h I_n)$ and accept when $x'_{m+1} \in \mathcal{X}$ (in which case we set $x_{m+1} = x'_{m+1}$), and repeat up to $N$ times. We analyze this as follows:

1. Failure probability: Conditioning on $y_m$, the failure probability of one trial in the next iteration is
$$ \Pr(x'_{m+1} \notin \mathcal{X} \mid y_m) = \Pr(y_m + \sqrt{h}\, Z \notin \mathcal{X} \mid y_m) = 1 - \ell_h(y_m) $$
where $Z \sim \mathcal{N}(0, I_n)$ in $\mathbb{R}^n$, and recall $\ell_h$ is the local conductance defined in (15).
If the trial fails, then we repeat it (conditioning on the same $y_m$) for at most $N$ trials. Then the failure probability over $N$ trials (conditioned on $y_m$) is $(1 - \ell_h(y_m))^N$. Taking expectation over $y_m \sim \rho_m^Y$, the failure probability over $N$ trials is $\mathbb{E}_{\rho_m^Y}[(1 - \ell_h)^N]$; this is the probability that the next iteration fails. Since $\rho_m^Y$ is $M$-warm with respect to $\pi_h$, we can bound this by the expectation under stationarity:
$$ \mathbb{E}_{\rho_m^Y}\!\left[(1 - \ell_h)^N\right] \le M \cdot \mathbb{E}_{\pi_h}\!\left[(1 - \ell_h)^N\right] \le \frac{3M}{S} $$
where the last inequality follows from Lemma 11 (which holds for our choices of $h$ and $N$).

2. Controlling the expected number of trials: Conditioning on $y_m$, let $M_{h,N}(y_m)$ denote the number of trials in the backward step, where each trial is independent and has success probability $\ell_h(y_m)$, and we run at most $N$ trials. Then $M_{h,N}(y_m) \stackrel{d}{=} \min\{G_h(y_m), N\}$ where $G_h(y_m)$ is a geometric random variable with success probability $\ell_h(y_m)$. The expected number of trials (conditioned on $y_m$, so the expectation is over the geometric random variable $G_h(y_m)$) is $\mu_{h,N}(y_m) = \mathbb{E}[\min\{G_h(y_m), N\}]$, which we also defined in (27). Taking expectation over $y_m \sim \rho_m^Y$, the expected number of trials in the backward step is $\mathbb{E}_{\rho_m^Y}[\mu_{h,N}]$. Since $\rho_m^Y$ is $M$-warm with respect to $\pi_h$, we can bound this by:
$$ \mathbb{E}_{\rho_m^Y}[\mu_{h,N}] \le M \cdot \mathbb{E}_{\pi_h}[\mu_{h,N}] \le 16 M \alpha \log S $$
where the last inequality follows from Lemma 14 (which holds for our choices of $h$ and $N$).

3. Warmness: Upon acceptance, the next random variable $x_{m+1} \sim \rho_{m+1}$ has distribution:
$$ \rho_{m+1}(x) = \int_{\mathbb{R}^n} \rho_m^Y(y) \cdot \pi^{X|Y}(x \mid y) \, dy $$
where $\pi^{X|Y}(\cdot \mid y) \propto \mathcal{N}(y, h I_n) \cdot \mathbf{1}_{\mathcal{X}}$. Since $\rho_m^Y$ is $M$-warm with respect to $\pi_h = \pi^Y$, we get:
$$ \rho_{m+1}(x) \le M \int_{\mathbb{R}^n} \pi^Y(y) \cdot \pi^{X|Y}(x \mid y) \, dy = M \cdot \pi^X(x), $$
which shows $\rho_{m+1}$ is $M$-warm with respect to $\pi^X$.
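The forward and backward steps analyzed above can be sketched in code. The following is a minimal illustrative simulation, not the paper's implementation: it runs In-and-Out on the Euclidean unit ball $\mathcal{X} = \{x : \|x\| \le 1\}$, with the forward step $y \sim \mathcal{N}(x, hI_n)$ and a backward step that retries $x' \sim \mathcal{N}(y, hI_n)$ until $x' \in \mathcal{X}$, capped at $N$ trials. The values of $n$, $h$, $N$ below are arbitrary illustrative choices, not the tuned parameters of Theorem 1.

```python
import math
import random

random.seed(0)

def gauss_step(x, h):
    """Return x + sqrt(h) * Z with Z ~ N(0, I_n)."""
    s = math.sqrt(h)
    return [xi + s * random.gauss(0.0, 1.0) for xi in x]

def in_ball(x):
    """Membership oracle for the unit ball X."""
    return math.sqrt(sum(xi * xi for xi in x)) <= 1.0

def in_and_out_step(x, h, N):
    """One iteration; returns (x_next, #trials), with x_next = None on failure."""
    y = gauss_step(x, h)              # forward (heat) step: y ~ N(x, h I_n)
    for trial in range(1, N + 1):     # backward step: at most N rejection trials
        x_new = gauss_step(y, h)      # proposal x' ~ N(y, h I_n)
        if in_ball(x_new):            # accept when the proposal lands in X
            return x_new, trial
    return None, N                    # all N trials failed

n, h, N = 5, 0.01, 100
x = [0.0] * n                         # start at the center (a "warm" point)
trials = []
for _ in range(200):
    x_next, t = in_and_out_step(x, h, N)
    trials.append(t)
    if x_next is not None:
        x = x_next

# Near the center the local conductance l_h(y) is close to 1, so the number
# of trials min{G_h(y), N} is typically 1; it grows only when the walk is
# close to the boundary of X.
print("mean #trials:", sum(trials) / len(trials))
```

Consistent with the bound $\mu_{h,N}(y) \le \min\{1/\ell_h(y), N\}$ from (28), the empirical mean number of trials stays small whenever the smoothed walk rarely leaves the neighborhood $\mathcal{X}_r$ where $\ell_h$ is bounded below.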
B.6 Details for the proof of Theorem 1

B.6.1 Helper lemma on bound on number of iterations

Lemma 16. Under the same assumptions as in Theorem 1, and with the definitions of $T$ in (3), $h$ in (5), and $S$ in (4), we have
$$ T \ge \frac{2q\, C_{\mathrm{PI}}}{h} \log \frac{M}{\varepsilon'}. \quad (32) $$

Proof. Our choice (3) of $T$ is of the form $T = 2z \log z$, where
$$ z := 4q\, C_{\mathrm{PI}}\, \beta^2 n^2 \left( n + \log \frac{3(n+1)\alpha M}{\eta} \right) \cdot \log \frac{M}{\varepsilon'} \ge 32 \beta^2 n^2 \ge 32, $$
since $q \ge 2$, $C_{\mathrm{PI}} \ge 1$, $n \ge 2$, $\log \frac{3(n+1)\alpha M}{\eta} \ge 2$, $\log \frac{M}{\varepsilon'} \ge 1$, and we assume $\beta \ge \frac{1}{n}$. We recall (see Lemma 17) that for $z \ge 2$, $y \ge 2z \log z$ implies $y / \log y \ge z$. Then we observe that our choice of $T$ in (3) implies:
$$ \frac{T}{\log T} \ge 4q\, C_{\mathrm{PI}}\, \beta^2 n^2 \left( n + \log \frac{3(n+1)\alpha M}{\eta} \right) \cdot \log \frac{M}{\varepsilon'} \ge 16 q\, C_{\mathrm{PI}}\, \beta^2 n^2 \cdot \log \frac{M}{\varepsilon'} \quad (33) $$
where the last inequality holds since $n \ge 2$ and $\log \frac{3(n+1)\alpha M}{\eta} \ge \log 9 > 2$. The right-hand side of the claim (32) is, using the definitions of $h$ from (5) and $S$ from (4):
$$ \mathrm{RHS} := \frac{2q\, C_{\mathrm{PI}}}{h} \log \frac{M}{\varepsilon'} \stackrel{(5)}{=} 4q\, \beta^2 C_{\mathrm{PI}}\, n^2 \left( n + \log((n+1)\alpha S) \right) \cdot \log \frac{M}{\varepsilon'} \stackrel{(4)}{=} 4q\, \beta^2 C_{\mathrm{PI}}\, n^2 \left( n + \log \frac{3(n+1)\alpha M}{\eta} + \log T \right) \cdot \log \frac{M}{\varepsilon'}. $$
On the other hand, from our choice of $T = 2z \log z$ in (3) with $\log z \ge 1$, we have:
$$ T \ge \frac{T}{2} + z = \frac{T}{2} + 4q\, C_{\mathrm{PI}}\, \beta^2 n^2 \left( n + \log \frac{3(n+1)\alpha M}{\eta} \right) \cdot \log \frac{M}{\varepsilon'} = \frac{T}{2} + \mathrm{RHS} - 4q\, \beta^2 C_{\mathrm{PI}}\, n^2 \cdot \log \frac{M}{\varepsilon'} \cdot \log T. $$
Therefore, to show $T \ge \mathrm{RHS}$, it suffices to show that:
$$ \frac{T}{\log T} \ge 8q\, C_{\mathrm{PI}}\, \beta^2 n^2 \cdot \log \frac{M}{\varepsilon'}, $$
which we observed in (33) for our choice of $T$. This completes the proof.

B.6.2 Helper lemma on logarithm

Lemma 17. If $z \ge 2$ and $y \ge 2z \log z$, then $y / \log y \ge z$.

Proof. Define the function $\phi \colon (1, \infty) \to \mathbb{R}$ by $\phi(y) = \frac{y}{\log y}$. Its derivative is $\phi'(y) = \frac{\log y - 1}{(\log y)^2}$, so $\phi(y)$ is increasing for $y \ge e$. Since $y \ge 2z \log z \ge 4 \log 2 > e$, we have
$$ \frac{y}{\log y} \ge \frac{2z \log z}{\log(2z \log z)}. $$
Then to show $y / \log y \ge z$, it suffices to show $2 \log z \ge \log(2z \log z)$. Exponentiating and simplifying, this is equivalent to showing $z \ge 2 \log z$, which is true for $z \ge 2$.
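As a numerical sanity check (not part of the proof), Lemma 17 can be tested at its boundary case $y = 2z \log z$, where the inequality $y / \log y \ge z$ is tightest:

```python
import math

# Check Lemma 17 at the boundary y = 2 z log z over a range of z >= 2;
# by monotonicity of y/log y for y >= e, larger y only helps.
for z in [2.0, 3.0, 10.0, 100.0, 1e4, 1e8]:
    y = 2 * z * math.log(z)
    assert y / math.log(y) >= z, (z, y)
print("Lemma 17 boundary check passed")
```

At $z = 2$ the ratio $y / \log y \approx 2.72$ already exceeds $z$, and the slack grows with $z$, matching the step $z \ge 2 \log z$ in the proof.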
C Details for the volume growth condition

C.1 Volume growth condition for convex bodies

Lemma 1. If $\mathcal{X} \subset \mathbb{R}^n$ is convex, then it satisfies the $(1, \frac{1}{n}\xi(\mathcal{X}))$-volume growth condition.

Proof. Let $\phi(t) = \log \mathrm{Vol}(\mathcal{X}_t)$, so $\phi'(t) = \xi(\mathcal{X}_t)$. By the Brunn-Minkowski theorem, we know $t \mapsto \mathrm{Vol}(\mathcal{X}_t)^{1/n} = \exp(\frac{1}{n}\phi(t))$ is a concave function. This means
$$ 0 \ge \frac{d^2}{dt^2}\, \mathrm{Vol}(\mathcal{X}_t)^{1/n} = \frac{d^2}{dt^2} \exp\!\left(\tfrac{1}{n}\phi(t)\right) = \frac{d}{dt}\!\left[ \tfrac{1}{n}\phi'(t) \exp\!\left(\tfrac{1}{n}\phi(t)\right) \right] = \frac{1}{n^2}\left( n\phi''(t) + (\phi'(t))^2 \right) \exp\!\left(\tfrac{1}{n}\phi(t)\right). $$
Therefore, $n\phi''(t) + (\phi'(t))^2 \le 0$. Equivalently, for all $t > 0$:
$$ \frac{-\phi''(t)}{(\phi'(t))^2} \ge \frac{1}{n}. $$
We can write this as:
$$ \frac{d}{dt} \frac{1}{\phi'(t)} = \frac{-\phi''(t)}{(\phi'(t))^2} \ge \frac{1}{n}. $$
Therefore, $\frac{1}{\phi'(t)} - \frac{1}{\phi'(0)} \ge \frac{t}{n}$. Equivalently,
$$ \xi(\mathcal{X}_t) = \phi'(t) \le \frac{1}{\frac{1}{\xi(\mathcal{X})} + \frac{t}{n}} = \frac{n}{t + \frac{n}{\xi(\mathcal{X})}} = n \cdot \frac{d}{dt} \log\!\left( t + \frac{n}{\xi(\mathcal{X})} \right). $$
Therefore,
$$ \int_0^t \xi(\mathcal{X}_s) \, ds \le \int_0^t \frac{n}{s + \frac{n}{\xi(\mathcal{X})}} \, ds = n \log \frac{t + \frac{n}{\xi(\mathcal{X})}}{\frac{n}{\xi(\mathcal{X})}} = n \log\!\left( 1 + \frac{t \cdot \xi(\mathcal{X})}{n} \right). $$
This shows the claim:
$$ \frac{\mathrm{Vol}(\mathcal{X}_t)}{\mathrm{Vol}(\mathcal{X})} = \exp\!\left( \int_0^t \xi(\mathcal{X}_s) \, ds \right) \le \left( 1 + \frac{t \cdot \xi(\mathcal{X})}{n} \right)^{\!n}. $$

C.2 Volume growth condition for star-shaped bodies

Lemma 2. Let $\mathcal{X} \subset \mathbb{R}^n$ be a star-shaped body, so $\mathcal{X} = \bigcup_{i \in \mathcal{I}} \mathcal{X}_i$ where $\mathcal{X}_i$ is a convex body for each $i \in \mathcal{I}$ in a finite index set $\mathcal{I}$, and they share a common intersection $\mathcal{Y} = \mathcal{X}_i \cap \mathcal{X}_j \neq \emptyset$ for all $i \neq j$. Assume $\mathcal{Y}$ contains a ball of radius $r > 0$ centered at $0$, i.e., $B_r \subseteq \mathcal{Y}$. Then $\mathcal{X}$ satisfies the $(1, \frac{1}{r})$-volume growth condition.

Proof. Fix $t > 0$. Since $\mathcal{X} = \bigcup_{i \in \mathcal{I}} \mathcal{X}_i$, we observe that $\mathcal{X}_t \subseteq \bigcup_{i \in \mathcal{I}} (\mathcal{X}_i)_t$; this is because any $x \in \mathcal{X}_t = \mathcal{X} \oplus B_t$ is of the form $x = y + z$ where $y \in \mathcal{X}_i$ for some $i \in \mathcal{I}$ and $z \in B_t$, so $x \in \mathcal{X}_i \oplus B_t = (\mathcal{X}_i)_t$. Next, since $B_r \subseteq \mathcal{Y} \subseteq \mathcal{X}_i$, we have $B_t = \frac{t}{r} B_r \subseteq \frac{t}{r} \mathcal{X}_i$. Then we have $(\mathcal{X}_i)_t = \mathcal{X}_i \oplus B_t \subseteq \mathcal{X}_i \oplus \frac{t}{r} \mathcal{X}_i = \left(1 + \frac{t}{r}\right) \mathcal{X}_i$, where the last equality holds since $\mathcal{X}_i$ is convex and contains $0$. Furthermore, since $\mathcal{X}_i \subseteq \mathcal{X}$, we also have $\left(1 + \frac{t}{r}\right) \mathcal{X}_i \subseteq \left(1 + \frac{t}{r}\right) \mathcal{X}$.
Combining the relations above, we obtain
$$ \mathcal{X}_t \subseteq \bigcup_{i \in \mathcal{I}} (\mathcal{X}_i)_t \subseteq \bigcup_{i \in \mathcal{I}} \left(1 + \frac{t}{r}\right) \mathcal{X}_i \subseteq \left(1 + \frac{t}{r}\right) \mathcal{X}. $$
Taking volumes on both sides, we conclude
$$ \mathrm{Vol}(\mathcal{X}_t) \le \mathrm{Vol}\!\left( \left(1 + \frac{t}{r}\right) \mathcal{X} \right) = \left(1 + \frac{t}{r}\right)^{\!n} \cdot \mathrm{Vol}(\mathcal{X}). $$
This shows that $\mathcal{X}$ satisfies the volume growth condition with $\alpha = 1$ and $\beta = 1/r$.

C.3 Volume growth condition under set union

Lemma 3. Suppose $\mathcal{X}_i \subset \mathbb{R}^n$ is a compact body that satisfies the $(\alpha_i, \beta_i)$-volume growth condition for some $\alpha_i \in [1, \infty)$ and $\beta_i \in (0, \infty)$, for each $i \in \mathcal{I}$ in some finite index set $\mathcal{I}$. Let $q_{\mathcal{I}}$ be the probability distribution supported on $\mathcal{I}$ with density $q_{\mathcal{I}}(i) = \frac{\mathrm{Vol}(\mathcal{X}_i)}{\sum_{j \in \mathcal{I}} \mathrm{Vol}(\mathcal{X}_j)}$, for $i \in \mathcal{I}$. Then the union $\mathcal{X} = \bigcup_{i \in \mathcal{I}} \mathcal{X}_i$ satisfies the $(A, B)$-volume growth condition, where:
$$ A = \max_{i \in \mathcal{I}} \alpha_i \cdot \frac{\sum_{i \in \mathcal{I}} \mathrm{Vol}(\mathcal{X}_i)}{\mathrm{Vol}(\mathcal{X})}, \qquad B = \mathbb{E}_{I \sim q_{\mathcal{I}}}[\beta_I^n]^{1/n} = \left( \frac{\sum_{i \in \mathcal{I}} \mathrm{Vol}(\mathcal{X}_i) \cdot \beta_i^n}{\sum_{j \in \mathcal{I}} \mathrm{Vol}(\mathcal{X}_j)} \right)^{\!1/n} \le \max_{i \in \mathcal{I}} \beta_i. $$

Proof. As in the proof of Lemma 2, we have $\mathcal{X}_t \subseteq \bigcup_{i \in \mathcal{I}} (\mathcal{X}_i)_t$ where $(\mathcal{X}_i)_t = \mathcal{X}_i \oplus B_t$. Since each $\mathcal{X}_i$ satisfies the $(\alpha_i, \beta_i)$-volume growth condition, we have $\mathrm{Vol}((\mathcal{X}_i)_t) \le \alpha_i \cdot (1 + t\beta_i)^n \cdot \mathrm{Vol}(\mathcal{X}_i)$ for all $i \in \mathcal{I}$. Introducing the random variable $I \sim q_{\mathcal{I}}$ with density $q_{\mathcal{I}}(i) = \frac{\mathrm{Vol}(\mathcal{X}_i)}{\sum_{j \in \mathcal{I}} \mathrm{Vol}(\mathcal{X}_j)}$, we have:
$$ \frac{\mathrm{Vol}(\mathcal{X}_t)}{\mathrm{Vol}(\mathcal{X})} \le \frac{1}{\mathrm{Vol}(\mathcal{X})} \sum_{i \in \mathcal{I}} \mathrm{Vol}((\mathcal{X}_i)_t) \le \frac{1}{\mathrm{Vol}(\mathcal{X})} \sum_{i \in \mathcal{I}} \alpha_i \cdot (1 + t\beta_i)^n \cdot \mathrm{Vol}(\mathcal{X}_i) = \frac{\sum_{j \in \mathcal{I}} \mathrm{Vol}(\mathcal{X}_j)}{\mathrm{Vol}(\mathcal{X})} \cdot \mathbb{E}_{I \sim q_{\mathcal{I}}}\!\left[ \alpha_I \cdot (1 + t\beta_I)^n \right] \le \frac{\sum_{j \in \mathcal{I}} \mathrm{Vol}(\mathcal{X}_j)}{\mathrm{Vol}(\mathcal{X})} \cdot \max_{i \in \mathcal{I}} \alpha_i \cdot \mathbb{E}_{I \sim q_{\mathcal{I}}}\!\left[ (1 + t\beta_I)^n \right]. $$
For a random variable $U \in \mathbb{R}$ we denote its $L^p$-norm, $p \ge 1$, by $\|U\|_p := \mathbb{E}[|U|^p]^{1/p}$. Note that $B = \|\beta_I\|_n$ where $I \sim q_{\mathcal{I}}$. Then we can write, using this notation and the triangle inequality (Minkowski's inequality) for the $L^n$-norm:
$$ \mathbb{E}_{q_{\mathcal{I}}}\!\left[ (1 + t\beta_I)^n \right] = \|1 + t\beta_I\|_n^n \le \left( \|1\|_n + t\|\beta_I\|_n \right)^n = (1 + tB)^n. $$
Therefore, continuing the bound above, we obtain:
$$ \frac{\mathrm{Vol}(\mathcal{X}_t)}{\mathrm{Vol}(\mathcal{X})} \le \frac{\sum_{j \in \mathcal{I}} \mathrm{Vol}(\mathcal{X}_j)}{\mathrm{Vol}(\mathcal{X})} \cdot \max_{i \in \mathcal{I}} \alpha_i \cdot \mathbb{E}_{I \sim q_{\mathcal{I}}}\!\left[ (1 + t\beta_I)^n \right] \le \frac{\sum_{j \in \mathcal{I}} \mathrm{Vol}(\mathcal{X}_j)}{\mathrm{Vol}(\mathcal{X})} \cdot \max_{i \in \mathcal{I}} \alpha_i \cdot (1 + tB)^n. $$
This shows $\mathcal{X}$ satisfies the $(A, B)$-volume growth condition where $A = \frac{\sum_{j \in \mathcal{I}} \mathrm{Vol}(\mathcal{X}_j)}{\mathrm{Vol}(\mathcal{X})} \cdot (\max_{i \in \mathcal{I}} \alpha_i)$ and $B = \|\beta_I\|_n$. In particular, $B \le \max_{i \in \mathcal{I}} \beta_i$.

C.4 Volume growth condition under set exclusion

Lemma 4. Let $\mathcal{Y} \subset \mathbb{R}^n$ be a compact body that satisfies the $(\alpha, \beta)$-volume growth condition for some $\alpha \in [1, \infty)$ and $\beta \in (0, \infty)$. Let $\mathcal{X} = \mathcal{Y} \setminus \mathcal{Z}$, where $\mathcal{Z} \subset \mathcal{Y}$ is an open set with $\mathrm{Vol}(\mathcal{Z}) < \mathrm{Vol}(\mathcal{Y})$, and assume $\mathcal{X}$ is compact. Then $\mathcal{X}$ satisfies the $(A, \beta)$-volume growth condition where $A = \alpha \cdot \frac{\mathrm{Vol}(\mathcal{Y})}{\mathrm{Vol}(\mathcal{X})}$.

Proof. Since $\mathcal{X} \subseteq \mathcal{Y}$, we have $\mathcal{X}_t \subseteq \mathcal{Y}_t$ for all $t \ge 0$, where $\mathcal{X}_t = \mathcal{X} \oplus B_t$ and $\mathcal{Y}_t = \mathcal{Y} \oplus B_t$. Thus, since $\mathcal{Y}$ satisfies the $(\alpha, \beta)$-volume growth condition,
$$ \mathrm{Vol}(\mathcal{X}_t) \le \mathrm{Vol}(\mathcal{Y}_t) \le \mathrm{Vol}(\mathcal{Y}) \cdot \alpha \cdot (1 + t\beta)^n. $$
Dividing by $\mathrm{Vol}(\mathcal{X})$ yields
$$ \frac{\mathrm{Vol}(\mathcal{X}_t)}{\mathrm{Vol}(\mathcal{X})} \le \frac{\mathrm{Vol}(\mathcal{Y})}{\mathrm{Vol}(\mathcal{X})} \cdot \alpha \cdot (1 + t\beta)^n. $$
This shows that $\mathcal{X}$ satisfies the $(A, \beta)$-volume growth condition, where $A = \alpha \cdot \frac{\mathrm{Vol}(\mathcal{Y})}{\mathrm{Vol}(\mathcal{X})}$.

Acknowledgment. The authors thank Yunbum Kook for helpful discussions.

References

D. Applegate and R. Kannan. Sampling and integration of near log-concave functions. In STOC, pages 156–163, 1991a.

David Applegate and Ravi Kannan. Sampling and integration of near log-concave functions. In Symposium on Theory of Computing (STOC), pages 156–163. ACM, 1991b.

M. Belkin, H. Narayanan, and P. Niyogi. Heat flow and a faster algorithm to compute the surface area of a convex body. Random Structures Algorithms, 43(4):407–428, 2013. ISSN 1042-9832. doi: 10.1002/rsa.20513.

Stéphane Boucheron, Gábor Lugosi, and Pascal Massart. Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press, 2013.

K. Chandrasekaran, D.
Dadush, and S. Vempala. Thin partitions: Isoperimetric inequalities and a sampling algorithm for star shaped bodies. In SODA, pages 1630–1645, 2010.

Yongxin Chen, Sinho Chewi, Adil Salim, and Andre Wibisono. Improved analysis for a proximal algorithm for sampling. In Conference on Learning Theory, volume 178, pages 2984–3014. PMLR, 2022.

M. E. Dyer and A. M. Frieze. Computing the volume of a convex body: a case where randomness provably helps. In Proc. of AMS Symposium on Probabilistic Combinatorics and Its Applications, pages 123–170, 1991.

Martin Dyer, Alan Frieze, and Ravi Kannan. A random polynomial-time algorithm for approximating the volume of convex bodies. J. Assoc. Comput. Mach., 38(1):1–17, 1991.

Jiaojiao Fan, Bo Yuan, and Yongxin Chen. Improved dimension dependence of a proximal algorithm for sampling. In Conference on Learning Theory, volume 195, pages 1473–1521. PMLR, 2023.

Matthieu Fradelizi and Arnaud Marsiglietti. On the analogue of the concavity of entropy power in the Brunn–Minkowski theory. Advances in Applied Mathematics, 57:1–20, 2014.

Alan Frieze and Ravi Kannan. Log-Sobolev inequalities and sampling from log-concave distributions. Ann. Appl. Probab., 9(1):14–26, 1999. doi: 10.1214/aoap/1029962595. URL https://doi.org/10.1214/aoap/1029962595.

Martin Grötschel, László Lovász, and Alexander Schrijver. Geometric Algorithms and Combinatorial Optimization, volume 2 of Algorithms and Combinatorics. Springer-Verlag, second edition, 1993.

He Jia, Aditi Laddha, Yin Tat Lee, and Santosh Vempala. Reducing isotropy and volume to KLS: faster rounding and volume algorithms. Journal of the ACM, 2026.

R. Kannan, L. Lovász, and M. Simonovits. Random walks and an $O^*(n^5)$ volume algorithm for convex bodies. Random Structures and Algorithms, 11:1–50, 1997.

Ravi Kannan, László Lovász, and Miklós Simonovits.
Isoperimetric problems for convex bodies and a localization lemma. Discrete Comput. Geom., 13(3-4):541–559, 1995.

Hamed Karimi, Julie Nutini, and Mark Schmidt. Linear convergence of gradient and proximal-gradient methods under the Polyak-Łojasiewicz condition. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 795–811. Springer, 2016.

Bo'az Klartag. Logarithmic bounds for isoperimetry and slices of convex sets, 2023.

Yunbum Kook and Santosh S Vempala. Faster logconcave sampling from a cold start in high dimension. In Symposium on Foundations of Computer Science (FOCS). IEEE, 2025a.

Yunbum Kook and Santosh S Vempala. Sampling and integration of logconcave functions by algorithmic diffusion. In ACM Symposium on the Theory of Computing, 2025b.

Yunbum Kook, Santosh Vempala, and Matthew Shunshi Zhang. In-and-Out: Algorithmic diffusion for sampling convex bodies. In arXiv:2405.01425v4 (version 4); presented at the Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024.

Ioannis Koutis. On the hardness of approximate multivariate integration. In Sanjeev Arora, Klaus Jansen, José D. P. Rolim, and Amit Sahai, editors, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, pages 122–128, Berlin, Heidelberg, 2003. Springer Berlin Heidelberg. ISBN 978-3-540-45198-3.

Yin Tat Lee, Aaron Sidford, and Santosh S Vempala. Efficient convex optimization with membership oracles. In Conference On Learning Theory, pages 1292–1294. PMLR, 2018.

Yin Tat Lee, Ruoqi Shen, and Kevin Tian. Structured logconcave sampling with a restricted Gaussian oracle. In Conference on Learning Theory, volume 134, pages 2993–3050. PMLR, 2021.

Jiaming Liang and Yongxin Chen. A proximal algorithm for sampling from non-smooth potentials. In 2022 Winter Simulation Conference (WSC), pages 3229–3240.
IEEE, 2022.

Jiaming Liang and Yongxin Chen. A proximal algorithm for sampling. Transactions on Machine Learning Research, 2023. ISSN 2835-8856.

L. Lovász and M. Simonovits. Random walks in a convex body and an improved volume algorithm. In Random Structures and Alg., volume 4, pages 359–412, 1993.

L. Lovász and S. Vempala. Fast algorithms for logconcave functions: sampling, rounding, integration and optimization. In FOCS, pages 57–68, 2006a.

László Lovász and Miklós Simonovits. The mixing rate of Markov chains, an isoperimetric inequality, and computing the volume. In Symposium on Foundations of Computer Science, pages 346–354. IEEE, 1990.

László Lovász and Santosh S. Vempala. Hit-and-run from a corner. SIAM Journal on Computing, 35(4):985–1005, 2006b.

László Lovász and Santosh S. Vempala. The geometry of logconcave functions and sampling algorithms. Random Structures Algorithms, 30(3):307–358, 2007.

P. M. Vaidya. A new algorithm for minimizing convex functions over convex sets. Mathematical Programming, 73:291–341, 1996.

Santosh S. Vempala and Andre Wibisono. Rapid convergence of the unadjusted Langevin algorithm: isoperimetry suffices. In Geometric Aspects of Functional Analysis, volume 2327 of Lecture Notes in Math., pages 381–438. Springer, Cham, 2023.

J. G. Wendel. Note on the Gamma function. The American Mathematical Monthly, 55(9):563–564, 1948.