Asymptotics of solutions to the linear search problem
The exact leading asymptotics of solutions to the symmetric linear search problem are obtained for any positive probability density on the real line with a monotonic, sufficiently regular tail. A similar result holds for densities on a compact interval.
Authors: Robin A. Heinonen
ASYMPTOTICS OF SOLUTIONS TO THE LINEAR SEARCH PROBLEM

ROBIN A. HEINONEN
MaLGa & DICCA, University of Genoa, Genoa, IT

Abstract. The exact leading asymptotics of solutions to the symmetric linear search problem are obtained for any positive probability density on the real line with a monotonic, sufficiently regular tail. A similar result holds for densities on a compact interval.

1. Introduction

How should one move on a line in order to find, in minimum expected time, an unseen target whose position was drawn from a known probability density? This is the question asked by the classical "linear search problem" (LSP) [10, 3, 13, 1], introduced independently by R. Bellman and A. Beck in the early 1960s. In the literature, a large share of the attention on the LSP has been focused on the case where the probability density is symmetric about the searcher's starting point. In such cases, it is not hard to see that the optimal trajectory zigzags back and forth across the starting point, so that the trajectory may be parametrized by a discrete sequence of turning points $\{x_k\}$ with alternating sign. With this in mind and with no loss of generality, we will, throughout this article, denote the turning points by a sequence of nonnegative real numbers $\{x_k\}$ (i.e., the moduli of the turning points), with the starting point $x_0$ set to 0.

Searching under uncertainty is a central task in many scientific areas (biology, robotics, computer science, and game theory, to name a few), and the LSP is one of the simplest, most fundamental search problems that can be devised.¹ However, despite its simplicity, the LSP is more difficult than it may seem at first glance, and relatively little is known in general about optimal search strategies.
For example, no known algorithm can compute the optimal strategy for general probability densities (although dynamic programming can be used to compute an $\epsilon$-optimal solution for any desired accuracy $\epsilon$ [1]). Moreover, to our knowledge, there are essentially no quantitative results about the optimal strategies for general probability densities. Most of what is known may be summarized by a few special cases, studied by A. Beck (1984, 1986) [5, 6]:

- the uniform distribution on a symmetric, compact interval, wherein the optimal strategy is simply to visit the endpoints;
- the triangular distribution on the same interval, whose optimal trajectory never terminates (and for which some numerical values of the $x_k$ were computed);
- and the normal distribution, for which we have $x_k \sim \sqrt{2k\log k}$.

In the present work, we close this latter gap definitively by rigorously establishing asymptotics for the optimal turning points for any (eventually) monotonic, nonzero density on the real line, under reasonable regularity assumptions. The result may be expressed compactly in terms of the hazard function associated with the target probability density. A similar result can be used to evaluate the asymptotics for many densities on compact intervals. In the cases of both bounded and unbounded domains, we also establish rigorously the asymptotics of the turning points in the case where $p$ decays like a power law (up to undetermined constants), which will serve as an important edge case.

The remainder of the paper is organized as follows. In Sec. 2, we state the LSP precisely along with some useful, well-known results and a few necessary definitions. Then, in Sec. 3 we state our main results. A few interesting special cases are presented in Sec. 4, including the aforementioned triangular distribution and the normal distribution (as a special case of general stretched/compressed-exponential tails). Finally, the main results are proved in Secs. 5-9, and we conclude with a brief discussion in Sec. 10.

Date: February 25, 2026.
¹ The LSP is also very closely related to the 2-lane cow-path problem in computer science [16].

2. Preliminaries

2.1. Problem statement and known results. Formally, the symmetric LSP may be stated as follows:

Problem 1 (Symmetric linear search problem). Fix a probability density $p(x)$ on $\mathbb{R}$ such that $p$ is even (i.e., $p(x) = p(-x)$ for all $x \in \mathbb{R}$). Select a target $x^* \sim p$. Let $\Gamma_1$ be the set of continuous, piecewise $C^1$, unit-speed curves $\gamma : [0,\infty) \to \mathbb{R}$ with $\gamma(0) = 0$. Find
\[ \gamma^* := \arg\min_{\gamma \in \Gamma_1} \mathbb{E}_{x^* \sim p}\left[\inf\{t : \gamma(t) = x^*\}\right]. \]

Much of what we know about Problem 1 was established by A. Beck in a long series of colorfully named papers [3, 4, 8, 9, 5, 6, 7].² For one, as previously stated, the candidate $\gamma$ may be parametrized by a discrete sequence of turning points $\{x_k\}_{k \ge 0}$, and WLOG we can take $x_0 = 0$ and $x_k \ge 0$ for all $k \ge 1$. We also have the following important, well-known facts, which we state without proof:

Proposition 1. Let $\{x_k\}$ be a sequence of turning points parametrizing a minimizing search strategy for Problem 1. Then the following hold:
(1) Such a minimizing strategy $\{x_k\}$ exists if and only if $\int_0^\infty x p(x)\,dx < \infty$.
(2) Define the survival function $G(x) = \int_x^\infty p(t)\,dt$. Then $\{x_k\}$ minimizes the objective function
\[ J[x] := \sum_{k=1}^\infty x_k \left(G(x_k) + G(x_{k-1})\right) \tag{Obj} \]
and in particular we have (by differentiating) the recurrence
\[ (x_k + x_{k+1})\, p(x_k) = G(x_k) + G(x_{k-1}). \tag{Rec} \]
(3) We have $x_{k+1} > x_k$ for all $k \ge 0$ and, if $p > 0$ everywhere, $x_k \to \infty$ as $k \to \infty$.

² "The linear search problem: electric boogaloo" was a working title for the present manuscript.
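The recurrence (Rec) can be iterated forward numerically once a trial value of $x_1$ is chosen; only the correct $x_1$ produces a valid (strictly increasing, divergent) sequence, so in practice one must shoot over $x_1$. A minimal sketch for a standard normal target (the function names and the trial value are illustrative choices of ours, not from the paper):

```python
import math

def p(x):
    # standard normal density
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def G(x):
    # survival function G(x) = ∫_x^∞ p(t) dt
    return 0.5 * math.erfc(x / math.sqrt(2))

def turning_points(x1, n):
    """Iterate (x_k + x_{k+1}) p(x_k) = G(x_k) + G(x_{k-1})  (Rec),
    starting from x_0 = 0 and a trial first turning point x1."""
    xs = [0.0, x1]
    for _ in range(n):
        xk, xkm1 = xs[-1], xs[-2]
        xs.append((G(xk) + G(xkm1)) / p(xk) - xk)
    return xs

xs = turning_points(0.5, 3)
```

The forward iteration is a shooting problem and is numerically unstable: with the trial value above, the sequence stops increasing after a few steps, which by Proposition 1(3) rules it out as optimal; this is one way to see why computing the optimal $x_1$ is difficult.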
The form of the objective function $J$ follows from a straightforward computation of the expectation of the first passage time (note that a $p$-dependent constant term has been omitted). The recurrence (Rec) essentially reduces the computation of the turning points to finding $x_1$, but this is known to be extremely difficult. The third fact, that the optimal turning points are strictly increasing in modulus, is nontrivial and very useful. It is also obvious that any optimal strategy must visit the entirety of the support of $p$, or else the expected first passage time diverges; for example, for $p > 0$ on $\mathbb{R}$, we must have that $x_k \to \infty$ as $k \to \infty$. This latter case will be the main focus of what follows; due to symmetry, it will generally suffice to think of $p$ as a density on $\mathbb{R}^+$.

2.2. The hazard function. We find it useful to perform our analysis in terms of the hazard function, a standard object in the theory of probability densities over $\mathbb{R}^+$ and survival analysis [15].

Definition 1. Let $p$ be a probability density on $\mathbb{R}^+$, and let $G(x) := \int_x^\infty p(t)\,dt$ be its survivor function. The function $h(x) := p(x)/G(x)$ is called the hazard function of $p$.

Equivalently, if $X$ is the random variable specifying the (modulus of the) target position, we have
\[ h(x) = \lim_{\epsilon \to 0} \frac{\Pr(X \in [x, x+\epsilon) \mid X \ge x)}{\epsilon}. \]
That is, $h(x)$ represents the instantaneous rate per unit distance that the target is located at position $x$, given that it is not located closer to the origin than $x$. Notably, the hazard of a density $p$ uniquely specifies $p$; in particular, since $h = p/G = -G'/G$, we have $G(x) = \exp(-\int_0^x h(t)\,dt)$.

We present the hazards of a few well-known kinds of probability densities below.

Example 1 (Pareto). If $p(x) \asymp x^{-\alpha}$ for some $\alpha > 1$, then the hazard is $h(x) \sim (\alpha-1)/x$.

Example 2 (Stretched/compressed exponential). If $p(x) \asymp \exp(-(x/a)^b)$ for some $a, b > 0$, then the hazard is $h(x) \asymp x^{b-1}$. In particular, the exponential distribution has constant hazard, and the normal distribution has linear hazard.

Example 3 (Lognormal). If $p(x) \asymp x^{-1}\exp(-\log^2 x / 2\sigma^2)$ for some $\sigma > 0$, then the hazard is $h(x) \asymp \log x / x$.

Example 4 (Gumbel). If $p(x) \asymp \exp(-\exp(x))$, then $h(x) \asymp \exp(x)$.

It is clear from the definition that the hazard must itself be positive wherever $p > 0$. However, for the kinds of distributions in which we are interested (monotone, sufficiently regular), much more can be said. In particular, we have the following useful Lemma, which establishes a sharp lower bound on $h$.

Lemma 1. Let $p > 0$ be a probability density on $\mathbb{R}^+$ such that $\int_0^\infty x p(x)\,dx < \infty$. Let $h$ be the hazard function associated with $p$. If $h$ is eventually monotone and $L := \lim_{x\to\infty} x h(x)$ exists, then $h(x) \ge c/x$ for some $c > 1$ and large enough $x$.

Proof. If $h$ is eventually increasing, say for all $x > x_0$, then $h(x) \ge h(x_0) > 0$, and for any $c > 0$ we have $x \ge c/h(x_0)$ for large enough $x$; therefore $h(x) \ge c/x$ eventually.

Now suppose $h$ is eventually decreasing. If $L = \infty$, we have immediately that $h(x) = \omega(1/x)$ by definition. Otherwise take $L \in (0, \infty)$. Then $h(x) \sim L/x$ and
\[ \log G(x) = -\int_0^x h(t)\,dt = -L\log x + o(\log x), \]
so $G(x) = x^{-L + o(1)}$. But the first moment is
\[ \int_0^\infty x p(x)\,dx = \int_0^\infty G(x)\,dx < \infty, \]
so $L > 1$. Taking any $c \in (1, L)$ then gives the claim for large enough $x$. □

After dividing through by $G(x_k)$ and taking logarithms, the recurrence (Rec) can be re-expressed in terms of the hazard as follows:
\[ \log\left((x_k + x_{k+1})\,h(x_k) - 1\right) = \int_{x_{k-1}}^{x_k} h(x)\,dx. \tag{Rec'} \]
Through this equation, controlling how, and by how much, the hazard can change over the interval $[x_{k-1}, x_k]$ will provide powerful control over the optimal turning points themselves. This motivates the use of monotonicity and regularity hypotheses on the hazard, which are weak assumptions in the sense that they apply to essentially any commonly studied distribution on $\mathbb{R}^+$ with finite expectation.

2.3. Regular variation and de Haan's class $\Gamma$. In order to control the integral on the RHS of (Rec'), we need the hazard to be sufficiently regular. The precise necessary notions of regularity are standard [12] and stated below.³

³ It is probably possible to generalize and streamline our results by using a unified notion such as "Beurling regular variation" [11], but instead we choose to work with more well-known notions.

Definition 2 (Slow variation). Let $L : [A,\infty) \to \mathbb{R}^+$ for some $A > 0$ be Lebesgue-measurable. We say that $L$ is slowly varying if, for all $\lambda > 0$, $L(\lambda x) \sim L(x)$ as $x \to \infty$.

Morally speaking, slowly varying functions are slower than polynomial; a prototypical example is any power of a logarithm. As a useful fact, they obey the so-called Potter bound: for any $\eta > 0$ there exists $C_\eta$ such that, for $t_1, t_2$ large enough,
\[ \frac{L(t_1)}{L(t_2)} \le C_\eta \max\left( \left(\frac{t_1}{t_2}\right)^{\eta}, \left(\frac{t_1}{t_2}\right)^{-\eta} \right). \]
Slow variation forms the basis of the notion of "regular variation."

Definition 3 (Regular variation). Let $f : [A,\infty) \to \mathbb{R}^+$ for some $A > 0$ be Lebesgue-measurable. We say that $f$ is regularly varying of index $\rho$, and write $f \in R(\rho)$, if $f(x) = x^\rho L(x)$ for some slowly varying function $L$.

In particular, any function with a power-law tail is regularly varying. A standard, useful consequence of regular variation is the uniform convergence property, which says that, for any $\lambda > 0$,
\[ \frac{f(\lambda x)}{f(x)} \to \lambda^\rho \quad \text{uniformly in } \lambda. \]
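The hazard identities above are easy to spot-check numerically. The sketch below (ours, for illustration only) verifies that the exponential distribution has constant hazard and that the standard normal hazard is asymptotically linear, $h(x) \sim x$ (Example 2 with $b = 1$), consistent with regular variation of index 1:

```python
import math

def hazard(p, G, x):
    # h(x) = p(x) / G(x)
    return p(x) / G(x)

# Exponential with rate lam: constant hazard, h ≡ lam
lam = 1.5
h_exp = hazard(lambda x: lam * math.exp(-lam * x),
               lambda x: math.exp(-lam * x), 2.0)

# Standard normal (normalization cancels in the ratio p/G):
# by Mills' ratio, h(x)/x → 1, so h is regularly varying of index 1
p_norm = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
G_norm = lambda x: 0.5 * math.erfc(x / math.sqrt(2))
h_norm = hazard(p_norm, G_norm, 6.0)
```

Note that restricting a symmetric density to $\mathbb{R}^+$ rescales $p$ and $G$ by the same factor of 2, so the hazard is unaffected; this is why the computation above can ignore the half-line normalization.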
Regular variation will help us characterize the solutions for polynomially growing or slower hazard. In order to extend the results to rapidly varying (say, exponentially growing) functions, we also need an alternate kind of regularity, which is also standard:

Definition 4 (De Haan's class $\Gamma$). Let $f : [A,\infty) \to \mathbb{R}^+$ for some $A > 0$ be Lebesgue-measurable. We say that $f$ is in de Haan's class $\Gamma$, and write $f \in \Gamma$, if there exists an auxiliary function $a(x) > 0$ such that for all fixed $t \in \mathbb{R}$,
\[ \frac{f(x + t\,a(x))}{f(x)} \to e^t \quad \text{as } x \to \infty. \]

As a remark, it is well known that any such auxiliary function necessarily satisfies $a(x) = o(x)$ (a fact which we will use more than once) and moreover is self-neglecting, that is,
\[ \frac{a(x + t\,a(x))}{a(x)} \to 1 \]
uniformly in $t$. Note in particular that if $f(x) = \exp(g(x))$, with $g$ increasing and twice-differentiable and $g''/g'^2 \to 0$, then $f \in \Gamma$ with auxiliary function $1/g'$. Hence, $\Gamma$ encompasses essentially any "nice" function growing faster than any polynomial. Conversely, one can show that any $f \in \Gamma$ grows faster than polynomial (see the proof of Theorem 2).

We close this section with a useful fact: $p$ inherits monotonicity from $h$ if $h$ is either RV or in class $\Gamma$.

Proposition 2. Let $p > 0$ be a density on $\mathbb{R}^+$ with hazard $h$. If $h$ is eventually monotone and either regularly varying or in de Haan's class $\Gamma$, then $p$ is eventually monotone decreasing.

Proof. First consider $h \in R(\rho)$ with $\rho < 0$, so that $h$ is eventually monotone decreasing. Note that $p(x) = h(x)\exp(-\int_0^x h(t)\,dt)$, so for any $\lambda > 1$,
\[ \frac{p(\lambda x)}{p(x)} = \frac{h(\lambda x)}{h(x)} \exp\left(-\int_x^{\lambda x} h(t)\,dt\right) \le \frac{h(\lambda x)}{h(x)} \to \lambda^\rho, \]
with the last step coming from uniform convergence. Hence for any $\epsilon > 0$,
\[ \frac{p(\lambda x)}{p(x)} \le \lambda^{\epsilon + \rho} \]
eventually, and taking $\epsilon < |\rho|$ yields $p(\lambda x)/p(x) < 1$ for large enough $x$.

Now consider $\rho > 0$ ($h$ eventually increasing). From the Potter bound, we have for any $\lambda > 1$ and $\eta > 0$,
\[ \frac{h(\lambda x)}{h(x)} \le C_\eta \lambda^{\rho + \eta} \]
for sufficiently large $x$. Moreover, $\int_x^{\lambda x} h(t)\,dt \ge (\lambda - 1)\,x\,h(x)$. Combining these, we have
\[ \frac{p(\lambda x)}{p(x)} \le C_\eta \lambda^{\rho+\eta}\, e^{-(\lambda - 1)\,x\,h(x)} < 1 \]
for large enough $x$.

Finally, we turn our attention to $h \in \Gamma$. In this case, $h$ is eventually increasing, and we have for any $t > 0$,
\[ \int_x^{x + t\,a(x)} h(u)\,du \ge t\,a(x)\,h(x), \]
so
\[ \frac{p(x + t\,a(x))}{p(x)} \le \frac{h(x + t\,a(x))}{h(x)}\, e^{-t\,a(x)\,h(x)} = e^{t(1 - a(x)h(x)) + o(1)}. \]
It remains to show that $a(x)h(x) \to \infty$. For any $t > 0$ and $\epsilon \in (0,1)$, $h(x + a(x))/h(x) \ge e^{1-\epsilon}$ and (due to the self-neglecting property) $a(x + a(x))/a(x) \ge 1 - \epsilon$ for large enough $x$. Then, defining a sequence $\{x_n\}$ by $x_{n+1} = x_n + a(x_n)$, we have
\[ \frac{a(x_{n+1})\,h(x_{n+1})}{a(x_n)\,h(x_n)} \ge (1-\epsilon)\,e^{1-\epsilon} > 1 \]
for small enough $\epsilon$. Hence $a(x_n)h(x_n) \to \infty$. We can extend this to any $x$ by choosing $n$ such that $x \in [x_n, x_{n+1}]$, which gives $a(x) \asymp a(x_n)$ (by self-neglecting) and hence $a(x)h(x) \gtrsim a(x_n)h(x_n) \to \infty$. □

3. Results

We first prove the following lemma, which essentially says that the solutions to the LSP cannot grow faster than exponentially under monotone hazards. To establish this control, it is necessary to use the objective function itself (rather than just the recurrence). The proof strategy, which involves introducing a "competitor" sequence $y_k$, was previously used, e.g., in [5] when studying the special case of a normally distributed target.

Lemma 2. Let $\{x_k\}$ be a minimizing sequence of turning points for Problem 1. Under the same hypotheses on $h$ and $p$ as in Lemma 1, there exists $M > 1$ such that $x_{k+1} \le M x_k$ for large enough $k$.

Proof.
For any sequence $\{y_k\}$ such that $y_k = x_k$ for all $k \le N$, we may write
\[ J[y] - J[x] = \sum_{k \ge N} \left((y_k + y_{k+1})G(y_k) - (x_k + x_{k+1})G(x_k)\right) \]
\[ \le (y_N + y_{N+1})G(y_N) - (x_N + x_{N+1})G(x_N) + \sum_{k \ge N+1} (y_k + y_{k+1})G(y_k) \]
\[ = (y_{N+1} - x_{N+1})G(x_N) + \sum_{k \ge N+1} (y_k + y_{k+1})G(y_k). \]
But by optimality of $x_k$, $J[y] \ge J[x]$, so
\[ x_{N+1} \le y_{N+1} + \frac{1}{G(x_N)} \sum_{k \ge N+1} (y_k + y_{k+1})G(y_k). \tag{1} \]
Now, by Lemma 1, there exist $c > 1$ and $x_0 > 0$ such that $h(x) \ge c/x$ for all $x > x_0$. Then $u \ge v \ge x_0$ implies
\[ \log\frac{G(u)}{G(v)} = -\int_v^u h(t)\,dt \le -\int_v^u \frac{c}{t}\,dt = -c\log\frac{u}{v} \implies G(u) \le G(v)\left(\frac{u}{v}\right)^{-c}. \tag{2} \]
Take $N$ large enough that $x_N \ge x_0$, and set $r = 2^{1/c} \in (1,2)$. Define $y_{N+j} = x_N r^j$ for $j \ge 1$. Then combining Eq. 1 and Eq. 2 yields
\[ x_{N+1} \le r x_N + \sum_{j=1}^\infty x_N r^j (1 + r)\, r^{-cj} = r x_N + \frac{x_N(1+r)}{r^{c-1} - 1}, \]
so $x_{N+1} \le M x_N$ with $M = (r^c + 1)/(r^{c-1} - 1) = 3/(2r^{-1} - 1) \in (1, \infty)$. □

The Lemma can be strengthened if we know that $h(x) = \omega(1/x)$.

Corollary 1. If, in addition to the hypotheses of Lemma 2, we have $h(x) = \omega(1/x)$, then for any $M > 1$ there exists $K$ such that $x_{k+1} \le M x_k$ whenever $k \ge K$.

Proof. Same as the proof of Lemma 2, mutatis mutandis. □

We now turn our attention to the increments $\Delta_k := x_k - x_{k-1}$. Under an additional weak regularity assumption on $h$, we can establish a trichotomy which classifies the limiting value of these increments (zero, finite, or infinite).

Theorem 1. Let $p > 0$ be a probability density on $\mathbb{R}^+$ such that $\int_0^\infty x p(x)\,dx < \infty$. Let $h$ be the hazard function associated with $p$, and let $\{x_k\}$ be a minimizing sequence of turning points for Problem 1 (using the symmetric extension $p(x) = p(-x)$ as the target density). Define $\Delta_k := x_k - x_{k-1}$. If $h$ is eventually monotone and $L := \lim_{x\to\infty} x h(x)$ exists, then the following hold:
(1) If $h(x) = o(\log x)$, then $\Delta_k \to \infty$.
(2) If $h(x) = \omega(\log x)$, then $\Delta_k \to 0$.
(3) If $h(x) \sim c \log x$ for some $c \in (0,\infty)$, then $\Delta_k \to 1/c$.

In particular, asymptotically logarithmic hazards (equivalent to densities with tail $p \asymp \exp(-cx\log x + O(x))$) act as a boundary case where the optimal turning points grow linearly. This is useful for certain applications which introduce an inequality constraint to the LSP; see for example [14].

While the increments are interesting objects in their own right, the central result of this article is that, with a bit of extra regularity on $h$ (in the sense defined in the previous section), studying them in fact allows us to characterize the exact leading asymptotics of $x_k$. We must exclude the case of power-law tails ($h(x) \sim 1/x$), wherein the LSP behaves qualitatively differently. This is summarized in the following Theorem.

Theorem 2. Suppose, in addition to the hypotheses of Theorem 1, that $h(x) = \omega(1/x)$ and either $h \in R(\rho)$ for some $\rho \in \mathbb{R}$ or $h \in \Gamma$. Then
\[ \Delta_k = \frac{\log(2 x_k h(x_k))}{h(x_k)} + o\left(\frac{1}{h(x_k)}\right) \tag{3} \]
and
\[ k \sim \int^{x_k} \frac{h(x)}{\log(x h(x))}\,dx.^4 \tag{4} \]

⁴ We are unable to characterize the error term in Eq. 4 without finer-grained control on $h$.

Theorem 2 characterizes the asymptotics of any (well-behaved) density which decays faster than a power law; for any explicit expression for $h$, we can in principle compute the integral (4) and invert to find the leading behavior of $x_k$. For completeness, we also work out the case of power-law (Pareto) tails, in which case the optimal turning points grow exponentially.⁵

⁵ This result is likely known, but it is unclear if it has ever been published.

Proposition 3. Suppose $h(x) \sim a/x$ with $a > 1$, with $h$ eventually monotone. Then under optimality, $x_k = r^{k + o(k)}$, where $r$ is the unique solution to
\[ r^a = a(r+1) - 1 \tag{5} \]
such that $r > 1$. Moreover, if $h(x) = a/x + O(x^{-(1+\epsilon)})$ for some $\epsilon > 0$, then $x_k \asymp r^k$.

The proof is deferred to Sec. 7.

3.1. Extension to compact intervals. If $p$ is instead supported only on a compact interval, say $[-1,1]$ WLOG, the LSP may seem qualitatively different, at least prima facie. However, a result closely analogous to Theorem 2 holds and characterizes the asymptotics as $x \to 1$ for regular and monotonic hazards. A subtlety is that we must first establish that the sequence of optimal turning points does not terminate; the question of whether or not the sequence terminates was already answered in Ref. [2] (indeed, in greater generality than shown here), but we present the following simple results for clarity.

Proposition 4. Let $p > 0$ be a symmetric, continuous probability density on $(-1,1)$ such that $\lim_{x \uparrow 1} p(x) = 0$. If $p$ is monotone on some interval $[A, 1)$ with $A \in (0,1)$, then the sequence $x_k$ does not terminate (i.e., $x_k < 1$ for all $k$).

Proof. Suppose the contrary; then $x_{N-1} = x_N = 1$ for some $N$. Consider a second sequence $\{y_k\}$ terminating at $N+1$ with $y_k = x_k$ for $0 \le k \le N-2$, $y_{N-1} = t > x_{N-2}$, and $y_N = y_{N+1} = 1$. Then
\[ J[y] - J[x] = (t-1)G(x_{N-2}) + (t+1)G(t) \ge 0, \]
so for any $t \in [0,1)$, $G(t)/(1-t) \ge G(x_{N-2})/(1+t)$. But for $t$ close enough to 1, $p$ is monotonically decreasing, and thus we can take $G(t)/(1-t) \le p(t)$, and hence $p(t) \ge G(x_{N-2})/(1+t) > \frac{1}{2}G(x_{N-2})$. But we can always take $t$ close enough to 1 that this is false, a contradiction. □

On the other hand, if $p$ does not go to 0 at the boundary of the interval, it is easy to show that the optimal search strategy reaches the boundary in finite time, and it does not make sense to talk about large-$k$ asymptotics.

Proposition 5. Let $p > 0$ be a symmetric, continuous probability density on $(-1,1)$ such that $L := \lim_{x \uparrow 1} p(x) > 0$. Then there exists $N \in \mathbb{N}$ such that $x_N = 1$.

Proof. Assume the contrary, so $x_k < 1$ for all $k$. Then $x_k \uparrow 1$ as $k \to \infty$. Since $p$ is continuous, $G(x) = \int_x^1 p(t)\,dt \to 0$. But $(x_k + x_{k+1})p(x_k) = G(x_k) + G(x_{k-1})$, and the LHS $\to 2L > 0$ while the RHS $\to 0$, a contradiction. □

If the hazard is sufficiently regular and goes to infinity at the boundary sufficiently quickly, we have the following theorem, which we prove in Sec. 8.

Theorem 3. Let $p > 0$ be a symmetric, continuous probability density on $(-1,1)$ such that $\lim_{x\uparrow 1} p(x) = 0$. Let $G(x) = \int_x^1 p(t)\,dt$ and $h(x) = p(x)/G(x)$. Let $\{x_k\}$ be an optimal strategy for Problem 1, and let $\Delta_k := x_k - x_{k-1}$. If, on some interval $[A, \infty)$ with $A > 0$, $\tilde h(x) := h(1 - 1/x)$ is monotone and either regularly varying of index $\rho > 1$ or in de Haan's class $\Gamma$, then the asymptotics Eqs. 3-4 hold.

Once again, we find that $h(\epsilon) \asymp 1/\epsilon$ (with $\epsilon = 1 - x$) represents a boundary case which must be treated differently from the others. To study this case, it is illuminating to first establish the following simple Lemma, the analog of Lemma 1:

Lemma 3. Let $p$ be a continuous and eventually monotone density on $(0,1)$ with $p \to 0$ as $x \uparrow 1$. Then the hazard obeys $h(x) \ge 1/(1-x)$ for $x \in (0,1)$ close enough to 1.

Proof. Since $p \to 0$ and is positive, $p$ is monotone decreasing on $[A,1)$ for some $A > 0$. Then for all $x \in [A,1)$,
\[ G(x) = \int_x^1 p(t)\,dt \le (1-x)\,p(x), \]
and so $h(x) = p(x)/G(x) \ge 1/(1-x)$. □

Note the important distinction: because finite expectation is not a useful control for densities on compact intervals, we are unable to guarantee that $h(x) \ge c/(1-x)$ with $c > 1$ strictly. In the event that $c > 1$, which corresponds to densities going to 0 like a power law at the boundary, we can prove the following proposition, showing that the residual $1 - x_k$ decays doubly exponentially.

Proposition 6. Let $p > 0$ be a symmetric, continuous probability density on $(-1,1)$. If the hazard $h$ satisfies $h \sim c/(1-x)$ as $x \to 1$ for some $c > 1$, then
\[ \log(1 - x_k) \asymp -\left(\frac{c}{c-1}\right)^k. \]
If, furthermore, $h(x) = c/(1-x) + O((1-x)^{\delta-1})$ for some $\delta > 0$, then
\[ 1 - x_k \sim 2c\,\exp\left(-A\left(\frac{c}{c-1}\right)^k\right) \]
for some $A > 0$.

The proof of Proposition 6 is given in Sec. 9. However, the boundary case where $h(x) \sim 1/(1-x)$ remains difficult to describe in generality; this corresponds to $p(1-\epsilon) \sim \ell(1/\epsilon)$ for some $\ell$ slowly varying. In this case, the asymptotics of the optimal turning points appear to depend sensitively on the next-to-leading behavior of the hazard. We defer the classification of such cases to future study.

4. Examples

Eq. 4 allows us to find the asymptotics of $\{x_k\}$ for essentially any well-behaved, eventually monotonic hazard on the real line, except for the degenerate case $h(x) \asymp 1/x$ (power-law tails), which was handled in Proposition 3. Here are a few interesting examples which do not appear to be known in the literature; they follow from straightforward calculations, whose details we have suppressed for the sake of brevity.

Example 5 (Compressed/stretched exponential tails). Suppose, in addition to the hypotheses of Theorem 2, we have $h(x) \sim a x^b$ for some $a > 0$, $b > -1$. Then under optimality,
\[ x_k \sim \left(\frac{1+b}{a}\,k\log k\right)^{1/(1+b)}. \]
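Eq. 4 can be sanity-checked against Example 5 numerically. For $h(x) = x$ (the normal case, $a = b = 1$), Example 5 predicts $x_k \sim \sqrt{2k\log k}$; plugging this prediction back into the integral of Eq. 4 should recover $k$, up to corrections that vanish only logarithmically in $k$. A sketch (the quadrature grid and lower cutoff are arbitrary choices of ours, not from the paper):

```python
import math

def eq4_integral(x, h, lo=2.0, n=200_000):
    """Trapezoid-rule evaluation of the Eq. 4 integral ∫_lo^x h(t)/log(t h(t)) dt."""
    f = lambda t: h(t) / math.log(t * h(t))
    dt = (x - lo) / n
    s = 0.5 * (f(lo) + f(x)) + sum(f(lo + i * dt) for i in range(1, n))
    return s * dt

k = 10**6
x_pred = math.sqrt(2 * k * math.log(k))   # Example 5 with a = b = 1
ratio = eq4_integral(x_pred, lambda t: t) / k
```

At $k = 10^6$ the implied index agrees with $k$ only to within roughly 15 percent; the discrepancy shrinks like $1/\log k$, consistent with the unquantified error term in Eq. 4.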
Note that the above subsumes as special cases both the exponential ($b = 0$) and normal ($b = 1$) distributions; the asymptotics of $\{x_k\}$ in the latter case were one of the main results of Ref. [5].

Example 6 (Lognormal tails). Suppose, in addition to the hypotheses of Theorem 2, we have $h(x) \sim a\log x/x$ for some $a > 0$. Then under optimality,
\[ x_k \sim e^{\sqrt{k\log k/a}}. \]

Example 7 (Gumbel tails). Suppose, in addition to the hypotheses of Theorem 2, we have $h(x) \sim e^{ax}$ for some $a > 0$. (Unlike the previous examples, this hazard is in de Haan's class $\Gamma$ and not RV.) Then under optimality,
\[ x_k \sim \frac{1}{a}\log k. \]

Let us also present two examples of distributions on a compact interval. First, consider the triangular distribution. It was previously shown in Ref. [5] that the optimal search strategy in this case has an infinite number of turning points, but no quantitative asymptotic was given. A quick calculation gives $h(x) = 2/(1-x)$, and we can apply Proposition 6 to find that the distance to the interval boundary decays like a double exponential.

Example 8 (Triangular distribution). Let a symmetric probability density $p > 0$ be supported on $(-1,1)$ and suppose the hazard obeys $h(x) = 2/(1-x) + O((1-x)^{\delta-1})$ for some $\delta > 0$. Then under optimality,
\[ 1 - x_k \sim 4 e^{-A 2^k} \]
for some $A > 0$.

Finally, we consider tails decaying like $p(1-\epsilon) \asymp \exp(-a/\epsilon^c + o(1/\epsilon^c))$, $c > 0$. Due to the rapid decay of $p$ at the boundary, $1 - x_k$ decays much more slowly in this case.

Example 9 (A fast-decaying family of distributions on $(-1,1)$). Let a symmetric probability density $p > 0$ be supported on $(-1,1)$ and suppose the hazard obeys $h(x) \sim a/(1-x)^{1+b}$ for some $a, b > 0$. Then under optimality,
\[ 1 - x_k \sim \left(\frac{1+b}{a}\,k\right)^{-1/b}. \]

5. Proof of Theorem 1

Our proof strategy is similar to that of Lemma 2, in that we construct a competitor sequence which upper bounds the growth of the increments, this time in an $h$-dependent manner.

Proof. From (Rec), we have by monotonicity that
\[ \underline{h}_k := \min\{h(x_{k-1}), h(x_k)\} \le \frac{\int_{x_{k-1}}^{x_k} h(x)\,dx}{\Delta_k} \le \max\{h(x_{k-1}), h(x_k)\} =: \bar h_k. \tag{6} \]
Thus we have
\[ \Delta_k \ge \frac{\log\left((x_k + x_{k+1})h(x_k) - 1\right)}{\bar h_k} \ge \frac{\log\left(2 x_k h(x_k) - 1\right)}{\bar h_k} \ge \frac{\log\left(x_k h(x_k)\right)}{\bar h_k}, \tag{7} \]
where the last inequality follows from Lemma 1.

We first focus on the case $h(x) = o(\log x)$. If $x_k \ge 2x_{k-1}$, then $\Delta_k = x_k - x_{k-1} \ge x_k/2 \to \infty$. On the other hand, if $x_k < 2x_{k-1}$, then $\log x_k = \log x_{k-1} + O(1)$, so $\bar h_k = o(\log x_k)$ regardless of whether $h$ is increasing or decreasing. Hence
\[ \Delta_k \ge \frac{\log x_k + O(1)}{o(\log x_k)} \to \infty. \tag{8} \]
This proves the first case.

Now suppose that $h$ is increasing, consistent with $h = \Omega(\log x)$. Then, in order to apply Eq. 1, note that $u \ge v$ implies
\[ G(u) = G(v)\exp\left(-\int_v^u h(x)\,dx\right) \le G(v)\,e^{-h(v)(u-v)}. \tag{9} \]
Set $y_{N+j} = x_N + rj$ for each $j \ge 1$, with $r = \log(x_N h(x_N))/h(x_N)$. We then have
\[ \frac{G(y_{N+j})}{G(x_N)} \le \exp(-h(x_N)\,rj) = (x_N h(x_N))^{-j}. \]
Inserting into Eq. 1, we find
\[ x_{N+1} - x_N \le r + \sum_{j=1}^\infty (2x_N + 2rj + r)(x_N h(x_N))^{-j} = r + \frac{2x_N + r}{x_N h(x_N) - 1} + \frac{2 x_N h(x_N)\,r}{(x_N h(x_N) - 1)^2} = r + O\left(\frac{1}{h(x_N)}\right). \]
But the choice of $N$ was arbitrary, so we have (in combination with Eq. 7, and using that $h$ is monotonically increasing)
\[ \frac{\log(x_k h(x_k))}{h(x_k)} \le \Delta_k \le \frac{\log(x_{k-1} h(x_{k-1})) + O(1)}{h(x_{k-1})}. \tag{10} \]
The upper and lower bounds both tend to zero if $h(x) = \omega(\log x)$.
On the other hand, by Lemma 2, $\log x_k = \log x_{k-1} + O(1)$, so if $h(x) \sim c\log x$, the upper and lower bounds are both equal to $1/c + O(\log\log x_k/\log x_k)$, and it follows that $\Delta_k \to 1/c$. □

6. Proof of Theorem 2

Equations 6 and 10 are already strongly suggestive of the final result. The main ingredient that extra regularity provides is a guarantee that $h(x_{k-1}) \sim h(x_k)$, through either the uniform convergence property of regular variation or the $o(x)$ property of the auxiliary function for de Haan's class $\Gamma$. The validity of integrating the increments (Eq. 4) follows by establishing that the increments do not grow too quickly.

Proof. Using Corollary 1 and $h(x) = \omega(1/x)$, we have from (Rec') that
\[ \int_{x_{k-1}}^{x_k} h(x)\,dx = \log(2 x_k h(x_k)) + o(1). \]
Since $h$ is eventually monotone, we can use Eq. 6 to write
\[ \frac{\log(2 x_k h(x_k)) + o(1)}{\bar h_k} \le \Delta_k \le \frac{\log(2 x_k h(x_k)) + o(1)}{\underline h_k}. \tag{11} \]
We first show that $\Delta_k = o(x_k)$. If $h$ is decreasing, then $\underline h_k = h(x_k)$ and Eq. 11 gives
\[ \frac{\Delta_k}{x_k} \le \frac{\log(x_k h(x_k)) + O(1)}{x_k h(x_k)} \to 0, \]
since $x_k h(x_k) \to \infty$. On the other hand, with $h$ increasing, Eq. 10 still holds, and in particular
\[ \frac{\Delta_k}{x_k} \le \frac{\Delta_k}{x_{k-1}} \le \frac{\log(x_{k-1} h(x_{k-1})) + O(1)}{x_{k-1} h(x_{k-1})} \to 0. \]

Next, we show that if $h \in R(\rho)$ is regularly varying, then $h(x_k) \sim h(x_{k-1})$, so that $\bar h_k \sim \underline h_k$, yielding Eq. 3 from Eq. 11. By the uniform convergence property of regularly varying functions, we have for any $u(x) = o(x)$ that
\[ \frac{h(x + u(x))}{h(x)} = \frac{h(x(1 + u(x)/x))}{h(x)} = \left(1 + \frac{u(x)}{x}\right)^{\rho} + o(1) \to 1. \]
Taking $u(x_k) = -\Delta_k$ gives $h(x_k - \Delta_k)/h(x_k) = h(x_{k-1})/h(x_k) \to 1$.

Now we prove Eq. 4 for regularly varying hazard. Let $f(x) = \log(x h(x))/h(x)$. It is easy to see that $f$ is regularly varying of index $-\rho$. The uniform convergence property then gives, for any $\eta > 0$,
\[ \left|\frac{f(\lambda x)}{f(x)} - \lambda^{-\rho}\right| \le \eta \]
uniformly in $\lambda$ for large enough $x$. It follows that there exists $\delta > 0$ such that $|1 - \lambda| \le \delta$ implies
\[ 1 - \eta \le \frac{f(\lambda x)}{f(x)} \le 1 + \eta. \]
Putting $x = x_k$ and $\lambda = t/x_k$ for $t \in [x_{k-1}, x_k]$, we have $\lambda \in [1 - \Delta_k/x_k, 1]$, so since $\Delta_k/x_k \to 0$, we eventually have $(1-\eta)f(x_k) \le f(t) \le (1+\eta)f(x_k)$. It follows that
\[ \frac{\Delta_k}{(1+\eta)f(x_k)} \le \int_{x_{k-1}}^{x_k} \frac{dt}{f(t)} \le \frac{\Delta_k}{(1-\eta)f(x_k)}. \]
But $\Delta_k/f(x_k) \to 1$, so taking $\eta \to 0$, we find $\int_{x_{k-1}}^{x_k} \frac{dt}{f(t)} \to 1$. Let $F(x) = \int^x \frac{dt}{f(t)}$. Then for any fixed $k_0 < k$,
\[ F(x_k) - F(x_{k_0}) = \sum_{j=k_0}^{k-1} \left(F(x_{j+1}) - F(x_j)\right) \sim k - k_0, \]
so $F(x_k) \sim k$. This is the desired result.

Finally, we move to the case where $h \in \Gamma$. Let $a$ be the auxiliary function. First, we claim that
\[ a(x) = \omega\left(\frac{\log h(x)}{h(x)}\right). \tag{12} \]
Suppose the contrary: then there is some $C > 0$ such that $a(x) \le C\log h(x)/h(x)$ for large enough $x$. By Def. 4 with $t = 1$, we have $h(x + a(x))/h(x) \to e$. Define a sequence $y_{n+1} = y_n + a(y_n)$ for $n \ge 0$. Iterating then gives
\[ h(y_n) \sim e^n h(y_0). \tag{13} \]
Hence,
\[ a(y_n) \le C\,\frac{\log h(y_n)}{h(y_n)} \sim \frac{C n}{e^n h(y_0)}, \]
and it follows that $\sum_{n \ge 0} a(y_n) < \infty$, and therefore $y_n$ converges to a finite limit, contradicting Eq. 13.

On the other hand, we claim that $h(x)/x^m \to \infty$ for any $m > 0$. Indeed, if $h$ has auxiliary function $a$, then for any $t > 1$ we have $h(x + t a(x)) \ge e^t h(x)$ for large enough $x$. Define $y_{n+1} := y_n + t a(y_n)$. Then $h(y_n) \ge e^t h(y_{n-1}) \ge \cdots \ge e^{nt} h(y_0)$. But $t a(y_n) \le y_n/m$ eventually, since $a(x) = o(x)$, so $y_{n+1} \le (1 + 1/m)y_n$ and hence $y_n \le y_0(1 + 1/m)^n$. It follows that
\[ \frac{h(y_n)}{y_n^m} \ge \frac{h(y_0)}{y_0^m}\left(\frac{e^t}{(1 + 1/m)^m}\right)^n \to \infty, \]
since $e^t > e > (1 + 1/m)^m$ for any $m$. The claim follows from $h$ being monotonic.
Hence, $\log x = o(\log h)$, and we can refine Eq. 12 further to
\[
a(x) = \omega\!\left(\frac{\log(x h(x))}{h(x)}\right). \tag{14}
\]
Next, note that for $r > 0$,
\[
\int_{x - r a(x)}^{x} h(t)\,dt = a(x) h(x) \int_0^r \frac{h(x - s a(x))}{h(x)}\,ds \sim a(x) h(x) \int_0^r e^{-s}\,ds = (1 - e^{-r})\, a(x) h(x).
\]
We now claim that $\Delta_k = o(a(x_k))$. Otherwise, there is a subsequence $x_{k_j}$ and $r > 0$ such that $\Delta_{k_j} \ge r a(x_{k_j})$. Then
\[
\frac{1}{a(x_{k_j}) h(x_{k_j})} \int_{x_{k_j - 1}}^{x_{k_j}} h(t)\,dt \ \ge\ \frac{1}{a(x_{k_j}) h(x_{k_j})} \int_{x_{k_j} - r a(x_{k_j})}^{x_{k_j}} h(t)\,dt \ \sim\ 1 - e^{-r}.
\]
But combining
\[
\int_{x_{k-1}}^{x_k} h(t)\,dt \le \log(x_k h(x_k)) + O(1)
\]
with Eq. 14 yields
\[
\frac{1}{a(x_k) h(x_k)} \int_{x_{k-1}}^{x_k} h(t)\,dt \to 0,
\]
a contradiction.

Thus, setting $t_k = \Delta_k / a(x_k) = o(1)$, we have
\[
\frac{h(x_{k-1})}{h(x_k)} = \frac{h(x_k - t_k a(x_k))}{h(x_k)} \to 1
\]
by Def. 4. This means we once again have Eq. 3.

Finally, we have $h(x - u(x))/h(x) \to 1$ uniformly whenever $u(x) = o(a(x))$. But $a(x) = o(x)$, so we further have
\[
\frac{\log\big((x - u(x))\, h(x - u(x))\big)}{\log(x h(x))} \to 1,
\]
and so $f(x - u(x))/f(x) \to 1$, with $f(x) = \log(x h(x))/h(x)$ as before. This allows us to reason as before, so that Eq. 4 holds. $\square$

7. Proof of Proposition 3

We first need to establish existence and uniqueness of the root of Eq. 5. Regular variation is then enough to establish exponential growth up to a subleading error term; more precise control on the asymptotics of $h$ is in turn enough to establish that this error term is in fact zero.

Proof. First, we claim that Eq. 5 has a unique solution on $(1, \infty)$. To see this, write $F(x) := (x^a + 1)/a - 1 - x$, so that solutions satisfy $F(x) = 0$. Then $x > 1$ implies $F'(x) = x^{a-1} - 1 > 0$, so $F$ is strictly increasing on $(1, \infty)$. Moreover, $F(1) = 2(1-a)/a < 0$, while $F(x) \to \infty$ as $x \to \infty$, so the claim follows. Now let $r_k = x_k / x_{k-1}$.
By Lemma 2, $r_k \in (1, C]$ eventually, for some $C > 1$. Because $p$ is measurable and $p > 0$, $h$ is also measurable and $h \in R(-1)$, so the uniform convergence property of regularly varying functions gives
\[
\sup_{t \in [1, C]} \left|\frac{h(tx)}{h(x)} - \frac{1}{t}\right| \to 0,
\]
and hence, using $x h(x) \to a$,
\[
\sup_{t \in [1, C]} \left|\frac{h(tx)}{a/(tx)} - 1\right| \to 0,
\]
and thus
\[
\sup_{x \in [x_{k-1}, x_k]} |h(x) - a/x| \le \sup_{x \in [x_k/C,\, x_k]} |h(x) - a/x| = o(1/x_k).
\]
Hence,
\[
\int_{x_{k-1}}^{x_k} h(x)\,dx = a \int_{x_{k-1}}^{x_k} \frac{dx}{x} + o\!\left(\int_{x_{k-1}}^{x_k} \frac{dx}{x_k}\right) = a \log r_k + o(1). \tag{15}
\]
On the other hand,
\[
(x_k + x_{k+1})\, h(x_k) - 1 = x_k h(x_k)(1 + r_{k+1}) - 1 = (a + o(1))(1 + r_{k+1}) - 1 = a(1 + r_{k+1}) - 1 + o(1),
\]
where the last step used the boundedness of $r_{k+1}$. Thus, from (Rec') and Eq. 15, we have
\[
r_{k+1} = \frac{r_k^a + 1}{a} - 1 + o(1) = g(r_k) + o(1), \tag{16}
\]
where we have defined $g(x) = (x^a + 1)/a - 1$.

Next, let $m := \liminf_{k \to \infty} r_k$ and $M := \limsup_{k \to \infty} r_k$. By Lemma 2, $M < \infty$. We also have $m > 1$: if not, then there is a subsequence $k_j$ such that $r_{k_j} \to 1$, and
\[
\int_{x_{k_j - 1}}^{x_{k_j}} h(x)\,dx \sim a \log r_{k_j} \to 0. \tag{17}
\]
But then, using (Rec'), $x h(x) \to a$, and $r_{k_j+1} > 1$,
\[
\exp\!\left(\int_{x_{k_j - 1}}^{x_{k_j}} h(x)\,dx\right) = x_{k_j} h(x_{k_j})(1 + r_{k_j + 1}) - 1 \ge (a + o(1)) \cdot 2 - 1 = 2a - 1 + o(1),
\]
which stays bounded away from $1$ since $a > 1$, contradicting Eq. 17.

Now, choose a subsequence $k_j$ with $r_{k_j} \to M$. From Eq. 16, $r_{k_j + 1} \to g(M)$. But $\limsup_{j \to \infty} r_{k_j + 1} \le M$, so $g(M) \le M$ and $F(M) \le 0$. Since $F$ is increasing and $F(r) = 0$, we conclude $M \le r$. Similarly, along a subsequence with $r_{k_j} \to m$, we get $r_{k_j + 1} \to g(m)$, and mutatis mutandis we conclude $m \ge r$. Therefore $r_k \to r$.

Finally, since $x_k = x_0 \prod_{j=1}^k r_j$,
\[
\frac{1}{k} \log x_k = \frac{1}{k} \log x_0 + \frac{1}{k} \sum_{j=1}^k \log r_j \to \log r,
\]
so $x_k = \exp(k(\log r + o(1))) = r^{k + o(k)}$. This proves the first part.
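The limiting ratio just obtained can be checked numerically. Since $F$ is strictly increasing on $(1, \infty)$ with $F(1) < 0$, bisection recovers the unique root $r = \lim_k r_k$ of Eq. 5. A minimal sketch (the value $a = 2$ is an illustrative choice, for which the root is $1 + \sqrt{2}$ in closed form):

```python
# Unique root of Eq. 5 on (1, inf): F(x) = (x**a + 1)/a - 1 - x is strictly
# increasing there with F(1) = 2(1 - a)/a < 0, so a sign change brackets the
# root r, the limit of the turning-point ratios r_k. Here a = 2 (illustrative).
a = 2.0
F = lambda x: (x ** a + 1) / a - 1 - x

lo, hi = 1.0, 10.0          # F(1) < 0 and F(10) > 0 bracket the root
for _ in range(60):         # bisection: the bracket halves at each step
    mid = 0.5 * (lo + hi)
    if F(mid) < 0:
        lo = mid
    else:
        hi = mid
r = 0.5 * (lo + hi)
print(r)                    # for a = 2 this is 1 + sqrt(2) ~ 2.41421
```

Sixty halvings of the initial bracket of width 9 leave an interval far below any practical tolerance, so the printed value agrees with $1 + \sqrt{2}$ to machine precision.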
If $h(x) - a/x = O(x^{-(1+\epsilon)})$, then
\[
\int_{x_{k-1}}^{x_k} h(x)\,dx = a \log r_k + O\big(x_{k-1}^{-\epsilon}\big)
\]
and
\[
(x_k + x_{k+1})\, h(x_k) - 1 = a(1 + r_{k+1}) - 1 + O\big(x_k^{-\epsilon}\big).
\]
Since $r_{k+1}$ is eventually bounded, we then have
\[
\log\big((x_k + x_{k+1})\, h(x_k) - 1\big) = \log\big(a(1 + r_{k+1}) - 1\big) + O\big(x_k^{-\epsilon}\big),
\]
and so
\[
a \log r_k = \log\big(a(1 + r_{k+1}) - 1\big) + O\big(x_{k-1}^{-\epsilon}\big),
\]
or
\[
r_k^a = a(1 + r_{k+1}) - 1 + O\big(x_{k-1}^{-\epsilon}\big).
\]
Put $G(x) := \big(a(1 + x) - 1\big)^{1/a}$ and call the error term $\xi_k$. We have
\[
G'(x) = \big(a(1 + x) - 1\big)^{1/a - 1} = G(x)^{1 - a},
\]
so $G'(r) = r^{1-a} \in (0, 1)$, and $G$ is a contraction in a neighborhood of $r$.

Since $G(r) = r$, we have $r_k - r = G(r_{k+1}) - G(r) + O(\xi_k)$, and so
\[
|r_k - r| \le \lambda |r_{k+1} - r| + |\xi_k|
\]
with $\lambda \in (0, 1)$. It follows that
\[
|r_k - r| \le \lambda^n |r_{k+n} - r| + \sum_{j=0}^{n-1} \lambda^j |\xi_{k+j}|
\]
for each $n \ge 1$. Taking $n \to \infty$, we conclude
\[
|r_k - r| \le \sum_{j=0}^{\infty} \lambda^j |\xi_{k+j}|.
\]
We already showed that $\liminf r_k > 1$, so eventually $r_k \ge 1 + \delta$ for some $\delta > 0$. Then $x_{k+j} \ge x_k (1 + \delta)^j$, so
\[
|\xi_{k+j}| \le C\, x_k^{-\epsilon} (1 + \delta)^{-j\epsilon}
\]
for some finite $C > 0$. It follows that $\sum_{j=0}^{\infty} \lambda^j |\xi_{k+j}| \le C' x_k^{-\epsilon}$ for some finite $C'$, and therefore $|r_k - r| = O(x_k^{-\epsilon})$.

Finally, we have
\[
\frac{x_k}{r^k} = x_0 \prod_{j=1}^k \frac{r_j}{r},
\]
which converges iff $\sum_{j=1}^{k} \log(r_j/r)$ converges. But
\[
\log(r_j/r) = \frac{r_j - r}{r} + O\big((r_j - r)^2\big) = O(r_j - r),
\]
and since $|r_j - r| = O(x_j^{-\epsilon})$ with $x_{k+j} \ge (1 + \delta)^j x_k$ for large enough $k$, the series is summable. Hence $x_k \sim C r^k$ for some $C$. $\square$

8. Proof of Theorem 3

The proof is structurally much the same as that of Theorem 2; once again, the main point is to establish that $h(x_k) \sim h(x_{k-1})$ using the uniform convergence property, although the fine details differ.

Proof. First, Proposition 4 shows that the $x_k$ are non-terminating. Now let $\epsilon = 1 - x$ and $z = 1/\epsilon$, and write $\epsilon_k = 1 - x_k$, $z_k = 1/\epsilon_k$. We first prove that $\Delta_k/(1 - x_k) \to 0$.
Suppose, on the contrary, that $\Delta_k \ge \theta \epsilon_k$ for some $\theta \in (0, 1)$ along some subsequence. Then on that subsequence, $x_{k-1} = x_k - \Delta_k \le 1 - (1 + \theta) \epsilon_k$, so
\[
\int_{x_{k-1}}^{x_k} h(t)\,dt \ge \int_{1 - (1+\theta)\epsilon_k}^{1 - \epsilon_k} h(t)\,dt = \int_{\epsilon_k}^{(1+\theta)\epsilon_k} \tilde h(1/s)\,ds.
\]
Since $\tilde h$ is monotone, we have for all $s \in [\epsilon_k, (1+\theta)\epsilon_k]$ that
\[
\tilde h(1/s) \ge \tilde h\!\left(\frac{z_k}{1 + \theta}\right),
\]
so
\[
\int_{x_{k-1}}^{x_k} h(t)\,dt \ge \theta \epsilon_k\, \tilde h\!\left(\frac{z_k}{1 + \theta}\right).
\]
On the other hand, $x_k + x_{k+1} \le 2$, so (Rec') gives
\[
\int_{x_{k-1}}^{x_k} h(t)\,dt \le \log(2 h(x_k)) = \log(2 \tilde h(z_k)),
\]
so
\[
\log(2 \tilde h(z_k)) \ge \frac{\theta\, \tilde h(z_k/(1+\theta))}{z_k}. \tag{18}
\]
Now suppose that $\tilde h \in R(\rho)$. Choosing $\lambda = 1/(1+\theta) > 0$, we have $\tilde h(\lambda z)/\tilde h(z) \to \lambda^\rho$ as $z \to \infty$, and hence $\tilde h(z_k/(1+\theta)) \sim (1+\theta)^{-\rho}\, \tilde h(z_k)$. Using $\tilde h(z) \sim z^\rho L(z)$ (with $L$ slowly varying), we then have
\[
\frac{z_k \log(2 \tilde h(z_k))}{\theta\, \tilde h(z_k/(1+\theta))} \sim \frac{(1+\theta)^\rho\, z_k \log(2 \tilde h(z_k))}{\theta\, \tilde h(z_k)} = O\!\left(\frac{\log z_k}{z_k^{\rho - 1} L(z_k)}\right) \to 0
\]
since $\rho > 1$, contradicting Eq. 18. If instead $\tilde h \in \Gamma$, we again have $\tilde h(z)/z^m \to \infty$ for any $m > 0$. This immediately implies that
\[
\frac{\tilde h(z_k/(1+\theta))}{z_k \log \tilde h(z_k)} \to \infty,
\]
again contradicting Eq. 18.

Now, by monotonicity, Eq. 6 again holds, and since
\[
\log\big((x_k + x_{k+1})\, h(x_k) - 1\big) = \log(2 x_k h(x_k)) + o(1),
\]
combining with (Rec') again shows that
\[
\Delta_k = \frac{\log(2 x_k h(x_k)) + o(1)}{h(x_k)}
\]
as desired (in fact, the factor of $x_k$ in the logarithm can be neglected in this case, since $x_k \to 1$). The proof of Eq. 4 is essentially unchanged from the corresponding part of the proof of Theorem 2. $\square$

9. Proof of Proposition 6

The proof is structurally similar to that of Proposition 3: we establish that $G$ is regularly varying, and then use this to derive a recurrence for the logarithm of $1 - x_k$, up to $O(1)$ errors.
More precise control on the higher-order asymptotics of $h$ suffices to shrink this error to $o(1)$ and refine our estimate of the $x_k$.

Proof. Let $\epsilon = 1 - x$ and $\epsilon_k = 1 - x_k$. We have $d \log G(1 - \epsilon)/d\epsilon = h(1 - \epsilon)$. Write $h(1 - \epsilon) = c/\epsilon + r(\epsilon)$ with $r(\epsilon) = o(1/\epsilon)$ as $\epsilon \downarrow 0$. Then, integrating, we have
\[
\log G(1 - \epsilon) = \log K + c \log \epsilon + \int^{\epsilon} r(t)\,dt
\]
for some $K > 0$. Hence $G(1 - \epsilon) = \epsilon^c\, \ell(1/\epsilon)$ with $\ell(t) := K \exp\big(\int^{1/t} r(u)\,du\big)$. We claim that $\ell$ is slowly varying. To see this, note that for any $\lambda > 0$,
\[
\log \frac{\ell(\lambda t)}{\ell(t)} = \int_{1/t}^{1/(\lambda t)} r(u)\,du.
\]
But $r(u) = o(1/u)$, so for any $\eta > 0$, $|r(u)| \le \eta/u$ for small enough $u$, and hence
\[
\left|\int_{1/t}^{1/(\lambda t)} r(u)\,du\right| \le \eta\, |\log \lambda|.
\]
Taking $t \to \infty$ and then $\eta \to 0$ shows that $\log(\ell(\lambda t)/\ell(t)) \to 0$.

We then have
\[
\frac{G(x_{k-1})}{G(x_k)} = \left(\frac{\epsilon_{k-1}}{\epsilon_k}\right)^{\!c} \frac{\ell(1/\epsilon_{k-1})}{\ell(1/\epsilon_k)},
\]
and so, using $x_k, x_{k+1} \to 1$ and $h(x) \sim c/(1 - x)$, (Rec') gives
\[
\left(\frac{\epsilon_{k-1}}{\epsilon_k}\right)^{\!c} \frac{\ell(1/\epsilon_{k-1})}{\ell(1/\epsilon_k)} = \frac{2c}{\epsilon_k}\, (1 + o(1)).
\]
The Potter bound then gives, for any $\eta > 0$,
\[
(c - \eta) \log \frac{\epsilon_{k-1}}{\epsilon_k} + O(1) \le \log \frac{2c}{\epsilon_k} \le (c + \eta) \log \frac{\epsilon_{k-1}}{\epsilon_k} + O(1),
\]
so $\eta \to 0$ shows that
\[
\log \frac{2c}{\epsilon_k} = c \log \frac{\epsilon_{k-1}}{\epsilon_k} + O(1),
\]
or, writing $L_k = -\log \epsilon_k$,
\[
L_k = \frac{c}{c-1} L_{k-1} + O(1).
\]
It follows that $L_k \asymp \big(\tfrac{c}{c-1}\big)^k$, proving the first claim.

On the other hand, if $r(\epsilon) = O(\epsilon^{\delta - 1})$, then we can refine our estimate of $G$ further to
\[
G(1 - \epsilon) = K \epsilon^c \big(1 + O(\epsilon^\delta)\big).
\]
Now (Rec') gives
\[
\left(\frac{\epsilon_{k-1}}{\epsilon_k}\right)^{\!c} = \frac{2c}{\epsilon_k}\, (1 + o(1)),
\]
whence
\[
L_k = \frac{c}{c-1} L_{k-1} + \frac{\log 2c}{c-1} + o(1).
\]
It follows that there exists $A > 0$ such that
\[
L_k = A \left(\frac{c}{c-1}\right)^{\!k} - \log 2c + o(1),
\]
proving the second claim. $\square$

10. Discussion

Taken together, Theorems 1–3 and Propositions 3–6 classify and quantify the asymptotics of the solutions to the symmetric LSP for positive, monotonic target densities on both compact intervals and the real line.
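(As an editorial aside before discussing further: the affine recurrence at the heart of the proof of Proposition 6 can be checked numerically. Dropping the $o(1)$ term gives $L_k = \frac{c}{c-1} L_{k-1} + \frac{\log 2c}{c-1}$, and shifting by $\log 2c$ makes this exactly geometric, so that the truncated recurrence has $A = L_0 + \log 2c$. A minimal sketch, where $c = 3$ and $L_0 = 1$ are arbitrary illustrative choices:)

```python
import math

# Affine recurrence from the proof of Proposition 6, with the o(1) term dropped:
# L_k = mu * L_{k-1} + beta, where mu = c/(c-1) and beta = log(2c)/(c-1).
# The shift M_k = L_k + log(2c) satisfies M_k = mu * M_{k-1} exactly, so
# (L_k + log(2c)) / mu**k is constant, recovering A in L_k = A*mu**k - log(2c).
c, L0 = 3.0, 1.0                   # illustrative choices, not from the paper
mu = c / (c - 1)
beta = math.log(2 * c) / (c - 1)

L = L0
for k in range(1, 31):
    L = mu * L + beta
A = (L + math.log(2 * c)) / mu ** 30
print(A)                           # equals L0 + log(2c) for the truncated recurrence
```

In particular $L_k$ grows like $A\,(c/(c-1))^k$, matching the first claim of the proposition.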
This solves a major component of the problem for most well-behaved densities. Two important classes have not been treated carefully: densities with strong, persistent oscillations throughout the tail (or as $x$ approaches the boundary, in the case of compact intervals), and densities on compact intervals which approach 0 like a slowly varying function. While we do not attempt to study such cases here, it seems plausible that a similar approach to the one taken in this work could provide control on the asymptotics in the case of oscillating densities, provided that $h$ has known upper and lower bounds.

An interesting direction for future research would be to use the rigorous asymptotics as a large-$k$ constraint in a computational algorithm for the optimal search strategy. Early efforts by the author in this direction have been encouraging, but have thus far failed to produce convincing evidence of robust convergence properties.

Acknowledgments

The author thanks Antonio Celani for early discussions which helped inspire this work. This work was supported by the European Research Council under grant RIDING (No. 101002724), by the Air Force Office of Scientific Research under grant FA8655-20-1-7028, and by the National Institutes of Health under grant R01DC018789. The author discloses the use of a large language model to help produce some of the technical details of the proofs.

References

[1] Alpern, S., and Gal, S. The Theory of Search Games and Rendezvous, vol. 55 of International Series in Operations Research & Management Science. Springer Science & Business Media, 2003.
[2] Baston, V., and Beck, A. Generalizations in the linear search problem. Israel Journal of Mathematics 90, 1 (1995), 301–323.
[3] Beck, A. On the linear search problem. Israel Journal of Mathematics 2, 4 (1964), 221–228.
[4] Beck, A. More on the linear search problem. Israel Journal of Mathematics 3, 2 (1965), 61–70.
[5] Beck, A., and Beck, M. Son of the linear search problem. Israel Journal of Mathematics 48, 2 (1984), 109–122.
[6] Beck, A., and Beck, M. The linear search problem rides again. Israel Journal of Mathematics 53, 3 (1986), 365–372.
[7] Beck, A., and Beck, M. The revenge of the linear search problem. SIAM Journal on Control and Optimization 30, 1 (1992), 112–122.
[8] Beck, A., and Newman, D. J. Yet more on the linear search problem. Israel Journal of Mathematics 8, 4 (1970), 419–429.
[9] Beck, A., and Warren, P. The return of the linear search problem. Israel Journal of Mathematics 14, 2 (1973), 169–183.
[10] Bellman, R. An optimal search. SIAM Review 5, 3 (1963), 274.
[11] Bingham, N., and Ostaszewski, A. Beurling slow and regular variation. Transactions of the London Mathematical Society 1, 1 (2014), 29–56.
[12] Bingham, N. H., Goldie, C. M., and Teugels, J. L. Regular Variation, vol. 27. Cambridge University Press, 1989.
[13] Bruss, F. T., and Robertson, J. B. A survey of the linear-search problem. Mathematical Scientist 13, 2 (1988), 75–89.
[14] Heinonen, R. A., Biferale, L., Celani, A., and Vergassola, M. Optimal trajectories for Bayesian olfactory search in turbulent flows: The low information limit and beyond. Phys. Rev. Fluids 10 (Apr 2025), 044601.
[15] Kalbfleisch, J. D., and Prentice, R. L. The Statistical Analysis of Failure Time Data. John Wiley & Sons, 2002.
[16] Kao, M.-Y., Reif, J. H., and Tate, S. R. Searching in an unknown environment: An optimal randomized algorithm for the cow-path problem. Information and Computation 131, 1 (1996), 63–79.