ON A DISCRETE MAX-PLUS TRANSPORTATION PROBLEM

PEDRO BARRIOS, SERGIO MAYORGA, AND EUGENE STEPANOV

Dedicated to N.N. Uraltseva on the occasion of her 90th birthday

Abstract. We provide an explicit algorithm to solve the idempotent analogue of the discrete Monge-Kantorovich optimal mass transportation problem with the usual real number field replaced by the tropical (max-plus) semiring, in which addition is defined as the maximum and product is defined as usual addition, with −∞ and 0 playing the roles of additive and multiplicative identities. Such a problem may naturally be called the tropical or "max-plus" optimal transportation problem. We show that its solutions, called optimal tropical plans, may not correspond to perfect matchings even if the data (max-plus probability measures) have all weights equal to zero, in contrast with the classical discrete optimal transportation analogue, where perfect matching optimal plans in similar situations always exist. Nevertheless, in some randomized situations the existence of perfect matching optimal tropical plans may occur rather frequently. Finally, we prove that uniqueness of solutions of the optimal tropical transportation problem is quite rare.

1. Introduction

In this paper we consider a discrete optimization problem that looks quite similar to the classical Monge-Kantorovich optimal mass transportation problem and in fact, as we argue later, is nothing else but the idempotent version of the latter. We begin with a short motivational introduction.

1.1. Motivation of the problem. Suppose we have m signal sources and n receivers regularly exchanging information between them. Each source i ∈ {1,...,m} may transmit an amount h_{i,j} of information to receiver j ∈ {1,...,n}. The maximum amount of information the source i may send at one time is given by a number k_i, that is,

(1)  max_{j ∈ {1,...,n}} h_{i,j} = k_i.
Analogously, the maximum amount of information the receiver j may get at one time is given by a number l_j, that is,

(2)  max_{i ∈ {1,...,m}} h_{i,j} = l_j.

Of course, (1) and (2) may only be simultaneously valid if

(3)  max_{i ∈ {1,...,m}} k_i = max_{j ∈ {1,...,n}} l_j.

[Key words and phrases: optimal transportation, tropical semiring, idempotent analysis. The third author acknowledges the MIUR Excellence Department Project awarded to the Department of Mathematics, University of Pisa, CUP I57G22000700001.]

The cost C_{i,j} of transmitting between the source i and the receiver j depends affinely on the amount of transmitted information and takes into account the known fixed cost g_{i,j} of using the communication channel between them, that is, C_{i,j} = g_{i,j} + γ h_{i,j} for some given coefficient γ > 0. The goal is to find the values h_{i,j}, i = 1,...,m, j = 1,...,n (the respective matrix being further called the optimal tropical transportation plan, the terminology being explained in the sequel) minimizing the maximum of C_{i,j} over all i and j, that is, finding

inf { max_{i,j} (g_{i,j} + γ h_{i,j}) : h_{i,j} satisfying (1) and (2) }.

Dividing by γ and denoting c_{i,j} := g_{i,j}/γ, this amounts to solving

(4)  inf { max_{i,j} (c_{i,j} + h_{i,j}) : h_{i,j} satisfying (1) and (2) }.

1.2. Idempotent (max-plus or tropical) interpretation. Let us now completely change the point of view and look at the above problem as a version of the classical optimal mass transportation problem in the context of idempotent analysis: more precisely, analysis over the tropical (max-plus) semiring R̄_− := R ∪ {−∞} endowed with the operations

a ⊕ b := max{a, b},   a ⊗ b := a + b,

which substitute the usual addition and multiplication of real numbers respectively. The value −∞ is an identity with respect to ⊕ and 0 is an identity with respect to ⊗.
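For concreteness, the semiring operations can be sketched in a few lines of Python (our own illustration with hypothetical helper names, not part of the paper), with float('-inf') playing the role of the additive identity:

```python
# Tropical (max-plus) semiring: a ⊕ b = max(a, b), a ⊗ b = a + b,
# with -inf the additive identity and 0 the multiplicative identity.
NEG_INF = float('-inf')

def t_add(a, b):
    """Tropical addition: the maximum."""
    return max(a, b)

def t_mul(a, b):
    """Tropical multiplication: ordinary addition."""
    return a + b

# -inf is neutral for ⊕ and 0 is neutral for ⊗:
assert t_add(NEG_INF, 3.0) == 3.0
assert t_mul(0.0, 3.0) == 3.0

# Distributivity a ⊗ (b ⊕ c) = (a ⊗ b) ⊕ (a ⊗ c), checked on a sample:
a, b, c = 2.0, 5.0, -1.0
assert t_mul(a, t_add(b, c)) == t_add(t_mul(a, b), t_mul(a, c))
```

Note that t_mul(NEG_INF, x) evaluates to −∞ for any finite x, consistent with −∞ being absorbing for ⊗.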
Both operations are commutative and associative, and a ⊗ (b ⊕ c) = (a ⊗ b) ⊕ (a ⊗ c). Thus the roles of 0 and 1 on the usual real line are played here by −∞ and 0 respectively. For a general overview of idempotent analysis we refer the reader to the classic book [5].

The classical discrete Monge-Kantorovich optimal mass transportation problem (see, e.g., [6] for a comprehensive introduction to the subject) is that of finding the optimal plan of transportation in the following sense: solve the minimization problem

(5)  inf Σ_{i,j=1}^{m,n} c_{i,j} π_{i,j},

where the infimum is taken over m-by-n matrices [π_{i,j}]_{i,j=1}^{m,n} which satisfy the constraints

(6)  Σ_{j=1}^{n} π_{i,j} = k_i,
(7)  Σ_{i=1}^{m} π_{i,j} = l_j,

with the numbers k_i, l_j, i = 1,...,m, j = 1,...,n, all fixed. This is usually interpreted as finding the way of optimally transporting the discrete measure

μ := Σ_{i=1}^{m} k_i δ_{x_i}

to another discrete measure

ν := Σ_{j=1}^{n} l_j δ_{y_j},

for some x_i ∈ X, y_j ∈ Y, i = 1,...,m, j = 1,...,n, with X and Y some sets and δ_z standing for the Dirac point mass at z. The value π_{i,j} is then interpreted as the amount of mass transported from x_i to y_j. The matrix [π_{i,j}]_{i,j=1}^{m,n} is identified with the discrete measure π = Σ_{i,j=1}^{m,n} π_{i,j} δ_{(x_i,y_j)} over X × Y; constraints (6) and (7) now mean that the marginals (or projections) of the measure π along X and Y are μ and ν respectively. The quantity Σ_{i,j=1}^{m,n} c_{i,j} π_{i,j} is the total transportation cost, targeted for minimization.

In the idempotent max-plus setting the role of the Dirac measure δ_z over an arbitrary set Z concentrated at a point z ∈ Z is played by the characteristic function (for which we retain the same notation as for the Dirac measure) δ_z defined by

δ_z(z′) := { 0 if z′ = z, −∞ if z′ ≠ z }.
The analogues of sums of Dirac masses on sets X and Y are the functions on these sets respectively defined by

(8)  μ(x) := max_{i=1,...,m} (k_i + δ_{x_i}(x)),   ν(y) := max_{j=1,...,n} (l_j + δ_{y_j}(y)),

i.e. μ is the function taking the value k_i at each x_i and −∞ elsewhere, and ν is the function taking the value l_j at each y_j and −∞ elsewhere; the analogue of a discrete measure represented by a sum of Dirac masses with weights h_{i,j} at points (x_i, y_j) ∈ X × Y is the function

(9)  π(x, y) := max_{i=1,...,m; j=1,...,n} (h_{i,j} + δ_{(x_i,y_j)}(x, y)).

We will refer to the coefficients k_i as the weights of μ and to the coefficients l_j as the weights of ν. The total mass of a discrete measure, which in the traditional setting is the sum of its weights, corresponds, in the max-plus setting, to the maximum of its weights, i.e.

|μ| := max_{i=1,...,m} k_i,   |ν| := max_{j=1,...,n} l_j.

We will assume, in complete analogy with the classical mass transportation theory, that |μ| = |ν|, which is exactly condition (3), and for purely aesthetic reasons, which imply no loss of generality, we also assume that both total masses are zero, i.e. |μ| = |ν| = 0, so that μ and ν can be considered tropical versions of discrete probability measures. We will therefore call functions such as μ, ν, π above discrete max-plus probability measures, the set of such functions over a given set Z being denoted M(Z), so that μ ∈ M(X), ν ∈ M(Y), π ∈ M(X × Y). Suppose now that {x_i}_{i=1}^m ⊂ X and {y_j}_{j=1}^n ⊂ Y are given, and μ, ν are defined by (8).
The max-plus, or tropical, analogue of a transport plan between μ and ν is a function π defined as in (9) and satisfying the constraints

(10)  max_{x ∈ X} π(x, y) = ν(y),   max_{y ∈ Y} π(x, y) = μ(x).

The Monge-Kantorovich optimal transportation problem (5) with the given cost function c : X × Y → R̄ then becomes, in the max-plus setting, the problem of solving

(11)  inf { max (c(x, y) + π(x, y)) : π ∈ M(X × Y) satisfies (10) }.

It is worth mentioning that the problem just stated is not the unique example of a meaningful idempotent (max-plus or tropical) version of a classical optimization problem; similar tropical formulations have arisen elsewhere in the literature. For instance, this is the case of the so-called bottleneck traveling salesman problem (see, e.g., section 8 of [4] or [3]), which can be considered a max-plus version of the classical traveling salesman problem.

We will further identify, whenever convenient, max-plus discrete probability measures with the sequences of their weights, and the transport plan π (given by (9)) with the matrix of coefficients [h_{i,j}], and refer to this object in either interpretation as a tropical transport plan for the discrete max-plus probability measures μ and ν (or, equivalently, for the sequences of their weights) whenever (10) holds, which in terms of the matrix [h_{i,j}] amounts precisely to (1) and (2), namely, that the maximum of the i-th row of the matrix must be k_i and the maximum of the j-th column must be l_j. If we write c_{i,j} := c(x_i, y_j), then, in view of (8) and (9), problem (11) becomes exactly (4), which is the reason why it may be considered the max-plus version of the Monge-Kantorovich problem (5).
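In matrix terms, checking constraint (10) (equivalently, (1) and (2)) and evaluating the objective of (4) is straightforward. The following Python sketch, our own illustration with hypothetical helper names, works on the weight/matrix representation:

```python
NEG_INF = float('-inf')

def is_tropical_plan(h, k, l):
    """Check (1)-(2): row i of h has maximum k[i], column j has maximum l[j]."""
    m, n = len(h), len(h[0])
    rows_ok = all(max(h[i]) == k[i] for i in range(m))
    cols_ok = all(max(h[i][j] for i in range(m)) == l[j] for j in range(n))
    return rows_ok and cols_ok

def tropical_cost(h, c):
    """Objective of (4): the maximum over all cells of c[i][j] + h[i][j]."""
    return max(c[i][j] + h[i][j]
               for i in range(len(h)) for j in range(len(h[0])))

# Example: the "product plan" (mu ⊗ nu)_{i,j} = k_i + l_j is always a plan
# when max k = max l = 0 (this fact reappears in Section 2 of the paper).
k, l = [0, -2], [0, -1]
prod = [[ki + lj for lj in l] for ki in k]
assert is_tropical_plan(prod, k, l)
```

The product plan here gives a finite upper bound for the infimum in (4), which is why the set of plans is never empty.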
Such an identification of measures with weights, and of plans and cost functions with matrices, is quite natural in the discrete setting we are considering here, especially when the points x_i and y_j themselves are of no practical importance.

1.3. Our contribution. In this paper we provide an explicit algorithm to solve the optimal tropical transportation problem (4) and find an explicit formula for the optimal tropical cost, i.e. the value of (4). As a consequence, we obtain some curious results on optimal tropical plans and values. In particular:

• In the case m = n, optimal tropical plans corresponding to perfect matchings (those given by permutation matrices) may not exist even if the max-plus probability measures μ and ν have all weights equal to zero (we henceforth call this case fundamental); see Example 4.10 below. This is in stark contrast with classical optimal mass transportation theory, where (again with m = n) perfect matching optimal transport plans between sums of Dirac masses with equal weights always exist. Nevertheless, it turns out that, at least in the fundamental case, the existence of perfect matching optimal tropical plans occurs rather frequently as the number of weights of both μ and ν becomes large, the respective statement being made precise by introducing randomness in the cost. More precisely, under a concrete randomization of the cost matrix, the existence of a perfect matching optimal plan is "asymptotically almost sure" as the number of weights of the measures approaches infinity. This is Theorem 5.5 below.

• In the fundamental case, under the same type of randomization of the cost matrix, the optimal tropical cost is, asymptotically almost surely, the lowest value among all the entries of the cost matrix. This is the content of Theorem 5.1 and Remark 5.3 below.
• We also prove that uniqueness of an optimal tropical plan asymptotically almost surely fails to occur (in the fundamental case) when the cost matrix entries are sampled uniformly. This is Theorem 5.7 below.

2. Notation and preliminaries

In complete analogy with the classical optimal transportation theory, a matrix [h_{i,j}]_{i,j=1}^{m,n} with each h_{i,j} ∈ [−∞, 0] satisfying (1) and (2) will be called a discrete max-plus (or tropical) plan (or just a plan for brevity) for the max-plus discrete probability measures μ ∈ M(X), ν ∈ M(Y). Equivalently, as remarked earlier, it can be seen as a max-plus discrete probability measure in the sense given by (9). We denote by Π(μ, ν) the set of all such plans (which is always nonempty, since μ ⊗ ν ∈ Π(μ, ν), where (μ ⊗ ν)_{i,j} := k_i + l_j). For the given cost matrix [c_{i,j}]_{i,j=1}^{m,n} we define

d_c(μ, ν) := inf { max_{i=1,...,m; j=1,...,n} (c_{i,j} + h_{i,j}) : h ∈ Π(μ, ν) }.

If we interpret h as an element of M(X × Y), i.e. as in (9), then we may write h(x_i, y_j) and c(x_i, y_j) instead of h_{i,j} and c_{i,j} respectively, since the points x_i and y_j can be assumed fixed in every discussion. Again for purely aesthetic reasons, and to allow for the interpretation of the numbers c_{i,j} as representing a cost, it is convenient to assume c_{i,j} ≥ 0, which can always be done without loss of generality. A minimizer h ∈ Π(μ, ν) in the above problem will be called a minimizing (or optimal) tropical plan, the set of such minimizing plans being denoted by Π_c(μ, ν). The number d_c(μ, ν) will be called the optimal tropical cost between μ and ν. We must say that, despite our choice of notation, the function d_c(·, ·) is not a metric.

In the sequel we assume the sequences of weights k_i and l_j to be ordered decreasingly, with k_1 = l_1 = 0, i.e.
(12)  k_m ≤ k_{m−1} ≤ ··· ≤ k_1 = 0,   l_n ≤ l_{n−1} ≤ ··· ≤ l_1 = 0.

We denote by Λ(μ) and Λ(ν) the sets of weights of μ and ν respectively. If we wish to retain the interpretation of μ and ν as elements of M(X) and M(Y) respectively, then (12) is achieved simply by a relabeling of the fixed points x_i and y_j, i ∈ {1,...,m}, j ∈ {1,...,n}.

For any h ∈ Π(μ, ν), by the support of h, denoted supp(h), we will mean the subset of X × Y of points (x, y) where h(x, y) > −∞, or (equivalently, since any such point must be one of the pairs (x_i, y_j)) the set of pairs (i, j) ∈ {1,...,m} × {1,...,n} such that h_{i,j} > −∞. In the latter case, we may also write h(i, j) rather than h_{i,j} (for instance, if we wish to free up the subindex place for another purpose, as in section 4.2 below). For a set X we denote by #X its cardinality. We also sometimes write a ∨ b for the maximum of the numbers a and b.

3. Reduced transportation plans and existence of minimizers

We start with the following definition.

Definition 3.1. Given fixed discrete max-plus probability measures μ and ν, we will call a tropical plan h ∈ Π(μ, ν) reduced if, for each i, j such that h_{i,j} > −∞, the element h_{i,j} is a strict maximum in its row or in its column, and denote by Π_R(μ, ν) the set of reduced plans for discrete μ and ν.

Without loss of generality for the optimal tropical transportation problem, all the weights of μ and ν can be taken to be finite (i.e. > −∞). In fact, if, say, k_i = −∞ for some i ∈ {1,...,m}, then the i-th row of h, for any h ∈ Π(μ, ν), must consist only of −∞. In this case, in the expression that defines d_c(μ, ν), each of the elements over which the minimum is taken is

max_{(i,j)} (h_{i,j} + c_{i,j}) = max{..., h_{i,1} + c_{i,1}, h_{i,2} + c_{i,2}, ..., h_{i,n} + c_{i,n}, ...} = max{..., −∞, −∞, ..., −∞, ...},

but the maximum is non-negative, so the numbers −∞ can be changed to sufficiently small negative numbers (negative but with large absolute value) without affecting the maximum, and then the weight k_i = −∞ can be changed to max_j h_{i,j}, where the h_{i,j} are the new numbers just mentioned.

The following assertion holds true.

Lemma 3.2. For all discrete μ ∈ M(X), ν ∈ M(Y) one has

d_c(μ, ν) = inf { max_{(i,j)} (h_{i,j} + c_{i,j}) : h ∈ Π_R(μ, ν) }.

Moreover, for every minimizing plan h there is a reduced minimizing plan h̃ with supp h̃ ⊂ supp h and h̃ = h on the support of h̃.

Proof. If h_{i,j} is a strict maximum neither in its column nor in its row for some i ∈ {1,...,m}, j ∈ {1,...,n}, then changing h_{i,j} to −∞ (or to any number less than h_{i,j}) can only decrease max_{(i,j)} (h_{i,j} + c_{i,j}). Changing all such entries of the matrix [h_{i,j}] will transform the plan into a reduced one, and thus

d_c(μ, ν) = inf { max_{(i,j)} (h_{i,j} + c_{i,j}) : h ∈ Π(μ, ν) } = inf { max_{(i,j)} (h_{i,j} + c_{i,j}) : h ∈ Π_R(μ, ν) },

as claimed. □

As a consequence, the following existence result holds.

Theorem 3.3. The discrete max-plus transportation problem admits a solution, namely, the inf is actually a min.

Proof. It is enough to refer to Lemma 3.2 and observe that the set of reduced plans Π_R(μ, ν) has finitely many elements (indeed, each entry of a reduced plan must be either −∞ or one of the weights of μ and ν). □

4. Algorithm to solve the discrete max-plus transportation problem

4.1. Partition of the support of a plan. Given discrete μ and ν, for each i ∈ {1,...,m}, j ∈ {1,...,n}, let

p_i := max{j : l_j ≥ k_i},   q_j := max{i : k_i ≥ l_j},
S_i := {(i,1), ..., (i,p_i)},   T_j := {(1,j), ..., (q_j,j)}.
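With the weights sorted as in (12), the indices p_i, q_j and the sets S_i, T_j can be computed directly from the definitions. The sketch below (our own illustration of the definitions, with names of our choosing and 0-based indices) assumes all weights are finite:

```python
def row_col_extents(k, l):
    """p[i] = max{j : l[j] >= k[i]}, q[j] = max{i : k[i] >= l[j]},
    for weights sorted decreasingly with k[0] = l[0] = 0 (0-based)."""
    m, n = len(k), len(l)
    p = [max(j for j in range(n) if l[j] >= k[i]) for i in range(m)]
    q = [max(i for i in range(m) if k[i] >= l[j]) for j in range(n)]
    # S_i occupies the first p_i + 1 cells of row i;
    # T_j occupies the first q_j + 1 cells of column j.
    S = [[(i, j) for j in range(p[i] + 1)] for i in range(m)]
    T = [[(i, j) for i in range(q[j] + 1)] for j in range(n)]
    return p, q, S, T

# The weights of Example 4.2 below:
k = [0, 0, -2, -3, -4, -4]
l = [0, 0, 0, -1, -2, -2]
p, q, S, T = row_col_extents(k, l)
```

Note that the maxima are always well defined, since l[0] = 0 ≥ k[i] and k[0] = 0 ≥ l[j] for all i, j.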
The following statement gives some information on the general structure of reduced plans, as long as we adhere to the convention, which we agreed to hold throughout, that the weights are sorted as in (12).

Lemma 4.1. Let μ ∈ M(X), ν ∈ M(Y) be discrete max-plus probability measures as in (8) and let h ∈ Π_R(μ, ν). Assume, without loss of generality, that the weights of the measures satisfy (12). The following assertions hold true.

(1) For each i ∈ {1,...,m}, at least one of the numbers h_{i,1}, ..., h_{i,p_i} must be k_i, and the numbers h_{i,p_i+1}, ..., h_{i,n} are all strictly less than k_i. Likewise, for each j ∈ {1,...,n}, at least one of the numbers h_{1,j}, ..., h_{q_j,j} must be l_j, and the numbers h_{q_j+1,j}, ..., h_{m,j} are all strictly less than l_j.

(2) If the weights k_i and l_j are all distinct, with the exception of k_1 = l_1 = 0, then S_i ∩ T_j = ∅ whenever (i,j) ≠ (1,1).

(3) One has k_i = l_j for some (i,j) ∈ {1,...,m} × {1,...,n} if and only if (i,j) ∈ S_i ∩ T_j.

Proof. (1) Fix i ∈ {1,...,m}. The maximum among h_{i,1}, ..., h_{i,n} must be k_i. If h_{i,p_i+m̄} = k_i for some m̄ > 0, then the maximum among h_{1,p_i+m̄}, ..., h_{m,p_i+m̄} is at least k_i. But that maximum must be l_{p_i+m̄}, which, by definition of p_i, is strictly less than k_i. This contradiction proves that the maximum of h_{i,1}, ..., h_{i,n}, equal to k_i, occurs among h_{i,1}, ..., h_{i,p_i}, and not among h_{i,p_i+1}, ..., h_{i,n}, which proves the first part of the assertion. The second part, i.e. the claim about the numbers h_{1,j}, ..., h_{m,j}, is proven completely symmetrically.

(2) Suppose (i,j) ≠ (1,1) and (q,p) ∈ S_i ∩ T_j. Since the pair (q,p) is in S_i, its first component must be i, i.e. q = i.
Similarly, since it is in T_j, we must have p = j. Thus (q,p) = (i,j). Moreover, the definitions of p_i and q_j now contain only strict inequalities, because we are assuming all the weights distinct with the exception of k_1 = l_1 = 0. Having (i,j) ∈ S_i then implies that l_j > k_i, while having (i,j) ∈ T_j implies that k_i > l_j, and we have obtained a contradiction.

(3) Suppose k_i = l_j for some pair (i,j) ∈ {1,...,m} × {1,...,n}. Since l_j ≥ k_i, we must have (i,j) ∈ S_i. Likewise, since k_i ≥ l_j, then (i,j) ∈ T_j, so that necessity is proven. Now suppose (i,j) ∈ S_i ∩ T_j. Since (i,j) ∈ S_i, we have j ≤ p_i, so l_j ≥ k_i, and (i,j) ∈ T_j gives i ≤ q_j, so k_i ≥ l_j. This completes the proof. □

Given discrete max-plus probability measures μ, ν and a real number λ, let

(13)  R_λ := ⋃_{i : k_i = λ} S_i ∪ ⋃_{j : l_j = λ} T_j,

which is a subset of {1,...,m} × {1,...,n}. We call R_λ a region or λ-region to emphasize the dependence on λ. A region can look like an L written backwards (like the one in pink in Figure 1 below), with the ends resting on the top and left edges of the grid, or a rectangle with its left side lying on the left edge of the grid, or a rectangle with its top side on the top edge of the grid, or a rectangle with both its left and top sides lying on the left and top sides of the grid, respectively. We remark that our notion of region exists only once the measures μ and ν have been fixed. Also, for the description of our algorithm, it is essential that the weights of these measures are labeled as in (12).

Example 4.2.
For m = n = 6 and the max-plus probability measures

μ = max{0 + δ_{x_1}, 0 + δ_{x_2}, −2 + δ_{x_3}, −3 + δ_{x_4}, −4 + δ_{x_5}, −4 + δ_{x_6}},

i.e. μ(x) = 0 for x ∈ {x_1, x_2}, μ(x_3) = −2, μ(x_4) = −3 and μ(x) = −4 for x ∈ {x_5, x_6}, and

ν = max{0 + δ_{y_1}, 0 + δ_{y_2}, 0 + δ_{y_3}, −1 + δ_{y_4}, −2 + δ_{y_5}, −2 + δ_{y_6}},

i.e. ν(y) = 0 for y ∈ {y_1, y_2, y_3}, ν(y_4) = −1 and ν(y) = −2 for y ∈ {y_5, y_6}, with x_i, i = 1,...,m, as well as y_j, j = 1,...,n, all distinct, the regions (each in a different color in the original figure) and a plan are shown in Figure 1. △

   l_j:     0    0    0   −1   −2   −2
  k_1=0:   −∞   −∞    0   −1   −∞   −2
  k_2=0:    0    0   −∞   −∞   −2   −∞
  k_3=−2:  −∞   −∞   −2   −∞   −∞   −∞
  k_4=−3:  −∞   −∞   −∞   −∞   −3   −∞
  k_5=−4:  −∞   −∞   −∞   −4   −∞   −∞
  k_6=−4:  −4   −∞   −∞   −∞   −∞   −∞

Figure 1. Regions for the pair (μ, ν) of Example 4.2; the top row lists the weights l_j, the left column the weights k_i, and the cells contain the values of a plan.

It is convenient to extend the notions of plan and reduced plan as follows. Fix discrete max-plus probability measures μ, ν, with their weights arranged as in (12); suppose λ is one of these weights and consider the corresponding region R_λ. By a plan of R_λ we will mean a function h : R_λ → [−∞, 0] such that the maximum of h on each row and on each column of R_λ is λ (that is, λ is the maximum of h over S_i for each i with k_i = λ, and over T_j for each j with l_j = λ). In Figure 1 we see plans of each of the five regions, determined by the numbers in the cells. Let Π(R_λ) be the set of plans of R_λ. As above, a plan h = {h_{i,j}}_{(i,j) ∈ R_λ} ∈ Π(R_λ) is called reduced whenever each h_{i,j} > −∞ is a strict maximum of its row or a strict maximum of its column. Thus, a reduced plan of a λ-region has no numbers other than −∞ and λ. The plans of the regions in Figure 1 are all reduced. We will denote by Π_R(R_λ) the set of reduced plans of R_λ.

Given discrete max-plus probability measures μ, ν, a region R_λ, and a cost function c, we will use the notation d_c also to mean the following:

d_c(R_λ) := min_{h ∈ Π(R_λ)} max_{(i,j) ∈ R_λ} (h_{i,j} + c_{i,j}).
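The quantity d_c(R_λ) can be computed by a simple threshold search: place λ in the cheapest cells of the region until every constrained row and column is covered (this is, in effect, what Lemma 4.6 below establishes). The Python sketch below is our own illustration, written for the simplest case in which plans of the region must attain λ on every row and every column of the region (as for the square region of Example 4.5 below):

```python
def region_optimal_cost(region, c, lam):
    """d_c(R_lambda) for a region whose plans must attain lam on every
    row and column: lam plus the smallest cost threshold beta such that
    the cells of the region with c <= beta touch every row and column
    (placing lam on exactly those cells then yields a plan)."""
    rows = {i for (i, j) in region}
    cols = {j for (i, j) in region}
    for beta in sorted({c[i][j] for (i, j) in region}):
        chosen = [(i, j) for (i, j) in region if c[i][j] <= beta]
        if {i for (i, j) in chosen} == rows and {j for (i, j) in chosen} == cols:
            return lam + beta
    raise ValueError("empty region")

# The 3x3 region and cost matrix of Example 4.5 (0-based indices):
region = [(i, j) for i in range(3) for j in range(3)]
c = [[2, 4, 8],
     [8, 2, 0],
     [2, 0, 5]]
```

On this data the search stops at threshold 2, in agreement with m_c(λ) = 2 in Example 4.5.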
A plan h ∈ Π(R_λ) at which the min defining d_c(R_λ) is attained will be called a minimizing (or optimal) plan for the region R_λ. The following assertion holds true.

Proposition 4.3. Let μ ∈ M(X), ν ∈ M(Y) be arbitrary discrete max-plus probability measures and let a cost function c : X × Y → [0, ∞) be given. Then

d_c(μ, ν) = max_{λ ∈ Λ(μ) ∪ Λ(ν)} d_c(R_λ).

Proof. By definition, d_c(μ, ν) = min_{h ∈ Π(μ,ν)} max_{(i,j)} (h_{i,j} + c(x_i, y_j)). Let us look at

M := max_λ min_{h ∈ Π(R_λ)} max_{(i,j) ∈ R_λ} (h_{i,j} + c(x_i, y_j)),

which is the right hand side of the equality we wish to prove. For each one of the distinct λ's we pick h^λ ∈ Π(R_λ) for which max_{(i,j) ∈ R_λ} (h^λ_{i,j} + c(x_i, y_j)) takes the least possible value, i.e. we pick an optimal plan h^λ of the region R_λ for each λ. Further, let λ̄ be the value of λ at which M is attained. Let h* be the element of Π(μ, ν) whose restriction to each R_λ, λ ∈ Λ(μ) ∪ Λ(ν), is h^λ. We claim that h* is optimal for d_c(μ, ν). Indeed, if it is not, then there is another h⁰ ∈ Π(μ, ν) such that

max_{(i,j)} (h⁰_{i,j} + c(x_i, y_j)) ≤ max_{(i,j)} (h_{i,j} + c(x_i, y_j))   for all h ∈ Π(μ, ν),

and, taking h = h*, by the assumption just made the inequality must be strict, so that

max_{(i,j) ∈ R_λ̄} (h⁰_{i,j} + c(x_i, y_j)) ≤ max_{(i,j)} (h⁰_{i,j} + c(x_i, y_j)) < max_{(i,j)} (h*_{i,j} + c(x_i, y_j)).

But the maximum value of the function λ ↦ max_{(i,j) ∈ R_λ} (h*_{i,j} + c(x_i, y_j)) is M and is attained at λ = λ̄. Thus it follows that

max_{(i,j) ∈ R_λ̄} (h⁰_{i,j} + c(x_i, y_j)) < M = max_{(i,j) ∈ R_λ̄} (h^λ̄_{i,j} + c(x_i, y_j)),

and, since the restriction of h⁰ to R_λ̄ is a plan of R_λ̄, this contradicts the definition of h^λ̄. Therefore, h* is optimal for d_c(μ, ν), and d_c(μ, ν) = M. □

4.2. Finding the optimal cost on a region. By Proposition 4.3, to solve the original problem it is enough to find an optimal plan for each λ-region R_λ, hence also finding the respective optimal costs d_c(R_λ); an optimal plan for the original problem will then coincide over each R_λ with the optimal plan for this region.

To find the optimal plan for the given region R_λ, suppose the cost function c is given; let us number the values that c takes over R_λ in increasing order. Namely, let s ∈ Z_+ be the number of distinct values that c takes on the region R_λ and denote these values, in increasing order, by

(14)  β_1 < ··· < β_s.

For each m ∈ {1, 2, ..., s} we define the function h^m_c : R_λ → {−∞, λ} by the formula

h^m_c(i, j) := { λ if c(x_i, y_j) ≤ β_m, −∞ otherwise }.

That is, h^m_c is the function on the region R_λ such that λ appears in the cells that host one of the m smallest values of c on the region, while −∞ appears in all the other cells. In particular, for m = s, h^s_c fills all the cells of the region R_λ with λ, and hence is a plan for R_λ, that is, h^s_c ∈ Π(R_λ). This motivates the following definition.

Definition 4.4. Given λ, a λ-region R_λ, and a cost function c, let m_c(λ) be the smallest integer m for which the function h^m_c on the region R_λ constitutes a plan for R_λ, i.e.

m_c(λ) := min{m : h^m_c ∈ Π(R_λ)}.

It is convenient to assign to each (i, j) ∈ R_λ the number (from 1 to s) that the value c(x_i, y_j) occupies in the list (14). Such an assignment is given by the function f : R_λ → {1, 2, ..., s} determined by the condition

(15)  f(i_1, j_1) < f(i_2, j_2) if and only if c(x_{i_1}, y_{j_1}) < c(x_{i_2}, y_{j_2})

for (i_1, j_1), (i_2, j_2) ∈ R_λ. We illustrate the above definitions with the following example.

Example 4.5.
Suppose the region is {1, 2, 3}² and the cost function (restricted to this region) is, in matrix form,

[c(x_i, y_j)]_{i,j=1}^3 = [ 2 4 8 ; 8 2 0 ; 2 0 5 ]

(rows separated by semicolons). Then s = 5, the list (14) being 0 < 2 < 4 < 5 < 8, with

f(2,3) = f(3,2) = 1,  f(1,1) = f(2,2) = f(3,1) = 2,  f(1,2) = 3,  f(3,3) = 4,  f(1,3) = f(2,1) = 5,

and

h^1_c = [ −∞ −∞ −∞ ; −∞ −∞ λ ; −∞ λ −∞ ],
h^2_c = [ λ −∞ −∞ ; −∞ λ λ ; λ λ −∞ ],
h^3_c = [ λ λ −∞ ; −∞ λ λ ; λ λ −∞ ],
h^4_c = [ λ λ −∞ ; −∞ λ λ ; λ λ λ ],
h^5_c = [ λ λ λ ; λ λ λ ; λ λ λ ].

Here m_c(λ) = 2 and h^{m_c(λ)}_c = h^2_c. △

Lemma 4.6. Let R_λ be a λ-region and c a cost function, and let h ∈ Π(R_λ) be a minimizer for d_c(R_λ). Then the support of h is included in the support of h^{m_c(λ)}_c and, with the notation of (14),

d_c(R_λ) = λ + β_{m_c(λ)}.

Moreover, h^{m_c(λ)}_c is itself a minimizing plan.

Proof. Let {(x_{i_1}, y_{j_1}), ..., (x_{i_p}, y_{j_p})} be the support of h. Then

d_c(R_λ) = max_{1 ≤ k ≤ p} (c(x_{i_k}, y_{j_k}) + λ).

With the notation of (14), let β_m be the largest of the c(x_{i_k}, y_{j_k}); then d_c(R_λ) = λ + β_m. But then the function h^m_c, by definition, must place λ in every cell (i, j) such that c(x_i, y_j) ∈ {β_1, ..., β_m}. Thus, the support of h is included in the support of h^m_c, and h^m_c is a plan, so m_c(λ) ≤ m and

λ + β_{m_c(λ)} ≤ λ + β_m = d_c(R_λ).

On the other hand, since h^{m_c(λ)}_c is a plan, we must have d_c(R_λ) ≤ λ + β_{m_c(λ)}. Combining the last two inequalities, we obtain d_c(R_λ) = λ + β_{m_c(λ)}, as desired, and m = m_c(λ), so the support of h is included in the support of h^{m_c(λ)}_c. This means h^{m_c(λ)}_c is itself a minimizing plan, and the last assertion follows. □

We collect the preceding conclusions in the following:

Theorem 4.7.
Let μ ∈ M(X) and ν ∈ M(Y) be discrete max-plus probability measures on X and Y respectively, namely

μ = max_{i=1,...,m} (k_i + δ_{x_i}),   ν = max_{j=1,...,n} (l_j + δ_{y_j}),

and let c : X × Y → [0, ∞) be a given cost function. To obtain an optimal tropical plan h between μ and ν, one considers for every λ ∈ Λ(μ) ∪ Λ(ν) (i.e. for each distinct weight of either μ or ν) the respective region R_λ and a minimizing plan h^λ for each R_λ (e.g. h^λ := h^{m_c(λ)}_c), setting then h ∈ Π(μ, ν) to be the plan whose restriction to each R_λ coincides with h^λ. Furthermore,

d_c(μ, ν) = max_{λ ∈ Λ(μ) ∪ Λ(ν)} (λ + c(x_{i_λ}, y_{j_λ})),

where each (i_λ, j_λ) ∈ f^{−1}(m_c(λ)), f standing for the numbering function defined by condition (15). In particular, if all the weights k_i and l_j are distinct, except k_1 = l_1 = 0, then

d_c(μ, ν) = max_{1 ≤ i ≤ m} min_{j ≤ p_i} (k_i + c(x_i, y_j)) ∨ max_{1 ≤ j ≤ n} min_{i ≤ q_j} (l_j + c(x_i, y_j)).

Proof. It is a direct consequence of combining Lemma 4.6 with Proposition 4.3. □

4.3. Remarks on uniqueness of plans on a region. As we see from Example 4.5, the function h^{m_c(λ)}_c (i.e. the first function on R_λ, as we go from m = 1 to m = s, that happens to be a plan) is not necessarily a reduced plan. Another, simpler, example of such a situation is

[c(x_i, y_j)]_{i,j=1}^2 = [ 1 3 ; 3 3 ];

indeed, supposing {(1,1), (1,2), (2,1), (2,2)} is a region R_λ, then here m_c(λ) = 2 and h^{m_c(λ)}_c is the 2 × 2 matrix with λ in every entry.

We can state the following about reduced minimizing plans and uniqueness of minimizing plans of a region.

Proposition 4.8. Let R_λ be a λ-region (corresponding to some discrete max-plus probability measures μ and ν), and let c : X × Y → [0, ∞) be a cost function. If h^{m_c(λ)}_c is a reduced plan, then it is the unique reduced minimizing plan for d_c(R_λ).
Vice versa, if a minimizing plan for d_c(R_λ) contains only −∞ and λ and is unique among minimizing plans with this property, then it is reduced and must coincide with h^{m_c(λ)}_c.

Proof. To prove the first assertion, suppose that h^{m_c(λ)}_c is a reduced plan. It is minimizing by Lemma 4.6. If there is another reduced minimizing plan h for d_c(R_λ), then by Lemma 4.6 its support is a subset of the support of h^{m_c(λ)}_c. Hence if h ≠ h^{m_c(λ)}_c, then for some (x_i, y_j) one has h(x_i, y_j) = −∞ and h^{m_c(λ)}_c(x_i, y_j) = λ. But, h^{m_c(λ)}_c being a reduced plan (by assumption), either the i-th row of the matrix [h^{m_c(λ)}_c(x_k, y_l)]_{k,l} or its j-th column contains only −∞, except at (i, j) where λ is. Therefore, the matrix [h(x_k, y_l)]_{k,l} has either the whole i-th row or the whole j-th column full of −∞, contradicting the fact that h is a plan for R_λ, hence proving the assertion.

To prove the second assertion, let h be the unique minimizing plan for d_c(R_λ) among minimizing plans containing only −∞ and λ. It has to be reduced by Lemma 3.2. On the other hand, h^{m_c(λ)}_c also contains only −∞ and λ and is a minimizing plan for d_c(R_λ), by Lemma 4.6. Thus h = h^{m_c(λ)}_c, as claimed. □

We remark that the latter Proposition 4.8 asserts that having a unique minimizing plan (among all plans containing only −∞ and λ) is equivalent to h^{m_c(λ)}_c being reduced, but this is not equivalent to the existence of a unique reduced minimizing plan, as the following example shows.

Example 4.9. Suppose λ = 0.

(1) If the cost function is

[c(x_i, y_j)]_{i,j=1}^2 = [ 1 2 ; 4 3 ],

then h^{m_c(λ)}_c is not reduced; there are two minimizing plans (containing only 0 and −∞), one of which is the only reduced minimizing plan:

h^{m_c(λ)}_c = [ 0 0 ; −∞ 0 ],   h^1 = [ 0 −∞ ; −∞ 0 ].
(2) If the cost function is
\[ [c(x_i,y_j)]_{i,j=1}^{3} = \begin{pmatrix} 1 & 4 & 2 \\ 6 & 7 & 8 \\ 5 & 9 & 3 \end{pmatrix}, \]
then h_c^{m_c(λ)} is not reduced, and there are at least two reduced minimizing plans:
\[ h_c^{m_c(\lambda)} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & -\infty & -\infty \\ 0 & -\infty & 0 \end{pmatrix}, \quad h_1 = \begin{pmatrix} -\infty & 0 & 0 \\ 0 & -\infty & -\infty \\ 0 & -\infty & -\infty \end{pmatrix}, \quad h_2 = \begin{pmatrix} -\infty & 0 & -\infty \\ 0 & -\infty & -\infty \\ -\infty & -\infty & 0 \end{pmatrix}. \]
△

4.4. A remark on perfect matchings. Of particular interest, as in the classical mass transportation problem, are minimizing plans supported on subsets of the type {(x_1, y_{σ(1)}), ..., (x_n, y_{σ(n)})}, where σ: {1, ..., n} → {1, ..., n} is a bijection. We will call them perfect matching plans. The plan h_1 in Example 4.9(1) and the plan h_2 in Example 4.9(2) are perfect matchings, while the other plans in these examples are not. The example below shows that for some data one might have no perfect matching minimizing plans.

Example 4.10. Consider the cost matrix
\[ [c(x_i,y_j)]_{i,j=1}^{3} = \begin{pmatrix} 5 & 1 & 5 \\ 5 & 2 & 5 \\ 3 & 5 & 4 \end{pmatrix}. \]
If k_3 = k_2 = k_1 = l_3 = l_2 = l_1 = 0, then
\[ h = \begin{pmatrix} -\infty & 0 & -\infty \\ -\infty & 0 & -\infty \\ 0 & -\infty & 0 \end{pmatrix} \]
is the unique minimizing plan (among plans containing only 0 and −∞), but is not a perfect matching. △

We stress that the nonexistence of perfect matching optimal tropical plans even when the max-plus probability measures µ and ν have all the weights equal to zero (as we said earlier, we call this case fundamental) is in striking contrast with the classical optimal mass transportation. The latter always admits an optimal transport plan corresponding to a perfect matching (i.e. a permutation matrix) between discrete measures which are sums of Dirac masses with equal weights, by virtue of the Birkhoff–von Neumann theorem, which states that the set of extreme points of the Birkhoff polytope of bistochastic matrices in R^{n²} is exactly the set of permutation matrices (and hence a linear functional on this polytope always attains its minimum at a permutation matrix). The following assertion holds true.

Proposition 4.11.
Let µ = max_{j=1}^{n}(k_j + δ_{x_j}), ν = max_{j=1}^{n}(l_j + δ_{y_j}), with the elements arranged as in (12) as usual. If there is j ∈ {1, ..., n} such that k_j ≠ l_j, then there can be no plan that would correspond to a perfect matching.

Proof. If h ∈ Π(µ, ν) is not reduced, then it does not correspond to a perfect matching, so assume that h ∈ Π_R(µ, ν). Recall (13) and consider the disjoint regions R_{λ_k}, k = 1, ..., r, determined by the plan h, where λ_k, k = 1, ..., r, are all the distinct weights of the max-plus probability measures µ and ν. Suppose that the set {i : k_i = λ_k} has m_{k,1} elements and the set {j : l_j = λ_k} has m_{k,2} elements; at least one of these two numbers must be positive. Observe that the plan h must have at least max{m_{k,1}, m_{k,2}} finite (i.e. different from −∞) entries on the region R_{λ_k}. Thus the plan h has at least
\[ m = \max\{m_{1,1}, m_{1,2}\} + \dots + \max\{m_{r,1}, m_{r,2}\} \]
finite entries in total. Keep in mind that
\[ \sum_{k=1}^{r} m_{k,1} = \sum_{k=1}^{r} m_{k,2} = n. \]
The plan will correspond to a perfect matching only if there are exactly n finite entries in total. The only way to have m = n is if m_{k,1} = m_{k,2} for every k = 1, ..., r. Given that the weights are arranged as in (12) as usual, the conclusion follows. □

5. Uniqueness of solution and perfect matchings for random costs

In this section we elucidate some questions regarding the optimal cost, perfect matchings and uniqueness when some randomness is introduced in the cost function. We limit ourselves to the fundamental case (i.e. when all the weights of the discrete max-plus probability measures are zero) and to m = n, i.e.
\[ \mu_0^n = \max\{0 + \delta_{x_1}, \dots, 0 + \delta_{x_n}\}, \qquad \nu_0^n = \max\{0 + \delta_{y_1}, \dots, 0 + \delta_{y_n}\}, \]
with x_i, i = 1, ..., n, as well as y_j, j = 1, ..., n, all distinct.
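In the fundamental case just introduced there is a single region with λ = 0, and a plan containing only 0 and −∞ is identified with its support: a set of cells meeting every row and every column of the cost matrix. The optimal tropical cost then reduces to a bottleneck value, the smallest t such that every row and every column has an entry ≤ t. The following Python sketch (our own illustration, not the paper's code; all names are ours) computes this value and, by brute force over supports, confirms the uniqueness claim of Example 4.10.

```python
from itertools import product

def tropical_cost_fundamental(c):
    """Bottleneck value: max of the row minima and the column minima of c."""
    return max(max(min(row) for row in c),
               max(min(col) for col in zip(*c)))

def minimizing_supports(c):
    """All supports of minimizing plans containing only 0 and -infinity
    (fundamental case): covering sets of cells minimizing the max of c."""
    n = len(c)
    cells = [(i, j) for i in range(n) for j in range(n)]
    best, winners = None, []
    for mask in product([False, True], repeat=n * n):
        S = [cell for cell, keep in zip(cells, mask) if keep]
        if len({i for i, _ in S}) < n or len({j for _, j in S}) < n:
            continue  # some row or column is uncovered: not a plan
        value = max(c[i][j] for i, j in S)
        if best is None or value < best:
            best, winners = value, [frozenset(S)]
        elif value == best:
            winners.append(frozenset(S))
    return best, winners

# Example 4.10: the optimal tropical cost is 4 and the minimizing plan is
# unique, with a four-cell support, hence not a perfect matching.
c = [[5, 1, 5],
     [5, 2, 5],
     [3, 5, 4]]
best, winners = minimizing_supports(c)
print(tropical_cost_fundamental(c), best, len(winners))  # -> 4 4 1
```

Here `winners` consists of the single support {(0,1), (1,1), (2,0), (2,2)} (0-indexed), in agreement with the assertion of Example 4.10.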
In what follows, the sequences of max-plus probability measures µ_0^n and ν_0^n as above are fixed, while the cost function is random, represented by a Bernoulli random matrix: each entry of the n × n cost matrix is independent of the others and takes the value β_1 with probability p and β_2 with probability q = 1 − p, where β_1 < β_2.

5.1. Optimal tropical cost for random cost matrices. The following statement holds true.

Theorem 5.1. Let β_1, β_2 be nonnegative numbers with β_1 < β_2, and suppose that for each n, µ_0^n and ν_0^n are discrete max-plus probability measures with all their weights equal to zero, and c_n is a Bernoulli cost matrix: P(c_n(x_i, y_j) = β_1) = p, P(c_n(x_i, y_j) = β_2) = q = 1 − p for i, j ∈ {1, ..., n}, where x_1, ..., x_n and y_1, ..., y_n are the points of the supports of µ_0^n and ν_0^n. If q < 1, then
\[ P\bigl(d_{c_n}(\mu_0^n, \nu_0^n) = \beta_1\bigr) \to 1 \quad \text{as } n \to \infty. \]

Proof. Even though a very short argument can be provided, we will derive a formula for the probability in question. Referring to Lemma 4.6 (and recalling Definition 4.4), the optimal tropical cost d_{c_n} between µ_0^n = max_{i=1}^{n}(0 + δ_{x_i}) and ν_0^n = max_{j=1}^{n}(0 + δ_{y_j}) will be β_1 or β_2 depending on whether m_{c_n}(0) is 1 or 2 respectively. It is 1 if and only if the matrix of c_n has at least one β_1 in every row and in every column. Denote by F_i the event that there is at least one β_1 in the i-th row of the matrix, and by C_j the event that there is at least one β_1 in the j-th column of the matrix. In the calculation that follows we retain, for the sake of clarity, the notation m for the number of rows and n for the number of columns of the cost matrix, although one really has m = n. Therefore for the indices i and j one has i ∈ {1, ..., m}, j ∈ {1, ..., n}.
Thus
\[ P\bigl(d_{c_n}(\mu_0^n, \nu_0^n) = \beta_1\bigr) = P\Bigl(\Bigl(\bigcap_{i=1}^{m} F_i\Bigr) \cap \Bigl(\bigcap_{j=1}^{n} C_j\Bigr)\Bigr) = 1 - P\Bigl(\Bigl(\bigcup_{i=1}^{m} F_i^c\Bigr) \cup \Bigl(\bigcup_{j=1}^{n} C_j^c\Bigr)\Bigr), \]
where the upper index c denotes the complement of the event. By inclusion–exclusion we have
\[ \begin{aligned}
P\Bigl(\Bigl(\bigcup_{i=1}^{m} F_i^c\Bigr) \cup \Bigl(\bigcup_{j=1}^{n} C_j^c\Bigr)\Bigr)
&= \sum_{s=1}^{m+n} (-1)^{s+1} \sum_{\substack{a+b=s\\ (a,b)\neq(0,0)}} \binom{m}{a}\binom{n}{b}\, P\bigl(F_1^c \cap \dots \cap F_a^c \cap C_1^c \cap \dots \cap C_b^c\bigr) \\
&= \sum_{s=1}^{m+n} (-1)^{s+1} \sum_{a+b=s} \binom{m}{a}\binom{n}{b}\, q^{mn-(m-a)(n-b)} \\
&= -q^{mn} \sum_{\substack{0\le a\le m,\ 0\le b\le n\\ (a,b)\neq(0,0)}} (-1)^{a+b} \binom{m}{a}\binom{n}{b}\, q^{-(m-a)(n-b)}.
\end{aligned} \]
Assume that p < 1 (otherwise P(d_{c_n}(µ_0^n, ν_0^n) = β_1) = 1 for any n, so that there is nothing to prove). Then
\[ \begin{aligned}
P\Bigl(\Bigl(\bigcup_{i=1}^{m} F_i^c\Bigr) \cup \Bigl(\bigcup_{j=1}^{n} C_j^c\Bigr)\Bigr)
&= -q^{mn} \Biggl( \sum_{0\le a\le m,\ 0\le b\le n} (-1)^{a+b} \binom{m}{a}\binom{n}{b}\, q^{-(m-a)(n-b)} - q^{-mn} \Biggr) \\
&= -q^{mn}(-1)^{n} \sum_{a=0}^{m} \binom{m}{a}(-1)^{a} \sum_{b=0}^{n} (-1)^{n-b} \binom{n}{b} \bigl(q^{-(m-a)}\bigr)^{n-b} + 1 \\
&= -q^{mn}(-1)^{m+n} \sum_{a=0}^{m} \binom{m}{a}(-1)^{m-a} \bigl(1 - q^{-(m-a)}\bigr)^{n} + 1.
\end{aligned} \]
Recalling that m = n, we get
\[ P\bigl(d_{c_n}(\mu_0^n, \nu_0^n) = \beta_1\bigr) = q^{n^2} \sum_{j=0}^{n} (-1)^{j} \binom{n}{j} \bigl(1 - q^{-j}\bigr)^{n}. \tag{16} \]
Thus P(d_{c_n}(µ_0^n, ν_0^n) = β_1) → 1 as n → ∞ if q < 1, proving the claim. □

For the following remark, let us introduce a special notation for the expression on the right-hand side of (16), namely, set
\[ s(n; p) := \begin{cases} (1-p)^{n^2} \displaystyle\sum_{j=0}^{n} (-1)^{j} \binom{n}{j} \bigl(1 - (1-p)^{-j}\bigr)^{n}, & p \in [0, 1), \\ 1, & p = 1. \end{cases} \]

Remark 5.2. The relationship (16) reads lim_n s(n; p) = 1 for 0 < p ≤ 1. It is also easy to show that
\[ \lim_{p \to 0} s(n; p) = 0, \qquad \lim_{p \to 1} s(n; p) = 1, \qquad n \in \mathbb{N}, \]
so that p ↦ s(n; p) is continuous over [0, 1]. The asymptotics of s, hence that of the probability of the optimal tropical cost equaling the minimum value of the cost function, may be interesting also in the more general case when p is not constant but depends on n.
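Formula (16) can also be checked numerically. The sketch below (our own illustration; the helper names are not from the paper) evaluates s(n; p) exactly with rational arithmetic and, for small n, compares it against a brute-force enumeration of all 2^{n²} Bernoulli patterns of β_1 entries.

```python
from fractions import Fraction
from itertools import product
from math import comb

def s(n, p):
    """Right-hand side of (16): probability that every row and every column
    of an n x n Bernoulli cost matrix contains at least one beta_1 entry."""
    if p == 1:
        return Fraction(1)
    q = 1 - Fraction(p)
    return q ** (n * n) * sum(
        (-1) ** j * comb(n, j) * (1 - q ** (-j)) ** n for j in range(n + 1)
    )

def s_brute(n, p):
    """Same probability by enumerating all 2^(n^2) patterns, where an entry
    True marks a cell whose cost equals beta_1."""
    p = Fraction(p)
    total = Fraction(0)
    for entries in product([True, False], repeat=n * n):
        rows = [entries[i * n:(i + 1) * n] for i in range(n)]
        if all(any(r) for r in rows) and all(any(col) for col in zip(*rows)):
            k = sum(entries)  # number of beta_1 entries in this pattern
            total += p ** k * (1 - p) ** (n * n - k)
    return total

print(s(2, Fraction(1, 2)), s_brute(2, Fraction(1, 2)))  # -> 7/16 7/16
```

For instance, for n = 2 and p = 1/2 exactly 7 of the 16 equiprobable patterns have a β_1 entry in every row and column, so both computations return 7/16.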
For instance, one has lim_{n→∞} s(n; 1/n^γ) = 0 for all γ ≥ 1, while lim_{n→∞} s(n; 1/n^{1/2}) = 1. ⋄

Remark 5.3. A quite similar situation occurs when the cost is given not by a Bernoulli random matrix but, say, by one taking finitely many values. Namely, suppose now that s ∈ N is fixed and each entry of the cost matrix c_n can take one of the values β_1 < ... < β_s (as in (14)), with β_1 appearing with probability p_1. Let q := 1 − p_1. Then the lower bound for P(d_{c_n}(µ_0^n, ν_0^n) = β_1) can be obtained in the same way as in the proof of Theorem 5.1. Therefore
\[ \lim_{n\to\infty} P\bigl(d_{c_n}(\mu_0^n, \nu_0^n) = \beta_1\bigr) = 1. \]
Thus, even if the number of available choices for the entries of the cost matrix c_n is large but fixed, the optimal tropical cost between µ_0^n and ν_0^n equals the smallest value β_1 of the cost with large probability for large n (the probability of this event tending to one as n → ∞). Moreover, if p_k is the probability of β_k appearing in any given entry of the cost matrix, then it follows from the calculation above that
\[ P\bigl(d_{c_n}(\mu_0^n, \nu_0^n) = \beta_j\bigr) = s\Bigl(n; \sum_{k=1}^{j} p_k\Bigr) - s\Bigl(n; \sum_{k=1}^{j-1} p_k\Bigr), \tag{17} \]
which for j ≥ 2 tends to zero as n → ∞, the equality (17) giving the rate of convergence. ⋄

5.2. Presence of perfect matching optimal plans. We consider the following definition.

Definition 5.4. Let µ and ν be discrete max-plus probability measures and let h be a plan for a square region R_λ. We will say that h contains a perfect matching if there is a perfect matching plan h̃ for the same region with support contained in the support of h.

In other words, h contains a perfect matching for a region R_λ if it can be "simplified" by substituting some of its λ entries by −∞ to get a perfect matching plan for R_λ. We will again discuss the case of a random cost provided by a Bernoulli cost matrix, and restrict ourselves to the fundamental case.
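Whether a given plan contains a perfect matching in the sense of Definition 5.4 amounts to the existence of a perfect matching in the bipartite graph whose edges are the finite entries of the plan matrix, which can be decided by a standard augmenting-path (Kuhn) algorithm. A sketch (our own illustration, not the paper's code; all names are ours), applied to the plans of Examples 4.9(2) and 4.10:

```python
NEG_INF = float("-inf")

def contains_perfect_matching(plan):
    """Kuhn's augmenting-path algorithm on the bipartite graph whose edges
    are the finite entries of the plan matrix."""
    n = len(plan)
    adj = [[j for j in range(n) if plan[i][j] != NEG_INF] for i in range(n)]
    match_of_col = [-1] * n  # match_of_col[j] = row matched to column j

    def augment(i, seen):
        for j in adj[i]:
            if j not in seen:
                seen.add(j)
                if match_of_col[j] == -1 or augment(match_of_col[j], seen):
                    match_of_col[j] = i
                    return True
        return False

    return all(augment(i, set()) for i in range(n))

# The plan h^{m_c(0)} of Example 4.9(2) contains a perfect matching (h_2) ...
h_49 = [[0, 0, 0],
        [0, NEG_INF, NEG_INF],
        [0, NEG_INF, 0]]
# ... while the unique minimizing plan of Example 4.10 contains none.
h_410 = [[NEG_INF, 0, NEG_INF],
         [NEG_INF, 0, NEG_INF],
         [0, NEG_INF, 0]]
print(contains_perfect_matching(h_49), contains_perfect_matching(h_410))
# -> True False
```

In h_410 the first two rows are both supported only on the second column, so no simplification can produce a perfect matching, as Example 4.10 asserts.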
To simplify the discussion, let β_1 = 0 and β_2 = 1. If there is a zero in every row and every column of the matrix, then, as we know, the optimal tropical cost is 0, but if we look at the corresponding plan (represented by the matrix h), it may be impossible to "simplify" it (change some of the entries equal to 0 to −∞) so as to produce a perfect matching plan (see Example 4.10); that is, it does not contain a perfect matching. In the opposite direction, if the corresponding optimal plan contains a perfect matching, then the optimal tropical cost is 0. Summing up, there are the following possibilities.

• The optimal tropical cost is 1. This occurs exactly when some row or column of the cost matrix fails to have a 0. Then there is always a perfect matching plan. In fact, the absence of a 0 in some row or column of the cost matrix means that m_c(0) = 2, so that h_c^{m_c(0)} is the plan with every entry finite, which contains any perfect matching plan. For instance, if the cost matrix is
\[ [c_{i,j}]_{i,j=1}^{2} = \begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix}, \]
then a possible perfect matching minimizing plan is
\[ [h_{i,j}]_{i,j=1}^{2} = \begin{pmatrix} 0 & -\infty \\ -\infty & 0 \end{pmatrix}. \]

• The optimal tropical cost is 0, but the optimal plan does not contain a perfect matching.

• The optimal tropical cost is 0, and the optimal plan contains a perfect matching.

For the following theorem we give a random graph argument based on the strong and remarkable result of Bollobás and Thomason (see [2, Theorem 7.11]) that will also be used in the proof of Theorem 5.7 below.

Theorem 5.5. Let β_1 < β_2, and suppose that for each natural number n, µ_0^n and ν_0^n are discrete max-plus probability measures with all their weights equal to zero, and c_n is a Bernoulli cost matrix: P(c_n(x_i, y_j) = β_1) = p_n, P(c_n(x_i, y_j) = β_2) = q_n = 1 − p_n for i, j ∈ {1, ..., n}, where x_1, ..., x_n and y_1, ...
, y_n are the points of the supports of µ_0^n and ν_0^n. If p_n ≥ (log n)/n for all but finitely many n, then
\[ \lim_{n\to\infty} P\bigl(\exists\, h \in \Pi_{c_n}(\mu_0^n, \nu_0^n) : h \text{ contains a perfect matching}\bigr) = 1. \]

Proof. We associate with c_n one and only one random bipartite (undirected) graph, denoted by G_n(c_n), with the sets {x_1, ..., x_n} and {y_1, ..., y_n} as the two disjoint sets of vertices, in the following way: c_n(x_i, y_j) = β_1 if x_i y_j is an edge, and c_n(x_i, y_j) = β_2 otherwise. The plan h_{c_n}^{m_{c_n}(0)} (recall the definitions of Section 4.2) contains a perfect matching plan if and only if the bipartite graph G_n(c_n) contains a perfect matching. In the proof of [2, Theorem 7.11] it is shown that the probability that the random bipartite graph contains a perfect matching approaches 1 as n → ∞. Thus the probability that h_{c_n}^{m_{c_n}(0)} contains a perfect matching also approaches 1 as n → ∞. Since h_{c_n}^{m_{c_n}(0)} is always an optimal plan, the result follows. □

Remark 5.6. An alternative proof of Theorem 5.5 can be offered as follows. Regardless of whether the optimal tropical cost is β_1 or β_2, for the plan h_c^{m_c(0)} (which is always minimizing), the property of containing a perfect matching plan is characterized by the fact that, for some permutation σ ∈ S_n, the product
\[ \prod_{j=1}^{n} \bigl|\beta_2 - c(x_j, y_{\sigma(j)})\bigr| \]
is different from zero (necessarily then it is equal to (β_2 − β_1)^n). The latter is guaranteed, for instance, when the matrix [β_2 − c(x_i, y_j)]_{i,j=1}^{n} is not singular (i.e. has nonzero determinant). By a theorem of Basak and Rudelson [1], the probability of the latter event approaches 1 for every fixed 0 < p < 1. ⋄

5.3. Uniqueness of minimizing plans.
We show now that in the fundamental case (when all the weights of the discrete max-plus measures are zero), when the uniform probability is put on the space of cost matrices, the uniqueness of a minimizing plan containing only 0 and −∞ is an asymptotically rare event, in the sense that its probability tends to zero as the number of weights approaches infinity. Namely, the following result is valid.

Theorem 5.7. Fix a positive real number M > 0 and let {X_n}_{n=1}^∞ and {Y_n}_{n=1}^∞ be sequences of subsets of X and Y respectively, with #X_n = #Y_n = n for all n. For each n ∈ N, let
\[ \mu_0^n := \max\{0 + \delta_{x_1}, \dots, 0 + \delta_{x_n}\}, \qquad \nu_0^n := \max\{0 + \delta_{y_1}, \dots, 0 + \delta_{y_n}\}, \]
where x_1, ..., x_n are the elements of X_n and y_1, ..., y_n those of Y_n. For each n let P_n be the uniform probability measure over [0, M]^{X_n × Y_n}. Define C_n ⊂ [0, M]^{X_n × Y_n} as the set of cost functions c such that there is a unique minimizing plan for d_c(µ_0^n, ν_0^n) among plans containing only 0 and −∞. Then
\[ \lim_{n\to\infty} P_n(C_n) = 0. \]

Proof. In order to apply the theory from [2], let us introduce the notion of a bipartite graph process, specifically on the set of vertices X_n ∪ Y_n. Any given bijective function f: {1, ..., n²} → {1, ..., n}² determines a sequence of n² + 1 graphs in the following way: at time step t = 0 there are no edges, and at step t ∈ {1, ..., n²} the edge (i, j) := f(t) is added. At the n²-th time step we obtain the complete bipartite graph. Note that the set of bijective functions f: {1, ..., n²} → {1, ..., n}² is in one-to-one correspondence with the set of permutations of {1, ..., n²}, i.e. with the symmetric group S_{n²} on n² elements; in fact, each f^{−1} is an enumeration of the cells of an n × n matrix.
If the function f (or, equivalently, the respective permutation σ ∈ S_{n²}) is chosen randomly with uniform probability, then we have a random bipartite graph process, which coincides with the one described in [2] (see pp. 42 and 171 therein). Let
\[ \Omega_n := \bigl\{\omega : X_n \times Y_n \to [0, M] \ :\ \omega \text{ takes } n^2 \text{ distinct values}\bigr\}, \]
and for each ω ∈ Ω_n define the mapping f_ω: {1, ..., n²} → {1, ..., n}² by setting f_ω(t) := (i, j), where (i, j) is the unique pair of indices such that ω(x_i, y_j) is the t-th smallest value among the n² distinct values ω(x_1, y_1), ..., ω(x_n, y_n). Thus each ω ∈ Ω_n determines an ordering f_ω of the matrix cells which, in turn, gives the above described graph process with f := f_ω.

Since P_n is the uniform measure on [0, M]^{X_n × Y_n}, we have P_n(Ω_n) = 1. Moreover, since P_n is uniform, for each bijective g: {1, ..., n²} → {1, ..., n}², the set {ω ∈ Ω_n : f_ω = g} has the same P_n-measure, namely 1/(n²)!. Hence these sets form a partition of the probability space ([0, M]^{X_n × Y_n}, B([0, M]^{X_n × Y_n}), P_n) into (n²)! equiprobable events, where B([0, M]^{X_n × Y_n}) stands for the Borel σ-algebra of [0, M]^{X_n × Y_n}. Thus the bipartite random graph process can be equivalently sampled from this probability space, rather than directly from the set of bijective g: {1, ..., n²} → {1, ..., n}² (or, equivalently, from S_{n²}) endowed with the uniform probability.

Let us denote by {G_t}_{t=0}^{n²} a generic realization of our bipartite random graph process on X_n ∪ Y_n, and let τ be the stopping time
\[ \tau := \min\{t : G_t \text{ has minimum degree } 1\}. \]
That is, τ is the first instance t such that every x_i belongs to an edge and also every y_j belongs to an edge.
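The graph process and the stopping time τ can be illustrated computationally (a sketch of our own, not part of the proof; all names are ours): cells of a matrix ω with distinct entries are revealed in increasing order of their values, and τ is the first time every row and every column is covered. The event that the edge set at time τ is exactly a perfect matching (the event F_n below) can then be estimated by simulation.

```python
import random

def edges_at_tau(omega):
    """Reveal cells of omega in increasing order of value; stop at time tau,
    the first moment every row and every column is covered."""
    n = len(omega)
    order = sorted(((i, j) for i in range(n) for j in range(n)),
                   key=lambda cell: omega[cell[0]][cell[1]])
    rows, cols, edges = set(), set(), set()
    for i, j in order:
        edges.add((i, j))
        rows.add(i)
        cols.add(j)
        if len(rows) == n and len(cols) == n:
            break
    return edges

def is_exactly_perfect_matching(edges, n):
    return (len(edges) == n
            and len({i for i, _ in edges}) == n
            and len({j for _, j in edges}) == n)

# Estimate P(F_n): the event that G_tau is exactly a perfect matching
# should be rare already for moderate n.
random.seed(1)
n, trials = 20, 500
hits = sum(
    is_exactly_perfect_matching(
        edges_at_tau([[random.random() for _ in range(n)] for _ in range(n)]), n)
    for _ in range(trials)
)
print(hits / trials)
```

With the seed fixed, the observed frequency is (near) zero already for n = 20, in line with P_n(F_n) → 0 as used below in the proof.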
Recalling now Definition 4.4 and the algorithm of Section 4.2, we have
\[ \tau(\omega) = m_\omega(0) \quad \text{for } P_n\text{-a.e. } \omega \in \Omega_n. \tag{18} \]
Denote by D_n the event that G_τ contains a perfect matching. By [2, Theorem 7.11],
\[ \lim_{n\to\infty} P_n(D_n) = 1, \tag{19} \]
which means, in words, that by the time the bipartite graph achieves minimum degree 1 (this is exactly the time when the minimizing plan h_ω^{m_ω(0)} is formed, by (18)), the graph contains a perfect matching.

Let H_n := {ω ∈ Ω_n : h_ω^{m_ω(0)} is not reduced}. By Proposition 4.8, we will be done if we show that P_n(H_n) → 1 as n → ∞. Now, the event D_n is the disjoint union of F_n and E_n, where F_n is the event that G_τ is exactly a perfect matching, and E_n is the event that G_τ has a perfect matching and at least one more edge. As can easily be argued, P_n(F_n) → 0 as n → ∞ (in fact, for F_n to hold, at the last step of forming G_τ only one possibility of forming an edge, or, equivalently, only one way of placing a zero in the respective row of the matrix, results in a perfect matching). Thus, by (19), P_n(E_n) → 1 as n → ∞. On the other hand, the event E_n is included in H_n: indeed, a graph in E_n corresponds to a plan in whose support there is a triple of indices, two of which are in the same column and two of which are in the same row, thereby violating Definition 3.1. Therefore lim_{n→∞} P_n(H_n) = 1, concluding the proof. □

References

[1] A. Basak and M. Rudelson. Invertibility of sparse non-Hermitian matrices. Advances in Mathematics, 310:426–483, 2017.
[2] B. Bollobás. Random Graphs. Cambridge University Press, 2nd edition, 2010.
[3] R. S. Garfinkel and K. C. Gilbert. The bottleneck traveling salesman problem: algorithms and probabilistic analysis. J. Assoc. Comput. Mach., 25(3):435–448, 1978.
[4] P. C. Gilmore and R. E. Gomory. Sequencing a one state-variable machine: a solvable case of the traveling salesman problem. Operations Res.
, 12:655–679, 1964.
[5] V. Kolokoltsov and V. P. Maslov. Idempotent Analysis and Its Applications. Mathematics and Its Applications. Springer Netherlands, 1997.
[6] C. Villani. Optimal Transport: Old and New, volume 338 of Grundlehren der mathematischen Wissenschaften. Springer-Verlag, Berlin, 2008.

(Pedro Barrios) Universidad de Antioquia, Calle 67, No. 53-108, Medellín, Colombia
Email address, Pedro Barrios: pdrlsbrrs@gmail.com

(Sergio Mayorga) Innopolis University, Ul. Universitetskaya 1, Innopolis, Russian Federation
Email address, Sergio Mayorga: me@mayorga.ru

(Eugene Stepanov) St. Petersburg Branch of the Steklov Mathematical Institute of the Russian Academy of Sciences, St. Petersburg, Russian Federation; Dipartimento di Matematica, Università di Pisa, Largo Bruno Pontecorvo 5, 56127 Pisa, Italy; and HSE University, Moscow, Russian Federation
Email address: stepanov.eugene@gmail.com