Collaboration in Multi-Robot Systems: Taxonomy and Survey over Frameworks for Collaboration

Riwa Karam 1, Alexander A. Nguyen 1, Ruoyu Lin 1, David R. Martin 1, Diana Morales 1, Brooks A. Butler 1, Magnus Egerstedt 2

1 University of California, Irvine, Irvine, CA, USA
2 University of North Carolina, Chapel Hill, Chapel Hill, NC, USA
Email: {rwkaram, alexaan2, rlin10, davidrm3, dlmoral2, bbutler2}@uci.edu, magnus@unc.edu

ABSTRACT

Collaboration is a central theme in multi-robot systems as tasks and demands increasingly require capabilities that go beyond what any one individual robot possesses. Yet, despite extensive work on cooperative control and coordinated behaviors, the terminology surrounding collective multi-robot interaction remains inconsistent across research communities. In particular, cooperation, coordination, and collaboration are often treated interchangeably, without clearly articulating the differences among them. To address this gap, we propose definitions that distinguish and relate cooperation, coordination, and collaboration in multi-robot systems, highlighting the support of new capabilities in collaborative behaviors, and illustrate these concepts through representative examples. Building on this taxonomy, different frameworks for collaboration are reviewed, and technical challenges and promising future research directions are identified for collaborative multi-robot systems.

Keywords: Collaboration · Coordination · Cooperation · Multi-Robot Systems

1 Introduction

Many of today's engineering challenges where robots are envisioned to be utilized, such as disaster response [1], environmental monitoring [2], large-scale logistics [3], and space exploration [4], are well-suited to multi-robot deployments, particularly as tasks grow beyond the capabilities of any single robot.
These applications highlight the need for multi-robot collaboration, where robots share goals and combine their different functionalities (such as sensing and actuation modalities, or other physical attributes) to achieve results that are difficult or impossible to achieve in isolation. As a result, collaboration has become a central theme in multi-robot research [5-10], prompting the need to understand how such joint behaviors should be described and classified.

Despite the prevalence of multi-robot systems, the terminology used to describe collective behaviors remains inconsistent across research communities. One contributing factor is that related concepts, such as "cooperation", are by themselves broad and variably interpreted, making it difficult to delineate finer-grained categories of cooperative behavior in a way that is clearly defined and technically precise. As a result, concepts such as "cooperation", "coordination", and "collaboration" are frequently invoked in robotics, and in multi-agent systems broadly [11-13], but not clearly defined or distinguished. These terms often appear interchangeably, even though they may reflect distinct assumptions regarding inter-robot dependencies, information exchange, and the nature of joint actions among robots. For example, the standard approach to consensus [14, 15], formation [16, 17], and coverage [18, 19] typically emphasizes the notion of coordination, which ensures that individual actions remain compatible with a shared objective, without necessarily enabling capabilities that surpass those of individual robots. Thus, as multi-robot systems become more heterogeneous and tasks increasingly coupled and complex, the lack of shared definitions complicates comparisons across methods and may obscure how cooperative approaches might transfer across problem settings.
While several surveys have reviewed "cooperative" or "coordinated" multi-robot systems [20-22], and others have examined resilience [23, 24], task allocation [25, 26], or domain-specific deployments (such as object transportation [27, 28]), there remains limited treatment of "collaboration" as a distinct and carefully defined concept. Existing works typically focus on particular subsets of the problem (for example, industrial collaboration architectures [29], cooperative control methods [13], or multi-agent learning techniques [30]) without addressing how the different approaches relate or what conditions are necessary for "collaborative" capabilities to be supported.

The objective of this article is to address this lack of clear distinctions among cooperation, coordination, and collaboration by providing a structured examination of collaboration in multi-robot systems. This article provides the following main contributions:

• Formalization of the distinctions among cooperation, coordination, and collaboration;
• Review of different organizational architectures (centralized, decentralized, hierarchical) supporting collaborative behavior; and
• Review of different collaborative frameworks drawn from ecology and game theory, as well as human-swarm interaction and learning-based frameworks.

By synthesizing insights from multiple perspectives within the literature on collaborative multi-robot systems, this survey aims to establish a conceptual foundation and to support the development of new methodologies that explicitly leverage collaborative capabilities in multi-robot systems.

The outline of this article is as follows. Section 2 presents the definitions adopted in this survey and illustrates them through representative examples. Section 3 reviews background literature on cooperation, coordination, and collaboration across multi-robot systems and related fields.
In Section 4, the architectural and organizational frameworks that shape how collaborative behavior is supported in multi-robot teams are examined. Section 5 surveys different methodological approaches, including control-theoretic, game-theoretic, human-robot teaming, ecology-inspired, and learning-based methods. In Section 6, open challenges and opportunities for advancing collaboration in multi-robot systems are discussed.

2 Taxonomy on Cooperation, Coordination, and Collaboration

Given the potential for ambiguity in the terminology used to describe collective behaviors in multi-robot systems, we introduce in this section a set of definitions that distinguish cooperation, coordination, and collaboration, as well as the relationships among these terms. These concepts often appear together in the literature, but may represent fundamentally different types of inter-robot relationships, as well as different extents to which robots rely on one another to complete a task. The taxonomy introduced below, visualized in Figure 1, will serve as a basis for this article.

Cooperation represents one of the more fundamental forms of collective behavior, requiring only that robots share an overarching objective and that they do not act in ways that deliberately impede one another. We present the definition of cooperation in Definition 1; note that, already, we are imposing a notion of "intention" on agents in robotic systems, which in and of itself highlights the challenge of defining behavioral terms for engineered systems, where intent and agency in autonomous systems can have broad interpretations. However, for the purposes of this article, we consider the class of cooperative multi-robot systems to include all scenarios where the intention is that robotic agents either work towards a shared objective, or at the very least do not explicitly make efforts to the detriment of other agents.
This definition excludes scenarios in which robotic agents intentionally sabotage others or engage in antagonistic behavior, which are outside the scope of this survey but studied in the broader multi-agent systems literature [31-33].

Definition 1: Cooperation between two or more robots is the non-adversarial intention to contribute towards achieving a shared goal or a set of related tasks.

The definition of cooperation does not impose any constraints on the information exchange, task execution, or joint effort between robots. It imposes minimal assumptions on robot interaction rules, serving as a foundation for the richer forms of collective behavior introduced later in this section. In the case of system behavior that is only cooperative, but not necessarily coordinated or collaborative, robots may operate independently, without coupling their decisions or actions, as long as their behaviors are aligned with their shared goal or related tasks.

Figure 1: Venn diagram illustrating the relationship among cooperation, coordination, and collaboration in multi-robot systems. Cooperation serves as a superset for coordination and capability complementarity, whose intersection corresponds to collaboration.

As an example, consider a simplified warehouse environment with homogeneous robots and packages of varying sizes that may require one or more robots to transport (see Figure 2a), where the shared goal of all robots is to transport packages to a designated loading area. Each robot is then tasked to grab packages and offload them in the loading zone. The common goal between robots provides a basis for cooperation, where, supposing that each robot selects a
package at random (or through a greedy mechanism) to retrieve and transport to a loading zone, and that the packages are all small enough to be moved by a single robot, then robots can behave independently while still contributing to the global objective of clearing the warehouse. Thus, even though such a scenario may be viewed as suboptimal with respect to the efficiency and capability of the entire system, the shared non-adversarial intent among robots to achieve this collective goal satisfies the definition of cooperation as presented in Definition 1.

Having established cooperation as the foundational layer of collective behavior, we now strengthen the behavioral requirements by considering how robots, on top of shared intent and the absence of antagonism, should couple their actions to achieve more structured system behavior. Coordination, as we define in Definition 2, requires additional structure beyond shared intent. Robots must exchange information and/or follow rules that ensure their actions remain compatible with one another. This coupling may arise from task dependencies, resource constraints, or the need to avoid conflict during execution. While coordination can improve team efficiency, it does not necessarily yield new capabilities that surpass those of any individual robot.

Definition 2: Coordination between two or more robots is the cooperative information exchange to plan their actions and/or decisions, in order to execute interdependent tasks more effectively than any single robot operating alone.

Returning to the same warehouse environment, suppose that each robot is able to broadcast (that is, share information about) which package it intends to retrieve based on proximity (see Figure 2b). This additional information exchange prevents two or more robots from redundantly going to move the same small package, or even blocking each other's paths.
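The gap between Definition 1 and Definition 2 in the warehouse example can be sketched in a few lines of Python. Everything below (the greedy nearest-package rule, the one-dimensional positions, the broadcast order) is a hypothetical choice made for this sketch, not an algorithm from the surveyed literature:

```python
# Illustrative contrast between Definition 1 (cooperation only) and
# Definition 2 (coordination) in the warehouse example. The greedy
# nearest-package rule, 1-D positions, and broadcast order are all
# hypothetical choices made for this sketch.

def cooperative_assignment(robots, packages):
    """Each robot independently picks its nearest package with no
    information exchange, so redundant picks are possible."""
    return {r: min(packages, key=lambda p: abs(packages[p] - robots[r]))
            for r in robots}

def coordinated_assignment(robots, packages):
    """Robots broadcast claims in turn; a package already claimed by
    another robot is skipped, preventing redundant assignments."""
    claimed, assignment = set(), {}
    for r in sorted(robots):  # the broadcast order is an assumption
        free = {p: x for p, x in packages.items() if p not in claimed}
        if free:
            pick = min(free, key=lambda p: abs(free[p] - robots[r]))
            claimed.add(pick)
            assignment[r] = pick
    return assignment

robots = {"r1": 0.0, "r2": 1.0}   # robot positions on a line
packages = {"A": 0.4, "B": 5.0}   # small-package positions

print(cooperative_assignment(robots, packages))  # both robots pick "A"
print(coordinated_assignment(robots, packages))  # the clash is resolved
```

Without the information exchange both robots converge on the same nearby package; with claim broadcasting, the second robot is pushed to the remaining one, illustrating how coordination improves efficiency without changing what any single robot can do.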
Robots can now use this information to assign themselves, or be assigned, to different packages, leading to a more efficient and potentially faster achievement of the shared goal. The task structure has not changed, as each package is still small enough to be carried by a single robot, but the robots' actions are now coupled, which is necessary in coordination. The decision made by one robot directly influences the decisions and/or actions available to others.

While coordination plans independent robot actions, it does not require robots to jointly execute tasks that exceed their individual capabilities. This motivates the next category of interaction in the taxonomy presented in this article: capability complementarity.

Definition 3: Capability complementarity between two or more robots is the cooperative use of their joint capabilities to execute tasks unattainable by any individual robot alone, thereby expanding the joint action set.

Robots that achieve capability complementarity must not only cooperate but must also combine their mobility, sensing, computational, and/or communication capabilities to create capabilities that no individual robot can realize alone. These capabilities expand the joint action set in ways that cannot be realized by individual robots acting independently, a condition needed when a task requires joint effort to be successfully accomplished. The definition of capability complementarity, presented in Definition 3, allows, in general, for scenarios in which joint action occurs without explicit coordination.
Figure 2 (panels: (a) Cooperation, (b) Coordination, (c) Capability Complementarity, (d) Collaboration): Illustrative examples of cooperation, coordination, capability complementarity, and collaboration in a simplified warehouse environment with homogeneous robots and three different package sizes; small packages can be moved by any single robot, medium packages by any two robots, and the large package by any three robots. In all cases, robots share a common goal of moving all packages from the warehouse to a loading zone (depicted by the green area), which is the basis of cooperative "intent" between robots. Red arrows represent the task assignment of each robot (that is, which package to move), and a crossed-out red arrow represents a "bad" assignment (that is, a redundant assignment or infeasible task). In (a), robots randomly, or greedily, go to move packages without regard for the selections of other robots (that is, no coordination) and without taking joint actions to move larger packages (that is, no capability complementarity), which can lead to suboptimal assignments and excludes all packages that require more than one robot to transport. In (b), robots share information to determine which small box each robot must move, but do not take any joint action to move packages together (that is, no capability complementarity). In (c), robots randomly, or greedily, go to move packages (that is, no coordination); however, robots may form joint capabilities, should they happen to arise, and move packages together. In (d), robots share information to determine which box each robot must move, or what arrangements in terms of capability complementarity are needed to clear the warehouse of all box sizes.

For example, capability complementarity occurs if robots independently reach the same
medium (or large) package and organically begin manipulating it together (see Figure 2c); the tasks are not planned, hence there is no guarantee that capability complementarity will be achieved. However, such cases rarely happen: in practice, achieving capability complementarity nearly always requires coordination, since joint capabilities are hard to realize without explicit information exchange. That is why, in this article, we focus on the intersection of cooperation, coordination, and capability complementarity, labeled as collaboration in Figures 1 and 2d. For the remainder of this article, whenever we mention collaboration, we refer to this intersection, which is the strongest form of collective behavior in the taxonomy presented.

Definition 4: Collaboration is cooperation, coordination, and capability complementarity between two or more robots.

Looking back at the warehouse example (see Figure 2d), collaboration, as defined in Definition 4, happens when two or more robots jointly lift and transport medium or large packages that no single robot can move alone. Here, the collective action set is expanded, which is necessary for the team of robots to move all packages and reach their shared goal of clearing the warehouse. This cannot be achieved through coordination alone; thus, achieving collaboration in multi-robot systems relies on non-adversarial intention, information exchange, and joint action.

Together, these definitions provide a structured hierarchy of collective behaviors in multi-robot systems, distinguishing non-adversarial shared intent (cooperation), information sharing (coordination), and joint action (capability complementarity). We now use this taxonomy to present a background on the cooperative (Definition 1) and coordinated (Definition 2) multi-robot literature.
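As a compact summary, the taxonomy of Definitions 1-4 can be written as a decision rule over three predicates. Treating shared intent, information exchange, and joint action as simple booleans is a deliberate simplification introduced here for illustration; how these properties would be verified on a real system is a separate, open question:

```python
# The taxonomy of Definitions 1-4 as a decision rule. The boolean
# abstraction of shared intent, information exchange, and joint
# action is an illustrative simplification, not from the survey.

def classify(shared_intent, info_exchange, joint_action):
    """Return the strongest label supported by the taxonomy."""
    if not shared_intent:
        return "outside scope (adversarial or unrelated agents)"
    if info_exchange and joint_action:
        return "collaboration"               # Definition 4
    if joint_action:
        return "capability complementarity"  # Definition 3
    if info_exchange:
        return "coordination"                # Definition 2
    return "cooperation"                     # Definition 1

# The four warehouse scenarios of Figure 2:
print(classify(True, False, False))  # cooperation (Fig. 2a)
print(classify(True, True, False))   # coordination (Fig. 2b)
print(classify(True, False, True))   # capability complementarity (Fig. 2c)
print(classify(True, True, True))    # collaboration (Fig. 2d)
```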
3 Background on Cooperation and Coordination

Multi-robot systems are commonly modeled within the broader framework of networked systems, where each robot is represented as a node in a graph and edges encode inter-agent relationships, such as communication and sensing [14]. Within this setting, a global objective, shared among the robots, is achieved through interaction rules, and a rich body of work exists focusing on goals such as consensus, formation, and coverage. Foundational works such as [34, 35] provide general graph-theoretic and systems-theoretic tools for analyzing these behaviors. Complementary surveys on consensus and distributed coordination [36-38] formalize what it means for agents to work toward a shared objective. In this section, we focus on classical coordination (Definition 2) problems involving cooperative (Definition 1) robots. Under the taxonomy introduced in the previous section, most of the multi-robot systems literature can be categorized as either coordination or collaboration; in both cases, cooperation is assumed.

In much of the literature on multi-robot systems, cooperation and coordination are often treated implicitly, being captured through modeling assumptions in problem formulations, without being defined explicitly. In consensus problems, for example, the robots iteratively update their states (such as positions or headings) based on their neighbors' information so that all agents asymptotically "agree", that is, converge, on a common value [34, 39]. Consensus has become a typical example of a model for coordinated behavior (Definition 2) between cooperative robots (Definition 1), underlying flocking, rendezvous, distributed estimation, and distributed optimization. Surveys such as [15, 37, 38] analyze variants including finite-time consensus, event-triggered consensus, and consensus under communication constraints, and clarify how local interaction rules give rise to desired group-level behavior.
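A minimal sketch of the standard consensus protocol referenced above: each robot repeatedly moves toward its neighbors' states. The path-graph topology, step size, and initial states below are arbitrary illustrative choices:

```python
# Sketch of the standard discrete-time consensus update:
#   x_i <- x_i + eps * sum_{j in N_i} (x_j - x_i).
# The path-graph topology, step size, and initial states are
# arbitrary illustrative choices.

neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # path graph, 4 robots
x = [0.0, 2.0, 4.0, 6.0]                            # initial scalar states
eps = 0.2  # sufficiently small: eps < 1/(max degree) ensures convergence

for _ in range(200):
    # synchronous update: every robot moves toward its neighbors' states
    x = [xi + eps * sum(x[j] - xi for j in neighbors[i])
         for i, xi in enumerate(x)]

# The update preserves the average, so all states converge to 3.0 here
print([round(v, 3) for v in x])  # -> [3.0, 3.0, 3.0, 3.0]
```

This local rule is the prototype of coordination in the sense of Definition 2: information exchange aligns individual actions with a shared objective, but no robot gains a capability it did not already have.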
Formation control also provides a classical instance of coordinated multi-robot behavior (Definition 2), where the goal is to maintain prescribed relative configurations or geometric patterns among robots. Graph rigidity and distributed formation stabilization show how inter-agent distance constraints and graph-theoretic properties determine whether a desired shape can be maintained [40]. Other surveys, such as [41, 42], review approaches ranging from leader-follower and virtual-structure methods to behavior-based and optimization-based schemes. In these frameworks, coordination is achieved through coupled relative-position or relative-velocity feedback laws that ensure alignment of individual actions with a shared formation objective.

As another example of multi-robot coordination (Definition 2), coverage control focuses on the spatial allocation of robots, that is, positioning a group of robots to optimally cover a domain of interest. A common formulation is Voronoi tessellation-based coverage control [18], where robots perform gradient descent with respect to a cost function to achieve a team-level configuration, namely a centroidal Voronoi tessellation. This formulation has been extended to scenarios with, for example, non-convex environments [43], heterogeneous sensing capabilities [44], and time-varying densities [45], and is closely tied to distributed optimization and geometric partitioning of the workspace [46-51].

Beyond these canonical problem classes, a number of surveys have examined coordination and cooperation in multi-robot systems. For example, [20] reviews coordinated control of multi-robot systems with an emphasis on motion coordination tasks such as rendezvous, formation, and coverage, while [12] provides a taxonomy of multi-robot coordination strategies, highlighting issues such as communication models, task structures, and scalability.
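The Voronoi-based coverage idea discussed above reduces, in one dimension with a uniform density, to a few lines of Lloyd-style iteration. The 1-D domain and uniform density are simplifying assumptions made purely for illustration:

```python
# Lloyd-style coverage iteration in one dimension on [0, 1] with a
# uniform density: each robot moves to the centroid of its Voronoi
# cell. The 1-D domain and uniform density are simplifying
# assumptions made purely for illustration.

def lloyd_step(positions):
    p = sorted(positions)
    # Voronoi cell boundaries are the midpoints between neighbors
    bounds = [0.0] + [(a + b) / 2 for a, b in zip(p, p[1:])] + [1.0]
    # under a uniform density, each cell's centroid is its midpoint
    return [(lo + hi) / 2 for lo, hi in zip(bounds, bounds[1:])]

pos = [0.05, 0.1, 0.9]  # arbitrary initial positions
for _ in range(100):
    pos = lloyd_step(pos)

# converges to the centroidal Voronoi configuration [1/6, 1/2, 5/6]
print([round(v, 3) for v in pos])  # -> [0.167, 0.5, 0.833]
```

At the fixed point, each robot sits at the centroid of its own cell, which is exactly the centroidal Voronoi tessellation condition of [18] specialized to this toy setting.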
Other surveys focus on particular coordination mechanisms, including market-based methods [52] and cooperative heterogeneous multi-robot systems [13]. Collectively, these works establish "cooperation" and "coordination" as central organizing principles in multi-robot research, but they typically do not establish clear or consistent distinctions between the two. Instead, the terms are often used interchangeably to describe collective behaviors, with cooperation typically referring to shared goals and coordination referring to related actions, but without a common, explicit taxonomy. In contrast, "collaboration" in multi-robot systems started to appear in the literature more recently, also often used interchangeably with cooperation or coordination, as in [53, 54]. Some survey papers [24, 29] provide structures and/or distinctions among the three terms. However, while these works survey multi-robot interaction, they are centered around different criteria than the distinctions and relationships emphasized in this survey.

Building on these foundations, cooperation, coordination, and collaboration are treated as related yet conceptually distinct forms of multi-robot interaction according to the taxonomy introduced in the previous section.

Figure 3 (panels: (a) Centralized, (b) Decentralized, (c) Hierarchical): Illustrative example of organizational structures for collaborative multi-robot systems. In (a), a central planner assigns tasks to each robot, and robots engage in collaborative actions only when directed by the controller. In (b), robots independently decide when to collaborate, often by communicating requests for assistance. In (c), the robots are divided into teams with one robot acting as team leader; the leader instructs the other robots on when and how to take collaborative actions.

This taxonomy
provides the basis for examining how multi-robot systems are organized and structured to enable collaboration, as per Definition 4, in both theory and practice.

4 Multi-Robot Collaboration

Having clarified the differences among cooperation, coordination, and collaboration, as well as presented a background on cooperation and coordination, we next survey the system architectures and design frameworks that support the development of collaborative capabilities satisfying Definition 4.

4.1 Organizational Structures

Collaboration (Definition 4) in multi-robot systems requires explicit organizational structures for building teams and allocating tasks. A fundamental design question emerges: who decides when robots collaborate? Three common approaches can be found in the literature, as depicted in Figure 3.

One approach is to use a centralized planner (see Figure 3a) that explicitly directs how robots should collaborate by assigning tasks or roles to each robot. Simply put, the centralized planner tells each robot when and with whom to collaborate [12]. In contrast, decentralized approaches (see Figure 3b) allow individual robots to autonomously decide when and with whom to collaborate [12]. Such approaches can improve scalability and enhance system robustness by removing the centralized controller as a single point of failure [55], though they typically rely on communication, information sharing, and/or negotiation among robots. A third organizational paradigm is hierarchical approaches (see Figure 3c), where team leaders or more important agents directly instruct agents lower in the hierarchy on when and how to engage in collaborative behavior [55].

4.1.1 Centralized

Centralized multi-robot systems are controlled by a single master controller that manages all robots in the swarm. Individual robots act as executors of centralized commands.
The primary advantage of centralized methods is that the central controller has a global view of the world and may therefore generate globally optimal solutions [12]. This is particularly valuable for collaborative systems, where robots need to carefully plan their actions considering the needs and capabilities of all other robots. However, because centralized systems rely on a single centralized planner, they suffer from a single point of failure, which limits their robustness. Additionally, centralized approaches struggle to scale to systems with a large number of robots, as the communication and computational demands placed on the centralized controller increase [12, 55].

Much of centrally managed collaboration can be formulated within the framework of multi-robot task allocation (MRTA) [25], where a planner must decide both which robots to assign to each task and whether those tasks should be completed individually or collaboratively by teams. Common approaches to solving the MRTA problem include optimization-based, auction-based, and game-theoretic methods [25, 56]. The approach used in [57] employs a centralized queue, where robots requesting collaboration are matched with available collaborators based on a user-desired selection strategy (such as first in, first out, or a matching algorithm). The authors in [58] develop a scalable method for centralized collaborative task allocation by repeatedly breaking the global problem into smaller subproblems that can be solved efficiently.

A centralized controller can be particularly valuable when individual robots lack the communication and/or sensing capabilities, as well as the computational resources, to plan their own collaborative behavior. For example, in [59], an unmanned surface vehicle (USV) hosts a centralized controller that coordinates the collaboration between an unmanned aerial vehicle (UAV) and an unmanned underwater vehicle (UUV).
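At its core, the centralized MRTA decision described above is a combinatorial assignment problem. The sketch below solves a toy instance by exhaustive search over assignments; the cost matrix is hypothetical, and practical solvers rely on the optimization-, auction-, or game-theoretic methods cited above rather than brute force:

```python
# Toy multi-robot task allocation (MRTA) solved by exhaustive search:
# assign three robots to three tasks so that total cost is minimized.
# The cost matrix is hypothetical; realistic MRTA instances are far
# larger and require scalable solvers instead of brute force.
from itertools import permutations

cost = [[4, 1, 3],   # cost[i][j]: cost of robot i executing task j
        [2, 0, 5],
        [3, 2, 2]]

best = min(permutations(range(3)),
           key=lambda perm: sum(cost[i][perm[i]] for i in range(3)))

print("robot i -> task:", best)  # robot 0 -> task 1, robot 1 -> task 0, ...
print("total cost:", sum(cost[i][best[i]] for i in range(3)))  # -> 5
```

Exhaustive search scales factorially in the number of robots, which is precisely why decompositions such as the one in [58] and auction mechanisms are used in practice.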
The USV, positioned at the water-surface interface, enables communication between the UAV and UUV, which otherwise cannot directly exchange information. Through centralized role assignment and coordination across heterogeneous platforms, the system achieves multi-domain sensing capabilities that exceed those of any single robot, satisfying Definition 4.

Large language models (LLMs) have inspired researchers to leverage their reasoning capabilities for centralized task planning. For example, in [60], an LLM acts as a centralized task planner for a heterogeneous team consisting of a robotic arm, a quadruped robot, and a quadrotor. Upon receiving a high-level instruction from a human supervisor, the LLM decomposes the mission into sub-tasks and assigns roles according to each robot's capabilities. Here, the centralized planner's (that is, the LLM's) reasoning enables the team to execute tasks that require joint manipulation, thus supporting collaborative capabilities as per Definition 4.

4.1.2 Decentralized

In decentralized systems, individual robots autonomously make their own decisions regarding when and how to collaborate, as in [61]. One of the primary advantages of decentralized systems is that they are robust to communication failures stemming from a single point of failure (that is, a central planner) [55]. Additionally, because decisions are made locally, decentralized approaches generally scale better to large numbers of agents compared to centralized approaches [12]. However, a challenge of decentralized approaches is that each robot must independently decide whether to participate in a collaborative behavior. Ensuring that multiple robots reach consistent and compatible decisions can be challenging as well, and can be addressed through communication or negotiation frameworks.

Game theory provides an efficient approach for dealing with decentralized multi-robot systems where each robot acts as an independent agent.
Tools such as transferable utility games, cooperative games, and auctions help to coordinate behavior and resolve conflicts between robots. For example, in [62], coalition formation is modeled as an exact potential game, where robots join coalitions to maximize their individual utilities while contributing to a shared goal. When tasks require joint effort from multiple robots, coalition formation results in the creation of teams whose collective capabilities exceed those of any individual agent, satisfying Definition 4. Similarly, in [63], the authors present a distributed auction-based method for time-constrained collective transport. Robots bid for participation in transport tasks that cannot be completed by a single robot due to object size or weight constraints. Through decentralized negotiation and agreement, teams are formed to execute these tasks; the resulting joint manipulation capability represents an expansion of the collective action set.

Learning-based decentralized approaches can also support collaboration when policies are learned for tasks requiring joint effort. In the centralized training with decentralized execution (CTDE) paradigm, agents are trained using shared global information but act independently at deployment [30]. While CTDE alone does not imply collaboration, it can enable agents to learn policies that produce collaborative behaviors when task success depends on capability complementarity. For example, in [30], symbiotic reinforcement learning (RL) is used to shape rewards that promote complementary behaviors between agents.

4.1.3 Hierarchical

Hierarchical approaches offer a compromise between centralized and decentralized approaches by employing a multi-level structure, where decision-making responsibilities are assigned to selected agents or leaders, while other agents execute assigned tasks.
Higher levels make strategic decisions about when and how to collaborate, while lower levels execute the resulting commands [12]. Hierarchical approaches are particularly relevant in settings where certain robots (or agents) have greater capabilities (such as task knowledge or computational resources) than others.

In [64] and [65], a leader-follower hierarchy is used to support dynamic coalition formation through RL. Robots first decide which coalitions to join in order to start completing tasks. Once formed, a designated leader determines subsequent task assignments for the coalition, rather than dissolving the team after each task. When the tasks assigned to these coalitions require the joint effort of multiple robots, the coalition exhibits capabilities that no individual robot can achieve alone. In such cases, the hierarchical structure facilitates collaboration in the sense of Definition 4.

Hierarchical planning architectures have also been applied to collaborative manipulation tasks. In [66], multiple mobile manipulators transport a shared object through a hierarchical framework in which high-level planning coordinates task allocation and motion sequencing, while lower-level controllers execute synchronized manipulation actions. Because the payload cannot be transported by any single robot under the task formulation, successful execution depends on joint physical actuation.

Hierarchical structures are also common in human-swarm interaction (HSI) frameworks, where a human or supervisory agent occupies the higher level in the hierarchy, such as in [67] and [68]. In [67], a human operator controls a "master robot" that broadcasts abstract mission parameters guiding swarm behavior. When successful task execution depends on the integration of human strategic reasoning and distributed swarm execution, the resulting behavior will be collaborative, as per Definition 4.
Here, the hierarchy provides a structured mechanism for determining when collaborative joint actions should occur.

4.2 Framework Inspirations and Tools

In this section, we discuss frameworks in the literature that support collaboration in multi-robot systems by drawing on different modeling inspirations and tools. We narrow the discussion to four main categories: ecologically-inspired frameworks, game-theoretic frameworks, HSI frameworks, and learning-based frameworks. We focus on these categories because they represent major and complementary sources of methodology in the literature. Each category originates from a different disciplinary foundation (biology; mathematics and economics; human factors; and machine learning) and captures a substantial body of work on how collaborative behavior can be modeled. Together, these perspectives span decentralized self-organization, strategic interaction, human-in-the-loop modeling, and data-driven adaptation, providing a representative cross-section of approaches to multi-robot collaboration.

4.2.1 Ecology-Inspired Frameworks

Natural systems and the environment have long served as a rich source of inspiration for modeling complex systems; understanding their structure and dynamics has influenced a substantial body of prior work [69–72]. Ecology is the study of interactions between organisms and their environment [73], and collaborations can also be conceptualized as mutualisms (see Figure 4)—jointly beneficial interactions between members of different species [74]. In the field of multi-agent robotics, ecological principles have been leveraged to inform the design of collaborative control strategies for robotic swarms, enabling them to be adaptive, resilient, and/or decentralized [75–77].
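Several of the ecology-inspired frameworks in this category rely on control barrier functions (CBFs), which enforce safety by minimally modifying a nominal controller. As background for the discussion that follows, the single-constraint case admits a closed-form solution; the sketch below is an illustrative assumption (single-integrator dynamics, a circular obstacle, and made-up parameter values), not an implementation of any cited framework:

```python
import numpy as np

def cbf_filter(x, u_nom, x_obs, radius, gamma=1.0):
    """Minimally modify u_nom so the barrier h(x) = ||x - x_obs||^2 - r^2
    satisfies dh/dt >= -gamma * h(x) for single-integrator dynamics xdot = u.

    Solves min ||u - u_nom||^2 s.t. grad_h(x) @ u >= -gamma * h(x); this
    single-constraint QP has the closed-form projection used below.
    """
    h = np.dot(x - x_obs, x - x_obs) - radius**2   # barrier value (>= 0 means safe)
    a = 2.0 * (x - x_obs)                          # gradient of h at x
    b = -gamma * h                                 # constraint right-hand side
    slack = a @ u_nom - b
    if slack >= 0.0:                               # nominal input is already safe
        return u_nom
    return u_nom + (-slack / (a @ a)) * a          # project onto the constraint boundary

# Nominal controller drives the robot straight toward an obstacle at the origin.
x = np.array([-2.0, 0.1])
u_nom = np.array([1.0, 0.0])
u_safe = cbf_filter(x, u_nom, x_obs=np.zeros(2), radius=1.0)
```

Compositions of several such constraints, as in the works discussed next, require a numerical QP solver rather than this one-constraint closed form.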
The authors in [8] introduce a mutualistic collaboration framework for heterogeneous multi-robot systems, in which robots with different mobility capabilities form pairwise collaborative arrangements based on the composition of control barrier functions (CBFs). Through this composition, each robot's safe operating region can expand, enabling robots to accomplish tasks and reach states that could not be achieved alone. Similarly, a multi-robot collaboration framework based on mutualisms was introduced in [77], which examined how the composition of the landscape can dictate when collaborative arrangements between different types of robots are preferred and when they are not. Others have proposed a more general symbiotic framework, which considers interactions such as mutualism, commensalism, and parasitism, to promote collaborative behavior between agents in a multi-agent reinforcement learning (MARL) setting [78]. In these works, collaborative arrangements arise when the joint actions of multiple agents are required to achieve an objective, satisfying Definition 4.

Figure 4: Example of a mutualistic interaction between two heterogeneous species, the Nile crocodile and the Egyptian plover, in nature [84, 85]. There is mutual benefit from this collaboration: the crocodile gets a free dental cleaning, preventing infections, and the bird gets a free meal.

Ecology has also inspired the constraint-driven control synthesis framework in [75, 76], where robots (organisms) tend to minimize their control efforts, subject to environmental constraints (such as collision avoidance and battery capacity maintenance). This ecology-inspired framework [76] is formulated as an optimization problem that utilizes control Lyapunov functions (CLFs) and CBFs [79]. Building upon [76], a heterogeneous multi-robot collaboration framework
is proposed in [61] by explicitly characterizing the environment with which robots interact by a set of partial differential equations. Collaborative behaviors are achieved through principled compositions of CBFs and CLFs, adapted from [80]. For example, in terms of collaboration in mobility, inspired by [81], some agents' safe operating regions in the state space can expand through interactions with other agents, enabling access to regions that would otherwise be unreachable. In terms of collaboration in sensing, inspired by [82, 83], heterogeneous robots equipped with complementary sensing modalities collaborate to monitor time-varying environmental phenomena that are inaccessible to some individuals. The collaboration mechanism in [61] allows heterogeneous robots with complementary capabilities to accomplish tasks that they would otherwise be unable to achieve, satisfying Definition 4.

4.2.2 Human-Swarm Interaction Frameworks

Human-swarm interaction (HSI) is a growing research area at the intersection of robotics, control theory, and human factors [86]. While HSI falls broadly under the umbrella of multi-agent systems, it is particularly relevant to this survey due to its focus on collaborative multi-robot systems involving human agents. Beyond robot-robot coordination, HSI explicitly addresses the challenges arising from interactions between robotic swarms and one or more humans, including defining the human's role, authority, and level of influence in systems characterized by scale, heterogeneity, and complexity [86]. Mission, interaction, complexity, automation, and human (MICAH), introduced in [87], provides a conceptual framework for adaptive human–swarm teaming through the five indicator categories in its title. These indicators, which account for both environmental and human-state factors, are assessed by a supervisory agent to dynamically modulate swarm autonomy and human involvement.
In this framework, the human contributes strategic reasoning, situational awareness, and oversight, while the swarm provides distributed sensing and scalable autonomy. Collaboration, as presented in Definition 4, emerges from this adaptive coupling, enabling coordinated behaviors, such as dynamic search-and-rescue operations in uncertain environments, that neither the human nor the swarm could achieve independently. Similarly, a shared-control human-swarm teaming framework is presented in [67], where a human operator and a swarm of mobile robots jointly perform exploration and coverage tasks in environments with limited and heterogeneous robot sensing. Through a bidirectional interaction loop, the human provides global reasoning and guidance, while the swarm supplies distributed environmental feedback. This coupling yields a composite capability, that is, adaptive coverage in unknown or partially observable environments, that is unattainable by either entity alone. In some HSI frameworks, collaboration is mediated through interfaces that enable teleoperation and shared decision-making, particularly when direct low-level control becomes infeasible at scale. SwarmPaint [88] introduces a gesture-based interface allowing humans to issue high-level commands that are refined and executed autonomously by the swarm, enabling safe and flexible real-time reconfiguration. Similarly, [68] proposes a user-centered interface emphasizing usability, situational awareness, and adaptive autonomy to support effective human–swarm teaming. In both cases, collaboration (Definition 4) arises through closed-loop feedback and co-regulation, where human intent and swarm autonomy are continuously integrated rather than hierarchically separated.
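At the control level, shared-control schemes of this kind are often realized as a simple arbitration between the human's command and the autonomous controller's output. The sketch below is a minimal, hypothetical blending law; the weighting scheme and values are illustrative assumptions, not taken from [67] or [88]:

```python
import numpy as np

def blend_commands(u_human, u_auto, authority):
    """Convex combination of a human teleoperation command and an autonomous
    controller output; `authority` in [0, 1] sets the human's influence."""
    authority = float(np.clip(authority, 0.0, 1.0))
    return authority * np.asarray(u_human) + (1.0 - authority) * np.asarray(u_auto)

# Human pulls the robot toward a region of interest; autonomy continues coverage.
u_human = np.array([0.0, 1.0])
u_auto = np.array([1.0, 0.0])
u_full_human = blend_commands(u_human, u_auto, 1.0)   # pure teleoperation
u_shared = blend_commands(u_human, u_auto, 0.5)       # shared control
```

In practice, the authority weight would itself be modulated online, for example by operator workload or swarm confidence, which is where the adaptive-autonomy mechanisms discussed above come in.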
Beyond conventional interfaces, immersive technologies such as virtual reality (VR) and augmented reality (AR) further facilitate collaboration in HSI by providing a shared spatial and perceptual context. VR-based coverage control [89] allows humans to reshape the optimization landscape by modifying density functions and obstacles, while the swarm autonomously adapts its configuration. AR-based systems [90, 91] enhance human perception and decision-making by embedding the swarm state and environmental cues directly into the workspace. These approaches expand the shared action set of the swarm, enabling collaborative behaviors that require human cognition and swarm adaptability across physical and virtual domains.

4.2.3 Game-Theoretic Frameworks

Game theory provides a mathematical foundation for modeling strategic interactions among autonomous decision-makers, making it a natural candidate for supporting collaboration in multi-robot systems. Early applications of game theory in robotics focused primarily on coordination (Definition 2) and conflict resolution, with robots modeled as players optimizing individual or team-based objectives, for example through classical formulations of noncooperative and cooperative games [92, 93]. However, more recent developments leverage game-theoretic constructs to explicitly capture capability complementarity [94], coalitional decision-making [95], and social preference structures [96], thereby enabling some collaborative behaviors in the sense of Definition 4. Cooperative game theory provides a basis for modeling scenarios where robots form binding agreements to jointly execute tasks, and where the resulting team can achieve outcomes unattainable by individual agents alone.
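As a concrete illustration of how the value created by such a team can be divided, the Shapley value averages each robot's marginal contribution over all orders in which robots could join the coalition. The characteristic function below is a toy assumption (a transport task requiring both a gripper robot 0 and a mover robot 1, with robot 2 redundant), not an example drawn from the cited works:

```python
from itertools import permutations

def shapley_values(players, v):
    """Brute-force Shapley value: average each player's marginal contribution
    to the characteristic function v over all orderings of the players."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)  # marginal contribution
            coalition = coalition | {p}
    return {p: total / len(orders) for p, total in phi.items()}

# Toy task: transporting the object requires robot 0 (gripper) AND robot 1 (mover).
def v(coalition):
    return 1.0 if {0, 1} <= coalition else 0.0

phi = shapley_values([0, 1, 2], v)
```

The two indispensable robots split the transport value equally while the redundant robot receives nothing, matching the intuition that payoff should track contributed capability; the brute-force enumeration is exponential in team size, which is one reason the literature also studies approximate solution concepts.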
Frameworks such as characteristic-function formulations and transferable-utility (TU) games [97, 98] quantify how groups of robots (that is, coalitions) can pool heterogeneous resources, sensing, or actuation capabilities to expand the collective action set. Solution concepts like the core, the Shapley value, or the nucleolus determine stable and equitable divisions of utility, ensuring that collaboration remains beneficial for all contributing agents. These cooperative-game constructs provide a rigorous means of representing multi-robot collaboration as the formation of groups whose joint capabilities surpass those of any individual robot, as per Definition 4. Beyond static coalitions, dynamic coalition formation has been explored for multi-robot task allocation and distributed control. Works such as [99] and [100] introduce coalitional control, in which self-organizing agents actively negotiate coalition memberships based on local information and mission objectives. More recent multi-robot applications demonstrate how reinforcement learning (RL) and game theory can be combined to learn dynamic coalitions that respond to environmental uncertainty or task coupling [64]. In these approaches, collaboration arises when robots jointly commit to tasks that require joint sensing, transport, or manipulation, while coordination corresponds to the subsequent synchronization of their individual control actions within the coalition.

4.2.4 Learning-Based Frameworks

Learning-based approaches have become central to multi-robot systems, leveraging learned policies to improve coordination and task performance in shared environments, as well as supporting capability complementarity [30, 101]. These frameworks are often framed as enabling collaboration, where robots achieve tasks by leveraging shared information, policies, or representations learned by other robots to perform joint tasks.
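One way to see why multi-agent policy learning is interdependent is a stateless toy problem in which reward arrives only when two agents act jointly, so each agent's learning target depends on the other's evolving policy. The task, reward, and hyperparameters below are hypothetical illustrations, not a reproduction of any cited method:

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_actions = 2, 2            # actions: 0 = wait, 1 = push
Q = np.zeros((n_agents, n_actions))   # one bandit-style Q-table per agent
alpha, eps = 0.1, 0.2                 # learning rate, exploration rate

for _ in range(2000):
    # Epsilon-greedy action selection, independently per agent.
    acts = [int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[i]))
            for i in range(n_agents)]
    # The shared payload moves only if BOTH agents push (capability complementarity).
    r = 1.0 if acts == [1, 1] else 0.0
    for i in range(n_agents):
        Q[i, acts[i]] += alpha * (r - Q[i, acts[i]])
```

Pushing alone is worthless, so an agent's estimate of the "push" action rises only once its partner also learns to push; with a shared reward both agents eventually prefer the joint action, which is the stateless essence of the interdependence discussed next.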
In multi-agent reinforcement learning (MARL), agents learn policies in a shared environment where actions taken by one robot affect the learning dynamics of others [102, 103]. These foundational works establish that policy learning in multi-agent settings is inherently interdependent, since each agent's experience depends on the evolving behavior of its peers. Distributed RL has been applied to decentralized collective construction, where multiple robots learn policies for assembling shared structures without centralized control [104]. In this setting, the construction task requires coordinated block placement subject to structural constraints, and successful completion depends on the collective execution of multiple robots. Since the target structure cannot be assembled by any individual robot under the task formulation, the learned policies support capability complementarity, satisfying Definition 4. RL has also been applied to object transportation, where multiple robots must jointly manipulate a shared payload. In [105], deep RL is used to learn decentralized control policies for cooperative object transport without centralized supervision. The task requires synchronized force application and motion coordination across robots, and the transported object cannot be moved by a single robot under the task constraints. Supervised learning frameworks can similarly support collaboration when shared representations enable distributed perception that exceeds the sensory capability of any single robot. For example, in [7], a graph neural network (GNN) is trained to fuse encoded observations exchanged among neighboring robots, allowing each agent to incorporate information beyond its local sensing range. The resulting fused perception and unified situational representation emerges only through the integration of complementary observations gathered by multiple robots, allowing for a larger decision-making set.
In this sense, the framework satisfies Definition 4 by allowing robots to jointly form an effective perceptual capability, thereby accomplishing objectives that no individual agent could achieve independently. Beyond reinforcement and supervised learning, foundation-model-based frameworks further expand collaborative possibilities. For example, RoCo [9] leverages large language models (LLMs) to enable dialog-based task reasoning and shared semantic reasoning across multiple robots. Robots equipped with LLMs discuss task strategies, generating sub-task plans and paths that incorporate both high-level semantics and environmental feedback. This shared language-model reasoning allows robots to acquire joint capabilities that exceed what individual agents could learn independently. For example, in a sweeping task, robots collaborate by coordinating roles such as pushing debris and collecting it, jointly manipulating the environment in a manner that neither robot could accomplish alone, satisfying Definition 4.

5 Comparisons

After reviewing several different collaborative frameworks across multiple categories in the previous section, we summarize the literature in Table 1 and highlight key observations that can be identified from this comparison. Table 1 provides a structured summary of the multi-robot collaboration frameworks (Definition 4) reviewed in this article, organized according to both their underlying system architectures and the frameworks through which collaboration is realized. In particular, the table distinguishes between centralized, decentralized, and hierarchical organizational structures, while also highlighting whether collaboration is motivated by ecological principles, HSI, game-theoretic formulations, or learning-based approaches.
This classification reveals that collaboration, as defined in Definition 4, can be supported through a wide range of paradigms, but consistently relies on mechanisms that enable joint capabilities and coordinated, hence cooperative, decision-making beyond what is achievable by individual robots acting independently. A key observation emerging from the table is that decentralized architectures dominate the collaborative literature, particularly in ecology-inspired and game-theoretic frameworks, where interactions are typically local and coordination is achieved through local rules, pairwise interactions, or coalition formation. In contrast, centralized approaches often appear in task allocation and planning settings, where a global view of the system is needed to explicitly assign different roles and/or distribute different resources. Hierarchical structures arise less frequently, but play an important role in scenarios involving explicit leadership, coalition leaders, or supervisory agents, including certain human-swarm interaction frameworks where the human acts as a high-level decision-maker.

Another notable trend highlighted by Table 1 is the prevalence of hybrid organizational and computational paradigms, particularly in learning-based collaborative systems. Several of the learning-based approaches employ centralized information during training, such as shared value functions or global state representations, while deploying policies in a decentralized manner at execution time. This centralized training with decentralized execution (CTDE) paradigm, symbolized by ×∗ in Table 1, reflects a broader design pattern in collaborative multi-robot systems, where global structure is exploited offline to improve coordination, but real-time decision-making remains distributed to ensure scalability, robustness, and adaptability. Importantly, while learning-based methods introduce new tools for realizing collaboration, the table highlights that collaboration itself is not inherently tied to learning; rather, it emerges from how information, decision-making, and capabilities are structured and shared across agents.

Table 1: Classification of the papers reviewed in the "Multi-Robot Collaboration" section based on the metrics used to identify how collaboration (Definition 4) is portrayed in those frameworks. The symbol ×∗ denotes centralized training with decentralized execution.

Papers                    | Centralized | Decentralized | Hierarchical | Ecological | HSI | Game-Theoretic | Learning-Based
[7, 9, 104, 105]          | ×           | ✓             | ×            | ×          | ×   | ×              | ✓
[8, 57, 77]               | ✓           | ×             | ×            | ✓          | ×   | ×              | ×
[30]                      | ×∗          | ✓             | ×            | ×          | ×   | ×              | ✓
[58, 59]                  | ✓           | ×             | ×            | ×          | ×   | ×              | ×
[60]                      | ✓           | ×             | ×            | ×          | ×   | ×              | ✓
[61, 76]                  | ×           | ✓             | ×            | ✓          | ×   | ×              | ×
[62, 63, 94, 95, 99, 100] | ×           | ✓             | ×            | ×          | ×   | ✓              | ×
[64, 65]                  | ×           | ×             | ✓            | ×          | ×   | ×              | ✓
[66]                      | ×           | ×             | ✓            | ×          | ×   | ×              | ×
[67, 68, 89, 90]          | ×           | ✓             | ×            | ×          | ✓   | ×              | ×
[78]                      | ×           | ✓             | ×            | ✓          | ×   | ×              | ✓
[87, 91]                  | ×           | ×             | ✓            | ×          | ✓   | ×              | ×
[88]                      | ×           | ×             | ✓            | ×          | ✓   | ×              | ✓

6 Challenges and Future Research Directions

In this section, we highlight some challenges that are relevant to achieving collaboration in multi-robot systems, as defined in Definition 4, as well as some future research directions that may advance the field of collaborative multi-robot systems. The challenges listed focus on issues that directly influence the realization of collaboration; we emphasize recurring barriers identified across the literature that limit its support. Addressing these challenges would play a critical role in determining whether collaboration can arise at all and remain robust at scale.
6.1 Conceptual and Theoretical Foundations of Collaboration

In the previous sections, we reviewed foundational works in cooperation and coordination, and surveyed several different frameworks that support collaboration in multi-robot systems through a variety of methods. Although coordination and cooperation have been studied extensively in multi-agent systems, collaboration, having emerged more prominently in recent years, remains a comparatively underdeveloped concept, both theoretically and algorithmically. One of the primary challenges lies in the lack of clear, generalizable models that capture what it means for robots to collaborate beyond acting toward the shared system goal(s). In many existing approaches, collaboration becomes possible implicitly through cooperative control or learning-based formulations, rather than being modeled as intentional and distinct collaborative decisions. This ambiguity limits interpretability and makes it difficult to clearly distinguish collaborative behaviors from cooperative or coordinated ones. The taxonomy presented in this article is intended to help clarify these distinctions and, in doing so, facilitate the formalization of multi-robot collaboration models. Therefore, developing frameworks that can support collaboration between robots remains an important direction for future research. At the same time, many approaches that are described as collaborative in the literature do not necessarily satisfy Definition 4 under the taxonomy adopted in this survey. In particular, a substantial body of work in both game theory and learning focuses on producing compatible or globally efficient behaviors among agents, but does not explicitly require capability complementarity.
These works remain highly relevant to multi-robot systems because they provide principled methods for alignment, equilibrium selection, scalable learning, and robust coordination; however, they typically fall under cooperation and coordination unless the underlying task formulation is structured so that success is unattainable by any individual robot acting alone.

6.2 Stability, Safety, and Performance Guarantees in Collaborative Systems

Guarantees such as stability, safety, and performance are well established for coordinated and cooperative behaviors [24, 40], but these notions are not as well studied in collaborative settings. Collaboration, as presented in Definition 4, often implies tighter coupling between robots through shared physical interaction, sensing dependencies, and/or decision-making, thereby increasing system complexity and potentially compromising safety. The lack of formal stability and performance guarantees can lead to cascading failures or degradation of the system's goal(s), which in practice discourages the adoption of collaborative strategies and limits their potential benefits.

6.3 Ecology-Inspired and Altruistic Frameworks

Ecology-inspired approaches offer another promising avenue, but many current formulations capture only a limited subset of behaviors observed in natural and ecological systems. For example, several works have examined altruistic or socially aware behaviors in multi-agent systems [54, 106–109]. These approaches demonstrate that robots may benefit from acting in ways that improve collective outcomes, even at an individual cost. However, such behaviors are typically studied at the level of a single agent's action and do not, by themselves, introduce new joint capabilities or expand the collective action set. As such, they do not constitute collaboration as defined in this article.
Understanding how such behaviors relate to collaboration remains an open challenge and highlights the need for clearer conceptual boundaries.

6.4 Conceptual Boundaries in Game-Theoretic Frameworks

In game-theoretic multi-robot systems, classical noncooperative formulations often model agents as optimizing individual objectives while seeking stable solutions, for example through Nash equilibria [110] or Pareto optimality [111]. Such solution concepts are essential for predicting and enforcing compatibility among agents, but they do not, by themselves, imply collaboration in the sense of Definition 4. In particular, noncooperative formulations typically assume that each agent prioritizes its own objective, which does not necessarily satisfy the shared, non-adversarial intent required by Definition 1. While these tools may produce efficient outcomes, and in most cases require information exchange, their underlying strategic structure does not inherently involve capability complementarity. As a result, noncooperative or adversarial game-theoretic frameworks generally fall outside the collaborative scope considered in this survey, unless they are explicitly extended to model cooperative intent and joint actions.

6.5 Representation of Collaboration in Learning Frameworks

Learning-based approaches introduce further challenges. While MARL has enabled robots to learn joint behaviors, collaboration is often treated as a byproduct of shared rewards or centralized training processes [30]. This raises fundamental questions about how collaboration should be represented in learning frameworks, particularly within the action set and decision space. Understanding how agents can learn when to collaborate, whom to collaborate with, and under what conditions remains an open research problem, especially in decentralized and partially observable settings. A similar distinction arises in learning-based multi-agent systems.
For example, decentralized MARL methods that maximize a global return through consensus-style updates over a network, such as [112], are often discussed in the context of "collaborative learning." Under the taxonomy in this article, such approaches primarily enable coordination in policy learning and execution unless the task itself requires joint capability complementarity. Likewise, learning formulations that share value functions, Q-values, or centralized training signals can accelerate learning and improve team-level performance, but these mechanisms do not necessarily expand the collective action set in the sense of Definition 4. In application domains such as autonomous driving, for instance, approaches like mix Q-learning for lane changing (MQLC) [113] promote compatible and efficient multi-agent decision-making, yet the resulting behavior is typically better characterized as coordination unless the task is posed so that safety or success requires joint action beyond the capability of any single agent. Supervised learning has also been used to support multi-robot coordination through shared representations. Learning-based geometric control and coordination frameworks such as [114, 115] demonstrate that policies can be learned to produce complex collective behaviors (such as formation and coverage) under communication and timing constraints. Similarly, GNN-based prediction models can learn team-level structure for assigning goals or inferring interaction intent, such as in [116]. These approaches can be viewed as enabling tools for collaboration, but under Definition 4 they constitute collaboration only when they are embedded in task formulations where distributed perception or execution is necessary to achieve outcomes unattainable by any single robot.
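Such consensus-style updates have a simple canonical form: each agent mixes its parameters with its neighbors' and then takes a gradient step on its purely local objective. The sketch below uses hypothetical local quadratic objectives f_i(θ) = (θ − c_i)², not any task from the cited works; note that with a fixed step size the agents agree only approximately, illustrating why such schemes are naturally read as coordination rather than collaboration:

```python
import numpy as np

# Three agents, each holding a private target c_i; the network objective
# sum_i (theta - c_i)^2 is minimized at the mean of the c_i.
c = np.array([0.0, 3.0, 6.0])
theta = np.zeros(3)                            # agent i's local parameter estimate
W = 0.25 * np.eye(3) + 0.25 * np.ones((3, 3))  # doubly stochastic mixing matrix
eta = 0.05                                     # fixed gradient step size

for _ in range(500):
    mixed = W @ theta                 # consensus step: average with neighbors
    grad = 2.0 * (theta - c)          # gradient of each agent's LOCAL objective
    theta = mixed - eta * grad        # decentralized gradient descent update
```

The network average converges to the global optimum (the mean of the targets), while a residual disagreement proportional to the step size persists across agents; diminishing step sizes, as in the cited consensus-optimization literature, remove this residual.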
6.6 LLMs and Human-Robot Collaboration

Recent advances in LLMs have introduced new possibilities for interaction and communication in multi-robot systems, particularly in translating high-level intent into actionable behaviors [117]. While natural language can facilitate communication and planning, mapping linguistic intent to collaborative mechanisms via LLMs requires proper integration between human reasoning and decision-making models. Addressing this gap could be beneficial for advancing both human-robot and robot-robot collaboration. Beyond language-based interaction, the growing interest in agentic AI and human-AI collaboration raises additional challenges and opportunities for collaborative multi-robot systems. As autonomous agents are increasingly expected to operate with higher levels of autonomy, collaboration must account for trade-offs between human oversight, agent autonomy, and system-level performance. Designing frameworks that enable effective collaboration between humans and autonomous robots remains an open research direction, particularly in safety-critical and long-horizon tasks.

6.7 Concluding Remarks

Overall, multi-robot collaboration remains an open research problem. Addressing the challenges and research directions outlined above will require advances in control theory, decision making, machine learning, and system design. Within the broader multi-robot behavior paradigm, collaboration represents a promising yet underexplored mechanism for achieving scalable, adaptive, and robust collective behavior in complex, dynamic, unknown, and/or heterogeneous environments.

References

[1] Robin R Murphy. Disaster Robotics. MIT Press, 2017.
[2] Matthew Dunbabin and Lino Marques. Robots for environmental monitoring: Significant advancements and applications. IEEE Robotics & Automation Magazine, 19(1):24–39, Mar. 2012.
[3] I Karabegović, E Karabegović, M Mahmić, and E Husak.
The application of service robots for logistics in manufacturing processes. Advances in Production Engineering & Management, 10(4):185–194, Dec. 2015.
[4] Yang Gao and Steve Chien. Review on space robotics: Toward top-level science through space exploration. Science Robotics, 2(7):5074, Jun. 2017.
[5] A. Stroupe, T. Huntsberger, A. Okon, H. Aghazarian, and M. Robinson. Behavior-based multi-robot collaboration for autonomous construction tasks. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 1495–1500, 2005.
[6] Ian D. Miller, Fernando Cladera, Trey Smith, Camillo Jose Taylor, and Vijay Kumar. Stronger together: Air-ground robotic collaboration using semantics. IEEE Robotics and Automation Letters (RA-L), 7(4):9643–9650, Oct. 2022.
[7] Yang Zhou, Jiuhong Xiao, Yue Zhou, and Giuseppe Loianno. Multi-robot collaborative perception with graph neural networks. IEEE Robotics and Automation Letters (RA-L), 7(2):2289–2296, 2022.
[8] Alexander A Nguyen, Faryar Jabbari, and Magnus Egerstedt. Mutualistic interactions in heterogeneous multi-agent systems. In Proceedings of the IEEE Conference on Decision and Control (CDC), pages 411–418, 2023.
[9] Zhao Mandi, Shreeya Jain, and Shuran Song. RoCo: Dialectic multi-robot collaboration with large language models. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pages 286–299. IEEE, 2024.
[10] Brooks A Butler and Philip E Paré. Collaborative safety-critical control in coupled networked systems. IEEE Open Journal of Control Systems, 4:433–446, Sept. 2025.
[11] Alessandro Farinelli, Luca Iocchi, and Daniele Nardi. Multirobot systems: A classification focused on coordination. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 34(5):2015–2028, 2004.
[12] Zhi Yan, Nicolas Jouandeau, and Arab Ali Cherif. A survey and analysis of multi-robot coordination.
International Journal of Advanced Robotic Systems, 10(12):399, 2013.
[13] Yara Rizk, Mariette Awad, and Edward W. Tunstel. Cooperative heterogeneous multi-robot systems: A survey. ACM Computing Surveys, 52(2):1–31, Apr. 2019.
[14] Mehran Mesbahi and Magnus Egerstedt. Graph Theoretic Methods in Multiagent Networks. Princeton University Press, 2010.
[15] Abdollah Amirkhani and Amir Hossein Barshooi. Consensus in multi-agent systems: A review. Artificial Intelligence Review, 55(5):3897–3935, 2022.
[16] Paulo Tabuada, George J Pappas, and Pedro Lima. Feasible formations of multi-agent systems. In Proceedings of the IEEE American Control Conference (ACC), pages 56–61, 2001.
[17] Yang Quan Chen and Zhongmin Wang. Formation control: A review and a new consideration. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 3181–3186, 2005.
[18] J. Cortes, S. Martinez, T. Karatas, and F. Bullo. Coverage control for mobile sensing networks. IEEE Transactions on Robotics and Automation, 20(2):243–255, Apr. 2004.
[19] Jorge Cortés, Sonia Martinez, Timur Karatas, and Francesco Bullo. Coverage control for mobile sensing networks: Variations on a theme. In Proceedings of the Mediterranean Conference on Control and Automation, pages 9–13, 2002.
[20] Jorge Cortés and Magnus Egerstedt. Coordinated control of multi-robot systems: A survey. SICE Journal of Control, Measurement, and System Integration, 10(6):495–503, Jan. 2017.
[21] Zool Hilmi Ismail, Nohaidda Sariff, and E Gorrostieta Hurtado. A survey and analysis of cooperative multi-agent robot systems: Challenges and directions. Applications of Mobile Robots, 5:8–14, Nov. 2018.
[22] Weinan Chen, Wenzheng Chi, Sehua Ji, Hanjing Ye, Jie Liu, Yunjie Jia, Jiajie Yu, and Jiyu Cheng. A survey of autonomous robots and multi-robot navigation: Perception, planning and collaboration.
Biomimetic Intelligence and Robotics, 5(2):100203, Jun. 2025.
[23] Tan Zhang, Wenjun Zhang, and Madan M. Gupta. Resilient robots: Concept, review, and future directions. Robotics, 6(4):22, Sept. 2017.
[24] Amanda Prorok, Matthew Malencia, Luca Carlone, Gaurav S. Sukhatme, Brian M. Sadler, and Vijay Kumar. Beyond robustness: A taxonomy of approaches towards resilient multi-robot systems, 2021.
[25] Brian P. Gerkey and Maja J. Matarić. A formal analysis and taxonomy of task allocation in multi-robot systems. The International Journal of Robotics Research, 23(9):939–954, Sept. 2004.
[26] G. Ayorkor Korsah, Anthony Stentz, and M. Bernardine Dias. A comprehensive taxonomy for multi-robot task allocation. The International Journal of Robotics Research, 32(12):1495–1512, Oct. 2013.
[27] Elio Tuci, Muhanad H. M. Alkilabi, and Otar Akanyeti. Cooperative object transport in multi-robot systems: A review of the state-of-the-art. Frontiers in Robotics and AI, 5:59, 2018.
[28] Xing An, Celimuge Wu, Yangfei Lin, Min Lin, Tsutomu Yoshinaga, and Yusheng Ji. Multi-robot systems and cooperative object transport: Communications, platforms, and challenges. IEEE Open Journal of the Computer Society, 4:23–36, Jan. 2023.
[29] Fabian Menebröker, Jannik Stadtler, Adrian Böckenkamp, Dennis Lünsch, and Sven Franke. Multi mobile robot collaboration in industrial applications: A structured survey. In Proceedings of the IEEE International Conference on Automation Science and Engineering (CASE), pages 2428–2435, 2025.
[30] James Orr and Ayan Dutta. Multi-agent deep reinforcement learning for multi-robot applications: A survey. Sensors, 23(7):3625, Mar. 2023.
[31] Peng Yi, Jinlong Lei, Xiuxian Li, Shu Liang, Min Meng, and Jie Chen. A survey on noncooperative games and distributed Nash equilibrium seeking over multi-agent networks. CAAI Artificial Intelligence Research, 1(1):8–27, 2022.
[32] Kaiqing Zhang, Zhuoran Y ang, and T amer Ba ¸ sar . Multi-agent reinforcement learning: A selecti ve ov ervie w of theories and algorithms. Handbook of Reinfor cement Learning and Contr ol , pages 321–384, 2021. [33] Ryan Lowe, Y i I W u, A viv T amar , Jean Harb, OpenAI Pieter Abbeel, and Igor Mordatch. Multi-agent actor- critic for mixed cooperati ve-competiti ve en vironments. In Proceedings of the Advances in Neur al Information Pr ocessing Systems (NeurIPS) , v olume 30. Curran Associates, Inc., 2017. [34] Reza Olfati-Saber and Richard M Murray . Consensus problems in networks of agents with switching topology and time-delays. IEEE T r ansactions on Automatic Contr ol (T AC) , 49(9):1520–1533, 2004. [35] Michael M. Zavlanos, Magnus B. Egerstedt, and Geor ge J. Pappas. Graph-theoretic connectivity control of mobile robot networks. Pr oceedings of the IEEE , 99(9):1525–1540, 2011. [36] Fei Chen and W ei Ren. On the control of multi-agent systems: A surve y . F oundations and T r ends in Systems and Contr ol , 6(4):339–499, Jul. 2019. [37] Y anjiang Li and Chong T an. A surv ey of the consensus for multi-agent systems. Systems Science & Contr ol Engineering , 7(1):468–482, 2019. [38] Jiahu Qin, Qichao Ma, Y ang Shi, and Long W ang. Recent advances in consensus of multi-agent systems: A brief surve y . IEEE T ransactions on Industrial Electr onics , 64(6):4972–4983, 2016. [39] W ei Ren, Randal W Beard, and Ella M Atkins. A survey of consensus problems in multi-agent coordination. In Pr oceedings of the IEEE American Contr ol Confer ence (A CC) , volume 3, pages 1859–1864. IEEE, 2005. [40] Reza Olfati-Saber and Richard M Murray . Graph rigidity and distributed formation stabilization of multi-vehicle systems. In Pr oceedings of the IEEE Confer ence on Decision and Contr ol (CDC) , volume 3, pages 2965–2971. IEEE, 2002. [41] Kwang-K yo Oh, Myoung-Chul Park, and Hyo-Sung Ahn. A surve y of multi-agent formation control. Automatica , 53:424–440, 2015. 
[42] Y efeng Liu, Jingjing Liu, Zengpeng He, Zhenhong Li, Qichun Zhang, and Zhengtao Ding. A survey of multi-agent systems on distributed formation control. Unmanned Systems , 12(05):913–926, 2024. [43] Andreas Breitenmoser , Mac Schwager , Jean-Claude Metzger , Roland Siegwart, and Daniela Rus. V oronoi cov erage of non-con vex en vironments with a group of networked robots. In Pr oceedings of the IEEE International Confer ence on Robotics and Automation (ICRA) , pages 4982–4989. IEEE, 2010. [44] Hamid Mahboubi and Amir G Aghdam. Distributed deployment algorithms for cov erage improv ement in a network of wireless mobile sensors: Relocation by virtual force. IEEE T ransactions on Contr ol of Network Systems (TCNS) , 4(4):736–748, 2016. 15 Collaboration in Multi-Robot Systems [45] Ruoyu Lin, Gennaro Notomista, and Magnus Egerstedt. Disentangled control of multi-agent systems, 2025. [46] Angelia Nedic and Asuman Ozdaglar . Distributed subgradient methods for multi-agent optimization. IEEE T r ansactions on Automatic Contr ol (T AC) , 54(1):48–61, 2009. [47] Mac Schwager , Brian J Julian, and Daniela Rus. Optimal cov erage for multiple hovering robots with do wnward facing cameras. In Pr oceedings of the IEEE International Confer ence on Robotics and Automation (ICRA) , pages 3515–3522. IEEE, 2009. [48] Ioannis Rekleitis, V incent Lee-Shue, Ai Peng Ne w , and Ho wie Choset. Limited communication, multi-robot team based coverage. In Pr oceedings of the IEEE International Confer ence on Robotics and Automation (ICRA) , volume 4, pages 3462–3468. IEEE, 2004. [49] Noam Hazon and Gal A. Kaminka. Redundancy , ef ficiency and robustness in multi-robot coverage. In Pr oceedings of the IEEE International Confer ence on Robotics and Automation (ICRA) , pages 735–741. IEEE, 2005. [50] Nare Karapetyan, Kelly Benson, Chris McKinney , Perouz T aslakian, and Ioannis Rekleitis. Ef ficient multi-robot cov erage of a kno wn en vironment. 
In Pr oceedings of the IEEE/RSJ International Confer ence on Intelligent Robots and Systems (IR OS) , pages 1846–1852. IEEE, 2017. [51] Athanasios Ch Kapoutsis, Sa vvas A Chatzichristofis, and Elias B K osmatopoulos. D ARP: Di vide areas algorithm for optimal multi-robot coverage path planning. Journal of Intelligent & Robotic Systems , 86(3):663–680, 2017. [52] M Bernardine Dias, Robert Zlot, Nidhi Kalra, and Anthony Stentz. Market-based multirobot coordination: A surve y and analysis. Pr oceedings of the IEEE , 94(7):1257–1270, 2006. [53] Ruoyu Lin and Magnus Egerstedt. Dynamic multi-target tracking using heterogeneous cov erage control. In 2023 IEEE/RSJ International Confer ence on Intelligent Robots and Systems (IROS) , pages 11103–11110. IEEE, 2023. [54] Riwa Karam, Ruoyu Lin, Brooks A. Butler , and Magnus Egerstedt. Resource allocation with multi-team collaboration based on hamilton’ s rule. In Pr oceedings of the IEEE Confer ence on Decision and Contr ol (CDC) , pages 6891–6898. IEEE, 2025. [55] Imad Jawhar , Nader Mohamed, Jie W u, and Jameela Al-Jaroodi. Networking of multi-robot systems: Architec- tures and requirements. Journal of Sensor and Actuator Networks , 7(4):52, 2018. [56] Mohamed Badreldin, Ahmed Hussein, and Alaa Khamis. A comparati ve study between optimization and market-based approaches to multi-robot task allocation. Advances in Artificial Intelligence , 2013(1):256524, 2013. [57] Alexander A Nguyen, Luis Guerrero-Bonilla, Faryar Jabbari, and Magnus Egerstedt. Scalable, pairwise collaborations in heterogeneous multi-robot teams. IEEE Contr ol Systems Letters (L-CSS) , 8:604–609, 2024. [58] David R. Martin, Brooks A. Butler , Scott Nivison, Magnus Egerstedt, Mohammad Abdullah Al F aruque, and Pramod Khar gonekar . Collaborative task allocation for heterogeneous multi-robot systems through iterati ve clustering. IEEE Robotics and Automation Letter s (RA-L) , 11(1):33–40, 2026. 
[59] Joel Lindsay , Jordan Ross, Mae L Seto, Edward Gregson, Alexander Moore, Jay Patel, and Robert Bauer . Collaboration of heterogeneous marine robots to ward multidomain sensing and situational a wareness on partially submerged tar gets. IEEE Journal of Oceanic Engineering , 47(4):880–894, 2022. [60] Kehui Liu, Zixin T ang, Dong W ang, Zhigang W ang, Xuelong Li, and Bin Zhao. COHERENT: Collaboration of heterogeneous multi-robot system with large language models. In Pr oceedings of the IEEE International Confer ence on Robotics and Automation (ICRA) , pages 10208–10214. IEEE, 2025. [61] Ruoyu Lin, Soobum Kim, and Magnus Egerstedt. Heterogeneous collaborati ve pursuit via cov erage control driv en by fokker -planck equations. IEEE T ransactions on Robotics (T -R O) , 41:3649–3668, 2025. [62] Liwang Zhang, Dong Liang, Minglong Li, W enjing Y ang, and Shaowu Y ang. Coalition formation g ame approach for task allocation in heterogeneous multi-robot systems under resource constraints. In Proceedings of the IEEE/RSJ International Confer ence on Intelligent Robots and Systems (IR OS) , pages 3439–3446. IEEE, 2024. [63] Xiaotao Shan, Y ichao Jin, Marius Jurt, and Peizheng Li. A distributed multi-robot task allocation method for time-constrained dynamic collectiv e transport. Robotics and Autonomous Systems , 178:104722, 2024. [64] W eiheng Dai, Aditya Bidwai, and Guillaume Sartoretti. Dynamic coalition formation and routing for multirobot task allocation via reinforcement learning. In Pr oceedings of the IEEE International Confer ence on Robotics and Automation (ICRA) , pages 16567–16573. IEEE, 2024. [65] W eiheng Dai, Utkarsh Rai, Jimmy Chiun, Cao Y uhong, and Guillaume Sartoretti. Heterogeneous multi-robot task allocation and scheduling via reinforcement learning. IEEE Robotics and Automation Letters (RA-L) , 2025. 16 Collaboration in Multi-Robot Systems [66] T aher Hekmatfar , Ellips Masehian, and Seyed Ja vad Mousa vi. 
Cooperativ e object transportation by multiple mobile manipulators through a hierarchical planning architecture. In Pr oceedings of the RSI/ISM International Confer ence on Robotics and Mechatr onics (ICRoM) , pages 503–508. IEEE, 2014. [67] W ei-T ao Li and Y en-Chen Liu. Human-swarm collaboration with coverage control under nonidentical and limited sensory ranges. Journal of the F ranklin Institute , 356(16):9122–9151, 2019. [68] Mohammad Divband Soorati, Jediah Clark, Javad Ghofrani, Danesh T arapore, and Sarv apali D Ramchurn. Designing a user-centered interaction interf ace for human-swarm teaming. Dr ones , 5(4):131, 2021. [69] Eric Bonabeau, Marco Dorigo, and Guy Theraulaz. Swarm Intelligence: F r om Natural to Artificial Systems . Oxford univ ersity press, 1999. [70] Janine M Benyus et al. Biomimicry: Innovation Inspir ed by Natur e , volume 688136915. Morrow Ne w Y ork, 1997. [71] Y oseph Bar-Cohen. Biomimetics: Biologically Inspired T echnolo gies . CRC press, 2005. [72] Julian FV V incent, Olga A Bogatyrev a, Nikolaj R Bogatyrev , Adrian Bowyer , and Anja-Karina Pahl. Biomimetics: its practice and theory . Journal of the Royal Society Interface , 3(9):471–482, 2006. [73] Robert E. Ricklefs and Gary Miller . Ecology . Macmillan, 2000. [74] Jonathan N. Pauli, Jor ge E. Mendoza, Sha wn A. Steff an, Cayelan C. Carey , Paul J. W eimer , and M. Zachariah Peery . A syndrome of mutualism reinforces the lifestyle of a sloth. Pr oceedings of the Royal Society B: Biolo gical Sciences , 281(1778):20133006, 03 2014. [75] Magnus Egerstedt, Jonathan N P auli, Gennaro Notomista, and Seth Hutchinson. Robot ecology: Constraint-based control design for long duration autonomy . Annual Reviews in Contr ol , 46:1–7, 2018. [76] Gennaro Notomista and Magnus Egerstedt. Constraint-driven coordinated control of multi-robot systems. In Pr oceedings of the IEEE American Contr ol Confer ence (A CC) , pages 1990–1996. IEEE, 2019. [77] Alexander A. 
Nguyen, Mauriel Rodriguez Curras, Magnus Egerstedt, and Jonathan N. P auli. Mutualisms as a framew ork for multi-robot collaboration. F r ontiers in Robotics and AI , 12:1566452, Mar . 2025. [78] Xuezhi Niu and Didem Gürdür Broo. Inv estigating symbiosis in robotic ecosystems: A case study for multi-robot reinforcement learning re ward shaping. In Pr oceedings of the IEEE International Confer ence on Robotics and Automation Sciences (ICRAS) , pages 112–117. IEEE, 2025. [79] Aaron D Ames, Samuel Coogan, Magnus Egerstedt, Gennaro Notomista, K oushil Sreenath, and Paulo T abuada. Control barrier functions: Theory and applications. In Pr oceedings of the IEEE Eur opean Contr ol Confer ence (ECC) , pages 3420–3431. IEEE, 2019. [80] Paul Glotfelter , Jorge Cortés, and Magnus Egerstedt. Nonsmooth barrier functions with applications to multi- robot systems. IEEE Contr ol Systems Letters (L-CSS) , 1(2):310–315, 2017. [81] Soobum Kim and Magnus Egerstedt. Heterogeneous cov erage control with mobility-based operating regions. In 2022 American Contr ol Confer ence (A CC) , pages 2148–2153. IEEE, 2022. [82] María Santos, Y ancy Diaz-Mercado, and Magnus Egerstedt. Cov erage control for multirobot teams with heterogeneous sensing capabilities. IEEE Robotics and Automation Letter s , 3(2):919–925, 2018. [83] María Santos and Magnus Egerstedt. Coverage control for multi-robot teams with heterogeneous sensing capabilities using limited communications. In 2018 IEEE/RSJ international confer ence on intelligent r obots and Systems (IR OS) , pages 5313–5319. IEEE, 2018. [84] Hugh B. Cott. Scientific results of an inquiry into the ecology and economic status of the nile crocodile (crocodilus niloticus) in uganda and northern rhodesia. The T ransactions of the Zoolo gical Society of London , 29(4):211–356, 1961. [85] Manuel C. Molles and Barry W . Barker . Ecology: Concepts and Applications . McGaw-Hill, 1999. 
[86] Andreas K olling, Phillip W alker , Nilanjan Chakraborty , Katia Sycara, and Michael Le wis. Human interaction with robot swarms: A survey . IEEE T ransactions on Human-Machine Systems , 46(1):9–26, 2016. [87] A ya Hussein, Leo Ghignone, T ung Nguyen, Nima Salimi, Hung Nguyen, Min W ang, and Hussein A Abbass. Characterization of indicators for adaptiv e human-swarm teaming. F r ontiers in Robotics and AI , 9:745958, 2022. [88] V alerii Serpi va, Ekaterina Karmano va, Alekse y Fedoseev , Stepan Perminov , and Dzmitry Tsetserukou. Swarm- Paint: Human-swarm interaction for trajectory generation and formation control by DNN-based gesture interf ace. In Pr oceedings of the IEEE International Confer ence on Unmanned Air craft Systems (ICU AS) , pages 1055–1062. IEEE, 2021. 17 Collaboration in Multi-Robot Systems [89] Lucas Coelho Figueiredo, Ítalo Lelis de Carvalho, and Luciano Cunha De Araújo Pimenta. V oronoi multi-robot cov erage control in non-con ve x en vironments with human interaction in virtual reality . In Pr oceedings of the Congr esso Brasileir o de Automatica (CBA) , 2019. [90] Sarjana Oradiambalam Sachidanandam, Sara Honarvar , and Y ancy Diaz-Mercado. Effecti veness of augmented reality for human swarm interactions. In Pr oceedings of the IEEE International Confer ence on Robotics and Automation (ICRA) , pages 11258–11264. IEEE, 2022. [91] Chengxi Li, Pai Zheng, Shufei Li, Y atming Pang, and Carman KM Lee. AR-assisted digital twin-enabled robot collaborativ e manufacturing system with human-in-the-loop. Robotics and Computer-Inte gr ated Manufacturing , 76:102321, 2022. [92] Jason R Marden and Jeff S Shamma. Game theory and control. Annual Review of Contr ol, Robotics, and Autonomous Systems , 1(1):105–134, 2018. [93] Rosemary Emery-Montemerlo, Geof f Gordon, Jeff Schneider , and Sebastian Thrun. Game theoretic control for robot teams. In Pr oceedings of the IEEE International Confer ence on Robotics and Automation (ICRA) , pages 1163–1169. 
IEEE, 2005. [94] Hong Qiu, W entao Y u, Gan Zhang, Xuan Xia, and Kun Y ao. Multi-robot collaborativ e 3D path planning based on game theory and particle swarm optimization hybrid method. The J ournal of Supercomputing , 81(3):487, 2025. [95] Lixiang Liu and Peng Li. Game-theoretic cooperati ve task allocation for multiple-mobile-robot systems. V ehicles , 7(2):35, 2025. [96] V iet-Anh Le, V aishnav T adiparthi, Behdad Chalaki, Hossein Nourkhiz Mahjoub, Jovin D’ sa, Ehsan Moradi-Pari, and Andreas A Malikopoulos. Multi-robot cooperati ve na vigation in crowds: A game-theoretic learning-based model predicti ve control approach. In Pr oceedings of the IEEE International Confer ence on Robotics and Automation (ICRA) , pages 4834–4840. IEEE, 2024. [97] Rodica Branzei, Dinko Dimitrov , and Stef Tijs. Models in cooperative game theory . Springer , 2008. [98] Edith Elkind and Jörg Rothe. Cooperative game theory , pages 135–193. Springer , 2016. [99] Filiberto Fele, José M Maestre, and Eduardo F Camacho. Coalitional control: Cooperativ e game theory and control. IEEE Contr ol Systems Magazine (CSM) , 37(1):53–69, 2017. [100] Filiberto Fele, Ezequiel Debada, José María Maestre, and Eduardo F Camacho. Coalitional control for self- organizing agents. IEEE T ransactions on A utomatic Contr ol (T A C) , 63(9):2883–2897, 2018. [101] Bin W u and C Ste ve Suh. State-of-the-art in robot learning for multi-robot collaboration: A comprehensiv e surve y , 2024. [102] Ming T an. Multi-agent reinforcement learning: Independent vs. cooperative agents. In Pr oceedings of the International Confer ence on Machine Learning (ICML) , pages 330–337, 1993. [103] Stev en D. Whitehead. A complexity analysis of cooperati ve mechanisms in reinforcement learning. In Pr oceed- ings of the National Confer ence on Artificial Intelligence , pages 607–613. AAAI Press, 1991. [104] Guillaume Sartoretti, Y ue W u, W illiam Pai vine, TK Satish Kumar , Sven K oenig, and Ho wie Choset. 
Distributed reinforcement learning for multi-robot decentralized collectiv e construction. In Distributed Autonomous Robotic Systems , pages 35–49. Springer , 2019. [105] Lin Zhang, Y ufeng Sun, Andrew Barth, and Ou Ma. Decentralized control of multi-robot system in cooperative object transportation using deep reinforcement learning. IEEE Access , 8:184109–184119, 2020. [106] Ryan D. Morton, George A. Bekey , and Christopher M. Clark. Altruistic task allocation despite unbalanced relationships within multi-robot communities. In Pr oceedings of the IEEE/RSJ International Confer ence on Intelligent Robots and Systems (IR OS) , pages 5849–5854. IEEE, 2009. [107] Behrad T oghi, Rodolfo V aliente, Dorsa Sadigh, Ramtin Pedarsani, and Y aser P . Fallah. Social coordination and altruism in autonomous driving. IEEE T ransactions on Intelligent T ransportation Systems , 23(12):24791–24804, Dec. 2022. [108] Rodolfo V aliente, Behrad T oghi, Ramtin Pedarsani, and Y aser P . Fallah. Robustness and adaptability of reinforcement learning-based cooperativ e autonomous driving in mixed-autonomy traffic. IEEE Open Journal of Intelligent T ransportation Systems , 3:397–410, May 2022. [109] Brooks A Butler and Magnus Egerstedt. Hamilton’ s rule for enabling altruism in multi-agent systems. In Pr oceedings of the IEEE Confer ence on Decision and Contr ol (CDC) , pages 6776–6783. IEEE, 2025. 18 Collaboration in Multi-Robot Systems [110] J. B. Rosen. Existence and uniqueness of equilibrium points for concave N-person games. Econometrica , 33(3):520–534, 1965. [111] Y air Censor . Pareto optimality in multiobjecti ve problems. Applied Mathematics and Optimization , 4(1):41–59, 1977. [112] Kaiqing Zhang, Zhuoran Y ang, Han Liu, T ong Zhang, and T amer Basar . Fully decentralized multi-agent reinforcement learning with network ed agents. In Pr oceedings of the International Confer ence on Mac hine Learning (ICML) , volume 80. Proceedings of Machine Learning Research (PMLR), Jul. 2018. 
[113] Xiaojun Bi, Mingjie He, and Y iwen Sun. Mix Q-learning for lane changing: A collaborativ e decision-making method in multi-agent deep reinforcement learning. IEEE T ransactions on V ehicular T echnology , 74(6):8664– 8677, 2025. [114] Ekaterina T olstaya, James Paulos, V ijay Kumar , and Alejandro Ribeiro. Multi-robot coverage and e xploration using spatial graph neural netw orks. In Pr oceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IR OS) , pages 8944–8950. IEEE, 2021. [115] Chengchao Bai, Peng Y an, W ei Pan, and Jifeng Guo. Learning-based multi-robot formation control with obstacle av oidance. IEEE T r ansactions on Intelligent T ransportation Systems , 23(8):11811–11822, 2022. [116] Manohari Goarin and Giuseppe Loianno. Graph neural network for decentralized multi-robot goal assignment. IEEE Robotics and Automation Letter s (RA-L) , 9(5):4051–4058, 2024. [117] Xinyi Li, Sai W ang, Siqi Zeng, Y u W u, and Y i Y ang. A surve y on LLM-based multi-agent systems: workflow , infrastructure, and challenges. V icinagearth , 1(1):9, 2024. 19
