Distributed control for geometric pattern formation of large-scale multirobot systems

Introduction: Geometric pattern formation is crucial in many tasks involving large-scale multi-agent systems. Examples include mobile agents performing surveillance, swarms of drones or robots, and smart transportation systems. Currently, most control strategies proposed to achieve pattern formation in network systems either perform well but require expensive sensors and communication devices, or have lower sensor requirements but perform more poorly. Methods: In this paper, we provide a distributed displacement-based control law that allows large groups of agents to achieve triangular and square lattices, with low sensor requirements and without needing communication between the agents. In addition, a simple yet powerful adaptation law is proposed to automatically tune the control gains, reducing the design effort while improving robustness and flexibility. Results: We show the validity and robustness of our approach via numerical simulations and experiments, comparing it, where possible, with other approaches from the existing literature.


Introduction

Problem description and motivation
Many robotic applications require, or may benefit from, one or more groups of multiple agents performing a joint task [1]; this is, for example, the case of surveillance, exploration, herding [2] or transportation [3]. When the number of agents becomes extremely large, the task becomes a swarm robotics problem [4]. Typically, in these problems, it is assumed that the agents are relatively simple, and thus have limited communication and sensing capabilities, and limited computational resources; see for example the robotic swarms described in [5,6,7]. In swarm robotics, typical tasks of interest include aggregation, flocking, foraging, object clustering, navigation, spatial organisation, collaborative manipulation, and task allocation [4,3]. Among these, an important subclass of spatial organisation problems is geometric pattern formation, where the goal is for the agents to self-organise their relative positions into some desired structure or pattern, e.g., multiple adjacent triangles. Pattern formation is crucial in many applications [8], including sensor network deployment [9,10], collective search and rescue [11,12], collective transportation and construction [13,14], and 2D and 3D exploration and mapping [15]. There are two main difficulties associated with achieving pattern formation. Firstly, as there are no leader agents, the pattern must emerge by exploiting a control strategy that is the same for all agents, distributed, and local (i.e., each agent can only use information about "nearby" agents). Secondly, the number of agents is large and may change over time; therefore, the control strategy must also be robust to uncertainties in the size of the swarm and to its possible variations.
This sets the problem of achieving pattern formation apart from the more classical formation control problems [16] where agents are typically fewer and have pre-assigned roles within the formation.
Nevertheless, some of the theory and solutions developed for formation control may be exploited to describe pattern formation. For this reason, to classify existing solutions to pattern formation, we employ the same taxonomy proposed in [16] for formation control, which is based on the type of information available to the agents. Namely, existing strategies can be classified as being (i) position-based, when it is assumed that agents know their position and orientation and those of their neighbours, in a global reference frame; (ii) displacement-based, when agents can only sense their own orientation with respect to a global reference direction (e.g., North) and the relative positions of their neighbours; (iii) distance-based, when agents can measure the relative positions of their neighbours with respect to their local reference frame. In terms of sensor requirements, position-based solutions are the most demanding, requiring global positioning sensors, typically GPS, and communication devices, such as WiFi or LoRa. Differently, displacement-based methods require only a distance sensor (e.g., LiDAR) and a compass, although the latter can be replaced by a coordinated initialisation procedure of all local reference frames [17]. Finally, distance-based algorithms are the least demanding, needing only some distance sensors.
A pressing open challenge in pattern formation problems is devising new control strategies that can combine low sensor requirements with high and consistent performance. This is crucial in swarm robotics, where it would be cumbersome or prohibitively expensive to equip all agents with GPS sensors and communication capabilities.

Position-based approaches
In [18], a position-based algorithm was proposed to achieve 2D triangular lattices in a constellation of satellites in a 3D space. This strategy combines global attraction towards a reference point with local interaction among the agents to control both the global shape and the internal lattice structure of the swarm. In [19], a position-based approach was presented that combines the common radial virtual force (also used in [20,21,22,23]) with a normal force. In this way, a network of connections is built such that each agent has at least two neighbours; then, a set of geometric rules is used to decide whether either or both of these forces are applied between any pair of agents. Importantly, this approach requires the acquisition of positions from two-hop neighbours. In [9], a position-based strategy is presented to achieve triangular and square patterns, as well as lines and circles, both in 2D and 3D; the control strategy features global attraction towards a reference point and re-scaling of distances between neighbours, with the virtual forces changing according to the goal pattern. A qualitative comparison was also provided with the distance-based strategy from [21], showing more precise configurations and a shorter convergence time, due to the position-based nature of the solution.

Displacement-based approaches
In [24], a displacement-based approach is presented based on the use of a geometric control law similar to the one proposed in [25]. The aim is to obtain triangular lattices, but small persisting oscillations of the agents are present at steady state, as the robots are assumed to have a constant non-zero speed. In [26,27], an approach is discussed inspired by covalent bonds in crystals, where each agent has multiple attachment points for its neighbours. Only starting conditions close to the desired pattern are tested, as the focus is on navigation in environments with obstacles. Finally, in [28] the desired lattice is encoded by a graph, where the vertices denote possible roles the agents may play in the lattice and the edges denote rigid body transformations between the local frames of reference of pairs of neighbours. All agents communicate with each other and are assigned a label (or identification number) through which they are organised hierarchically to form triangular, square, hexagonal or octagon-square patterns.

Distance-based approaches
A popular distance-based approach for the formation of triangular and square lattices, named physicomimetics, was proposed in [20] and later also studied in [21,22]. The control strategy is based on the use of virtual forces [29], an approach inspired by Physics, where each agent is subject to virtual forces (e.g., Lennard-Jones and Morse functions [4,30]) from neighbouring agents, obstacles, and the environment. In these studies ([20,21,22]), triangular lattices are achieved with long-range attraction and short-range repulsion forces only, while square lattices are obtained through a selective rescaling of the distances between some of the agents. An extension for the formation of hexagonal lattices was proposed in [31], but with the requirement of an ad hoc correction procedure to prevent agents from remaining stuck in the centre of a hexagon. The main drawback of the physicomimetics strategy ([20,21,22,31]) is that it can produce the formation of multiple aggregations of agents, each respecting the desired pattern, but with different orientations. Another problem, described in [21], is that, for some values of the parameters, multiple agents can converge towards the same position and collide. In [23], an approach exploiting Lennard-Jones-like virtual forces is numerically optimised to locally stabilise a hexagonal lattice. When applied to mobile agents, the interaction law is time-varying and requires synchronous clocks among the agents. In [25], a different distance-based control strategy, derived from geometric arguments, was proposed to achieve the formation of triangular lattices. In this study, an analytical proof of convergence to the desired lattice is given exploiting Lyapunov methods. Robustness to agents' failure and the capability of detecting and repairing holes and gaps in the lattice are obtained via an ad hoc procedure and verified numerically. A 3D extension was later presented in [32].

Contribution
In this paper, we introduce a distributed displacement-based control strategy to solve pattern formation problems in swarm robotics that requires neither communication among the agents nor labelling of the agents. In particular, to achieve triangular and square lattices, we employ two virtual forces controlling, respectively, the norm and the angle of the agents' relative positions. The main contributions can be listed as follows. 1. Our strategy performs significantly better than other distance-based algorithms ([20,31]) when achieving square lattices, in terms of precision and robustness, with only a minimal increase in sensor requirements (a compass) and without needing the more costly sensors and communication devices used for position-based strategies.
2. The control gains can be set automatically, according to a simple adaptive law, in order for the agents to organize themselves and switch from one pattern to the other.
3. Numerical simulations and experiments show its effectiveness even in the presence of actuator constraints and other more realistic effects.

Notation
We denote by ‖·‖ the Euclidean norm. Given a set B, its cardinality is denoted by |B|. We refer to R^2 as the plane.

Planar swarms
Definition 1 (Swarm) A (planar) swarm S := {1, 2, . . ., N} is a set of N ∈ N_{>0} identical agents that can move on the plane. For each agent i ∈ S, x_i(t) ∈ R^2 denotes its position in the plane at time t ∈ R. Moreover, r_ij(t) := x_i(t) - x_j(t) is the relative position of agent i with respect to agent j, and θ_ij(t) ∈ [0, 2π] is the angle between r_ij and the horizontal axis (see Fig. 1).
Definition 2 (Neighbourhood) Given a swarm and a sensing radius R_s ∈ R_{>0}, the neighbourhood of agent i at time t is

N_i(t) := { j ∈ S \ {i} : ‖r_ij(t)‖ < R_s }. (1)

Definition 3 (Adjacency set) Given a swarm and some finite R_min, R_max ∈ R_{>0}, with R_min ≤ R_max, the adjacency set of agent i at time t is (see Fig. 2)

A_i(t) := { j ∈ S \ {i} : R_min ≤ ‖r_ij(t)‖ ≤ R_max }.

If j ∈ A_i(t), we say that the link (i, j) exists at time t. Moreover, E(t) is the set of all links existing at time t.
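In a simulation, these sets can be computed directly from the agents' positions. A minimal sketch in Python (the paper's own simulations are in Matlab; the function names and the array-of-positions representation here are illustrative):

```python
import numpy as np

def neighbourhood(i, X, R_s):
    # Definition 2: indices j != i with ||x_j - x_i|| < R_s.
    d = np.linalg.norm(X - X[i], axis=1)
    return [j for j in range(len(X)) if j != i and d[j] < R_s]

def adjacency_set(i, X, R_min, R_max):
    # Definition 3: indices j != i with R_min <= ||x_j - x_i|| <= R_max.
    d = np.linalg.norm(X - X[i], axis=1)
    return [j for j in range(len(X)) if j != i and R_min <= d[j] <= R_max]
```

With `X = np.array([[0., 0.], [1., 0.], [3., 0.]])`, agent 0 has neighbourhood `[1]` for R_s = 2 and adjacency set `[1, 2]` for R_min = 0.5, R_max = 3.5.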
Clearly, it is possible to associate to the swarm a time-varying graph G(t) = (S, E(t)) [33], S and E(t) being the sets of vertices and edges, respectively.¹ Finally, given any two links (i, j) and (h, k), we denote by θ^hk_ij(t) ∈ [0, 2π] the absolute value of the angle between the vectors r_ij and r_hk.

¹ Formally, G(t) is a directed graph, even though E(t) is such that the existence of (i, j) implies the existence of (j, i).

Lattice and performance metrics
Definition 5 (Lattice) Given some L ∈ {4, 6} and R ∈ R_{>0}, a (L, R)-lattice is a set of points in the plane that coincide with the vertices of an associated regular tiling [34,35]; R is the distance between adjacent vertices and L is the number of adjacent vertices each point has.
In Definition 5, L = 4 and L = 6 correspond to square and triangular lattices, respectively, as portrayed in Fig. 2. We say that a swarm self-organises into a (L, R)-lattice if (i) each agent has at most L links, and (ii) for all (i, j) ∈ E and (h, k) ∈ E it holds that θ^hk_ij is some multiple of 2π/L. To assess whether a swarm self-organises into some desired (L, R)-lattice, we introduce two metrics.
Definition 6 (Regularity metric) Given a swarm and a desired (L, R)-lattice, the regularity metric e_θ(t) is defined as, omitting the dependence on time,

e_θ := (L/π) · (1/|P|) Σ_{((i,j),(h,k)) ∈ P} |θ^hk_ij - m(θ^hk_ij)|,

where P is the set of all pairs of distinct links and m(θ^hk_ij) is the multiple of 2π/L closest to θ^hk_ij. The regularity metric e_θ, derived from [20], quantifies the incoherence in the orientation of the links in the swarm. In particular, e_θ = 0 when all pairs of links form angles that are multiples of 2π/L (as required to achieve the (L, R)-lattice), while e_θ = 1 when all pairs of links have the maximum possible orientation error, equal to π/L. Finally, e_θ ≈ 0.5 generally corresponds to the agents being arranged randomly.
Definition 7 (Compactness metric) Given a swarm and a desired (L, R)-lattice, the compactness metric e_L(t) is defined as, omitting the dependence on time,

e_L := (1/(N L)) Σ_{i∈S} | |A_i| - L |.

The compactness metric e_L measures the average difference between the number of links each agent has and the number it ought to have in a (L, R)-lattice. e_L is maximum (e_L = (N-1-L)/L) when all agents are concentrated in a small region and links exist between all pairs of agents; e_L = 1 when all the agents are scattered loosely in the plane and no links exist between them; finally, e_L = 0 when all the agents have L links (as required to achieve the (L, R)-lattice). It is important to remark that, if the number N of agents is finite, e_L can never be equal to zero, because the agents on the boundary of the group will always have fewer than L links (Fig. 2). This effect becomes less relevant as N increases. Note that a similar metric is also independently defined in [28].
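The compactness metric lends itself to a direct implementation. A minimal Python sketch (the function name and the list-of-adjacency-sets representation are illustrative; the normalisation by N·L is the one implied by the stated extreme values):

```python
def compactness_metric(adjacency_sets, L):
    # e_L: average normalised deviation of each agent's link count from L.
    # adjacency_sets[i] lists the agents linked to agent i (the set A_i).
    N = len(adjacency_sets)
    return sum(abs(len(A) - L) for A in adjacency_sets) / (N * L)
```

For instance, with no links at all the metric evaluates to 1, and with a fully connected group of N agents it evaluates to (N-1-L)/L, matching the extremes discussed above.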
For the sake of brevity, in what follows we will omit dependence on time when that is clear from the context.

Control design 3.1 Problem formulation
Consider a planar swarm S whose agents' dynamics is described by the first order model

ẋ_i(t) = u_i(t), (6)

where x_i(t) was given in Definition 1 and u_i(t) ∈ R^2 is some input signal determining the velocity of agent i.² We aim to solve the following control problem.
Problem statement Design some distributed feedback control law u_i = g({r_ij}_{j∈N_i}, L, R) to let the swarm self-organise into a desired triangular or square lattice, starting from any set of initial positions in some disk of radius r. Moreover, we require the law to be: 1. robust to failures of agents and to noise; 2. flexible, allowing dynamic reorganisation into different patterns; 3. scalable, allowing the number of agents N to change dynamically.
To assess the self-organising capability of the swarm, we seek to minimise the performance metrics e θ and e L (see Definitions 6 and 7).

Distributed control law
Next, we present a distributed displacement-based control law that solves the problem described in Sec. 3.1. Namely, we set

u_i = u_r,i + u_n,i, (7)

where u_r,i and u_n,i are the radial and normal control inputs, respectively. The two inputs have different purposes and each comprises several virtual forces. The radial input u_r,i is the sum of attracting/repelling actions between the agents, with the purpose of aggregating them into a compact swarm while avoiding collisions. The normal input u_n,i is also the sum of multiple actions, used to adjust the angles of the relative positions of the agents. Law (7) is displacement-based because it only requires that each agent i (i) can measure the relative positions of the agents close to it (in the sets N_i and A_i), and (ii) has knowledge of a common reference direction. Next, we describe in detail the two control actions in (7).

² First order models like (6) are often used in the literature [25,32,19,9]. In some other works [20,21,31] a second order model is used, given by m ẍ_i + µ ẋ_i = u_i, where u_i is a force, m is a mass and µ is a viscous friction coefficient. Under the simplifying assumptions of small inertia (m‖ẍ_i‖ ≪ µ‖ẋ_i‖) and µ = 1, the two models coincide.

Radial Interaction
The radial control input u_r,i in (7) is defined as the sum of several virtual forces, one for each agent in N_i (the neighbours of i), each force being attractive (if the neighbour is far) or repulsive (if the neighbour is close). Specifically,

u_r,i = G_r,i Σ_{j∈N_i} f_r(‖r_ij‖) r_ij/‖r_ij‖, (8)

where G_r,i ∈ R_{≥0} is the radial control gain. Note that u_r,i is termed the radial input because in (8) the attraction/repulsion forces are parallel to the vectors r_ij (see Fig. 1). The magnitude and sign of each of these actions depend on the distance ‖r_ij‖ between the corresponding agents, according to the radial interaction function f_r : R_{≥0} → R. Here, we select f_r as the Physics-inspired Lennard-Jones function [4,22], given by

f_r(z) = min{ 1, a/z^{2c} - b/z^{c} }, (9)

where a, b ∈ R_{>0} and c ∈ N are design parameters. In (9), f_r is saturated to 1 to avoid divergence for ‖r_ij‖ → 0. f_r is portrayed in Fig. 3a.
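A Lennard-Jones-type interaction with the saturation described above can be sketched as follows (the exact parameterisation of (9) is not reproduced from the paper; the default values of a, b, c and the convention that positive values denote repulsion along r_ij are assumptions):

```python
def f_r(z, a=1.0, b=1.0, c=2):
    # Lennard-Jones-type radial interaction: repulsive (positive) at short
    # range, attractive (negative) at long range, saturated at 1 to avoid
    # divergence as the inter-agent distance z approaches 0.
    return min(1.0, a / z ** (2 * c) - b / z ** c)
```

With a = b, the force vanishes at z = (a/b)^(1/c) = 1, which plays the role of the rest distance between linked agents.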

Normal Interaction
For any link (i, j), we define the angular error θ^err_ij ∈ [-π/L, π/L] as the difference between θ_ij and the closest multiple of 2π/L (see Fig. 1), that is,

θ^err_ij := θ_ij - (2π/L) round(θ_ij L/(2π)). (10)

Then, the normal control input u_n,i in (7) is defined as

u_n,i = G_n,i Σ_{j∈A_i} f_n(θ^err_ij) r⊥_ij/‖r_ij‖, (11)

where G_n,i ∈ R_{≥0} is the normal control gain. Each of these actions is applied in the direction of r⊥_ij, that is, the vector normal to r_ij obtained by applying a π/2 counterclockwise rotation (see Fig. 1). The magnitude and sign of these forces are determined by the normal interaction function f_n, which is portrayed in Fig. 3b. We remark that, by rotating the axis with respect to which the angles θ_ij are measured, our algorithm can achieve triangular or square lattices with different orientations.
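The angular error reduces to rounding θ_ij to the nearest lattice direction; the perpendicular direction is a fixed rotation. A sketch (function names are illustrative):

```python
import math

def angular_error(theta_ij, L):
    # Difference between theta_ij and the closest multiple of 2*pi/L,
    # so the result lies in [-pi/L, pi/L].
    step = 2 * math.pi / L
    return theta_ij - step * round(theta_ij / step)

def perp(r):
    # r rotated by pi/2 counterclockwise: the direction of the normal force.
    return (-r[1], r[0])
```

For a square lattice (L = 4), a link at angle π/2 + 0.1 has error 0.1; for a triangular lattice (L = 6), a link at angle π/3 has zero error, as π/3 is itself a multiple of 2π/6.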

Numerical validation
In this section, we assess the performance and the robustness of our proposed control algorithm (7) through an extensive simulation campaign. The experimental validation of the strategy is reported later, in Sec. 6. First, in Sec. 4.2, using a numerical optimisation procedure, we tune the control gains G_r,i and G_n,i in (8) and (11), as the performance of the controlled swarm strongly depends on these values. Then, in Sec. 4.3, we assess (i) the robustness of the control law to agents' failure and (ii) to noise, (iii) its flexibility to pattern changes, and (iv) its scalability with respect to the number of agents.

Simulation setup
We consider a swarm consisting of N = 100 agents (unless specified differently). To represent the fact that the agents are deployed from a unique source (as typically done in the literature [20,21]), their initial positions are drawn randomly with uniform distribution from a disk of radius r = 2 centred at the origin.³ Initially, for the sake of simplicity and to avoid the possibility of some agents becoming disconnected from the group, we assume that R_s in (1) is large, i.e., any agent can sense the relative position of all others. Later, in Sec. 4.3, we will drop this assumption and show the validity of our control strategy also for smaller values of R_s. All simulation trials are conducted in Matlab, integrating the agents' dynamics using the forward Euler method with a fixed time step ∆t > 0. Moreover, the speed of the agents is limited to V_max > 0. The values of the parameters used in the simulations are reported in Tab. 2.
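As an illustration of the overall loop, here is a minimal Python sketch of one forward-Euler step (the paper's simulations are in Matlab; the specific interaction functions, the convention r_ij = x_i - x_j, and the linear normal interaction f_n(θ) = -θ are assumptions for illustration, not the paper's exact choices):

```python
import numpy as np

def lattice_step(X, L, G_r, G_n, R_s, R_min, R_max, a, b, c, dt, v_max):
    # One forward-Euler step of x_i' = u_r,i + u_n,i (Eq. (7)).
    U = np.zeros_like(X)
    for i in range(len(X)):
        for j in range(len(X)):
            if i == j:
                continue
            r_ij = X[i] - X[j]             # relative position of i w.r.t. j
            d = np.linalg.norm(r_ij)
            if d < R_s:                    # radial force over the neighbourhood
                fr = min(1.0, a / d ** (2 * c) - b / d ** c)
                U[i] += G_r * fr * r_ij / d
            if R_min <= d <= R_max:        # normal force over the adjacency set
                theta = np.arctan2(r_ij[1], r_ij[0])
                step = 2 * np.pi / L
                err = theta - step * np.round(theta / step)
                U[i] += G_n * (-err) * np.array([-r_ij[1], r_ij[0]]) / d
        speed = np.linalg.norm(U[i])
        if speed > v_max:                  # speed limit V_max
            U[i] *= v_max / speed
    return X + dt * U
```

Iterating `lattice_step` realises the Euler scheme described above; with these sign choices, two agents closer than the rest distance drift apart, and two distant agents within sensing range drift together.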

Performance evaluation
To assess the performance of the controlled swarm we exploit the metrics e_θ and e_L given in Definitions 6 and 7. Namely, we select empirically the thresholds e*_θ = 0.2 and e*_L = 0.3, which are associated to satisfactory compactness and regularity of the swarm. Then, letting T_w > 0 be the length of a time window, we say that e_θ is at steady state from time T_θ = k∆t if e_θ(t) < e*_θ for all t ∈ [k∆t, k∆t + T_w]. We give an analogous definition for the steady state of e_L (using e*_L rather than e*_θ), denoting the corresponding time by T_L. Then, we say that in a trial the swarm achieved steady state at time t_ss if there exists a time instant such that both e_θ and e_L are at steady state, and t_ss is the smallest of such time instants. Moreover, we deem a trial successful if e_θ(t_ss) < e*_θ and e_L(t_ss) < e*_L. If in a trial steady state is not reached in the time interval [0, t_max], the trial is stopped (and deemed unsuccessful).
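The steady-state criterion can be implemented as a sliding-window check; a sketch, assuming the metric is sampled every ∆t and must stay below the threshold for the whole window:

```python
def steady_state_time(e, e_star, T_w, dt):
    # Smallest t = k*dt such that the metric samples e stay below e_star
    # over the whole window [k*dt, k*dt + T_w]; None if no such time exists.
    w = int(round(T_w / dt))
    for k in range(len(e) - w):
        if all(v < e_star for v in e[k:k + w + 1]):
            return k * dt
    return None
```

Applying the same function to the recorded traces of e_θ and e_L, with the respective thresholds, yields T_θ and T_L; t_ss is then the larger of the two.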

Tuning of the control gains
For the sake of simplicity, in this section we assume that G_r,i = G_r and G_n,i = G_n, for all i ∈ S; later, in Sec. 5, we will present an adaptive control strategy allowing each agent to independently vary its own control gains online. To select the values of G_r and G_n giving the best performance in terms of regularity and compactness, we conducted an extensive simulation campaign and evaluated, for each pair (G_r, G_n), a cost function C. The results are reported in Fig. 4, where the black dots mark the pairs (G*_r, G*_n)_{L=6} and (G*_r, G*_n)_{L=4} minimising C. In Fig. 5, we report three snapshots at different time instants of two representative simulations, together with the metrics e_θ(t) and e_L(t), for the cases of a triangular and a square lattice, respectively. The control gains were set to the optimal values (G*_r, G*_n)_{L=6} and (G*_r, G*_n)_{L=4}. In both cases, the metrics quickly converge below their prescribed thresholds, as max{T_θ, T_L} < 2.75 s. Finally, note that e_L(t) decreases faster than e_θ(t), meaning that the swarm tends to first reach the desired level of compactness, and only then are the agents' positions rearranged to achieve the desired pattern.

Robustness analysis
In this section, we investigate numerically the properties that we required in Sec. 3.1, that is, robustness to faults and noise, flexibility, and scalability.

Robustness to faults
We ran a series of simulations in which we removed a percentage of the agents at a certain time instant, and assessed the capability of the swarm to recover the desired pattern. For the sake of brevity, we report one of them in Fig. 6, where, with L = 4, 30% of the agents were removed at random at time t = 30 s. We notice that, as the agents are removed, e_L(t) and e_θ(t) suddenly increase, but, after a short time, they converge again to values below the thresholds, recovering the desired pattern, despite the formation of small holes that increase e^ss_L.

Robustness to noise
We assumed that the dynamics (6) of each agent is affected by additive white Gaussian noise with standard deviation σ. Then, we set L = 4 and varied σ in the interval [0, 1] with increments of 0.05. For each value of σ, we ran M = 30 trials, starting from random initial conditions, and report the average values of e^ss_θ and e^ss_L in Fig. 7. We observe that large noise intensities (σ ≥ 0.4) worsen performance, up to the point of making the trials unsuccessful and preventing the swarm from forming the desired lattice. Interestingly, smaller noise (0 < σ ≤ 0.2) actually improves performance. This is because small random displacements can prevent the agents from getting stuck in undesired configurations, including those containing holes.

Flexibility
In Fig. 8, we report a simulation where L was initially set equal to 4 (square lattice), changed to 6 (triangular lattice) at time t = 30 s, and finally changed back to 4 at t = 60 s. The control gains were set to (G*_r, G*_n)_{L=4} and kept constant during the entire simulation. Clearly, as L is changed, both e_L and e_θ suddenly increase, but the swarm is quickly able to reorganise and reduce them below their prescribed thresholds in less than 5 s, thus achieving the desired pattern.

Scalability
Before properly testing for scalability, we dropped the assumption that (13) holds and characterised e^ss_L as a function of the sensing radius R_s. The results are portrayed in Fig. 9a, showing that performance starts deteriorating for approximately R_s < 6 m, until it becomes unacceptable for about R_s < 1.1 m. Therefore, as a good trade-off between performance and feasibility, we set R_s = 3 m. To test for scalability, we varied the number N of agents (initially, N = 100), reporting the results in Fig. 9b. We see that (i) the controlled swarm correctly achieves the desired pattern for at least four-fold changes in the size of the swarm, and (ii) compactness (e^ss_L) improves as N increases.

Comparison with [20, 21]
We compared our control law (7) to the so-called "gravitational virtual forces" strategy (see the Appendix) [20,21], which represents an established solution to geometric pattern formation problems. In [20,21], a second order damped dynamics is considered for the agents. Hence, for the sake of comparison, we reduced that model to the first order model in (6), by assuming that the viscous friction force is significantly larger than the inertial one.
Then, we performed the same scalability test as in Sec. 4.3.4 and report the results in Fig. 11. Remarkably, by comparing these results with those in Fig. 9b, we see that our proposed control strategy performs better, obtaining much smaller values of e^ss_θ, regardless of the size N of the swarm. In particular, the control law from [20,21] only rarely achieves e^ss_θ ≤ e*_θ, implying a low success rate.

Adaptive tuning of control gains
Tuning the control gains (here G r,i and G n,i ) can in general be a tedious and time-consuming procedure.
Therefore, to avoid it, we propose the use of a simple adaptive control law, which may also improve the robustness and flexibility of the swarm. Specifically, for the sake of simplicity, G_r,i is set to a constant value G_r for the whole swarm, while each agent computes its gain G_n,i independently, using only local information. Letting e_θ,i ∈ [0, 1] be the average angular error for agent i, given by

e_θ,i := (L/π) · (1/|A_i|) Σ_{j∈A_i} |θ^err_ij|,

G_n,i is varied according to the law

Ġ_n,i = α (e_θ,i - e*_θ) if e_θ,i > e*_θ, and Ġ_n,i = 0 otherwise, (21a)
G_n,i(0) = 0, (21b)

where α > 0 is an adaptation gain and e*_θ (introduced in § 4.1) is used to determine the amplitude of the dead-zone. Here, we empirically choose α = 3. To evaluate the effect of the adaptation law, we also define the average normal gain of the swarm

Ḡ_n(t) := (1/N) Σ_{i∈S} G_n,i(t).
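One Euler step of a dead-zone gain adaptation of this kind can be sketched as follows (the exact form of (21) is not reproduced from the paper; the one-sided dead-zone below is an assumption, consistent with gains starting at zero and ceasing to grow once the local error falls within the threshold):

```python
def update_normal_gain(G_n_i, e_theta_i, e_theta_star, alpha, dt):
    # Integrate G_n_i' = alpha * (e_theta_i - e_theta_star), but only while
    # the local angular error exceeds the dead-zone threshold e_theta_star.
    excess = e_theta_i - e_theta_star
    if excess > 0.0:
        G_n_i += dt * alpha * excess
    return G_n_i
```

Under this rule the gain is non-decreasing and settles once the local error stays below e*_θ, mirroring the observation that the average gain eventually reaches a constant value.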
In Fig. 12, we report the time evolution of e_L, e_θ, and Ḡ_n for a representative simulation. First, we notice that the average normal gain Ḡ_n eventually settles to a constant value. Moreover, comparing the results with the case in which the gains G_n,i are not chosen adaptively (see Sec. 4.2 and Fig. 5j), here T_θ, T_L and t_ss are larger (meaning a longer convergence time) but e^ss_θ and e^ss_L are smaller (meaning better regularity and compactness performance).

Robustness analysis
Next, we test robustness to faults, flexibility, and scalability for the adaptive law (21), similarly to what we did in Sec. 4.3.

Robustness to faults
We ran a series of agent removal tests, as described in Sec. 4.3.1. For the sake of brevity, we report the results of one such test, with L = 4, in Fig. 13. At t = 30 s, 30% of the agents are removed; yet, after a short time, the swarm reaggregates and recovers the desired lattice.

Flexibility
We repeated the test in Sec. 4.3.3, with the difference that this time we set G_r = 18.5 (the average of the optimal gains for the square and triangular patterns) and set G_n,i according to law (21), resetting all G_n,i to 0 when L is changed. The results are shown in Fig. 14. When compared to the non-adaptive case (Fig. 8), here e^ss_θ and e^ss_L are smaller (better pattern formation), but T_θ and T_L are larger (slower convergence), especially when forming square patterns. Interestingly, when L = 4, Ḡ_n settles to about 5, while when L = 6 it settles to about 0.3, a much smaller value.

Scalability
We repeated the test in Sec. 4.3.4, again setting the sensing radius R_s to 3 m and assessing performance while varying the size N of the swarm; the results are shown in Fig. 15. First, we notice that the larger the swarm, the larger the steady-state value of Ḡ_n. Comparing the results with those obtained with static gains, in Fig. 9b, we observe a slight improvement of performance, with a slightly smaller e^ss_θ.

Robotarium Experiments
To further validate our control algorithm, we tested it in a real robotic scenario, using the open access Robotarium platform [36,37]. The experimental setup features 20 differential drive robots (GRITSBot [38]) that can move in a 3.2 m × 2 m rectangular arena. The robots have a diameter of about 11 cm, a maximum (linear) speed of 20 cm/s, and a maximum rotational speed of about 3.6 rad/s. To cope with the limited size of the arena, the distances ‖r_ij‖ in (9) are doubled, while the control inputs u_i are halved. The Robotarium implementation includes a collision avoidance safety protocol and transforms the velocity inputs (7) into appropriate acceleration control inputs for the robots. Moreover, we run an initial routine to yield an initial condition in which the agents are aggregated as much as possible at the centre, similarly to what was considered in Sec. 4.
As a paradigmatic example, we performed a flexibility test (similar to the one in Sec. 4.3.3, reported in Fig. 8). During the first 33 seconds, the agents reach an aggregated initial condition. Then we set L = 4 for t ∈ [33, 165) s, L = 6 for t ∈ [165, 297) s, and L = 4 for t ∈ [297, 429] s, after which the experiment ends. We used the static control law (7), (8) and (11), and, to comply with the limited size of the arena, we scaled the control gains to the empirically selected values G_r = 0.8 and G_n = 0.4.
The resulting movie is available online (https://github.com/diBernardoGroup/SwarmSimPublic), while representative snapshots are reported in Fig. 16, together with the time evolution of the metrics. The metrics qualitatively reproduce the behaviour obtained in simulation (see Fig. 8). In particular, we obtain e^ss_θ < e*_θ with both triangular and square patterns. On the other hand, we obtain e^ss_L < e*_L when forming square patterns, but e^ss_L > e*_L with triangular patterns; this is a consequence of the relatively small swarm size rather than a failure to form the pattern, as confirmed by Fig. 16c, which shows the pattern is successfully achieved.

Conclusions
We presented a distributed control law, based on the use of virtual forces, to solve pattern formation for the case of square and triangular lattices. Our control strategy is distributed, requires only distance sensors and a compass, and does not need communication between the agents. We showed via exhaustive simulations and experiments that the strategy is effective in achieving triangular and square lattices; we also compared it with the distance-based strategy in [21], observing better performance, particularly when the goal is to achieve square lattices. Additionally, we showed that the control law is robust to failures of the agents and to noise, flexible to changes in the lattice, and scalable with respect to the number of agents. We also presented a simple yet effective adaptive law to automatically tune the gains, making it possible to switch the goal pattern in real time.
In the future, we plan to study analytically the stability and convergence of the control law. Additionally, we will investigate the extension to other patterns (e.g., hexagonal ones) and a more sophisticated adaptive law able to tune all the control gains at the same time.

Figure 1 :
Figure 1: Geometrical relationship between a pair of agents.

Figure 2 :
Figure 2: (L, R)-lattice formations for (a) triangular (L = 6) and (b) square (L = 4) lattices. Red dots are agents in the adjacency set (A_i) of the black agent (i).

Figure 3 :
Figure 3: (a) Radial and (b) normal interaction functions. Red dots highlight the zeros of the functions. Parameters are taken from Tab. 2.

3
That is, denoting by U([a, b]) the uniform distribution on the interval [a, b], the initial position of each agent in polar coordinates, x_i(0) := (d_i, φ_i), is obtained by independently sampling φ_i ∼ U([0, 2π)) and drawing d_i according to the probability density function p_l : [0, r] → R_{≥0} defined as p_l(ξ) = 2ξ/r^2.
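The sampling in this footnote is the standard inverse-transform draw for a uniform distribution over a disk: taking d = r√u with u uniform on [0, 1] realises the density p_l(d) = 2d/r^2. A sketch:

```python
import math
import random

def sample_initial_position(r):
    # phi ~ U([0, 2*pi)); d = r*sqrt(u) with u ~ U([0, 1]) has density
    # p_l(d) = 2*d/r**2 on [0, r], i.e., the point is uniform on the disk.
    phi = random.uniform(0.0, 2.0 * math.pi)
    d = r * math.sqrt(random.random())
    return (d * math.cos(phi), d * math.sin(phi))
```

As a sanity check, the mean distance from the origin of such samples tends to the theoretical value E[d] = 2r/3.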

Figure 4 :
Figure 4: Tuning of the control gains Gr and Gn with (a) L = 6 and (b) L = 4 ( § 4.2).The black dots correspond to (G * r , G * n ) L=6 and (G * r , G * n ) L=4 , minimising C. The black curves delimit the regions where C ≤ 1.

Figure 5 :
Figure 5: Snapshots at different time instants of a swarm forming (a)-(d) a triangular lattice and (f)-(i) a square lattice (§ 4.2). Panels e and j show the time evolution of the metrics e_θ and e_L for the cases L = 6 and L = 4, respectively. When L = 6, we set (G_r, G_n) = (G*_r, G*_n)_{L=6}; when L = 4, we set (G_r, G_n) = (G*_r, G*_n)_{L=4}.

Figure 6 :
Figure 6: Robustness to removal of agents (§ 4.3.1). Panels a-d show snapshots at different time instants. Panel e shows the time evolution of the metrics; dashed vertical lines denote the time instant when agents are removed. L = 4, (G_r, G_n) = (G*_r, G*_n)_{L=4}.

Figure 7 :
Figure 7: Robustness to noise (§ 4.3.2). e^ss_L and e^ss_θ, averaged over M = 30 trials, varying the standard deviation of the Gaussian noise. The shaded areas represent the maximum and minimum values obtained over the M trials. L = 4, (G_r, G_n) = (G*_r, G*_n)_{L=4}.

Figure 9 :
Figure 9: Scalability (§ 4.3.4). (a) e^ss_L averaged over M = 30 trials with varying R_s. (b) e^ss_θ and e^ss_L averaged over the trials, with varying N; R_s = 3 m; agents' initial positions are drawn with uniform distribution from a disk with radius r = N/25. The shaded areas represent the maximum and minimum values over the M trials. L = 4, (G_r, G_n) = (G*_r, G*_n)_{L=4}.

Figure 11 :
Figure 11: Scalability test for the algorithm from [21] (§ 4.4). e^ss_L and e^ss_θ averaged over M = 30 trials, as N varies. Agents' initial positions are drawn with uniform distribution from a disk of radius r = N/25. The shaded area represents the maximum and minimum values over the trials. L = 4, (G, F_max) = (G*, F*_max).

Figure 12 :
Figure 12: Pattern formation using the adaptive tuning law (21) (§ 5). Initial conditions are the same as those of the simulation in Fig. 5. The shaded magenta area is delimited by max_{i∈S} G_n,i and min_{i∈S} G_n,i. L = 4, G_r = 15.

Figure 13 :
Figure 13: Robustness to agent removal using the adaptive tuning law (21) (§ 5.1.1). Initial conditions are the same as those of the simulation in Fig. 6. Panels a-d show snapshots at different time instants. Panel e shows the time evolution of the metrics; dashed vertical lines denote the time instant when agents are removed. The shaded magenta area is delimited by max_{i∈S} G_n,i and min_{i∈S} G_n,i. L = 4, G_r = 15.

Figure 14 :
Figure 14: Flexibility using the adaptive tuning law (21) (§ 5.1.2). Initial conditions are the same as those of the simulation in Fig. 8. The shaded magenta area is delimited by max_{i∈S} G_n,i and min_{i∈S} G_n,i.

Figure 15 :
Figure 15: Scalability using the adaptive tuning law (21) (§ 5.1.3). e^ss_θ and e^ss_L are averaged over M = 30 trials with varying N. R_s = 3 m; agents' initial positions are drawn with uniform distribution from a disk with radius r = N/25. G^ss_n := Ḡ_n(t_ss). The shaded areas represent the maximum and minimum values over the trials. L = 4, G_r = 15.

Figure 16 :
Figure 16: Flexibility test in Robotarium (§ 6). Panels a-d show the swarm at different time instants. Panel e shows the time evolution of the metrics and the parameter L. (G_r, G_n) = (G*_r, G*_n)_{L=4}.

Table 1 :
Simulations and experiments

Table 2 :
Simulation parameters.