Simultaneous localization and mapping in a multi-robot system in a dynamic environment with unknown initial correspondence

A basic assumption in most approaches to simultaneous localization and mapping (SLAM) is the static nature of the environment. In recent years, some research has been devoted to the field of SLAM in dynamic environments. However, most of the studies conducted in this field have implemented SLAM by removing and filtering the moving landmarks. Moreover, the use of several robots in large, complex, and dynamic environments can significantly improve performance on the localization and mapping task, which has attracted many researchers to this problem more recently. In multi-robot SLAM, the robots can cooperate in a decentralized manner without the need for a central processing center to obtain their positions and a more precise map of the environment. In this article, a new decentralized approach is presented for multi-robot SLAM problems in dynamic environments with unknown initial correspondence. The proposed method applies a modified Fast-SLAM method, which implements SLAM in a decentralized manner by considering moving landmarks in the environment. Due to the unknown initial correspondence of the robots, a geographical approach is embedded in the proposed algorithm to align and merge their maps. Data association is also embedded in the algorithm; this is performed using the measurement predictions in the SLAM process of each robot. Finally, simulation results are provided to demonstrate the performance of the proposed method.


Introduction
Simultaneous localization and mapping (SLAM) is one of the most important developments in the field of robotics, enabling robots operating in GPS-denied environments to perceive their surroundings and localize themselves within the identified map. This is especially useful for industries such as underground mining and nuclear decommissioning, where mobile robots are required to explore and complete various tasks, such as maintenance, inspection, and transportation, in environments that are inaccessible and hazardous for humans. These missions are accomplished either using a single robot in isolation (Montazeri et al., 2021) or using a team of cooperative robots (Burrell et al., 2018). The robot or robots need to cope with the time-varying, restricted, uncertain, and unstructured nature of the environment to achieve planning and execution of the necessary tasks. This in turn requires the design and development of advanced motion control and navigation algorithms, along with strong cognitive capabilities, in order for the robot to perceive the surrounding environment effectively. The use of both single- and multi-robot platforms can be advantageous, depending on the specific application and environment.
SLAM-based navigation refers to the set of techniques by which robots build a map of an unknown environment while simultaneously localizing themselves within it. These techniques can be applied to single-robot (Debeunne and Vivet, 2020) or multi-robot (Almadhoun et al., 2019) systems, in either static (Kretzschmar et al., 2010) or dynamic (Saputra et al., 2018) environments.
In a static environment, landmarks are fixed: all objects in the environment are stationary and none are moving (Stachniss et al., 2016). A dynamic environment is one in which objects are moving, such as an environment that includes humans, robots, and other means of transportation. In a dynamic environment, the SLAM process becomes more complex due to continuous changes in the environment (such as moving objects, changing colors, and combinations of shadows), and this makes the problem more challenging, for example, by creating a mismatch between previous and new data (Li et al., 2021).
Several successful solutions have been presented for the single-robot and multi-robot SLAM problem in static environments (Saeedi et al., 2016; Lluvia et al., 2021). However, the SLAM problem in a dynamic environment is more complicated due to the requirements for simultaneous detection, classification, and tracking of moving landmarks. In recent years, single-robot and multi-robot SLAM problems in dynamic environments have attracted the attention of many researchers. In (Vidal et al., 2015; Badalkhani and Havangi, 2021), solutions for the single-robot and multi-robot SLAM problems in dynamic environments are presented. In (Vidal et al., 2015), a method is presented for SLAM in which only static (fixed) landmarks are considered and moving ones are omitted from the map. The paper uses an outlier filter to separate fixed and moving landmarks, treating the moving ones as negligible; the authors use only the fixed landmarks in their solution to the SLAM problem and do not consider moving landmarks.
In (Badalkhani and Havangi, 2021), a solution for the multi-robot SLAM problem in a dynamic environment is presented. The proposed SLAM-based method is developed using a Kalman filter (KF); it detects moving landmarks based on possible location constraints and the expected landmark area, which enables detection and filtering of moving landmarks. The main goal of (Badalkhani and Havangi, 2021) is the development of a new method to identify the moving parts of the environment and remove them from the map.
Some studies have also implemented SLAM in dynamic environments through object recognition or image-processing techniques. However, some of these methods, such as those presented in (Liu et al., 2019; Pancham et al., 2020; Chen et al., 2021; Theodorou et al., 2022; Guan et al., 2023), likewise aim to remove the moving landmarks found in the dynamic environment.
As mentioned, researchers investigating SLAM in dynamic environments with one or more robots have mostly implemented SLAM by eliminating the moving landmarks, retaining only fixed ones for the mapping procedure. This approach fails when the environment is complex, contains many moving objects, or lacks fixed objects altogether.
In this paper, in contrast with previous work, a decentralized multi-robot modified Fast-SLAM-based algorithm is designed for use in a dynamic environment where all the landmarks are moving. It is worth mentioning that, in Fast-SLAM (Montemerlo et al., 2002), particle filtering (PF) (Godsill, 2019) is used to handle the non-linear kinematics of the robots via the Monte Carlo technique. Fast-SLAM is a filtering-based approach in which SLAM is decomposed into a robot localization problem and a collection of landmark estimation problems conditioned on the robot pose estimate. This decomposition follows from Bayes' rule combined with the statistical independence of landmark positions, and leads to Rao-Blackwellized particle filtering (RBPF) SLAM (Sadeghzadeh-Nokhodberiz et al., 2021).
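To make this factorization concrete, each RBPF particle can be viewed as a robot-pose hypothesis that carries its own small EKF per landmark. The following Python sketch is illustrative only; the class and field names are our own, not taken from the paper:

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class LandmarkEKF:
    mean: np.ndarray  # [x, y, phi] estimate of one moving landmark
    cov: np.ndarray   # 3x3 covariance of that estimate

@dataclass
class Particle:
    pose: np.ndarray                               # robot pose hypothesis [x, y, phi]
    weight: float = 1.0
    landmarks: dict = field(default_factory=dict)  # landmark id -> LandmarkEKF

# N pose hypotheses; landmark estimates are conditioned on each pose hypothesis,
# so every particle maintains its own independent set of landmark EKFs
particles = [Particle(pose=np.zeros(3)) for _ in range(100)]
particles[0].landmarks[0] = LandmarkEKF(mean=np.zeros(3), cov=0.01 * np.eye(3))
```

Because the landmark filters are low-dimensional and conditioned on the sampled trajectory, each one stays a cheap 3-state EKF rather than part of one large joint filter.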
In this study, we have simplified the multi-robot Fast-SLAM problem by considering two wheeled robots in a dynamic environment with two moving landmarks governed by a known kinematic model. The proposed method can easily be extended to cases with more robots and landmarks. Each robot searches the environment and observes it with its onboard lidar sensor. Here, it is assumed that the kinematic models of the robots and landmarks are known, which makes the use of a modified Fast-SLAM method meaningful. This method is based on the combined use of a particle filter and an extended Kalman filter (EKF). However, several modifications of the normal Fast-SLAM method are required. The first modification is the addition of a prediction step to the EKF used for mapping, based on the kinematic model of the landmarks. Adding this step does not obviate the need for initialization, as the initial values of the landmarks' positions are assumed to be unknown. This prediction step is performed using the landmark's kinematic model at every sample time after the first visit to the landmark, even if the landmark has not been revisited by the lidar sensor. The second modification is that, in contrast with the normal approaches to data association with static landmarks, data association is performed based on the predicted measurement obtained from the predicted map. When data association is based on the predicted position of a landmark, it is less likely that a previously visited landmark will be wrongly diagnosed as a new one due to its movement. The third modification, required due to the unknown initial correspondence, is the addition of coordinate alignment and map-merging to the algorithm. In this step, the relative transformation matrix of the robots' inertial frames is computed when the robots meet each other, using a geographical approach as presented in (Zhou and Roumeliotis, 2006; Romero and Costa, 2010); the maps obtained in the coordinate system of each robot are then fused and merged.
In summary, the novelty of this paper lies in the extension of the Fast-SLAM algorithm in several respects, as follows:

1. Addition of a prediction step to the EKF used for mapping, based on the kinematic model of the landmarks, applied after the first visit to each landmark and at every sample time.
2. Data association based on the predicted measurements obtained from the predicted map.
3. Addition of coordinate alignment and map-merging to the algorithm when the robots meet each other.

FIGURE 1
The robots and their corresponding coordinates.
The paper is organized as follows. In Section 2, a system overview is presented, including the kinematic models of the robots and landmarks as well as the lidar measurement model. The modified Fast-SLAM algorithm for the dynamic environment is proposed in Section 3. Section 4 describes the coordinate alignment and map-merging problem. Simulation results are presented in Section 5, and conclusions are provided in Section 6.

System overview
In this paper, for the sake of simplicity, two mobile robots and two dynamic landmarks have been considered. As an example, differential-drive mobile robots such as the Pioneer P3-DX (Zaman et al., 2011) moving in a dynamic environment were selected for the study. The robots are equipped with lidar and IMU sensors. In this section, the kinematic models of the robots and landmarks and the measurement model of the lidar are assumed to be known.
It is worth mentioning that IMUs provide information on linear acceleration and angular velocity; linear speed can then be obtained through the integration of linear acceleration.
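As a minimal illustration of this integration step, the following Python sketch recovers speed from accelerometer samples by rectangular integration (the function name is our own; sensor bias and drift are ignored here, although they matter in practice, as noted in the conclusion):

```python
import numpy as np

def velocity_from_accel(a, dt, v0=0.0):
    """Recover linear speed from IMU linear-acceleration samples a[0..n-1]
    by cumulative (rectangular) integration: v(k) = v0 + sum(a) * dt."""
    return v0 + np.cumsum(a) * dt

# Constant 0.5 m/s^2 for 1 s, sampled at 10 Hz: speed ramps to 0.5 m/s
v = velocity_from_accel(np.full(10, 0.5), dt=0.1)
```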

Kinematic model of the robots
The kinematic model for the two differential-drive Pioneer P3-DX mobile robots can be written as follows:

x_{r_i}(k) = x_{r_i}(k − 1) + Δt v(k) cos(φ_{r_i}(k − 1)) + υ_{x,r_i}(k)
y_{r_i}(k) = y_{r_i}(k − 1) + Δt v(k) sin(φ_{r_i}(k − 1)) + υ_{y,r_i}(k)      (1)
φ_{r_i}(k) = φ_{r_i}(k − 1) + Δt ω(k) + υ_{φ,r_i}(k)

where x_{r_i} and y_{r_i} are the positions of the i-th robot on the x and y axes and φ_{r_i} is its corresponding direction relative to the x-axis, for i = 1, 2.
The coordinate system of the first robot is G_1 and that of the second robot is G_2, as depicted in Figure 1. In (1), υ_{r_i}(k) is zero-mean non-Gaussian process noise with a known probability density function (PDF). Additionally, Δt is the sampling time of the process, and v(k) and ω(k) are the linear and angular velocities of the robot, which are assumed to be known.
Eq. (1) can be rewritten in the general non-linear state-space form

x_{r_i}(k) = f_{r_i}(x_{r_i}(k − 1), u(k)) + υ_{r_i}(k),      (2)

where x_{r_i}(k) = [x_{r_i}(k)  y_{r_i}(k)  φ_{r_i}(k)]^T, u(k) = [v(k)  ω(k)]^T, and f_{r_i}(·) is the kinematic model presented in (1).
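A minimal sketch of this unicycle-type update, written in Python for illustration (the function name and interface are our own assumptions, not the authors' MATLAB code):

```python
import numpy as np

def robot_motion(state, v, omega, dt, noise=None):
    """Propagate a differential-drive pose [x, y, phi] one step (Eq. 1)."""
    x, y, phi = state
    if noise is None:
        noise = np.zeros(3)  # nu_{r_i}(k): zero-mean process noise sample
    return np.array([
        x + dt * v * np.cos(phi),
        y + dt * v * np.sin(phi),
        phi + dt * omega,
    ]) + noise

# Straight-line motion: heading 0, v = 1 m/s, dt = 0.1 s
pose = robot_motion(np.array([0.0, 0.0, 0.0]), v=1.0, omega=0.0, dt=0.1)
```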

Kinematic model of the landmarks
In this study, both landmarks are considered to be moving, with the following kinematic model:

x_{L_j,r_i}(k) = x_{L_j,r_i}(k − 1) + Δt v_{L_j}(k) cos(φ_{L_j,r_i}(k − 1)) + υ_{x,L_j,r_i}(k)
y_{L_j,r_i}(k) = y_{L_j,r_i}(k − 1) + Δt v_{L_j}(k) sin(φ_{L_j,r_i}(k − 1)) + υ_{y,L_j,r_i}(k)      (3)
φ_{L_j,r_i}(k) = φ_{L_j,r_i}(k − 1) + Δt ω_{L_j}(k) + υ_{φ,L_j,r_i}(k)

where x_{L_j,r_i} and y_{L_j,r_i} are the positions of the j-th landmark on the x and y axes, respectively, and φ_{L_j,r_i} is its corresponding direction with respect to the x-axis for j = 1, 2, expressed in G_i for i = 1, 2. The variables v_{L_j}(k) and ω_{L_j}(k) are the linear and angular velocities of landmark L_j, respectively; Δt is the sampling time of the process, and k refers to the sample number. It is assumed that v_{L_j}(k) and ω_{L_j}(k) are known parameters. Moreover, υ_{L_j,r_i}(k) is a zero-mean Gaussian noise vector, that is, υ_{L_j,r_i}(k) ∼ N(0, R_{L_j,r_i}), where N(·, ·) refers to the Gaussian PDF.
By writing (3) in the general non-linear state-space form, we obtain

x_{L_j,r_i}(k) = f_{L_j,r_i}(x_{L_j,r_i}(k − 1), u_{L_j}(k)) + υ_{L_j,r_i}(k),      (4)

where x_{L_j,r_i}(k) = [x_{L_j,r_i}(k)  y_{L_j,r_i}(k)  φ_{L_j,r_i}(k)]^T, u_{L_j}(k) = [v_{L_j}(k)  ω_{L_j}(k)]^T, and f_{L_j,r_i}(·) is the kinematic model presented in (3).

Lidar measurement model
Lidar sensor output includes the distance and angle of each robot relative to the observed landmarks. Each observation by each robot of the landmarks that are present can be expressed by the following observation model:

r_{ij}(k) = sqrt( (x_{L_j,r_i}(k) − x_{r_i}(k))^2 + (y_{L_j,r_i}(k) − y_{r_i}(k))^2 ) + υ_{r,ij}(k)
φ_{ij}(k) = atan2( y_{L_j,r_i}(k) − y_{r_i}(k), x_{L_j,r_i}(k) − x_{r_i}(k) ) − φ_{r_i}(k) + υ_{φ,ij}(k)      (5)

where z_{L_j,r_i} = [r_{ij}  φ_{ij}]^T represents the i-th robot's observation vector of the j-th landmark in the environment, r_{ij} represents the distance of the robot from the observed landmark, and φ_{ij} is the corresponding angle of the robot relative to the landmark. Additionally, υ_{ij} is a zero-mean Gaussian measurement noise vector with covariance matrix R_{z_{ij}}. Moreover, (5) can be rewritten in the general non-linear measurement model

z_{L_j,r_i}(k) = h_{ij}( x_{r_i}(k), x_{L_j,r_i}(k) ) + υ_{ij}(k).      (6)

FIGURE 2
The relationships between different coordinates.

FIGURE 3
Real and estimated position of robot R 1 in the frame G 1 .

FIGURE 4
Real and estimated position of robot R 2 in the frame G 1 .

FIGURE 5
Real and estimated position of robot R 2 in the frame G 2 .

FIGURE 6
Real and estimated values of x r 1 , y r 1 , and φ r 1 in G 1 .

Modified Fast-SLAM with dynamic landmarks

Robots R 1 and R 2 explore the environment, each performing the SLAM process from its corresponding frame, G 1 or G 2 , respectively; an independent map of the environment is thereby created by each of them. Information on the distances and angles of the robots relative to the observed landmarks is provided by the lidar sensor in each robot. Each robot then uses a modified Fast-SLAM algorithm. In the proposed modified Fast-SLAM, Rao-Blackwellized particle filtering (RBPF) is employed to estimate both vehicle trajectories and landmark positions: each landmark is estimated with an EKF, while the particles generated by the PF are used only for trajectory estimation.
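The range-bearing observation supplied by the lidar, Eq. (5), can be sketched as follows (an illustrative Python fragment; the function name is our own):

```python
import numpy as np

def lidar_measurement(robot_pose, landmark_xy, noise=np.zeros(2)):
    """Range-bearing observation of a landmark (Eq. 5): z = [r_ij, phi_ij]."""
    dx = landmark_xy[0] - robot_pose[0]
    dy = landmark_xy[1] - robot_pose[1]
    r = np.hypot(dx, dy)                          # Euclidean distance to landmark
    bearing = np.arctan2(dy, dx) - robot_pose[2]  # angle relative to robot heading
    return np.array([r, bearing]) + noise

# Robot at the origin, heading along x; landmark at (3, 4)
z = lidar_measurement(np.array([0.0, 0.0, 0.0]), np.array([3.0, 4.0]))
```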
In the remainder of this section, this process is explained, with a modification applied to the EKF due to the presence of moving landmarks with known kinematics. The general steps of Fast-SLAM consist of particle filtering with embedded extended Kalman filtering for the mapping; the data association process; and the map-merging procedure. These steps, with the necessary modifications, are explained for our problem in the following subsections.

Particle generation
An important initial step in the PF is particle generation. To this end, particles are generated using the known PDF of υ_{r_i}(k) through Monte Carlo simulation: particles υ^{l}_{r_i}(k − 1) for l = 1, ..., N are drawn and substituted into the motion model of the i-th robot (1), and thereby x^{l}_{r_i}(k), y^{l}_{r_i}(k), and φ^{l}_{r_i}(k) for l = 1, ..., N and i = 1, 2 are generated. It is worth mentioning that N refers to the number of particles. This process can be formulated in the following general form according to (2):

x^{l}_{r_i}(k) = f_{r_i}( x^{l}_{r_i}(k − 1), u(k) ) + υ^{l}_{r_i}(k − 1),  l = 1, ..., N.      (7)
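The particle-generation step can be sketched as below (Python, illustrative; the uniform noise draw merely stands in for the robot's known, possibly non-Gaussian, noise PDF):

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_particles(particles, v, omega, dt, noise_sampler):
    """Propagate N pose particles [x, y, phi] through the motion model (Eq. 7),
    injecting noise drawn from the robot's known PDF via noise_sampler(N)."""
    noise = noise_sampler(particles.shape[0])
    out = np.empty_like(particles)
    out[:, 0] = particles[:, 0] + dt * v * np.cos(particles[:, 2]) + noise[:, 0]
    out[:, 1] = particles[:, 1] + dt * v * np.sin(particles[:, 2]) + noise[:, 1]
    out[:, 2] = particles[:, 2] + dt * omega + noise[:, 2]
    return out

# 100 particles starting at the origin; a small uniform PDF stands in for the known one
sampler = lambda n: rng.uniform(-0.01, 0.01, size=(n, 3))
parts = generate_particles(np.zeros((100, 3)), v=1.0, omega=0.0, dt=0.1,
                           noise_sampler=sampler)
```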

Mapping
Each robot separately performs mapping after the particle generation process using the EKF, as explained in the subsections below. After the first meeting of the robots and the coordinate transformation, the maps are merged, as explained in detail in the subsequent subsections.

• Initialization
After the particle generation process, as explained in the previous subsection, when the j-th landmark is visited for the first time at sample instant k, initialization is performed for each particle using the inverse sensor model, as follows (neglecting the measurement noise):

x̂^{l+}_{L_j,r_i}(k) = x^{l}_{r_i}(k) + r_{ij}(k) cos( φ^{l}_{r_i}(k) + φ_{ij}(k) )
ŷ^{l+}_{L_j,r_i}(k) = y^{l}_{r_i}(k) + r_{ij}(k) sin( φ^{l}_{r_i}(k) + φ_{ij}(k) )      (8)

where x̂^{l+}_{L_j,r_i}(k), ŷ^{l+}_{L_j,r_i}(k), and φ̂^{l+}_{L_j,r_i}(k) generally refer to the posterior estimates. Although posterior estimates are obtained in the update step of the Bayesian filtering procedure, a similar notation is used here because these initial values serve as the prior estimates in the subsequent prediction step. Eq. (8) can be rewritten in the following general form using (6):

x̂^{l+}_{L_j,r_i}(k) = h^{−1}_{ij}( x^{l}_{r_i}(k), z_{L_j,r_i}(k) ).      (9)

Moreover, the initial covariance matrix is set as P^{l+}_{L_j,r_i} = K I_3, where K is a small real number and I_3 is the 3 × 3 identity matrix.

FIGURE 7
Real and estimated values of x r 2 , y r 2 , and φ r 2 in G 1 .
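The inverse sensor model can be sketched as follows (Python, illustrative; only the position part is shown here, whereas the paper's landmark state also carries an orientation and uses K I_3 as the full initial covariance):

```python
import numpy as np

def init_landmark(particle_pose, z, K=0.01):
    """Inverse sensor model: place a newly seen landmark from one
    range-bearing hit z = [r, phi_ij]; covariance starts at K * I."""
    x, y, phi = particle_pose
    r, bearing = z
    mean = np.array([x + r * np.cos(phi + bearing),
                     y + r * np.sin(phi + bearing)])
    return mean, K * np.eye(2)

# Particle at (1, 2) heading along x; hit at range 5, bearing +90 degrees
lm, P0 = init_landmark(np.array([1.0, 2.0, 0.0]), np.array([5.0, np.pi / 2]))
```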

• Prediction step
After the first visit to the j-th landmark, the prior (predicted) estimates x̂^{l−}_{L_j,r_i}(k), ŷ^{l−}_{L_j,r_i}(k), and φ̂^{l−}_{L_j,r_i}(k) are computed from the landmark kinematics (3) at each time step for each particle l. Additionally, it is necessary to compute the predicted covariance matrix. This step is the first modification of our method relative to normal Fast-SLAM with static landmarks, in which no kinematics exist for the landmarks. In other words:

x̂^{l−}_{L_j,r_i}(k) = f_{L_j,r_i}( x̂^{l+}_{L_j,r_i}(k − 1), u_{L_j}(k) )      (10)

P^{l−}_{L_j,r_i}(k) = F_{L_j,r_i}(k) P^{l+}_{L_j,r_i}(k − 1) F^{T}_{L_j,r_i}(k) + R_{L_j,r_i}      (11)

where in (11) P^{l−}_{L_j,r_i} is the predicted covariance matrix and P^{l+}_{L_j,r_i} is the updated one. Moreover, F_{L_j,r_i}(k) in (12) is the Jacobian of f_{L_j,r_i}(·) evaluated at x̂^{l+}_{L_j,r_i}(k − 1).

• Data association

Data association is one of the most important issues for SLAM in dynamic environments. For this purpose, in this paper, the measurement prediction is simply computed as

ẑ^{l}_{L_j,r_i}(k) = h_{ij}( x^{l}_{r_i}(k), x̂^{l−}_{L_j,r_i}(k) ),

and each incoming measurement is associated with the landmark whose predicted measurement is closest, that is, the landmark minimizing the innovation norm ‖z_{L_j,r_i}(k) − ẑ^{l}_{L_j,r_i}(k)‖ (14).
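These two modifications, EKF prediction through the landmark kinematics and association against predicted measurements, can be sketched as follows (Python, illustrative; the plain innovation norm below is our own stand-in for the association criterion in (14)):

```python
import numpy as np

def predict_landmark(mean, cov, v_L, omega_L, dt, R):
    """EKF prediction for a moving landmark [x, y, phi] using its
    unicycle kinematics; R is the landmark process-noise covariance."""
    x, y, phi = mean
    mean_pred = np.array([x + dt * v_L * np.cos(phi),
                          y + dt * v_L * np.sin(phi),
                          phi + dt * omega_L])
    # Jacobian of the kinematics with respect to the landmark state
    F = np.array([[1.0, 0.0, -dt * v_L * np.sin(phi)],
                  [0.0, 1.0,  dt * v_L * np.cos(phi)],
                  [0.0, 0.0,  1.0]])
    return mean_pred, F @ cov @ F.T + R

def associate(z, robot_pose, predicted_means):
    """Assign measurement z = [r, bearing] to the landmark whose
    PREDICTED position yields the closest predicted measurement."""
    def h(lm):
        dx, dy = lm[0] - robot_pose[0], lm[1] - robot_pose[1]
        return np.array([np.hypot(dx, dy), np.arctan2(dy, dx) - robot_pose[2]])
    return int(np.argmin([np.linalg.norm(z - h(m)) for m in predicted_means]))

# Predict one landmark forward, then match a hit against two predicted landmarks
mean_p, cov_p = predict_landmark(np.zeros(3), 0.1 * np.eye(3), 1.0, 0.0, 0.1,
                                 0.001 * np.eye(3))
idx = associate(np.array([5.0, 0.0]), np.zeros(3),
                [np.array([5.0, 0.0, 0.0]), np.array([0.0, 5.0, 0.0])])
```

Because the comparison uses each landmark's predicted (moved) position rather than its last seen one, a previously visited landmark that has since moved is less likely to be misdeclared as new.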

FIGURE 9
Estimation errors e x r 2 , e y r 2 , and e φ r 2 in G 1 .

FIGURE 10
Real position of landmark L 1 and position estimated by robot R 1 in the frame G 1 .

FIGURE 11
Real position of landmark L 2 and position estimated by robot R 1 in the frame G 1 .

FIGURE 12
Real position of landmark L 1 and position estimated by robot R 2 in the frame G 1 .

FIGURE 13
Real position of landmark L 2 and position estimated by robot R 2 in the frame G 1 .

FIGURE 14
Real and estimated position of robot R 1 in the frame G 1 (with different initial conditions).

FIGURE 15
Real and estimated position of robot R 2 in the frame G 1 (with different initial conditions).
The transformed landmark estimates are then fused through simple averaging, and the map is updated to achieve greater precision. Moreover, similarly, the trajectory of the second robot can be expressed in G 1 as a global inertial frame.
Similarly, the map created by the first robot is expressed in frame G 2 using ^{G_2}C_{G_1} and is fused with the second robot's estimates through averaging to provide more precise mapping.
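The alignment-and-averaging step can be sketched as follows (Python, illustrative; here theta and p stand for the relative rotation angle and translation recovered at the rendezvous):

```python
import numpy as np

def to_global(theta, p, xy_local):
    """Express a point from frame G2 in frame G1, given the relative
    rotation angle theta (G1_C_G2) and translation p (G1_P_G2)."""
    C = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return C @ xy_local + p

def fuse(est_g1, est_g2_in_g1):
    """Merge two estimates of the same landmark by simple averaging."""
    return 0.5 * (est_g1 + est_g2_in_g1)

# Robot 2 saw the landmark at (1, 0) in G2; G2 is rotated 90 degrees
# and shifted by (1, 1) relative to G1
lm_from_r2 = to_global(np.pi / 2, np.array([1.0, 1.0]), np.array([1.0, 0.0]))
merged = fuse(np.array([1.0, 2.0]), lm_from_r2)
```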
To clarify the proposed method, the general proposed algorithm for multi-robot Fast-SLAM in a dynamic environment is presented and summarized in Algorithm 1.

Simulation results
To evaluate the performance of the proposed multi-robot SLAM algorithm in a dynamic environment, a series of simulation studies were conducted using MATLAB m-file codes; these are reported on in this section.Two Pioneer P3-DX mobile robots, each equipped with a lidar sensor, are simulated.As mentioned earlier in the article, two moving landmarks are considered and are modeled via their kinematic models.
As mentioned earlier, robots R 1 and R 2 explore the environment and observe the landmarks within it; each implements the SLAM process from its corresponding frame, G 1 or G 2 , respectively, and builds an independent map of the environment. The parameters in our simulation are as follows: all sources of noise are considered to be zero-mean Gaussian; the covariance matrices of the process (kinematics) noise, for both landmarks and robots, are specified as 0.001 I_3 , and the covariance matrices of the lidar noise are specified as 0.001 I_2 .
Once the robots meet one another, the transformation matrix between the two frames G 1 and G 2 is obtained, and the maps can be fused to obtain a more precise map of the environment and achieve more precise localization performance. In our simulation, the robots meet each other at 9.35 s, at which point the transformation matrix is estimated.

Step 0: Initialization. At each sample time k, for each robot r_i, i = 1, 2, and for each particle l = 1, ..., N:
Step 1: Particle generation: generate particles according to (7).
Step 2: Mapping: for each L_j, j = 1, 2:
  • if no measurement is available but the landmark has been visited, do:
    1. predict according to (10), (11), and (12).
  • endif
  • if measurement z_{L_j,r_i}(k) is available, then:
    • do data association according to (14):
    • if the landmark L_j is visited for the first time, do:
      1. initialization according to (8) and (9).
    • endif
Step 3: Perform resampling, resulting in equally weighted estimates.
Step 4: Coordinate alignment and map merging:
  • if the robot visits another robot, do:
    1. compute the rotation matrix ^{G_1}C_{G_2} and the translation vector ^{G_1}P_{G_2}.
    4. fuse maps as explained in Section 4.
  • endif
Return: x̂_{L_j,r_i}(k) and x̂_{r_i}(k) through averaging.

Algorithm 1. The general multi-robot Fast-SLAM-based algorithm with moving landmarks.

FIGURE 16
Real position of landmark L 1 and position estimated by robot R 2 in the frame G 1 (with different initial conditions).

FIGURE 17
Real position of landmark L 2 and position estimated by robot R 2 in the frame G 1 (with different initial conditions).

FIGURE 18
Real position of landmark L 1 and position estimated by robot R 1 in the frame G 1 (with different initial conditions).

FIGURE 19
Real position of landmark L 2 and position estimated by robot R 1 in the frame G 1 (with different initial conditions).
Figures 3, 4 depict the results of the localization of robots R 1 and R 2 , respectively, in 2D space, expressed in the G 1 coordinate frame, which is selected as the global frame in this paper. Figure 5 depicts the estimated and real paths of robot R 2 in frame G 2 , to better illustrate how robot R 2 performs localization in its own frame.
The real and estimated values of x r i , y r i , and φ r i , expressed in frame G 1 with respect to time, are depicted in Figures 6, 7 for i = 1 (robot R 1 ) and i = 2 (robot R 2 ), respectively. To show the precision of the estimation, Figures 8, 9 depict the corresponding errors, that is, the errors between the real and estimated values (e x r i , e y r i , and e φ r i ) in G 1 for i = 1, 2.
The results of the mapping in 2D space are shown in Figures 10, 11 for landmarks L 1 and L 2 as obtained from R 1 , respectively, expressed in frame G 1 ; the results obtained from R 2 in frame G 1 are shown in Figures 12, 13. It is worth mentioning again that when the robots meet each other, the maps obtained from R 1 and R 2 are merged. It is clear from the figures that the localization and mapping results are significantly improved after the map-merging process.
To examine the efficiency of the method when some of the conditions are altered, the simulation was repeated with different initial conditions. The results of the localization of the robots in frame G 1 are depicted in Figures 14, 15, and the results of the mapping are depicted in Figures 16-19. It is clear from the figures that the algorithm remains successful in simultaneous localization and mapping under changed initial conditions.

Conclusion
In this paper, an efficient algorithm has been presented for the multi-robot SLAM problem with unknown initial correspondence in a dynamic environment, using a modified Fast-SLAM method. In our scenario, each robot independently searches the environment, observes the moving landmarks using a lidar sensor, and implements the SLAM algorithm. In order to distinguish the moving landmarks, kinematic models are considered for them, which led to a modification of the normal Fast-SLAM method in the form of an added prediction phase; additionally, data association is performed according to the predicted measurements obtained from this prediction step. Although the kinematic models of the landmarks are known within each robot's coordinate system, an initialization is embedded in the algorithm to obtain the current positions of the landmarks upon the first visit, as these positions are unknown to the robots.
Since the initial correspondence of the robots is unknown (in other words, each robot performs mapping from the perspective of its own coordinate frame), a map-merging procedure was embedded in the proposed algorithm to fuse the robots' independent maps when they meet each other. This map-merging is only possible once the relative transformation matrix of the robots' inertial frames has been computed, which occurs when the robots meet. For this purpose, a geographical approach to computing this transformation matrix was embedded in the proposed algorithm.
The performance of the proposed method was evaluated through simulations in MATLAB. It can be concluded from the simulation results that although each robot was able to solve the SLAM problem with an acceptable level of performance, the accuracy of SLAM was significantly improved once the robots met each other and map-merging was performed.
Although the proposed method showed very good performance in simulations in the MATLAB environment, it may encounter difficulties in a real environment, such as model mismatches for both the landmarks and the robots due to noise, and the occurrence of drift, for example, in the inertial sensor measurements. Additionally, designing a rendezvous method to force the robots to visit each other may represent another issue in a real environment.
Generalization of the proposed method to the general case of a multi-robot, multi-landmark system, with more than two robots and two moving landmarks, is proposed for future work. Embedding an object-detection approach, instead of relying on the kinematic models of the landmarks, would also add significant value to this research.

FIGURE 8
Estimation errors e x r 1 , e y r 1 , and e φ r 1 in G 1 .