AUTHOR=Adil Afnan Ahmed, Sakhrieh Saber, Mounsef Jinane, Maalouf Noel
TITLE=A multi-robot collaborative manipulation framework for dynamic and obstacle-dense environments: integration of deep learning for real-time task execution
JOURNAL=Frontiers in Robotics and AI
VOLUME=12
YEAR=2025
URL=https://www.frontiersin.org/journals/robotics-and-ai/articles/10.3389/frobt.2025.1585544
DOI=10.3389/frobt.2025.1585544
ISSN=2296-9144
ABSTRACT=This paper presents a multi-robot collaborative manipulation framework, implemented in the Gazebo simulation environment, designed to enable mobile manipulators to execute autonomous tasks in dynamic, obstacle-dense environments. The system consists of multiple mobile robot platforms, each equipped with a robotic manipulator, a simulated RGB-D camera, and a 2D LiDAR sensor on the mobile base, facilitating task coordination, object detection, and advanced collision avoidance within a simulated warehouse setting. A leader-follower architecture governs collaboration, allowing teams to form dynamically for tasks that require combined effort, such as transporting heavy objects. Task allocation and control are achieved through a centralized control architecture in which the leader robot coordinates subordinate units based on high-level task assignments. The framework incorporates deep learning-based object detection (YOLOv2) to identify target objects using a simulated RGB-D camera mounted on the manipulator’s end-effector. Path planning is performed by a sampling-based algorithm integrated with LiDAR data to provide precise obstacle avoidance and localization. The planner also supports real-time rerouting for safe navigation when dynamic obstacles, such as humans or other agents, intersect planned paths. This functionality ensures uninterrupted task execution and enhances safety in shared human-robot spaces. High-level task scheduling and control transitions are managed using MATLAB and Stateflow logic, while ROS facilitates data communication between MATLAB, Simulink, and Gazebo. The multi-robot architecture is adaptable, allowing team size to be configured for collaborative tasks based on load requirements and environmental complexity. By combining deep learning-based visual processing with YOLOv2 object detection, the system efficiently identifies, picks, and transports objects to designated locations, demonstrating the scalability of the multi-robot framework for future applications in logistics automation, collaborative manufacturing, and dynamic human-robot interaction scenarios.
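
The sketch below is a minimal, hypothetical illustration of the leader-follower behaviour summarized in the abstract: a centralized leader sizes a team to the load and dispatches waypoints, and each follower reroutes when a dynamic obstacle intersects its planned path. It is not the authors' MATLAB/Stateflow implementation; all class names, the per-robot payload capacity, and the waypoint placeholders are assumptions introduced for illustration only.

```python
"""Illustrative sketch only: leader-follower task allocation with rerouting.
All names (Leader, Robot, Task, capacity_kg, waypoints) are hypothetical."""

import math
from dataclasses import dataclass, field
from enum import Enum, auto


class Mode(Enum):
    IDLE = auto()
    NAVIGATE = auto()
    REROUTE = auto()
    MANIPULATE = auto()


@dataclass
class Robot:
    name: str
    mode: Mode = Mode.IDLE
    path: list = field(default_factory=list)  # remaining waypoints


@dataclass
class Task:
    target: str
    load_kg: float


class Leader:
    """Centralized coordinator: forms a team sized to the load and assigns
    waypoints; followers only execute and report back."""

    def __init__(self, fleet, capacity_kg=5.0):  # assumed per-robot payload
        self.fleet = fleet
        self.capacity_kg = capacity_kg

    def allocate(self, task: Task):
        # Team size grows with load, mirroring dynamic team formation.
        needed = max(1, math.ceil(task.load_kg / self.capacity_kg))
        team = self.fleet[:needed]
        for robot in team:
            robot.mode = Mode.NAVIGATE
            robot.path = ["staging", task.target]  # placeholder waypoints
        return team


def step(robot: Robot, obstacle_on_path: bool):
    """One control tick: reroute on a blocked path, else advance or manipulate."""
    if robot.mode is Mode.NAVIGATE:
        if obstacle_on_path:
            robot.mode = Mode.REROUTE
            robot.path = ["detour"] + robot.path  # stand-in for a replanned path
        elif robot.path:
            robot.path.pop(0)  # advance one waypoint
        else:
            robot.mode = Mode.MANIPULATE  # arrived: begin pick-and-place
    elif robot.mode is Mode.REROUTE and not obstacle_on_path:
        robot.mode = Mode.NAVIGATE


if __name__ == "__main__":
    fleet = [Robot(f"robot_{i}") for i in range(4)]
    team = Leader(fleet).allocate(Task(target="shelf_B", load_kg=12.0))
    step(team[0], obstacle_on_path=True)  # first follower detects an obstacle
    print([(r.name, r.mode.name, r.path) for r in team])
```

In the paper's actual pipeline, the roles sketched here are played by Stateflow charts for mode transitions, a sampling-based planner fed by LiDAR for the replanning step, and ROS topics for the leader-follower messaging between MATLAB, Simulink, and Gazebo.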