In recent decades, the path has been paved for complementary approaches to the supervision and control of robot systems that blend automatic control and computer science. The resulting methods are not limited to executing actions at a low level of abstraction, such as controlling the robot actuators and safely navigating indoor and outdoor environments; they also address higher-level decision-making, such as allocating tasks within a multi-robot team, planning each task, and executing the resulting plan.
This novel paradigm has been pursued through different means, such as addressing task planning and execution with discrete-event systems (DES), or combining DES with continuous-state, time-driven systems in hybrid systems to solve task and motion planning problems. Logic-based specifications for such planners have been introduced using different classes of temporal logic (TL). Even more recently, the control community has turned to machine learning (ML), either exploiting reinforcement learning (RL) methods or revisiting the use of neural networks for nonlinear control.
The goal of this Research Topic is to showcase recent work on the supervision, control, and learning of robot tasks that blends automatic control and computer science approaches, leading to exciting progress on supervision (limiting the possible system trajectories), control (steering the system dynamics by choosing among the possible trajectories), and learning (e.g., learning the dynamic models of complex systems, or model-free control) in robot systems, including cooperative multi-robot systems.
In particular, we are interested in problems and solutions where task-level decisions play a key role in modelling and solving the problem at hand, and where evaluation is carried out through actual experiments on real robots in real or realistic applications.
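To fix intuition for the distinction drawn above between supervision (limiting the possible trajectories) and control (choosing among them), the following minimal sketch, written purely for illustration and not taken from any submitted work, shows a toy discrete-event "plant" for a pick-and-place robot whose controllable events a supervisor selectively disables; all state names, events, and the safety specification are assumptions made up for this example.

```python
# Minimal illustrative sketch (assumptions only): a discrete-event plant
# whose trajectories a supervisor restricts by disabling controllable events.

# Plant transition relation: (state, event) -> next state.
PLANT = {
    ("idle", "pick"): "holding",
    ("holding", "move"): "at_goal",
    ("at_goal", "drop"): "idle",
    ("holding", "drop"): "idle",   # dropping early is physically possible
}

# Events the supervisor is allowed to disable.
CONTROLLABLE = {"pick", "move", "drop"}

def supervisor_allows(state, event):
    """Hypothetical safety specification: never drop unless at the goal."""
    return not (event == "drop" and state != "at_goal")

def enabled_events(state):
    """Events the plant can execute AND the supervisor permits,
    i.e. the restricted set of possible trajectories from this state."""
    return [e for (s, e) in PLANT
            if s == state and (e not in CONTROLLABLE or supervisor_allows(state, e))]

print(enabled_events("holding"))   # ['move'] -- 'drop' is disabled here
```

In this reading, the supervisor only prunes the set of enabled events, while a controller would additionally pick which of the remaining events to execute and when.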
Themes of interest include, but are not limited to:
• Supervision of Robot Tasks Represented as Discrete Event Systems
• Robot Task Control Using Markov Decision Processes
• Robot Task Control Using Partially Observable Markov Decision Processes
• Robot Control Using (Deep) Reinforcement Learning
• Temporal Logic Planning of Robot Tasks
• Robot System Control Using Hybrid Systems
• Learning Model Predictive Control for Robot Systems
• Robot Task Plan Analysis Using Petri Nets
• Robot Task Planning Using Petri Nets
• Robot Task Execution Using High-Level Formalisms
Supplementary videos showing real robots performing tasks in real environments would be highly appreciated.