- Engineering Department, University of Campania “Luigi Vanvitelli”, Aversa, Italy
Introduction: The rapid advancement of collaborative robotics has driven significant interest in Human-Robot Interaction (HRI), particularly in scenarios where robots work alongside humans. This paper considers tasks where a human operator teaches the robot an operation that is then performed autonomously.
Methods: A multi-modal approach employing tactile fingers and proximity sensors is proposed, where tactile fingers serve as an interface, while proximity sensors enable end-effector movements through contactless interactions and collision avoidance algorithms. In addition, the system is modular to make it adaptable to different tasks.
Results: Demonstrative tests show the effectiveness of the proposed system and algorithms. The results illustrate how the tactile and proximity sensors can be used separately or in a combined way to achieve human-robot collaboration.
Discussion: The paper demonstrates the use of the proposed system for tasks involving the manipulation of electrical wires. Further studies will investigate how it behaves with objects of different shapes and in more complex tasks.
1 Introduction
In the rapidly evolving landscape of robotics, the collaboration between humans and robots has emerged as a pivotal research domain, marked by its potential to redefine the way we interact with robotic systems. One fundamental aspect of this collaboration involves the integration of human guidance mechanisms based on multi-modal sensing systems, empowering robots to learn and perform tasks through human interaction.
Thanks to the development of collaborative robots, known as “cobots”, many researchers have started to investigate innovative solutions and methodologies for human-robot interaction, also inspired by typical collaboration techniques among humans. Madan et al. (2015) studied haptic interaction patterns that are typically encountered during human-human cooperation, with the aim of simplifying the transfer of a collaborative task to a human-robot context. Kronander and Billard (2014), considering the Learning from Demonstration framework, present interfaces that allow a human teacher to indicate compliance variations by physically interacting with the robot during task execution. Singh et al. (2020) propose a collaborative dual-arm teleoperation setup, where one of the two arms acts as a controller and the other as a worker. The authors exploit the possibility of using joint torque commands for controlling the robotic arms and the integrated torque sensors at each joint actuator to transfer the motion from the controller to the worker, and the external forces sensed by the worker back to the controller. Another example of HRI is presented in (Grella et al., 2022), where the authors exploit contact information provided by distributed tactile sensors, acting as the physical communication interface between the control system and the human, to ensure the robot moves only when the operator intentionally decides to move it. In this scenario, human safety is an important element, as it contributes to efficient cooperation with collaborative robots in several application fields (Zacharaki et al., 2020; Chandrasekaran and Conrad, 2015). Moreover, the human factor is also the focus of Industry 5.0. In particular, human behaviour modeling in industrial HRI manufacturing is analyzed in (Jahanmahin et al., 2022). These works, while certainly not exhaustive, demonstrate the interest in realizing easy and safe collaboration between humans and robots. It is worth noting that in all of these examples there is a device specifically added to the system whose only purpose is to serve as an interface between the human operator and the robot.
Given the complexity of the tasks that robots are requested to perform and the unstructured environments where some of these tasks are carried out, robotic systems are equipped with several sensing systems, also depending on the type of interaction needed. Among these, tactile and proximity sensors play an increasingly important role. To mention only a few recent application examples, tactile sensors can be used for detecting the directionality of an external stimulus (Gutierrez and Santos, 2020), for grasp stabilization by predicting possible object slippage (Veiga et al., 2018), or for haptic exploration of unknown objects, by extracting information such as friction, center of mass, and inertia, to be exploited for robust in-hand manipulation (Solak and Jamone, 2023). A wide range of touch technologies involved in interaction has recently been surveyed in (Olugbade et al., 2023). Proximity sensors can be used for implementing safety algorithms like obstacle detection and/or avoidance. In (Cichosz and Gurocak, 2022), the authors use ultrasonic proximity sensors mounted on a robotic arm to achieve collision avoidance in a pick-and-place task with a human operator present in the same workcell. Other examples are detailed in (Moon et al., 2021; Liu et al., 2023), where capacitive proximity sensors are exploited for real-time planning of safe trajectories considering obstacles.
Having clarified that the HRI research field is wide and intricate from both a technological and a methodological point of view, this paper focuses on the development of a modular sensing system that can be used as an interface in collaboration tasks, where humans guide robots in executing specific operations, enabling a learning process aimed at autonomous execution, and also in non-collaborative tasks where sensing is required. Recently, combinations of different sensors have been increasingly used, through sensor fusion or machine learning techniques, to tackle more complex tasks. To give some examples, in (Govoni et al., 2023) tactile sensors and proximity sensors are both used in a wiring harness manipulation task, while in (Strese et al., 2017) the texture of objects is classified by considering information such as images, audio, friction forces, and acceleration. Fonseca et al. (2023) propose a flexible sensor based on piezoresistive and self-capacitance technology that can be applied to the robot links and used for hand guidance or collision avoidance. Similarly, Yim et al. (2024) present a sensor integrating electromagnetic and infrared Time-of-Flight technologies to enhance safety during physical Human-Robot Interactions. Also, especially when dealing with non-rigid objects, it is not uncommon to fuse heterogeneous data coming from multiple sensors (Nadon et al., 2018).
Following this direction, this paper proposes a multi-modal sensing system, constituted by a modular solution that can integrate different combinations of tactile and proximity sensors, together with a suitable methodology for exploiting these sensors during the execution of human-robot interaction tasks. From a technological point of view, the design has been optimized based on previous experiences of some of the authors on tactile sensors (Cirillo et al., 2021a) and proximity sensors (Cirillo et al., 2021b). From a methodological point of view, for human-robot collaboration, the tactile fingers can be used, by means of specifically defined indicators, as the interface for controlling the robot motion during the teaching phase, avoiding the addition of specific tools to the setup. Instead, the proximity sensors are exploited for implementing collision avoidance and/or a method for moving the end effector in a contactless fashion. Differently from similar systems, the proposed solution does not require the addition of a sensing skin to the robot (as in Fonseca et al. (2023)) or a specific tool integrating the sensors (as in Yim et al. (2024)), since all the sensing components are already included in the modular end effector. The developed sensing system extends the preliminary results presented in Laudante and Pirozzi (2023), increasing the degrees of freedom (DOFs) available for robot guidance to cover the 3-D space rather than only a plane. Additionally, the proposed modular solution presents a common mechanical base designed to guarantee easy integration with commercial parallel grippers. Some demonstrative tests are reported to show the validity of the proposed methodology.
The rest of the article is organized as follows. Section 2 (Materials and methods) first presents the proposed tools, detailing the characteristics of the embedded tactile sensors in Section 2.1 and of the proximity sensors in Section 2.2, and then the developed methodology for exploiting the sensors in Human-Robot Interaction in Section 2.3. Section 3 reports demonstrations that show the effectiveness of the presented technologies and methodology. Finally, Section 4 concludes the article by discussing possible future developments.
2 Materials and methods
2.1 Tactile finger
The designed finger contains a tactile sensor based on optoelectronic technology and is a modified version of the finger previously presented in (Cirillo et al., 2021a). The following text briefly describes the sensing technology and the designed mechanical components, which can be seen in the pictures in Figure 1a.

Figure 1. (a) CAD of the tactile finger showing the mechanical components. (b) Electronic board of the tactile finger. (c) FEA results for the tactile finger. (d) Samples of deformable pad and rigid grid.
2.1.1 Sensing technology
The tactile sensor is based on the sensing technology reported in (Cirillo et al., 2021a) but with a smaller form factor. The sensor has 12 sensing points, here called “taxels”, each constituted by a photo-reflector (NJL5908AR by New Japan Radio), i.e., the combination of a Light Emitting Diode (LED) and an optically matched phototransistor. The core part of the sensor is a Printed Circuit Board (PCB), where the 12 photo-reflectors are organized in six rows and two columns with a spatial resolution equal to
The photo-reflectors are positioned underneath a deformable pad, which transduces deformations into contact information. In fact, the light emitted by each LED is reflected by the bottom part of the deformable pad and reaches the corresponding phototransistor in an amount dependent on the local deformation of the pad. Voltage signals from the sensor can be obtained by interrogating the microcontroller via serial interface, and this is done using a ROS (Robot Operating System) node running on a computer connected to the sensor via USB.
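As an illustration of this acquisition chain, a minimal sketch of such a ROS node is given below. The serial port, baud rate, query command, and reply format (12 comma-separated voltages, one per taxel) are assumptions made only for this example and do not reflect the actual firmware protocol of the sensor.

```python
#!/usr/bin/env python
# Hypothetical acquisition node: the serial port, baud rate, query command and
# reply format (12 comma-separated voltages) are assumptions, not the actual
# firmware protocol used by the authors.
import rospy
import serial
from std_msgs.msg import Float32MultiArray

def main():
    rospy.init_node("tactile_finger_reader")
    port = rospy.get_param("~port", "/dev/ttyUSB0")   # assumed device name
    ser = serial.Serial(port, 115200, timeout=0.05)   # assumed baud rate
    pub = rospy.Publisher("tactile_voltages", Float32MultiArray, queue_size=1)
    rate = rospy.Rate(100)                            # assumed acquisition rate
    while not rospy.is_shutdown():
        ser.write(b"R\n")                             # assumed query command
        line = ser.readline().decode(errors="ignore").strip()
        values = line.split(",")
        if len(values) == 12:                         # one voltage per taxel (6x2 grid)
            pub.publish(Float32MultiArray(data=[float(v) for v in values]))
        rate.sleep()

if __name__ == "__main__":
    main()
```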
2.1.2 Mechanical components
All the mechanical parts constituting the finger are shown in Figure 1a: the deformable pad, a rigid grid, and a case divided into three pieces. These parts are described in the following.
The deformable pad is made of silicone (PRO-LASTIX by Prochima) with a hardness of 20 Shore A, chosen for its good elastic properties and low hysteresis. The bottom face of the pad presents parallelepiped-shaped cells (see Figure 1d) that are aligned with the optoelectronic components. The specific shape of these cells has been selected through a design optimization process, as reported in Laudante et al. (2023). The whole pad is black, while the ceiling of each cell is white. In this way, it is possible to achieve good reflection of the LED light and to avoid interference from nearby taxels or external light sources.
To ensure perfect alignment between the cells in the pad and the sensing points on the PCB, a rigid grid with protruding parts that fit into the cells is glued to the electronic board. Additionally, the thickness of the grid is such that it guarantees that the reflective surface, i.e., the ceiling of the cells, cannot reach a distance less than
Finally, the PCB described in the previous section is housed in a case consisting of three pieces. The one indicated as “bottom side” in Figure 1a, in addition to keeping the PCB in position, gives rigidity to the finger. In particular, during the design of this part, Finite Element Analysis (FEA) has been exploited to select appropriate dimensions to obtain a robust mechanical component, considering the maximum force applicable on the sensor pad before signal saturation, equal to
The rigid grid, the bottom side of the case, and the top side that covers the PCB are made of nylon PA12 and are 3D printed. The thickness of the layers is
Figure 1a contains another part, i.e., the central support, which has not been mentioned previously. This component makes the end effector modular. In fact, it can be mounted on parallel grippers, and different tools can be attached to its sides. For example, Figure 1a shows two tactile fingers attached to the support, while in Figure 2a there is a tactile finger on one side and a proximity sensor on the other. Since mechanical adapters can easily be 3D printed, different configurations can be created depending on the application requirements.
2.2 Proximity sensor
This section describes the proximity sensor, which is almost the same as the one reported in Cirillo et al. (2021b). However, in order to make the article self-contained, some details are reported here.
2.2.1 Sensing technology
The developed sensor is a self-contained board with a dedicated processing unit, capable of hosting up to 4 Time-of-Flight (ToF) modules, one of which is directly welded on the PCB. The CAD model of the sensor is shown in Figure 2a. The ToFs are based on the VL6180X chip (manufactured by STMicroelectronics) and, apart from the one welded on the main PCB, are embedded in Plug&Play modules whose dimensions are
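For illustration, a possible publisher for the distance readings of the board is sketched below. The serial framing (one line with up to four comma-separated distances in millimetres) and the topic names are assumptions made for the example; the real board firmware and interface may differ.

```python
#!/usr/bin/env python
# Hypothetical publisher for the ToF board: the serial framing (one line with up
# to four comma-separated distances in millimetres) is an assumption made for
# illustration; the real board firmware may use a different protocol.
import rospy
import serial
from sensor_msgs.msg import Range

def main():
    rospy.init_node("proximity_board_reader")
    ser = serial.Serial("/dev/ttyACM0", 115200, timeout=0.05)  # assumed port/baud
    pubs = [rospy.Publisher("tof_%d/range" % i, Range, queue_size=1) for i in range(4)]
    while not rospy.is_shutdown():
        line = ser.readline().decode(errors="ignore").strip()
        if not line:
            continue
        for i, field in enumerate(line.split(",")[:4]):
            msg = Range()
            msg.header.stamp = rospy.Time.now()
            msg.radiation_type = Range.INFRARED
            msg.min_range, msg.max_range = 0.0, 0.2   # VL6180X nominal range up to ~200 mm
            msg.range = float(field) / 1000.0          # mm -> m
            pubs[i].publish(msg)

if __name__ == "__main__":
    main()
```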
2.2.2 Mechanical design
A mechanical adapter to connect the proximity sensor to the central support has been designed and is shown in Figure 2a. This component is such that it can be fixed to the central support by means of two screws and can securely host the proximity board. Figure 2b shows a real sensor with three ToF modules: one is welded to the board (1) and two are connected through the available connectors, on the right (2) and on the left (3).
2.3 Human-robot interaction methodology
The objective is to demonstrate how it is possible to exploit the tactile and proximity sensors to allow human-robot interaction. As an example, in the considered application the tactile sensors are used to teach the robot the routing path for a wire, and the proximity sensor is used to avoid an obstacle or to follow the hand of an operator.
2.3.1 Teaching by demonstration through tactile data
The configuration of the modular tools for this use case is shown in Figure 3a, where the orientation of the reference frame related to the Tool Center Point is also shown (its origin is at the center of the four fingers). It consists of two pairs of tactile fingers, where each pair has an active finger, i.e., with the tactile sensor inside, and a passive finger, i.e., without the sensing components. It is worth mentioning that the only reason for not having all active fingers is that, at the time of writing, only two PCBs with tactile components are available. However, the following can be easily extended to a setup with four active tactile fingers. The two tactile sensors are exploited to compute tactile indicators that are used to move the robot according to the direction of the force/torque exerted by an operator on the grasped object (an electrical wire in the considered task) during the teaching phase. In fact, thanks to the presence of a grid of taxels and the asymmetry of the optoelectronic component constituting each taxel (see Laudante et al., 2023), it is possible to detect deformations due to the application of shear forces on the deformable pad by tracking the displacement of the centroid of the tactile map. The tactile indicators are derived from the one defined in Caccavale et al. (2023) for a single tactile sensor, which has been extended to combine data from different sensors. For each of the two tactile sensors, the quantity
where (
where (

Figure 3. (a) Gripper with two pairs of tactile fingers (blue = active; black = passive) and orientation of TCP reference frame. (b) Voltage signals naming convention and reference frame
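To make the centroid-based indicator described above more concrete, the snippet below gives one plausible form: a voltage-weighted centroid of the 6 x 2 taxel map and its displacement with respect to the centroid recorded at grasp time. It is an illustrative assumption, not the exact indicator defined in Caccavale et al. (2023), and the taxel pitch used here is a placeholder.

```python
import numpy as np

# Illustrative centroid-based contact indicator: a plausible form consistent with
# the description in the text (voltage-weighted centroid of the 6x2 tactile map),
# not the exact definition given in Caccavale et al. (2023).
ROWS, COLS = 6, 2
PITCH = 1.0   # taxel pitch in the sensor frame; placeholder value
Y, X = np.meshgrid(np.arange(ROWS) * PITCH, np.arange(COLS) * PITCH, indexing="ij")

def centroid(voltages, baseline):
    """Voltage-weighted centroid of the tactile map.

    voltages, baseline: arrays of 12 taxel readings (no-contact values in baseline).
    """
    w = np.abs(np.asarray(voltages) - np.asarray(baseline)).reshape(ROWS, COLS)
    if w.sum() < 1e-6:           # no significant contact detected
        return None
    return np.array([(X * w).sum(), (Y * w).sum()]) / w.sum()

def shear_indicator(voltages, baseline, grasp_centroid):
    """Displacement of the current centroid w.r.t. the centroid at grasp time."""
    c = centroid(voltages, baseline)
    return np.zeros(2) if c is None else c - grasp_centroid
```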
The use of these indicators allows us to have a total of four independent components coming from the two tactile sensors which are related to the force and torque that an operator applies to the grasped object. In the case where the grasped object is a thin electrical wire, it is possible to distinguish up to four different movements by suitably combining
The movement of the grasped object along the

Figure 4. Schematic representations of the contact indicator behaviour in different conditions: (a) force applied along
Instead, considering the example schematized in Figure 4b where a torque is applied about the
The value of the
When a force is exerted on the object along the positive
The
To accomplish the hand guidance phase, velocity commands for the
2.3.2 Interaction through proximity sensor data
The proximity sensor board can be exploited to accomplish, for example, collision avoidance or user-following tasks. In collision avoidance mode, the data acquired from the proximity sensors are used to detect an obstacle, e.g., a human operator working in the cell, so that the robot arm can retract and avoid the collision. In particular, having fixed a safety distance
where
In user-following mode, the objective is to control the linear velocity of the robot to keep the measured distance equal to a desired value
The parameter
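As a concrete illustration of these two modes, the sketch below implements simple proportional laws consistent with the textual description. The actual control laws, gains, and distance thresholds used in the experiments are not reproduced here, so all numerical values are placeholders.

```python
# Illustrative velocity laws for the two proximity-based modes. These are simple
# proportional rules consistent with the textual description; the exact control
# laws, gains, and distance thresholds used by the authors are not reproduced here.

def collision_avoidance_velocity(d, d_safe=0.10, k=2.0, v_max=0.05):
    """Retract along the sensing axis when an obstacle is closer than d_safe [m].
    The sign of the command depends on how the sensor is mounted on the gripper."""
    if d >= d_safe:
        return 0.0
    return -min(k * (d_safe - d), v_max)        # move away from the obstacle, saturated

def user_following_velocity(d, d_des=0.10, k=2.0, v_max=0.05):
    """Track a desired distance d_des [m] to the operator's hand."""
    v = k * (d - d_des)                          # move toward/away to keep d == d_des
    return max(-v_max, min(v_max, v))
```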
3 Results
A series of demonstrations has been carried out to validate the proposed methodology. The setup consists of a Universal Robots UR5e manipulator equipped with a Robotiq Hand-E gripper, on which the modular sensing system has been mounted. The first scenario aims to demonstrate the potential of the teaching-by-demonstration methodology during the manipulation of a wire. The second and third ones show the possibilities of human-robot interaction using the proximity sensor for collision avoidance and user-following tasks. The last use case combines the use of tactile and proximity sensors to guide the robot while avoiding obstacles.
3.1 Teaching by demonstration
The considered task consists of teaching the robot the path to follow for routing a wire. From the starting point where the wire is grasped, the operator guides the robot by exploiting the proposed indicators during the interaction with the grasped wire. At the end of the routing path, the wire is inserted into a clip. The trajectory followed during the teaching phase is saved so that it can then be re-executed autonomously by the robot. Task management requires a software architecture (reported in Figure 6a) with a ROS node that orchestrates the overall execution. In particular, after the wire grasping, the master node checks if a trajectory is available. If a trajectory exists, it is executed autonomously; otherwise, the hand-guidance mode starts. The flowchart in Figure 6b summarizes the task operations. Additionally, during autonomous trajectory execution, it is possible to check if the wire gets entangled by monitoring the same indicators used during the teaching. In particular, the robot motion is stopped if the indicator

Figure 6. (a) Software architecture for the teaching-by-demonstration use case. (b) Flowchart of the teaching-by-demonstration task.
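The following sketch summarizes the decision logic of the master node in Figure 6b. All helper methods, parameters, and thresholds are hypothetical placeholders standing in for the actual ROS services and actions of the implemented architecture; none of these names come from the paper.

```python
# Hypothetical sketch of the master node's decision logic (Figure 6b). The "task"
# object bundles placeholder callbacks (grasp, load/save trajectory, hand-guidance
# step, entanglement indicator) standing in for the actual ROS services/actions of
# the authors' architecture.
def routing_task(task, robot, entangle_threshold):
    task.grasp_wire()
    trajectory = task.load_trajectory()          # None if no trajectory is stored
    if trajectory is None:
        # Teaching phase: the operator guides the robot through the routing path.
        trajectory = []
        while not task.teaching_finished():
            v = task.hand_guidance_step()        # velocities from tactile indicators
            robot.send_velocity(v)
            trajectory.append(robot.current_pose())
        task.save_trajectory(trajectory)
    else:
        # Autonomous execution, monitoring the same indicator for wire entanglement.
        for pose in trajectory:
            if abs(task.entangle_indicator()) > entangle_threshold:
                robot.stop()                     # wire likely entangled: abort motion
                break
            robot.move_to(pose)
```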
As already discussed, the saturated dead-zone functions have to be tuned before computing velocities from indicators. In detail, the following parameters have been chosen for the realized application:
1. for
2. for
3. for
4. for
As mentioned, it is possible to adapt the function parameters to the experiments by taking into account the characteristics of the sensor used, e.g., signal noise and sensitivity, which are typically sensor-dependent. In our case, for example, the values reported above take into account that the sensitivity of the indicator in the
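For clarity, a minimal sketch of a saturated dead-zone mapping from an indicator to a velocity command is reported below. The threshold, saturation, and maximum-velocity values are arbitrary placeholders, since the actual values are tuned per axis as discussed above.

```python
# Minimal sketch of a saturated dead-zone mapping from a tactile indicator to a
# velocity command. The threshold, saturation, and gain values below are arbitrary
# placeholders: the actual values used in the experiments are tuned per axis
# according to the sensitivity and noise of each indicator.
def saturated_dead_zone(x, dead_zone, saturation, v_max):
    """Return 0 inside the dead-zone, +/-v_max beyond saturation, linear in between."""
    mag = abs(x)
    if mag <= dead_zone:
        return 0.0
    if mag >= saturation:
        return v_max if x > 0 else -v_max
    scale = (mag - dead_zone) / (saturation - dead_zone)
    return scale * v_max if x > 0 else -scale * v_max

# Example: map the indicator along one axis to a linear velocity command.
v_x = saturated_dead_zone(0.7, dead_zone=0.2, saturation=1.0, v_max=0.02)  # ~0.0125 m/s
```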
In order to show the effects of the chosen indicator-velocity functions, two examples (one for the linear velocity and one for the angular velocity) are reported. Figure 7a reports the indicator
The whole routing task has been executed as described in the following. At the beginning, no trajectory is saved in the database, so the robot is set in hand-guidance mode. Hence, the operator guides the robot through the desired path by exerting forces or torques on the wire grasped by the modular sensors. Figure 8 shows the values of the indicators and the corresponding velocities during the whole experiment, which can be divided into six intervals. From the starting point (Figure 9a), the robot is moved in the

Figure 9. Operator guiding the robot by acting on the grasped electrical wire. From the starting point (a), the operator moves the robot along the
3.2 Collision avoidance and user-following
A demonstration with an operator who approaches the gripper with his hand has been implemented to show the potential of the proposed collision avoidance methodology. Figure 11 reports the distance data acquired from the three proximity sensor modules and the corresponding velocities with respect to

Figure 12. Collision avoidance involving two proximity sensor modules. The operator simulates obstacles on two sides of the end effector and the robot reacts by moving away from the operator’s hands.
A similar test has been carried out to show the capability of the user-following modality. As for the previous experiment, Figure 13 shows the distances
3.3 Collision avoidance during hand guidance
The last test intends to demonstrate how it is possible to combine the use of the interaction indicators computed from tactile data with the proximity sensor data, implementing more complex tasks. In detail, an operator can interact with the robot through the grasped wire and, at the same time, the robot can avoid collisions. In this case, the modular sensors mounted on the gripper consist of a pair of fingers for grasping the wire and a proximity sensor. The wire is pulled along the
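A simple way of combining the two velocity contributions, consistent with the behaviour described above, is sketched below. The exact combination rule used in the experiments may differ, so this is only an illustration with placeholder values.

```python
# One simple way to combine the two velocity contributions during hand guidance
# with obstacle avoidance: superimpose them and saturate the result. The exact
# combination rule used in the experiments may differ; this is only an illustration.
def combined_velocity(v_guidance, v_avoidance, v_max=0.05):
    v = v_guidance + v_avoidance
    return max(-v_max, min(v_max, v))

# e.g., the operator pulls the wire (positive guidance velocity) while an obstacle
# on the same side commands a retreat (negative avoidance velocity).
v_cmd = combined_velocity(v_guidance=0.02, v_avoidance=-0.03)   # -> -0.01 m/s
```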

Figure 14. Hand guidance and obstacle avoidance. An operator guides the robot by pulling the grasped wire while another operator simulates an obstacle near the end effector. The robot reacts by following the operator’s guidance while moving away from the obstacle.

Figure 15. Measured distance, indicator, and velocities during the obstacle avoidance in hand guidance experiment.
4 Discussion
This paper proposed a robotic system, considering both hardware and software technologies, that can be used in tasks where human-robot interaction is required. Regarding hardware, a modular system that can be mounted on commercial parallel grippers has been developed, and, for the tasks considered, it has been equipped with tactile and/or proximity sensors. Differently from previous works that proposed systems integrating tactile and proximity sensors for safe HRI applications, the proposed hardware can be directly mounted on the robot’s parallel gripper and does not require additional tools/skins on/near the end effector. In addition to the hardware, the methodology for using the aforementioned sensors has been detailed. For instance, tactile sensors have been exploited as a human-robot interface to implement a teaching-by-demonstration application in which an operator guides a robotic arm through the routing path for an electrical cable. Finally, proximity sensors have been used both as a safety system, implementing a collision avoidance mechanism, and as an interface for moving the robot end effector in a contactless fashion. Demonstrations for all the proposed algorithms have been presented, showing their effectiveness. While the use cases showcased in this paper only consider the manipulation of electrical wires, we expect that the proposed algorithms can be used with objects of different shapes. This aspect will be evaluated in future studies. In addition, with regard to the modularity of the developed system, there are several possible future developments. In fact, by changing the sensors or tools on the end effector, the system can be adapted to the specific application. One example could be the integration of cameras and the implementation of computer vision algorithms to recognize and locate the object(s) of interest in the scene, providing even more autonomy to the robot when needed.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors upon request, without undue reservation.
Author contributions
GL: Writing – original draft, Writing – review and editing. MM: Writing – original draft, Writing – review and editing. OP: Writing – original draft, Writing – review and editing. SP: Writing – original draft, Writing – review and editing.
Funding
The author(s) declare that financial support was received for the research and/or publication of this article. This work was partially supported by the European Commission under the Horizon Europe research grant IntelliMan, project ID: 101070136.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declare that no Generative AI was used in the creation of this manuscript.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Supplementary material
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/frobt.2025.1581154/full#supplementary-material
References
Caccavale, R., Finzi, A., Laudante, G., Natale, C., Pirozzi, S., and Villani, L. (2023). Manipulation of boltlike fasteners through fingertip tactile perception in robotic assembly. IEEE/ASME Trans. Mechatronics 29, 820–831. doi:10.1109/TMECH.2023.3320519
Chandrasekaran, B., and Conrad, J. M. (2015). “Human-robot collaboration: a survey,” in SoutheastCon 2015, 1–8. doi:10.1109/SECON.2015.7132964
Cichosz, C., and Gurocak, H. (2022). “Collision avoidance in human-cobot work cell using proximity sensors and modified bug algorithm,” in 2022 10th international conference on control, mechatronics and automation (ICCMA), 53–59. doi:10.1109/ICCMA56665.2022.10011601
Cirillo, A., Costanzo, M., Laudante, G., and Pirozzi, S. (2021a). Tactile sensors for parallel grippers: design and characterization. Sensors 21, 1915. doi:10.3390/s21051915
Cirillo, A., Laudante, G., and Pirozzi, S. (2021b). Proximity sensor for thin wire recognition and manipulation. Machines 9, 188. doi:10.3390/machines9090188
Fonseca, D., Safeea, M., and Neto, P. (2023). A flexible piezoresistive/self-capacitive hybrid force and proximity sensor to interface collaborative robots. IEEE Trans. Industrial Inf. 19, 2485–2495. doi:10.1109/TII.2022.3174708
Govoni, A., Laudante, G., Mirto, M., Natale, C., and Pirozzi, S. (2023). “Towards the automation of wire harness manufacturing: a robotic manipulator with sensorized fingers,” in 2023 9th international conference on control, decision and information technologies (CoDIT), 974–979. doi:10.1109/CoDIT58514.2023.10284109
Grella, F., Albini, A., and Cannata, G. (2022). “Voluntary interaction detection for safe human-robot collaboration,” in 2022 sixth IEEE international conference on robotic computing (IRC), 353–359. doi:10.1109/IRC55401.2022.00069
Gutierrez, K., and Santos, V. J. (2020). Perception of tactile directionality via artificial fingerpad deformation and convolutional neural networks. IEEE Trans. Haptics 13, 831–839. doi:10.1109/TOH.2020.2975555
Jahanmahin, R., Masoud, S., Rickli, J., and Djuric, A. (2022). Human-robot interactions in manufacturing: a survey of human behavior modeling. Robotics Computer-Integrated Manuf. 78, 102404. doi:10.1016/j.rcim.2022.102404
Kronander, K., and Billard, A. (2014). Learning compliant manipulation through kinesthetic and tactile human-robot interaction. IEEE Trans. Haptics 7, 367–380. doi:10.1109/TOH.2013.54
Laudante, G., Pennacchio, O., and Pirozzi, S. (2023). “Multiphysics simulation for the optimization of an optoelectronic-based tactile sensor,” in Proceedings of the 20th international conference on informatics in control, automation and robotics - volume 2: ICINCO, 101–110. doi:10.5220/0012166900003543
Laudante, G., and Pirozzi, S. (2023). “An intelligent system for human intent and environment detection through tactile data,” in 2022 advances in system-integrated intelligence (SysInt), 497–506. doi:10.1007/978-3-031-16281-7_47
Liu, Z., Chen, D., Ma, J., Jia, D., Wang, T., and Su, Q. (2023). A self-capacitance proximity e-skin with long-range sensibility for robotic arm environment perception. IEEE Sensors J. 23, 14854–14863. doi:10.1109/JSEN.2023.3275206
Madan, C. E., Kucukyilmaz, A., Sezgin, T. M., and Basdogan, C. (2015). Recognition of haptic interaction patterns in dyadic joint object manipulation. IEEE Trans. Haptics 8, 54–66. doi:10.1109/TOH.2014.2384049
Moon, S. J., Kim, J., Yim, H., Kim, Y., and Choi, H. R. (2021). Real-time obstacle avoidance using dual-type proximity sensor for safe human-robot interaction. IEEE Robotics Automation Lett. 6, 8021–8028. doi:10.1109/LRA.2021.3102318
Nadon, F., Valencia, A. J., and Payeur, P. (2018). Multi-modal sensing and robotic manipulation of non-rigid objects: a survey. Robotics 7, 74. doi:10.3390/robotics7040074
Olugbade, T., He, L., Maiolino, P., Heylen, D., and Bianchi-Berthouze, N. (2023). Touch technology in affective human–, robot–, and virtual–human interactions: a survey. Proc. IEEE 111, 1333–1354. doi:10.1109/JPROC.2023.3272780
Singh, J., Srinivasan, A. R., Neumann, G., and Kucukyilmaz, A. (2020). Haptic-guided teleoperation of a 7-dof collaborative robot arm with an identical twin master. IEEE Trans. Haptics 13, 246–252. doi:10.1109/TOH.2020.2971485
Solak, G., and Jamone, L. (2023). Haptic exploration of unknown objects for robust in-hand manipulation. IEEE Trans. Haptics 16, 400–411. doi:10.1109/TOH.2023.3300439
Strese, M., Schuwerk, C., Iepure, A., and Steinbach, E. (2017). Multimodal feature-based surface material classification. IEEE Trans. Haptics 10, 226–239. doi:10.1109/TOH.2016.2625787
Veiga, F., Peters, J., and Hermans, T. (2018). Grip stabilization of novel objects using slip prediction. IEEE Trans. Haptics 11, 531–542. doi:10.1109/TOH.2018.2837744
Yim, H., Kang, H., Nguyen, T. D., and Choi, H. R. (2024). Electromagnetic field and tof sensor fusion for advanced perceptual capability of robots. IEEE Robotics Automation Lett. 9, 4846–4853. doi:10.1109/LRA.2024.3386455
Keywords: human-robot interaction, human-robot collaboration, tactile sensor, proximity sensor, multi-modal, modular
Citation: Laudante G, Mirto M, Pennacchio O and Pirozzi S (2025) A multi-modal sensing system for human-robot interaction through tactile and proximity data. Front. Robot. AI 12:1581154. doi: 10.3389/frobt.2025.1581154
Received: 21 February 2025; Accepted: 15 May 2025;
Published: 10 June 2025.
Edited by:
Rongxin Cui, Northwestern Polytechnical University, China
Reviewed by:
Sylvain Bouchigny, Commissariat à l'Energie Atomique et aux Energies Alternatives (CEA), France
Jonathas Henrique Mariano Pereira, Federal Institute of São Paulo, Brazil
Copyright © 2025 Laudante, Mirto, Pennacchio and Pirozzi. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Gianluca Laudante, gianluca.laudante@unicampania.it