New Advances at the Intersection of Brain-Inspired Learning and Deep Learning in Autonomous Vehicles and Robotics, Volume II

21.1K views · 23 authors · 5 articles · 4 editors
Methods · 19 July 2022
Side-Scan Sonar Image Segmentation Based on Multi-Channel CNN for AUV Navigation
Dianyu Yang, 3 more, and Feihu Zhang

Navigation for an AUV (Autonomous Underwater Vehicle) relies on the interaction of a variety of sensors. Side-scan sonar can collect underwater images that, after processing, yield semantic information about the underwater environment, which helps improve the AUV's autonomous navigation capability. However, there has been no practical method for exploiting the semantic information in side-scan sonar images. This paper proposes a new convolutional neural network model to solve that problem. The model is a standard encoder-decoder structure that extracts multi-channel features from the input image and then fuses them, reducing the parameter count and strengthening the weights of the feature channels. A larger convolution kernel is then used to extract features from large-scale sonar images more effectively. Finally, a parallel compensation branch with a small-scale convolution kernel is added and spliced, in the decoding part, with the features extracted by the large kernel, yielding features at different scales. We use this model to conduct experiments on a self-collected sonar dataset, which has been uploaded to GitHub. The experimental results show that ACC and MIoU reach 0.87 and 0.71, better than other classical small-scale semantic segmentation networks. Furthermore, 347.52 GFLOPs and roughly 13 M parameters ensure the network's computing speed and portability. The result can extract the semantic information of side-scan sonar images and assist with AUV autonomous navigation and mapping.
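The multi-scale idea described in the abstract — a large-kernel path for coarse sonar structure spliced with a small-kernel compensation path — can be illustrated with a minimal sketch. This is not the authors' network; it is a pure-Python toy (the `conv2d` and `multiscale_features` helpers are hypothetical, and fixed 5×5 and 3×3 mean filters stand in for learned kernels) showing how two kernel sizes produce feature maps that are spatially aligned and then concatenated channel-wise:

```python
def conv2d(img, kernel):
    """Valid 2-D convolution (no padding), pure Python."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(img), len(img[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            s = 0.0
            for a in range(kh):
                for b in range(kw):
                    s += img[i + a][j + b] * kernel[a][b]
            row.append(s)
        out.append(row)
    return out


def multiscale_features(img):
    """Two parallel branches: a large 5x5 kernel for coarse structure
    and a small 3x3 'compensation' kernel for fine detail.  The small
    branch is center-cropped to the large branch's size so the two
    maps can be concatenated channel-wise, mimicking the decoder-side
    splice described in the abstract."""
    big = [[1.0 / 25] * 5 for _ in range(5)]    # 5x5 mean filter
    small = [[1.0 / 9] * 3 for _ in range(3)]   # 3x3 mean filter
    f_big = conv2d(img, big)
    f_small = conv2d(img, small)
    # crop the small-kernel map so both maps align spatially
    crop = (len(f_small) - len(f_big)) // 2
    f_small = [row[crop:crop + len(f_big[0])]
               for row in f_small[crop:crop + len(f_big)]]
    return [f_big, f_small]  # two feature "channels"
```

In a real network the concatenated channels would feed further learned layers; the point here is only the shape bookkeeping: an H×W input yields a 5×5-branch map of (H−4)×(W−4) and a cropped 3×3-branch map of the same size.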

5,039 views · 19 citations
Original Research · 01 June 2022
Parallel Image-Based Visual Servoing/Force Control of a Collaborative Delta Robot
Minglei Zhu, 3 more, and Dawei Gong

In this paper, a parallel image-based visual servoing/force controller is developed to solve the interaction problem between a collaborative robot and its environment, so that the robot can track a position trajectory and a desired force at the same time. The control methodology is based on image-based visual servoing (IBVS) dynamic computed-torque control and couples force-control feedback in parallel. Simulations are performed on a collaborative Delta robot, and two types of image features are tested to determine which is better suited to this parallel IBVS/force controller. The results show the efficiency of the controller.
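The parallel position/force idea — tracking a trajectory while regulating contact force along the same direction — can be sketched in one dimension. This is not the paper's IBVS computed-torque controller; it is a hypothetical toy in which a proportional position term (standing in for the visual-servoing loop) and a proportional force term are simply summed into a single velocity command against a stiff surface:

```python
def parallel_control_step(x, x_d, f, f_d, kp=2.0, kf=0.5):
    """One step of a (hypothetical) 1-DOF parallel controller:
    a position-tracking term plus a force-feedback term acting
    along the same axis, summed into one velocity command."""
    v_pos = kp * (x_d - x)       # position term (visual-servoing stand-in)
    v_force = kf * (f_d - f)     # parallel force-feedback term
    return v_pos + v_force


def simulate(steps=200, dt=0.001):
    """Tool approaches a stiff surface at x = 0 (stiffness k_env),
    with desired position at the surface and desired force 5 N.
    Returns final position and contact force."""
    k_env = 1000.0               # surface stiffness, N/m (assumed)
    x, x_d, f_d = -0.02, 0.0, 5.0
    for _ in range(steps):
        f = max(0.0, k_env * x)  # contact force once past the surface
        v = parallel_control_step(x, x_d, f, f_d)
        x += v * dt              # simple velocity-controlled axis
    return x, max(0.0, k_env * x)
```

With these gains the force settles slightly below the 5 N setpoint, because the position term keeps pulling the tool back toward the surface; real parallel schemes typically give the force loop integral action so that it dominates at steady state.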

3,769 views · 14 citations