The navigation of an autonomous underwater vehicle (AUV) relies on the interaction of a variety of sensors. Side-scan sonar can collect underwater images that, after processing, yield semantic information about the underwater environment, which helps improve the AUV's capability for autonomous navigation. However, there is no practical method for exploiting the semantic information in side-scan sonar images. This paper proposes a new convolutional neural network model to solve this problem. The model follows a standard encoder-decoder structure: it first extracts multi-channel features from the input image and fuses them to reduce the parameter count and strengthen the weighting of feature channels; it then applies a larger convolution kernel to extract features from large-scale sonar images more effectively; finally, a parallel compensation link with a small-scale convolution kernel is added, and its output is concatenated in the decoding part with the features extracted by the large convolution kernel to obtain features at different scales. We evaluate the model on a self-collected sonar dataset, which has been published on GitHub. The experimental results show that accuracy (ACC) and mean intersection over union (MIoU) reach 0.87 and 0.71, better than other classical lightweight semantic segmentation networks. Furthermore, at 347.52 GFLOPs and roughly 13 M parameters, the network remains fast and portable. The resulting model can extract the semantic information of side-scan sonar images and assist AUV autonomous navigation and mapping.
Methods · 19 July 2022 · Dianyu Yang, 3 more and Feihu Zhang
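To make the described architecture concrete, the following is a minimal PyTorch sketch of such an encoder-decoder. It is not the authors' code: the 7×7 large-kernel path, the 3×3 compensation link, the 1×1 fusion convolution, the layer widths, and the class count are all illustrative assumptions.

```python
# Hypothetical sketch of the described encoder-decoder; not the paper's model.
# Kernel sizes, channel widths, and class count are assumptions for illustration.
import torch
import torch.nn as nn

class LargeKernelBlock(nn.Module):
    """Encoder block with a large kernel for wide sonar textures (assumed 7x7)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=7, padding=3),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.conv(x)

class SmallKernelBranch(nn.Module):
    """Parallel compensation link with a small kernel (assumed 3x3)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.conv(x)

class SonarSegNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        # Multi-channel feature extraction, fused by a 1x1 conv to cut parameters
        # and re-weight feature channels (a stand-in for the paper's fusion step).
        self.enc1 = LargeKernelBlock(1, 32)
        self.enc2 = LargeKernelBlock(32, 64)
        self.fuse = nn.Conv2d(64, 64, kernel_size=1)
        self.pool = nn.MaxPool2d(2)
        # Parallel small-kernel compensation link over the input image.
        self.comp = SmallKernelBranch(1, 32)
        # Decoder: upsample, then concatenate large- and small-kernel features.
        self.up = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.head = nn.Conv2d(32 + 32, n_classes, kernel_size=1)

    def forward(self, x):
        f = self.pool(self.fuse(self.enc2(self.enc1(x))))   # large-kernel path
        c = self.comp(x)                                     # small-kernel path
        d = self.up(f)
        d = nn.functional.interpolate(d, size=c.shape[-2:])  # align spatial dims
        return self.head(torch.cat([d, c], dim=1))           # splice both scales

logits = SonarSegNet()(torch.randn(1, 1, 256, 256))  # e.g., a 256x256 sonar tile
```

Concatenating the two paths in the decoder is what gives the head features at different scales: the large-kernel path supplies coarse context from the wide sonar textures, while the small-kernel compensation link preserves fine detail.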
Original Research · 01 June 2022 · Minglei Zhu, 3 more and Dawei Gong
In this paper, a parallel image-based visual servoing (IBVS)/force controller is developed to solve the interaction problem between a collaborative robot and its environment, so that the robot can track a position trajectory and a desired force at the same time. The control methodology is based on IBVS dynamic computed-torque control and couples the force-control feedback in parallel. Simulations are performed on a collaborative Delta robot, and two types of image features are tested to determine which is better suited to this parallel IBVS/force controller. The results demonstrate the efficiency of the controller.
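As a rough illustration of how such a parallel coupling can be written, the sketch below is hypothetical and not the authors' controller: the gains, the interaction matrix L, the robot Jacobian J, and the selection matrix S are all illustrative assumptions, and the force loop is simplified to a proportional term.

```python
# Hypothetical numpy sketch of a parallel IBVS/force control law; not the
# paper's controller. All gains, Jacobians, and the selection matrix S are
# illustrative assumptions.
import numpy as np

def parallel_ibvs_force(s, s_des, ds, L, J, M, h, f, f_des, Kp, Kd, Kf, S):
    """Computed-torque IBVS with a force loop coupled in parallel.

    s, s_des : current / desired image features
    ds       : image feature velocity
    L        : interaction matrix (camera twist -> feature rates)
    J        : robot Jacobian (joint rates -> camera twist)
    M, h     : joint-space inertia matrix and Coriolis/gravity vector
    f, f_des : measured / desired contact wrench
    Kp,Kd,Kf : PD gains on the image error, proportional gain on force error
    S        : selection matrix picking the force-controlled direction(s)
    """
    e = s_des - s
    # Vision loop: commanded camera motion from the image error (IBVS PD law),
    # mapped to joint space via the robot Jacobian pseudo-inverse.
    v_cam = np.linalg.pinv(L) @ (Kp @ e - Kd @ ds)
    ddq_vision = np.linalg.pinv(J) @ v_cam
    # Force loop, coupled in parallel: a proportional correction on the force
    # error, restricted by S to the constrained direction(s).
    ddq_force = np.linalg.pinv(J) @ (S @ (Kf @ (f_des - f)))
    # Computed torque: inverse dynamics of the combined commanded acceleration.
    return M @ (ddq_vision + ddq_force) + h
```

The defining property of the parallel scheme is that the force term acts on the subspace selected by S while the vision loop governs the remaining directions, letting the robot track the image trajectory and the desired contact force simultaneously.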