Abstract
We propose a hybrid camera/sonar localization system for Autonomous Underwater Vehicles (AUVs) operating in and around Wave Energy Converters (WECs). With plans underway to deploy WECs as a new source of reliable and sustainable energy, AUVs play an important role in cost-effective infrastructure inspection and maintenance. However, challenges such as limited onboard energy and poor connectivity in the absence of a physical tether hinder AUV usefulness in real deployments. Underwater docking stations help alleviate these shortcomings by providing consistent communications and recharging in offshore environments. Without autonomous docking capabilities, however, AUV operation requires excessive human intervention.
We have previously worked to enable autonomous docking by developing a robust Model Predictive Control (MPC) architecture that consistently rendezvouses with a docking station. Our method achieved a success rate of over 70% during laboratory trials with an oscillating dock. To function, our MPC framework requires localization relative to the docking station, which during trials was provided by visual tracking fiducials. However, this method of localization is poorly suited to offshore use, where poor water conditions can dramatically degrade visual tracking accuracy.
Energetic, murky, and poorly lit environments are common where AUVs are expected to operate, making visual-only localization ill-suited to this application compared to sensors like sonar. Vision-based approaches suffer from inaccuracies due to suspended particulates, underwater blur, and color distortion, while sonar provides reliable range estimates. In optimal conditions, however, image data remains more informative than comparatively low-bandwidth sonar data. To provide accurate localization in all conditions, a solution must exploit both the high information content of image data and the consistency of sonar data.
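One simple way to realize this trade-off, shown here as an illustrative sketch rather than our actual estimator, is inverse-variance weighting of per-modality range estimates: as water conditions degrade, the camera's measurement variance grows and the fused estimate leans on sonar. All function and variable names below are hypothetical.

```python
def fuse_range(cam_range, cam_var, sonar_range, sonar_var):
    """Inverse-variance weighted fusion of two range estimates (meters).

    cam_var is assumed to grow with turbidity and poor lighting, while
    sonar_var stays roughly constant, so the fused estimate automatically
    favors sonar when visibility is poor and the camera when it is good.
    """
    w_cam = 1.0 / cam_var
    w_sonar = 1.0 / sonar_var
    fused = (w_cam * cam_range + w_sonar * sonar_range) / (w_cam + w_sonar)
    fused_var = 1.0 / (w_cam + w_sonar)  # fused estimate is never less certain
    return fused, fused_var
```

In clear water (small `cam_var`) the result tracks the camera; in turbid water the same code tracks sonar, with no explicit mode switch.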
To use both sensors for our task, we aim to implement a Simultaneous Localization and Mapping (SLAM) framework using both image and sonar data to enable online dock reconstruction, as well as an active planning algorithm to guide AUV exploration. When complete, the motion planner would actively control the vehicle to efficiently generate a 3D reconstruction of the dock using opti-acoustic SLAM. Then, once sufficiently dense, the reconstruction would be used alongside prior knowledge of dock parameters to provide relative localization.
The primary focuses of our work will be prioritizing multi-modal sensor readings based on their efficacy in current conditions and incorporating failure recovery into the active planner. Our solution should function across a wide range of conditions and dynamically adjust how sensor data is valued. Lastly, our system will allow recovery from failed or interrupted motions during exploration.
When complete, our framework will leverage opti-acoustic sensor fusion to provide active perception underwater and, combined with our docking planner, deliver a high-accuracy autonomous docking solution.