Underwater Visual Localization Using Machine Learning and LSTM: Datasets | HackerNoon
Briefly

To train and test our model, we used one dataset from an underwater simulator and two datasets from a tank. Augmenting the left-camera data with right-camera data significantly improved performance (a minimal sketch of this pooling appears below).
In the simulator dataset, the ROV performed spiral-motion inspections of a vertical pipe, covering a spatial extent of 2x4x2 m. The tank datasets followed a lawnmower path with translation and rotation maneuvers at five points.
The first tank dataset comprised 3,437 samples with minimal rotations, while the second comprised 4,977 samples focused on rotation maneuvers. The total spatial extent of the tank datasets was 0.4x0.6x0.2 m.
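To make the stereo augmentation idea concrete, here is a minimal, hypothetical sketch that pools left- and right-camera samples into one training set. The file names, CSV columns, and pose fields are assumptions for illustration only, not the original pipeline.

```python
# Hypothetical sketch: augmenting left-camera training data with right-camera samples.
# File names and column names (image_path, x, y, z, roll, pitch, yaw) are assumptions.
import csv
from pathlib import Path

def load_samples(csv_path, camera):
    """Read (image_path, pose) rows from a per-camera index file."""
    samples = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            samples.append({
                "camera": camera,
                "image_path": Path(row["image_path"]),
                # Pose stored as position plus orientation columns.
                "pose": [float(row[k]) for k in ("x", "y", "z", "roll", "pitch", "yaw")],
            })
    return samples

# Pool both cameras into a single, larger training set.
train_samples = (load_samples("left_camera.csv", camera="left")
                 + load_samples("right_camera.csv", camera="right"))
print(f"Combined training set: {len(train_samples)} samples")
```

Because the right camera sees the scene from a slightly shifted viewpoint, pooling its frames roughly doubles the training data without extra data collection, which is one plausible reason the augmentation helps.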