Fish Detection AI, Optic and Sonar-trained Object Detection Models
The Fish Detection AI project aims to improve the efficiency of fish monitoring around marine energy facilities in order to meet regulatory requirements. Despite advances in computer vision, little attention has been paid to sonar imagery, to identifying small fish in unlabeled data, or to underwater fish-monitoring methods for marine energy.
A YOLO (You Only Look Once) computer vision model was developed using the EyeSea dataset (optical) and sonar images from the Alaska Department of Fish and Game to identify fish in underwater environments. The YOLO models were trained in a supervised fashion on labeled fish imagery. The trained models were then applied to unseen datasets, with the aim of reducing the need to label data and train new models for each location. Additionally, hyper-image analysis and various image preprocessing methods were explored to enhance fish detection.
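For orientation, here is a minimal supervised-training sketch using the Ultralytics package (the YOLO implementation linked below). The dataset config name is a placeholder for illustration, not a file in this submission; the submission's own training scripts are in the experiment zip files.

    from ultralytics import YOLO

    # Start from COCO-pretrained YOLOv8 medium weights (the model size
    # reported below) and fine-tune on labeled fish images.
    model = YOLO("yolov8m.pt")

    # "fish_data.yaml" is a hypothetical dataset config in the standard
    # Ultralytics format, listing train/val image folders and class names.
    model.train(data="fish_data.yaml", epochs=100, imgsz=640)

    # Evaluate on the validation split; box.map50 is mAP at IoU 0.5.
    metrics = model.val()
    print(metrics.box.map50)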
In this research we achieved:
1. Enhanced YOLO performance compared to a published article (Xu & Matzner 2018) that used earlier YOLO versions for fish detection. Specifically, we achieved a best mean Average Precision (mAP) of 0.68 on the EyeSea optical dataset using YOLOv8 (medium-sized model), surpassing the YOLOv3 benchmarks reported in that article. We further demonstrated up to 0.65 mAP on unseen sonar domains by leveraging a hyper-image approach (stacking consecutive frames), showing promising cross-domain adaptability.
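To illustrate the hyper-image idea, here is a minimal sketch (assuming OpenCV and NumPy; file names are placeholders) that stacks three consecutive grayscale sonar frames into one 3-channel image, so that frame-to-frame motion becomes a cue an unmodified YOLO pipeline can pick up:

    import cv2
    import numpy as np

    # Hypothetical paths to three consecutive, same-sized sonar frames.
    paths = ["frame_000.png", "frame_001.png", "frame_002.png"]
    frames = [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in paths]

    # Stack along the channel axis: the result has the shape of an
    # ordinary RGB image, but its "colors" now encode motion across
    # consecutive frames.
    hyper_image = np.stack(frames, axis=-1)  # shape (H, W, 3)
    cv2.imwrite("hyper_000.png", hyper_image)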
This data submission includes:
- The best-performing trained YOLO model weights (PyTorch .pt files), which can be loaded directly for object detection; a minimal usage sketch follows this list. These are found in the Yolo_models_downloaded zip file
- A documentation file ("Yolo_Object_Detection_How_To_Document.docx") explaining the upload and the goals of each of experiments 1-5
- Code files: five sub-folders of Python, shell, and YAML files used to run experiments 1-5, plus a separate folder for YOLO models. Each is packaged in its own zip file, named after the corresponding experiment
- Sample data structures (sample1 and sample2, each in its own zip file) showing how the raw data should be organized after running our provided code on the downloaded data
- A link to the article we replicated (Xu & Matzner 2018)
- A link to the YOLO documentation site from the model's original creators (Ultralytics)
- A link to the downloadable EyeSea dataset from PNNL (instructions for downloading and formatting the data to replicate these experiments are in the How To Word document)
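As referenced above, a minimal sketch for applying the provided weights with the Ultralytics package; the exact weight and image file names are placeholders for files extracted from the zip archives:

    from ultralytics import YOLO

    # Load one of the provided best-performing weight files; the actual
    # file name inside Yolo_models_downloaded will differ.
    model = YOLO("Yolo_models_downloaded/best.pt")

    # Run detection on an underwater frame; conf is the minimum
    # confidence for a reported fish bounding box.
    results = model.predict("example_frame.png", conf=0.25)
    for r in results:
        print(r.boxes.xyxy, r.boxes.conf)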
Citation Formats
TY - DATA
AB - The Fish Detection AI project aims to improve the efficiency of fish monitoring around marine energy facilities in order to meet regulatory requirements. Despite advances in computer vision, little attention has been paid to sonar imagery, to identifying small fish in unlabeled data, or to underwater fish-monitoring methods for marine energy.
A YOLO (You Only Look Once) computer vision model was developed using the EyeSea dataset (optical) and sonar images from the Alaska Department of Fish and Game to identify fish in underwater environments. The YOLO models were trained in a supervised fashion on labeled fish imagery. The trained models were then applied to unseen datasets, with the aim of reducing the need to label data and train new models for each location. Additionally, hyper-image analysis and various image preprocessing methods were explored to enhance fish detection.
In this research we achieved:
1. Enhanced YOLO performance compared to a published article (Xu & Matzner 2018) that used earlier YOLO versions for fish detection. Specifically, we achieved a best mean Average Precision (mAP) of 0.68 on the EyeSea optical dataset using YOLOv8 (medium-sized model), surpassing the YOLOv3 benchmarks reported in that article. We further demonstrated up to 0.65 mAP on unseen sonar domains by leveraging a hyper-image approach (stacking consecutive frames), showing promising cross-domain adaptability.
This data submission includes:
- The best-performing trained YOLO model weights (PyTorch .pt files), which can be loaded directly for object detection. These are found in the Yolo_models_downloaded zip file
- A documentation file ("Yolo_Object_Detection_How_To_Document.docx") explaining the upload and the goals of each of experiments 1-5
- Code files: five sub-folders of Python, shell, and YAML files used to run experiments 1-5, plus a separate folder for YOLO models. Each is packaged in its own zip file, named after the corresponding experiment
- Sample data structures (sample1 and sample2, each in its own zip file) showing how the raw data should be organized after running our provided code on the downloaded data
- A link to the article we replicated (Xu & Matzner 2018)
- A link to the YOLO documentation site from the model's original creators (Ultralytics)
- A link to the downloadable EyeSea dataset from PNNL (instructions for downloading and formatting the data to replicate these experiments are in the How To Word document)
AU - Slater, Katherine
A2 - Yoder, Delano
A3 - Noyes, Carlos
A4 - Scott, Brett
DB - Open Energy Data Initiative (OEDI)
DP - Open EI | National Renewable Energy Laboratory
DO -
KW - MHK
KW - Marine
KW - Hydrokinetic
KW - energy
KW - power
KW - AI
KW - YOLO model
KW - object detection
KW - you only look once model
KW - neural networks
KW - EyeSea dataset
KW - Fish Detection AI
KW - Eyesea
KW - small fish detection
KW - YOLO version 8
KW - YOLOv8
KW - PyTorch
KW - code
KW - PyTorch code
KW - Python
KW - Yaml code
KW - Shell code
KW - Sonar-trained Object Detection Models
KW - YOLO performance
KW - hyper-image approach
KW - cross-domain adaptability
KW - Eyesea optical dataset
LA - English
DA - 2014/06/25
PY - 2014
PB - Water Power Technologies Office
T1 - Fish Detection AI, Optic and Sonar-trained Object Detection Models
UR - https://data.openei.org/submissions/8419
ER -
Slater, Katherine, et al. Fish Detection AI, Optic and Sonar-trained Object Detection Models. Water Power Technologies Office, 25 June 2014, MHKDR. https://mhkdr.openei.org/submissions/600.
Slater, K., Yoder, D., Noyes, C., & Scott, B. (2014). Fish Detection AI, Optic and Sonar-trained Object Detection Models [Data set]. MHKDR. Water Power Technologies Office. https://mhkdr.openei.org/submissions/600
Slater, Katherine, Delano Yoder, Carlos Noyes, and Brett Scott. Fish Detection AI, Optic and Sonar-trained Object Detection Models. Water Power Technologies Office, June 25, 2014. Distributed by MHKDR. https://mhkdr.openei.org/submissions/600
@misc{OEDI_Dataset_8419,
title = {Fish Detection AI, Optic and Sonar-trained Object Detection Models},
author = {Slater, Katherine and Yoder, Delano and Noyes, Carlos and Scott, Brett},
abstractNote = {The Fish Detection AI project aims to improve the efficiency of fish monitoring around marine energy facilities in order to meet regulatory requirements. Despite advances in computer vision, little attention has been paid to sonar imagery, to identifying small fish in unlabeled data, or to underwater fish-monitoring methods for marine energy.
A YOLO (You Only Look Once) computer vision model was developed using the EyeSea dataset (optical) and sonar images from the Alaska Department of Fish and Game to identify fish in underwater environments. The YOLO models were trained in a supervised fashion on labeled fish imagery. The trained models were then applied to unseen datasets, with the aim of reducing the need to label data and train new models for each location. Additionally, hyper-image analysis and various image preprocessing methods were explored to enhance fish detection.
In this research we achieved:
1. Enhanced YOLO performance compared to a published article (Xu & Matzner 2018) that used earlier YOLO versions for fish detection. Specifically, we achieved a best mean Average Precision (mAP) of 0.68 on the EyeSea optical dataset using YOLOv8 (medium-sized model), surpassing the YOLOv3 benchmarks reported in that article. We further demonstrated up to 0.65 mAP on unseen sonar domains by leveraging a hyper-image approach (stacking consecutive frames), showing promising cross-domain adaptability.
This data submission includes:
- The best-performing trained YOLO model weights (PyTorch .pt files), which can be loaded directly for object detection. These are found in the Yolo_models_downloaded zip file
- A documentation file ("Yolo_Object_Detection_How_To_Document.docx") explaining the upload and the goals of each of experiments 1-5
- Code files: five sub-folders of Python, shell, and YAML files used to run experiments 1-5, plus a separate folder for YOLO models. Each is packaged in its own zip file, named after the corresponding experiment
- Sample data structures (sample1 and sample2, each in its own zip file) showing how the raw data should be organized after running our provided code on the downloaded data
- A link to the article we replicated (Xu & Matzner 2018)
- A link to the YOLO documentation site from the model's original creators (Ultralytics)
- A link to the downloadable EyeSea dataset from PNNL (instructions for downloading and formatting the data to replicate these experiments are in the How To Word document)},
url = {https://mhkdr.openei.org/submissions/600},
year = {2014},
howpublished = {MHKDR, Water Power Technologies Office, https://mhkdr.openei.org/submissions/600},
note = {Accessed: 2025-05-21}
}
Details
Data from Jun 25, 2014
Last updated May 21, 2025
Submitted Apr 10, 2025
Organization
Water Power Technologies Office
Contact
Victoria Sabo
Authors
Katherine Slater, Delano Yoder, Carlos Noyes, Brett Scott
Original Source
https://mhkdr.openei.org/submissions/600
Research Areas
Keywords
MHK, Marine, Hydrokinetic, energy, power, AI, YOLO model, object detection, you only look once model, neural networks, EyeSea dataset, Fish Detection AI, Eyesea, small fish detection, YOLO version 8, YOLOv8, PyTorch, code, PyTorch code, Python, Yaml code, Shell code, Sonar-trained Object Detection Models, YOLO performance, hyper-image approach, cross-domain adaptability, Eyesea optical dataset
DOE Project Details
Project Name: Department of Energy (DOE), Office of Energy Efficiency and Renewable Energy (EERE), Water Power Technologies Office (WPTO)
Project Lead: Samantha Eaves
Project Number: 32326