Joint Workshop on Marine Vision (October 19th, 2025)

6th Workshop on Computer Vision for Analysis of Underwater Imagery (CVAUI)

3rd Workshop on Automated Analysis of Marine Visual Data for Environmental Monitoring (AAMVEM)

Speakers


Christin Khan

Affiliation: NOAA, Woods Hole, MA

Talk: Geospatial AI for Animals: Developing Annotated Satellite Imagery for Whale Detection Models

Abstract

The Geospatial Artificial Intelligence for Animals (GAIA) initiative integrates very high-resolution satellite imagery, machine learning, and cloud computing into a dedicated marine mammal detection system. In 2025, we launched the GAIA cloud application and developed a custom preprocessing workflow (projection, orthorectification, radiometric correction, and pansharpening) to prepare imagery for analysis. Our initial deployment targets North Atlantic right whales in Cape Cod Bay, where Maxar satellite data are being evaluated against crewed aerial survey results from the Center for Coastal Studies. By aligning with established survey platforms and focusing on areas of known whale presence, we aim to assess both the potential and limitations of satellite-based whale detection. Although still in its early stages, GAIA demonstrates the promise of scalable remote-sensing tools that complement traditional monitoring methods and contribute to global marine mammal conservation. Future work will refine detection capabilities, broaden application to additional species and regions, and strengthen collaboration through open-science workflows, building toward a next-generation platform for marine mammal detection.
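
For context on the pansharpening step mentioned in the abstract, the short Python sketch below shows a Brovey-style pansharpen. It is illustrative only, not GAIA's actual workflow, and it assumes the multispectral bands have already been co-registered and resampled to the panchromatic grid (for example with GDAL or rasterio).

    # Illustrative Brovey-style pansharpening (not GAIA's production code).
    # Assumes rgb bands are already co-registered and resampled to the pan grid.
    import numpy as np

    def brovey_pansharpen(rgb, pan, eps=1e-6):
        """rgb: (3, H, W) multispectral bands; pan: (H, W) panchromatic band."""
        intensity = rgb.mean(axis=0)            # simple per-pixel intensity proxy
        ratio = pan / (intensity + eps)         # sharpening ratio from the pan band
        return np.clip(rgb * ratio, 0.0, None)  # inject pan spatial detail into each band

    # Toy example with random arrays standing in for satellite bands.
    rgb = np.random.rand(3, 256, 256).astype(np.float32)
    pan = np.random.rand(256, 256).astype(np.float32)
    print(brovey_pansharpen(rgb, pan).shape)    # (3, 256, 256)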

Grigory Solomatov

Affiliation: University of Haifa, Israel

Talk: Can a monochrome camera measure spectral attenuation?

Abstract

The beam attenuation coefficient is an important inherent optical property determined by the constituents of a water body. It is generally measured with a transmissometer, but measurements from different transmissometers often disagree. We have developed a method for estimating the beam attenuation coefficient using nothing but an ordinary consumer camera and a Macbeth color chart. The method is verified in three ways: (1) a mathematical proof of correctness in the absence of noise, (2) Monte Carlo simulations with varying levels of noise, and (3) real-world experiments.
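
For intuition about the underlying physics (not the authors' method): under the Beer-Lambert law the direct-transmission signal from a chart patch decays as exp(-c d) with camera-to-chart distance d, so c can be recovered from a linear fit of log-intensity against distance. A minimal Python sketch under those assumptions (dark-subtracted intensities, veiling light ignored, synthetic data):

    # Minimal sketch: estimate the beam attenuation coefficient c from
    # Beer-Lambert decay I(d) = I0 * exp(-c * d), using one chart patch
    # imaged at several distances. Synthetic data; veiling light ignored.
    import numpy as np

    d = np.array([0.5, 1.0, 1.5, 2.0])                 # camera-to-chart distances [m]
    c_true, I0 = 0.4, 0.8                              # synthetic ground truth
    rng = np.random.default_rng(0)
    I = I0 * np.exp(-c_true * d) * (1 + 0.01 * rng.standard_normal(d.size))

    slope, _ = np.polyfit(d, np.log(I), 1)             # ln I = ln I0 - c * d
    print(f"estimated c = {-slope:.3f} per metre")     # close to 0.4 at this noise level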

Kakani Katija

Affiliation: MBARI, Moss Landing, CA

Talk: Ocean Vision AI: Accelerating the processing of underwater visual data for marine biodiversity surveys

Abstract

To fully explore our ocean and effectively steward the life that lives there, we need to scale up our observational capabilities in both time and space. Marine biological observations and surveys of the future call for distributed networks of underwater sensors, vehicles, and data analysis pipelines, which requires significant advances in automation. Underwater imaging, a major sensing modality for marine biology, is being deployed on a diverse array of platforms; however, the community faces a data analysis backlog that artificial intelligence and machine learning may be able to address. How can we leverage novel computer and data science tools to automate image and video analysis in the ocean? How can we create workflows, data pipelines, and hardware/software tools that enable novel research themes and expand our understanding of the ocean and its inhabitants in a time of great change? Here we describe our efforts to build Ocean Vision AI (OVAI), a central hub for researchers working with imaging, AI, open data, and hardware/software. Through OVAI, we are creating data pipelines from existing image and video repositories and providing project coordination tools (the portal); leveraging public participation and engagement via gamification (FathomVerse); and aggregating widely shared labelled data products and machine learning models (FathomNet). Together, these efforts will directly accelerate the automated analysis of underwater visual data, enabling scientists, explorers, policymakers, storytellers, and the public to learn, understand, and care more about the life that inhabits our ocean.
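
As a concrete illustration of the shared, labelled data products mentioned above, the sketch below queries FathomNet with the fathomnet-py client. The call and field names follow that project's published examples but are assumptions here and should be checked against the current FathomNet documentation.

    # Assumed usage of the fathomnet-py client (pip install fathomnet); verify
    # call and field names against the current FathomNet docs before relying on them.
    from fathomnet.api import images

    records = images.find_by_concept("Bathochordaeus")   # images annotated with this concept
    print(f"{len(records)} labelled images found")

    for rec in records[:5]:
        boxes = rec.boundingBoxes or []                   # bounding-box annotations per image
        print(rec.url, [(b.concept, b.x, b.y, b.width, b.height) for b in boxes])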

Guolei Sun

Affiliation: ETH Zürich, Switzerland

Talk: Multi-modal dense object localization and counting in underwater scenes

Abstract

Underwater scene understanding holds immense potential for ocean exploration, yet remains underexplored. In this talk, I will present our recent work on dense object localization, including a large-scale, challenging dataset we developed, extensive benchmarking experiments, and a state-of-the-art method addressing the unique challenges of underwater environments. Additionally, we extended our research by creating a multi-modal dataset and proposing a novel multi-modal approach. Our datasets, methods, and insights aim to advance research in this critical field.
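
For readers unfamiliar with the counting formulation, the sketch below shows the standard density-map idea often used for dense object counting (not necessarily the method presented in the talk): each point annotation is spread into a small Gaussian, and the object count is recovered as the sum of the resulting map, which a network can be trained to regress.

    # Generic density-map construction for dense counting (illustrative, not the
    # speaker's method): point annotations -> Gaussian density map whose sum ~ count.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def density_map(points, shape, sigma=4.0):
        """points: iterable of (row, col) annotations; shape: (H, W) of the image."""
        dmap = np.zeros(shape, dtype=np.float64)
        for r, c in points:
            if 0 <= r < shape[0] and 0 <= c < shape[1]:
                dmap[int(r), int(c)] += 1.0
        return gaussian_filter(dmap, sigma=sigma)   # blurring roughly preserves total mass

    # Example: three annotated fish in a 64x64 crop.
    dmap = density_map([(10, 12), (30, 40), (50, 20)], (64, 64))
    print(f"count ~ {dmap.sum():.2f}")              # ~3.0; a counting network regresses dmap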

Ben Richards

Affiliation: NOAA, Honolulu, HI

Talk: TBD

Abstract

TBD