{"id":71,"date":"2025-05-08T10:52:29","date_gmt":"2025-05-08T10:52:29","guid":{"rendered":"https:\/\/vap.aau.dk\/waves\/?page_id=71"},"modified":"2025-10-09T12:19:56","modified_gmt":"2025-10-09T10:19:56","slug":"speakers","status":"publish","type":"page","link":"https:\/\/vap.aau.dk\/marinevision\/speakers\/","title":{"rendered":"Speakers"},"content":{"rendered":"\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<div style=\"height:43px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-28f84493 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:33.33%\">\n<figure class=\"wp-block-image size-full has-custom-border is-style-rounded is-style-rounded--1\"><img loading=\"lazy\" decoding=\"async\" width=\"800\" height=\"800\" src=\"https:\/\/vap.aau.dk\/marinevision\/wp-content\/uploads\/sites\/9\/2025\/07\/christinkahn.jpeg\" alt=\"\" class=\"wp-image-277\" style=\"border-radius:100px\" srcset=\"https:\/\/vap.aau.dk\/marinevision\/wp-content\/uploads\/sites\/9\/2025\/07\/christinkahn.jpeg 800w, https:\/\/vap.aau.dk\/marinevision\/wp-content\/uploads\/sites\/9\/2025\/07\/christinkahn-300x300.jpeg 300w, https:\/\/vap.aau.dk\/marinevision\/wp-content\/uploads\/sites\/9\/2025\/07\/christinkahn-150x150.jpeg 150w, https:\/\/vap.aau.dk\/marinevision\/wp-content\/uploads\/sites\/9\/2025\/07\/christinkahn-768x768.jpeg 768w\" sizes=\"auto, (max-width: 800px) 100vw, 800px\" \/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:66.66%\">\n<h3 class=\"wp-block-heading is-style-default has-body-font-family\"><strong>Christin Kahn<\/strong><\/h3>\n\n\n\n<p><span style=\"text-decoration: underline;\">Affiliation:<\/span> NOAA, Woods Hole, MA<\/p>\n\n\n\n<p><span style=\"text-decoration: underline;\">Talk:<\/span> 
Geospatial AI for Animals: Developing Annotated Satellite Imagery for Whale Detection Models<\/p>\n\n\n\n<details class=\"wp-block-details is-layout-flow wp-block-details-is-layout-flow\"><summary><span style=\"text-decoration: underline;\">Abstract<\/span><\/summary>\n<p id=\"christin_abstract\">The Geospatial Artificial Intelligence for Animals (GAIA) initiative integrates very high-resolution satellite imagery, machine learning, and cloud computing into a dedicated marine mammal detection system. In 2025, we launched the GAIA cloud application and developed a custom preprocessing workflow\u2014including projection, orthorectification, radiometric correction, and pansharpening\u2014to prepare imagery for analysis. Our initial deployment targets North Atlantic right whales in Cape Cod Bay, where Maxar satellite data are being evaluated against crewed aerial survey results from the Center for Coastal Studies. By aligning with established survey platforms and focusing on areas of known whale presence, we aim to assess both the potential and limitations of satellite-based whale detection. Although still in its early stages, GAIA demonstrates the promise of scalable, remote-sensing tools that complement traditional monitoring methods and contribute to global marine mammal conservation. 
Future work will refine detection capabilities, broaden application to additional species and regions, and strengthen collaboration through open-science workflows, building toward a next-generation platform for marine mammal detection.<\/p>\n<\/details>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-28f84493 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:33.33%\">\n<figure class=\"wp-block-image size-large has-custom-border is-style-rounded is-style-rounded--2\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"1024\" src=\"https:\/\/vap.aau.dk\/marinevision\/wp-content\/uploads\/sites\/9\/2025\/07\/grigory-1024x1024.jpg\" alt=\"\" class=\"wp-image-274\" style=\"border-radius:100px\" srcset=\"https:\/\/vap.aau.dk\/marinevision\/wp-content\/uploads\/sites\/9\/2025\/07\/grigory-1024x1024.jpg 1024w, https:\/\/vap.aau.dk\/marinevision\/wp-content\/uploads\/sites\/9\/2025\/07\/grigory-300x300.jpg 300w, https:\/\/vap.aau.dk\/marinevision\/wp-content\/uploads\/sites\/9\/2025\/07\/grigory-150x150.jpg 150w, https:\/\/vap.aau.dk\/marinevision\/wp-content\/uploads\/sites\/9\/2025\/07\/grigory-768x768.jpg 768w, https:\/\/vap.aau.dk\/marinevision\/wp-content\/uploads\/sites\/9\/2025\/07\/grigory-1536x1536.jpg 1536w, https:\/\/vap.aau.dk\/marinevision\/wp-content\/uploads\/sites\/9\/2025\/07\/grigory.jpg 1697w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:66.66%\">\n<h3 class=\"wp-block-heading is-style-default has-body-font-family\"><strong>Grigory Solomatov<\/strong><\/h3>\n\n\n\n<p><span style=\"text-decoration: underline;\">Affiliation:<\/span> University of Haifa, Israel<\/p>\n\n\n\n<p><span style=\"text-decoration: underline;\">Talk:<\/span> Estimating Optical Properties of Water 
from RGBD Data<\/p>\n\n\n\n<details class=\"wp-block-details is-layout-flow wp-block-details-is-layout-flow\"><summary><span style=\"text-decoration: underline;\">Abstract<\/span><\/summary>\n<p id=\"grigory_abstract\">Computer vision tasks like species classification are inherently more difficult underwater because of color distortions; however, these distortions could in principle be removed if the optical properties governing them were known. Relying on the Single Scattering Approximation of the Radiative Transfer Equation, we show that the spectral beam attenuation coefficient, which is the most important optical property for image formation, can be accurately estimated from photographs of a Macbeth color chart at two different distances. More generally, we show that photographs of a color chart can serve as measurements from a radiometer as well as a transmissometer, provided that the spectral sensitivities of the camera are known. Finally, we show that the same could in principle be achieved even without a color chart, as long as the distribution of scene reflectances is sufficiently well-behaved.<\/p>\n<\/details>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-28f84493 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:33.33%\">\n<figure class=\"wp-block-image size-full has-custom-border is-style-rounded is-style-rounded--3\"><img loading=\"lazy\" decoding=\"async\" width=\"300\" height=\"263\" src=\"https:\/\/vap.aau.dk\/marinevision\/wp-content\/uploads\/sites\/9\/2025\/10\/Daniels_Joost-300x263-1.jpg\" alt=\"\" class=\"wp-image-328\" style=\"border-radius:100px\"\/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:66.66%\">\n<h3 class=\"wp-block-heading is-style-default has-body-font-family\"><strong>Joost Daniels<\/strong><\/h3>\n\n\n\n<p><span 
style=\"text-decoration: underline;\">Affiliation:<\/span> MBARI, Moss Landing, CA<\/p>\n\n\n\n<p><span style=\"text-decoration: underline;\">Talk:<\/span> Ocean Vision AI: Accelerating the processing of underwater visual data for marine biodiversity surveys<\/p>\n\n\n\n<details class=\"wp-block-details is-layout-flow wp-block-details-is-layout-flow\"><summary><span style=\"text-decoration: underline;\">Abstract<\/span><\/summary>\n<p id=\"joost_abstract\">To fully explore our ocean and effectively steward the life that lives there, we need to scale up our observational capabilities both in time and space.&nbsp;Marine biological observations and surveys of the future call for building distributed networks of underwater sensors, vehicles, and data analysis pipelines, which requires significant advances in automation. Underwater imaging, a major sensing modality for marine biology, is being deployed on a diverse array of platforms; however, the community faces a data analysis backlog that artificial intelligence and machine learning may be able to address. How can we leverage novel computer and data science tools to automate image and video analysis in the ocean? How can we create workflows, data pipelines, and hardware\/software tools that will enable novel research themes to expand our understanding of the ocean and its inhabitants in a time of great change? 
Here we describe our efforts to build Ocean Vision AI (OVAI), a&nbsp;<em>central hub for researchers using imaging, AI, open data, and hardware\/software.<\/em>&nbsp;Through OVAI, we are<em>&nbsp;creating data pipelines from existing image and video data repositories and providing project tools for coordination (portal); leveraging public participation and engagement via gamification (FathomVerse); and aggregating labelled data products and machine learning models (FathomNet) that are widely shared.&nbsp;<\/em>Together, these efforts will directly accelerate the automated analysis of underwater visual data, enabling scientists, explorers, policymakers, storytellers, and the public to learn, understand, and care more about the life that inhabits our ocean.<\/p>\n<\/details>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-28f84493 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:33.33%\">\n<figure class=\"wp-block-image size-large has-custom-border is-style-rounded is-style-rounded--4\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"1024\" src=\"https:\/\/vap.aau.dk\/marinevision\/wp-content\/uploads\/sites\/9\/2025\/07\/profile-e1751880279632-1024x1024.jpg\" alt=\"\" class=\"wp-image-269\" style=\"border-radius:100px\"\/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:66.66%\">\n<h3 class=\"wp-block-heading is-style-default has-body-font-family\"><strong>Guolei Sun<\/strong><\/h3>\n\n\n\n<p><span style=\"text-decoration: underline;\">Affiliation:<\/span> ETH Z\u00fcrich, Switzerland<\/p>\n\n\n\n<p><span style=\"text-decoration: underline;\">Talk:<\/span> Multi-modal dense object localization and counting in underwater scenes<\/p>\n\n\n\n<details class=\"wp-block-details is-layout-flow 
wp-block-details-is-layout-flow\"><summary><span style=\"text-decoration: underline;\">Abstract<\/span><\/summary>\n<p id=\"guolei_abstract\">Underwater scene understanding holds immense potential for ocean exploration, yet remains underexplored. In this talk, I will present our recent work on dense object localization, including a large-scale, challenging dataset we developed, extensive benchmarking experiments, and a state-of-the-art method addressing the unique challenges of underwater environments. Additionally, we extended our research by creating a multi-modal dataset and proposing a novel multi-modal approach. Our datasets, methods, and insights aim to advance research in this critical field.<\/p>\n<\/details>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-28f84493 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:33.33%\">\n<figure class=\"wp-block-image size-full has-custom-border is-style-rounded is-style-rounded--5\"><img loading=\"lazy\" decoding=\"async\" width=\"300\" height=\"300\" src=\"https:\/\/vap.aau.dk\/marinevision\/wp-content\/uploads\/sites\/9\/2025\/10\/Troni_Giancarlo2-300x300-1.jpg\" alt=\"\" class=\"wp-image-330\" style=\"border-radius:100px\" srcset=\"https:\/\/vap.aau.dk\/marinevision\/wp-content\/uploads\/sites\/9\/2025\/10\/Troni_Giancarlo2-300x300-1.jpg 300w, https:\/\/vap.aau.dk\/marinevision\/wp-content\/uploads\/sites\/9\/2025\/10\/Troni_Giancarlo2-300x300-1-150x150.jpg 150w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:66.66%\">\n<h3 class=\"wp-block-heading is-style-default has-body-font-family\"><strong>Giancarlo Troni<\/strong><\/h3>\n\n\n\n<p><span style=\"text-decoration: underline;\">Affiliation:<\/span> MBARI, Moss Landing, CA<\/p>\n\n\n\n<p><span style=\"text-decoration: underline;\">Talk:<\/span> Deep 
Underwater Vision at Scale: Enabling Precision Navigation and High-Resolution Mapping<\/p>\n\n\n\n<details class=\"wp-block-details is-layout-flow wp-block-details-is-layout-flow\"><summary><span style=\"text-decoration: underline;\">Abstract<\/span><\/summary>\n<p id=\"giancarlo_abstract\">Exploring and monitoring the deep ocean require cost-effective perception systems that operate reliably in visually challenging environments. This talk highlights recent advances and remaining challenges in underwater vision technologies that enable precision navigation and high-resolution mapping at scale. Our systems integrate optical cameras, forward-looking imaging sonar, and laser scanners to capture complementary visual and geometric information. Through sensor fusion and machine-learning-based perception, we enhance localization, mapping accuracy, and environmental understanding even under challenging conditions. Deployed across a range of robotic platforms, these technologies demonstrate scalable and affordable solutions for autonomous ocean observation\u2014bridging computer vision, marine robotics, and ocean science to advance persistent monitoring of marine ecosystems.<\/p>\n<\/details>\n<\/div>\n<\/div>\n\n\n\n<script>\ndocument.addEventListener(\"DOMContentLoaded\", function() {\n  if (window.location.hash) {\n    const target = document.querySelector(window.location.hash);\n    if (target && target.tagName.toLowerCase() === \"details\") {\n      target.setAttribute(\"open\", \"\");  \/\/ auto-open\n      target.scrollIntoView();          \/\/ optional: scrolls neatly to it\n    }\n  }\n});\n<\/script>\n","protected":false},"excerpt":{"rendered":"<p>Christin Kahn Affiliation: NOAA, Woods Hole, MA Talk: Geospatial AI for Animals: Developing Annotated Satellite Imagery for Whale Detection Models Grigory Solomatov Affiliation: University of Haifa, Israel Talk: Estimating Optical Properties of Water from RGBD Data Joost Daniels Affiliation: MBARI, Moss Landing, CA Talk: Ocean Vision AI: Accelerating the processing of 
underwater visual data for [&hellip;]<\/p>\n","protected":false},"author":3,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-71","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/vap.aau.dk\/marinevision\/wp-json\/wp\/v2\/pages\/71","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/vap.aau.dk\/marinevision\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/vap.aau.dk\/marinevision\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/vap.aau.dk\/marinevision\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/vap.aau.dk\/marinevision\/wp-json\/wp\/v2\/comments?post=71"}],"version-history":[{"count":14,"href":"https:\/\/vap.aau.dk\/marinevision\/wp-json\/wp\/v2\/pages\/71\/revisions"}],"predecessor-version":[{"id":332,"href":"https:\/\/vap.aau.dk\/marinevision\/wp-json\/wp\/v2\/pages\/71\/revisions\/332"}],"wp:attachment":[{"href":"https:\/\/vap.aau.dk\/marinevision\/wp-json\/wp\/v2\/media?parent=71"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}