Title Analysing the temporal dynamics of chemosynthetic ecosystems by using automated image processing tools
Author(s) Michael Aron, Jozée Sarrazin, Pierre-Marie Sarradin, Grégoire Mercier
Conference EGU General Assembly 2011
Media type Article
Language English
Digital document PDF
Published In: GRA - Volume 13 (2011)
Record number 250052418
 
Abstract
Access and sampling in the deep ocean are still limited by time and budget constraints and require the use of large oceanographic vessels and deep-sea submersibles. Optical imagery represents an efficient means of acquiring quantitative data on large spatial scales in these remote habitats. In addition, these techniques are non-invasive, so they can provide useful information about an ecosystem's natural dynamics. The international scientific community has recently invested considerable effort in developing deep-sea observatories with video imaging capabilities in order to study the dynamics of marine ecosystems. The Tempo-mini is one such module. For its first trial, it was deployed at 100 m depth in the Saanich Inlet (BC, Canada) and connected to the VENUS cabled observatory (http://www.venus.uvic.ca/). This module, equipped with a high-definition submarine colour video camera, acquired four months of video footage, representing 487 944 images of typical benthic habitat. A first study was conducted by our laboratory to identify the data that can be extracted from these images (position of squat lobsters, zooplankton densities, species diversity). Because of time constraints, this study was carried out manually on only a small subset of the whole video sequence (0.11%, i.e. 540 images). Our research now focuses on developing an automated image processing platform to extract biological, physical and geological data from the whole video sequence. This new video processing tool uses automatic image processing methods adapted to the specific characteristics of submarine images. Firstly, all moving objects in the video frames are segmented and labelled, allowing us to apply statistical methods to very large datasets. Secondly, computer vision techniques are used to extract metric and 3D information from the images; for example, the speed of moving objects and the surface area covered by each image are computed.
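The segment-and-label step described above can be illustrated with a minimal sketch. This is not the authors' platform: it uses simple frame differencing and connected-component labelling (via `scipy.ndimage.label`), and the function name, threshold, and minimum-area filter are all illustrative assumptions, whereas the actual tool uses methods adapted to submarine imagery.

```python
import numpy as np
from scipy import ndimage


def segment_moving_objects(prev_frame, frame, diff_thresh=25, min_area=4):
    """Label moving objects in a greyscale frame by frame differencing.

    Illustrative sketch only: diff_thresh and min_area are arbitrary
    placeholder values, not parameters from the abstract's platform.
    """
    # Absolute difference between consecutive frames (int16 avoids
    # uint8 wrap-around), thresholded into a binary motion mask.
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    moving = diff > diff_thresh

    # Connected-component labelling: each moving blob gets an integer id.
    labels, n = ndimage.label(moving)

    # Drop tiny components (e.g. sensor noise or drifting particles).
    for lab in range(1, n + 1):
        if (labels == lab).sum() < min_area:
            labels[labels == lab] = 0

    kept = [lab for lab in np.unique(labels) if lab != 0]
    return labels, kept
```

Once each blob carries a persistent label, per-object statistics (counts, positions, trajectories) can be accumulated over very large frame sets without manual annotation.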
The method requires calibrating the camera with a calibration target, i.e. computing the projective geometric properties of the camera and the scene. Lastly, the developed methods will be integrated into a user-friendly processing platform that our team will use to analyse observatory data from different research projects (e.g. Endeavour/Neptune-Canada and MoMAR). Adapting image processing and computer vision methods to submarine video sequences will allow us to quantitatively study changes in community structure and environmental conditions. The automated processing of these data is essential for improving our capacity to process long time series such as those that will be acquired by future deep-sea observatories. The expected results may then provide fundamental knowledge on the functioning of deep-sea chemosynthetic ecosystems. Acknowledgements: This project is part of the European programmes ESONET (2008-2011) Network of Excellence, contract #36851, and HERMIONE (2009-2012), contract #226354. This work has been possible thanks to the VENUS and Neptune-Canada infrastructures.
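How calibration turns pixel measurements into metric ones can be sketched as follows. This is a deliberate simplification of the projective calibration described above: it assumes a single scale factor (millimetres per pixel) obtained from a target of known size, and the function name and parameters are hypothetical, not part of the authors' platform.

```python
import math


def metric_speed(track_px, mm_per_px, fps):
    """Estimate the metric speed (mm/s) of a tracked object.

    track_px  -- list of (x, y) pixel positions, one per video frame
    mm_per_px -- scale factor, assumed to come from imaging a
                 calibration target of known size (a simplification of
                 full projective calibration)
    fps       -- video frame rate
    """
    # Sum the pixel-space displacement between consecutive positions,
    # converted to millimetres via the calibration scale.
    dist_mm = sum(
        math.hypot(x2 - x1, y2 - y1) * mm_per_px
        for (x1, y1), (x2, y2) in zip(track_px, track_px[1:])
    )
    duration_s = (len(track_px) - 1) / fps
    return dist_mm / duration_s
```

A full treatment would instead estimate the camera's intrinsic and extrinsic parameters, so that distances remain correct across the whole (perspective-distorted) field of view.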