Hey everyone, I am Karan, a third-year engineering student at TIET, Patiala, currently living in Mumbai, India. I have been given the opportunity to contribute to Orcasound as a Google Summer of Code 2022 student on the project: Stream external hydrophones to Orcasound website. The goal of this project is to design a tool that fetches and streams audio data archived on the Ocean Observatories Initiative (OOI) website. When whales are out in the Pacific, it is difficult to track them and determine their behavior; to help overcome this, we want to monitor the cabled hydrophones off the coast of Oregon. Our study focuses first on the slope base shallow (195 m) hydrophone, which operates at a sampling rate of 64 kHz. This hydrophone goes by the identifier ‘PC01A’ and is currently operational. Eventually we will shift our attention to the hydrophone closer to shore (mid-shelf, at a depth of 80 m), as it is more likely to detect the endangered Southern Resident killer whales as they seek salmon along the outer coast each winter.
The audio data from OOI hydrophones is stored in miniSEED format, which stores the audio as waveform data and is popular in the seismic community. Our aim is to transcode the data using the HTTP Live Streaming (HLS) protocol, so that the audio segments can be streamed live to the Orcasound web app and ML pipeline.
To achieve this, we use several tools, each with its own distinct function. The high-level flow goes something like this:
- The audio data will be fetched from the OOI website and will be converted to HLS (.ts) segments using an open-source tool called FFmpeg.
- These HLS segments will be stored in an S3 bucket, a storage service provided by AWS. The bucket will also contain a “manifest” file named ‘live.m3u8’, unique to each directory. This file contains metadata such as the file name, segment length, and segment type, and acts as a playlist for the audio segments.
- The website will leverage these files to let users listen easily for whale sounds in near real time, whereas previously the underlying miniSEED data files had been practically inaccessible to community scientists.
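The FFmpeg step in the flow above can be sketched as a small Python wrapper that builds the transcoding command. This is a minimal illustration, not the project's actual code; the input file, output directory, and 10-second segment length are assumptions:

```python
import subprocess


def build_hls_command(input_audio: str, out_dir: str, segment_seconds: int = 10) -> list:
    """Build an ffmpeg command that transcodes an audio file into HLS
    (.ts) segments plus a live.m3u8 playlist. The paths, naming pattern,
    and segment length here are illustrative assumptions."""
    return [
        "ffmpeg",
        "-i", input_audio,                    # source audio fetched from OOI
        "-c:a", "aac",                        # encode audio as AAC for HLS
        "-f", "hls",                          # use FFmpeg's HLS muxer
        "-hls_time", str(segment_seconds),    # target duration per .ts segment
        "-hls_playlist_type", "event",        # keep all segments in the playlist
        "-hls_segment_filename", f"{out_dir}/live%03d.ts",
        f"{out_dir}/live.m3u8",               # the manifest the web app will play
    ]


# The command could then be executed with, e.g.:
# subprocess.run(build_hls_command("audio.wav", "hls_out"), check=True)
```

Running the command writes the numbered `.ts` segments and the `live.m3u8` playlist into the output directory, ready to be uploaded to S3.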
It is to be noted that Orcasound is a part of the Registry of Open Data on AWS. This registry helps people discover datasets that are available publicly on AWS and are provided by third parties.
Progress as of 25th July 2022
Since the coding period commenced on June 13th, we have made significant progress in fetching data from the OOI website and uploading it to the S3 bucket. Some notable changes are mentioned below. We are:
- using the OOIpy library to fetch data and remove short gaps from the audio segments;
- iterating over a given time slot to fetch data that was previously missing due to a delay in its upload; and,
- manually writing m3u8 files that list the segments in the order in which they are to be played.
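Manually writing the playlist amounts to emitting the standard m3u8 header tags followed by one `#EXTINF` entry per segment, in playback order. A minimal sketch, where the segment names, target duration, and output path are hypothetical:

```python
def write_m3u8(segment_files, segment_duration=10, path="live.m3u8"):
    """Write a simple HLS playlist listing segments in playback order.
    segment_duration is the target segment length in seconds; the
    file names used here are placeholders, not the project's real
    naming scheme."""
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        f"#EXT-X-TARGETDURATION:{segment_duration}",
        "#EXT-X-MEDIA-SEQUENCE:0",
    ]
    for seg in segment_files:
        lines.append(f"#EXTINF:{segment_duration}.0,")  # duration of the next segment
        lines.append(seg)
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")
    return lines


# Example: a playlist for two consecutive segments
playlist = write_m3u8(["live000.ts", "live001.ts"])
```

Because the player reads segments in the order they appear in the playlist, writing the file ourselves lets us control exactly where each fetched segment lands.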
For the remainder of the first phase and the entire second phase, I plan to:
- Edit m3u8 files when delayed data becomes available, inserting the segment names in the correct position.
- Deploy the functions to services like AWS Lambda.
- Add integration tests.
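For the first planned item, inserting a late-arriving segment into an existing playlist could look roughly like the sketch below. It assumes segment file names sort chronologically, which is a hypothetical naming convention for illustration, not the project's confirmed scheme:

```python
def insert_delayed_segment(playlist_lines, segment_name, duration=10):
    """Insert a late-arriving segment into an existing m3u8 playlist,
    keeping segments in chronological order. Assumes segment names
    sort chronologically (an assumption about the naming scheme)."""
    # Separate the header tags from the (EXTINF, segment) pairs.
    header = [l for l in playlist_lines
              if l.startswith("#EXT") and not l.startswith("#EXTINF")]
    segments = [l for l in playlist_lines if l.endswith(".ts")]
    segments.append(segment_name)
    segments.sort()  # restore playback order by name
    # Rebuild the playlist with the new segment in its proper place.
    out = list(header)
    for seg in segments:
        out.append(f"#EXTINF:{duration}.0,")
        out.append(seg)
    return out
```

The idea is simply to rebuild the segment list with the delayed entry slotted in, so the player picks it up in the right position on its next playlist refresh.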