Augment SilenceDetection with Voice Activity Detection


Opencast added SilenceDetection in MH-9178. The service examines the audio track to detect segments with no audio; these are then proposed as trimming points in the video editor.

In some venue A/V configurations where boundary microphones are used as a fallback for lapel microphones, there may never be a region of silence - instead, background noise will be recorded when a speaker is not speaking.

It would be helpful to trim these segments out of the video (typically at the start and end of a recording), but SilenceDetection does not identify them, because they are not silent.

Voice Activity Detection (VAD) is helpful here, because it lets us identify two additional cases for possible trimming:

  • Continuous background speech (audience noise)

  • White noise which contains no speech but is also not silence (e.g. an empty room with other sources of noise)

There is an open source implementation of Voice Activity Detection in WebRTC, which is considered one of the best open implementations available (other options include Sphinx4, and some audio codecs which include VAD support).

There is a Python module which surfaces just the VAD component from WebRTC and is easy to install.

It is proposed to extend the SilenceDetection service to additionally use VAD (if configured to do so).

Some experimentation and an appropriate algorithm are required to turn the WebRTC VAD output into suitable segments, distinguishing continuous background speech, single-speaker speech, white noise and silence.
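One possible smoothing strategy (a sketch of the kind of algorithm needed, not a proposed implementation) is a majority vote over a sliding window of per-frame VAD flags, reporting long non-voiced runs as trim candidates; the window size, threshold and minimum duration are arbitrary assumptions:

```python
# Sketch: collapse per-frame speech flags into (start_ms, end_ms)
# non-speech segments that are long enough to be worth trimming.
from itertools import groupby

def trim_candidates(flags, frame_ms=30, window=10, threshold=0.5,
                    min_trim_ms=2000):
    """Return non-speech segments at least min_trim_ms long."""
    # Majority vote over a trailing window smooths out single-frame
    # glitches in the raw VAD output.
    smoothed = []
    for i in range(len(flags)):
        w = flags[max(0, i - window + 1):i + 1]
        smoothed.append(sum(w) / len(w) >= threshold)

    segments, pos = [], 0
    for voiced, run in groupby(smoothed):
        n = sum(1 for _ in run)
        start, end = pos * frame_ms, (pos + n) * frame_ms
        if not voiced and end - start >= min_trim_ms:
            segments.append((start, end))
        pos += n
    return segments

# 3 s of noise, 3 s of speech, 3 s of noise (30 ms frames):
segs = trim_candidates([False] * 100 + [True] * 100 + [False] * 100)
```

In practice the thresholds would need tuning against real venue recordings to separate the four cases listed above.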


Stephen Marquard
February 15, 2018, 10:27 AM

UCT is implementing an audio classifier that accomplishes this goal.

