Encoding Opus files in Linux with opusenc for your own collection and HTML 5

Introduction


Hello everyone! I am finally back with a new blog entry after taking a very long month off. I hope all of my U.S. readers had a great Fourth of July and are enjoying the summer. Today, I am going to discuss and show you how to encode with the new IETF audio codec called "Opus". Briefly, Opus combines Xiph.org's "CELT" codec and Skype's "SILK" codec into a single project. It's a low-latency audio codec designed with "scalability" in mind, and while it does things "fundamentally" differently, it should sound as good as, if not better than, existing codecs across different speech and music situations. The project was finally completed and standardized this past winter, and it will be used in the new WebRTC project (for video conferencing), in Skype (for video calls), and by other companies that wish to adopt it. Google is NOW (as of July 2013) incorporating it into the next-generation codebase of WebM, which going forward will consist of VP9/Opus. What does this mean? It means that if your files are already encoded with Vorbis or FLAC, there will be no need to re-encode them to Opus, because that is not the scenario it is primarily designed for. It's designed to be the audio portion of WebM and to compete with the next-generation video standard, H.265 (HEVC).

Many recent studies have suggested that video consumes about 40% (as of 2014) of all Internet traffic (Note: I don't know how accurate those statistics are; they come from a study done by network engineers at Cisco, but I am willing to bet the numbers are very close to accurate). Hence the motivation for wanting to reduce overall A/V bandwidth on most content delivery networks (e.g., YouTube and Vimeo), and hence the two competing standards on the web. What will be the deal breaker for widespread adoption, compared with the other open-source codecs we have seen in the past? The simple answer is hardware support. If next-generation mobile devices support WebM, there is a very good "chance" many people will use it! If not, it will either fall by the wayside yet again or never see any use outside of the Internet (Note: it should be duly noted here that one of the reasons the Vorbis audio codec was so successful was hardware support). Now that I have covered all the bases and filled you in on the necessary background information, it's time to show you how to encode with opusenc for your own personal collection and HTML 5. Now we can begin:

Note: The guide below assumes you already have the necessary tools, namely the libopus-based encoder, decoder, and file-info binaries, installed in your distro. This guide will NOT show you how to build them from source; I recommend you visit the Xiph.org website IF you need to build them yourself (for whatever reason you so choose). If you are using "newer" versions of Ubuntu/Fedora, check your package manager by searching for 'opus' to see if these tools have already been packaged for your distro. If you have any other questions, feel free to contact me and I will get you squared away!
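
For instance, on Ubuntu and Fedora the encoder (opusenc), decoder (opusdec), and file-info (opusinfo) binaries usually ship together in a package called opus-tools (the package name here is an assumption; search your repositories to confirm):

$ sudo apt-get install opus-tools    # Debian/Ubuntu
$ sudo yum install opus-tools        # Fedora
$ opusenc --version                  # confirm the encoder is on your PATH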

Encoding with Opus using default VBR settings


Opus will default to ~96 kbps at 48 kHz sampling if you DO NOT configure the encoder, and it will encode your music that way using the default 20 msec frame size. This is PERFECTLY FINE for realtime streaming of music and speech. If you desire something higher or lower, you need to "tweak" the settings a little bit (depending upon the input). Below are five "experimental" settings you can use to encode your files: two for speech and three for music. Keep in mind, Opus is designed WITH low latency in mind for bandwidth-"constrained" traffic conditions. Unlike Vorbis and AAC, which use longer frames, it uses a newer mathematical technique, the constrained-energy lapped transform (the "CELT" part of its heritage), with frame sizes that can scale based upon traffic conditions, making it EFFECTIVE for next-gen realtime web A/V applications, hence WebRTC, Skype, etc.
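
A minimal sketch of a default-settings encode (the filenames here are placeholders) is simply an input file followed by an output file:

$ opusenc /home/USR/music/input.wav /home/USR/music/input.opus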

Experimental setting #1: Speech

$ opusenc --vbr --bitrate 24 --framesize 40 /home/USR/music/input.wav /home/USR/music/input.opus

Experimental setting #2: Speech with 10% packet loss

$ opusenc --vbr --bitrate 48 --framesize 60 --expect-loss 10 /home/USR/music/input.wav /home/USR/music/input.opus

Experimental setting #3: Music

$ opusenc --vbr --bitrate 64 --framesize 20 /home/USR/music/input.wav /home/USR/music/input.opus

Experimental setting #4: Music with 20% packet loss

$ opusenc --vbr --bitrate 96 --framesize 10 --expect-loss 20 /home/USR/music/input.wav /home/USR/music/input.opus

Experimental setting #5: Music

$ opusenc --vbr --bitrate 128 --framesize 20 /home/USR/music/input.wav /home/USR/music/input.opus
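
Note that opusenc converts one file per invocation, so to work through a whole collection you can wrap it in a small shell loop. This is just a sketch assuming your WAV files live under /home/USR/music as in the examples above:

$ for f in /home/USR/music/*.wav; do opusenc --vbr --bitrate 96 "$f" "${f%.wav}.opus"; done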


... and that is the general "gist" of how to encode your speech/music files with Opus! If you have any more questions about this blog post, or you would like to see it expanded, leave me a comment or get in touch with me. In the meantime, I will be back next month with a music-related entry (having missed an entry for June). If I find any topics before then that I feel may be of interest to my readers, I will write a new entry. Thanks for your viewership, and I look forward to writing again soon! Take care. ;-D

