The “broadband-everywhere” philosophy, while not yet fully realized, brings streaming media to more people and places each year. Dial-up is a distant memory for most, replaced with an array of high-speed wired connections and a maturing mobile universe.
This doesn’t automatically make the production and delivery of high-quality streams a simple process, however. If anything, there are more challenges to overcome than ever before—for both video and audio. Understanding the end-to-end components, bandwidth challenges and formats involved goes a long way in providing the best quality streams for consumers.
Production and Testing
Streaming in the professional AV space can mean many things, but the majority fall under live events. This can mean anything from corporate and government meetings to worship and entertainment events, with a complete live production workflow featuring multiple cameras, switching and routing gear, and audio production at the venue.
Ensuring production quality across the workflow is the first step to capturing a robust streaming signal. For audio, noise reduction tools and audio signal processing to maximize loudness will keep levels on track. The onsite technician can use a standard VU meter to measure audio levels, with a professional pair of headphones to listen and make adjustments. Meanwhile, a second technician can monitor changes from a web player, providing feedback on changes to volume and other levels.
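The metering the onsite technician does with a VU meter can also be approximated in software. As a rough sketch in Python (the helper name and the simple block-based approach are illustrative, not a standards-compliant meter, which would also model attack and release ballistics):

```python
import math

def level_dbfs(samples):
    """Compute peak and RMS levels in dBFS for a block of
    floating-point PCM samples in the range [-1.0, 1.0].
    Hypothetical helper -- real meters apply the ballistics
    defined by the VU or PPM standards."""
    def to_db(x):
        return 20 * math.log10(x) if x > 0 else float("-inf")
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return to_db(peak), to_db(rms)

# A full-scale sine wave peaks near 0 dBFS with RMS about -3 dBFS.
sine = [math.sin(2 * math.pi * 440 * n / 48000) for n in range(48000)]
peak_db, rms_db = level_dbfs(sine)
```

Feeding blocks of decoded samples through a routine like this gives the second technician a numeric sanity check alongside what they hear in the web player.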
Most important is an extended soundcheck and camera check to confirm that the signal is flowing to the encoder, with an understanding of how to boost volume as needed. Video signal quality should be confirmed in well-lit areas well in advance of the event, without cameras directly facing a lighting source. Broadcast-quality signal monitors and analyzers will assist with detecting faults and providing references for frame-by-frame progressions. These tools will quickly confirm whether signals are breaking up or freezing en route to the encoder.
The Handoff
The encoding point is the handshake between the production arm and the content delivery network (CDN). The encoder converts the source signal into formats acceptable for streaming to end devices. Technology has improved in recent years to better address bandwidth consumption and usage, both of which come into the picture at this stage of the workflow.
The most significant base to cover is ensuring ample upload capacity to the streaming server, which ultimately delivers the stream to consumers. A common performance issue at the encoding point is buffering—a surefire sign that upload bandwidth is lacking. A professional-grade encoder should be perfectly capable of handing off a smooth signal to the streaming server.
An inordinate amount of buffering from a robust encoder can indicate high network traffic. Taking steps to minimize that traffic will typically reduce or eliminate buffering. Ensure that others in your office or facility are not using the Internet during the event, and password-protect shared wireless networks so that outside users are not consuming wireless bandwidth.
The ideal result is a low-latency connection with no packet loss between the encoder and the streaming server.
Speaking of Mobile
Not every venue has a wired connection—and delivering a signal to the encoding point over Wi-Fi is a tricky proposition. This is particularly a concern for outdoor locations and in live performance venues.
Bonded cellular technology from companies like Mushroom Networks combines multiple cellular cards into a single aggregated line, offering enough bandwidth to overcome the fluctuations in video quality and latency that are common over cellular connections. This is achieved by dividing source signals into “chunks” that are delivered to a cloud-based component, which reassembles the various elements into a single, consistent stream.
The end result is a proven method to deliver quality streams to the encoder at locations where securing a wired connection is a challenge.
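The chunk-and-reassemble idea can be sketched in a few lines. This is a toy illustration of the principle only; real bonded-cellular systems add forward error correction, retransmission, and per-link congestion control:

```python
def chunk(stream_bytes, size, links):
    """Split a source buffer into numbered chunks and deal them
    round-robin across the bonded cellular links."""
    chunks = [(seq, stream_bytes[i:i + size])
              for seq, i in enumerate(range(0, len(stream_bytes), size))]
    return [chunks[k::links] for k in range(links)]

def reassemble(lanes):
    """Cloud-side component: merge the per-link chunk lists back
    into one ordered stream by sequence number."""
    merged = sorted((c for lane in lanes for c in lane), key=lambda c: c[0])
    return b"".join(payload for _, payload in merged)

# Splitting across three links and reassembling restores the source.
data = bytes(range(50))
assert reassemble(chunk(data, 7, links=3)) == data
```

Because each chunk carries a sequence number, the cloud component can tolerate links delivering at different speeds and still emit a single, consistent stream.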
One-to-Many
The streaming server will ultimately determine how far and wide your signal will go. More than ever, the need to accommodate multiple formats and bitrates is critical so that the content owner is not locked into a certain format or quality—ultimately restricting the number of devices and consumers reached.
Having plenty of upload bandwidth at the encoding point will allow delivery of multiple bitrates to the streaming server. This eliminates transcoding stages that introduce generational loss—instead permitting the streaming server to “natively” deliver the same signal at varying quality levels.
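To size that upload link, one rough rule of thumb is to sum the bitrates of every rendition in the ladder and add headroom for protocol overhead and network variance. A hedged sketch (the 1.4x headroom factor and the example bitrates are assumptions, not a standard):

```python
def required_upload_kbps(renditions_kbps, audio_kbps=128, headroom=1.4):
    """Rough estimate of the upload bandwidth needed to push a
    multi-bitrate ladder from the encoder to the streaming server.
    The 1.4x headroom factor is a rule-of-thumb assumption --
    size it for your own network conditions."""
    total = sum(video + audio_kbps for video in renditions_kbps)
    return total * headroom

# e.g. a three-rung ladder at 3000/1500/700 kbps video:
needed = required_upload_kbps([3000, 1500, 700])  # roughly 7800 kbps
```

If the venue's measured upload speed falls below this figure, dropping a rung from the ladder at the encoder is usually safer than letting the connection saturate.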
Should transcoding be required, many products offer efficient cloud-based transcoding with minimal generational loss. Doing this properly requires a generous amount of server resources, which the CDN must build into the streaming server architecture.
The important point is to ensure that the streaming server can deliver a single source signal, encoded into many bitrates, to multiple devices, supporting delivery to iOS devices (HLS format) and Flash players (RTMP format), for example. Similarly, multiple audio bitrates should be supported using the same technology, allowing the CDN to deliver MP3 as well as higher-quality audio streams such as HE-AAC.
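On the HLS side, those quality levels are advertised to players through a master playlist listing one variant stream per bitrate. A minimal sketch (the variant paths and bandwidth figures are illustrative):

```python
def master_playlist(renditions):
    """Build a minimal HLS master playlist pointing players at the
    same source encoded at several bitrates. `renditions` is a list
    of (variant_uri, total_bandwidth_bps) pairs."""
    lines = ["#EXTM3U"]
    for uri, bps in renditions:
        lines.append(f"#EXT-X-STREAM-INF:BANDWIDTH={bps}")
        lines.append(uri)
    return "\n".join(lines) + "\n"

playlist = master_playlist([("high/index.m3u8", 3128000),
                            ("mid/index.m3u8", 1628000),
                            ("low/index.m3u8", 828000)])
```

The player reads this playlist, then switches between the listed variants as its measured bandwidth changes, which is what lets one source signal serve everything from a phone on cellular to a desktop on fiber.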
The end customer should declare which devices they want to target in advance, allowing the CDN to employ industry best practices to determine which formats and encoding settings are most appropriate. This is especially critical if the CDN is targeting mobile consumer devices in addition to traditional desktop and laptop targets.
There are a number of other variables to consider for a complete “quality-streaming” experience, from integrating mobile apps and advertisements into the streams to writing archives for on-demand viewing, and finally delivering quality reports that inform the end user of visitor statistics. But these basics will certainly help end users in the AV universe understand the techniques for streaming and improve upon execution.
Andy Jones is the director of sales engineering with StreamGuys.