Seamless Client Onboarding – Hardware and Software Synergy – interview with Kenneth Robinson

A crucial aspect of NETINT’s value proposition is its proactive and holistic customer support, from the pre-purchase phase to onboarding and the post-purchase journey. NETINT streamlines this transition with seamless hardware installation facilitated by compliance with U.2 and PCIe standards and intuitive software integration via tools like FFmpeg and GStreamer, and an SDK.

A recent conversation with Kenneth Robinson, NETINT’s Manager of Field Application Engineering, detailed how he and his team support NETINT customers through the buying, onboarding and implementation process and beyond. By way of background, Robinson joined NETINT in January 2023 and brings substantial expertise from his prior tenure at a video gateway development company. During the conversation, he described how his team’s adeptness with scripting and debugging simplifies and accelerates customer deployments.

The discussion also spotlights the efficiency of NETINT’s transcoder management, the growing use of GStreamer among NETINT customers due to its higher throughput when producing multiple outputs, and several strategic recommendations for potential server buyers. Robinson’s insights solidify NETINT’s reputation as a client-centric enterprise, leveraging both its technological prowess and dedicated human capital.

From Jan Ozer

This interview is with Kenneth Robinson, NETINT’s manager of field application engineering. We discussed how Kenneth and his team help get NETINT customers up and running, including hardware and software installation and the operation of software like GStreamer and FFmpeg.

Jan:
Kenneth, tell us a little bit about yourself. What’s your background, and how long have you been with NETINT?

Kenneth:
I’ve been with NETINT since January of this year (2023). Prior to that, I worked for a company that developed video gateways for big MSOs for installation in hotels and other uses. I ran a team of quality engineers and managed the support team there as well.

Jan:
So, you’re comfortable with video and video-related technologies?

Kenneth:
Oh yes. And familiar with a lot of different ways to deliver video, like streaming and multicast.

Jan:
What’s the typical skillset of your FAE team?

Kenneth:
They are software people. They understand software and debugging, and they write scripts to help customers test or debug different issues. They’re also very good communicators. They work with our customers to make sure that NETINT cards benefit them in the way that they are supposed to.

Jan:
What do you see as your role in the company?

Kenneth:
I see it as ensuring that our customers get the support they need in a timely manner and making sure the transition from their current transcoders to NETINT transcoders happens smoothly, quickly, and efficiently. And that any roadblocks are removed in a very timely manner for them.

Supporting New Customer Installations

Jan:
How’s the typical process work? Do you start when customers are evaluating NETINT products, or after they decide to purchase and deploy them?

Kenneth:
Both situations. Often the sales team will include me in a customer call to learn exactly how they want to use our products and to make sure we can deliver what they need. And then the other half is usually after a customer buys one of our products.

Jan:
How does that work? When a customer buys a product, what happens? It gets shipped, and they receive it. How do they get the software and documentation?

Kenneth:

We know they’ve received the product based on the tracking number. Then we’ll reach out to the customer and send links to our documentation portal with the software SDK. This has the installation guide, integration guides, application notes, and everything they need to install and get up and running. And then we’ll usually follow up every couple of weeks or so just to make sure the process is going smoothly.

But, if at any point the customer has a question, they can reach out to us, and we will be happy to help them.

Hardware Installation

Figure 1. NETINT offers products in two form factors, U.2 and PCIe.

Jan:
What’s the hardware installation like?

Kenneth:

So, the hardware is very simple. We have two form factors. We have the PCIe form factor, which is just like any network card or GPU that you just install. And then there’s the U.2 form factor, which is the same as a hard drive. So, there are no special tools or knowledge required; if you’ve worked on a computer before, you should be able to install either form factor.

 

Jan:
In the nine months you’ve been here, what types of incompatibilities have you seen with the servers in the field?

Kenneth:

We haven’t seen any incompatibilities. Our products have worked on every server that we’ve tried because we follow the different standards for the U.2 and PCIe form factors.

Software Installation and Operation

Figure 2. You can control all transcoders with FFmpeg, GStreamer, or the API (libxcoder).

Jan:
So, the hardware installation is straightforward. What’s the software installation like?

Kenneth:

The software is relatively easy. We work with FFmpeg and GStreamer, but our software code is not pushed into the repository. So, part of our SDK is a patch that you apply and then compile FFmpeg or GStreamer, though we have installation scripts that will automate that process for you. If you just want to run a quick test, the installation scripts are very good and will get you up and running in a matter of minutes.

We also have an API, so the customer can access the cards directly and not rely on FFmpeg or GStreamer.
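For readers who prefer the manual route Kenneth describes, the steps look roughly like the following sketch. The patch file name and configure options here are hypothetical placeholders; the SDK’s installation guide and bundled scripts provide the actual commands and automate these steps.

patch -p1 < netint_ffmpeg.patch      # apply the NETINT patch to an FFmpeg source tree (hypothetical file name)
./configure && make -j$(nproc)       # rebuild FFmpeg with the patched sources
sudo make install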

Jan:
If you install multiple cards, how does the software distribute jobs among those cards?

Kenneth:

There are two ways. You can specify the exact card you want to use as the encoder or decoder. Or, you can allow a resource manager to manage that, and it will send each job to whichever decoder or encoder has the capacity.

FFmpeg, GStreamer, or API?

Jan:
In terms of software control, what’s the typical customer doing? We’ve got GStreamer, FFmpeg, and the API. What percentage are using each alternative?

Kenneth:

The majority is FFmpeg and, after that, the API. Then there’s a small number that use GStreamer, although GStreamer is slowly getting more popular.

Jan:
Why is that?

Kenneth:

We found that when FFmpeg scales a stream to multiple outputs simultaneously, like when creating an encoding ladder, it would sometimes bottleneck. While the capacity was good, it wasn’t great. When we tried GStreamer, the capacity increased enough that it made sense to use GStreamer for that workflow.

Server vs. Individual Cards

Figure 3. NETINT offers two servers populated with ten Quadras or T408s.

Jan:
Let’s switch gears a bit. What’s your experience with the server? When would you advise someone to buy a server fully loaded with Quadras or T408s versus buying the cards and installing them themselves?

Kenneth:

If you need a custom architecture, like adding GPUs for cloud gaming, you should buy the cards and install them yourself. If you intend to perform high-volume file-based transcoding or live streaming, you should consider either server.

Jan:
So, if you’ve got a set application and you just want to get a device in and start working, the servers are a good option. If you’re going to customize your servers, buy the cards.

Kenneth:

Yes, that’s correct.

Jan:
That’s all I have. Thanks for taking the time today.

Kenneth:

Thanks for having me.

Watch on-demand: Symposium on Building Your Live Streaming Cloud

Cloud services are an effective way to begin live streaming, but once you reach a particular scale, you may realize that you’re paying too much and can save significant OPEX by deploying your own transcoding infrastructure. The question is, how to get started? 

The Build Your Own Live Streaming Cloud symposium was a huge hit, with many insights from industry insiders on how to build a live streaming cloud. Here are replays of the event. (For the best viewing experience, please watch from your desktop.)

From Cloud to Local Transcoding For Minimum Latency and Maximum Quality

Over the last ten years or so, most live productions have migrated towards a workflow that sends a contribution stream from the venue into the cloud for transcoding and delivery. For live events that need absolute minimum latency and maximum quality, it may be time to rethink that workflow, particularly if you’ve got multiple sharable inputs at the venue.

So says Bart Snoeks, Account & Partnership Director of THEO Technologies (“THEO”). By way of background, THEO invented and has commercially implemented the High-Efficiency Streaming Protocol (HESP), an adaptive HTTP-based video streaming protocol that enables sub-second end-to-end latency. You can see how HESP compares to other low latency protocols in the table shown in Figure 1 from the HESP Alliance website – the organization focused on promoting and further advancing HESP.

Figure 1. HESP compared to other low latency protocols.

THEO has productized HESP as a real-time streaming service called THEOlive, which targets applications like live sports and betting, casino igaming, live auctions, and other events that require high-quality video at exceptionally low latency with delivery at scale. For example, in the case of in-play betting, cutting latency from 8 to 10 seconds (HLS) to under one second expands the betting window during the critical period just before the event.

When streaming casino games, ultra-low latency promotes fluent interactions between the players and ensures that all players see the turn of the cards in real time. When latency is lower, players can bet more quickly, increasing the number of hands that can be played.

According to Snoeks, a live streaming workflow that sends a contribution stream to the cloud for transcoding will always increase latency and can degrade quality because re-transcoding is needed. It’s especially poorly suited for stadium venues with multiple camera locations that want to enhance the attendee experience with multiple live feeds. In those latency-critical use cases, you are actually adding network latency with a round trip to and from the cloud. Instead, it makes much more sense to create your encoding ladder and package on-site, pulling the streams directly from the origin to a private CDN for delivery.

Let’s take a step back and examine these two workflows.

Live Streaming Workflows

As stated at the top, most live-streaming productions encode a single contribution stream on-site and send that into the cloud for transcoding to a full ladder, packaging, and delivery. You see this workflow in Figure 2.

Figure 2. Encoding a contribution stream on-site to deliver to the cloud for transcoding, packaging, and delivery

This schema has multiple advantages. First, you’re sending a single stream to the cloud, lowering bandwidth requirements. Second, you’re centralizing your transcoding assets in a single location in the cloud, which typically enables better utilization.

According to Snoeks, however, this workflow will add 200 to 500 milliseconds of latency at a minimum, depending on the encoding speed, quality, and contribution protocol. In addition, though high-quality contribution encoders can minimize generational loss from the contribution stream, lower-quality transcoders can noticeably degrade the quality of the final output. You also need a contribution encoder for each camera, which can jack up hardware costs in high-volume igaming and similar applications.

Instead, for some specific use cases, you should consider the workflow shown in Figure 3. Here, you transcode on-site and send the full encoding ladder to a public CDN for external delivery and to a private CDN or equivalent for local viewing. This decreases latency to a minimum and produces absolute top quality as you avoid the additional transcoding step.

Figure 3. Encoding and packaging the encoding ladder on site and transmitting the streams to a public CDN for external viewers and a private CDN for local viewers.

This schema is particularly useful for venues that want to enhance the in-stadium experience with multiple camera feeds. Imagine a stock car race where an attendee only sees their favorite driver on the track once every minute or so. Encoding on-site might allow attendees to watch the camera view from inside that driver’s car with near real-time latency. It might let golf fans follow multiple groups while parked at a hole, or follow their favorite player around the course.

If you’re encoding input from many cameras, say in a casino or even racetrack environment, the cost of on-site encoding might be less than the cost of the individual contribution encoders. So, you get the best of all worlds: lower cost per stream, lower latency, higher quality, and a better in-person experience where applicable.

If you’re interested in learning about your transcoding options, check out our symposium Building Your Own Live Streaming Cloud, where you can hear from multiple technology experts discussing transcoding options like CPU-only, GPU, and ASIC-based transcoding and their respective costs, throughput, and density.

If you’re interested in learning more about HESP, THEO in general, or THEOlive, watch for an upcoming episode of Voices of Video, where I interview Pieter-Jan Speelman, CTO of THEO Technologies. We’ll discuss HESP’s history and evolution, the power of THEOlive real-time streaming technology, and how to use it in your live production stack. Make sure you don’t miss it!

Now ON-DEMAND: Symposium on Building Your Live Streaming Cloud

Get Free CAE on NETINT VPUs with Capped CRF

NETINT recently added capped CRF to the rate control mechanism across our Video Processing Unit (VPU) product lines. With the wide adoption of content-adaptive encoding techniques (CAE), constant rate factor (CRF) encoding with a bit rate cap gained popularity as a lightweight form of CAE to reduce the bitrate of easy-to-encode sequences, saving delivery bandwidth with constant video quality. It’s a mode that we expect many of our customers to use, and this document will explain what it is, how it works, and how to get the most use from the feature.

In addition to working with H.264, HEVC, and AV1 on the Quadra VPU line, capped CRF works with H.264 and HEVC on the T408 and T432 video transcoders. This document details how to encode with capped CRF using the H.264 and HEVC codecs on Quadra VPUs, though most application scenarios apply to all codecs across the NETINT VPU lines.

What is Capped CRF and How Does it Work?

Capped CRF is a bitrate control technique that combines constant rate factor (CRF) encoding with a bit rate cap. Multiple codecs and software encoders support it, including x264 and x265 within FFmpeg. In contrast to CBR and VBR encoding, which encode to a specified target bitrate (and ignore output quality), CRF encodes to a specified quality level and ignores the bitrate.

CRF values range from 0-51, with lower numbers delivering higher quality at higher bitrates (less savings) and higher CRF values delivering lower quality levels at lower bitrates (more bitrate savings). Many encoding engineers will utilize values spanning 21 to 23. Which is right for you? As you will read below, your desired quality and bitrate savings balance determines the best value for your use case.

For example, with the x264 codec, if you transcode to CRF 23, the encoder typically outputs a file with a VMAF quality of 93-95. If that file is a 4K60 soccer match, the bitrate might be 30 Mbps. If it’s a 1080p talking head, it might be 1.2 Mbps. Because CRF delivers a known quality level, it’s ideal for creating archival copies of videos. However, since there’s no bitrate control, in most instances, CRF alone is unusable for streaming delivery.

When you combine CRF with a bit rate cap, you get the best of both worlds: a bitrate reduction with consistent quality for easy-to-encode clips, and quality and bitrate similar to CBR for more complex clips.
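For reference, here is what a generic capped CRF command looks like with the software x264 codec in FFmpeg (not NETINT-specific): -crf sets the quality target, while -maxrate and -bufsize enforce the cap.

ffmpeg -i input.mp4 -c:v libx264 -crf 23 -maxrate 6M -bufsize 6M output.mp4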

Here’s how capped CRF could be used with the Quadra VPU:

ffmpeg -i input -c:v h264_ni_quadra_enc -xcoder-params "crf=23:vbvBufferSize=1000:bitrate=6000000" output

The relevant elements are:

  • crf=23 – sets the quality target at around 95 VMAF

  • vbvBufferSize=1000 – sets the VBV buffer to one second (1000 ms)

  • bitrate=6000000 – caps the bitrate at 6 Mbps.

This command would produce a file that targets close to 95 VMAF quality but, in all cases, peaks at around 6 Mbps.

For a simple-to-encode talking head clip, Quadra produced a file with an average bitrate of 1,274 kbps and a VMAF score of 95.14. Figure 1 shows this output in a program called Bitrate Viewer. Since the entire file is under the 6 Mbps cap, the CRF value controls the bitrate throughout.

Encoding this clip with Quadra using CBR at 6 Mbps produced a file with a bit rate of 5.4 Mbps and a VMAF score of 97.50. Multiple studies have found that VMAF scores above 95 are not perceptible by viewers, so the extra 2.36 VMAF points don’t improve the viewer’s quality of experience (QoE). In this case, capped CRF reduces your bandwidth cost by 76% without impacting QoE.
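To make the savings explicit, the arithmetic is (5,400 kbps − 1,274 kbps) ÷ 5,400 kbps ≈ 0.76, or roughly a 76% bitrate reduction for this clip.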

Figure 1. Capped CRF encoding a simple-to-encode video in Bitrate Viewer.

You see this in Figure 2, showing the capped CRF frame with a VMAF score of 94.73 on the left and the CBR frame with a VMAF score of 97.2 on the right. The video on the right has a bitrate more than 4 Mbps higher than the video on the left, but the viewer wouldn’t notice the difference.

Figure 2. Frames from the talking head clip. Capped CRF at 1.23 Mbps on the left, CBR at 5.4 Mbps on the right. No viewer would notice the difference.

Figure 3 shows capped CRF operation with a hard-to-encode American football clip. The average bitrate is 5900 kbps, and the VMAF score is 94.5. You see that the bitrate for most of the file is pushing against the 6 Mbps cap, which means that the cap is the controlling element. In the two regions where there are slight dips, the CRF setting controls the quality.

Figure 3. Capped CRF encoding a hard-to-encode video in Bitrate Viewer.

In contrast, the CBR encode of the football clip produced a bit rate of 6,013 kbps and a VMAF score of 94.73. Netflix has stated that most viewers won’t notice a VMAF differential under 6 points, so a viewer would not perceive the .25 VMAF delta between the CBR and capped CRF file. In this case, capped CRF reduced delivery bandwidth by about 2% without impacting QoE.

Of course, as shown in Figure 3, the two-minute segment tested was almost all high motion. The typical sports broadcast contains many lower-motion sequences, including some commercials, cutting to the broadcasters, or during timeouts and penalty calls. In most cases, you would expect many more dips like those shown in Figure 3 and more substantial savings.

So, the benefits of capped CRF are as follows:

  • You can use a single ladder for all your content, automatically saving bitrate on easy-to-encode clips and delivering the equivalent QoE on hard-to-encode clips.
  • Even if you modify your ladder by type of content, you should save bandwidth on easy-to-encode regions within all broadcasts without impacting QoE.
  • You get the benefit of CAE without the added integration complexity or extra technology licensing cost. Capped CRF is free across all NETINT VPU and video transcoder products.

Producing Capped CRF

Using the NETINT Quadra VPU series, the following command for H.264 capped CRF will optimize video quality and deliver a file or stream with a fully compliant VBV buffer. As noted previously, this command string, with the appropriate modification to the codec value, will work across the entire NETINT product line. For example, to output HEVC, change -c:v h264_ni_quadra_enc to -c:v h265_ni_quadra_enc.

Here’s the command string.

ffmpeg -y -i input.mp4 -c:v h264_ni_quadra_enc -xcoder-params "gopPresetIdx=5:RcEnable=0:crf=23:intraPeriod=120:lookAheadDepth=10:cuLevelRCEnable=1:vbvBufferSize=1000:bitrate=6000000:tolCtbRcInter=0:tolCtbRcIntra=0:zeroCopyMode=0" output.mp4

Here’s a brief explanation of the encoding-related switches.

  • -c:v h264_ni_quadra_enc -xcoder-params – Selects Quadra’s H.264 codec and introduces the codec parameters explained below.

  • gopPresetIdx=5 – this chooses the Group of Pictures (GOP) pattern, or the mixture of B-frames and P-frames within each GOP. You should be able to adjust this without impacting capped CRF performance.

  • RcEnable=0 – this disables rate control. You must use this setting to enable capped CRF.

  • crf=23 – this chooses the CRF value. You must include a CRF value within your command string to enable capped CRF.

  • intraPeriod=120 – This sets the GOP size to 120 frames (four seconds at 30 fps), which we used for all tests. You can adjust this setting to your normal target without impacting CRF operation.

  • lookAheadDepth=10 – This sets the lookahead to 10 frames. You can adjust this setting to your normal target without impacting CRF operation.

  • cuLevelRCEnable=1 – this enables coding unit-level rate control. Do not adjust this setting without verifying output quality and VBV compliance.

  • vbvBufferSize=1000 – This sets the VBV buffer size. You must set this to trigger capped CRF operation.

  • bitrate=6000000 – This sets the bitrate. You must set this to trigger capped CRF operation. You can adjust this setting to your target without impacting CRF operation.

  • tolCtbRcInter=0 – This defines the tolerance of CU-level rate control for P-frames and B-frames. Do not adjust this setting without verifying output quality and VBV compliance.

  • tolCtbRcIntra=0 – This sets the tolerance of CU level rate control for I-frames. Do not adjust this setting without verifying output quality and VBV compliance.

  • zeroCopyMode=0 – this enables or disables the libxcoder zero copy feature. Do not adjust this setting without verifying output quality and VBV compliance.

You can access additional information about these controls in the Quadra Integration and Programming Guide.

Choosing the CRF Value and Bitrate Cap – H.264

Deploying capped CRF involves two significant decisions: choosing the CRF value and setting the bitrate cap. Choosing the CRF value is the most critical decision, so let’s begin there.

Table 1 shows the bitrate and VMAF quality of ten files encoded with the H.264 codec using the CRF values shown with a 6 Mbps cap and using CBR encoding with a 6 Mbps cap. The table presents the easy-to-encode files on top, showing clip-specific results and the average value for the category. The Delta from CBR shows the bitrate and VMAF differential from the CBR score. Then the table does the same for hard-to-encode clips, showing clip-specific results and the average value for the category. The bottom two rows present the overall average bitrate and VMAF values and the overall savings and quality differential from CBR.

Table 1. CBR and capped CRF bitrates and VMAF scores for H.264 encoded clips.

As mentioned, with CRF, lower values produce higher quality. In the table, CRF 19 produces the highest quality (and lowest bitrate savings), and CRF 27 delivers the lowest quality (and highest bitrate savings). What’s the right CRF value? The one that delivers the target VMAF score for your typical clips for your target audience.

For the test clips shown, CRF 19 produces an average quality of well over 95; as mentioned above, VMAF scores beyond 95 aren’t perceivable by the average viewer, so the extra bandwidth needed to deliver these files is wasted. Premium services should choose CRF values between 21 and 23 to achieve top-rung quality of around 95 VMAF. These deliver more significant bandwidth savings than CRF 19 while preserving the desired quality level. In contrast, commodity services should experiment with higher values like 25-27 to deliver slightly lower VMAF scores while achieving more significant bandwidth savings.

What bitrate cap should you select? CRF sets quality, while the bitrate cap sets the budget. In most cases, you should consider using your existing cap. As we’ve seen, with easy-to-encode clips, capped CRF should deliver about the same quality of experience with the potential for substantial bitrate savings. For hard-to-encode clips, capped CRF should deliver the same QoE with the potential for some bitrate savings on easy-to-encode sections of your broadcast.

Note that identifying the optimal CRF value will vary according to the complexity of your video files, as well as frame rate, resolution, and bitrate cap. If you plan to implement capped CRF with Quadra or any encoder, you should run similar tests on your standard test clips using your encoding parameters and draw your own conclusions.
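If you want to run these quality measurements yourself, one common approach (assuming an FFmpeg build with libvmaf enabled) is to compare the encoded file against the source; both inputs must share the same resolution and frame rate, so scale the encoded file first if needed:

ffmpeg -i encoded.mp4 -i source.mp4 -lavfi libvmaf -f null -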

Now let’s examine capped CRF and HEVC.

Choosing the CRF Value and Bitrate Cap – HEVC

Table 2 shows the results of HEVC encodes using CBR at 4.5 Mbps and the specified CRF values with a cap of 4.5 Mbps. With these test clips and encoding parameters, Quadra’s HEVC CRF values behave much like their H.264 counterparts, with CRF values of 21-23 appropriate for premium services and 25-27 good settings for UGC content.

Table 2. CBR and capped CRF bitrates and VMAF scores for HEVC encoded clips.

Again, the cap is yours to set; we arbitrarily reduced the H.264 bitrate cap of 6 Mbps by 25% to determine the 4.5 Mbps cap for HEVC.
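For reference, the HEVC version of the earlier command simply swaps the encoder name and, per the above, lowers the cap to 4.5 Mbps:

ffmpeg -y -i input.mp4 -c:v h265_ni_quadra_enc -xcoder-params "gopPresetIdx=5:RcEnable=0:crf=23:intraPeriod=120:lookAheadDepth=10:cuLevelRCEnable=1:vbvBufferSize=1000:bitrate=4500000:tolCtbRcInter=0:tolCtbRcIntra=0:zeroCopyMode=0" output.mp4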

Capped CRF Performance

Note that as currently tested, capped CRF comes with a modest performance hit, as shown in Table 3. Specifically, in CBR mode, Quadra output twenty 1080p30 H.264-encoded streams. This dropped to sixteen using capped CRF, a reduction of 20%.

For HEVC, throughput dropped from twenty-three to eighteen 1080p30 streams, a reduction of about 22%. We performed all tests using CRF 21, with a 6 Mbps cap for H.264 and 4.5 Mbps for HEVC. Note that these are early days in the CRF implementation, and it may be that this performance delta is reduced or even eliminated over time.

Table 3. 1080p30 outputs produced using the techniques shown.

We installed the Quadra in a workstation powered by a 3.6 GHz AMD Ryzen 5 5600X 6-Core Processor running Ubuntu 18.04.6 LTS with 16 GB of RAM. As you can see in the table, we also tested output for the x264 codec in FFmpeg using the medium and veryfast presets, producing two and five 1080p30 outputs, respectively. For x265, we tested using the medium and ultrafast presets, and the workstation produced one and three 1080p30 streams, respectively.

Even at the reduced throughput, Quadra’s CRF output dwarfs the CPU-only output. When you consider that the NETINT Quadra Video Server packs ten Quadra VPUs into a single 1RU form factor, you get a sense of how VPUs offer unparalleled density and the industry’s lowest cost per stream and power consumption per stream.

Bandwidth is one of the most significant costs for all live-streaming productions. In many applications, capped CRF with the NETINT Quadra delivers a real opportunity to reduce bandwidth cost with no perceived impact on viewer quality of experience.

From Cloud to Control. Building Your Own Live Streaming Platform

Cloud services are an effective way to begin live streaming. Still, once you reach a particular scale, it’s common to realize that you’re paying too much and can save significant OPEX by deploying transcoding infrastructure yourself. The question is, how to get started?

NETINT’s Build Your Own Live Streaming Platform symposium gathers insights from the brightest engineers and game-changers in the live-video processing industry on how to build and deploy a live-streaming platform.

In just three hours, we’ll cover the following:

  • Hardware options for live transcoding and encoding to cut costs by as much as 80%.
  • Software options for producing, delivering, and playing your live video streams.
  • Co-location selection criteria to achieve cloud-like performance with on-premise affordability.

You’ll also hear from two engineers who will demystify the process of assembling a live-streaming facility, how they identified and solved key hurdles, along with real costs and performance data.

Cloud? Or your own hardware?

It’s clear to many that producing live streams via a public cloud like AWS can be vastly more expensive than owning your hardware. (You can learn more by reading “Cloud or On-Premises? The Streaming Dilemma” and “How to Slash CAPEX, OPEX, and Carbon Emissions Using the NETINT T408 Video Transcoder”). 

To quote serial entrepreneur David Hansson, who recently migrated two SaaS services from the cloud to on-premise, “Don’t let the entrenched cloud interests dazzle you into believing that running your own setup is too complicated. Everyone and their dog did it to get the internet off the ground, and it’s only gotten easier since.” 

For those who have only operated in the cloud, there’s fear of the unknown: fear of buying hardware transcoders, selecting the right software, and choosing the best colocation service. So, we decided to fight fear with education and host a symposium to educate streaming engineers on all these topics.

“Building Your Own Live Streaming Cloud” will uncover how owning your encoding stack can slash operating costs and boost performance with minimal CAPEX.

Learn to select the optimal transcoding hardware, transcoding and packaging software, and colocation facilities. We’ll also discuss strategies to reduce carbon emissions from your transcoding engine. 

This FREE virtual event takes place on August 17th, from 11:00 AM – 2:15 PM EST.

Five issues tackled by nine experts:

Transcoding Hardware Options:

Learn the pros and cons of CPU, GPU, and ASIC-based transcoding via detailed throughput and cost examples shared by Kenneth Robinson, Manager of Field Application Engineers at NETINT Technologies. Then Ilya Mikhaelis, Streaming Backend Tech Lead at Mayflower, will describe his company’s journey from CPU to GPU to ASICs, covering costs, power consumption, latency, and density metrics.

Software Options:

Jan Ozer from NETINT will identify the three categories of transcoding software: multimedia frameworks, media servers, and other tools. Then, you’ll hear from experts in each category, starting with Romain Bouqueau, founder of Motion Spell, who will discuss the capabilities of the GPAC multimedia framework. Barry Owen, Chief Solutions Architect at Wowza, will discuss Wowza Streaming Engine’s suitability for private clouds. Lastly, Adrian Roe, Director at Id3as, developer of Norsk, will demonstrate Norsk’s simple, scripting-based operation, and extensive production and transcoding features.

Housing Options:

Once you select your hardware and software, the next step is finding the right co-location facility to house your live streaming infrastructure. Kyle Faber, with experience in building Edgio’s video streaming infrastructure, will guide you through the essential factors to consider when choosing a co-location facility.

Minimizing the Environmental Impact:

As responsible streaming professionals, it’s essential to address the environmental impact of our operations. Barbara Lange, Secretariat of Greening of Streaming, will outline actionable steps video engineers can take to minimize power consumption when acquiring and deploying transcoding servers.

Pulling it All Together:

Stef van der Ziel, founder of live-streaming pioneer Jet-Stream, will share lessons learned from his experience in creating both Jet-Stream’s private cloud and cloud transcoding solutions for customers. In his closing talk, Stef will demystify the process of choosing hardware, software, and a hosting facility, bringing all the previous discussions together into a cohesive plan.

Full Agenda:

11:00 am. – 11:10 am EST

Introduction (10 minutes):
Mark Donnigan, Head of Strategic Marketing at NETINT Technologies
Welcome, overview, and what you will learn.

 

11:10 am. – 11:40 am EST

Choosing transcoding hardware (30 minutes):
Kenneth Robinson, Manager of Field Application Engineers at NETINT Technologies
You have three basic approaches to transcoding: CPU-only, GPU, and ASICs. Kenneth outlines the pros and cons of each approach with extensive throughput and CAPEX and OPEX examples for each.

 

11:40 am. – 12:00 pm EST

From CPU to GPU to ASIC: Our Transcoding Journey (20 minutes):
Ilya Mikhaelis, Streaming Backend Tech Lead at Mayflower
Charged with supporting very high-volume live transcoding operations, Ilya started with libx264 software transcoding, which consumed massive power but yielded low stream density per server. Then he experimented with GPUs and other hardware and ultimately transitioned to an ASIC-based solution with much lower power consumption and much higher stream density per server. Ilya will detail the costs, power consumption, and density of all options, providing both data and an invaluable evaluation framework.

 

12:00 pm. – 12:10 pm EST

Choosing your live production software (10 minutes): 
Jan Ozer, Senior Director of Video Technology at NETINT Technologies
The core of every live streaming system is transcoding and packaging software. This comes in many shapes and sizes, from open-source software like FFmpeg and GPAC, to streaming servers like Wowza, and production systems like Norsk. Jan discusses these multiple options so you can cohesively and affordably build your own live-streaming ecosystem.

 

12:10 pm. – 1:10 pm EST

Speed Round (60 minutes):
20-minute presentations from GPAC, Wowza, and NORSK.
Speakers from GPAC, Wowza, and NORSK discussing the features, functions, operational paradigms, and cost structure of their live software offering.

Speakers include:

  • Adrian Roe, CEO at id3as, Product: Norsk, Title: Make Live Easy with NORSK SDK
  • Romain Bouqueau, Founder and CEO, Motion Spell (home for GPAC Licensing), Product: GPAC, Title of Talk: Deploying GPAC for Transcoding and Packaging
  • Barry Owen, Chief Solutions Architect at Wowza, Title of Talk: Start Streaming in Minutes with Wowza Streaming Engine



1:10 pm. – 1:40 pm EST

Choosing a co-location facility (30 minutes): 
Kyle Faber, Senior Director of Product Management at Edgio.
Once you’ve chosen your hardware and software, you need a place to install them. If you don’t have your own connected data center, you may consider a colocation facility. In his talk, Kyle addresses the key factors to consider when choosing a co-location facility for your live streaming infrastructure.

 

1:40 pm. – 1:55 pm EST

How to Greenify Your Encoding Stack (15 minutes):
Barbara Lange, Secretariat of Greening of Streaming.
Learn how video streaming companies can work to significantly reduce their energy footprint and contribute to a greener streaming industry. Implement hardware and infrastructure optimization using immersion cooling and data center design improvements to maximize energy efficiency in your streaming infrastructure.

 

1:55 pm. – 2:15 pm EST

Closing Keynote (20 minutes):
Stef van der Ziel, Founder Jet-Stream
Jet-Stream has delivered streaming solutions since its launch in 1994 and offers its own live streaming platform. One focus has been creating custom transcoding solutions for customers seeking to create their own private cloud for various applications. In his closing talk, Stef will demystify the process of choosing hardware, software, and a hosting facility and wrap a pretty bow around all previous presentations.

Co-location for Optimized, Sustainable Live Streaming Success

Choosing a co-location facility

If you decide to buy and run your transcoding servers versus a public cloud, you must choose where to host the servers. If you have a well-connected data center, that’s an option. But if you don’t, you’ll want to consider a co-location facility or co-lo.

A co-location facility is a data center that rents space to third parties for servers and other computing hardware. This rented space typically includes the physical area for the hardware (often measured in rack units or cabinets) and the necessary power, cooling, and security.

While prices vary greatly, in the US, you can expect to pay between $50 – $200 per month per RU, with prices ranging from $60 – $250 per month per RU in Europe, $80 – $300 per month per RU in South America, and $70 – $280 per month per RU in Asia.
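As an illustrative estimate only (your quotes will vary), a ten-RU deployment at a mid-range US price of $125 per RU per month works out to roughly 10 × $125 = $1,250 per month, or about $15,000 per year, before bandwidth and any managed-services charges.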

Co-location facilities will provide a high-bandwidth internet connection, redundant power supplies, and sophisticated cooling systems to ensure optimal performance and uptime for hosted equipment. They also include robust physical security measures, including surveillance cameras, biometric access controls, and round-the-clock security personnel.

At a high level, businesses use co-location facilities to leverage economies of scale they couldn’t achieve on their own. By sharing the infrastructure costs with other tenants, companies can access high-level data center capabilities without a significant upfront investment in building and maintaining their facility.

Choosing a Co-lo for Live Streaming

Choosing a co-lo facility for any use involves many factors. However, live streaming demands require a focus on a few specific capabilities. We discuss these below to help you make an informed decision and maximize the efficiency and cost-effectiveness of your live-streaming operations.

Network Infrastructure and Connectivity

Live streaming requires high-performance and reliable network connections. If you’re using a particular content delivery network, ensure the link to the CDN is high performing. Beyond this, consider a co-lo with multiple (and redundant) high-speed connections to multiple top-tier telecom and cloud providers, which can ensure your live stream remains stable, even if one of the connections has issues.

Multiple content distribution providers can also reduce costs by enabling competitive pricing. If you need to connect to a particular cloud provider, perhaps for content management, analytics, or other services, make sure these connections are also available.

Geographic Location and Service

Choosing the best location or locations is a delicate balance. From a pure quality of experience perspective, facilities closer to your target audience can reduce latency and ensure a smoother streaming experience. However, during your launch, cost considerations may dictate a single centralized location that you can supplement over time with edge servers near heavy concentrations of viewers.

During the start-up phase and any expansion, you may need access to the co-lo facility to update or otherwise service existing servers and install new ones. That’s simpler to perform when the facility is closer to your IT personnel.

If circumstances dictate choosing a facility far from your IT staff, consider choosing a provider with the necessary managed services. While the services offered will vary considerably among the different providers, most locations provide hardware deployment and management services, which should cover you for expansion and maintenance.

Similarly, live streaming operations usually run round-the-clock, so you need a facility that offers 24/7 technical support. A highly responsive, skilled, and knowledgeable support team can be crucial in resolving any unexpected issues quickly and efficiently.

Scalability

Your current needs may be modest, but your infrastructure needs to scale as your audience grows. The chosen co-lo facility (or facilities) should have ample space and resources to accommodate future growth and expansion. Check whether they have flexible plans allowing upgrades and scalability as needed.

Redundancy and Disaster Recovery

In live streaming, downtime is unacceptable. For facilities in volatile coastal or mountain regions, check for guarantees that the data center can withstand specific types of disasters, like floods and hurricanes.

When disaster strikes, the co-location facility should have redundant power supplies, backup generators, and efficient cooling systems to prevent potential hardware failures. Check for procedures to protect equipment, backup data, and other steps to minimize the risk and duration of loss of service. For example, some facilities offer disaster recovery services to help customers restore disrupted environments. Walk through the various scenarios that could impact your service and ensure that the providers you consider have plans to minimize disruption and get you up and running as quickly as possible.

Security and Compliance

Physical and digital security should be a primary concern, particularly if you’re streaming third-party premium content that must remain protected. Ensure the facility uses modern security measures like CCTV, biometric access, fire suppression systems, and 24/7 on-site staff. Digital security should include robust firewalls, DDoS mitigation services, and other necessary precautions.

Environment Sustainability

An essential requirement for most companies today is environmental sustainability. ASIC-based transcoding is the most power-efficient of all transcoding alternatives. We believe that all companies should work to reduce their carbon footprints. Accordingly, choosing a co-location facility committed to energy efficiency and renewable energy sources will lower your energy costs and align with your company’s environmental goals.

Remember, the co-location facility is an extension of your live-streaming business. With the proper infrastructure, you can ensure high-quality, reliable live streams that satisfy your audience and grow your business. Take the time to visit potential facilities, ask questions, and thoroughly evaluate before deciding.


Unveiling the Quadra Server: The Epitome of Power and Scalability

The Quadra Server review by Jan Ozer from NETINT Technologies

Streaming engineers face constant pressure to produce more streams at a lower cost per stream and reduced power consumption. However, those considering new transcoding technologies need a solution that integrates with their existing workflows while delivering the quality and flexibility of software with the cost efficiency of ASIC-based hardware.

If this sounds like you, the US $21,000 NETINT Quadra Video Server could be the ideal solution. Combining the Supermicro 1114S-WN10RT server, powered by an AMD EPYC 7543P CPU, with ten NETINT Quadra T1U Video Processing Units (VPUs), the Quadra server is a powerhouse. It outputs H.264, HEVC, and AV1 streams at normal or low latency, and you can control operation via FFmpeg, GStreamer, or a low-level API. This makes the server a drop-in replacement for a traditional FFmpeg-based software or GPU-based encoding stack.

As you’ll see below, the 1RU form factor server can output up to 20 8Kp30 streams, 80 4Kp30 streams, 320 1080p30 streams, or 640 720p30 streams for live and interactive video streaming applications. For ABR production, the server can output over 120 encoding ladders in H.264, HEVC, and AV1 formats. This unparalleled density enables video engineers to greatly expand capacity while shrinking the number of required servers and the associated power bills.

I’ll start this review with a technical description of the server and transcoding hardware. Then we’ll review some performance results for one-to-one streaming and H.264, HEVC, and AV1 ladder generation and finish with a look at the Quadra server’s AI-based features and output.

Figure 1. The Quadra Video Server powered by the Codensity G5 ASIC.

Hardware Specs - The Quadra Server

The NETINT Quadra Video Server uses the Supermicro 1114S-WN10RT server platform with a 32-core AMD EPYC 7543P CPU running Ubuntu 20.04.05 LTS. The Quadra server ships with 128 GB of DDR4-3200 RAM, a 400 GB M.2 SSD, three PCIe slots, and ten NVMe slots that house the Quadra T1U VPUs. NETINT also offers the Quadra server with two other CPUs: the 64-core AMD EPYC 7713P processor ($24,000) for more demanding applications and the economical 8-core AMD EPYC 7232P processor ($19,000) for pure transcoding applications that may not require a 32-core CPU.

Supermicro is a leading server and storage vendor that designs, develops, and manufactures primarily in the United States. Supermicro adheres to high-quality standards, with a quality management system certified to the ISO 9001:2015 and ISO 13485:2016 standards, and an environmental management system certified to the ISO 14001:2015 standard. Supermicro is also a leader in green computing and reducing data center footprints (see the white paper Green Computing: Top Ten Best Practices for a Green Data Center). As you’ll see below, this focus has resulted in an extremely power-efficient server to house the NETINT Quadra VPUs.


Hardware Specs – Quadra VPUs

The Quadra T1U VPUs are powered by the NETINT Codensity G5 ASIC and packaged in a U.2 form factor; they plug into the server’s NVMe slots and communicate via the ultra-high-bandwidth PCIe 4.0 bus. Quadra VPUs can decode H.264, HEVC, and VP9 inputs and encode into the H.264, HEVC, and AV1 standards.

Beyond transcoding, Quadra VPUs house 2D processing engines that can crop, pad, and scale video, and perform video overlay and YUV/RGB conversion, reducing the load on the host CPU and increasing overall throughput. These engines can perform xStack operations in hardware, making the Quadra server ideal for conferencing and security applications that combine multiple feeds into a multi-pane output mosaic window.

Each Quadra T1U in the Quadra server includes a 15 TOPS Deep Neural Network Inference Engine that can support models trained with all major deep learning frameworks, including Caffe, TensorFlow, TensorFlow Lite, Keras, Darknet, PyTorch, and ONNX. NETINT supplies several reference models, including a facial detection model that uses region of interest encoding to improve facial quality on security and other highly compressed streams. Another model provides background removal for conferencing applications.

Operational Overview

We tested the Quadra server with FFmpeg and GStreamer. Operationally, both GStreamer and FFmpeg communicate with the libavcodec layer that functions between the Quadra NVMe interface and the FFmpeg/GStreamer software layers. This allows existing FFmpeg and GStreamer-based transcoding applications to control server operation with minimal changes.

Figure 2. The software architecture for controlling the server.
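To give a sense of what “minimal changes” means in practice, here is a hedged sketch of how an existing software-only FFmpeg command maps to the Quadra encoder; encoder-specific settings such as bitrate caps and GOP structure are passed via -xcoder-params, as shown in the capped CRF command earlier.

ffmpeg -i input.mp4 -c:v libx264 -b:v 6M output.mp4
ffmpeg -i input.mp4 -c:v h264_ni_quadra_enc -xcoder-params "bitrate=6000000" output.mp4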

To allocate jobs to the ten Quadra T1U VPUs, the Quadra device driver software includes a resource management module that tracks Quadra capacity and usage load to present inventory and status on available resources and enable resource distribution. There are several modes of operation, including auto, which automatically distributes the work among the available VPUs.

Alternatively, you can manually assign decoding and encoding tasks to different Quadra VPUs in the command line or application and even control which streams are decoded by the host CPU or a Quadra. With these and similar controls, you can most efficiently balance the overall transcoding load between the Quadra and host CPU and maximize throughput. We used auto distribution for all tests.
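As a purely hypothetical illustration of the manual assignment described above (the device-selection parameter name below is a placeholder; the Quadra Integration and Programming Guide documents the actual syntax), pinning a job to a specific VPU rather than letting the resource manager choose might look like:

ffmpeg -i input.mp4 -c:v h264_ni_quadra_enc -xcoder-params "devid=2" output.mp4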

We tested running FFmpeg v 5.2.3 and GStreamer version 1.18 (with FFmpeg v 4.3.1), and with Quadra release 3.2.0. As you’ll see, we weren’t able to complete all tests in all modes with both software programs, so we presented the results we were able to complete.

In all tests, we configured the Quadra VPUs for maximum throughput as opposed to maximum quality. You can read about the configuration options and their impact on output quality and performance in Benchmarking Hardware Transcoder Performance. While quality will vary with each video and encoding configuration, the configuration used should produce quality at least equal to the veryfast x264 and x265 presets, with quality up to the slow presets available in configurations that optimize quality over throughput.

We tested multiple facets of system performance. The first series of tests involved a single stream in and single stream out, either at the same resolution as the incoming stream or scaled down and output at a lower resolution. Many applications use this mode of operation, including gaming, gambling, and auctions.

The second use case is ABR distribution, where a single input stream is transcoded to a full encoding ladder. Here we supplemented the results with software-only transcodes for comparison purposes. To assess AI-related throughput, we tested region-of-interest transcoding and background removal.

In most modes, we tested normal and low-latency performance. To simulate live streaming and minimize file I/O as a drag on system performance, we retrieved the source file from a RAM drive on the Quadra server and delivered the encoded file to RAM.
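For readers who want to reproduce this methodology, a RAM drive on Ubuntu can be created with tmpfs; the mount point and size below are illustrative rather than the exact configuration we used.

sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=16G tmpfs /mnt/ramdisk
cp source.mp4 /mnt/ramdisk/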

Same-Resolution Transcoding

Table 1 shows transcoding results for 8K, 4K, 1080p, and 720p in latency-tolerant and low-delay modes. Each number represents how many full frame rate outputs the system produced at that configuration.

These results are most relevant for interactive gambling and similar applications that input a single stream, transcode the stream at full resolution, and stream it out. You see that 8K streaming is not available in the AV1 format and that, at 8K, H.264 and HEVC are not available in low-latency mode with either program. Interestingly, FFmpeg outperformed GStreamer at this resolution, while the reverse was true at 1080p.

4K and 720p results were consistent for all input and output codecs and for normal and low delay modes. All output numbers are impressive, but the 640 720p streams for AV1, H.264, or HEVC is remarkable density for a 1RU rack server.

At 1080p there are minor output differences between normal and low-delay mode and the different codecs, though the codec-related differences aren’t that substantial. Interestingly, HEVC throughput is slightly higher than H.264, with AV1 about 16% behind HEVC.

Table 1. Same resolution transcoding results.

Table 2 shows a collection of maximum data points (worst case) from the transcoding results presented in Table 1. As you can see, both max CPU and power consumption track upwards with the number of streams produced. Max latency (decode plus encode) in normal latency mode tracks downward with the stream resolution, becoming quite modest at 720p. Max latency (decode plus encode) in low-delay mode starts and stays under 30.9 milliseconds, which is less than a single frame at 30 fps.

Table 2. Maximum CPU, power consumption, and latency data for pure transcoding.

As between FFmpeg and GStreamer, the latter proved more CPU and power efficient than the former in both normal and low-delay modes. For example, in all tests, GStreamer’s CPU utilization was less than half of FFmpeg’s, though the power consumption delta was generally under 20%.

At 8K and 4K resolutions, the latency reported was about even between the two programs, but at the lower resolutions in low-delay mode, GStreamer’s latency was often half that of FFmpeg. You can see an example of these two observations in Table 3, reporting 720p HEVC input transcoded to HEVC output. Though the throughput was identical, GStreamer used much less energy and produced much lower latency. As you’ll see in the next section, this dynamic stayed true in the transcoding-with-scaling tests, making GStreamer the superior app for applications involving same-resolution transcoding and transcoding with scaling.

Table 3. GStreamer was much more CPU and power-efficient and delivered substantially lower latency than FFmpeg in these same-resolution transcode tests.

Transcoding and Scaling

Table 4 shows transcoding-while-scaling results, first 8K input to 4K output, then 4K to 1080p, and lastly 1080p to 720p. If you compare Table 4 with Table 1, you’ll see that performance tracks the input resolution, not the output, which makes sense because decoding is a separate operation that involves its own hardware limits.

Table 4. Transcoding while scaling results.

As the Quadra VPUs perform scaling on-board, there was no drop in throughput with the scaling related tests; rather, there was a slight increase in 8K > 4K and 4K > 1080p outputs over the same resolution transcoding reported in Table 1. In terms of throughput, the results were consistent between the codecs and software programs.

Table 5 shows the max CPU and power usage for all the transcodes in Table 4, which increased somewhat from the low-quantity high-resolution transcodes to the high-quantity low-resolution transcodes but was well within the performance envelope for this 32-core server.

The Max latency for all normal encodes was relatively consistent between five and six frames. With low delay engaged, 8K > 4K latency didn’t drop that significantly, though you’d assume that 8K to 4K transcodes are uncommon. Latency dropped to below a single frame in the two lower resolution transcodes.

Table 5. Maximum CPU, power consumption, and latency data for transcoding while scaling.

As between FFmpeg and GStreamer, we saw the same dynamic as with full-resolution transcodes; in most tests, GStreamer consumed significantly less power and produced sharply lower latency. You can see an example of this in Table 6, reporting the results of 1080p HEVC input transcoded to AV1 at 720p.

Table 6. GStreamer was much more CPU- and power-efficient and delivered much lower latency than FFmpeg in these scale-then-transcode tests.

Encoding Ladder Testing

Table 7 shows the results of full ladder testing, with CPU, latency, and power consumption shown alongside the output instances. Note that we tested a five-rung ladder for H.264 and four-rung ladders for HEVC and AV1. We didn’t test 4K H.264 output because few services would deploy this configuration. We also didn’t test with GStreamer because NETINT’s current GStreamer implementation can’t use Quadra’s internal scalers when producing more than a single file, an issue that the NETINT engineering team will resolve soon. Finally, as you can see, low-delay mode wasn’t available for 4K testing.

With this fine print behind us, as with the single-file testing, throughput was impressive. The ability to deliver up to 140 four-rung HEVC ladders from a single 1RU server, in either normal or low-latency mode, is remarkable.

Table 7: Encoding ladder throughput. 

For comparison purposes, we produced the equivalent encoding ladders on the same server using software-only encoding with FFmpeg and the x264, x265, and SVT-AV1 codecs. To approximate the throughput settings used for Quadra, we used the ultrafast preset for x264 and x265, and preset eleven for SVT-AV1. You can see the results in Table 8.
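For reference, the commands below show the general shape of those software-only encodes. This is a sketch rather than the exact test invocations; the source file, bitrates, and output names are placeholders.

```bash
# Software-only encodes using the presets named above (values illustrative).
ffmpeg -i source_1080p.mp4 -c:v libx264   -preset ultrafast -b:v 4500k -an out_h264.mp4
ffmpeg -i source_1080p.mp4 -c:v libx265   -preset ultrafast -b:v 3500k -an out_hevc.mp4
ffmpeg -i source_1080p.mp4 -c:v libsvtav1 -preset 11        -b:v 2500k -an out_av1.mp4
```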

Note that these numbers over-represent software-based output, since no engineer would produce a live stream at CPU utilization over 60–65%; a sudden spike in CPU usage would crash all the streams. Not only is CPU utilization much lower for the Quadra-driven encodes, minimizing the risk of exceeding CPU capacity, but Quadra-based transcoding is also much more deterministic than CPU-based transcoding, so CPU requirements don’t typically change midstream.

All that said, Quadra proved much more efficient than software-based encoding for all codecs, particularly HEVC and AV1. In Table 8, the Multiple column shows the number of servers required to produce the same output as the Quadra server, plus the power consumed by all those servers. For H.264, you would need six servers instead of a single Quadra server to produce the 120 instances, and power costs would be nearly six times higher. That’s with each CPU-only server running at 98.3% CPU utilization; running at a more reasonable 60% utilization would translate to ten servers and roughly 4,287 watts.

Table 8. Ladders, CPU utilization, and power consumed for CPU-only transcoding.

Even without factoring in the 60% CPU-utilization limit, the comparison becomes untenable with HEVC and AV1. As the data shows, CPU-based transcoding simply can’t keep up with these more complex codecs, while the ASIC-driven Quadra remains relatively consistent.

AI-Related Functions

The next two tables benchmark AI-related functions: first region-of-interest encoding, then background removal. Briefly, region-of-interest encoding uses AI to detect faces in a stream and then increases the bits assigned to those faces to improve quality. This is useful for surveillance video or any low-bitrate environment where facial quality is important.

We tested 1080p AVC input and output with FFmpeg only, and the system delivered sixty outputs in both normal and low-delay modes, with very modest CPU utilization and power consumption. For more on Quadra’s AI-related functions, and for an example of the region of interest filter, see an Introduction to AI Processing on Quadra.

Table 9. Throughput for Region of Interest transcoding via Artificial Intelligence.

Table 10 shows 1080p input/output using the AVC codec with background removal, which is useful in conferencing and other applications that composite participants into a virtual environment (see Figure 3). This task involves considerably more CPU but delivers slightly greater throughput.

Table 10. Throughput for background removal and transcoding via Artificial Intelligence.

As you can read about in the Introduction to AI Processing on Quadra, Quadra comes with these and other AI-based applications and can deploy AI-based models developed in most machine learning programs. Over time, AI-based operations will become increasingly integral to video transcoding functions, and the Quadra Video Server provides a future-proof platform for that integration.

Figure 3. Compositing participants in a virtual environment with background removal

Conclusion

While there’s a compelling case for ASIC-based transcoding even for H.264-only production, these tests show that as applications migrate to more complex codecs like HEVC and AV1, CPU-based transcoding becomes economically and environmentally untenable. Beyond pure transcoding, if the ChatGPT era has proven anything, it’s that AI-based, transcoding-related functions will become mainstream much sooner than anyone might have thought. With highly efficient ASIC-based transcoding hardware and AI engines, the Quadra Video Server checks all the boxes and deserves strong consideration for any high-volume live streaming application.

Build Your Own Streaming Infrastructure – Software


My assumption is that you’re currently using a cloud-based service like AWS for your live streaming and are seeking to reduce costs by buying your own transcoding hardware, installing the necessary software, and hosting the server on-premises or in a co-location facility. This article covers the software side.

To begin, let’s acknowledge that AWS and other cloud services have created a well-featured and highly integrated ecosystem for live streaming and distribution. The downside is the cost.

To illustrate the potential savings, I’ll refer to this article, which compared the cost of producing 21 H.264 ladders and 27 HEVC ladders via AWS MediaLive and by encoding with NETINT’s recently launched Logan Video Server. As you can see in the table, MediaLive costs around $400K for H.264 and $1.8 million for HEVC, as compared to $11,140 in both cases for the co-located server.

Table 1. Five-year cost comparison: AWS MediaLive pricing compared to the NETINT server.

While there are less expensive options available inside and outside of AWS, whenever you pay for hardware by the minute or hour of production, you’re vastly overpaying as compared to owning your own hardware. Sure, you say, but it’s so easy compared to running your own hardware.

If that’s a concern, here are some comforting words from David Heinemeier Hansson, co-owner and CTO of software developer 37signals, the company behind the project management platform Basecamp and the email service Hey. Recently, Hansson wrote Why we’re leaving the cloud, a blog post detailing his company’s decision to do just that. Here’s the relevant quote.

Up until very recently, everyone ran their own servers, and much of the progress in tooling that enabled the cloud is available for your own machines as well. Don’t let the entrenched cloud interests dazzle you into believing that running your own setup is too complicated. Everyone and their dog did it to get the internet off the ground, and it’s only gotten easier since.

My wife has chihuahuas, and given their difficulties with potty training, I seriously doubt they could do it, but you get the point. To paraphrase FDR, all you have to fear is fear itself. The bottom line is that running your own live streaming service should cost relatively little CAPEX, will save significant OPEX, and won’t be nearly as challenging as you might fear.

Let’s look at your options for the software required to run your homegrown system.

Transcoding and Packaging Software

Figure 1 shows the minimum software and infrastructure needed for a live-streaming service. Presumably, you’ve already got the live production covered, and since AWS doesn’t offer a player, you have that piece addressed as well. You’ll need a content delivery network to deliver your streaming video, but you can continue to use CloudFront or another CDN. The software that you absolutely have to replace is the live transcoding and packaging component.

Here you have three options: multimedia frameworks, media servers, and “other.” Let’s discuss each in turn.

Multimedia Frameworks

Multimedia frameworks are software libraries, tools, and APIs that provide a set of functionalities and capabilities for multimedia processing, manipulation, and streaming. The best-known framework is FFmpeg, followed by GStreamer and GPAC, and all three are available as open source.

Figure 1. Netflix uses GPAC for its packaging, a significant technology endorsement for GPAC and for multimedia frameworks in general.

Multimedia frameworks excel in projects at both ends of the complexity spectrum. For simple projects, like transcoding an input stream to an encoding ladder, you can create a script that inputs the stream, transcodes, and hands the packaged output streams off to a CDN in a matter of minutes. You can use the script to process thousands of simultaneous jobs, all at no charge.
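As a concrete illustration of how simple the scripted approach can be, here is a minimal sketch of a single FFmpeg command that transcodes one live input into a three-rung H.264 ladder and packages it as HLS. The input URL, rung resolutions, bitrates, segment settings, and output path are placeholders, and the stock libx264 encoder stands in for whatever software or hardware encoder you actually deploy.

```bash
#!/usr/bin/env bash
# One live RTMP input in, a three-rung H.264 ladder out, packaged as HLS.
# All URLs, bitrates, and paths are illustrative placeholders.
ffmpeg -i rtmp://localhost/live/stream \
  -filter_complex "[0:v]split=3[a][b][c];[a]scale=1920:1080[v1];[b]scale=1280:720[v2];[c]scale=854:480[v3]" \
  -map "[v1]" -c:v:0 libx264 -b:v:0 5000k \
  -map "[v2]" -c:v:1 libx264 -b:v:1 3000k \
  -map "[v3]" -c:v:2 libx264 -b:v:2 1500k \
  -map a:0 -map a:0 -map a:0 -c:a aac -b:a 128k \
  -f hls -hls_time 4 -hls_list_size 6 -hls_flags delete_segments \
  -master_pl_name master.m3u8 \
  -var_stream_map "v:0,a:0 v:1,a:1 v:2,a:2" \
  /var/www/hls/stream_%v.m3u8
```

Point your CDN origin at the output directory and the master playlist, and the distribution side is handled; GStreamer and GPAC support similarly scripted workflows.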

At the other end of the spectrum, these frameworks also excel at complex jobs with idiosyncratic custom requirements that likely aren’t available in a server or commercial software product. The development, maintenance, and modification costs are considerable, but you get maximum feature flexibility if you’re willing to pay that cost.

What you don’t get with these tools is a user interface or simple configuration options – you start with a blank slate and must program in all desired features. What could be as simple as checking a checkbox in a streaming media server could require dozens or even thousands of lines of code in a multimedia framework.

Which takes us to streaming media servers.

Streaming Media Servers

The next category of products is streaming media servers, which includes Wowza Streaming Engine, Nimble Streamer, and two open-source servers, Red5 and Ant Media Server. These servers tend to excel for most productions in the middle of the complexity spectrum and offer multiple advantages over multimedia frameworks.

There are several reasons why you might choose a streaming server over a multimedia framework, starting with simplified setup and configuration. Most streaming servers provide out-of-the-box streaming solutions with pre-configured settings and management interfaces that simplify the setup and configuration process. While not all offer GUIs, those that don’t provide simple option selection via configuration files.

Figure 2. Wowza Streaming Engine is a highly regarded streaming server

As mentioned above, streaming servers often offer simpler access to advanced features that you’d have to craft by hand with a multimedia framework. They also offer better integration with third-party services like digital rights management (DRM) and content delivery networks. Between the simplified setup, easier access to features, and improved integration with other services, packaged servers can dramatically accelerate getting your live streaming service up and running.

Once you’re operational, you’ll appreciate management interfaces that monitor the health and performance of your streaming infrastructure, track viewer analytics, manage streaming workflows, and make real-time adjustments. If you’re in a dynamic demand environment, some streaming servers offer built-in scalability features and load balancing to spread the load over multiple hardware transcoding resources. You’d have to build all that by hand or with plug-ins if using a multimedia framework.

The two potential downsides of streaming servers are cost and customizability. You’ll have to pay a monthly fee for some versions of these servers, and you may find it complicated or nearly impossible to add what you might consider to be essential features.

Other Streaming-Capable Programs

Most companies building their own live-streaming infrastructures will implement either a multimedia framework or a streaming server, but there are other programs that incorporate the core encoding and packaging functions. One such program is Norsk from id3as. Norsk bills itself as “an SDK that enables developers to easily create amazing, dynamic live video workflows and deploy them at any scale.” As such, it combines both video production and streaming server-related functions.

You see this in Figure 3. The top portion shows that Norsk supports the typical codecs and packaging formats deployed by live-streaming producers. At the bottom of the figure, you see that Norsk also offers production-oriented features like multiple camera support, graphics and overlays, and transitions.

Figure 3. Norsk offers both production and server-related functions.

Interestingly, Norsk doesn’t have a GUI, instead offering a high-level API to simplify configuration and operation, with a Workflow Visualizer component to view the running state of the application. In this fashion, Norsk attempts to provide the configurability of multimedia frameworks with the ease of operation of scripting-driven streaming media servers.

Finding a program like Norsk that combines transcoding and packaging with other essential streaming-related functions makes a lot of sense; there’s one less vendor to onboard and one less product to learn and support. As remote production becomes more common, we expect more programs like Norsk to become available.

Those are your high-level options. If you’re interested in learning more about these and other programs that can drive encoding and packaging for your live transcoder, you should plan to attend our upcoming symposium; details will be available in the next couple of weeks.

What Can a VPU Do for You?


For cloud gaming, a VPU can deliver 200 simultaneous 720p30 game sessions from a single 2RU server.

When you encode using a Video Processing Unit (VPU) rather than the GPU’s built-in encoder, you can decrease your cost per concurrent user (CCU) by 90%, enabling profitability at a much lower subscription price. How is this technically feasible? Two technology enablers make it possible. First, extraordinarily capable encoding hardware, the VPU, dedicated to the task of high-quality video encoding and processing. And second, peer-to-peer direct memory access (DMA), which delivers video frames at the speed of memory rather than over the much slower NVMe bus between the GPU and VPU. Let’s discuss these in reverse order.

Peer-to-Peer Direct Memory Access (DMA)

Within a cloud gaming architecture, the primary role of the GPU is to render frames from the game engine output. These frames are then encoded into a standard codec that is easily decoded on a wide cross-section of devices. Generally, this is H.264 or HEVC, though AV1 is becoming of interest to those with a broader Android user base. Encoding on the GPU is efficient from a data transfer standpoint because the rendering and encoding occur on the same silicon die; there’s no transfer of the rendered YUV frame to a separate transcoder over the slower PCIe or NVMe busses. However, since encoding requires substantial GPU resources, it dramatically reduces the overall throughput of the system. Interestingly, it’s the encoder that is often at full capacity, and thus the bottleneck, not the rendering engine. Modern GPUs are built for general-purpose graphics operations, so more silicon real estate is devoted to rendering than to video encoding.

By installing a dedicated video encoder in the system and using traditional data transfer techniques, the host CPU can easily manage the transfer of YUV frames from the GPU to the transcoder. But as the number of concurrent game sessions increases, the probability of dropped frames or corrupted data makes this approach unusable.

NETINT, working with AMD, enabled peer-to-peer direct memory access (DMA) to overcome this limitation. Peer-to-peer DMA lets devices within a system exchange data directly in memory, allowing the GPU to send rendered frames straight to the VPU and preventing the bus from becoming clogged as the concurrent session count climbs above 48 720p streams.


The Benefits of Peer-to-Peer DMA

Peer-to-peer DMA delivers multiple benefits. First, by eliminating the need for CPU involvement in data transfers, peer-to-peer DMA significantly reduces latency, which translates to a more responsive and immersive gaming experience for end-users. NETINT VPUs feature latencies as low as 8ms in fully loaded and sustained operation.

In addition, peer-to-peer DMA relieves the CPU of the burden of managing inter-device data transfers. This frees up valuable CPU cycles, allowing the CPU to focus on other critical tasks, such as game logic and physics calculations, optimizing overall system performance and producing a smoother gaming experience.

By leveraging peer-to-peer communications, data can be transferred at greater speeds and efficiency than CPU-managed transfers. This improves productivity and scalability for cloud gaming production workflows.

These factors combine to produce higher throughput without the need for additional costly resources. This cost-effectiveness translates to improved return on investment (ROI) and a major competitive advantage.

Extraordinarily Capable VPUs

Peer-to-peer DMA has no value if the encoding hardware is not equally capable. With NETINT VPUs, that isn’t a concern.

The reference system that produces 200 720p30 cloud gaming sessions is built on the Supermicro AS-2015CS-TNR server platform with a single GPU and two Quadra T2A VPUs. This server supports AV1, HEVC, and H.264 video game streaming at up to 8K and 60 fps, though, as you would expect, simultaneous stream counts drop as you increase frame rate or resolution.

The Quadra T2A is the most capable VPU in the Quadra line and the world’s first dedicated hardware to support AV1. With its embedded AI and 2D engines, the Quadra T2A supports AI-enhanced video encoding, region-of-interest encoding, and content-adaptive encoding. Coupled with a P2P DMA-enabled GPU, the Quadra T2A allows cloud gaming providers to achieve unprecedented throughput with ultra-low latency.

The Quadra T2A is an AIC (HHHL) form-factor video processing unit with two Codensity G5 ASICs that operates in x86 or Arm-based servers and requires just 40 watts at maximum load. It enables cloud gaming platforms to transition from software-based or GPU-only encoding with up to a 40x reduction in total cost of ownership.

What Can A VPU Do For You?

It makes Cloud Gaming profitable, finally.

Peer-to-peer DMA is a game-changing technology that reduces latency and increases system throughput. When paired with an extraordinarily capable VPU like the NETINT Quadra T2A, you can now deliver an immersive gaming experience at a CCU cost that no competing architecture can match.

Key Cloud Gaming Concepts with Blacknut’s Olivier Avaro


Recently, our Mark Donnigan interviewed Olivier Avaro, the CEO of Blacknut, the world’s leading pure-player cloud gaming service. As an emerging market, cloud gaming is new to many, and the interview covered a comprehensive range of topics with clarity and conciseness. For this reason, we decided to summarize some of the key concepts and include them in this post. If you’d like to listen to the complete interview, and we recommend you do, click here. Otherwise, you can read a lightly edited summary of the key topics below.

For perspective, Avaro founded Blacknut in 2016, and the company offers consumers over seven hundred premium titles for a monthly subscription, with service available across Europe, Asia, and North America on a wide range of devices, including mobiles, set-top boxes, and smart TVs. Blacknut also distributes through ISPs, device manufacturers, OTT services, and media companies, offering a turnkey service, including infrastructure and games, that allows businesses to instantly offer their own cloud gaming service.

Here are the key points covered in the interview.

The basic cloud gaming architecture is simple.

The architecture of cloud gaming is simple. You take games, you put them on a server in the cloud, and you virtualize and stream them in the form of a video stream so that you don’t have to download the game on the client side. When you interact with the game, you send commands back to the server.

Of course, bandwidth needs to be sufficient, let’s say six megabits per second. Latency needs to be good, let’s say less than 80 milliseconds. And, of course, you need to have the right infrastructure on the server that can run games. This means a mixture of CPU, GPU, storage, and all this needs to work well.

But cost control is key.

We’ve passed the technology inflection point where the service becomes feasible. It’s technically feasible, and the experience is good enough for the mass market. Now, the issue is the unit economics: how much it costs to stream and deliver games efficiently so that the service is affordable for the mass market.

Public Cloud is great for proof of concept.

We started deploying the service based on the public cloud because this allowed us to test the different metrics, how people were playing the service, and how many hours. And this was actually very fast to launch and to scale… That’s great, but public clouds are quite expensive.

But you need your own infrastructure to become profitable.

So, to optimize the economics, we built what we call the hybrid cloud for cloud gaming, which is a combination of both public cloud and private cloud. We install our own servers based on GPUs, CPUs, and so on, so we can improve the overall performance and the unit economics of the system.

Cost per concurrent user (CCU) is the key metric.

The ultimate measure is the cost per concurrent user that you can achieve on a specific bill of materials. If you have a CPU-plus-GPU architecture, the game is to slice the GPU into pieces as dynamically and appropriately as possible so that you can run different games, and as many games as possible.

GPU-only architectures deliver high CCUs, which decreases profitability.

There are limits on how much you can slice the GPU and still be efficient, so there are limits to this architecture because it all relies on the GPU. We are investigating different architectures using a VPU, like NETINT’s, that offload the encoding and streaming of the video from the GPU so that we can increase the density.

VPU-augmented architectures decrease CCU by a factor of ten.

For some big games, because they rely much more on the GPU, you will probably not increase the density that much. But we think that, overall, we can probably gain a factor of ten on the number of games that you can run on this kind of architecture, going from a maximum of 20 or 24 games to running two hundred games.

Which radically increases profitability.

So, increasing the density by a factor of ten also means, of course, reducing the cost per CCU by a factor of ten. If you pay $1 currently, you will pay ten cents, and that makes a whole difference. Let’s assume basic gamers play 10 to 30 hours per month; at $1 per hour, that’s $10 to $30. At ten cents per hour, costs drop to $1 to $3, which makes the math work on a subscription of between 5 and 15 euros per month.

The secret sauce is peer-to-peer DMA.

[Author’s note: These comments, explaining how NETINT VPUs deliver a 10x performance advantage over GPUs, are from Mark Donnigan].

Anybody who understands basic server architecture might think, wait a second, isn’t there a bottleneck inside the machine? What NETINT did was enable peer-to-peer sharing via DMA (Direct Memory Access). The GPU outputs a rendered frame, and it’s transferred within memory so that the VPU can pick it up and encode it; there’s effectively zero latency because it all happens in the memory buffer.

5G is key to successful gameplay in emerging markets.

[Back to Olivier] What we’ve been doing with Ericsson is using 5G networks and defining specific characteristics of what is a slice in the 5G network. So, we can tune the 5G network to make it fit for gaming and to optimize the delivery of gaming with 5G.

So, we think that 5G is going to get much faster in those regions where the internet is not so great. We’ve been deploying the Blacknut service in Thailand, Singapore, Malaysia, and now in the Philippines. And this has allowed us to reach people in regions where there is no cable or fiber bandwidth.

Latency needs to be eighty milliseconds or less (much less for first-person shooter games).

You can get a reasonably good experience at 80 milliseconds for most games. But for first-person shooter games, you need to be close to frame accuracy, which is very difficult in cloud gaming. You need to go down to thirty milliseconds and lower, right?

That’s only feasible with the optimal network infrastructure.

And that’s only feasible if you have a network that allows for it. Because it’s not only about the encoding part, the server side, and the client side; it’s also about where the packets are going through the networks. You need to make sure that there is some form of CDN for cloud gaming in place that makes the experience optimal.

Edge servers reduce latency.

We are putting a server at the edge of the network. So, inside the carrier’s infrastructure, the latency is super optimized. So that’s one thing that is key for the service. We started with a standard architecture, with CPU and GPU. And now, with the current VPU architecture, we are putting whole servers consisting of AMD GPU and NETINT VPU. We build the whole package so that we put this in the infrastructure of the carrier, and we can deploy the Blacknut cloud gaming on top of it.

The best delivery resolution is device dependent.

The question is, again, the cost and the experience. Okay? Streaming 4K on a mobile device does not really make sense. The screen is smaller, so you can stream a lower resolution, and that’s sufficient. On a TV, you likely need a higher resolution. Even with the great upscaling available on most TV sets, we stream 720p on Samsung devices, and that’s super great, right? But of course, scaling up to 1080p will provide a much better experience. So, on TVs, and for the games that require it, I think we’re indeed streaming the service at about 1080p.

Frame rates must match game speed.

When playing a first-person shooter, if you have the choice and you cannot stream 1080p, you would probably stream 720p at 60 FPS rather than 1080p at 30 FPS. But if you have games with elaborate textures, where the resolution is more important, then maybe you will select 1080p at 30 FPS instead.

What we build is fully adaptable. Ultimately, you should not forget that there is a network in between. And even if technically you can stream 4K or 8K, the networks may not sustain it. Okay? And then you’ll have a worse experience streaming 4K than at 1080p 60 FPS resolution.

Revolutionizing Online Media Distribution and Delivery

Advancements in Streaming

Streaming technologies have revolutionized the digital media landscape, transforming how content is distributed and delivered to audiences worldwide. One pioneering figure in this field is Alex Zambelli, whose career at Microsoft has been closely intertwined with the rise of streaming as the dominant digital media distribution method. Zambelli’s work with NBC Sports, particularly during the 2008 Beijing Olympics and subsequent events, was pivotal in advancing online streaming capabilities and earning industry recognition. This article, based on Jan Ozer’s conversation with Alex during Voices of Video, explores Zambelli’s contributions to streaming technologies, the implementation of multi-view camera angles in Sunday Night Football, and key considerations in livestreaming from insights gained during Olympic events.

Evolution of Streaming Technologies

Alex Zambelli’s career at Microsoft has coincided with the transition from physical media to streaming as the dominant method of distributing digital media. Around 2007, streaming started gaining momentum, gradually overtaking CDs, DVDs, and Blu-rays. Zambelli’s focus on streaming technologies led him to work on Microsoft’s Silverlight, a competitor to Flash, which facilitated the creation of rich web experiences and premium media delivery, including digital rights management. This technology was a significant milestone in the evolution of streaming.

Zambelli’s collaboration with NBC Sports began with the 2008 Beijing Olympics, where they aimed to pioneer online streaming of all Olympics content. Initially, they utilized Windows Media and Silverlight, incorporating adaptive streaming capabilities. The subsequent transition to Microsoft’s Smooth Streaming technology for the 2010 Vancouver Olympics marked a significant advancement. This technology offered on-demand and live streams in high definition, providing viewers with an immersive and seamless experience. These groundbreaking endeavors earned Zambelli and the team recognition from the industry, including nominations for sports Emmys.

Multi-View Camera Angles in Sunday Night Football

The implementation of Smooth Streaming technology played a crucial role in enabling the seamless transition between camera angles in Sunday Night Football broadcasts. By utilizing a single manifest that contained all four camera angles, switching between views became as smooth as switching between bitrates in modern streaming protocols like DASH or HLS. This technology, developed by the broadcast team, allowed viewers to simultaneously watch multiple camera angles, enhancing the overall viewing experience.

Key Considerations in Livestreaming: Insights from Olympic Events

Livestreaming presents unique challenges compared to on-demand streaming due to its real-time nature. Issues such as packet loss, segment loss, blackouts, and ad insertions demand immediate attention and resolution. Unlike on-demand streaming, where there is some leeway to address content or delivery chain issues over time, livestreaming requires constant vigilance. Even a brief interruption or technical problem can significantly impact the viewer experience.

Successful livestreaming events often involve collaborative efforts from multiple companies, including Microsoft, NBC, Akamai, and iStreamPlanet. These events require dedicated teams ready to address and resolve any issues that arise in real time. The nature of livestreaming necessitates a higher level of focus and attention compared to on-demand streaming. It is crucial to prioritize and allocate sufficient resources to ensure the seamless execution of live events. The potential for unexpected issues or failures makes constant monitoring and immediate troubleshooting essential, as even a minor disruption can have significant consequences.

VOICES OF VIDEO
Scalable distribution in the age of DRM: Key Challenges and Implications.
Watch the full conversation on YouTube: https://youtu.be/s_afoa71muM

Evolution of Video Codecs and Streaming Protocols

The evolution of video codecs and streaming protocols has played a vital role in shaping the streaming landscape. In the early 2000s, the popular video codecs for streaming were VC-1 (supported by Silverlight) and H.264 (supported by Flash). However, the introduction of HTML5 posed challenges for streaming solutions, as the HTML specification lacked the necessary APIs to provide the required level of control and functionality for streaming.

Silverlight and Flash emerged as proprietary plugins that advanced streaming technology beyond what HTML could offer at the time. They provided opportunities to overcome HTML’s limitations and introduced features such as media stream sources and content protection (DRM) to enhance the streaming experience. Silverlight’s media stream source concept, which later influenced HTML’s media source extensions, allowed developers to handle their own segment downloading and parsing, passing the video and audio streams to a media buffer for decoding and rendering. Content protection was a crucial aspect addressed by Silverlight and Flash, as HTML lacked a robust solution for DRM.

Around 2011-2012, Silverlight and Flash were gradually phased out as HTML5 matured, offering the necessary APIs for implementing streaming protocols like DASH, HLS, and Smooth Streaming within the browser while incorporating DRM capabilities. HTML5 overcame its initial growing pains and established itself as the predominant platform for streaming. By 2014-2015, HTML5 had evolved sufficiently to support basic streaming functionalities and content protection with DRM.

Optimizing Encoding Quality and Cost

Achieving optimal encoding quality while considering cost is a crucial concern for content creators and distributors. At Warner Brothers Discovery, the x264 and x265 codecs are commonly used for transcoding purposes, employing the slow or slower presets to achieve higher quality outputs. This approach balances encoding cost with desired video quality.

Recent discussions within the organization have prompted exploration into the idea of customizing presets based on specific resolutions and content complexities. The focus is on optimizing encoding efficiency by adjusting presets according to the intricacy of the content and the resolution being processed. Different resolutions have varying encoding requirements, and applying the very slow preset to all resolutions may result in unnecessary computational overhead for lower resolutions. Similarly, content complexity plays a role in determining the appropriate preset, as not all content requires the very slow preset. Customizing presets based on resolution and content characteristics allows for more efficient allocation of computational resources.

The popularity and viewership of specific content also factor into the choice of preset. Content with a larger audience may benefit from the slower preset due to potential CDN savings resulting from improved video quality. On the other hand, smaller-scale content with fewer viewers may not necessitate the same level of complexity in encoding. Balancing encoding quality and cost requires thoughtful consideration of these factors.

Adaptive Encoding Ladders: Variations, Frame Rates, and Device Considerations

Adaptive encoding ladders play a crucial role in delivering content based on source resolution and frame rate. At Warner Brothers Discovery, these encoding ladders consist of approximately six to eight different variations, allowing flexibility in content delivery. The source resolution determines the stopping point within the UHD ladder, minimizing the need for multiple permutations of the ladders themselves.

Variations in frame rates necessitate different encoding ladders. The introduction of high frame rates, especially with reality TV content, requires separate encoding ladders to preserve the temporal resolution. Encoding ladders also differ for SDR and HDR content, with distinctions made between HDR10 and Dolby Vision 5, offering specific encoding settings for each.

While currently the same encoding ladders are used for all devices, specific subsets of the ladder may be delivered to certain devices to accommodate their capabilities. Device differentiation is particularly important for high frame rates or resolutions above 1080p. By intentionally capping the manifest delivered to devices that cannot handle certain capabilities, compatibility and optimal viewing experiences can be ensured. Differentiating encoding ladders for various devices is essential for maintaining consistent quality across different devices.

VBR Control, Per-Title Encoding, and DRM Considerations in Video Encoding

Video encoding involves crucial considerations such as VBR control, per-title encoding, and DRM integration. At Warner Brothers Discovery, the x264 and x265 codecs employ a CRF (Constant Rate Factor) rate control with a bitrate and buffer cap for VBR (Variable Bit Rate) encoding. This approach ensures control over codec levels, peak rates, and overall encoding quality.

VBR control is achieved by using the VBV (Video Buffering Verifier) buffer size and VBV max rate parameters. These parameters cap the delivered bitrate of the video, while CRF brings the average bitrate below the specified max rate in most cases. This method enables per-title encoding, achieving CDN savings without compromising quality. Differentiating encoding ladders based on resolutions, frame rates, and HDR formats is essential to conform to content licensing agreements and compatibility requirements.
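As a concrete illustration of this capped-CRF approach, the commands below show how those VBV parameters are typically set with FFmpeg’s x264 and x265 wrappers. The CRF, maxrate, and bufsize values are illustrative placeholders, not Warner Bros. Discovery’s production settings.

```bash
# Capped CRF with x264: CRF picks the quality, VBV maxrate/bufsize cap the peaks.
ffmpeg -i source.mp4 -c:v libx264 -crf 21 -maxrate 6000k -bufsize 12000k \
  -c:a copy output_h264.mp4

# The same pattern with x265, passing the VBV caps through -x265-params.
ffmpeg -i source.mp4 -c:v libx265 -crf 23 \
  -x265-params "vbv-maxrate=4500:vbv-bufsize=9000" \
  -c:a copy output_hevc.mp4
```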

DRM has a significant impact on the encoding ladder. Licensing agreements often demand different security levels for various resolutions, necessitating the assignment of different encryption keys and playback policies to different security groups. The use of hardware-backed DRM, such as Widevine L1 and PlayReady SL3000, is often required for higher resolutions. The trend in the industry is moving towards increased use of DRM across the entire encoding ladder, with a focus on stricter requirements for HDR content. Content licensing agreements are evolving to require comprehensive DRM implementation for improved content protection.

Exploring Hardware and Software DRM: Implementation and Impact on Video Streaming

The choice between hardware and software DRM implementations has implications for video streaming security and performance. Hardware DRM involves integrating DRM clients into the secure video path of the system, tightly coupling with the hardware decoder. This ensures secure decoding and decryption of video streams, preventing unauthorized access to the content. Hardware-based DRM establishes a secure video path or secure media path, where the decrypted and decoded bits cannot be retrieved or accessed by applications. This level of security is achieved through close integration with the hardware decoder, ensuring protection throughout the entire decoding process.

On the other hand, software DRM performs decoding and decryption in software, introducing a potential vulnerability where the decoded bits could be compromised or accessed by unauthorized parties. Software DRM lacks the same level of hardware integration and security provided by hardware-based DRM.

The limitations of software-based DRM can impact the resolution of premium content when viewing it on certain platforms or browsers without hardware support. For example, Chrome’s support for Widevine DRM is limited to L3, the software-based implementation. This can result in inferior video quality compared to browsers like Edge or Safari, which support hardware DRM, allowing for a more secure video path and higher quality streaming.

Unifying Packaging Formats: HLS, DASH, and CMAF in Video Streaming

Standardizing packaging formats is crucial for compatibility and interoperability in video streaming. Warner Brothers Discovery and Hulu have been utilizing both HLS (HTTP Live Streaming) and DASH (Dynamic Adaptive Streaming over HTTP) for content distribution. HLS is predominantly used for Apple devices, while DASH is employed for other devices.

The commonality between HLS and DASH lies in their utilization of the CMAF (Common Media Application Format) standard. CMAF serves as a standardized version of fragmented MP4 (fMP4), specifying the necessary boxes and encryption application for fMP4 media segments used in HLS and DASH. CMAF is not a streaming protocol itself but encompasses two components.

Firstly, it defines a refined version of fMP4 for HLS and DASH, establishing a more precise set of guidelines for compatibility. Many existing HLS and DASH implementations using fMP4 media segments are already CMAF-compliant.

Secondly, CMAF specifies a hypothetical logical media presentation model, outlining the relationship between tracks, segments, fragments, and chunks. This model closely resembles HLS or DASH without explicitly using those terms. It provides a framework for addressing different levels of the media presentation.

HLS and DASH can be considered as the physical implementations of the logical media presentation model described by CMAF. The HLS-DASH interoperability specification, such as CTA 5005, heavily relies on CMAF, serving as a unifying model and describing how both HLS and DASH integrate with CMAF. This unification allows for similar concepts to be described across both formats, enhancing compatibility and simplifying the streaming ecosystem.

Streamlining Content Publishing and Compatibility: The Role of the CTA

The streaming industry faces challenges related to content publishing and compatibility across diverse platforms and devices. The Consumer Technology Association (CTA) plays a crucial role in addressing these challenges and streamlining content publishing processes. The CTA is actively working to enhance interoperability within the streaming industry, allowing publishers to focus primarily on content development rather than compatibility concerns.

The CTA’s WAVE initiative serves as a platform for fostering efforts to streamline content publishing and compatibility. One major challenge in the streaming landscape is the presence of numerous application development platforms. For example, within Warner Brothers Discovery, there are approximately a dozen or 16 different application development platforms utilized for their streaming service, with some overlap between certain platforms such as Android TV and Fire TV.

Developers often encounter the unique scenario of building multiple versions of the same application in various programming languages using different platform APIs. This complexity arises due to the diversity of devices and platforms requiring tailored applications. This situation is unparalleled compared to other industries where typically a web app, iOS app, and Android app cover the majority of development needs.

The multitude of application development platforms poses challenges in areas such as encoding and packaging. Determining device capabilities becomes arduous without a standardized specification or set of APIs that can provide consistent and reliable information across different platforms.

The standardization of device media capabilities detection APIs is a crucial step towards enhancing compatibility in the streaming industry. Efforts within the World Wide Web Consortium (W3C) to define these APIs in HTML are underway. However, it is important to note that not all platforms utilize HTML, necessitating the presence of similar APIs across all platforms. Once standardized APIs for media capabilities detection are established, developing a standardized method for signaling these capabilities to servers becomes essential. This facilitates targeting specific devices based on their capabilities and enables actions such as manifest filtering.

Standardization efforts are vital for simplifying content publishing and enhancing compatibility in the streaming industry. By establishing standardized specifications and APIs, the industry can overcome compatibility challenges and streamline the development and distribution of streaming content.

Leveraging These Advancements Is Imperative

The evolution of streaming technologies has brought about significant advancements in digital media distribution and delivery. Pioneers like Alex Zambelli have played a crucial role in driving innovation and pushing the boundaries of what is possible in online streaming. The implementation of multi-view camera angles, considerations in livestreaming, advancements in video codecs and streaming protocols, and optimization of encoding quality and cost are key areas that shape the streaming landscape. Standardization efforts, hardware and software DRM implementations, and the role of organizations like the CTA further contribute to enhancing compatibility and simplifying content publishing in the streaming industry. As the streaming industry continues to evolve, leveraging these advancements and best practices is imperative to deliver high-quality, seamless streaming experiences to audiences worldwide.