Beyond Traditional Transcoding: NETINT’s Pioneering Technology for Today’s Streaming Needs

Welcome to our here’s-what’s-new-since-last-IBC-so-you-should-schedule-a-meeting-with-us blog post. I know you’ve got many of these to wade through, so I’ll be brief.

First, a brief introduction. We’re NETINT, the ASIC-based transcoding company. We sell standalone products like our T408 video transcoder and Quadra VPUs (video processing units), plus servers with ten of either device installed. All offer exceptional throughput at an industry-low cost per stream and power consumption per stream. Our products are denser, leaner, and greener than any competitive technology.
They’re also more innovative. The first-generation T408 was the first new ASIC-based hardware transcoder to reach the market in at least a decade, and the second-generation Quadra was the first hardware transcoder with AV1 and AI processing. Quadra shipped before Google and Meta shipped their first-generation ASIC-based transcoders, which still don’t support AV1.
That’s us; here’s what’s new.

Capped CRF Encoding

We’ve added capped CRF encoding to our Quadra products for H.264, HEVC, and AV1, with capped CRF coming for the T408 and T432 (H.264/HEVC). By way of background, with the wide adoption of content-adaptive encoding techniques (CAE), constant rate factor (CRF) encoding with a bit rate cap gained popularity as a lightweight form of CAE to reduce the bitrate of easy-to-encode sequences, saving delivery bandwidth and delivering CBR-like quality on hard-to-encode sequences. Capped CRF encoding is a mode that we expect many of our customers to use.

Figure 1 shows capped CRF operation on a theoretical football clip. The relevant switches in the command string would look something like this:

-crf 21 -maxrate 6M -bufsize 6M

This directs FFmpeg to encode to CRF 21 quality, which for H.264 typically equates to a VMAF score of around 95. The maxrate switch ensures that the bitrate never exceeds 6 Mbps, with bufsize defining the VBV buffer that enforces that cap.
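For illustration only, here’s what a complete command string might look like using the x264 software encoder (an assumed example; the Quadra-specific syntax appears later in this post). Note that with x264 in FFmpeg, maxrate is ignored unless bufsize is also set:

ffmpeg -i input.mp4 -c:v libx264 -crf 21 -maxrate 6M -bufsize 6M -c:a copy output.mp4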

As shown in the figure, the Quadra VPU transcodes the easy-to-encode sideline shots at CRF 21 quality, producing a bitrate of around 2 Mbps. Then, during high-motion game footage, the 6 Mbps cap takes control, and the VPU delivers roughly the same quality as CBR. In this fashion, capped CRF saves bandwidth on easy-to-encode scenes while delivering CBR-equivalent quality on hard-to-encode scenes.

Figure 1. Capped CRF in operation. Relatively low-motion sideline shots are encoded to CRF 21 quality (~95 VMAF), while the 6 Mbps bitrate cap controls during high-motion game footage.

By deploying capped CRF, engineers can efficiently deliver high-quality video streams, enhance viewer experiences, and reduce operational expenses. As the demand for video streaming continues to grow, capped CRF emerges as a game-changer for engineers striving to stay at the forefront of video delivery optimization.

You can read more about capped CRF operation and performance in Get Free CAE on NETINT VPUs with Capped CRF.

Peer-to-Peer Direct Memory Access (DMA) for Cloud Gaming

Peer-to-peer DMA is a feature that makes the NETINT Quadra VPU ideal for cloud gaming. By way of background, in a cloud-gaming workflow, the GPU is primarily used to render frames from the game engine output. Once rendered, these frames are encoded with codecs like H.264 and HEVC.

Many GPUs can render frames and transcode to these codecs, so it might seem most efficient to perform both operations on the same GPU. However, encoding demands a significant chunk of the GPU’s resources, which in turn reduces overall system throughput. It’s not the rendering engine that’s stretched to its limits but the encoder.

What happens when you introduce a dedicated video transcoder into the system using normal techniques? The host CPU manages the frame transfer between the GPU and the transcoder, which can create a bottleneck and slow system performance.

Figure 2. Peer-to-peer DMA enables up to 200 720p60 game streams from a single 2RU server.

In contrast, peer-to-peer DMA allows the GPU to send frames directly to the transcoder, eliminating CPU involvement in data transfers (Figure 2). With peer-to-peer DMA enabled, the Quadra supports latencies as low as 8ms, even under heavy loads. It also unburdens the CPU from managing inter-device data transfers, freeing it to handle other essential tasks like game logic and physics calculations. This optimization enhances the overall system performance, ensuring a seamless gaming experience.

Some NETINT customers are using Quadra and peer-to-peer DMA to produce 200 720p60 game streams from a single 2RU server, and that number will increase to 400 before year-end. If you’re currently assembling an infrastructure for cloud gaming, come see us at IBC.

Logan Video Server

NETINT started out selling standalone PCIe and U.2 transcoding devices that our customers installed into their own servers. In late 2022, customers started requesting a prepackaged solution comprising a server with ten transcoders installed. The Logan Video Server was our first response.

Logan refers to NETINT’s first-generation G4 ASIC, which transcodes to H.264 and HEVC. The Logan Video Server, which launched in the first quarter of 2023, includes a SuperMicro server with a 32-core AMD CPU running Ubuntu 20.04 LTS and ten NETINT T408 U.2 transcoder cards (which cost $300 each) for $8,900. There’s also a 64-core option available for $11,500 and an 8-core option for $7,000.

The value proposition is simple. You get a break on price because of volume commitments and don’t have to install the individual cards, which is generally simple but still can take an hour or two. And the performance with ten installed cards is stunning, given the price tag.

You can read about the performance of the 32-core server model in my review here, which also discusses the software architecture and operation. We’ll share one table, which shows one-to-one transcoding of 4K, 1080p, and 720p inputs with FFmpeg and GStreamer.

At the $8,900 cost, the server delivers a cost per stream as low as $445 for 4K, $111.25 for 1080p, and just over $50 for 720p at normal and low latency. Since each T408 only draws 7 watts and CPU utilization is so low, power consumption is also exceptionally low.

Table 1. One-to-one transcoding performance for 4K, 1080p, and 720p.

With impressive density, low power consumption, and multiple integration options, the NETINT Video Transcoding Server is the new standard to beat for live streaming applications. With a lower-priced model available for pure encoding operations and a more powerful model for CPU-intensive operations, the NETINT Logan server family meets a broad range of requirements.

Quadra Video Server

Once the Logan Video Server became available, customers started asking about a similarly configured server for NETINT’s Quadra line of video processing units (VPUs), which adds AV1 output, onboard scaling and overlay, and two AI processing engines. So, we created the Quadra Video Server.

This model uses the same Supermicro chassis as the Logan Video Server and the same Ubuntu operating system but comes with ten Quadra T1U U.2 form factor VPUs, which retail for $1,500 each. Each T1U offers roughly four times the throughput of the T408, performs on-board scaling and overlay, and can output AV1 in addition to H.264 and HEVC.

The CPU options are the same as the Logan server, with the 8-core unit costing $19,000, the 32-core unit costing $21,000, and the 64-core model costing $24,000. That’s 4X the throughput at just over 2x the price.

You can read my review of the 32-core Quadra Video Server here. I’ll again share one table, this time reporting encoding ladder performance at 1080p for H.264 (120 ladders), HEVC (140), and AV1 (120), and 4K for HEVC (40) and AV1 (30).

In comparison, running FFmpeg using only the CPU, the 32-core system only produced nineteen H.264 1080p ladders, five HEVC 1080p ladders, and six AV1 1080p ladders. Given this low-volume throughput at 1080p, we didn’t bother trying to duplicate the 4K results with CPU-only transcoding.

Table 2. Encoding ladder performance of the Quadra Video Server.

Beyond sheer transcoding performance, the review also details AI-based operations and performance for tasks like region of interest transcoding, which can preserve facial quality in security and other relatively low-quality videos, and background removal for conferencing applications.

Where the Logan Video Server is your best low-cost option for high volume H.264 and HEVC transcoding, the Quadra Video Server quadruples these outputs, adds AV1 and onboard scaling and overlay, and makes AI processing available.

Come See Us at the Show

We now return to our normally scheduled IBC pitch. We’ll be in Stand 5.A86 and you can book a meeting by clicking here.

Figure 3. Book a meeting.

Now ON-DEMAND: Symposium on Building Your Live Streaming Cloud

Seamless Client Onboarding – Hardware and Software Synergy – interview with Kenneth Robinson

A crucial aspect of NETINT’s value proposition is its proactive and holistic customer support, from the pre-purchase phase to onboarding and the post-purchase journey. NETINT streamlines this transition with seamless hardware installation facilitated by compliance with U.2 and PCIe standards and intuitive software integration via tools like FFmpeg and GStreamer, and an SDK.

A recent conversation with Kenneth Robinson, NETINT’s Manager of Field Application Engineering, detailed how he and his team support NETINT customers through the buying, onboarding and implementation process and beyond. By way of background, Robinson joined NETINT in January 2023 and brings substantial expertise from his prior tenure at a video gateway development company. During the conversation, he described how his team’s adeptness with scripting and debugging simplifies and accelerates customer deployments.

The discussion also spotlights the efficiency of NETINT’s transcoder management, GStreamer’s increased usage among NETINT customers due to its hyperthreaded efficiency, and several strategic recommendations for potential server buyers. Robinson’s insights solidify NETINT’s reputation as a client-centric enterprise, leveraging both its technological prowess and dedicated human capital.

From Jan Ozer

This interview is with Kenneth Robinson, NETINT’s manager of field application engineering. We discussed how Kenneth and his team help get NETINT customers up and running, including hardware and software installation and the operation of software like GStreamer and FFmpeg.

Seamless Client Onboarding - Hardware and Software Synergy - Kenneth Robinson from NETINT

Jan:
Kenneth, tell us a little bit about yourself. What’s your background, and how long have you been with NETINT?

Kenneth:
I’ve been with NETINT since January of this year (2023). Prior to that, I worked for a company that developed video gateways for big MSOs for installation in hotels and other uses. I ran a team of quality engineers and managed the support team there as well.


Jan:
So, you’re comfortable with video and video-related technologies?

Kenneth:
Oh yes. And familiar with a lot of different ways to deliver video, like streaming and multicast.


Jan:
What’s the typical skillset of your FAE team?

Kenneth:
They are software people. They understand software and debugging, and they write scripts to help customers test or debug different issues. They’re also very good communicators. They work with our customers to make sure that NETINT cards benefit them in the way that they are supposed to.


Jan:
What do you see as your role in the company?

Kenneth:
I see it as ensuring that our customers get the support they need in a timely manner and making sure the transition from their current transcoders to NETINT transcoders happens smoothly, quickly, and efficiently. And that any roadblocks are removed in a very timely manner for them.

Supporting New Customer Installations


Jan:
How’s the typical process work? Do you start when customers are evaluating NETINT products, or after they decide to purchase and deploy them?

Kenneth:
Both situations. Often the sales team will include me in a customer call to learn exactly how they want to use our products and to make sure we can deliver what they need. And then the other half is usually after a customer buys one of our products.


Jan:
How does that work? When a customer buys a product, what happens? It gets shipped, and they receive it. How do they get the software and documentation?

Kenneth:

We know they’ve received the product based on the tracking number. Then we’ll reach out to the customer and send links to our documentation portal with the software SDK. This has the installation guide, integration guides, application notes, and everything they need to install and get up and running. And then we’ll usually follow up every couple of weeks or so just to make sure the process is going smoothly.

But, if at any point the customer has a question, they can reach out to us, and we will be happy to help them.

Hardware Installation

Figure 1. NETINT offers products in two form factors, U.2 and PCIe.

Jan:
What’s the hardware installation like?

Kenneth:

So, the hardware is very simple. We have two form factors. We have the PCIe form factor, which is just like any network card or GPU that you just install. And then there’s the U.2 form factor, which is the same as a hard drive. So, there’s nothing special required or special tools or knowledge; if you’ve worked on a computer before, you should be able to install either form factor.

 


Jan:
In the nine months you’ve been here, what types of incompatibilities have you seen with the servers in the field?

Kenneth:

We haven’t seen any incompatibilities. Our products have worked on every server that we’ve tried because we follow the different standards for the U.2 and PCIe form factors.

Software Installation and Operation

Figure 2. You can control all transcoders with FFmpeg, GStreamer, or the API (libxcoder).

Jan:
So, the hardware installation is straightforward. What’s the software installation like?

Kenneth:

The software is relatively easy. We work with FFmpeg and GStreamer, but our software code is not pushed into the repository. So, part of our SDK is a patch that you apply and then compile FFmpeg or GStreamer, though we have installation scripts that will automate that process for you. If you just want to run a quick test, the installation scripts are very good and will get you up and running in a matter of minutes.

We also have an API, so the customer can access the cards directly and not rely on FFmpeg or GStreamer.


Jan:
If you install multiple cards, how does the software distribute jobs among those cards?

Kenneth:

There are two ways. You can specify the exact card you want to use as the encoder or decoder. Or, you can allow a resource manager to manage that, and it will send each job to whichever decoder or encoder has the capacity.

FFmpeg, GStreamer, or API?


Jan:
In terms of software control, what’s the typical customer doing? We’ve got GStreamer, FFmpeg, and the API. What percentage are using each alternative?

Kenneth:

The majority is FFmpeg and, after that, the API. Then there’s a small number that use GStreamer, although GStreamer is slowly getting more popular.


Jan:
Why is that?

Kenneth:

We found that when FFmpeg scales multiple files simultaneously, like when creating an encoding ladder, it sometimes would bottleneck. While the capacity was good, it wasn’t great. When we tried GStreamer, the capacity increased significantly enough that it made sense to use GStreamer for that workflow.

Server vs. Individual Cards

Figure 3. NETINT offers two servers populated with ten Quadras or T408s.

Jan:
Let’s switch gears a bit. What’s your experience with the server? When would you advise someone to buy a server fully loaded with Quadras or T408s versus buying the cards and installing them themselves?

Kenneth:

If you need a custom architecture, like adding GPUs for cloud gaming, you should buy the cards and install them yourself. If you intend to perform high-volume file-based transcoding or live streaming, you should consider either server.


Jan:
So, if you’ve got a set application and you just want to get a device in and start working, the servers are a good option. If you’re going to customize your servers, buy the cards.

Kenneth:

Yes, that’s correct.


Jan:
That’s all I have. Thanks for taking the time today.

Kenneth:

Thanks for having me.

Watch on-demand: Symposium on Building Your Live Streaming Cloud

Cloud services are an effective way to begin live streaming, but once you reach a particular scale, you may realize that you’re paying too much and can save significant OPEX by deploying your own transcoding infrastructure. The question is, how to get started? 

The Build Your Own Live Streaming Cloud symposium was a huge hit, with many insights from industry insiders on how to build a live streaming cloud. Here are replays of the event. (For the best viewing experience, please watch from your desktop.)

From Cloud to Local Transcoding For Minimum Latency and Maximum Quality


Over the last ten years or so, most live productions have migrated towards a workflow that sends a contribution stream from the venue into the cloud for transcoding and delivery. For live events that need absolute minimum latency and maximum quality, it may be time to rethink that workflow, particularly if you’ve got multiple sharable inputs at the venue.

So says Bart Snoeks, Account & Partnership Director of THEO Technologies (“THEO”). By way of background, THEO invented and has commercially implemented the High-Efficiency Streaming Protocol (HESP), an adaptive HTTP-based video streaming protocol that enables sub-second end-to-end latency. You can see how HESP compares to other low-latency protocols in the table shown in Figure 1, taken from the HESP Alliance website – the organization focused on promoting and further advancing HESP.

Figure 1. HESP compared to other low latency protocols.

THEO has productized HESP as a real-time streaming service called THEOlive, which targets applications like live sports and betting, casino iGaming, live auctions, and other events that require high-quality video at exceptionally low latency with delivery at scale. For example, in the case of in-play betting, cutting latency from 8 to 10 seconds (HLS) to under one second expands the betting window during the critical period just before the event.

When streaming casino games, ultra-low latency promotes fluent interactions between the players and ensures that all players see the turn of the cards in real time. When latency is lower, players can bet more quickly, increasing the number of hands that can be played.

According to Snoeks, a live streaming workflow that sends a contribution stream to the cloud for transcoding will always increase latency and can degrade quality since re-transcoding is needed. It’s especially poorly suited for stadium venues with multiple camera locations that want to enhance the attendee experience with multiple live feeds. In those latency-critical use cases, you are actually adding network latency with a round trip to and from the cloud. Instead, it makes much more sense to create your encoding ladder and package on-site, pulling the streams directly from the origin to a private CDN for delivery.

Let’s take a step back and examine these two workflows.

Live Streaming Workflows

As stated at the top, most live-streaming productions encode a single contribution stream on-site and send that into the cloud for transcoding to a full ladder, packaging, and delivery. You see this workflow in Figure 2.

Figure 2. Encoding a contribution stream on-site for delivery to the cloud for transcoding, packaging, and delivery.

This schema has multiple advantages. First, you’re sending a single stream to the cloud, lowering bandwidth requirements. Second, you’re centralizing your transcoding assets in a single location in the cloud, which typically enables better utilization.

According to Snoeks, however, this workflow will add 200 to 500 milliseconds of latency at a minimum, depending on the encoding speed, quality, and contribution protocol. In addition, though high-quality contribution encoders can minimize generational loss from the contribution stream, lower-quality transcoders can noticeably degrade the quality of the final output. You also need a contribution encoder for each camera, which can jack up hardware costs in high-volume iGaming and similar applications.

Instead, for some specific use cases, you should consider the workflow shown in Figure 3. Here, you transcode on-site and send the full encoding ladder to a public CDN for external delivery and to a private CDN or equivalent for local viewing. This decreases latency to a minimum and produces absolute top quality as you avoid the additional transcoding step.

Figure 3. Encoding and packaging the encoding ladder on site and transmitting the streams to a public CDN for external viewers and a private CDN for local viewers.

This schema is particularly useful for venues that want to enhance the in-stadium experience with multiple camera feeds. Imagine a stock car race where an attendee only sees his driver on the track once every minute or so. Encoding on-site might allow attendees to watch the camera view from inside their favorite driver’s car with near real-time latency. It might let golf fans follow multiple groups while parked at a hole or following their favorite player.

If you’re encoding input from many cameras, say in a casino or even a racetrack environment, the cost of on-site encoding might be less than the cost of the individual contribution encoders. So, you get the best of all worlds: lower cost per stream, lower latency, higher quality, and a better in-person experience where applicable.

If you’re interested in learning about your transcoding options, check out our symposium Building Your Own Live Streaming Cloud, where you can hear from multiple technology experts discussing transcoding options like CPU-only, GPU, and ASIC-based transcoding and their respective costs, throughput, and density.

If you’re interested in learning more about HESP, THEO in general, or THEOlive, watch for an upcoming episode of Voices of Video, where I interview Pieter-Jan Speelman, CTO of THEO Technologies. We’ll discuss HESP’s history and evolution, the power of THEOlive real-time streaming technology, and how to use it in your live production stack. Make sure you don’t miss it!


Get Free CAE on NETINT VPUs with Capped CRF


NETINT recently added capped CRF to the rate control mechanism across our Video Processing Unit (VPU) product lines. With the wide adoption of content-adaptive encoding techniques (CAE), constant rate factor (CRF) encoding with a bit rate cap gained popularity as a lightweight form of CAE to reduce the bitrate of easy-to-encode sequences, saving delivery bandwidth with constant video quality. It’s a mode that we expect many of our customers to use, and this document will explain what it is, how it works, and how to get the most use from the feature.

In addition to working with H.264, HEVC, and AV1 on the Quadra VPU line, capped CRF works with H.264 and HEVC on the T408 and T432 video transcoders. This document details how to encode with capped CRF using the H.264 and HEVC codecs on Quadra VPUs, though most application scenarios apply to all codecs across the NETINT VPU lines.

What is Capped CRF and How Does it Work?

Capped CRF is a bitrate control technique that combines constant rate factor (CRF) encoding with a bit rate cap. Multiple codecs and software encoders support it, including x264 and x265 within FFmpeg. In contrast to CBR and VBR encoding, which encode to a specified target bitrate (and ignore output quality), CRF encodes to a specified quality level and ignores the bitrate.

CRF values range from 0 to 51, with lower numbers delivering higher quality at higher bitrates (less savings) and higher CRF values delivering lower quality at lower bitrates (more bitrate savings). Many encoding engineers use values between 21 and 23. Which is right for you? As you will read below, the balance you want between quality and bitrate savings determines the best value for your use case.

For example, with the x264 codec, if you transcode to CRF 23, the encoder typically outputs a file with a VMAF quality of 93-95. If that file is a 4K60 soccer match, the bitrate might be 30 Mbps. If it’s a 1080p talking head, it might be 1.2 Mbps. Because CRF delivers a known quality level, it’s ideal for creating archival copies of videos. However, since there’s no bitrate control, in most instances, CRF alone is unusable for streaming delivery.

When you combine CRF with a bitrate cap, you get the best of both worlds: a bitrate reduction with consistent quality for easy-to-encode clips, and CBR-like quality and bitrate for more complex clips.

Here’s how capped CRF could be used with the Quadra VPU:

ffmpeg -i input -c:v h264_ni_quadra_enc -xcoder-params "crf=23:vbvBufferSize=1000:bitrate=6000000" output

The relevant elements are:

  • crf=23 – sets the quality target at around 95 VMAF

  • vbvBufferSize=1000 – sets the VBV buffer to one second (1000 ms)

  • bitrate=6000000 – caps the bitrate at 6 Mbps.

These commands would produce a file that targets close to 95 VMAF quality but, in all cases, peaks at around 6 Mbps.

For a simple-to-encode talking head clip, Quadra produced a file with an average bitrate of 1,274 kbps and a VMAF score of 95.14. Figure 1 shows this output in a program called Bitrate Viewer. Since the entire file is under the 6 Mbps cap, the CRF value controls the bitrate throughout.

Encoding this clip with Quadra using CBR at 6 Mbps produced a file with a bit rate of 5.4 Mbps and a VMAF score of 97.50. Multiple studies have found that VMAF scores above 95 are not perceptible by viewers, so the extra 2.26 VMAF score doesn’t improve the viewer’s quality of experience (QoE). In this case, capped CRF reduces your bandwidth cost by 76% without impacting QoE.

Figure 1. Capped CRF encoding a simple-to-encode video in Bitrate Viewer.

You see this in Figure 2, which shows the capped CRF frame with a VMAF score of 94.73 on the left and the CBR frame with a VMAF score of 97.2 on the right. The video on the right has a bitrate more than 4 Mbps higher than the video on the left, but the viewer wouldn’t notice the difference.

Figure 2. Frames from the talking head clip. Capped CRF at 1.23 Mbps on the left, CBR at 5.4 Mbps on the right. No viewer would notice the difference.

Figure 3 shows capped CRF operation with a hard-to-encode American football clip. The average bitrate is 5900 kbps, and the VMAF score is 94.5. You see that the bitrate for most of the file is pushing against the 6 Mbps cap, which means that the cap is the controlling element. In the two regions where there are slight dips, the CRF setting controls the quality.

Figure 3. Capped CRF encoding a hard-to-encode video in Bitrate Viewer.

In contrast, the CBR encode of the football clip produced a bitrate of 6,013 kbps and a VMAF score of 94.73. Netflix has stated that most viewers won’t notice a VMAF differential under 6 points, so a viewer would not perceive the 0.25 VMAF delta between the CBR and capped CRF files. In this case, capped CRF reduced delivery bandwidth by about 2% without impacting QoE.

Of course, as shown in Figure 3, the two-minute segment tested was almost all high motion. The typical sports broadcast contains many lower-motion sequences, including commercials, cuts to the broadcasters, timeouts, and penalty calls. In most cases, you would expect many more dips like those shown in Figure 3 and more substantial savings.

So, the benefits of capped CRF are as follows:

  • You can use a single ladder for all your content, automatically saving bitrate on easy-to-encode clips and delivering the equivalent QoE on hard-to-encode clips.
  • Even if you modify your ladder by type of content, you should save bandwidth on easy-to-encode regions within all broadcasts without impacting QoE.
  • You get the benefits of CAE without the added integration complexity or extra technology licensing cost. Capped CRF is free across all NETINT VPU and video transcoder products.

Producing Capped CRF

Using the NETINT Quadra VPU series, the following commands for H.264 capped CRF will optimize video quality and deliver a file or stream with a fully compliant VBV buffer. As noted previously, this command string with the appropriate modifications to codec value will work across the entire NETINT product line. For example, to output HEVC, change -c:v h264_ni_quadra_enc to -c:v h265_ni_quadra_enc.

Here’s the command string.

ffmpeg -y -i input.mp4 -c:v h264_ni_quadra_enc -xcoder-params "gopPresetIdx=5:RcEnable=0:crf=23:intraPeriod=120:lookAheadDepth=10:cuLevelRCEnable=1:vbvBufferSize=1000:bitrate=6000000:tolCtbRcInter=0:tolCtbRcIntra=0:zeroCopyMode=0" output.mp4
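For reference, here’s what the HEVC variant might look like, swapping in the h265_ni_quadra_enc encoder as noted above. As an assumption for illustration, we’ve also lowered the cap to the 4.5 Mbps used in the HEVC tests later in this article; you can keep 6 Mbps or any other cap you prefer:

ffmpeg -y -i input.mp4 -c:v h265_ni_quadra_enc -xcoder-params "gopPresetIdx=5:RcEnable=0:crf=23:intraPeriod=120:lookAheadDepth=10:cuLevelRCEnable=1:vbvBufferSize=1000:bitrate=4500000:tolCtbRcInter=0:tolCtbRcIntra=0:zeroCopyMode=0" output.mp4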

Here’s a brief explanation of the encoding-related switches.

  • -c:v h264_ni_quadra_enc -xcoder-params – Selects Quadra’s H.264 encoder and introduces the encoder parameters described below.

  • gopPresetIdx=5 – this chooses the Group of Pictures (GOP) pattern, or the mixture of B-frames and P-frames within each GOP. You should be able to adjust this without impacting capped CRF performance.

  • RcEnable=0 – this disables rate control. You must use this setting to enable capped CRF.

  • crf=23 – this chooses the CRF value. You must include a CRF value within your command string to enable capped CRF.

  • intraPeriod=120 – This sets the GOP size to 120 frames (four seconds at 30 fps), which we used for all tests. You can adjust this setting to your normal target without impacting CRF operation.

  • lookAheadDepth=10 – This sets the lookahead to 10 frames. You can adjust this setting to your normal target without impacting CRF operation.

  • cuLevelRCEnable=1 – this enables coding unit-level rate control. Do not adjust this setting without verifying output quality and VBV compliance.

  • vbvBufferSize=1000 – This sets the VBV buffer size. You must set this to trigger capped CRF operation.

  • bitrate=6000000 – This sets the bitrate. You must set this to trigger capped CRF operation. You can adjust this setting to your target without impacting CRF operation.

  • tolCtbRcInter=0 – This defines the tolerance of CU-level rate control for P-frames and B-frames. Do not adjust this setting without verifying output quality and VBV compliance.

  • tolCtbRcIntra=0 – This sets the tolerance of CU-level rate control for I-frames. Do not adjust this setting without verifying output quality and VBV compliance.

  • zeroCopyMode=0 – this enables or disables the libxcoder zero copy feature. Do not adjust this setting without verifying output quality and VBV compliance.

You can access additional information about these controls in the Quadra Integration and Programming Guide.

Choosing the CRF Value and Bitrate Cap – H.264

Deploying capped CRF involves two significant decisions: choosing the CRF value and setting the bitrate cap. Choosing the CRF value is the most critical decision, so let’s begin there.

Table 1 shows the bitrate and VMAF quality of ten files encoded with the H.264 codec using the CRF values shown with a 6 Mbps cap and using CBR encoding with a 6 Mbps cap. The table presents the easy-to-encode files on top, showing clip-specific results and the average value for the category. The Delta from CBR shows the bitrate and VMAF differential from the CBR score. Then the table does the same for hard-to-encode clips, showing clip-specific results and the average value for the category. The bottom two rows present the overall average bitrate and VMAF values and the overall savings and quality differential from CBR.

Table 1. CBR and capped CRF bitrates and VMAF scores for H.264 encoded clips.

As mentioned, with CRF, lower values produce higher quality. In the table, CRF 19 produces the highest quality (and lowest bitrate savings), and CRF 27 delivers the lowest quality (and highest bitrate savings). What’s the right CRF value? The one that delivers the target VMAF score for your typical clips for your target audience.

For the test clips shown, CRF 19 produces an average quality of well over 95; as mentioned above, VMAF scores beyond 95 aren’t perceivable by the average viewer, so the extra bandwidth needed to deliver these files is wasted. Premium services should choose CRF values between 21 and 23 to achieve top-rung quality of around 95 VMAF. These values deliver more significant bandwidth savings than CRF 19 while preserving the desired quality level. In contrast, commodity services should experiment with higher values like 25-27 to deliver slightly lower VMAF scores while achieving more significant bandwidth savings.

What bitrate cap should you select? CRF sets quality, while the bitrate cap sets the budget. In most cases, you should consider using your existing cap. As we’ve seen, with easy-to-encode clips, capped CRF should deliver about the same quality of experience with the potential for bitrate savings. For hard-to-encode clips, capped CRF should deliver the same QoE with the potential for some bitrate savings on easy-to-encode sections of your broadcast.

Note that the optimal CRF value will vary according to the complexity of your video files, as well as frame rate, resolution, and bitrate cap. If you plan to implement capped CRF with Quadra or any encoder, you should run similar tests on your standard test clips using your encoding parameters and draw your own conclusions.
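If you want to run these comparisons yourself, one common approach, shown here as a minimal sketch assuming an FFmpeg build that includes libvmaf and an encode that matches the source’s resolution and frame rate, is to score each output against the source with the libvmaf filter:

ffmpeg -i encoded.mp4 -i source.mp4 -lavfi "[0:v][1:v]libvmaf=log_fmt=json:log_path=vmaf.json" -f null -

The pooled VMAF score is written to vmaf.json; repeat the measurement for each CRF value and cap you’re evaluating.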

Now let’s examine capped CRF and HEVC.

Choosing the CRF Value and Bitrate Cap – HEVC

Table 2 shows the results of HEVC encodes using CBR at 4.5 Mbps and the specified CRF values with a cap of 4.5 Mbps. With these test clips and encoding parameters, Quadra’s CRF values produce nearly the same results, with CRF values of 21-23 appropriate for premium services and 25-27 good settings for UGC content.

Table 2. CBR and capped CRF bitrates and VMAF scores for HEVC encoded clips.

Again, the cap is yours to set; we arbitrarily reduced the H.264 bitrate cap of 6 Mbps by 25% to determine the 4.5 Mbps cap for HEVC.

Capped CRF Performance

Note that as currently tested, capped CRF comes with a modest performance hit, as shown in Table 3. Specifically, in CBR mode, Quadra output twenty 1080p30 H.264-encoded streams. This dropped to sixteen using capped CRF, a reduction of 20%.

For HEVC, throughput dropped from twenty-three to eighteen 1080p30 streams, a reduction of about 22%. We performed all tests using CRF 21, with a 6 Mbps cap for H.264 and 4.5 Mbps for HEVC. Note that these are early days in the CRF implementation, and it may be that this performance delta is reduced or even eliminated over time.

Table 3. 1080p30 outputs produced using the techniques shown.

We installed the Quadra in a workstation powered by a 3.6 GHz AMD Ryzen 5 5600X 6-core processor running Ubuntu 18.04.6 LTS with 16 GB of RAM. As you can see in the table, we also tested output for the x264 codec in FFmpeg using the medium and veryfast presets, producing two and five 1080p30 outputs, respectively. For x265, we tested using the medium and ultrafast presets, and the workstation produced one and three 1080p30 streams, respectively.

Even at the reduced throughput, Quadra’s CRF output dwarfs the CPU-only output. When you consider that the NETINT Quadra Video Server packs ten Quadra VPUs into a single 1RU form factor, you get a sense of how VPUs offer unparalleled density and the industry’s lowest cost per stream and power consumption per stream.

Bandwidth is one of the most significant costs for all live-streaming productions. In many applications, capped CRF with the NETINT Quadra delivers a real opportunity to reduce bandwidth cost with no perceived impact on viewer quality of experience.

From Cloud to Control. Building Your Own Live Streaming Platform

Cloud services are an effective way to begin live streaming. Still, once you reach a particular scale, it’s common to realize that you’re paying too much and can save significant OPEX by deploying transcoding infrastructure yourself. The question is, how to get started?

NETINT’s Build Your Own Live Streaming Platform symposium gathers insights from the brightest engineers and game-changers in the live-video processing industry on how to build and deploy a live-streaming platform.

In just three hours, we’ll cover the following:

  • Hardware options for live transcoding and encoding to cut costs by as much as 80%.
  • Software options for producing, delivering, and playing your live video streams.
  • Co-location selection criteria to achieve cloud-like performance with on-premise affordability.

You’ll also hear from two engineers who will demystify the process of assembling a live-streaming facility, explaining how they identified and solved key hurdles, along with real costs and performance data.

Cloud? Or your own hardware?

It’s clear to many that producing live streams via a public cloud like AWS can be vastly more expensive than owning your hardware. (You can learn more by reading “Cloud or On-Premises? The Streaming Dilemma” and “How to Slash CAPEX, OPEX, and Carbon Emissions Using the NETINT T408 Video Transcoder”). 

To quote serial entrepreneur David Hansson, who recently migrated two SaaS services from the cloud to on-premise, “Don’t let the entrenched cloud interests dazzle you into believing that running your own setup is too complicated. Everyone and their dog did it to get the internet off the ground, and it’s only gotten easier since.” 

For those who have only operated in the cloud, there’s fear of the unknown: fear of buying hardware transcoders, selecting the right software, and choosing the best colocation service. So, we decided to fight fear with education and host a symposium to educate streaming engineers on all these topics.

“Building Your Own Live Streaming Cloud” will uncover how owning your encoding stack can slash operating costs and boost performance with minimal CAPEX.

Learn to select the optimal transcoding hardware, transcoding and packaging software, and colocation facilities. We’ll also discuss strategies to reduce carbon emissions from your transcoding engine. 

This FREE virtual event takes place on August 17th, from 11:00 AM – 2:15 PM EST.

Five issues tackled by nine experts:

Transcoding Hardware Options:

Learn the pros and cons of CPU, GPU, and ASIC-based transcoding via detailed throughput and cost examples shared by Kenneth Robinson, Manager of Field Application Engineers at NETINT Technologies. Then Ilya Mikhaelis, Streaming Backend Tech Lead at Mayflower, will describe his company’s journey from CPU to GPU to ASICs, covering costs, power consumption, latency, and density metrics.

Software Options:

Jan Ozer from NETINT will identify the three categories of transcoding software: multimedia frameworks, media servers, and other tools. Then, you’ll hear from experts in each category, starting with Romain Bouqueau, founder of Motion Spell, who will discuss the capabilities of the GPAC multimedia framework. Barry Owen, Chief Solutions Architect at Wowza, will discuss Wowza Streaming Engine’s suitability for private clouds. Lastly, Adrian Roe, Director at Id3as, developer of Norsk, will demonstrate Norsk’s simple, scripting-based operation, and extensive production and transcoding features.

Housing Options:

Once you select your hardware and software, the next step is finding the right co-location facility to house your live streaming infrastructure. Kyle Faber, with experience in building Edgio’s video streaming infrastructure, will guide you through the essential factors to consider when choosing a co-location facility.

Minimizing the Environmental Impact:

As responsible streaming professionals, it’s essential to address the environmental impact of our operations. Barbara Lange, Secretariat of Greening of Streaming, will outline actionable steps video engineers can take to minimize power consumption when acquiring and deploying transcoding servers.

Pulling it All Together:

Stef van der Ziel, founder of live-streaming pioneer Jet-Stream, will share lessons learned from his experience in creating both Jet-Stream’s private cloud and cloud transcoding solutions for customers. In his closing talk, Stef will demystify the process of choosing hardware, software, and a hosting facility, bringing all the previous discussions together into a cohesive plan.

Full Agenda:

11:00 am. – 11:10 am EST

Introduction (10 minutes):
Mark Donnigan, Head of Strategic Marketing at NETINT Technologies
Welcome, overview, and what you will learn.

 

11:10 am. – 11:40 am EST

Choosing transcoding hardware (30 minutes):
Kenneth Robinson, Manager of Field Application Engineers at NETINT Technologies
You have three basic approaches to transcoding: CPU-only, GPU, and ASICs. Kenneth outlines the pros and cons of each approach with extensive throughput, CAPEX, and OPEX examples for each.

 

11:40 am. – 12:00 pm EST

From CPU to GPU to ASIC: Our Transcoding Journey (20 minutes):
Ilya Mikhaelis, Streaming Backend Tech Lead at Mayflower
Charged with supporting very high-volume live transcoding operations, Ilya started with libx264 software transcoding, which consumed massive power but yielded low stream density per server. Then he experimented with GPUs and other hardware and ultimately transitioned to an ASIC-based solution with much lower power consumption and much higher stream density per server. Ilya will detail the costs, power consumption, and density of all options, providing both data and an invaluable evaluation framework.

 

12:00 pm. – 12:10 pm EST

Choosing your live production software (10 minutes): 
Jan Ozer, Senior Director of Video Technology at NETINT Technologies
The core of every live streaming system is transcoding and packaging software. This comes in many shapes and sizes, from open-source software like FFmpeg and GPAC, to streaming servers like Wowza, and production systems like Norsk. Jan discusses these multiple options so you can cohesively and affordably build your own live-streaming ecosystem.

 

12:10 pm. – 1:10 pm EST

Speed Round (60 minutes):
20-minute presentations from GPAC, Wowza, and NORSK.
Speakers from GPAC, Wowza, and NORSK discussing the features, functions, operational paradigms, and cost structure of their live software offering.

Speakers include:

  • Adrian Roe, CEO at id3as, Product: Norsk, Title: Make Live Easy with NORSK SDK
  • Romain Bouqueau, Founder and CEO, Motion Spell (home for GPAC Licensing), Product: GPAC, Title of Talk: Deploying GPAC for Transcoding and Packaging
  • Barry Owen, Chief Solutions Architect at Wowza, Title of Talk: Start Streaming in Minutes with Wowza Streaming Engine



1:10 pm. – 1:40 pm EST

Choosing a co-location facility (30 minutes): 
Kyle Faber, Senior Director of Product Management at Edgio.
Once you’ve chosen your hardware and software, you need a place to install them. If you don’t have your own connected data center, you may consider a colocation facility. In his talk, Kyle addresses the key factors to consider when choosing a co-location facility for your live streaming infrastructure.

 

1:40 pm. – 1:55 pm EST

How to Greenify Your Encoding Stack (15 minutes):
Barbara Lange, Secretariat of Greening of Streaming.
Learn how video streaming companies can work to significantly reduce their energy footprint and contribute to a greener streaming industry. Implement hardware and infrastructure optimization using immersion cooling and data center design improvements to maximize energy efficiency in your streaming infrastructure.

 

1:55 pm. – 2:15 pm EST

Closing Keynote (20 minutes):
Stef van der Ziel, Founder Jet-Stream
Jet-Stream has delivered streaming solutions since its launch in 1994 and offers its own live streaming platform. One focus has been creating custom transcoding solutions for customers seeking to create their own private cloud for various applications. In his closing talk, Stef will demystify the process of choosing hardware, software, and a hosting facility and wrap a pretty bow around all previous presentations.

Build Your Own Streaming Infrastructure – Software


My assumption is that you’re currently using a cloud-based service like AWS for your live streaming and are seeking to reduce costs by buying your own transcoding hardware, installing the necessary software, and hosting the server on-premises or in a co-location facility. This article covers the software side.

To begin, let’s acknowledge that AWS and other cloud services have created a well-featured and highly integrated ecosystem for live streaming and distribution. The downside is the cost.

To illustrate the potential savings, I’ll refer to this article, which compared the cost of producing 21 H.264 ladders and 27 HEVC ladders via AWS MediaLive and by encoding with NETINT’s recently launched Logan Video Server. As you can see in the table, MediaLive costs around $400K for H.264 and $1.8 million for HEVC, as compared to $11,140 in both cases for the co-located server.

Table 1. Five-year cost comparison: AWS MediaLive pricing compared to the NETINT Server.

While there are less expensive options available inside and outside of AWS, whenever you pay for hardware by the minute or hour of production, you’re vastly overpaying as compared to owning your own hardware. Sure, you say, but it’s so easy compared to running your own hardware.

If that’s a concern, here are some comforting words from David Heinemeier Hansson, co-owner and CTO of software developer 37signals, the developer of the project management platform Basecamp and email service Hey. Recently, Hansson wrote Why we’re leaving the cloud, a blog post that detailed his company’s decision to do just that. Here’s the relevant quote.

Up until very recently, everyone ran their own servers, and much of the progress in tooling that enabled the cloud is available for your own machines as well. Don’t let the entrenched cloud interests dazzle you into believing that running your own setup is too complicated. Everyone and their dog did it to get the internet off the ground, and it’s only gotten easier since.

My wife has chihuahuas, and given their difficulties with potty training, I seriously doubt they could do it, but you get the point. To paraphrase FDR, all you have to fear is fear itself. The bottom line is that running your own live streaming service should cost relatively little CAPEX, will save significant OPEX, and won’t be nearly as challenging as you might be fearing.

Let’s look at your options for the software required to run your homegrown system.

Transcoding and Packaging Software

Figure 1 shows the minimum software and infrastructure needed for a live-streaming service. Presumably, you’ve already got the live production covered, and since AWS doesn’t offer a player, you have that piece addressed as well. You’ll need a content delivery network to deliver your streaming video, but you can continue to use CloudFront or another CDN. The software that you absolutely have to replace is the live transcoding and packaging component.

Here you have three options: multimedia frameworks, media servers, and “other.” Let’s discuss each in turn.

Multimedia Frameworks

Multimedia frameworks are software libraries, tools, and APIs that provide a set of functionalities and capabilities for multimedia processing, manipulation, and streaming. The best-known framework is FFmpeg, followed by GStreamer and GPAC, and they are all available open source.

Figure 1. Netflix uses GPAC for its packaging, a significant technology endorsement for GPAC and for multimedia frameworks in general.

Multimedia frameworks excel in projects at both ends of the complexity spectrum. For simple projects, like transcoding an input stream to an encoding ladder, you can create a script that inputs the stream, transcodes, and hands the packaged output streams off to a CDN in a matter of minutes. You can use the script to process thousands of simultaneous jobs, all at no charge.
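As a sketch of how simple that script can be, here’s an illustrative FFmpeg command (the input URL, two-rung ladder, bitrates, and output path are assumptions, not taken from any particular deployment) that transcodes a live input into a small H.264 ladder and packages it as HLS for a CDN origin to pull:

ffmpeg -i rtmp://localhost/live/input \
  -filter_complex "[0:v]split=2[v1][v2];[v1]scale=1280:720[v1o];[v2]scale=854:480[v2o]" \
  -map "[v1o]" -c:v:0 libx264 -b:v:0 3M -maxrate:v:0 3M -bufsize:v:0 6M \
  -map "[v2o]" -c:v:1 libx264 -b:v:1 1500k -maxrate:v:1 1500k -bufsize:v:1 3M \
  -map 0:a -map 0:a -c:a aac -b:a 128k \
  -f hls -hls_time 4 -hls_flags delete_segments \
  -master_pl_name master.m3u8 -var_stream_map "v:0,a:0 v:1,a:1" /var/www/hls/stream_%v.m3u8

Point your CDN at the directory holding master.m3u8 and the variant playlists, and you have a basic live origin.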

At the other end of the spectrum, these frameworks also excel at complex jobs with idiosyncratic custom requirements that likely aren’t available in a server or commercial software product. The development, maintenance, and modification costs are considerable, but you get maximum feature flexibility if you’re willing to pay that cost.

What you don’t get with these tools is a user interface or simple configuration options – you start with a blank slate and must program in all desired features. What could be as simple as checking a checkbox in a streaming media server could require dozens or even thousands of lines of code in a multimedia framework.

Which takes us to streaming media servers.

Streaming Media Servers

The next category of products is streaming media servers, which includes Wowza Streaming Engine, Nimble Streamer, and two open-source servers, Red5 and Ant Media Server. These servers tend to excel for most productions in the middle of the complexity spectrum and offer multiple advantages over multimedia frameworks.

There are several reasons why you might choose to use a streaming server over a multimedia framework, including a simplified setup and configuration. Most streaming servers provide out-of-the-box streaming solutions with pre-configured settings and management interfaces that simplify the setup and configuration process. While not all offer GUIs, those that don’t offer simple option selection in configuration files.

Figure 2. Wowza Streaming Engine is a highly regarded streaming server

As mentioned above, streaming servers often offer simpler access to advanced features that you’d have to craft by hand with a multimedia framework. They also offer better integration with third-party services like digital rights management (DRM) and content delivery networks. Between the simplified setup, easier access to features, and improved integration with other services, packaged servers can dramatically accelerate getting your live streaming service up and running.

Once you’re operational, you’ll appreciate management interfaces that monitor the health and performance of your streaming infrastructure, track viewer analytics, manage streaming workflows, and make real-time adjustments. If you’re in a dynamic demand environment, some streaming servers offer built-in scalability features and load balancing to manage the load over multiple hardware transcoding resources. You’d have to build all that by hand or with plug-ins if using a multimedia framework.

The two potential downsides of streaming servers are cost and customizability. You’ll have to pay a monthly fee for some versions of these servers, and you may find it complicated or nearly impossible to add what you might consider to be essential features.

Other Streaming-Capable Programs

Most companies building their own live-streaming infrastructures will implement either a multimedia framework or a streaming server, but there are other programs that incorporate the core encoding and packaging functions. One such program is Norsk from id3as. Norsk bills itself as “an SDK that enables developers to easily create amazing, dynamic live video workflows and deploy them at any scale.” As such, it combines both video production and streaming server-related functions.

You see this in Figure 3. The top portion shows that Norsk supports the typical codecs and packaging formats deployed by live-streaming producers. At the bottom of the figure, you see that Norsk also offers production-oriented features like multiple camera support, graphics and overlays, and transitions.

Figure 3. Norsk offers both production and server-related functions.

Interestingly, Norsk doesn’t have a GUI, instead offering a high-level API to simplify configuration and operation, with a Workflow Visualizer component to view the running state of the application. In this fashion, Norsk attempts to provide the configurability of multimedia frameworks with the ease of operation of scripting-driven streaming media servers.

Finding a program like Norsk that combines transcoding and packaging with other essential streaming-related functions makes a lot of sense; there’s one less vendor to onboard and one less product to learn and support. As remote production becomes more common, we expect more programs like Norsk to become available.

Those are your high-level options. If you’re interested in learning more about these and other programs that can drive encoding and packaging for your live transcoder, plan to attend our upcoming symposium; details will be available in the next couple of weeks.

What Can a VPU Do for You?


For cloud gaming, a VPU can deliver 200 simultaneous 720p30 game sessions from a single 2RU server.

When you encode using a Video Processing Unit (VPU) rather than the GPU’s built-in encoder, you will decrease your cost per concurrent user (CCU) by 90%, enabling profitability at a much lower subscription price. How is this technically feasible? Two technology enablers make this possible. First, extraordinarily capable encoding hardware, the VPU, dedicated to the task of high-quality video encoding and processing. And second, peer-to-peer direct memory access (DMA), which enables video frames to be delivered at the speed of memory rather than over the much slower NVMe bus between the GPU and VPU. Let’s discuss these in reverse order.

Peer-to-Peer Direct Memory Access (DMA)

Within a cloud gaming architecture, the primary role of the GPU is to render frames from the game engine output. These frames are then encoded into a standard codec that is easily decoded on a wide cross-section of devices, generally H.264 or HEVC, though AV1 is becoming of interest to those with a broader Android user base. Encoding on the GPU is efficient from a data transfer standpoint because the rendering and encoding occur on the same silicon die; there’s no transfer of the rendered YUV frame to a separate transcoder over the slower PCIe or NVMe buses. However, since encoding requires substantial GPU resources, it dramatically reduces the overall throughput of the system. Interestingly, it’s the encoder that is often at full capacity, and thus the bottleneck, not the rendering engine. Modern GPUs are built for general-purpose graphical operations, so more silicon real estate is devoted to graphics than to video encoding.

By installing a dedicated video encoder in the system and using traditional data-transfer techniques, the host CPU can easily manage the transfer of the YUV frame from the GPU to the transcoder. But as the number of concurrent game sessions increases, the probability of dropped frames or corrupted data makes this technique unusable.

NETINT, working with AMD, enabled peer-to-peer direct memory access (DMA) to overcome this limitation. Peer-to-peer DMA lets devices within a system exchange data in memory directly, allowing the GPU to send rendered frames straight to the VPU and preventing the bus from becoming clogged as the concurrent session count climbs above 48 720p streams.
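In simplified terms, here’s how the two data paths compare (a conceptual sketch, not a literal driver-level sequence):

Host-managed transfer: GPU renders frame → frame copied to system RAM under CPU control → frame copied over PCIe to the VPU → VPU encodes

Peer-to-peer DMA: GPU renders frame → GPU writes the frame directly into the VPU’s memory over PCIe → VPU encodes

Removing the round trip through system RAM, and the CPU’s involvement in every frame, is what keeps the bus and the host from becoming the bottleneck at high session counts.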


The Benefits of Peer-to-Peer DMA

Peer-to-peer DMA delivers multiple benefits. First, by eliminating the need for CPU involvement in data transfers, peer-to-peer DMA significantly reduces latency, which translates to a more responsive and immersive gaming experience for end-users. NETINT VPUs feature latencies as low as 8ms in fully loaded and sustained operation.

In addition, peer-to-peer DMA relieves the CPU of the burden of managing inter-device data transfers. This frees up valuable CPU cycles, allowing the CPU to focus on other critical tasks, such as game logic and physics calculations, optimizing overall system performance and producing a smoother gaming experience.

By leveraging peer-to-peer communications, data can be transferred at greater speeds and efficiency than CPU-managed transfers. This improves productivity and scalability for cloud gaming production workflows.

These factors combine to produce higher throughput without the need for additional costly resources. This cost-effectiveness translates to improved return on investment (ROI) and a major competitive advantage.

Extraordinarily Capable VPUs

Peer-to-peer DMA has no value if the encoding hardware isn’t equally capable. With NETINT VPUs, that’s not a concern.

The reference system that produces 200 720p30 cloud gaming sessions is built on the Supermicro AS-2015CS-TNR server platform with a single GPU and two Quadra T2A VPUs. This server supports AV1, HEVC, and H.264 video game streaming at up to 8K and 60fps, though, as you would expect, the simultaneous stream count drops as you increase frame rate or resolution.

Quadra T2A is the most capable of the Quadra VPU line, the world’s first dedicated hardware to support AV1. With its embedded AI and 2D engines, the Quadra T2A supports AI-enhanced video encoding, region-of-interest encoding, and content-adaptive encoding. Coupled with a P2P DMA-enabled GPU, the Quadra T2A allows cloud gaming providers to achieve unprecedented throughput with ultra-low latency.

Quadra T2A is an AIC (HH HL) form-factor video processing unit with two Codensity G5 ASICs that operates in x86 or Arm-based servers and draws just 40 watts at maximum load. It enables cloud gaming platforms to transition from software or GPU-only encoding with up to a 40x reduction in total cost of ownership.

What Can A VPU Do For You?


It makes Cloud Gaming profitable, finally.

Peer-to-peer DMA is a game-changing technology that reduces latency and increases system throughput. When paired with an extraordinarily capable VPU like the NETINT Quadra T2A, you can deliver an immersive gaming experience at a cost per CCU that no competing architecture can match.

Unlocking the Potential of Cloud Gaming with VPUs


In this interview, Olivier Avaro, the CEO of Blacknut, discusses the emergence and potential of cloud gaming. Blacknut aims to bring the joy of gaming to the mass market by offering a large catalog of games through cloud-based distribution. Avaro highlights the maturity of both users and technology, making cloud gaming a feasible and attractive option. The interview explores the transition from physical discs to streaming, the importance of cost-effectiveness in delivery, and the architectural advancements in cloud gaming systems.

Avaro emphasizes the potential of hybrid cloud infrastructure and the role of GPU and VPU in maximizing the number of concurrent players and reducing costs. He acknowledges the challenge of making cloud gaming affordable for a wider range of consumers, including those in emerging markets. However, he emphasizes that the cost of delivering the service can be kept within a reasonable range, with subscription prices ranging from $5 to $15 per month, depending on the economic conditions of the region.

The technical infrastructure of cloud gaming is explored in detail. Avaro explains the basic architecture, where games are stored on cloud servers and streamed to users’ devices, eliminating the need for downloads. The key requirements for a seamless experience include sufficient bandwidth, low latency, and a well-equipped server infrastructure comprising CPUs, GPUs, and storage. Initially deployed on public cloud platforms for scalability, Blacknut has devised a hybrid cloud approach to optimize the economics of the service. This involves the incorporation of private cloud servers, allowing for improved performance and cost efficiency.

The interview addresses an innovative architectural aspect of Blacknut’s system. Avaro discusses the decision to offload video encoding from the GPU to a dedicated video processing unit (VPU) provided by NETINT.

This approach increases the density of concurrent game sessions, enabling up to 200 players on a single server. This breakthrough in density enhances the economic viability of cloud gaming platforms by significantly reducing costs.

These insights offer valuable perspectives on the advancements in cloud gaming, the importance of cost considerations, and the technological infrastructure that underpins its success.

Avaro also addresses challenges related to unstable internet connectivity in certain regions, discussing collaborations with Ericsson to leverage 5G networks and optimize network characteristics for gaming. While geographical limitations exist, Blacknut is actively expanding its presence to provide global access to its gaming service.

VOICES OF VIDEO
Cloud Gaming being Real. A conversation with the CEO of Blacknut
Watch the full conversation on YouTube: https://youtu.be/w9Pho6G_bdM
 

Mark Donnigan:
So we are at the top of the hour, and looks like we should get started. Oliver, are you ready to talk about cloud gaming?

Oliver Avaro:
Absolutely ready.

Mark Donnigan:
Excellent, excellent. Well, welcome to those who are joining us live. This is the May edition of Voices of Video. And if you haven’t joined us before, Voices of Video is a conversation, or some might say a real dialogue. Not a podcast, I guess a videocast. We go live on LinkedIn and also a lot of other platforms. And we are talking each month with innovators in the video space. And so this month I am super excited to have Oliver Avaro, who is the CEO of a company called Blacknut. And we are talking about cloud gaming. I will let Oliver tell us all about what his company does. But welcome to Voices of Video, Oliver.

Oliver Avaro:
Look, thanks a lot, Mark, for the nice introduction. So my name is Oliver Avaro, I’m the CEO of Blacknut, which in short is doing to games what Spotify did for music, right? So we are distributing game from the cloud, large catalog of games, more than 700 games so far, and this for a simple subscription fee, right? I was long time a gamer. I enjoyed it a lot when I was a teenager. I enjoyed it a lot with friends, with my family, later with my kids. And I started Blacknut in 2016 with the big ambition to actually brings this joy of gaming, this good emotion, all the also positive value of playing together to the mass market. We deployed the tech for about three years. I think cloud gaming does require a bit of technology to work efficiently. Then we started deploy it all over the world and this is where we are today.

Is the Blacknut CEO a gamer himself?

Mark Donnigan:
I love it. So I have to ask the question, sometimes when we’re building advanced technologies, we get so into the technology, we don’t get to do the thing that we originally set up to do like play games. So are you still a gamer? Set aside time each day to play?

Oliver Avaro:
I set aside each time to play a little bit. That’s true. And I have to say that I was a… The first game I played was on the Commodore 64 machine, it was named Boulder Dash, right? The older of the audience will know about it. Now I’m still, I’ve been playing with my kid of course on the Wii, all the Nintendo games. And Mario and Super Mario Kart and Super Mario Galaxy, right? And to be truly honest, I’m still playing a bit with my kid, but mostly I’m touching a bit Pokemon Go sometimes to still get a conversation with my wife on gaming.

Mark Donnigan:
That’s good. That’s good. Well, I am really excited for this conversation. And I was just thinking back as I was making some notes for what I thought we should talk about. And in 2007 I had the distinct privilege, and I really do consider it to be a privilege, to be a part of a company, one of the early, early innovators of streaming what we call now OTT, and at the time it was transactional VOD. The company still exists, it’s called Vudu. And we had this crazy idea to take the Blockbuster, those who have been around for a little while will remember Blockbuster video stores in the US. Other countries, they had the equivalent. And eventually I think Blockbuster did expand outside the US. But you’d go to the video store, you’d rent a disc, DVD, and then eventually Blu-ray, and you would drive home so excited for the family to join around the TV and watch it.

And I can remember how shocking it was to have built this amazing experience where every title was in stock. And those of us who remember the video store, remember that that was part of the challenge, on new release day you had to rush down to the store to be the first in line so you could even get the movie, because they only had so many copies. And then of course you had to worry about did I return it, did I return it by the deadline or do I have to pay for a second day. There was a lot about the experience that actually wasn’t so great. And yet we were shocked at how many people said, “Why would I want to stream over the internet? DVD is great. This is amazing. Look at the quality. No one’s going to want to replace the DVD.” Well, 15 years later, obviously that sounds absolutely crazy, as now the entire world is streaming and we can’t even imagine a world without it.

But as I was thinking about cloud gaming, it feels like maybe we’re a little bit further than we were in 2007, but they’re still not everybody’s convinced. And I’m even surprised that major publishers that I’m coming across, and it’s not a foregone conclusion that the console is going to be replaced with streaming. And so let’s start there. Oliver, I have to imagine that a lot of what you’re spending time doing, aside from building the technology, is making the case for why internet delivery of a game experience is going to be better and is ultimately better than something that’s installed on a PC, downloaded or a console. So what insights do you have to share about where we are in this transition from consoles and discs to streaming for games?

Oliver Avaro:
And Mark, I think the analogy with the Blockbusters I think is very relevant. And I feel that first, in terms of market maturity for the end user, we are probably at that point where people would question, “Why should I do that? I can download a game, why should I actually stream it? Why do something different?” Right? And when I created Blacknut, actually a person that I highly respect told me, “Wow.” People will not use it because they can download it, right? Now, if you look at where we are right now with people now consuming all the media, like audio and video and your musics and books in a streaming manner, it seemed that definitely having those people accessing games the same way seems to be actually, it’s the right idea or the right next step, right?

And I do think that there is a bit more of maturity of people actually willing to access games this way. Now, there has been probably an inflection points in terms of technology maturity. I think the technology, meaning basically the hardware you can have on the cloud, the bandwidth you have available on your home, as a kind of device you have to run it and so on, is good enough to provide actually a great experience. And I do think that we are at the time here where we’re passing this inflection point that probably years ago it was not sufficient. And we have seen lot of companies trying to do this, but actually failing and failing really badly. But actually learning a lot from these failures.

So I think we’re at a very exciting time now where we have this maturity in terms of technology. We have the maturity of the end user, because they are used to consume this kind of media with audio, video, eBooks and so on. So probably they’re craving to get access to game, and more and more people are gaming. And we have also the maturity of the content owner and the publisher. So I think we’re at a very, very good time in the market.

Deliver at ultra low latency. Possible?

Mark Donnigan:
Well, I definitely agree that we are much further advanced than we were. I think of some of the things that we had to do, Vudu in 2007 actually required an appliance, a device with a hard drive in it that we could download the first 30 seconds, maybe a minute of every single title in the library in it. At that time, the library was not as big as what the libraries are today. But just because streaming bandwidth was 768 kilobits. Maybe 1.5 megabits was really fast. If you were really lucky you had 5 megabits. My, how we’ve grown. So it’s definitely we’re in a better position.

Before we get into the technology, because that’s where we’re going to spend the bulk of our time today. But something that I think also you’re in a really good position to address is, is the cost side. So certainly, we’re at a place today with the cloud that you can deliver anything, really anywhere via the cloud. So the notion that you can do cloud gaming, i.e., it’s possible to deliver an ultra low latency, very high quality experience from the cloud. I don’t think anybody conceivably would say, “Oh, I don’t believe that. That’s not possible.” But there is a real issue of the cost. And so why don’t you address where we’re at in terms of just delivery cost, and I’m speaking of OpEx. Where are we at? I mean, is this possible but not affordable, or is this possible and affordable, even for someone who might not be able to charge their consumer a whole lot of money? Not all markets are the US or Western Europe, or some of these regions where consumers are willing to pay $10, $15, $20 a month.

Oliver Avaro:
No, that really is a key issue, Mark. Because, as you mentioned, I think we passed the technology inflection point where actually the service becomes to be feasible. Technically feasible, the experience is good. We think it’s good enough for the mass market. I am sure that some people will be unhappy with it. Really, core gamers will say, “Well…”

Mark Donnigan:
Sure.

Oliver Avaro:
Probably the same people that when the DVD came they say, “Well, I still want to listen to my vinyl on my turntable because this is what I’m using to listen my music. And you will not beat that quality with digital sound.” Right? But for the mass market, I think we got to the point where the feasibility is here. Of course we need good bandwidth, stable, very low jitter, so the variation of the latency. But we are here right.

Now, the issue is indeed on the unique economics and how much it costs to actually stream and deliver games in an efficient manner, so that it is affordable basically for the mass market. And one thing here is I think the gaming is not done. Okay? There is some challenges. As you know, the cost of streaming depends on the number of hours per month, let’s say that you stream. We think that we got at least some maturity where it’s becoming available so that you get to a price point which is what people expect, which is between $5 to $15, depending on the how poor are the country is. So we think this is realistic. But of course, it depends on the intensity of the player, how much they play. And if you want somehow to really sustain and to have great economics, there is still some improvement to be done. Okay? And I would say we have the baseline architecture that allows the service to be profitable, to make it really work, really scale. There is still some margin of improvement. And we have ways actually to improve this unique economics.

Technical infrastructure

Mark Donnigan:
So you’re saying right now that to the end user, which means that the actual cost to deliver the service has to be less. But to the end user, about $5 a month to $15 a month is a target that is possible to reach?

So $5 a month, even in more emerging markets where maybe subscription prices cannot be what they are say in the US, feels like that’s doable. So that’s actually good to hear. Tell us what is the technical… Let’s talk now about what the technical infrastructure looks like and what it takes to deliver. How have you built your system? And then we will get to the broader architecture of Blacknut and what exactly you’re offering. But let’s start with what is your system built on? What does it look like? What are you deploying? Is this a cloud service? Is it run all on prem?

Oliver Avaro:
So basically, the architecture of cloud gaming is somehow simple. You take games, you put them on the server in the cloud and you’re going basically to virtualize it and stream it in the form of a video stream or in some other format so that you don’t have to download the game on the client side, and you can play it as you are playing a video stream. And when you interact with the game, you send a command back to the server and then you interact with the game this way. And so of course bandwidth need to be sufficient, let’s say 6 megabit per second. Latency need to be good, let’s say less than 80 milliseconds. And of course you need to have the right infrastructure on the server that can run games. No games mean a mixture of CPU, GPU, storage, and all this need to work well.

We start deploying the service based on public cloud, because this allow us to test the different metrics, how people were playing the service, how many hours. And this was actually very fast to launch and to scale. So this is what the public clouds, the hyperscaler, SCP, and so on provides. That’s great, but they are quite expensive as you know. So to optimize the economics, we actually built and invented in Blacknut what we call the hybrid cloud for cloud gaming, which is a combination of both the public cloud and private cloud. So we have to install our own servers based on GPUs, CPUs and so on, either directly in Blacknut or with some partners like Radian Arc so that we can improve the overall performances and the unique economics of the system. That I think allowed us to build a profitable service. I think if you just match basically the public cloud currently, I think this is super hard to get something which is viable. But with this kind of hybrid cloud, I think it’s actually very doable.

Mark Donnigan:
And these are standard x86, commercial, off-the-shelf, Intel, AMD machines. I mean, there’s nothing special required or have you gone to a purpose-built design?

Oliver Avaro:
No, the current design is basically definitely specific for the private cloud, but it’s based on standard x86. And for GPU we use a AMD or NVIDIA. Okay? We have a mixture of different providers, but basically this is, I would say reasonably standard architecture, with a mix of CPU, GPU and storage.

Cloud gaming use case

Mark Donnigan:
The cloud gaming use case is a primary one and that’s obviously why we got introduced. And you are using NETINT, which we will get to. But kind of the key measure from a technology perspective, and it maps directly back to cost, for a cloud gaming installation is the number of concurrent sessions per server. Obviously, just stands to reason that the more concurrent sessions or players that you can get on a server, well, it’s going to be less expensive to operate and to run. So that’s not too difficult to understand.

One of the things that’s really interesting is, and I’d like for you to talk about this architecture where you have the GPU rendering the game, but you’re actually not doing the video encoding on the GPU. So what does that look like? And also, talk to us about the evolution, because that’s not where you started. And most cloud gaming platforms today are attempting to keep everything on the GPU, which has some advantages, but it has some very distinct disadvantages and trade-offs. And the disadvantage is you just can’t get the density, which means that your cost per stream likely cannot meet that economic bar where you can really affordably deliver to a wider number of players. I.e., you can’t drive your cost down so you have to charge more, and there’s people who will say, “Well that’s too expensive.” But talk to us about this architecture.

Oliver Avaro:
So that’s correct, Mark. I think the ultimate measure is the cost per CCU, right? The cost per concurrent user that you can get on a specific bill of material. If you have a CPU plus GPU architecture, the game is going to actually slice the GPU in different pieces in the more dynamic manner and in the more appropriate manner so that you can run different game and as much game as possible. Right? So typically if you get on the standard GPU, you can run probably a big game, like a large game and you can cut the GPU in four pieces. If you run a medium game, you can run it maybe in 6 or 8 pieces. And if you run a smaller game, then maybe you can get to, I don’t know, 20 pieces, right?

There is some limits on how much you can slice the GPU for the GPU to be still efficient. And likely, for example, the NVIDIA centralized you to slice one GPU in 24 pieces, but that’s it, right? And so there is some limits in this architecture because it all rely on the GPU. We are indeed investigating different architectures where indeed we are using a VPU, like NETINT is providing a video processor that will somehow offload the GPU of the task of encoding and streaming the video so that we can augment the density. And we see it in as terms of full architecture as something which will be a bit more flexible. I think in terms of number of big games, because they rely much more on the GPU, probably you will not augment the density that much. But we think that overall, probably we can gain a factor of 10 on the number of games that you can overall run on this kind of architecture. So passing from a max of 20, 24 games to a time 10, right? Running 200 games on architecture of this kind.

Mark Donnigan:
Yeah, that’s really remarkable. And just in case somebody isn’t doing the quick math here, what you’re saying is that is it with this CPU plus GPU plus VPU, which the VPU is the ASIC based video encoder, all in the same chassis, so the same server, we’re not talking about different servers, you can get up to 200 game players simultaneously, so concurrent players. Which just radically changes the economics. And in our experience, working with publishers and working with platforms, cloud gaming platforms, nearly everybody has said literally without that it’s not even really economical to build the platform. In other words, you end up having to charge your customer so much, and where the experience is, it’s not viable.

Oliver Avaro:
That’s correct.

Mark Donnigan:
Yeah, that’s important.

Oliver Avaro:
And for certain category of games, definitely you can reach this level. So actually augmenting the density by a factor of 10 means also of course diminishing the cost per CCU by a factor of 10. So if you pay $1, currently you will pay 10 cents, and that makes a whole difference. Because let’s assume basic gamers will play 10 hours per month or 30 hours per month, if this is $1, this is $30, right? If this is 10 cents, then you go to one to $3, which I think makes the match work on the subscription, which is between 5 to 15 euro per month.

Is hardware super expensive

Mark Donnigan:
One of the questions that comes up, and I know we’ve had this conversation with you, is how is this possible? Because anybody who understands basic server architecture, basically it’s not difficult to think, well, wait a second, isn’t there a bottleneck inside the machine? And this must require a really super hot rodded machine. So maybe the cost savings is offset by super expensive hardware. And I think it’s important to note that the reason why this is possible is first of all, the VPU is built on NVMe architecture. So it’s using the exact same storage protocol as your hard drive, as the SSDs that are in the machine. And what we have done, what NETINT has done is actually created a peer-to-peer sharing inside the DMA. So basically the GPU will output a frame, a rendered frame, and it’s transferred literally inside memory, so that then the VPU can pick that up, encode it, and there’s effectively zero latency, at least in terms of the latency is so low because it’s happening in the memory buffer.

And so if anybody’s listening and raising an eyebrow wondering, “Well wait a second, surely there’s a bottleneck.” And especially if you’re talking 60 frame per second, which by the way, our benchmarks are generally always at 60 frames per second. Because unless it’s real casual games, you need that frame rate to really deliver a great experience. Even above resolution in some cases, it’s better to get the frame rate up than to increase the size of the frame.

Oliver Avaro:
Absolutely. Absolutely.

Mark Donnigan:
Yeah. Let me just pause here and say that we would love to have questions. And so feel free, on whatever platform, if you’re on YouTube or LinkedIn or wherever watching us right now, just type in and I will try and pick those up. I have looks like, like we already have one. I think this is actually a really good one. I’m going to pick this up right here. But feel free to enter questions in the chat. So Oliver, the question is, “I live in a country where stable internet is not always available.” And by the way, I would say that this isn’t only a country issue, internet varies, right? And the expectation of users is more and more that they don’t think about the fact that I’m in a car, I happen to be in an area where there’s great coverage, but seven miles down the road that changes, right? They want to keep playing and keep enjoying this great experience.

So the question is, “I live in a country where stable internet is not always available. How will this affect the gaming experience?” And yeah, I mean, that’s the question. So what’s your experience and how are you guys solving for this?

Oliver Avaro:
You see, in Netflix or Spotify, you can actually buffer content so that even if your bandwidth is a bit clumsy, you can actually store that content in the CDN and keep the experience good enough, right? Or you can download the video and make it work. So definitely you have some way to solve that problem in I would say cold media, right? Media that you can encode in one way, then stream later. In games, this is completely different.

Mark Donnigan:
Yeah, you can’t do that.

Oliver Avaro:
Because we have to encode, stream, deliver, and then handle the interaction right away. So if your bandwidth is not enough, if the quality of the bandwidth is not enough, and not only in terms of the size of the bandwidth but also in terms of characteristics, the latency, how stable this latency is and so on, then the experience will not be great, right?

So what we’ve been doing actually with Ericsson, okay, is to use 5G networks and to define specific characteristic of what is a slice in the 5G network. So we can tune the 5G network to make it fit for gaming. And to optimize basically the delivery of gaming with 5G. So we think that 5G is going to get much faster in those region where actually the internet is not so great. We’ve been deploying the Blacknut service in Thailand, in Singapore, in Malaysia, now in the Philippines and so on. And this has allowed us to actually reach people in regions where there is no cable or bandwidth with fiber and this kind of things. So look, I’m not going to solve a problem where bandwidth is not available, but maybe bandwidth will come faster with 5G and that could be the solution.

Mark Donnigan:
Yeah, I want to make a comment there, and thank you for the answer. We are seeing, so it’s very interesting, and I’ll use India as an example. So for years in video streaming, the Indian market was used as an example of where it was very difficult to deliver high quality, and especially if you wanted to deliver say 720p, and 1080p was almost assumed at a certain period of time it’s not even possible. Because the network capacity and the speeds were just so low.

What has happened is, and India’s a great case study here, but it’s really almost all regions of the world, as these infrastructures, these wireless infrastructures have been upgraded, they leapfrogged literally from 3G or in some cases even 2.5G and before, and just went all the way to 5G. And so in the last five years there has been such a fundamental shift in bandwidth availability that in some cases, some of these regions of the world, not only is it definitely no longer true that they’re slow, they’re faster than some of the more developed countries. So I do want to make that statement there. One question, Oliver, can you talk about is this webRTC? What protocols you’re using? There’s a lot of talk right now about QUIC. And I think that would be interesting for some of the listeners who might be wondering even what protocols you’re using.

Oliver Avaro:
So we use standard codecs to start with the bottom line. We have not invented codecs, we have been into the standardization industry of audio and video for quite some years, and I think you have great experts here doing great technology. And this technology is actually embedded into the chipset, into the hardware, so actually you can rely on hardware encoding and decoding capabilities. So we do think standard codecs is basically a must have, right? Of course you need to configure them the right way because you have to code real time. Okay? So you cannot use particular techniques to wait for a couple of frames or more, so you have to optimize this. But basically we use standard codecs.

Then on the protocols on top of this, we have actually a large variety of protocols. It depends on the device on which you are streaming. So it can go from fully proprietary protocols that we have invented and patented in Blacknut, to standard webRTC. Okay? So if you look at devices like Samsung and LG, which are basically the top manufacturers, I think the service has been launched on LG. We are going to announce, I think, our launch with Samsung in a very short time. And these devices support webRTC, and that basically is the only way to implement and to support the cloud gaming solution efficiently. So short answer, we use a wide range of protocols, always the one that is the most appropriate and provides the best experience to the end user. We’re looking at, of course, new protocols, new standards, experimenting with these. But I would say for the mainstream solution, we use our own solution plus webRTC. It’s the only… that they’re there.

The end-to-end latency targets

Mark Donnigan:
The end-to-end latency targets, I think previously you made the comment about 80 milliseconds. But give us some guidelines, what is, obviously the answer is as low as possible, but what’s the upper limit where the game experience just falls apart? It’s just not playable?

Oliver Avaro:
You know that the limit for conventional video is about 150 milliseconds. For playing games, this is much lower, probably half of it. So I think you can get a reasonably good experience at 80 milliseconds for actually most of the game that does not require this kind of fast reaction. But then if you want to go to FPS or this kind of thing, that really need to… to nearly be reactive at the frame accuracy, which is very of course difficult in cloud gaming, you need to go down to the 30 millisecond and lower, right? And then I think it’s only feasible if you have a network that allows for it. Because it’s not only about the encoding part, the server side and the client side, it’s also on where the packets are going through the networks. Okay?

Because you can have the most efficient systems in terms of encoding latency and decoding latency, but if your packets, instead of going directly from the server to the end user, go here and there and transit in many places, then your experience will be crappy. And Mark, this is actually a real issue, because we for example had a great demonstration with Ericsson in Barcelona at the Mobile World Congress. And we had servers in Madrid, but when we made the first test, we discovered that the packets were going from Madrid to Paris, and back to Barcelona, right? So this needs a bit of intelligence and technology to make this connection as efficient as possible.

Mark Donnigan:
Tell us about Blacknut, what exactly you guys deliver?

Oliver Avaro:
We provide basically a cloud gaming service, which is, let’s say categorize it as a game as a service. Okay? This means that for the subscription fee per month you get access to the real stuff. You get access to 700 games. We are adding 10 to 15 new games per month, which is I think the fastest pace in terms of increasing game on the market. And we provide this experience on all single devices that can actually receive a video. Okay? So that’s what we do. And we distribute this service either B2C, so direct to the consumer. So if you go on your Blacknut webpage, you can subscribe, you can access to the games. But we also distribute it through carriers, so telecommunication carriers, operators all over the world. We currently have about 20 signed agreement with the carriers live actually. More than 40 signed, and we are signing and delivering one to two new carriers per month. So that’s the pace where we are in Blacknut. And there’s the choice to use carriers here is for the reason I explained to you that it’s good to have.

Mark Donnigan:
Optimization of the network.

Oliver Avaro:
You need to know where the packets are going. You need to make sure that there is some form of CDN for cloud gaming that is in place here that makes the experience optimal.

Mark Donnigan:
Yeah, it completely makes sense to me, especially because you mentioned the 5G optimization. And obviously carriers, yeah, they’ve been investing now for years in building out their 5G networks. But they’re always looking for reasons to drive more value and to really extract the full potential off the 5G or out of the 5G investment. So yeah, it really makes sense.

Oliver Avaro:
That’s the kind of thing we’re doing as well with our partner Radian Arc, and we are putting a server at the edge of the network. So inside the carrier’s infrastructure so that the latency is really super optimized. So that’s one thing that is key for the service.

The architecture

Mark Donnigan:
What is the architecture of that edge server? What’s in it? What CPU, GPU, VPU. Describe that.

Oliver Avaro:
We started with a standard architecture, with CPU and GPU. And now with the current VPU architecture, we are putting actually a whole server consisting of an AMD GPU and NETINT VPUs. And basically we build the whole package so that we put this in the infrastructure of the carrier and we can deploy the Blacknut cloud gaming on top of it.

Mark Donnigan:
And are you delivering to only a handful of fixed resolutions? If I was on a TV for example, do I get 4K or do you limit to 1080p or how do you handle that?

Oliver Avaro:
Again, great question. Okay? We actually can handle multiple resolutions. I think we can stream from 720p up to 4K. The technology basically has no limits for it, right? And streaming 4K or even 8K is a problem that has somehow been solved already, from a technical matter. The question is, again, the cost and the experience. Okay? Streaming 4K on a mobile device does not really make sense. I think the screen is a bit smaller, so you can stream a smaller resolution and that’s sufficient. On a TV likely you need to have a bigger resolution. Even if actually there is great upscaling available on most of the TV sets, we stream 720p on Samsung devices and that’s super great, right? But of course scaling up to 1080p will provide a much better experience. So on TVs and for the games that require it, I think we’re indeed streaming the service at about 1080p.

Mark Donnigan:
Do you also find that frame rate is almost more important than resolution?

Oliver Avaro:
For certain games, absolutely. But again, it is game dependent. Of course-

Mark Donnigan:
It’s game, yeah.

Oliver Avaro:
If you are on a FPS, you probably, if you have the choice and you cannot stream 1080p, you would probably stream 720p at 60 FPS rather than 1080p 30 FPS, right?

Mark Donnigan:
Yes.

Oliver Avaro:
If you have to make some trade-off. But if you have different games where the textures, the resolution is more important, then maybe you will actually select more 1080p and 30 fps resolution. And what we build is actually fully adaptable. Ultimately, you should not forget that there is a network in between. And even if technically you can stream 4K or 8K, the networks may not sustain it. Okay? And then actually you’ll have less good experience streaming 4K than actually a 1080p 60 FPS resolution.

Gaming anywhere where you live?

Mark Donnigan:
Okay. I see a question just came in and it is how do we know where the service is available or is it available anywhere you live? And so I think you can answer that question, but why don’t you also explain are there geographical limitations? Is your content available anywhere? And then as an extension, I don’t think you actually talked about how many publishers you have. You did talk about every month you’re onboarding I think 10 or 12 new games. But yeah, so are there geographical restrictions? How can someone access this?

Oliver Avaro:
Great. Let’s start with content. Okay? Indeed, we have more than 700 games right now, 10 to 15 new games per month. And we actually try not to have geographical limitations on the content. Okay? So the content we have on the catalog is, from a licensing point of view, available worldwide. That’s basically what we do. And we do have exceptions, as usual. But basically, a large part of the catalog is available worldwide. Now, to deploy this catalog across different regions, we are available in more than 45 countries. We definitely need to have servers that are close enough to the end user so that the streaming experience is good enough. And we think that a radius of between 750 and 1,500 kilometers is probably the maximum. So I think we will actually put some points of presence in those geographical areas so that the latency, limited by the speed of light, does not harm the service.

So of course if you look at it, we have Europe very much covered. We have US and Canada very much covered. We have a large portion of Southeast Asia, Korea and Japan very much covered. We are now expanding in Latin America, which is a bit harder. We have a strong presence now as well in the Middle East, with partners like STC in the region. And of course we have some zones that are less covered. Africa is not well covered at all. South Africa is, but basically the rest of Africa is a bit harder to reach.

Mark Donnigan:
By the way, what is the website? Why don’t you give out the URL there?

Oliver Avaro:
www.blacknut.com
I think try the service. We’ll be very happy to support and give feedback. I’m very interested in the feedback as well.

Mark Donnigan:
It’s super exciting. And as I said in the beginning, for me personally, having been really in the very early stages of the transition from physical entertainment delivery, I’m talking about movies specifically, like DVDs, to streaming. I’m just super excited to also now, 15 years later, be there with games. And there’s a lot of work to be done. And as you pointed out, the experience is absolutely not exactly mapped. We can’t throw out the console yet. But the opportunity to bring really the gaming experience to a much wider audience is really enabled with streaming. So by the way, so I think there’s a follow on question here. Do you have infrastructure in South Africa? You mentioned Africa’s not covered as well, but…

Oliver Avaro:
Yes, we do have the capacity to deploy the service in South Africa, absolutely.

Mark Donnigan:
To deploy in South Africa. Okay, great. Great. Well, we’re right up against time and thank you for everyone who joined us live. Really appreciate it. And thank you, Oliver. It’s amazing what you’ve built. And we’re super excited to be working with Blacknut.

Oliver Avaro:
Thank you everyone. Thanks, Mark.

Video Transcoder vs. Video Processing Unit (VPU)

When choosing a product for live stream processing, half the battle is knowing what to search for. Do you want a live transcoder, a video processing unit (VPU), a video coding unit (VCU), Scalable Video Processor (SVP) or something else? If you’re not quite sure what these terms mean and how they relate, this short article will educate you in four minutes or less.  

In the Beginning, There Were Transcoders

Simply stated, a transcoder is any technology, software or hardware, that can input a compressed stream (decode) and output a compressed stream (encode). FFmpeg is a transcoder, and for low-volume video-on-demand applications, it works fine.
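For example, this minimal FFmpeg command decodes a compressed input and re-encodes it to HEVC in software (the file names and bitrate are ours, purely for illustration):

ffmpeg -i input.mp4 -c:v libx265 -b:v 3500k -c:a copy output.mp4

That single command is a complete software transcoder: decode on the way in, encode on the way out.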

For live applications, particularly high-volume live interactive applications (think Twitch), you’ll probably need a hardware transcoder to achieve the necessary cost per stream (CAPEX), operating cost per stream, and density.

For example, the NETINT Video Transcoding Server, a single 1RU server with ten NETINT T408 Video Transcoders, can deliver up to 80 H.264/HEVC 1080p30 streams while drawing under 250 watts. Performed in software using only the CPU, this same output could take up to ten separate 1RU servers, each drawing well over 250 watts.
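Taking those figures at face value, the per-stream math is straightforward: the Video Transcoding Server draws 250 watts ÷ 80 streams ≈ 3 watts per 1080p30 stream, while the CPU-only alternative draws 10 servers × 250+ watts ÷ 80 streams ≈ 31+ watts per stream, roughly a tenfold difference in power per stream before you even consider rack space.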

The NETINT T408 Video Transcoder.

Speaking of the T408, if Webster’s defined “transcoder” (it doesn’t), it might have a picture of the T408 as the perfect example. Based on custom transcoding ASICs, the T408 is inexpensive ($400), capable (4K @ 60 FPS or 4x 1080p60 streams), flexible (H.264 and HEVC), and exceptionally efficient (only 7 watts).

What doesn’t the T408 do? Well, that leads us to the difference between a transcoder and a VPU.

The difference between a transcoder and a Video Processing Unit (VPU)

First, the T408 doesn’t scale video. If you’re building a full encoding ladder from a high-resolution source, all the scaling for the lower rungs is performed by the host CPU. In addition, the T408 doesn’t perform overlay in hardware. So, if you insert a logo or other bug over your videos, again, the CPU does the heavy lifting.
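If you’re curious what that CPU-side work looks like, here’s a representative FFmpeg filter graph that scales a 1080p source to 720p and burns in a logo entirely in software (the file names and overlay position are ours, purely for illustration):

ffmpeg -i source_1080p.mp4 -i logo.png -filter_complex "[0:v]scale=1280:720[v];[v][1:v]overlay=10:10" -c:v libx264 -b:v 3000k out_720p.mp4

With a T408 in the system, the encode moves to hardware, but the scale and overlay filters above still run on the host CPU.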

Finally, the T408 was launched in 2019, the first ASIC-based transcoder to ship in quite a long time. So, it’s not surprising that it doesn’t incorporate any artificial intelligence processing capabilities.

What is a Video Processing Unit (VPU)?

What’s a Video Processing Unit? A hardware device that does all that extra stuff: scaling, overlay, and AI. You see this in the transcoding pipeline shown below, which is for the NETINT Quadra.

When it came to labeling the Quadra, you see the problem: it does much more than transcode video. Not only does it outperform the T408 by a factor of four, it adds AV1 output and all the additional hardware functionality described above. It’s much more than a simple video transcoder; it’s a video processing unit (VPU).

As much as we’d like to lay claim to the acronym, it actually existed before we applied it to the Quadra. That’s not surprising; it follows the terminology for CPU (central processing unit) and GPU (graphics processing unit). And, if Webster’s defined VPU (it doesn’t)… oh, you get the point. Here’s the required Quadra glamour shot.

The NETINT Quadra Video Processing Unit.

VCUs and MSVPs

While NETINT was busy developing ASIC-based transcoders and VPUs for the mass market, large video publishers like YouTube and Meta produced their own ASICs to achieve similar benefits (and produce more acronyms). In 2021, when Google shipped their own ASIC-based transcoder called Argos, they labeled it a Video Coding Unit, or VCU.

As with the T408 and Quadra, the benefits of this ASIC-based technology are profound. As reported by CNET, “Argos handles video 20 to 33 times more efficiently than conventional servers when you factor in the cost to design and build the chip, employ it in Google’s data centers, and pay YouTube’s colossal electricity and network usage bills.” Interestingly, despite YouTube’s heavy usage of the AV1 codec, Argos encodes only H.264 and VP9, not AV1.

In May 2023, Meta released their own ASIC, which, like Argos, outputs H.264 and VP9, but not AV1. Called the Meta Scalable Video Processor (MSVP), the unit delivered impressive results, including “a throughput gain of ~9x for H.264 when compared against libx264 SW encoding…[and] a throughput gain of ~50x when compared with libVPX speed 2 preset.” Meta also noted that the unit drew only 10 watts of power, which is skimpy, but still about 43% higher than the T408’s 7 watts.

Of course, neither Google nor Meta sells their ASICs to third parties, so if you want the CAPEX and OPEX efficiencies that ASIC-based VPUs deliver, you’ll have to buy from NETINT. The bottom line is that whether you call it a transcoder, VPU, VCU, or MSVP, you’ll get the highest throughput and lowest power consumption if it’s powered by an ASIC.

HARD QUESTIONS ON HOT TOPICS:
ASIC-based Video Transcoder versus Video Processing Unit (VPU)
Watch the full conversation on YouTube: https://youtu.be/iO7ApppgJAg

Which AWS CPU is Best for FFmpeg – AMD, Graviton, or Intel?


If you encode with FFmpeg on AWS, you probably know that you have three CPU options: AMD, Graviton, and Intel. Which delivers the most bang for the buck?

For those in a hurry, it’s Graviton for x264 and AMD for x265, often by a significant margin. But the devil is always in the details, and if you want to learn how we tested and how big a difference your CPU selection makes, you can follow the narrative or hopscotch through the fancy charts below. We conclude with a look at the optimal core count for those encoding with AMD CPUs.

Testing the AWS CPUs

Let me start by saying that this was my first foray into CPU testing on AWS, and while it appears straightforward, some unconsidered complexity may have skewed the results. If you see any errors or other factors worth considering, please drop me a note at jan.ozer@netint.com.

Second, your source clip and command string may produce different results than those shown below. If you’re spending big to encode with FFmpeg on AWS, don’t consider my results the final word; instead, consider them as evidence that your CPU choice really does matter and as motivation to perform your own tests. 

Those caveats aside, let’s dig into the testing.

Codecs/Configurations/Command Strings

I tested three cases:

  • 8-bit 1080p30 with x264
  • 8-bit 1080p30 with x265
  • 10-bit 4K60p with x265

I present the command strings at the bottom of this article. Note that I used the veryslow preset for x264, slower for x265 at 1080p30, and slow for the 4K60 HEVC encodes. Why such demanding presets? Because based upon a total cost of distribution (encoding and bandwidth), the optimal economic decision when view counts will exceed 10,000 views is to use a high-quality preset.

Based upon a total distribution cost (encoding and bandwidth), the optimal economic decision when view counts exceed 10,000 views is to use a high-quality preset.

Remember, presets don’t determine quality; your quality expectations do. Most compressionists target a VMAF score of between 93-95 VMAF points for the top rung of their encoding ladders. Using the veryslow preset, you might achieve that at, say, 3 Mbps. Using ultrafast, you might need a bit rate of as much as 5 Mbps to achieve the same quality. Ultrafast might cut your encoding time/cost by 90%, but you only pay that once, while you pay bandwidth costs for each video view. Even at a cost per GB of $0.02, it takes less than 10,000 views for the veryslow preset to break even based on lower bandwidth costs.
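Here’s the back-of-the-envelope math for a one-hour title, using the 3 Mbps vs. 5 Mbps figures above and $0.02/GB delivery (illustrative assumptions, not measurements):

2 Mbps saved × 3,600 seconds ≈ 7,200 megabits ≈ 0.9 GB per view
0.9 GB × $0.02/GB ≈ $0.018 saved per view
10,000 views × $0.018 ≈ $180 in bandwidth savings

Unless the veryslow encode costs roughly $180 more in compute for that hour of content, which it almost certainly won’t, the slower preset pays for itself well before 10,000 views.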

Instances and Pricing

I tested using the 8-core instances and on-demand pricing shown in Table 1. I tested all systems running Ubuntu version 22.04. Note that the cost delta between Intel and AMD is ten percent, a number I’ll refer to below.

Table 1:  Instances and on-demand pricing tested.

Encoding Procedure

As you’ll see in the charts below, I started encoding a single FFmpeg instance and kept adding simultaneous encodes until the cost per stream began to increase, indicating that spinning up another instance was more cost effective than adding additional encodes to the same system.
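As a rough sketch, launching six simultaneous 1080p x264 encodes and timing them looks something like this (the clip and settings mirror the test strings at the end of this article; the instance count is whatever you’re probing):

time (for i in 1 2 3 4 5 6; do ffmpeg -y -i Orchestra.mp4 -c:v libx264 -preset veryslow -g 60 -keyint_min 60 -sc_threshold 0 -b:v 4200k -f mp4 /dev/null & done; wait)

Dividing the instance’s hourly price by the number of streams it can sustain in real time (or normalizing the wall-clock time against the clip duration) yields the cost-per-stream figures charted below.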

FFmpeg Versions

Here’s where things get a bit complicated. My premise was that I would produce the optimal results using FFmpeg versions compiled specifically for each CPU tested. I downloaded builds for Graviton, AMD, and Intel from https://johnvansickle.com/ffmpeg/ and happily contributed via PayPal. However, I was also in touch with MulticoreWare, who requested that I test with an advanced version of their x265 codec that was optimized for Graviton.

Figure 1. I tested with CPU-specific versions of FFmpeg 6.0 from https://johnvansickle.com/ffmpeg/.

Before testing, I compared the performance of the stock version of FFmpeg (Version 4.4) with the CPU-specific versions from Vansickle on the AMD and Intel platforms and for x264 on Graviton. In all cases, the Vansickle version produced the same or better throughput with identical quality.

Note that in other tests on different AMD instances with core counts ranging from 2 – 32, the Vansickle version was not always the best performer. So, if you try the Vansickle versions or your own CPU-specific compiled versions, you should verify that it outperforms the native version in all relevant use cases.
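A simple way to run that check is to time the same encode with both binaries, for example (the binary names and paths here are placeholders):

time ./ffmpeg-vansickle -y -i Orchestra.mp4 -c:v libx264 -preset veryslow -b:v 4200k -f mp4 /dev/null
time ffmpeg -y -i Orchestra.mp4 -c:v libx264 -preset veryslow -b:v 4200k -f mp4 /dev/null

If the CPU-specific build isn’t measurably faster at identical quality, stick with the stock binary.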

Note that the MulticoreWare version of FFmpeg performed much better on the Graviton system than the generic version of 4.4 or the Vansickle version, though still far behind Intel and particularly AMD. As you’ll see clearly below, if you’re running x265 on a Graviton system using high quality presets, you’re missing a great opportunity to shave your costs.

For the record, I tried upgrading the stock version of FFmpeg on the Ubuntu system to version 6.0 but ran into multiple issues that ultimately corrupted the system and forced me to start over from scratch. Unfortunately, Ubuntu operation and maintenance are not core strengths of mine, but since I ran all tests using version 6.0, whether supplied by Vansickle or MulticoreWare, the results should be representative.

Table 2 shows the different versions of FFmpeg that I ran on the three systems for the three test cases.

Table 2. The FFmpeg versions deployed on the three systems for the three test cases.

Results

Here are the results for the three test cases.

1080p x264

Figure 2 shows the cost per hour to produce a 1080p30 stream using FFmpeg and the x264 codec. One of the more interesting testing results was that the combination of FFmpeg and Ubuntu handled multiple instances of FFmpeg with minimal overhead, particularly on the Graviton CPU. You see this with the cost per hour for Graviton remaining consistent through twelve instances, while it increased slightly for Intel after 10 instances and AMD after 12.

In all cases, you see the cost per instance drop significantly when moving from single to multiple simultaneous encodes. If you’re performing a single 1080p x264 encode on an 8-core system, you’re probably wasting money.

On the other hand, once each CPU hits the lowest cost per hour, it’s time to consider adding another instance. The cost per stream will remain the same, but your encoding speed will double. So, if you’re encoding on a Graviton system, your encoding time will double if you perform twelve simultaneous encodes as opposed to six, but your cost per hour will be almost exactly the same. If you spin up another 8-core system and encode six simultaneous encodes on the two systems, your cost will be almost identical, but your throughput will double.

Figure 2. Cost per hour to produce a single 1080p stream using the x264 codec and FFmpeg. Graviton is clearly the most cost-effective.

1080p x265

What a difference a codec makes. Where Graviton was the clear leader for x264, it’s the clear laggard for x265. Again, I produced the Graviton results shown in Figure 3 using a version of FFmpeg supplied by x265 developer MulticoreWare; the results would have been much worse with either the Vansickle version or the stock version. As you may know, Graviton is an Arm-based CPU that uses a different instruction set than Intel or AMD CPUs. While the x264 codec was Arm-friendly, the x265 codec was decidedly the reverse, at least using the high-quality presets that I used in my tests.

Interestingly, for both Intel and AMD, we realized the lowest cost per stream at relatively low simultaneous stream counts, two for Intel and two and three for AMD. If your testing confirms this, you should consider adding instances once you achieve this threshold rather than adding additional encodes to existing instances.

Figure 3. Cost per hour to produce a single 1080p stream using the x265 codec and FFmpeg.

Comparing the lowest-cost Intel ($6.60) to the lowest-cost AMD ($5.49) shows a cost delta of about 17%. As shown in Table 1, 10% of this relates to pricing, leaving about a 7% performance delta.

For the record, note that an Amazon engineer ran similar tests here and found that Graviton was faster for both x264 and x265. Note, however, that the author used the ultrafast preset, while I used higher quality presets for the stated reasons. Have a look and draw your own conclusions.

4K60 x265

In 4K60p testing, the Graviton was clearly overwhelmed from both a cost and performance aspect, unable to complete even three simultaneous encodes. The overall cost delta between Intel and AMD narrowed slightly, dropping to 13.7% overall, with 10% relating to pricing. The actual throughput delta between the two in these tests is 3.7%.

Figure 4. Cost per hour to produce a single 4K60p stream using the x265 codec and FFmpeg.

This 4K60 test stressed memory usage much more than the 1080p tests, limiting successful simultaneous transcodes to two for Graviton and four for AMD and Intel. Interestingly, in these tests, AMD produced the lowest cost per stream while running a single encode, and Intel did so at two. With these challenging encodes, you may want to spin up new machines after only one or two encodes rather than attempting more simultaneous encodes. Or, perhaps, try a machine with more cores. Hold that thought until the last section.

For reference, Table 3 summarizes the lowest cost per hour for the three test cases.

Table 3. Cost per hour for the three test cases on the three tested CPUs.

Which leads us to the last section.

What’s the Optimal Number of Cores for FFmpeg?

AWS offers multiple core counts in all three CPU flavors: what’s the optimal core count? To evaluate this, I ran tests on multiple AMD CPUs for all three test cases and present the results below.

Let’s talk about expectations first. AWS charges linearly for the machine cores, so an 8-core system costs twice as much as a 4-core system and a quarter of a 32-core system. Given the results presented above, where FFmpeg/Ubuntu proved highly efficient when processing multiple instances, I expected a similar cost per hour for all CPUs. The results were close.
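To make that concrete with round, illustrative numbers: if an 8-core instance costs $0.40/hour and sustains six simultaneous real-time 1080p encodes, a 32-core instance at $1.60/hour has to sustain twenty-four encodes to hit the same cost per stream (about $0.067/hour either way). Any scaling overhead on the larger machine shows up directly as a higher cost per stream.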

With x264, 2-core and 8-core systems were slightly more affordable than 16-core, though a 32-core system finally caught up at 32 simultaneous transcodes. If you’re going to run a 32-core system for 1080p30/x264 encodes, you need to be running quite a few simultaneous encodes to achieve the optimal cost per stream.

Figure 5. x264 encoding cost for the CPU core counts shown.

With x265 encoding at 1080p, the results were closer to what I expected, though again, the 2-core and 8-core systems were slightly more affordable. Unlike x264, the 32-core system became slightly more expensive as the number of simultaneous encodes increased, making eight simultaneous streams the most affordable.

Figure 6. x265 encoding cost for 1080p30 encodes and the CPU core counts shown.

When encoding 4K videos, the phrase “go big or go home” comes to mind. Here, 32-cores delivered the lowest cost, though only by a fraction, and only at four simultaneous encodes. After that, the cost per hour increases slightly through eight encodes and then starts a more serious climb.

Figure 7. x265 encoding cost for 4K60 encodes and the CPU core counts shown.

As you can see, all these results are highly codec and source material specific. The most important takeaway from this article should not be that Graviton is best for x264 and AMD best for x265. It should be that real differences exist between the performance of the CPUs, and these differences may translate to significant cost differentials. If you’re spending even a few thousand dollars a month on AWS for FFmpeg encoding, it makes sense to run tests like these to identify the most cost-effective CPU and core-count.

Test Strings

1080p30 x264:

ffmpeg -y -i Orchestra.mp4 -c:v libx264 -profile:v high  -preset veryslow -g 60 -keyint_min 60 -sc_threshold 0  -b:v 4200k -pass 1  -f mp4 /dev/null

ffmpeg -y -i Orchestra.mp4 -c:v libx264  -preset veryslow -g 60 -keyint_min 60 -sc_threshold 0  -b:v 4200k -maxrate 8400k -bufsize 8400k -pass 2  orchestra_x264_output.mp4

1080p30 x265:

ffmpeg  -y -i Football_short.mp4 -c:v libx265 -preset slower -x265-params keyint=60:min-keyint=60:scenecut=0:bitrate=3500:pass=1  -f mp4 /dev/null

ffmpeg  -y -i Football_short.mp4 -c:v libx265 -preset slower -x265-params keyint=60:min-keyint=60:scenecut=0:bitrate=3500:vbv-maxrate=7000:vbv-bufsize=7000:pass=2  Football_x265_HD_output.mp4

4K60 x265:

ffmpeg -y -i Football_4K60.mp4 -c:v libx265 -preset slow -x265-params keyint=120:min-keyint=120:scenecut=0:bitrate=12500K:pass=1  -f mp4 /dev/null

ffmpeg -y -i Football_4K60.mp4 -c:v libx265 -preset slow -x265-params keyint=120:min-keyint=120:scenecut=0:bitrate=12500K:vbv-maxrate=25000K:vbv-bufsize=25000K:pass=2  Football_4K_output.mp4 

HARD QUESTIONS ON HOT TOPICS: AMD, Graviton, and Intel
– three CPU options to encode with FFmpeg on AWS
 
Watch the full conversation on YouTube: https://youtu.be/BOZZuiemMAU