NETINT Breaks Into the Streaming Media 100 List 2023

NETINT joins the prestigious Streaming Media 100 list for 2023, recognized for its pioneering ASIC-based transcoders and celebrated for innovation in live streaming, cloud gaming, and surveillance.

NETINT is proud to be included in the Streaming Media list of the Top 100 Companies in the Streaming Media Universe, which “set themselves apart from the crowd with their innovative approach and their contribution to the expansion and maturation of the streaming media universe.”

The list is compiled by members of Streaming Media Magazine’s inner circle and “foregrounds the industry’s most innovative and influential technology suppliers, service providers, platforms, and media and content companies, as acclaimed by our editorial team. Some are large and established industry standard-bearers, while others are comparably small and relatively new arrivals that are just beginning to make a splash.”

Commenting on the award, NETINT CEO Alex Lui said, “Over the last twelve months, video engineers have increasingly recognized the unique value that ASIC-based transcoders deliver to the live streaming, cloud gaming, and surveillance markets, including the lowest cost and power consumption per stream, and the highest density. Our entire company appreciates that insiders at Streaming Media share this assessment.”

To learn more about NETINT’s Video Processing Units, access our resources here or schedule a consultation with NETINT’s engineers.

Choosing Transcoding Hardware: Deciphering the Superiority of ASIC-based Technology

Which technology reigns supreme in transcoding: CPU-only, GPU, or ASIC-based? Kenneth Robinson’s incisive analysis from the recent symposium makes a compelling case for ASIC-based transcoding hardware, particularly NETINT’s Quadra. Robinson’s metrics prioritized viewer experience, power efficiency, and cost. While CPU-only systems appear initially economical, they falter with advanced codecs like HEVC. NVIDIA’s GPU transcoding offers more promise, but the Quadra system still outclasses both in quality, cost per stream, and power consumption. Furthermore, Quadra’s adaptability allows a seamless switch between H.264 and HEVC without incurring additional costs. Independent assessments, such as Ilya Mikhaelis’, echo Robinson’s conclusions, cementing ASIC-based transcoding hardware as the optimal choice.

During the recent symposium, Kenneth Robinson, NETINT’s manager of Field Application Engineering, compared three transcoding technologies: CPU-only, GPU, and ASIC-based transcoding hardware. His analysis, which incorporated quality, throughput, and power consumption, is useful both as a template for testing methodology and for its results. You can watch his presentation here and download a copy of his presentation materials here.

Figure 1. Overall savings from ASIC-based transcoding (Quadra) over GPU (NVIDIA) and CPU.

As a preview of his findings, Kenneth found that when producing H.264, ASIC-based hardware transcoding delivered CAPEX savings of 86% and 77% compared to CPU and GPU-based transcoding, respectively. OPEX savings were 95% vs. CPU-only transcoding and 88% compared to GPU.

For the more computationally complex HEVC codec, the savings were even greater. As compared to CPU-based transcoding, ASICs saved 94% on CAPEX and 98% on OPEX. As compared to GPU-based transcoding, ASICs saved 82% on CAPEX and 90% on OPEX. These savings are obviously profound and can make the difference between a successful and profitable service and one that’s mired in red ink.

Let’s jump into Kenneth’s analysis.

Determining Factors

Digging into the transcoding alternatives, Kenneth described the three options. First are CPUs from manufacturers like AMD or Intel. Second are GPUs from companies like NVIDIA or AMD. Third are ASICs, or Application Specific Integrated Circuits, from manufacturers like NETINT. Kenneth noted that NETINT calls its Quadra devices Video Processing Units (VPU), rather than transcoders because they perform multiple additional functions besides transcoding, including onboard scaling, overlay, and AI processing.

He then outlined the factors used to determine the optimal choice, detailing the four factors shown in Figure 2. Quality is the average quality as assessed using metrics like VMAF, PSNR, or subjective video quality evaluations involving A/B comparisons with viewers. Kenneth used VMAF for this comparison. VMAF has been shown to have the highest correlation with subjective scores, which makes it a good predictor of viewer quality of experience.

Figure 2. How Kenneth compared the technologies.

Low-frame quality is the lowest VMAF score on any frame in the file. This is a predictor for transient quality issues that might only impact a short segment of the file. While these might not significantly impact overall average quality, short, low-quality regions may nonetheless degrade the viewer’s quality of experience, so are worth tracking in addition to average quality.

Server capacity measures how many streams each configuration can output, which is also referred to as throughput. Dividing server cost by the number of output streams produces the cost per stream, which is the most relevant capital cost comparison. The higher the number of output streams, the lower the cost per stream and the lower the necessary capital expenditures (CAPEX) when launching the service or sourcing additional capacity.

Power consumption measures the power draw of a server during operation. Dividing this by the number of streams produced results in the power per stream, the most useful figure for comparing different technologies.
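
To make these two calculations concrete, here’s a quick shell sketch using the Quadra figures that appear later in this article, plus an assumed power draw (the 500-watt figure is a placeholder, not a measured value):

  SERVER_COST=21000   # fully configured server price in dollars
  STREAMS=320         # 1080p30 streams the server can output
  POWER_WATTS=500     # assumed wall draw under load; substitute your measurement

  echo "Cost per stream: \$$((SERVER_COST / STREAMS))"                       # $65
  echo "Power per stream: $(echo "scale=2; $POWER_WATTS/$STREAMS" | bc) W"   # 1.56 W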

Detailing his test procedures, Kenneth noted that he tested CPU-only transcoding on a system equipped with a 32-core AMD Epyc CPU. Then he installed the NVIDIA L4 GPU (a recent release) for GPU testing and NETINT’s Quadra T1U U.2 form factor VPU for ASIC-based testing.

He evaluated two codecs, H.264 and HEVC, using a single file, the Meridian file from Netflix, which contains a mix of low and high-motion scenes and many challenging elements like bright lights, smoke and fog, and very dark regions. If you’re testing for your own deployments, Kenneth recommended testing with your own test footage.

Kenneth used FFmpeg to run all transcodes, testing CPU-only quality with the x264 and x265 codecs using the medium and veryfast presets. He used FFmpeg for NVIDIA and NETINT testing as well, transcoding with each device’s native H.264 and H.265 codecs.
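
Kenneth’s exact command lines weren’t published, but the three test cases would look roughly like the sketches below (file names hypothetical; the bitrate matches one of the tested rungs; the Quadra command follows the syntax shown later in this piece):

  # CPU-only: x264, medium preset (repeat with -preset veryfast, and libx265 for HEVC)
  ffmpeg -y -i meridian.mp4 -c:v libx264 -preset medium -b:v 3900k out_x264.mp4

  # GPU: NVIDIA L4 via NVENC
  ffmpeg -y -i meridian.mp4 -c:v h264_nvenc -b:v 3900k out_nvenc.mp4

  # ASIC: NETINT Quadra VPU
  ffmpeg -y -i meridian.mp4 -c:v h264_ni_quadra_enc -b:v 3900k out_quadra.mp4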

H.264 Average, Low-Frame, and Rolling Frame Quality

The first result Kenneth presented was average H.264 quality. As shown in Figure 3, Kenneth encoded the Meridian file to four output files for each technology, with encodes at 2.2 Mbps, 3.0 Mbps, 3.9 Mbps, and 4.75 Mbps. In this “rate-distortion curve” display, the left axis is VMAF quality, and the bottom axis is bitrate. In all such displays, higher results are better, and Quadra’s blue line is the best alternative at all tested bitrates, beating NVIDIA and x264 using the medium and veryfast presets.

Figure 3. Quadra was tops in H.264 quality at all tested bitrates.

Kenneth next shared the low-frame scores (Figure 4), noting that while the NVIDIA L4’s score was marginally higher than the Quadra’s, the difference at the higher end was only 1%. Since no viewer would notice this differential, this indicates operational parity in this measure.

Figure 4. NVIDIA’s L4 and the Quadra achieve relative parity in H.264 low-frame testing.

The final H.264 quality finding displayed a 20-second rolling average of the VMAF score. As you can see in Figure 5, the Quadra, which is the blue line, is consistently higher than the NVIDIA L4 and x264 at the medium and veryfast presets. So, even though the Quadra had a slightly lower single-frame VMAF score than NVIDIA, over the course of the entire file, its quality was predominantly superior.

Figure 5. 20-second rolling frame quality over file duration.

HEVC Average, Low-Frame, and Rolling Frame Quality

Kenneth then related the same results for HEVC. In terms of average quality (Figure 6), NVIDIA was slightly higher than the Quadra, but the delta was insignificant. Specifically, NVIDIA’s advantage starts at 0.2% and drops to 0.04% at the higher bitrates. So, again, a difference that no viewer would notice. Both NVIDIA and Quadra produced better quality than CPU-only transcoding with x265 and the medium and veryfast presets.

Figure 6. HEVC average quality at all tested bitrates.

In the low-frame measure (Figure 7), Quadra proved consistently superior, with NVIDIA significantly lower, again a predictor of transient quality issues. In this measure, Quadra also consistently outperformed x265 using the medium and veryfast presets, which is impressive.

Figure 7. Quadra was consistently superior in HEVC low-frame testing.

Finally, HEVC moving-average scoring (Figure 8) again showed Quadra to be consistently better across all frames when compared to the other alternatives. Note NVIDIA’s downward spike around frame 3796, which could translate into a transient quality drop that impacts the viewer’s quality of experience.

Figure 8. 20-second rolling frame quality over file duration.

Cost Per Stream and Power Consumption Per Stream - H.264

To measure cost and power consumption per stream, Kenneth first calculated the cost for a single server for each transcoding technology and then measured throughput and power consumption for that server using each technology. Then, he compared the results, assuming that a video engineer had to source and run systems capable of transcoding 320 1080p30 streams.

You see the first step for H.264 in Figure 9. The baseline computer without add-in cards costs $7,100 but can only output fifteen 1080p30 streams using an average of the medium and veryfast presets, resulting in a cost per stream of $473. Kenneth installed two NVIDIA L4 cards in the same system, which boosted the price to $14,214 but more than tripled throughput to fifty streams, dropping the cost per stream to $285. Then Kenneth installed ten Quadra T1U VPUs in the system, which increased the price to $21,000 but skyrocketed throughput to 320 1080p30 streams, dropping the cost per stream to $65.

This analysis reveals why computing and focusing on the cost per stream is so important: though the Quadra system costs roughly three times the CPU-only system, the ASIC-fueled output is over 21 times greater, producing a much lower cost per stream. You’ll see how that impacts CAPEX for our 320-stream required output in a few slides.

Figure 9. Computing system cost and cost per stream.

Figure 10 shows the power consumption per stream computation. Kenneth measured power consumption during processing and divided that by the number of output streams produced. This analysis again illustrates why normalizing power consumption on a per-stream basis is so necessary; though the CPU-only system draws the least power, making it appear to be the most efficient, on a per-stream basis, it’s almost 20x the power draw of the Quadra system.

Figure 10. Computing power per stream for H.264 transcoding.

Figure 11 summarizes CAPEX and OPEX for a 320-channel system. Note that Kenneth rounded down rather than up to compute the total number of servers for CPU-only and NVIDIA. That is, at a capacity of 15 streams for CPU-only transcoding, you would need 21.33 servers to produce 320 streams. Since you can’t buy a fractional server, you would need 22, not the 21 shown. Ditto for NVIDIA and the six servers shown, which, at 50 output streams each, should have been 6.4, rounded up to 7. So, the savings shown are understated by about 4.5% for CPU-only and 15% for NVIDIA. Even without the corrections, the CAPEX and OPEX differences are quite substantial.

Figure 11. CAPEX and OPEX for 320 H.264 1080p30 streams.
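
Here’s that server-count arithmetic as a quick shell sketch, using the per-server costs and stream counts from Figure 9 and rounding up to whole servers as described above:

  STREAMS_NEEDED=320
  capex () {  # args: server cost in dollars, streams per server
    local servers=$(( (STREAMS_NEEDED + $2 - 1) / $2 ))   # ceiling division
    echo "$servers servers, CAPEX \$$(( servers * $1 ))"
  }
  capex 7100  15    # CPU-only:  22 servers, CAPEX $156,200
  capex 14214 50    # NVIDIA L4:  7 servers, CAPEX $99,498
  capex 21000 320   # Quadra:     1 server,  CAPEX $21,000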

Cost Per Stream and Power Consumption Per Stream - HEVC

Kenneth performed the same analysis for HEVC. All systems cost the same, but throughput of the CPU-only and NVIDIA-equipped systems both drop significantly, boosting their costs per stream. The ASIC-powered Quadra outputs the same stream count for HEVC as for H.264, producing an identical cost per stream.

Figure 12. Computing system cost and cost per stream.

The throughput drop for CPU-only and NVIDIA transcoding also boosted the power consumption per stream, while Quadra’s remained the same.

Figure 13. Computing power per stream for HEVC transcoding.

Figure 14 shows the total CAPEX and OPEX for the 320-channel system, and this time, all calculations are correct. While CPU-only systems are tenuous, at best, for H.264, they’re clearly economically untenable with more advanced codecs like HEVC. While the differential isn’t quite so stark with the NVIDIA products, Quadra’s superior quality and much lower CAPEX and OPEX are compelling reasons to adopt the ASIC-based solution.

Figure 14. CAPEX and OPEX for 320 1080p30 HEVC streams.

As Kenneth pointed out in his talk, even if you’re producing only H.264 today, if you’re considering HEVC in the future, it still makes sense to choose a Quadra-equipped system because you can switch over to HEVC at any time with no extra hardware cost. With a CPU-only system, you’ll have to more than double your CAPEX spending, while with NVIDIA, you’ll need to spend another 25% to meet capacity.

The Cost of Redundancy

Kenneth concluded his talk with a discussion of full hardware and geo-redundancy. He envisioned a setup where one location houses two servers (a primary and a backup) for full hardware redundancy. A similar setup would be replicated in a second location for geo-redundancy. Using the Quadra video server, four servers could provide both levels of redundancy, costing a total of $84,000. Obviously, this is much cheaper than any of the other transcoding alternatives.

NETINT’s Quadra VPU proved slightly superior in quality to the alternatives, vastly cheaper than CPU-only transcoding, and very meaningfully more affordable than GPU-based transcoders. While these conclusions may seem unsurprising (an employee of an encoding ASIC manufacturer concludes that his ASIC-based technology is best), you can check Ilya Mikhaelis’ independent analysis here and see that he reached the same result.

Get Free CAE on NETINT VPUs with Capped CRF

NETINT recently added capped CRF to the rate control mechanisms across our Video Processing Unit (VPU) product lines. With the wide adoption of content-adaptive encoding (CAE) techniques, constant rate factor (CRF) encoding with a bitrate cap has gained popularity as a lightweight form of CAE that reduces the bitrate of easy-to-encode sequences, saving delivery bandwidth while maintaining consistent video quality. It’s a mode that we expect many of our customers to use, and this document explains what it is, how it works, and how to get the most from the feature.

In addition to working with H.264, HEVC, and AV1 on the Quadra VPU line, capped CRF works with H.264 and HEVC on the T408 and T432 video transcoders. This document details how to encode with capped CRF using the H.264 and HEVC codecs on Quadra VPUs, though most application scenarios apply to all codecs across the NETINT VPU lines.

What is Capped CRF and How Does it Work?

Capped CRF is a bitrate control technique that combines constant rate factor (CRF) encoding with a bit rate cap. Multiple codecs and software encoders support it, including x264 and x265 within FFmpeg. In contrast to CBR and VBR encoding, which encode to a specified target bitrate (and ignore output quality), CRF encodes to a specified quality level and ignores the bitrate.

CRF values range from 0-51, with lower numbers delivering higher quality at higher bitrates (less savings) and higher values delivering lower quality at lower bitrates (more bitrate savings). Many encoding engineers use values between 21 and 23. Which is right for you? As you will read below, the balance you want between quality and bitrate savings determines the best value for your use case.

For example, with the x264 codec, if you transcode to CRF 23, the encoder typically outputs a file with a VMAF quality of 93-95. If that file is a 4K60 soccer match, the bitrate might be 30 Mbps. If it’s a 1080p talking head, it might be 1.2 Mbps. Because CRF delivers a known quality level, it’s ideal for creating archival copies of videos. However, since there’s no bitrate control, in most instances, CRF alone is unusable for streaming delivery.
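
For reference, plain CRF encoding with x264 in FFmpeg looks like this; with no bitrate cap, the output bitrate depends entirely on content complexity (file names hypothetical):

  ffmpeg -i input.mp4 -c:v libx264 -crf 23 output.mp4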

When you combine CRF with a bitrate cap, you get the best of both worlds: a bitrate reduction with consistent quality for easy-to-encode clips, and quality and bitrate similar to CBR for more complex clips.

Here’s how capped CRF could be used with the Quadra VPU:

ffmpeg -i input.mp4 -c:v h264_ni_quadra_enc -xcoder-params "RcEnable=0:crf=23:vbvBufferSize=1000:bitrate=6000000" output.mp4

The relevant elements are:

  • crf=23 – sets the quality target at around 95 VMAF

  • vbvBufferSize=1000 – sets the VBV buffer to one second (1000 ms)

  • bitrate=6000000 – caps the bitrate at 6 Mbps.

This command produces a file that targets roughly 95 VMAF quality but, in all cases, peaks at around 6 Mbps.

For a simple-to-encode talking head clip, Quadra produced a file with an average bitrate of 1,274 kbps and a VMAF score of 95.14. Figure 1 shows this output in a program called Bitrate Viewer. Since the entire file is under the 6 Mbps cap, the CRF value controls the bitrate throughout.

Encoding this clip with Quadra using CBR at 6 Mbps produced a file with a bitrate of 5.4 Mbps and a VMAF score of 97.50. Multiple studies have found that VMAF scores above 95 are not perceptible by viewers, so the extra 2.36 VMAF points don’t improve the viewer’s quality of experience (QoE). In this case, capped CRF reduces your bandwidth cost by 76% without impacting QoE.

Figure 1. Capped CRF encoding a simple-to-encode video in Bitrate Viewer.
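
The 76% figure comes straight from the two measured bitrates:

  echo "scale=1; (5400 - 1274) * 100 / 5400" | bc   # 76.4% bandwidth saved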

You see this in Figure 2, which shows the capped CRF frame with a VMAF score of 94.73 on the left and the CBR frame with a VMAF score of 97.2 on the right. The video on the right has a bitrate more than 4 Mbps higher than the video on the left, but the viewer wouldn’t notice the difference.

Figure 2. Frames from the talking head clip. Capped CRF at 1.23 Mbps on the left, CBR at 5.4 Mbps on the right. No viewer would notice the difference.

Figure 3 shows capped CRF operation with a hard-to-encode American football clip. The average bitrate is 5900 kbps, and the VMAF score is 94.5. You see that the bitrate for most of the file is pushing against the 6 Mbps cap, which means that the cap is the controlling element. In the two regions where there are slight dips, the CRF setting controls the quality.

Figure 3. Capped CRF encoding a hard-to-encode video in Bitrate Viewer.

In contrast, the CBR encode of the football clip produced a bitrate of 6,013 kbps and a VMAF score of 94.73. Netflix has stated that most viewers won’t notice a VMAF differential under 6 points, so a viewer would not perceive the roughly 0.25-point delta between the CBR and capped CRF files. In this case, capped CRF reduced delivery bandwidth by about 2% without impacting QoE.

Of course, as shown in Figure 3, the two-minute segment tested was almost all high motion. The typical sports broadcast contains many lower-motion sequences, including some commercials, cuts to the broadcasters, and timeouts and penalty calls. In most cases, you would expect many more dips like those shown in Figure 3 and more substantial savings.

So, the benefits of capped CRF are as follows:

  • You can use a single ladder for all your content, automatically saving bitrate on easy-to-encode clips and delivering the equivalent QoE on hard-to-encode clips.
  • Even if you modify your ladder by type of content, you should save bandwidth on easy-to-encode regions within all broadcasts without impacting QoE.
  • It provides the benefits of CAE without the added integration complexity or extra technology licensing cost; capped CRF is free across all NETINT VPU and video transcoder products.

Producing Capped CRF

Using the NETINT Quadra VPU series, the following command for H.264 capped CRF will optimize video quality and deliver a file or stream with a fully compliant VBV buffer. As noted previously, this command string, with the appropriate modification to the codec value, will work across the entire NETINT product line. For example, to output HEVC, change -c:v h264_ni_quadra_enc to -c:v h265_ni_quadra_enc.

Here’s the command string.

ffmpeg -y -i input.mp4 -c:v h264_ni_quadra_enc -xcoder-params "gopPresetIdx=5:RcEnable=0:crf=23:intraPeriod=120:lookAheadDepth=10:cuLevelRCEnable=1:vbvBufferSize=1000:bitrate=6000000:tolCtbRcInter=0:tolCtbRcIntra=0:zeroCopyMode=0" output.mp4

Here’s a brief explanation of the encoding-related switches.

  • -c:v h264_ni_quadra_enc -xcoder-params – selects Quadra’s H.264 encoder and passes in the codec parameters detailed below.

  • gopPresetIdx=5 – this chooses the Group of Pictures (GOP) pattern, or the mixture of B-frames and P-frames within each GOP. You should be able to adjust this without impacting capped CRF performance.

  • RcEnable=0 – this disables rate control. You must use this setting to enable capped CRF.

  • crf=23 – this chooses the CRF value. You must include a CRF value within your command string to enable capped CRF.

  • intraPeriod=120 – This sets the GOP size to four seconds (120 frames at 30 fps), which we used for all tests. You can adjust this setting to your normal target without impacting CRF operation.

  • lookAheadDepth=10 – This sets the lookahead to 10 frames. You can adjust this setting to your normal target without impacting CRF operation.

  • cuLevelRCEnable=1 – this enables coding unit-level rate control. Do not adjust this setting without verifying output quality and VBV compliance.

  • vbvBufferSize=1000 – This sets the VBV buffer size. You must set this to trigger capped CRF operation.

  • bitrate=6000000 – This sets the bitrate. You must set this to trigger capped CRF operation. You can adjust this setting to your target without impacting CRF operation.

  • tolCtbRcInter=0 – This defines the tolerance of CU-level rate control for P-frames and B-frames. Do not adjust this setting without verifying output quality and VBV compliance.

  • tolCtbRcIntra=0 – This sets the tolerance of CU level rate control for I-frames. Do not adjust this setting without verifying output quality and VBV compliance.

  • zeroCopyMode=0 – this enables or disables the libxcoder zero copy feature. Do not adjust this setting without verifying output quality and VBV compliance.

You can access additional information about these controls in the Quadra Integration and Programming Guide.
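
For completeness, applying the codec substitution mentioned earlier, the HEVC equivalent of the command string would be the following (you would likely also lower the bitrate cap, as discussed in the HEVC section below):

  ffmpeg -y -i input.mp4 -c:v h265_ni_quadra_enc -xcoder-params "gopPresetIdx=5:RcEnable=0:crf=23:intraPeriod=120:lookAheadDepth=10:cuLevelRCEnable=1:vbvBufferSize=1000:bitrate=6000000:tolCtbRcInter=0:tolCtbRcIntra=0:zeroCopyMode=0" output.mp4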

Choosing the CRF Value and Bitrate Cap – H.264

Deploying capped CRF involves two significant decisions, choosing the CRF value and setting the bitrate cap. Choosing the CRF value is the most critical decision, so let’s begin there.

Table 1 shows the bitrate and VMAF quality of ten files encoded with the H.264 codec using the CRF values shown with a 6 Mbps cap and using CBR encoding with a 6 Mbps cap. The table presents the easy-to-encode files on top, showing clip-specific results and the average value for the category. The Delta from CBR shows the bitrate and VMAF differential from the CBR score. Then the table does the same for hard-to-encode clips, showing clip-specific results and the average value for the category. The bottom two rows present the overall average bitrate and VMAF values and the overall savings and quality differential from CBR.

Table 1. CBR and capped CRF bitrates and VMAF scores for H.264 encoded clips.

As mentioned, with CRF, lower values produce higher quality. In the table, CRF 19 produces the highest quality (and lowest bitrate savings), and CRF 27 delivers the lowest quality (and highest bitrate savings). What’s the right CRF value? The one that delivers the target VMAF score for your typical clips for your target audience.

For the test clips shown, CRF 19 produces an average quality of well over 95; as mentioned above, VMAF scores beyond 95 aren’t perceivable by the average viewer, so the extra bandwidth needed to deliver these files is wasted. Premium services should choose CRF values between 21 and 23 to achieve top-rung quality of around 95 VMAF. These deliver more significant bandwidth savings than CRF 19 while preserving the desired quality level. In contrast, commodity services should experiment with higher values like 25-27 to deliver slightly lower VMAF scores while achieving more significant bandwidth savings.

What bitrate cap should you select? CRF sets quality, while the bitrate cap sets the budget. In most cases, you should consider using your existing cap. As we’ve seen, with easy-to-encode clips, capped CRF should deliver about the same quality of experience with the potential for bitrate savings. For hard-to-encode clips, capped CRF should deliver the same QoE with the potential for some bitrate savings on easy-to-encode sections of your broadcast.

Note that identifying the optimal CRF value will vary according to the complexity of your video files, as well as frame rate, resolution, and bitrate cap. If you plan to implement capped CRF with Quadra or any encoder, you should run similar tests on your standard test clips using your encoding parameters and draw your own conclusions.
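
A minimal sketch of such a test harness, assuming an FFmpeg build with both the Quadra plugin and libvmaf enabled (file names are hypothetical):

  SRC=test_clip.mp4
  for CRF in 19 21 23 25 27; do
    OUT="crf_${CRF}.mp4"
    ffmpeg -y -i "$SRC" -c:v h264_ni_quadra_enc \
      -xcoder-params "RcEnable=0:crf=${CRF}:vbvBufferSize=1000:bitrate=6000000" "$OUT"
    # score the encode against the source (distorted input first, reference second)
    ffmpeg -i "$OUT" -i "$SRC" -lavfi libvmaf -f null - 2>&1 | grep "VMAF score"
  done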

Now let’s examine capped CRF and HEVC.

Choosing the CRF Value and Bitrate Cap – HEVC

Table 2 shows the results of HEVC encodes using CBR at 4.5 Mbps and the specified CRF values with a cap of 4.5 Mbps. With these test clips and encoding parameters, Quadra’s CRF values produce nearly the same result, with CRF values of 21-23 appropriate for premium services and 25-27 good settings for UGC content.

Table 2. CBR and capped CRF bitrates and VMAF scores for HEVC encoded clips.

Again, the cap is yours to set; we arbitrarily reduced the H.264 bitrate cap of 6 Mbps by 25% to determine the 4.5 Mbps cap for HEVC.

Capped CRF Performance

Note that as currently tested, capped CRF comes with a modest performance hit, as shown in Table 3. Specifically, in CBR mode, Quadra output twenty 1080p30 H.264-encoded streams. This dropped to sixteen using capped CRF, a reduction of 20%.

For HEVC, throughput dropped from twenty-three to eighteen 1080p30 streams, a reduction of about 22%. We performed all tests using CRF 21, with a 6 Mbps cap for H.264 and 4.5 Mbps for HEVC. Note that these are early days in the CRF implementation, and it may be that this performance delta is reduced or even eliminated over time.

Table 3. 1080p30 outputs produced using the techniques shown.

We installed the Quadra in a workstation powered by a 3.6 GHz AMD Ryzen 5 5600X 6-Core Processor running Ubuntu 18.04.6 LTS with 16 GB of RAM. As you can see in the table, we also tested output for the x264 codec in FFmpeg using the medium and veryfast presets, producing two and five 1080p30 outputs, respectively. For x265, we tested using the medium and ultrafast presets and the workstation produced one and three 1080p30 streams.

Even at the reduced throughput, Quadra’s CRF output dwarfs the CPU-only output. When you consider that the NETINT Quadra Video Server packs ten Quadra VPUs into a single 1RU form factor, you get a sense of how VPUs offer unparalleled density and the industry’s lowest cost per stream and power consumption per stream.

Bandwidth is one of the most significant costs for all live-streaming productions. In many applications, capped CRF with the NETINT Quadra delivers a real opportunity to reduce bandwidth cost with no perceived impact on viewer quality of experience.

What Can a VPU Do for You?

For Cloud-Gaming, a VPU can deliver 200 simultaneous 720p30 game sessions from a single 2RU server.

When you encode using a Video Processing Unit (VPU) rather than the GPU’s built-in encoder, you can decrease your cost per concurrent user (CCU) by 90%, enabling profitability at a much lower subscription price. How is this technically feasible? Two technology enablers make this possible. First, extraordinarily capable encoding hardware, the VPU, dedicated to the task of high-quality video encoding and processing. And second, peer-to-peer direct memory access (DMA), which enables video frames to be delivered at the speed of memory rather than over the much slower NVMe bus between the GPU and VPU. Let’s discuss these in reverse order.

Peer-to-Peer Direct Memory Access (DMA)

Within a cloud gaming architecture, the primary role of the GPU is to render frames from the game engine output. These frames are then encoded into a standard codec that is easily decoded on a wide cross-section of devices. Generally, this is H.264 or HEVC, though AV1 is becoming of interest to those with a broader Android user base. Encoding on the GPU is efficient from a data transfer standpoint because rendering and encoding occur on the same silicon die; there’s no transfer of the rendered YUV frame to a separate transcoder over the slower PCIe or NVMe busses. However, since encoding requires substantial GPU resources, it dramatically reduces the overall throughput of the system. Interestingly, it’s the encoder that is often at full capacity, and thus the bottleneck, not the rendering engine. Modern GPUs are built for general-purpose graphical operations, so more silicon real estate is devoted to those than to video encoding.

By installing a dedicated video encoder in the system and using traditional data transfer techniques, the host CPU can easily manage the transfer of YUV frames from the GPU to the transcoder. But as the number of concurrent game sessions increases, the growing probability of dropped frames or corrupted data makes this technique unusable.

NETINT, working with AMD, enabled peer-to-peer direct memory access (DMA) to overcome this situation. DMA is a technology that lets devices within a system exchange data in memory, allowing the GPU to send frames directly to the VPU and preventing the bus from becoming clogged as the concurrent session count climbs above 48 720p streams.

The Benefits of Peer-to-Peer DMA

Peer-to-peer DMA delivers multiple benefits. First, by eliminating the need for CPU involvement in data transfers, peer-to-peer DMA significantly reduces latency, which translates to a more responsive and immersive gaming experience for end-users. NETINT VPUs feature latencies as low as 8ms in fully loaded and sustained operation.

In addition, peer-to-peer DMA relieves the CPU of the burden of managing inter-device data transfers. This frees up valuable CPU cycles, allowing the CPU to focus on other critical tasks, such as game logic and physics calculations, optimizing overall system performance and producing a smoother gaming experience.

By leveraging peer-to-peer communications, data can be transferred at greater speeds and efficiency than CPU-managed transfers. This improves productivity and scalability for cloud gaming production workflows.

These factors combine to produce higher throughput without the need for additional costly resources. This cost-effectiveness translates to improved return on investment (ROI) and a major competitive advantage.

Extraordinarily Capable VPUs

Peer-to-peer DMA has no value if the encoding hardware is not equally capable. With NETINT VPUs, that isn’t a concern.

The reference system that produces 200 720p30 cloud gaming sessions is built on the Supermicro AS-2015CS-TNR server platform with a single GPU and two Quadra T2A VPUs. This server supports AV1, HEVC, and H.264 video game streaming at up to 8K and 60 fps, though, as you might expect, simultaneous stream counts drop as you increase frame rate or resolution.

Quadra T2A is the most capable of the Quadra VPU line and the world’s first dedicated hardware to support AV1. With its embedded AI and 2D engines, the Quadra T2A supports AI-enhanced video encoding, region-of-interest encoding, and content-adaptive encoding. Coupled with a P2P DMA-enabled GPU, Quadra T2A allows cloud gaming providers to achieve unprecedented throughput with ultra-low latency.

Quadra T2A is an AIC (half-height, half-length) form-factor video processing unit with two Codensity G5 ASICs. It operates in x86 or Arm-based servers, requires just 40 watts at maximum load, and enables cloud gaming platforms to transition from software or GPU-only encoding with up to a 40x reduction in total cost of ownership.

What Can A VPU Do For You?

It makes Cloud Gaming profitable, finally.

Peer-to-peer DMA is a game-changing technology that reduces latency and increases system throughput. When paired with an extraordinarily capable VPU like the NETINT Quadra T2A, you can now deliver an immersive gaming experience at a CCU cost that cannot be matched by any competing architecture.

Unlocking the Potential of Cloud Gaming with VPUs

In this interview, Olivier Avaro, the CEO of Blacknut, discusses the emergence and potential of cloud gaming. Blacknut aims to bring the joy of gaming to the mass market by offering a large catalog of games through cloud-based distribution. Avaro highlights the maturity of both users and technology, making cloud gaming a feasible and attractive option. The interview explores the transition from physical discs to streaming, the importance of cost-effectiveness in delivery, and the architectural advancements in cloud gaming systems.

Avaro emphasizes the potential of hybrid cloud infrastructure and the role of GPU and VPU in maximizing the number of concurrent players and reducing costs. He acknowledges the challenge of making cloud gaming affordable for a wider range of consumers, including those in emerging markets. However, he emphasizes that the cost of delivering the service can be kept within a reasonable range, with subscription prices ranging from $5 to $15 per month, depending on the economic conditions of the region.

The technical infrastructure of cloud gaming is explored in detail. Avaro explains the basic architecture, where games are stored on cloud servers and streamed to users’ devices, eliminating the need for downloads. The key requirements for a seamless experience include sufficient bandwidth, low latency, and a well-equipped server infrastructure comprising CPUs, GPUs, and storage. Initially deployed on public cloud platforms for scalability, Blacknut has devised a hybrid cloud approach to optimize the economics of the service. This involves the incorporation of private cloud servers, allowing for improved performance and cost efficiency.

The interview addresses an innovative architectural aspect of Blacknut’s system. Avaro discusses the decision to offload video encoding from the GPU to a dedicated video processing unit (VPU) provided by NETINT.

This approach increases the density of concurrent game sessions, enabling up to 200 players on a single server. This breakthrough in density enhances the economic viability of cloud gaming platforms by significantly reducing costs.

These insights offer valuable perspectives on the advancements in cloud gaming, the importance of cost considerations, and the technological infrastructure that underpins its success.

Avaro also addresses challenges related to unstable internet connectivity in certain regions, discussing collaborations with Ericsson to leverage 5G networks and optimize network characteristics for gaming. While geographical limitations exist, Blacknut is actively expanding its presence to provide global access to its gaming service.

VOICES OF VIDEO
Cloud Gaming being Real. A conversation with the CEO of Blacknut
Watch the full conversation on YouTube: https://youtu.be/w9Pho6G_bdM
 

Mark Donnigan:
So we are at the top of the hour, and it looks like we should get started. Olivier, are you ready to talk about cloud gaming?

Olivier Avaro:
Absolutely ready.

Mark Donnigan:
Excellent, excellent. Well, welcome to those who are joining us live. This is the May edition of Voices of Video. And if you haven’t joined us before, Voices of Video is a conversation, or some might say a real dialogue. Not a podcast, I guess a videocast. We go live on LinkedIn and a lot of other platforms, and we talk each month with innovators in the video space. And so this month I am super excited to have Olivier Avaro, who is the CEO of a company called Blacknut. And we are talking about cloud gaming. I will let Olivier tell us all about what his company does. But welcome to Voices of Video, Olivier.

Olivier Avaro:
Look, thanks a lot, Mark, for the nice introduction. So my name is Olivier Avaro, I’m the CEO of Blacknut, which in short is doing for games what Spotify did for music, right? So we are distributing games from the cloud, a large catalog of more than 700 games so far, and this for a simple subscription fee, right? I was a gamer for a long time. I enjoyed it a lot when I was a teenager. I enjoyed it a lot with friends, with my family, later with my kids. And I started Blacknut in 2016 with the big ambition to actually bring this joy of gaming, this good emotion, all the positive value of playing together, to the mass market. We developed the tech for about three years. I think cloud gaming does require a bit of technology to work efficiently. Then we started deploying it all over the world, and this is where we are today.

Is the Blacknut CEO a gamer himself?

Mark Donnigan:
I love it. So I have to ask the question: sometimes when we’re building advanced technologies, we get so into the technology, we don’t get to do the thing that we originally set out to do, like play games. So are you still a gamer? Do you set aside time each day to play?

Olivier Avaro:
I set aside time each day to play a little bit. That’s true. And I have to say that I was a… the first game I played was on the Commodore 64 machine; it was named Boulder Dash, right? The older members of the audience will know about it. I’ve been playing with my kid, of course, on the Wii, all the Nintendo games: Mario and Super Mario Kart and Super Mario Galaxy, right? And to be truly honest, I’m still playing a bit with my kid, but mostly I’m touching a bit of Pokemon Go sometimes to still have a conversation with my wife about gaming.

Mark Donnigan:
That’s good. That’s good. Well, I am really excited for this conversation. And I was just thinking back as I was making some notes for what I thought we should talk about. In 2007 I had the distinct privilege, and I really do consider it to be a privilege, to be a part of one of the early, early innovators of streaming, what we now call OTT; at the time it was transactional VOD. The company still exists; it’s called Vudu. And we had this crazy idea to take on Blockbuster; those who have been around for a little while will remember Blockbuster video stores in the US. Other countries had the equivalent. And eventually I think Blockbuster did expand outside the US. But you’d go to the video store, you’d rent a disc, DVD, and then eventually Blu-ray, and you would drive home so excited for the family to join around the TV and watch it.

And I can remember how shocking it was to have built this amazing experience where every title was in stock. Those of us who remember the video store remember that that was part of the challenge: on new release day you had to rush down to the store to be the first in line so you could even get the movie, because they only had so many copies. And then of course you had to worry about whether you returned it by the deadline or had to pay for a second day. There was a lot about the experience that actually wasn’t so great. And yet we were shocked at how many people said, “Why would I want to stream over the internet? DVD is great. This is amazing. Look at the quality. No one’s going to want to replace the DVD.” Well, 15 years later, obviously that sounds absolutely crazy, as now the entire world is streaming and we can’t even imagine a world without it.

But as I was thinking about cloud gaming, it feels like maybe we’re a little bit further along than we were in 2007, but still not everybody’s convinced. I’m even surprised by major publishers that I’m coming across; it’s not a foregone conclusion to them that the console is going to be replaced with streaming. And so let’s start there. Olivier, I have to imagine that a lot of what you’re spending time doing, aside from building the technology, is making the case for why internet delivery of a game experience is going to be better, and ultimately is better, than something that’s installed on a PC, downloaded, or on a console. So what insights do you have to share about where we are in this transition from consoles and discs to streaming for games?

Olivier Avaro:
And Mark, I think the analogy with Blockbuster is very relevant. I feel that first, in terms of market maturity for the end user, we are probably at that point where people would question, “Why should I do that? I can download a game, why should I actually stream it? Why do something different?” Right? And when I created Blacknut, a person that I highly respect told me, “Wow, people will not use it because they can download it,” right? Now, if you look at where we are right now, with people consuming all their media, like audio and video and music and books, in a streaming manner, it seems that having people access games the same way is the right idea, or the right next step, right?

And I do think that there is a bit more maturity now, of people actually willing to access games this way. There has probably been an inflection point in terms of technology maturity. I think the technology, meaning basically the hardware you can have in the cloud, the bandwidth you have available at home, the kind of device you have to run it on, and so on, is good enough to provide a great experience. And I do think that we are at the point where we’re passing this inflection point, whereas probably years ago the technology was not sufficient. And we have seen a lot of companies trying to do this but failing, and failing really badly, though actually learning a lot from these failures.

So I think we’re at a very exciting time now, where we have this maturity in terms of technology. We have the maturity of the end user, because they are used to consuming this kind of media, with audio, video, eBooks, and so on. So probably they’re craving access to games, and more and more people are gaming. And we also have the maturity of the content owners and the publishers. So I think we’re at a very, very good time in the market.

Deliver at ultra low latency. Possible?

Mark Donnigan:
Well, I definitely agree that we are much further advanced than we were. I think of some of the things that we had to do. Vudu in 2007 actually required an appliance, a device with a hard drive in it, so that we could download the first 30 seconds, maybe a minute, of every single title in the library. At that time, the library was not as big as libraries are today. But that was because streaming bandwidth was 768 kilobits. Maybe 1.5 megabits was really fast. If you were really lucky you had 5 megabits. My, how we’ve grown. So we’re definitely in a better position.

Before we get into the technology, because that’s where we’re going to spend the bulk of our time today, something that I think you’re in a really good position to address is the cost side. Certainly, we’re at a place today with the cloud where you can deliver anything, really anywhere, via the cloud. So the notion that you can do cloud gaming, i.e., that it’s possible to deliver an ultra-low-latency, very high-quality experience from the cloud, I don’t think anybody conceivably would say, “Oh, I don’t believe that. That’s not possible.” But there is a real issue of cost. And so why don’t you address where we’re at in terms of just delivery cost, and I’m speaking of OpEx. Where are we at? I mean, is this possible but not affordable, or is this possible and affordable, even for someone who might not be able to charge their consumer a whole lot of money? Not all markets are the US or Western Europe, or some of these regions where consumers are willing to pay $10, $15, $20 a month.

Olivier Avaro:
No, that really is a key issue, Mark. Because, as you mentioned, I think we passed the technology inflection point where the service becomes feasible. Technically feasible; the experience is good. We think it’s good enough for the mass market. I am sure that some people will be unhappy with it. Really, core gamers will say, “Well…”

Mark Donnigan:
Sure.

Olivier Avaro:
Probably the same people that, when the DVD came, said, “Well, I still want to listen to my vinyl on my turntable, because this is what I’m used to listening to my music on. And you will not beat that quality with digital sound.” Right? But for the mass market, I think we got to the point where the feasibility is here. Of course we need good bandwidth, stable, with very low jitter, the variation of the latency. But we are here, right?

Now, the issue is indeed the unit economics, and how much it costs to stream and deliver games in an efficient manner, so that it is affordable for the mass market. And one thing here is, I think the job is not done, okay? There are some challenges. As you know, the cost of streaming depends on the number of hours per month, let’s say, that you stream. We think that we’ve reached at least enough maturity that it’s becoming viable, so that you get to a price point which is what people expect, which is between $5 to $15, depending on the economic conditions of the country. So we think this is realistic. But of course, it depends on the intensity of the players, how much they play. And if you want to really sustain this and have great economics, there is still some improvement to be done, okay? I would say we have the baseline architecture that allows the service to be profitable, to make it really work, really scale. There is still some margin for improvement, and we have ways to improve these unit economics.

Technical infrastructure

Mark Donnigan:
So you’re saying that, to the end user, about $5 a month to $15 a month is a target that is possible to reach? Which means that the actual cost to deliver the service has to be less than that.

So $5 a month, even in more emerging markets where maybe subscription prices cannot be what they are, say, in the US, feels like that’s doable. So that’s actually good to hear. Now let’s talk about what the technical infrastructure looks like and what it takes to deliver. How have you built your system? And then we will get to the broader architecture of Blacknut and what exactly you’re offering. But let’s start with: what is your system built on? What does it look like? What are you deploying? Is this a cloud service? Is it all run on-prem?

Olivier Avaro:
So basically, the architecture of cloud gaming is somehow simple. You take games, you put them on a server in the cloud, and you’re going to virtualize them and stream them in the form of a video stream, or in some other format, so that you don’t have to download the game on the client side, and you can play it as you would play a video stream. And when you interact with the game, you send a command back to the server and interact with the game this way. And so, of course, bandwidth needs to be sufficient, let’s say 6 megabits per second. Latency needs to be good, let’s say less than 80 milliseconds. And of course you need the right infrastructure on the server side that can run games. Now, games mean a mixture of CPU, GPU, and storage, and all this needs to work well.

We started deploying the service on the public cloud, because this allowed us to test the different metrics, how people were playing the service, how many hours. And it was actually very fast to launch and to scale. So this is what the public clouds, the hyperscalers, GCP and so on, provide. That’s great, but they are quite expensive, as you know. So to optimize the economics, we actually built and invented at Blacknut what we call the hybrid cloud for cloud gaming, which is a combination of both public cloud and private cloud. So we had to install our own servers, based on GPUs, CPUs, and so on, either directly at Blacknut or with partners like Radian Arc, so that we can improve the overall performance and the unit economics of the system. That, I think, allowed us to build a profitable service. I think if you just stay on the public cloud currently, it’s super hard to get something which is viable. But with this kind of hybrid cloud, I think it’s actually very doable.

Mark Donnigan:
And these are standard x86, commercial off-the-shelf Intel or AMD machines? I mean, there’s nothing special required? Or have you gone to a purpose-built design?

Olivier Avaro:
No, the current design is definitely specific for the private cloud, but it’s based on standard x86. And for GPUs we use AMD or NVIDIA, okay? We have a mixture of different providers, but basically this is, I would say, a reasonably standard architecture, with a mix of CPU, GPU, and storage.

Cloud gaming use case

Mark Donnigan:
The cloud gaming use case is a primary one, and that’s obviously why we got introduced. And you are using NETINT, which we will get to. But the key measure from a technology perspective, and it maps directly back to cost, for a cloud gaming installation is the number of concurrent sessions per server. It just stands to reason that the more concurrent sessions or players you can get on a server, the less expensive it’s going to be to operate and run. So that’s not too difficult to understand.

One of the things that’s really interesting, and I’d like for you to talk about this, is the architecture where you have the GPU rendering the game, but you’re actually not doing the video encoding on the GPU. So what does that look like? And also, talk to us about the evolution, because that’s not where you started. Most cloud gaming platforms today are attempting to keep everything on the GPU, which has some advantages, but it has some very distinct disadvantages and trade-offs. And the disadvantage is you just can’t get the density, which means that your cost per stream likely cannot meet that economic bar where you can affordably deliver to a wider number of players. That is, you can’t drive your cost down, so you have to charge more, and there are people who will say, “Well, that’s too expensive.” But talk to us about this architecture.

Olivier Avaro:
So that’s correct, Mark. I think the ultimate measure is the cost per CCU, right? The cost per concurrent user that you can get on a specific bill of materials. If you have a CPU-plus-GPU architecture, you’re going to slice the GPU into different pieces, in a dynamic and appropriate manner, so that you can run different games, and as many games as possible, right? So typically, on a standard GPU, you can probably run a big game, like a large game, if you cut the GPU into four pieces. If you run a medium game, you can cut it maybe into 6 or 8 pieces. And if you run a smaller game, then maybe you can get to, I don’t know, 20 pieces, right?

There are some limits on how much you can slice the GPU for it to still be efficient. For example, the NVIDIA licensing limits you to slicing one GPU into 24 pieces, but that’s it, right? And so there are some limits in this architecture, because it all relies on the GPU. We are indeed investigating different architectures where we use a VPU, like the video processor NETINT provides, that offloads from the GPU the task of encoding and streaming the video, so that we can increase the density. And we see it, in terms of full architecture, as something which will be a bit more flexible. In terms of the number of big games, because they rely much more on the GPU, probably you will not increase the density that much. But we think that overall we can probably gain a factor of 10 on the number of games that you can run on this kind of architecture. So passing from a max of 20-24 games to ten times that, right? Running 200 games on an architecture of this kind.

Mark Donnigan:
Yeah, that’s really remarkable. And just in case somebody isn’t doing the quick math here, what you’re saying is that with this CPU plus GPU plus VPU, where the VPU is the ASIC-based video encoder, all in the same chassis, the same server, we’re not talking about different servers, you can get up to 200 game players simultaneously, 200 concurrent players. Which just radically changes the economics. And in our experience, working with publishers and working with cloud gaming platforms, nearly everybody has said that literally without that, it’s not even really economical to build the platform. In other words, you end up having to charge your customer so much that it’s not viable.

Olivier Avaro:
That’s correct.

Mark Donnigan:
Yeah, that’s important.

Olivier Avaro:
And for certain categories of games, you can definitely reach this level. So increasing the density by a factor of 10 also means, of course, decreasing the cost per CCU by a factor of 10. So if you pay $1 currently, you will pay 10 cents, and that makes a whole difference. Because let’s assume basic gamers play 10 hours per month or 30 hours per month; if this is $1 per hour, this is up to $30, right? If this is 10 cents, then you go to $1 to $3, which I think makes the math work on a subscription which is between 5 and 15 euros per month.

Is the hardware super expensive?

Mark Donnigan:
One of the questions that comes up, and I know we’ve had this conversation with you, is how is this possible? Because anybody who understands basic server architecture might think, well, wait a second, isn’t there a bottleneck inside the machine? And this must require a really hot-rodded machine, so maybe the cost savings are offset by super expensive hardware. And I think it’s important to note that the reason this is possible is, first of all, that the VPU is built on NVMe architecture. So it’s using the exact same storage protocol as your hard drive, as the SSDs in the machine. And what NETINT has done is create peer-to-peer sharing using DMA. So basically the GPU will output a rendered frame, and it’s transferred literally inside memory, so that the VPU can pick it up and encode it. There’s effectively zero latency; at least, the latency is so low because it’s happening in the memory buffer.

So if anybody’s listening and raising an eyebrow, wondering, “Well, wait a second, surely there’s a bottleneck,” especially if you’re talking 60 frames per second: by the way, our benchmarks are generally always run at 60 frames per second, because unless it’s a really casual game, you need that frame rate to deliver a great experience. In some cases it’s even better to raise the frame rate than to increase the size of the frame.

Oliver Avaro:
Absolutely. Absolutely.

Mark Donnigan:
Yeah. Let me just pause here and say that we would love to have questions. So feel free, on whatever platform, whether you’re watching us on YouTube or LinkedIn or wherever, to just type them in, and I will try to pick those up. It looks like we already have one, and I think it’s actually a really good one, so I’m going to take it right here. But feel free to enter questions in the chat. So Oliver, the question is, “I live in a country where stable internet is not always available.” And by the way, I would say this isn’t only a country issue; internet varies, right? More and more, users expect not to have to think about the fact that I’m in a car, I happen to be in an area with great coverage, but seven miles down the road that changes. They want to keep playing and keep enjoying this great experience.

So the question is, “I live in a country where stable internet is not always available. How will this affect the gaming experience?” And yeah, I mean, that’s the question. So what’s your experience and how are you guys solving for this?

Oliver Avaro:
You see, with Netflix or Spotify, you can actually buffer content so that even if your bandwidth is a bit shaky, the experience stays good enough, right? Or you can download the video and make it work. So you definitely have ways to solve that problem with what I would call cold media: media that you can encode once and then stream later. With games, this is completely different.

Mark Donnigan:
Yeah, you can’t do that.

Oliver Avaro:
Because we have to encode, stream, deliver, and then take the interaction back right away. So if your bandwidth is not enough, if the quality of the bandwidth is not enough, and not only in terms of the size of the bandwidth but also in terms of its characteristics, the latency, how stable that latency is, and so on, then the experience will not be great, right?

So what we’ve been doing with Ericsson, okay, is using 5G networks and defining the specific characteristics of a slice in the 5G network. We can tune the 5G network to make it fit for gaming and to optimize the delivery of gaming over 5G. And we think 5G is going to arrive much faster in those regions where the internet is not so great. We’ve been deploying the Blacknut service in Thailand, in Singapore, in Malaysia, and now in the Philippines, and this has allowed us to reach people in regions where there is no cable or fiber bandwidth. So look, I’m not going to solve the problem where bandwidth is simply not available, but maybe bandwidth will come faster with 5G, and that could be the solution.

Mark Donnigan:
Yeah, I want to make a comment there, and thank you for the answer. It’s very interesting, and I’ll use India as an example. For years in video streaming, the Indian market was used as an example of where it was very difficult to deliver high quality, especially if you wanted to deliver, say, 720p; and at a certain period of time, 1080p was assumed to be not even possible, because network capacity and speeds were just so low.

What has happened, and India’s a great case study here, but it’s true in almost all regions of the world, is that as these wireless infrastructures have been upgraded, they leapfrogged from 3G, or in some cases even 2.5G and earlier, all the way to 5G. In the last five years there has been such a fundamental shift in bandwidth availability that in some of these regions it’s no longer true that they’re slow; they’re faster than some of the more developed countries. So I did want to make that point. One question, Oliver: can you talk about whether this is WebRTC? What protocols are you using? There’s a lot of talk right now about QUIC, and I think that would be interesting for listeners who might be wondering what protocols you use.

Oliver Avaro:
So to start with the bottom line, we use standard codecs. We have not invented our own codecs; we have been in the audio and video standardization industry for quite some years, and I think you have great experts there doing great technology. And this technology is embedded into the chipsets, into the hardware, so you can rely on hardware encoding and decoding capabilities. So we do think standard codecs are basically a must-have, right? Of course, you need to configure them the right way because you have to encode in real time, so you cannot use techniques that wait for a couple of frames or more; you have to optimize for this. But basically, we use standard codecs.

Then, on the protocols on top of this, we have a large variety. It depends on the device you are streaming to. It can go from fully proprietary protocols that we have invented and patented at Blacknut, to standard WebRTC. Okay? If you look at devices like Samsung and LG, which are basically the top TV manufacturers, the service has already launched on LG, and we are going to announce our launch with Samsung very shortly. These devices support WebRTC, and that is basically the only way to implement and support a cloud gaming solution efficiently there. So, short answer: we use a wide range of protocols, always the one that is most appropriate and provides the best experience for the end user. We are of course looking at new protocols and new standards and experimenting with them, but for the mainstream solution today, we use our own protocol plus WebRTC.

The end-to-end latency targets

Mark Donnigan:
The end-to-end latency targets: I think previously you made a comment about 80 milliseconds, but give us some guidelines. Obviously the answer is “as low as possible,” but what’s the upper limit where the game experience just falls apart, where it’s just not playable?

Oliver Avaro:
You know, the limit for conversational video is about 150 milliseconds. For playing games, it is much lower, probably half of that. So I think you can get a reasonably good experience at 80 milliseconds for most games that do not require very fast reactions. But if you want to go to FPS games or that kind of thing, which really need to be reactive at nearly frame accuracy, which is of course very difficult in cloud gaming, then you need to go down to 30 milliseconds or lower, right? And I think that’s only feasible if you have a network that allows for it. Because it’s not only about the encoding part, the server side, and the client side; it’s also about where the packets travel through the networks. Okay?

Because you can have the most efficient system in terms of encoding and decoding latency, but if your packets, instead of going directly from the server to the end user, go here and there and transit through many places, then your experience will be crappy. And Mark, this is actually a real issue, because, for example, we did a great demonstration with Ericsson in Barcelona at the Mobile World Congress. We had servers in Madrid, but when we ran the first test, we discovered that the packets were going from Madrid to Paris and then back to Barcelona, right? So this needs a bit of intelligence and technology to make the connection as efficient as possible.

Mark Donnigan:
Tell us about Blacknut. What exactly do you guys deliver?

Oliver Avaro:
We provide a cloud gaming service, which you could categorize as games as a service. Okay? This means that for a monthly subscription fee, you get access to the real stuff: you get access to 700 games, and we are adding 10 to 15 new games per month, which I think is the fastest pace of catalog growth on the market. And we provide this experience on every single device that can actually receive video. So that’s what we do. And we distribute this service either B2C, direct to the consumer, so if you go to the Blacknut webpage, you can subscribe and access the games; or through telecommunications carriers and operators all over the world. We currently have about 20 signed carrier agreements actually live, more than 40 signed in total, and we are signing and launching one to two new carriers per month. So that’s the pace we are at with Blacknut. And the choice to work with carriers here comes back to the reason I explained to you: it’s good to have...

Mark Donnigan:
Optimization of the network.

Oliver Avaro:
You need to know where the packets are going. You need to make sure there is some form of CDN for cloud gaming in place that makes the experience optimal.

Mark Donnigan:
Yeah, it completely makes sense to me, especially because you mentioned the 5G optimization. Carriers have been investing for years in building out their 5G networks, but they’re always looking for ways to drive more value and to extract the full potential out of that 5G investment. So yeah, it really makes sense.

Oliver Avaro:
That’s the kind of thing we’re doing as well with our partner Radian Arc: we are putting servers at the edge of the network, inside the carrier’s infrastructure, so that the latency is really optimized. That’s one thing that is key for the service.

The architecture

Mark Donnigan:
What is the architecture of that edge server? What’s in it: what CPU, GPU, and VPU? Describe that.

Oliver Avaro:
We started with a standard architecture, with CPU and GPU. Now, with the current VPU architecture, we are building whole servers consisting of AMD GPUs and NETINT VPUs. We build the whole package so that we can put it in the carrier’s infrastructure and deploy Blacknut cloud gaming on top of it.

Mark Donnigan:
And are you delivering only a handful of fixed resolutions? If I’m on a TV, for example, do I get 4K, or do you limit it to 1080p? How do you handle that?

Oliver Avaro:
Again, great question. Okay? We can actually handle multiple resolutions; I think we can stream from 720p up to 4K. The technology basically has no limit there, right? And streaming 4K, or even 8K, is a problem that has somehow already been solved from a technical standpoint. The question is, again, the cost and the experience. Okay? Streaming 4K on a mobile device does not really make sense; the screen is smaller, so you can stream a lower resolution and that’s sufficient. On a TV, you likely need a bigger resolution. Even though there is great upscaling available on most TV sets, and we stream 720p on Samsung devices and it looks super great, scaling up to 1080p will of course provide a much better experience. So on TVs, for the games that require it, we are indeed streaming the service at about 1080p.

Mark Donnigan:
Do you also find that frame rate is almost more important than resolution?

Oliver Avaro:
For certain games, absolutely. But again, it is game dependent. Of course-

Mark Donnigan:
It’s game dependent, yeah.

Oliver Avaro:
If you are playing an FPS and you have to choose because you cannot stream 1080p at 60 FPS, you would probably stream 720p at 60 FPS rather than 1080p at 30 FPS, right?

Mark Donnigan:
Yes.

Oliver Avaro:
If you have to make some trade-off. But for other games, where the textures and the resolution matter more, then maybe you will select 1080p at 30 FPS instead. And what we’ve built is fully adaptable. Ultimately, you should not forget that there is a network in between, and even if you can technically stream 4K or 8K, the network may not sustain it. Okay? And then you’ll actually have a worse experience streaming 4K than streaming 1080p at 60 FPS.

Gaming anywhere you live?

Mark Donnigan:
Okay. I see a question just came in, and it is: how do we know where the service is available, or is it available anywhere you live? I think you can answer that question, but why don’t you also explain whether there are geographical limitations. Is your content available anywhere? And as an extension, I don’t think you actually talked about how many publishers you have. You did say that every month you’re onboarding, I think, 10 or 12 new games. But yeah, are there geographical restrictions, and how can someone access this?

Oliver Avaro:
Great. Let’s start with content. Okay? Indeed, we have more than 700 games right now, adding 10 to 15 new games per month. And we try not to have geographical limitations on the content, so the content we have in the catalog is, from a licensing point of view, available worldwide. We do have exceptions, as usual, but a large part of the catalog is available worldwide. Now, deploying this catalog across different regions, we are available in more than 45 countries. We definitely need to have servers close enough to the end user so that the streaming experience is good enough, and we think a radius of between 750 and 1,500 kilometers is probably the maximum. So we put points of presence in those geographical areas so that the latency, which is ultimately limited by the speed of light, does not harm the service.

So of course, if you look at it, we have Europe very well covered. We have the US and Canada very well covered. We have a large portion of Southeast Asia, Korea, and Japan very well covered. We are now expanding in Latin America, which is a bit harder. We have a strong presence in the Middle East as well, with partners like STC in the region. And of course, we have some zones that are less covered. Africa is not well covered at all; South Africa is, but the rest of Africa is a bit harder to reach.

Mark Donnigan:
By the way, what is the website? Why don’t you give out the URL there?

Oliver Avaro:
www.blacknut.com
Try the service; we’ll be very happy to support you, and I’m very interested in your feedback as well.

Mark Donnigan:
It’s super exciting. And as I said in the beginning, for me personally, having been there in the very early stages of the transition from physical entertainment delivery to streaming, I’m talking about movies specifically, like DVDs, I’m just super excited to now, 15 years later, be there with games. There’s a lot of work to be done, and as you pointed out, the experience doesn’t map exactly yet; we can’t throw out the console yet. But the opportunity to bring the gaming experience to a much wider audience is really enabled by streaming. So, by the way, I think there’s a follow-on question here: do you have infrastructure in South Africa? You mentioned Africa’s not covered as well, but...

Oliver Avaro:
Yes, we do have the capacity to deploy the service in South Africa, absolutely.

Mark Donnigan:
To deploy in South Africa. Okay, great. Great. Well, we’re right up against time, so thank you to everyone who joined us live; we really appreciate it. And thank you, Oliver. It’s amazing what you’ve built, and we’re super excited to be working with Blacknut.

Oliver Avaro:
Thank you everyone. Thanks, Mark.

NETINT Video Transcoding Server – ASIC technology at its best


Many high-volume streaming platforms and services still deploy software-only transcoding, but high energy prices for private data centers and escalating public cloud costs make the OPEX and carbon footprint unsustainable and the scalability dismal. Engineers looking for solutions to this challenge are actively exploring hardware that can integrate with their existing workflows and deliver the quality and flexibility of software with the performance and operational cost efficiency of purpose-built hardware.

If this sounds like you, the USD $8,900 NETINT Video Transcoding Server could be the ideal solution. The server combines the Supermicro 1114S-WN10RT 1RU server, powered by an AMD EPYC 7543P CPU, with ten NETINT T408 video transcoders that draw just 7 watts each. Encoding HEVC and H.264 at normal or low latency, you can control transcoding operations via FFmpeg, GStreamer, or a low-level API. This makes the server a drop-in replacement for a traditional x264- or x265-based FFmpeg or GPU-powered encoding stack.
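To make the drop-in claim concrete, here is a minimal sketch of a single T408 transcode driven from the FFmpeg command line. The codec names below are assumptions based on NETINT’s “Logan” naming for this hardware generation, not documented identifiers; list what your patched build actually exposes before relying on them.

    # Hypothetical single-stream transcode: H.264 in, HEVC out, both on
    # the T408. Verify the real codec names on your build first:
    #   ffmpeg -hide_banner -decoders | grep ni
    #   ffmpeg -hide_banner -encoders | grep ni
    ffmpeg -y -hide_banner \
      -c:v h264_ni_logan_dec -i input_1080p.mp4 \
      -c:v h265_ni_logan_enc -b:v 5M \
      output_1080p_hevc.mp4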


Due to the performance advantage of ASICs over software running on x86 CPUs, the server can perform the equivalent work of roughly ten separate machines running a typical open-source FFmpeg configuration with x264 or x265. Specifically, the server can simultaneously transcode twenty 4Kp30 streams or up to eighty 1080p30 live streams. In ABR mode, the server transcodes up to 30 five-rung H.264 encoding ladders from 1080p to 360p resolution and up to 28 four-rung HEVC encoding ladders. For engineers delivering UHD, the server can output seven six-rung HEVC encoding ladders from 4K to 360p resolution, all while drawing less than 325 watts of total power.

This review begins with a technical description of the server and transcoding hardware and the options available for driving the encoders, including the resource manager that distributes jobs among the ten transcoders. We’ll then review performance results for one-to-one streaming and for H.264 and HEVC ladder generation, and finish with a look at the server’s ultra-efficient power consumption.


Hardware Specs

Built on the Supermicro 1114S-WN10RT 1RU server platform, the NETINT Video Transcoding Server features ten NETINT Codensity ASIC-powered T408 video transcoders and runs Ubuntu 20.04.05 LTS. The server ships with 128 GB of DDR4-3200 RAM, a 400GB M.2 SSD, three PCIe slots, and ten NVMe slots that house the ten U.2 T408 video transcoders.

You can buy the server with any of three AMD EPYC processors with 8 to 64 cores. We performed the tests for this review on the 32-core AMD EPYC 7543P CPU, which supports 64 threads via simultaneous multithreading. The server configured with the 64-core/128-thread AMD EPYC 7713P processor sells for USD $11,500, while the economical server based on the 8-core/16-thread AMD EPYC 7232P lists for USD $7,000.

Regarding the server hardware, Supermicro is a leading server and storage vendor that designs, develops, and manufactures primarily in the United States. Supermicro adheres to high-quality standards, with a quality management system certified to the ISO 9001:2015 and ISO 13485:2016 standards and an environmental management system certified to the ISO 14001:2015 standard. Supermicro is also a leader in green computing and reducing data center footprints (see the white paper Green Computing: Top Ten Best Practices for a Green Data Center). As you’ll see below, this focus has resulted in an extremely power-efficient machine when operated with NETINT video transcoders.

Let’s explore the system - NETINT Video Transcoding Server

With this as background, let’s explore the system. Once up and running in Ubuntu, you can check T408 status via the ni_rsrc_mon_logan command, which reveals the number of T408s installed and their status. Looking at Figure 1, the top table shows the decoder performance of the installed T408s, while the bottom table shows the encoding performance.

Figure 1. Tracking the operation of the T408s, decode on top, encode on the bottom.
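For reference, here’s how you might run the status check from the shell; the utility name comes from the text above, while the watch wrapper is just a standard Linux convenience for continuous polling.

    # One-shot status check of the installed T408s; prints the decoder
    # and encoder tables shown in Figure 1.
    ni_rsrc_mon_logan

    # Refresh the same view every second while transcode jobs run.
    watch -n 1 ni_rsrc_mon_logan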

About the T408

T408s have been in service since 2019 and are being used extensively in hyper-scale platforms and cloud gaming applications. To date, more than 200 billion viewer minutes of live video have been encoded using the T408. This makes it one of the bestselling ASIC-based encoders on the market.

The NETINT T408 is powered by the Codensity G4 ASIC and is available in both PCIe and U.2 form factors. The T408s installed in the server use the U.2 form factor, plugged into the ten NVMe bays. The T408 supports closed caption passthrough and EIA CEA-708 encode/decode, along with High Dynamic Range support in the HDR10 and HDR10+ formats.

“To date, more than 200 billion viewer minutes of live video have been encoded using the T408. This makes it one of the bestselling ASIC-based encoders on the market.” 

ALEX LIU, Co-Founder,
COO at NETINT Technologies Inc.

The T408 decodes and encodes H.264 and HEVC on board but performs all scaling and overlay operations via the host CPU. For one-to-one, same-resolution transcoding, users can select an option called YUV Bypass that sends video decoded by the T408 directly to the T408 encoder. This eliminates high-bandwidth trips across the bus to and from system memory, reducing the load on the bus and CPU. As you’ll see, in pure 1:1 transcode applications without overlay, CPU utilization is very low, so the T408 and server are very efficient for cloud gaming and other same-resolution, low-latency interactive applications.

Figure 2. The T408 is powered by the Codensity G4 ASIC.

Testing Overview

We tested the server with FFmpeg and GStreamer. As you’ll see, in most operations, performance was similar. In some simple transcoding applications, FFmpeg pulled ahead, while in more complex encoding ladder productions, particularly 4K encoding, GStreamer proved more performant, particularly for low-latency output.

Figure 3. The software architecture for controlling the server.  

Operationally, both GStreamer and FFmpeg communicate with the libavcodec layer that functions between the T408 NVME interface and the FFmpeg software layer. This allows existing FFmpeg and GStreamer-based transcoding applications to control server operation with minimal changes.

To allocate jobs to the ten T408s, the T408 device driver software includes a resource management module that tracks T408 capacity and usage load to present inventory and status on available resources and enable resource distribution. There are several modes of operation, including auto, which automatically distributes the work among the available resources.

Alternatively, you can manually assign decoding and encoding tasks to different T408 devices in the command line or application and control which streams are decoded by the host CPU or a T408. With these and similar controls, you can efficiently balance the overall transcoding load between the T408s and host CPU to maximize throughput. We used auto distribution for all tests.
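As an illustration of auto mode, the sketch below launches eight concurrent transcodes from a shell and lets the resource manager spread them across the installed T408s; the encoder name carries over the same assumption flagged in the earlier FFmpeg example.

    # Launch eight jobs in parallel; in auto mode the resource manager
    # assigns each decode/encode task to the least-loaded T408.
    for i in $(seq 1 8); do
      ffmpeg -y -hide_banner -i "input_${i}.mp4" \
        -c:v h265_ni_logan_enc -b:v 5M "out_${i}.mp4" &
    done
    wait    # block until every background job completes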

Testing Procedures

We tested using Server version 1.0, running FFmpeg v4.3.1, GStreamer v1.18, and T408 release 3.2.0. We tested with two use cases in mind. The first is single stream in, single stream out, either at the same resolution as the incoming stream or at a lower resolution. This mode of operation is used in many interactive applications like cloud gaming, real-time gaming, and auctions, where the absolute lowest latency is required. We also tested scaling performance, since many interactive applications scale the input to a lower resolution.

The second use case is ABR, where a single input stream is transcoded to a full encoding ladder. In both modes, we tested normal and low-latency performance. To simulate live streaming and minimize file I/O as a drag on system performance, we retrieved the source file from a RAM drive on the server and delivered the encoded file to RAM.
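For readers who want to replicate the setup, a RAM drive is a one-line tmpfs mount on Linux; the mount point and size below are arbitrary choices for illustration, not the values we used.

    # Stage the source on a RAM-backed directory and write output there
    # too, so disk I/O never becomes the bottleneck.
    sudo mkdir -p /mnt/ramdisk
    sudo mount -t tmpfs -o size=16G tmpfs /mnt/ramdisk
    cp source_1080p30.mp4 /mnt/ramdisk/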

HARD QUESTIONS ON HOT TOPICS
All you need to know about NETINT Transcoding Server powered by ASICs
Watch the full conversation on YouTube: https://youtu.be/6j-dbPbmejw

One-to-One Performance

Table 1 shows transcoding results for 4K, 1080p, and 720p in latency-tolerant and low-delay modes. Instances is the number of full-frame-rate outputs produced by the system, with CPU utilization shown for reference. These results are most relevant for cloud gaming and similar applications that input a single stream, transcode it at full resolution, and distribute it.

As you can see, 4K results peak at 20 streams for all codecs, though results differ by the software program used to generate the streams. The number of 1080p outputs ranges from 70 to 80, while 720p streams range from 140 to 170. As you would expect, CPU utilization is extremely low in all test cases because the T408s shoulder the complete decoding/encoding load. This means that performance is limited by T408 throughput, not the CPU, and that the 64-core CPU probably wouldn’t produce any extra streams in this use case. For pure encoding operations, the 8-core server would likely suffice, though given the minimal price differential between the 8-core and 32-core systems, opting for the higher-end model is a prudent investment.

Latency

As for latency, in normal mode, latency averaged around 45 ms for 4K transcoding and 34 ms for 1080p and 720p transcoding. In low-delay mode, this dropped to around 24 ms for 4K, 7 ms for 1080p, and 3 ms for 720p, all at 30 fps and measured with FFmpeg. For reference, at 30 fps, each frame is displayed for 33.33 ms. Even in latency-tolerant mode, latency is only about 1.35 frames for 4K and under a single frame for 1080p and 720p. In low-delay mode, all resolutions are under a single frame of latency.

It’s worth noting that while software performance drops significantly moving from H.264 to HEVC, hardware performance does not. Thus, questions of codec performance for more advanced standards like HEVC do not apply when using ASICs. This is good news for engineers adopting HEVC now and for those considering it in the future: you can buy the server comfortable in the knowledge that it will perform equally well, if not better, for HEVC encoding or transcoding.

Table 1. Full-resolution transcodes with FFmpeg and GStreamer
in regular and low delay modes.

Table 2 shows performance when scaling from 4K to 1080p and from 1080p to 720p, again broken out by input and output codec. Since scaling is performed by the host CPU, CPU usage increases significantly, particularly for the higher-volume 1080p to 720p output. Still, given that CPU utilization never exceeds 35%, the gating factor for system performance again appears to be T408 throughput. While the 8-core system might be able to produce similar output, if your application involves scaling, the 32-core system is probably the better choice.

In these tests, latency was slightly higher than in pure transcoding. In normal mode, 4K > 1080p latencies topped out at 46 ms and dropped to 39 ms for 1080p > 720p scaling, just over a single frame of latency. In low-latency mode, these results dropped to 10 ms for both 4K > 1080p and 1080p > 720p. As before, these latency results are for 30 fps and were measured with FFmpeg.

Table 2: Performance while scaling from 4K to 1080p and 1080p to 720p.

The final set of tests involves transcoding to the AVC and HEVC encoding ladders shown in Table 3. These results will be most relevant to engineers distributing full encoding ladders in HLS, DASH, or CMAF containers.

Here we see the most interesting discrepancies between FFmpeg and GStreamer, particularly in low-delay mode and in the 4K results. In the 1080p AVC tests, FFmpeg produced 30 five-rung encoding ladders in normal mode but dropped to nine in low-delay mode. GStreamer produced 30 encoding ladders in both modes using substantially lower CPU resources. You see the same pattern in the 1080p four-rung HEVC output, where GStreamer produced more ladders than FFmpeg in both modes, again with lower CPU utilization.

Table 3. Full encoding ladders output in the listed modes.

FFmpeg produced very poor results in 4K testing, particularly in low-latency mode, and it was these results that drove the testing with GStreamer. As you can see, GStreamer produced more streams in both modes while CPU utilization again remained very low. As with the previous results, the low CPU utilization means these numbers reflect the encoding limits of the T408. For this reason, it’s unlikely that the higher-end server would produce more encoding ladders.

In terms of latency, in normal mode, latency was 59 ms for the H.264 ladder, 72 ms for the four-rung 1080p HEVC ladder, and 52 ms for the 4K HEVC ladder. These numbers dropped to 5 ms, 7 ms, and 9 ms, respectively, in low-latency mode.

Power Consumption

Power consumption is an obvious concern for all video engineers and operations teams. To assess system power consumption, we tested using the IPMI tool. When running completely idle, the system consumed 154 watts; at maximum CPU utilization, the unit averaged 400 watts with a peak of 425 watts.
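For those who want to reproduce the measurement, system draw can be sampled through the BMC with the stock ipmitool utility; both commands below are standard IPMI/DCMI calls, though sensor naming varies by platform.

    # Current, minimum, maximum, and average power as reported by the BMC.
    sudo ipmitool dcmi power reading

    # Alternatively, list the power-supply sensor readings.
    sudo ipmitool sdr type "Power Supply"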

We measured consumption during the three basic operations tested (pure transcoding, transcoding with scaling, and ladder creation), in each case testing the GStreamer scenario that produced the highest recorded CPU usage. You see the results in Table 4.

When you consider that CPU-only transcoding would yield a fraction of the outputs shown while consuming 25-30% more power, you can see that the T408 is exceptionally power-efficient. The Watts/Output figure provides a useful point of comparison against competitive systems, whether CPU- or GPU-based.

Table 4. Power consumption during the specified operation.

Conclusion

With impressive density, low power consumption, and multiple integration options, the NETINT Video Transcoding Server is the new standard to beat for live streaming applications. With a lower price model available for pure encoding operations, and a more powerful model for CPU-intensive operations, the NETINT server family meets a broad range of requirements.

ASICs – The Time is Now

A brief review of the history of encoding ASICs reveals why they have become the technology of choice for high-volume video streaming services and cloud-gaming platforms.

As in all markets, there will be new entrants that make loud announcements for maximum PR effect, promising delivery at some point in the future. But to date, outside of Google’s internal YouTube ASIC project, Argos, and the recent Meta (Facebook) ASIC, also for internal use only, NETINT is the only commercial company building ASIC-based transcoders for immediate delivery.

“ASICs are the future of high-volume video transcoding as NETINT, Google, and Meta have proven. NETINT is the only vendor that offers its product for sale and immediate delivery making the T408 and Quadra safe bets.”

Delaying a critical technology decision always carries risk. The risk is that you miss an opportunity or that your competitors move ahead of you. However, waiting to consider an announced and not yet shipping product means that you ALSO assume the manufacturing, technology, and supply chain risk of THAT product.

What if you delay, only to find out that the announced delivery date was optimistic at best? Or what if the vendor actually delivers, only for you to find out that their performance claims were not real? There are so many “what ifs” when you wait that delaying is rarely the right decision when a viable product is already available.

Now let’s review the rebirth of ASICs for video encoding and see how they’ve become the technology of choice for high-volume transcoding operations.  

The Rebirth of ASICs for Video Encoding

An ASIC is an application-specific integrated circuit designed to perform a small number of tasks with high efficiency; ASICs are purpose-built for a specific function. The history of video encoding ASICs can be traced back to the initial applications of digital video and the adoption of the MPEG-2 standard for satellite and cable transmission.

Most production MPEG-2 encoders were ASIC-based.

As is the case for most new codec standards, the first implementations of MPEG-2 compression were CPU-based. But given the cost and limited throughput of commodity servers and software, dedicated hardware was necessary to handle the processing requirements of high-quality video encoding cost-effectively.

This led to the development and application of video encoding ASICs, which are specialized integrated circuits designed to perform the processing tasks required for video encoding. Encoding ASICs provide the necessary processing power to handle the demands of high-quality video encoding while being more cost-effective than CPU-based solutions.

With the advent of the internet, the demand for digital video continued to increase. The rise of on-demand and streaming video services, such as YouTube and Netflix, led to a shift toward CPU-based encoding solutions. This was due in part to the fact that streaming video required a more flexible approach to encoding, including implementation agility in the cloud and the ability to adjust encoding parameters based on available bandwidth and device capabilities.

As the demand for live streaming services increased, the limitations of CPU-based encoding solutions became apparent. Live streaming services, such as cloud gaming and real-time interactive video like conferencing, require processing millions of live interactive streams simultaneously, at scale. This has led to a resurgence in the use of encoding ASICs for live-streaming applications. The rebirth of ASICs is upon us, and it’s a technology trend that should not be ignored, even if you are working in a more traditional entertainment streaming environment.

NETINT: Leading the Resurgence

NETINT has been at the forefront of the ASIC resurgence. In 2019, the company introduced its Codensity-powered T408 transcoder, designed to handle eight simultaneous HEVC or H.264 1080p video streams, making it ideal for live-streaming applications.

The T408 was well received by the market, and NETINT continued to innovate. In 2021, the company introduced its Quadra series. These devices can handle up to 32 simultaneous 1080p video streams, making them even more powerful than the T408, and they add the much-anticipated AV1 codec.

“NETINT has racked up a number of major wins including major names such as ByteDance, Baidu, Tencent, Alibaba, Kuaishou, and a US-based global entertainment service.”

As described by Dylan Patel, editor of the Semianalysis blog, in his article Meet NETINT: The Startup Selling Datacenter VPUs To ByteDance, Baidu, Tencent, Alibaba, And More, “NETINT has racked up a number of major wins including major names such as ByteDance, Baidu, Tencent, Alibaba, Kuaishou, and a similar sized US-based global platform.”

NETINT Quadra T1U Video Processing Unit
– NETINT’s second generation of shipping ASIC-based transcoders.

Patel also reported that, using the HEVC codec, NETINT video transcoders and VPUs crushed NVIDIA’s T4 GPU, which is widely assumed to be the default choice when moving to a hardware encoder in the data center. The density and power consumption achievable with a video ASIC are unmatched by CPUs and GPUs.

Patel commented further, “The comparison using AV1 is even more powerful… NETINT is the leader in merchant video encoding ASICs.”

“The comparison using AV1 is even more powerful…NETINT is the leader in video encoding ASICs.”

-Dylan Patel

ASIC Advantages

ASICs are designed to perform a specific task, such as encoding video, with a high degree of efficiency and speed, while CPUs and GPUs are designed to perform a wide range of general-purpose computing tasks. As evidence, the primary application for GPUs today has nothing to do with video encoding; just 5-10% of the silicon real estate on some of the most popular GPUs on the market is dedicated to video encoding or processing. Highly compute-intensive tasks like AI inferencing are the most common GPU workload today.

The key advantage of ASICs for video encoding is that they are optimized for this specific task, with a much higher percentage of gates on the chip dedicated to encoding than CPUs and GPUs. ASICs can encode much faster and with higher quality than CPUs and GPUs, while using less power and generating less heat.

“ASICs can encode much faster and with higher quality than CPUs and GPUs while using less power and generating less heat.”

-Dylan Patel

Additionally, because ASICs are designed for a specific task, they can be more easily customized and optimized for specific use cases. Though some assume that ASICs are inflexible, in reality, with a properly designed ASIC, the targeted function can be tuned to a higher degree than if it were run on a general-purpose computing platform. This can lead to even greater efficiency gains and improved performance.

The key takeaway is that ASICs are a superior choice for video encoding due to their application-specific design, which allows for faster and more efficient processing compared to general-purpose CPUs and GPUs.

Confirmation from Google and Meta

Recent industry announcements from Google and Meta confirm these conclusions. When Google announced the ASIC-based Argos VCU (Video Coding Unit) in 2021, the trade press rightfully applauded. CNET announced that “Google supercharges YouTube with a custom video chip.” Ars Technica reported that Argos brought “up to 20-33x improvements in compute efficiency compared to… software on traditional servers.” SemiAnalysis reported that Argos “Replaces 10 Million Intel CPUs.”

Google’s Argos confirms the value of encoding ASICs
(and shipped 2 years after the NETINT T408).

As described in the article “Argos dispels common myths about encoding ASICs” (bit.ly/ASIC_myths), Google’s experience highlights the benefits of ASIC-based transcoders: while many streaming engineers still rely on software-based transcoding, ASIC-based transcoding offers clear CAPEX, OPEX, and environmental sustainability advantages. The article goes on to address outdated concerns about the shortcomings of ASICs, including sub-par quality and the lack of upgradeability.

The article discusses several key findings from Google’s presentation on the Argos ASIC-based transcoder at Hot Chips 33, including:

  • Encoding time has grown by 8,000% due to the increased complexity of higher resolutions and frame rates; ASIC-based transcoding is necessary to keep video services running smoothly.
  • With properly designed hardware, ASICs can deliver near-parity with software-based transcoding quality.
  • ASIC quality and functionality can be improved and changed long after deployment.
  • ASICs deliver unparalleled throughput and power efficiency, with Google reporting a 90% reduction in power consumption.

Though much less is known about the Meta ASIC, its announcement prompted Facebook’s Director of Video Encoding, David Ronca, to proclaim, “I propose that there are two types of companies in the video business. Those that are using Video Processing ASICs in their workflows, and those that will.”

“…there are two types of companies in the video business. Those that are using Video Processing ASICs in their workflows, and those that will.”

Meta proudly announces its encoding ASIC
(3 years after NETINT’s T408 ships).

Unlike the ASICs from Google and Meta, you can actually buy ASIC-based transcoders from NETINT; in fact, tens of thousands of units are operating in some of the largest hyperscaler networks and video streaming platforms today. The fact that two of the biggest names in the tech industry are investing in ASICs for video encoding is a clear indication of the growing trend toward application-specific hardware in the video field. With the increasing demand for high-quality video streaming across a variety of devices and platforms, ASICs provide the speed, efficiency, and customization needed to meet these needs.

Avoiding Shiny New Object Syndrome

The suitability of ASICs as the best method for transcoding high volumes of live video has not gone unnoticed, so you should expect product announcements pointing to “availability later this year.” When these occur around prominent trade shows, it can indicate a rushed announcement timed for the show, and that “later” availability may actually mean “much later.”

It’s useful to remember that while waiting for a new product from a third-party supplier to become available, companies face three distinct risks: manufacturing, technology, and supply chain.

Manufacturing Risk:

One of the biggest risks associated with waiting for a new product is manufacturing risk: there is always a chance that the manufacturing process encounters unexpected problems, causing delays and increasing costs. For example, Intel faced manufacturing issues with its 10nm process that delayed its upcoming processors. As a result, Intel lost market share to competitors such as AMD and NVIDIA, who were able to release their products earlier.

Technology Risk:

Another risk associated with waiting for a new product is technology risk: the product may not conform to the expected specifications, leading to performance issues, security concerns, or other problems. For example, NVIDIA’s RTX 2080 Ti graphics card was highly anticipated, but upon release, many users reported performance issues, including crashes, artifacting, and overheating, which NVIDIA had to address before releasing the RTX 3080. Similarly, AMD’s Radeon RX 7900 XTX graphics card has been plagued with claims of overheating.

Supply Chain Risk:

The third risk associated with waiting for a new product is supply chain risk: the company may be unable to get the product manufactured and shipped on time due to issues in the supply chain. For example, AMD faced supply chain issues with its Radeon RX 6800 XT graphics card, leading to limited availability and higher prices.

The reality is that any company building and launching a cloud gaming or streaming service is already assuming its own technology and market risks. Compounding that risk by waiting for a product that “might” deliver minor gains in quality or performance (but equally might not) is a highly questionable decision, particularly in a market where even minor delays in launch dates can tank a new service before it’s even off the ground.

Clearly, ASICs are the future of high-volume video transcoding; NETINT, Google, and Meta have all proven this. NETINT is the only vendor of the three that actually offers its product for sale and immediate delivery; in fast-moving markets like interactive streaming and cloud gaming, this makes NETINT’s shipping transcoders, the T408 and Quadra, the safest bets of all.

All You Need to Know About the NETINT Product Line


This article will introduce you to the NETINT product line and Codensity ASIC generations. We will focus primarily on the hardware differences, since all products share a common software architecture and feature set, which are briefly described at the end of the article.


Codensity G4-Powered Video Transcoder Products

The Codensity G4 was the first encoding ASIC developed by NETINT. There are two G4-based transcoders: the T408 (Figure 1), which is available in a U.2 form factor and as an add-in card, and the T432 (Figure 2), which is available as an add-in card. The T408 contains a single G4 ASIC and draws 7 watts under full load, while the T432 contains four G4 ASICs and draws 27 watts.

The T408 costs $400 in low volumes, while the T432 costs $1,500. The T432 delivers 4x the raw performance of the T408.

Figure 1. The NETINT T408 is powered by a single Codensity G4 ASIC.

The T408 and T432 decode and encode H.264 and HEVC on the device but perform all scaling, overlay, and deinterlacing on the host CPU.

If you’re buying your own host, the selected CPU should reflect the extent of processing that it needs to perform and the overhead requirements of the media processing framework that is running the transcode function. 

When transcoding inputs without scaling, as in a cloud gaming or conferencing application, a modest CPU can suffice. If you are creating standard encoding ladders, deinterlacing multiple streams, or frequently scaling incoming videos, you’ll need a more capable CPU. For a turn-key solution, check out the NETINT Logan Video Server options.

Figure 2. The NETINT T432 includes four Codensity G4 ASICs.

The T408 and T432 run on multiple versions of Ubuntu and CentOS; see here for more detail about those versions and recommendations for configuring your server.

The NETINT Logan Video Server

The NETINT Video Transcoding Server includes ten T408 U.2 transcoders. It is targeted for high-volume transcoding applications as an affordable turn-key replacement for existing hardware transcoders or where a drop-in solution to a software-based transcoder is preferred.

The lowest-priced model costs $7,000 and is built on the Supermicro 1114S-WN10RT server platform powered by an 8-core/16-thread AMD EPYC 7232P processor running Ubuntu 20.04.05 LTS. The server ships with 128 GB of DDR4-3200 RAM, a 400GB M.2 SSD, three PCIe slots, and ten NVMe slots that house the ten T408 transcoders. At full transcoding capacity, the server draws 220 watts while encoding or transcoding up to ten 4Kp60 streams or as many as 160 720p60 video streams.

The server is also offered with two more powerful CPUs: the AMD EPYC 7543P (32 cores/64 threads, $8,900) and the AMD EPYC 7713P (64 cores/128 threads, $11,500). Other than the CPU, the hardware specifications are identical.

FIGURE 3. The NETINT Video Transcoding Server.

All Codensity G4-based products support HDR10 and HDR10+ for H.264 and H.265 encode and decode, as well as EIA CEA-708 closed captions for H.264 and H.265 encode and decode. In low-latency mode, all products support sub-frame latency. Other features include region-of-interest encoding, a customizable GOP structure with eight presets, and forced IDR frame inserts at any location.

The T408, T432, and NETINT Server are targeted toward high-volume interactive applications that require inexpensive, low-power, and high-density transcoding using the H.264 and HEVC codecs.

Codensity G5-Powered Live Transcoder Products

In addition to roughly quadrupling the H.264 and HEVC throughput of the Codensity G4, the Codensity G5 is our second-generation ASIC that adds AV1 encode support, VP9 decode support, onboard scaling, cropping, padding, graphical overlay, and an 18 TOPS (Trillions of Operations Per Second) artificial intelligence engine that runs the most common frameworks all natively in silicon.

Codensity G5 also includes audio DSP engines for encoding and decoding audio codecs such as MP3, AAC-LC, and HE-AAC. All this on-board activity minimizes the role of the CPU, allowing Quadra products to operate effectively in systems with modest CPUs.

Where the G4 ASIC is primarily a transcoding engine, the G5 incorporates much more onboard processing for even greater video processing acceleration. For this reason, NETINT labels Codensity G4-based products as Video Transcoders and Codensity G5-based products as Video Processing Units or VPUs.

The Codensity G5 is available in three products (Figure 4): the U.2-based Quadra T1 and the PCIe-based Quadra T1A, which each include one Codensity G5 ASIC, and the PCIe-based Quadra T2, which includes two Codensity G5 ASICs. Pricing for the T1 starts at $1,500.

In terms of power consumption, the T1 draws 17 Watts, the T1A 20 Watts, and the T2 draws 40 Watts.

Figure 4. The Quadra line of Codensity G5-based products.

All Codensity G5-based products provide the same HDR and closed caption support as the Codensity G4-based products. They have also been tested on Windows, macOS, Linux, and Android, with support for virtual machine and container virtualization, including Single Root I/O Virtualization (SR-IOV).

From a quality perspective, the Codensity G4-based transcoder products offer no configuration options for trading quality against throughput. Quadra Codensity G5-powered VPUs offer features like lookahead and rate-distortion optimization that allow users to customize quality and throughput for their particular applications.

HARD QUESTIONS ON HOT TOPICS – WHAT DO YOU NEED TO UNDERSTAND ABOUT NETINT PRODUCTS LINE
Watch the full conversation on YouTube: https://youtu.be/qRtnwjGD2mY

AI-Based Video Processing

Beyond VP9 ingest, AV1 output, and superior on-board processing, the Codensity G5 AI engine is a game changer for many current and future video processing applications. Each Codensity G5 ASIC includes two onboard Neural Processing Units (NPUs). Combined with Quadra’s integrated decoding, scaling, and transcoding hardware, this creates an integrated AI and video processing architecture that requires minimal interaction from the host CPU.

Today, in early 2023, the AI-enabled processing market is nascent, but Quadra already supports several applications, like an AI-based region-of-interest filter and background removal (see Quadra App Note APPS553). Additional features under development include automatic facial ID for video conferencing, license plate detection and OCR for security, object detection for a range of applications, and voice-to-text.

Quadra includes an AI Toolchain workflow that enables importing models from AI tools like Caffe, TensorFlow, Keras, and Darknet for deployment on Quadra. So, in addition to the basic models that NETINT provides, developers can design their own applications and easily implement them on Quadra.

Like NETINT’s Codensity G4 based products, Quadra VPUs are ideal for interactive applications that require low CAPEX and OPEX. Quadra VPUs offer increased onboard processing that enables lower-cost host systems and the ability to customize throughput and quality, deliver AV1 output, and deploy AI video applications.

The NETINT Quadra 100 Video Server

The NETINT Quadra 100 Video Server includes ten Quadra T1 U.2 VPUs and is targeted at ultra-high-volume transcoding applications and services seeking to deliver AV1 stream output.

The Quadra 100 Video Server costs $20,000 and is built on the Supermicro 1114S-WN10RT server platform powered by a 32-core/64-thread AMD EPYC 7543P processor running Ubuntu 20.04.05 LTS. The server ships with 128 GB of DDR4-3200 RAM, a 400GB M.2 SSD, three PCIe slots, and ten NVMe slots that house the ten T1 U.2 VPUs. At full transcoding capacity, the server draws around 500 watts while encoding or transcoding up to twenty 8Kp30 streams or as many as 640 720p30 video streams.

The Quadra server is also offered with two other CPUs: the AMD EPYC 7232P (8 cores/16 threads, price TBD) and the AMD EPYC 7713P (64 cores/128 threads, price TBD). Other than the CPU, the hardware specifications are identical.

Media Processing Frameworks - Driving NETINT Hardware

In addition to SDKs for both hardware generations, NETINT offers highly efficient FFmpeg and GStreamer SDKs that allow operators to apply an FFmpeg/libavcodec or GStreamer patch to complete the integration.

In the FFmpeg implementation, the libavcodec patch on the host server functions between the NETINT hardware and FFmpeg software layer, allowing existing FFmpeg-based video transcoding applications to control hardware operation with minimal changes.

The NETINT hardware device driver software includes a resource management module that tracks hardware capacity and usage load to present inventory and status on available resources and enable resource distribution. User applications can build their own resource management schemes on top of this resource pool or let the NETINT server automatically distribute the decoding and encoding tasks.

In automatic mode, users simply launch multiple transcoding jobs, and the device driver automatically distributes the decode/encode/processing tasks among the available resources. Alternatively, users can assign different tasks to different NETINT devices and even control which streams are decoded by the host CPU or NETINT hardware. With these and similar controls, users can efficiently balance the overall transcoding load between the NETINT hardware and host CPU to maximize throughput.

In all interfaces, the syntax and command structure are similar for T408s and Quadra units, which simplifies migrating from G4-based products to Quadra hardware. It is also possible to operate T408 and Quadra hardware together in the same system.

That’s the overview. For more information on any product, please check the individual product pages.


Maximizing Cloud Gaming Performance with ASICs


Ask ten cloud gamers what an acceptable level of latency is for cloud gaming, and you’ll get ten different answers. However, they will all agree that lower latency is better.

At NETINT, we understand. As a supplier of encoders to the cloud gaming market, our role is to supply the lowest possible latency at the highest possible quality and the greatest encoding density with the lowest possible power consumption. While this sounds like a tall order, because our technology is ASIC based, it’s what we do for cloud gaming and high-volume video streaming workloads of all types.

In this article, we’ll take a quick look at the technology stack for cloud gaming and the role of compression. Then we’ll discuss the performance of the NETINT Quadra VPU (video processing unit) series using the four measuring sticks of latency, density, video quality, and power consumption.

The Cloud Gaming Technology Stack

Figure 1 illustrates the different elements of the cloud gaming technology stack, particularly how the various transfer, compute, rendering, and encoding activities contribute to overall latency.

At the heart of every cloud gaming data center is a game engine that typically runs the operating system native to the game, usually Android or Windows, though Linux and macOS are not uncommon (see here for Meta’s dual-OS architecture).

Since most games rely on GPUs for rendering, all cloud gaming data centers have a healthy dose of GPU resources. These functions are incorporated in the cloud compute and graphics engine shown on the left, which creates the frames sent to the encode function for encoding and transmission to the gamer.

As illustrated in Figure 1, Nokia budgets 100 ms for total latency. Inside the data center, shown on the left, Nokia allows 15 ms to receive the data, 40 ms to process the input and render the frame, 5 ms to encode the frame, and 15 ms to return it to the remote player. That’s a lot to do in the time it takes a sound wave to travel just 100 feet.
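Summing those data-center legs gives 15 + 40 + 5 + 15 = 75 ms, which, assuming Figure 1 shows no other data-center components, leaves roughly 25 ms of the 100 ms budget for everything else in the path, such as client-side decode and display.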

Figure 1. Cloud gaming latency budget from Nokia.

NETINT’s Quadra VPU series is ideal for the standalone encode function. All Quadra VPUs are powered by the NETINT Codensity G5 ASIC. It’s called a video processing unit because, in addition to H.264, HEVC, and VP9 decode and H.264, HEVC, and AV1 encode, Quadra VPUs offer onboard scaling, overlay, and an 18 TOPS AI engine (per chip).

Quadra is available in several single-chip solutions (T1 and T1A) and a dual-chip solution (T2) and starts at $1,500 in low quantities. Depending upon the configuration that you purchase, you can install up to ten Quadra VPUs in a single 1RU server and twenty Quadra VPUs in a 2RU server.

Cloud Gaming Latency and Density

Table 1 reports latency and density for a single Quadra VPU. As you would expect, latency depends on video resolution (and with it the required network bandwidth) and, to a much lesser degree, on the number of jobs being processed.

Game producers understand the resolution/latency tradeoff and design the experience around it. So a cloud gaming vendor might deliver a first-person shooter at 720p to minimize latency and provide a better UX on medium-bandwidth connections, and a slower-paced role-playing or strategy game at larger resolutions to optimize the visual experience. As you can see, a single Quadra VPU can service both scenarios, with 4K latency under 20 ms and 720p latency around 4 ms at extremely high stream counts.

Table 1. Quadra throughput and average latency for AVC and HEVC.

In terms of density, the jobs shown in Table 1 are for a single Quadra VPU. Though multiple units won’t scale linearly, performance will increase substantially as you install additional units into a server. Because the Quadra is focused solely on video processing and encoding operations, it outperforms most general-purpose GPUs, CPUs, and even FPGA-based encoders from a density perspective.
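
As a rough planning aid, the sketch below projects server-level throughput from single-card numbers; both the 32-stream per-card load and the 0.9 scaling efficiency are placeholder assumptions, since the article states only that multi-card scaling is substantial but not perfectly linear.

```python
# Illustrative (not measured) projection of multi-card throughput.
# The per-card stream count and the 0.9 scaling efficiency are
# placeholder assumptions, not figures from Table 1.

def projected_streams(per_card: int, cards: int, efficiency: float = 0.9) -> int:
    """Estimate total simultaneous streams across several Quadra VPUs."""
    return int(per_card * cards * efficiency)

# Hypothetical example: 32 1080p streams per card, ten cards in a 1RU server.
print(projected_streams(32, 10))        # 288 streams at 90% efficiency
print(projected_streams(32, 10, 1.0))   # 320 streams if scaling were linear
```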

Quadra Output Quality

From a quality perspective, hardware transcoders are typically benchmarked against the x264 and x265 codecs running in FFmpeg. Though FFmpeg’s throughput is orders of magnitude lower, these codecs represent well-known and accepted quality levels. NETINT recently compared Quadra quality against x264 and x265 in a low-latency configuration using a CGI-based data set.
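
For readers who want to reproduce a comparable software baseline, here is a minimal sketch driven from Python; the input clip and bitrate are placeholders, and the article doesn’t publish NETINT’s exact FFmpeg command, so this is a representative low-latency x264 configuration rather than the actual test harness.

```python
# A representative low-latency software baseline: x264 at the veryfast
# preset, the setting cited in the article as typical for live streaming.
# The clip name and bitrate are placeholders, not NETINT's test values.
import subprocess

cmd = [
    "ffmpeg", "-y",
    "-i", "cgi_test_clip.mp4",   # placeholder CGI source clip
    "-c:v", "libx264",
    "-preset", "veryfast",       # live-streaming preset from the article
    "-tune", "zerolatency",      # removes lookahead/B-frame buffering delay
    "-b:v", "4500k",             # placeholder target bitrate
    "x264_baseline.mp4",
]
subprocess.run(cmd, check=True)
```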

Table 2 shows the results for H.264, with Rate-Distortion Optimized Quantization (RDOQ) enabled and disabled. Enabling RDOQ increases quality slightly but decreases throughput. Quadra exceeded x264 quality in both configurations using the veryfast preset, which is typical for live streaming.

Table 2. The NETINT Quadra VPU series delivers better H.264 quality than the x264 codec using the veryfast preset.

For HEVC, Table 3 shows the equivalent x265 preset with RDOQ disabled (the high-throughput, lower-quality option) at three Rate Distortion Optimization (RDO) levels, which also trade off quality for throughput. Even with RDOQ disabled and RDO set to 1 (low quality, high throughput), Quadra delivers the equivalent of x265 medium quality. Note that most live streaming engineers use the superfast or ultrafast preset to produce even a modest number of HEVC streams in a software-only encoding scenario.

Table 3. The NETINT Quadra VPU series delivers better quality than the x265 codec using the medium preset.

Low Power Transcoding for Cloud Gaming

At full power, Quadra T1 draws 70 watts. Though some GPUs offer similar power consumption, they typically deliver far fewer streams.

In this comparison with the NVIDIA T4, the Quadra T1 drew 0.71 watts per 1080p stream, about 81% less than the 3.7 watts per stream required by the T4. This translates directly into an 81% reduction in energy costs and carbon emissions per stream. In terms of CAPEX, Quadra costs $53.57 per 1080p stream, 63% less than the T4’s $144 per stream.
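
Those percentages fall straight out of the quoted per-stream figures:

```python
# Recomputing the per-stream savings from the raw figures quoted above.
quadra_watts, t4_watts = 0.71, 3.7       # watts per 1080p stream
quadra_capex, t4_capex = 53.57, 144.0    # dollars per 1080p stream

print(f"OPEX saving:  {1 - quadra_watts / t4_watts:.0%}")    # ~81%
print(f"CAPEX saving: {1 - quadra_capex / t4_capex:.0%}")    # ~63%
```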

When it comes to gameplay, most gamers prioritize latency and quality. In addition to delivering these two key QoE elements, cloud gaming vendors must also focus on CAPEX, OPEX, and sustainability. By all these metrics, the ASIC-based Quadra is the ideal encoder for any cloud gaming production workflow.

Argos dispels common myths about encoding ASICs

Even in 2023, many high-volume streaming producers continue to rely on software-based transcoding, despite the clear CAPEX, OPEX, and environmental benefits of ASIC-based transcoding. Part of the inertia relates to outdated concerns about the shortcomings of ASICs, including sub-par quality and lack of flexibility to add features or codec enhancements.

As a parent, I long ago concluded that there were no words that could come out of my mouth that would change my daughter’s views on certain topics. As a marketer, I feel some of that same dynamic, that no words can come out of my keyboard that would shake the negative beliefs about ASICs from staunch software-encoding supporters.

So, don’t take our word that these beliefs are outdated; consider the results from the world’s largest video producer, YouTube. The following slides and observations are from a Google presentation by Aki Kuusela and Clint Smullen on the Argos ASIC-based transcoder at Hot Chips 33 back in August 2021. The slides are available here, and the video here.

In the presentation, the speakers discussed why YouTube developed its own ASIC and the performance and power efficiency achieved during the first 16 months of deployment. Their comments go a long way toward dispelling the myths identified above and make for interesting reading.

Advanced Codecs Mean Encoding Time Has Grown 8,000x Since H.264

In discussing why Google created its own encoder, Kuusela explained that video was getting harder to compress, not only from a codec perspective but also from a resolution and frame rate perspective. Here’s Kuusela (all quotes taken from the YouTube video and lightly edited for readability).

“In order to sustain the higher resolutions and frame rate requirements of video, we have to develop better video compression algorithms with improved compression efficiency. However, this efficiency comes with greatly increased complexity. For example, if we compare VP9 from 2013 to the decade-older H.264, the time to encode videos in software has grown 10x. The more recent AV1 format from 2018 is already 200 times more time-consuming than the H.264 standard.

“If we further compound this effect with the increase in resolution and frame rate for top-quality video, we can see that the time to encode a video from 2003 to 2018 has grown 8,000-fold. It is very obvious that CPU performance improvement has not kept up with this massive complexity growth, and to keep our video services running smoothly, we had to consider warehouse-scale acceleration. We also knew things would not get any better with the next generation of compression.”

Figure 1. Google moved to hardware to address skyrocketing encoding times.
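
As a back-of-the-envelope check, the sketch below compounds the 200x AV1 codec multiplier with pixel-rate growth; the 2003 and 2018 “top quality” formats are our illustrative assumptions, not figures from the talk, so the result only needs to land in the same ballpark as Google’s 8,000x.

```python
# Rough reconstruction of the compounded complexity growth. The codec
# multiplier is from Kuusela's talk; the two "top quality" formats are
# our own illustrative assumptions.
av1_vs_h264 = 200                   # AV1 encode time vs. H.264, per the talk

pixel_rate_2003 = 720 * 480 * 30    # assumed SD @ 30 fps in 2003
pixel_rate_2018 = 3840 * 2160 * 60  # assumed 4K @ 60 fps in 2018

growth = av1_vs_h264 * pixel_rate_2018 / pixel_rate_2003
print(f"Compounded growth: ~{growth:,.0f}x")  # ~9,600x, same ballpark as 8,000x
```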

Reviewing Figure 1, note that though few engineers use VP9 as extensively as YouTube does, if you swap HEVC for VP9, the complexity gap with H.264 is roughly the same. Beyond the higher resolutions and frame rates engineers must support to remain competitive, the need for hardware becomes even more apparent when you consider the demands of live production.

“Near Parity” with Software Encoding Quality

One consistent concern about ASICs has been quality, which admittedly lagged in early hardware generations. However, Google’s comparison shows that properly designed hardware can deliver near parity with software-based transcoding.

Kuusela doesn’t spend a lot of time on the slide shown in Figure 2, merely stating that “we also wanted to be able to optimize the compression efficiency of the video encoder based on the real-time requirements and time available for each encoder and to have full access to all quality control algorithms such as bitrate allocation and group of picture selection. So, we could get near parity to software-based encoding quality with our no-compromises implementation.”

Figure 2. Argos delivers “near-parity” with software encoders.

NETINT’s data more than supports this claim. For example, Table 1 compares the NETINT Quadra VPU with various x265 presets. Depending upon the test configuration, Quadra delivers quality on par with the x265 medium preset. When you consider that software-based live production often necessitates using the veryfast or ultrafast preset to achieve marginal throughput, Quadra’s quality far exceeds that of software-based transcoding.

Table 1. Quadra HEVC quality compared to x265 in a high-quality, latency-tolerant configuration.

ASIC Performance Can Improve After Deployment

Another concern about ASIC-based transcoders is that they can’t be upgraded, leading to accelerated obsolescence. In fact, proper ASIC design balances encoding tasks between hardware, firmware, and control software to ensure continued upgradeability.

Figure 3 shows how the bitrate efficiency of VP9 and H.264 continued to improve compared to software in the months after the product launch, even without changes to the firmware or kernel driver. The second Google presenter, Clint Smullen, attributed this to a hybrid hardware/software design, commenting that “using a software approach was critical both to supporting the quality and feature development in the video core as well as allowing customer teams to iteratively improve quality and performance.”

Figure 3. Argos continued to improve after deployment without changes to firmware or the kernel driver.

The NETINT Codensity G4 ASIC included in the T408 and the NETINT Codensity G5 ASIC that powers our Quadra family of VPUs both use a hybrid design that distributes critical functions among the ASIC, driver software, and firmware.

We optimize ASIC design to maximize functional longevity, as explained here on the role of firmware in ASIC implementations: “The functions implemented in the hardware are typically the lower-level parts of a video codec standard that do not change over time, so the hardware does not need to be updated. The higher-level parts of the video codecs are in firmware and driver software and can still be changed.”
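
To make that division of labor concrete, here is a purely conceptual sketch (not NETINT’s or Google’s actual architecture): the fixed-function block stands in for silicon that never changes after tape-out, while the injected rate-control policy plays the role of upgradeable firmware.

```python
# Conceptual illustration of the hybrid hardware/firmware split.
# Everything here is a stand-in, not real encoder internals.
from typing import Callable, List

def silicon_encode_block(block: bytes, qp: int) -> bytes:
    """Stands in for fixed-function hardware (transform, quantize,
    entropy-code). This layer is frozen at tape-out."""
    return block  # placeholder for the hard-wired math

class HybridEncoder:
    def __init__(self, rate_control: Callable[[int], int]):
        # Higher-level policy is injected, like replaceable firmware.
        self.rate_control = rate_control

    def encode_frame(self, blocks: List[bytes], target_kbps: int) -> List[bytes]:
        qp = self.rate_control(target_kbps)  # updatable decision logic
        return [silicon_encode_block(b, qp) for b in blocks]

# Shipping a smarter rate controller later improves output quality
# without touching the "silicon" function at all.
encoder = HybridEncoder(rate_control=lambda kbps: max(10, 51 - kbps // 200))
```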

As Google’s experience and NETINT’s data show, well-designed ASICs can continue improving in quality and functionality long after deployment. 

90% Reduction in Power Consumption

Few engineers question the throughput and power efficiency of ASICs, and Google’s data bears this out. Commenting on Figure 4, Smullen stated, “For H.264 transcoding, a single VCU matches the speed of the baseline system while using about one-tenth of the system-level power. For VP9, a single 20-VCU machine replaces multiple racks of CPU-only systems.”

Figure 4. Throughput and comparative efficiency of Argos vs. software-only transcoding.

NETINT ASICs deliver similar results. For example, a single T408 transcoder (H.264 and HEVC) delivers roughly the same throughput as a 16-core computer encoding with software while drawing only about 7 watts, compared to 250+ watts for the computer. NETINT Quadra draws 20 watts and delivers roughly 4x the performance of the T408 for H.264, HEVC, and AV1. In one implementation, a single 1RU server with ten Quadras can deliver 320 1080p streams or 200 720p cloud gaming sessions, which, like Argos, replaces multiple racks of CPUs.
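
The per-stream math, using the figures above (host-server overhead excluded, which is our simplification):

```python
# Per-stream power from the figures quoted above. Card power only;
# host-server overhead is deliberately ignored as a simplification.
t408_watts, cpu_watts = 7, 250   # T408 vs. 16-core software-encoding server
print(f"T408 power saving: {1 - t408_watts / cpu_watts:.0%}")  # ~97%

cards, watts_per_card = 10, 20   # ten Quadras in one 1RU server
streams_1080p = 320
print(f"~{cards * watts_per_card / streams_1080p:.2f} W per 1080p stream")  # 0.62 W
```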

Time to Reconsider?

As Google’s experience with YouTube and Argos shows, ASICs deliver unparalleled throughput and power efficiency in high-volume publishing workflows. If you haven’t considered ASICs for your workflow, it’s time for another look.