Mobile cloud gaming and technology suppliers


Video games are a huge market segment, projected to reach US$221.4 billion in 2023, expanding to an estimated US$285 billion by 2027. Of that, cloud gaming grossed an estimated US$3 billion+ in 2022 and is projected to produce over US$12 billion in revenue by 2026.

While the general video game market generates minimal revenue from encoder sales, cloud gaming is the perfect application for ASIC-based transcoding. NETINT products were designed, in part, for cloud gaming and are extensively deployed in cloud gaming overseas. We expect to announce some high-profile domestic design wins in 2023.

If you’re not a gamer, you may not be familiar with what cloud gaming is and how it’s different from PC or console-based gaming. This is the first of several introductory articles to get you up to speed on what cloud gaming is, how it works, who the major players are, and why it’s projected to grow so quickly. 

What is cloud gaming?

Figure 1, from this article, illustrates the difference between PC/console gaming and cloud gaming. On top is traditional gaming, where the gamer needs an expensive, high-performance console or game computer to process the game logic and render the output. To the extent that there is a cloud component, say for multiple players, the online server tracks and reports the interactions, but all computational and rendering heavy lifting is performed locally.

Figure 1. The difference between traditional and cloud gaming. From this article.

On the bottom is cloud gaming. As you can see, all you need on the consumer side is a screen and game controller. All of the game logic and rendering are performed in the cloud, along with encoding for delivery to the consumer.

Cloud gaming workflow

Figure 2 shows a high-level cloud workflow – we’ll dig deeper into the cloud gaming technology stack in future articles, but this should help you grasp the concept. As shown, the gamer’s inputs are sent to the cloud, where a virtual instance of the game interprets, executes, and renders the input. The resultant frames are captured, encoded, and transmitted back to the consumer, where the frames are decoded and displayed. 

Figure 2. A high-level view of the cloud side of cloud gaming from this seminal article.
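To make the workflow above more concrete, here is a minimal, purely illustrative sketch of the per-frame loop in Python. None of the function names correspond to a real cloud gaming API; the stubs simply mark where game execution, rendering, encoding, and transport would occur.

# Illustrative per-frame cloud gaming loop. All functions are hypothetical stubs,
# not a real cloud gaming API.
import time

def poll_controller():            # input events arriving from the player's device
    return []

def run_game_and_render(events):  # game logic plus GPU rendering on the cloud instance
    return b"raw-frame"

def encode(frame):                # H.264/HEVC/AV1 encode of the rendered frame
    return b"encoded-frame"

def send_to_client(packet):       # low-latency transport back to the player
    pass

def session(target_fps=60, duration_s=1):
    frame_time = 1.0 / target_fps           # about 16.7 ms per frame at 60 fps
    frames = 0
    start = time.time()
    while time.time() - start < duration_s:
        events = poll_controller()
        frame = run_game_and_render(events)
        send_to_client(encode(frame))        # the client then decodes and displays
        frames += 1
        time.sleep(frame_time)               # pace the loop to the target frame rate
    return frames

if __name__ == "__main__":
    print(f"simulated {session()} frames")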

Consumer benefits of cloud gaming

Cloud gaming services incorporate widely different business models, pricing levels, available games, performance envelopes, and compatible devices. In most cases, however, consumers benefit because:

  • They don’t need a high-performance PC or game console to play games – they can play on most connected devices. This includes some Smart TVs for a true, big-screen experience.
  • They don’t need to download, install, or maintain games on their game platform.
  • They don’t need to buy expensive games to get started.
  • They can play the same game on multiple platforms, from an expensive gaming rig or console to a smartphone or tablet, with all ongoing game information stored in the cloud so they can immediately pick up where they left off.

Publishers benefit because they get instant access to users on all platforms, not just the native platforms the games were designed for. So, console and PC-based games are instantly accessible to all players, even those without the native hardware. Since games aren’t downloaded during cloud gaming, there’s no risk of piracy, and the cloud negates the performance advantages long held by those with the fastest hardware, leveling the playing field for gameplay.

Gaming experience

Speaking of performance, what’s necessary to achieve a traditional local gameplay experience? Most cloud platforms recommend a 10 Mbps download speed at a minimum for mobile, with a wired Ethernet connection recommended for computers and smart TVs. As you would expect, your connection speed dictates performance, with 4K ultra-high frame rate games requiring faster connection speeds than 1080p@30fps gameplay.

As mentioned at the top, cloud gaming is expected to capture an increasing share of overall gameplay revenue going forward, both from existing gamers who want to play new games on new platforms and from new gamers. Given the revenue numbers involved, this makes cloud gaming a critical market for all related technology suppliers.

Argos dispels common myths about encoding ASICs


Even in 2023, many high-volume streaming producers continue to rely on software-based transcoding, despite the clear CAPEX, OPEX, and environmental benefits of ASIC-based transcoding. Part of the inertia relates to outdated concerns about the shortcomings of ASICs, including sub-par quality and lack of flexibility to add features or codec enhancements.

As a parent, I long ago concluded that there were no words that could come out of my mouth that would change my daughter’s views on certain topics. As a marketer, I feel some of that same dynamic, that no words can come out of my keyboard that would shake the negative beliefs about ASICs from staunch software-encoding supporters.

So, don’t take our word that these beliefs are outdated; consider the results from the world’s largest video producer, YouTube. The following slides and observations are from a Google presentation by Aki Kuusela and Clint Smullen on the Argos ASIC-based transcoder at Hot Chips 33 back in August 2021. The slides are available here, and the video here.

In the presentation, the speakers discussed why YouTube developed its own ASIC and the performance and power efficiency achieved during the first 16 months of deployment. Their comments go a long way toward dispelling the myths identified above and make for interesting reading.

Advanced Codecs Mean Encoding Time Has Grown 8,000x Since H.264

In discussing why Google created its own encoder, Kuusela explained that video was getting harder to compress, not only from a codec perspective but also from a resolution and frame rate perspective. Here’s Kuusela (all quotes grabbed from the YouTube video and lightly edited for readability).

“In order to sustain the higher resolutions and frame rate requirements of video, we have to develop better video compression algorithms with improved compression efficiency. However, this efficiency comes with greatly increased complexity. For example, if we compare VP9 from 2013 to the decade-older H.264, the time to encode videos in software has grown 10x. The more recent AV1 format from 2018 is already 200 times more time-consuming than the H.264 standard.

If we further compound this effect with the increase in resolution and frame rate for top-quality video, we can see that the time to encode a video from 2003 to 2018 has grown eight thousand-fold. It is very obvious that the CPU performance improvement has not kept up with this massive complexity growth, and to keep our video services running smoothly, we had to consider warehouse scale acceleration. We also knew things would not get any better with the next generation of compression.”

Figure 1. Google moved to hardware
to address skyrocketing encoding times.
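As a sanity check on the eight-thousand-fold figure, the growth compounds roughly as codec complexity times the growth in pixels per second. The operating points below (480p30 in 2003, 4Kp60 in 2018) are my own illustrative assumptions rather than Google's exact inputs, but they land in the same order of magnitude.

# Back-of-the-envelope check on the ~8,000x figure. The 2003 and 2018 operating
# points are assumptions for illustration, not Google's exact inputs.

codec_complexity = 200           # AV1 encode time vs. H.264, per the presentation

pixels_2003 = 640 * 480 * 30     # a typical 2003 target: 480p30
pixels_2018 = 3840 * 2160 * 60   # a 2018 top-quality target: 4Kp60

pixel_growth = pixels_2018 / pixels_2003          # roughly 54x more pixels per second
total_growth = codec_complexity * pixel_growth    # roughly 10,800x, the same order as 8,000x

print(f"pixel-rate growth: {pixel_growth:.0f}x, combined growth: {total_growth:,.0f}x")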

Reviewing Figure 1, note that although few engineers use VP9 as extensively as YouTube does, if you swap HEVC for VP9, the complexity increase over H.264 is roughly the same. Beyond the higher resolutions and frame rates engineers must support to remain competitive, the need for hardware becomes even more apparent when you consider the demands of live production.

“Near Parity” with Software Encoding Quality

One consistent concern about ASICs has been quality, which admittedly lagged in early hardware generations. However, Google’s comparison shows that properly designed hardware can deliver near-parity to software-based transcoding.

Kuusela doesn’t spend a lot of time on the slide shown in Figure 2, merely stating that “we also wanted to be able to optimize the compression efficiency of the video encoder based on the real-time requirements and time available for each encoder and to have full access to all quality control algorithms such as bitrate allocation and group of picture selection. So, we could get near parity to software-based encoding quality with our no-compromises implementation.”

Figure 2. Argos delivers “near-parity”
with software encoders.

NETINT’s data more than supports this claim. For example, Table 1 compares the NETINT Quadra VPU with various x265 presets. Depending upon the test configuration, Quadra delivers quality on par with the x265 medium preset. When you consider that software-based live production often necessitates using the veryfast or ultrafast preset to achieve marginal throughput, Quadra’s quality far exceeds that of software-based transcoding.

Table 1. Quadra HEVC quality compared to x265
in a high-quality, latency-tolerant configuration.

ASIC Performance Can Improve After Deployment

Another concern about ASIC-based transcoders is the inability to upgrade them and the accelerated obsolescence that implies. Proper ASIC design balances encoding tasks between hardware, firmware, and control software to ensure continued upgradeability.

Figure 3 shows how the bitrate of VP9 and H.264, compared to software, continued to improve in the months after the product launch, even without changes to the firmware or kernel driver. The second Google presenter, Clint Smullen, attributed this to a hybrid hardware/software design, commenting that “Using a software approach was critical both to supporting the quality and feature development in the video core as well as allowing customer teams to iteratively improve quality and performance.”

Figure 3. Argos continued to improve after deployment
without changes to firmware or the kernel driver.

The NETINT Codensity G4 ASIC inside the T408 and the NETINT Codensity G5 ASIC that powers our Quadra family of VPUs both use a hybrid design that distributes critical functions among the ASIC, driver software, and firmware.

We optimize ASIC design to maximize functional longevity. As explained here on the role of firmware in ASIC implementations, “The functions implemented in the hardware are typically the lower-level parts of a video codec standard that do not change over time, so the hardware does not need to be updated. The higher-level parts of the video codecs are in firmware and driver software and can still be changed.”

As Google’s experience and NETINT’s data show, well-designed ASICs can continue improving in quality and functionality long after deployment. 

90% Reduction in Power Consumption

Few engineers question the throughput and power efficiency of ASICs, and Google’s data bears this out. Commenting on Figure 4, Smullen stated, “For H.264 transcoding a single VCU matches the speed of the baseline system while using about one-tenth of the system level power. For VP9, a single 20 VCU machine replaces multiple racks of CPU-only systems.”

Figure 4. Throughput and comparative efficiency
of Argos vs software-only transcoding.

NETINT ASICs deliver similar results. For example, a single T408 transcoder (H.264 and HEVC) delivers roughly the same throughput as a 16-core computer encoding with software and draws only about 7 watts compared to 250+ for the computer. NETINT Quadra draws 20 watts and delivers roughly 4x the performance of the T408 for H.264, HEVC, and AV1. In one implementation, a single 1RU server loaded with ten Quadras can deliver 320 1080p streams or 200 720p cloud gaming sessions, which, like Argos, replaces multiple racks of CPUs.
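The arithmetic behind those claims is straightforward. The quick sketch below simply divides the throughput and power figures quoted in this section; it counts transcoder power only and ignores the host server, so treat it as an approximation.

# Quick arithmetic on the density and power figures quoted above.
# Stream counts and wattages are taken from the text; everything else is division.

t408_watts, server_watts = 7, 250
print(f"T408 vs. software server: roughly {server_watts / t408_watts:.0f}x less power "
      "for about the same throughput")

# Ten Quadras in a 1RU server: 320 x 1080p streams on ~200 W of transcoder power
quadra_watts, quadras_per_ru, streams_per_ru = 20, 10, 320
watts_per_stream = (quadra_watts * quadras_per_ru) / streams_per_ru
print(f"Quadra server: roughly {watts_per_stream:.2f} W of transcoder power per 1080p stream")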

Time to Reconsider?

As Google’s experience with YouTube and Argos shows, ASICs deliver unparalleled throughput and power efficiency in high-volume publishing workflows. If you haven’t considered ASICs for your workflow, it’s time for another look.

How Scaling Method and Technique Impacts Quality and Throughput


The thing about FFmpeg is that there are almost always multiple ways to accomplish the same basic function. In this post, we look at four approaches to scaling to reveal how the scaling method and techniques used impact quality and throughput.

We found that if you’re scaling using the default -s function (-s 1280x720), you’re leaving a bit of quality on the table compared to other methods. How much depends upon the metric you prefer; about ten percent if you’re a VMAF (hand raised here) or SSIM fan, much less if you still bow to the PSNR gods. More importantly, if you’re chasing throughput via cascaded scaling with fast scaling algorithms (flags=fast_bilinear), you’re probably losing quality without a meaningful throughput increase.

That’s the TL;DR. Here’s the backstory.

The Backstory

NETINT sells ASIC-based hardware transcoders. One key advantage over software-only/CPU-based encoding is throughput, so we perform lots of hardware vs. software benchmarking. Fairness dictates that we use the most efficient FFmpeg command string when deriving the command string for software-only encoding.

In addition, the NETINT T408 transcoder scales in software using the host CPU, so we are invested in techniques that increase throughput for T408 transcodes. In contrast, the NETINT Quadra scales and performs overlays in hardware and provides an AI engine, which is why it’s designated a Video Processing Unit (VPU) rather than a transcoder.

One proposed scaling technique for accelerating both software-only and T408 processing is cascaded scaling, where you create a filter complex that starts at full resolution, scales to the next lower resolution, and then uses that output as the source for the next rung down, and so on. Here’s an example.

filter_complex "[0:v]split=2[out4k][in4k];[in4k]scale=2560:1440:flags=fast_bilinear,split=2[out1440p][in1440p];[in1440p]scale=1920:1080:flags=fast_bilinear,split=3[out1080p][out1080p2][in1080p];[in1080p]scale=1280:720:flags=fast_bilinear,split=2[out720p][in720p];[in720p]scale=640:360:flags=fast_bilinear[out360p]"

So, rather than performing multiple scales from the full-resolution source to each target (4K > 2K, 4K > 1080p, 4K > 720p, 4K > 360p), you’re performing each scale from the next lower-resolution source (4K > 2K > 1080p > 720p > 360p). The theory was that this would reduce CPU cycles and improve throughput, particularly when coupled with a fast scaling algorithm. Even assuming a performance increase (which turned out to be a bad assumption), the obvious concern is quality; how much does quality degrade because the lower-resolution transcodes are working from a lower-resolution source?
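One rough way to quantify the "reduced scaling load" theory is to count how many source pixels each approach reads per output frame. The sketch below does exactly that for the ladder used in this test; it ignores the per-pixel cost of the scaling algorithm itself, so treat it as an illustration rather than a benchmark.

# Rough comparison of the scaling load for direct vs. cascaded scaling, counting
# the source pixels read for each downscale. Illustration only; it ignores the
# actual per-pixel cost of the chosen scaling algorithm.

rungs = [(3840, 2160), (2560, 1440), (1920, 1080), (1280, 720), (640, 360)]

# Direct: every rung below 4K is scaled from the 4K source
direct = sum(rungs[0][0] * rungs[0][1] for _ in rungs[1:])

# Cascaded: each rung is scaled from the next-higher rung
cascaded = sum(w * h for (w, h) in rungs[:-1])

print(f"direct: {direct:,} source pixels, cascaded: {cascaded:,} "
      f"({100 * (1 - cascaded / direct):.0f}% fewer)")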

In contrast, if you’ve read this far, you know that the typical scaling technique used by most beginning FFmpeg producers is the -s option (-s 1280x720). For all rungs below 4K, FFmpeg scales the source footage down to the target resolution using the bicubic scaling algorithm.

So, we had two proposed methods which I expanded to four, as follows.

  • Default (-s 1280x720)
  • Cascade using fast bilinear
  • Cascade using Lanczos
  • Video filter using Lanczos (-vf scale=1280x720 -sws_flags lanczos)

I tested the following encoding ladder using the HEVC codec.

  • 4K @ 12 Mbps
  • 2K @ 7 Mbps
  • 1080p @ 3.5 Mbps
  • 1080p @ 1.8 Mbps
  • 720p @ 1 Mbps
  • 360p @ 500 kbps

I encoded two 3-minute 4Kp30 files, excerpts from the Netflix Meridian and Harmonic Football test clips, using the x265 codec and ultrafast preset. You can see the full command strings at the end of the article. I measured throughput in frames per second and measured quality for the 2K through 360p rungs with VMAF, PSNR, and SSIM, compiling the results into BD-Rate comparisons in Excel.
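For readers who haven't built BD-Rate comparisons before, here is a minimal sketch of the classic Bjøntegaard calculation (a cubic fit of log bitrate versus quality, integrated over the overlapping quality range). The (bitrate, VMAF) points shown are hypothetical; the tables below were produced in Excel, not with this script.

# Minimal BD-Rate sketch (classic Bjontegaard: cubic fit of log10 bitrate vs. quality,
# integrated over the overlapping quality range). Assumes four (kbps, score) points
# per method; the example data points are hypothetical.
import numpy as np

def bd_rate(anchor, test):
    """Average bitrate difference (%) of `test` vs. `anchor` at equal quality."""
    r1, q1 = np.log10([p[0] for p in anchor]), [p[1] for p in anchor]
    r2, q2 = np.log10([p[0] for p in test]), [p[1] for p in test]
    # Fit log-rate as a cubic polynomial of quality, then integrate both fits
    p1 = np.polyfit(q1, r1, 3)
    p2 = np.polyfit(q2, r2, 3)
    lo, hi = max(min(q1), min(q2)), min(max(q1), max(q2))
    int1, int2 = np.polyint(p1), np.polyint(p2)
    avg1 = (np.polyval(int1, hi) - np.polyval(int1, lo)) / (hi - lo)
    avg2 = (np.polyval(int2, hi) - np.polyval(int2, lo)) / (hi - lo)
    return (10 ** (avg2 - avg1) - 1) * 100   # positive = test needs more bitrate

# Hypothetical (kbps, VMAF) points for two scaling methods at the same rung
default_s = [(500, 70.1), (1000, 81.3), (1800, 88.2), (3500, 93.5)]
vf_lanczos = [(500, 72.0), (1000, 83.0), (1800, 89.6), (3500, 94.4)]
print(f"BD-Rate of -s vs. Lanczos filter: {bd_rate(vf_lanczos, default_s):+.1f}%")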

I tested on a Dell Precision 7820 tower driven by two 2.9 GHz Intel Xeon Gold (6226R) CPUs running Windows 10 Pro for Workstations with 64 GB of RAM. I tested with FFmpeg 5.0, a version downloaded from www.gyan.dev on December 15, 2022.

Performance

Table 1. FPS by scaling method.

Table 1 shows that cascading delivered negligible performance benefits with the two test files and the selected encoding parameters. I asked the engineer who suggested the cascading scaling approach why we saw no throughput increase. Here’s a brief exchange. 

Engineer: It’s not going to make any performance difference in your example anyways but it does reduce the scaling load

       Me: Why wouldn’t it make a performance difference if it reduces the scaling load?

Engineer: Because, as your example has shown, the x265 encoding load dominates. It would make a very small difference

       Me: Ah, so the slowest, most CPU-intensive process controls overall performance.

Engineer: Yes, when you compare 1000+1 with 1000+10 there is not too much difference.

What this means, of course, is that these results may vary by the codec. If you’re encoding with H.264, which is much faster, cascading scaling might increase throughput. If you’re encoding with AV1 or VVC, almost certainly not.

Given that the T408 transcoder is multiple times faster than real-time, I’m now wondering if cascaded scaling might increase throughput when producing with the T408. You probably wouldn’t attempt this approach if quality suffered, but what if cascaded scaling improved quality? Sound far-fetched? Read on.

Quality Results

Table 2 shows the combined VMAF results for the two clips. Read this by choosing a row and moving from column to column. As you would suspect, green is good, and red is bad. So, for the Default row, that technique produces the same quality as Cascade – Fast Bilinear with a bitrate reduction of 18.55%. However, you’d have to boost the bitrate by 12.89% and 11.24%, respectively, to produce the same quality as Cascade – Lanczos and  Video Filter – Lanczos.

Table 2. BD-Rate comparisons for the four techniques using the VMAF metric.

From a quality perspective, the Cascade approach combined with the fast bilinear algorithm was the clear loser, particularly compared to either method using the Lanczos algorithm. Even if there was a substantial performance increase, which there wasn’t, it’s hard to see a relevant use case for this algorithm.

The most interesting takeaway was that cascading scaling with the Lanczos algorithm produced the best results, slightly higher than using a video filter with Lanczos. The same pattern emerged for PSNR, where Cascade – Lanc was green in all three columns, indicating the highest-quality approach. 

Table 3. BD-Rate comparisons for the four techniques using the PSNR metric.

Ditto for SSIM.

Table 4. BD-Rate comparisons for the four techniques using the SSIM metric.

The cascading approach delivering better quality than the video filter was an anomaly. Not surprisingly, the engineer noted:

Engineer: It is odd that cascading with Lanczos has better quality than direct scaling. I’m not sure why that would be.

       Me: Makes absolutely no sense. Is anything funky in the two command strings?

Engineer: Nothing obvious but I can look some more.

Later analysis yielded no epiphanies. Perhaps they can come from a reader.

The Net Net

First, the normal caveats; your mileage may vary by codec and content. My takeaways are:

  • Try cascading scaling with Lanczos with the T408.
  • For software encodes, never use -s again.
  • Use cascade or the simpler video filter approach. 
  • With most software-based encoders, faster scaling methods may not deliver performance increases but could degrade quality.

Further, as we all know, there are several, if not dozens, additional approaches to scaling; if you have meaningful results that prove one is substantially better, please share them with me via THIS email.

Finally, taking a macro view, it’s worth remembering that a $12,000+ workstation could only produce 25 fps when producing a live 4K ladder to HEVC using x265’s ultrafast preset. Sure, there are faster software encoders available. Still, hardware encoding is the best answer for affordable live 4K transcoding from both an OPEX and CAPEX perspective.

Command Strings:

Default:

c:\ffmpeg\bin\ffmpeg -y -i  football_4K30_all_264_short.mp4 -y ^

-c:v libx265 -an -force_key_frames expr:gte^(t,n_forced*2^) -tune psnr -b:v 12M -maxrate 12M  -bufsize 24M -preset ultrafast  -x265-params open-gop=0:b-adapt=0:aq-mode=0:rc-lookahead=16 Fball_x265_4K_8_bit_12M_default.mp4 ^

-s 2560x1440 -c:v libx265 -an -force_key_frames expr:gte^(t,n_forced*2^) -tune psnr -b:v 7M -maxrate 7M  -bufsize 14M -preset ultrafast  -x265-params open-gop=0:b-adapt=0:aq-mode=0:rc-lookahead=16 Fball_x265_2K_8_bit_7M_default.mp4  ^

-s 1920x1080 -c:v libx265 -an -force_key_frames expr:gte^(t,n_forced*2^) -tune psnr -b:v 3.5M -maxrate 3.5M  -bufsize 7M -preset ultrafast  -x265-params open-gop=0:b-adapt=0:aq-mode=0:rc-lookahead=16 Fball_x265_1080p_8_bit_3_5M_default.mp4 ^

-s 1920x1080 -c:v libx265 -an -force_key_frames expr:gte^(t,n_forced*2^) -tune psnr -b:v 1.8M -maxrate 1.8M  -bufsize 3.6M -preset ultrafast  -x265-params open-gop=0:b-adapt=0:aq-mode=0:rc-lookahead=16 Fball_x265_1080p_1_8M_default.mp4 ^

-s 1280x720  -c:v libx265 -an  -force_key_frames expr:gte^(t,n_forced*2^) -tune psnr -b:v 1M -maxrate 1M  -bufsize 2M -preset ultrafast  -x265-params open-gop=0:b-adapt=0:aq-mode=0:rc-lookahead=16 Fball_x265_720p_1M_default.mp4 ^

-s 640x360  -c:v libx265 -force_key_frames expr:gte^(t,n_forced*2^) -tune psnr -b:v .5M -maxrate .5M  -bufsize 1M -preset ultrafast  -x265-params open-gop=0:b-adapt=0:aq-mode=0:rc-lookahead=16 -report Fball_x265_360p_500K_default.mp4

Cascade – Fast Bilinear

c:\ffmpeg\bin\ffmpeg -y -i  football_4K30_all_264_short.mp4 -y ^

-filter_complex "[0:v]split=2[out4k][in4k];[in4k]scale=2560:1440:flags=fast_bilinear,split=2[out1440p][in1440p];[in1440p]scale=1920:1080:flags=fast_bilinear,split=3[out1080p][out1080p2][in1080p];[in1080p]scale=1280:720:flags=fast_bilinear,split=2[out720p][in720p];[in720p]scale=640:360:flags=fast_bilinear[out360p]" ^

-map [out4k] -c:v libx265 -an -force_key_frames expr:gte^(t,n_forced*2^) -tune psnr -b:v 12M -maxrate 12M  -bufsize 24M -preset ultrafast  -x265-params open-gop=0:b-adapt=0:aq-mode=0:rc-lookahead=16 Fball_x265_4K_8_bit_cascade_12M_fast_bi.mp4 ^

-map [out1440p] -c:v libx265 -an -force_key_frames expr:gte^(t,n_forced*2^) -tune psnr -b:v 7M -maxrate 7M  -bufsize 14M -preset ultrafast  -x265-params open-gop=0:b-adapt=0:aq-mode=0:rc-lookahead=16 Fball_x265_2K_8_bit_cascade_7M_fast_bi.mp4  ^

-map [out1080p] -c:v libx265 -an -force_key_frames expr:gte^(t,n_forced*2^) -tune psnr -b:v 3.5M -maxrate 3.5M  -bufsize 7M -preset ultrafast  -x265-params open-gop=0:b-adapt=0:aq-mode=0:rc-lookahead=16 Fball_x265_1080p_8_bit_cascade_3_5M_fast_bi.mp4 ^

-map [out1080p2] -c:v libx265 -an -force_key_frames expr:gte^(t,n_forced*2^) -tune psnr -b:v 1.8M -maxrate 1.8M  -bufsize 3.6M -preset ultrafast  -x265-params open-gop=0:b-adapt=0:aq-mode=0:rc-lookahead=16 Fball_x265_1080p_8_bit_cascade_1_8M_fast_bi.mp4 ^

-map [out720p]  -c:v libx265 -an  -force_key_frames expr:gte^(t,n_forced*2^) -tune psnr -b:v 1M -maxrate 1M  -bufsize 2M -preset ultrafast  -x265-params open-gop=0:b-adapt=0:aq-mode=0:rc-lookahead=16 Fball_x265_720p_8_bit_cascade_1M_fast_bi.mp4 ^

-map [out360p]  -c:v libx265 -force_key_frames expr:gte^(t,n_forced*2^) -tune psnr -b:v .5M -maxrate .5M  -bufsize 1M -preset ultrafast  -x265-params open-gop=0:b-adapt=0:aq-mode=0:rc-lookahead=16 -report Fball_x265_360p_8_bit_cascade_500K_fast_bi.mp4

Cascade – Lanczos

c:\ffmpeg\bin\ffmpeg -y -i  football_4K30_all_264_short.mp4 -y ^

-filter_complex "[0:v]split=2[out4k][in4k];[in4k]scale=2560:1440:flags=lanczos,split=2[out1440p][in1440p];[in1440p]scale=1920:1080:flags=lanczos,split=3[out1080p][out1080p2][in1080p];[in1080p]scale=1280:720:flags=lanczos,split=2[out720p][in720p];[in720p]scale=640:360:flags=lanczos[out360p]" ^

-map [out4k] -c:v libx265 -an -force_key_frames expr:gte^(t,n_forced*2^) -tune psnr -b:v 12M -maxrate 12M  -bufsize 24M -preset ultrafast  -x265-params open-gop=0:b-adapt=0:aq-mode=0:rc-lookahead=16 Fball_x265_4K_8_bit_cascade_12M_lanc.mp4 ^

-map [out1440p] -c:v libx265 -an -force_key_frames expr:gte^(t,n_forced*2^) -tune psnr -b:v 7M -maxrate 7M  -bufsize 14M -preset ultrafast  -x265-params open-gop=0:b-adapt=0:aq-mode=0:rc-lookahead=16 Fball_x265_2K_8_bit_cascade_7M_lanc.mp4  ^

-map [out1080p] -c:v libx265 -an -force_key_frames expr:gte^(t,n_forced*2^) -tune psnr -b:v 3.5M -maxrate 3.5M  -bufsize 7M -preset ultrafast  -x265-params open-gop=0:b-adapt=0:aq-mode=0:rc-lookahead=16 Fball_x265_1080p_8_bit_cascade_3_5M_lanc.mp4 ^

-map [out1080p2] -c:v libx265 -an -force_key_frames expr:gte^(t,n_forced*2^) -tune psnr -b:v 1.8M -maxrate 1.8M  -bufsize 3.6M -preset ultrafast  -x265-params open-gop=0:b-adapt=0:aq-mode=0:rc-lookahead=16 Fball_x265_1080p_8_bit_cascade_1_8M_lanc.mp4 ^

-map [out720p]  -c:v libx265 -an  -force_key_frames expr:gte^(t,n_forced*2^) -tune psnr -b:v 1M -maxrate 1M  -bufsize 2M -preset ultrafast  -x265-params open-gop=0:b-adapt=0:aq-mode=0:rc-lookahead=16 Fball_x265_720p_8_bit_cascade_1M_lanc.mp4 ^

-map [out360p]  -c:v libx265 -force_key_frames expr:gte^(t,n_forced*2^) -tune psnr -b:v .5M -maxrate .5M  -bufsize 1M -preset ultrafast  -x265-params open-gop=0:b-adapt=0:aq-mode=0:rc-lookahead=16 -report Fball_x265_360p_cascade_500K_lanc.mp4

Video Filter – Lanczos

c:\ffmpeg\bin\ffmpeg -y -i  football_4K30_all_264_short.mp4 -y ^

-c:v libx265 -an -force_key_frames expr:gte^(t,n_forced*2^) -tune psnr -b:v 12M -maxrate 12M  -bufsize 24M -preset ultrafast  -x265-params open-gop=0:b-adapt=0:aq-mode=0:rc-lookahead=16 Fball_x265_4K_12M_filter_lanc.mp4 ^

-vf scale=2560x1440 -sws_flags lanczos -c:v libx265 -an -force_key_frames expr:gte^(t,n_forced*2^) -tune psnr -b:v 7M -maxrate 7M  -bufsize 14M -preset ultrafast  -x265-params open-gop=0:b-adapt=0:aq-mode=0:rc-lookahead=16 Fball_x265_2K_7M_filter_lanc.mp4  ^

-vf scale=1920x1080 -sws_flags lanczos  -c:v libx265 -an -force_key_frames expr:gte^(t,n_forced*2^) -tune psnr -b:v 3.5M -maxrate 3.5M  -bufsize 7M -preset ultrafast  -x265-params open-gop=0:b-adapt=0:aq-mode=0:rc-lookahead=16 Fball_x265_1080p_3_5M_filter_lanc.mp4 ^

-vf scale=1920x1080 -sws_flags lanczos  -c:v libx265 -an -force_key_frames expr:gte^(t,n_forced*2^) -tune psnr -b:v 1.8M -maxrate 1.8M  -bufsize 3.6M -preset ultrafast  -x265-params open-gop=0:b-adapt=0:aq-mode=0:rc-lookahead=16 Fball_x265_1080p_1_8M_filter_lanc.mp4 ^

-vf scale=1280x720 -sws_flags lanczos -c:v libx265 -an  -force_key_frames expr:gte^(t,n_forced*2^) -tune psnr -b:v 1M -maxrate 1M  -bufsize 2M -preset ultrafast  -x265-params open-gop=0:b-adapt=0:aq-mode=0:rc-lookahead=16 Fball_x265_720p_1M_filter_lanc.mp4 ^

-vf scale=640x360 -sws_flags lanczos  -c:v libx265 -force_key_frames expr:gte^(t,n_forced*2^) -tune psnr -b:v .5M -maxrate .5M  -bufsize 1M -preset ultrafast  -x265-params open-gop=0:b-adapt=0:aq-mode=0:rc-lookahead=16 -report Fball_x265_360p_500K_filter_lanc.mp4

Is power consumption your company’s priority?


Power consumption is a priority for NETINT customers and a passion for NETINT engineers and technicians. Matthew Ariho, a system engineer in SoC Engineering at NETINT, recently answered some questions about:

  • How to test power consumption
  • Which computer components draw the most power
  • Why using older computers is bad for your power bills, and
  • The best way for video-centric data centers to reduce power consumption.

What are the different ways to test power consumption (and cost)?

Matthew Ariho

There are software and hardware-based solutions to this problem. I use one of each as a means of confirming any results.

One software tool is the IPMItool Linux package, which provides a simple command-line interface to IPMI-enabled devices through a Linux kernel driver. This tool polls the instantaneous, average, peak, and minimum power draw of the system over a sampling period.
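If you want to log readings over time, a small script can poll the standard ipmitool dcmi power reading command. This is only a sketch; it assumes ipmitool is installed and that the BMC supports DCMI, and the exact output fields vary by platform.

# Sketch of polling power draw via IPMItool. Assumes ipmitool is installed and the
# BMC supports DCMI power readings; output field names vary by platform.
import re
import subprocess
import time

def read_power_watts():
    out = subprocess.run(["ipmitool", "dcmi", "power", "reading"],
                         capture_output=True, text=True, check=True).stdout
    match = re.search(r"Instantaneous power reading:\s+(\d+)\s+Watts", out)
    return int(match.group(1)) if match else None

samples = []
for _ in range(60):                      # sample once per second for a minute
    watts = read_power_watts()
    if watts is not None:
        samples.append(watts)
    time.sleep(1)

if samples:
    print(f"min {min(samples)} W, avg {sum(samples)/len(samples):.0f} W, max {max(samples)} W")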


On the hardware side of things, you can use various forms of power meters; a Kill-A-Watt meter and a metered 208VAC power bar are examples of such devices available in our lab.

What are their pros and cons (and accuracy)?

Matthew Ariho

IPMItool is great because it provides a lot of information and is fairly simple to set up and use. There is a question of reliability because it is software-based; it depends on readings whose source I’m not familiar with.

The meters (like the Kill-A-Watt), while also simple to use, do not have any logging capabilities, which makes values like average or steady-state power draw difficult to capture. Both methods have a resolution of 1W, which is not ideal but more than sufficient for our use cases.

What activities do you run when you test power consumption?

Matthew Ariho

We run multiple instances that mimic streaming workloads, but only to the point that each of those instances is performing up to par with our standards (for example, 30 fps).

What’s the range of power consumption you’ve seen?

Matthew Ariho

I’ve seen reports of power consumption of up to 450 watts but have personally never tested a unit that drew that much. Typically, without any load on the T408 devices, the server’s power consumption hovers around 150W, which increases to 210 to 220W during peak periods.

What’s the difference between Power Supply rating and actual power consumption (and are they related)?

Matthew Ariho

Power supplies take in 120VAC or 208VAC and convert to various DC voltages (12V, 5V, 3.3V) to power different devices in a computer. This conversion process inherently has several inefficiencies. The extent of these inefficiencies depends on the make of the power supply and the quality of components used.

Power supplies carry an efficiency rating that certifies how efficiently the power supply will operate at different loads. Because of these conversion losses, the power measured at the wall will always be greater than the DC power actually delivered to the components inside the computer.
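A quick worked example shows the relationship. The efficiency figures below are assumptions in the typical 80 PLUS range, not ratings for any specific power supply.

# Worked example of wall power vs. DC load. The efficiency figures are assumptions
# in the typical 80 PLUS range, not ratings for a specific power supply.

dc_load_watts = 300                       # power actually consumed by the components

for label, efficiency in [("80 PLUS (assumed ~82%)", 0.82),
                          ("80 PLUS Bronze (assumed ~85%)", 0.85),
                          ("80 PLUS Gold (assumed ~90%)", 0.90)]:
    wall_watts = dc_load_watts / efficiency
    print(f"{label}: {wall_watts:.0f} W at the wall, "
          f"{wall_watts - dc_load_watts:.0f} W lost as heat")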

What are the hidden sources of excessive power that most people don’t know about?

Matthew Ariho

The operating system of a computer can consume a lot of power performing background tasks, though this has become less of a problem with more efficient CPUs on the market. Another source of excessive power draw is bloatware: unnecessary programs that run in the background.

What distinguishes a power-hungry computer from an efficient one – what should the reader look for?

Matthew Ariho

The power supply rating is something to watch. Small variations in the power supply rating make significant differences in efficiency. The difference between a PSU rated at 80 PLUS and a PSU rated at 80 PLUS Bronze is about 2% to 5%, depending on the load. This gap only grows with better-rated PSUs.

Other factors include the components of the computer itself. Newer devices (CPUs, GPUs, and motherboards) have delivered significant generational improvements in efficiency. A top-of-the-line computer from three years ago simply cannot compete with some mid-range computers in terms of either power efficiency or performance. So, while sourcing older but cheaper components may have been a good decision in the past, nowadays it’s not as clear-cut.

Which components draw the most power?

Matthew Ariho

CPUs and GPUs. Even consumer CPUs can draw over 200W sustained. GPUs on the lower end consume around 150W, while recent high-end models draw over 400W.

How does the number of cores in a computer impact power usage?

Matthew Ariho

I’m really not an expert on server components, and it is hard to say without specific examples; there are too many options to identify a proper trend. There are AMD 64-core server CPUs that pull about 250 to 270W, and 12- to 38-core Intel server CPUs that do about the same. Ultimately, architectural advantages and features determine performance and efficiency when comparing CPUs across manufacturers, or even CPUs from the same manufacturer.

You can't manage what you don't measure.

One famous quote attributed to Peter Drucker is that you can’t manage what you don’t measure. As power consumption becomes increasingly important, it’s incumbent upon all of us to both measure and manage it.

Insights from the Bitmovin Video Developer Report


The Bitmovin Video Developer Report, now in its 6th edition, is one of the most far-reaching and useful documents available to streaming professionals (and now requires no registration to download). It’s a report that I happily download each December and refer to frequently during the following twelve months.

Like the proverbial elephant, what you find important in the report depends upon your interests. I typically zero in on video codec usage, encoding practices, and the most important problems and opportunities facing streaming developers. As discussed below, this year’s edition has some surprises, like the fact that more respondents are currently working with H.266/VVC than AV1.

Beyond this, the report also tracks details on development frameworks, content distribution, monetization practices, DRM, video analytics, and many other topics. This makes it extraordinarily valuable to anyone needing a finger on the pulse of streaming industry practices.

Let’s start with some details about how Bitmovin compiles the data and then jump to what I found most interesting.

Gathering the Data

Bitmovin collected the data between June and September 2022. A total of 424 respondents from over 80 countries answered the survey. Geographically, EMEA led the charge with 43%, followed by North America (34%), APAC (14%), and Latin America (8%). Regarding job function, 34% of respondents were manager/CEO/VP level, 23% developer/engineer, 14% technical manager, 10% product manager, 9% architect/consultant, 7% in R&D, and 3% in sales and marketing.

A quarter of respondents worked in OTT streaming services, 21% in online video platforms, 15% for broadcasters, 12% for integrators, 7% for publishers, 6% for telcos, 5% for social media sites, with 10% other. In terms of company size, 35% worked in companies with 300+ employees, 17% 101-300, 19% 51 – 100, and 29% 1 – 50. In other words, a very useful cross-section of geography, industry, job function, and company size.

To be clear, the results are not actual data from Bitmovin’s cloud encoding facility, which would be useful in its own right. Rather, the respondents answered questions about their current practices and future plans in each of the listed topics.

Current and Planned Codec Usage

Figure 1 shows current and planned codec usage for live encoding, with current usage in blue and planned usage in red. The numbers exceed 100% (of course) because most respondents use multiple codecs.

It’s always a surprise to see H.264 at less than 100%, but there’s 78% clear as day. Even given the breadth of industries that responded to the survey, it’s tough to imagine any publisher not supporting H.264.

Figure 1. Answers to the question, “Which streaming formats are you using in production for distribution and which ones are you planning to introduce within the next year?”

HEVC was next at 40%, with AV1 fifth at 18%, bracketed by VP8 (19%) and VP9 (17%), presumably more for WebRTC than OTT. These are the codecs most likely to be used to actually publish video in 2022. Other codecs, presumably implemented by infrastructure providers, were H.266/VVC, a surprising third at 19%, and LCEVC and EVC, both at 16%.

Looking ahead, HEVC looks most likely to succeed in 2023, with 43% of respondents planning to implement it, followed by AV1 at 34%, H.264/AVC at 33%, and VVC at 20%. Given that CanIUse lists AV1 support at 73% while VVC isn’t even listed, you’d have to assume that actual AV1 deployments in the near term will dwarf H.266/VVC, but you can’t ignore the interest this standards-based codec is receiving from the industry. VOD encoding tracks these results fairly closely for both current and planned usage.

Video Quality Related Findings

Quality is a constant concern for video professionals, and quality-related data appeared in several questions. In terms of challenges faced by respondents, “finding the root cause of quality issues” ranked fifth with 23%, while “quality of experience” ranked ninth, with 19%.

Interestingly, in response to the question, “For which of the following video use cases do you expect to use machine learning (ML) or artificial intelligence (AI) to improve the video experience for your viewers,” 33% cited “video quality optimization,” which ranked third, while 30% cited “quality of experience (QoE),” which ranked fourth.

With so many respondents looking for futuristic means to improve quality, it was ironic that so many ignored content-aware encoding (CAE), a proven method of improving both quality and quality of experience. Specifically, only 33% of respondents were currently using CAE, with 35% planning to implement it within the next 12 months. If you’re not in either of these camps, consider yourself scolded.

Live Encoding Practices

Lastly, I focused on live encoding practices, finding that 53% of respondents use commercial encoders, which presumably include both hardware and software, while 34% encode via open source, which is all software. What’s interesting is how poorly these choices dovetail with both the most significant challenge faced by respondents and the largest opportunity for innovation they perceive.

Figure 2. Answers to the question, “Where do you encode video?”

Specifically, controlling cost was the most significant challenge in the report, selected by 33% of respondents. On a cost-per-stream basis, considering both CAPEX and OPEX, software encoding is far more expensive than encoding with hardware, particularly ASICs.

The most significant opportunity for innovation reported by respondents was live streaming at scale, again at 33%. In this regard, the same lack of throughput that makes CPU-driven open-source encoding the most expensive solution makes it the least scalable. Simply stated, publishers currently encoding with CPU-driven open-source codecs can help address both their biggest challenge and their most significant opportunity by switching to ASIC-based transcoding.

Figure 3. Responses to the question, “Where do you see the most opportunity for innovation in your service?”

Curious? Download our white paper, How to Slash CAPEX, OPEX, and Carbon Emissions Using the NETINT T408 Video Transcoder here. Or, compute how long it will take to recoup your investment in ASIC-based encoding through reduced power costs via calculators available here.

And don’t forget to download the Bitmovin Video Developer Report, here.

How NETINT enables ASIC upgradeability with Software


ASICs provide tremendous energy efficiency yet suffer from being fixed-function with limited programmability. This was a core engineering challenge that we addressed in the development of the Codensity ASIC family with upgradeable firmware, which can be used for a variety of purposes, including adding new features and improving coding performance and functionality.

To explore these capabilities, we spoke with two members of the NETINT development team, Neil Gunn, who is NETINT’s Video Firmware Tech Lead, and Savio Lam, a firmware engineer. In this short discussion, they describe how firmware allows Codensity video transcoders and VPUs to evolve and improve long after leaving the foundry.

This conversation focuses mainly on our Codensity G4 ASIC; however, the capability to upgrade firmware applies to all of our ASIC platforms, including the Codensity G5.

What do you do with NETINT?

Neil Gunn

I am a firmware architect and also develop the firmware and, to a lesser extent, the host-side software (libxcoder and FFmpeg) for NETINT transcoding ASICs. I started at NETINT in 2018 working on T408 (Codensity G4-based) firmware development. Then, I moved to Quadra (Codensity G5-based) as a software architect and firmware/software developer. I continue to support the T408 in the background.

Savio Lam

I am a firmware engineer working on our video transcoding products.

What did you do on the T408?

Neil Gunn

I implemented a number of video features in the firmware, such as 10-bit transcoding, closed captions, HDR10, HDR10+, HLG10, Dolby Vision, HRD, Region of Interest, encoder parameter change, etc. I also worked on bug fixes and customer issues.

Savio Lam

I worked on the system design and integration. I mainly developed code that controls how video data comes in and out of our transcoder in the most efficient and reliable way.

What is firmware in an ASIC?

Neil Gunn

The firmware is software that runs on embedded CPUs within the ASIC. The firmware provides a high-level interface to the low-level encoding and decoding hardware. The firmware does a lot of the high-level bitstream processing, such as creating VPS, SPS, and PPS headers, and SEI processing, leaving the ASIC hardware to do the low-level number crunching. Functions that consume a lot of processing and are likely not to change are implemented in hardware.

Savio Lam

To add to what Neil has already described, the firmware in our T408 ASIC manages several significant functions. For example, it comprises code responsible for the NVMe protocol, which allows us to efficiently receive and return up to 8GB/s of video input and output data. To properly consume and process the video data, the firmware sets up and schedules tasks to the appropriate hardware blocks.

Our firmware is also the brain that oversees the bigger picture part of the rate control. In this role, it’s part of a feedback loop that inputs subpicture data from low-level hardware blocks and uses that data to make better decisions that improve picture quality.

To sum up, the firmware is the brain that controls all the hardware blocks in the ASIC and gives instructions to each of them to perform their tasks as efficiently as possible.

How is firmware different from the gates burned into the chip?

Neil Gunn

Firmware, like all software, can be changed, unlike the actual gates in a chip. It’s called firmware because it’s a little harder to change than software. Firmware is stored in flash memory, which can be reprogrammed through an upgrade process. A T408 firmware release typically consists of new host-side software and firmware that must be version-matched for proper operation. Software provided to our customers with the release simplifies the upgrade for one or more T408s in a system.

Savio Lam

There is logic in our T408 ASIC that could have been designed as part of the hardware for better performance. However, that would significantly limit our ability to add and improve certain product features to suit different customer needs. We believe we have found the right balance in deciding what should be implemented in firmware versus hardware.

What functions can you adjust and/or improve within firmware?

Neil Gunn

Things like the codec headers, SEIs, and rate control can, to a certain extent, be adjusted and/or improved within the firmware. Some lower-level rate control features are fixed in the hardware. Lower-level parts of the encoding standard are fixed in the hardware as these require a lot of processing and are unlikely to change.

Savio Lam

As Neil said, we are quite flexible when it comes to adding or improving support for different video metadata. And as we both explained earlier, since the firmware is also part of the brain that operates the picture rate control for encoding, we can continue to improve quality to a certain degree post-ASIC development.

Do you have any examples of significant improvements with the T408?

Neil Gunn

We significantly reduced codec delay on both the encoder and decoder. Our low delay mode removes all frame buffering and encodes and decodes a single frame at a time. Our encoder uses a low delay GOP and sets flags in the bitstream appropriately so that another decoder knows that it doesn’t need to add any delay while decoding.

Savio Lam

Based on feedback from different customers, we have made several improvements and fixes to our rate control through firmware updates, which improved or resolved some of the video quality-related problems they encountered.

When you hear people say ASICs are obsolete the day they come out of the foundry, what’s your response?

Neil Gunn

It’s not true. It is true that the hardware is fixed in an ASIC. Still, the functions implemented in the hardware are typically the lower-level parts of a video codec standard that do not change over time, so the hardware does not need to be updated. The higher-level parts of the video codecs are in firmware and driver software and can still be changed. For example, the T408 encoder hardware is designed for H.264 and H.265. We cannot add new codecs to the T408, but we can add new features to the existing codecs.

Savio Lam

There is a fine balance between what needs to be implemented in hardware for performance and what needs to be implemented in the firmware for flexibility (programmability). We think we struck the right balance with the Codensity G4, which is what makes it a great ASIC.


Computing Payback Period on T408s


One of the most power-hungry processes performed in data centers is software-based live transcoding, which can be performed much more efficiently with ASIC-based transcoders. With power costs soaring and carbon emissions an ever-increasing concern, data centers that perform high-volume live transcoding should strongly consider switching to ASIC-based transcoders like the NETINT T408. Computing the Payback Period is easy with this calculator.

To assist in this transition, NETINT recently published two online calculators that measure the cost savings and payback period for replacing software-based transcoders with T408s. This article describes how to use these calculators and shows that data centers can recover their investment in T408 transcoders in just a few months, even less if you can repurpose servers previously used for encoding for other uses. Most of the data shown are from a white paper that you can access here.

About the T408

Briefly, NETINT designs, develops, and sells ASIC-powered transcoders like the T408, which is a video transcoder in a U.2 form factor containing a single ASIC. Operating in x86 and ARM-based servers, T408 transcoders output H.264 or HEVC at up to 4Kp60 or 4x 1080p60 streams per T408 module and draw only 7 watts.

Simply stated, a single T408 can produce roughly the same output as a 32-core workstation encoding in software, a machine that draws anywhere from 250 to 500 watts of power. You can install up to 24 T408s in a single workstation, which essentially replaces 20 to 24 standalone encoding workstations, slashing power costs and the associated carbon emissions.
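A back-of-the-envelope calculation shows why this matters for the power bill. The sketch below reuses the figures quoted above; the power price, 24/7 duty cycle, and host-server draw are assumptions for illustration only.

# Rough annual savings from replacing software encoding servers with T408s, using
# the figures above. Power price, duty cycle, and host draw are assumptions.

servers_replaced = 20
server_watts = 350            # midpoint of the 250-500 W range quoted above
t408_count = 24
t408_watts = 7
host_watts = 200              # assumed draw of the single host holding the T408s

before_kw = servers_replaced * server_watts / 1000
after_kw = (host_watts + t408_count * t408_watts) / 1000
hours_per_year = 24 * 365
saved_kwh = (before_kw - after_kw) * hours_per_year

price_per_kwh = 0.25          # assumed rate, matching the figure used later in this article
print(f"~{saved_kwh:,.0f} kWh saved per year, roughly {saved_kwh * price_per_kwh:,.0f} "
      "per year in power costs at 0.25 per kWh")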

In a nutshell, these savings are why large video publishers like YouTube and Meta are switching to ASICs. By deploying NETINT’s T408s, you can achieve the same benefits without the associated R&D and manufacturing costs. The new calculators will help you quantify the savings.

Determining the Required Number of T408s

The first calculator, available here, computes the number of T408s required for your production. There are two steps; first, enter the rungs of your encoding ladder into the table as shown. If you don’t know the details of your ladder, you can click the Insert Sample HD or 4K Ladder buttons to insert sample ladders.

After entering your ladder information, insert the number of encoding ladders that you need to produce simultaneously, which in the table is 100. Then press the Compute button (not shown in the Figure but obvious on the calculator).

Calculator 1: Computing the number of required T408 transcoders.

This yields a total of 41 T408s. For perspective, the calculator should be very accurate for streams that don’t require scaling, like 1080p inputs output to 1080p. However, while the T408 decodes and transcodes in hardware, it relies on the host CPU for scaling. If you’re processing full encoding ladders, as we are in this example, throughput will be impacted by the power of the host CPU.

As designed, the calculator assumes that your T408 server is driven by a 32-core host CPU. On an 8-16 core system, expect perhaps 5 – 10% lower throughput. On a 64-core system, throughput could increase by 15 – 20%. Accordingly, please consider the output from this calculator as a good rough estimate accurate to about plus or minus 20%.

To compute the payback period, click the Compute Payback Period button shown in Calculator 1. To restart the calculation, refresh your browser.

Computing Payback Period

Computing the payback period requires significantly more information, which is summarized in the following graphic.

Calculator 2: Information needed to compute the payback period.

Step by step

  1. Choose your currency in the drop-down list.

  2. Enter your current cost per kWh. The $0.25/kWh figure is the approximate UK cost as of March 2022 from this source, which you can also access by clicking the information button to the right of this field. This information button also contains a link to US power costs here.

  3. Enter the number of encoders currently transcoding your live streams. In the referenced white paper, 34 was the number of required servers needed to produce 100 H.264 encoding ladders.

  4. Enter the power consumption per encoding server. The 289 watts shown were the actual power consumption measured for the referenced white paper. If you don’t know your power consumption, click the Info button for some suggested values.

  5. Enter the number of encoding servers that can be repurposed. The T408s will dramatically improve encoding density; for example, in the white paper, it took 34 servers transcoding with software to produce the same streams as five servers with ten T408s each. Since you won’t need as many encoding servers, you can shift them to other applications, which has an immediate economic benefit. If you won’t be able to repurpose any existing servers for some reason, enter 0 here.

  6. Enter the current cost of the encoding servers that can be repurposed. This number will be used to compute the economic benefit of repurposing servers for other functions rather than buying new servers for those functions. You should use the current replacement cost for these servers rather than the original price.

  7. Enter the number of T408s required. If you start with the first calculator, this number will be auto-filled.

  8. Enter your cost for the T408s. $400 is the retail price of the T408 in low quantities. To request pricing for higher volumes, please check with a NETINT sales representative. You can arrange a meeting HERE. 

  9. Enter the power consumption for each T408. The T408 draws 7 watts of power, which should be auto-filled.

  10. Enter the number of computers needed to host the T408s. You can deploy up to ten T408s in a 1RU server and up to 24 T408s in a 2RU server. We assumed that you would deploy using the first option (10 T408s in a single 1RU) and auto-filled this entry with that calculation. If the actual number is different, enter the number of computers you anticipate buying for the T408s.

  11. Enter the price for computers purchased to run T408s (USD). If you need to purchase new computers to house the T408, enter the cost here. Note that since the T408 decodes incoming H.264 and HEVC streams and transcodes on-board to those formats, most use cases work fine on workstations with 8-16 cores, though you’ll need a U.2 expansion chassis to house the T408s. Check this link for more information about choosing a server to house the T408s. We assumed $3,000 because that was the cost for the server used in the white paper.

    If you’re repurposing existing hardware, enter the current cost, similar to number 6.

 

  12. Enter the power consumption for the servers (in watts). As mentioned, you won’t need a very powerful computer to run the T408s, and CPU utilization and power consumption should be modest because the T408s are doing most of the work. This number is the base power consumption of the computer itself; the power utilized by the T408s will be added separately.

When you’ve entered all the data, press the Calculate button.
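For readers who want to see the arithmetic rather than the web form, here is a minimal sketch of the comprehensive calculation in Python. The field names and formulas are my simplification of the steps described above, and several of the example inputs are assumptions, so the online calculator remains the authoritative tool.

# Minimal sketch of the payback arithmetic described above. Field names and formulas
# are a simplification; several example inputs below are assumptions.

def payback_months(cost_per_kwh, old_servers, old_server_watts, repurposed_servers,
                   repurposed_server_value, t408_count, t408_price, t408_watts,
                   new_servers, new_server_price, new_server_watts):
    hours_per_month = 24 * 365 / 12
    old_kw = old_servers * old_server_watts / 1000
    new_kw = (new_servers * new_server_watts + t408_count * t408_watts) / 1000
    monthly_power_savings = (old_kw - new_kw) * hours_per_month * cost_per_kwh

    capex = t408_count * t408_price + new_servers * new_server_price
    capex -= repurposed_servers * repurposed_server_value   # value of servers freed for other roles
    if capex <= 0:
        return 0.0                                          # ahead from day one
    return capex / monthly_power_savings

# Illustrative inputs: server counts and wattages follow the article's example;
# repurposed-server value and the new hosts' draw are assumptions.
months = payback_months(cost_per_kwh=0.25, old_servers=34, old_server_watts=289,
                        repurposed_servers=29, repurposed_server_value=5000,
                        t408_count=50, t408_price=400, t408_watts=7,
                        new_servers=5, new_server_price=3000, new_server_watts=150)
# With these inputs the repurposed-server value exceeds the new CAPEX, so payback is immediate.
print(f"Payback period: {months:.1f} months")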

Interpreting the Results

The calculator computes the payback period under three assumptions:

  • Simple: Payback Period on T408 Purchases
  • Simple: Payback Period on T408 + New Computers
  • Comprehensive: Consider all costs

Figure 3. Simple payback on T408 purchases.

This result divides the cost of the T408 purchases by the monthly savings and shows a payback period of around 11 months. That said, if five servers with T408s essentially replaced 34 servers, unless you’re discarding the 29 servers, the third result is probably a more accurate reflection of the actual economic impact.

Figure 4. Simple: Payback Period on T408 + New Computers

This result includes the cost of the servers necessary to run the T408s, which extends the payback period to about 20.5 months. Again, however, if you’re able to allocate existing encoding servers into other roles, the third calculation is a more accurate reflection.

Figure 5. Comprehensive: consider all costs.

This result incorporates all economic factors. In this case, the value of the repurposed computers ($145,000) exceeds the costs of the T408s and the computers necessary to house them ($103,600), so you’re ahead the day you make the switch.

However you run the numbers, data centers driving high-volume live transcoding operations will find that ASIC-based transcoders will pay for themselves in a matter of months. If power costs keep rising, the payback period will obviously shrink even further.

2022-Opportunities and Challenges for the Streaming Video Industry


As 2022 comes to a close, those in the streaming video industry will remember it as a turbulent year marked by new opportunities, including the emergence of new video platforms and services.

2022 started off with Meta’s futuristic vision of the internet known as the Metaverse. The Metaverse can be described as a combination of virtual reality, augmented reality, and video where users interact within a digital universe. The Metaverse continues to evolve with the trend toward unique, individual, one-to-one video streaming experiences, in contrast to the one-to-many video streaming services that are commonplace today.

Recent surveys show that two-thirds of consumers plan to cut back on streaming subscriptions due to rising costs and diminishing discretionary income. With consumers becoming more value-conscious and price-sensitive, Netflix and other platforms have introduced innovative new subscription models. Netflix’s offering, in addition to SVOD (Subscription Video on Demand), now includes an ad-based tier, AVOD (Advertising Video on Demand).

Netflix shows the way

This new ad-based tier targets the most price-sensitive customers, and AVOD growth is projected to outpace SVOD growth by 3x in 2023. Netflix could potentially earn over $4B in advertising revenue, making it the second-largest ad-supported platform after YouTube. This year also saw Netflix make big moves into mobile cloud gaming with the purchase of its sixth gaming studio. Adding gaming to its product portfolio serves at least two purposes: it expands the number of platforms that can access its game titles, and it provides another service that helps retain existing subscribers.

These new services and platforms are just a small sample of the continued growth in streaming video, where business opportunities abound for platforms willing to innovate and take risks.

Stop data center expansion

The new streaming video landscape requires platforms to deliver innovative services to highly cost-sensitive customers in a regulatory environment that discourages data center expansion. To prosper in 2023 and beyond, video platforms must address several key issues as they add services and subscribers:

  • Controlling data center sprawl – new services and extra capacity can no longer be contingent on the creation of new and larger data centers.
  • Controlling OPEX and CAPEX – in the current global economic climate, costs must be contained to keep prices down and drive subscriber growth. In addition, given today’s economic uncertainty, access to financing and capital to fund data center expansion cannot be assumed.
  • Energy consumption and environmental impact are intrinsically linked, and both must be reduced. Governments are now enacting environmental regulations, and platforms that do not adopt green policies do so at their own peril.

Application Specific Integrated Circuit

For a vision of what needs to be done to address these issues, one need only look to the recent past and YouTube’s Argos VCU (Video Coding Unit). Argos is YouTube’s in-house designed ASIC (Application Specific Integrated Circuit) encoder that, among other objectives, enabled YouTube to reduce its encoding costs, server footprint, and power consumption. YouTube encodes over 500 hours of content uploaded every minute.

To stay ahead of this workload, Google designed its own ASIC, which enabled it to eliminate millions of Intel CPUs. Obviously, not everyone has an in-house ASIC development team, but whether you are a hyperscale, commercial, institutional, or government video platform, NETINT’s Codensity ASIC-powered video processing units are available to you.

To enable faster adoption, NETINT partnered with Supermicro, the global leader in green server solutions. The NETINT Video Transcoding Server is based on a 1RU Supermicro server powered with 10 NETINT T408 ASIC-based video transcoder modules. The NETINT Video Transcoding Server, with its ASIC encoding engine, enables a 20x reduction in operational costs compared to CPU/software-based encoding. The massive savings in operational costs offset the CAPEX associated with upgrading to the NETINT video transcoding server.

Supermicro and T408 Server Bundle

In addition to the extraordinary cost savings, ASIC encoding reduces the server footprint by a factor of 25x or more, with a corresponding reduction in power consumption and, as a bonus, a 25x reduction in carbon emissions. This enables video platforms to expand encoding capacity without increasing their server or carbon footprints, avoiding potential regulatory setbacks.

In need of environmentally friendly technologies

2022 saw the emergence of many new opportunities with the launch of innovative new video services and platforms. To ensure the business success of these services, in light of global economic uncertainty and geopolitical unrest, video platforms must rethink how these services are deployed and embrace new cost-efficient, environmentally friendly technologies.

Introduction to AI Processing on Quadra

The intersection of video processing and artificial intelligence (AI) delivers exciting new functionality, from real-time quality enhancement for video publishers to object detection and optical character recognition for security applications. One key feature of NETINT’s Quadra Video Processing Units is a pair of onboard Neural Processing Units (NPUs). Combined with Quadra’s integrated decoding, scaling, and transcoding hardware, this creates an integrated AI and video processing architecture that requires minimal interaction from the host CPU. As you’ll learn in this post, this architecture makes Quadra the ideal platform for executing video-related AI applications.

This post introduces the reader to what AI is, how it works, and how you deploy AI applications on NETINT Quadra. Along the way, we’ll explore one Quadra-supported AI application, Region of Interest (ROI) encoding.

About AI

Let’s start by defining some terms and concepts. Artificial intelligence refers to a program that can sense, reason, act, and adapt. One AI subset that’s a bit easier to grasp is called machine learning, which refers to algorithms whose performance improves as they are exposed to more data over time.

Machine learning involves the five steps shown in the figure below. Let’s assume we’re building an application that can identify dogs in a video stream. The first step is to prepare your data. You might start with 100 pictures of dogs and then extract features (mathematical representations of the characteristics that identify them as dogs): four legs, whiskers, two ears, two eyes, and a tail. So far, so good.

Figure 1. The high-level AI workflow (from Escon Info Systems)

To train the model, you apply your dog-finding algorithm to a picture database of 1,000 animals, only to find that rats, cats, possums, and small ponies are also identified as dogs. As you evaluate and further train the model, you extract new features from all the other animals that disqualify them from being a dog, along with more dog-like features that help identify true canines. This is the “machine learning” that improves the algorithm.

As you train and evaluate your model, at some point it achieves the desired accuracy rate and it’s ready to deploy.
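The workflow above can be reduced to a few lines of code. The toy sketch below uses scikit-learn with completely made-up feature vectors (legs, ears, tail length); it is meant only to illustrate the prepare/train/evaluate loop, not any model that ships with Quadra.

    # Toy machine-learning loop - the features and labels are invented for illustration
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # Prepare data: each sample is [legs, ears, tail_length_cm]
    X = [[4, 2, 30], [4, 2, 25], [4, 2, 8], [2, 2, 0], [4, 2, 45], [4, 2, 10]]
    y = ["dog", "dog", "cat", "bird", "pony", "cat"]

    # Hold some samples back so we can evaluate the model on data it has not seen
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)

    # Train, then evaluate; if accuracy is too low, gather more data or
    # better features and retrain
    model = DecisionTreeClassifier().fit(X_train, y_train)
    print("accuracy:", model.score(X_test, y_test))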

The NETINT AI Tool Chain

Then it’s time to run the model. Here, you export the model for deployment on an AI-capable hardware platform like the NETINT Quadra. What makes Quadra ideal for video-related AI applications is the power of the Neural Processing Units (NPUs) and the proximity of the video to the NPUs. That is, since the video is processed entirely on Quadra, there are no transfers to a CPU or GPU, which minimizes latency and enables faster performance. More on this below.

Figure 2 shows the NETINT AI Toolchain workflow for creating and running models on Quadra. On the left are third-party tools for creating and training AI-related models. Once these models are complete, you use the free NETINT AI Toolkit to input the models and translate, export, and run them on the Quadra NPUs – you’ll see an example of how that’s done in a moment. On the NPUs, they perform the functions for which they were created and trained, like identifying dogs in a video stream.

Figure 2. The NETINT AI Tool Chain.

Quadra Region of Interest (ROI) Filter

Let’s look at a real-world example. One AI function supplied with Quadra is an ROI filter, which analyzes the input video to detect faces and generate Region of Interest (ROI) data to improve the encoding quality of the faces. Specifically, when the AI Engine identifies a face, it draws a box around the face and sends the box’s coordinates to the encoder, with encoding instructions specific to the box.

Technically, Quadra identifies the face using what’s called a YOLOv4 object detection model. YOLO stands for You Only Look Once, a technique that requires only a single pass of the image (or one look) for object detection. By way of background, YOLO is a highly regarded family of deep learning object detection models. The original versions of YOLO are implemented using the DARKNET framework, which you see as an input to the NETINT AI Toolkit in Figure 2.

Deep learning differs from the traditional machine learning described above in that it learns features from large datasets rather than relying on manually engineered features. To create the model deployed in the ROI filter, we trained the YOLOv4 model in DARKNET using hundreds of thousands of publicly available labeled images, where the labels are bounding boxes around people’s faces. This produced a highly accurate model with minimal manual input, which is faster and cheaper than traditional machine learning. Obviously, where relevant training data is available, deep learning is a better alternative to traditional machine learning.

Using the ROI Function

Most users will access the ROI function via FFmpeg, where it’s presented as a video filter with the filter-specific command string shown below. To execute the function, you call the filter (ni_quadra_roi), enter the name and location of the model (yolov4_head.nb), and set a QP offset to adjust the quality within each box (qpoffset=-0.6). Negative values increase video quality, while positive values decrease it, so the command string shown would increase the quality of the faces by approximately 60% relative to other regions in the video.

-vf 'ni_quadra_roi=nb=./yolov4_head.nb:qpoffset=-0.6'
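For context, here is a minimal sketch of how that filter might be dropped into a complete FFmpeg invocation, written as a small Python wrapper. The input file, output file, bitrate, and the hardware encoder name (h264_ni_quadra_enc) are placeholders chosen for illustration; check NETINT’s FFmpeg documentation for the exact encoder and device options on your build.

    import subprocess

    INPUT = "input.mp4"                 # placeholder source file
    ENCODER = "h264_ni_quadra_enc"      # placeholder; confirm the encoder name in NETINT's FFmpeg docs

    cmd = [
        "ffmpeg", "-y",
        "-i", INPUT,
        # ROI filter: YOLOv4 face-detection model plus a negative QP offset
        # that raises encoding quality inside each detected face box
        "-vf", "ni_quadra_roi=nb=./yolov4_head.nb:qpoffset=-0.6",
        "-c:v", ENCODER,
        "-b:v", "1M",                   # illustrative bitrate
        "output.mp4",
    ]
    subprocess.run(cmd, check=True)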

The video in Figure 3 is highly compressed; in a surveillance video, the ROI filter could preserve facial quality for face detection, while in a gambling or similar video compressed at a higher bitrate, it could ensure that the players’ or performers’ faces look their best.

Figure 3. The region of interest filter at work; original on the left, ROI filter on the right.

In terms of performance, a single Quadra unit can process about 200 frames per second or at least six 30fps streams. This would allow a single Quadra to detect faces and transcode streams from six security cameras or six player inputs in an interactive gambling application, along with other transcoding tasks performed without region of interest detection.

Figure 4 shows the processing workflow within the Quadra VPU. Here we see the face detection operating within Quadra’s NPUs, with the location and processing instructions passing directly from the NPU to the encoder. As mentioned, since all instructions are processed on Quadra, there are no memory transfers outside the unit, reducing latency to a minimum and improving overall throughput and performance. This architecture represents the ideal execution environment for any video-related AI application.

Figure 4. Quadra’s on-board AI and encoding processing.

NETINT offers several other AI functions, including background removal and replacement, with others like optical character recognition, video enhancement, camera video quality detection, and voice-to-text on the long-term drawing board. Of course, via the NETINT Tool Chain, Quadra should be able to run most models created in any machine learning platform.

Here in late 2022, we’re only scratching the surface of how AI can enhance video, whether by improving visual quality, extracting data, or enabling any number of as-yet unimagined applications. Looking ahead, the NETINT AI Tool Chain should ensure that any AI model you build will run on Quadra. Once deployed, Quadra’s integrated video processing/AI architecture should ensure highly efficient and extremely low-latency operation for that model.

NETINT Quadra vs. NVIDIA T4 – Benchmarking Hardware Encoding Performance

This article is the second in a series about benchmarking hardware encoding performance. In the first article, available here, I delineated a procedure for testing hardware encoders. Specifically, I recommended this three-step procedure:

  1. Identify the most critical quality and throughput-related options for the encoder.
  2. Test across a range of configurations from high quality/low throughput to low quality/high throughput to identify the operating point that delivers the optimum blend of quality and throughput for your application.
  3. Compute quality, cost per stream, and watts per stream at the operating point to compare against other technologies.

After laying out this procedure, I applied it to the NETINT Quadra Video Processing Unit (VPU) to find the optimum operating point and the associated quality, cost per stream, and watts per stream. In this article, we perform the same analysis on the NVIDIA T4 GPU-based encoder.

About The NVIDIA T4

The NVIDIA T4 is powered by NVIDIA Turing Tensor Cores and draws 70 watts in operation. Pricing varies by reseller, with $2,299 around the median price, which puts it roughly 50% higher than the $1,500 quoted for the NETINT Quadra T1 VPU in the previous article.

In creating the command line for the NVIDIA encodes, I checked multiple NVIDIA documents, including a document entitled Video Benchmark Assumptions, this blog post entitled Turing H.264 Video Encoding Speed and Quality, and a document entitled Using FFmpeg with NVIDIA GPU Hardware acceleration that requires a login. I readily admit that I am not an expert on NVIDIA encoding, but the point of this exercise is not absolute quality so much as the range of quality and throughput that each hardware encoder enables. You should check these documents yourself and create your own version of the optimized command string.

While there are many configuration options that impact quality and throughput, we focused our attention on two, lookahead and presets. As discussed in the previous article, the lookahead buffer allows the encoder to look at frames ahead of the frame being encoded, so it knows what is coming and can make more intelligent decisions. This improves encoding quality, particularly at and around scene changes, and it can improve bitrate efficiency. But lookahead adds latency equal to the lookahead duration, and it can decrease throughput.

Note that while the NVIDIA documentation recommends a lookahead buffer of twenty frames, I use 15 in my tests because, at 20, the hardware decoder kept crashing. I tested a 20-frame lookahead using software decoding, and the quality differential between 15 and 20 was inconsequential, so this shouldn’t impact the comparative results.

I also tested using various NVIDIA presets, which, like all encoding presets, trade off quality vs. throughput. To measure quality, I computed the VMAF harmonic mean and low-frame scores, the latter a measure of transient quality. For throughput, I tested the number of simultaneous 1080p30 files the hardware could process at 30 fps. I then divided the card’s price and power draw (in watts) by the stream count to determine cost per stream and watts per stream.
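As a quick reference, here is a minimal Python sketch of those summary calculations. The per-frame scores and stream count are invented for illustration, and I am assuming the low-frame score is simply the lowest single-frame VMAF value; the T4 price and power figures come from the text above.

    from statistics import harmonic_mean

    def summarize(per_frame_vmaf, price_usd, watts, streams):
        # Quality and efficiency summary for one operating point
        return {
            "vmaf_harmonic_mean": harmonic_mean(per_frame_vmaf),
            "low_frame": min(per_frame_vmaf),        # assumed: lowest single-frame score
            "cost_per_stream": price_usd / streams,
            "watts_per_stream": watts / streams,
        }

    # Illustrative per-frame scores and stream count; T4 price and power from the text
    print(summarize([94.1, 95.3, 57.0, 96.2], price_usd=2299, watts=70, streams=4))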

As you can see in Table 1, I tested with a lookahead value of 15 for selected presets 1-9, and then with a 0 lookahead for preset 9. Line two shows the closest x264 equivalent score for perspective.

In terms of the operating point for comparing to Quadra, I chose the lookahead 15/preset 4 configuration, which yielded twice the throughput of preset 2 with only a minor reduction in VMAF harmonic mean. We will consider low-frame scores in the final comparisons.

In general, the presets worked as they should, with higher quality and lower throughput at the left end, and the reverse at the right end, though LA15/P4 performance was an anomaly since it produced lower quality and higher throughput than LA15/P6. In addition, dropping the lookahead buffer did not produce the performance increase that we saw with Quadra, though it also did not produce a significant quality decrease.

Table 1. H.264 options and results.

Table 2 shows the T4’s HEVC results. Though quality was again near the medium x265 preset with several combinations, throughput was very modest at 3 or 4 streams at that quality level. For HEVC, LA15/P4 stands out as the optimal configuration, with four times or better throughput than other combinations with higher-quality output.

In terms of expected preset behavior, LA15/P4 was again quite the anomaly, producing the highest throughput in the test suite with slightly lower quality than LA15/P6, which should deliver lower quality. Again, switching from LA 15 to LA 0 produced neither the spike in throughput that we saw with Quadra for both HEVC and H.264 nor a drop in quality.

Table 2. HEVC options and results.

Quadra vs. T4

Now that we have identified the operating points for Quadra and the T4, let us compare quality, throughput, CAPEX, and OPEX. You see the data for H.264 in Table 3.

Here, the stream count was the same, so Quadra’s advantage in cost per stream and watts per stream stems from its lower price and more power-efficient operation. At their respective operating points, Quadra’s VMAF harmonic mean quality was slightly higher, with a more significant advantage in the low-frame score, a predictor of transient quality problems.

Table 3. Comparing Quadra and T4 at H.264 operating points.

Table 4 shows the same comparison for HEVC. Here, Quadra output 75% more streams than the T4, which increases its cost per stream and watts per stream advantages. VMAF harmonic mean scores were again very similar, though the T4’s low-frame score was substantially lower.

Table 4. Comparing Quadra and T4 at HEVC operating points. 

Figure 5 illustrates the low frames and the low-frame differential between the two files. It is the result plot from the Moscow State University Video Quality Measurement Tool (VQMT), which displays the VMAF score, frame by frame, over the entire duration of the two video files analyzed, with Quadra in red and the T4 in green. The top window shows the VMAF comparison for the two complete files, while the bottom window is a close-up of the highlighted region of the top window, right around the most significant downward spike at frame 1590.

Figure 5. The downward green spikes represent the low-frame scores in the T4 encode.

As you can see in the bottom window in Figure 5, the low-frame region extends for 2-3 frames, which might be borderline noticeable to a discerning viewer. Figure 6 shows a close-up of the lowest-quality frame (Quadra on the left, T4 on the right), and the dramatic difference in VMAF score, 87.95 versus 57, is clearly warranted by the visual difference. Not surprisingly, PSNR and SSIM measurements confirmed these low frames.

Figure 6. Quality comparisons, NETINT Quadra on the left, T4 on the right.

It is useful to track low frames because if they extend beyond 2-3 frames, they become noticeable to viewers and can degrade the viewer’s quality of experience. Mathematically, in a two-minute test file, the impact of even 10-15 terrible frames on the overall score is negligible. That is why it is always useful to visualize the metric scores with a tool like VQMT rather than simply relying on a single score.
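A quick back-of-the-envelope calculation, using purely hypothetical scores, shows why. The sketch below computes the VMAF harmonic mean for a two-minute, 30 fps clip (3,600 frames) in which every frame scores 95, then recomputes it after a dozen consecutive frames drop to 57; the overall score barely moves, even though a viewer might well notice the glitch.

    from statistics import harmonic_mean

    frames = [95.0] * 3600                   # two minutes at 30 fps, every frame scoring 95 (hypothetical)
    print(round(harmonic_mean(frames), 2))   # 95.0

    frames[1590:1602] = [57.0] * 12          # a dozen consecutive frames drop to 57
    print(round(harmonic_mean(frames), 2))   # ~94.8 - the overall score barely changes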

Summing Up

Overall, the procedure discussed in this and the previous article is the most important takeaway. I am not an expert in encoding with NVIDIA hardware, and results from a single file, or even a limited number of files, can be idiosyncratic.

Do your own research, test your own files, and draw your own conclusions. As stated in the previous article, do not be impressed by quality scores without knowing the throughput, and expect that impressive throughput numbers may be accompanied by a significant drop in quality.

Whenever you test any hardware encoder, identify the most important quality/throughput configuration options, test over the relevant range, and choose the operating point that delivers the best combination of quality and throughput. This will give you the best chance of achieving a meaningful apples-to-apples comparison between different hardware encoders that incorporates quality, cost per stream, and watts per stream.