Video Transcoder vs. Video Processing Unit (VPU)

When choosing a product for live stream processing, half the battle is knowing what to search for. Do you want a live transcoder, a video processing unit (VPU), a video coding unit (VCU), a scalable video processor (SVP), or something else? If you’re not quite sure what these terms mean and how they relate, this short article will educate you in four minutes or less.

In the Beginning, There Were Transcoders

Simply stated, a transcoder is any technology, software or hardware, that can input a compressed stream (decode) and output a compressed stream (encode). FFmpeg is a transcoder, and for video-on-demand, it works fine in most low-volume applications.
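For example, here’s a minimal sketch of FFmpeg acting as a transcoder, decoding a hypothetical H.264 source (input.mp4) and re-encoding it to HEVC; the file names and bitrate are illustrative only:

ffmpeg -i input.mp4 -c:v libx265 -preset medium -b:v 3000k -c:a copy output_hevc.mp4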


For live applications, particularly high-volume live interactive applications (think Twitch), you’ll probably need a hardware transcoder to achieve the necessary cost per stream (CAPEX), operating cost per stream, and density.

For example, the NETINT Video Transcoding Server, a single 1RU server with ten NETINT T408 Video Transcoders, can deliver up to 80 H.264/HEVC 1080p30 streams while drawing under 250 watts. Producing the same output in software on the CPU alone could take up to ten separate 1RU servers, each drawing well over 250 watts.

The NETINT T408 Video Transcoder.

Speaking of the T408, if Webster’s defined a transcoder (it doesn’t), it might have a picture of the T408 as the perfect example. Based on custom transcoding ASICs, the T408 is inexpensive ($400), capable (4K @ 60 FPS or 4x 1080p60 streams), flexible (H.264 and HEVC), and exceptionally efficient (only 7 watts).

What doesn’t the T408 do? Well, that leads us to the difference between a transcoder and a VPU.

The difference between a transcoder and a Video Processing Unit (VPU)

First, the T408 doesn’t scale video. If you’re building a full encoding ladder from a high-resolution source, all the scaling for the lower rungs is performed by the host CPU. In addition, the T408 doesn’t perform overlay in hardware. So, if you insert a logo or other bug over your videos, again, the CPU does the heavy lifting.
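To illustrate the kind of work the host CPU ends up shouldering, here’s a hedged sketch of a software scale-and-overlay FFmpeg command; the file names and settings are hypothetical and not part of the T408 pipeline itself:

ffmpeg -i source_1080p.mp4 -i logo.png -filter_complex "[0:v]scale=1280:720[scaled];[scaled][1:v]overlay=W-w-20:20" -c:v libx264 -b:v 3000k out_720p.mp4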

Finally, the T408 was launched in 2019, the first ASIC-based transcoder to ship in quite a long time. So, it’s not surprising that it doesn’t incorporate any artificial intelligence processing capabilities.

What is a Video Processing Unit (VPU)?

What’s a Video Processing Unit? A hardware device that does all that extra stuff: scaling, overlay, and AI. You see this in the transcoding pipeline shown below, which is for the NETINT Quadra.

The NETINT Quadra transcoding pipeline.

When it came to labeling the Quadra, you see the problem: it does much more than transcode video. Not only does it outperform the T408 by a factor of four, it adds AV1 output and all the additional hardware functionality described above. It’s much more than a simple video transcoder; it’s a video processing unit (VPU).


As much as we’d like to lay claim to the acronym, it actually existed before we applied it to the Quadra. That’s not surprising; it follows the terminology for CPU (central processing unit) and GPU (graphics processing unit). And, if Webster’s defined VPU (it doesn’t). Oh, you get the point. Here’s the required Quadra glamour shot.

The NETINT Quadra Video Processing Unit.

VCUs and MSVPs

While NETINT was busy developing ASIC-based transcoders and VPUs for the mass market, large video publishers like YouTube and Meta produced their own ASICs to achieve similar benefits (and produce more acronyms). In 2021, when Google shipped their own ASIC-based transcoder called Argos, they labeled it a Video Coding Unit, or VCU.

Like the T408 and Quadra, the benefits of this ASIC-based technology are profound; as reported by CNET, “Argos handles video 20 to 33 times more efficiently than conventional servers when you factor in the cost to design and build the chip, employ it in Google’s data centers, and pay YouTube’s colossal electricity and network usage bills.” Interestingly, despite YouTube’s heavy usage of the AV1 codec, Argos encodes only H.264 and VP9, not AV1.

In May 2023, Meta released their own ASIC, which, like Argos, outputs H.264 and VP9, but not AV1. Called the Meta Scalable Video Processor (MSVP), the unit delivered impressive results, including “a throughput gain of ~9x for H.264 when compared against libx264 SW encoding…[and] a throughput gain of ~50x when compared with libVPX speed 2 preset.” Meta also noted that the unit drew only 10 watts of power, which is skimpy, though still about 43% higher than the T408’s 7 watts.

Of course, neither Google nor Meta sells its ASIC to third parties, so if you want the CAPEX and OPEX efficiencies that ASIC-based VPUs deliver, you’ll have to buy from NETINT. The bottom line is that whether you call it a transcoder, VPU, VCU, or MSVP, you’ll get the highest throughput and lowest power consumption if it’s powered by an ASIC.

HARD QUESTIONS ON HOT TOPICS:
ASIC-based Video Transcoder versus Video Processing Unit (VPU)
Watch the full conversation on YouTube: https://youtu.be/iO7ApppgJAg

Which AWS CPU is Best for FFmpeg – AMD, Graviton, or Intel?

If you encode with FFmpeg on AWS, you probably know that you have three CPU options: AMD, Graviton, and Intel. Which delivers the most bang for the buck?

For those in a hurry, it’s Graviton for x264 and AMD for x265, often by a significant margin. But the devil is always in the details, and if you want to learn how we tested and how big a difference your CPU selection makes, you can follow the narrative or hopscotch through the fancy charts below. We conclude with a look at the optimal core count for those encoding with AMD CPUs.

Testing the AWS CPUs

Let me start by saying that this was my first foray into CPU testing on AWS, and while it appears straightforward, some unconsidered complexity may have skewed the results. If you see any errors or other factors worth considering, please drop me a note at jan.ozer@netint.com.

Second, your source clip and command string may produce different results than those shown below. If you’re spending big to encode with FFmpeg on AWS, don’t consider my results the final word; instead, consider them as evidence that your CPU choice really does matter and as motivation to perform your own tests. 

Those caveats aside, let’s dig into the testing.

Codecs/Configurations/Command Strings

I tested three cases:

  • 8-bit 1080p30 with x264
  • 8-bit 1080p30 with x265
  • 10-bit 4K60p with x265

I present the command strings at the bottom of this article. Note that I used the veryslow preset for x264, slower for x265 at 1080p30, and slow for the 4K60 HEVC encodes. Why such demanding presets? Because based upon a total cost of distribution (encoding and bandwidth), the optimal economic decision when view counts will exceed 10,000 views is to use a high-quality preset.

Based upon a total distribution cost (encoding and bandwidth), the optimal economic decision when view counts exceed 10,000 views is to use a high-quality preset.

Remember, presets don’t determine quality; your quality expectations do. Most compressionists target a VMAF score between 93 and 95 for the top rung of their encoding ladders. Using the veryslow preset, you might achieve that at, say, 3 Mbps. Using ultrafast, you might need a bitrate of as much as 5 Mbps to achieve the same quality. Ultrafast might cut your encoding time and cost by 90%, but you pay that cost only once, while you pay bandwidth costs for every video view. Even at a cost per GB of $0.02, it takes fewer than 10,000 views for the veryslow preset to break even based on lower bandwidth costs.
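To make the arithmetic concrete, here’s a back-of-the-envelope sketch of that break-even calculation, assuming one hour of viewing per view. The $150 extra encoding cost for the veryslow run is a hypothetical placeholder, not a measured figure, so substitute your own numbers:

awk 'BEGIN {
  gb_fast = 5 * 3600 / 8 / 1000          # GB per one-hour view at 5 Mbps (ultrafast)
  gb_slow = 3 * 3600 / 8 / 1000          # GB per one-hour view at 3 Mbps (veryslow)
  saving  = (gb_fast - gb_slow) * 0.02   # bandwidth dollars saved per view at $0.02/GB
  extra_encode_cost = 150                # hypothetical extra cost of the veryslow encode
  printf "break-even views: %.0f\n", extra_encode_cost / saving
}'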

Instances and Pricing

I tested using the 8-core instances and on-demand pricing shown in Table 1. I tested all systems running Ubuntu version 22.04. Note that the cost delta between Intel and AMD is ten percent, a number I’ll refer to below.

Table 1:  Instances and on-demand pricing tested.

Encoding Procedure

As you’ll see in the charts below, I started with a single FFmpeg encode and kept adding simultaneous encodes until the cost per stream began to increase, indicating that spinning up another instance was more cost-effective than adding additional encodes to the same system.
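Here’s a sketch of that measurement loop, assuming a hypothetical source clip (source.mp4) and an illustrative single-pass x264 command; the per-run cost then follows from the instance’s hourly rate:

N=6                                   # simultaneous encodes to test
start=$(date +%s)
for i in $(seq 1 "$N"); do
  ffmpeg -y -i source.mp4 -c:v libx264 -preset veryslow -g 60 -keyint_min 60 -sc_threshold 0 -b:v 4200k -f mp4 /dev/null &
done
wait                                  # block until every background encode finishes
end=$(date +%s)
echo "wall-clock seconds for $N simultaneous encodes: $((end - start))"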

FFmpeg Versions

Here’s where things get a bit complicated. My premise was that I would produce the optimal results using FFmpeg versions compiled specifically for each CPU tested. I downloaded builds for Graviton, AMD, and Intel from https://johnvansickle.com/ffmpeg/ and happily contributed via PayPal. However, I was also in touch with MulticoreWare, who requested that I test with an advanced version of their x265 codec that was optimized for Graviton.

Figure 1. I tested with CPU-specific versions of FFmpeg 6.0 from https://johnvansickle.com/ffmpeg/.

Before testing, I compared the performance of the stock version of FFmpeg (Version 4.4) with the CPU-specific versions from Vansickle on the AMD and Intel platforms and for x264 on Graviton. In all cases, the Vansickle version produced the same or better throughput with identical quality.

Note that in other tests on different AMD instances with core counts ranging from 2 to 32, the Vansickle version was not always the best performer. So, if you try the Vansickle builds or your own CPU-specific compiled versions, you should verify that they outperform the native version in all relevant use cases.
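A quick way to verify is a simple A/B timing run of the same command against both binaries; the paths below are hypothetical stand-ins for the stock build and a CPU-specific build:

for bin in /usr/bin/ffmpeg "$HOME/ffmpeg-cpu-specific/ffmpeg"; do
  echo "== $bin =="
  time "$bin" -y -i source.mp4 -c:v libx265 -preset slower -x265-params bitrate=3500 -f mp4 /dev/null
done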

Note that the MulticoreWare version of FFmpeg performed much better on the Graviton system than either the generic version 4.4 or the Vansickle version, though still far behind Intel and particularly AMD. As you’ll see clearly below, if you’re running x265 on a Graviton system using high-quality presets, you’re missing a great opportunity to shave your costs.

For the record, I tried upgrading the stock version of FFmpeg on the Ubuntu system to version 6.0 but ran into multiple issues that ultimately corrupted the system and forced me to start over from scratch. Unfortunately, Ubuntu operation and maintenance are not core strengths of mine, but since I ran all tests using version 6.0, whether supplied by Vansickle or MulticoreWare, the results should be representative.

Table 2 shows the different versions of FFmpeg that I ran on the three systems for the three test cases.

Table 2. The FFmpeg versions deployed on the three systems for the three test cases.

Results

Here are the results for the three test cases.

1080p x264

Figure 2 shows the cost per hour to produce a 1080p30 stream using FFmpeg and the x264 codec. One of the more interesting results was that the combination of FFmpeg and Ubuntu handled multiple FFmpeg instances with minimal overhead, particularly on the Graviton CPU. You see this in the cost per hour for Graviton remaining consistent through twelve simultaneous encodes, while it increased slightly for Intel after ten and for AMD after twelve.

In all cases, you see the cost per instance drop significantly when moving from single to multiple simultaneous encodes. If you’re performing a single 1080p x264 encode on an 8-core system, you’re probably wasting money.

On the other hand, once each CPU hits its lowest cost per hour, it’s time to consider adding another instance rather than more encodes. The cost per stream will remain roughly the same, but your encoding speed will double. For example, on a Graviton system, encoding time roughly doubles if you run twelve simultaneous encodes instead of six, yet the cost per hour stays almost exactly the same. If you instead spin up a second 8-core system and run six simultaneous encodes on each, your cost will be almost identical, but your throughput will double.
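The underlying arithmetic is straightforward. This sketch assumes a hypothetical $0.33/hour instance rate, a measured wall-clock time from a run like the one above, and a one-hour source clip; plug in your own values:

awk -v rate=0.33 -v wall_secs=5400 -v n_streams=6 -v src_hours=1.0 'BEGIN {
  run_cost = rate * wall_secs / 3600                       # dollars spent on the run
  printf "cost per stream-hour: $%.4f\n", run_cost / (n_streams * src_hours)
}'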

Figure 2. Cost per hour to produce a single 1080p stream using the x264 codec and FFmpeg. Graviton is clearly the most cost-effective.

1080p x265

What a difference a codec makes. Where Graviton was the clear leader for x264, it’s the clear laggard for x265. Again, I produced the Graviton results shown in Figure 3 using a version of FFmpeg supplied by x265 developer MulticoreWare; the results would have been much worse with either the Vansickle or the stock version. As you may know, Graviton is an Arm-based CPU that uses a different instruction set than Intel or AMD CPUs. While the x264 codec proved Arm-friendly, the x265 codec was decidedly the reverse, at least with the high-quality presets used in my tests.

Interestingly, for both Intel and AMD, the lowest cost per stream came at relatively low simultaneous stream counts: two for Intel, and two or three for AMD. If your testing confirms this, you should consider adding instances once you reach this threshold rather than adding additional encodes to existing instances.

Figure 3. Cost per hour to produce a single 1080p stream using the x265 codec and FFmpeg.

Comparing the lowest-cost Intel result ($6.60) to the lowest-cost AMD result ($5.49) shows a cost delta of about 17%. As shown in Table 1, 10% of this relates to pricing, leaving about a 7% performance delta.

For the record, note that an Amazon engineer ran similar tests here and found that Graviton was faster for both x264 and x265. Note, however, that the author used the ultrafast preset, while I used higher quality presets for the stated reasons. Have a look and draw your own conclusions.

4K60 x265

In the 4K60p testing, Graviton was clearly overwhelmed from both a cost and a performance perspective, unable to complete even three simultaneous encodes. The overall cost delta between Intel and AMD narrowed slightly, dropping to 13.7%, with 10% relating to pricing. The actual throughput delta between the two in these tests is 3.7%.

Figure 4. Cost per hour to produce a single 4K60p stream using the x265 codec and FFmpeg.

This 4K60 test stressed memory much more than the 1080p tests did, limiting successful simultaneous transcodes to two for Graviton and four for AMD and Intel. Interestingly, in these tests, AMD produced the lowest cost per stream while running a single encode, and Intel did so at two. With these challenging encodes, you may want to spin up new machines after only one or two encodes rather than attempting more simultaneous encodes. Or, perhaps, try a machine with more cores. Hold that thought until the last section.

For reference, Table 3 summarizes the lowest cost per hour for the three test cases.

Table 3. Cost per hour for the three test cases on the three tested CPUs.

Which leads us to the last section.

What’s the Optimal Number of Cores for FFmpeg?

AWS offers multiple core counts for all three CPU types, so which core count is optimal? To find out, I ran all three test cases on AMD instances with varying core counts and present the results below.

Let’s talk about expectations first. AWS charges linearly for cores, so an 8-core system costs twice as much as a 4-core system and a quarter as much as a 32-core system. Given the results presented above, where FFmpeg/Ubuntu proved highly efficient at processing multiple simultaneous encodes, I expected a similar cost per hour across all core counts. The results were close.
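Put differently, if throughput scales linearly with core count, the cost per stream should stay flat across instance sizes. This sketch uses a hypothetical $0.04 per core-hour rate and illustrative stream counts simply to show that expectation:

awk 'BEGIN {
  rate_per_core = 0.04                  # hypothetical on-demand $/core-hour
  split("8 16 32", cores, " ")
  split("6 12 24", streams, " ")        # illustrative: streams scale with cores
  for (i = 1; i <= 3; i++)
    printf "%2d cores: $%.4f per stream-hour\n", cores[i], rate_per_core * cores[i] / streams[i]
}'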

With x264, 2-core and 8-core systems were slightly more affordable than 16-core, though a 32-core system finally caught up at 32 simultaneous transcodes. If you’re going to run a 32-core system for 1080p30/x264 encodes, you need to be running quite a few simultaneous encodes to achieve the optimal cost per stream.

Figure 5. x264 encoding cost for the CPU core counts shown.

With x265 encoding at 1080p, the results were closer to what I expected, though again, the 2-core and 8-core systems were slightly more affordable. Unlike with x264, the 32-core system became slightly more expensive as the number of simultaneous encodes increased, making eight simultaneous streams the most affordable.

Figure 6. x265 encoding cost for 1080p30 encodes and the CPU core counts shown.

When encoding 4K videos, the phrase “go big or go home” comes to mind. Here, 32 cores delivered the lowest cost, though only by a fraction, and only at four simultaneous encodes. After that, the cost per hour increases slightly through eight encodes and then starts a more serious climb.

Figure 7. x265 encoding cost for 4K60 encodes and the CPU core counts shown.

As you can see, all these results are highly codec- and source-specific. The most important takeaway from this article should not be that Graviton is best for x264 and AMD best for x265. It should be that real performance differences exist between the CPUs, and these differences may translate to significant cost differentials. If you’re spending even a few thousand dollars a month on AWS for FFmpeg encoding, it makes sense to run tests like these to identify the most cost-effective CPU and core count.

Test Strings

1080p30 x264:

ffmpeg -y -i Orchestra.mp4 -c:v libx264 -profile:v high  -preset veryslow -g 60 -keyint_min 60 -sc_threshold 0  -b:v 4200k -pass 1  -f mp4 /dev/null

ffmpeg -y -i Orchestra.mp4 -c:v libx264  -preset veryslow -g 60 -keyint_min 60 -sc_threshold 0  -b:v 4200k -maxrate 8400k -bufsize 8400k -pass 2  orchestra_x264_output.mp4

1080p30 x265:

ffmpeg  -y -i Football_short.mp4 -c:v libx265 -preset slower -x265-params keyint=60:min-keyint=60:scenecut=0:bitrate=3500:pass=1  -f mp4 /dev/null

ffmpeg  -y -i Football_short.mp4 -c:v libx265 -preset slower -x265-params keyint=60:min-keyint=60:scenecut=0:bitrate=3500:vbv-maxrate=7000:vbv-bufsize=7000:pass=2  Football_x265_HD_output.mp4

4K60 x265:

ffmpeg -y -i Football_4K60.mp4 -c:v libx265 -preset slow -x265-params keyint=120:min-keyint=120:scenecut=0:bitrate=12500K:pass=1  -f mp4 /dev/null

ffmpeg -y -i Football_4K60.mp4 -c:v libx265 -preset slow -x265-params keyint=120:min-keyint=120:scenecut=0:bitrate=12500K:vbv-maxrate=25000K:vbv-bufsize=25000K:pass=2  Football_4K_output.mp4 

HARD QUESTIONS ON HOT TOPICS: AMD, Graviton, and Intel
– three CPU options to encode with FFmpeg on AWS
 
Watch the full conversation on YouTube: https://youtu.be/BOZZuiemMAU