All You Need to Know About the NETINT Product Line

This article will introduce you to the NETINT product line and Codensity ASIC generations. We will focus primarily on the hardware differences, since all products share a common software architecture and feature set, which are briefly described at the end of the article.

Codensity G4-Powered Video Transcoder Products

The Codensity G4 was the first encoding ASIC developed by NETINT. There are two G4-based transcoders: the T408 (Figure 1), which is available in a U.2 form factor and as an add-in card, and the T432 (Figure 2), which is available as an add-in card. The T408 contains a single G4 ASIC and draws 7 watts under full load, while the T432 contains four G4 ASICs and draws 27 watts.

The T408 costs $400 in low volumes, while the T432 costs $1,500. The T432 delivers 4x the raw performance of the T408.

Figure 1. The NETINT T408 is powered by a single Codensity G4 ASIC.

The T408 and T432 decode and encode H.264 and HEVC on the device but perform all scaling, overlay, and deinterlacing on the host CPU.

If you're buying your own host, the selected CPU should reflect both the extent of the processing it must perform and the overhead of the media processing framework driving the transcode function.

When transcoding inputs without scaling, as in cloud gaming or conferencing applications, a modest CPU can suffice. If you are creating standard encoding ladders, deinterlacing multiple streams, or frequently scaling incoming video, you'll need a more capable CPU; the sketch below illustrates the difference. For a turnkey solution, check out the NETINT Logan Video Server options.
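To make the division of labor concrete, here is a minimal sketch assuming a patched FFmpeg build. The encoder name h264_ni_enc is an illustrative placeholder for whatever identifier your NETINT libavcodec patch actually exposes; check the SDK documentation for the real name.

```python
# Minimal sketch (assumed encoder name): with the T408/T432, decode and
# encode run on the ASIC, but any scale filter runs on the host CPU.
import subprocess

# 1:1 transcode (cloud gaming / conferencing style) -- minimal CPU load.
subprocess.run([
    "ffmpeg", "-y", "-i", "in.mp4",
    "-c:v", "h264_ni_enc",        # placeholder for the patch's encoder name
    "out.mp4",
], check=True)

# Ladder rung -- scale=1280:720 executes in software on the host CPU,
# a cost paid for every rung of every concurrent stream.
subprocess.run([
    "ffmpeg", "-y", "-i", "in.mp4",
    "-vf", "scale=1280:720",      # host-side scaling
    "-c:v", "h264_ni_enc",
    "out_720p.mp4",
], check=True)
```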

Figure 2. The NETINT T432 includes four Codensity G4 ASICs.

The T408 and T432 run on multiple versions of Ubuntu and CentOS; see here for more detail about those versions and recommendations for configuring your server.

The NETINT Logan Video Server

The NETINT Video Transcoding Server includes ten T408 U.2 transcoders. It is targeted at high-volume transcoding applications, either as an affordable turnkey replacement for existing hardware transcoders or where a drop-in alternative to a software-based transcoder is preferred.

The lowest-priced model costs $7,000 and is built on the Supermicro 1114S-WN10RT server platform, powered by an AMD EPYC 7232P processor (eight cores/16 threads) running Ubuntu 20.04.5 LTS. The server ships with 128 GB of DDR4-3200 RAM, a 400 GB M.2 SSD, three PCIe slots, and ten NVMe slots that house the ten T408 transcoders. At full transcoding capacity, the server draws 220 watts while encoding or transcoding up to ten 4Kp60 streams or as many as 160 720p60 streams.

The server is also offered with two more powerful CPUs: the AMD EPYC 7543P (32 cores/64 threads, $8,900) and the AMD EPYC 7713P (64 cores/128 threads, $11,500). Other than the CPU, the hardware specifications are identical.

Figure 3. The NETINT Video Transcoding Server.

All Codensity G4-based products support HDR10 and HDR10+, as well as CEA-708 closed captions, for both H.264 and HEVC encode and decode. In low-latency mode, all products support sub-frame latency. Other features include region-of-interest encoding, a customizable GOP structure with eight presets, and forced IDR frame insertion at any location.

The T408, T432, and NETINT Server are targeted toward high-volume interactive applications that require inexpensive, low-power, and high-density transcoding using the H.264 and HEVC codecs.

Codensity G5-Powered Live Transcoder Products

The Codensity G5 is NETINT's second-generation ASIC. In addition to roughly quadrupling the H.264 and HEVC throughput of the Codensity G4, it adds AV1 encode support, VP9 decode support, onboard scaling, cropping, padding, and graphical overlay, plus an 18 TOPS (trillion operations per second) artificial intelligence engine that runs the most common AI frameworks natively in silicon.

Codensity G5 also includes audio DSP engines for encoding and decoding audio codecs such as MP3, AAC-LC, and HE-AAC. All this onboard activity minimizes the role of the host CPU, allowing Quadra products to operate effectively in systems with modest CPUs.

Where the G4 ASIC is primarily a transcoding engine, the G5 incorporates much more onboard processing for even greater video processing acceleration. For this reason, NETINT labels Codensity G4-based products as Video Transcoders and Codensity G5-based products as Video Processing Units or VPUs.

The Codensity G5 is available in three products (Figure 4): the U.2-based Quadra T1 and the PCIe-based Quadra T1A, each of which includes one Codensity G5 ASIC, and the PCIe-based Quadra T2, which includes two Codensity G5 ASICs. Pricing for the T1 starts at $1,500.

In terms of power consumption, the T1 draws 17 watts, the T1A 20 watts, and the T2 40 watts.

Figure 4. The Quadra line of Codensity G5-based products.

All Codensity G5-based products provide the same HDR and closed caption support as the Codensity G4-based products. They have also been tested on Windows, macOS, Linux, and Android, with support for virtual machine and container virtualization, including Single Root I/O Virtualization (SR-IOV).

From a quality perspective, the Codensity G4-based transcoders offer no configuration options for trading quality against throughput. Quadra Codensity G5-powered VPUs offer features like lookahead and rate-distortion optimization that let users tune quality and throughput for their particular applications.

Hard Questions on Hot Topics: What You Need to Understand About the NETINT Product Line
Watch the full conversation on YouTube: https://youtu.be/qRtnwjGD2mY

AI-Based Video Processing

Beyond VP9 ingest, AV1 output, and superior onboard processing, the Codensity G5's AI engine is a game changer for many current and future video processing applications. Each Codensity G5 ASIC includes two onboard Neural Processing Units (NPUs). Combined with Quadra's integrated decoding, scaling, and transcoding hardware, this creates an integrated AI and video processing architecture that requires minimal interaction from the host CPU.

Today, in early 2023, the AI-enabled video processing market is nascent, but Quadra already supports several applications, such as an AI-based region-of-interest filter and background removal (see Quadra App Note APPS553). Additional features under development include automatic facial identification for video conferencing, license plate detection and OCR for security, object detection for a range of applications, and voice-to-text.

Quadra includes an AI Toolchain workflow that enables importing models from AI tools like Caffe, TensorFlow, Keras, and Darknet for deployment on Quadra. So, in addition to the basic models that NETINT provides, developers can design their own applications and easily implement them on Quadra.
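As a rough illustration of that workflow, consider the sketch below. The converter name quadra_model_convert and its flags are hypothetical placeholders, not a documented NETINT tool; the real commands, options, and output formats come from the AI Toolchain documentation.

```python
# Hypothetical sketch of the model-import flow described above.
# "quadra_model_convert" and all of its flags are illustrative
# placeholders, not a real NETINT CLI.
import subprocess

def deploy_model(framework: str, model_path: str, output_path: str) -> None:
    """Convert a trained model (Caffe, TensorFlow, Keras, or Darknet)
    into a binary the Quadra NPU can execute."""
    subprocess.run([
        "quadra_model_convert",    # placeholder CLI name
        "--framework", framework,
        "--input", model_path,
        "--output", output_path,
    ], check=True)

deploy_model("tensorflow", "models/background_removal", "background_removal.bin")
```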

Like NETINT’s Codensity G4 based products, Quadra VPUs are ideal for interactive applications that require low CAPEX and OPEX. Quadra VPUs offer increased onboard processing that enables lower-cost host systems and the ability to customize throughput and quality, deliver AV1 output, and deploy AI video applications.

The NETINT Quadra 100 Video Server

The NETINT Quadra 100 Video Server includes ten Quadra T1 U.2 VPUs and is targeted at ultra-high-volume transcoding applications and services seeking to deliver AV1 stream output.

The Quadra 100 Video Server costs $20,000 and is built on the Supermicro 1114S-WN10RT server platform, powered by an AMD EPYC 7543P processor (32 cores/64 threads) running Ubuntu 20.04.5 LTS. The server ships with 128 GB of DDR4-3200 RAM, a 400 GB M.2 SSD, three PCIe slots, and ten NVMe slots that house the ten T1 U.2 VPUs. At full transcoding capacity, the server draws around 500 watts while encoding or transcoding up to 20 8Kp30 streams or as many as 640 720p30 streams.

The Quadra server is also offered with two other CPUs: the AMD EPYC 7232P (8 cores/16 threads, price TBD) and the AMD EPYC 7713P (64 cores/128 threads, price TBD). Other than the CPU, the hardware specifications are identical.

Media Processing Frameworks - Driving NETINT Hardware

In addition to SDKs for both hardware generations, NETINT offers highly efficient FFmpeg and GStreamer integrations that let operators complete the integration by applying an FFmpeg/libavcodec or GStreamer patch.

In the FFmpeg implementation, the libavcodec patch on the host server sits between the NETINT hardware and the FFmpeg software layer, allowing existing FFmpeg-based video transcoding applications to control hardware operation with minimal changes.
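In practice, the change to an existing FFmpeg command can be as small as swapping the encoder name. The sketch below is illustrative only, again assuming h264_ni_enc as the patch-exposed encoder identifier.

```python
# Before: software encoding with libx264.
# After: the same command with the NETINT hardware encoder swapped in.
# ("h264_ni_enc" is an assumed name; consult your SDK documentation.)
import subprocess

software = ["ffmpeg", "-y", "-i", "in.mp4", "-c:v", "libx264", "out.mp4"]
hardware = ["ffmpeg", "-y", "-i", "in.mp4", "-c:v", "h264_ni_enc", "out.mp4"]

# Everything except the encoder name stays the same.
subprocess.run(hardware, check=True)
```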

The NETINT hardware device driver includes a resource management module that tracks hardware capacity and usage, presents an inventory and status of available resources, and enables resource distribution. User applications can build their own resource management schemes on top of this resource pool or let the NETINT driver automatically distribute the decoding and encoding tasks.

In automatic mode, users simply launch multiple transcoding jobs, and the device driver automatically distributes the decode, encode, and processing tasks among the available resources. Alternatively, users can assign different hardware tasks to different NETINT devices and even control which streams are decoded by the host CPU versus the NETINT hardware. With these and similar controls, users can efficiently balance the overall transcoding load between the NETINT hardware and the host CPU to maximize throughput.
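Here is a minimal sketch of automatic mode, under the same assumed encoder name: each job is launched without naming a device, and the driver's resource manager spreads the work across the installed NETINT devices.

```python
# Automatic mode sketch: launch several independent transcodes and let
# the NETINT resource manager distribute them across available devices.
# No device is specified anywhere; the driver chooses for each job.
import subprocess

procs = [
    subprocess.Popen([
        "ffmpeg", "-y", "-i", f"in_{i}.mp4",
        "-c:v", "h264_ni_enc",    # assumed encoder name from the patch
        f"out_{i}.mp4",
    ])
    for i in range(8)             # eight concurrent jobs
]

for p in procs:
    p.wait()                      # wait for every job to finish
```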

In all interfaces, the syntax and command structure are similar for T408 and Quadra units, which simplifies migrating from G4-based products to Quadra hardware. It is also possible to operate T408 and Quadra hardware together in the same system.

That's the overview. For more information on any product, please visit the individual product pages on the NETINT website.


Reducing Power Consumption in Data Centers: A Response to the European Energy Crisis

Encoding technology refreshes are seldom CFO-driven. For European data centers, over the next few years they may need to be, as reducing power consumption becomes a primary focus.

Few European consumers or businesses need to be reminded that they are in the midst of a power crisis. But a recent McKinsey & Company article entitled "Four themes shaping the future of the stormy European power market" provides interesting insights into the causes of the crisis and its expected duration. Engineering and technical leaders, don't stop reading: this crisis will impact the architecture and technology decisions you may be making.

The bottom line, according to McKinsey? Buckle up, Europe, “With the frequency of high-intensity heat waves expected to increase, additional outages of nuclear facilities planned in 2023, and further expected reductions in Russian gas imports, we expect that wholesale power prices may not reduce substantially (defined as returning to three times higher than pre-crisis levels) until at least 2027.” If you haven’t been thinking about steps your organization should take to reduce power consumption and carbon emissions, now is the time.

Hard Questions on Hot Topics: The European Energy Crisis per the McKinsey Report
Watch the full conversation on YouTube: https://youtu.be/yiYSoUB4yXc

The Past

The war in Ukraine is the most obvious contributor to the energy crisis, but McKinsey identifies multiple additional factors. Significantly, even before the war, Europe was in the midst of "structural challenges" caused by its transition from carbon-emitting fossil fuels to cleaner, more sustainable sources like wind, solar, and hydroelectric.

Then, in 2022, the shock waves began. Prior to the invasion of Ukraine in February, Russia supplied 30% of Europe's natural gas; those imports dropped by as much as 50% in 2022 and are expected to decline further. This was exacerbated by a 19% drop in hydroelectric power caused by drought and a 14% drop in nuclear power caused by required maintenance that closed 32 of France's 56 reactors. As a result, "wholesale prices of both electricity and natural gas nearly quadrupled from previous records in the third quarter of 2022 compared with 2021, creating concerns for skyrocketing energy costs for consumers and businesses."

Figure 1. As most European consumers and businesses know, prices skyrocketed in 2022
and are expected to remain high through 2027 and beyond.

Four key themes

Looking ahead, McKinsey identifies four key themes it expects to shape the market’s evolution over the next five years.

  • Increase in Demand

McKinsey sees power usage increasing from 2,900 terawatt-hours (TWh) in 2021 to 3,700 TWh in 2030, driven by multiple factors. For example, the switch to electric cars and other electrified transportation will increase power consumption by 14% annually. In addition, the manufacturing sector, which needs power for electrolysis, will grow its consumption to 200 TWh by 2030.

  • The Rise of Intermittent Renewable Energy Sources

By 2030, wind and solar power will provide 60% of Europe’s energy, double the share in 2021. This will require significant new construction but could also face challenges like supply chain issues, material shortages, and a scarcity of suitable land and talent.

  • Balancing Intermittent Energy Sources

McKinsey sees the energy market diverging into two types of sources: intermittent sources like solar, wind, and hydroelectric, and dispatchable sources like coal, natural gas, and nuclear that can be turned on and off to meet peak requirements. Over the next several years, McKinsey predicts that "a gap will develop between peak loads and the dispatchable power capacity that can be switched on to meet it."

To close the gap, Europe has been aggressively developing clean energy sources of dispatchable capacity, including utility-scale battery systems, biomass, and hydrogen. In particular, hydrogen is set to play a key role in Europe’s energy future, as a source of dispatchable power and as a means to store energy from renewable sources.

All these sources must be further implemented and massively scaled, with “build-outs remaining highly uncertain due to a reliance on supportive regulations, the availability of government incentives, and the need for raw materials that are in short supply, such as lithium ion.”

  • New and Evolving Markets and Rules

Beyond temporary measures designed to reduce costs for energy consumers, European policymakers are considering several options to reform how the EU energy market operates. These include:

  • A central buyer model: A single EU or national regulatory agency would purchase electricity from dispatchable sources at fixed prices under long-term contracts and sell it to the market at average-cost prices.
  • Decoupled day-ahead markets: Split zero-marginal-cost resources (wind, solar) and marginal-cost resources (coal) into separate markets to prioritize the dispatching of renewables.
  • Capacity remuneration mechanism: The grid operator provides subsidies to producers, based on the forecast cost of keeping power capacity in the market, to ensure a steady supply of dispatchable electricity and protect consumers.

McKinsey closes on a positive note, “Although the European power market is experiencing one of its most challenging periods, close collaboration among stakeholders (such as utilities, suppliers, and policy makers) can enable Europe’s green-energy transition to continue while ensuring a stable supply of power.”

The future of the European power market is complex and subject to many challenges, but policymakers and stakeholders are working to address them and find solutions to ensure a stable and affordable energy system for consumers and businesses.

In the meantime, the mandate for data centers isn't new: video engineers are already being asked to reduce power consumption to save OPEX, to cut carbon footprint so their companies hit ESG metrics, and to minimize the potential disruption of energy instability.

If you're in this position, NETINT's ASIC-based transcoders can help by offering the lowest power draw of any silicon solution (CPU, GPU, or FPGA), and thus the highest possible density.

Cloud or On-Premises - The Streaming Publisher's Dilemma


Processing your media in the cloud or on-premises is one of the most critical decisions facing a streaming video service. Two recent articles provide strong opinions and insights on this decision and are worthy of review. Our take? Do the math and make your own decision.

The first article is "Why we're leaving the cloud" by David Heinemeier Hansson.

By way of background, Hansson is co-owner and CTO of software developer 37signals, the company behind the project management platform Basecamp and the premium email service Hey.

After running the two platforms on AWS for a number of years, Hansson commented that "renting computers is (mostly) a bad deal for medium-sized companies like ours with stable growth. The savings promised in reduced complexity never materialized." As an overview, he asserts that the cloud excels at two ends of the spectrum: 1) simple, low-traffic applications, and 2) highly irregular loads with wild swings or towering peaks in usage.

When Hey first launched, running in AWS allowed the new service to seamlessly onboard the 300,000 users that signed up in the first three weeks, wildly exceeding the forecast of 30,000 in six months. Since then, however, Hansson reported, these capacity spikes never recurred, and by "continuing to operate in the cloud, we're paying an at times almost absurd premium for the possibility that [they] could."

In abandoning the cloud, Hansson had to stare down two common beliefs. The first is that the cloud simplifies system and computer management. As it relates to his own businesses, he reports that "anyone who thinks running a major service like HEY or Basecamp in the cloud is 'simple' has clearly never tried. Some things are simpler, others more complex, but on the whole, I've yet to hear of organizations at our scale being able to materially shrink their operations team, just because they moved to the cloud."

He also tackles perceptions regarding the complexity of running equipment on-premise. “Up until very recently, everyone ran their own servers, and much of the progress in tooling that enabled the cloud is available for your own machines as well. Don’t let the entrenched cloud interests dazzle you into believing that running your own setup is too complicated. Everyone and their dog did it to get the internet off the ground, and it’s only gotten easier since.”


In "Media Processing in the Cloud or On-Prem—Which Is Right for You?", Alex Emmermann, Director of Business Development for Cloud Products at Telestream, takes a more moderate view (as you would expect).

Emmermann starts by pointing out where the cloud makes sense, zeroing in on the same capacity swings as Hansson. “A typical painful example is when capacity requirements shift underneath you, such as a service becoming more popular than you had initially allocated resources for. For example, when running a media services operation, there are many situations that can stress systems... In media processing, full-catalog licenses, mergers, or content migrations can cause enormous capacity requirements for transcoding and QC.”

Emmermann also introduces the concept of hybrid operations. “For many companies, a wholesale move may feel too risky, so a hybrid approach works well by allowing excess capacity requirements to burst into the cloud as required. This allows run rate systems to continue functioning while taking immediate advantage of cloud scaling when and if required. Depending on the needs of the service, a hybrid setup could continue to run indefinitely and very cost-effectively if on-prem CapEx resources have already been spent and the resources are in place to keep them running.”

In terms of companies that should operate on-premises, Emmermann cites two examples. First are companies with significant CAPEX investments in encoding gear. "For the many thousands of busy on-premises servers processing run-rate media workflows throughout the world, they're efficiently and cheaply doing what they need to do and will no doubt continue to do so for a long time." He also notes that inexpensive and reliable connectivity is an absolute requirement, and "there are certain places on the planet that may not have reliable interconnectivity to a cloud provider."

All told, Emmermann concludes, "There's no question that any media company investing in new services or wanting to have the capacity to say yes to any customer request will want to do this with a public cloud provider… On the other hand, any steady-state, on-premises service that is happily functioning as designed and only occasionally requires a small capital refresh will be happy to stay the course."

Our Take? Do the Math

Hard Questions on Hot Topics: Cloud or On-Premises? How to Do the Math
Watch the full conversation on YouTube: https://youtu.be/GSQsa4oQmCA

Anyone who has ever provisioned an EC2 instance from AWS and paid the hourly rate has wondered, "How does that compare to buying your own system?" We're certainly not immune.

Given the impetus of this article, we decided to put pencil to paper, or rather keyboard to spreadsheet. We recently launched the NETINT Video Transcoding Server, which costs $7,000 and includes ten T408 transcoders that can output H.264 and HEVC. In benchmarking, the entry-level system produced 21 five-rung H.264 ladders and 27 four-rung HEVC ladders. What would it cost to produce the same number of streams in AWS?

We checked the MediaLive price list here and confirmed it with the pricing calculator estimate here (Figure 3 shows HEVC). A single hour of H.264 live streaming costs just $0.46, but running 24/7 that adds up to $4,004.17 per year. The rate jumps to $1.527 per hour for HEVC, or $13,375.55 per year. Both figures are for a single ladder.

Figure 3. Yearly cost for streaming a single five-rung HEVC encoding ladder.

To compare this to our streaming server, we multiplied the per-ladder cost by the number of ladders the server can produce and extended all calculations out to five years. This translates to a five-year cost of $420,441 for H.264 and a staggering $1,805,712 for HEVC.

To compute the same five-year cost for the server, we added $69/month in colocation charges to the $7,000 base price. This came to $11,140 for either format.
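For readers who want to check or adapt the arithmetic, here it is in a few lines of Python using the figures quoted above; the small differences from the published table come from spreadsheet rounding.

```python
# Five-year cost comparison using the per-ladder MediaLive rates above.
h264_year = 4_004.17          # $ per H.264 ladder per year, 24/7
hevc_year = 13_375.55         # $ per HEVC ladder per year, 24/7

aws_h264 = h264_year * 21 * 5     # 21 ladders for 5 years -> ~$420,438
aws_hevc = hevc_year * 27 * 5     # 27 ladders for 5 years -> ~$1,805,699

server = 7_000 + 69 * 12 * 5      # base price + colocation -> $11,140

print(f"AWS H.264: ${aws_h264:,.0f}")
print(f"AWS HEVC:  ${aws_hevc:,.0f}")
print(f"Server:    ${server:,.0f}")
```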

Table 1. Five-year cost comparison, AWS MediaLive pricing compared to the NETINT server.

This comparison brought to mind Hansson's comment that "Amazon, in particular, is printing profits renting out servers at obscene margins." Surely, no streaming publisher is using MediaLive for 24/7/365 operations.

Taking a step back, it’s tough not to agree with the key points from both authors. The cloud does make the most sense when you need instant capacity for peak encoding. For steady-state operations, owning your own gear is always going to be cheaper.

All that said, run the numbers no matter what you’re doing in the cloud. While the results probably won’t be as startling as those shown in Table 1, you won’t know until you do the math.

Maximizing Cloud Gaming Performance with ASICs


Ask ten cloud gamers what an acceptable level of latency is for cloud gaming, and you’ll get ten different answers. However, they will all agree that lower latency is better.

At NETINT, we understand. As a supplier of encoders to the cloud gaming market, our role is to deliver the lowest possible latency at the highest possible quality, with the greatest encoding density and the lowest possible power consumption. That sounds like a tall order, but because our technology is ASIC-based, it's what we do for cloud gaming and for high-volume video streaming workloads of all types.

In this article, we’ll take a quick look at the technology stack for cloud gaming and the role of compression. Then we’ll discuss the performance of the NETINT Quadra VPU (video processing unit) series using the four measuring sticks of latency, density, video quality, and power consumption.

The Cloud Gaming Technology Stack

Figure 1 illustrates the different elements of the cloud gaming technology stack, particularly how the various transfer, compute, rendering, and encoding activities contribute to overall latency.

At the heart of every cloud gaming data center is a game engine that typically runs the operating system native to the game, usually Android or Windows, though Linux and macOS are not uncommon (see here for Meta's dual-OS architecture).

Since most games rely on GPUs for rendering, all cloud gaming data centers have a healthy dose of GPU resources. These functions are incorporated in the cloud compute and graphics engine shown on the left of Figure 1, which creates the frames sent to the encode function for encoding and transmission to the gamer.

As illustrated in Figure 1, Nokia budgets 100 ms for total latency. Inside the data center, shown on the left, Nokia allows 15 ms to receive the data, 40 ms to process the input and render the frame, 5 ms to encode the frame, and 15 ms to return it to the remote player. Those stages total 75 ms, leaving just 25 ms of headroom in the overall budget. That's a lot to do in the time it takes a sound wave to travel just 100 feet.

Figure 1. Cloud gaming latency budget from Nokia.

NETINT's Quadra VPU series is ideal for the standalone encode function. All Quadra VPUs are powered by the NETINT Codensity G5 ASIC. They're called video processing units because, in addition to H.264, HEVC, and VP9 decode and H.264, HEVC, and AV1 encode, Quadra VPUs offer onboard scaling, overlay, and an 18 TOPS AI engine (per chip).

Quadra is available in two single-chip solutions (T1 and T1A) and a dual-chip solution (T2), with pricing starting at $1,500 in low quantities. Depending on the configuration you purchase, you can install up to ten Quadra VPUs in a single 1RU server or twenty in a 2RU server.

Cloud Gaming Latency and Density

Table 1 reports latency and density for a single Quadra VPU. As you would expect, latency depends on video resolution (and, with it, the required network bandwidth) and, to a much lesser degree, on the number of jobs being processed.

Game producers understand the resolution/latency tradeoff and design the experience around it. So, a cloud gaming vendor might deliver a first-person shooter at 720p to minimize latency and provide a better UX on medium-bandwidth connections, and a slower-paced role-playing or strategy game at higher resolutions to optimize the visual experience. As Table 1 shows, a single Quadra VPU can service both scenarios, with 4K latency under 20 ms and 720p latency around 4 ms at extremely high stream counts.

Table 1. Quadra throughput and average latency for AVC and HEVC.

In terms of density, the jobs shown in Table 1 are for a single Quadra VPU. Though multiple units won’t scale linearly, performance will increase substantially as you install additional units into a server. Because the Quadra is focused solely on video processing and encoding operations, it outperforms most general-purpose GPUs, CPUs, and even FPGA-based encoders from a density perspective.

Quadra Output Quality

From a quality perspective, hardware transcoders are typically benchmarked against the x264 and x265 codecs running in FFmpeg. Though their throughput is orders of magnitude lower, these software codecs represent well-known and accepted quality levels. NETINT recently compared Quadra quality against x264 and x265 in a low-latency configuration using a CGI-based data set.

Table 2 shows the results for H.264, with Rate-Distortion Optimized Quantization (RDOQ) enabled and disabled. Enabling RDOQ increases quality slightly but decreases throughput. In both configurations, Quadra exceeded the quality of x264 using the veryfast preset, which is typical for live streaming.

Table 2. The NETINT Quadra VPU series delivers better H.264 quality
than the x264 codec using the veryfast preset.

For HEVC, Table 3 shows the equivalent x265 preset with RDOQ disabled (the high-throughput, lower-quality option) at three rate-distortion optimization (RDO) levels, which also trade off quality for throughput. Even with RDOQ disabled and RDO set to 1 (low quality, high throughput), Quadra delivers the equivalent of x265 medium quality. Note that most live streaming engineers use the superfast or ultrafast preset to produce even a modest number of HEVC streams in a software-only encoding scenario.

Table 3. The NETINT Quadra VPU series delivers better quality
than the x265 codec using the medium preset.

Low Power Transcoding for Cloud Gaming

At full load, the Quadra T1 draws just 17 watts. Though GPUs such as the 70-watt NVIDIA T4 are also relatively power-efficient, they typically deliver far fewer streams.

In this comparison with the NVIDIA T4, the Quadra T1 drew 0.71 watts per 1080p stream, about 81% less than the 3.7 watts per stream required by the T4. This translates directly into a corresponding reduction in energy costs and carbon emissions per stream. In terms of CAPEX, Quadra costs $53.57 per 1080p stream, 63% cheaper than the T4's $144 per stream.
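The per-stream arithmetic is easy to verify from the figures above:

```python
# Verifying the per-stream savings quoted above.
quadra_w, t4_w = 0.71, 3.7        # watts per 1080p stream
print(f"Power saving: {1 - quadra_w / t4_w:.0%}")        # -> 81%

quadra_cost, t4_cost = 53.57, 144.0   # CAPEX $ per 1080p stream
print(f"CAPEX saving: {1 - quadra_cost / t4_cost:.0%}")  # -> 63%
```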

When it comes to gameplay, most gamers prioritize latency and quality. In addition to delivering these two key QoE elements, cloud gaming vendors must also focus on CAPEX, OPEX, and sustainability. By all these metrics, the ASIC-based Quadra is the ideal encoder for any cloud gaming production workflow.