Eco-Friendly Video Encoding Servers

In 2022, both the public and private sectors are increasingly calling for hyper-scale video services and streaming platforms to reduce their environmental impact. To accomplish this, the streaming industry requires video encoding solutions that meet users' video quality requirements while producing fewer carbon emissions and consuming less power. Hence the Eco-Friendly Video Encoding Servers.

Supermicro & NETINT teamed up on Eco-Friendly Video Encoding Servers for the Data Center

With IT managers, video encoding engineers, and public cloud video infrastructure suppliers under pressure to reduce costs and cut carbon emissions, NETINT and Supermicro have teamed up to offer resource-saving encoding solutions that reduce TCO by 20x while simultaneously lowering carbon emissions and power usage. 

Supermicro is a leading supplier of servers and storage systems with an innovative architecture that utilizes modular hardware subsystems, enabling the reuse of system enclosures, networking, storage, cooling fans, and power supplies. This modular architecture allows compute resources to be refreshed independently, helping data centers reduce refresh cycle costs.

NETINT video processing units enable any hyperscale video service or platform to move from computationally expensive software to ultra-efficient ASIC-based encoding. Leveraging the unique Codensity™ ASIC architecture, NETINT VPUs feature ultra-low latency AV1, AVIF, HEVC, and H.264 real-time encoding.

Introducing the Supermicro 4K Video Transcoding Server

NETINT and Supermicro have collaborated to produce the Supermicro 4K Video Transcoding Server. Powered by ten NETINT T408 video transcoders and built on the Supermicro A+ Server 1114S-WN10RT platform, the server features advanced decoding and encoding capability at up to 4K resolution with 10-bit HDR. With its ultra-high-performance ASIC-powered encoding, it can encode up to 40 live broadcast-quality HEVC and H.264 1080p60 streams simultaneously while reducing TCO and carbon emissions by up to 20x compared to software-based encoding.

NETINT VPUs can be easily installed in all Supermicro servers using U.2 or PCIe interfaces, providing an easy upgrade path for any video service or platform wanting to move from software running on x86 CPUs to far more economical and environmentally friendly hardware-based encoding without disrupting an existing transcoding workflow.

The Supermicro 4K Video Transcoding Server future-proofs hyper-scale real-time streaming video platforms with higher levels of performance than CPU-based software-encoding systems while simultaneously reducing TCO by as much as 10x and carbon emissions by 20x.
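To put the hardware's efficiency in perspective, here is a minimal back-of-envelope sketch in Python that combines figures cited in this collection (the 7-watt T408 figure appears later, in the Anbox Cloud feature list); it is illustrative arithmetic, not a measured benchmark.

  # Rough per-stream encoding power for the Supermicro 4K Video Transcoding
  # Server: 10 T408 modules at ~7 W each, handling 40 live 1080p60 streams.
  modules = 10
  watts_per_module = 7          # per-T408 power draw cited later in this collection
  streams = 40                  # cited live 1080p60 HEVC/H.264 capacity
  print(modules * watts_per_module / streams)   # ~1.75 W of encoding silicon per stream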

Argos for the Rest of Us – With AV1

We've been shipping ASIC-based transcoders since 2019!

When Google announced the ASIC-based Argos VCU in 2021, the trade press rightfully applauded. CNET announced that “Google supercharges YouTube with a custom video chip.” Ars Technica reported that Argos brought “up to 20-33x improvements in compute efficiency compared to… running software on traditional servers.” SemiAnalysis reported that Argos “Replaces 10 Million Intel CPUs.”

Of course, here at NETINT, none of these benefits were particularly surprising. We’ve been shipping ASIC-based transcoders since 2019. And while our transcoders have a different focus than Argos – we do live, they do VOD – we’ve been delivering the same performance improvements and savings to our customers since we started shipping. In fact, in 2021 NETINT customers encoded 200 billion minutes using our ASIC-based transcoding units.

Unlike Argos, you can actually buy Quadra units to “supercharge” your own applications

We’re currently shipping our fifth generation Quadra video processing unit (VPU) technology, which can transcode 1x 8Kp60/4x 4Kp60/16x 1080p60 in real-time. Unlike Argos, Quadra encodes AV1, along with HEVC and H.264. Also, unlike Argos, you can actually buy Quadra units to “supercharge” your own applications. If you only need H.264 and HEVC, check out the T408 for a lower price point. 

The bottom line is that if you’re producing live or interactive events on a large scale, you should check out our ASIC-based encoders. While we can’t promise to replace 10 million Intel CPUs, the 20-33x improvement in computing efficiency is right in our wheelhouse.

Figure 1: The NETINT Codensity ASIC-based Quadra T2A video processing unit. By utilizing the ubiquitous PCIe 2.5″ U.2 and AIC form factors, Quadra VPUs offer an elegant, simple, yet effective hardware architecture for scaling video encoding and processing in the data center.

Interactive Applications Powered by Video

Video & Interactive Media at the Cloud Edge

ASIC-based video encoders deliver the TCO required for cloud gaming, RTC, and interactive video applications.
This article describes the future of video streaming and the architecture of emerging cloud edge services built on high-performance video infrastructure. It examines the technologies and architectures of streaming application services and lays out the economics and viability of operating them.

Few topics in technology can match the exposure of edge computing and its bold vision for the future. Placing compute resources closer to users and sensors achieves lower latency, allowing new, cloud-based interactive services to run with local applications’ responsiveness. In principle, consumer applications can span interactive social video, game streaming, virtual reality, and augmented reality experiences powered by servers that live at the network edge. Users engage apps just like they do today, but using an application service in the cloud. In the case of interactive applications – like games and social media or communications apps – video compression handles the packing and unpacking of an application’s visual output from the edge to the user’s screen. Since all the work is performed in servers at the edge, client devices can be cheap, low power, wearable, and mobile. Reducing the cost to deliver low latency is critical to emerging interactive edge services.

Operational costs are managed by placing application servers and video encoding systems together in regional/metropolitan data centers. Latency can likewise be significantly reduced, and video quality increased, by employing specialized chips – Application-Specific Integrated Circuits (ASICs) – for video encoding. Using these specialized video encoders at local points of presence can achieve the latency required for cell-tower placement without the cost.

Advanced video CODECs must be combined with low-latency processing to deliver visual fidelity for interactive applications and collaborative social interaction. Dedicated video processing, including ASIC video encoders and core logic, can radically reduce the hardware required for video processing. And finally, placing compute at regional points of presence creates a balance between the vision of edge computing and the economics of operating consumer services at scale.

Powering the Exponential Growth of Video

Video technology has demonstrated remarkable versatility, with the potential to transmit real-time, interactive experiences to ubiquitous devices like PCs, TVs, and smartphones. Achieving more fluid collaborative and interactive experiences requires higher resolutions and framerates at lower latency. The challenges are operational cost, maintaining quality of service, guaranteeing high visual quality, and reducing motion-to-photon latency. A new generation of dedicated video transcoders can reduce the compute required at a given quality by ten times (10X) and improve performance per watt by twenty times (20X), thus improving the viability of video services.

NETINT solves the conundrum video services face when seeking to increase their user base economically while maintaining a high video quality experience. The company is a pioneer in ASIC-based video encoding and has introduced a family of dedicated video processors that combine video encoding, solid-state storage, and machine learning. These processors radically reduce the server footprint required for interactive entertainment like gaming and mixed reality to scale delivery on desktop, mobile, or head-mounted display applications from the cloud edge.

Forty Million Cloud Encoded HD Streams By 2025

Figure 1: Summary of this paper’s analysis of TCO, environmental impact, and density of today’s best-in-class CPU, GPU, and ASIC encoders. Source: TIRIAS Research

The legacy of internet video is video on demand (VOD) – stored video delivered as entertainment or instructional content whenever the viewer requests the file. This content is uploaded, transcoded into multiple quality and resolution levels, and distributed (streamed) over the internet. Live streams require real-time transcoding for rapid distribution to users. These streams can include a full spectrum of interactivity, from mobile social streams to interactive applications. Cloud-based applications can include games and apps that originated on PCs or smartphone platforms, running in the cloud, encoded as video, and experienced like native applications by remote users. TIRIAS Research estimates that high-quality, high-definition cloud video streams will expand from 4 million encoded concurrent streams today to over 40 million by 2025.

Growth in social or user-generated video consumption has been powered mainly by mobile networks starting with 4G and now 5G connected devices. Emerging application streaming services require alignment of content providers, service providers, processing technology, and low latency network infrastructure.

Cloud Edge Architectures for Emerging Applications

Figure 3: Latency requirements for emerging cloud streaming applications become increasingly viable as low latency targets can be achieved. Source: TIRIAS Research

Emerging applications and services require technology that can scale economically to many users without concern for resource constraints. Cloud streaming is extremely demanding, presenting a combination of technical challenges. Unlike a webpage, where content comes from many locations, streamed video can be aggregated in the cloud and packaged for rapid delivery at the cloud edge. High video quality and low latency provide users with a sense that they are experiencing a local, native application.

Compute Density Creates Viable Edge System Architectures

Moving video processing to the edge requires a massive improvement in computational efficiency. In the broadest sense, the cloud edge can describe small data centers operating at local points of presence or within cell-tower base stations, and locating compute servers there removes distance as a network latency factor. Base station deployments are expensive and physical space is minimal, creating formidable hurdles for consumer and commercial services. Placing cloud servers in regional points of presence is the most viable model, allowing a metropolitan area to be serviced by a single data center. The opportunity to tune the server environment and provide best-in-class systems architecture, networking, and operational efficiency allows high availability with a latency-optimized solution.

The Total Operating Cost of Streaming

The major factors determining the cost of operating cloud or edge servers include the capital cost of acquiring the servers and their projected useful life, power consumption, cooling, and facilities.

In the case of edge computing, the cost to deploy servers increases as you get closer to the edge. Generally, placing servers at scale in base stations is the most expensive – space is almost always incredibly limited, and operational costs such as rent, site cooling, and maintenance can be high. Dedicated rooftop enclosures, remote locations, and tall office building locations boost the expense of adding on-premises edge servers.

The higher cost and space constraints of base station placement lead some to conclude that we must settle for “close to the base station” within regional points of presence. Placing cloud servers in regional points of presence is the most viable model, allowing a major metropolitan area to be serviced by a single data center.

ASIC Encoding – Lowest TCO

The example below shows that the operating cost for video encoding is $580,533 using a software-based solution and $110,606 using GPUs, per one thousand 1080p30 channels running x265 Medium. With ASIC-based encoders, encoding cost falls by roughly a factor of ten, to a total cost of ownership (TCO) of just $52,403. These calculations use $0.08 per kWh power costs and a 3X multiple to include cooling and facilities. Servers are priced and amortized over three years.

Figure 6: The operating cost for video encoding alone is $580,533 with software-based encoding and $110,606 with GPUs per 1,000 1080p30 channels at x265 Medium. Using ASIC-based encoders significantly reduces encoding costs, with a TCO of $52,403. These calculations use $0.08 per kWh power costs and a 3X multiple to include cooling and facilities. Servers are priced and amortized over 3 years.
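To make the arithmetic behind these totals concrete, the following Python sketch reconstructs the software-encoding figure from the stated assumptions. The 125-server count and 509,175 kWh/year figure come from the TIRIAS analysis cited later in this collection; the ~$11,000 per-server price is our assumption, back-solved from the published total rather than quoted by the study.

  # Back-of-envelope TCO reconstruction (illustrative; assumptions flagged above)
  POWER_COST_PER_KWH = 0.08     # cited power cost
  FACILITIES_MULTIPLE = 3.0     # cited multiple for cooling and facilities
  AMORTIZATION_YEARS = 3        # cited amortization period

  def annual_tco(servers, server_price, kwh_per_year):
      capex = servers * server_price / AMORTIZATION_YEARS
      opex = kwh_per_year * POWER_COST_PER_KWH * FACILITIES_MULTIPLE
      return capex + opex

  # Software encoding: 125 servers (assumed ~$11,000 each), ~509,175 kWh/year
  print(round(annual_tco(125, 11_000, 509_175)))   # ~580,535, matching the ~$580K cited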

The Future

Edge deployment of interactive application services will require high-density compute to minimize footprint, maximize reliability, and decrease operating costs. Applications at the cloud edge will rely heavily on video encoding technology and upgraded network technology, including 5G, to deliver these experiences to end-users with the lowest possible latency. As the network and video encoding improve, the cloud will become almost instantaneous, providing experiences that fool users' senses and make remote applications seem like they are running locally on high-performance clients. Emerging ASIC-based encoders are a breakthrough in density, performance, and cost, enabling edge applications to scale economically. By lowering latency, they allow real-time technologies, including machine learning and sensor data fusion, to create entirely new, intelligent, and interactive experiences.

For more information on NETINT edge encoders, visit www.netint.ca online. To read more on the cutting edge of video processing, edge computing, machine learning and more visit www.TIRIASresearch.com/research online.

Video Encoding in the Cloud on Arm

By deploying Android in the cloud, mobile gaming applications can be virtualized to deliver a scalable and efficient platform that combines hardware and software for high-density real-time video encoding.

 

“Mobile gaming is expected to outpace console growth and, according to IDC, will continue to be a dynamic market with a bright future as 5G becomes a key driver.”


Introduction

This article examines the practical and commercial use cases driving the interest in and development of a new cloud-based infrastructure built using Arm-based servers from Ampere hosting NETINT Codensity™ NVMe Video Transcoders running in the Canonical Anbox Cloud. The combined solution offers an example of how a service provider, developer, or hyperscale social gaming platform can take advantage of Arm native computing in a cloud context. A case study focused on cloud gaming demonstrates each solution component’s advantage when deploying a scalable and efficient Arm-based system as a new computing class.

Arm Native Cloud infrastructure

The Arm architecture dominates the mobile processor market with its unrivaled ability to maximize power efficiency. As a result, there are now billions of Arm-based chips used in mobile phones, laptops, tablets, IoT devices, and embedded applications throughout the world. Arm’s ability to deliver high performance at a fraction of the power required by existing architectures has driven a new era in mobile computing, and video is next.

As a result, another period of Arm-driven innovation is coming to the data center. Facilitating this migration is an Arm ecosystem powering a cloud-native computing environment from edge to core, enabled by an ultra-efficient architectural platform built from the ground up to scale to unprecedented video encoding density.

Ampere’s server platform leads the industry, beginning with the Ampere® Altra™ based on the Arm RISC architecture. Combining server compute density, instruction set compatibility, and a virtualization layer capable of instantiating hundreds of Android instances offers a unique platform for innovation and new use cases, leading the next wave of interactive, connected applications and services.

Existing x86 server solutions based on 30-year-old technology cannot deliver the scalability, cost, efficiency, power, and core density required by interactive video applications like cloud mobile gaming. The Arm Native Cloud enables interactive video workloads to run on Arm-compatible servers using Ampere processors that provide the highest architectural compatibility possible.

Canonical Anbox Cloud

Canonical Anbox Cloud allows highly efficient containerized workloads using Android as a guest operating system to power interactive application experiences through a platform that provides more control over performance and infrastructure costs. It offers the flexibility to scale video encoding operations based on user demand and service request fluctuations.

Anbox Cloud can be hosted in the public cloud for near-infinite capacity, high reliability, and demand elasticity, or on edge infrastructure where low latency and data privacy are a priority. Public and private cloud service providers can easily integrate Anbox Cloud into their offerings to enable mobile interactive video and gaming applications in a PaaS or SaaS service model. Telecommunication providers can create innovative value-added services based on virtualized mobile devices for their 4G, LTE, and 5G mobile network customers.

Anbox Cloud Software Stack

Anbox Cloud is built on existing software technologies from Canonical, namely LXD as a container hypervisor and the Ubuntu operating system. The Anbox runtime environment integrates WebRTC-based streaming to a remote user, utilizing either software or hardware-accelerated video encoding.

NETINT Running on Anbox Cloud Powers Cloud Mobile Gaming

While cloud mobile gaming represents an enormous market opportunity, the use case presents significant challenges regarding scalability and performance. Game developers need to ensure excellent user experiences, which means they must achieve low latency while maintaining high video quality. At the same time, they need to operate on a cost-effective platform that can be easily scaled.
Video Features include:

  • Real-time H.264 and H.265 decoding and encoding.
  • Deterministic, ultra-low latency encoding able to deliver the most responsive gameplay experience.
  • 8K/4K/HD video resolutions supported.
  • HDR format support for HLG, HDR10, HDR10+, and Dolby Vision.
  • Flexible 2.5″ NVMe U.2 modular plug-and-play form factor for easy integration into common data center server configurations.
  • High density: encodes 16 720p30 streams per T408 U.2 module and 64 720p30 streams per T432 add-in card (AIC).
  • Ultra-low power consumption: just 7 watts per T408 U.2 module and 27 watts per T432 AIC.
  • Native support for containerization and virtualization.
  • Integrated with FFmpeg (a usage sketch follows below).
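Since the feature list notes FFmpeg integration, a minimal usage sketch follows; it is illustrative only. The encoder name "h264_ni_enc" is a placeholder, as actual encoder names depend on the NETINT SDK and the FFmpeg build in use, and the sketch simply shells out to FFmpeg from Python.

  # Hypothetical sketch: transcoding a 720p game stream with a NETINT-enabled
  # FFmpeg build. The encoder name below is a placeholder, not a confirmed API.
  import subprocess

  subprocess.run([
      "ffmpeg", "-y",
      "-i", "gameplay_720p.mp4",     # captured game output (example input)
      "-c:v", "h264_ni_enc",         # placeholder NETINT hardware encoder name
      "-b:v", "4M",                  # example target bitrate for 720p
      "out_720p.mp4",
  ], check=True)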

The Anbox solution is tuned for Ampere eMAG and Altra servers to showcase cloud gaming efficiency and performance. In the image below, it’s easy to see the superior density and performance of BombSquad1, a 3D mobile game, running at extremely high density on an eMAG server. The game ran at various frame rates at 720p resolution using hardware-accelerated versus software video encoding, providing a good representative model of real-world 3D game performance.

Advantage of Hardware Accelerated Video Encoding

The above graph highlights the benefit of using hardware-accelerated video encoding to offload the CPU-intensive work of video streaming. In this case study, the video encoder required no more than 60% of available CPU cycles under high-density stress testing. As density increases, frame rates significantly degrade until the system saturates.

Using our Codensity technology, the encoding overhead is removed from the CPU, delivering a two-times increase in density. With this added capacity, the platform can support a higher number of instances per server at more stable frame rates than the same solution using software-based encoding running solely on the CPU.

Anbox Cloud enables graphics- and memory-intensive mobile games to be scaled to many users while retaining the responsiveness and ultra-low latency demanded by gamers. The next figure shows the result of adding more video acceleration hardware to the same eMAG system to utilize the spare cycles freed up by offloading the encoding of the game streams.

Balancing CPU, GPU, and ASIC Utilization for Optimal Performance

As this figure illustrates, balancing the system across compute (CPU), video rendering (GPU), and video streaming (encoders) allows Anbox Cloud to deliver the maximum game instance density for the entire platform, significantly reducing the per-instance cost of delivering games in the cloud.

Canonical, Ampere, and NETINT Establish Video Encoding in the Cloud on Arm

The total cost of ownership (TCO) of the video platform is a critical evaluation criterion for any service provider considering a cloud-based gaming service. The TCO of an Arm-based system running native Android instances makes it the most cost-effective method for delivering a cloud gaming service. Based on the analysis from this study, the platform delivers a three-times improvement in overall TCO for the mobile cloud gaming use case. NETINT, Ampere, and Canonical are proud to be driving a new era for cloud gaming using a combination of Arm and ASIC-based hardware, along with Android native applications.

NETINT Joins the Alliance for Open Media

WAKEFIELD, Mass. – March 30, 2021 – The Alliance for Open Media (AOMedia) today announced that NETINT Technologies, an innovator of ASIC-based video processing solutions for low-latency video transcoding, has joined the organization at the Promoter level. As a member of the Alliance, NETINT Technologies will collaborate with AOMedia members, the leading internet and media technology companies, to advance open standards for media compression and delivery over the web while promoting hardware video encoding adoption.

NETINT Technologies recently unveiled the world’s first commercially available hardware AV1 transcoder enabling video operators to easily upgrade their x86 or Arm-based software encoding operations to hardware-based video encoding. With the explosion of video-enabled communication apps and entertainment services, there is a significant need for high quality video encoding solutions that integrate easily with existing cloud and data center hosted video encoding workflows.

NETINT Technologies’ Codensity™ G5 ASIC-based AV1 video transcoders enable video operators to unlock the full potential for high-quality video in cloud mobile gaming, live streaming, video conferencing, remote desktop, social video streaming, OTT, and AR/VR applications and services.

“We are thrilled to join AOMedia as the first company to integrate AV1 into a data center ASIC-based hardware encoder – a video streaming industry first,” said Alex Liu, Co-Founder and COO of NETINT. “Like AOMedia, we are passionate about building impactful video solutions that leave an indelible mark on the world and we look forward to working with our fellow members to deliver better video streaming experiences.”

As a result of AV1’s improved data compression over existing standards, fewer bits need to be streamed to reach a high level of visual quality and user experience. NETINT Technologies’ hardware transcoders are available in the compact 2.5” U.2 NVMe and AIC (add-in-card) form factors, making it simple for video services and streaming platforms to reduce TCO while increasing their density and performance up to 10X without replacing their x86 or Arm-based servers.

“We are excited to welcome NETINT Technologies to AOMedia,” said Matt Frost, AOMedia Vice President of Communications and Membership, and Director at Google. “NETINT’s expertise in ASIC-based hardware encoding for hyperscale and premium video platforms will benefit the video streaming ecosystem. We look forward to collaborating with NETINT on our goal to grow adoption of the AV1 standard in hardware and increase the openness and interoperability of internet video.”

About NETINT Technologies

NETINT Technologies is an innovator of ASIC-based video processing solutions for low-latency video transcoding that operate on x86 and Arm-based servers. Users of NETINT solutions realize a 10X increase in encoding density and a 20X reduction in carbon emissions compared to CPU-based software encoding solutions.

NETINT makes it seamless to move from software to hardware-based video encoding so that hyper-scale services and platforms can unlock the full potential in their computing infrastructure. NETINT is a VC-backed company made up of silicon innovators passionate about building high-impact solutions that leave an indelible mark on the world. NETINT R&D and business offices are in Vancouver, Toronto, and Shanghai. Visit netint.com to learn more.

About the Alliance for Open Media

Launched in 2015, the Alliance for Open Media (AOMedia) was formed to define and develop media technologies to address marketplace demand for an open standard for video compression and delivery over the web. Board-level, Founding Members include Amazon, Apple, Arm, Cisco, Facebook, Google, Intel, Microsoft, Mozilla, Netflix, NVIDIA, Samsung Electronics and Tencent. AOMedia’s open-source, royalty-free video codec AV1 is a significant milestone in the ability to deliver a next-generation video format that is interoperable, open, optimized for internet delivery and scalable to any modern device at any bandwidth. Visit www.aomedia.org or follow AOMedia on Twitter at @a4omedia.

How ASICs Can Save The Planet

Earth Day includes three days of events focused on global climate action. For most participants, Earth Day is more than three days of obligatory gestures toward the environment; it’s a chance to get educated and focused on the issues that society and businesses grapple with when pushing for meaningful environmental change.

Many companies in the technology industry have pledged to become carbon neutral, and most are actively pushing their sustainability initiatives forward. Despite these great strides, a significant pollutant remains: the proliferation of video streaming services. Cisco reports that upwards of 80% of network traffic is consumed by video during the peak time of day. Most companies reporting carbon emissions do so for the data centers they own and control but do not account for the rest of the network, and that is where the problem lies.

Due to improvements in the energy efficiency of data centers, networks, and devices, streaming video has presented a net positive for the environment compared with traditional (physical) forms of media distribution. However, slowing efficiency gains, massive growth in users, and new demands from emerging technologies like artificial intelligence (AI) that require significant computing power raise new concerns about the overall environmental impacts of the sector — so what can we do?

The answers involve several structural changes to how we architect our networks and to the technologies and standards used to encode streaming content, whether for VOD or live delivery. Energy consumption can be managed more effectively by placing application and video encoding servers in regional data centers located close to the user. With this approach, latency is significantly reduced and video quality increased by employing specialized chips – Application-Specific Integrated Circuits (ASICs) – for video encoding. ASIC-based video encoders can reduce the hardware required for video processing and transcoding by a factor of ten while increasing performance per watt by twenty times.

Energy Efficient Low Latency Video

ASIC video transcoders provide the lowest-latency encoding available compared with CPU, GPU, and even FPGA approaches. At just 8 ms for 1080p using HEVC, they enable AR/VR, desktop as a service, cloud gaming, mobile gaming, and other applications requiring real-time performance. The challenge in video encoding is to balance variables that generally work against one another, such as visual quality and bitrate, or latency and visual quality. With highly energy-efficient ASICs, these are no longer opposing forces.

The Trouble With CPU-Based Software Encoding

Moving video encoding to the edge requires a massive improvement in computational efficiency and performance (density) since base station deployments are expensive and physical space is minimal. For services to meet subscriber demand sustainably, a radical reduction in cost and carbon emissions footprint is needed.

In a 2020 study, TIRIAS Research reported that transcoding 1,000 live HD video streams using CPU-based software requires 125 1RU servers and carries an operating cost of more than $580,535 each year. A software encoding workflow running in the typical cloud data center will throw off 217 Metric Tons of CO2, compared to 11.7 Metric Tons for ASIC-based encoding operations.

With NETINT ASIC-based video transcoders, 12.5 1RU servers can encode 1,000 live HD channels, compared to 25 Nvidia T4 GPU instances. ASIC-powered high-density video encoding servers can easily co-locate with application processors in regional POPs or cell-tower base stations. The power consumption advantage of ASICs over GPU- or CPU-based encoding translates proportionally into reduced environmental impact and carbon emissions.

Environmental Considerations of Video Encoding

Data center and large-scale service operators face a renewed challenge: decreasing energy consumption to lower operating costs and emissions while maintaining the quality of service their subscribers expect. Environmental impact is hard to manage because, as users embrace new services, computational workloads and usage increase, pressuring data centers to add capacity, which drives up power consumption and carbon emissions.

Rarely does an opportunity arrive to create an order-of-magnitude reduction in carbon emissions. Figure 2 shows the power consumption and CO2 impact of video encoding, comparing CPU-based and ASIC-based encoding operations in kWh and CO2 emissions per year.

 

Using 1,000 concurrent live 1080p30 channels as a baseline, ASIC-based encoding consumes 27,375 kWh per year, leading to 11.7 Metric Tons of CO2 emissions. Comparatively, CPU-based encoding consumes 509,175 kWh per year, throwing off 217 Metric Tons of CO2. TIRIAS Research estimates that by 2025, with over 40 million concurrent streams, cloud encoding using x86 CPUs will produce 8,680,000 Metric Tons of CO2 per year, equivalent to the exhaust output of 1,000,000 conventional automobiles. By comparison, video encoding based on ASICs would reduce the equivalent to 53,917 cars. Video encoding ASICs deliver a 20X improvement in energy efficiency over CPUs.
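The figures above are internally consistent, as the short Python sketch below shows; the grid emission factor is implied by the published numbers rather than quoted directly, so treat it as a reconstruction.

  # Reconstructing the CO2 arithmetic from the cited figures
  kwh_cpu, kwh_asic = 509_175, 27_375        # kWh/year per 1,000 channels
  tons_cpu = 217.0                           # Metric Tons CO2/year, CPU (cited)
  factor = tons_cpu / kwh_cpu                # ~0.426 kg CO2 per kWh, implied
  print(round(kwh_asic * factor, 1))         # ~11.7 Metric Tons for ASIC

  blocks_2025 = 40_000_000 / 1_000           # 40M streams in blocks of 1,000
  print(blocks_2025 * tons_cpu)              # 8,680,000 Metric Tons/year (CPU)
  tons_per_car = 8_680_000 / 1_000_000       # implied Metric Tons per automobile
  print(round(blocks_2025 * 11.7 / tons_per_car))   # ~53,917 cars-equivalent (ASIC)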

Streaming Sustainability – The Future

According to The Shift Project, watching internet-delivered (streamed) video accounts for the most significant chunk of network traffic. Online video generates 300 million Metric Tons of CO2 each year, roughly 1% of global emissions. Approximately one-third of that video traffic is adult content; premium on-demand services like Netflix and Amazon Prime account for another third; and the final third comes from YouTube and social media videos.

As for games, researchers behind a University of California study found that US gamers use 2.4% of their households’ electricity, or 32 terawatt-hours of energy, every year. Particularly worrisome is that streaming games consumes more energy than console-based gameplay, meaning that carbon emissions may worsen as more people migrate to cloud gaming platforms.

The environmental future we want will require changes in how we encode and distribute video if we hope to meet our users’ expectations while making progress toward sustainability. ASIC-powered encoders offer a breakthrough in density, performance, and cost, enabling edge-located video applications to scale economically while significantly reducing environmental impact. By lowering latency, ASIC transcoders can enable real-time video solutions and interactive entertainment experiences like cloud gaming and AR/VR to be delivered sustainably and profitably.

Ultra-Low Latency Cloud Video Applications

The Case for Ultra-low Latency Performance

Emerging video services require encoding technology that can scale economically to any number of users. Cloud streaming is extremely demanding, presenting a combination of technical challenges. Unlike delivering complex web pages where content can be sourced from many locations, streamed video can be aggregated in the cloud and packaged for rapid delivery. High video quality and low latency must be achieved together to provide users with a sense that they are experiencing a local, native application.

Latency requirements for emerging cloud streaming applications become increasingly viable as low-latency targets are achieved.

The instantaneous cloud is a new phase in computing where cloud technologies evolve to deliver native experiences to any smartphone, PC, or XR display. Delivering a sense of local application presence, or immersive virtual reality (VR) total sensory presence, requires low latency and high visual quality to ensure users have parity or better in their cloud-based application experience. The cloud can provide more general compute and 3D graphics performance than smartphones or low-performance PC clients. However, driving the visual experience with low latency is a relatively expensive and challenging problem to solve from the cloud. Moving these experiences closer to users can lower latency by avoiding long runs, network hops, and network congestion.

Network Topology

Making the instantaneous cloud a reality requires servers to be placed near users, within regional points of presence or cell-tower base stations, reaching client devices over fast, low-latency networks, including fiber to the home and 5G. Traditionally, servers in large, centralized data centers delivered applications with relatively high latency, and those high latencies remain today. Real-world local networks (~100 miles) can deliver just under 20 ms of latency under ideal conditions, while latencies of 40 ms to 80 ms are more typical. Target network latencies for 5G are below 10 ms nationwide (US) and below 5 ms for regional data centers.

Deploying 5G infrastructure and placing servers in a regional data center, within a fiber run of network service providers’ central offices or base stations in the same city, can reduce network latency significantly while providing virtually identical latency to base-station placement without the associated cost. Partnerships that improve connectivity and network performance between regional data centers and internet service providers are critical to delivering low latency applications. Network service providers, local data centers, and cloud service providers must coordinate to reduce latency.

Network service providers – which in the US include AT&T, Verizon, T-Mobile, Comcast, Charter Spectrum, and Lumen CenturyLink – can avoid the expense of base station deployments by employing network central offices for deploying servers. Often hosting legacy services and infrastructure, these central offices and points of presence must evolve to support large-scale, high-performance application services. Network service providers are in a perfect storm of soaring demand and pressure to upgrade: the deployment of 5G, the effects of COVID-19, and high network utilization are all putting pressure on resources to deploy additional data center infrastructure.

Local ISPs (internet service providers) and data centers can create improved connectivity to mobile and home network base stations, lowering the latency of these connections by deploying fiber and tightening network relationships. Agnostic to both network providers and cloud service providers, they are well situated to become a deployment hub for cloud servers.

Cloud application providers with a high level of vertical integration, such as Amazon, Google, and Microsoft, already have low latency Content Distribution Networks (CDNs), making them ideally situated to provide cloud services at low latency all the way to the wired/wireless network service providers. Amazon’s low latency CDNs and the Twitch service can be utilized to provide cloud video services and cloud gaming. Google has created a local presence and runs fiber in many cities, giving it the opportunity to optimize for latency in its data centers. Microsoft has launched game streaming to augment Xbox subscriptions and extend those subscriptions to any client platform, even Android and iOS.

Cloud Video Requirements

The emerging cloud streaming experiences poised for growth have common technical requirements. First, they require visuals of high quality and resolution. Second, they require low latency, so that user input can be rapidly incorporated into the visual output. Third, they require scalability such that users can access these services reliably and service providers can offer them economically.

High Visual Quality

Users have become accustomed to crisp, high-quality images on their smartphones and HD screens. Game graphics are particularly sensitive to visual artifacts from compression and feature fast movement that diminishes frame-to-frame compression. Achieving better latency has required sacrificing visual quality and/or bitrate, and the step-function improvements offered by emerging video codecs, including AV1, come with commensurate increases in latency and compute requirements. Today, game streaming at 1080p utilizing H.265 or VP9 video encoding requires about 3 GB/hour, or 7 Mb/s with peaks over 15 Mb/s. Looking ahead, cutting-edge AAA games threaten to make server hardware obsolete quickly: running new titles will require switching to more modern graphics hardware, and user expectations shaped by native PC gameplay will require 3D rendering, resolution, and video encoding quality at parity with increasingly capable native PC and game console gameplay.
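As a quick unit check on the streaming figure above, 7 Mb/s and "about 3 GB/hour" describe the same stream:

  # 7 Mb/s sustained -> gigabytes per hour
  mbps = 7
  gb_per_hour = mbps * 3600 / 8 / 1000   # Mb/s * seconds/hour, bits->bytes, MB->GB
  print(gb_per_hour)                     # 3.15 GB/hour, i.e. "about 3 GB/hour"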

Low Latency Video

The raw latency of video encoding is a function of the complexity of the target video CODEC and the performance of the underlying processor. Today’s ASICs are the lowest-latency encoders available, at 4 ms (720p) to 8 ms (1080p) using H.265, a complex and computationally expensive CODEC. Combined with an ideal network latency of 5 ms for local loops, this leaves a plentiful latency budget for complex interactive applications. Game streaming to home and mobile devices has become a beachhead for consumer interactive application streaming, and the network is proving critical to the user experience.

A wide range of low latency use cases is driving the emergence of streaming architectures and video processing technologies. Games that require fast response, quick pointing precision, or easily noticed visual feedback suffer when latencies exceed several frames, which at 60 FPS is 16.6 ms per frame. For emerging augmented reality (AR) and immersive VR, tracking user movements and translating camera pose into a real-time visual experience requires sub-frame motion-to-photon latency. Movements and head/pose tracking must be incorporated as soon as possible, ideally in the next frame of visual output. This next-frame requirement creates the often-cited “20 ms motion to photon” requirement for immersive VR – 20 ms being the time required to achieve worst-case three frames of latency (missing only two frames) at 90 frames per second. In virtual reality, the term presence describes the user’s sensation that they are in a particular space as if it were reality. To deliver presence in cloud-based VR, every step in the computing and networking architecture must work together to achieve sub-20 ms latency with high-quality, high-resolution, high-framerate visuals.
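To see how tight that budget is, here is an illustrative motion-to-photon breakdown in Python; the encode and network figures are the ones cited above, while the remaining component values are assumptions chosen only to show how quickly 20 ms is consumed.

  # Illustrative cloud-VR motion-to-photon budget (assumed values flagged)
  budget_ms = {
      "input and pose sampling": 2.0,     # assumption
      "3D rendering": 5.0,                # assumption
      "encode, 1080p HEVC ASIC": 8.0,     # cited above
      "network, regional loop": 5.0,      # cited 5 ms regional target
      "decode and display": 3.0,          # assumption
  }
  print(sum(budget_ms.values()))          # 23 ms: already past the 20 ms target

Even with best-in-class 8 ms ASIC encoding and a 5 ms regional network, the assumed client-side and rendering costs push this sketch past 20 ms, which is exactly why every stage must be optimized together.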

Variables that normally work against one another must be simultaneously tamed: visual quality vs. bitrate and latency vs. visual quality can no longer be opposing forces. Looking forward, the demand for lower latency continues far into the future, with everything from immersive VR to cloud-powered virtual agents requiring a fast response to user input.

Deploying 5G front ends and back ends and utilizing compute resources closer to the user is critical. From a server perspective, dynamic resource allocation should allow better scalability in achieving parity with native experiences, but the high-quality displays in today’s smartphones put pressure on both resolution and visual quality. Achieving high quality with low latency remains the gap that high-speed ASIC encoding targets, improving the experience on mobile devices and complementing low latency 5G networks.

To achieve the scale and economics required, video processing will need to scale in the cloud, and processing inefficiencies once tolerable now need to be eliminated.

New streaming architectures need to reduce latency by placing compute closer to users, and to reduce upstream bandwidth by processing application data and video locally, minimizing backhaul traffic to central data centers. The technologies linking the data center, regional point of presence, wireless or wired base stations, and users must be tightly integrated. This tight integration requires network service providers to work together with application service providers to ensure quality of service for user applications.

High-Density Video Encoding in the Cloud

Cloud computing offers an exciting vision where immersive user experiences are decoupled from the compute resources necessary to deliver those experiences. This decoupling gives service providers a special opportunity to deliver a broad range of interactive services using a wide variety of devices from smartphones to connected televisions to game consoles to computers.

However, the explosive growth of these new video services comes at a cost. In this article, you’ll learn about the adverse environmental impact as software-based encoding infrastructures struggle to meet this new demand. Beyond the environmental impact, software-based encoding solutions do not scale economically. We’ll show how new technologies like ASIC-based video transcoders and advanced video codecs reduce TCO and carbon emissions, making new services like cloud gaming, virtual desktop, real-time video conferencing, and AR/VR viable.

Service providers offering interactive video services also face additional technical challenges including demanding latency requirements. Let’s look at how cloud architectures must evolve to adapt to these new operational requirements.

The Cloud – Reimagined

Placing compute resources in the Cloud enables a new generation of interactive video services to operate with the responsiveness of a local application. New interactive services shift unprecedented stress onto network computing resources, requiring service providers to look to more efficient video encoding technologies so that they can scale their operations with better cost and energy efficiency.

New cloud architectures that combine hardware encoders hosted on commodity x86 and Arm-based servers promise to resolve user experience gaps while changing the economics of high-scale video for the better. These architectures distribute the video encoding and compute functions closer to the user, reducing end-to-end latency to the range of 100-200 ms, decreasing backhaul traffic, and enabling new forms of data/sensor/display processing.

Consumer applications like interactive social video, cloud and mobile game streaming, and virtual and augmented reality enable users to engage anywhere, at any time, and on any device using an application service running remotely in the cloud. The primary way to reduce latency and improve responsiveness and the viewer experience is to position compute resources closer to the user. This means taking a decentralized approach in which some of the architectural computing blocks are moved out of the data center (cloud) to the network edge or an alternate location.

As we move these compute resources closer to the user, achieving the economy of scale needed for hyperscale data centers becomes challenging. Fortunately, these costs can be managed by placing application servers and video encoding systems together in regional/metropolitan data centers so that video and application control traffic need only to hop from the regional data center to the local base station or the network service provider central office. In so doing, system latency is significantly reduced, and video quality is increased by employing purpose-built video encoding hardware powered by ASICs that are hosted on x86 and Arm-based servers.

Advanced Video CODECs Are Driving the Need for ASICs

Video technology is instrumental in nearly every app or product in the market, which means balancing bitrate, video quality, framerate, and resolution will forever drive the development of next-generation codec technologies that can deliver a higher compression advantage. As new video codec standards are developed, each comes with a promise of halving the bitrate of the current/previous standard. Consider that H.264/AVC delivered a 50% bitrate efficiency improvement over MPEG-2, HEVC delivers a 50% advantage over H.264/AVC, and AV1 is well on track to beat HEVC by 50%. But with each standard comes an ever-higher order of computing complexity. The most significant inhibitor to a service adopting a more efficient video codec is the limit on how much computing resource can be spent on video encoding tasks.
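The halving rule of thumb implies a simple bitrate ladder, sketched below for a hypothetical 16 Mb/s MPEG-2 baseline; real-world gains vary with content, resolution, and encoder settings.

  # Bitrate ladder implied by successive ~50% codec efficiency gains
  bitrate = 16.0   # Mb/s, hypothetical MPEG-2 baseline for one channel
  for codec in ["MPEG-2", "H.264/AVC", "HEVC", "AV1"]:
      print(f"{codec}: ~{bitrate:.0f} Mb/s")
      bitrate /= 2   # each generation targets roughly half the bitrate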

Environmental and Economic Considerations for the Cloud

Dedicated video encoding and image processing using ASIC-based video encoders can radically reduce the hardware required for video processing, enabling the video encoding operation to be decentralized and operated at regional points of presence to create a balance between Cloud computing and the economic realities of operating a consumer service at hyper-scale.

For services needing to scale, the hurdles include cost, quality of service, visual quality, and motion-to-photon latency. Unfortunately, the cost of deployment and operations increases sharply as compute resources move closer to the user. Making cloud computing affordable for interactive applications requires network topologies that balance the economics of high scale with the need for low latency and excellent visual quality. ASIC-based encoding has enabled a new generation of video transcoders that can improve encoding server density by a factor of ten while reducing power consumption and improving environmental impact by a factor of twenty.

NETINT is a pioneer in ASIC-based encoding with a family of dedicated video transcoders that are plug-and-play with your existing x86 and Arm-based servers. These innovative video transcoding solutions can radically reduce your server footprint for video encoding, and they scale economically to deliver any native desktop, mobile, or head-mounted display application from the cloud while simultaneously minimizing environmental impact and cutting TCO.

World’s First AV1 Encoder for the Data Center

Leveraging next-generation NETINT ASIC technology, new video transcoders will enable up to 100 live 4Kp60 AV1 video streams on a standard x86 or Arm-based server.

The move to advanced video codecs like AV1 has created a software squeeze, as codec complexity is increasing faster than the performance optimization of software encoders. NETINT Technologies unveiled a family of ASIC-based AV1 video transcoders packaged in 2.5″ U.2 NVMe and PCIe Add-In-Card (AIC) form factors. This unique solution is designed so that video operators can easily upgrade their x86 or Arm-based data center servers to hardware-based video encoding.

The all-new Codensity ASIC-powered video transcoders offer an upgrade path to hardware on x86 and Arm servers while boasting speeds of up to 7,680 FPS for broadcast-quality 4K AV1. The efficiency of an ASIC combined with the codec advantages of AV1 means applications and services that require real-time, ultra-low latency performance can make a step-function improvement in the user experience they deliver.

“As a royalty-free codec that is more efficient than other standards-based codecs, the potential for AV1 to bring a tangible benefit to streaming apps and services is very real,” commented Matt Frost, Director, Product Management at Google. “Even still, compression efficiency comes at a cost of computing. I am delighted that the AV1 ecosystem will soon benefit from having an ASIC-based hardware encoding solution available for the data center.”

“NETINT Codensity ASICs are the video and neural processing and encoding engines that power all NETINT video transcoding solutions,” stated Alex Liu, Co-Founder and COO of NETINT. “Our groundbreaking ASIC technology offers a 10X density improvement over software encoders and a 20X reduction in energy consumption compared to CPU-based workflows. With AV1, we are thrilled to enable the industry to realize the benefits of this incredible codec.”

QUADRA AV1 VIDEO TRANSCODERS

NETINT is on a mission to switch the world from software to hardware-based video encoding using ASICs. The all-new Codensity G5 ASIC combines superior AV1, AVIF, HEVC, and H.264 real-time encoding with support for 8K HDR including Dolby Vision, and hardware acceleration for video intelligence, data mining, and other ML and AI applications and services.

The second-generation NETINT video transcoders, Quadra T1, Quadra T2, and Quadra T4, can be easily installed in virtually any x86 or Arm-based server. This provides an easy upgrade path for any video service or platform wanting to move from software to supercharged hardware-based encoding without needing to replace its transcoding and media processing infrastructure.

Quadra T1 ships in the ubiquitous U.2 NVMe form factor and includes a single Codensity G5 ASIC that can encode four (4) live 4Kp60 10-bit AV1 streams with broadcast quality.

Quadra T2 includes two (2) Codensity G5 ASICs and is packaged in an HHHL AIC form factor. The T2 is capable of encoding eight (8) live 4Kp60 10-bit AV1 streams at broadcast quality.

Quadra T4 comes packaged as an FH 3/4L AIC and features four (4) Codensity G5 ASICs. The T4 is able to encode sixteen (16) live 4Kp60 10-bit AV1 streams with broadcast quality.
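As the three paragraphs above show, capacity scales linearly with ASIC count; the small sketch below simply restates that relationship, deriving the per-ASIC density of four 4Kp60 AV1 streams from the figures given.

  # Quadra family capacity, derived from the stream counts above
  STREAMS_PER_G5 = 4   # live 4Kp60 10-bit AV1 streams per Codensity G5 ASIC
  for name, asics in [("Quadra T1", 1), ("Quadra T2", 2), ("Quadra T4", 4)]:
      print(f"{name}: {asics * STREAMS_PER_G5} x 4Kp60 AV1")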

AVAILABILITY

Samples of the new Quadra T1, T2, and T4 AV1 video transcoders have been available since the second quarter (Q2) of 2021. Interested video service operators and platforms are encouraged to contact NETINT at sales@netint.com for more information and to reserve an evaluation unit.

ABOUT NETINT

NETINT Technologies is an innovator of ASIC-based video processing solutions for low-latency video transcoding that operate on x86 and Arm-based servers. Users of NETINT solutions realize a 10X increase in encoding density and a 20X reduction in carbon emissions compared to CPU-based software encoding solutions.

NETINT makes it seamless to move from software to hardware-based video encoding so that hyper-scale services and platforms can unlock the full potential in their computing infrastructure. NETINT is a VC-backed company made up of silicon innovators passionate about building impactful solutions that leave an indelible mark on the world. NETINT offices and R&D facilities are in Vancouver, Toronto, and Shanghai. To learn more, visit netint.com.