
Video Transcoder versus Video Processing Unit aka VPU
Cloud computing offers an exciting vision where immersive user experiences are decoupled from the compute resources necessary to deliver those experiences. This decoupling gives service providers a special opportunity to deliver a broad range of interactive services using a wide variety of devices from smartphones to connected televisions to game consoles to computers.
However, the explosive growth of these new video services comes at a cost. In this article, you’ll learn about the adverse environmental impact that results as software-based encoding infrastructures struggle to meet this new demand. Beyond the environmental impact, software-based encoding solutions do not scale economically. We’ll show how new technologies like ASIC-based video transcoders and advanced video codecs are reducing both TCO and carbon emissions, making new services like cloud gaming, virtual desktops, real-time video conferencing, and AR/VR viable.
Service providers offering interactive video services also face additional technical challenges including demanding latency requirements. Let’s look at how cloud architectures must evolve to adapt to these new operational requirements.
Placing compute resources in the Cloud enables a new generation of interactive video services to operate with the responsiveness of a local application. New interactive services shift unprecedented stress onto network computing resources, requiring service providers to look to more efficient video encoding technologies so that they can scale their operations with better cost and energy efficiency.
New cloud architectures that pair hardware encoders with commodity x86 and Arm-based servers promise to resolve user experience gaps while changing the economics of high-scale video for the better. These architectures distribute the video encoding and compute functions closer to the user, reducing end-to-end latency to the range of 100-200 ms, decreasing backhaul traffic, and enabling new forms of data, sensor, and display processing.
Consumer applications like interactive social video, cloud and mobile game streaming, and virtual and augmented reality let users engage anywhere, at any time, and on any device through an application service running remotely in the cloud. The primary way to reduce latency and improve responsiveness and the viewer experience is to position compute resources closer to the user. This means taking a decentralized approach in which some of the architectural computing blocks move out of the central data center (cloud) to the network edge or another intermediate location.
As we move these compute resources closer to the user, achieving the economies of scale enjoyed by hyperscale data centers becomes challenging. Fortunately, these costs can be managed by co-locating application servers and video encoding systems in regional/metropolitan data centers, so that video and application control traffic need only hop from the regional data center to the local base station or the network service provider’s central office. In so doing, system latency is significantly reduced, and video quality is increased by employing purpose-built video encoding hardware powered by ASICs hosted on x86 and Arm-based servers.
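To make the latency argument concrete, here is a back-of-the-envelope budget for one interactive round trip with encoding placed at a regional data center. Every per-stage figure below is an assumption chosen for illustration, not a measured value; the point is only that the stages can plausibly sum to the 100-200 ms range discussed above.

```python
# Illustrative end-to-end latency budget for an interactive cloud video
# service. All per-stage figures are assumptions for illustration only.

def total_latency_ms(components):
    """Sum the per-stage latencies (in ms) for one round trip."""
    return sum(components.values())

# Hypothetical budget with encoding at a regional data center
regional = {
    "input_capture_and_uplink": 20,   # user input to regional DC
    "app_render": 16,                 # one frame of server-side rendering
    "asic_encode": 8,                 # hardware encode of the frame
    "regional_network_hop": 15,       # regional DC to base station / CO
    "last_mile_delivery": 30,         # access network to the device
    "client_decode_and_display": 25,  # decode, compose, present
}

print(f"{total_latency_ms(regional)} ms end to end")  # 114 ms, inside 100-200 ms
```

Placing the encode step farther from the user mainly inflates the network-hop terms of this sum, which is why decentralizing the encoder is the lever that matters.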
Video technology is instrumental in nearly every app or product in the market, which means that balancing bitrate, video quality, framerate, and resolution will forever drive the development of next-generation codec technologies that deliver a greater compression advantage. Each new video codec standard arrives with a promise of halving the bitrate of the previous generation: H.264/AVC delivered roughly a 50% bitrate efficiency gain over MPEG-2, HEVC delivers a 50% advantage over H.264/AVC, and AV1 is well on track to beat HEVC by 50%. But each standard also brings an ever-higher order of computing complexity. The most significant inhibitor to a service adopting a more efficient video codec is the limit on how much computing resource can be spent on video encoding tasks.
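The compounding effect of those generational gains is easy to sketch. Taking the nominal 50%-per-generation figures cited above at face value (real-world savings vary by content and encoder), and an assumed 8 Mbps MPEG-2 reference bitrate chosen purely for illustration:

```python
# Nominal bitrate needed to hold a fixed quality level, assuming each codec
# generation halves the bitrate of its predecessor (the figures cited above).

MPEG2_BITRATE_MBPS = 8.0  # hypothetical reference bitrate, for illustration

generations = ["MPEG-2", "H.264/AVC", "HEVC", "AV1"]
bitrate = MPEG2_BITRATE_MBPS
for codec in generations:
    print(f"{codec}: {bitrate:.1f} Mbps")
    bitrate *= 0.5  # each new standard targets ~50% of the prior bitrate
```

Three halvings compound to an eightfold reduction: the same stream that needed 8 Mbps under MPEG-2 would nominally need 1 Mbps under AV1, which is exactly why the encode-side compute cost of newer codecs is worth attacking with dedicated hardware.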
Dedicated video encoding and image processing with ASIC-based video encoders can radically reduce the hardware required for video processing. This enables the video encoding operation to be decentralized and operated at regional points of presence, striking a balance between cloud computing and the economic realities of operating a consumer service at hyper-scale.
For services needing to scale, the hurdles include cost, quality of service, visual quality, and motion-to-photon latency. Unfortunately, the cost of deployment and operations increases sharply as compute resources move closer to the user. Making cloud computing affordable for interactive applications requires network topologies that balance the economics of high scale against the need for low latency and excellent visual quality. ASIC-based encoding has enabled a new generation of video transcoders that can improve encoding server density by a factor of ten while cutting power consumption, and the associated environmental impact, by a factor of twenty.
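Taking the 10x density and 20x power figures above at face value, the server-count arithmetic works out as follows. The target stream count, per-server software encoding capacity, and per-server wattage are assumed values chosen for illustration, not benchmarks:

```python
# Illustrative comparison of software vs. ASIC-based encoding at scale,
# using the 10x density and 20x power-efficiency factors cited above.
# Stream counts and wattages are assumptions, not measured benchmarks.

TARGET_STREAMS = 4000          # hypothetical live channel count
SW_STREAMS_PER_SERVER = 8      # assumed software (CPU) encoding density
SW_WATTS_PER_SERVER = 500      # assumed server draw under encoding load

sw_servers = TARGET_STREAMS // SW_STREAMS_PER_SERVER           # software fleet
asic_servers = TARGET_STREAMS // (SW_STREAMS_PER_SERVER * 10)  # 10x density

sw_power_kw = sw_servers * SW_WATTS_PER_SERVER / 1000  # software fleet power
asic_power_kw = sw_power_kw / 20                       # 20x power efficiency

print(f"software: {sw_servers} servers, {sw_power_kw} kW")
print(f"ASIC:     {asic_servers} servers, {asic_power_kw} kW")
```

Under these assumptions, a 500-server software fleet drawing 250 kW collapses to 50 servers drawing 12.5 kW, which is the shape of the TCO and emissions argument the article is making.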
NETINT is a pioneer in ASIC-based encoding with a family of dedicated video transcoders that are plug-and-play with your existing x86 and Arm-based servers. These transcoding solutions can radically reduce your server footprint for video encoding, and they scale economically to deliver any native desktop, mobile, or head-mounted display application from the cloud, all while minimizing environmental impact and cutting TCO.