Key Insights on Edge Computing and Low Latency Video Streaming

Live and on-demand video have become integral parts of our digital lives. As consumers, we expect a seamless, high-quality viewing experience across devices. As publishers and distributors, delivering that experience poses immense technical challenges.

In this article, we’ll share key insights from an interview with Will Law, a Chief Architect at Akamai Technologies. With over 20 years of experience in streaming media, Will provides an inside look at cutting-edge topics like edge computing, low latency video, and Media over QUIC.

Overview

Whether you’re a streaming engineer or someone who wants to understand this complex landscape better, you’ll learn:

  • The practical benefits and applications of edge computing
  • How CDNs leverage edge servers to enable advanced features
  • The status of low latency streaming protocols and their adoption
  • What Media over QUIC aims to achieve and how it could evolve streaming

Let’s dive in and explore the frontiers of streaming technology.

Why Edge Computing Matters for Streaming Media

With thousands of edge servers worldwide, CDNs like Akamai have a distinct advantage. Instead of relying on a few centralized data centers, they can distribute intelligence to nodes at the “edge” of the network.

This unlocks capabilities like:

  • Personalization – Create unique streams with features tailored to each viewer. For example, inserting localized ads or generating custom watermark fingerprints (see the sketch after this list).
  • Latency reduction – Process requests and manipulate playlists closer to the user for faster response times. Critical for low latency streaming.
  • Security – Scale up defenses against DDoS attacks by distributing the load across a wider footprint.
  • Reliability – More points of presence mean more options for failover and redundancy.
  • Scalability – Handle large concurrent audiences by spreading the load and not overwhelming centralized infrastructure.
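
To make the first two capabilities concrete, below is a minimal Python sketch of edge-side playlist personalization. It illustrates the technique only: the #EXT-X-CUE-AD cue tag, the AD_DECISION table, and the wm query parameter are hypothetical, and production CDN edge runtimes such as Akamai EdgeWorkers are JavaScript-based, so treat this as a sketch rather than a deployable implementation.

```python
# Sketch of per-viewer playlist rewriting at the edge. The cue tag,
# ad-decision table, and "wm" watermark parameter are all hypothetical.

AD_DECISION = {  # hypothetical per-region ad inventory
    "DE": "https://ads.example.com/de/spot1.ts",
    "US": "https://ads.example.com/us/spot7.ts",
}

def personalize_playlist(playlist: str, country: str, viewer_id: str) -> str:
    """Rewrite an HLS media playlist for one viewer before serving it."""
    out = []
    for line in playlist.splitlines():
        # Swap a hypothetical ad-insertion cue for a region-specific ad.
        if line.startswith("#EXT-X-CUE-AD"):
            out.append("#EXT-X-DISCONTINUITY")  # real HLS tag marking a splice
            out.append(AD_DECISION.get(country, AD_DECISION["US"]))
            continue
        # Tag segment URIs with a session token so the origin can serve
        # a uniquely watermarked variant of each segment.
        if line and not line.startswith("#"):
            line += f"?wm={viewer_id}"
        out.append(line)
    return "\n".join(out)
```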

The use cases for edge computing are constantly expanding. Here are some of the most popular streaming applications, according to Will:

  • Geo-localization – Tailor streams by location. For example, provide relevant languages or content restrictions based on geographic region.
  • Targeted advertising – Insert personalized ads into live or VOD streams to increase monetization.
  • Pseudo-live channels – Assemble “live” streams on the fly by sequencing VOD assets and interleaving ads (sketched after this list).
  • Cloud gaming – Deploy multiplayer game servers strategically to minimize latency for players based on their location.
  • Game streaming – Render sophisticated games in the cloud using virtual machines at the edge, allowing for rich experiences on simple clients.
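
As a rough illustration of the pseudo-live pattern, here is a hedged Python sketch that maps the wall clock onto a looping schedule of VOD assets, which is the core of assembling a linear channel from on-demand content. The asset URLs, durations, and channel epoch are hypothetical placeholders.

```python
# Minimal sketch of a pseudo-live channel: VOD assets are sequenced on a
# fixed, looping schedule, and each request resolves the wall clock to an
# asset and an offset within it. All values below are hypothetical.
import time

SCHEDULE = [  # (asset URL, duration in seconds)
    ("https://cdn.example.com/vod/ep1/index.m3u8", 1440),
    ("https://cdn.example.com/vod/ad-break/index.m3u8", 60),
    ("https://cdn.example.com/vod/ep2/index.m3u8", 1440),
]
CHANNEL_EPOCH = 1_700_000_000  # when the channel "went live" (hypothetical)

def now_playing(t=None):
    """Map wall-clock time onto the looping schedule."""
    t = time.time() if t is None else t
    cycle = sum(d for _, d in SCHEDULE)
    offset = (t - CHANNEL_EPOCH) % cycle
    for url, duration in SCHEDULE:
        if offset < duration:
            return url, offset  # asset to splice in, and the seek position
        offset -= duration

url, seek = now_playing()
print(f"Splice {url} at {seek:.1f}s into the live manifest")
```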

The demand for edge computing will only grow as audiences expect more personalization. Publishers should consider how they can benefit.

The Slow March Towards Low Latency Streaming

For some live streaming experiences, low latency is critical. Shaving off even a few seconds can make the experience noticeably more engaging for viewers. However, newer protocols like Low Latency HLS and DASH have seen sluggish adoption since being introduced 4+ years ago. Why the slow rollout?

It comes down to complexity, cost and risk:

  • Low Latency DASH is simpler to implement on the server side but requires more complex bandwidth estimation in clients (illustrated in the sketch after this list).
  • Low Latency HLS has a higher barrier for origin servers but is simpler for players to implement.
  • HLS and DASH clients on older TVs and set-top boxes often do not support the low-latency modes of these protocols.
  • Either way, adoption ultimately requires upgrading both server and client workflows, and then exposes playback to a higher risk of rebuffering. Unless the lower latency enables a new business case, opens new markets, or creates a competitive barrier, the risk is often judged to be not worth the reward.
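
The bandwidth-estimation problem in the first bullet is worth unpacking. With Low Latency DASH, a segment arrives as small CMAF chunks paced by the encoder, so the classic bytes-over-total-time measurement reflects the encoder's output rate rather than the link capacity. The sketch below, using a hypothetical chunk trace, contrasts that naive estimate with one that counts only back-to-back chunk bursts; real players use more sophisticated filters.

```python
# Why LL-DASH client bandwidth estimation is harder: the link idles while
# waiting for the encoder, so only "fast" chunks reveal its true speed.
# The (bytes, arrival-seconds) trace below is hypothetical.

chunks = [(40_000, 0.02), (40_000, 0.48), (42_000, 0.02), (41_000, 0.50)]

def naive_estimate(trace):
    """Classic segment-level estimate: total bits / total time."""
    return sum(b for b, _ in trace) * 8 / sum(t for _, t in trace)

def burst_estimate(trace, idle_threshold=0.1):
    """Count only chunks that arrived faster than the idle threshold."""
    fast = [(b, t) for b, t in trace if t < idle_threshold]
    return sum(b for b, _ in fast) * 8 / sum(t for _, t in fast)

print(f"naive: {naive_estimate(chunks) / 1e6:.1f} Mbps")  # ~ encoder rate
print(f"burst: {burst_estimate(chunks) / 1e6:.1f} Mbps")  # ~ link rate
```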

Despite the friction, market pressure is mounting to reduce latency. Major broadcasters are targeting the 6-10 second range to achieve “broadcast equivalent latency.” But Will suggests even lower thresholds are necessary to enable truly interactive experiences:

  • 3 seconds or less – Enables real-time engagement with polls, chats, hosts, etc.
  • Sub-1 second – The gold standard for live gaming, auctions, and betting applications.
  • Sub-400ms – The threshold for real-time communications.

CDNs will play a critical role in scaling sub-3 second latency by processing requests at the edge. Expect gradual adoption, driven by live sports and by competitive pressure from end-users.

Can Media over QUIC Unify Streaming?

QUIC is a modern transport protocol poised to replace TCP for HTTP and other IP-delivered traffic. But what about its impact on streaming media?

The IETF Media over QUIC working group aims to develop a new protocol that leverages QUIC for video delivery. Some potential advantages, according to Will:

  • Tunable latency – Set target latency anywhere from real-time to VOD.
  • A unified protocol – Replace separate networks for WebRTC, HLS, DASH with a consolidated QUIC-based pipeline.
  • Cacheability – Design for compatibility with CDN caching and relay servers despite low latency constraints.
  • Pub/sub model – Clients subscribe once to a feed instead of requesting each segment (sketched below). A subscription model is a better fit for media, which is inherently a temporal sequence of data.
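
To show the shape of that pub/sub interaction, here is a hedged Python sketch. The MoqSession class and its subscribe method are hypothetical stand-ins; the actual Media over QUIC Transport draft defines its own track, group, and object semantics on the wire.

```python
# Sketch contrasting HTTP's request-per-segment pull with the
# subscribe-once model the MoQ working group is developing.
# MoqSession is a hypothetical stand-in, not a real MoQ API.
import asyncio

class MoqSession:
    """Hypothetical relay session delivering objects on a track."""
    async def subscribe(self, track: str):
        for seq in range(3):               # stand-in for a live feed
            await asyncio.sleep(0.1)       # objects arrive as produced
            yield track, seq, b"\x00" * 188  # (track, sequence, payload)

async def main():
    session = MoqSession()
    # One subscription; the relay then pushes each object as it is
    # published, instead of the client polling playlists and issuing
    # a separate HTTP request per segment.
    async for track, seq, payload in session.subscribe("video/hd"):
        print(f"{track} object {seq}: {len(payload)} bytes")

asyncio.run(main())
```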

Media over QUIC shows promise but faces adoption and deployment challenges across players, devices, and CDNs. Given the years-long adoption cycles, a QUIC-driven overhaul of streaming is far from imminent. But long term, the flexibility and performance of QUIC are likely to drive the next evolution in streaming architectures.

Hardware Acceleration Driving Better Edge Transcoding

As edge computing spreads, so does the demand for media processing at edge locations. But video encoding is extremely computationally intensive. In the past, media processing was predominantly performed on CPUs. However, new specialized hardware like VPUs (video processing units) enables much better performance per watt and cheaper transcoding.

This hardware acceleration makes edge-based transcoding more affordable. And encoding speed is crucial for latency-sensitive applications.

For providers offering edge transcoding, a shift from pure CPU is inevitable given order-of-magnitude efficiency gains with VPUs and ASICs. Expect these capabilities to unlock smarter real-time processing directly embedded in edge networks.

The catch? More advanced codecs like AV1 and VVC demand dramatically more computing power, not only in encoders but in every decoder. So better silicon helps, but video engineering remains an optimization problem balancing quality, cost, and power consumption.
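
As a back-of-the-envelope illustration of that optimization problem, the snippet below compares channels-per-watt across platform classes. Every figure is a hypothetical placeholder, not a vendor benchmark; substitute measured numbers for any real capacity planning.

```python
# Hypothetical channels-per-watt comparison. The point is the shape of
# the trade-off, not the specific values, which are placeholders.

platforms = {
    # name: (1080p live channels per device, watts under load)
    "CPU (software encode)": (4, 250),
    "GPU encoder":           (20, 200),
    "VPU/ASIC":              (60, 70),
}

for name, (channels, watts) in platforms.items():
    print(f"{name:>22}: {channels / watts:.2f} channels per watt")
```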

Key Takeaways on the Cutting Edge of Streaming

Let’s recap the key points:

  • Edge computing unlocks crucial benefits like personalization, security, and low latency at scale. CDNs like Akamai are primed to enable these features.
  • New low-latency streaming protocols reduce end-to-end latency but have seen only gradual adoption due to complexity and cost.
  • 3 seconds or less is the new threshold for interactive live streaming with engaged audiences.
  • QUIC offers advantages for media delivery, especially low latency streaming. But adoption and support by CDNs will take time.
  • ASIC and GPU-based transcoding – Will notes the massive boost in encoding efficiency from new hardware-accelerated technologies like VPUs. To stay competitive on cost, cloud transcoding will likely shift from pure CPU to GPU and ASIC implementations.

The world of streaming is changing rapidly. As standards and protocols evolve, publishers and distributors should stay abreast of new edge capabilities that can enhance experiences today.

Embracing these technologies early, while balancing economics, performance, and scale, is the best way to serve demanding users and growing audiences.


Jan Ozer is Senior Director of Video Marketing at NETINT.

Jan is also a contributing editor to Streaming Media Magazine, writing about codecs and encoding tools. He has written multiple authoritative books on video encoding, including ‘Video Encoding by the Numbers: Eliminate the Guesswork from your Streaming Video’ and ‘Learn to Produce Video with FFmpeg: In Thirty Minutes or Less’, and has produced multiple training courses relating to streaming media production.
