Zapping: Low-Latency Premium Streaming Across Latin America

Zapping Low-Latency CDN Technology

Learn why live-streaming platform Zapping built its own low-latency technology and CDN to stream Latin American content using NETINT Streaming Video Servers, accelerating Zapping’s rapid expansion. “Zapping is the Netflix of the live streaming here in Chile, in Latin America. We developed all our technology; the encoders, low-latency solution, and the apps… We developed our own CDN…” Nacho Opazo, Zapping Co-founder and CTO.

FIGURE 1. Nacho Opazo, Zapping Co-founder and CTO, on a rare vacation away from the office.
Source: https://www.linkedin.com/in/nachopazo/overlay/photo/

Background

Zapping is a live-streaming platform in Latin America that started in Chile and has since expanded into Brazil, Peru, and Costa Rica. Ignacio (Nacho) Opazo, the co-founder and CTO, has been the driving force behind the company’s technological innovations.

The verb zapping refers to the ability to switch content streams with minimal delay. Give him a minute and Nacho will gladly demonstrate their superior low latency performance in the hyper-responsive mobile app he designed and developed. He’s also responsible for Zapping’s content delivery network (CDN), custom low-latency technology, and user interfaces on smart TVs.

Zapping streams free channels available via terrestrial broadcast, as well as content from HBO, Paramount, Fox, TNT Sports, Globo, and many others. Though this includes a broad range of content types, from local news to primetime TV to premium movies, what really moves the viewership needle in South America is sports, specifically soccer.

Latin America is a competitive marketplace; in addition to terrestrial TV, other market entrants include DirecTV, Entel, and Movistar, along with free-to-air content in some markets. This makes soccer coverage a key driver of subscription growth, and it presents multiple operational challenges, including latency, video quality, and bandwidth consumption. With aggressive expansion plans, Zapping needed to meet these requirements while carefully managing capital and optimizing operating costs.

FIGURE 2. Innovative, feature-rich players and broad compatibility are key to Zapping’s outstanding customer experience.
Source: https://www.zapping.com/compatibilidad

The Challenges of Soccer Broadcasting

Latency is a critical issue for soccer coverage and all live sports. As Nacho described, 

Here in Chile, the soccer matches are premium. So you need to hire a cable operator, and you can hear your neighbor screaming if they have a cable operator with lower latency. Latency is one of the key questions we get asked about in social media. In Brazil, it is more complicated because some soccer matches are free to air. So, our latency has to be lower than free-to-air in Brazil. One potential solution here was to install a server with a low latency transcoder in the CDN of each soccer broadcaster to ensure that Zapping’s streams originate from as close to the original signal as possible."

Zapping also competed with these same services on quality, a key determinant of quality of experience (QoE). Soccer is incredibly fast-moving and presents many compression challenges, from midfield shots of tiny players advancing and defending, to finely detailed shots of undulating crowds and waving flags, to close-ups of fouled players rolling in the grass. Zapping needed a transcoder that preserves detail and color accuracy without breaking the bandwidth bank. Like latency, Zapping’s bandwidth problems vary by country. In all countries, soccer’s popularity stresses the internet in general.

Video files are huge, and when you have a soccer match, thousands of people come to your servers and saturate the region's internet... In the beginning, we saw low bandwidth connections - like 10 Gbps trunks between ISPs, and we saturated that trunk with our service.”

Beyond general capacity, some countries have suboptimal infrastructure for high-bandwidth soccer matches, like low-speed inter-trunk connections. Problems like these convinced Zapping to create its own CDN to ensure high-speed delivery.
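The scale of the problem is easy to see with simple arithmetic. A rough sketch, using an illustrative 5 Mbps full-HD stream rather than Zapping's actual ladder:

```python
# Back-of-envelope: why one popular match can saturate a 10 Gbps inter-ISP
# trunk. The bitrate is illustrative, not Zapping's actual ladder.
def max_viewers(trunk_gbps: float, stream_mbps: float) -> int:
    """Concurrent unicast streams a trunk can carry, ignoring overhead."""
    return int(trunk_gbps * 1000 / stream_mbps)

print(max_viewers(10, 5.0))  # a 5 Mbps full-HD stream: 2,000 viewers fill it
```

At these rates, a mid-sized audience for a single match is enough to exhaust the trunk, which is why Zapping took delivery into its own hands.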

In Chile, Zapping found a different problem. “Here in Chile, we have really good internet. We have fiber-optic connections of one gigabit per second to the users. But 80% of our viewers watch on Smart TVs that they don’t upgrade that often, and these devices don’t have good Wi-Fi connections. So, Wi-Fi is the problem in Chile.” While Zapping’s CDN was a huge help in avoiding bandwidth bottlenecks, the best general-purpose solution was to implement HEVC.


To summarize these requirements, Zapping needed a transcoding system affordable enough to install and operate in data centers around South America that delivered high-quality H.264 and HEVC output with exceptionally low latency.

From CPU to GPU to ASIC

Nacho considered all options to find the right transcoding system. “I started encoding with CPUs using Quick Sync from Intel, but my problem was getting more density per rack unit. Intel enabled five sockets per 1RU, which was really low. Though the video quality was good, the amount of power that you needed and the amount of heat that you produced were really high.”

Nacho next tried NVIDIA GPUs, starting with the P2000 and moving to the T4. Configured with an 80-core Intel CPU and two T4s, the NVIDIA-powered system could produce about 50 complete ladders per 1RU rack unit, an improvement, but still insufficient. Nacho then learned about NETINT’s first-generation T408 technology.

I was looking to get more density with my servers and found a NETINT article that claimed that you could output 122 channels per rack unit. (...) I found that the power draw was really low, as was the latency, and the quality of both H.264 and HEVC is really good.”

Looking ahead, Nacho foresees the need for even more density.

Right now we're trying the [second generation] NETINT Quadra processor. I need to get more dense. Brazil is a really big country. We need more power and more density in the rack.”

Nacho was sold on the hardware performance but had to integrate the NETINT transcoders into his encoding stack, which was a non-issue. 

We control the encoders with FFmpeg, and converting over to the NETINT transcoders was really seamless for us. Really, really easy.”
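That seamlessness follows from how FFmpeg works: switching encoders is largely a matter of swapping the codec name passed to -c:v. The sketch below is a hypothetical illustration, not Zapping's configuration; the rungs and bitrates are assumptions, and libx264 is a software stand-in for whatever hardware encoder FFmpeg exposes.

```python
# Hypothetical sketch of a three-rung FFmpeg encoding ladder. The rungs,
# bitrates, and file names are illustrative; libx264 is a software stand-in.
# Swapping encoders is just a change of the name passed to -c:v, which is
# why converting an FFmpeg-driven stack to new transcoders can be seamless.
import shlex

LADDER = [  # (name, resolution, video bitrate)
    ("1080p", "1920x1080", "6000k"),
    ("720p", "1280x720", "3000k"),
    ("480p", "854x480", "1200k"),
]

def ffmpeg_cmd(src: str, codec: str = "libx264") -> list[str]:
    """Build one ffmpeg invocation that encodes every rung from one input."""
    cmd = ["ffmpeg", "-y", "-i", src]
    for name, size, rate in LADDER:
        cmd += ["-map", "0:v", "-c:v", codec, "-s", size, "-b:v", rate,
                f"out_{name}.mp4"]
    return cmd

print(shlex.join(ffmpeg_cmd("match.ts")))
```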

Just as Nacho finalized his testing, NETINT started offering a server package that included ten T408s in a Supermicro server with all software pre-installed. These proved perfectly suited to Zapping’s technology and expansion plans.

The servers are really, really good. For us, buying the server is better because it's ready to use. As we deploy our platform in Latin America, we send a server to each country. It’s as simple as sliding it into a rack, installing our software, and we’re ready to go."

Delivering Better Soccer Matches

FIGURE 3. Nacho will deploy the Quadra Video Server for the greatest density, lowest cost and latency, and highest quality H.264 and HEVC.

Armed with NETINT servers, Nacho proceeded to attack each of the challenges discussed above. 

For the latency, we talk with the channel distributor and put a NETINT server inside the CDN of each broadcaster. And we can skip the satellite uplink and save one or two seconds of latency.”

Nacho originally implemented his own low-latency protocols but is now experimenting with low-latency HLS. “With LL-HLS, we can get six seconds ahead of free-to-air. Let’s talk in about three months and see what that looks like.”

Nacho also implemented a “turbo mode” that toggles the viewer in and out of Zapping’s low-latency mode. Viewers prioritizing low latency can enable turbo mode at the risk of slightly lower quality and a greater likelihood of buffering issues. Viewers who prioritize video quality and minimal buffering over ultra-low latency can disable turbo mode. As Nacho explained, “If you have a bad connection, like bad Wi-Fi, you can turn off the low latency and watch the match in a 30-second buffer like the normal buffer of HLS.”

Nacho also aggressively converted to HEVC output. 

For us, HEVC is really, really important. We get a 40% lower bitrate than H.264 with the same image quality. That’s full HD quality at 6 Mbps, which is really good compared to competitors using H.264 at 5 Mbps in full HD. And the user knows we’re delivering HEVC. We have that in our UX. The user can turn HEVC on and off and really see the difference.”
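Nacho’s 40% figure translates directly into delivery cost. A back-of-envelope sketch, where the viewer count and event length are hypothetical and the bitrates follow the quote, treating roughly 10 Mbps H.264 as quality-equivalent to 6 Mbps HEVC:

```python
# Back-of-envelope CDN egress saving from HEVC's ~40% bitrate reduction,
# treating roughly 10 Mbps H.264 as quality-equivalent to 6 Mbps HEVC.
# The viewer count and event length are hypothetical.
def egress_tb(viewers: int, mbps: float, hours: float) -> float:
    """Total delivery egress in terabytes for one live event."""
    return viewers * mbps * hours * 3600 / 8 / 1e6  # megabits -> terabytes

h264 = egress_tb(100_000, 10.0, 2.0)  # 900 TB
hevc = egress_tb(100_000, 6.0, 2.0)   # 540 TB
print(f"{1 - hevc / h264:.0%} less egress with HEVC")
```

The same percentage saving applies per viewer, which is why HEVC also relieves the congested Wi-Fi links described above.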

Regarding the HEVC switch, Nacho explained, “If we know that your TV or device is HEVC compatible, we play HEVC by default. But there are so many set-top boxes, and some signal their codec compatibilities incorrectly. If we’re not sure, we turn off HEVC by default, and the user can try it; if it works, great; if not, they play H.264.”

After much experimentation, Nacho extended HEVC’s low-bitrate quality to other broadcasts as well. “For CNN or talk shows, we are trying 600 kilobits per second HEVC, and it looks really, really good, even on a big screen.”

FIGURE 4. Voices of Video with Ignacio Opazo from Zapping – Unveiling the Powerhouse Behind Zapping

The Live Streaming Netflix of Latin America

One of Zapping’s unique strengths is that it considers itself a technology company, along with being a content company. This aggressive approach has enabled Zapping to achieve significant success in Chile and to expand into Latin America.

Zapping is the Netflix of the live streaming here in Chile, in Latin America. We developed all our technology; the encoders, our low-latency, and the apps in each platform. We developed our own CDN; I think it's bigger than Akamai and Fastly here in Chile. We are taking the same steps as Netflix. That you make your platform, you make the UI, you make the encoding process and then you must deliver.”

Nacho is clear about how NETINT’s products have contributed to his success. “NETINT servers are an affordable, functional, and high-performance element of our success, providing unparalleled density along with excellent low latency and H.264 and HEVC quality, all at extremely low power consumption. NETINT has helped accelerate our expansion while increasing our profitability.”

Innovative technologists like Nacho and Zapping choose and rely on equally innovative tools and building blocks to deliver critical functions and components of their services. We’re proud that Nacho has chosen NETINT servers as the technology of choice for expanding operations in Latin America, and look forward to a long and successful collaboration.

ON-DEMAND: Building Your Own Live Streaming Cloud

From Cloud to Local Transcoding For Minimum Latency and Maximum Quality


Over the last ten years or so, most live productions have migrated towards a workflow that sends a contribution stream from the venue into the cloud for transcoding and delivery. For live events that need absolute minimum latency and maximum quality, it may be time to rethink that workflow, particularly if you’ve got multiple sharable inputs at the venue.

So says Bart Snoeks, Account & Partnership Director of THEO Technologies (“THEO”). By way of background, THEO invented and has commercially implemented the High-Efficiency Streaming Protocol (HESP), an adaptive HTTP-based video streaming protocol that enables sub-second end-to-end latency. You can see how HESP compares to other low-latency protocols in the table shown in Figure 1, from the HESP Alliance website – the organization focused on promoting and further advancing HESP.

Figure 1. HESP compared to other low latency protocols.

THEO has productized HESP as a real-time streaming service called THEOlive, which targets applications like live sports and betting, casino igaming, live auctions, and other events that require high-quality video at exceptionally low latency with delivery at scale. For example, in the case of in-play betting, cutting latency from 8 to 10 seconds (HLS) to under one second expands the betting window during the critical period just before the event.

When streaming casino games, ultra-low latency promotes fluent interactions between the players and ensures that all players see the turn of the cards in real time. When latency is lower, players can bet more quickly, increasing the number of hands that can be played.

According to Snoeks, a live streaming workflow that sends a contribution stream to the cloud for transcoding will always increase latency and can degrade quality, since re-transcoding is needed. It’s especially poorly suited for stadium venues with multiple camera locations that want to enhance the attendee experience with multiple live feeds. In those latency-critical use cases, you are actually adding network latency with a round trip to and from the cloud. Instead, it makes much more sense to create your encoding ladder and packaging on-site, pulling the streams directly from the origin to a private CDN for delivery.

Let’s take a step back and examine these two workflows.

Live Streaming Workflows

As stated at the top, most live-streaming productions encode a single contribution stream on-site and send that into the cloud for transcoding to a full ladder, packaging, and delivery. You see this workflow in Figure 2.

Figure 2. Encoding a contribution stream on-site to deliver to the cloud for transcoding, packaging, and delivery

This schema has multiple advantages. First, you’re sending a single stream to the cloud, lowering bandwidth requirements. Second, you’re centralizing your transcoding assets in a single location in the cloud, which typically enables better utilization.

According to Snoeks, however, this workflow will add 200 to 500 milliseconds of latency at a minimum, depending on the encoding speed, quality, and contribution protocol. In addition, though high-quality contribution encoders can minimize generational loss from the contribution stream, lower-quality transcoders can noticeably degrade the quality of the final output. You also need a contribution encoder for each camera, which can jack up hardware costs in high-volume igaming and similar applications.

Instead, for some specific use cases, you should consider the workflow shown in Figure 3. Here, you transcode on-site and send the full encoding ladder to a public CDN for external delivery and to a private CDN or equivalent for local viewing. This decreases latency to a minimum and produces absolute top quality as you avoid the additional transcoding step.

Figure 3. Encoding and packaging the encoding ladder on site and transmitting the streams to a public CDN for external viewers and a private CDN for local viewers.

This schema is particularly useful for venues that want to enhance the in-stadium experience with multiple camera feeds. Imagine a stock car race where an attendee sees their driver on the track only once every minute or so. Encoding on-site might allow attendees to watch the camera view from inside their favorite driver’s car with near real-time latency. It might let golf fans follow multiple groups while parked at a hole, or follow their favorite player.

If you’re encoding input from many cameras, say in a casino or even a racetrack environment, the cost of on-site encoding might be less than the cost of the individual contribution encoders. So, you get the best of all worlds: lower cost per stream, lower latency, higher quality, and a better in-person experience where applicable.

If you’re interested in learning about your transcoding options, check out our symposium Building Your Own Live Streaming Cloud, where you can hear from multiple technology experts discussing transcoding options like CPU-only, GPU, and ASIC-based transcoding and their respective costs, throughput, and density.

If you’re interested in learning more about HESP, THEO in general, or THEOlive, watch for an upcoming episode of Voices of Video, where I interview Pieter-Jan Speelman, CTO of THEO Technologies. We’ll discuss HESP’s history and evolution, the power of THEOlive real-time streaming technology, and how to use it in your live production stack. Make sure you don’t miss it!

Now ON-DEMAND: Symposium on Building Your Live Streaming Cloud

From Cloud to Control: Building Your Own Live Streaming Platform

Cloud services are an effective way to begin live streaming. Still, once you reach a particular scale, it’s common to realize that you’re paying too much and can save significant OPEX by deploying transcoding infrastructure yourself. The question is, how to get started?

NETINT’s Build Your Own Live Streaming Platform symposium gathers insights from the brightest engineers and game-changers in the live-video processing industry on how to build and deploy a live-streaming platform.

In just three hours, we’ll cover the following:

  • Hardware options for live transcoding and encoding to cut costs by as much as 80%.
  • Software options for producing, delivering, and playing your live video streams.
  • Co-location selection criteria to achieve cloud-like performance with on-premise affordability.

You’ll also hear from two engineers who will demystify the process of assembling a live-streaming facility, explaining how they identified and solved key hurdles, along with real costs and performance data.

Cloud? Or your own hardware?

It’s clear to many that producing live streams via a public cloud like AWS can be vastly more expensive than owning your hardware. (You can learn more by reading “Cloud or On-Premises? The Streaming Dilemma” and “How to Slash CAPEX, OPEX, and Carbon Emissions Using the NETINT T408 Video Transcoder”). 

To quote serial entrepreneur David Hansson, who recently migrated two SaaS services from the cloud to on-premise, “Don’t let the entrenched cloud interests dazzle you into believing that running your own setup is too complicated. Everyone and their dog did it to get the internet off the ground, and it’s only gotten easier since.” 

For those who have only operated in the cloud, there’s fear of the unknown: fear of buying hardware transcoders, selecting the right software, and choosing the best colocation service. So, we decided to fight fear with education and host a symposium to educate streaming engineers on all these topics.

“Building Your Own Live Streaming Cloud” will uncover how owning your encoding stack can slash operating costs and boost performance with minimal CAPEX.

Learn to select the optimal transcoding hardware, transcoding and packaging software, and colocation facilities. We’ll also discuss strategies to reduce carbon emissions from your transcoding engine. 

This FREE virtual event takes place on August 17th, from 11:00 AM – 2:15 PM EST.

Five issues tackled by nine experts:

Transcoding Hardware Options:

Learn the pros and cons of CPU, GPU, and ASIC-based transcoding via detailed throughput and cost examples shared by Kenneth Robinson, Manager of Field Application Engineers at NETINT Technologies. Then Ilya Mikhaelis, Streaming Backend Tech Lead at Mayflower, will describe his company’s journey from CPU to GPU to ASICs, covering costs, power consumption, latency, and density metrics.

Software Options:

Jan Ozer from NETINT will identify the three categories of transcoding software: multimedia frameworks, media servers, and other tools. Then, you’ll hear from experts in each category, starting with Romain Bouqueau, founder of Motion Spell, who will discuss the capabilities of the GPAC multimedia framework. Barry Owen, Chief Solutions Architect at Wowza, will discuss Wowza Streaming Engine’s suitability for private clouds. Lastly, Adrian Roe, Director at Id3as, developer of Norsk, will demonstrate Norsk’s simple, scripting-based operation, and extensive production and transcoding features.

Housing Options:

Once you select your hardware and software, the next step is finding the right co-location facility to house your live streaming infrastructure. Kyle Faber, with experience in building Edgio’s video streaming infrastructure, will guide you through the essential factors to consider when choosing a co-location facility.

Minimizing the Environmental Impact:

As responsible streaming professionals, it’s essential to address the environmental impact of our operations. Barbara Lange, Secretariat of Greening of Streaming, will outline actionable steps video engineers can take to minimize power consumption when acquiring and deploying transcoding servers.

Pulling it All Together:

Stef van der Ziel, founder of live-streaming pioneer Jet-Stream, will share lessons learned from his experience in creating both Jet-Stream’s private cloud and cloud transcoding solutions for customers. In his closing talk, Stef will demystify the process of choosing hardware, software, and a hosting facility, bringing all the previous discussions together into a cohesive plan.

Full Agenda:

11:00 am – 11:10 am EST

Introduction (10 minutes):
Mark Donnigan, Head of Strategic Marketing at NETINT Technologies
Welcome, overview, and what you will learn.

 

11:10 am – 11:40 am EST

Choosing transcoding hardware (30 minutes):
Kenneth Robinson, Manager of Field Application Engineers at NETINT Technologies
You have three basic approaches to transcoding, CPU-only, GPU, and ASICs. Kenneth outlines the pros and cons of each approach with extensive throughput and CAPEX and OPEX examples for each.

 

11:40 am – 12:00 pm EST

From CPU to GPU to ASIC: Our Transcoding Journey (20 minutes):
Ilya Mikhaelis, Streaming Backend Tech Lead at Mayflower
Charged with supporting very high-volume live transcoding operations, Ilya started with libx264 software transcoding, which consumed massive power but yielded low stream density per server. Then he experimented with GPUs and other hardware and ultimately transitioned to an ASIC-based solution with much lower power consumption and much higher stream density per server. Ilya will detail the costs, power consumption, and density of all options, providing both data and an invaluable evaluation framework.

 

12:00 pm – 12:10 pm EST

Choosing your live production software (10 minutes): 
Jan Ozer, Senior Director of Video Technology at NETINT Technologies
The core of every live streaming system is transcoding and packaging software. This comes in many shapes and sizes, from open-source software like FFmpeg and GPAC, to streaming servers like Wowza, and production systems like Norsk. Jan discusses these multiple options so you can cohesively and affordably build your own live-streaming ecosystem.

 

12:10 pm – 1:10 pm EST

Speed Round (60 minutes):
20-minute presentations from GPAC, Wowza, and NORSK.
Speakers from GPAC, Wowza, and NORSK discussing the features, functions, operational paradigms, and cost structure of their live software offerings.

Speakers include:

  • Adrian Roe, CEO at id3as, Product: Norsk, Title: Make Live Easy with NORSK SDK
  • Romain Bouqueau, Founder and CEO, Motion Spell (home for GPAC Licensing), Product: GPAC Title of Talk: Deploying GPAC for Transcoding and Packaging
  • Barry Owen, Chief Solutions Architect at Wowza, Title of Talk: Start Streaming in Minutes with Wowza Streaming Engine



1:10 pm – 1:40 pm EST

Choosing a co-location facility (30 minutes): 
Kyle Faber, Senior Director of Product Management at Edgio.
Once you’ve chosen your hardware and software, you need a place to install them. If you don’t have your own connected data center, you may consider a colocation facility. In his talk, Kyle addresses the key factors to consider when choosing a co-location facility for your live streaming infrastructure.

 

1:40 pm – 1:55 pm EST

How to Greenify Your Encoding Stack (15 minutes):
Barbara Lange, Secretariat of Greening of Streaming.
Learn how video streaming companies can work to significantly reduce their energy footprint and contribute to a greener streaming industry. Implement hardware and infrastructure optimization using immersion cooling and data center design improvements to maximize energy efficiency in your streaming infrastructure.

 

1:55 pm – 2:15 pm EST

Closing Keynote (20 minutes):
Stef van der Ziel, Founder Jet-Stream
Jet-stream has delivered streaming solutions since its launch in 1994 and offers its own live streaming platform. One focus has been creating custom transcoding solutions for customers seeking to create their own private cloud for various applications. In his closing talk, Stef will demystify the process of choosing hardware, software, and a hosting facility and wrap a pretty bow around all previous presentations.

Key Cloud Gaming Concepts with Blacknut’s Olivier Avaro


Recently, our Mark Donnigan interviewed Olivier Avaro, the CEO of Blacknut, the world’s leading pure-player cloud gaming service. As an emerging market, cloud gaming is new to many, and the interview covered a comprehensive range of topics with clarity and conciseness. For this reason, we decided to summarize some of the key concepts and include them in this post. If you’d like to listen to the complete interview, and we recommend you do, click here. Otherwise, you can read a lightly edited summary of the key topics below.

For perspective, Avaro founded Blacknut in 2016, and the company offers consumers over seven hundred premium titles for a monthly subscription, with service available across Europe, Asia, and North America on a wide range of devices, including mobiles, set-top boxes, and Smart TVs. Blacknut also distributes through ISPs, device manufacturers, OTT services, and media companies, offering a turnkey service, including infrastructure and games, that allows businesses to instantly offer their own cloud gaming service.

Cloud Gaming Primer - the key points covered in the interview

The basic cloud gaming architecture is simple.

The architecture of cloud gaming is simple. You take games, you put them on a server in the cloud, and you virtualize and stream them in the form of a video stream so that you don’t have to download the game on the client side. When you interact with the game, you send a command back to the server, and you interact with the game this way.

Of course, bandwidth needs to be sufficient, let’s say six megabits per second. Latency needs to be good, let’s say less than 80 milliseconds. And, of course, you need to have the right infrastructure on the server that can run games. This means a mixture of CPU, GPU, storage, and all this needs to work well.
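Those two thresholds, roughly 6 Mbps of bandwidth and 80 milliseconds of latency (closer to 30 milliseconds for first-person shooters, per the discussion later in the interview), amount to a simple admission check. A minimal sketch, with the function name and structure assumed for illustration:

```python
# Minimal sketch of the session thresholds described above: roughly 6 Mbps
# of bandwidth and an 80 ms latency budget (about 30 ms for first-person
# shooters). The function name and structure are assumptions.
def session_ok(bandwidth_mbps: float, latency_ms: float,
               fps_shooter: bool = False) -> bool:
    latency_budget_ms = 30 if fps_shooter else 80
    return bandwidth_mbps >= 6 and latency_ms <= latency_budget_ms

print(session_ok(10, 60))                    # fine for most games
print(session_ok(10, 60, fps_shooter=True))  # too slow for a shooter
```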

But cost control is key.

We passed the technology inflection point where the service becomes feasible. Technically feasible, the experience is good enough for the mass market. Now, the issue is the unit economics and how much it costs to stream and deliver games efficiently so that it is affordable for the mass market.

Public Cloud is great for proof of concept.

We started deploying the service based on the public cloud because this allowed us to test the different metrics, how people were playing the service, and how many hours. And this was actually very fast to launch and to scale…That’s great, but they are quite expensive.

But you need your own infrastructure to become profitable.

So, to optimize the economics, we built what we call the hybrid cloud for cloud gaming, which is a combination of both the public cloud and private cloud. So, we install our own servers based on GPUs, CPUs, and so on, so we can improve the overall performance and the unit economics of the system.

Cost per concurrent user (CCU) is the key metric.

The ultimate measure is the cost per concurrent user that you can get on a specific bill of materials. If you have a CPU-plus-GPU architecture, you are going to slice the GPU into different pieces in a more dynamic and more appropriate manner so that you can run different games, and as many games as possible.

GPU-only architectures deliver a high cost per CCU, which decreases profitability.

There are limits on how much you can slice the GPU and still be efficient, so there are limits in this architecture because it all relies on the GPU. We are investigating different architectures using a VPU, like NETINT’s, that will offload from the GPU the task of encoding and streaming the video so that we can augment the density.

VPU-augmented architectures decrease cost per CCU by a factor of ten.

I think for some big games, because they rely much more on the GPU, you will probably not augment the density that much. But we think that overall, we can probably gain a factor of ten on the number of games that you can run on this kind of architecture. So, passing from a max of 20 to 24 games to running two hundred games on an architecture of this kind.

Which radically increases profitability.

So, augmenting the density by a factor of ten means also, of course, diminishing the cost per CCU by a factor of ten. So, if you pay $1 currently, you will pay ten cents, and that makes a whole difference. Because let’s assume basic gamers will play 10 hours per month or 30 hours per month; if this costs $1 per hour, this is $30, right? If this is ten cents, then costs are from $1 to $3, which I think makes the math work on the subscription, which is between 5 to 15 euros per month.”
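The arithmetic in that quote is worth making explicit. A minimal sketch, where the per-hour costs are the illustrative figures from the quote, not measured numbers:

```python
# The unit economics from the quote: a 10x density gain divides the cost
# per concurrent-user hour by 10. The per-hour costs are the illustrative
# figures from the quote, not measured numbers.
def monthly_cost(hours_played: float, cost_per_hour: float) -> float:
    """Infrastructure cost attributable to one subscriber per month."""
    return hours_played * cost_per_hour

for cost_per_hour in (1.00, 0.10):  # GPU-only vs. VPU-augmented
    light = monthly_cost(10, cost_per_hour)
    heavy = monthly_cost(30, cost_per_hour)
    print(f"${light:.2f} to ${heavy:.2f} per month")
```

At $1 per hour, a heavy player's infrastructure cost alone can exceed a 5 to 15 euro subscription; at ten cents, it comfortably fits inside it.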

The secret sauce is peer-to-peer DMA.

[Author’s note: These comments, explaining how NETINT VPU’s deliver a 10x performance advantage over GPUs, are from Mark Donnigan].

For anybody who understands basic server architecture, it’s not difficult to think: wait a second, isn’t there a bottleneck inside the machine? What NETINT did was create peer-to-peer sharing via DMA (Direct Memory Access). The GPU outputs a rendered frame, and it’s transferred inside memory so that the VPU can pick it up and encode it, and there’s effectively zero latency because it’s happening in the memory buffer.

5G is key to successful gameplay in emerging markets.

[Back to Olivier] What we’ve been doing with Ericsson is using 5G networks and defining specific characteristics of what is a slice in the 5G network. So, we can tune the 5G network to make it fit for gaming and to optimize the delivery of gaming with 5G.

So, we think that 5G is going to get much faster in those regions where actually the internet is not so great. We’ve been deploying the Blacknut service in Thailand, Singapore, Malaysia, now in the Philippines. And this has allowed us to reach people in regions where there is no cable or bandwidth with fiber.

Latency needs to be eighty milliseconds or less (much less for first-person shooter games).

You can get a reasonably good experience at 80 milliseconds for most games. But for first-person shooter games, you need to be close to frame accuracy, which is very difficult in cloud gaming. You need to go down to thirty milliseconds and lower, right?

That’s only feasible with the optimal network infrastructure.

And that’s only feasible if you have a network that allows for it. Because it’s not only about the encoding part, the server side, and the client side; it’s also about where the packets are going through the networks. You need to make sure that there is some form of CDN for cloud gaming in place that makes the experience optimal.

Edge servers reduce latency.

We are putting a server at the edge of the network. So, inside the carrier’s infrastructure, the latency is super optimized. So that’s one thing that is key for the service. We started with a standard architecture, with CPU and GPU. And now, with the current VPU architecture, we are putting whole servers consisting of AMD GPU and NETINT VPU. We build the whole package so that we put this in the infrastructure of the carrier, and we can deploy the Blacknut cloud gaming on top of it.

The best delivery resolution is device dependent.

The question is, again, the cost and the experience. Okay? Streaming 4K on a mobile device does not really make sense. The screen is smaller, so you can stream a smaller resolution and that’s sufficient. On a TV, you likely need a bigger resolution. Even with the great upscaling available on most TV sets, we stream 720p on Samsung devices, and that’s super great, right? But of course, scaling up to 1080p will provide a much better experience. So, on TVs, and for the games that require it, we’re indeed streaming the service at about 1080p.

Frame rates must match game speed.

When playing a first-person shooter, if you have the choice and you cannot stream 1080p at 60 FPS, you would probably stream 720p at 60 FPS rather than 1080p at 30 FPS. But for games with elaborate textures, where resolution is more important, maybe you will select 1080p at 30 FPS instead.

What we build is fully adaptable. Ultimately, you should not forget that there is a network in between. And even if technically you can stream 4K or 8K, the networks may not sustain it. Okay? And then you’ll have a worse experience streaming 4K than streaming 1080p at 60 FPS.