What Can a VPU Do for You?

For cloud gaming, a VPU can deliver 200 simultaneous 720p30 game sessions from a single 2RU server.

When you encode using a Video Processing Unit (VPU) rather than the GPU's built-in encoder, you can decrease your cost per concurrent user (CCU) by 90%, enabling profitability at a much lower subscription price. How is this technically feasible? Two technology enablers make it possible. First, extraordinarily capable encoding hardware, the VPU, dedicated to the task of high-quality video encoding and processing. Second, peer-to-peer direct memory access (DMA), which delivers video frames at the speed of memory rather than over the much slower NVMe bus between the GPU and VPU. Let's discuss these in reverse order.

Peer-to-Peer Direct Memory Access (DMA)

Within a cloud gaming architecture, the primary role of the GPU is to render frames from the game engine output. These frames are then encoded into a standard codec that is easily decoded on a wide cross-section of devices, generally H.264 or HEVC, though AV1 is becoming of interest to services with a broader Android user base. Encoding on the GPU is efficient from a data-transfer standpoint because rendering and encoding occur on the same silicon die; there's no transfer of the rendered YUV frame to a separate transcoder over the slower PCIe or NVMe buses. However, since encoding consumes substantial GPU resources, it dramatically reduces the overall throughput of the system. Interestingly, it's the encoder that is usually at full capacity, and thus the bottleneck, not the rendering engine. Modern GPUs are built for general-purpose graphical operations, so far more silicon real estate is devoted to rendering than to video encoding.

By installing a dedicated video encoder in the system and using traditional data-transfer techniques, the host CPU can easily manage the transfer of YUV frames from the GPU to the transcoder. But as the number of concurrent game sessions increases, the growing probability of dropped frames or corrupted data makes this technique unusable.

NETINT, working with AMD, enabled peer-to-peer direct memory access (DMA) to overcome this limitation. DMA allows devices within a system to exchange data in memory directly, so the GPU can send frames straight to the VPU, preventing the bus from becoming a bottleneck as the concurrent session count climbs above 48 720p streams.
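To see why CPU-managed copies break down at scale, it helps to run the numbers. The following Python sketch is our own back-of-the-envelope illustration (assuming 8-bit 4:2:0 frames, not a NETINT benchmark) of the raw frame traffic 200 sessions generate:

```python
# Illustrative arithmetic only: the raw-frame traffic that must cross the
# bus when rendered frames move from the GPU to a separate encoder.
# Assumes 8-bit 4:2:0 (NV12) frames, i.e. 1.5 bytes per pixel.

def raw_stream_gbps(width: int, height: int, fps: int) -> float:
    """Raw YUV 4:2:0 bandwidth of one session, in gigabytes per second."""
    return width * height * 1.5 * fps / 1e9

per_session = raw_stream_gbps(1280, 720, 30)   # ~0.041 GB/s
total = 200 * per_session                      # ~8.3 GB/s

print(f"one 720p30 session: {per_session * 1000:.1f} MB/s of raw frames")
print(f"200 sessions: {total:.1f} GB/s of raw frames")
# A CPU-managed path copies each frame twice (GPU -> host RAM -> VPU),
# doubling this traffic; peer-to-peer DMA moves each frame once.
```

At roughly 8 GB/s of raw frames, even one extra copy per frame matters, which is why taking the CPU out of the transfer path is so consequential.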


The Benefits of Peer-to-Peer DMA

Peer-to-peer DMA delivers multiple benefits. First, by eliminating the need for CPU involvement in data transfers, peer-to-peer DMA significantly reduces latency, which translates to a more responsive and immersive gaming experience for end-users. NETINT VPUs feature latencies as low as 8ms in fully loaded and sustained operation.

In addition, peer-to-peer DMA relieves the CPU of the burden of managing inter-device data transfers. This frees up valuable CPU cycles, allowing the CPU to focus on other critical tasks, such as game logic and physics calculations, optimizing overall system performance and producing a smoother gaming experience.

By leveraging peer-to-peer communications, data can be transferred at greater speeds and efficiency than CPU-managed transfers. This improves productivity and scalability for cloud gaming production workflows.

These factors combine to produce higher throughput without the need for additional costly resources. This cost-effectiveness translates to improved return on investment (ROI) and a major competitive advantage.

Extraordinarily Capable VPUs

Peer-to-peer DMA has no value if the encoding hardware isn't equally capable. With NETINT VPUs, that isn't a concern.

The reference system that produces 200 720p30 cloud gaming sessions is built on the Supermicro AS-2015CS-TNR server platform with a single GPU and two Quadra T2A VPUs. This server supports AV1, HEVC, and H.264 video game streaming at up to 8K and 60 fps, though, as you might predict, simultaneous stream counts drop as you increase frame rate or resolution.
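As a rough rule of thumb, and this is our own extrapolation rather than a published benchmark, encoder-bound capacity scales inversely with the pixel rate of each session:

```python
# Our own extrapolation, not a published NETINT benchmark: if the encoder
# is the bottleneck, capacity scales inversely with pixels per second.

REF_SESSIONS = 200
REF_PIXEL_RATE = 1280 * 720 * 30       # one 720p30 session

def estimated_sessions(width: int, height: int, fps: int) -> int:
    return int(REF_SESSIONS * REF_PIXEL_RATE / (width * height * fps))

for label, w, h, fps in [("720p60", 1280, 720, 60),
                         ("1080p30", 1920, 1080, 30),
                         ("1080p60", 1920, 1080, 60)]:
    print(f"{label}: ~{estimated_sessions(w, h, fps)} sessions")
```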

Quadra T2A is the most capable model in the Quadra VPU line, the world's first dedicated hardware to support AV1. With its embedded AI and 2D engines, Quadra T2A supports AI-enhanced video encoding, region-of-interest encoding, and content-adaptive encoding. Coupled with a P2P DMA-enabled GPU, it allows cloud gaming providers to achieve unprecedented throughput with ultra-low latency.

Quadra T2A is an AIC (HH HL) form-factor video processing unit with two Codensity G5 ASICs that operates in x86 or Arm-based servers, drawing just 40 watts at maximum load. It enables cloud gaming platforms to transition from software or GPU-only encoding with up to a 40x reduction in total cost of ownership.

What Can A VPU Do For You?

It makes cloud gaming profitable, finally.

Peer-to-peer DMA is a game-changing technology that reduces latency and increases system throughput. When paired with an extraordinarily capable VPU like the NETINT Quadra T2A, you can deliver an immersive gaming experience at a CCU that no competing architecture can match.

Key Cloud Gaming Concepts with Blacknut’s Olivier Avaro

Recently, our Mark Donnigan interviewed Olivier Avaro, the CEO of Blacknut, the world’s leading pure-player cloud gaming service. As an emerging market, cloud gaming is new to many, and the interview covered a comprehensive range of topics with clarity and conciseness. For this reason, we decided to summarize some of the key concepts and include them in this post. If you’d like to listen to the complete interview, and we recommend you do, click here. Otherwise, you can read a lightly edited summary of the key topics below.

For perspective, Avaro founded Blacknut in 2016, and the company offers consumers over seven hundred premium titles for a monthly subscription, with service available across Europe, Asia, and North America on a wide range of devices, including mobiles, set-top-boxes, and Smart TVs. Blacknut also distributes through ISPs, device manufacturers, OTT services, and media companies, offering a turnkey service, including infrastructure and games that allow businesses to instantly offer their own cloud gaming service.

Cloud Gaming Primer - the key points covered in the interview

The basic cloud gaming architecture is simple.

The architecture of cloud gaming is simple. You take games, you put them on a server in the cloud, and you virtualize and stream them in the form of a video stream so that you don't have to download the game on the client side. When you interact with the game, you send a command back to the server, and you interact with the game this way.

Of course, bandwidth needs to be sufficient, let’s say six megabits per second. Latency needs to be good, let’s say less than 80 milliseconds. And, of course, you need to have the right infrastructure on the server that can run games. This means a mixture of CPU, GPU, storage, and all this needs to work well.

But cost control is key.

We passed the technology inflection point where the service becomes feasible. Technically feasible, and the experience is good enough for the mass market. Now, the issue is the unit economics: how much it costs to stream and deliver games in an efficient manner so that it is affordable for the mass market.

Public Cloud is great for proof of concept.

We started deploying the service based on the public cloud because this allowed us to test the different metrics, how people were playing the service, and how many hours. And this was actually very fast to launch and to scale…That’s great, but they are quite expensive.

But you need your own infrastructure to become profitable.

So, to optimize the economics, we built what we call the hybrid cloud for cloud gaming, which is a combination of both public cloud and private cloud. We install our own servers based on GPUs, CPUs, and so on, so we can improve the overall performance and the unit economics of the system.

Cost per concurrent user (CCU) is the key metric.

The ultimate measure is the cost per concurrent user that you can get on a specific bill of materials. If you have a CPU-plus-GPU architecture, the goal is to slice the GPU into different pieces, in a dynamic and appropriate manner, so that you can run different games, and as many games as possible.

GPU-only architectures carry a high cost per concurrent user, which decreases profitability.

There are limits on how much you can slice the GPU and still be efficient, so this architecture has a ceiling because it all relies on the GPU. We are investigating different architectures using a VPU, like NETINT's, that will offload from the GPU the task of encoding and streaming the video, so that we can increase the density.

VPU-augmented architectures decrease CCU by a factor of ten.

I think for some big games, because they rely much more on the GPU, you will probably not increase the density that much. But we think that overall we can probably gain a factor of ten on the number of games that you can run on this kind of architecture. So, passing from a maximum of 20 or 24 games to running two hundred games on an architecture of this kind.

Which radically increases profitability.

So, augmenting the density by a factor of ten also means, of course, diminishing the cost per CCU by a factor of ten. So, if you pay $1 currently, you will pay ten cents, and that makes a whole difference. Because let's assume basic gamers will play 10 hours per month or 30 hours per month; if this costs $1 per hour, this is $10 to $30, right? If this is ten cents, then costs run from $1 to $3, which I think makes the math work on the subscription, which is between 5 to 15 euros per month.
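Avaro's arithmetic is easy to verify. Here it is as a short Python sketch using the figures from the quote above:

```python
# Worked version of the arithmetic in the quote above.
cost_per_hour_gpu = 1.00    # dollars per streamed hour, GPU-only
cost_per_hour_vpu = 0.10    # dollars per streamed hour, 10x density

for hours in (10, 30):      # light and heavy gamers, per month
    print(f"{hours} h/month: GPU-only ${hours * cost_per_hour_gpu:.0f}, "
          f"VPU-augmented ${hours * cost_per_hour_vpu:.0f}")
# 10 h/month: GPU-only $10, VPU-augmented $1
# 30 h/month: GPU-only $30, VPU-augmented $3
```

A delivery cost of $1 to $3 per subscriber per month is what leaves room for margin on a 5 to 15 euro subscription.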

The secret sauce is peer-to-peer DMA.

[Author's note: These comments, explaining how NETINT VPUs deliver a 10x performance advantage over GPUs, are from Mark Donnigan.]

For anybody who understands basic server architecture, it's not difficult to think: wait a second, isn't there a bottleneck inside the machine? What NETINT did was create peer-to-peer sharing using DMA (direct memory access). The GPU outputs a rendered frame, it's transferred inside memory, and the VPU picks it up and encodes it; there's effectively zero latency because it's all happening in the memory buffer.

5G is key to successful gameplay in emerging markets.

[Back to Olivier] What we’ve been doing with Ericsson is using 5G networks and defining specific characteristics of what is a slice in the 5G network. So, we can tune the 5G network to make it fit for gaming and to optimize the delivery of gaming with 5G.

So, we think that 5G is going to grow much faster in those regions where the internet is not so great. We've been deploying the Blacknut service in Thailand, Singapore, Malaysia, and now the Philippines. This has allowed us to reach people in regions where there is no cable or fiber bandwidth.

Latency needs to be eighty milliseconds or less (much less for first-person shooter games).

You can get a reasonably good experience at 80 milliseconds for most games. But for first-person shooter games, you need to be close to frame accuracy, which is very difficult in cloud gaming. You need to go down to thirty milliseconds and lower, right?
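To make that budget concrete, here is an illustrative breakdown in Python. The component figures are our own example numbers, not Blacknut's measurements; the point is that every hop must shrink to reach the 30 ms target:

```python
# Illustrative latency budgets (our example figures, not Blacknut's
# measurements). End-to-end latency is the sum of every hop, so each
# component must shrink to reach the 30 ms first-person-shooter target.

budgets_ms = {
    "casual, 80 ms target":  {"render": 16, "encode": 8, "network": 40,
                              "decode": 8, "display": 8},
    "shooter, 30 ms target": {"render": 8,  "encode": 4, "network": 10,
                              "decode": 4, "display": 4},
}

for name, hops in budgets_ms.items():
    print(f"{name}: {sum(hops.values())} ms total, "
          f"network share {hops['network']} ms")
```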

That’s only feasible with the optimal network infrastructure.

And that’s only feasible if you have a network that allows for it. Because it’s not only about the encoding part, the server side, and the client side; it’s also about where the packets are going through the networks. You need to make sure that there is some form of CDN for cloud gaming in place that makes the experience optimal.

Edge servers reduce latency.

We are putting servers at the edge of the network. So, inside the carrier's infrastructure, the latency is super optimized. That's one thing that is key for the service. We started with a standard architecture, with CPU and GPU. And now, with the current VPU architecture, we are deploying whole servers consisting of AMD GPUs and NETINT VPUs. We build the whole package, put it in the carrier's infrastructure, and deploy the Blacknut cloud gaming service on top of it.

The best delivery resolution is device dependent.

The question is, again, the cost and the experience. Okay? Streaming 4K on a mobile device does not really make sense. The screen is smaller, so you can stream a smaller resolution and that's sufficient. On a TV, you likely need a higher resolution. Even with the great upscaling available on most TV sets, we stream 720p on Samsung devices, and that's super great, right? But of course, scaling up to 1080p provides a much better experience. So, on TVs, and for games that require it, we're indeed streaming the service at about 1080p.

Frame rates must match game speed.

When playing a first-person shooter, if you have the choice and you cannot stream 1080p60, you would probably stream 720p at 60 fps rather than 1080p at 30 fps. But for games with elaborate textures, where resolution matters more, maybe you select 1080p at 30 fps instead.

What we build is fully adaptable. Ultimately, you should not forget that there is a network in between. Even if technically you can stream 4K or 8K, the network may not sustain it. Okay? And then you'll have a worse experience streaming 4K than streaming 1080p at 60 fps.
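The selection logic Avaro describes can be sketched in a few lines. The rules and thresholds below are hypothetical, not Blacknut's production logic:

```python
# A minimal sketch of the trade-off described above. The rules and
# thresholds are hypothetical, not Blacknut's production logic.

def pick_format(fast_motion: bool, bandwidth_mbps: float) -> str:
    if bandwidth_mbps < 6:
        return "720p30"      # protect smoothness before anything else
    if fast_motion:
        return "720p60"      # shooters: frame rate over resolution
    return "1080p30"         # texture-heavy titles: resolution first

print(pick_format(fast_motion=True,  bandwidth_mbps=10))   # 720p60
print(pick_format(fast_motion=False, bandwidth_mbps=10))   # 1080p30
```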

Revolutionizing Online Media Distribution and Delivery

Streaming technologies have revolutionized the digital media landscape, transforming how content is distributed and delivered to audiences worldwide. One pioneering figure in this field is Alex Zambelli, whose career at Microsoft has been closely intertwined with the rise of streaming as the dominant digital media distribution method. Zambelli's work with NBC Sports, particularly during the 2008 Beijing Olympics and subsequent events, was pivotal in advancing online streaming capabilities and earning industry recognition. This article, based on Jan Ozer's conversation with Alex during Voices of Video, explores Zambelli's contributions to streaming technologies, the implementation of multi-view camera angles in Sunday Night Football, and key considerations in livestreaming from insights gained during Olympic events.

Evolution of Streaming Technologies

Alex Zambelli’s career at Microsoft has coincided with the transition from physical media to streaming as the dominant method of distributing digital media. Around 2007, streaming started gaining momentum, gradually overtaking CDs, DVDs, and Blu-rays. Zambelli’s focus on streaming technologies led him to work on Microsoft’s Silverlight, a competitor to Flash, which facilitated the creation of rich web experiences and premium media delivery, including digital rights management. This technology was a significant milestone in the evolution of streaming.

Zambelli’s collaboration with NBC Sports began with the 2008 Beijing Olympics, where they aimed to pioneer online streaming of all Olympics content. Initially, they utilized Windows Media and Silverlight, incorporating adaptive streaming capabilities. The subsequent transition to Microsoft’s Smooth Streaming technology for the 2010 Vancouver Olympics marked a significant advancement. This technology offered on-demand and live streams in high definition, providing viewers with an immersive and seamless experience. These groundbreaking endeavors earned Zambelli and the team recognition from the industry, including nominations for sports Emmys.

Multi-View Camera Angles in Sunday Night Football

The implementation of Smooth Streaming technology played a crucial role in enabling the seamless transition between camera angles in Sunday Night Football broadcasts. By utilizing a single manifest that contained all four camera angles, switching between views became as smooth as switching between bitrates in modern streaming protocols like DASH or HLS. This technology, developed by the broadcast team, allowed viewers to simultaneously watch multiple camera angles, enhancing the overall viewing experience.
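Conceptually, the single-manifest approach looks something like the following Python sketch. This is our simplified model for illustration, not the actual Smooth Streaming manifest schema, and the names and URL template are hypothetical:

```python
# A simplified model of the single-manifest idea: all four camera angles
# live in one presentation, so switching angles works exactly like an
# ABR bitrate switch. This is our illustration, not the actual Smooth
# Streaming manifest schema; names and the URL template are hypothetical.

manifest = {
    "video_tracks": [
        {"angle": "sideline",  "bitrates_kbps": [800, 1500, 3000]},
        {"angle": "end_zone",  "bitrates_kbps": [800, 1500, 3000]},
        {"angle": "cable_cam", "bitrates_kbps": [800, 1500, 3000]},
        {"angle": "all_22",    "bitrates_kbps": [800, 1500, 3000]},
    ],
    "audio_tracks": [{"name": "broadcast_mix"}],
}

def segment_url(angle: str, bitrate_kbps: int, index: int) -> str:
    """Request the next segment from whichever angle the viewer picked."""
    return f"/video/{angle}/{bitrate_kbps}/segment-{index}.m4s"

print(segment_url("end_zone", 1500, 42))
```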

Key Considerations in Livestreaming: Insights from Olympic Events

Livestreaming presents unique challenges compared to on-demand streaming due to its real-time nature. Issues such as packet loss, segment loss, blackouts, and ad insertions demand immediate attention and resolution. Unlike on-demand streaming, where there is some leeway to address content or delivery chain issues over time, livestreaming requires constant vigilance. Even a brief interruption or technical problem can significantly impact the viewer experience.

Successful livestreaming events often involve collaborative efforts from multiple companies, including Microsoft, NBC, Akamai, and iStreamPlanet. These events require dedicated teams ready to address and resolve any issues that arise in real time. The nature of livestreaming necessitates a higher level of focus and attention compared to on-demand streaming. It is crucial to prioritize and allocate sufficient resources to ensure the seamless execution of live events. The potential for unexpected issues or failures makes constant monitoring and immediate troubleshooting essential, as even a minor disruption can have significant consequences.

Voices of Video - Scalable Distribution in the Age of DRM

VOICES OF VIDEO
Scalable distribution in the age of DRM: Key Challenges and Implications.
Watch the full conversation on YouTube: https://youtu.be/s_afoa71muM

Evolution of Video Codecs and Streaming Protocols

The evolution of video codecs and streaming protocols has played a vital role in shaping the streaming landscape. In the late 2000s, the popular video codecs for streaming were VC-1 (supported by Silverlight) and H.264 (supported by Flash). However, the introduction of HTML5 posed challenges for streaming solutions, as the HTML specification lacked the necessary APIs to provide the required level of control and functionality for streaming.

Silverlight and Flash emerged as proprietary plugins that advanced streaming technology beyond what HTML could offer at the time. They provided opportunities to overcome HTML’s limitations and introduced features such as media stream sources and content protection (DRM) to enhance the streaming experience. Silverlight’s media stream source concept, which later influenced HTML’s media source extensions, allowed developers to handle their own segment downloading and parsing, passing the video and audio streams to a media buffer for decoding and rendering. Content protection was a crucial aspect addressed by Silverlight and Flash, as HTML lacked a robust solution for DRM.

Around 2011-2012, Silverlight and Flash began to be phased out as HTML5 matured, offering the necessary APIs for implementing streaming protocols like DASH, HLS, and Smooth Streaming within the browser while incorporating DRM capabilities. HTML5 overcame its initial growing pains and established itself as the predominant platform for streaming. By 2014-2015, HTML5 had evolved sufficiently to support basic streaming functionality and content protection with DRM.

Optimizing Encoding Quality and Cost

Achieving optimal encoding quality while considering cost is a crucial concern for content creators and distributors. At Warner Brothers Discovery, the x264 and x265 encoders are commonly used for transcoding, employing the slow or slower presets to achieve higher-quality output. This approach balances encoding cost with desired video quality.

Recent discussions within the organization have prompted exploration into customizing presets based on specific resolutions and content complexity. The focus is on optimizing encoding efficiency by adjusting presets according to the intricacy of the content and the resolution being processed. Different resolutions have different encoding requirements, and applying the veryslow preset to every resolution may create unnecessary computational overhead for the lower rungs. Similarly, content complexity plays a role in determining the appropriate preset, as not all content requires the veryslow preset. Customizing presets based on resolution and content characteristics allows for more efficient allocation of computational resources.

The popularity and viewership of specific content also factor into the choice of preset. Content with a larger audience may benefit from a slower preset, since the resulting quality improvement can yield CDN savings at scale. On the other hand, smaller-scale content with fewer viewers may not justify the same encoding complexity. Balancing encoding quality and cost requires thoughtful consideration of all these factors.
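The kind of rule described above might be sketched as follows. The thresholds are hypothetical; Warner Brothers Discovery's actual policy is not public:

```python
# Hypothetical preset-selection rules in the spirit described above;
# Warner Brothers Discovery's actual policy is not public.

def choose_preset(height: int, expected_viewers: int) -> str:
    if expected_viewers > 1_000_000:            # CDN savings dominate
        return "veryslow" if height >= 1080 else "slower"
    if height >= 1080:
        return "slower"
    if height >= 720:
        return "slow"
    return "medium"      # low rungs gain little from slow presets

for h in (2160, 1080, 720, 360):
    print(h, "->", choose_preset(h, expected_viewers=50_000))
```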

Adaptive Encoding Ladders: Variations, Frame Rates, and Device Considerations

Adaptive encoding ladders play a crucial role in delivering content based on source resolution and frame rate. At Warner Brothers Discovery, these encoding ladders consist of approximately six to eight different variations, allowing flexibility in content delivery. The source resolution determines the stopping point within the UHD ladder, minimizing the need for multiple permutations of the ladders themselves.

Variations in frame rates necessitate different encoding ladders. The introduction of high frame rates, especially with reality TV content, requires separate encoding ladders to preserve the temporal resolution. Encoding ladders also differ for SDR and HDR content, with distinctions made between HDR10 and Dolby Vision 5, offering specific encoding settings for each.

While currently the same encoding ladders are used for all devices, specific subsets of the ladder may be delivered to certain devices to accommodate their capabilities. Device differentiation is particularly important for high frame rates or resolutions above 1080p. By intentionally capping the manifest delivered to devices that cannot handle certain capabilities, compatibility and optimal viewing experiences can be ensured. Differentiating encoding ladders for various devices is essential for maintaining consistent quality across different devices.
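For illustration, here is what a six-rung ladder in that spirit might look like, with example numbers rather than Warner Brothers Discovery's actual ladder. Capping at the source resolution avoids maintaining separate ladder permutations:

```python
# An illustrative six-rung ladder with example numbers, not Warner
# Brothers Discovery's actual ladder: (height, fps, max bitrate kbps).
LADDER = [
    (2160, 60, 16000),
    (1440, 60,  9000),
    (1080, 60,  6000),
    ( 720, 60,  3500),
    ( 540, 30,  1800),
    ( 360, 30,   800),
]

def ladder_for_source(source_height: int, source_fps: int):
    """Stop at the source resolution instead of keeping many ladders."""
    return [(h, min(fps, source_fps), kbps)
            for h, fps, kbps in LADDER if h <= source_height]

print(ladder_for_source(1080, 30))
```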

VBR Control, Per-Title Encoding, and DRM Considerations in Video Encoding

Video encoding involves crucial considerations such as VBR control, per-title encoding, and DRM integration. At Warner Brothers Discovery, the x264 and x265 encoders employ CRF (Constant Rate Factor) rate control with a bitrate and buffer cap for VBR (Variable Bit Rate) encoding. This approach ensures control over codec levels, peak rates, and overall encoding quality.

VBR control is achieved with the VBV (Video Buffering Verifier) buffer size and VBV max rate parameters. These parameters cap the peak bitrate of the video, while CRF keeps the average bitrate below the specified max rate in most cases. This method enables per-title encoding, achieving CDN savings without compromising quality. Differentiating encoding ladders based on resolutions, frame rates, and HDR formats is essential to conform to content licensing agreements and compatibility requirements.
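In ffmpeg terms, capped CRF looks like the following minimal sketch. The flags are standard ffmpeg/libx264 options, but the values are examples, and ffmpeg is assumed to be installed:

```python
# A minimal sketch of capped CRF, driving ffmpeg's libx264 from Python.
# The flags are standard ffmpeg/libx264 options; the values are examples,
# and ffmpeg must be installed for this to run.
import subprocess

def encode_capped_crf(src: str, dst: str, crf: int = 21,
                      maxrate_kbps: int = 6000, bufsize_kbps: int = 12000):
    subprocess.run([
        "ffmpeg", "-y", "-i", src,
        "-c:v", "libx264", "-preset", "slower",
        "-crf", str(crf),                     # quality target
        "-maxrate", f"{maxrate_kbps}k",       # VBV max rate caps peaks
        "-bufsize", f"{bufsize_kbps}k",       # VBV buffer size
        "-c:a", "copy", dst,
    ], check=True)

# encode_capped_crf("mezzanine.mov", "out_1080p.mp4")
```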

DRM has a significant impact on the encoding ladder. Licensing agreements often demand different security levels for various resolutions, necessitating the assignment of different encryption keys and playback policies to different security groups. The use of hardware-backed DRM, such as Widevine L1 and PlayReady SL3000, is often required for higher resolutions. The trend in the industry is moving towards increased use of DRM across the entire encoding ladder, with a focus on stricter requirements for HDR content. Content licensing agreements are evolving to require comprehensive DRM implementation for improved content protection.
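The mapping from ladder rungs to security groups might be modeled like this. The tier thresholds, key names, and robustness requirements below are hypothetical examples, not terms from a real licensing agreement:

```python
# Hypothetical security groups: the tier thresholds, key names, and
# robustness requirements below are examples, not a real agreement.

SECURITY_GROUPS = {                      # checked in order, uhd first
    "uhd": {"min_height": 1440, "key_id": "key-uhd",
            "requires": "Widevine L1 / PlayReady SL3000"},
    "hd":  {"min_height": 720,  "key_id": "key-hd",
            "requires": "Widevine L1 / PlayReady SL3000"},
    "sd":  {"min_height": 0,    "key_id": "key-sd",
            "requires": "Widevine L3 / PlayReady SL2000"},
}

def security_group(height: int) -> str:
    for name, group in SECURITY_GROUPS.items():
        if height >= group["min_height"]:
            return name
    return "sd"

print(security_group(2160), security_group(1080), security_group(480))
# -> uhd hd sd
```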

Exploring Hardware and Software DRM: Implementation and Impact on Video Streaming

The choice between hardware and software DRM implementations has implications for video streaming security and performance. Hardware DRM involves integrating DRM clients into the secure video path of the system, tightly coupling with the hardware decoder. This ensures secure decoding and decryption of video streams, preventing unauthorized access to the content. Hardware-based DRM establishes a secure video path or secure media path, where the decrypted and decoded bits cannot be retrieved or accessed by applications. This level of security is achieved through close integration with the hardware decoder, ensuring protection throughout the entire decoding process.

On the other hand, software DRM performs decoding and decryption in software, introducing a potential vulnerability where the decoded bits could be compromised or accessed by unauthorized parties. Software DRM lacks the same level of hardware integration and security provided by hardware-based DRM.

The limitations of software-based DRM can impact the resolution of premium content when viewing it on certain platforms or browsers without hardware support. For example, Chrome’s support for Widevine DRM is limited to L3, the software-based implementation. This can result in inferior video quality compared to browsers like Edge or Safari, which support hardware DRM, allowing for a more secure video path and higher quality streaming.

Unifying Packaging Formats: HLS, DASH, and CMAF in Video Streaming

Standardizing packaging formats is crucial for compatibility and interoperability in video streaming. Warner Brothers Discovery and Hulu have been utilizing both HLS (HTTP Live Streaming) and DASH (Dynamic Adaptive Streaming over HTTP) for content distribution. HLS is predominantly used for Apple devices, while DASH is employed for other devices.

The commonality between HLS and DASH lies in their utilization of the CMAF (Common Media Application Format) standard. CMAF serves as a standardized version of fragmented MP4 (fMP4), specifying the necessary boxes and encryption application for fMP4 media segments used in HLS and DASH. CMAF is not a streaming protocol itself but encompasses two components.

Firstly, it defines a refined version of fMP4 for HLS and DASH, establishing a more precise set of guidelines for compatibility. Many existing HLS and DASH implementations using fMP4 media segments are already CMAF-compliant.

Secondly, CMAF specifies a hypothetical logical media presentation model, outlining the relationship between tracks, segments, fragments, and chunks. This model closely resembles HLS or DASH without explicitly using those terms. It provides a framework for addressing different levels of the media presentation.

HLS and DASH can be considered as the physical implementations of the logical media presentation model described by CMAF. The HLS-DASH interoperability specification, such as CTA 5005, heavily relies on CMAF, serving as a unifying model and describing how both HLS and DASH integrate with CMAF. This unification allows for similar concepts to be described across both formats, enhancing compatibility and simplifying the streaming ecosystem.
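The hypothetical presentation model reads naturally as a data structure. The sketch below is our reading of the hierarchy described above, not the normative CMAF data model:

```python
# Our reading of the hierarchy described above, not the normative CMAF
# data model: tracks contain segments, segments contain fragments, and
# fragments contain chunks.
from dataclasses import dataclass, field

@dataclass
class Chunk:                 # smallest addressable unit
    duration_s: float

@dataclass
class Fragment:              # one moof+mdat pair, in fMP4 terms
    chunks: list = field(default_factory=list)

@dataclass
class Segment:               # what an HLS playlist or DASH MPD references
    fragments: list = field(default_factory=list)

@dataclass
class Track:                 # one rendition of audio, video, or text
    name: str
    segments: list = field(default_factory=list)

presentation = [Track("video-1080p"), Track("audio-en")]
print([track.name for track in presentation])
```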

Streamlining Content Publishing and Compatibility Across Platforms

The streaming industry faces challenges related to content publishing and compatibility across diverse platforms and devices. The Consumer Technology Association (CTA) plays a crucial role in addressing these challenges and streamlining content publishing processes. The CTA is actively working to enhance interoperability within the streaming industry, allowing publishers to focus primarily on content development rather than compatibility concerns.

The CTA's WAVE initiative serves as a platform for fostering efforts to streamline content publishing and compatibility. One major challenge in the streaming landscape is the sheer number of application development platforms. Within Warner Brothers Discovery, for example, roughly a dozen to sixteen different application development platforms are used for their streaming service, with some overlap between platforms such as Android TV and Fire TV.

Developers often encounter the unique scenario of building multiple versions of the same application in various programming languages using different platform APIs. This complexity arises due to the diversity of devices and platforms requiring tailored applications. This situation is unparalleled compared to other industries where typically a web app, iOS app, and Android app cover the majority of development needs.

The multitude of application development platforms poses challenges in areas such as encoding and packaging. Determining device capabilities becomes arduous without a standardized specification or set of APIs that can provide consistent and reliable information across different platforms.

The standardization of device media capabilities detection APIs is a crucial step towards enhancing compatibility in the streaming industry. Efforts within the World Wide Web Consortium (W3C) to define these APIs in HTML are underway. However, it is important to note that not all platforms utilize HTML, necessitating the presence of similar APIs across all platforms. Once standardized APIs for media capabilities detection are established, developing a standardized method for signaling these capabilities to servers becomes essential. This facilitates targeting specific devices based on their capabilities and enables actions such as manifest filtering.
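Once capabilities can be signaled, manifest filtering itself is straightforward. The sketch below uses hypothetical capability fields for illustration:

```python
# A sketch of server-side manifest filtering. The capability fields are
# hypothetical; a production system would rely on standardized capability
# signaling once it exists.

LADDER = [(2160, 60, "hevc"), (1080, 60, "hevc"),
          (1080, 30, "avc"),  ( 720, 30, "avc")]

def filter_manifest(ladder, max_height, max_fps, codecs):
    """Drop rungs the requesting device cannot decode or display."""
    return [(h, fps, codec) for h, fps, codec in ladder
            if h <= max_height and fps <= max_fps and codec in codecs]

# e.g. an older smart TV: 1080p30 max, H.264 ("avc") only
print(filter_manifest(LADDER, max_height=1080, max_fps=30, codecs={"avc"}))
```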

Standardization efforts are vital for simplifying content publishing and enhancing compatibility in the streaming industry. By establishing standardized specifications and APIs, the industry can overcome compatibility challenges and streamline the development and distribution of streaming content.

Leveraging These Advances Is Imperative

The evolution of streaming technologies has brought about significant advancements in digital media distribution and delivery. Pioneers like Alex Zambelli have played a crucial role in driving innovation and pushing the boundaries of what is possible in online streaming. The implementation of multi-view camera angles, considerations in livestreaming, advancements in video codecs and streaming protocols, and optimization of encoding quality and cost are key areas that shape the streaming landscape. Standardization efforts, hardware and software DRM implementations, and the role of organizations like the CTA further contribute to enhancing compatibility and simplifying content publishing in the streaming industry. As the streaming industry continues to evolve, leveraging these advancements and best practices is imperative to deliver high-quality, seamless streaming experiences to audiences worldwide.

Unlocking the Potential of Cloud Gaming with VPUs

In this interview, Olivier Avaro, the CEO of Blacknut, discusses the emergence and potential of cloud gaming. Blacknut aims to bring the joy of gaming to the mass market by offering a large catalog of games through cloud-based distribution. Avaro highlights the maturity of both users and technology, making cloud gaming a feasible and attractive option. The interview explores the transition from physical discs to streaming, the importance of cost-effectiveness in delivery, and the architectural advancements in cloud gaming systems.

Avaro emphasizes the potential of hybrid cloud infrastructure and the role of GPU and VPU in maximizing the number of concurrent players and reducing costs. He acknowledges the challenge of making cloud gaming affordable for a wider range of consumers, including those in emerging markets. However, he emphasizes that the cost of delivering the service can be kept within a reasonable range, with subscription prices ranging from $5 to $15 per month, depending on the economic conditions of the region.

The technical infrastructure of cloud gaming is explored in detail. Avaro explains the basic architecture, where games are stored on cloud servers and streamed to users’ devices, eliminating the need for downloads. The key requirements for a seamless experience include sufficient bandwidth, low latency, and a well-equipped server infrastructure comprising CPUs, GPUs, and storage. Initially deployed on public cloud platforms for scalability, Blacknut has devised a hybrid cloud approach to optimize the economics of the service. This involves the incorporation of private cloud servers, allowing for improved performance and cost efficiency.

The interview addresses an innovative architectural aspect of Blacknut's system. Avaro discusses the decision to offload video encoding from the GPU to a dedicated video processing unit (VPU) provided by NETINT.

This approach increases the density of concurrent game sessions, enabling up to 200 players on a single server. This breakthrough in density enhances the economic viability of cloud gaming platforms by significantly reducing costs.

These insights offer valuable perspectives on the advancements in cloud gaming, the importance of cost considerations, and the technological infrastructure that underpins its success.

Avaro also addresses challenges related to unstable internet connectivity in certain regions, discussing collaborations with Ericsson to leverage 5G networks and optimize network characteristics for gaming. While geographical limitations exist, Blacknut is actively expanding its presence to provide global access to its gaming service.

Voices of Video - Cloud Gaming being Real

VOICES OF VIDEO
Cloud Gaming being Real. A conversation with the CEO of Blacknut
Watch the full conversation on YouTube: https://youtu.be/w9Pho6G_bdM

Mark Donnigan:
So we are at the top of the hour, and looks like we should get started. Oliver, are you ready to talk about cloud gaming?

Oliver Avaro:
Absolutely ready.

Mark Donnigan:
Excellent, excellent. Well, welcome to those who are joining us live. This is the May edition of Voices of Video. And if you haven’t joined us before, Voices of Video is a conversation, or some might say a real dialogue. Not a podcast, I guess a videocast. We go live on LinkedIn and also a lot of other platforms. And we are talking each month with innovators in the video space. And so this month I am super excited to have Oliver Avaro, who is the CEO of a company called Blacknut. And we are talking about cloud gaming. I will let Oliver tell us all about what his company does. But welcome to Voices of Video, Oliver.

Oliver Avaro:
Look, thanks a lot, Mark, for the nice introduction. So my name is Oliver Avaro, I'm the CEO of Blacknut, which in short is doing for games what Spotify did for music, right? So we are distributing games from the cloud, a large catalog of games, more than 700 games so far, and this for a simple subscription fee, right? I was a gamer for a long time. I enjoyed it a lot when I was a teenager. I enjoyed it a lot with friends, with my family, later with my kids. And I started Blacknut in 2016 with the big ambition to bring this joy of gaming, this good emotion, and also the positive values of playing together, to the mass market. We spent about three years building the tech. I think cloud gaming does require a bit of technology to work efficiently. Then we started deploying it all over the world, and this is where we are today.

Is Blacknut's CEO a gamer himself?

Mark Donnigan:
I love it. So I have to ask the question: sometimes when we're building advanced technologies, we get so into the technology, we don't get to do the thing that we originally set out to do, like play games. So are you still a gamer? Do you set aside time each day to play?

Oliver Avaro:
I set aside time to play a little bit, that's true. And I have to say that the first game I played was on the Commodore 64; it was named Boulder Dash, right? The older members of the audience will know it. I've been playing with my kid, of course, on the Wii, all the Nintendo games: Mario and Super Mario Kart and Super Mario Galaxy, right? And to be truly honest, I'm still playing a bit with my kid, but mostly I'm touching a bit of Pokemon Go sometimes, to still have a conversation with my wife about gaming.

Mark Donnigan:
That's good. That's good. Well, I am really excited for this conversation. And I was just thinking back as I was making some notes for what I thought we should talk about. In 2007, I had the distinct privilege, and I really do consider it to be a privilege, to be a part of one of the early, early innovators of streaming, what we now call OTT; at the time it was transactional VOD. The company still exists; it's called Vudu. And we had this crazy idea to take on Blockbuster. Those who have been around for a little while will remember Blockbuster video stores in the US; other countries had the equivalent, and eventually I think Blockbuster did expand outside the US. But you'd go to the video store, you'd rent a disc, DVD, and then eventually Blu-ray, and you would drive home so excited for the family to join around the TV and watch it.

And I can remember how shocking it was to have built this amazing experience where every title was in stock. And those of us who remember the video store, remember that that was part of the challenge, on new release day you had to rush down to the store to be the first in line so you could even get the movie, because they only had so many copies. And then of course you had to worry about did I return it, did I return it by the deadline or do I have to pay for a second day. There was a lot about the experience that actually wasn’t so great. And yet we were shocked at how many people said, “Why would I want to stream over the internet? DVD is great. This is amazing. Look at the quality. No one’s going to want to replace the DVD.” Well, 15 years later, obviously that sounds absolutely crazy, as now the entire world is streaming and we can’t even imagine a world without it.

But as I was thinking about cloud gaming, it feels like maybe we're a little bit further along than we were in 2007, but still not everybody's convinced. I'm even surprised by some of the major publishers I'm coming across; it's not a foregone conclusion to them that the console is going to be replaced by streaming. And so let's start there. Oliver, I have to imagine that a lot of what you're spending time doing, aside from building the technology, is making the case for why internet delivery of a game experience is going to be better, and ultimately is better, than something that's installed on a PC, downloaded, or on a console. So what insights do you have to share about where we are in this transition from consoles and discs to streaming for games?

Oliver Avaro:
Mark, I think the analogy with Blockbuster is very relevant. And I feel that first, in terms of market maturity for the end user, we are probably at that point where people would question, "Why should I do that? I can download a game, why should I actually stream it? Why do something different?" Right? And when I created Blacknut, a person that I highly respect told me, "Wow, people will not use it because they can download it," right? Now, if you look at where we are right now, with people consuming all their media, like audio and video, music and books, in a streaming manner, it seems that having people access games the same way is the right idea, or the right next step, right?

And I do think that there is a bit more maturity, with people actually willing to access games this way. Now, there has probably been an inflection point in terms of technology maturity. I think the technology, meaning basically the hardware you can have in the cloud, the bandwidth you have available in your home, the kind of device you have to run it on, and so on, is good enough to provide a great experience. And I do think that we are at the time where we're passing this inflection point; probably years ago it was not sufficient. And we have seen a lot of companies trying to do this, but actually failing, and failing really badly. But also learning a lot from these failures.

So I think we're at a very exciting time now, where we have this maturity in terms of technology. We have the maturity of the end user, because they are used to consuming this kind of media with audio, video, eBooks, and so on. So probably they're craving access to games, and more and more people are gaming. And we also have the maturity of the content owners and the publishers. So I think we're at a very, very good time in the market.

Deliver at ultra low latency. Possible?

Mark Donnigan:
Well, I definitely agree that we are much further advanced than we were. I think of some of the things that we had to do. Vudu in 2007 actually required an appliance, a device with a hard drive onto which we could download the first 30 seconds, maybe a minute, of every single title in the library. At that time, the library was not as big as libraries are today. But that was simply because streaming bandwidth was 768 kilobits. Maybe 1.5 megabits was really fast. If you were really lucky you had 5 megabits. My, how we've grown. So we're definitely in a better position.

Before we get into the technology, because that’s where we’re going to spend the bulk of our time today. But something that I think also you’re in a really good position to address is, is the cost side. So certainly, we’re at a place today with the cloud that you can deliver anything, really anywhere via the cloud. So the notion that you can do cloud gaming, i.e., it’s possible to deliver an ultra low latency, very high quality experience from the cloud. I don’t think anybody conceivably would say, “Oh, I don’t believe that. That’s not possible.” But there is a real issue of the cost. And so why don’t you address where we’re at in terms of just delivery cost, and I’m speaking of OpEx. Where are we at? I mean, is this possible but not affordable, or is this possible and affordable, even for someone who might not be able to charge their consumer a whole lot of money? Not all markets are the US or Western Europe, or some of these regions where consumers are willing to pay $10, $15, $20 a month.

Oliver Avaro:
No, that really is a key issue, Mark. Because, as you mentioned, I think we passed the technology inflection point where the service becomes feasible. Technically feasible, and the experience is good. We think it's good enough for the mass market. I am sure that some people will be unhappy with it. Really core gamers will say, "Well…"

Mark Donnigan:
Sure.

Oliver Avaro:
Probably the same people who, when the DVD came, said, "Well, I still want to listen to my vinyl on my turntable, because this is what I'm used to listening to my music on. And you will not beat that quality with digital sound." Right? But for the mass market, I think we got to the point where the feasibility is here. Of course we need good bandwidth, stable, with very low jitter, the variation of the latency. But we are here, right?

Now, the issue is indeed the unit economics, and how much it costs to actually stream and deliver games in an efficient manner, so that it is affordable for the mass market. And one thing here: I think the game is not done. Okay? There are some challenges. As you know, the cost of streaming depends on the number of hours per month, let's say, that you stream. We think we have at least reached the maturity where it's becoming viable, so that you get to a price point which is what people expect, which is between $5 to $15, depending on how wealthy the country is. So we think this is realistic. But of course, it depends on the intensity of the players, how much they play. And if you want to really sustain this and have great economics, there is still some improvement to be done. Okay? I would say we have the baseline architecture that allows the service to be profitable, to make it really work and really scale. There is still some margin for improvement, and we have ways to improve the unit economics.

Technical infrastructure

Mark Donnigan:
So you're saying that, to the end user, about $5 a month to $15 a month is a target that is possible to reach? Which means that the actual cost to deliver the service has to be less than that.

So $5 a month, even in more emerging markets where maybe subscription prices cannot be what they are say in the US, feels like that’s doable. So that’s actually good to hear. Tell us what is the technical… Let’s talk now about what the technical infrastructure looks like and what it takes to deliver. How have you built your system? And then we will get to the broader architecture of Blacknut and what exactly you’re offering. But let’s start with what is your system built on? What does it look like? What are you deploying? Is this a cloud service? Is it run all on prem?

Oliver Avaro:
So basically, the architecture of cloud gaming is somehow simple. You take games, you put them on a server in the cloud, and you virtualize it and stream it in the form of a video stream, or some other format, so that you don't have to download the game on the client side, and you can play it as you would play a video stream. And when you interact with the game, you send a command back to the server, and then you interact with the game this way. And so, of course, bandwidth needs to be sufficient, let's say 6 megabits per second. Latency needs to be good, let's say less than 80 milliseconds. And of course you need to have the right infrastructure on the server that can run games. Now, games mean a mixture of CPU, GPU, storage, and all this needs to work well.

We started deploying the service on the public cloud, because this allowed us to test the different metrics, how people were playing the service, how many hours. And this was actually very fast to launch and to scale. So this is what the public clouds, the hyperscalers, GCP and so on, provide. That's great, but they are quite expensive, as you know. So to optimize the economics, we actually built, and invented at Blacknut, what we call the hybrid cloud for cloud gaming, which is a combination of both public cloud and private cloud. So we have to install our own servers based on GPUs, CPUs and so on, either directly at Blacknut or with partners like Radian Arc, so that we can improve the overall performance and the unit economics of the system. That, I think, allowed us to build a profitable service. If you just stay on the public cloud, I think it is super hard to get something which is viable. But with this kind of hybrid cloud, it's actually very doable.

Mark Donnigan:
And these are standard x86, commercial, off-the-shelf, Intel, AMD machines. I mean, there’s nothing special required or have you gone to a purpose-built design?

Oliver Avaro:
No, the current design is definitely specific to the private cloud, but it's based on standard x86. And for GPUs we use AMD or NVIDIA. Okay? We have a mixture of different providers, but basically this is, I would say, a reasonably standard architecture, with a mix of CPU, GPU and storage.

Cloud gaming use case

Mark Donnigan:
The cloud gaming use case is a primary one, and that's obviously why we got introduced. And you are using NETINT, which we will get to. But the key measure from a technology perspective, and it maps directly back to cost, for a cloud gaming installation is the number of concurrent sessions per server. It just stands to reason that the more concurrent sessions or players you can get on a server, the less expensive it's going to be to operate and run. So that's not too difficult to understand.

One of the things that’s really interesting is, and I’d like for you to talk about this architecture where you have the GPU rendering the game, but you’re actually not doing the video encoding on the GPU. So what does that look like? And also, talk to us about the evolution, because that’s not where you started. And most cloud gaming platforms today are attempting to keep everything on the GPU, which has some advantages, but it has some very distinct disadvantages and trade-offs. And the disadvantage is you just can’t get the density, which means that your cost per stream likely cannot meet that economic bar where you can really affordably deliver to a wider number of players. I.e., you can’t drive your cost down so you have to charge more, and there’s people who will say, “Well that’s too expensive.” But talk to us about this architecture.

Oliver Avaro:
So that's correct, Mark. I think the ultimate measure is the cost per CCU, right? The cost per concurrent user that you can get on a specific bill of materials. If you have a CPU-plus-GPU architecture, the goal is to slice the GPU into different pieces, in the most dynamic and appropriate manner, so that you can run different games, and as many games as possible. Right? So typically, on a standard GPU, if you run a big game, a large game, you can cut the GPU into four pieces. If you run a medium game, you can maybe cut it into 6 or 8 pieces. And if you run a smaller game, then maybe you can get to, I don't know, 20 pieces, right?

There are some limits on how much you can slice the GPU for the GPU to still be efficient. For example, NVIDIA lets you slice one GPU into 24 pieces, but that's it, right? And so there are some limits in this architecture, because it all relies on the GPU. We are indeed investigating different architectures where we use a VPU; NETINT is providing a video processor that will offload from the GPU the task of encoding and streaming the video, so that we can increase the density. And we see it, in terms of the full architecture, as something which will be a bit more flexible. I think for some big games, because they rely much more on the GPU, you will probably not increase the density that much. But we think that overall we can probably gain a factor of 10 on the number of games that you can run on this kind of architecture. So passing from a max of 20 or 24 games to ten times that, right? Running 200 games on an architecture of this kind.

Mark Donnigan:
Yeah, that's really remarkable. And just in case somebody isn't doing the quick math here, what you're saying is that with this CPU plus GPU plus VPU, where the VPU is the ASIC-based video encoder, all in the same chassis, the same server, we're not talking about different servers, you can get up to 200 game players simultaneously, 200 concurrent players. Which just radically changes the economics. And in our experience, working with publishers and with cloud gaming platforms, nearly everybody has said that without that, it's not even really economical to build the platform. In other words, you end up having to charge your customer so much that the experience is not viable.

Oliver Avaro:
That’s correct.

Mark Donnigan:
Yeah, that’s important.

Oliver Avaro:
And for certain categories of games, you can definitely reach this level. So actually, augmenting the density by a factor of 10 also means, of course, diminishing the cost per CCU by a factor of 10. So if you pay $1 currently, you will pay 10 cents, and that makes a whole difference. Because let's assume basic gamers will play 10 hours per month or 30 hours per month; if this is $1 per hour, this is $10 to $30, right? If this is 10 cents, then you go to $1 to $3, which I think makes the math work on the subscription, which is between 5 to 15 euros per month.

Is the hardware super expensive?

Mark Donnigan:
One of the questions that comes up, and I know we've had this conversation with you, is: how is this possible? Because for anybody who understands basic server architecture, it's not difficult to think, well, wait a second, isn't there a bottleneck inside the machine? And this must require a really super hot-rodded machine, so maybe the cost savings are offset by super expensive hardware. And I think it's important to note that the reason this is possible is, first of all, that the VPU is built on NVMe architecture. So it's using the exact same storage protocol as your hard drive, as the SSDs that are in the machine. And what NETINT has done is actually create peer-to-peer sharing inside the DMA. So basically the GPU will output a frame, a rendered frame, and it's transferred literally inside memory, so that the VPU can then pick that up and encode it, and there's effectively zero latency; at least, the latency is so low because it's happening in the memory buffer.

And so if anybody’s listening and raising an eyebrow wondering, “Well wait a second, surely there’s a bottleneck.” And especially if you’re talking 60 frame per second, which by the way, our benchmarks are generally always at 60 frames per second. Because unless it’s real casual games, you need that frame rate to really deliver a great experience. Even above resolution in some cases, it’s better to get the frame rate up than to increase the size of the frame.

Oliver Avaro:
Absolutely. Absolutely.

Mark Donnigan:
Yeah. Let me just pause here and say that we would love to have questions. And so feel free, on whatever platform, if you're on YouTube or LinkedIn or wherever you're watching us right now, just type them in and I will try to pick those up. It looks like we already have one. I think this is actually a really good one, so I'm going to pick it up right here. But feel free to enter questions in the chat. So Oliver, the question is, "I live in a country where stable internet is not always available." And by the way, I would say that this isn't only a country issue; internet varies, right? And the expectation of users, more and more, is that they don't think about the fact that I'm in a car, I happen to be in an area where there's great coverage, but seven miles down the road that changes, right? They want to keep playing and keep enjoying this great experience.

So the question is, “I live in a country where stable internet is not always available. How will this affect the gaming experience?” And yeah, I mean, that’s the question. So what’s your experience and how are you guys solving for this?

Oliver Avaro:
You see, with Netflix or Spotify, you can actually buffer content, so that even if your bandwidth is a bit clumsy, you can store that content from the CDN and keep the experience good enough, right? Or you can download the video and make it work. So you definitely have some ways to solve that problem in what I would call cold media, right? Media that you can encode once, then stream later. In games, this is completely different.

Mark Donnigan:
Yeah, you can’t do that.

Oliver Avaro:
Because we have to encode, stream, deliver, and then take the interaction right away. So if your bandwidth is not enough, if the quality of the bandwidth is not enough, and not only in terms of the size of the bandwidth but also in terms of its characteristics, the latency, how stable that latency is, and so on, then the experience will not be great, right?

So what we've been doing with Ericsson, okay, is to use 5G networks and to define the specific characteristics of what is a slice in the 5G network. So we can tune the 5G network to make it fit for gaming, and to optimize the delivery of gaming with 5G. So we think that 5G is going to grow much faster in those regions where the internet is not so great. We've been deploying the Blacknut service in Thailand, in Singapore, in Malaysia, now in the Philippines and so on. And this has allowed us to actually reach people in regions where there is no cable or fiber bandwidth, this kind of thing. So look, I'm not going to solve a problem where bandwidth is not available, but maybe bandwidth will come faster with 5G, and that could be the solution.

Mark Donnigan:
Yeah, I want to make a comment there, and thank you for the answer. It's very interesting, and I'll use India as an example. For years in video streaming, the Indian market was used as an example of where it was very difficult to deliver high quality, especially if you wanted to deliver, say, 720p; at a certain period of time it was assumed that 1080p was not even possible. Because the network capacity and the speeds were just so low.

What has happened is, and India’s a great case study here, but it’s really almost all regions of the world, as these wireless infrastructures have been upgraded, they leapfrogged literally from 3G, or in some cases even 2.5G and before, and just went all the way to 5G. And so in the last five years there has been such a fundamental shift in bandwidth availability that in some of these regions of the world, not only is it definitely no longer true that they’re slow, they’re faster than some of the more developed countries. So I do want to make that statement there. One question, Oliver: can you talk about whether this is WebRTC? What protocols are you using? There’s a lot of talk right now about QUIC, and I think that would be interesting for some of the listeners who might be wondering what protocols you’re using.

Oliver Avaro:
So we use standard codecs, to start with the bottom line. We have not invented our own codecs; we have been in the audio and video standardization industry for quite some years, and I think you have great experts there doing great technology. And this technology is actually embedded into the chipset, into the hardware, so you can rely on hardware encoding and decoding capabilities. So we do think standard codecs are basically a must-have, right? Of course, you need to configure them the right way, because you have to encode in real time. Okay? So you cannot use techniques that wait for a couple of frames or more, so you have to optimize this. But basically, we use standard codecs.
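
To make that concrete, here is a minimal sketch of what “configuring a standard codec for real time” can look like, using the browser’s WebCodecs API purely as an illustration. Blacknut’s actual server-side encoder settings are not public; the codec string and bitrate below are assumptions.

```ts
// Hypothetical low-latency H.264 encoder setup via WebCodecs (illustrative only).
const encoder = new VideoEncoder({
  output: (chunk) => {
    // In a real pipeline this chunk would be packetized and sent to the client.
    console.log(`encoded ${chunk.type} chunk, ${chunk.byteLength} bytes`);
  },
  error: (e) => console.error(e),
});

encoder.configure({
  codec: "avc1.42E01E",    // H.264 Baseline: widely decodable in hardware
  width: 1280,
  height: 720,
  framerate: 60,
  bitrate: 6_000_000,      // 6 Mbps is an assumption, not a Blacknut figure
  latencyMode: "realtime", // favor latency over quality: no multi-frame lookahead
  avc: { format: "annexb" },
});
```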

Then on the protocols, on top of this we actually have a large variety of protocols. It depends on the device to which you are streaming. So it can go from fully proprietary protocols that we have invented and patented at Blacknut, to standard WebRTC. Okay? So if you look at devices like Samsung and LG, which are basically the top manufacturers, the service has been launched on LG, and we are going to announce our launch with Samsung very shortly. These devices support WebRTC, and that is basically the only way to implement and support a cloud gaming solution efficiently there. So, short answer: we use a wide range of protocols, always the one that is most appropriate and provides the best experience to the end user. We are of course looking at new protocols and new standards, and experimenting with them. But I would say for the mainstream solution, we use our own protocol plus WebRTC; those are the ones that are there.
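
For readers wondering what the WebRTC path looks like on a TV or browser client, here is a minimal sketch using the standard browser WebRTC API. The data-channel name, the STUN server URL, and the input message format are all hypothetical, and the signaling exchange with the streaming server is service-specific and omitted.

```ts
// Minimal client-side sketch of a WebRTC cloud-gaming session (illustrative only).
const pc = new RTCPeerConnection({
  iceServers: [{ urls: "stun:stun.example.com:3478" }], // hypothetical STUN server
});

// The rendered game arrives as a remote video/audio track; attach it to a <video> element.
pc.ontrack = (event) => {
  const video = document.querySelector("video") as HTMLVideoElement;
  video.srcObject = event.streams[0];
};

// Controller input flows back on an unreliable, unordered data channel,
// so a stale keypress is dropped rather than retransmitted.
const input = pc.createDataChannel("input", { ordered: false, maxRetransmits: 0 });
document.addEventListener("keydown", (e) => {
  if (input.readyState === "open") {
    input.send(JSON.stringify({ key: e.code, down: true })); // hypothetical message format
  }
});

// Offer/answer signaling with the game server would happen here (omitted).
```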

The end-to-end latency targets

Mark Donnigan:
On end-to-end latency targets, I think previously you made the comment about 80 milliseconds. But give us some guidelines. Obviously the answer is as low as possible, but what’s the upper limit where the game experience just falls apart, where it’s just not playable?

Oliver Avaro:
You know that the limit for conventional video is about 150 milliseconds. For playing games, it is much lower, probably half of that. So I think you can get a reasonably good experience at 80 milliseconds for most games that do not require very fast reactions. But if you want to go to FPS games or that kind of thing, which really need to be reactive at nearly frame accuracy, which is of course very difficult in cloud gaming, you need to go down to 30 milliseconds and lower, right? And I think that is only feasible if you have a network that allows for it. Because it’s not only about the encoding part, the server side and the client side; it’s also about where the packets are going through the network. Okay?

Because you can have the most efficient systems in terms of encoding latency and decoding latency, but if your packets, instead of going directly from the server to the end user, go here and there and transit through many places, then your experience will be crappy. And Mark, this is actually a real issue, because we, for example, had a great demonstration with Ericsson in Barcelona at the Mobile World Congress. We had servers in Madrid, but when we made the first test, we discovered that the packets were going from Madrid to Paris, and back to Barcelona, right? So this needs a bit of intelligence and technology to make the connection as efficient as possible.
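
Putting illustrative numbers on that budget helps show why routing matters so much. The sketch below breaks an end-to-end frame path into stages; every figure is an assumption for illustration, not a measured Blacknut or NETINT number.

```ts
// Back-of-the-envelope end-to-end latency budget for one frame (all numbers illustrative).
const stagesMs = {
  input: 2,    // controller event reaches the game engine
  render: 8,   // GPU renders the frame
  encode: 8,   // hardware video encoder
  network: 20, // propagation plus queuing; this is the line that bad routing inflates
  decode: 5,   // hardware decode on the client device
  display: 8,  // on average, half a 60 Hz frame interval before the pixel lights up
};

const total = Object.values(stagesMs).reduce((sum, ms) => sum + ms, 0);
console.log(`total ≈ ${total} ms`); // ≈ 51 ms: comfortable for an 80 ms game,
                                    // but a 30 ms FPS target forces every stage,
                                    // especially the network, to shrink
```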

Mark Donnigan:
Tell us about Blacknut. What exactly do you guys deliver?

Oliver Avaro:
We basically provide a cloud gaming service, which, let’s say, we categorize as games as a service. Okay? This means that for a subscription fee per month you get access to the real stuff. You get access to 700 games. We are adding 10 to 15 new games per month, which is, I think, the fastest pace of catalog growth on the market. And we provide this experience on every single device that can actually receive video. Okay? So that’s what we do. And we distribute this service either B2C, so direct to the consumer: if you go to the Blacknut webpage, you can subscribe and access the games. But we also distribute it through carriers, so telecommunication carriers and operators all over the world. We currently have about 20 signed carrier agreements actually live, more than 40 signed in total, and we are signing and launching one to two new carriers per month. So that’s the pace we are at in Blacknut. And the choice to use carriers here is for the reason I explained to you, that it’s good to have…

Mark Donnigan:
Optimization of the network.

Oliver Avaro:
You need to know where the packets are going. You need to make sure there is some form of CDN for cloud gaming in place that makes the experience optimal.

Mark Donnigan:
Yeah, it completely makes sense to me, especially because you mentioned the 5G optimization. And obviously carriers have been investing for years now in building out their 5G networks, but they’re always looking for ways to drive more value and really extract the full potential out of that 5G investment. So yeah, it really makes sense.

Oliver Avaro:
That’s the kind of thing we’re doing as well with our partner Radian Arc: we are putting servers at the edge of the network, inside the carrier’s infrastructure, so that the latency is really optimized. So that’s one thing that is key for the service.

The architecture

Mark Donnigan:
What is the architecture of that edge server? What’s in it? What CPU, GPU, VPU. Describe that.

Oliver Avaro:
We started with a standard architecture, with CPU and GPU. And now, with the current VPU architecture, we are actually deploying whole servers consisting of AMD GPUs and NETINT VPUs. Basically, we build the whole package, we put it in the carrier’s infrastructure, and we can deploy Blacknut cloud gaming on top of it.

Mark Donnigan:
And are you delivering to only a handful of fixed resolutions? If I was on a TV for example, do I get 4K or do you limit to 1080p or how do you handle that?

Oliver Avaro:
Again, great question. Okay? We can actually handle multiple resolutions. I think we can stream from 720p up to 4K. The technology basically has no limits here, right? And streaming 4K or even 8K is a problem that has somehow been solved already, as a technical matter. The question is, again, the cost and the experience. Okay? Streaming 4K on a mobile device does not really make sense; the screen is smaller, so you can stream a smaller resolution and that’s sufficient. On a TV, you likely need a bigger resolution. Even so, there is great upscaling available on most TV sets; we stream 720p on Samsung devices and that looks great, right? But of course scaling up to 1080p will provide a much better experience. So on TVs, I think we’re indeed streaming the service at about 1080p for the games that require it.

Mark Donnigan:
Do you also find that frame rate is almost more important than resolution?

Oliver Avaro:
For certain games, absolutely. But again, it is game dependent. Of course-

Mark Donnigan:
It’s game-dependent, yeah.

Oliver Avaro:
If you are on an FPS and you have to choose because you cannot stream 1080p at 60 FPS, you would probably stream 720p at 60 FPS rather than 1080p at 30 FPS, right?

Mark Donnigan:
Yes.

Oliver Avaro:
If you have to make some trade-off, yes. But if you have different games where the textures, the resolution, are more important, then maybe you would instead select 1080p at 30 fps. And what we built is actually fully adaptable. Ultimately, you should not forget that there is a network in between. Even if technically you can stream 4K or 8K, the network may not sustain it. Okay? And then you’ll actually have a worse experience streaming 4K than streaming 1080p at 60 FPS.
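
The 720p60-versus-1080p30 trade-off is easy to see in raw pixel throughput: the two options cost roughly the same to encode and transmit, so the choice comes down to whether the game values motion or detail. A quick sketch:

```ts
// Raw pixel throughput of the two trade-off options (exact arithmetic).
const pixelRate = (width: number, height: number, fps: number) => width * height * fps;

const p720at60 = pixelRate(1280, 720, 60);   // 55,296,000 pixels/s
const p1080at30 = pixelRate(1920, 1080, 30); // 62,208,000 pixels/s

// Within about 12% of each other, so encoder load and bitrate are comparable;
// 720p60 spends the budget on motion, 1080p30 spends it on spatial detail.
console.log((p1080at30 / p720at60).toFixed(2)); // ≈ 1.13
```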

Gaming anywhere you live?

Mark Donnigan:
Okay, I see a question just came in: how do we know where the service is available, or is it available anywhere you live? I think you can answer that question, but why don’t you also explain whether there are geographical limitations? Is your content available anywhere? And then, as an extension, I don’t think you actually talked about how many publishers you have. You did talk about onboarding, I think, 10 or 12 new games every month. But yeah, so: are there geographical restrictions? How can someone access this?

Oliver Avaro:
Great. Let’s start with content. Okay? Indeed, we have more than 700 games right now, and we add 10 to 15 new games per month. And we actually try not to have geographical limitations on the content. Okay? So the content we have in the catalog is, from a licensing point of view, available worldwide. We do have exceptions, as usual, but basically a large part of the catalog is available worldwide. Now, in terms of deploying this catalog across different regions, we are available in more than 45 countries. We definitely need to have servers that are close enough to the end user so that the streaming experience is good enough, and we think that a radius of between 750 and 1,500 kilometers is probably the maximum. So we will actually put points of presence in those geographical areas, so that the latency, which is ultimately limited by the speed of light, does not harm the service.
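
That speed-of-light bound is easy to check. Light in optical fiber travels at roughly two-thirds of c, about 200 km per millisecond, so the round trip over the radius Oliver mentions works out as follows (the fiber speed is a standard rule of thumb, and real routes are longer than the straight-line distance):

```ts
// Round-trip propagation delay over fiber for a given straight-line radius.
const FIBER_KM_PER_MS = 200; // ~2/3 of the speed of light in vacuum

const roundTripMs = (km: number) => (2 * km) / FIBER_KM_PER_MS;

console.log(roundTripMs(750));  // 7.5 ms
console.log(roundTripMs(1500)); // 15 ms
// Even at 1,500 km, propagation alone eats half of a 30 ms FPS budget,
// which is why points of presence need to stay this close to players.
```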

So of course, if you look at it, we have Europe very much covered. We have the US and Canada very much covered. We have a large portion of Southeast Asia, Korea, and Japan very much covered. We are now expanding in Latin America, which is a bit harder. We also have a strong presence now in the Middle East, with partners like STC in the region. And of course we have some zones that are less covered. Africa is not well covered at all; South Africa is, but the rest of Africa is a bit harder to reach.

Mark Donnigan:
By the way, what is the website? Why don’t you give out the URL there?

Oliver Avaro:
www.blacknut.com
I’d say, try the service. We’ll be very happy to support you and to get your feedback; I’m very interested in the feedback as well.

Mark Donnigan:
It’s super exciting. And as I said in the beginning, for me personally, having been there in the very early stages of the transition from physical entertainment delivery, I’m talking about movies specifically, like DVDs, to streaming, I’m just super excited to now, 15 years later, be there with games. And there’s a lot of work to be done. And as you pointed out, the experience doesn’t map exactly yet; we can’t throw out the console yet. But the opportunity to bring the gaming experience to a much wider audience is really enabled by streaming. So, by the way, I think there’s a follow-on question here: do you have infrastructure in South Africa? You mentioned Africa’s not covered as well, but…

Oliver Avaro:
Yes, we do have the capacity to deploy the service in South Africa, absolutely.

Mark Donnigan:
To deploy in South Africa. Okay, great. Well, we’re right up against time, so thank you to everyone who joined us live. Really appreciate it. And thank you, Oliver. It’s amazing what you’ve built, and we’re super excited to be working with Blacknut.

Oliver Avaro:
Thank you everyone. Thanks, Mark.