Zapping: Low-Latency Premium Streaming Across Latin America

Zapping Low-Latency CDN Technology

Learn why live-streaming platform Zapping built its own low-latency technology and CDN to stream Latin American content using NETINT Streaming Video Servers, accelerating Zapping's rapid expansion. "Zapping is the Netflix of live streaming here in Chile, in Latin America. We developed all our technology: the encoders, low-latency solution, and the apps… We developed our own CDN…" – Nacho Opazo, Zapping Co-founder and CTO.

FIGURE 1. Nacho Opazo, Zapping Co-founder and CTO, on a rare vacation away from the office.
Source: https://www.linkedin.com/in/nachopazo/overlay/photo/

Background

Zapping is a live-streaming platform in Latin America that started in Chile and has since expanded into Brazil, Peru, and Costa Rica. Ignacio (Nacho) Opazo, the co-founder and CTO, has been the driving force behind the company’s technological innovations.

The verb zapping refers to the ability to switch content streams with minimal delay. Give him a minute and Nacho will gladly demonstrate the platform's superior low-latency performance in the hyper-responsive mobile app he designed and developed. He's also responsible for Zapping's content delivery network (CDN), custom low-latency technology, and user interfaces on smart TVs.

Zapping streams free channels available via terrestrial broadcast, as well as content from HBO, Paramount, Fox, TNT Sports, Globo, and many others. Though this includes a broad range of content types, from local news to primetime TV to premium movies, what really moves the viewership needle in South America is sports, specifically soccer.

Latin America is a competitive marketplace; in addition to terrestrial TV, other market entrants include DirecTV, Entel, and Movistar, along with free-to-air content in some markets. This makes soccer coverage a key driver of subscription growth, and it presents multiple operational challenges, including latency, video quality, and bandwidth consumption. With aggressive expansion plans, Zapping needed to meet these requirements while carefully managing capital and operating costs.

FIGURE 2. Innovative, feature-rich players and broad compatibility are key to Zapping’s outstanding customer experience.
Source: https://www.zapping.com/compatibilidad

The Challenges of Soccer Broadcasting

Latency is a critical issue for soccer coverage and all live sports. As Nacho described, 

"Here in Chile, the soccer matches are premium. So you need to hire a cable operator, and you can hear your neighbor screaming if they have a cable operator with lower latency. Latency is one of the key questions we get asked about on social media. In Brazil, it is more complicated because some soccer matches are free to air. So, our latency has to be lower than free-to-air in Brazil."

One potential solution was to install a server with a low-latency transcoder in the CDN of each soccer broadcaster, ensuring that Zapping's streams originate as close to the original signal as possible.

Zapping also competes with these same services on video quality, a key determinant of quality of experience (QoE). Soccer is incredibly fast-moving and presents many compression challenges, from midfield shots of tiny players advancing and defending, to finely detailed shots of undulating crowds and waving flags, to close-ups of fouled players rolling in the grass. Zapping needed a transcoder that preserves detail and color accuracy without breaking the bandwidth bank. Like latency, Zapping's bandwidth problems vary by country; in every market, soccer's popularity stresses the internet in general.

"Video files are huge, and when you have a soccer match, thousands of people come to your servers and saturate the region's internet... In the beginning, we saw low bandwidth connections – like 10 Gbps trunks between ISPs, and we saturated that trunk with our service."

Beyond general capacity, some countries have suboptimal infrastructures for high-bandwidth soccer matches, like low-speed inter-trunk connections. “In the beginning, we saw low bandwidth connections – like 10 Gbps trunks between ISPs, and we saturated that trunk with our service.” Problems like these convinced Zapping to create their own CDN to ensure high-speed delivery.

In Chile, Zapping found a different problem. "Here in Chile, we have really good internet. We have a connection of one gigabit per second to the users, over fiber optic. But 80% of our viewers watch on Smart TVs that they don't upgrade that often, and these devices don't have good Wi-Fi connections. So, Wi-Fi is the problem in Chile." While Zapping's CDN was a huge help in avoiding bandwidth bottlenecks, the best general-purpose solution was to implement HEVC.

"80% of our viewers watch on Smart TVs that they don't upgrade that often, and these devices don't have good Wi-Fi connections. So, Wi-Fi is the problem in Chile."

To summarize these requirements: Zapping needed a transcoding system affordable enough to install and operate in data centers across South America, delivering high-quality H.264 and HEVC output with exceptionally low latency.

From CPU to GPU to ASIC

Nacho considered all options to find the right transcoding system. "I started encoding with CPUs using Quick Sync from Intel, but my problem was getting more density per rack unit. Intel enabled five sockets per 1RU, which was really low. Though the video quality was good, the amount of power that you needed and the amount of heat that you produced were really high."

Nacho next tried NVIDIA GPUs, starting with the P2000 and moving to the T4. Configured with an 80-core Intel CPU and two T4s, the NVIDIA-powered system could produce about 50 complete ladders per 1RU rack unit, an improvement, but still insufficient. Nacho then learned about NETINT's first-generation T408 technology.

"I was looking to get more density with my servers and found a NETINT article that claimed that you could output 122 channels per rack unit. (...) I found that the power draw was really low, as was the latency, and the quality of both H.264 and HEVC is really good."

Looking ahead, Nacho foresees the need for even more density.

"Right now we're trying the [second generation] NETINT Quadra processor. I need to get more dense. Brazil is a really big country. We need more power and more density in the rack."

Nacho was sold on the hardware performance but had to integrate the NETINT transcoders into his encoding stack, which was a non-issue. 

"We control the encoders with FFmpeg, and converting over to the NETINT transcoders was really seamless for us. Really, really easy."
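To illustrate why the switch was so painless (this is not Zapping's actual code), here's a minimal TypeScript sketch that drives FFmpeg from Node.js with the encoder name as a parameter; moving to a hardware transcoder is then largely a matter of passing a different `-c:v` value. The software encoder libx264 is used as a stand-in rather than reproducing NETINT's specific FFmpeg encoder names, and the ingest URL is a placeholder.

```typescript
// Illustrative sketch only, not Zapping's code. Drives FFmpeg from Node.js;
// "libx264" stands in for whatever hardware encoder the FFmpeg build exposes.
import { spawn } from "node:child_process";

function transcodeChannel(inputUrl: string, encoder: string, bitrateKbps: number) {
  const args = [
    "-i", inputUrl,                  // live input, e.g. an RTMP or SRT URL (placeholder)
    "-c:v", encoder,                 // swapping hardware = swapping this name
    "-b:v", `${bitrateKbps}k`,       // target video bitrate
    "-c:a", "aac", "-b:a", "128k",   // audio settings
    "-f", "hls", "-hls_time", "2",   // HLS output with 2-second segments
    `ch1_${bitrateKbps}.m3u8`,
  ];
  return spawn("ffmpeg", args, { stdio: "inherit" });
}

// Same call either way; only the encoder string changes.
transcodeChannel("rtmp://ingest.example.com/live/ch1", "libx264", 5000);
```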

Just as Nacho finalized his testing, NETINT started offering a server package that included ten T408s in a Supermicro server with all software pre-installed. These proved perfectly suited to Zapping’s technology and expansion plans.

"The servers are really, really good. For us, buying the server is better because it's ready to use. As we deploy our platform in Latin America, we send a server to each country. It's as simple as sliding it into a rack, installing our software, and we're ready to go."

Delivering Better Soccer Matches

FIGURE 3. Nacho will deploy the Quadra Video Server for the greatest density, lowest cost and latency, and highest quality H.264 and HEVC.

Armed with NETINT servers, Nacho proceeded to attack each of the challenges discussed above. 

"For the latency, we talk with the channel distributor and put a NETINT server inside the CDN of each broadcaster. And we can skip the satellite uplink and save one or two seconds of latency."

Nacho originally implemented his own low-latency protocols but is now experimenting with low-latency HLS (LL HLS). "With LL HLS, we can get six seconds ahead of free to air. Let's talk in about three months and see what that looks like."

Nacho also implemented a “turbo mode” that toggles the viewer in and out of Zapping’s low-latency mode. Viewers prioritizing low latency can enable turbo mode at the risk of slightly lower quality and a greater likelihood of buffering issues. Viewers who prioritize video quality and minimal buffering over ultra-low latency can disable turbo mode. As Nacho explained, “If you have a bad connection, like bad Wi-Fi, you can turn off the low latency and watch the match in a 30-second buffer like the normal buffer of HLS.”
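Zapping's player and low-latency stack are proprietary, but as a rough sketch of the same idea, here's how a turbo-mode toggle might look with the open-source hls.js player, where `lowLatencyMode` and the distance maintained from the live edge approximate the two behaviors Nacho describes.

```typescript
// Hypothetical sketch using hls.js; Zapping's actual player is proprietary.
import Hls from "hls.js";

function createPlayer(video: HTMLVideoElement, src: string, turbo: boolean): Hls {
  const hls = new Hls({
    lowLatencyMode: turbo,              // turbo on: chase the live edge
    // Turbo off: sit several segments behind the live edge so poor Wi-Fi
    // has a deeper buffer to absorb throughput dips (normal HLS behavior).
    liveSyncDurationCount: turbo ? 3 : 10,
  });
  hls.loadSource(src);
  hls.attachMedia(video);
  return hls;
}

// Toggling turbo mode simply rebuilds the player with the other profile.
```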

Nacho also aggressively converted to HEVC output. 

"For us, HEVC is really, really important. We get a 40% lower bitrate than H.264 with the same image quality. That's full HD at 6 Mbps, which is really good compared to competitors using H.264 at 5 Mbps in full HD. And the user knows we're delivering HEVC. We have that in our UX. The user can turn HEVC on and off and really see the difference."
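A quick back-of-the-envelope check on those numbers: a 40% saving implies that a 6 Mbps HEVC stream stands in for roughly 10 Mbps of H.264 at comparable quality, which translates directly into gigabytes moved per viewer-hour.

```typescript
// Rough arithmetic behind the HEVC claim (assumes the quoted 40% saving).
const hevcMbps = 6;
const h264EquivalentMbps = hevcMbps / (1 - 0.4); // ≈ 10 Mbps for similar quality

// Data delivered per viewer-hour: 1 Mbps sustained for an hour is 450 MB.
const gbPerViewerHour = (mbps: number) => (mbps * 3600) / 8 / 1000;

console.log(gbPerViewerHour(hevcMbps).toFixed(1));           // "2.7" GB with HEVC
console.log(gbPerViewerHour(h264EquivalentMbps).toFixed(1)); // "4.5" GB with H.264
```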

Regarding the HEVC switch, Nacho explained, "If we know that your TV or device is HEVC compatible, we play HEVC by default. But there are so many set-top boxes, and some signal their codec compatibilities incorrectly. If we're not sure, we turn off HEVC by default, and the user can try it; if it works, great; if not, they play H.264."
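On devices whose runtimes expose Media Source Extensions, that default-codec decision can be approximated with MediaSource.isTypeSupported, as in the sketch below; devices that misreport their capabilities are exactly why Zapping keeps the manual switch.

```typescript
// Sketch of a default-codec decision for MSE-based playback.
// "hvc1.1.6.L120.B0" is a commonly used HEVC Main profile codec string.
function hevcByDefault(): boolean {
  const hevcType = 'video/mp4; codecs="hvc1.1.6.L120.B0"';
  return typeof MediaSource !== "undefined" &&
    MediaSource.isTypeSupported(hevcType);
}

// If the device reports HEVC support, start with HEVC; the user can still
// flip the switch in the UI if the report turns out to be wrong.
const startingCodec = hevcByDefault() ? "hevc" : "h264";
```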

After much experimentation, Nacho extended HEVC's low-bitrate quality to other broadcasts as well. "For CNN or talk shows, we are trying 600 kilobit per second HEVC, and it looks really, really good, even on a big screen."

FIGURE 4. Voices of Video with Ignacio Opazo from Zapping – Unveiling the Powerhouse Behind Zapping

The Live Streaming Netflix of Latin America

One of Zapping's unique strengths is that it considers itself a technology company as much as a content company. This aggressive approach has enabled Zapping to achieve significant success in Chile and to expand across Latin America.

"Zapping is the Netflix of live streaming here in Chile, in Latin America. We developed all our technology: the encoders, our low-latency solution, and the apps on each platform. We developed our own CDN; I think it's bigger than Akamai and Fastly here in Chile. We are taking the same steps as Netflix: you make your platform, you make the UI, you make the encoding process, and then you must deliver."

Nacho is clear about how NETINT's products have contributed to his success. "NETINT servers are an affordable, functional, and high-performance element of our success, providing unparalleled density along with excellent low latency and H.264 and HEVC quality, all at extremely low power consumption. NETINT has helped accelerate our expansion while increasing our profitability."

Innovative technologists like Nacho and Zapping choose and rely on equally innovative tools and building blocks to deliver critical functions and components of their services. We’re proud that Nacho has chosen NETINT servers as the technology of choice for expanding operations in Latin America, and look forward to a long and successful collaboration.


From CPU to GPU to ASIC: Mayflower’s Transcoding Journey

Ilya's transcoding journey took him from $10 million to under $1.5 million in CAPEX while cutting power consumption by over 90%. This analytical deep dive reveals the trials, errors, and successes of Mayflower's quest.


Ilya Mikhaelis

Ilya Mikhaelis is the streaming backend tech lead for Mayflower, which builds and hosts streaming infrastructures for multiple publishers. Mayflower's infrastructure handles more than 10,000 incoming streams and more than one million outgoing streams at a latency that averages one to two seconds.

Ilya's challenge was to find the most cost-effective technology to transcode the incoming streams. His journey took him from CPU-based transcoding to GPUs and then to two generations of ASIC-based transcoding. These transitions slashed total production transcoding costs from $10 million to just under $1.5 million while reducing power consumption by over 90%, from 325,000 watts to 33,820 watts.

Ilya's rigorous, textbook-worthy testing methodology and findings are invaluable to any video engineer seeking the highest-quality transcoding technology at the lowest capital cost and most efficient power usage. But let's start at the beginning.

The Mayflower Internal CDN

As Ilya describes it, "Mayflower is a big company under which many different projects operate, and most of these projects involve high-load live media streaming. Some of Mayflower's resources have ranked among the top 50 most-visited sites worldwide. All of these streaming resources are handled by one internal CDN, which was completely designed and implemented by my team."

Describing the requirements, Ilya added, “The typical load of this CDN is about 10,000 incoming simultaneous streams and more than one million outgoing simultaneous streams worldwide. In most cases, we target a latency of one to two seconds. We try to achieve a real-time experience for our content consumers, which is why we need a fast and effective transcoding solution.”

To build the CDN, Mayflower used bare metal servers to maximize network and resource utilization and run a high-performance profile to achieve stable stream processing and keep encoder and decoder queues around zero. As shown in Figure 1, the CDN inputs streams via WebRTC and RTMP and delivers with a mix of WebRTC, HLS, and low latency HLS. It uses customized WebRTC inside the CDN to achieve minimum latency between servers.

Figure 1. Mayflower's Low Latency CDN.

Ilya's team minimizes resource waste by implementing all high-level network protocols, such as WebRTC, HLS, and low-latency HLS, on their own. For transcoding inside their transcoder servers, they use libav, the library layer underlying FFmpeg, as their framework.

The Transcoding Pipeline

In Mayflower’s transcoding pipeline (Figure 2), the system inputs a single WebRTC stream, which it converts to a five-rung encoding ladder. Mayflower uses a mixture of proprietary and libav filters to achieve a stable frame rate and stable load. The stable frame rate is essential for outgoing streams because some protocols, like low latency HLS or HLS, can’t handle variable frame rates, especially on Apple devices.

Figure 2. Mayflower's transcoding pipeline.
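Mayflower's pipeline is built directly on libav with proprietary filters, but the general idea can be sketched as a single FFmpeg job: pin the decoded input to a constant frame rate with the fps filter, split it, scale each rung, and package each output separately. The three-rung command below (software encoders, placeholder bitrates and paths) is only an approximation of the five-rung production ladder, not Mayflower's code.

```typescript
// Approximation of an ABR ladder with a forced constant frame rate.
// Mayflower's real pipeline uses libav directly plus proprietary filters.
const filterGraph = [
  "[0:v]fps=30,split=3[a][b][c]",   // constant 30 fps, then fan out
  "[a]scale=1920:1080[hi]",
  "[b]scale=1280:720[mid]",
  "[c]scale=854:480[lo]",
].join(";");

const ffmpegArgs = [
  "-i", "input_stream",             // placeholder for the incoming stream
  "-filter_complex", filterGraph,
  // One HLS output per rung; each -map/-c/-b block applies to the output that follows it.
  "-map", "[hi]",  "-map", "0:a", "-c:v", "libx264", "-b:v", "5000k", "-c:a", "aac", "-f", "hls", "hi/index.m3u8",
  "-map", "[mid]", "-map", "0:a", "-c:v", "libx264", "-b:v", "3000k", "-c:a", "aac", "-f", "hls", "mid/index.m3u8",
  "-map", "[lo]",  "-map", "0:a", "-c:v", "libx264", "-b:v", "1200k", "-c:a", "aac", "-f", "hls", "lo/index.m3u8",
];

console.log(["ffmpeg", ...ffmpegArgs].join(" "));
```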

CPU-Only Transcoding - Too Expensive, Too Much Power

After creating the architecture, Ilya had to find a transcoding technology as quickly as possible. Mayflower initially transcoded on a Dell R940, which currently costs around $20,000 as configured for Mayflower. When Ilya’s team first implemented software transcoding, most content creators input at 720p. After a few months, as they became more familiar with the production operation, most switched to 1080p, dramatically increasing the transcoding load.

You see the numbers in Figure 3. Each server could produce only 20 streams, which at a server cost of $20,000 meant a per stream cost of $1,000. At this capacity, scaling up to handle the 10,000 incoming streams would require 500 servers at a total cost of $10,000,000.

Total power consumption would equal 500 x 650, or 325,000 watts. The Dell R940 is a 3RU server; at an estimated monthly colocation cost of $125 per server, housing 500 servers would add $750,000 per year.

Figure 3. CPU-only transcoding was very costly and consumed excessive power.
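The arithmetic behind Figure 3 is simple enough to capture in a few lines. The sketch below uses the article's own figures as inputs, reproduces the CPU-only numbers, and can be rerun with the per-server costs quoted for the GPU and ASIC configurations that follow.

```typescript
// Cost model built from the article's figures; colocation is $125/month per server.
interface ServerSpec {
  costUsd: number;          // price of one fully configured server
  streamsPerServer: number; // incoming streams one server can transcode
  wattsPerServer: number;   // power draw under load
}

function sizeDeployment(totalStreams: number, spec: ServerSpec) {
  const servers = Math.ceil(totalStreams / spec.streamsPerServer);
  return {
    servers,
    capexUsd: servers * spec.costUsd,
    perStreamUsd: Math.round(spec.costUsd / spec.streamsPerServer),
    totalWatts: servers * spec.wattsPerServer,
    colocationPerYearUsd: servers * 125 * 12,
  };
}

// CPU-only Dell R940: $20,000, 20 streams, 650 W.
console.log(sizeDeployment(10000, { costUsd: 20000, streamsPerServer: 20, wattsPerServer: 650 }));
// → 500 servers, $10,000,000 CAPEX, $1,000 per stream, 325,000 W, $750,000/year colocation
```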

These numbers caused Ilya to pause and reassess. “After all these calculations, we understood that if we wanted to play big, we would need to find a cheaper transcoding solution than CPU-only with higher density per server, while maintaining low latency. So, we started researching and found some articles on companies like Wowza, Xilinx, Google, Twitch, YouTube, and so on. And the first hint was GPU. And when you think GPU, you think NVIDIA, a company all streaming engineers are aware of.”

“After all these calculations, we understood that if we wanted to play big, we would need to find a cheaper transcoding solution than CPU-only with higher density per server, while maintaining low latency.”

GPUs - Better, But Still Too Expensive

Ilya initially considered three NVIDIA products: the Tesla V100, Tesla P100, and Tesla T4. The first two, he concluded, were best for machine learning, leaving the T4 as the most relevant option. Mayflower could install six T4s into each existing Dell server. At a current cost of around $2,000 for each T4, this produced a total cost of $32,000 per server.

Under capacity testing, the T4-enabled system produced 96 streams, dropping the per-stream cost to $333. This also reduced the required number of servers to 105, and the total CAPEX cost to $3,360,000.

With the T4s installed, power consumption increased to 1,070 watts per server, for a total of 112,350 watts. At $125 per month per server, the 105 servers would cost $157,500 annually to house in a colocation facility.

Figure 4. Capacity and costs for an NVIDIA T4-based solution.

Round 1 ASICs: The NETINT T432

The NVIDIA numbers were better, but as Ilya commented, “It looked like we found a possible candidate, but we had a strong sense that we needed to further our research. We decided to continue our journey and found some articles about a company named NETINT and their ASIC-based solutions.”

Mayflower first ordered and tested the T432 video transcoder, which contains four NETINT G4 ASICs in a single PCIe card. As detailed by Ilya, "We received the T432 cards, and the results were quite exciting because we produced about 25 streams per card. Power consumption was much lower than NVIDIA, only 27 watts per card, and the cards were cheaper. The whole server produced 150 streams in full HD quality, with a power consumption of 812 watts. For the whole production, we would pay about $2 million, which is much cheaper than the NVIDIA solution."

You see all this data in Figure 5. The total number of T432-powered servers drops to 67, which reduces total power to 54,404 watts and annual colocation to $100,500.

Figure 5. Capacity and costs for the NETINT T432 solution.

While costs and power consumption kept improving, Ilya noticed that the CDN’s internal queue started increasing when processing with T432-equipped systems. Initially, Ilya thought the problem was the lack of onboard scaling on the T432, but then he noticed that “even when producing all these ABR ladders, our CPU load was about only 40% during high load hours. The bottleneck was the card’s decoding and encoding capacity, not onboard scaling.”

Finally, he pinpointed the increase in the internal queue to the fact that the T432’s decoder couldn’t maintain 4K60 fps decode for H.264 input. This was unacceptable because it increased stream latency. Ilya went searching one last time; fortunately, the solution was close at hand.
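For readers who want to reproduce that kind of diagnosis, one generic way to check whether a decoder sustains a 4Kp60 source is FFmpeg's -benchmark flag with a null output: if the reported speed stays below 1.0x, the decoder can't keep up in real time and queues will build. The sketch below uses a placeholder input file and the default software decoder; a hardware decoder would be selected explicitly in the same command.

```typescript
// Quick decoder throughput check: decode only, discard the output, and watch
// the "speed=" value FFmpeg prints (sustained < 1.0x means queues will grow).
import { spawn } from "node:child_process";

const probe = spawn(
  "ffmpeg",
  ["-benchmark", "-i", "sample_4k60_h264.mp4", "-f", "null", "-"], // placeholder input file
  { stdio: "inherit" },
);

probe.on("exit", (code) => console.log(`ffmpeg exited with code ${code}`));
```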

Round 2 ASICs: The NETINT Quadra T2 - The Transcoding Monster

Ilya next started testing with the NETINT Quadra T2 video processing unit, or VPU, which contains two NETINT G5 chips in a PCIe card. As with the other cards, Ilya could install six in each Dell server.

“All those disadvantages were eliminated in the new NETINT card – Quadra…We have already tested this card and have added servers with Quadra to our production. It really seems to be a transcoding monster.”

Ilya's team liked what they found. "All those disadvantages were eliminated in the new NETINT card – Quadra. It has a hardware scaler inside with an optimized pipeline: decoder – scaler – encoder in the same VPU. And H.264 4K60 decoding is not a problem for it. We have already tested this card and have added servers with Quadra to our production. It really seems to be a transcoding monster."

Figure 6 shows the performance and cost numbers. Equipped with six T2 VPUs, each server could output 270 streams, reducing the number of required servers from 500 for CPU-only to a mere 38. This dropped the per-stream cost to $141, less than half that of the NVIDIA T4-equipped system, and cut the total CAPEX to $1,444,000. Total power consumption dropped to 33,820 watts, and annual colocation costs for the 38 3RU servers came to $57,000.

Figure 6. Capacity and costs for the NETINT Quadra T2 solution.

Cost and Power Summary

Figure 7 presents a summary of costs and power consumption, and the numbers speak for themselves. In Ilya’s words, “It is obvious that Quadra T2 dominates by all characteristics, and according to our team experience, it is the best transcoding solution on the market today.”

Figure 7. Summary of costs and power consumption.

“It is obvious that Quadra T2 dominates by all characteristics, and according to our team experience, it is the best transcoding solution on the market today.”

Ilya also commented on the suitability of the Dell R940. "I want to emphasize that the Dell R940 isn't the best server for VPU and GPU transcoders. It has a low density of PCIe slots and, as a result, a low density of VPUs/GPUs. Moreover, in the case of Quadra, and even the T432, you don't need such powerful CPUs."

In terms of other servers to consider, Ilya stated, “Nowadays, you may find platforms on the market with even 16 PCIe slots. In such systems, especially if you use Quadra, you don’t need powerful CPUs inside because everything is done on the VPU. But for us, it was a legacy with which we needed to live.”

Video engineers seeking the optimal transcoding solution can take a lot from Ilya’s transcoding journey: a willingness to test a range of potential solutions, a rigorous focus on cost and power consumption per stream, and extreme attention to detail. At NETINT, we’re confident that this approach will lead you to precisely the same conclusion as Ilya, that the Quadra T2 is “the best transcoding solution on the market today.”


ASIC vs. CPU-Based Transcoding: A Comparison of Capital and Operating Expenses

As the title suggests, this post compares CAPEX and OPEX costs for live streaming using ASIC-based transcoding and CPU-based transcoding. The bottom line? ASIC-based transcoding delivers dramatic savings on both.

Figure 1. The 1 RU Deep Edge Appliance with ten NETINT T408 U.2 transcoders.

Jet-Stream is a global provider of live-streaming services, platforms, and products. One such product is Jet-Stream’s Deep Edge OTT server, an ultra-dense scalable OTT streaming transcoder, transmuxer, and edge cache that incorporates ten NETINT T408 transcoders. In this article, we’ll briefly review how Deep Edge compared financially to a competitive product that provided similar functionality but used CPU-based transcoding.

About Deep Edge

Jet-Stream Deep Edge is an OTT edge transcoder and cache server solution for telcos, cloud operators, compounds, and enterprises. Each Deep Edge appliance converts up to 80 1080p30 television channels to OTT HLS and DASH video streams, with a built-in cache enabling delivery to thousands of viewers without additional caches or CDNs.

Each Deep Edge appliance can run individually, or you can group multiple systems into a cluster that automatically load-balances input channels and viewers per site without manual intervention. You can operate and monitor Deep Edge appliances and clusters from a cloud interface for easy centralized control and maintenance. In the case of a backlink outage, the edge keeps working autonomously.

Figure 2. Deep Edge operating schematic.

Optionally, producers can stream access logs in real time to the Jet-Stream cloud service. Jet-Stream Cloud presents the resulting analytics in a user-friendly dashboard so producers can track data points like the most popular channels, average viewing time, devices, and geographies in real time, per day, week, month, or year, and per site or across all sites.

Deep Edge appliances can also act as a local edge for both internal OTT channels and Jet-Stream Cloud's live streaming and VOD streaming Cloud and CDN services. Each Deep Edge appliance or cluster can be linked to an IP address, IP range, AS number, country, or continent, so local requests from a cell tower, mobile network, compound, football stadium, ISP, city, or country to Jet-Stream Cloud are directed to the local edge cache. Each Deep Edge site can be added to a dynamic mix of multiple backup global CDNs to tune scale, availability, and performance, and to manage costs.

Under the Hood

Each Deep Edge appliance incorporates ten NETINT T408 transcoders into a 1RU form factor driven by a 32-core CPU with 128 GB of RAM. This ASIC-based acceleration is over 20x more efficient than software encoding on CPUs, decreasing operational costs and CO2 footprint by an order of magnitude. For example, at full load, the Deep Edge appliance draws under 240 watts.

The software stack on each appliance incorporates a Kubernetes-based container architecture designed for production workloads in unattended, resource-constrained, remote locations. The architecture enables automated deployment, scaling, recovery, and orchestration to provide autonomous operation and reduced operational load and costs.

The integrated Jet-Stream Maelstrom transcoding software provides complete flexibility in encoding tuning, enabling multi-bit-rate transcoding in various profiles per individual channel.

Each channel is transcoded and transmuxed in an isolated container, and in the event of a crash, affected processes are restarted instantly and automatically.

HARD QUESTIONS ON HOT TOPICS: ASIC vs. CPU-Based Transcoding – A Comparison of Capital and Operating Expenses
Watch the full conversation on YouTube: https://youtu.be/pXcBXDE6Xnk

Deep Edge Proposal

Recently, Jet-Stream submitted a bid to a company with a contract to provide local streaming services to multiple compounds in the Middle East. The prospective customer was fully transparent and shared the costs associated with a CPU-based solution against which Deep Edge competed.

In producing these projections, Jet-Stream assumed an energy cost of €0.20 per kilowatt-hour, with the software-based server drawing 400 watts and Deep Edge drawing 220 watts. These numbers are consistent with lab testing we've performed at NETINT; each T408 draws only 7 watts, and because the T408s transcode the incoming signal onboard, host CPU utilization is typically minimal.

Jet-Stream produced three sets of comparisons: a single appliance, a two-appliance cluster, and ten sites with two-appliance clusters. Here are the comparisons. Note that the Deep Edge cost includes all software necessary to deliver the standard functionality detailed above. In contrast, the CPU-based server cost is hardware only and doesn't include the licensing cost of the software needed to match this functionality.

Single Appliance

A single Deep Edge appliance can produce 80 streams, which would require five separate servers for CPU-based transcoding. Considering both CAPEX and OPEX, the five-year savings was €166,800.

Table 1. CAPEX/OPEX savings for a single Deep Edge appliance over CPU-based transcoding.
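The energy side of that comparison is easy to reproduce from the stated assumptions (€0.20/kWh, 400 watts per CPU server versus 220 watts per appliance, running 24/7). The sketch below computes the five-year electricity cost for this single-appliance scenario; the CAPEX and software-licensing components of the €166,800 figure aren't itemized in the article, so only the energy portion is shown.

```typescript
// Five-year electricity cost under Jet-Stream's stated assumptions:
// €0.20 per kWh, 24/7 operation, 400 W per CPU server, 220 W per appliance.
const EUR_PER_KWH = 0.2;

function energyCostEur(watts: number, years: number): number {
  const kwh = (watts / 1000) * 24 * 365 * years;
  return kwh * EUR_PER_KWH;
}

const deepEdge = energyCostEur(220, 5);        // ≈ €1,927 for one appliance
const cpuFarm = 5 * energyCostEur(400, 5);     // ≈ €17,520 for five servers
console.log(Math.round(cpuFarm - deepEdge));   // ≈ €15,593 saved on energy alone
```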

A Two-Appliance Cluster

Two Deep Edge appliances can produce 160 streams, which would require nine CPU-based encoding servers. Considering both CAPEX and OPEX, the five-year savings for this scenario was €293,071.

Table 2. CAPEX/OPEX savings for a dual-appliance Deep Edge cluster over CPU-based transcoding.

Ten Sites with Two-Appliance Clusters

Supporting ten sites with 180 channels would require 20 Deep Edge appliances and 90 servers for CPU-based encoding. Over five years, the CPU-based option would cost over €2.9 million more than Deep Edge.

Table 3. CAPEX/OPEX savings for ten dual-appliance Deep Edge clusters over CPU-based transcoding.

While these numbers border on unbelievable, they are actually quite similar to what we computed in this comparison, How to Slash CAPEX, OPEX, and Carbon Emissions with T408 Video Transcoder, which compared T408-based servers to CPU-only on-premises and AWS instances.

The bottom line is that if you’re transcoding with CPU-based software, you’re paying way too much for both CAPEX and OPEX, and your carbon footprint is unnecessarily high. If you’d like to explore how many T408s you would need to assume your current transcoding workload, and how long it would take to recoup your costs via lower energy costs, check out our calculators here.

Voices of Video: Building Localized OTT Networks
Watch the full conversation on YouTube: https://youtu.be/xP1U2DGzKRo