Imagine this: millions of fans around the world are glued to their screens, ready to watch the long-awaited boxing match between newcomer Jake Paul and legend Mike Tyson. Social media is buzzing, the event is trending worldwide, and anticipation is through the roof.
If you tuned in to the fight, you know what happened next: a live-stream disaster.
The stream buffers endlessly. Video freezes mid-action. Audio glitches turn commentary into unintelligible noise.
Fans flooded Twitter with complaints, turning Netflix’s breakout live-streaming moment into a public failure that exposed one glaring truth: Netflix isn’t ready for streaming at scale.
The technical issues weren’t just bad luck — they were the result of critical System Design flaws:
Overloaded servers failed to handle sudden traffic spikes
CDN congestion left some regions struggling with poor quality
Encoding delays led to playback freezes and audio mismatches
And now, with NFL games set to live-stream on Christmas Day and a WWE event looming in 2025, Netflix has to prove it can fix its issues — or it risks losing to competitors like Amazon and Disney+.
So today, I'm going to guide you through:
What really happened during the Tyson vs. Paul fight
The hard truth about live-streaming at scale
How to recognize (and build) great System Design
Lessons from Netflix’s stumble every developer should know
Let’s hit the mat.
The event, with Katie Taylor vs. Amanda Serrano as the co-main bout, was positioned as a live-streaming milestone for Netflix. It was heavily marketed, generating massive anticipation across social media.
At its peak, 65 million concurrent viewers tuned in for the main event, while the co-main event drew another 50 million viewers — an enormous audience by any standard.
But instead of delivering a seamless experience, Netflix struggled to keep up. What should have been a defining moment became a showcase of technical failures, underscoring why solid System Design is critical.
What fans experienced
Viewers aired their frustrations on Twitter, citing major technical issues like:
Buffering and playback problems: Streams stalled mid-action, frustrating viewers during critical moments.
Stream freezes: Interruptions broke the momentum of the event.
Poor video quality: Resolution dips tarnished the experience for many.
Audio sync issues: Commentary glitches left viewers disconnected from the action.
And the stakes are about to get higher: the Christmas Day NFL games are expected to draw over 115 million viewers, an even larger audience that Netflix must be ready for.
These failures weren’t isolated mishaps; they revealed deeper flaws in Netflix’s live-streaming architecture:
Server overload: Netflix likely underestimated the surge in traffic during peak moments. Without dynamic load balancing and autoscaling, servers were overwhelmed, leading to dropped connections (a sketch of such an autoscaling rule follows this list).
Network congestion: The sheer volume of concurrent viewers put strain on Netflix’s Content Delivery Network (CDN). A lack of redundancy or poorly distributed traffic resulted in regional bottlenecks, causing degraded quality for some viewers.
Encoding and transcoding delays: Inefficient processing pipelines caused delays in compressing and delivering live video segments, leading to frozen streams and mismatched audio.
CDN mismanagement: Without fallback systems or intelligent routing, delivery issues were compounded, further frustrating users.
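To make the server-overload point concrete, here's a minimal sketch of a target-tracking autoscaling rule, the same idea behind Kubernetes' Horizontal Pod Autoscaler. The thresholds and numbers are illustrative, not Netflix's actual policy:

```python
import math

def desired_replicas(current: int, avg_utilization: float,
                     target: float = 0.6, max_replicas: int = 1000) -> int:
    """Target tracking: grow the fleet in proportion to how far average
    utilization sits above the target, and shrink it when load falls."""
    desired = math.ceil(current * avg_utilization / target)
    return max(1, min(desired, max_replicas))

# A traffic spike pushes average CPU to 95% across 200 servers:
print(desired_replicas(200, 0.95))  # -> 317 replicas requested
```

The catch for live events is that reactive rules like this lag behind a spike; pre-warming capacity before a scheduled event starts is the safer play.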
And the Tyson vs. Paul fight wasn’t Netflix’s first foray into live-streaming – or its first failure. The company has faced recurring challenges in this space, with mixed results:
Chris Rock’s Selective Outrage (March 2023): A promising debut, but minor technical glitches disrupted some viewers
Love is Blind Reunion (April 2023): A high-profile failure, with delays and glitches forcing Netflix to release a pre-recorded version
TUDUM 2023: A global fan event marred by sporadic technical issues
The Netflix Cup: Netflix’s first live sports event saw buffering issues but showed progress
The 30th Annual Screen Actors Guild Awards: A relatively smooth stream, though not without minor quality dips
While these events demonstrate Netflix’s commitment to live-streaming, the Tyson vs. Paul fight highlighted unresolved scaling issues. Fixing these is critical for Netflix to succeed in its upcoming NFL and WWE streams.
Next, we’ll explore the broader challenges of scaling live-streaming systems, the optimization strategies Netflix needs, and what developers can learn from these missteps.
Live streaming isn’t as simple as hitting “play stream.” It’s a high-stakes operation that involves millions of users, unpredictable traffic surges, and impossibly slim margins for error. When things go wrong — like they did during the Tyson vs. Paul fight — everyone notices (and takes to social media).
To better understand the challenges facing live-streaming at scale, let’s first break down how live streams are delivered:
Content Ingestion: Raw media from the production site is sent to a video processing service
Processing: Encoders compress the raw video, and transcoders convert it into multiple formats and resolutions
Distribution: The processed content is segmented and cached on a Content Delivery Network (CDN). Edge servers store this data locally, making it quickly accessible to nearby users
Playback: The user’s media player retrieves and decodes the stream dynamically, adjusting quality on the fly with adaptive bitrate streaming based on network conditions
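To make the flow tangible, here's a toy sketch of those four stages in Python. Every function name, rendition, and threshold is illustrative rather than a real implementation:

```python
# A toy model of the four delivery stages. All names and numbers are
# illustrative, not Netflix's actual pipeline.

def ingest(raw_feed: bytes) -> bytes:
    """1. Content ingestion: raw media arrives from the production site."""
    return raw_feed

def process(raw: bytes) -> dict[str, bytes]:
    """2. Processing: encode the feed, then transcode it into renditions."""
    return {res: raw for res in ("1080p", "720p", "480p")}  # stand-in for codecs

def distribute(renditions: dict[str, bytes]) -> dict[str, list[bytes]]:
    """3. Distribution: split each rendition into short, CDN-cacheable segments."""
    return {res: [data] for res, data in renditions.items()}  # one toy segment each

def playback(segments: dict[str, list[bytes]], bandwidth_mbps: float) -> bytes:
    """4. Playback: adaptive bitrate picks the best rendition the network allows."""
    rendition = "1080p" if bandwidth_mbps >= 8 else "720p" if bandwidth_mbps >= 4 else "480p"
    return segments[rendition][0]

stream = playback(distribute(process(ingest(b"raw-frames"))), bandwidth_mbps=6.0)
print(f"Playing {len(stream)} bytes at the chosen quality")
```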
Now I can explain some of the challenges facing live streams, starting with the major ones Netflix faced during the event:
Scalability: Handling millions of simultaneous viewers pushes the system to its limits. During the Tyson vs. Paul event, surges in traffic overwhelmed servers, leading to buffering and delays. Unpredictable spikes in traffic remain a key challenge for future events.
Low latency: Users expect to see the action in real time, and missing crucial moments frustrates them. Frozen video and quality drops marred the Tyson vs. Paul stream in exactly this way. Delivering a high-quality, low-latency experience to millions of concurrent users is a major challenge.
Buffering and playback issues: Delays in delivering the next chunks of data lead to buffering, which is precisely what viewers saw during the Tyson vs. Paul event. The streaming provider has to prepare and deliver content so that playback never outruns the available segments.
But as bad as this stream was, it actually could have been worse for Netflix. Here are some other common challenges that can occur during a live stream:
Network congestion: Large-scale events strain the network, which can degrade video quality for some users, a problem that grows with spikes in the audience.
System reliability: The system must stay stable under peak load. A minor failure during a high-visibility event can snowball into a platform-wide outage, and future events like the NFL games will demand robust systems that hold up under massive traffic.
Device compatibility: With viewers accessing streams on a wide range of devices, from smart TVs to mobile phones, ensuring consistent playback across varying hardware and software configurations is a technical challenge that must be handled upfront.
Cost: Running a live streaming infrastructure for millions can be expensive. To remain profitable, platforms must balance the need for quality and reliability with cost-efficient resource allocation.
Given the range of challenges Netflix faced — and the ones it could encounter next time — the pressure is on for the company to overhaul its System Design.
These challenges may seem overwhelming, but they’re not unique to Netflix. For developers, they highlight how critical System Design is — not just for live streaming but for building any reliable, scalable application.
Availability and scalability planning start with back-of-the-envelope calculations: run them before diving into the details of a System Design. Estimating resources from the predicted number of users helps you prepare capacity upfront.
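For example, here's a quick estimate of the delivery bandwidth implied by the fight's peak audience. The per-stream bitrate is an assumed figure for illustration:

```python
# Back-of-the-envelope egress estimate for the fight's peak audience.
concurrent_viewers = 65_000_000   # peak reported for the main event
avg_bitrate_mbps = 5              # assumed average per-stream bitrate for HD video

total_egress_tbps = concurrent_viewers * avg_bitrate_mbps / 1_000_000
print(f"Peak egress: ~{total_egress_tbps:.0f} Tbps")  # ~325 Tbps of video traffic
```

A number that large makes one thing obvious: no single origin can serve the event, so the traffic must be spread across CDN edge servers close to viewers.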
The Tyson vs. Paul fight is a great example of what not to do in your System Design, and how important it is to deliver the seamless experience viewers want.
This table outlines the challenges above, and provides specific actions you can take to avoid them and improve your System Design:
| Challenges | Solutions |
| --- | --- |
| Scalability | Autoscale server capacity and use dynamic load balancing to absorb sudden traffic spikes. |
| Low latency | Adopt low-latency protocols (LL-HLS, LL-DASH) and shrink segment sizes so content reaches viewers in near real-time. |
| Network congestion | Use a multi-CDN setup with intelligent routing and edge caching to spread traffic and avoid regional bottlenecks. |
| Reliability | Build in redundancy and failover systems so a single failure can't snowball into a platform-wide outage. |
| Buffering and playback | Use adaptive bitrate streaming and chunked encoding so quality degrades gracefully instead of stalling. |
| Device compatibility | Transcode streams into multiple formats and resolutions to cover the full range of hardware and software. |
| Cost | Balance quality and reliability against cost-efficient resource allocation, scaling capacity back down after peaks. |
Note: Stress-testing the system under conditions that simulate real-world scenarios, such as peak traffic and high user concurrency, helps identify bottlenecks before launch and verifies that the system actually scales.
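A first-pass load test can be as small as this asyncio sketch, which ramps concurrency against a staging endpoint and counts failures at each step. The URL and step sizes are placeholders:

```python
import asyncio
import aiohttp  # third-party HTTP client: pip install aiohttp

async def fetch(session: aiohttp.ClientSession, url: str) -> int:
    async with session.get(url) as resp:
        await resp.read()
        return resp.status

async def ramp_test(url: str, steps=(100, 500, 1000)) -> None:
    """Ramp concurrency upward and watch error rates climb at each step."""
    async with aiohttp.ClientSession() as session:
        for concurrency in steps:
            results = await asyncio.gather(
                *(fetch(session, url) for _ in range(concurrency)),
                return_exceptions=True,
            )
            errors = sum(1 for r in results if isinstance(r, Exception) or r >= 500)
            print(f"{concurrency} concurrent requests -> {errors} failures")

asyncio.run(ramp_test("https://staging.example.com/stream/manifest.m3u8"))
```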
Next, let’s take a look at an example of a System Design for a live streaming service, so you can see what it should look like.
As you’ll remember, the live-streaming workflow starts with raw media ingested into distributed servers, which manage the intake of streams while providing redundancy and low latency. The processing that follows is pivotal in ensuring the live stream adapts to diverse viewer environments, and it breaks down into the following steps:
Encoding: The first step in video processing is encoding, which converts raw data into a compressed format for efficient transmission over the internet; the files are decoded at the client end. Encoding removes redundant data, enabling faster delivery, though it may reduce quality. H.264, H.265, VP9, and AV1 are popular video codecs, while MP3, AAC, and Dolby AC-3 are common audio encoding formats.
Transcoding: Compressing a media file is not enough on its own, because client-side bandwidth varies and high-quality media can stall on slower connections. Transcoding converts the encoded media into multiple formats and resolutions to adapt to varying devices and network conditions.
Segmentation: The next step is to break the transcoded video into smaller chunks (2–10 seconds) for easier streaming. These segments allow users to stream in real time, with quality adjusted dynamically as conditions change via adaptive bitrate streaming (ABR).
In the last step, the videos are packed into a container format, such as .mp4, .avi, or .flv, and sent to the content delivery network (Open Connect, in Netflix’s case), which delivers the content to edge servers (Open Connect Appliances, or OCAs) usually residing within internet exchange points (IXPs) or internet service providers (ISPs), where it is cached to reduce bandwidth requirements.
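Since the quality decision ultimately happens in the player, here's a minimal sketch of throughput-based ABR selection. The bitrate ladder below is a representative example, not Netflix's actual ladder:

```python
# Representative bitrate ladder: (resolution, bandwidth needed in Mbps).
LADDER = [("2160p", 15.0), ("1080p", 5.0), ("720p", 3.0), ("480p", 1.5), ("240p", 0.5)]

def pick_rendition(measured_mbps: float, safety_factor: float = 0.8) -> str:
    """Throughput-based ABR: pick the best rendition the measured bandwidth
    can sustain, keeping headroom so small dips don't stall playback."""
    budget = measured_mbps * safety_factor
    for resolution, required in LADDER:
        if required <= budget:
            return resolution
    return LADDER[-1][0]  # worst case: drop to the lowest rung instead of stalling

print(pick_rendition(7.0))  # -> 1080p (budget 5.6 Mbps covers the 5.0 Mbps rung)
```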
We have detailed chapters on YouTube System Design and YouTube product architecture design that provide useful insights into how the streaming system works and how it can be scaled and optimized to handle peak demands.
Along with the proposed scalable System Design, we must also focus on utilizing optimized technologies and protocols that support low-latency live streaming.
We can use the following optimization strategies to ensure our system remains resilient and scalable, and delivers content to end users in real time:
We can use optimized streaming protocols, such as low-latency HTTP live streaming (LL-HLS) or low-latency dynamic adaptive streaming over HTTP (LL-DASH), to deliver content in near real-time over HTTP/2.
We can also adopt a multi-CDN setup and tiered CDNs, which allow intelligent load balancing and reduce latency by caching content at edge servers.
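One way to picture the multi-CDN idea is weighted routing across providers, as in this sketch. The CDN names and health scores are invented for illustration:

```python
import random

# Illustrative health scores per CDN, refreshed from real-time monitoring.
CDN_HEALTH = {"cdn-a": 0.9, "cdn-b": 0.7, "cdn-c": 0.4}

def pick_cdn(health: dict[str, float]) -> str:
    """Weighted random selection: healthier CDNs receive proportionally
    more of the new viewer sessions."""
    names, weights = zip(*health.items())
    return random.choices(names, weights=weights, k=1)[0]

print(pick_cdn(CDN_HEALTH))  # e.g., "cdn-a" most of the time
```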
As transcoding is the main bottleneck that can delay the delivery of content in live streaming, we can use hardware-accelerated transcoding to process the streams quickly. We can also utilize edge processing to offload intensive tasks like transcoding and adaptive bitrate adjustments closer to end-users, reducing the load on central servers and minimizing latency. Once the content is processed, the edge servers can relay the optimized chunks to the CDN to distribute the ready-to-stream segments to other edge servers.
We can further reduce segment sizes to as little as 1–2 seconds to lower latency and enable near-instant playback. Moreover, we can enable chunked encoding, which allows segments to be transmitted incrementally so playback begins as soon as data is available.
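Here's the chunked-encoding idea sketched as a generator that hands off partial chunks before the whole segment finishes. Frame counts and chunk sizes are illustrative:

```python
from typing import Iterator

def encode_segment_chunked(frames: list[bytes], chunk_frames: int = 15) -> Iterator[bytes]:
    """CMAF-style chunked encoding: yield each partial chunk as soon as it is
    ready instead of waiting for the full segment to finish encoding."""
    for i in range(0, len(frames), chunk_frames):
        yield b"".join(frames[i:i + chunk_frames])  # stand-in for a real encoder

# The player starts on chunk 1 while later chunks are still being produced:
for n, chunk in enumerate(encode_segment_chunked([b"frame"] * 60), start=1):
    print(f"chunk {n}: {len(chunk)} bytes shipped to the CDN")
```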
We can also implement error-resilience mechanisms on the user’s end, like forward error correction (FEC) and retransmission, to address packet loss and keep streams uninterrupted despite network fluctuations. Synchronization tools can additionally monitor end-to-end latency and dynamically adjust playback speed so all viewers experience events simultaneously, a critical factor for live sports and interactive content.
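To make FEC concrete, the simplest scheme sends one XOR parity packet per group, letting the receiver rebuild any single lost packet without a retransmission. Production systems use stronger codes such as Reed-Solomon:

```python
from functools import reduce
from typing import Optional

def xor_bytes(a: bytes, b: bytes) -> bytes:
    # Packets are assumed equal length here; real FEC schemes pad them.
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(packets: list[bytes]) -> bytes:
    """Parity packet: the XOR of every packet in the group."""
    return reduce(xor_bytes, packets)

def recover(received: list[Optional[bytes]], parity: bytes) -> list[bytes]:
    """Rebuild a single missing packet by XOR-ing the parity with the survivors."""
    missing = received.index(None)
    survivors = [p for p in received if p is not None]
    received[missing] = reduce(xor_bytes, survivors, parity)
    return received

group = [b"pkt1", b"pkt2", b"pkt3"]
parity = make_parity(group)
print(recover([b"pkt1", None, b"pkt3"], parity))  # 'pkt2' rebuilt, no retransmit
```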
Future live streaming events will require platforms to innovate in cloud scalability, AI-driven predictive traffic management, and edge computing to preemptively address the challenges related to live streaming.
For Netflix, the next chapter is clear: mastering System Design is critical to delivering flawless live-streaming experiences and rebuilding viewer trust. Its upcoming events will reveal whether it can scale effectively, stay reliable, and thrive in the face of real-world demands.
For developers, Netflix’s challenges underscore why System Design is a must-have skill. It’s what enables you to build systems that scale, remain reliable under pressure, and meet user demands — whether those users are millions of live viewers or just a handful of app users.
If you’re ready to elevate your System Design skills, here’s where to focus:
Explore core concepts like load balancing, caching, and database design.
Practice designing systems such as a URL shortener or a simple e-commerce platform.
Study case studies from companies like Netflix and YouTube to understand real-world implementations.
Deepen your understanding of trade-offs like consistency vs. availability (CAP theorem) to optimize for complex scenarios.
Tackle high-concurrency challenges, such as live-streaming or managing large-scale social platforms.
Refine strategies for scaling architectures that support global, mission-critical systems.
One resource I’d recommend to anyone: Grokking the Modern System Design Interview. It’s tailored to help you prove your expertise in interviews — and show you’ve got the skills to build great systems.
In today’s high-stakes tech landscape, mastering System Design isn’t optional. It’s how you create systems that don’t just work but stand out and win users.
Happy learning!