Cloudflare has deployed a sub-second latency live streaming system at scale over the last few years. In this talk, we’ll provide insight into how this works under the hood, focusing on the protocols Cloudflare Stream uses: HLS, DASH, RTMPS, SRT and WebRTC. In addition, we’ll cover Cloudflare’s unique architecture, which goes beyond the caching tools available to traditional content delivery networks in order to distribute content at sub-second latency.
We will share our experiences and challenges in overcoming issues and enhancing the overall user experience in live streaming scenarios. We will explore the architectures and strategies we've developed to achieve sub-second live streaming, a critical goal in today's era of real-time communication and media, including:
Use of Real-Time Protocols: Cloudflare uses real-time protocols such as WebRTC and SRT, which deliver lower latency and an improved experience compared to common HTTP-based protocols such as HLS and DASH.
Cloudflare’s network: Cloudflare uses an anycast network, which means that every client connects to the closest of Cloudflare’s 285 locations. While this has obvious latency benefits, it also introduces coordination challenges.
Edge Computing: By moving business logic to Cloudflare’s Workers platform and using edge-first primitives, Cloudflare Stream is able to improve the experience for customers. We’ll talk about some of these primitives and how they compare to existing options when building low-latency systems.
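To make the real-time protocol point concrete: WebRTC ingest and playback are commonly signaled over WHIP/WHEP, where session setup is a single HTTP POST carrying an SDP offer. The sketch below is a minimal illustration of that handshake shape, not Cloudflare's actual API; the endpoint URL and helper name are placeholders.

```typescript
// Hedged sketch of a WHIP-style publish request (a single HTTP POST
// with an SDP offer; the server replies with an SDP answer and a
// Location header for the session). Placeholder names throughout.

interface WhipRequest {
  url: string;
  method: "POST";
  headers: Record<string, string>;
  body: string; // the SDP offer, e.g. from RTCPeerConnection.createOffer()
}

// Build the request description; in a real client you would pass
// these fields to fetch() and apply the returned SDP answer.
function buildWhipPublishRequest(endpoint: string, sdpOffer: string): WhipRequest {
  return {
    url: endpoint,
    method: "POST",
    headers: { "Content-Type": "application/sdp" },
    body: sdpOffer,
  };
}

const req = buildWhipPublishRequest(
  "https://example.invalid/whip/STREAM_KEY", // placeholder endpoint
  "v=0\r\n" // truncated SDP offer for illustration
);
console.log(req.method, req.headers["Content-Type"]);
// → POST application/sdp
```

Because the whole session negotiation collapses into one HTTP round trip, media can start flowing far sooner than with segment-based HTTP delivery, which is one reason WebRTC reaches sub-second latency where HLS and DASH typically cannot.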
By the end of this talk, you will have a deep understanding of the architecture, strategies, and practical considerations for building low-latency live streaming systems, and of how Cloudflare’s systems work under the hood.