
When LinkedIn's engineers published their announcement about Northguard and Xinfra earlier this year, it sent ripples through the event streaming community. Here was the company that created Apache Kafka, the backbone of modern data infrastructure, essentially saying it had outgrown its own creation. But this isn't just another "we built a better Kafka" story. It's about what happens when you scale from 90 million to 1.2 billion users, when your system processes 32 trillion records daily across 17 petabytes of data, and when your operational complexity grows beyond what even the most sophisticated tooling can manage.

Let's start with the numbers that matter. In 2010, when Kafka was first developed, LinkedIn had 90 million members. Today, the platform serves over 1.2 billion users. That's not just a linear scaling problem; it's an exponential complexity challenge that touches every aspect of distributed systems design.
LinkedIn created Apache Kafka in 2010, when the company had 90 million members. The platform now serves over 1.2 billion users, turning what was once a linear scaling problem into an exponential complexity challenge across distributed systems design. It processes roughly 32 trillion records daily across about 17 petabytes of data, creating operational burdens beyond what existing tooling can manage. These scale pressures led LinkedIn to develop Northguard and Xinfra, new internal systems that address throughput, storage, and operational complexity at global scale. The shift demonstrates the need to evolve core infrastructure when user growth and data volumes exceed original architectural assumptions.
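To put those headline figures in perspective, a quick back-of-envelope calculation converts them into sustained rates. This is a sketch, not data from the article: it assumes the 17 PB refers to daily volume (the article pairs it with the daily record count) and uses decimal units (1 PB = 10^15 bytes).

```python
# Back-of-envelope rates from the figures quoted above.
# Assumption: 17 PB is the *daily* data volume paired with the
# 32 trillion daily records; PB taken as 10^15 bytes.
RECORDS_PER_DAY = 32e12
BYTES_PER_DAY = 17e15
SECONDS_PER_DAY = 86_400

records_per_sec = RECORDS_PER_DAY / SECONDS_PER_DAY
bytes_per_sec = BYTES_PER_DAY / SECONDS_PER_DAY
avg_record_bytes = BYTES_PER_DAY / RECORDS_PER_DAY

print(f"{records_per_sec / 1e6:,.0f} million records/s")  # ~370 million records/s
print(f"{bytes_per_sec / 1e9:,.0f} GB/s sustained")       # ~197 GB/s
print(f"{avg_record_bytes:,.0f} bytes/record on average") # ~531 bytes
```

Even averaged over a full day with no peak factor, that is hundreds of millions of records and roughly 200 GB flowing every second, which makes clear why per-cluster operational tooling stops scaling.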
Read at Medium