Read-Copy-Update (RCU): The Secret to Lock-Free Performance
Briefly

"With pthread's reader-writer lock (rwlock) implementation, I got 23.4 million reads in five seconds. With read-copy-update (RCU), I got 49.2 million reads, a 110 percent improvement with zero changes to the workload."
"Readers in a reader-writer lock must acquire shared access, triggering atomic operations and cache line invalidation across CPU cores. As core counts increase, this overhead compounds."
"RCU has a three-phase pattern: Readers have lock-free access to data, while writers copy-modify-swap pointers atomically and defer memory reclamation until a grace period has elapsed, ensuring all readers have finished."
"Apply RCU when read-to-write ratios exceed ten-to-one and brief inconsistency is tolerable. For example, Kubernetes API serving, PostgreSQL MVCC, Envoy configuration updates, and DNS servers all use this pattern."
Read-copy-update (RCU) is a synchronization technique that dramatically improves performance in read-heavy workloads by removing lock acquisition from the read path entirely. Unlike reader-writer locks, which require atomic operations and cache line invalidation for each read, RCU gives readers lock-free access to data. Writers follow a three-phase pattern: copy the data, modify the copy, and atomically swap the pointer, deferring memory reclamation until a grace period guarantees all pre-existing readers have finished. This approach trades strong consistency for scalability, as readers may briefly observe stale data. RCU proves most effective when read-to-write ratios exceed ten-to-one and eventual consistency is acceptable, making it ideal for systems like Kubernetes API servers, PostgreSQL MVCC, Envoy configuration updates, and DNS servers. However, RCU implemented incorrectly risks use-after-free crashes, and the pattern is unsuitable for systems requiring strong consistency or immediate visibility of the latest data.
Read at InfoQ