Measure Carefully-Because You'll Fix What You Measure (Based on true story!)
Briefly

"We had a piece of code that looked great in production. It was blazing fast, and everyone was happy. It was profiled something like this (all code samples are symbolic, actual code was way more complex) Profiler.time { val operations: List[Future[String]] = List(Future(Wait500ms), Future(Wait500ms), Future(Wait500ms)) val futureAll: Future[List[String]] = Future.sequence(operations) futureAll.onComplete { case Success(results) => results.foreach(x => println(x.length)) case Failure(e) => println(s"Failed: $e") }} The profiler reported near-zero execution time. We felt proud of our "fast" system."
"We weren't measuring the real cost. we were measuring how fast we could promise to do work later In the first version, the profiler only measured scheduling . ` futureAll.onComplete { ... }` just attached a callback and returned immediately. The timer stopped before the actual work inside those Futures had even started. In the second version, using Await.result, the profiler finally measured execution. Now it included the 500ms waits. Nothing got slower; we just started measuring the truth."
A profiler that times only scheduling can report near-zero execution for asynchronous code: attaching a callback returns immediately, so the timer stops before the Futures ever run. Blocking on completion, for example with Await.result, pulls the actual work and the waits into the measured time. The apparent performance difference therefore reflects what is measured, not any change in runtime behavior. Decide up front whether you want to measure scheduling overhead or full completion time; capturing the true execution cost means waiting for the asynchronous tasks to finish.
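If blocking the calling thread is undesirable, the completion cost can also be captured asynchronously: take a timestamp before the work is created and compute the elapsed time inside the callback. A sketch under the same assumptions as above (`wait500ms` is the illustrative stand-in, not from the article):

```scala
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.{Failure, Success}

// Non-blocking alternative: record the start before scheduling and measure
// elapsed time inside onComplete, when the work has actually finished.
val start = System.nanoTime()
val operations: List[Future[String]] =
  List(Future(wait500ms()), Future(wait500ms()), Future(wait500ms()))

Future.sequence(operations).onComplete {
  case Success(results) =>
    val elapsedMs = (System.nanoTime() - start) / 1e6
    println(f"Completed ${results.size} tasks in $elapsedMs%.2f ms")
  case Failure(e) =>
    println(s"Failed: $e")
}
```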