
"If you're trying to make sure your software is fast, or at least doesn't get slower, automated tests for performance would also be useful. But where should you start? My suggestion: start by testing big-O scaling. It's a critical aspect of your software's speed, and it doesn't require a complex benchmarking setup. In this article I'll cover: A reminder of what big-O scaling means for algorithms. Why this is such a critical performance property."
"Understanding this requires covering three key aspects: 1. Algorithm (not implementation) Let's say you want to find if a value, the needle, is in a contiguous array of memory. The (almost trivially simple) algorithm to do so involves comparing each item in the array in turn to the needle, stopping when you've reached a value that matches. If you find the needle, you return true. If you never find the needle, you return false."
In summary: automated tests verify algorithm outputs and catch regressions, and performance tests play the same role for speed. Testing big-O scaling is a good starting point because it identifies scalability issues without requiring a complex benchmarking setup. Big-O expresses an upper bound on how runtime grows with input size. It's important to distinguish the algorithm from its implementation, since different implementations of the same algorithm can run at different speeds, and to distinguish scalability from absolute speed, since scalability describes growth behavior rather than raw performance. Tools like the bigO library let you empirically identify the algorithmic scalability of Python code and test its big-O behavior to catch regressions.
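The article points at the bigO library for this; as a dependency-free sketch of the same underlying idea, you can measure a function at two input sizes and check that the runtime ratio matches the expected growth (the `contains` and `measure` helpers here are my own illustration, not the library's API):

```python
import timeit

def contains(haystack, needle):
    # Linear search: worst case touches every item, so it's O(n).
    for item in haystack:
        if item == needle:
            return True
    return False

def measure(n):
    data = list(range(n))
    # Needle -1 is never present, forcing the worst case on every run.
    return timeit.timeit(lambda: contains(data, -1), number=20)

# For an O(n) algorithm, quadrupling the input size should
# roughly quadruple the runtime; a much smaller ratio would
# suggest sub-linear scaling, a much larger one super-linear.
t1 = measure(50_000)
t2 = measure(200_000)
ratio = t2 / t1
```

A real test would repeat the measurement to smooth out timing noise and assert a tolerance band around the expected ratio rather than an exact value.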
Read at PythonSpeed