What Big O notation does and doesn't tell you

In my last blog post, I wrote about time complexity.

In summary, time complexity is a way to describe the efficiency of an algorithm in terms of the maximum number of steps it takes to complete an operation on a given input.

However, the Big O notation used to describe time complexity will not always tell you which algorithm is fastest. In fact, in certain cases it can even point you toward a slower solution.

Let's start with one of the fundamental rules of Big O notation:

All constants are ignored

Now let's consider an operation whose actual step count is n + 1. Big O will simply list that as O(n). If n is 1, the operation really takes 2 steps, double what O(n) implies.

But it gets worse: suppose a particular operation has an actual step count of 500 * n, or 500n. Big O will still reduce that to O(n), even though the operation takes 500 times as many steps as the notation suggests!
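
To make that hidden constant concrete, here is a minimal sketch (the function names and step counts are my own illustration, not taken from any particular library) of two routines that are both O(n) yet differ by a factor of 500:

```python
def sum_once(values):
    """One pass over the input: roughly n steps."""
    total = 0
    for v in values:          # n iterations
        total += v
    return total

def sum_500_times(values):
    """500 passes over the input: roughly 500n steps, still O(n)."""
    total = 0
    for _ in range(500):      # constant factor of 500, dropped by Big O
        for v in values:      # n iterations each time
            total += v
    return total // 500

data = list(range(1000))
print(sum_once(data))         # about 1,000 loop iterations
print(sum_500_times(data))    # about 500,000 loop iterations, yet still O(n)
```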

There are practical implications too.

Consider two well-known sorting algorithms, bubble sort and selection sort. Both have a worst-case Big O of O(n²).

Does that mean that both are equally fast? By this stage, you can probably guess that the answer is no.

Suppose you would like to sort an array of 5 elements. The maximum number of steps to achieve this will be 20 for bubble sort, but only 14 for selection sort.

In fact, bubble sort's maximum step count will be roughly double that of selection sort, since even in the worst-case scenario selection sort requires only one swap per pass, while bubble sort can require a swap for every comparison.
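
To see where the 20 and 14 come from, here is a rough sketch (the implementations and the convention of counting each comparison and each swap as one step are my own) run on a reverse-sorted, worst-case array:

```python
def bubble_sort_steps(values):
    """Bubble sort, counting each comparison and each swap as one step."""
    a = list(values)
    steps = 0
    for i in range(len(a) - 1):
        for j in range(len(a) - 1 - i):
            steps += 1                          # comparison
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                steps += 1                      # swap (can happen on every comparison)
    return steps

def selection_sort_steps(values):
    """Selection sort, counting each comparison and each swap as one step."""
    a = list(values)
    steps = 0
    for i in range(len(a) - 1):
        min_idx = i
        for j in range(i + 1, len(a)):
            steps += 1                          # comparison
            if a[j] < a[min_idx]:
                min_idx = j
        a[i], a[min_idx] = a[min_idx], a[i]
        steps += 1                              # only one swap per pass
    return steps

worst_case = [5, 4, 3, 2, 1]                    # reverse-sorted 5-element array
print(bubble_sort_steps(worst_case))            # 20 steps
print(selection_sort_steps(worst_case))         # 14 steps
```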

But despite this, Big O is still useful, because it classifies the long-term growth rate of an algorithm. So even though an algorithm that takes 500n steps is slower than one that takes n² steps for small inputs, the two are equal at n = 500, and beyond that point the 500n algorithm overtakes the n² algorithm and stays faster from then on.
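
As a quick sanity check of that crossover, here is a small sketch (the step-count formulas are just the 500n and n² figures from above) that prints both counts around n = 500:

```python
def steps_linear(n):
    """Step count of the hypothetical 500n algorithm."""
    return 500 * n

def steps_quadratic(n):
    """Step count of the hypothetical n^2 algorithm."""
    return n * n

# 500n and n^2 are equal at n = 500; beyond that, 500n stays smaller.
for n in (100, 500, 501, 1_000, 10_000):
    print(f"n={n:>6}: 500n={steps_linear(n):>10}  n^2={steps_quadratic(n):>12}")
```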

While Big O will not always tell you the fastest algorithm for any particular operation or input, it will tell you the most scalable one.