The ABCs of Time Complexities

By Prajwal Haniya

Techletter #24 | May 18, 2023

Big O is the metric that we use to describe the efficiency of algorithms.

Big O describes the upper bound of the runtime, Ω (Big Omega) describes the lower bound, and Θ (Big Theta) means both O and Ω, giving a tight bound on the runtime. What do we mean by the upper bound of the time complexity of an algorithm? It is the maximum amount of time the algorithm will take to execute for a given input size. In other words, it represents the worst-case scenario for the algorithm's runtime.
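
To see these bounds in action, consider a minimal linear search sketch (my own illustration): its best case is Ω(1), since the target might sit at the first index, while its worst case is O(n), since the whole array may have to be scanned. The worst case alone is tightly bounded as Θ(n).

// Linear search: best case Ω(1), worst case O(n)
function linearSearch(arr, target) {
    for (let i = 0; i < arr.length; i++) {
        if (arr[i] === target) return i; // best case: found at the first index
    }
    return -1; // worst case: scanned all n elements
}

console.log(linearSearch([4, 2, 9, 7], 9)); // 2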

For example, if an algorithm has a time complexity of O(n^2), it means that the runtime of the algorithm grows quadratically with the input size (n). Big O just describes the rate of increase.

// O(m + n): two independent loops run one after the other
const m = 4;
const n = 3;

for (let i = 0; i < m; i++) {
    console.log(i);
}

for (let j = 0; j < n; j++) {
    console.log(j);
}

// O(m * n), which is O(n^2) when both loops run 'n' times
for (let i = 0; i < m; i++) {
    for (let j = 0; j < n; j++) {
        console.log(i, j);
    }
}

O(m * n): Represents an algorithm that has nested loops, where the outer loop iterates 'm' times and the inner loop iterates 'n' times.

O(n^2): Represents an algorithm that has a single loop iterating 'n' times nested within another loop iterating 'n' times.

O(m * n) indicates that the runtime of the algorithm depends on both 'm' and 'n', so the rate of growth can vary depending on their relative sizes. O(n^2), on the other hand, indicates a quadratic growth rate, meaning the runtime grows rapidly as 'n' increases.

You typically encounter logarithmic O(log n) runtimes in algorithms with a divide-and-conquer or binary-search nature. Logarithmic time complexities often arise when the input size is halved or divided by a constant factor at each step of the algorithm.
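
Binary search is the canonical example. Here is a minimal sketch, assuming the input array is sorted: every iteration halves the remaining search space, so only about log2(n) comparisons are needed.

// O(log n): each iteration halves the remaining search space
function binarySearch(arr, target) {
    let low = 0;
    let high = arr.length - 1;
    while (low <= high) {
        const mid = Math.floor((low + high) / 2);
        if (arr[mid] === target) return mid;  // found
        if (arr[mid] < target) low = mid + 1; // discard the left half
        else high = mid - 1;                  // discard the right half
    }
    return -1; // not found
}

console.log(binarySearch([1, 3, 5, 7, 9, 11], 7)); // 3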

O(√n): Indicates that the algorithm's runtime grows proportionally to the square root of the input size. As 'n' increases, the runtime grows more slowly than linear time (O(n)) but faster than logarithmic time (O(log n)).
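
Jump search, which appears in the table below, is a classic O(√n) algorithm. Here is a minimal sketch, assuming a sorted array: it jumps ahead in blocks of size √n and then scans a single block linearly, so both phases take at most about √n steps.

// O(√n): jump ahead √n elements at a time, then scan one block linearly
function jumpSearch(arr, target) {
    const n = arr.length;
    const step = Math.floor(Math.sqrt(n));
    let prev = 0;
    let next = step;
    // Jump forward block by block until the target could be in the current block
    while (next < n && arr[next - 1] < target) {
        prev = next;
        next += step;
    }
    // Linear scan within the identified block
    for (let i = prev; i < Math.min(next, n); i++) {
        if (arr[i] === target) return i;
    }
    return -1; // not found
}

console.log(jumpSearch([1, 3, 5, 7, 9, 11, 13], 9)); // 4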

Search algorithms:

Algorithm              Best        Average     Worst
Linear Search          O(1)        O(n)        O(n)
Binary Search          O(log n)    O(log n)    O(log n)
Jump Search            O(√n)       O(√n)       O(n)
Interpolation Search   O(log n)    O(log n)    O(n)

Sorting algorithms:

Algorithm        Best          Average       Worst
Bubble sort      O(n)          O(n^2)        O(n^2)
Selection sort   O(n^2)        O(n^2)        O(n^2)
Insertion sort   O(n)          O(n^2)        O(n^2)
Merge sort       O(n log n)    O(n log n)    O(n log n)
Quick sort       O(n log n)    O(n log n)    O(n^2)
Heap sort        O(n log n)    O(n log n)    O(n log n)

Credits for the sorting table above: https://github.com/berkcetinsaya/SortingAlgorithms/blob/master/README.md

Let's understand time and space complexities along with the problems we solve regularly. This is just a basic walkthrough.