A brief overview of Big O Notation with JavaScript

Alexandra Radevich
Nov 4, 2020

If you’ve been learning to code for a while, you’ve probably heard ‘Big O’ thrown around quite a bit. It’s defined on Wikipedia as:

“A mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity.”

What does that mean for our code, and why should we care?

As you may have noticed, any single problem can have many solutions. If you’ve ever explored LeetCode, HackerRank or the like, you’ve probably seen the plethora of solutions to any one problem. Implicitly, this raises a question: how can we say that one approach is better than another? How can we strive to improve our own code? How do we judge the quality of algorithms? One such measure is Big O.

Big O Notation

Big O allows us to evaluate the time and space complexity of algorithms as their inputs grow. That final part is important: Big O is all about the worst-case scenario of an algorithm, or the ‘limiting behavior’ as Wikipedia describes it. Time complexity refers to the runtime of an algorithm, and while most algorithms might perform at roughly the same speed with small inputs, any minor speed difference will be emphasized as the inputs grow larger. Space complexity refers to the memory being used by the algorithm, and similar to time complexity, larger inputs can put a noticeable strain on performance.

Keep in mind: since Big O can refer to time or space complexity, a single algorithm can have different Big O values for these two categories.

For all of these, n is going to refer to the input.
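To see the difference between the two, consider two small illustrative functions (the names sumArray and doubleArray are my own, and the complexity classes they mention are defined in the sections below). Both do the same amount of work per element, but one allocates memory that grows with the input while the other does not.

function sumArray(arr) {
  // O(n) time, O(1) space: a single running total is the only extra memory
  let total = 0;
  for (const num of arr) {
    total += num;
  }
  return total;
}

function doubleArray(arr) {
  // O(n) time, O(n) space: the copy we build grows with the input
  const doubled = [];
  for (const num of arr) {
    doubled.push(num * 2);
  }
  return doubled;
}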

O(1)

If an algorithm has O(1) time or space complexity, that means that no matter how much n grows, the time it takes to complete (or the space in memory that gets allocated) remains constant.

This is a very good Big O to have, since we can increase our inputs without any efficiency cost. The function below has an O(1) time complexity.

function returnSquare(n) {
  return n * n;
}

In this case, it doesn’t really matter if n is 10 or 10,000 — there is only 1 operation to perform.

O(n)

Here, complexity grows proportionally to n.

In the function below, we want to add up all the numbers from 0 to n. Since we’re doing this with a loop, and our loop iterates up until n, the number of iterations will directly depend on the value of our input. The runtime and number of operations of our function will be different depending on whether our input is 3 or 3,000.

function addUpToNumber(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) {
    total += i;
  }
  return total;
}
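Interestingly, this particular problem can also be solved in O(1) time using the well-known formula n(n + 1) / 2, with no loop at all (the name addUpToNumberConstant is just for illustration). It’s a nice example of how two correct solutions to the same problem can have very different Big O values:

function addUpToNumberConstant(n) {
  // One multiplication, one addition, one division, regardless of n
  return (n * (n + 1)) / 2;
}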

O(n²)

As the input grows, the runtime or space complexity grows proportionally to n * n, the square of the input.

A good example of O(n²) time complexity is a nested loop:

function nestedLoop(n) {
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      console.log(i, j);
    }
  }
}

As n grows, our runtime increases quadratically, since we are not only looping n times in the outer loop, but for each of those iterations, looping another n times in the inner loop.
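You can see the quadratic growth by counting the console.log calls: doubling n quadruples the amount of work.

nestedLoop(2); // logs 4 pairs (2 * 2)
nestedLoop(4); // logs 16 pairs (4 * 4)
nestedLoop(8); // logs 64 pairs (8 * 8)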

O(log n)

The runtime is proportional to the logarithm of the input. O(log n) is only slightly steeper than O(1): as n gets larger, the runtime or space complexity grows very gradually.

An example of O(log n) runtime is binary search:

function binarySearch(arr, val) {
  let left = 0;
  let right = arr.length - 1;
  while (left <= right) {
    let pointer = Math.floor((right + left) / 2);
    if (arr[pointer] === val) {
      return pointer;
    } else if (arr[pointer] < val) {
      left = pointer + 1;
    } else {
      right = pointer - 1;
    }
  }
  return -1;
}

Since we’re repeatedly halving our search interval, this algorithm’s runtime grows much more slowly than O(n), where the number of operations would equal n.
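One caveat worth noting: binary search only works on a sorted array, since the halving logic depends on the elements being in order. A quick usage example:

const sorted = [1, 3, 5, 8, 13, 21, 34];
console.log(binarySearch(sorted, 13)); // 4 (found at index 4)
console.log(binarySearch(sorted, 7));  // -1 (not in the array)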

O(n log n)

O(n log n) is more efficient than O(n²), but less efficient than O(n).

Merge sort has an O(n log n) runtime. This one might be harder to wrap your brain around, so check out this breakdown or this video about the time complexity of merge sort.
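If you’d like to see the shape of it in code, here is a minimal merge sort sketch (the merge helper and both names are my own illustration): the array is halved O(log n) times, and every level of halving requires merging all n elements back together, which is where the n log n comes from.

function mergeSort(arr) {
  // Base case: arrays of 0 or 1 elements are already sorted
  if (arr.length <= 1) return arr;

  // Split the array in half; this halving happens O(log n) times
  const mid = Math.floor(arr.length / 2);
  const left = mergeSort(arr.slice(0, mid));
  const right = mergeSort(arr.slice(mid));

  return merge(left, right);
}

// Merging two sorted halves touches every element: O(n) work per level
function merge(left, right) {
  const result = [];
  let i = 0;
  let j = 0;
  while (i < left.length && j < right.length) {
    if (left[i] <= right[j]) {
      result.push(left[i++]);
    } else {
      result.push(right[j++]);
    }
  }
  // One of the halves may still have leftover elements
  return result.concat(left.slice(i)).concat(right.slice(j));
}

mergeSort([8, 3, 5, 1]); // [1, 3, 5, 8]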

[Big-O complexity chart] From https://www.bigocheatsheet.com/
