Time Complexity

The time complexity of an algorithm quantifies the amount of time the algorithm takes to run, as a function of the size of its input. But because many variables affect how long any single operation takes on real hardware, a more consistent way of thinking about time complexity is the number of steps taken to perform an operation.

Big O notation is generally used to describe an algorithm's efficiency. Big O disregards constant factors and lower-order terms, keeping only the term that grows fastest. For example, an algorithm that takes 3n² + 5n + 2 steps is simply O(n²).

When determining the Big O of an algorithm, we use the worst-case scenario, because it bounds the running time no matter what input the algorithm receives.

Some of the most common notations are:

O(1) - Constant time

This means that irrespective of the size of the input, the number of steps taken to perform the operation always remains the same.

function constantOperation(n) {
    // Always runs 11 times, no matter what n is
    for (let i = 0; i <= 10; i++) {
        console.log(i);
    }
}

The above is an example of an O(1) algorithm: even though it contains a loop, the loop takes the same number of steps to execute regardless of the input.


O(log n) - Logarithmic time

In logarithmic time, the number of steps taken to perform an operation is proportional to the logarithm of the size of the input.

An example of logarithmic time is binary search. Each comparison removes half of the remaining elements from consideration, so searching n elements takes at most about log₂(n) comparisons. In other words, suppose you're searching a sorted array of 20 elements for a particular value. The first step compares the middle element of the array to the target and, depending on the result, determines which half of the array the target may be in. Each further step again halves the remaining portion and discards the half that cannot contain the target.
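
A minimal sketch of binary search, assuming the input array is already sorted in ascending order (the function and variable names here are illustrative):

function binarySearch(arr, target) {
    let low = 0;
    let high = arr.length - 1;
    while (low <= high) {
        // Compare the middle element of the remaining range to the target
        const mid = Math.floor((low + high) / 2);
        if (arr[mid] === target) return mid;
        if (arr[mid] < target) {
            low = mid + 1;   // discard the lower half
        } else {
            high = mid - 1;  // discard the upper half
        }
    }
    return -1; // target is not in the array
}

Calling binarySearch([1, 3, 5, 7, 9], 7) returns the index 3 after just two iterations of the loop.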


O(n) - Linear time

This means that the number of steps grows in direct proportion to the size of the input.

function linearOperation(n) {
    // The loop runs n + 1 times, so the work grows linearly with n
    for (let i = 0; i <= n; i++) {
        if (i === 5) console.log('found');
    }
}

In this case, every increase in the input brings a commensurate increase in the number of steps taken to complete the operation.

An example of an O(n) operation is checking an array for a value. We perform this operation by looping over each element in the array, so the cost of the operation grows with the length of the array.
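
As a sketch, such a linear search might look like this (linearSearch and its parameters are illustrative names):

function linearSearch(arr, target) {
    // In the worst case, every element must be examined
    for (let i = 0; i < arr.length; i++) {
        if (arr[i] === target) return i;
    }
    return -1; // target not found
}

Linear time also covers functions with more than one sequential loop, as in the next example.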

function linearLoops(n) {
    // Two separate loops, each running n + 1 times
    for (let i = 0; i <= n; i++) {
        console.log(i * 2);
    }
    for (let j = 0; j <= n; j++) {
        console.log(j * 3);
    }
}

The above example contains two separate loops. Because the loops are sequential rather than nested, this is roughly a 2n operation. And because we ignore constant factors, it reduces to just O(n).


O(n²) - Quadratic time

The cost of the algorithm grows in proportion to the square of the size of the input. This is commonly seen with nested loops, as in this example.

function quadraticOperation(n) {
    for (let i = 0; i <= n; i++) {
        // The inner loop runs in full for every iteration of the outer loop
        for (let j = 0; j <= n; j++) {
            console.log(j);
        }
    }
}
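
For each of the n iterations of the outer loop, the inner loop itself runs n times, so the total work is roughly n * n = n² steps.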