Big O Notation!

Big O Notation is a mathematical expression of the complexity of a computer program. The complexity of a program is measured by the time and space it consumes to run: when we refer to the program's runtime, it is called time complexity, and when we refer to the memory (RAM) it occupies, it is called space complexity. While writing this expression, we keep only the fastest-growing term and drop the constants. Why do we (developers) need to know about this? Because it helps us write better code by choosing the best solution for a problem. It is important to solve a problem in the most optimized way for the best results.

Let us discuss with the help of a few examples. Say, we have an array as below:

let array1 = [22, 32, 13, 12, 29, 35]

And we want to check whether 9 exists in the array. There are different ways of solving this problem. If we look at each element of the array in turn, it is called Linear Search, and this looks simple for an array of length 6. But what if we have an array with ten thousand elements? This is quite possible in the real world: say we want to check whether a customer with a certain customer number exists in a list of customers. Linear Search can take a long time to figure this out. Coming back to complexity: for Linear Search, the runtime grows in direct proportion to the array size, whether the array has 6 elements or N, unless you happen to find the element among the first few. We therefore define the upper bound of such programs as O(n).
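Linear Search as described above can be sketched in a few lines; the function name `linear_search` is my own for illustration. It scans the array once, so its runtime is proportional to the array's size, i.e. O(n):

```python
# Linear Search: scan each element until we find the target.
# Worst case we touch every element, so the upper bound is O(n).
def linear_search(any_array, target):
    for index, element in enumerate(any_array):
        if element == target:
            return index  # found: return its position
    return -1  # target is not in the array

array1 = [22, 32, 13, 12, 29, 35]
print(linear_search(array1, 9))   # 9 is not present, so this prints -1
print(linear_search(array1, 13))  # 13 sits at index 2
```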

Let us say we use this in a cubic function as below:

def cubesOfElements(anyArray):
    cubesArray = []
    for eachElement in anyArray:
        cubesArray.append(eachElement * eachElement * eachElement)
    return cubesArray

cubesOfElements(array1)
# Output is [10648, 32768, 2197, 1728, 24389, 42875]

In the code above, the function cubesOfElements performs 6 iterations, one per element. However, if the array passed in has 1 million or 6 million elements, the computation takes proportionally longer. Next, what if we have to find an element in a 2D array? If we run two nested for loops to get to the element, the complexity grows to O(n^2), read as "order of n squared".
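The 2D case can be sketched as below; the function name `search_2d` and the sample grid are my own for illustration. With two nested loops over an n-by-n grid, we may visit up to n * n cells, which is where O(n^2) comes from:

```python
# Search a 2D array (list of lists) with two nested loops.
# An n x n grid means up to n * n cell visits: O(n^2).
def search_2d(grid, target):
    for row_index, row in enumerate(grid):
        for col_index, value in enumerate(row):
            if value == target:
                return (row_index, col_index)  # found: (row, column)
    return None  # target is not in the grid

grid = [[22, 32, 13],
        [12, 29, 35],
        [41, 9, 50]]
print(search_2d(grid, 9))   # (2, 1)
print(search_2d(grid, 99))  # None
```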

In the same problem, storing one value in several extra variables or data structures would increase the space complexity. Big O can be derived for any program and every algorithm; traversing a linked list and finding the length of a stack are a couple of examples. We will discuss linked lists in detail in the next article. For now, let us continue with another example.
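To see the space side of the trade-off, here is a small sketch (both function names are my own for illustration). Both versions sum an array in O(n) time, but one builds an extra list that grows with the input, while the other keeps a single running total:

```python
# O(n) extra space: we copy every element into a second list before summing.
def sum_with_extra_list(any_array):
    copies = []                  # this list grows with the input: O(n) space
    for element in any_array:
        copies.append(element)
    return sum(copies)

# O(1) extra space: one variable, regardless of how large the input is.
def sum_in_place(any_array):
    total = 0
    for element in any_array:
        total += element
    return total

print(sum_with_extra_list([22, 32, 13]))  # 67
print(sum_in_place([22, 32, 13]))         # 67
```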

Say we want to find the test paper of a student named 'John' from a sorted set of papers, or we want to find the mobile number of a friend whose name starts with the letter 'M'. These problems call for a search algorithm, and the complexity differs from algorithm to algorithm. Hence, it is important to know the Big O of an algorithm before applying it to a problem. You can refer to my other blogs on different search algorithms, where I discuss the Big O of each algorithm in detail.
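Because the papers in the example are sorted, a minimal Binary Search sketch shows why the choice of algorithm matters (the function name `binary_search` and the sample names are my own). Each comparison halves the remaining range, so the runtime is O(log n) instead of Linear Search's O(n):

```python
# Binary Search on a sorted list: each step halves the search range, so O(log n).
def binary_search(sorted_names, target):
    low, high = 0, len(sorted_names) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_names[mid] == target:
            return mid              # found the target at index mid
        elif sorted_names[mid] < target:
            low = mid + 1           # target is in the upper half
        else:
            high = mid - 1          # target is in the lower half
    return -1  # target is not in the list

papers = ['Alice', 'Bob', 'John', 'Mary', 'Zack']
print(binary_search(papers, 'John'))  # 2
print(binary_search(papers, 'Eve'))   # -1
```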

This is a basic idea of how to derive time and space complexity for a program. I hope this was helpful. Thanks for reading so far. Happy learning!!