In Computer Science, the efficiency of any program or algorithm is measured in terms of the Time and Space Complexity of that algorithm. In this article, we will talk about the Time Complexity of the algorithms and have a look at different algorithms with some common time complexities.

Time Complexity, in computer science, is a measure of the amount of “computational time” an algorithm takes, counted in terms of elementary operations/statements that each execute in a fixed amount of time. Time complexity is generally denoted using Big O notation and is expressed as a function of the size of the algorithm’s input.

To understand it better, let’s take a look at some programs:

```
long NumberOfHandShakes(int persons_in_the_room)
{
    long handshakes = (persons_in_the_room * (persons_in_the_room - 1)) / 2;
    return handshakes;
}
```

In this function, we are calculating the number of handshakes that would happen if all the persons present in the room shake hands with every other person present in the room. This function has two statements.

The first statement is a mathematical expression

```
long handshakes = (persons_in_the_room * (persons_in_the_room - 1)) / 2;
```

which will be executed in a fixed time.

The second statement returns the result, so it will also be executed in a fixed time.

Considering that each of these statements takes 1 unit of computational time, our function’s Time Complexity comes out to be O(2).

Now, no matter what the size of the input is (whether the number of persons in the room is 2 or 758), the computational complexity of this function will always remain the same. It will execute these two statements in the same amount of time.

For such algorithms, where the computational complexity is independent of the size of the input, the algorithm is said to have a **Constant Time Complexity** and is always denoted as **O(1)**.

Other examples of Constant Time complexity are

- Finding if a number is Odd or Even
- Inserting a node in a Linked List
- Push or Pop operations on a stack
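
Each of these runs in a fixed number of steps regardless of how large the input is. As a small sketch (the class and method names here are my own, for illustration):

```java
public class ConstantTime {
    // O(1): a single bitwise check, independent of how large n is
    static boolean isEven(int n) {
        return (n & 1) == 0;
    }

    public static void main(String[] args) {
        System.out.println(isEven(758)); // true
        System.out.println(isEven(3));   // false
    }
}
```

Whether n is 2 or 758, the same single check executes, so the running time does not change.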

## Logarithmic Time Complexity

Let’s have a look at the binary search algorithm. For those who are not familiar with it, Binary Search is an algorithm to search for an element in a “Sorted List/Array”. At each step, we find the mid of the array and check whether our element matches the number at the mid index. If so, we return the mid index as the position where the element was found. Otherwise, we move to either the left or the right of the mid, depending on whether our element is smaller or greater than the element at the mid.

Effectively, we are reducing the list to half at every step, which is why the number of steps needed to find the element is on the order of log₂(n), where n is the number of elements in the array/list. This is denoted as **O(log n)**.

```
int BinarySearch(int[] a, int element)
{
    int start = 0, end = a.length - 1;
    int mid;
    while (start <= end)
    {
        mid = start + (end - start) / 2;
        if (a[mid] == element)
            return mid;
        else if (a[mid] < element)
            start = mid + 1;   // search the right half
        else
            end = mid - 1;     // search the left half
    }
    return -1;                 // element not found
}
```

**Important**

- At this moment, you might be thinking that although there are a number of statements in this algorithm, we are only counting the number of steps it takes to find the element. As discussed initially, to calculate the time complexity of this function, we consider that each statement (the assignment of variables, the calculation of mid, the conditional checks) takes a fixed amount of time, i.e., 1 computational unit each. So, collectively, the statements inside the loop amount to O(1), and each of them is executed log(n) times. Thus, the time complexity is measured in terms of the highest-degree term based on the size of the input.
- Also, there can be cases where we find the element in the first iteration itself, when the element we are searching for lies at the middle index of the original array. That is the best case. However, to describe the time complexity of an algorithm, we conventionally take the worst case and denote the Time Complexity of the algorithm for the worst case.

**P.S.** For those who are not from a mathematical background, the logarithm is the exact opposite of exponentiation. For example, if we keep multiplying a number, say 2, by itself n times, we get an expression like **2ⁿ**. Similarly, if we keep dividing a value n by a number, again 2, the number of divisions it takes to reach 1 is **log(n)** to the base of the number by which we are dividing (2 here).
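
This can be seen directly by counting the halving steps; a quick sketch (the helper name is my own):

```java
public class LogSteps {
    // Count how many times n can be divided by 2 before it reaches 1:
    // approximately log2(n) steps
    static int halvingSteps(int n) {
        int steps = 0;
        while (n > 1) {
            n /= 2;     // halve the value, just as binary search halves the range
            steps++;
        }
        return steps;
    }

    public static void main(String[] args) {
        System.out.println(halvingSteps(16)); // 4, since 2^4 = 16
    }
}
```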

Algorithms with **Logarithmic Time Complexity** are generally considered efficient, as we eliminate a sizeable fraction of the input at each step. Other examples of algorithms with Logarithmic Time Complexity are:

- Finding the Binary equivalent of a decimal number -> log₂(n)
- Finding the Sum of Digits of a number -> log₁₀(n)

**Note** that in these algorithms the time complexity is not based on the “number of elements” but rather on the “size of the input”, i.e., the magnitude of the number itself.
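
For instance, the sum-of-digits computation runs once per decimal digit, which is roughly log₁₀(n) + 1 iterations. A minimal sketch (the class and method names are my own):

```java
public class DigitSum {
    // Runs once per decimal digit: roughly log10(n) + 1 iterations, so O(log n)
    static int sumOfDigits(int n) {
        int sum = 0;
        while (n > 0) {
            sum += n % 10; // take the last digit
            n /= 10;       // drop the last digit (divide by 10 each step)
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumOfDigits(1234)); // 10
    }
}
```

A four-digit number takes four iterations, a five-digit number takes five, and so on: the work grows with the number of digits, not with the value of n itself.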

## Other Common Time Complexities

### Linear Time Complexity

Any algorithm where the computational complexity, or number of operations, increases linearly with the size of the input is said to have Linear Time Complexity, denoted by **O(n)**.

Some examples of algorithms where Time Complexity is Linear:

- Linear Search algorithm
- Find the sum of all the elements in an array
- Naive algorithm to find if a number is prime (by dividing it by every number smaller than itself)
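
Summing an array is perhaps the simplest case: one operation per element, so n operations for n elements. A minimal sketch (names are my own):

```java
public class LinearTime {
    // One pass over the array: n additions for n elements, so O(n)
    static long sumOfArray(int[] a) {
        long sum = 0;
        for (int value : a) {
            sum += value;
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumOfArray(new int[]{1, 2, 3, 4, 5})); // 15
    }
}
```

Doubling the length of the array doubles the number of additions, which is exactly what “linear” means here.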

### Linearithmic Time Complexity

Any algorithm with a time complexity of **O(n log(n))** is said to have Linearithmic Time Complexity. Generally, efficient sorting algorithms like Merge Sort, Heap Sort, Tim Sort, and Cube Sort come under this category; Merge Sort in particular follows the “Divide and Conquer” approach.

For example, Merge Sort keeps dividing the array in half at each step, giving O(log n) levels of division, and performs O(n) merge work across each level, hence the overall time complexity is O(n log n).
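
A compact Merge Sort sketch showing both halves of that argument (the recursive halving and the linear merge):

```java
import java.util.Arrays;

public class MergeSortDemo {
    // Divide: log2(n) levels of halving. Conquer: O(n) merge work per level.
    // Total: O(n log n). Sorts a[lo..hi) in place.
    static void mergeSort(int[] a, int lo, int hi) {
        if (hi - lo <= 1) return;      // 0 or 1 elements: already sorted
        int mid = lo + (hi - lo) / 2;
        mergeSort(a, lo, mid);         // sort the left half
        mergeSort(a, mid, hi);         // sort the right half
        merge(a, lo, mid, hi);         // O(n) merge of the two sorted halves
    }

    static void merge(int[] a, int lo, int mid, int hi) {
        int[] tmp = new int[hi - lo];
        int i = lo, j = mid, k = 0;
        while (i < mid && j < hi)
            tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
        while (i < mid) tmp[k++] = a[i++];
        while (j < hi)  tmp[k++] = a[j++];
        System.arraycopy(tmp, 0, a, lo, tmp.length);
    }

    public static void main(String[] args) {
        int[] data = {5, 2, 9, 1, 7};
        mergeSort(data, 0, data.length);
        System.out.println(Arrays.toString(data)); // [1, 2, 5, 7, 9]
    }
}
```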

### Quadratic Time Complexity

An algorithm where, for each of the n elements of the input, we have to perform n operations, has a resulting time complexity of O(n × n), i.e., **O(n²)**, and is said to have Quadratic Time Complexity.

For example, in most of the simple sorting techniques like Bubble Sort or Selection Sort, we have to compare each element with the other elements in the array, thus performing roughly n × n operations for an array of size n.
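
A minimal Bubble Sort sketch makes those comparisons explicit (names are my own):

```java
import java.util.Arrays;

public class BubbleSortDemo {
    // Two nested passes over the array: roughly n * n comparisons => O(n^2)
    static void bubbleSort(int[] a) {
        for (int i = 0; i < a.length - 1; i++) {
            for (int j = 0; j < a.length - 1 - i; j++) {
                if (a[j] > a[j + 1]) {  // swap an adjacent out-of-order pair
                    int t = a[j];
                    a[j] = a[j + 1];
                    a[j + 1] = t;
                }
            }
        }
    }

    public static void main(String[] args) {
        int[] data = {4, 1, 3, 2};
        bubbleSort(data);
        System.out.println(Arrays.toString(data)); // [1, 2, 3, 4]
    }
}
```

The inner loop shrinks slightly on each pass, but the total comparison count is still on the order of n², so the complexity class is unchanged.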

Generally, you can identify this time complexity with a nested loop that looks like this

```
for (int i = 0; i < n; i++)
{
    for (int j = 0; j < n; j++)
    {
        // do something
    }
}
```

**NOTE** that some algorithms can have two looping structures one after the other, but not nested. Something like

```
for (int i = 0; i < n; i++)
{
    for (int j = 0; j < n; j++)
    {
        // do something
    }
}
for (int i = 0; i < n; i++)
{
    // do something
}
for (int i = 0; i < n; i++)
{
    // do something
}
```

Here the Time Complexity will be O(n²) + O(n) + O(n) = O(n²) + O(2n). As said earlier, we always consider the highest-degree term when stating the time complexity, and thus the Time Complexity of such algorithms is considered to be O(n²) only.

### Cubic Time Complexity

No candies for guessing! Algorithms with a Time Complexity of **O(n³)** are said to have Cubic Time Complexity.

Consider the example of finding values of x, y, and z that satisfy a linear equation like

`3x + 2y + z = 45`

Programmatically, you will have to run three nested loops to find a combination of x, y, and z for which 3x + 2y + z equals 45, something like below

```
for (int x = 0; x < n; x++)
{
    for (int y = 0; y < n; y++)
    {
        for (int z = 0; z < n; z++)
        {
            if (3*x + 2*y + z == 45)
            {
                print(x, y, z);
                return;
            }
        }
    }
}
```

Another example of an algorithm with Cubic Time Complexity is the naive Matrix Multiplication of two n × n matrices.
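
A sketch of the naive version, where each of the n² output cells requires an n-term dot product (names are my own):

```java
public class MatrixMultiply {
    // Naive multiplication of two n x n matrices: three nested loops => O(n^3)
    static int[][] multiply(int[][] a, int[][] b) {
        int n = a.length;
        int[][] c = new int[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                for (int k = 0; k < n; k++)
                    c[i][j] += a[i][k] * b[k][j]; // dot product of row i and column j
        return c;
    }

    public static void main(String[] args) {
        int[][] a = {{1, 2}, {3, 4}};
        int[][] b = {{5, 6}, {7, 8}};
        int[][] c = multiply(a, b);
        System.out.println(c[0][0] + " " + c[0][1]); // 19 22
        System.out.println(c[1][0] + " " + c[1][1]); // 43 50
    }
}
```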

There are many other types of algorithms with various time complexities, like **Exponential, Polynomial, Factorial**, and several others, which I will leave to the reader to research and study further.

Here is a graph showing various time complexities and how they increase with increasing input size.