
What is the time complexity of an algorithm?

Overview

In programming, there are often many ways to solve a problem, but we need to pick the one that is most efficient in terms of cost. Time complexity and space complexity act as measurement scales for the cost of an algorithm.

In this answer, we'll discuss the following questions:

  1. What's the importance of time complexity?
  2. What is time complexity?
  3. What are different time complexity notations?
  4. What is the time complexity of linear and binary search?

Let's start with the formal definition.

Time complexity

Time complexity measures the time taken to execute all statements of an algorithm. It should be as small as possible, because the more time an algorithm takes, the higher its cost. For instance, if a statement is executed n times and executing it once takes time k, then its time complexity is n*k.

Note: The time complexity is the total amount of time taken by an algorithm to execute, as a function of the length of input data.

If an algorithm's statements execute only a single time, it requires a constant amount of time. If they run multiple times, as in a loop, the required time grows with the number of iterations. This raises the question of how to relate the input size to the time required. For this, there are standardized notations.
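The n*k relationship above can be sketched in C++. As a minimal sketch (the helper name countExecutions is our own, not from any library), the function counts how many times a constant-time statement inside a loop runs for an input of size n; the total time is proportional to that count times the per-statement cost k:

```cpp
#include <cassert>

// Hypothetical helper (our own name): counts how many times a
// constant-time statement inside a simple loop executes for an
// input of size n. The total running time is then proportional
// to n * k, where k is the cost of one execution.
long countExecutions(int n) {
    long count = 0;
    for (int i = 0; i < n; i++) {
        count++; // stands in for one constant-time statement
    }
    return count;
}
```

For n = 10, the statement executes 10 times; doubling n doubles the count, which is exactly the input-to-time relationship the notations below capture.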

Different time complexity notations

As we have observed, the time complexity is given as a function of the input data length. There are different notations to represent it, like theta notation and omega notation, but our focus will be on Big-O notation, as it's widely used by developers.

The Big-O notation

Big-O notation represents the relationship between the input size n and the number of times a statement is executed. It shows how quickly the processing time of an algorithm grows relative to its input size n.

There are some important time complexities in Big-O notation:

  1. Constant time: O(1)
  2. Linear time: O(n)
  3. Logarithmic time: O(log n)
  4. Quadratic time: O(n^2)
  5. Cubic time: O(n^3)

Constant time: O(1)

An algorithm has constant time complexity when it's independent of the input size n. In the following example, our algorithm doesn't depend upon the input:

#include <iostream>
using namespace std;

int main() {
    int n = 0;
    cout << "Integer = " << n << endl;
    return 0;
}
Code with constant time complexity

Linear time: O(n)

Algorithms have linear time complexity when their runtime increases proportionally to the size of the input n.

Linear Search

In linear search, the algorithm compares each element of the array with the element being searched for. The search continues until the element is found or the last element of the array is reached.

Let's look at the figure below. We have an array of ten elements, and the element to search for, 7, is at index 6 (the seventh position):

[Figure: linear search over the array 1–10, comparing each element with 7 and moving to the next index on a mismatch]
#include <iostream>
using namespace std;

int main() {
    // Let's assume we have an array of integers
    int arr[10] = {1,2,3,4,5,6,7,8,9,10};
    // And the element to search for is 7
    int search = 7;
    int iter;
    for (iter = 0; iter < 10; iter++)
    {
        if (arr[iter] == search)
        {
            cout << "Element found" << endl;
            break;
        }
    }
    if (iter == 10)
    {
        cout << "Element not found" << endl;
    }
    return 0;
}
Linear search has linear time complexity

So, the linear search takes seven comparisons to find 7. In the worst case, when the element is at the last index or absent, it takes n operations.
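As a hedged sketch of the comparison counts above (linearSearchCount is a hypothetical variant of the code shown, not a standard function), we can instrument the search to report how many comparisons it made:

```cpp
#include <cassert>

// Hypothetical variant of the linear search above (the name
// linearSearchCount is ours): returns the index of target, or -1,
// and reports the number of comparisons made via `comparisons`.
int linearSearchCount(const int arr[], int n, int target, int &comparisons) {
    comparisons = 0;
    for (int i = 0; i < n; i++) {
        comparisons++;              // one comparison per iteration
        if (arr[i] == target) {
            return i;               // found: stop early
        }
    }
    return -1;                      // worst case: n comparisons
}
```

Searching for 7 in {1, ..., 10} reports 7 comparisons; searching for a missing element reports the full n = 10, the worst case.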

Logarithmic time: O(log n)

An algorithm is said to have logarithmic time complexity if its processing time is proportional to the logarithm of the input size n. A good example is binary search.

Binary Search

Binary search is also known as half-interval search because of its mechanism. The algorithm compares the target value with the middle element. If the middle element is not equal to the target, it eliminates the half in which the target can't lie, and the search continues in the remaining half with the same mechanism until the target is found. If the remaining half becomes empty, the target is not in the array.

Note: Binary search only works on sorted arrays.

#include <iostream>
using namespace std;

int binarySearch(int arr[], int left, int right, int x) {
    while (left <= right) {
        int mid = left + (right - left) / 2;
        if (arr[mid] == x) {
            return mid;
        } else if (arr[mid] < x) {
            left = mid + 1;
        } else {
            right = mid - 1;
        }
    }
    return -1;
}

int main() {
    // Let's assume we have a sorted array of integers
    int arr[10] = {1,2,3,4,5,6,7,8,9,10};
    // And the element to search for is 7
    int search = 7;
    int output = binarySearch(arr, 0, 9, search);
    if (output != -1)
    {
        cout << "Element found" << endl;
    }
    else
    {
        cout << "Element not found" << endl;
    }
    return 0;
}
Binary search has logarithmic time complexity

[Figure: binary search over the array 1–10; since 5 < 7, the first half is eliminated and the search continues in the second half]

Now, we can see the difference in the number of operations. The binary search takes only four operations, roughly log2(10). So, the relationship between the input size and time is logarithmic.
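The four-operation count can be checked with a sketch. As an illustration (binarySearchCount is our hypothetical variant of the binarySearch function above, with the same logic), we add a step counter to the loop:

```cpp
#include <cassert>

// Hypothetical variant of binarySearch above (the name
// binarySearchCount is ours): identical logic, but it also counts
// loop iterations via `steps` so the log2(n) growth can be checked.
int binarySearchCount(const int arr[], int left, int right, int x, int &steps) {
    steps = 0;
    while (left <= right) {
        steps++;
        int mid = left + (right - left) / 2;
        if (arr[mid] == x) {
            return mid;
        } else if (arr[mid] < x) {
            left = mid + 1;   // discard the left half
        } else {
            right = mid - 1;  // discard the right half
        }
    }
    return -1;
}
```

Searching for 7 in {1, ..., 10} takes exactly 4 iterations, which matches log2(10) ≈ 3.3 rounded up; doubling the array size adds only about one more iteration.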

Quadratic time: O(n^2)

If the running time of an algorithm grows proportionally to the square of the input size n, it is said to have quadratic time complexity. A typical example is a program with a nested loop.

The following code chunk runs in quadratic time because the outer loop runs n times and the inner loop body runs n*n times in total, which gives O(n^2):

#include <iostream>
using namespace std;

int main() {
    int n = 5;
    for (int i = 0; i < n; i++) // outer loop: runs n times
    {
        for (int j = 0; j < n; j++) // inner loop body: runs n*n times in total
        {
            cout << "Demonstration of quadratic time complexity" << endl;
        }
    }
    return 0;
}
Algorithm following quadratic time complexity

Cubic time: O(n^3)

If the processing time of an algorithm is proportional to the cube of the input size n, its time complexity is cubic.

Let's see the following code, whose running time is cubic because of three nested loops:

#include <iostream>
using namespace std;

int main() {
    int n = 5;
    for (int i = 0; i < n; i++) // outer loop: runs n times
    {
        for (int j = 0; j < n; j++) // middle loop body: runs n*n times in total
        {
            for (int k = 0; k < n; k++) // inner loop body: runs n*n*n times in total
            {
                cout << "Demonstration of cubic time complexity" << endl;
            }
        }
    }
    return 0;
}
Algorithm following cubic time complexity

There are many more time complexities, but these are the most used.


Copyright ©2025 Educative, Inc. All rights reserved