How to calculate the time complexity

There are multiple ways to solve a problem in computer science. To pick an efficient one, we evaluate performance using two measures: time complexity and space complexity.


Time complexity

Time complexity measures the time taken to execute all statements of an algorithm. It should be as small as possible because the more time an algorithm takes, the more it costs, leading to performance issues. For instance, if a statement is executed n times and the time to run it once is k, then its time complexity is n*k.

Note: The time complexity is the total amount of time taken by an algorithm to execute, as a function of the length of input data.

This raises the question of how to express the relationship between the input and the time required. For this, there are standardized notations. Big-O notation is a mathematical notation that describes the limiting behavior of a function as its argument tends toward a particular value.

Different notations for time complexity

There are many notations available to compute time complexity:

  • Big-O notation: O

  • Omega notation: Ω

  • Theta notation: θ

Let's understand them briefly. However, our focus will be on big-O notation, as developers use it most widely.

Big-O notation: O

The big-O notation describes the asymptotic upper bound, that is, the worst case of time complexity. It measures the maximum amount of time our algorithm can take to run as the input data size grows.

Omega notation: Ω

The omega notation describes the asymptotic lower bound. It is the best-case scenario: the minimum amount of time our algorithm takes to run as the input data grows.

Theta notation: θ

The theta notation describes a tight bound: the algorithm's running time lies between its lower bound and its upper bound, and both grow at the same rate.

Measurement of time complexity

Most developers use big-O notation because estimating the maximum time required is relatively easy. So, we'll learn to measure the upper bound of an algorithm's running time in this section.

Code

Let's consider the code below:

#include <iostream>
using namespace std;

int main() {
    int sum = 0;
    for (int i = 0; i < 5; i++) // Control statement
    {
        sum = sum + i; // Assignment and arithmetic operator
    }
    cout << "Sum = " << sum;
    return 0;
}

This code's time complexity depends on the number of times its statements execute. Here, the for loop runs n times (with n = 5 in this example), and hence the complexity is O(n).

Let's consider more examples:

Example 1

#include <iostream>
using namespace std;

int main() {
    int a = 4;
    int b = 6;
    int c;
    c = a + b; // Assignment and arithmetic operations
    cout << c;
    return 0;
}

The code chunk above executes in O(1) time because the whole program consists of assignments, arithmetic operations, and other statements that execute only once.

Example 2

#include <iostream>
using namespace std;

int main() {
    int sum = 0, i;
    int arr[5] = {2, 4, 5, 6, 7};
    for (i = 0; i < 5; i++) // Control statement
    {
        sum = sum + arr[i]; // Assignment and arithmetic operations
    }
    cout << sum;
    return 0;
}

The code chunk above has a control statement that executes n times, along with some assignments, arithmetic operations, and an output statement (cout), which makes its time complexity O(n + 3). For such cases, there's a simple rule.

Note: For larger values of n, the constant terms become negligible.

So, the complexities of arithmetic, logic, and assignment operations can be ignored whenever a program contains a control statement. The final complexity for this code is O(n).

Example 3

#include <iostream>
using namespace std;

int main() {
    int sum = 0, i, j;
    int arr[5] = {2, 4, 5, 6, 7};
    for (i = 0; i < 5; i++) // Control statement 1
    {
        for (j = 0; j < 5; j++) // Control statement 2
        {
            sum = sum + arr[i]; // Assignment and arithmetic operations
        }
    }
    cout << sum;
    return 0;
}

Control statement 1 and control statement 2 each execute n times, and the inner loop runs in full for every iteration of the outer loop. So, the time complexity is n*n, which is equal to O(n^2).

Example 4

This is the binary search algorithm, which compares the target value with the middle element. If the middle element is not equal to the target, the algorithm eliminates the half in which the target cannot lie. The search continues in the remaining half, following the same mechanism, until the target is found. If the remaining half becomes empty, the target is not in the array.

Note: Binary search only works on sorted arrays.

#include <iostream>
using namespace std;

int binarySearch(int arr[], int left, int right, int x) {
    while (left <= right) {
        int mid = left + (right - left) / 2; // avoids overflow of (left + right)
        if (arr[mid] == x) {
            return mid;
        } else if (arr[mid] < x) {
            left = mid + 1;  // discard the lower half
        } else {
            right = mid - 1; // discard the upper half
        }
    }
    return -1; // not found
}

int main() {
    // A sorted array of integers
    int arr[10] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
    // Element to search for
    int search = 7;
    int output = binarySearch(arr, 0, 9, search);
    if (output != -1) {
        cout << "Element found" << endl;
    } else {
        cout << "Element not found" << endl;
    }
    return 0;
}

Let's look at the simulation below to understand how it works:


Calculate time complexity

The binary search takes about four operations here, which is equivalent to log2(10), where 10 is the size of the input. So, the relationship between input size and time is logarithmic.

The processing time is proportional to the logarithm of the number of input elements n. So, the complexity of this code is O(log n).

Comparison

The important big-O notations are as follows:

  • O(1): Constant complexity. The algorithm takes the same amount of time regardless of the input size n.

  • O(log n): Logarithmic complexity. The running time grows proportionally to the logarithm of the input size n.

  • O(n): Linear complexity. The running time grows directly proportionally to the input size n.

  • O(n log n): Log-linear complexity. The running time grows proportionally to the input size n multiplied by a logarithmic factor log(n).

  • O(n^2): Quadratic complexity. The running time grows proportionally to the square of the input size n.

  • O(2^n): Exponential complexity. The running time doubles with each addition to the input dataset.


Copyright ©2024 Educative, Inc. All rights reserved