
Asymptotic runtime complexity: How to gauge algorithm efficiency

17 min read
Aug 22, 2022


Algorithms are behind every computer program. Usually, several different algorithms can be designed to solve the same problem, so finding the best one is key to saving time and money and providing the best customer service.

Some algorithms are so inefficient (slow) that they cannot be used in practice. In such cases, designing a new, more efficient algorithm can drastically change the landscape. For example, the original Fourier transform algorithm was too slow for many practical uses; the invention of the Fast Fourier Transform made a large variety of applications possible, including digital recording, image processing, and pitch correction.

This blog answers a fundamental question: how do we calculate an algorithm’s efficiency? By answering it, we can compare algorithms and find the most suitable one for a given task. This is an essential skill for a programmer who wants to create fast, memory-efficient, and responsive programs that perform predictably within known bounds in all circumstances. For the same reason, any customer-centric enterprise will prefer algorithms with known efficiency and memory bounds that are also proven to be correct. This blog will also teach you about the worst-case and the best-case time complexities of an algorithm and how to find them. Finally, we will learn about the famous asymptotic notations: Big O, Big Omega, and Big Theta.

The blog uses mathematical notations and formulas which might look tedious to some readers. Worry not! We have also described those notations in simple English, with graphical illustrations, and given examples for clarity.

Let the fun begin!


What is an algorithm#

An algorithm refers to a sequence of steps that solves a given problem. To that end, an algorithm takes some input(s) and produces desired output(s). For example, below is the input and the corresponding output of a typical sorting algorithm:

Input: An array of numbers $[a_1, a_2, \ldots, a_n]$

Output: A sorted array of numbers $[\hat{a}_1, \hat{a}_2, \ldots, \hat{a}_n]$, where $\hat{a}_i \leq \hat{a}_{i+1}, \forall i$

An algorithm is considered correct if and only if, on all the possible inputs, it produces the desired output(s). For instance, a sorting algorithm that always sorts an array of numbers successfully for all the possible inputs will be considered correct.


We usually write an algorithm in pseudocode: a mixture of English and programming-language-independent syntax that is easy for a layperson to understand. Below is the pseudocode of an algorithm that finds the minimum number in an unsorted array of numbers.

// function takes an unsorted array as input and returns its minimum value
find_min(A[a_1, a_2, ..., a_n])
    // Assume that the first element is the minimum
    // In pseudocode we use 1 as the first index of an array
    min = A[1]
    // loop from the second element of the array to its last element
    for j = 2 to n
        // if the current element is smaller than min, make it the new minimum
        if A[j] < min
            min = A[j]
    // return the minimum value
    return min

Implementation-independent time complexity#

When we implement an algorithm in a programming language (such as Java or C), it becomes a program that can be executed on various hardware platforms. Different algorithms designed to solve the same problem usually differ widely in their time complexities, but this difference may be skewed, or even reversed, when the algorithms are implemented in different programming languages and executed on different hardware platforms. For instance, an algorithm implemented in assembly language will generally execute faster than its Python implementation. Similarly, an algorithm (say, Algo-n) running on a supercomputer may finish sooner than another algorithm (Algo-m) running on an old, resource-constrained PC, even when Algo-m is theoretically more efficient than Algo-n.

We cannot fairly compare algorithms’ efficiency by executing their implementations and recording their runtime.

Therefore, computer scientists and programmers are interested in knowing an algorithm’s time complexity independent of its implementation language and of the hardware platform it runs on.
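For reference, here is a C implementation of the find_min pseudocode. Its measured runtime would vary with the compiler, the machine, and the input it is given, which is precisely the dependence we want to abstract away.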

#include <stdio.h>

// function takes an unsorted array and its length, and returns the minimum value
int find_min(int A[], int length) {
    // Assume that the first element is the minimum
    // (in C, array indices start at 0)
    int min = A[0];
    // loop from the second element of the array to its last element
    for (int j = 1; j < length; j++) {
        // if the current element is smaller than min, make it the new minimum
        if (A[j] < min)
            min = A[j];
    }
    // return the minimum value
    return min;
}

// The main function
int main() {
    int A[] = {2, 7, 9, -5, 4, 5, 3, 1};
    printf("minimum = %d\n", find_min(A, 8));
    return 0;
}

Time complexity as a function of input size#

Most algorithms’ running time increases as their input size increases. For instance, if the input to the find_min algorithm is an array of size 10, it will run faster than when its input is an array containing 1 million elements.

If Algo-1 is faster than Algo-2 on small inputs but slower on large inputs, will Algo-1 be considered more efficient?

In computer science, we are interested in the time complexity of an algorithm as a function of the size of its input. That is, we care about how an algorithm’s running time grows as its input size increases. Thus, the algorithm (in this case, Algo-2) that is more efficient on larger input sizes is considered superior.

Calculating time complexity#

We aim to measure an algorithm’s time complexity independent of its implementation and of any hardware platform. To that end, we assume that each pseudocode instruction takes a constant amount of time. In particular, the $i^{th}$ pseudocode instruction takes $c_i$ time to execute, where $c_i$ is a positive constant. We then carefully count how many times each instruction executes and calculate the total time taken by each instruction as:

Total time of instruction $i$ = (constant time taken by it, $c_i$) $\times$ (number of times it is executed)

Finally, we add up the total time taken by all the instructions of the pseudocode to obtain the running time of our algorithm.

Let’s make this crystal clear by calculating the running time of the find_min pseudocode given earlier.

Code | Cost of instruction | Number of times executed
find_min (A[a_1, a_2, ..., a_n]) | $c_1$ | $1$
min = A[1] | $c_2$ | $1$
for j=2 to n | $c_3$ | $n$
if A[j] < min | $c_4$ | $n-1$
min = A[j] | $c_5$ | $k$
return min | $c_6$ | $1$

Thus, the total time taken by our find_min algorithm on a given input of size $n$ is:

\begin{align*}
T(n) &= c_1 + c_2 + n c_3 + (n-1) c_4 + k c_5 + c_6 \\
&= (c_1 + c_2 - c_4 + c_6) + n(c_3 + c_4) + k c_5 \\
&= a + nb + kc
\end{align*}

In the equation above, $a = c_1 + c_2 - c_4 + c_6$, $b = c_3 + c_4$, and $c = c_5$. Furthermore, the value of $k$ depends on the input. We can understand the best-case and the worst-case behavior of the algorithm by considering the possible values $k$ may take.

The best case occurs when the minimum is the first element of the array: the if-condition is never true, so $k = 0$ and $T(n) = a + nb$. The worst case occurs when the input array is sorted in descending order: the if-condition is true on every iteration, so $k = n-1$ and $T(n) = a + nb + (n-1)c$.
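To see these two cases concretely, here is a small C sketch (our own addition, not part of the original analysis; find_min_counted is a hypothetical name) that counts how many times the assignment min = A[j] executes, i.e., the value of $k$, for a best-case input and a worst-case input:

#include <stdio.h>

// find_min instrumented with a counter for the "min = A[j]" assignment (the k in our formula)
int find_min_counted(int A[], int length, int *k) {
    int min = A[0];
    *k = 0;
    for (int j = 1; j < length; j++) {
        if (A[j] < min) {
            min = A[j];
            (*k)++; // count how often the minimum is updated
        }
    }
    return min;
}

int main() {
    int best[]  = {1, 7, 9, 5, 4, 8, 3, 2};  // minimum already first: k should be 0
    int worst[] = {9, 8, 7, 6, 5, 4, 3, 2};  // descending order: k should be n-1 = 7
    int k;
    find_min_counted(best, 8, &k);
    printf("best case:  k = %d\n", k);
    find_min_counted(worst, 8, &k);
    printf("worst case: k = %d\n", k);
    return 0;
}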

Asymptotic time complexity#

In the last section, we learned to find the running time of iterative algorithms. However, we may have two algorithms with different running times:

Algo-1 | Algo-2
$T_1(n) = an^2 + bn + d$ | $T_2(n) = en^2 + fn$

We cannot say which one is better without calculating the exact values of the constants $a$, $b$, $d$, $e$, and $f$. However, these constants are not independent of the implementation language and hardware platform, and finding their exact values would be too cumbersome. Thus, we resort to asymptotic time complexity, where our focus is on the order of growth of the running time as the input size increases. Big O, Big Theta, and Big Omega are three key notations used to describe asymptotic time complexity.

Big O: The upper bound#

Given that the running time of an algorithm is $T(n)=f(n)$, where $n$ is the size of the input, we say that the function $f$ is Big O of a function $g$, written as:

$f(n) = O(g(n)),$

if there exist positive constants $c, n_0 \in \mathbb{R}$ such that

$f(n) \le c\,g(n), \quad \forall n \ge n_0.$

In simple words, the above inequality implies that when the input size $n \ge n_0$, the running time of the algorithm, $f(n)$, is upper bounded by $c\,g(n)$. That is, $f(n)$ is less than or equal to $c\,g(n)$ for all input sizes $n \ge n_0$. This is depicted by the following graphical illustration.

[Figure: $f(n)$ stays at or below $c\,g(n)$ for all $n \ge n_0$]

A common mistake is referring to the Big O time complexity as the worst case of an algorithm. On the contrary, Big O refers to the upper bound, which could be for the best case, the worst case, or the average case.

If $f(n) = O(n^i)$ for some integer $i$, then by the above definition, the following statement will always be true:

$f(n) = O(n^j), \quad \forall j \ge i$
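As a quick worked example (our own numbers, not from the text above), consider $f(n) = 3n^2 + 10n$ with $g(n) = n^2$. Choosing $c = 13$ and $n_0 = 1$ satisfies the definition:

\begin{align*} 3n^2 + 10n \le 3n^2 + 10n^2 = 13n^2, \quad \forall n \ge 1, \end{align*}

so $f(n) = O(n^2)$. By the remark above, $f(n) = O(n^3)$, $f(n) = O(n^4)$, and so on also hold, although they are much looser bounds.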


Big Omega: The lower bound#

Given that the running time of an algorithm is $T(n)=f(n)$, where $n$ is the size of the input, we say that the function $f$ is Big Omega of a function $g$, written as:

$f(n) = \Omega(g(n)),$

if there exist positive constants $c, n_0 \in \mathbb{R}$ such that

$f(n) \ge c\,g(n), \quad \forall n \ge n_0.$

In simple words, the above inequality implies that when the input size $n \ge n_0$, the running time of the algorithm, $f(n)$, is lower bounded by $c\,g(n)$. That is, $f(n)$ is greater than or equal to $c\,g(n)$ for all input sizes $n \ge n_0$, as depicted by the following illustration.

[Figure: $f(n)$ stays at or above $c\,g(n)$ for all $n \ge n_0$]

Some people incorrectly refer to Big Omega as an algorithm’s best case time complexity. On the contrary, Big Omega refers to the lower bound, which could be for the best case, the worst case, or the average case.

If $f(n) = \Omega(n^i)$ for some integer $i$, then by the above definition, the following statement will always be true:

$f(n) = \Omega(n^j), \quad \forall j \le i$
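Continuing the same example (again, our own numbers), $f(n) = 3n^2 + 10n \ge 3n^2$ for all $n \ge 1$, so choosing $c = 3$ and $n_0 = 1$ gives $f(n) = \Omega(n^2)$. By the remark above, $f(n) = \Omega(n)$ and $f(n) = \Omega(1)$ also hold, but they are weaker statements.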

Big Theta: The tight bound#

Big Theta is the most informative asymptotic notation and should be used in place of Big O and Big Omega whenever possible, because it represents a tight bound on the running time and therefore conveys more about an algorithm’s complexity than either of its counterparts.

Given that the running time of an algorithm is $T(n)=f(n)$, where $n$ is the size of the input, we say that the function $f$ is Big Theta of a function $g$, written as:

$f(n) = \Theta(g(n)),$

if there exist positive constants $c_1, c_2, n_0 \in \mathbb{R}$ such that

$c_2\,g(n) \le f(n) \le c_1\,g(n), \quad \forall n \ge n_0.$

In simple words, the above inequality implies that $f(n)=O(g(n))$ (with constant $c_1$) and, at the same time, $f(n)=\Omega(g(n))$ (with constant $c_2$) for all input sizes $n \ge n_0$. In other words, $f(n)$ is less than or equal to $c_1\,g(n)$ and greater than or equal to $c_2\,g(n)$ on all input sizes $n \ge n_0$. This is depicted by the following graphical illustration.

[Figure: $f(n)$ lies between $c_2\,g(n)$ and $c_1\,g(n)$ for all $n \ge n_0$]
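Putting the two previous bounds together (our own example once more): since $3n^2 \le 3n^2 + 10n \le 13n^2$ for all $n \ge 1$, choosing $c_2 = 3$, $c_1 = 13$, and $n_0 = 1$ shows that $3n^2 + 10n = \Theta(n^2)$. Unlike Big O and Big Omega, a Theta bound cannot be loosened in either direction: $3n^2 + 10n$ is neither $\Theta(n)$ nor $\Theta(n^3)$.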

Example: Insertion sort#

Let’s go through an example to make all the concepts clear. Insertion sort is a famous sorting algorithm that works much like sorting a hand of playing cards.

Insertion sort is similar to sorting cards

Explanation of algorithm#

In insertion sort, we view the input array as divided into two parts: a sorted part and an unsorted part. Initially, the sorted part contains only a single element, and the unsorted part holds the rest. In each step, we remove an element from the unsorted part and insert it at its correct location in the already sorted part. This process continues until the unsorted part becomes empty. The pseudocode of insertion sort is given below, and a short worked trace follows it:

// input is an array of integers
Insertion_sort(A[a_1, a_2, ..., a_n])
    // A[1 .. i-1] is the sorted part.
    // A[i .. n] is the unsorted part.
    // Initially i=2, so the sorted part has a single element in it.
    for i = 2 to n
        // toInsert is the element we want to place at its correct location in the sorted part
        toInsert = A[i]
        // shift larger elements of the sorted part one position to the right
        for j = i-1; j > 0 and A[j] > toInsert; j = j-1
            A[j+1] = A[j]
        // now insert toInsert at its correct location in the sorted part
        A[j+1] = toInsert
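For instance (our own illustration), consider sorting A = [5, 2, 4, 1]. After the outer iteration with i = 2, the array is [2, 5, 4, 1]; after i = 3, it is [2, 4, 5, 1]; and after i = 4, it is [1, 2, 4, 5], with the sorted part now covering the whole array.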

Insertion sort running time#

Code | Cost of instruction | Number of times executed
Insertion_sort(A[a_1, a_2, ..., a_n]) | $c_1$ | $1$
for i=2 to n | $c_2$ | $n$
toInsert = A[i] | $c_3$ | $n-1$
for j=i-1; j > 0 and A[j] > toInsert; j=j-1 | $c_4$ | $h(j)$
A[j+1] = A[j] | $c_5$ | $h(j)-1$
A[j+1] = toInsert | $c_6$ | $n-1$

According to the calculations presented in the table above, the running time of Insertion sort for an input of size nn is:

\begin{align*}
T(n) &= c_1 + c_2 n + c_3(n-1) + c_4\,h(j) + c_5(h(j)-1) + c_6(n-1) \\
&= (c_2 + c_3 + c_6)\,n + (c_4 + c_5)\,h(j) + (c_1 - c_3 - c_5 - c_6) \\
&= an + b\,h(j) + c
\end{align*}

where the constants are $a = c_2 + c_3 + c_6$, $b = c_4 + c_5$, and $c = c_1 - c_3 - c_5 - c_6$.

Best-case analysis#

In the best case, the condition $A[j] > \text{toInsert}$ is never true; that is, the input array is already sorted. The inner loop’s test is then evaluated only once per outer iteration, so $h(j) = n-1$, and the running time becomes:

$T(n) = an + b(n-1) + c = a'n + d$, where $a' = a + b$ and $d = c - b$.

We claim that $T(n)=\Theta(n)$. To prove this claim, we have to show that $T(n)=O(n)$ and $T(n)=\Omega(n)$. Let’s verify it quickly.

To show that $T(n)=O(n)$, we need a positive constant $c_1$ such that $a'n + d \le c_1 n$. One such choice is $c_1 = |a'| + |d|$: the inequality then holds for all $n \ge 1$, proving that $T(n)=O(n)$.

To show that $T(n)=\Omega(n)$, we need a positive constant $c_2$ such that $a'n + d \ge c_2 n$. One choice is $c_2 = a'$ (a sum of positive constants, hence positive): the inequality holds for all $n \ge 1$ whenever $d \ge 0$; if $d$ happens to be negative, any positive $c_2 < a'$ together with a suitably large $n_0$ works. This proves that $T(n)=\Omega(n)$.

Together, these bounds show that the best-case time complexity of insertion sort is $\Theta(n)$.

Worst-case analysis#

In the worst case, the condition $A[j] > \text{toInsert}$ is always true; this happens when the input array is sorted in reverse (descending) order. The inner for loop then runs all the way from $j = i-1$ down to $0$ in every outer iteration, so $h(j) = 1+2+3+\cdots+n$. Using the formula for the sum of an arithmetic series, $h(j) = \frac{n}{2}(1+n) = \frac{n}{2}+\frac{n^2}{2}$. Thus,

$T(n) = an + b\left(\frac{n}{2}+\frac{n^2}{2}\right) + c = \frac{b}{2}\,n^2 + \left(a+\frac{b}{2}\right)n + c$

We claim that, according to the above equation, the worst-case running time of insertion sort is $T(n)=\Theta(n^2)$. This can be proven using arguments similar to those in the best-case proof; we encourage you to try it yourself, and a brief sketch follows.
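For completeness, here is one way the argument can go (our own sketch, treating $a$, $b$, and $c$ as fixed constants with $b > 0$). For the upper bound, $\frac{b}{2}n^2 + \big(a+\frac{b}{2}\big)n + c \le \big(\frac{b}{2} + \big|a+\frac{b}{2}\big| + |c|\big)n^2$ for all $n \ge 1$, so $T(n) = O(n^2)$. For the lower bound, $T(n) - \frac{b}{2}n^2 = \big(a+\frac{b}{2}\big)n + c \ge 0$ for all sufficiently large $n$, so $T(n) \ge \frac{b}{2}n^2$ and $T(n) = \Omega(n^2)$. Together, these give $T(n) = \Theta(n^2)$.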

#include <stdio.h>

// input is an array of integers and its length
void Insertion_sort(int A[], int n) {
    // A[0 .. i-1] is the sorted part.
    // A[i .. n-1] is the unsorted part.
    // Initially i=1, so the sorted part has a single element in it.
    for (int i = 1; i < n; i++) {
        int toInsert = A[i]; // the element we want to insert at its correct location in the sorted part
        int j = i - 1;
        // shift larger elements of the sorted part one position to the right
        for (j = i - 1; j >= 0 && A[j] > toInsert; j--) {
            A[j+1] = A[j];
        }
        // now insert toInsert at its correct location in the sorted part
        A[j+1] = toInsert;
    }
}

int main() {
    int A[] = {19, 9, 3, 1, 2, 19, 23, 0};
    Insertion_sort(A, 8);
    printf("\nSorted array is = {");
    for (int index = 0; index < 8; index++) {
        printf("%d", A[index]);
        if (index + 1 < 8) printf(", ");
    }
    printf("}\n");
    return 0;
}

Time complexities of sorting algorithms#

Let’s conclude this blog by presenting the best-case, worst-case, and average-case complexities of some famous sorting algorithms, given that they take an array of size $n$ as input.

Algorithm | Best case | Worst case | Average case
Insertion sort | $\Theta(n)$ | $\Theta(n^2)$ | $\Theta(n^2)$
Quicksort | $\Theta(n\log n)$ | $\Theta(n^2)$ | $\Theta(n\log n)$
Merge sort | $\Theta(n\log n)$ | $\Theta(n\log n)$ | $\Theta(n\log n)$
Bubble sort | $\Theta(n)$ | $\Theta(n^2)$ | $\Theta(n^2)$
Selection sort | $\Theta(n^2)$ | $\Theta(n^2)$ | $\Theta(n^2)$
Heap sort | $\Theta(n\log n)$ | $\Theta(n\log n)$ | $\Theta(n\log n)$

Quiz: Test your understanding#

Test your understanding and learn from mistakes.

Question 1: Given that an algorithm’s worst-case time complexity is $\Theta(W(n))$ and its best-case time complexity is $\Theta(B(n))$, its average-case complexity $A(n)$ must satisfy:

A) $A(n)=O(B(n))$ and $A(n)=\Omega(W(n))$

B) $A(n)=\Omega(B(n))$ and $A(n)=O(W(n))$

C) $A(n)=\Omega(B(n))$ and $A(n)=\Omega(W(n))$

D) $A(n)=O(B(n))$ and $A(n)=O(W(n))$

Increase your algorithm proficiency today#

At this point, you should have a pretty good understanding of the importance of calculating an algorithm’s efficiency, how to do so, the reasons for using asymptotic time complexity, and the differences between its key notations.

If you want to learn about algorithms and their time complexities in more detail, you’ll find a plethora of interactive and exciting courses available at Educative. If you’re relatively new to this subject, consider the course Data Structures and Algorithms in Python, which covers these fundamental computer science concepts using Python, an essential language for developers and data scientists alike. If you know a little Java and are looking to take the next step in your career, we’d suggest checking out the course Algorithms for Coding Interviews in Java.

Happy learning!

