Big-O is a standard mathematical notation that describes how efficiently an algorithm performs in the worst case relative to its input size. To measure the efficiency of an algorithm, we need to consider two things:

  * Time complexity: how the running time grows as the input size grows.
  * Space complexity: how the memory usage grows as the input size grows.
Big-O notation captures the upper bound to show how much time or space an algorithm would require in the worst-case scenario as the input size grows. It is usually written as O(f(n)), where f(n) describes the growth rate in terms of the input size n.
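More formally, saying that an algorithm’s running time T(n) is O(f(n)) means there exist positive constants c and n₀ such that T(n) ≤ c · f(n) for every n ≥ n₀. In other words, f(n) bounds the growth of T(n) from above once the input is large enough.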
Time complexity is determined by taking two factors into account: the input size and the number of operations the algorithm performs. Here’s a generic way to calculate the complexity: list every statement in the code, count how many times each one executes for an input of size n, and add up those counts.
Let’s look at the following code and see how we can calculate its complexity if the input size is equal to n:
```cpp
#include <iostream>
using namespace std;

int main() {
  int sum = 0;
  for (int i = 0; i < 5; i++) {
    sum = sum + i;
  }
  cout << "Sum = " << sum;
  return 0;
}
```
Let’s list all the statements along with their execution counts:
| Operation | Number of executions |
|---|---|
| `int sum = 0` | 1 |
| `for (int i=0;i<5;i++)` | 6 |
| `sum = sum+i` | 5 |
| `cout << "Sum = " << sum` | 1 |
| `return 0` | 1 |

The `for` statement counts as 6 executions because its condition is checked once for each of the 5 iterations, plus one final check that terminates the loop.
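To see how these counts generalize, here is a sketch of the same program with the fixed bound 5 replaced by an input size n (reading n from standard input is an assumption added for illustration):

```cpp
#include <iostream>
using namespace std;

int main() {
  int n;
  cin >> n;                      // input size (reading it here is an illustrative assumption)

  int sum = 0;                   // executes 1 time
  for (int i = 0; i < n; i++) {  // condition checked n + 1 times
    sum = sum + i;               // executes n times
  }
  cout << "Sum = " << sum;       // executes 1 time
  return 0;                      // executes 1 time
}
```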
Generalizing these counts in terms of the input size (n) would form this expression:

T(n) = 1 + (n + 1) + n + 1 + 1
After simplifying the above expression, the final time complexity would be:

T(n) = 2n + 4
To find the Big-O notation, follow two steps:

  1. Keep only the highest-order (fastest-growing) term.
  2. Drop any constant terms and constant coefficients.
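For instance, applying these two steps to a hypothetical expression such as 3n^2 + 5n + 8 keeps only the dominant term 3n^2 and then drops the coefficient 3, leaving O(n^2).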
After performing the above two steps on the time complexity that we just calculated (2n + 4), we can estimate the Big-O notation as:

O(n)
Here’s a list of Big-O complexities sorted in ascending order:
| Function | Name | Function | Name |
|---|---|---|---|
| 1. O(1) | Constant | 7. O(n^2) | Quadratic |
| 2. O(log n) | Logarithmic | 8. O(n^3) | Cubic |
| 3. O(log^2 n) | Log-square | 9. O(n^4) | Quartic |
| 4. O(√n) | Root-n | 10. O(2^n) | Exponential |
| 5. O(n) | Linear | 11. O(e^n) | Exponential |
| 6. O(n log n) | Linearithmic | 12. O(n!) | n-Factorial |
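To make a few of these growth rates concrete, here is a minimal, illustrative sketch (not taken from the code above) of loops whose step counts grow at constant, logarithmic, linear, and quadratic rates:

```cpp
#include <iostream>
using namespace std;

int main() {
  int n = 16;     // example input size (arbitrary choice)
  long steps = 0; // counts how many times each loop body runs

  // O(1): a fixed amount of work, independent of n
  steps += 1;

  // O(log n): i doubles on every iteration
  for (int i = 1; i < n; i *= 2) steps++;

  // O(n): one pass over the input
  for (int i = 0; i < n; i++) steps++;

  // O(n^2): a nested pass for every element
  for (int i = 0; i < n; i++)
    for (int j = 0; j < n; j++) steps++;

  cout << "Total steps: " << steps << endl;
  return 0;
}
```

Running this with different values of n shows how quickly the quadratic loop comes to dominate the total step count.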
Here is a rough plot of all the functions above to help you visualize their growth and see where your algorithm lies in terms of efficiency.