In programming, there are often many ways to solve the same problem, but we want to pick the one that is most efficient in terms of cost. Time complexity and space complexity act as measuring scales for the cost of an algorithm.
In this answer, we'll discuss time complexity and its most common classes.
Let's start with the formal definition.
Time complexity measures the time taken to execute all statements of an algorithm. It should be as small as possible, because the more time an algorithm takes, the more it costs to run. For instance, if a statement is executed multiple times, say n times, the time it consumes grows with n.
Note: The time complexity is the total amount of time taken by an algorithm to execute, as a function of the length of input data.
If an algorithm contains statements that are executed only a single time, they require a constant amount of time. If a statement runs multiple times, as in a loop, the time it requires grows with the number of iterations. This raises the question of how to express the relationship between the input size and the time required. For this, there are standardized notations.
As we have observed, the time complexity is given as a function of the input data's length. There are different notations to represent it, such as theta (Θ) notation and omega (Ω) notation, but our focus here is Big-O notation.
Big-O notation describes the upper bound of an algorithm's running time: it tells us how the runtime grows, at most, as the input size grows.
There are some important time complexities expressed in Big-O notation:
An algorithm has constant time complexity, O(1), when its running time is independent of the input size.
#include <iostream>
using namespace std;

int main() {
    // A single statement executed once: constant time, O(1)
    int n = 0;
    cout << "Integer = " << n << endl;
    return 0;
}
Algorithms have linear time complexity, O(n), when their runtime grows in direct proportion to the size of the input. Linear search is a classic example.
In linear search, the algorithm compares each element of the array with the element being searched for. The search continues until the element is found or the last element of the array is reached.
Let's look at the figure below. We have an array of ten elements, and the element we search for, 7, is at index 6:
#include <iostream>
using namespace std;

int main() {
    // Let's assume we have an array of integers
    int arr[10] = {1,2,3,4,5,6,7,8,9,10};
    // And the element to search for is 7
    int search = 7;
    int iter = 0;
    for (iter = 0; iter < 10; iter++) {
        if (arr[iter] == search) {
            cout << "Element found" << endl;
            break;
        }
    }
    if (iter == 10) {
        cout << "Element not found" << endl;
    }
    return 0;
}
So, the linear search takes seven iterations to find 7. In the worst case, when the element is at the last index or not present at all, linear search takes n iterations for an array of n elements. In general, it takes O(n) time.
An algorithm is said to have logarithmic time complexity, O(log n), if its processing time is proportional to the logarithm of the input size n. Binary search is a classic example.
The binary search is also known as the half-interval search because of its mechanism. The binary search algorithm compares the targeted value with the middle element. If the middle element is not equal to the target, the algorithm eliminates the half in which the target cannot lie, and the search continues in the remaining half, following the same mechanism until the target is found. If the remaining half becomes empty, the targeted element is not in the array.
Note: Binary search only works on sorted arrays.
#include <iostream>
using namespace std;

int binarySearch(int arr[], int left, int right, int x) {
    while (left <= right) {
        int mid = left + (right - left) / 2;
        if (arr[mid] == x) {
            return mid;
        } else if (arr[mid] < x) {
            left = mid + 1;
        } else {
            right = mid - 1;
        }
    }
    return -1;
}

int main() {
    // Let's assume we have a sorted array of integers
    int arr[10] = {1,2,3,4,5,6,7,8,9,10};
    // And the element to search for is 7
    int search = 7;
    int output = binarySearch(arr, 0, 9, search);
    if (output != -1) {
        cout << "Element found" << endl;
    } else {
        cout << "Element not found" << endl;
    }
    return 0;
}
Now, we can see the difference in the number of operations. The binary search takes at most four comparisons here, which matches log2(10) ≈ 3.32 rounded up. So, the relationship between the input size and the time required is logarithmic.
If the running time of an algorithm grows in proportion to the square of the input size n, it has quadratic time complexity, O(n^2).
The following code chunk runs in quadratic time because the outer loop runs n times and, for each of its iterations, the inner loop also runs n times, giving n × n operations in total:
#include <iostream>
using namespace std;

int main() {
    int n = 5;
    for (int i = 0; i <= n; i++)       // Outer loop: O(n)
    {
        for (int j = 0; j <= n; j++)   // Inner loop makes the total O(n^2)
        {
            cout << "Demonstration of quadratic time complexity" << endl;
        }
    }
    return 0;
}
If the processing time of an algorithm is proportional to the cube of the input size n, it has cubic time complexity, O(n^3).
Let's see the following code, whose running time is cubic because of three nested loops:
#include <iostream>
using namespace std;

int main() {
    int n = 5;
    for (int i = 0; i <= n; i++)           // Outer loop: O(n)
    {
        for (int j = 0; j <= n; j++)       // Second loop: O(n^2)
        {
            for (int k = 0; k <= n; k++)   // Third loop makes the total O(n^3)
            {
                cout << "Demonstration of cubic time complexity" << endl;
            }
        }
    }
    return 0;
}
There are many more time complexities, but these are the ones encountered most often.