What is Dynamic Programming?

Dynamic Programming is a technique for solving complex problems by breaking them down into simpler subproblems. At its core, the concept is about remembering and making use of answers to smaller “subproblems” that have already been solved.

Dynamic Programming (DP) is a powerful technique that makes it possible to solve certain problems considerably faster than a plain recursive solution. The method essentially trades space for time: instead of recomputing the same states over and over (which costs a lot of time but no extra space), we use space to store the solutions of the subproblems and save time later. Storing and reusing the results of already-solved subproblems is called memoization.
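As a minimal sketch of this idea (the function name fibMemo and the use of a vector as the cache are illustrative choices, not part of the solutions shown later), a top-down memoized Fibonacci in C++ could look like this:

#include <iostream>
#include <vector>
using namespace std;

// Top-down memoization: each fib(i) is computed once, then served from the cache.
long long fibMemo(int n, vector<long long>& memo)
{
    if (n < 2)
        return 1;                          // base cases: fib(0) = fib(1) = 1
    if (memo[n] != -1)
        return memo[n];                    // reuse a previously computed subproblem
    memo[n] = fibMemo(n - 1, memo) + fibMemo(n - 2, memo);
    return memo[n];
}

int main()
{
    int n = 5;
    vector<long long> memo(n + 1, -1);     // -1 marks "not computed yet"
    cout << "Fibonacci of " << n << " = " << fibMemo(n, memo) << "\n";
}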

Properties of Dynamic Programming

There are two main properties that a problem should have for it to be solved with the Dynamic Programming approach. DP is not fruitful if these properties do not exist.

  1. Overlapping subproblems: The problem can be broken down into subproblems that are needed again and again, so the same computations would otherwise be repeated. For example, the Fibonacci series has overlapping subproblems because computing fib(n) and fib(n-1) both require fib(n-2).
  2. Optimal substructure: The solution to the overall problem can be built from the solutions of its subproblems. For example, each number in the Fibonacci series is obtained directly from the solutions of the two previous steps.

Let’s understand this using the classic example of the Fibonacci series:

The first few numbers start as 1, 1, 2, 3, 5, 8, 13, ..., and so on.

Figure showing the recursive function calls for fib(6)

As we can see in the illustration, in order to calculate fib(6), fib(4) is computed 2 times, fib(3) is computed 3 times, and fib(2) is computed 5 times. This causes extra function calls for values that have already been calculated and adds extra time overhead. The problem can be solved simply by storing the results of the subproblems in an array and looking them up in place of the repeated function calls. The space complexity of this solution is O(n).
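A minimal sketch of that array-based (tabulated) approach might look as follows; the names fibTab and fibTable are illustrative assumptions, not part of the solutions shown later:

#include <iostream>
#include <vector>
using namespace std;

// Bottom-up tabulation: fibTable[i] stores fib(i), so each value is computed once.
long long fibTab(int n)
{
    vector<long long> fibTable(n + 2, 1);                  // fib(0) = fib(1) = 1
    for (int i = 2; i <= n; i++)
        fibTable[i] = fibTable[i - 1] + fibTable[i - 2];   // reuse stored results
    return fibTable[n];
}

int main()
{
    cout << "Fibonacci of 6 = " << fibTab(6) << "\n";      // prints 13
}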

Conveniently, the dynamic programming approach can store the prior results in just 2 variables, because each fib(n) only requires the results of the previous two elements. The space complexity of this solution is O(1).

Fibonacci series using recursion

Below is a simple solution using recursion:

#include <iostream>
using namespace std;

// Plain recursion: recomputes the same subproblems many times.
int fib(int n)
{
    if (n < 2)
        return 1;                   // base cases: fib(0) = fib(1) = 1
    return fib(n - 1) + fib(n - 2);
}

int main()
{
    int n = 5;
    cout << "Fibonacci of " << n << " = " << fib(n) << "\n";
}

The time complexity of the recursive solution is exponential.

T(n) = T(n-1) + T(n-2)
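To see why this recurrence is exponential, note that T(n-1) >= T(n-2), so

T(n) >= 2 T(n-2) >= 4 T(n-4) >= ... >= 2^(n/2)

meaning the number of calls at least doubles every two steps; the exact growth rate is proportional to φ^n, where φ ≈ 1.618 is the golden ratio.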

The space complexity of the recursive solution is O(n), because the recursion call stack can be up to n levels deep.

Fibonacci series using Dynamic Programming

Below is an optimized solution using dynamic programming:

#include <iostream>
using namespace std;

// Bottom-up DP with O(1) space: only the last two values are kept.
int fib(int n)
{
    int a = 1, b = 1, c;
    if (n == 0)
        return a;
    for (int i = 2; i <= n; i++) {
        c = a + b;      // fib(i) = fib(i - 1) + fib(i - 2)
        a = b;          // slide the two-element window forward
        b = c;
    }
    return b;
}

int main()
{
    int n = 5;
    cout << "Fibonacci of " << n << " = " << fib(n) << "\n";
}

The time complexity of the dynamic programming solution is O(n).

The space complexity of this solution is O(1).

