Learn dynamic programming in 10 minutes!

Those who cannot remember the past are condemned to repeat it.

Dynamic programming is all about remembering answers to the sub-problems you’ve already solved, and not solving them again.

Where do we need dynamic programming?

Suppose you are given a problem that can be broken down into smaller sub-problems, and those sub-problems can be broken down further still. If you find that some of these sub-problems overlap, you’ve encountered a DP problem.

The core idea of dynamic programming is to avoid repeated work by remembering partial results.

Dynamic programming and recursion

  • Dynamic programming is basically recursion plus memoization.
  • Recursion allows you to express the value of a function in terms of other values of that function.
  • If you implement your function so that the result of each recursive call is stored the first time it is computed, later calls can look the result up instead of recomputing it, which makes your program faster.
  • This is what we call memoization – it is memorizing the results of some specific states, which can then be later accessed to solve other sub-problems.

The intuition behind dynamic programming is that we trade space for time: instead of recalculating the same states over and over, which costs a lot of time but no extra space, we use extra space to store the results of the sub-problems and save time later.

Let’s try to understand this by taking an example of Fibonacci numbers.

Fibonacci(n) = 1, if n = 0

Fibonacci(n) = 1, if n = 1

Fibonacci(n) = Fibonacci(n-1) + Fibonacci(n-2), if n > 1

So, the first few numbers in this series will be:

1, 1, 2, 3, 5, 8, 13, 21, 34...

A code for it using pure recursion:

int fib (int n) {
    if (n < 2)
        return 1;
    return fib(n - 1) + fib(n - 2);
}

Explanation:

  • The above code uses the recurrence equation for the Fibonacci series.
  • The time complexity of the recursion-based approach is a bit tricky to calculate, but it comes out to roughly O(2^N), since each call branches into two further calls.
  • The space complexity of this approach is O(N), as the recursion tree can reach depth N in the worst case.
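Before moving to the iterative version, the memoization idea described earlier can be applied directly to this recursion. Below is a minimal sketch; the `memo` array, the `MAX_N` bound, and the use of 0 as a "not yet computed" sentinel (valid here because every Fibonacci value in this convention is at least 1) are choices made for illustration, not part of the original code.

```c
#define MAX_N 64

/* memo[i] == 0 means fib(i) has not been computed yet. */
static long long memo[MAX_N];

long long fib_memo(int n) {
    if (n < 2)
        return 1;              /* base cases: fib(0) = fib(1) = 1 */
    if (memo[n] != 0)
        return memo[n];        /* sub-problem already solved: reuse it */
    memo[n] = fib_memo(n - 1) + fib_memo(n - 2);
    return memo[n];
}
```

With the memo table in place, each value of n is computed only once, so the exponential blow-up of the pure recursion disappears and the running time drops to O(n).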

Using the dynamic programming approach with a table built bottom-up:

void fib (int n, int fib[]) {
    fib[0] = 1;
    fib[1] = 1;
    for (int i = 2; i <= n; i++)
        fib[i] = fib[i - 1] + fib[i - 2];
}

Explanation:

  • It does not use recursion; instead, it fills a table from the bottom up, remembering every previous result. Each new entry needs only the two entries before it, both of which are already stored in the table.

  • The time complexity of the above code is linear, as the loop runs from 2 to n, i.e., it runs in O(n).

  • The space complexity of the above approach is O(n), since the table stores one entry for every value up to n. However, because each step needs only the previous two values, the table can be dropped entirely, bringing the space down to O(1) or constant.
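The constant-space idea can be sketched as follows; the function name `fib_const_space` is chosen here for illustration. Only the previous two values are kept, so memory use is the same for fib(6) and fib(100).

```c
/* Space-optimized Fibonacci: keep only the previous two values
   instead of the whole table, giving O(n) time and O(1) space. */
long long fib_const_space(int n) {
    long long prev = 1, curr = 1;     /* fib(0) and fib(1) */
    for (int i = 2; i <= n; i++) {
        long long next = prev + curr; /* recurrence: fib(i) = fib(i-1) + fib(i-2) */
        prev = curr;
        curr = next;
    }
    return curr;
}
```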

How to identify a DP problem

  1. Dynamic Programming is typically applied to optimization problems. In such problems, there can be many possible solutions. Each solution has a value, but we wish to find a solution with the optimal (minimum or maximum) value.
  2. All dynamic programming problems satisfy the overlapping sub-problems property, i.e., you would need the solutions to the sub-problems again and again to reach the final solution.