Those who cannot remember the past are condemned to repeat it.
Dynamic programming is all about remembering answers to the sub-problems you’ve already solved, and not solving them again.
Suppose you are given a problem that can be broken down into smaller sub-problems, and these sub-problems can be broken down into smaller ones still. If you notice that some of these sub-problems overlap, then you've encountered a DP problem.
The core idea of dynamic programming is to avoid repeated work by remembering partial results.
The intuition behind dynamic programming is that we trade space for time: instead of recomputing the same states over and over, which costs a lot of time but no extra memory, we use some space to store the results of the sub-problems and save time later.
Let’s try to understand this by taking an example of Fibonacci numbers.
Fibonacci(n) = 1                                 if n = 0
Fibonacci(n) = 1                                 if n = 1
Fibonacci(n) = Fibonacci(n-1) + Fibonacci(n-2)   otherwise
So, the first few numbers in this series will be: 1, 1, 2, 3, 5, 8, 13, 21, 34, ...
Here is code for it using pure recursion:
int fib(int n) {
    if (n < 2)
        return 1;
    return fib(n - 1) + fib(n - 2);
}
Explanation:
The function calls itself twice for every n greater than 1, and those branches keep redoing the same work: fib(n-2) is computed both by fib(n-1) and by the direct call, fib(n-3) even more often, and so on. The number of calls roughly doubles every time n grows by one, so the running time is exponential, about O(2^n).
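To see the repeated work concretely, here is the same function with a call counter added (the calls variable is purely illustrative):

#include <stdio.h>

long calls = 0;                        /* invocation counter, for illustration only */

int fib(int n) {
    calls++;
    if (n < 2)
        return 1;
    return fib(n - 1) + fib(n - 2);
}

int main(void) {
    printf("fib(30) = %d\n", fib(30)); /* prints 1346269 */
    printf("calls = %ld\n", calls);    /* prints 2692537 -- about 2 * fib(30) calls */
    return 0;
}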
Using the dynamic programming approach, with a table that stores the results of the sub-problems:
void fib(int n, int fib[]) {
    fib[0] = 1;
    fib[1] = 1;
    for (int i = 2; i <= n; i++)
        fib[i] = fib[i - 1] + fib[i - 2];  /* each value is computed once from stored results */
}
Explanation:
It does not use recursion; instead, it fills the table from the bottom up, computing fib[0] through fib[n] in order, so each sub-problem is solved exactly once and later steps simply look up the stored results.
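The code above is the bottom-up ("tabulation") form of DP. The term memoization usually refers to the top-down form, which keeps the recursion but caches results on the way down. A minimal sketch, assuming a fixed-size memo array with -1 as a "not computed yet" sentinel:

#include <string.h>

#define MAXN 1000

int memo[MAXN];                        /* memo[i] == -1 means fib(i) is unknown */

int fib(int n) {
    if (memo[n] != -1)
        return memo[n];                /* answer already cached: reuse it */
    if (n < 2)
        return memo[n] = 1;
    return memo[n] = fib(n - 1) + fib(n - 2);  /* compute once, then store */
}

/* Usage: memset(memo, -1, sizeof memo); then call fib(n) for any n < MAXN. */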
Either way, the time complexity is linear: the values from 2 to n are each computed once, i.e., the code runs in O(n).
The space complexity of the table-based code, however, is not the same for fib(6) and fib(100): the fib[] array stores every result up to n, so it uses O(n) memory. Notice, though, that each step only reads the previous two results. If we keep just those two values instead of the whole table, the memory used remains the same as n changes, and the space complexity drops to O(1), or constant, as the sketch below shows.
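A minimal sketch of that constant-space variant:

int fib(int n) {
    int prev = 1, cur = 1;             /* fib(0) and fib(1) */
    for (int i = 2; i <= n; i++) {
        int next = prev + cur;         /* fib(i) = fib(i-1) + fib(i-2) */
        prev = cur;
        cur = next;
    }
    return cur;                        /* memory used does not depend on n */
}

Whether we ask for fib(6) or fib(100), only two values are carried between iterations (though note that a 32-bit int overflows well before n = 100, so a wider integer type would be needed for large n in practice).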