Introducing Dynamic Programming With Fibonacci Numbers
In this lesson, we'll use a dynamic programming technique called memoization to reduce the time complexity of the Fibonacci function.
In the last lesson, we saw a recursive Java implementation of a function that calculates the nth Fibonacci number. We also calculated its time complexity by solving a recurrence relation, which came out to be O(2^n), i.e., exponential. We are now going to use a dynamic programming technique to reduce the time complexity to linear.
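For reference, here is a minimal sketch of what such a plain recursive implementation typically looks like (the class name and the fib(10) test value are illustrative, not taken from the earlier lesson):

```java
public class NaiveFib {
    static int fib(int n) {
        if (n <= 1) {
            return n;                        // base cases: fib(0) = 0, fib(1) = 1
        }
        return fib(n - 1) + fib(n - 2);      // two recursive calls per invocation
    }

    public static void main(String[] args) {
        System.out.println(fib(10));         // prints 55
    }
}
```

Every call that is not a base case spawns two more calls, which is what produces the exponential blow-up.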
What is dynamic programming?
As discussed in an earlier lesson, dynamic programming involves algorithms that break problems into subproblems such that at any given stage the optimal solutions to the subproblems are known. The solutions to these subproblems are used to find the solution to the larger problem.
Characteristics
Most problems that can be solved with dynamic programming can also be solved with a divide-and-conquer approach. The difference between the two is that the dynamic programming approach is applicable when a problem has overlapping subproblems, i.e., the subproblems of a given problem are not independent, such as when two subproblems share a subproblem. An example of this is the Fibonacci code we saw in the last lesson, where fib(3) had to be calculated in order to calculate both fib(4) and fib(5).
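To make the overlap concrete, the following illustrative sketch (the class name and counter map are hypothetical additions, not part of the lesson's code) counts how often each subproblem is evaluated by the naive recursion:

```java
import java.util.HashMap;
import java.util.Map;

public class CallCounter {
    // Records how many times fib(k) is evaluated for each k.
    static Map<Integer, Integer> calls = new HashMap<>();

    static int fib(int n) {
        calls.merge(n, 1, Integer::sum);     // record this call
        if (n <= 1) return n;
        return fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args) {
        fib(5);
        System.out.println(calls);  // {0=3, 1=5, 2=3, 3=2, 4=1, 5=1}: fib(3) is computed twice
    }
}
```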
Furthermore, dynamic programming problems have the optimal substructure property, which means that the overall optimal solution of the problem can be constructed from the optimal solutions of its subproblems.
For Fibonacci numbers, we know that:
fib(n) = fib(n-1) + fib(n-2)
Hence, a problem of size n can be reduced to subproblems of size n-1 and n-2. Therefore, Fibonacci numbers have the optimal substructure property.
Dynamic programming algorithms solve every subproblem just once and save the answer in a lookup table, thereby avoiding recalculating the answer every time the subproblem is encountered.
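A minimal memoization sketch along these lines, assuming a plain array as the lookup table and -1 as the "not computed yet" marker (class and variable names are illustrative):

```java
import java.util.Arrays;

public class MemoFib {
    // Lookup table: memo[i] holds fib(i) once computed, -1 means "not computed yet".
    static long[] memo;

    static long fib(int n) {
        if (n <= 1) return n;
        if (memo[n] != -1) return memo[n];   // answer already in the lookup table
        memo[n] = fib(n - 1) + fib(n - 2);   // compute once, then cache
        return memo[n];
    }

    public static void main(String[] args) {
        int n = 50;
        memo = new long[n + 1];
        Arrays.fill(memo, -1);
        System.out.println(fib(n));          // 12586269025, each subproblem solved only once
    }
}
```

Because each fib(i) is computed at most once, the number of non-trivial calls grows linearly with n.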
There are two dynamic programming patterns: memoization and tabulation. Refer to the following lesson to read up on the basic definitions of both these approaches.
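For contrast with the memoized (top-down) sketch above, a bottom-up tabulation sketch fills the table iteratively, starting from the smallest subproblems (again, the names are illustrative):

```java
public class TabFib {
    static long fib(int n) {
        if (n <= 1) return n;
        long[] table = new long[n + 1];              // table[i] will hold fib(i)
        table[0] = 0;
        table[1] = 1;
        for (int i = 2; i <= n; i++) {
            table[i] = table[i - 1] + table[i - 2];  // each entry computed exactly once
        }
        return table[n];
    }

    public static void main(String[] args) {
        System.out.println(fib(50));                 // 12586269025
    }
}
```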
Lookup table
As we discussed, both memoization and tabulation make use of a lookup table to store and look up values. With memoization, for example, when the function is called it first checks the lookup table for the value before making a recursive call. This reduces the number of redundant recursive calls.
What is a lookup table?
As discussed, a lookup table is used to store and search values. It is usually an array that stores the values returned from a computation; this array can, of course, be multi-dimensional or associative.
A lookup table can also store values in the form of key-value pairs, where the keys are the data items being looked up, and the values are either the actual data or pointers to where the data is located. So, in this case, a lookup table can be a hash table.
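As a sketch of that key-value variant, the same memoization idea can use Java's HashMap as the lookup table, with n as the key and fib(n) as the value (the class name is illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class HashMapFib {
    // Lookup table as key-value pairs: key = n, value = fib(n).
    static Map<Integer, Long> memo = new HashMap<>();

    static long fib(int n) {
        if (n <= 1) return n;
        if (memo.containsKey(n)) return memo.get(n);  // found in the hash table
        long result = fib(n - 1) + fib(n - 2);
        memo.put(n, result);
        return result;
    }

    public static void main(String[] args) {
        System.out.println(fib(60));                  // 1548008755920
    }
}
```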
Some other data structures that can be used for this purpose include lists, vectors, and hash maps.