
Greedy algorithms vs. dynamic programming: How to choose

Bismillah Jan
Jun 28, 2024
12 min read

In the domain of solving optimization problems (problems that involve finding the best solution among the available options, maximizing or minimizing an objective subject to constraints), several techniques are used, including divide and conquer, greedy algorithms, dynamic programming, linear programming, integer programming, and so on. Among them, two techniques stand out as pillars of efficiency: greedy algorithms and dynamic programming. Each of these techniques has its advantages and limitations. It is like a fork in the road, where each path leads to the same destination but offers different landscapes and challenges. Deciding between greedy algorithms and dynamic programming is similar: we can either opt for the quick, intuitive decisions of greedy algorithms or invest time and resources in the careful planning of dynamic programming.

Let’s unfold the complexities behind these two problem-solving techniques and discuss the scenarios where these algorithms can be applied.

Greedy algorithms#

Greedy algorithms are typically used for solving optimization problems. These algorithms make locally optimal choices at each step with the hope of finding a globally optimal solution. In other words, at each step, they choose the best available option without considering the consequences of that choice for future steps.

We can devise a greedy strategy for an optimization problem if it possesses the following properties:

  • Greedy choice property: A problem possesses this property if we can construct a globally optimal solution by making locally optimal (greedy) choices. It means that, at each decision point, the greedy algorithm opts for the choice that appears best for the current subproblem, irrespective of the rest of the problem.

  • Optimal substructure: A problem demonstrates the optimal substructure property when its optimal solution contains within it the optimal solutions to its subproblems. This property also applies to dynamic programming (discussed below).

Greedy algorithms make decisions based only on the information available in the current step (subproblem); therefore, they are often efficient and easy to implement. However, they don’t guarantee an optimal solution for every problem; for some inputs, a greedy algorithm may produce a suboptimal solution. For instance, the coin change problem can be solved using a greedy algorithm, but the greedy approach doesn’t produce the optimal solution for every input.

To understand the greedy algorithm in depth, let’s take an example problem and solve it using the greedy algorithm in the following section:

An example of the greedy algorithm#

We will consider the coin change problem to understand the greedy algorithm.

Question statement#

Given a set of coin denominations and a target amount, find the minimum number of coins required to make up that amount.

Solution#

The greedy approach involves repeatedly choosing the coin with the largest denomination that is less than or equal to the remaining amount until the amount becomes zero.

Let’s consider an example of denominations (2, 5, 10, 30), and the amount equal to 45. The following illustration shows a step-by-step solution via the greedy algorithm:

Available denominations are (2, 5, 10, 30) and the amount is 45

Let’s code the coin change problem using a greedy approach:

#Greedy algorithm for the coin denomination problem
def coin_change(denominations, amount):
    denominations.sort(reverse=True)
    num_coins = 0
    for denomination in denominations:
        num_coin = amount // denomination
        num_coins += num_coin
        amount -= num_coin * denomination

        if amount == 0:
            break
    return num_coins

denominations = [2, 5, 10, 30]
amount = 45
print("Minimum number of coins needed:", coin_change(denominations, amount))
Greedy algorithm approach for the coin change problem in Python

Code explanation#

This Python code defines a coin_change function that calculates the minimum number of coins needed to make change for a given amount using a set of denominations. Here’s an explanation of the code:

  • Line 2: This line defines a function named coin_change that takes two arguments: denominations, a list of coin denominations, and amount, the target amount for which we need to make change.

  • Lines 3–4: The instruction denominations.sort(reverse=True) sorts the list of coin denominations in descending order. The num_coins = 0 statement initializes a variable num_coins to keep track of the total number of coins used to make change.

  • Lines 5–8: These lines start a loop that iterates over each coin denomination in the sorted list. The instruction num_coin = amount // denomination calculates how many coins of the current denomination can be used toward the remaining amount; the // operator performs integer (floor) division. The num_coins += num_coin statement updates the total number of coins used, and amount -= num_coin * denomination updates the remaining amount by subtracting the value of the coins just used.

  • Lines 10–11: The if statement checks whether the remaining amount has become zero. If it has, the exact change has been made, and the loop breaks.

  • Line 12: This line returns the total number of coins used to make change for the given amount.

  • Lines 14–16: These lines demonstrate an example of the coin change problem.

We should know that the greedy approach may not always give the optimal solution. For example, if the coin denominations are (2, 6, 8) and the target amount is 12, the greedy approach will yield a solution of 3 coins (8, 2, and 2). However, the optimal solution is 2 (6 and 6).
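To make this failure concrete, here is a minimal, self-contained greedy sketch (the function name greedy_coin_change and the returned coin list are choices made for this illustration, not part of the original solution):

```python
# Greedy strategy: always take the largest coin that still fits.
# On denominations (2, 6, 8) with amount 12, this picks 8, then 2, then 2,
# i.e., three coins, even though 6 + 6 would need only two.
def greedy_coin_change(denominations, amount):
    coins = []
    for denomination in sorted(denominations, reverse=True):
        while amount >= denomination:
            coins.append(denomination)
            amount -= denomination
    return coins

print(greedy_coin_change([2, 6, 8], 12))  # [8, 2, 2]
```

For denominations (2, 5, 10, 30) and amount 45, the same function returns [30, 10, 5], which happens to be optimal; the greedy strategy is only unreliable on denomination systems like (2, 6, 8).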

Dynamic programming#

Dynamic programming is another method to solve optimization problems. In dynamic programming, problems are solved by combining solutions to the subproblems. Dynamic programming is useful when a problem can be broken down into overlapping subproblems. Also, finding the optimal solution to the problem depends on the optimal solutions to the smaller subproblems.

Here are the two properties a problem must have to be solved using dynamic programming:

  • Optimal substructure: An optimization problem exhibits the optimal substructure property if its optimal solution can be constructed from the optimal solutions to its subproblems. In simple words, if the subproblems have optimal solutions, the overall problem’s optimal solution can be built from them.

  • Overlapping subproblems: If the subproblems contain repetitive computation toward the solution to the bigger problem, it indicates that the problem has overlapping subproblems. For example, when a recursive algorithm revisits the same computation multiple times, it indicates that the optimization problem contains overlapping subproblems.
    Dynamic programming stores the solutions to subproblems and reuses them when an overlapping subproblem occurs, eliminating the recomputation of intermediate results.

To understand the above two properties, let’s take the example of finding the n-th Fibonacci number.

Fibonacci number#

In this example, we must find the n-th Fibonacci number, where n is a positive integer. The Fibonacci sequence is a sequence of numbers in which each number is the sum of the two preceding ones, usually starting with 0 and 1. The sequence starts as follows: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, ...

Let’s look at the two characteristics: optimal substructure and overlapping subproblems in the question.

Optimal substructure #

Mathematically, the problem can be expressed with the following recurrence for n > 1:

Fib(n) = Fib(n-1) + Fib(n-2)    (1)

with the base cases Fib(0) = 0 and Fib(1) = 1. According to the formula above, the problem can be solved recursively. For example, to find the 5th Fibonacci number, we need to compute the 4th and 3rd Fibonacci numbers, i.e., Fib(5) = Fib(4) + Fib(3). Similarly, to find the 4th Fibonacci number, we need the 3rd and 2nd Fibonacci numbers, i.e., Fib(4) = Fib(3) + Fib(2), and so on. Hence, the problem exhibits the optimal substructure property.

Overlapping subproblems#

Using equation (1) above, calculating Fib(5) requires the computation of Fib(4) and Fib(3). Similarly, finding Fib(4) requires the computation of Fib(3) and Fib(2), and so on. Due to these overlapping subproblems, the computation involves multiple calls to the function with the same input, as shown in the following illustration:

Overlapping subproblems in finding the 5th Fibonacci number
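The recomputation is easy to observe by counting calls in a naive (non-memoized) recursive implementation; the counter below is added purely for illustration:

```python
from collections import Counter

calls = Counter()  # how many times each subproblem is solved

def fib(n):
    calls[n] += 1  # record every call made for this input
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

fib(5)
print(dict(calls))  # Fib(3) is solved 2 times, Fib(2) 3 times, Fib(1) 5 times
```

A single top-level call for n = 5 already repeats several subproblems; the repetition grows exponentially with n, which is exactly what dynamic programming eliminates.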

Dynamic programming avoids recomputation of the overlapping subproblems using memoization or tabulation. These terms refer to storing the results of expensive function calls in a data structure like a dictionary or an array. When the function is called with the same input again, it first checks if the result already exists in the data structure. Otherwise, the function computes and stores the result in the data structure.

Let’s try to code the Fibonacci problem using dynamic programming, as given below:

memory = {}
def Fib(n):
    if n in memory:
        return memory[n]
    if n <= 1:
        return n
    else:
        output = Fib(n-1) + Fib(n-2)
        memory[n] = output
        return output

n = 5
result = Fib(n)
print(f"The Fibonacci number {n} is: {result}")
Finding the n-th Fibonacci number using dynamic programming in Python

Code explanation#

  • Line 1: This line defines an empty dictionary memory to store the intermediate results.

  • Lines 3–4: These lines return the intermediate result if found in the dictionary.

  • Lines 5–6: This is the base case for ending the recursive calls.

  • Lines 8–10: These lines recursively call the Fib(n) function, store the computed result in the memory dictionary, and return the output.

  • Lines 12–14: These lines demonstrate an example of calculating the n-th Fibonacci number.
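The solution above uses memoization (top-down). The same problem can also be solved with tabulation, filling a table from the base cases upward; here is a minimal sketch (the name fib_tab is chosen for this illustration):

```python
def fib_tab(n):
    # Bottom-up: table[i] holds the i-th Fibonacci number.
    if n <= 1:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_tab(5))  # 5
```

Both variants do the same amount of useful work; tabulation simply avoids recursion by computing the subproblems in a fixed order.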

Let’s consider the coin change problem and solve it using dynamic programming to address the issues in the greedy approach in the following section:

An example of dynamic programming#

Question statement#

Given a set of coin denominations and a target amount, find the minimum number of coins required to make up that amount.

Solution#

Dynamic programming breaks the problem down into smaller subproblems and stores the solutions to these subproblems to avoid recomputing intermediate results.

We can define a function coin_change(denominations, amount) that returns the minimum number of coins required to make up the given amount using the given denominations. Here, we build the answer bottom-up, using tabulation to store and reuse the solutions to subproblems.

For example:

  • coin_change([2, 5, 10, 30], 45) would compute the minimum number of coins required for every amount from 0 up to 45.

  • The tabulation table stores the solutions to these subproblems, avoiding redundant calculations.

Let’s see dynamic programming in action for the coin change problem:

def coin_change(denominations, amount):
    memory = [float('inf')] * (amount + 1)
    memory[0] = 0

    for i in range(1, amount + 1):
        for denomination in denominations:
            if denomination <= i:
                memory[i] = min(memory[i], memory[i - denomination] + 1)

    return memory[amount] if memory[amount] != float('inf') else -1

denominations = [2, 5, 10, 30]
amount = 45
print("Minimum number of coins needed:", coin_change(denominations, amount))
Dynamic programming approach for the coin change problem in Python

Code explanation#

  • Line 1: This line defines a function named coin_change that takes two arguments: denominations, a list of coin denominations, and amount, the target amount for which we need to make change.

  • Lines 2–3: The memory = [float('inf')] * (amount + 1) statement initializes a list memory with amount + 1 elements, each set to positive infinity (float('inf')). This list stores the minimum number of coins needed to make change for each amount from 0 to amount. The next instruction sets memory[0] to 0, indicating that zero coins are needed to make change for an amount of 0.

  • Lines 5–6: These lines start a loop that iterates from 1 to amount, inclusive, computing the minimum number of coins needed for each intermediate amount. The for denomination in denominations: statement starts a nested loop that iterates over each coin denomination in the denominations list.

  • Lines 7–8: If the current coin denomination is less than or equal to the current amount i, it can be used toward making change for i. The statement memory[i] = min(memory[i], memory[i - denomination] + 1) updates memory[i] to the minimum of its current value and memory[i - denomination] + 1, which represents using one coin of the current denomination plus the best solution for the remaining amount.

  • Line 10: This line returns the minimum number of coins needed to make change for the target amount. If it’s not possible to make change for that amount, the function returns -1.

  • Lines 12–14: These lines demonstrate an example of finding the minimum number of coins needed to make change for the given amount.

The dynamic programming approach we followed for the coin change problem yields the same results as the greedy algorithm for our example. Also, it gives an optimal solution (2) for denominations (2, 6, 8) and the amount equal to 12.
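As a quick check, the same tabulation logic, restated here as a self-contained sketch (the name min_coins is chosen for this illustration), returns the optimal answer on the denominations that trip up the greedy approach:

```python
def min_coins(denominations, amount):
    # memory[i] holds the fewest coins that make amount i (inf if unreachable)
    memory = [float('inf')] * (amount + 1)
    memory[0] = 0
    for i in range(1, amount + 1):
        for denomination in denominations:
            if denomination <= i:
                memory[i] = min(memory[i], memory[i - denomination] + 1)
    return memory[amount] if memory[amount] != float('inf') else -1

print(min_coins([2, 6, 8], 12))  # 2 (two coins of 6)
print(min_coins([2, 6, 8], 3))   # -1 (amount 3 is unreachable)
```

Unlike the greedy version, the table considers every denomination at every intermediate amount, so it cannot be misled by a locally attractive large coin.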

Let’s explain when to choose the greedy algorithm over dynamic programming and vice versa.

Greedy algorithms vs. dynamic programming#

Choosing between a greedy algorithm and dynamic programming depends on the nature of the problems and the constraints imposed on them. Let’s look at each category and describe the cases where we can opt for either a greedy approach or dynamic programming.

Greedy algorithm#

  • Optimality of greedy choices: Greedy algorithms are preferred for problems having “greedy choice property,” where making a locally optimal choice at each step leads to a globally optimal solution.

  • Efficiency: Greedy algorithms are often more efficient than dynamic programming because they generally involve a simple decision-making process at each step. Therefore, if a problem can be efficiently solved using a greedy approach, it may be preferred over dynamic programming, especially for large input sizes.

  • Space complexity: Unlike dynamic programming, greedy algorithms don’t store intermediate results; therefore, they usually have lower space complexity. If memory usage is a concern when solving a problem, a greedy approach may be preferable.

Dynamic programming#

  • Optimal substructure: Dynamic programming is likely the appropriate choice if a problem has the “optimal substructure” property. Dynamic programming algorithms consider all possible subproblems and construct the optimal solution from their solutions.

  • Overlapping subproblems: Dynamic programming is suitable for problems with overlapping subproblems, where the same subproblem is faced multiple times while solving the larger problem. By storing and reusing solutions to subproblems, dynamic programming avoids recomputations and improves efficiency.

  • Guaranteed optimal solution: Dynamic programming guarantees the optimal solution for problems with optimal substructure and overlapping subproblem properties. It is often the preferred choice if the problem requires finding the globally optimal solution.

As we have seen, the choice of algorithm depends on each approach’s advantages and the problem’s constraints. Also, while deciding on either strategy, we must consider the characteristics of both approaches and invest some time in finding how a problem exhibits those characteristics. This will make our task easier when choosing between the two approaches.

Next steps#

If you want to expand your knowledge of algorithms further, the following courses are an excellent starting point for you:

Grokking Dynamic Programming Interview

Some of the toughest questions in technical interviews require dynamic programming solutions. Dynamic programming (DP) is an advanced optimization technique applied to recursive solutions. However, DP is not a one-size-fits-all technique, and it requires practice to develop the ability to identify the underlying DP patterns. With a strategic approach, coding interview prep for DP problems shouldn’t take more than a few weeks. This course starts with an introduction to DP and thoroughly discusses five DP patterns. You’ll learn to apply each pattern to several related problems, with a visual representation of the working of the pattern, and learn to appreciate the advantages of DP solutions over naive solutions. After completing this course, you will have the skills you need to unlock even the most challenging questions, grok the coding interview, and level up your career with confidence. This course is also available in C++, JavaScript, and Python—with more coming soon!

25hrs
Intermediate
44 Challenges
868 Illustrations
Algorithmic Problem Solving: Preparing for a Coding Interview

This course provides a thorough exploration of essential algorithmic techniques fundamental in programming. It builds a deep understanding of big-O notation, a crucial concept for evaluating the efficiency and performance of algorithms, in C++, Java, and Python. In this course, you’ll delve into different algorithmic strategies, i.e., greedy algorithms, divide-and-conquer, and dynamic programming. Next, you will become proficient in testing and debugging code. The solutions will be rigorously examined by our automated grading system, which employs a diverse set of test cases. Throughout the course, you’ll be presented with popular algorithmic problems frequently encountered in technical interviews. Tackling these challenges head-on equips learners with the skills and confidence needed to excel in technical interviews within the field of computer science.

25hrs
Beginner
32 Challenges
11 Quizzes
Algorithms for Coding Interviews in Python

With algorithms being one of the most common themes in coding interviews, having a firm grip on them can be the difference between being hired and not. After completing this comprehensive course, you'll have an in-depth understanding of different algorithm types in Python and be equipped with a simple process for approaching complexity analysis. As you progress, you’ll be exposed to the most important algorithms you'll likely encounter in an interview. You'll work your way through over 50 interactive coding challenges and review detailed solutions for each problem. You’ll walk away with the ability to build-up to the optimal solution for addressing those tough coding interview questions head-on.

15hrs
Intermediate
42 Challenges
17 Quizzes

  
