
Problem-solving using the decrease and conquer technique

Sarfraz Raza
Dec 12, 2023
16 min read


Picture yourself confronting a challenging problem, akin to tackling a jigsaw puzzle. You meticulously dissect it into smaller, more manageable fragments, carefully selecting the most critical piece to address initially while consistently diminishing the problem's size. The decrease and conquer technique adheres to this strategy, systematically segmenting and progressively simplifying the problem. This simplification leads to a more concise version of the problem, ultimately guiding you to the complete solution. Consider it akin to navigating a maze, step by step—beginning by eliminating possibilities through the careful tracing of specific regions to reduce the problem's size—until you conquer the entire puzzle.

Before we delve into three distinct strategies for applying the decrease and conquer technique, it’s essential to grasp the fundamental differences between the decrease and conquer and the divide and conquer approach.

Divide and conquer vs. decrease and conquer#

The decrease and conquer technique closely resembles divide and conquer, differing slightly in its methodology.

In the divide and conquer approach, a problem is subdivided into smaller, identical subproblems. These subproblems undergo recursive resolution until they reach a minimal size (the base case), after which they are directly solved. The final step combines their solutions to address the original, larger problem.

On the other hand, decrease and conquer also simplifies complex problems by partitioning them into smaller instances. But contrary to divide and conquer, which tackles multiple subproblems, decrease and conquer concentrates on resolving a single subproblem, deriving inspiration from mathematical induction. This technique is also known as the incremental or inductive approach.
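To make the contrast concrete, here is a small sketch (our illustration, not from the original text) that finds the maximum of an array both ways: the decrease and conquer version peels off one element per call and extends a single smaller solution, while the divide and conquer version splits the array in half and combines two sub-results.

```cpp
#include <vector>
#include <algorithm>
#include <cassert>
using namespace std;

// Decrease and conquer: one subproblem of size n-1 per call
int maxDecrease(const vector<int>& a, int n)
{
    if (n == 1)
        return a[0];
    return max(a[n - 1], maxDecrease(a, n - 1)); // Extending the smaller solution
}

// Divide and conquer: two subproblems of size ~n/2, combined afterwards
int maxDivide(const vector<int>& a, int lo, int hi)
{
    if (lo == hi)
        return a[lo];
    int mid = lo + (hi - lo) / 2;
    return max(maxDivide(a, lo, mid), maxDivide(a, mid + 1, hi));
}
```

Both return the same answer; the structural difference is one recursive call versus two.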

The three strategies of decrease and conquer#

Let's discuss the following three strategies one by one:

  1. Decrease by a constant size.

  2. Decrease by a constant factor (with constant manipulation on each recursive step).

  3. Decrease by a constant factor (with a polynomial manipulation on each recursive step).

Decrease and conquer strategy I (by a constant)#

In this variant of the decrease and conquer approach, the algorithm reduces the problem size by a constant c with each iteration or recursive step. This signifies that at every stage, the problem is trimmed down into a more manageable subproblem, shrinking to a size of N - c, where N represents the initial problem size and c denotes the portion that has been pruned or separated from the original problem. On each recursive step, it takes N^k steps to process and trim the problem size to N - c, where c ≥ 1 and k ≥ 1.

The recurrence relation governing this approach adheres to the following pattern:

T(N) = T(N - c) + O(N^k)

Time complexity:

If we open the above recurrence, it will look like this:

T(N) = O(N^k) + O((N - c)^k) + O((N - 2c)^k) + … + O(1)

It is not hard to see that the above recurrence has an overall time complexity of O(N × N^k) ≈ O(N^{k+1}):

  • There are around N/c terms, and every term is bounded above by N^k; therefore, T(N) ≤ N × N^k.

  • For the lower bound, half of the terms (i.e., N/(2c) terms) are each at least (N/2)^k. Because k is a constant, 2^k is also just a constant and can be ignored, so the lower bound is T(N) ≥ (N/(2c)) × (N/2)^k ≈ Ω(N^{k+1}), ignoring the constants and keeping only the terms with base N.
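As a concrete instance of this bound, take c = 1 and k = 1 (remove one element per step, with linear work per step). The recurrence and its expansion become:

T(N) = T(N - 1) + O(N) = O(N) + O(N - 1) + … + O(1) ≈ O(N^2)

This is exactly the shape of the recursive insertion sort discussed next.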

Let’s look at some examples for applying this decrease and conquer strategy.

Example 1: Insertion sort (recursive implementation)#

Insertion sort is a sorting algorithm that builds the final sorted array one item at a time. It iterates through an input array, taking one element at a time and inserting it into its correct position within the sorted portion of the array. This process continues until all elements are sorted.

Look at the following animation of insertion sort.

Insertion sort in action

To ponder: How can we apply the decrease and conquer strategy for implementing insertion sort?

The idea is to recursively sort a portion of the array (one lesser element) and then insert the next element into its correct position in the sorted part (extending the solution).

#include <iostream>
#include <vector>
using namespace std;

void recursiveInsertionSort(vector<int>& arr, int n)
{
    if (n <= 1)
        return;
    // Sorting the first n-1 elements, assuming the last element is pruned
    recursiveInsertionSort(arr, n - 1);
    // Inserting the nth element into the sorted subarray of n-1 elements
    int j = n - 1;
    while (j - 1 >= 0 && arr[j] < arr[j - 1])
    {
        swap(arr[j - 1], arr[j]); // Shifting the (j-1)'th element forward
        j--;
    }
}

void print(string msg, vector<int>& arr)
{
    cout << msg;
    for (int i = 0; i < arr.size(); i++)
        cout << arr[i] << " ";
    cout << endl;
}

int main()
{
    vector<int> arr = {12, 11, 13, 5, -6};
    int n = arr.size();
    print("Before Sorting: ", arr);
    recursiveInsertionSort(arr, n);
    print("After Sorting: ", arr);
    return 0;
}

Explanation

  • Decrease and conquer: In recursiveInsertionSort(), the function first separates the last element and asks recursion to sort the first n-1 vector elements. The recursion sorts those n-1 elements (reducing the problem size), and then the n-th element is inserted into its correct position.

  • Time complexity: The time complexity for recursive insertion sort can be analyzed using the recurrence relation T(N) = T(N - 1) + O(N), as sorting N elements requires sorting the first N - 1 elements plus up to O(N) work to insert the last element.

  • Hence, the time complexity of recursiveInsertionSort() is O(N^2).

Example 2: Recursive traversal of a linked list#

We apply the decrease and conquer approach for the recursive traversal of a linked list by recursively traversing a smaller sublist in each recursive call.

Note: At each step, we move from the current node to the next node, effectively reducing the problem size by one.

#include <iostream>
using namespace std;

struct ListNode
{
    int data;
    ListNode* next;
    ListNode(int val) : data(val), next(nullptr)
    {}
};

void recursiveLinkedListTraversal(ListNode* node)
{
    if (node == nullptr)
        return;
    cout << node->data << " ";
    recursiveLinkedListTraversal(node->next);
}

int main()
{
    // Creating a sample linked list: 10 -> 20 -> 30 -> 40 -> 50
    ListNode* head = new ListNode(10);
    head->next = new ListNode(20);
    head->next->next = new ListNode(30);
    head->next->next->next = new ListNode(40);
    head->next->next->next->next = new ListNode(50);
    recursiveLinkedListTraversal(head);
    return 0;
}

Explanation (Recursive linked list traversal):

  • Decrease and conquer: In recursiveLinkedListTraversal(), we traverse the linked list recursively by reducing the problem to a smaller linked list with each recursive call.

  • Time complexity: The time complexity for recursiveLinkedListTraversal() follows the recurrence T(N) = T(N - 1) + O(1), as each call does constant work and recurses on a list that is one node shorter.

  • The resulting complexity is O(N), where N is the number of nodes. Decrease and conquer is evident in this case because we visit each node exactly once.

Exercise: Printing the linked list in reverse order#

What changes should we make in the above example such that the linked list is printed in reverse order?

The output should be in this order:

50 -> 40 -> 30 -> 20 -> 10
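One possible solution sketch (our illustration): only one change is needed — print after the recursive call instead of before it, so the deepest node is printed first. Here the function writes to an output stream parameter so it is easy to test; the ListNode struct is the same as in the example above.

```cpp
#include <iostream>
#include <sstream>
using namespace std;

struct ListNode
{
    int data;
    ListNode* next;
    ListNode(int val) : data(val), next(nullptr) {}
};

// Printing after the recursive call reverses the visiting order: the
// recursion first walks to the end of the list, then prints on the way back
void reversePrint(ListNode* node, ostream& out)
{
    if (node == nullptr)
        return;
    reversePrint(node->next, out);
    out << node->data << " ";
}
```

Building the list 10 -> 20 -> 30 -> 40 -> 50 and calling reversePrint(head, cout) prints 50 40 30 20 10.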

Decrease and conquer strategy II (by a constant factor, with constant manipulation)#

Another well-known decrease and conquer strategy is to shrink the problem size by a factor of b (that is, to N/b) on each step, using only a constant amount of manipulation, c. In such cases, the recurrence relation for the problem can be expressed as follows:

T(N) = T(N/b) + c

Because c is a constant, the total time of the problem solution is c × log_b N, which is just O(log_b N).

One famous strategy for such a case is binary search; it works in the following fashion:

  • Our task is to search for a given value T in a contiguous sorted array V. We take the middle index i and check whether the value at the middle index, V_i, is equal to T. If it is, we have found the index of the value, i.e., i.

  • However, if the value V_i is greater than T, we shrink the search space to the first half (that is, we search the range from the beginning up to i - 1). Otherwise, we search the second half (that is, the range from i + 1 up to the end).

The above idea of binary searching follows this recurrence:

T(N) = T(N/2) + O(1)

Solving this recurrence shows that the number of steps it takes to find the searched element is O(log_2 N).

Here is the implementation of binary search:

#include <iostream>
#include <vector>
#include <algorithm>
#include <cstdlib>
#include <ctime>
using namespace std;

// Recursive binary search function
int BinarySearch(vector<int>& arr, int target, int left, int right, int& indexCount)
{
    if (left <= right)
    {
        indexCount++;
        int mid = left + (right - left) / 2; // This is to avoid overflow
        // If the target is found at the middle element
        if (arr[mid] == target)
            return mid;
        // If the target is in the left subarray
        if (arr[mid] > target)
            return BinarySearch(arr, target, left, mid - 1, indexCount);
        // If the target is in the right subarray
        return BinarySearch(arr, target, mid + 1, right, indexCount);
    }
    // If the target is not found
    return -1;
}

// Wrapper function for binary search
int BinarySearch(vector<int>& arr, int target, int& indexCount)
{
    indexCount = 0;
    return BinarySearch(arr, target, 0, arr.size() - 1, indexCount);
}

void print(string msg, vector<int>& arr)
{
    cout << msg << ": { ";
    for (int i = 0; i < arr.size(); i++)
        cout << arr[i] << " ";
    cout << "}" << endl;
}

void randomFill(vector<int>& arr, int si, int ei)
{
    // Filling random values in the given range [si-ei]
    int range = ei - si + 1; // Both ends included
    srand(time(0));
    for (int i = 0; i < arr.size(); i++)
    {
        arr[i] = rand() % range + si; // Assigning a random value in the given range
    }
}

int main()
{
    vector<int> arr(1 << 20);
    // = {1, 3, 5, 7, 9, 11, 13, 15}; // You may test on a small array too
    // print("Array ", arr); // Do not print an array of a million elements :)
    randomFill(arr, 0, 1 << 30); // Filling the vector with values between 0 and 2^30
    sort(arr.begin(), arr.end()); // Sorting the array
    int target = arr[789];
    // Element to be searched: after sorting, it is present at index 789. You can
    // change it to any other index.
    int fi = -1;
    int indexCount = 0;
    if ((fi = BinarySearch(arr, target, indexCount)) != -1)
    {
        cout << target << " is found in the array at index: " << fi
             << ". It took " << indexCount << " steps to do this search" << endl;
    }
    else
    {
        cout << target << " is not found in the array."
             << " It took " << indexCount << " steps to find that" << endl;
    }
    return 0;
}

Implementation details

In the provided code, a main() function and a binary search function are implemented in C++. The main function is responsible for testing the binary search algorithm on a large array.

  • In the main function, a vector arr of size 2^20 is created to hold integer values. The randomFill() function is called to populate this vector with random integers between 0 and 2^30. After filling, the vector is sorted in ascending order using the sort() function from the C++ Standard Library.

  • Next, a target value is selected from the sorted array. In this example, the target is set to the element at index 789, but we can change it to any other index for testing. The BinarySearch() function is called to find the target value within the array. If the target is found, the index of the target in the sorted array is printed along with the number of steps taken for the binary search. If the target is not found, a message indicating that it wasn’t found is printed along with the number of steps taken.

  • The BinarySearch() function itself is a recursive binary search algorithm. It takes a sorted array, a target value, and two indices representing the left and right boundaries of the search range. It recursively divides the search range in half until it finds the target or determines that the target is not present in the array. The number of steps taken during the search is tracked using the indexCount parameter, and this count is returned along with the index of the target element or -1 if the target is not found.

Overall, the main function demonstrates how the binary search algorithm can efficiently locate a target value in a sorted array and measure the number of steps required for the search.

Activity: Play the guessing game in the following interactive widget. First, you need to choose a particular number that you want the binary search to search for; the animated shrinking mechanism will show how it will search for the element in the minimum number of steps by following the binary search algorithm.

Binary searching

Let us say, as an exercise, we try to solve the following problem:

A. We have a balance (which can compare the weights of two groups of balls) and 9 balls, where 8 balls have exactly the same weight but one ball is heavier. Can we use the balance only twice and find out which ball is heavier?

B. Can we generalize this technique to solve the N balls problem, where N - 1 balls are of exactly the same weight and one ball is heavier? How many times will we need to use the balance? Assume N is a perfect power of 3, that is, 3, 9, 27, 81, and so on.

Remember: We want to minimize the number of times the balance is used.

Food for thought: What will happen if N is not a perfect power of 3? Can you still solve the problem within a similar bound?
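One way to sketch part B (our illustration; it assumes N is a perfect power of 3): split the balls into three equal groups and weigh two of them against each other. If one pan is heavier, recurse into that group; if they balance, recurse into the third group. Each weighing discards two-thirds of the candidates, so log_3 N weighings suffice.

```cpp
#include <vector>
#include <numeric>
#include <cassert>
using namespace std;

// Returns the index of the heavier ball among balls[lo..lo+n-1], counting
// how many times the balance is used. n is assumed to be a power of 3.
int findHeavy(const vector<int>& balls, int lo, int n, int& weighings)
{
    if (n == 1)
        return lo;
    int third = n / 3;
    // One use of the balance: compare the total weights of the first two groups
    long long left  = accumulate(balls.begin() + lo, balls.begin() + lo + third, 0LL);
    long long right = accumulate(balls.begin() + lo + third, balls.begin() + lo + 2 * third, 0LL);
    weighings++;
    if (left > right)
        return findHeavy(balls, lo, third, weighings);
    if (right > left)
        return findHeavy(balls, lo + third, third, weighings);
    // The pans balance, so the heavy ball must be in the third group
    return findHeavy(balls, lo + 2 * third, third, weighings);
}
```

With 9 balls this uses the balance exactly twice (part A); with 27 balls, three times.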

Further applications#

The concept of strategy II pruning has a broad spectrum of applications, serving as the bedrock for numerous robust data structures. One notable example is the B-tree, characterized by nodes that form a balanced b-ary tree, where each node can house a maximum of b branches and at least b/2 branches. This structural balance results in an efficient searching cost of approximately O(log_b N), making B-trees a cornerstone in optimizing database data retrieval operations.

B-tree (4-5 tree)

Decrease and conquer strategy III (by a constant factor, with some polynomial manipulation)#

This third strategy is very powerful; it works in the following fashion:

We recursively search the space such that the original problem size shrinks by a constant factor c > 1 on each recursive step, where one pruning step takes N^k time. This can be seen as a recurrence of the following form:

T(N) = T(N/c) + O(N^k)

For example, if we assume c = 2, the above recurrence matches the binary search recurrence only if k = 0, because in that case N^k is a constant. For this strategy, we assume that k ≥ 1; that is, the manipulation is at least a linear function, i.e., O(N^k) with k ≥ 1, meaning we scan through the entire input search space at least once.

In such a case, the recurrence becomes a decreasing geometric series, as seen in the expanded expression for T(N):

T(N) = O(N^k) + O((N/c)^k) + O((N/c^2)^k) + O((N/c^3)^k) + …

The above expression can be seen as:

T(N) = O(N^k) × (1 + 1/c^k + 1/c^{2k} + 1/c^{3k} + …)

This entire expression is O(N^k), as (1 + 1/c^k + 1/c^{2k} + 1/c^{3k} + …) ≈ O(1); a decreasing geometric series is always bounded by a constant times its first term, which is 1 in this case.
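To make the O(1) claim explicit: the parenthesized series is geometric with ratio 1/c^k < 1 (since c > 1 and k ≥ 1), so it converges to a constant:

1 + 1/c^k + 1/c^{2k} + 1/c^{3k} + … = 1 / (1 - c^{-k}) = O(1)

For instance, with c = 2 and k = 1, the series sums to exactly 2, so the whole expansion is at most twice its first term, i.e., O(N).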

Prune and search: This approach can be thought of as a strategy where, after pruning, the problem size reduces by a factor of c, and the computation done in the smaller recursive calls is asymptotically free (it is dominated by the cost of the first pruning step).

Let’s look at this strategy in action for finding the K’th smallest element in expected linear time.

Example: K'th smallest element#

In this problem, we aim to find the K'th smallest element in an unsorted array. This means we want to determine the element that would be at the K'th position if the array were sorted in ascending order. For example, if we have the array [12, 5, 23, 9, 42, 1, 3, 7, 19] and we want to find the 4th smallest element, the algorithm should return 7.

Strategy: Decrease and conquer approach

The idea is again a pruning strategy; it randomly selects an element, and based on that, it partitions all the elements smaller than the chosen element in the given array to the left of the array, and those values that are bigger than the partition element to the right of the array. This can be done by the famous partition() function, which is the foundation of the quicksort algorithm.

Observation: The probability that a randomly selected element lies in the middle of the sorted order, between the 25th and 75th percentiles, is 0.5.

The probability that a randomly selected pivot will be from the range within the middle 50%

If the partitioned index is equal to K, we are done. If it isn't, we have two options. If, after partitioning, the K'th index lies in the first portion, then the K'th element is searched for in the first portion only (no need to search the right portion). Otherwise, it is searched for in the second portion (no need to search the first portion).

Here is the code for finding the K'th smallest element in the array.

#include <iostream>
#include <vector>
#include <cstdlib>
#include <ctime>
using namespace std;

// Function to partition the vector and return the index of the pivot element
int partition(vector<int>& arr, int low, int high)
{
    // Choosing a random pivot within the range
    int randomIndex = low + rand() % (high - low + 1);
    swap(arr[randomIndex], arr[high]); // Moving the pivot to the end
    int pivot = arr[high];
    // Initializing the index of the last element smaller than the pivot
    int i = low - 1;
    for (int j = low; j <= high - 1; j++)
    {
        // If the current element is smaller than or equal to the pivot
        if (arr[j] <= pivot)
        {
            i++;
            swap(arr[i], arr[j]); // Swapping arr[i] and arr[j]
        }
    }
    // Swapping arr[i+1] and arr[high] (the pivot)
    swap(arr[i + 1], arr[high]);
    return i + 1; // Returning the index of the pivot element
}

// Function to find the K'th smallest element recursively
int findKthSmallest(vector<int>& arr, int low, int high, int k)
{
    if (k > 0 && k <= high - low + 1)
    {
        // Partitioning the array and getting the index of the pivot
        int pivotIndex = partition(arr, low, high);
        // If the pivot is the K'th element, we return it
        if (pivotIndex - low == k - 1)
            return arr[pivotIndex];
        // If the K'th element is in the left subarray, we recurse on the left subarray
        if (pivotIndex - low > k - 1)
            return findKthSmallest(arr, low, pivotIndex - 1, k);
        // If the K'th element is in the right subarray, we recurse on the right subarray
        return findKthSmallest(arr, pivotIndex + 1, high, k - (pivotIndex - low + 1));
    }
    // If K is out of range, we return -1
    return -1;
}

int main()
{
    vector<int> arr = {12, 5, 23, 9, 42, 1, 3, 7, 19};
    int k = 4; // Finding the 4th smallest element
    // Seeding the random number generator
    srand(time(0));
    int result = findKthSmallest(arr, 0, arr.size() - 1, k);
    if (result != -1)
        cout << "The " << k << "th smallest element is: " << result << endl;
    else
        cout << "Invalid value of K." << endl;
    return 0;
}

How is the K'th element searched recursively?

The probability that a randomly chosen pivot lies within the middle 50% of the sorted order is 0.5. Therefore, in expectation, every second pivot selection partitions the array into, at worst, (3/4) × N elements on one side and (1/4) × N elements on the other. The worst case is that the K'th element we are searching for lies in the bigger portion.

K’th Selection Searching in action

The worst-case expected recurrence will be something like this:

T(N) = T(3N/4) + O(N)

The above recurrence will yield the following geometric series:

T(N) = O(N) × (1 + 3/4 + (3/4)^2 + (3/4)^3 + …)

This has an expected complexity of O(N).

Applications: This approach of finding the K'th smallest element is the backbone of randomized quicksort. Quicksort can call this subroutine to find the middle (median) element in expected linear time, and then partition the array around it, ensuring that the two recursive calls are of almost equal size. This turns it into a recurrence of the following form:

T(N) = T(N/c) + T((1 - 1/c) × N) + O(N), where c > 1

This recurrence has a time complexity of O(N log_c N) ≈ O(N log N), making it one of the fastest known algorithms for sorting N elements.
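The idea above can be sketched as follows (a simplified illustration, not the article's implementation): select the median of the current range in expected linear time — here via std::nth_element, which itself uses a quickselect-style prune and search and also partitions the range around the median — and then recurse on the two (almost equal) halves, giving T(N) = 2T(N/2) + O(N) = O(N log N).

```cpp
#include <vector>
#include <algorithm>
#include <cassert>
using namespace std;

// Quicksort with a median pivot: nth_element places the median of a[lo..hi]
// at position mid in expected O(N) and partitions the other elements around
// it, so both recursive calls receive roughly N/2 elements.
void medianQuicksort(vector<int>& a, int lo, int hi)
{
    if (lo >= hi)
        return;
    int mid = lo + (hi - lo) / 2;
    nth_element(a.begin() + lo, a.begin() + mid, a.begin() + hi + 1);
    medianQuicksort(a, lo, mid - 1);
    medianQuicksort(a, mid + 1, hi);
}
```

Calling medianQuicksort(a, 0, a.size() - 1) on the earlier example array sorts it in ascending order.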

For further details about the 3rd strategy (and details about the geometric series and its bounds), read the following blog Geometric series and classic search and prune.

Conclusion#

The decrease and conquer strategies are very powerful tools in recursive algorithmic problem-solving and a topic that often comes up in coding interviews. The technique draws inspiration from mathematical induction, a fundamental concept in mathematics. Systematically reducing complex problems into smaller, more manageable parts provides a structured approach to finding solutions.

In addition to its relevance in algorithm design, this approach finds applications in various fields, including game theory, where it aids in analyzing and solving complex strategic scenarios. Its versatility extends further into the realm of dynamic programming, enabling the efficient computation of optimal solutions in diverse problem domains. With its roots in mathematical principles and wide-ranging applications, decrease and conquer is a valuable strategy at the intersection of mathematics, computer science, and problem-solving.
