Recursion is one of the most fundamental techniques for solving problems.
Often, solving a problem with recursion is cleaner and easier to implement than if you were to do it iteratively. A good example of where recursion is useful is in QuickSort algorithms.
Recursion can be used to break a problem down into smaller components, a pattern known as Divide and Conquer. This approach underlies techniques such as MergeSort, binary search, and depth-first search.
You’ll also notice that recursion is particularly useful for problems such as the Fibonacci sequence, the factorial function, the Tower of Hanoi, and dynamic programming problems.
In this article, we will explore the following:

- What recursion is and how base cases work
- The call stack
- Tail recursion and tail call elimination
- Direct, indirect, and anonymous recursive calls
- Structural vs. generative recursion
- Recursion vs. iteration
- Recursive solutions to common coding interview problems
Recursion is a method of program design where you break apart a problem into smaller, repeatable subtasks. The program completes each subtask, and the results are later combined to achieve the overall solution.
The primary feature that defines recursion is that a recursive function calls itself, either directly or indirectly, during execution. The recursive call often comes as the final operation on the passed data, a practice called tail recursion. This results in a looping structure that repeats the operation until an exit condition is met.
Each pass through this loop brings the program closer to its desired state or solution, which is known as the base case. Once this base case is reached, the method will no longer loop back into its recursive step. Instead, the program will end.
The base case (or base condition) is the state where the program’s solution has been reached. An achievable base case is essential to avoid an infinite loop. Recursive methods are built with two paths: the method first checks if the base state has been reached. If yes, the method ends and returns the current data. If not, the method instead takes the other path and executes the recursive case, altering the input and calling itself again.
To help contextualize this, consider a hypothetical example.
Imagine a search program’s base case is when a searched value is found, or `valueSearched = cellValue`. If the values don’t match, the currently selected cell isn’t the solution.
The method then executes the recursive step, selecting another item in the data set and calling the method again with this new cell as input.
If we did not have the base case, the program would simply repeat the recursive step and therefore scan the data set infinitely.
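Here’s a minimal sketch of such a search in Java (the method and variable names, like `recursiveSearch` and `dataSet`, are hypothetical and only for illustration):

```java
// A hypothetical recursive search: the base case is a match between the
// searched value and the current cell; otherwise the recursive step moves
// on to the next cell in the data set.
static int recursiveSearch(int[] dataSet, int valueSearched, int index) {
    if (index == dataSet.length)             // ran out of cells: value not found
        return -1;
    if (dataSet[index] == valueSearched)     // base case: value found
        return index;
    return recursiveSearch(dataSet, valueSearched, index + 1); // recursive step
}
```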
The call stack is an integrated, hidden data structure within all modern programming languages. By storing active subroutines in a stack structure, the program can execute subroutines and return to each caller in the correct order.
Each recursive call in a program causes a nesting effect in the call stack, adding more subroutines that must be finished before the stack is empty.
Broadly speaking, the larger the call stack, the more memory and time that is needed for the program to run.
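To make this growth visible, here’s a small illustrative sketch (not from the original example) that prints the approximate number of frames on the thread’s stack at each recursive call:

```java
public class CallStackDemo {
    // Each recursive call adds one more frame to the call stack.
    static int depth(int n) {
        // Approximate frame count for the current thread (the exact number
        // varies by JVM, but it grows by one with every recursive call).
        System.out.println("n = " + n + ", frames on stack: "
                + Thread.currentThread().getStackTrace().length);
        if (n == 0) return 0;      // base case: stop recursing
        return depth(n - 1);       // recursive step: adds another frame
    }

    public static void main(String[] args) {
        depth(3); // prints a growing frame count as the recursion deepens
    }
}
```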
Tail recursion is when the recursive call for the next cycle is the final statement in a method.
Tail recursion is considered good practice whenever possible because it is easier to follow from a reader’s perspective and it can be optimized by modern compilers.
Compilers can recognize that a tail-recursive method has completed all of its work within the current call. Since nothing remains to be done, the program doesn’t need to keep the record of that call, known as the call frame.
Modern compilers automatically recognize this and perform tail call elimination, which removes completed calls from the call stack.
Compilers use tail call elimination to simplify program execution and free up memory: only the currently executing call frame needs to be stored.
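As a rough sketch of the difference, here’s a factorial written both ways. Note that this is illustrative: the standard Java compiler and JVM do not actually perform tail call elimination, but compilers that do can reuse the frame for the tail-recursive version.

```java
// Non-tail-recursive: the multiplication happens AFTER the recursive call
// returns, so every frame must stay on the stack until the call below it finishes.
static long factorial(int n) {
    if (n <= 1) return 1;                  // base case
    return n * factorial(n - 1);           // work remains after the call
}

// Tail-recursive: the recursive call is the final statement, so a compiler
// that performs tail call elimination could reuse the current frame.
// (The standard Java compiler and JVM do not perform this optimization.)
static long factorialTail(int n, long acc) {
    if (n <= 1) return acc;                // base case: accumulator holds the result
    return factorialTail(n - 1, n * acc);  // nothing left to do after the call
}
// usage: factorialTail(5, 1) == 120
```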
Though right now we’ve only mentioned direct recursive calls, there are actually three ways to implement a recursive call – Direct, Indirect, and Anonymous.
Direct recursion is when the recursive method is plainly called by name. This is the most common type of recursion.
Let’s look at an example:
```java
int rec(int n) {
    if (n == 10)           // base case
        return 1;
    else
        return rec(n + 1); // direct recursion
}
```
Indirect recursion is when a method calls another method which in turn calls the first.
Indirect recursion is recursive because the first method’s execution still leads to another execution of itself. Indirect recursion is less common than direct recursion but may be useful in niche situations.
However, it’s best practice to avoid indirect recursion because it is more difficult to follow than direct recursion.
Here’s an example of indirect recursion:
```java
int indirect1(int n) {
    // ...
    indirect2(n); // calls second method
    // ...
}

int indirect2(int n) {
    // ...
    indirect1(n); // calls first method
    // ...
}
```
Anonymous recursion is an advanced form of recursive call where no method is called by name.
Instead, recursion occurs when a higher-order function receives the recursive method as an argument and then calls it, either directly or through reflection features that invoke the method in a particular context. Anonymous recursion is considered poor practice, as the code is longer and less clear than other recursive forms.
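The article doesn’t show one, but here’s a rough Java sketch of the idea: the lambda below never refers to itself by name; recursion happens only through the reference to the function that is passed in (the `SelfApply` interface is a made-up helper for this example):

```java
public class AnonymousRecursionDemo {
    // Made-up helper interface: a function that receives a reference to
    // "itself" as its first argument.
    interface SelfApply { int apply(SelfApply self, int n); }

    public static void main(String[] args) {
        // The lambda never calls itself by name; recursion happens only
        // through the "self" reference that is passed in.
        SelfApply factorial = (self, n) -> n <= 1 ? 1 : n * self.apply(self, n - 1);
        System.out.println(factorial.apply(factorial, 5)); // prints 120
    }
}
```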
The type of recursive call is not the only way recursive forms are grouped. By looking at how a method manipulates its input data, we can also group forms into either structural or generative recursion.
Structurally recursive methods use part of the original input as a passed argument.
For example, our previous example could be further described as both structural and direct recursion, as the method uses a part, `n + 1`, of the original input, `n`.
```java
int rec(int n) {
    if (n == 10)           // base case
        return 1;
    else
        return rec(n + 1); // structural, direct recursion
}
```
Generative recursive methods on the other hand compute or “generate” new data as the input for each recursive call.
In this example of the Euclidean algorithm, we can see that `gcdRec` is generative because it uses the modulo of `a` and `b` rather than `a` or `b` themselves.
```java
int gcdRec(int a, int b) {
    if (b == 0)
        return a;            // base case
    return gcdRec(b, a % b); // generative, direct recursion
}
```
These are all helpful ways to discuss and understand this concept, but why should you learn about recursive programming? The secret lies in ease of use!
For problems featuring repetitive, smaller subproblems, such as most search functions, a recursive solution is often the simplest to implement, as it is intuitive and requires less code.
Also, recursive solutions are easy to scale to any size because they will always run until the base state is reached.
While easy to scale, recursive solutions have shortcomings that become more pronounced with scale. Without tail recursion and tail call elimination, a recursive solution uses more memory for each call stored on the call stack; the greater the scale, the more calls on the stack. The extra memory used slows down the process.
Another main downside is that recursive solutions are difficult to read and troubleshoot at greater complexity levels, especially when using indirect recursion.
Finally, recursive solutions are sensitive to errors. A recursive solution can easily have either an unreachable base case or a recursive step that does not correctly progress toward the base case. Both of these errors cause a stack overflow error, meaning that the recursive call resulted in an infinite loop and was therefore terminated.
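As a hypothetical illustration, the sketch below has a recursive step that moves away from its base case, so for any positive input it will keep recursing until the JVM throws a `StackOverflowError`:

```java
// Hypothetical example of a broken recursive step: it moves AWAY from the
// base case (n == 0), so for any positive n the base case is never reached
// and the JVM eventually throws java.lang.StackOverflowError.
static int brokenCountdown(int n) {
    if (n == 0) return 0;              // unreachable base case for n > 0
    return brokenCountdown(n + 1);     // bug: should progress toward 0 (n - 1)
}
```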
If we take a closer look at the `rec` example listed previously, we can see that the program simply repeats until `n` reaches 10, the base case. As a result, this same behavior could be replicated by a `for` or `while` loop, forms of iterative construction.
```java
int iterative(int n) {
    for (int i = 0; i != 10; i++) { // loops a fixed number of times instead of recursing
        n = i;
    }
    return n;
}
```
All recursive solutions can be described iteratively and all iterative solutions can be done recursively.
Recursive solutions use self-calling methods and run until their base case is reached. Iterative solutions do not call themselves and instead are repeated until a certain number of loops is reached or until a condition is met (`i == 10`, for example).
Iterative solutions have the upper hand in memory usage and speed (usually).
These two benefits are actually derived from the same quality: while recursive methods add a new call to the call stack with each recurrence, iterative methods add only one call for the whole loop! This means fewer methods are stored and called, so the program uses less memory and usually runs faster.
Why “usually”?
Well, this is a bit more complicated than it seems and relies on a lot of ifs. If we combine tail recursion and tail call elimination, we can avoid filling the call stack with call frames.
This means that, when using a modern compiler and perfectly optimized code, recursive and iterative counterparts actually have nearly the same run-times.
However, the time investment needed to make this equivalency possible is often unrealistic or sometimes the solution simply doesn’t allow for tail recursion. This is where we get the community knowledge that iterative solutions are “usually” faster.
| Type | Benefits | Limitations | Uses |
|---|---|---|---|
| Recursive | Easily scalable to any size; quick to implement; intuitive for problems involving multiple subtasks; quick to read due to low amount of code | High memory use, expensive to scale; usually slower than iterative solutions; can be confusing to follow with complex recursive cases or indirect recursion; susceptible to stack overflow errors | Search functions (mergesort, binary search tree, quicksort, etc.); most data structures (heaps, trees, graphs, linked lists, etc.); branching problems too complex for an iterative approach |
| Iterative | Faster than recursive solutions; low memory use; straightforward to code | More code for the same result as recursive; unable to handle complex problems such as branching | Simple, large projects; problems in which minimizing memory usage is essential |
Here’s how we’d recursively solve a classic interview problem: searching for a node within a binary search tree (BST).
Code in JavaScript:
```javascript
// searches for a node within a BST
function search(node, data) {
    if (node === null)
        return null;                       // returns null if no node is present
    else if (data < node.data)             // moves left if data is less than node's
        return search(node.left, data);
    else if (data > node.data)             // moves right if data is greater than node's
        return search(node.right, data);
    else
        return node;                       // returns node if data equals node's
}
```
Code in Java:
```java
// searches for a node within a BST
public Node search(Node root, int data) {
    if (root == null || root.data == data) // returns root if tree is empty or data is found
        return root;
    if (root.data > data)                  // moves left if data is less than node's
        return search(root.left, data);
    return search(root.right, data);       // moves right if data is greater than node's
}
```
Here’s a recursive solution for finding the nth Fibonacci number:

```java
int fib(int n) {
    // Base case
    if (n == 0 || n == 1)
        return n;
    // Recursive step
    return fib(n - 1) + fib(n - 2);
}
```
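As a side note (not part of the original solution), this naive version recomputes the same subproblems many times. A memoized sketch like the one below stores each result so it is computed only once, which is the dynamic programming idea mentioned earlier:

```java
// Sketch of a memoized variant (assumes results fit in a long).
// Each Fibonacci number is computed once and cached, avoiding the
// exponential recomputation of the plain recursive version.
static long fibMemo(int n, long[] memo) {
    if (n == 0 || n == 1) return n;   // base case
    if (memo[n] != 0) return memo[n]; // already computed
    memo[n] = fibMemo(n - 1, memo) + fibMemo(n - 2, memo); // recursive step
    return memo[n];
}
// usage: fibMemo(50, new long[51])
```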
This recursive method copies `string1` to `string2`, one character at a time, starting from `index = 0`:

```java
static void myCopy(char string1[], char string2[], int index) {
    string2[index] = string1[index];     // copies character from string1 to string2
    if (index == string1.length - 1)     // ends if string end reached
        return;
    myCopy(string1, string2, index + 1); // advance the character index
}
```
This recursive method prints a linked list in reverse:

```java
class LinkedList {
    Node head;

    class Node {
        int data;
        Node next;
        Node(int d) { data = d; next = null; }
    }

    /* prints linked list in reverse */
    void revPrint(Node head) {
        if (head == null)
            return;                        // base case: end of list
        revPrint(head.next);               // prints the rest of the list first
        System.out.print(head.data + " "); // prints this node after the rest
    }
}
```
For more interview questions, visit these Edpresso articles. They’ll be sure to test your recursion knowledge!
Got an interview on the horizon? Test your understanding of recursion with our Recursion for Coding Interviews series, available in Python, C++, Java, and JavaScript.