Now more than ever, it’s essential to have a good understanding of algorithms to succeed in coding interviews. Unfortunately, many developers are taught algorithms but not how or why they work.
Today, we will go over the fundamentals of algorithms in Python and walk through some of the most useful algorithms for coding interviews.
What you will learn today:

Algorithmic paradigms: brute force and divide and conquer
Big O notation and time and space complexity
Sorting algorithms: bubble sort, selection sort, and insertion sort
Searching algorithms: linear search and binary search
Algorithms for Coding Interviews in Python
Algorithms are the backbone of software applications and are indispensable in the field of computer science. Python is a versatile and beginner-friendly programming language that employs algorithms to solve problems and accomplish specific tasks within a program. Mastering Python algorithms is a game-changer, as it enables developers to build efficient, optimized, and robust applications with superior performance. Let’s delve deeper into algorithms in Python below:
Algorithms are well-defined and structured steps that specify how data can be manipulated, analyzed, and processed to produce the optimal result. They serve as a roadmap for solving various types of problems. Algorithms are typically expressed via pseudocode and flowcharts before being implemented in a programming language such as Python.
Some common algorithms used in Python programming are:
Sorting algorithms: We use sorting algorithms to organize data in a specific sequence, such as ascending or descending order. Examples include bubble sort, selection sort, insertion sort, merge sort, and quicksort.
Searching algorithms: These algorithms can be utilized to locate specific elements or values within a dataset—for example, linear and binary search algorithms.
Graph algorithms: These algorithms manage graph data structures, including trees and networks. Examples include breadth-first search (BFS), depth-first search (DFS), and Dijkstra’s shortest-path algorithm.
Dynamic programming algorithms: Dynamic programming algorithms solve problems by breaking them into smaller subproblems and storing the results of those subproblems for future use. Examples include algorithms for computing Fibonacci numbers and solving the knapsack problem.
Machine learning algorithms: Python has a rich library ecosystem for implementing machine learning algorithms, such as linear regression, decision trees, and neural networks.
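As a small illustration of the dynamic programming idea mentioned in the list above (breaking a problem into subproblems and caching their results), here is a minimal sketch using memoized Fibonacci numbers. The function name and the use of `functools.lru_cache` are my own choices, not from any particular library convention:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """n-th Fibonacci number; lru_cache stores each subproblem's result
    so it is computed only once."""
    if n < 2:
        return n          # Base cases: fib(0) = 0, fib(1) = 1
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55
```

Without the cache, this recursion recomputes the same subproblems exponentially many times; with it, each `fib(k)` is computed once, so even `fib(50)` returns instantly.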
We measure algorithm efficiency in terms of time and space complexity.
Time complexity refers to the time it takes for an algorithm to complete as a function of its input size. We use Big O notation to express time complexity (e.g., O(n), O(log n), O(n^2)).
Space complexity is the amount of memory an algorithm utilizes relative to its input size. Space complexity can also be calculated using Big O notation.
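To make the space-complexity distinction concrete, here is a short sketch (the function names are my own) of two ways to sum the numbers 1 through n: the first keeps only a single accumulator, so its extra space is O(1); the second stores every intermediate value in a list, so its extra space grows as O(n):

```python
def sum_constant_space(n):
    """O(1) extra space: one accumulator, regardless of n."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_linear_space(n):
    """O(n) extra space: materializes every value in a list before summing."""
    values = [i for i in range(1, n + 1)]  # This list grows with n
    return sum(values)

print(sum_constant_space(100), sum_linear_space(100))  # 5050 5050
```

Both return the same answer; they differ only in how much memory they use as n grows.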
Understanding time and space complexity can help you compare algorithms and select the most suitable one for a specific problem type or application, as we discuss in detail in the following sections.
Python is a suitable programming language for learning about data structures and algorithms. For one, it’s excellent for algorithmic design, as it’s used extensively in data science and machine learning technologies.
Furthermore, it is a high-level programming language that hides much of the low-level implementation detail, so your pseudocode will look very similar to Python.
It also has relatively little syntactic sugar compared to other languages and can be run in any browser. This is very helpful for those who are just beginning to learn about data structures and algorithms, because low-level implementation details force you to learn topics unrelated to data structures and algorithms.
If you’re new to Python, I recommend you check out our Ace the Python Coding Interview learning path to be guided through 7 curated modules.
Algorithmic paradigms are strategies for solving a problem efficiently. Today, we will talk about the two most common algorithmic paradigms: brute force and divide & conquer. The two other more advanced paradigms are greedy algorithms and dynamic programming. If you want to learn more about these, feel free to check out our course Algorithms for Coding Interviews in Python.
Brute force algorithms are exhaustive methods of solving a problem through pure computing power and trying all possibilities to find a solution rather than using a more advanced strategy to improve overall efficiency.
For example, imagine you are trying to figure out a four-digit password combination. The brute force approach would test every possible combination of four-digit numbers from 0000 to 9999. Linear search, a method to find a target value in a given list, is an example of the brute force method. The search algorithm will traverse through the array and check each element until a match is found.
Advantages: The advantage of using the brute force method is that you are eventually guaranteed to find the solution. It’s also straightforward and easy to implement compared to more complex algorithmic paradigms.
Disadvantages: Though it’s easy to implement, it’s the most inefficient solution. It’s also difficult to improve performance and find shortcuts with this strategy.
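The four-digit password example above can be sketched directly in code. This is a minimal illustration under my own assumptions (the `crack_pin` name and the `check` callback are hypothetical, not from the article): it exhaustively tries every combination from 0000 to 9999 until one passes the check.

```python
from itertools import product

def crack_pin(check):
    """Brute force: try every four-digit combination from 0000 to 9999."""
    for digits in product("0123456789", repeat=4):
        guess = "".join(digits)
        if check(guess):       # Test each candidate exhaustively
            return guess
    return None                # No combination passed the check

secret = "4821"
print(crack_pin(lambda guess: guess == secret))  # 4821
```

In the worst case this tests all 10,000 combinations, which is exactly the guaranteed-but-inefficient trade-off described above.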
Divide and conquer is an algorithmic paradigm that involves solving a problem by dividing it into subproblems to an “atomic” level. Once the subproblems are small enough, they will each be solved individually. Finally, the algorithm repeatedly combines the solved subsolutions into a solution for the original problem.
Advantages: It’s very efficient and powerful when dealing with general case solutions where the problem can easily be divided into subproblems. It also is efficient in terms of memory usage, as dividing the problems into atomic subproblems allows the problem to be solved in the cache itself.
Disadvantages: Because it is a recursive approach, it is often slow. There’s also a possibility that the approach duplicates subproblems, leading to large recursion stacks that consume extra space.
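As a concrete sketch of the divide-and-conquer pattern described above (splitting, solving atomic subproblems, then combining), here is a recursive maximum-finding function. The example and its names are my own illustration, not from the article:

```python
def max_divide_conquer(lst, lo, hi):
    """Find the maximum of lst[lo..hi] by dividing the range in half,
    solving each half recursively, and combining the two sub-answers."""
    if lo == hi:                    # "Atomic" subproblem: one element
        return lst[lo]
    mid = (lo + hi) // 2
    left = max_divide_conquer(lst, lo, mid)        # Conquer left half
    right = max_divide_conquer(lst, mid + 1, hi)   # Conquer right half
    return left if left > right else right         # Combine the halves

lst = [3, 41, 52, 26, 38, 57, 9, 49]
print(max_divide_conquer(lst, 0, len(lst) - 1))  # 57
```

Each level of recursion halves the range until every subproblem is a single element, and the answers are merged on the way back up.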
Big O notation is a form of asymptotic analysis that describes how an algorithm’s runtime grows as its input grows. It’s a simple language for developers to quickly describe the performance or complexity of their algorithms.
Because it’s challenging to identify the exact runtime of an algorithm (since it’s based on the computer hardware itself), we describe an algorithm’s runtime based on how quickly it grows. Big O describes the runtime, or execution time, of an algorithm relative to the input N as the input increases. It’s also important to note that we typically use the worst-case scenarios when using Big O notation. In the next section, you will learn how to use Big O notation to describe an algorithm’s time complexity.
```python
def getFirst(arr):
    return arr[0]
```
An algorithm runs in O(1), or constant, time if it takes the same amount of time no matter its input size. From the above code, you can see that the function will always execute in the same time whether the array holds one element or one thousand elements, because it only requires one “step.”
```python
def example(arr):
    for i in arr:
        print(i)
```
An algorithm runs in O(n), or linear, time if its runtime increases linearly relative to its input N. In other words, the time the algorithm takes to run is directly proportional to the input size. As seen from the code above, the function will take one “step” if the array has one element, and one thousand “steps” if the array has one thousand elements.
```python
def example(arr):
    for x in arr:
        for y in arr:
            print(x, y)
```
An algorithm runs in O(n^2), or quadratic, time if its runtime is directly proportional to the square of the input size. For example, this runtime occurs when an algorithm contains a nested loop such that the outer loop executes N times and the inner loop runs N times for every outer iteration, so the runtime is O(N × N) = O(N^2).
Some rules to remember:
Ignore constants: When using Big O notation, you always drop the constants. So, even if the runtime complexity is O(2n), we call it O(n).
Drop less dominant terms: You only keep the most dominant term when talking Big O. For example, O(n^2 + n) is simply O(n^2). Here’s the rule of thumb: O(1) < O(log n) < O(n) < O(n log n) < O(n^2) < O(2^n) < O(n!).
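Both rules can be seen by counting “steps” directly. In this sketch (the step-counting functions are my own illustration), the first function does roughly 2n steps, which is still O(n) once the constant is dropped; the second does n^2 + n steps, which simplifies to O(n^2) once the less dominant term is dropped:

```python
def two_passes(lst):
    """Two sequential loops: about 2n steps, still O(n)."""
    count = 0
    for _ in lst:       # First pass: n steps
        count += 1
    for _ in lst:       # Second pass: another n steps
        count += 1
    return count

def nested_plus_single(lst):
    """A nested loop plus a single loop: n^2 + n steps, i.e., O(n^2)."""
    count = 0
    for _ in lst:
        for _ in lst:   # Nested loop: n * n steps
            count += 1
    for _ in lst:       # Extra pass: n more steps
        count += 1
    return count

lst = list(range(10))
print(two_passes(lst))          # 20 steps  -> O(n)
print(nested_plus_single(lst))  # 110 steps -> O(n^2)
```

As n grows, the n^2 term dominates so completely that the extra n steps (and any constant factor) become irrelevant, which is exactly why Big O discards them.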
Want to read more about Big O notation? Check out our article What is Big-O Notation?
Bubble sort is a sorting algorithm that swaps adjacent elements if they are in the incorrect order. The sorting algorithm will iterate through a list of elements until no more swaps occur, meaning that all the elements are in the correct order.
Let’s take a look at an example with the following array: [3, 23, 7, 11].
The algorithm will begin at index 0 with element 3 and traverse through the array, comparing each index i with index i + 1. At index 1, the algorithm will notice that 23 is greater than 7, the value at index 2, and swap the two.
As the algorithm continues to index 2, it will notice that 23 is greater than 11, the value at index 3, and will swap the two.
The algorithm will go through its second iteration, and because everything is in the correct order, no swaps will be made. When this happens, the algorithm will end.
```python
def bubble_sort(lst):
    """Bubble sort function
    :param lst: lst of unsorted integers
    """
    # Traverse through all list elements
    for i in range(len(lst)):
        # Last i elements are already in place
        # Traverse the list from 0 to size of lst - i - 1
        for j in range(0, len(lst) - i - 1):
            # Swap if the element found is greater than the next element
            if lst[j] > lst[j + 1]:
                lst[j], lst[j + 1] = lst[j + 1], lst[j]

# Driver code to test above
if __name__ == '__main__':
    lst = [3, 2, 1, 5, 4]
    bubble_sort(lst)  # Calling bubble sort function
    print("Sorted list is: ", lst)
```
The time complexity of the code is O(n^2). This algorithm is not very efficient, so it’s mainly used as a learning resource.
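The description earlier says bubble sort stops once a full pass makes no swaps, but the implementation shown always runs every pass. Here is a sketch of that early-exit variant (the `swapped` flag and function name are my additions):

```python
def bubble_sort_early_exit(lst):
    """Bubble sort that stops as soon as a full pass makes no swaps."""
    for i in range(len(lst)):
        swapped = False
        for j in range(0, len(lst) - i - 1):
            if lst[j] > lst[j + 1]:
                lst[j], lst[j + 1] = lst[j + 1], lst[j]
                swapped = True
        if not swapped:   # No swaps this pass: the list is sorted
            break

lst = [3, 2, 1, 5, 4]
bubble_sort_early_exit(lst)
print(lst)  # [1, 2, 3, 4, 5]
```

The worst case is still O(n^2), but an already-sorted input now finishes in a single O(n) pass.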
Master algorithms and data structures in Python without scrubbing through videos or documentation. Educative’s text-based courses are easy to skim and feature live coding environments, making learning quick and efficient.
Selection sort is an algorithm that splits a list of elements into sorted and unsorted. During each iteration, the algorithm will traverse through the unsorted list and add the smallest element to the end of the sorted list.
It adds to the sorted group by swapping the smallest number in the unsorted group with the first element in the unsorted group.
This can be a confusing concept at first, so let’s take a look at an example with the array [12, 5, 8, 11]:
In the beginning, all elements are part of the unsorted group, from index 0 to index 3. The algorithm will traverse through the array and find 5 as the smallest number and swap 5 and 12, which is the first element in the unsorted group. Now, the sorted group is index 0, and the unsorted group is index 1 to index 3.
During the second iteration, the algorithm will start at index 1. It will find 8 as the smallest element in the unsorted group (index 1 - 3). Then, it will swap the 8 and the 12, which is the first element in the unsorted group. Now, the sorted group is index 0 - 1, and the unsorted group is index 2 - 3.
For the third iteration, it will find 11 as the smallest element among the unsorted group and swap 11 and 12.
The algorithm ends since the final element can be assumed to be sorted without traversing to the last index.
Implementing selection sort in Python:
```python
def selection_sort(lst):
    """Selection sort function
    :param lst: List of integers
    """
    # Traverse through all lst elements
    for i in range(len(lst)):
        # Find the minimum element in unsorted lst
        min_index = i
        for j in range(i + 1, len(lst)):
            if lst[min_index] > lst[j]:
                min_index = j
        # Swap the found minimum element with the first element
        lst[i], lst[min_index] = lst[min_index], lst[i]

# Driver code to test above
if __name__ == '__main__':
    lst = [3, 2, 1, 5, 4]
    selection_sort(lst)  # Calling selection sort function
    # Printing sorted lst
    print("Sorted lst: ", lst)
```
Once again, the time complexity is O(n^2), making the algorithm impractical for large inputs.
Insertion sort is a simple sorting algorithm that builds the final sorted array by examining each element and comparing it to the sorted elements to the left.
The algorithm inserts each element into its correct position by swapping it with elements to the left until the element on the left is smaller than the selected element. Let’s take a look at an example with the array [12, 5, 8, 11]:
The sorting algorithm starts at index 0. Because there are no sorted elements to the left of index 0, it stays where it is. The algorithm then moves on to index 1. The value to the left of index 1 is 12, which is greater than 5, so we swap the elements.

We continue to follow the original element from index 1, which is 5. The algorithm looks to the left of 5, which is now at index 0. Because there is no value to the left, the algorithm moves on to index 2. There, it looks to the left of the element 8, sees that 12 is greater than 8, and swaps the elements. It follows the integer 8 and looks to the left again. Because 5 is less than 8, no more changes are made.

Finally, the algorithm moves on to index 3. Because the left value, 12, is greater than 11, the values at indexes 2 and 3 are swapped.
Following element 11, the algorithm will look to the left. Because 8 is less than 11, the algorithm ends.
Implementing insertion sort in Python:
```python
def insertion_sort(lst):
    """Insertion sort function
    :param lst: lst of unsorted integers
    """
    # Traverse through 1 to len(lst)
    for i in range(1, len(lst)):
        key = lst[i]
        # Move elements of lst greater than key to one position ahead
        j = i - 1
        while j >= 0 and key < lst[j]:
            lst[j + 1] = lst[j]
            j -= 1
        lst[j + 1] = key

# Driver code to test above
if __name__ == '__main__':
    lst = [3, 2, 1, 5, 4]
    insertion_sort(lst)  # Calling insertion sort function
    print("Sorted list is: ", lst)
```
The runtime complexity for this algorithm is O(n^2), which makes it a poor choice for large inputs. However, the O(n^2) complexity applies only in the worst case, where the input list is in reverse order. The best-case complexity is O(n).
```python
def linear_search(lst, key):
    """Linear search function
    :param lst: lst of unsorted integers
    :param key: A key to be searched in the list
    """
    if len(lst) <= 0:  # Sanity check
        return -1
    for i in range(len(lst)):
        if lst[i] == key:
            return i  # If found, return index
    return -1  # Return -1 otherwise

# Driver code to test above
if __name__ == '__main__':
    lst = [5, 4, 1, 0, 5, 95, 4, -100, 200, 0]
    key = 95
    index = linear_search(lst, key)
    if index != -1:
        print("Key:", key, "is found at index:", index)
    else:
        print(key, "is not found in the list.")
```
Linear search has an O(n) runtime complexity.
If you’ve taken a computer science course, you’ve probably learned about binary search.
In principle, binary search is a simple concept. Assuming that the input list is sorted, the algorithm compares the desired element with the element in the middle of the list. If the desired element is greater than the middle element, the algorithm recursively checks for it in the second half of the list. If the desired element is less than the middle element, the algorithm checks for it in the first half of the list.
Let’s take a look at an example with the following sorted array: [1, 3, 5, 7, 9, 15, 17].
Say we are looking for the index of the value 17. Because 17 is greater than the middle value, 7, the algorithm will search the second half of the array.
The middle value of the second half of the given array is 15. Because 17 is greater than 15, the algorithm will once again search in the second half.
The middle value of the new half is 17. The algorithm will compare the desired value with the middle value and find that they are the same. Then, it will return the index.
Implementing binary search in Python:
```python
def binary_search(lst, left, right, key):
    """Binary search function
    :param lst: lst of sorted integers
    :param left: Left-side index of the list
    :param right: Right-side index of the list
    :param key: A key to be searched in the list
    """
    while left <= right:
        mid = left + (right - left) // 2
        # Check if key is present at mid
        if lst[mid] == key:
            return mid
        # If key is greater, ignore left half
        elif lst[mid] < key:
            left = mid + 1
        # If key is smaller, ignore right half
        else:
            right = mid - 1
    # If we reach here, then the element was not present
    return -1

# Driver to test above code
if __name__ == '__main__':
    lst = [1, 2, 3, 10, 20, 40, 111, 244, 14444, 800000]
    key = 111
    # Function call
    result = binary_search(lst, 0, len(lst) - 1, key)
    if result != -1:
        print("Element is present at index:", result)
    else:
        print("Element is not present in the list")
```
Binary search runs in O(log n) time, since the list is halved at every iteration.
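The walkthrough above describes binary search recursively, while the implementation shown is iterative. Here is a sketch of the same idea as a recursive function (the name `binary_search_recursive` is my own):

```python
def binary_search_recursive(lst, left, right, key):
    """Recursive binary search on a sorted list; returns index or -1."""
    if left > right:
        return -1                     # Search range is empty: key not present
    mid = left + (right - left) // 2
    if lst[mid] == key:
        return mid                    # Found the key at mid
    elif lst[mid] < key:
        # Key is greater: recurse into the second (right) half
        return binary_search_recursive(lst, mid + 1, right, key)
    else:
        # Key is smaller: recurse into the first (left) half
        return binary_search_recursive(lst, left, mid - 1, key)

lst = [1, 2, 3, 10, 20, 40, 111, 244, 14444, 800000]
print(binary_search_recursive(lst, 0, len(lst) - 1, 111))  # 6
```

Both versions do the same O(log n) work; the iterative form simply avoids the recursion stack.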
Nicely done! With this overview of algorithms in Python, you’re one step closer to acing that coding interview.
Some next topics and activities to explore as you prepare are:
Greedy algorithms
Merge sort
Quick sort
Hash tables
Linked lists
Graphs
Trees, e.g. binary search trees
Queues
Mock interviews and in-depth interview prep quizzes
Most importantly, most tech companies, especially large ones like Apple, Amazon, and Microsoft, care a lot about problem-solving. The next logical step is to practice implementing algorithms yourself, so that you’re ready to solve interview questions and real-world problems and can launch a career as a software engineer or in software development in general.
If you are looking for a streamlined approach to advance your interview preparation by learning algorithms with Python, check out our course Algorithms for Coding Interviews in Python on Educative, which offers a real-time interactive environment and feedback as you learn.