Home/Blog/Interview Prep/Master Algorithms with Python for Coding Interviews

Master Algorithms with Python for Coding Interviews

14 min read
Jun 16, 2020


Now more than ever, it’s essential to have a good understanding of algorithms to succeed in coding interviews. Unfortunately, many developers are taught algorithms but not how or why they work.

Today, we will go over the fundamentals of algorithms in Python and walk through some of the most useful algorithms for coding interviews.

What you will learn today:

  • What Python algorithms are and the common types
  • Why Python is a good language for learning algorithms
  • Algorithmic paradigms: brute force and divide and conquer
  • Measuring time and space complexity with Big O notation
  • Sorting algorithms: bubble, selection, and insertion sort
  • Searching algorithms: linear and binary search


Understanding Python Algorithms#

Algorithms are the backbone of software applications and are indispensable in the field of computer science. Python is a versatile and beginner-friendly programming language that employs algorithms to solve problems and accomplish specific tasks within a program. Mastering Python algorithms is a game-changer, as it enables developers to build efficient, optimized, and robust applications with superior performance. Let’s delve deeper into algorithms in Python below:

What are Python algorithms?#

Algorithms are well-defined and structured steps that specify how data can be manipulated, analyzed, and processed to produce the optimal result. They serve as a roadmap for solving various types of problems. Algorithms are typically expressed via pseudocode and flowcharts before being implemented in a programming language such as Python.

Types of algorithms in Python#

Some common algorithms used in Python programming are:

  • Sorting algorithms: We use sorting algorithms to organize data in a specific sequence, such as ascending or descending order. For instance, various sorting algorithms include:

    • Bubble sort
    • Merge sort
    • Quick sort
    • Insertion sort
  • Searching algorithms: These algorithms can be utilized to locate specific elements or values within a dataset—for example, linear and binary search algorithms.

  • Graph algorithms: These algorithms manage graph data structures, including trees and networks. For example:

    • Depth-First Search (DFS)
    • Breadth-First Search (BFS)
    • Dijkstra’s Shortest Path algorithm
  • Dynamic programming algorithms: Dynamic programming algorithms solve problems by breaking them down into smaller subproblems and storing the results for future use. Some examples include:

    • Fibonacci sequence
    • Knapsack problem
    • Longest common subsequence problem
  • Machine learning algorithms: Python has a rich library ecosystem for implementing machine learning algorithms, such as linear regression, decision trees, and neural networks.
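To give a flavor of the dynamic programming category above, here is a minimal sketch of computing Fibonacci numbers with memoization; the function name and structure are illustrative, not from any particular library:

```python
def fib(n, memo=None):
    """Return the nth Fibonacci number, caching subproblem results."""
    if memo is None:
        memo = {}
    if n < 2:
        return n
    if n not in memo:
        # Store each subproblem's result so it is computed only once
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]

print(fib(10))  # → 55
```

Without the memo dictionary, the same subproblems would be recomputed exponentially many times; with it, each value is computed once.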

Evaluating algorithm efficiency#

We measure algorithm efficiency in terms of time and space complexity.

  • Time complexity refers to the time an algorithm takes to complete as a function of its input size. We use Big O notation to express time complexity (e.g., O(n), O(log n), O(n^2)).

  • Space complexity is the amount of memory an algorithm utilizes relative to its input size. Space complexity can also be calculated using Big O notation.

Understanding time and space complexity helps you compare algorithms and select the best one for a specific problem type or application, as we discuss in detail in the following sections.
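As a quick illustration of the space side (these helper functions are hypothetical), both functions below run in O(n) time, but the first allocates a second list of n elements, O(n) extra space, while the second keeps only a running total, O(1) extra space:

```python
def sum_with_copy(arr):
    # Builds a second list of n elements: O(n) extra space
    doubled = [x * 2 for x in arr]
    return sum(doubled)

def sum_in_place(arr):
    # Keeps only a running total: O(1) extra space
    total = 0
    for x in arr:
        total += x * 2
    return total

print(sum_with_copy([1, 2, 3]))  # → 12
print(sum_in_place([1, 2, 3]))   # → 12
```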


Why use Python for algorithms?#

Python is a suitable programming language for learning about data structures and algorithms. For one, it’s excellent for algorithmic design, as it’s used extensively in data science and machine learning technologies.

Furthermore, it is a high-level programming language that abstracts away much of the low-level implementation detail, so your pseudocode will look very similar to Python.

It also has relatively little syntactic noise compared to many other languages. This is very helpful for those who are just beginning to learn data structures and algorithms, as low-level implementation details would otherwise force you to learn topics unrelated to data structures and algorithms.

If you’re new to Python, I recommend you check out our Ace the Python Coding Interview learning path to be guided through 7 curated modules.


Understanding algorithmic paradigms#

Algorithmic paradigms are strategies for solving a problem efficiently. Today, we will talk about the two most common algorithmic paradigms: brute force and divide & conquer. The two other more advanced paradigms are greedy algorithms and dynamic programming. If you want to learn more about these, feel free to check out our course Algorithms for Coding Interviews in Python.


Brute force#

Brute force algorithms solve a problem exhaustively through pure computing power: they try every possibility to find a solution rather than using a more advanced strategy to improve overall efficiency.

For example, imagine you are trying to figure out a four-digit password combination. The brute force approach would test every possible combination of four-digit numbers from 0000 to 9999. Linear search, a method to find a target value in a given list, is an example of the brute force method. The search algorithm will traverse through the array and check each element until a match is found.
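The four-digit password example above can be sketched as a brute-force loop; the `check_password` callable here is a hypothetical stand-in for whatever verifies a guess:

```python
def crack_pin(check_password):
    """Try every four-digit combination from 0000 to 9999."""
    for guess in range(10000):
        pin = f"{guess:04d}"  # Zero-pad to four digits
        if check_password(pin):
            return pin
    return None  # No combination matched

# Example usage with a hypothetical secret
secret = "0472"
print(crack_pin(lambda pin: pin == secret))  # → 0472
```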

  • Advantages: The advantage of using the brute force method is that you are eventually guaranteed to find the solution. It’s also straightforward and easy to implement compared to more complex algorithmic paradigms.

  • Disadvantages: Though it’s easy to implement, it’s the most inefficient solution. It’s also difficult to improve performance and find shortcuts with this strategy.


Divide and conquer#

Divide and conquer is an algorithmic paradigm that solves a problem by repeatedly dividing it into smaller subproblems until they reach an “atomic” level. Once the subproblems are small enough, each is solved individually. Finally, the algorithm combines the subsolutions into a solution for the original problem.

  • Advantages: It’s very efficient and powerful when dealing with general case solutions where the problem can easily be divided into subproblems. It also is efficient in terms of memory usage, as dividing the problems into atomic subproblems allows the problem to be solved in the cache itself.

  • Disadvantages: Because it is a recursive approach, it is often slow. There’s also a possibility that the approach duplicates subproblems, leading to large recursion stacks that consume extra space.
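Merge sort, listed earlier among the sorting algorithms, is a classic instance of this paradigm; a minimal sketch:

```python
def merge_sort(lst):
    """Divide the list in half, sort each half, then merge."""
    if len(lst) <= 1:               # Atomic subproblem: already sorted
        return lst
    mid = len(lst) // 2
    left = merge_sort(lst[:mid])    # Conquer each half recursively
    right = merge_sort(lst[mid:])
    merged, i, j = [], 0, 0         # Combine the two sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([3, 23, 7, 11]))  # → [3, 7, 11, 23]
```

The divide step is the two recursive calls; the conquer step is the merge loop, which combines two sorted halves in linear time.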



Understanding time and space complexity#

Big O notation#

Big O notation is a form of asymptotic analysis that describes how long an algorithm runs. It’s a simple language for developers to quickly describe the performance or complexity of their algorithms.

Because it’s challenging to identify the exact runtime of an algorithm (since it’s based on the computer hardware itself), we describe an algorithm’s runtime based on how quickly it grows. Big O describes the runtime, or execution time, of an algorithm relative to the input N as the input increases. It’s also important to note that we typically use the worst-case scenarios when using Big O notation. In the next section, you will learn how to use Big O notation to describe an algorithm’s time complexity.


Time complexity#

O(1)

def getFirst(arr):
    return arr[0]

An algorithm runs in O(1) time if it runs in constant time no matter its input. From the above code, you can see that the function will always execute in the same time whether the array holds one element or one thousand elements, because it only requires one “step.”


O(N)

def example(arr):
    for i in arr:
        print(i)

An algorithm runs in O(N) time if its runtime increases linearly relative to its input N. In other words, the time the algorithm takes to run is directly proportional to the input size. As seen from the code above, the function will take one “step” if the array has one element, and will take one-thousand “steps” if the array has one-thousand elements.


O(N^2)

def example(arr):
    for x in arr:
        for y in arr:
            print(x, y)

An algorithm runs in O(N^2) time if its runtime is directly proportional to the square of the input size. For example, this runtime occurs when an algorithm contains a nested loop such that the outer loop executes N times and the inner loop runs N times for each outer iteration, making the total runtime N^2.

Some rules to remember:

  • Ignore constants: When using Big O notation, you always drop the constants. So, even if the runtime complexity is O(2N), we call it O(N).

  • Drop less dominant terms: You only keep the most dominant term when talking Big O. For example, O(N^3 + 50N + 17) is simply O(N^3). Here’s the rule of thumb: O(1) < O(log N) < O(N) < O(N log N) < O(N^2) < O(2^N) < O(N!).
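The “ignore constants” rule can be seen in a short example (this helper function is illustrative): the function below makes two full passes over its input, roughly 2N steps, but its complexity is still written as O(N):

```python
def min_and_max(arr):
    # First pass: find the minimum (N steps)
    smallest = arr[0]
    for x in arr:
        if x < smallest:
            smallest = x
    # Second pass: find the maximum (another N steps)
    largest = arr[0]
    for x in arr:
        if x > largest:
            largest = x
    # 2N steps total, but constants are dropped: O(N)
    return smallest, largest

print(min_and_max([3, 1, 4, 1, 5]))  # → (1, 5)
```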

Want to read more about Big O notation? Check out our article What is Big-O Notation?


Sorting algorithms#

Bubble sort#

Bubble sort is a sorting algorithm that swaps adjacent elements if they are in the incorrect order. The sorting algorithm will iterate through a list of elements until no more swaps occur, meaning that all the elements are in the correct order.

Let’s take a look at an example with the following array:


The algorithm will begin at index 0 with element 3 and traverse through the array, comparing index i with index i+1. At index 1, the algorithm will notice that 23 is greater than 7 and swap the two.


As the algorithm continues to index 2, it will notice that 23 is greater than 11, the value at index 3, and will swap the two.


The algorithm will go through its second iteration, and because everything is in the correct order, no swaps will be made. When this happens, the algorithm will end.


Implementing bubble sort in Python:

def bubble_sort(lst):
    """
    Bubble sort function
    :param lst: lst of unsorted integers
    """
    # Traverse through all list elements
    for i in range(len(lst)):
        swapped = False
        # Last i elements are already in place
        for j in range(0, len(lst) - i - 1):
            # Traverse the list from 0 to size of lst - i - 1
            # Swap if the element found is greater than the next element
            if lst[j] > lst[j + 1]:
                lst[j], lst[j + 1] = lst[j + 1], lst[j]
                swapped = True
        # If no swaps occurred, the list is sorted and the algorithm ends
        if not swapped:
            break

# Driver code to test above
if __name__ == '__main__':
    lst = [3, 2, 1, 5, 4]
    bubble_sort(lst)  # Calling bubble sort function
    print("Sorted list is: ", lst)

The time complexity of the code is O(n^2). This algorithm is not very efficient, so it’s mainly used as a learning resource.




Selection sort#

Selection sort is an algorithm that splits a list of elements into sorted and unsorted. During each iteration, the algorithm will traverse through the unsorted list and add the smallest element to the end of the sorted list.

It adds to the sorted group by swapping the smallest number in the unsorted group with the first element in the unsorted group.

This can be a confusing concept at first, so let’s take a look at an example:


In the beginning, all elements are part of the unsorted group, from index 0 to index 3. The algorithm will traverse through the array and find 5 as the smallest number and swap 5 and 12, which is the first element in the unsorted group. Now, the sorted group is index 0, and the unsorted group is index 1 to index 3.


During the second iteration, the algorithm will start at index 1. It will find 8 as the smallest element in the unsorted group (index 1 - 3). Then, it will swap the 8 and the 12, which is the first element in the unsorted group. Now, the sorted group is index 0 - 1, and the unsorted group is index 2 - 3.


For the third iteration, it will find 11 as the smallest element among the unsorted group and swap 11 and 12.


The algorithm ends since the final element can be assumed to be sorted without traversing to the last index.


Implementing selection sort in Python:

def selection_sort(lst):
    """
    Selection sort function
    :param lst: List of integers
    """
    # Traverse through all lst elements
    for i in range(len(lst)):
        # Find the minimum element in the unsorted lst
        min_index = i
        for j in range(i + 1, len(lst)):
            if lst[min_index] > lst[j]:
                min_index = j
        # Swap the found minimum element with the first unsorted element
        lst[i], lst[min_index] = lst[min_index], lst[i]

# Driver code to test above
if __name__ == '__main__':
    lst = [3, 2, 1, 5, 4]
    selection_sort(lst)  # Calling selection sort function
    # Printing sorted lst
    print("Sorted lst: ", lst)

Once again, the time complexity is O(n^2), making the algorithm impractical for large inputs.

Insertion sort#

Insertion sort is a simple sorting algorithm that builds the final sorted array by examining each element and comparing it to the sorted elements to the left.

The algorithm inserts each element into its correct position by swapping it with elements to the left until the element to its left is smaller than the selected element. Let’s take a look at an example:


The sorting algorithm starts at index 0. Because there are no sorted elements to the left of index 0, it stays where it is. Then it moves on to index 1. The value to the left of index 1 is 12, which is greater than 5. So, we swap the elements.


We continue following the original element from index 1, which is 5. So, the algorithm looks to the left of 5, which is now at index 0. Because there is no value to the left, the algorithm moves on to index 2. The algorithm will look to the left of the element 8, see that 12 is greater than 8, and swap the elements. It follows the integer 8 and looks to the left again. Because 5 is less than 8, no more changes are made.


Finally, the algorithm moves on to index 3. Because the left value, 12, is greater than 11, the values at index 2 and 3 are swapped.


Following element 11, the algorithm will look to the left. Because 8 is less than 11, the algorithm ends.

Implementing insertion sort in Python:

def insertion_sort(lst):
    """
    Insertion sort function
    :param lst: lst of unsorted integers
    """
    # Traverse through 1 to len(lst)
    for i in range(1, len(lst)):
        key = lst[i]
        # Move elements of lst greater than key one position ahead
        j = i - 1
        while j >= 0 and key < lst[j]:
            lst[j + 1] = lst[j]
            j -= 1
        lst[j + 1] = key

# Driver code to test above
if __name__ == '__main__':
    lst = [3, 2, 1, 5, 4]
    insertion_sort(lst)  # Calling insertion sort function
    print("Sorted list is: ", lst)

The runtime complexity for this algorithm is O(n^2), which makes it a poor choice for large inputs. However, the complexity is only n^2 in the worst case, where the input list is in reverse order. The best-case complexity is O(n).


Searching algorithms#

Linear search is the simplest search algorithm, taking a brute force approach. The algorithm goes through a list of elements one by one until it finds the desired element and returns its index.

Implementing linear search in Python:

def linear_search(lst, key):
    """
    Linear search function
    :param lst: lst of unsorted integers
    :param key: A key to be searched in the list
    """
    if len(lst) <= 0:  # Sanity check
        return -1
    for i in range(len(lst)):
        if lst[i] == key:
            return i  # If found, return index
    return -1  # Return -1 otherwise

# Driver code to test above
if __name__ == '__main__':
    lst = [5, 4, 1, 0, 5, 95, 4, -100, 200, 0]
    key = 95
    index = linear_search(lst, key)
    if index != -1:
        print("Key:", key, "is found at index:", index)
    else:
        print(key, "is not found in the list.")

Linear search has an O(n) runtime complexity.

If you’ve taken a computer science course, you’ve probably learned about binary search.

In principle, binary search is a simple concept. Assuming the input list is sorted, the algorithm compares the desired element with the element in the middle of the list. If the desired element is greater than the middle element, the algorithm recursively checks for it in the second half of the list. If the desired element is less than the middle element, the algorithm checks for it in the first half of the list.

Let’s take a look at an example with the following array:


Say, we are looking for the index of value 17. Because 17 is greater than the middle value, 7, the algorithm will search in the second half of the given array.


The middle value of the second half of the given array is 15. Because 17 is greater than 15, the algorithm will once again search in the second half.


The middle value of the new half is 17. The algorithm will compare the desired value with the middle value and find that they are the same. Then, it will return the index.

Implementing binary search in Python:

def binary_search(lst, left, right, key):
    """
    Binary search function
    :param lst: Sorted list of integers
    :param left: Left index of the search range
    :param right: Right index of the search range
    :param key: A key to be searched in the list
    """
    while left <= right:
        mid = left + (right - left) // 2
        # Check if key is present at mid
        if lst[mid] == key:
            return mid
        # If key is greater, ignore the left half
        elif lst[mid] < key:
            left = mid + 1
        # If key is smaller, ignore the right half
        else:
            right = mid - 1
    # If we reach here, the element was not present
    return -1

# Driver code to test above
if __name__ == '__main__':
    lst = [1, 2, 3, 10, 20, 40, 111, 244, 14444, 800000]
    key = 111
    # Function call
    result = binary_search(lst, 0, len(lst) - 1, key)
    if result != -1:
        print("Element is present at index:", result)
    else:
        print("Element is not present in the list")

Binary search runs in O(log n) time, since the list is halved at every iteration.
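The walkthrough above describes binary search recursively, while the implementation shown is iterative; a recursive sketch of the same idea (the function name is illustrative):

```python
def binary_search_recursive(lst, left, right, key):
    """Recursively search a sorted list for key; return its index or -1."""
    if left > right:  # Empty range: key is not present
        return -1
    mid = left + (right - left) // 2
    if lst[mid] == key:
        return mid
    elif lst[mid] < key:
        # Key is in the right half
        return binary_search_recursive(lst, mid + 1, right, key)
    else:
        # Key is in the left half
        return binary_search_recursive(lst, left, mid - 1, key)

lst = [1, 2, 3, 10, 20, 40, 111, 244, 14444, 800000]
print(binary_search_recursive(lst, 0, len(lst) - 1, 111))  # → 6
```

Both versions do the same work; the iterative form avoids the recursion stack, while the recursive form mirrors the divide-and-conquer description more directly.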


Wrapping up and resources#

Nicely done! With this overview of algorithms in Python, you’re one step closer to acing that coding interview.

Some next topics and activities to explore as you prepare are:

  • Greedy algorithms

  • Dynamic programming

  • Merge sort

  • Quick sort

  • Recursion

  • Hash tables

  • Linked lists

  • Graphs

  • Trees, e.g. binary search trees

  • Queues

  • Mock interviews focusing on in-depth interview prep quizzes

Most importantly, because most tech companies, including large companies like Apple, Amazon, and Microsoft, care a lot about problem-solving, the next logical step is to practice implementing algorithms yourself so that you’re ready to solve interview questions and real-world problems, and to launch a career as a software engineer or in software development more broadly.

If you are looking for a streamlined approach to advance your interview preparation by learning algorithms with Python, check out our course Algorithms for Coding Interviews in Python on Educative, which offers a real-time interactive environment and feedback as you learn.


Continue reading about coding interviews and Python#


Written By:
Aaron Xie
 