Home/Blog/Interview Prep/Ace the LinkedIn Interview: Must-Know Python Coding Problems

Ace the LinkedIn Interview: Must-Know Python Coding Problems

32 min read
content
Pattern 1: Tree Depth-First Search
Lowest Common Ancestor in a Binary Tree
Solution
Pattern 2: Stacks
Flatten Nested List Iterator
Solution
Exclusive Execution Time of Functions
Solution
Pattern 3: Custom Data Structures
Insert Delete GetRandom O(1)
Solution
Pattern 4: Dynamic Programming
Maximum Product Subarray
Solution
Pattern 5: Graphs
Graph Valid Tree
Pattern 6: Sliding Window
Minimum Window Substring
Solution
Pattern 7: K-Way Merge
Find K Pairs with Smallest Sums
Solution
Pattern 8: Top K Elements
Kth Largest Element in an Array
Solution
Pattern 9: Hash Maps
Isomorphic Strings
Solution

As a platform committed to helping the world's professionals succeed, LinkedIn places equal emphasis on nurturing talent within the company. This means that you have ample opportunity to build new skills and advance your career.

To find skilled engineers who are eager to grow with the company, LinkedIn typically evaluates candidates through:

  1. An introductory conversation with an engineering recruiter

  2. A technical interview

  3. Interviews with a few engineering leaders

At some point in the interview loop, you will be asked to solve coding problems tailored to the team you'd be joining. This helps hiring managers assess your technical skills and approach to problem-solving.

Solving coding problems in front of industry experts can feel intimidating. The last thing you want is to encounter a tricky problem and freeze. Fortunately, you can prepare for a wide range of problems by studying coding patterns.

Here's how it works.

Let's say you're given a brand new coding problem. If you recognize the underlying pattern, you can quickly identify relevant strategies and algorithms — then apply them to find a solution.

After reviewing the most common types of coding problems in LinkedIn interviews, we've selected 9 coding patterns that can help you ace your technical interview. We'll break down each pattern, then provide sample coding problems for each.

Let's dive in!

Pattern 1: Tree Depth-First Search#

Traditionally, when traversing a tree, one might start at the root node and visit each node, moving down one level at a time. This approach, although straightforward, may not always be efficient for certain types of tree-related problems. Naive solutions to some problems might require traversing the same nodes repeatedly, leading to inefficiencies in the traversal process. The Depth-First Search (DFS) pattern operates by recursively traversing from the root node down to the deepest level of a tree, exploring as far as possible along each branch before backtracking. It follows a straightforward principle: explore one branch fully before moving on to the next. This recursive exploration continues until all nodes have been visited or until a specific condition is met. This technique is useful for tasks like finding the height or depth of a tree, determining the lowest common ancestor of two nodes, or generating different traversal orders such as preorder, inorder, and postorder.

There is another closely related technique called Breadth-First Search (BFS), which traverses nodes level by level, exploring all nodes at the current level before moving on to the next level. This approach prioritizes breadth over depth, making it particularly useful for problems that involve finding the shortest path between nodes, locating neighbors at a specific distance, or exploring all possible paths of a fixed length in a tree structure. In contrast, Depth-First Search (DFS) explores one branch as deeply as possible before backtracking, prioritizing depth over breadth.
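To make the contrast concrete, here is a minimal sketch of both traversals on a small hand-built tree. The bare-bones TreeNode class, along with the dfs_preorder and bfs helpers, are defined just for this illustration:

```python
from collections import deque

class TreeNode:
    def __init__(self, data, left=None, right=None):
        self.data = data
        self.left = left
        self.right = right

def dfs_preorder(node, visited=None):
    # DFS: explore one branch fully before backtracking
    if visited is None:
        visited = []
    if node:
        visited.append(node.data)
        dfs_preorder(node.left, visited)
        dfs_preorder(node.right, visited)
    return visited

def bfs(root):
    # BFS: visit all nodes at the current level before the next one,
    # using a queue to track nodes still to be visited
    visited = []
    queue = deque([root] if root else [])
    while queue:
        node = queue.popleft()
        visited.append(node.data)
        if node.left:
            queue.append(node.left)
        if node.right:
            queue.append(node.right)
    return visited

root = TreeNode(1, TreeNode(2, TreeNode(4), TreeNode(5)), TreeNode(3))
print(dfs_preorder(root))  # [1, 2, 4, 5, 3]
print(bfs(root))           # [1, 2, 3, 4, 5]
```

Note how DFS finishes the entire left branch (2, 4, 5) before touching 3, while BFS visits 2 and 3 before descending a level.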

Let’s check out the following interview problem to see how the Tree Depth-First Search pattern works:

Lowest Common Ancestor in a Binary Tree#

Given the root node of a binary tree with n nodes, your task is to find the lowest common ancestor of two of its nodes, p and q.

Note: The lowest common ancestor of two nodes, p and q, is defined as the lowest node in the binary tree that has both p and q as descendants.

A node can also be a descendant of itself. For example, if q is a descendant of p, and we know that p is a descendant of itself, then p will be the lowest common ancestor of p and q.

Solution#

We will use the depth-first search to find the lowest common ancestor of p and q in the binary tree. The algorithm to find the lowest common ancestor of p and q is as follows:

  1. First, we initialize three tracking variables, mid, left, and right, to track whether p or q has been found.

  2. Then, we traverse the binary tree recursively using depth-first search starting from the root node.

  3. If we find p or q during our traversal of the binary tree, we set the mid variable to TRUE and return mid.

  4. The left tracking variable is used to store the result of the left subtree of the current node, and right tracking variable is used to store the result of the right subtree of the current node. So, the results from the recursive calls are stored in their respective tracking variables.

  5. Finally, during the traversal of the binary tree, if any two of the tracking variables, mid, left, or right, are TRUE, we set the current node as our answer node because this node will be the lowest common ancestor of p and q.

We need to understand the purpose of each of the tracking variables to answer the question of how a node becomes the lowest common ancestor if any two of the tracking variables are TRUE. If the left and right variables are TRUE for any node, it means that both the nodes are descendants of the current node, and therefore, the current node is the lowest common ancestor of the two nodes. However, if mid and either one of the left or right variables are TRUE, then either p or q is the current node itself, and the other is the descendant of the current node. Since a node is an ancestor of itself, the lowest common ancestor of the input nodes is the current node.

Let’s look at the code for this solution below:

main.py
TreeNode.py
BinaryTree.py
from BinaryTree import *

class Solution:
    def __init__(self):
        self.lca = None

    def lowest_common_ancestor(self, root, p, q):
        self.lowest_common_ancestor_rec(root, p, q)
        return self.lca

    # Helper function to find the lowest common ancestor recursively
    def lowest_common_ancestor_rec(self, current_node, p, q):
        # If current_node does not exist
        if not current_node:
            return False
        # Initialize tracking variables
        left, right, mid = False, False, False
        # Check if either of the input nodes is the current_node
        if p == current_node or q == current_node:
            mid = True
        # Traverse the left subtree using depth-first search
        left = self.lowest_common_ancestor_rec(current_node.left, p, q)
        # If the lowest common ancestor has not been found, only then traverse the right subtree
        if not self.lca:
            right = self.lowest_common_ancestor_rec(current_node.right, p, q)
        # If any two of the tracking variables are true, set current_node as the answer node
        if mid + left + right >= 2:
            self.lca = current_node
        # Return true if any of the tracking variables is true
        return mid or left or right

# Driver code
def main():
    input_trees = [[TreeNode(100), TreeNode(50), TreeNode(200), TreeNode(25), TreeNode(75), TreeNode(350)],
                   [TreeNode(100), TreeNode(200), TreeNode(75), TreeNode(50), TreeNode(25), TreeNode(350)],
                   [TreeNode(350), TreeNode(100), TreeNode(75), TreeNode(50), TreeNode(200), TreeNode(25)],
                   [TreeNode(100), TreeNode(50), TreeNode(200), TreeNode(25), TreeNode(75), TreeNode(350)],
                   [TreeNode(25), TreeNode(50), TreeNode(75), TreeNode(100), TreeNode(200), TreeNode(350)]]
    input_nodes = [
        [25, 75],
        [50, 350],
        [100, 200],
        [50, 25],
        [350, 200]
    ]
    for i in range(len(input_trees)):
        solution = Solution()
        tree = BinaryTree(input_trees[i])
        print(i + 1, ".\tBinary tree:", sep="")
        display_tree(tree.root)
        print("\tp = ", input_nodes[i][0])
        print("\tq = ", input_nodes[i][1])
        lca = solution.lowest_common_ancestor(tree.root, tree.find(input_nodes[i][0]), tree.find(input_nodes[i][1]))
        print("\n\tLowest common ancestor: ", lca.data, sep="")
        print("-" * 100)

if __name__ == "__main__":
    main()
Lowest Common Ancestor in a Binary Tree

Now that we've covered the Tree Depth-First Search, let's move on to another frequently asked coding pattern.

Pattern 2: Stacks#

A stack is a linear data structure that organizes and manages data in a Last In, First Out (LIFO) manner. This means the last element added to the stack is the first to be removed. Think of it like a stack of plates where you can only add or remove plates from the top.

Using stacks as a coding pattern involves the following fundamental operations:

Operation

Time Complexity

Description

Push

O(1)

Adds the element at the top of the stack.

Pop

O(1)

Removes and returns the element from the top of the stack.

Peek

O(1)

Returns the element at the top of the stack without removing it.

IsEmpty

O(1)

Checks whether the stack is empty or not. Returns TRUE if the stack is empty, FALSE otherwise.

Size

O(1)

Returns the total number of elements in the stack.

Stacks are commonly used for tasks like expression evaluation, syntax parsing, or tracking state changes in algorithms. To identify if a problem can be solved using the Stacks pattern, look for scenarios where the last in, first out property is advantageous or where tracking state changes in a last in, first out manner is necessary. Examples of common interview problems that can be tackled using the Stacks pattern include evaluating arithmetic expressions, checking balanced parentheses, or implementing a browser’s back functionality.
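In Python, all five of these operations map directly onto a plain list used as a stack; a quick sketch:

```python
stack = []

stack.append(10)        # Push: O(1)
stack.append(20)
stack.append(30)

print(stack[-1])        # Peek: prints 30 without removing it
print(stack.pop())      # Pop: removes and prints 30
print(len(stack) == 0)  # IsEmpty: prints False
print(len(stack))       # Size: prints 2
```

Because `append` and `pop` both operate on the end of the list, each runs in constant time on average.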

Let’s see how the following examples illustrate the application of the Stacks pattern to efficiently solve these problems:

Flatten Nested List Iterator#

Given a nested list of integers where each element is either an integer or a list whose elements may also be integers or other integer lists, implement an iterator to flatten the nested list.

Implement the Nested Iterator class that has the following functions:

  • Init(): This initializes the iterator with the nested list.

  • Next(): This returns the next integer in the nested list.

  • Has Next(): This returns TRUE if there are still some integers in the nested list. Otherwise, it returns FALSE.

Solution#

In this solution, we use a stack to store both individual integers and nested lists of integers. We push all the elements of the nested list onto the stack in the reverse order during initialization. We do this to ensure the correct processing of the nested list when using a stack-based iterator. This ensures that when we pop elements off the stack, they are in the correct order as they appeared in the original nested list.

The Has Next function performs a set of push and pop operations on the stack in the form of a loop. It checks if the top element of the stack is an integer. If so, it returns TRUE. Otherwise, if the top element is a list of integers, then it pops from the stack and pushes each element of the list onto the stack in reverse order. This way, the lists at the top of the stack are converted into individual integers whenever the Has Next function is called. If the stack is empty, the function returns FALSE.

The Next function first calls the Has Next function to check if there is an integer in the stack. If the Has Next function returns TRUE, it pops from the stack and returns this popped value.

Let’s look at the code for this solution:

main.py
nested_integers.py
from nested_integers import NestedIntegers

class NestedIterator:
    # NestedIterator constructor initializes the stack using the
    # given nested_list list
    def __init__(self, nested_list):
        self.nested_list_stack = list(reversed([NestedIntegers(val) for val in nested_list]))

    # has_next() will return True if there are still some integers in the
    # stack (that holds the nested_list elements); otherwise, it will return False
    def has_next(self):
        # Iterate while the stack is not empty
        while len(self.nested_list_stack) > 0:
            # Save the top value of the stack
            top = self.nested_list_stack[-1]
            # If the top value is an integer, return True
            if top.is_integer():
                return True
            # If the top is not an integer, it must be a list of integers;
            # pop the list from the stack and save it in top_list
            top_list = self.nested_list_stack.pop().get_list()
            # Push the values of the nested list onto the stack in reverse order
            i = len(top_list) - 1
            while i >= 0:
                self.nested_list_stack.append(top_list[i])
                i -= 1
        return False

    # next() will return the next integer from the nested_list
    def next(self):
        # Check if there is still an integer in the stack
        if self.has_next():
            # If true, pop and return the top of the stack
            return self.nested_list_stack.pop().get_integer()
        return None

def create_nested_iterator_structure(input_list):
    def parse_input(nested, input_list):
        if isinstance(input_list, int):
            nested.set_integer(input_list)
        else:
            for item in input_list:
                child = NestedIntegers()
                nested.add(child)
                parse_input(child, item)
    nested_structure = NestedIntegers()
    parse_input(nested_structure, input_list)
    return nested_structure

def create_nested_iterator_from_structure(nested_structure):
    def flatten(nested, result):
        if nested.is_integer():
            result.append(nested.get_integer())
        else:
            for child in nested.get_list():
                flatten(child, result)
    flattened_list = []
    flatten(nested_structure, flattened_list)
    return NestedIterator(flattened_list)

# Driver code
def main():
    inputs = [
        [1, [2, 3], 4],
        [3, [2, 3, 4], 4, [2, 3]],
        [[2, 3], 3, [2, 3], 4, [2, 3, 4, 5]],
        [1, [3, [4, [5, 6], 7], 8], 9],
        [[2, 3, [2, 3]]]
    ]
    for i, test_case_input in enumerate(inputs, start=1):
        print(i, ".\tOriginal structure: ", test_case_input, sep="")
        print("\n\tOutput:\n")
        nested_structure = create_nested_iterator_structure(test_case_input)
        test_case = create_nested_iterator_from_structure(nested_structure)
        while test_case.has_next():
            print("\titr.next(): ", test_case.next())
        print("-" * 100)

if __name__ == '__main__':
    main()
Flatten Nested List Iterator

Now, let's move to the next example for Stacks.

Exclusive Execution Time of Functions#

We are given an integer, n, representing the number of functions running in a single-threaded CPU, and an execution log, which is essentially a list of strings. Each string has the format {function id}:{"start" | "end"}:{timestamp}, indicating that the function with function id either started or stopped execution at the time identified by the timestamp value. Each function has a unique ID between 0 and n − 1. Compute the exclusive time of the functions in the program.

Note: The exclusive time is the sum of the execution times for all the calls to a specific function.

Solution#

To find out the exclusive execution time of functions, we will use a stack. Just as a single-threaded CPU uses a stack to manage function execution, preemption, and resumption, we can use a stack to perform our calculation. Every time we see a new start event, we’ll push the information regarding the previously running function onto the stack. When we see an end event, we’ll pop the currently running function from the stack. That way, all end events will be matched with the corresponding start event, and the execution time is correctly computed.

The stack will contain the starting time of all functions in the program. Here’s how the algorithm would work:

  1. First, we’ll read a line of text from the log and tokenize it to separate the function ID, the event type, and the timestamp.

  2. If the event type is "start", push the current log details to the stack.

  3. Otherwise, we pop the log details from the stack and add the execution time of the current function in the actual exclusive time.

  4. If the stack is not empty, the current function is a nested function call. Therefore, we subtract the execution time of this called function from the calling function's time, in advance.

  5. We store the execution time of each function at the index equal to the function ID in the result array.

  6. When the same function is called recursively, we add the function’s execution time to the current value at the specific index.

Let’s look at the solution code below:

main.py
logs.py
from logs import *

def exclusive_time(n, logs):
    logs_stack = []
    result = [0] * n
    for content in logs:
        # Extract the log details from the content (string)
        log = Log(content)
        if log.is_start:
            # Push the log details to the stack
            logs_stack.append(log)
        else:
            # Pop the log details from the stack
            top = logs_stack.pop()
            # Add the execution time of the current function to the result
            result[top.id] += (log.time - top.time + 1)
            # If the stack is not empty, subtract the current child function's
            # execution time from the parent function
            if logs_stack:
                result[logs_stack[-1].id] -= (log.time - top.time + 1)
    return result

# Driver code
def main():
    logs = [["0:start:0", "1:start:2", "1:end:3", "2:start:4", "2:end:7", "0:end:8"],
            ["0:start:0", "0:start:2", "0:end:5", "1:start:6", "1:end:6", "0:end:7"],
            ["0:start:0", "1:start:5", "1:end:6", "0:end:7"],
            ["0:start:0", "1:start:5", "2:start:8", "3:start:12", "4:start:15", "5:start:19", "5:end:22", "4:end:24", "3:end:27", "2:end:32", "1:end:35", "0:end:36"],
            ["0:start:0", "1:start:3", "1:end:6", "0:end:10"]
            ]
    n = [3, 2, 2, 6, 2]
    for i in range(len(n)):
        print(i + 1, ".\tn = ", n[i], sep="")
        print("\tlogs = ", logs[i], sep="")
        print("\tOutput: ", exclusive_time(n[i], logs[i]), sep="")
        print("-" * 100, "\n", sep="")

if __name__ == '__main__':
    main()
Exclusive Execution Time of Functions

We've looked at how stacks can be used as a coding pattern to solve problems that require data to be processed in LIFO order, now let's move on to the next pattern.

Pattern 3: Custom Data Structures#

Custom data structures are essentially modified versions of existing data structures tailored to address specific needs. We often need to go beyond standard data structures like arrays and hash tables to tackle unique challenges more effectively. For instance, a web crawler that processes numerous pages and URLs might use a specialized "URL queue" to manage these URLs efficiently, ensuring they are unique and prioritized based on relevance. Custom data structures involve creating custom classes that encapsulate the necessary functionality and properties needed to efficiently manage and manipulate the data. By designing data structures optimized for the problem domain, we can improve the performance and readability of our code while simplifying complex operations. To determine if a problem can benefit from the Custom Data Structures pattern, consider scenarios where standard data structures like arrays, lists, or maps are not sufficient or where specialized operations need to be performed frequently. Common problems suitable for this pattern include implementing priority queues, disjoint-set data structures, or specialized graph representations.

Let’s see how the following example illustrates the application of the Custom Data Structures pattern to efficiently solve the given coding problem:

Insert Delete GetRandom O(1)#

Implement a Random Set data structure that can perform the following operations:

  • Init(): This initializes the Random Set object.

  • Insert(): This function takes an integer, data, as its parameter and, if it does not already exist in the set, adds it to the set, returning TRUE. If the integer already exists in the set, the function returns FALSE.

  • Delete(): This function takes an integer, data, as its parameter and, if it exists in the set, removes it, returning TRUE. If the integer does not exist in the set, the function returns FALSE.

  • GetRandom(): This function takes no parameters. It returns an integer chosen at random from the set.

Note: Your implementation should aim to have a running time of O(1) (on average) for each operation.

Solution#

Arrays support constant-time lookups, but deletion in arrays typically takes O(n) time due to the need to shift elements. To overcome this limitation, we can pair the array with a data structure that offers constant-time deletion: a hash map. By combining arrays for efficient lookups with hash maps for rapid deletions, we can design our custom data structure. The implementation of our custom data structure's methods is as follows:

  • Insert(): When inserting data, we will store the new element in our array. Then, we insert a key-value pair in our hash map, where the key will be the new element and the value will be its index in the array. Both of these operations have a time complexity of O(1).

  • Delete(): Using the hash map, we find the index in the array at which the element is located. Swap the last element in the array with the one to be removed. Then, in the hash map, update the location of the element we just swapped with the target element, and remove the entry for the target element from the hash map. Lastly, pop out the target element from the array. Each of these five operations has a time complexity of O(1).

  • GetRandom(): For this operation, we can choose a number at random between 0 and n − 1, where n is the size of the array. We return the integer located at this random index.

Let’s take a look at the code for this solution below:

from random import choice

class RandomSet():
    def __init__(self):
        self.indexor = {}  # Maps the actual value to its index
        self.store = []    # Stores the actual values in an array

    # Function to insert a value
    def insert(self, val):
        """
        Inserts a value in the data structure.
        Returns True if it did not already contain the specified element.
        """
        if val in self.indexor:
            return False
        # Insert the actual value as a key and its index as a value
        self.indexor[val] = len(self.store)
        # Append the new value to the array
        self.store.append(val)
        return True

    # Function to remove a value
    def delete(self, val):
        """
        Removes a value from the data structure.
        Returns True if it contained the specified element.
        """
        if val in self.indexor:
            # Swap the last element in the array with the element
            # to delete, and update the location of the moved element
            # in its entry in the hash map
            last, i = self.store[-1], self.indexor[val]
            self.store[i], self.indexor[last] = last, i
            # Remove the target element's entry and pop the last array element
            del self.indexor[val]
            self.store.pop()
            return True
        return False

    # Function to return a random value
    def get_random(self):
        """
        Chooses an element at random from the data structure.
        """
        return choice(self.store)

# Driver code
def main():
    commands = [["I", "G", "I", "I", "R", "G"],
                ["I", "I", "R", "G", "R", "I"]]
    values = [[10, -1, 100, 1000, 200, -1], [30, 60, 10, -1, 30, 90]]
    for i in range(len(commands)):
        dataset = RandomSet()
        print(i + 1, ". Starting operations:", "\n", sep="")
        for j in range(len(commands[i])):
            if commands[i][j] == "I":
                print("\tInsert (", values[i][j], ") returns ",
                      dataset.insert(values[i][j]), sep="")
            elif commands[i][j] == "R":
                print("\tDelete (", values[i][j], ") returns ",
                      dataset.delete(values[i][j]), sep="")
            elif commands[i][j] == "G":
                print("\tGenerate Random() returns ",
                      dataset.get_random(), sep="")
        print("-" * 100)

if __name__ == '__main__':
    main()
Insert Delete GetRandom O(1)

Now that we've explored the design and implementation of Custom Data Structures, let's explore the next coding pattern in the list.

Pattern 4: Dynamic Programming#

The Dynamic Programming pattern is a technique that helps solve complex problems by breaking them down into simpler subproblems and storing their solutions to avoid redundant computations. This technique is useful when the problem can be divided into overlapping subproblems and optimal substructures. By storing and reusing intermediate results, dynamic programming enables us to solve problems with improved time and space complexity. For instance, a naive recursive approach to check if a string like "rotator" is a palindrome or to calculate Fibonacci numbers can be inefficient due to the repeated calculations of the same subproblems. Dynamic programming addresses these inefficiencies through two main strategies:

  • Memoization (top-down approach): This technique optimizes recursion by storing the results of subproblems the first time they are computed, preventing redundant calculations.

  • Tabulation (bottom-up approach): Tabulation constructs a table to store the results of smaller subproblems, gradually building up to solve the larger problem. 
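The two strategies can be sketched side by side using the Fibonacci numbers mentioned above; fib_memo and fib_tab are illustrative names chosen for this example:

```python
from functools import lru_cache

# Memoization (top-down): cache each subproblem's result the first
# time it is computed, so recursion never repeats work
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Tabulation (bottom-up): fill a table from the smallest subproblems
# up to the answer, with no recursion at all
def fib_tab(n):
    if n < 2:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_memo(30))  # 832040
print(fib_tab(30))   # 832040
```

Both run in O(n) time instead of the exponential time a naive recursion would take; tabulation additionally avoids recursion depth limits.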

Let’s see how the following example illustrates the application of the Dynamic Programming pattern to efficiently solve the given coding problem:

Maximum Product Subarray#

Given an integer array, nums, find a subarray that has the largest product, and return the product.

Solution#

The maximum product subarray problem can be solved efficiently using dynamic programming by dividing the array and solving the subproblems. In this case, we can find the maximum product of a subarray from the start to any element by using the maximum product of the subarray from the start to its previous element. Using this approach, we don’t need to calculate the previous products again; instead, we can use them from the stored products.

The simplest case is when the numbers in the array nums are all positive numbers. In that case, we calculate the maximum product by multiplying all the numbers to get the result. When we encounter zeros or negative numbers in the array, we have to use a slightly different approach, as explained below:

  • Zeros will reset the product to 0. If the result is less than the product, update it with the product. Otherwise, the result remains the same.

  • Negative numbers can be a little challenging. When multiplied, a single negative number can flip the larger product into a smaller number. However, if we get another negative number, we can get the larger product again. Unlike zero, here we have a chance that our larger product can be restored if negative numbers are encountered twice.

To solve the problem above, follow the steps below:

  1. Initialize max_so_far with the first element of nums. It will be used to track the maximum product up to an element. Initialize min_so_far with the first element of nums. It will be used to track the minimum product up to an element. Initialize result with max_so_far.

  2. Now, iterate through the rest of the elements of the array as follows:

    1. Update max_so_far and min_so_far by taking the maximum and minimum of the current value, the product of max_so_far and the current value, and the product of min_so_far and the current value, respectively.

    2. Update result by taking the maximum of max_so_far and result.

  3. When all elements have been traversed, return result.

Let’s look at the code for this solution below:

def max_product(nums):
    # Check if the input list is empty; return 0 if it is
    if len(nums) == 0:
        return 0
    # Initialize max_so_far and min_so_far with the first element in the list, and result with max_so_far
    max_so_far = nums[0]
    min_so_far = nums[0]
    result = max_so_far
    # Loop through the rest of the elements in the list
    for i in range(1, len(nums)):
        curr = nums[i]
        # Update max_so_far and min_so_far with the maximum and minimum of curr, max_so_far * curr, and min_so_far * curr
        # prev_max_so_far stores the value of max_so_far so that it does not get updated while calculating min_so_far
        prev_max_so_far = max_so_far
        max_so_far = max(curr, max_so_far * curr, min_so_far * curr)
        min_so_far = min(curr, prev_max_so_far * curr, min_so_far * curr)
        # Update result with the maximum of max_so_far and result
        result = max(max_so_far, result)
    # Return the final result
    return result

# Driver code
def main():
    input_arrays = [
        [-2, 0, -1],
        [2, 3, -2, 4],
        [2, -5, 3, 1, -4, 0, -10, 2],
        [1, 2, 3, 0, 4],
        [5, 4, 3, 10, 4, 1],
    ]
    for i in range(len(input_arrays)):
        print(i + 1, ".\t Input array: ", input_arrays[i], sep="")
        print("\n\t Maximum product: ", max_product(input_arrays[i]), sep="")
        print("-" * 100)

if __name__ == "__main__":
    main()
Maximum Product Subarray

After understanding how to use the Dynamic Programming effectively, it's time to explore the next coding pattern.

Pattern 5: Graphs#

The Graphs pattern offers a structured approach for solving problems involving graph data structure, where entities are represented as nodes and their relationships are represented as edges. This pattern involves traversing the graph using various algorithms like Depth-First Search (DFS) or Breadth-First Search (BFS) to explore its vertices and edges systematically.

  • Breadth-First Search (BFS) starts at a selected node and explores all its neighboring nodes at the current depth level before moving to the nodes at the next depth level. It traverses the graph level by level, often using a queue data structure to keep track of the nodes to be visited. BFS is well-suited for finding the shortest path between two nodes in an unweighted graph.

  • Depth-First Search (DFS) starts at a selected node and explores as far as possible along each branch before backtracking. It explores one branch fully before moving on to the next branch. DFS is often implemented using recursion or a stack data structure to keep track of the visited nodes and explore the unvisited ones. This algorithm is useful for tasks like finding cycles in a graph, topological sorting, or exploring all possible paths from a starting node.

Common problems suitable for this pattern include finding the shortest path between two nodes, determining the connectivity of a network, or identifying cycles within a graph.
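As a small illustration of the BFS use case described above, here is a sketch of finding the shortest path length (in edges) between two nodes of an unweighted, undirected graph; shortest_path_length is an illustrative helper written for this example, not part of the problem below:

```python
from collections import deque

def shortest_path_length(n, edges, source, target):
    # Build an adjacency list for an undirected graph with n nodes
    adjacency = [[] for _ in range(n)]
    for x, y in edges:
        adjacency[x].append(y)
        adjacency[y].append(x)
    # BFS explores nodes level by level, so the first time we reach
    # the target, we have found the minimum number of edges to it
    visited = {source}
    queue = deque([(source, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == target:
            return dist
        for neighbour in adjacency[node]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append((neighbour, dist + 1))
    # Target is unreachable from the source
    return -1

print(shortest_path_length(5, [[0, 1], [1, 2], [2, 3], [3, 4], [0, 4]], 0, 3))  # 2
```

Swapping the queue for a stack would turn this into a DFS, which still visits every reachable node but no longer guarantees the shortest path.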

Let’s check out the following interview problem to see how the Graphs pattern works:

Graph Valid Tree#

Given a number n of nodes, labeled from 0 to n − 1, and a list of undirected edges between them, determine whether these edges form a valid tree.

For the graph to be a valid tree, the number of edges must equal n − 1. If the total number of edges is less than n − 1, not all the graph nodes are connected, which defies the property of a tree. Similarly, more edges mean that there is a cycle in the graph; therefore, it is not a tree.

Now, let’s look at the algorithm to solve this problem:

For the base case, return FALSE if the number of edges is not equal to n − 1. Otherwise, create an adjacency list of length n, where adjacency[i] represents the list of all the graph nodes to which node i is connected.

Next, we initialize a visited set and a stack. We start exploring the graph from node 0. Using a depth-first search (DFS) approach, we traverse the graph, visiting each node and marking it as visited. If we encounter a neighbor that hasn’t been visited, we add it to the visited set and push it onto the stack for further exploration. Finally, after completing the DFS traversal, we check if all nodes have been visited.

Ultimately, we verify that the graph is fully connected and contains no cycle by comparing the length of the visited set with n, the number of nodes.

Let’s look at the code for this solution below:

def valid_tree(n, edges):
    # Check that exactly n - 1 edges exist
    if len(edges) != n - 1:
        return False
    adjacency = [[] for _ in range(n)]
    # Populate adjacency with all the connected nodes
    for x, y in edges:
        adjacency[x].append(y)
        adjacency[y].append(x)
    visited = {0}
    stack = [0]
    while stack:
        node = stack.pop()
        # Iterate over the neighbors of the popped node
        for neighbour in adjacency[node]:
            if neighbour not in visited:
                # Add the neighbor to the visited set and the stack if it isn't already visited
                visited.add(neighbour)
                stack.append(neighbour)
    return len(visited) == n

# Driver code
def main():
    n = [3, 4, 5, 5, 6]
    edges = [[[0, 1], [0, 2], [1, 2]],
             [[0, 1], [0, 2], [0, 3]],
             [[0, 1], [0, 2], [0, 3], [0, 4], [3, 4]],
             [[0, 1], [0, 2], [0, 3], [3, 4]],
             [[0, 1], [0, 2], [1, 3], [2, 4], [0, 5]]]
    for i in range(len(n)):
        print(i + 1, ".\t n = ", n[i], sep="")
        print("\t Edges = ", edges[i], sep="")
        print("\t Is the given graph a valid tree:", valid_tree(n[i], edges[i]))
        print("-" * 100)

if __name__ == "__main__":
    main()
Graph Valid Tree

Now that we've covered the Graphs pattern, let's move on to another frequently asked coding pattern.

Pattern 6: Sliding Window#

The Sliding Window pattern is a useful tool for efficiently solving problems involving sequential data such as arrays or strings, where computations on subsets of data must be repeated. In this technique, a window is defined as a contiguous subset of elements within the data that adjusts its boundaries as it moves through the data. The processing of sequential information is efficient because the window only focuses on relevant subsets of the data at any given time, avoiding unnecessary computations on the entire dataset. Computations are typically updated in constant time by considering elements entering or exiting the window. By subtracting leaving elements and adding new ones, the computational time remains constant with each movement of the window. Problems like Find Maximum in Sliding Window, Repeated DNA Sequences, and Best Time to Buy and Sell Stock are commonly solved using the Sliding Window pattern.
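The constant-time update described above can be seen in a minimal sketch. The function below is our own illustration, not part of the original article's solutions (the name `max_window_sum` is hypothetical): it finds the maximum sum of any window of size k by adding the element entering the window and subtracting the one leaving it.

```python
def max_window_sum(nums, k):
    # Sum of the first window of size k
    window_sum = sum(nums[:k])
    best = window_sum
    # Slide the window one step at a time: add the entering element
    # and subtract the leaving one (a constant-time update)
    for right in range(k, len(nums)):
        window_sum += nums[right] - nums[right - k]
        best = max(best, window_sum)
    return best

print(max_window_sum([2, 1, 5, 1, 3, 2], 3))  # 9 (window [5, 1, 3])
```

Because each step touches only the two boundary elements, the whole pass runs in O(n) instead of the O(n * k) a naive re-summation would take.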

Let’s see how the following example illustrates the application of the Sliding Window pattern to efficiently solve the given coding problem:

Minimum Window Substring#

Given two strings, s and t, find the minimum window substring in s, which has the following properties:

  1. It is the shortest substring of s that includes all of the characters present in t.

  2. It must contain at least the same frequency of each character as in t.

  3. The order of the characters does not matter here.

Solution#

Instead of iterating over each substring separately, we use the Sliding Window pattern. We are searching for the shortest substring of s that contains all the characters of t. Once we have found an initial window in s that contains all the characters of t, we can slide the window in order to find the shortest one. Let’s see how this approach can efficiently solve the problem.

The first step is to verify whether or not t is a valid string. If it isn’t, we return an empty string as our output. Then, we initialize two pointers to apply the sliding window technique to our solution. Before we discuss how they’re being used in our solution, we need to take a look at the other components at work.

There are two separate hash maps that we initialize, req_count and window. We populate the req_count hash map with the characters in t and their corresponding frequencies. This is done by traversing each character of t. If it doesn’t already exist in the hash map, we add it with count 1, but if it does, we increment its count. The window hash map is initialized to contain the same characters present in the req_count hash map, with the corresponding counts set to 0. The window hash map will be used to keep track of the frequency of the characters of t in the current window.

We also set up two more variables called current and required, which tell us whether we need to increase or decrease the size of our sliding window. The current variable initially holds the value 0 but is incremented by 1 when we find a character whose frequency in the window hash map matches its frequency in the req_count hash map. The required variable stores the size of the req_count hash map. The moment these two become equal, we have found all the characters that we were looking for in the current window. So, we can start trying to reduce our window size to find the shortest possible substring.

Next, let’s look at how we create and adjust this window. We initialize a variable called left, which acts as the left pointer. We don’t need to initialize a right pointer explicitly; it can simply be the iterator of our loop, right, which traverses the string from left to right. In each iteration of this loop, we perform the following steps:

  • If the new character is present in the window hash map, we increment its frequency by 1.

  • If the new character occurs in t, we check if its frequency in the window hash map is equal to its frequency in the req_count hash map. Here, we are actually checking if the current character has appeared the same number of times in the current window as it appears in t. If so, we increment current by 1.

  • Finally, if current and required become equal, this means that we have found a substring that meets our requirements. So, we start reducing the size of the current window to find a shorter substring that still meets these requirements. As long as current and required remain equal, we move the left pointer forward, decrementing the frequency of the character being dropped out of the window. By doing this, we remove all the unnecessary characters present at the start of the substring. We keep comparing the size of the current window with the length of the shortest substring we have found so far. If the size of the current window is less than the length of the minimum substring, we update the minimum substring.

  • Once current and required become unequal, it means we have checked all the possible substrings that meet our requirement. Now, we slide the right edge of the window one step forward and continue iterating over s.

When s has been traversed, we return the minimum window substring.

Let’s have a look at the code for the algorithm we just discussed.

def min_window(s, t):
    # Empty string scenario
    if t == "":
        return ""
    # Creating the two hash maps
    req_count = {}
    window = {}
    # Populating req_count hash map
    for c in t:
        req_count[c] = 1 + req_count.get(c, 0)
    # Populating window hash map
    for c in t:
        window[c] = 0
    # Setting up the conditional variables
    current, required = 0, len(req_count)
    # Setting up a variable containing the result's starting and ending point
    # with default values and a length variable
    res, res_len = [-1, -1], float("infinity")
    # Setting up the sliding window pointers
    left = 0
    for right in range(len(s)):
        c = s[right]
        # If the current character also occurs in t, update its frequency in the window hash map
        if c in t:
            window[c] = 1 + window.get(c, 0)
        # Updating the current variable
        if c in req_count and window[c] == req_count[c]:
            current += 1
        # Adjusting the sliding window
        while current == required:
            # Update our result
            if (right - left + 1) < res_len:
                res = [left, right]
                res_len = (right - left + 1)
            # Pop from the left of our window
            if s[left] in t:
                window[s[left]] -= 1
            # If the popped character was among the required characters and
            # removing it has reduced its frequency below its frequency in t, decrement current
            if s[left] in req_count and window[s[left]] < req_count[s[left]]:
                current -= 1
            left += 1
    left, right = res
    # Return the minimum window substring
    return s[left:right + 1] if res_len != float("infinity") else ""

# Driver code
def main():
    s = ["PATTERN", "LIFE", "ABRACADABRA", "STRIKER", "DFFDFDFVD"]
    t = ["TN", "I", "ABC", "RK", "VDD"]
    for i in range(len(s)):
        print(i + 1, ".\ts: ", s[i], "\n\tt: ", t[i], "\n\tThe minimum substring containing ",
              t[i], " is: ", min_window(s[i], t[i]), sep="")
        print("-" * 100)

if __name__ == '__main__':
    main()
Minimum Window Substring

With our understanding of Sliding Window established, let's move on to discussing the next coding pattern.

Pattern 7: K-Way Merge#

The K-Way Merge pattern is a technique for merging multiple sorted data structures, like arrays and linked lists, into one. This technique extends the classic merge sort by not just merging two lists but several at once. We repeatedly pick the smallest (or largest for descending order) elements from each list and keep adding them to a new list until all are merged. We can do this efficiently using a min-heap, where we add the first element of each list to the heap. We keep replacing the top of the heap with the next element from the same list until all elements are merged into the new list. Another approach is grouping lists into pairs and merging them through two-way merges. We do this by merging each pair of lists and repeating until we end up with a single fully sorted merged list. Both methods help us merge multiple lists, ensuring our data stays sorted. 
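The min-heap approach described above can be sketched in a few lines. This is our own illustrative helper, not code from the article (the name `k_way_merge` and the sample lists are assumptions): seed the heap with the head of each list, then repeatedly pop the smallest value and push its successor from the same list.

```python
from heapq import heappush, heappop

def k_way_merge(lists):
    # Seed the min-heap with the first element of each non-empty list;
    # store (value, list index, element index) so we know what to push next
    heap = []
    for i, lst in enumerate(lists):
        if lst:
            heappush(heap, (lst[0], i, 0))
    merged = []
    while heap:
        value, i, j = heappop(heap)
        merged.append(value)
        # Replace the popped element with the next one from the same list
        if j + 1 < len(lists[i]):
            heappush(heap, (lists[i][j + 1], i, j + 1))
    return merged

print(k_way_merge([[1, 4, 7], [2, 5], [3, 6, 8]]))
# [1, 2, 3, 4, 5, 6, 7, 8]
```

With k lists and n total elements, the heap never holds more than k entries, so the merge runs in O(n log k).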

Let’s see how the following example illustrates the application of the K-Way Merge pattern to efficiently solve the given coding problem:

Find K Pairs with Smallest Sums#

Given two lists and an integer k, find k pairs of numbers with the smallest sums so that each list contributes one number to each pair.

Solution#

We can use the K-Way Merge pattern here because it helps solve problems where we need to compute a result by comparing the elements of multiple sorted lists. While traversing through the elements of both lists and making pairs, we use a min-heap to track the pair with the smallest sum encountered so far.

As proposed above, we use a min-heap to store three things:

  • The smallest sum (at the root of the min-heap).

  • The sum of each pair.

  • The list indexes of each element in the pair.

Initially, we can start making pairs by adding only the first element of the second list to each element of the first list.

Let’s take two lists as an example, [2, 8, 9] and [1, 3, 6]. We can make 3 pairs in the first iteration and store them in the min-heap:

[(2 + 1) = 3, (8 + 1) = 9, (9 + 1) = 10]

Now, we start a second loop that iterates while elements are in the min-heap and while we have yet to find all k smallest pairs. In each iteration, we perform three steps:

  1. Pop the smallest pair from the min-heap, noting the sum of the pair and the list indexes of each element, calling them the i and j indexes, respectively.

  2. Add this pair to the result list.

  3. Increment j to move forward in the second list, and compose a new pair with the same element from the first list and the next element in the second list. This step is skipped if a new pair can’t be formed, that is, when the popped pair’s j already points to the last element of the second list.

Supposing k = 9 in our example, the sequence of pairs pushed and popped in the second loop is as follows:


1. Pop: (2 + 1) = 3 // 1st pair // Min-heap: [(8 + 1) = 9, (9 + 1) = 10]
2. Push: (2 + 3) = 5 // Min-heap: [(2 + 3) = 5, (8 + 1) = 9, (9 + 1) = 10]
3. Pop: (2 + 3) = 5 // 2nd pair // Min-heap: [(8 + 1) = 9, (9 + 1) = 10]
4. Push: (2 + 6) = 8 // Min-heap: [(2 + 6) = 8, (8 + 1) = 9, (9 + 1) = 10]
5. Pop: (2 + 6) = 8 // 3rd pair // Min-heap: [(8 + 1) = 9, (9 + 1) = 10]
<-- No pair to push as at the end of list 2 -->
6. Pop: (8 + 1) = 9 // 4th pair // Min-heap: [(9 + 1) = 10]
7. Push: (8 + 3) = 11 // Min-heap: [(9 + 1) = 10, (8 + 3) = 11]
8. Pop: (9 + 1) = 10 // 5th pair // Min-heap: [(8 + 3) = 11]
9. Push: (9 + 3) = 12 // Min-heap: [(8 + 3) = 11, (9 + 3) = 12]
10. Pop: (8 + 3) = 11 // 6th pair // Min-heap: [(9 + 3) = 12]
11. Push: (8 + 6) = 14 // Min-heap: [(9 + 3) = 12, (8 + 6) = 14]
12. Pop: (9 + 3) = 12 // 7th pair // Min-heap: [(8 + 6) = 14]
13. Push: (9 + 6) = 15 // Min-heap: [(8 + 6) = 14, (9 + 6) = 15]
14. Pop: (8 + 6) = 14 // 8th pair // Min-heap: [(9 + 6) = 15]
<-- No pair to push as at the end of list 2 -->
15. Pop: (9 + 6) = 15 // 9th pair // Min-heap: []

At all times, the smallest sum is at the root of the min-heap. Overall, we remove k pairs from the min-heap.

Let’s look at the code for this solution below:

from heapq import *

def k_smallest_pairs(list1, list2, k):
    # Storing the length of list1 to use in the loop below
    list_length = len(list1)
    # Declaring a min-heap to keep track of the smallest sums
    min_heap = []
    # To store the pairs with the smallest sums
    pairs = []
    # Iterate over the first min(k, len(list1)) elements of list1
    for i in range(min(k, list_length)):
        # Computing the sum of each element of list1 with the first
        # element of list2 and placing it in the min-heap
        heappush(min_heap, (list1[i] + list2[0], i, 0))
    counter = 1
    # Iterate over the elements of the min-heap, but only up to k pairs
    while min_heap and counter <= k:
        # Retrieving the sum at the top of the min-heap and the
        # corresponding indexes i and j
        sum_of_pairs, i, j = heappop(min_heap)
        # Add the pair with the smallest sum to the result list
        pairs.append([list1[i], list2[j]])
        # Increment the index for the 2nd list, as we've
        # compared all possible pairs with the jth index of list2
        next_element = j + 1
        # If the next element is available in list2, add the new pair to the heap
        if len(list2) > next_element:
            heappush(min_heap,
                     (list1[i] + list2[next_element], i, next_element))
        counter += 1
    # Return the pairs with the smallest sums
    return pairs

# Driver code
def main():
    # Multiple test inputs
    list1 = [[2, 8, 9],
             [1, 2, 300],
             [1, 1, 2],
             [4, 6],
             [4, 7, 9],
             [1, 1, 2]]
    list2 = [[1, 3, 6],
             [1, 11, 20, 35, 300],
             [1, 2, 3],
             [2, 3],
             [4, 7, 9],
             [1]]
    k = [9, 30, 1, 2, 5, 4]
    for i in range(len(k)):
        print(i + 1, ".\t Input pairs: ", list1[i], ", ", list2[i],
              f"\n\t k = {k[i]}", sep="")
        print("\t Pairs with the smallest sum are: ",
              k_smallest_pairs(list1[i], list2[i], k[i]), sep="")
        print("-" * 100)

if __name__ == '__main__':
    main()
Find K Pairs with Smallest Sums

Now that we've discussed the K-Way Merge pattern, let's turn our attention to another important coding pattern.

Pattern 8: Top K Elements#

Unlike other techniques that require sorting the entire dataset to find the top or bottom k elements, this technique uses a heap to access the required elements from unsorted data. It begins by processing the elements of the dataset and inserting the first k of them into the heap. As new elements are encountered, they are compared with those in the collection. If the new element is larger (or smaller, depending on whether we are finding the top k largest or smallest elements), it replaces one of the elements in the collection. This ensures that the collection always contains the top k elements seen so far, which makes the pattern useful for handling large datasets or real-time data where performance matters. Using this pattern saves both time and computational resources while solving problems like K Largest Elements in a Stream, Top K Frequent Words, and K Smallest Elements in an Array.
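The core of the pattern fits in a short sketch. This is our own illustration, not from the article (the function name `top_k_largest` is hypothetical): keep a min-heap of size k whose root, the smallest of the k largest elements seen so far, is evicted whenever a larger element arrives.

```python
import heapq

def top_k_largest(nums, k):
    # Build a min-heap from the first k elements; its root is the
    # smallest of the k largest elements seen so far
    heap = nums[:k]
    heapq.heapify(heap)
    for num in nums[k:]:
        # A larger incoming element evicts the current minimum
        if num > heap[0]:
            heapq.heapreplace(heap, num)
    return sorted(heap, reverse=True)

print(top_k_largest([3, 1, 5, 12, 2, 11], 3))  # [12, 11, 5]
```

Each element costs at most one O(log k) heap operation, so the whole pass is O(n log k) rather than the O(n log n) of a full sort.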

Let’s see how the following example illustrates the application of the Top K Elements pattern to efficiently solve the given coding problem:

Kth Largest Element in an Array#

Given an unsorted array, find the kth largest element.

Note: We need to find the kth largest element in the sorted order, not the kth distinct element.

Solution#

We can use a min-heap to efficiently return the kth largest number from the given unsorted array. A min-heap ensures that the minimum element is always at the root, so it keeps the k largest elements of the array while exposing the smallest of them. Once we have processed all the elements of the array through the min-heap, we can simply return its root as the kth largest element.

The algorithm works as follows:

  • Insert the first k elements into the min-heap.

  • For each subsequent element in the array, compare it with the root (minimum element) of the min-heap.

    • If the current element in the array is greater than the root element in our min-heap, remove the root element and insert the current element from the array.

    • Continue inserting and removing elements in the min-heap until we have processed all elements in the array.

  • After processing all the elements, the min-heap will contain the k largest elements from the input array. The smallest of them, the kth largest element, will be at the root of the min-heap.

Let’s look at the code for this solution:

import heapq

def find_kth_largest(nums, k):
    # Initialize an empty list
    k_numbers_min_heap = []
    # Add the first k elements to the list
    for i in range(k):
        k_numbers_min_heap.append(nums[i])
    # Convert the list into a min-heap
    heapq.heapify(k_numbers_min_heap)
    # Loop through the remaining elements in the 'nums' array
    for i in range(k, len(nums)):
        # Compare the current element with the minimum
        # element (root) of the min-heap
        if nums[i] > k_numbers_min_heap[0]:
            # Remove the smallest element
            heapq.heappop(k_numbers_min_heap)
            # Add the current element
            heapq.heappush(k_numbers_min_heap, nums[i])
    # The root of the heap holds the kth largest element
    return k_numbers_min_heap[0]

# Driver code
def main():
    input_array = [
        [1, 5, 12, 2, 11, 9, 7, 30, 20],
        [5, 2, 9, -3, 7],
        [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
        [1, 4, 6, 0, 2],
        [3, 5, 2, 3, 8, 5, 3]
    ]
    k = [3, 1, 9, 1, 4]
    for i in range(len(input_array)):
        print(i + 1, ".\tInput array: ", input_array[i], sep="")
        print("\tValue of k: ", k[i], sep="")
        print("\tkth largest number: ", find_kth_largest(input_array[i], k[i]), sep="")
        print("-" * 100)

if __name__ == '__main__':
    main()
Kth Largest Element in an Array

After understanding how to use the Top K Elements pattern effectively, it's time to explore the last, but certainly not the least, coding pattern from the list of patterns frequently asked about by LinkedIn.

Pattern 9: Hash Maps#

The Hash Maps pattern is a tool for storing and retrieving key-value pairs, offering constant-time complexity for insertion, deletion, and lookup operations on average. Quicker lookup time makes hash maps ideal for tasks like caching, indexing, or frequency counting. 

The core operations of hash maps are the following:

1. Insert: A key-value pair is added, with the hash function determining the index for storage. While typically quick (O(1) on average), this can degrade to O(n) if many keys collide at the same index.

2. Search: Values are retrieved by applying the hash function to compute the key’s index, typically a quick operation (O(1)), though it can degrade to O(n) when handling collisions or during resizing.

3. Remove: Values are removed by deleting the entry at the key’s computed index. Usually quick (O(1)), the process can slow down to O(n) in certain scenarios like collisions or resizing.

Using hash maps significantly reduces lookup times to an average of O(1) compared to managing separate arrays for related data. This efficiency makes hash maps a useful tool for optimizing performance in algorithmic problems. Common interview questions where the Hash Maps pattern demonstrates effectiveness include Logger Rate Limiter, Next Greater Element, Isomorphic Strings, and Longest Palindrome.
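The frequency-counting use case mentioned above takes only a few lines with a hash map. This is a hypothetical helper of our own (the name `char_frequencies` is not from the article), where each character costs one average O(1) dictionary update.

```python
def char_frequencies(text):
    # dict.get with a default of 0 turns frequency counting into a
    # single average O(1) update per character
    counts = {}
    for ch in text:
        counts[ch] = counts.get(ch, 0) + 1
    return counts

print(char_frequencies("interview"))
# {'i': 2, 'n': 1, 't': 1, 'e': 2, 'r': 1, 'v': 1, 'w': 1}
```

The same one-pass counting idea underlies the two mapping dictionaries used in the Isomorphic Strings solution below.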

Let’s see how the following example illustrates the application of the Hash Maps pattern to efficiently solve the given coding problems:

Isomorphic Strings#

Given two strings, check whether they are isomorphic to each other. Two strings are isomorphic if a fixed mapping exists from the characters of one string to the characters of the other. For example, if there are two instances of the character "a" in the first string, both instances should be converted to the same character in the second string (which could also remain "a" if "a" is mapped to itself). This converted character should be the same in both positions of the second string, since there is a fixed mapping from the character "a" in the first string to the converted character in the second string.

Note: Two different characters cannot map to the same character. Furthermore, all the instances of a character must be replaced with another character while protecting the order of characters.

Solution#

We can use the Hash Maps pattern to solve the problem more efficiently. The idea is to use a suitable data structure (dictionary) to store the mapping of characters of string1 to characters of string2 and vice versa.

We use the following two hash maps:

  • map_str1_str2: This stores the mapping from string1 to string2.

  • map_str2_str1: This stores the mapping of the characters from string2 to string1.

Since the length of both strings is the same, we can traverse the indexes of either one of the two strings in a loop.

Within the loop, we do the following:

Note: i represents the ith iteration of the loop that traverses the string.

  • We check if string1[i] is present as a key in map_str1_str2 and that the value corresponding to this key is not string2[i]. If both these conditions are satisfied, we return FALSE since a fixed mapping does not exist for string1[i] to string2[i]. Otherwise, we skip this condition and move forward.

  • We check if string2[i] is present as a key in map_str2_str1 and that the value corresponding to this key is not string1[i]. If both these conditions are satisfied, we return FALSE since a fixed mapping does not exist for string2[i] to string1[i]. Otherwise, we skip this condition and move forward.

  • If neither of the above conditions returns FALSE, it means the mapping is not yet defined in either hash map. So, we add the key-value pairs of the mapping from string1[i] to string2[i] and vice versa in the respective hash maps.

  • If the loop ends successfully, it means both the strings are isomorphic since no condition that invalidates the isomorphic properties of both strings was met in the loop.

We can see the code of this solution below:

def is_isomorphic(string1, string2):
    map_str1_str2 = {}
    map_str2_str1 = {}
    for i in range(len(string1)):
        char1 = string1[i]
        char2 = string2[i]
        # A conflicting mapping from string1 to string2 means the strings aren't isomorphic
        if char1 in map_str1_str2 and map_str1_str2[char1] != char2:
            return False
        # Likewise for the reverse mapping from string2 to string1
        if char2 in map_str2_str1 and map_str2_str1[char2] != char1:
            return False
        # Record the mapping in both directions
        map_str1_str2[char1] = char2
        map_str2_str1[char2] = char1
    return True

# Driver code
def main():
    A = ["egg", "foo", "paper", "badc", "aaeaa"]
    B = ["all", "bar", "title", "baba", "uuxyy"]
    for i in range(len(A)):
        print(i + 1, ".\tString 1 = ", A[i], sep="")
        print("\tString 2 = ", B[i], sep="")
        print("\n\tIsomorphic String ?", is_isomorphic(A[i], B[i]))
        print(100 * '-')

if __name__ == '__main__':
    main()
Isomorphic Strings

That concludes our exploration of the coding patterns behind the questions LinkedIn asks most frequently.

To ace your LinkedIn interview, mastering the patterns we have just discussed is important. Understanding the underlying patterns behind the solutions you devise will not only help you tackle similar problems in the future but also demonstrate your depth of understanding to interviewers. We have explored some of the most common coding patterns through interview questions frequently asked by LinkedIn, but it’s just a start. Remember, practice makes perfect, so dedicate time to solving problems regularly and seek feedback to improve further. For even better preparation, you may explore the following courses by Educative, which cover a wide range of coding patterns, as well as Dynamic Programming patterns, in various programming languages:

Moreover, if you are looking for a customized learning plan, take a look at the following paths by Educative:

With determination, preparation, and a solid grasp of coding patterns, you’ll be well-equipped to tackle any coding challenge that comes your way during the LinkedIn interview process. Best of luck!


Written By:
Minahil Yaser