As a platform committed to helping the world's professionals succeed, LinkedIn places equal emphasis on nurturing talent within the company. This means that you have ample opportunity to build new skills and advance your career.
To find skilled engineers who are eager to grow with the company, LinkedIn typically evaluates candidates through:
An introductory conversation with an engineering recruiter
A technical interview
Interviews with a few engineering leaders
At some point in the interview loop, you will be asked to solve coding problems tailored to the team you'd be joining. This helps hiring managers assess your technical skills and approach to problem-solving.
Solving coding problems in front of industry experts can feel intimidating. The last thing you want is to encounter a tricky problem and freeze. Fortunately, you can prepare for a wide range of problems by studying coding patterns.
Here's how it works.
Let's say you're given a brand new coding problem. If you recognize the underlying pattern, you can quickly identify relevant strategies and algorithms — then apply them to find a solution.
After reviewing the most common types of coding problems in LinkedIn interviews, we've selected 9 coding patterns that can help you ace your technical interview. We'll break down each pattern, then provide sample coding problems for each.
Let's dive in!
Traditionally, when traversing a tree, one might start at the root node and visit each node, moving down one level at a time. This approach, although straightforward, may not always be efficient for certain types of tree-related problems. Naive solutions to some problems might require traversing the same nodes repeatedly, leading to inefficiencies in the traversal process. The Depth-First Search (DFS) pattern operates by recursively traversing from the root node down to the deepest level of a tree, exploring as far as possible along each branch before backtracking. It follows a straightforward principle: explore one branch fully before moving on to the next. This recursive exploration continues until all nodes have been visited or until a specific condition is met. This technique is useful for tasks like finding the height or depth of a tree, determining the lowest common ancestor of two nodes, or generating different traversal orders such as preorder, inorder, and postorder.
There is another closely related technique called Breadth-First Search (BFS), which traverses nodes level by level, exploring all nodes at the current level before moving on to the next level. This approach prioritizes breadth over depth, making it particularly useful for problems that involve finding the shortest path between nodes, locating neighbors at a specific distance, or exploring all possible paths of a fixed length in a tree structure. In contrast, Depth-First Search (DFS) explores one branch as deeply as possible before backtracking, prioritizing depth over breadth.
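To see the contrast concretely, here is a minimal sketch, assuming a simple TreeNode class with val, left, and right attributes, that runs a recursive preorder DFS and a queue-based BFS over the same small tree:

from collections import deque

class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def dfs_preorder(node, visit):
    # Explore one branch fully before backtracking
    if node is None:
        return
    visit(node.val)
    dfs_preorder(node.left, visit)
    dfs_preorder(node.right, visit)

def bfs_level_order(root, visit):
    # Explore all nodes at the current level before moving deeper
    queue = deque([root] if root else [])
    while queue:
        node = queue.popleft()
        visit(node.val)
        if node.left:
            queue.append(node.left)
        if node.right:
            queue.append(node.right)

# Example tree: 1 has children 2 and 3; 2 has children 4 and 5
root = TreeNode(1, TreeNode(2, TreeNode(4), TreeNode(5)), TreeNode(3))
dfs_preorder(root, print)     # 1, 2, 4, 5, 3
bfs_level_order(root, print)  # 1, 2, 3, 4, 5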
Let’s check out the following interview problem to see how the Tree Depth-First Search pattern works:
Given the root node of a binary tree and two of its nodes, p and q, find the lowest common ancestor of p and q.

Note: The lowest common ancestor of two nodes, p and q, is defined as the lowest node in the binary tree that has both p and q as descendants. A node can also be a descendant of itself. For example, if q is a descendant of p, and we know that p is a descendant of itself, then p will be the lowest common ancestor of p and q.
We will use depth-first search to find the lowest common ancestor of p and q in the binary tree. The algorithm is as follows:

First, we initialize three tracking variables, mid, left, and right, to track whether p or q has been found.

Then, we traverse the binary tree recursively using depth-first search, starting from the root node.

If we find p or q during our traversal of the binary tree, we set the mid variable to TRUE and return mid.

The left tracking variable is used to store the result of the left subtree of the current node, and the right tracking variable is used to store the result of the right subtree of the current node. So, the results from the recursive calls are stored in their respective tracking variables.

Finally, during the traversal of the binary tree, if any two of the tracking variables, mid, left, or right, are TRUE, we set the current node as our answer node because this node will be the lowest common ancestor of p and q.

We need to understand the purpose of each of the tracking variables to answer the question of how a node becomes the lowest common ancestor if any two of the tracking variables are TRUE. If the left and right variables are TRUE for any node, it means that both nodes are descendants of the current node, and therefore, the current node is the lowest common ancestor of the two nodes. However, if mid and either one of the left or right variables are TRUE, then either p or q is the current node itself, and the other is a descendant of the current node. Since a node is an ancestor of itself, the lowest common ancestor of the input nodes is the current node.
Let’s look at the code for this solution below:
from BinaryTree import *

class Solution:
    def __init__(self):
        self.lca = None

    def lowest_common_ancestor(self, root, p, q):
        self.lowest_common_ancestor_rec(root, p, q)
        return self.lca

    # helper function to find the lowest common ancestor recursively
    def lowest_common_ancestor_rec(self, current_node, p, q):
        # if current_node does not exist
        if not current_node:
            return False

        # initialize tracking variables
        left, right, mid = False, False, False

        # check if either of the input nodes is the current_node
        if p == current_node or q == current_node:
            mid = True

        # traverse binary tree using depth-first search
        left = self.lowest_common_ancestor_rec(current_node.left, p, q)

        # if the lowest common ancestor has not been found, only then traverse the right subtree
        if not self.lca:
            right = self.lowest_common_ancestor_rec(current_node.right, p, q)

        # if any two of the tracking variables are true, set current_node as answer node
        if mid + left + right >= 2:
            self.lca = current_node

        # return true if any of the tracking variables is true
        return mid or left or right

# driver code
def main():
    input_trees = [
        [TreeNode(100), TreeNode(50), TreeNode(200), TreeNode(25), TreeNode(75), TreeNode(350)],
        [TreeNode(100), TreeNode(200), TreeNode(75), TreeNode(50), TreeNode(25), TreeNode(350)],
        [TreeNode(350), TreeNode(100), TreeNode(75), TreeNode(50), TreeNode(200), TreeNode(25)],
        [TreeNode(100), TreeNode(50), TreeNode(200), TreeNode(25), TreeNode(75), TreeNode(350)],
        [TreeNode(25), TreeNode(50), TreeNode(75), TreeNode(100), TreeNode(200), TreeNode(350)]
    ]
    input_nodes = [
        [25, 75],
        [50, 350],
        [100, 200],
        [50, 25],
        [350, 200]
    ]
    for i in range(len(input_trees)):
        solution = Solution()
        tree = BinaryTree(input_trees[i])
        print(i + 1, ".\tBinary tree:", sep="")
        display_tree(tree.root)
        print("\tp = ", input_nodes[i][0])
        print("\tq = ", input_nodes[i][1])
        lca = solution.lowest_common_ancestor(tree.root, tree.find(input_nodes[i][0]), tree.find(input_nodes[i][1]))
        print("\n\tLowest common ancestor: ", lca.data, sep="")
        print("-" * 100)

if __name__ == "__main__":
    main()
Now that we've covered the Tree Depth-First Search pattern, let's move on to another frequently asked coding pattern.
A stack is a linear data structure that organizes and manages data in a Last In, First Out (LIFO) manner. This means the last element added to the stack is the first to be removed. Think of it like a stack of plates where you can only add or remove plates from the top.
Using stacks as a coding pattern involves the following fundamental operations:
| Operation | Time Complexity | Description |
|-----------|-----------------|-------------|
| Push | O(1) | Adds the element at the top of the stack. |
| Pop | O(1) | Removes and returns the element from the top of the stack. |
| Peek | O(1) | Returns the element at the top of the stack without removing it. |
| IsEmpty | O(1) | Checks whether the stack is empty or not. Returns TRUE if the stack is empty, FALSE otherwise. |
| Size | O(1) | Returns the total number of elements in the stack. |
Stacks are commonly used for tasks like expression evaluation, syntax parsing, or tracking state changes in algorithms. To identify if a problem can be solved using the Stacks pattern, look for scenarios where the last in, first out property is advantageous or where tracking state changes in a last in, first out manner is necessary. Examples of common interview problems that can be tackled using the Stacks pattern include evaluating arithmetic expressions, checking balanced parentheses, or implementing a browser’s back functionality.
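As a quick illustration of these operations, here is a minimal sketch, assuming a plain Python list as the underlying storage:

class Stack:
    def __init__(self):
        self._items = []

    def push(self, value):
        # O(1): add the element at the top of the stack
        self._items.append(value)

    def pop(self):
        # O(1): remove and return the element from the top of the stack
        return self._items.pop()

    def peek(self):
        # O(1): return the top element without removing it
        return self._items[-1]

    def is_empty(self):
        # O(1): True if the stack has no elements
        return len(self._items) == 0

    def size(self):
        # O(1): total number of elements in the stack
        return len(self._items)

stack = Stack()
for plate in ["bottom", "middle", "top"]:
    stack.push(plate)
print(stack.pop())   # "top" (last in, first out)
print(stack.peek())  # "middle"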
Let’s see how the following examples illustrate the application of the Stacks pattern to efficiently solve these problems:
Given a nested list of integers where each element is either an integer or a list whose elements may also be integers or other integer lists, implement an iterator to flatten the nested list.
Implement the Nested Iterator class that has the following functions:
Init: This initializes the iterator with the nested list.
Next (): This returns the next integer in the nested list.
Has Next (): This returns TRUE if there are still some integers in the nested list. Otherwise, it returns FALSE.
In this solution, we use a stack to store both individual integers and nested lists of integers. We push all the elements of the nested list onto the stack in the reverse order during initialization. We do this to ensure the correct processing of the nested list when using a stack-based iterator. This ensures that when we pop elements off the stack, they are in the correct order as they appeared in the original nested list.
The Has Next function performs a set of push and pop operations on the stack in the form of a loop. It checks if the top element of the stack is an integer. If so, it returns TRUE. Otherwise, if the top element is a list of integers, then it pops from the stack and pushes each element of the list onto the stack in reverse order. This way, the lists at the top of the stack are converted into individual integers whenever the Has Next function is called. If the stack is empty, the function returns FALSE.
The Next function first calls the Has Next function to check if there is an integer in the stack. If the Has Next function returns TRUE, it pops from the stack and returns this popped value.
Let’s look at the code for this solution:
from nested_integers import NestedIntegers

class NestedIterator:
    # NestedIterator constructor initializes the stack using the
    # given nested_list list
    def __init__(self, nested_list):
        self.nested_list_stack = list(reversed([NestedIntegers(val) for val in nested_list]))

    # has_next() will return True if there are still some integers in the
    # stack (that has nested_list elements) and, otherwise, will return False.
    def has_next(self):
        # Iterate in the stack while the stack is not empty
        while len(self.nested_list_stack) > 0:
            # Save the top value of the stack
            top = self.nested_list_stack[-1]
            # Check if the top value is integer, if true return True,
            # if not continue
            if top.is_integer():
                return True
            # If the top is not an integer, it must be the list of integers
            # Pop the list from the stack and save it in the top_list
            top_list = self.nested_list_stack.pop().get_list()
            # Save the length of the top_list in i and iterate in the list
            i = len(top_list) - 1
            while i >= 0:
                # Append the values of the nested list into the stack
                self.nested_list_stack.append(top_list[i])
                i -= 1
        return False

    # next will return the integer from the nested_list
    def next(self):
        # Check if there is still an integer in the stack
        if self.has_next():
            # If true pop and return the top of the stack
            return self.nested_list_stack.pop().get_integer()
        return None

# Driver code
def create_nested_iterator_structure(input_list):
    def parse_input(nested, input_list):
        if isinstance(input_list, int):
            nested.set_integer(input_list)
        else:
            for item in input_list:
                child = NestedIntegers()
                nested.add(child)
                parse_input(child, item)
    nested_structure = NestedIntegers()
    parse_input(nested_structure, input_list)
    return nested_structure

def create_nested_iterator_from_structure(nested_structure):
    def flatten(nested, result):
        if nested.is_integer():
            result.append(nested.get_integer())
        else:
            for child in nested.get_list():
                flatten(child, result)
    flattened_list = []
    flatten(nested_structure, flattened_list)
    return NestedIterator(flattened_list)

# Driver code
def main():
    inputs = [
        [1, [2, 3], 4],
        [3, [2, 3, 4], 4, [2, 3]],
        [[2, 3], 3, [2, 3], 4, [2, 3, 4, 5]],
        [1, [3, [4, [5, 6], 7], 8], 9],
        [[2, 3, [2, 3]]]
    ]
    for i, test_case_input in enumerate(inputs, start=1):
        print(i, ".\tOriginal structure: ", inputs[i - 1], sep="")
        print("\n\tOutput:\n")
        nested_structure = create_nested_iterator_structure(test_case_input)
        test_case = create_nested_iterator_from_structure(nested_structure)
        while test_case.has_next():
            print("\titr.next(): ", test_case.next())
        print("-" * 100)

if __name__ == '__main__':
    main()
Now, let's move to the next example for Stacks.
We are given an integer number, n, representing the number of functions running in a single-threaded CPU, and an execution log, which is essentially a list of strings. Each string has the format {function id}:{"start" | "end"}:{timestamp}, indicating that the function with function id either started or stopped execution at the time identified by the timestamp value. Each function has a unique ID between 0 and n - 1. Return the exclusive time of each function in a list, where the value at each index is the exclusive time of the function with that ID.
Note: The exclusive time is the sum of the execution times for all the calls to a specific function.
To find out the exclusive execution time of functions, we will use a stack. Just as a single-threaded CPU uses a stack to manage function execution, preemption, and resumption, we can use a stack to perform our calculation. Every time we see a new start event, we’ll push the information regarding the previously running function onto the stack. When we see an end event, we’ll pop the currently running function from the stack. That way, all end events will be matched with the corresponding start event, and the execution time is correctly computed.
The stack will contain the starting time of all functions in the program. Here’s how the algorithm would work:
First, we’ll read a line of text from the log and tokenize it to separate the function ID, the event type, and the timestamp.
If the event type is "start", push the current log details to the stack.
Otherwise, we pop the log details from the stack and add the execution time of the current function in the actual exclusive time.
If the stack is not empty, the current function is a nested function call. Therefore, we subtract the execution time of this called function from the calling function. We decrease the time in the calling function, in advance.
We store the execution time of each function at the index equal to the function ID in the result
array.
When the same function is called recursively, we add the function’s execution time to the current value at the specific index.
Let’s look at the solution code below:
from logs import *

def exclusive_time(n, logs):
    logs_stack = []
    result = [0] * n
    for content in logs:
        # Extract the logs details from the content(string)
        logs = Log(content)
        if logs.is_start:
            # Push the logs details to the stack
            logs_stack.append(logs)
        else:
            # Pop the logs details from the stack
            top = logs_stack.pop()
            # Add the execution time of the current function in the actual result
            result[top.id] += (logs.time - top.time + 1)
            # If the stack is not empty, subtract the current child function execution time from the parent function
            if logs_stack:
                result[logs_stack[-1].id] -= (logs.time - top.time + 1)
    return result

# Driver code
def main():
    logs = [
        ["0:start:0", "1:start:2", "1:end:3", "2:start:4", "2:end:7", "0:end:8"],
        ["0:start:0", "0:start:2", "0:end:5", "1:start:6", "1:end:6", "0:end:7"],
        ["0:start:0", "1:start:5", "1:end:6", "0:end:7"],
        ["0:start:0", "1:start:5", "2:start:8", "3:start:12", "4:start:15", "5:start:19", "5:end:22", "4:end:24", "3:end:27", "2:end:32", "1:end:35", "0:end:36"],
        ["0:start:0", "1:start:3", "1:end:6", "0:end:10"]
    ]
    n = [3, 2, 2, 6, 2]
    x = 1
    for i in range(len(n)):
        print(x, ".\tn = ", n[i], sep="")
        print("\tlogs = ", logs[i], sep="")
        print("\tOutput: ", exclusive_time(n[i], logs[i]), sep="")
        print("-" * 100, "\n", sep="")
        x += 1

if __name__ == '__main__':
    main()
We've looked at how stacks can be used as a coding pattern to solve problems that require data to be processed in LIFO order. Now, let's move on to the next pattern.
Custom data structures are essentially modified versions of existing data structures tailored to address specific needs. We often need to go beyond standard data structures like arrays and hash tables to tackle unique challenges more effectively. For instance, a web crawler that processes numerous pages and URLs might use a specialized "URL queue" to manage these URLs efficiently, ensuring they are unique and prioritized based on relevance. Custom data structures involve creating custom classes that encapsulate the necessary functionality and properties needed to efficiently manage and manipulate the data. By designing data structures optimized for the problem domain, we can improve the performance and readability of our code while simplifying complex operations. To determine if a problem can benefit from the Custom Data Structures pattern, consider scenarios where standard data structures like arrays, lists, or maps are not sufficient or where specialized operations need to be performed frequently. Common problems suitable for this pattern include implementing priority queues, disjoint-set data structures, or specialized graph representations.
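To make the web crawler example concrete, here is a rough sketch of one way such a structure could look. The UniqueUrlQueue class, its priority scheme, and the sample URLs are illustrative assumptions, not a standard API:

import heapq

class UniqueUrlQueue:
    """Hypothetical crawler queue: unique URLs, highest relevance first."""

    def __init__(self):
        self._heap = []      # (negative priority, url) pairs so the heap pops the highest priority first
        self._seen = set()   # URLs ever enqueued, to guarantee uniqueness

    def push(self, url, priority):
        # Ignore URLs we have already queued or crawled
        if url in self._seen:
            return False
        self._seen.add(url)
        heapq.heappush(self._heap, (-priority, url))
        return True

    def pop(self):
        # Return the most relevant URL still waiting to be crawled
        if not self._heap:
            return None
        _, url = heapq.heappop(self._heap)
        return url

queue = UniqueUrlQueue()
queue.push("https://example.com/a", priority=5)
queue.push("https://example.com/b", priority=9)
queue.push("https://example.com/a", priority=7)  # duplicate, ignored
print(queue.pop())  # https://example.com/b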
Let’s see how the following example illustrates the application of the Custom Data Structures pattern to efficiently solve the given coding problem:
Implement a Random Set data structure that can perform the following operations:
Init(): This initializes the Random Set object.
Insert(): This function takes an integer, data, as its parameter and, if it does not already exist in the set, adds it to the set, returning TRUE. If the integer already exists in the set, the function returns FALSE.
Delete(): This function takes an integer, data, as its parameter and, if it exists in the set, removes it, returning TRUE. If the integer does not exist in the set, the function returns FALSE.
GetRandom(): This function takes no parameters. It returns an integer chosen at random from the set.
Note: Your implementation should aim to have a running time of O(1) (on average) for each operation.
Arrays support constant-time lookups, but deletion operations in arrays typically take O(n) time. Hash maps, on the other hand, support constant-time insertions and deletions but offer no straightforward way to pick a random element. We can get the best of both by storing the elements in an array and using a hash map to record each element's index in that array. Here is how each operation works:

Insert(): When inserting data, we will store the new element in our array. And then, insert a key-value pair in our hash map, where the key will be the new element and the value will be its index in the array. Both of these operations have a time complexity of O(1).

Delete(): Using the hash map, we find the index in the array at which the element is located. Swap the last element in the array with the one to be removed. Then, in the hash map, update the location of the element we just swapped with the target element, and remove the entry for the target element from the hash map. Lastly, pop out the target element from the array. Each of these five operations has a time complexity of O(1).

GetRandom(): For this operation, we choose an index at random between 0 and the length of the array minus one, and return the element stored at that index. This also takes O(1) time.
Let’s take a look at the code for this solution below:
from random import choice

class RandomSet():
    def __init__(self):
        self.indexor = {}  # Maps the actual value to its index
        self.store = []    # Store the actual values in an array

    # Function to insert a value
    def insert(self, val):
        """
        Inserts a value in the data structure.
        Returns True if it did not already contain the specified element.
        """
        if val in self.indexor:
            return False
        # Insert the actual value as a key and its index as a value
        self.indexor[val] = len(self.store)
        # Append a new value to array
        self.store.append(val)
        return True

    # Function to remove a value
    def delete(self, val):
        """
        Removes a value from the data structure.
        Returns True if it contains the specified element.
        """
        if val in self.indexor:
            # swap the last element in the array with the element
            # to delete, update the location of the moved element
            # in its entry in the hash map
            last, i = self.store[-1], self.indexor[val]
            self.store[i], self.indexor[last] = last, i
            # delete the last element
            del self.indexor[val]
            self.store.pop()
            return True
        return False

    # Function to generate a random value
    def get_random(self):
        """
        Choose an element at random from the data structure.
        """
        return choice(self.store)

# Driver code
def main():
    commands = [["I", "G", "I", "I", "R", "G"], ["I", "I", "R", "G", "R", "I"]]
    values = [[10, -1, 100, 1000, 200, -1], [30, 60, 10, -1, 30, 90]]
    for i in range(len(commands)):
        dataset = RandomSet()
        print(i + 1, ". Starting operations:", "\n", sep="")
        for j in range(len(commands[i])):
            if commands[i][j] == "I":
                print("\tInsert (", values[i][j], ") returns ", dataset.insert(values[i][j]), sep="")
            elif commands[i][j] == "R":
                print("\tDelete (", values[i][j], ") returns ", dataset.delete(values[i][j]), sep="")
            elif commands[i][j] == "G":
                print("\tGenerate Random() returns ", dataset.get_random(), sep="")
        print("-" * 100)

if __name__ == '__main__':
    main()
Now that we've explored the design and implementation of Custom Data Structures, let's move on to the next coding pattern in the list.
The Dynamic Programming pattern is a technique that helps solve complex problems by breaking them down into simpler subproblems and storing their solutions to avoid redundant computations. This technique is useful when the problem can be divided into overlapping subproblems and optimal substructures. By storing and reusing intermediate results, dynamic programming enables us to solve problems with improved time and space complexity. For instance, a naive recursive approach to check if a string like "rotator" is a palindrome or to calculate Fibonacci numbers can be inefficient due to the repeated calculations of the same subproblems. Dynamic programming addresses these inefficiencies through two main strategies:
Memoization (top-down approach): This technique optimizes recursion by storing the results of subproblems the first time they are computed, preventing redundant calculations.
Tabulation (bottom-up approach): Tabulation constructs a table to store the results of smaller subproblems, gradually building up to solve the larger problem.
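As a quick illustration of both strategies, here is a minimal sketch that computes Fibonacci numbers with memoization and with tabulation:

from functools import lru_cache

# Memoization (top-down): cache results of subproblems the first time they are computed
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Tabulation (bottom-up): build a table of smaller results up to the answer
def fib_tab(n):
    if n < 2:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_memo(10), fib_tab(10))  # 55 55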
Let’s see how the following example illustrates the application of the Dynamic Programming pattern to efficiently solve the given coding problem:
Given an integer array, nums
, find a subarray that has the largest product, and return the product.
The maximum product subarray problem can be solved efficiently using dynamic programming by dividing the array and solving the subproblems. In this case, we can find the maximum product of a subarray from the start to any element by using the maximum product of the subarray from the start to its previous element. Using this approach, we don’t need to calculate the previous products again; instead, we can use them from the stored products.
The simplest case is when the numbers in the array nums are all positive. In that case, we calculate the maximum product by multiplying all the numbers to get the result. When we encounter zeros or negative numbers in the array, we have to use a slightly different approach, as explained below:

Zeros will reset the product to 0. In this case, if the result is less than the product computed so far, we update it with that product. Otherwise, the result remains the same.
Negative numbers can be a little challenging. When multiplied, a single negative number can flip the larger product into a smaller number. However, if we get another negative number, we can get the larger product again. Unlike zero, here we have a chance that our larger product can be restored if negative numbers are encountered twice.
To solve the problem above, follow the steps below:
Initialize max_so_far with the first element of nums. It will be used to track the maximum product up to an element. Initialize min_so_far with the first element of nums. It will be used to track the minimum product up to an element. Initialize result with max_so_far.

Now, iterate through the rest of the elements of the array as follows:

Update max_so_far and min_so_far by taking the maximum and minimum of the current value, the product of max_so_far and the current value, and the product of min_so_far and the current value, respectively.

Update result by taking the maximum of max_so_far and result.

When all elements have been traversed, return result.
Let’s look at the code for this solution below:
def max_product(nums):
    # Check if the input list is empty, return 0 if it is
    if len(nums) == 0:
        return 0

    # Initialize max_so_far and min_so_far with the first element in the list, and result with max_so_far
    max_so_far = nums[0]
    min_so_far = nums[0]
    result = max_so_far

    # Loop through the rest of the elements in the list
    for i in range(1, len(nums)):
        curr = nums[i]
        # Update the max_so_far and min_so_far with the maximum and minimum of curr, max_so_far * curr, and min_so_far * curr
        # prev_max_so_far is used to store the value of max_so_far so that it does not get updated while calculating min_so_far
        prev_max_so_far = max_so_far
        max_so_far = max(curr, max_so_far * curr, min_so_far * curr)
        min_so_far = min(curr, prev_max_so_far * curr, min_so_far * curr)
        # Update result with the maximum of max_so_far and result
        result = max(max_so_far, result)

    # Return the final result
    return result

# Driver code
def main():
    input_bits = [
        [-2, 0, -1],
        [2, 3, -2, 4],
        [2, -5, 3, 1, -4, 0, -10, 2],
        [1, 2, 3, 0, 4],
        [5, 4, 3, 10, 4, 1],
    ]
    for i in range(len(input_bits)):
        print(i + 1, ".\t Input array: ", input_bits[i], sep="")
        print("\n\t Maximum product: ", max_product(input_bits[i]), sep="")
        print("-" * 100)

if __name__ == "__main__":
    main()
After understanding how to use the Dynamic Programming pattern effectively, it's time to explore the next coding pattern.
The Graphs pattern offers a structured approach for solving problems involving graph data structure, where entities are represented as nodes and their relationships are represented as edges. This pattern involves traversing the graph using various algorithms like Depth-First Search (DFS) or Breadth-First Search (BFS) to explore its vertices and edges systematically.
Breadth-First Search (BFS) starts at a selected node and explores all its neighboring nodes at the current depth level before moving to the nodes at the next depth level. It traverses the graph level by level, often using a queue data structure to keep track of the nodes to be visited. BFS is well-suited for finding the shortest path between two nodes in an unweighted graph.
Depth-First Search (DFS) starts at a selected node and explores as far as possible along each branch before backtracking. It explores one branch fully before moving on to the next branch. DFS is often implemented using recursion or a stack data structure to keep track of the visited nodes and explore the unvisited ones. This algorithm is useful for tasks like finding cycles in a graph, topological sorting, or exploring all possible paths from a starting node.
Common problems suitable for this pattern include finding the shortest path between two nodes, determining the connectivity of a network, or identifying cycles within a graph.
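Here is a minimal sketch of both traversals on an adjacency-list graph; the sample graph is an illustrative assumption:

from collections import deque

def bfs(graph, start):
    # Visit nodes level by level using a queue
    visited, order = {start}, []
    queue = deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return order

def dfs(graph, start):
    # Explore one branch as deeply as possible using a stack
    visited, order = set(), []
    stack = [start]
    while stack:
        node = stack.pop()
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        for neighbor in graph[node]:
            if neighbor not in visited:
                stack.append(neighbor)
    return order

graph = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
print(bfs(graph, 0))  # [0, 1, 2, 3]
print(dfs(graph, 0))  # [0, 2, 1, 3]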
Let’s check out the following interview problem to see how the Graphs pattern works:
Given a graph with n nodes numbered from 0 to n - 1 and a list of its undirected edges, determine whether these edges make up a valid tree. For the graph to be a valid tree, the number of edges must equal n - 1.
Now, let’s look at the algorithm to solve this problem:
For the base case, we return FALSE if the number of edges is not equal to n - 1. Otherwise, we build an adjacency list in which adjacency[i] represents the array of all the graph nodes to which node i is connected.

Next, we initialize a visited set and a stack. We start exploring the graph from node 0, marking it as visited. Whenever we encounter a neighbor that hasn't been visited, we add it to the visited set and push it onto the stack for further exploration. Finally, after completing the DFS traversal, we check if all nodes have been visited.

Ultimately, we verify that the graph is fully connected and no cycle exists by comparing the length of the visited set with n, the number of nodes.
Let’s look at the code for this solution below:
def valid_tree(n, edges):
    # Check if n - 1 edges exist
    if len(edges) != n - 1:
        return False

    adjacency = [[] for _ in range(n)]
    # Populate adjacency with all the connected nodes
    for x, y in edges:
        adjacency[x].append(y)
        adjacency[y].append(x)

    visited = {0}
    stack = [0]
    while stack:
        node = stack.pop()
        # Iterate over the neighbors of the popped node
        for neighbour in adjacency[node]:
            if neighbour not in visited:
                # Add a neighbor in visited set and stack if it doesn't already exist in the set
                visited.add(neighbour)
                stack.append(neighbour)

    return len(visited) == n

# Driver code
def main():
    n = [3, 4, 5, 5, 6]
    edges = [
        [[0, 1], [0, 2], [1, 2]],
        [[0, 1], [0, 2], [0, 3]],
        [[0, 1], [0, 2], [0, 3], [0, 4], [3, 4]],
        [[0, 1], [0, 2], [0, 3], [3, 4]],
        [[0, 1], [0, 2], [1, 3], [2, 4], [0, 5]]
    ]
    for i in range(len(n)):
        print(i + 1, ".\t n = ", n[i], sep="")
        print("\t Edges = ", edges[i], sep="")
        print("\t Is the given graph a valid tree:", valid_tree(n[i], edges[i]))
        print("-" * 100)

if __name__ == "__main__":
    main()
Now that we've covered the Graphs pattern, let's move on to another frequently asked coding pattern.
The Sliding Window pattern is a useful tool for efficiently solving problems involving sequential data such as arrays or strings, where computations on subsets of data must be repeated. In this technique, a window is defined as a contiguous subset of elements within the data that adjusts its boundaries as it moves through the data. The processing of sequential information is efficient because the window only focuses on relevant subsets of the data at any given time, avoiding unnecessary computations on the entire dataset. Computations are typically updated in constant time by considering elements entering or exiting the window. By subtracting leaving elements and adding new ones, the computational time remains constant with each movement of the window. Problems like Find Maximum in Sliding Window, Repeated DNA Sequences, and Best Time to Buy and Sell Stock are commonly solved using the Sliding Window pattern.
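Before diving into the interview problem, here is a minimal sketch of the idea on a fixed-size window: finding the maximum sum of any k consecutive elements while updating the running sum in constant time per move (the input array is an illustrative example):

def max_window_sum(nums, k):
    # Sum of the first window
    window_sum = sum(nums[:k])
    best = window_sum
    # Slide the window: add the entering element, subtract the leaving one
    for right in range(k, len(nums)):
        window_sum += nums[right] - nums[right - k]
        best = max(best, window_sum)
    return best

print(max_window_sum([2, 1, 5, 1, 3, 2], 3))  # 9, from the subarray [5, 1, 3]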
Let’s see how the following example illustrates the application of the Sliding Window pattern to efficiently solve the given coding problem:
Given two strings, s and t, find the minimum window substring in s, which has the following properties:

It is the shortest substring of s that includes all of the characters present in t.

It must contain at least the same frequency of each character as in t.

The order of the characters does not matter here.
Instead of iterating over each substring separately, we use the Sliding Window pattern. We are searching for the shortest substring of s that contains all the characters of t. Once we have found the initial window in s that contains all the characters of t, we can slide the window in order to find the shortest one. Let's see how this approach can efficiently solve this problem.
The first step is to verify whether or not t is a valid string. If it isn't, we return an empty string as our output. Then, we initialize two pointers to apply the sliding window technique to our solution. Before we discuss how they're being used in our solution, we need to take a look at the other components at work.

There are two separate hash maps that we initialize, req_count and window. We populate the req_count hash map with the characters in t and their corresponding frequencies. This is done by traversing each character of t. If it doesn't already exist in the hash map, we add it with count 1, but if it does, we increment its count. The window hash map is initialized to contain the same characters present in the req_count hash map, with the corresponding counts set to 0. The window hash map will be used to keep track of the frequency of the characters of t in the current window.

We also set up two more variables called current and required, which tell us whether we need to increase or decrease the size of our sliding window. The current variable will initially hold the value 0 but will be incremented by 1 every time the frequency of a character in the window hash map matches its frequency in the req_count hash map. The required variable will store the size of the req_count hash map. The moment these two become equal, we have found all the characters that we were looking for in the current window. So, we can start trying to reduce our window size to find the shortest possible substring.
Next, let’s look at how we create this window and adjust it. We initialize a variable called left
, which acts as the left pointer, but on the other side, we don’t need to initialize a right pointer explicitly. It can simply be the iterator of our loop, right
, which traverses the array from left to right. In each iteration of this loop, we perform the following steps:
If the new character is present in the window hash map, we increment its frequency by 1.

If the new character occurs in t, we check if its frequency in the window hash map is equal to its frequency in the req_count hash map. Here, we are actually checking if the current character has appeared the same number of times in the current window as it appears in t. If so, we increment current by 1.
Finally, if current and required become equal, this means that we have found a substring that meets our requirements. So, we start reducing the size of the current window to find a shorter substring that still meets these requirements. As long as current
and required
remain equal, we move the left
pointer forward, decrementing the frequency of the character being dropped out of the window. By doing this, we remove all the unnecessary characters present at the start of the substring. We keep comparing the size of the current window with the length of the shortest substring we have found so far. If the size of the current window is less than the length of the minimum substring, we update the minimum substring.
Once current
and required
become unequal, it means we have checked all the possible substrings that meet our requirement. Now, we slide the right edge of the window one step forward and continue iterating over s
.
When s
has been traversed, we return the minimum window substring.
Let’s have a look at the code for the algorithm we just discussed.
def min_window(s, t):
    # empty string scenario
    if t == "":
        return ""

    # creating the two hash maps
    req_count = {}
    window = {}

    # populating req_count hash map
    for c in t:
        req_count[c] = 1 + req_count.get(c, 0)

    # populating window hash map
    for c in t:
        window[c] = 0

    # setting up the conditional variables
    current, required = 0, len(req_count)

    # setting up a variable containing the result's starting and ending point
    # with default values and a length variable
    res, res_len = [-1, -1], float("infinity")

    # setting up the sliding window pointers
    left = 0
    for right in range(len(s)):
        c = s[right]

        # if the current character also occurs in t, update its frequency in window hash map
        if c in t:
            window[c] = 1 + window.get(c, 0)

        # updating the current variable
        if c in req_count and window[c] == req_count[c]:
            current += 1

        # adjusting the sliding window
        while current == required:
            # update our result
            if (right - left + 1) < res_len:
                res = [left, right]
                res_len = (right - left + 1)

            # pop from the left of our window
            if s[left] in t:
                window[s[left]] -= 1

            # if the popped character was among the required characters and
            # removing it has reduced its frequency below its frequency in t, decrement current
            if s[left] in req_count and window[s[left]] < req_count[s[left]]:
                current -= 1
            left += 1

    left, right = res
    # return the minimum window substring
    return s[left:right + 1] if res_len != float("infinity") else ""

# driver code
def main():
    s = ["PATTERN", "LIFE", "ABRACADABRA", "STRIKER", "DFFDFDFVD"]
    t = ["TN", "I", "ABC", "RK", "VDD"]
    for i in range(len(s)):
        print(i + 1, ".\ts: ", s[i], "\n\tt: ", t[i], "\n\tThe minimum substring containing ",
              t[i], " is: ", min_window(s[i], t[i]), sep="")
        print("-" * 100)

if __name__ == '__main__':
    main()
With our understanding of Sliding Window established, let's move on to discussing the next coding pattern.
The K-Way Merge pattern is a technique for merging multiple sorted data structures, like arrays and linked lists, into one. This technique extends the classic merge sort by not just merging two lists but several at once. We repeatedly pick the smallest (or largest for descending order) elements from each list and keep adding them to a new list until all are merged. We can do this efficiently using a min-heap, where we add the first element of each list to the heap. We keep replacing the top of the heap with the next element from the same list until all elements are merged into the new list. Another approach is grouping lists into pairs and merging them through two-way merges. We do this by merging each pair of lists and repeating until we end up with a single fully sorted merged list. Both methods help us merge multiple lists, ensuring our data stays sorted.
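As a small sketch of the min-heap approach, here is one way to merge several sorted lists; the helper name k_way_merge and the sample lists are illustrative (Python's standard library also offers heapq.merge for the same task):

from heapq import heappush, heappop

def k_way_merge(lists):
    merged = []
    heap = []
    # Seed the heap with the first element of each list
    for list_index, lst in enumerate(lists):
        if lst:
            heappush(heap, (lst[0], list_index, 0))
    # Repeatedly take the smallest element and replace it with
    # the next element from the same list
    while heap:
        value, list_index, element_index = heappop(heap)
        merged.append(value)
        next_index = element_index + 1
        if next_index < len(lists[list_index]):
            heappush(heap, (lists[list_index][next_index], list_index, next_index))
    return merged

print(k_way_merge([[1, 4, 7], [2, 5], [3, 6, 8]]))  # [1, 2, 3, 4, 5, 6, 7, 8]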
Let’s see how the following example illustrates the application of the K-Way Merge pattern to efficiently solve the given coding problem:
Given two lists of integers sorted in ascending order and an integer k, find the k pairs with the smallest sums, where each pair contains one element from the first list and one element from the second list.
We can use the K-Way Merge pattern here because it helps solve problems where we need to compute a result by comparing the elements of multiple sorted lists. While traversing through the elements of both lists and making pairs, we use a min-heap to track the pair with the smallest sum encountered so far.
As proposed above, we use a min-heap to store three things:
The smallest sum (at the root of the min-heap).
The sum of each pair.
The list indexes of each element in the pair.
Initially, we can start making pairs by adding only the first element of the second list to each element of the first list.
Let’s take two lists as an example: list1 = [2, 8, 9] and list2 = [1, 3, 6].
Now, we start a second loop that iterates while there are elements in the min-heap and while we have yet to find all k pairs. In each iteration of this loop, we perform the following steps:
Pop the smallest pair from the min-heap, noting the sum of the pair and the list indexes of each element, calling them the i and j indexes, respectively.
Add this pair to the result list.
Increment j to move forward in the second list, and compose a new pair with the same element from the first list and the next element in the second list. This step is skipped if a new pair can't be formed, that is, when the popped pair's j was already pointing to the end of the second list.
Supposing k = 9, we would carry out the following steps:
1. Pop: (2 + 1) = 3    // 1st pair // Min-heap: [(8 + 1) = 9, (9 + 1) = 10]
2. Push: (2 + 3) = 5   // Min-heap: [(2 + 3) = 5, (8 + 1) = 9, (9 + 1) = 10]
3. Pop: (2 + 3) = 5    // 2nd pair // Min-heap: [(8 + 1) = 9, (9 + 1) = 10]
4. Push: (2 + 6) = 8   // Min-heap: [(2 + 6) = 8, (8 + 1) = 9, (9 + 1) = 10]
5. Pop: (2 + 6) = 8    // 3rd pair // Min-heap: [(8 + 1) = 9, (9 + 1) = 10]
<-- No pair to push as at the end of list 2 -->
6. Pop: (8 + 1) = 9    // 4th pair // Min-heap: [(9 + 1) = 10]
7. Push: (8 + 3) = 11  // Min-heap: [(9 + 1) = 10, (8 + 3) = 11]
8. Pop: (9 + 1) = 10   // 5th pair // Min-heap: [(8 + 3) = 11]
9. Push: (9 + 3) = 12  // Min-heap: [(8 + 3) = 11, (9 + 3) = 12]
10. Pop: (8 + 3) = 11  // 6th pair // Min-heap: [(9 + 3) = 12]
11. Push: (8 + 6) = 14 // Min-heap: [(9 + 3) = 12, (8 + 6) = 14]
12. Pop: (9 + 3) = 12  // 7th pair // Min-heap: [(8 + 6) = 14]
13. Push: (9 + 6) = 15 // Min-heap: [(8 + 6) = 14, (9 + 6) = 15]
14. Pop: (8 + 6) = 14  // 8th pair // Min-heap: [(9 + 6) = 15]
<-- No pair to push as at the end of list 2 -->
15. Pop: (9 + 6) = 15  // 9th pair
At all times, the smallest sum is at the root of the min-heap. Overall, we remove at most k pairs from the min-heap, one in each iteration of the loop.
Let’s look at the code for this solution below:
from heapq import *

def k_smallest_pairs(list1, list2, k):
    # storing the length of lists to use it in a loop later
    list_length = len(list1)
    # declaring a min-heap to keep track of the smallest sums
    min_heap = []
    # to store the pairs with smallest sums
    pairs = []

    # iterate over the length of list 1
    for i in range(min(k, list_length)):
        # computing sum of pairs all elements of list1 with first index
        # of list2 and placing it in the min-heap
        heappush(min_heap, (list1[i] + list2[0], i, 0))

    counter = 1

    # iterate over elements of min-heap and only go upto k
    while min_heap and counter <= k:
        # placing sum of the top element of min-heap
        # and its corresponding pairs in i and j
        sum_of_pairs, i, j = heappop(min_heap)

        # add pairs with the smallest sum in the new list
        pairs.append([list1[i], list2[j]])

        # increment the index for 2nd list, as we've
        # compared all possible pairs with the 1st index of list2
        next_element = j + 1

        # if next element is available for list2 then add it to the heap
        if len(list2) > next_element:
            heappush(min_heap, (list1[i] + list2[next_element], i, next_element))

        counter += 1

    # return the pairs with the smallest sums
    return pairs

# Driver code
def main():
    # multiple inputs for efficient results
    list1 = [[2, 8, 9], [1, 2, 300], [1, 1, 2], [4, 6], [4, 7, 9], [1, 1, 2]]
    # multiple inputs for efficient results
    list2 = [[1, 3, 6], [1, 11, 20, 35, 300], [1, 2, 3], [2, 3], [4, 7, 9], [1]]
    k = [9, 30, 1, 2, 5, 4]

    # loop to execute till the length of list k
    for i in range(len(k)):
        print(i + 1, ".\t Input pairs: ", list1[i], ", ", list2[i], f"\n\t k = {k[i]}", sep="")
        print("\t Pairs with the smallest sum are: ", k_smallest_pairs(list1[i], list2[i], k[i]), sep="")
        print("-" * 100)

if __name__ == '__main__':
    main()
Now that we've discussed the K-Way Merge pattern, let's turn our attention to another important coding pattern.
Unlike other techniques that require sorting the entire data to find the top or bottom k elements, the Top K Elements pattern maintains a heap of just k elements. As we scan the data, the heap is updated so that it always holds the k largest (or smallest) elements seen so far, which avoids the cost of sorting the whole dataset.
Let’s see how the following example illustrates the application of the Top K Elements pattern to efficiently solve the given coding problem:
Given an unsorted array, find the kth largest element.

Note: We need to find the kth largest element in the sorted order, not the kth distinct element.

We can use a min-heap to efficiently return the kth largest element.
The algorithm works as follows:
Insert the first k elements of the array into the min-heap.
For each subsequent element in the array, compare it with the root (minimum element) of the min-heap.
If the current element in the array is greater than the root element in our min-heap, remove the root element and insert the current element from the array.
Continue inserting and removing elements in the min-heap until we have processed all elements in the array.
After processing all the elements, the min-heap will contain the k largest elements of the array, and its root will be the kth largest element.
Let’s look at the code for this solution:
import heapq

def find_kth_largest(nums, k):
    # Initialize an empty list
    k_numbers_min_heap = []
    # Add first k elements to the list
    for i in range(k):
        k_numbers_min_heap.append(nums[i])
    # Convert the list into a min-heap
    heapq.heapify(k_numbers_min_heap)
    # Loop through the remaining elements in the 'nums' array
    for i in range(k, len(nums)):
        # Compare the current element with the minimum
        # element (root) of the min-heap
        if nums[i] > k_numbers_min_heap[0]:
            # Remove the smallest element
            heapq.heappop(k_numbers_min_heap)
            # Add the current element
            heapq.heappush(k_numbers_min_heap, nums[i])
    # The root of the heap has the kth largest element
    return k_numbers_min_heap[0]

# Driver code
def main():
    input_array = [
        [1, 5, 12, 2, 11, 9, 7, 30, 20],
        [5, 2, 9, -3, 7],
        [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
        [1, 4, 6, 0, 2],
        [3, 5, 2, 3, 8, 5, 3]
    ]
    k = [3, 1, 9, 1, 4]
    for i in range(len(input_array)):
        print(i + 1, ".\tInput array: ", input_array[i], sep="")
        print("\tValue of k: ", k[i], sep="")
        print("\tkth largest number: ", find_kth_largest(input_array[i], k[i]), sep="")
        print("-" * 100)

if __name__ == '__main__':
    main()
After understanding how to use the Top K Elements pattern effectively, it's time to explore the last, but certainly not the least, coding pattern from the list of patterns frequently asked by LinkedIn.
The Hash Maps pattern is a tool for storing and retrieving key-value pairs, offering constant-time complexity for insertion, deletion, and lookup operations on average. Quicker lookup time makes hash maps ideal for tasks like caching, indexing, or frequency counting.
The core operations of hash maps are the following:
1. Insert: A key-value pair is added, with the hash function determining the index for storage. This is typically an O(1) operation on average, though it can occasionally be slower when hash collisions occur or the underlying table has to be resized.

2. Search: Values are retrieved by applying the hash function to compute the key's index, typically a quick O(1) operation on average, although collisions can degrade this in the worst case.

3. Remove: Values are removed by deleting the entry at the key's computed index. This is usually an O(1) operation on average as well.

Using hash maps significantly reduces lookup times to an average of O(1), compared to searching through an array or linked list, which takes O(n) time.
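In Python, the built-in dict is a hash map. Here is a quick sketch of the three core operations, using an illustrative word-frequency example:

frequency = {}

# Insert: average O(1) per key-value pair
for word in ["in", "first", "out", "in", "first", "in"]:
    frequency[word] = frequency.get(word, 0) + 1

# Search: average O(1) lookup by key
print(frequency["in"])        # 3
print(frequency.get("last"))  # None (key absent)

# Remove: average O(1) deletion by key
del frequency["out"]
print(frequency)              # {'in': 3, 'first': 2}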
Let’s see how the following example illustrates the application of the Hash Maps pattern to efficiently solve the given coding problems:
Given two strings, check whether two strings are isomorphic to each other or not. Two strings are isomorphic if a fixed mapping exists from the characters of one string to the characters of the other string. For example, if there are two instances of the character "a" in the first string, both these instances should be converted to another character (which could also remain the same character if "a" is mapped to itself) in the second string. This converted character should remain the same in both positions of the second string since there is a fixed mapping from the character "a" in the first string to the converted character in the second string.
Note: Two different characters cannot map to the same character. Furthermore, all the instances of a character must be replaced with another character while protecting the order of characters.
We can use the Hash Maps pattern to solve the problem more efficiently. The idea is to use a suitable data structure (dictionary) to store the mapping of characters of string1 to characters of string2 and vice versa.
We use the following two hash maps:
map_str1_str2: This stores the mapping from string1 to string2.

map_str2_str1: This stores the mapping of the characters from string2 to string1.
Since the length of both strings is the same, we can traverse the indexes of either one of the two strings in a loop.
Within the loop, we do the following:
Note: i represents the current iteration of the loop that traverses the string.
We check if string1[i]
is present as a key in map_str1_str2
and that the value corresponding to this key is not string2[i]
. If both these conditions are satisfied, we return FALSE since a fixed mapping does not exist for string1[i]
to string2[i]
. Otherwise, we skip this condition and move forward.
We check if string2[i]
is present as a key in map_str2_str1
and that the value corresponding to this key is not string1[i]
. If both these conditions are satisfied, we return FALSE since a fixed mapping does not exist for string2[i]
to string1[i]
. Otherwise, we skip this condition and move forward.
If both the above conditions do not return FALSE, it means the mapping is not defined in either hash map. So we add the key-value pairs of the mapping from the string1[i]
to string2[i]
and vice versa in the respective hash maps.
If the loop ends successfully, it means both the strings are isomorphic since no condition that invalidates the isomorphic properties of both strings was met in the loop.
We can see the code of this solution below:
def is_isomorphic(string1, string2):
    map_str1_str2 = {}
    map_str2_str1 = {}

    for i in range(len(string1)):
        char1 = string1[i]
        char2 = string2[i]

        if char1 in map_str1_str2 and map_str1_str2[char1] != char2:
            return False

        if char2 in map_str2_str1 and map_str2_str1[char2] != char1:
            return False

        map_str1_str2[char1] = char2
        map_str2_str1[char2] = char1

    return True

# Driver code
def main():
    A = ["egg", "foo", "paper", "badc", "aaeaa"]
    B = ["all", "bar", "title", "baba", "uuxyy"]
    x = 1
    for i in range(len(A)):
        print(x, ".\tString 1 = ", A[i], sep="")
        print("\tString 2 = ", B[i], sep="")
        print("\n\tIsomorphic String ?", is_isomorphic(A[i], B[i]))
        print(100 * '-')
        x = x + 1

if __name__ == '__main__':
    main()
That wraps up our exploration of the coding patterns behind the questions most frequently asked in LinkedIn interviews.
To ace your LinkedIn interview, it is important to master the patterns we have just discussed. Understanding the underlying patterns behind the solutions you devise will not only help you tackle similar problems in the future but also demonstrate your depth of understanding to interviewers. We have explored some of the most common coding patterns with the help of interview questions frequently asked by LinkedIn, but this is just a start. Remember, practice makes perfect, so dedicate time to solving problems regularly and seek feedback to improve further. For even better preparation, you may explore the following courses by Educative, which cover a wide range of coding patterns as well as Dynamic Programming patterns in various programming languages:
Moreover, if you are looking for a customized learning plan, take a look at the following paths by Educative:
With determination, preparation, and a solid grasp of coding patterns, you’ll be well-equipped to tackle any coding challenge that comes your way during the LinkedIn interview process. Best of luck!