Despite Facebook's rebranding to Meta, the culture and hiring process for software engineers remain largely the same. Meta expects all engineers, regardless of their level or tenure, to voice opinions about strategic direction. The most successful candidates bring technical excellence and a drive to build boundary-pushing products.
Candidates can showcase their leadership skills during the behavioral portion of the interview. First, though, they'll need to excel in the technical screen and coding sessions.
The purpose of the technical screen and coding sessions is to assess your proficiency in specific technical skills, as well as your underlying ability to solve complex problems. Of course, it’s important to aim for error-free solutions—but it’s not the end of the world. Meta values your ability to catch mistakes and effectively narrate your problem-solving process.
So, how can you efficiently prepare to tackle Meta’s technical interview?
Focus on the patterns that underlie Meta’s most common coding problems.
After researching Meta’s approach to technical interviews, we’ve identified 7 patterns to help you ace your technical screen and coding sessions. Instead of worrying about a brand-new coding problem, you’ll be able to recognize the underlying pattern and apply the algorithms and strategies that will be most effective.
In this blog, we’ll break down each pattern and provide examples of common coding problems that use it. Let’s dive in!
A stack is a linear data structure that organizes and manages data in a Last In, First Out (LIFO) manner. This means the last element added to the stack is the first to be removed. Think of it like a stack of plates where you can only add or remove plates from the top.
Using stacks as a coding pattern involves the following fundamental operations:
Operation | Time Complexity | Description |
Push | O(1) | Adds the element at the top of the stack. |
Pop | O(1) | Removes and returns the element from the top of the stack. |
Peek | O(1) | Returns the element at the top of the stack without removing it. |
IsEmpty | O(1) | Checks whether the stack is empty or not. Returns TRUE if the stack is empty, FALSE otherwise. |
Size | O(1) | Returns the total number of elements in the stack. |
Stacks are commonly used for tasks like expression evaluation, syntax parsing, or tracking state changes in algorithms. To identify if a problem can be solved using the Stacks pattern, look for scenarios where the last in, first out property is advantageous or where tracking state changes in a last in, first out manner is necessary. Examples of common interview problems that can be tackled using the Stacks pattern include evaluating arithmetic expressions, checking balanced parentheses, or implementing a browser’s back functionality.
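The five operations above map naturally onto Python's built-in list. Here's a minimal sketch (the Stack class and its method names are our own illustration):

```python
class Stack:
    """Minimal LIFO stack backed by a Python list; all operations are O(1)."""

    def __init__(self):
        self._items = []

    def push(self, item):
        # Adds the element at the top of the stack
        self._items.append(item)

    def pop(self):
        # Removes and returns the element from the top of the stack
        return self._items.pop()

    def peek(self):
        # Returns the top element without removing it
        return self._items[-1]

    def is_empty(self):
        return len(self._items) == 0

    def size(self):
        return len(self._items)


# The last plate added is the first one removed
stack = Stack()
stack.push("plate 1")
stack.push("plate 2")
print(stack.peek())   # plate 2
print(stack.pop())    # plate 2
print(stack.size())   # 1
```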
Let’s see how the following example illustrates the application of the Stacks pattern to efficiently solve the given coding problem:
Given a string, s, that may contain matched and unmatched parentheses, remove the minimum number of parentheses so that the resulting string represents a valid parenthesization, and return the resulting string.
In this solution, we use the Stacks pattern to remove all the extra parentheses from the input string. We traverse the input string, and every time we encounter an opening parenthesis, we push it, along with its index, onto the stack and keep traversing. Meanwhile, whenever we find a closing parenthesis, we decide whether to push it onto the stack. For this, we check the stack as follows:
If the stack is not empty and the top stack element is an opening parenthesis, we pop it off. This means that the popped opening parenthesis corresponds to the current closing parenthesis, forming a valid pair of parentheses in the input string.
If the stack is empty or the top stack element is a closing parenthesis, we push the current closing parenthesis, along with its index, onto the stack.
After traversing the complete string, all the parentheses left in the stack are invalid. Since we have stored each parenthesis index as well, we can now use these index values to remove the instances of these parentheses in the input string. Return the updated string as the output, representing the valid parenthesization.
Let's look at the code for this solution:
def min_remove_parentheses(s):
    stack = []
    s_list = list(s)
    for i, val in enumerate(s):
        # if stack is not empty and top element of stack is an opening parenthesis
        # and the current element is a closing parenthesis
        if stack and stack[-1][0] == '(' and val == ')':
            # pop the opening parenthesis as it makes a valid pair
            # with the current closing parenthesis
            stack.pop()
        # if the current value is an opening or a closing parenthesis
        elif val == '(' or val == ')':
            # push onto stack
            stack.append([val, i])
    # Remove the invalid parentheses
    for p in stack:
        s_list[p[1]] = ""
    # convert the list to string
    result = ''.join(s_list)
    return result

# Driver code
def main():
    inputs = ["ar)ab(abc)abd(", "a)rt)lm(ikgh)", "aq)xy())qf(a(ba)q)",
              "(aw))kk())(w(aa)(bv(wt)r)", "(qi)(kl)((y(yt))(r(q(g)s)"]
    for i in range(len(inputs)):
        print(i + 1, ". Input: \"", inputs[i], "\"", sep="")
        print("   Valid parentheses, after minimum removal: \"",
              min_remove_parentheses(inputs[i]), "\"", sep="")
        print("-" * 100)

if __name__ == "__main__":
    main()
We've looked at how stacks can be used as a coding pattern to solve problems that require data to be processed in LIFO order. Now, let's move on to the next pattern.
In many coding interviews, candidates often encounter problems where binary search comes in handy. It's known for its logarithmic time complexity, which makes it super efficient. However, it only works when the input data is already sorted. That's where the Modified Binary Search pattern steps in. It is an advanced adaptation of the traditional binary search algorithm, modified to handle more complex scenarios where elements may not strictly meet the standard sorted criteria. This pattern excels in efficiently locating elements or conditions that are not straightforward to find through linear searching, particularly when dealing with rotated arrays, finding boundaries, or solving the random pick weight problem.
By dividing the search space in half at every step, this method significantly reduces the time complexity to O(log n).
The adaptability of the Modified Binary Search pattern makes it a powerful tool in software development, enhancing the ability to manage and retrieve data efficiently in scenarios where direct comparisons and typical ordering do not apply. This pattern not only streamlines data retrieval processes but also aids in optimizing performance across various programming tasks.
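As a concrete illustration of the pattern (our own sketch, not one of Meta's questions), here's how a modified binary search locates a target in a rotated sorted array in O(log n) time. At each step, at least one half of the current range is sorted, and we use that sorted half to decide where the target can live:

```python
def search_rotated(nums, target):
    """Return the index of target in a rotated sorted array, or -1 if absent."""
    low, high = 0, len(nums) - 1
    while low <= high:
        mid = low + (high - low) // 2
        if nums[mid] == target:
            return mid
        # The left half [low..mid] is sorted
        if nums[low] <= nums[mid]:
            if nums[low] <= target < nums[mid]:
                high = mid - 1   # target lies inside the sorted left half
            else:
                low = mid + 1    # otherwise it must be in the right half
        # Otherwise the right half [mid..high] is sorted
        else:
            if nums[mid] < target <= nums[high]:
                low = mid + 1    # target lies inside the sorted right half
            else:
                high = mid - 1

    return -1


print(search_rotated([4, 5, 6, 7, 0, 1, 2], 0))  # 4
print(search_rotated([4, 5, 6, 7, 0, 1, 2], 3))  # -1
```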
Let’s see how the following example illustrates the application of the Modified Binary Search pattern to efficiently solve the given coding problem:
We’re given an array of positive integers, weights, where weights[i] is the weight associated with index i of the weights array. The larger the value of weights[i], the heavier the weight is, and the higher the chances of its index being picked. The task is to implement a function, pick_index(), that randomly picks an index with a probability proportional to its weight.
Suppose, for example, that the array consists of the weights [1, 2, 3]. The total weight is 6, so we expect each index to be picked with the following frequencies:
Index 0: 1/6 ≈ 16.67%
Index 1: 2/6 ≈ 33.33%
Index 2: 3/6 = 50%
Note: Since we’re randomly choosing from the options, there is no guarantee that in any specific run of the program, any of the elements will be selected with the exact expected frequency.
We can use the Modified Binary Search pattern to speed up the random index-picking process, reducing the index search time from O(n) to O(log n). First, we preprocess the weights by building a prefix (running) sum array in which the element at index i stores the cumulative sum of the weights up to index i. Next, we generate a random number between 1 and the total weight. Finally, we use binary search to find the index corresponding to the randomly generated number in the prefix sum array. This approach ensures that elements with higher weights have a proportionally higher chance of being selected while maintaining randomness.
Here’s how the algorithm works:
The __init__() method generates a list of cumulative (running) sums from the given list of weights.
The pick_index() method returns a randomly selected index while considering the provided weights. It works as follows:
Generates a random number, target, between 1 and the total sum of the weights.
Uses binary search to find the index of the first cumulative sum greater than or equal to the random value. Initialize the low index to 0 and the high index to the length of the list of cumulative sums of weights. While the low index is less than the high index, the algorithm:
Calculates the mid index as low + (high - low) // 2.
If the cumulative sum at the mid index is less than target, updates the low index to mid + 1.
Otherwise, updates the high index to mid.
At the end of the binary search, the low pointer will point to the index of the first cumulative sum greater than or equal to target. Return this index as the chosen index.
Let’s look at the code for this solution below:
import random

class RandomPickWithWeight:
    # Constructor
    def __init__(self, weights):
        # List to store running sums of weights
        self.running_sums = []
        # Variable to calculate a running sum
        running_sum = 0
        # Iterate through the given weights
        for w in weights:
            # Add the current weight to the running sum
            running_sum += w
            # Append the running sum to the running_sums list
            self.running_sums.append(running_sum)
        # Store the total sum of weights
        self.total_sum = running_sum

    # Method to pick an index based on the weights
    def pick_index(self):
        # Generate a random number between 1 and the total sum of the array
        target = random.randint(1, self.total_sum)
        # Initialize low and high variables for binary search
        low = 0
        high = len(self.running_sums)
        # Perform binary search to find the first value higher than the target
        while low < high:
            mid = low + (high - low) // 2
            if target > self.running_sums[mid]:
                low = mid + 1
            else:
                high = mid
        # Return the index (low) found
        return low

# Driver code
def main():
    counter = 900
    weights = [[1, 2, 3, 4, 5],
               [1, 12, 23, 34, 45, 56, 67, 78, 89, 90],
               [10, 20, 30, 40, 50],
               [1, 10, 23, 32, 41, 56, 62, 75, 87, 90],
               [12, 20, 35, 42, 55],
               [10, 10, 10, 10, 10],
               [10, 10, 20, 20, 20, 30],
               [1, 2, 3],
               [10, 20, 30, 40],
               [5, 10, 15, 20, 25, 30]]
    freq = {}
    for i in range(len(weights)):
        print(i + 1, ".\tList of weights: ", weights[i], ", pick_index() called ",
              counter, " times", "\n", sep="")
        for l in range(len(weights[i])):
            freq.setdefault(l, 0)
        sol = RandomPickWithWeight(weights[i])
        for j in range(counter):
            index = sol.pick_index()
            freq[index] += 1
        print("-" * 105)
        print("\t{:<10}{:<5}{:<10}{:<5}{:<15}{:<5}{:<20}{:<5}{:<15}".format(
            "Indexes", "|", "Weights", "|", "Occurrences", "|",
            "Actual Frequency", "|", "Expected Frequency"))
        print("-" * 105)
        for key, value in freq.items():
            print("\t{:<10}{:<5}{:<10}{:<5}{:<15}{:<5}{:<20}{:<5}{:<15}".format(
                key, "|", weights[i][key], "|", value, "|",
                str(round((value / counter) * 100, 2)) + "%", "|",
                str(round(weights[i][key] / sum(weights[i]) * 100, 2)) + "%"))
        freq = {}
        print("\n", "-" * 105, "\n", sep="")

if __name__ == '__main__':
    main()
Now that we've discussed Modified Binary Search, let's turn our attention to another important coding pattern.
Unlike other techniques that require sorting the entire data to find the top or bottom k elements, the Top K Elements pattern uses a heap to track only the k elements of interest. This reduces the time complexity from O(n log n) for a full sort to O(n log k), a significant saving when k is much smaller than n.
Let’s see how the following examples illustrate the application of the Top K Elements pattern to efficiently solve these problems:
Given an unsorted array, find the kth largest element.
Note: We need to find the kth largest element in the sorted order, not the kth distinct element.
We can use a min-heap to efficiently return the kth largest number.
The algorithm works as follows:
Insert the first k elements of the array into a min-heap.
For each subsequent element in the array, compare it with the root (minimum element) of the min-heap.
If the current element in the array is greater than the root element in our min-heap, remove the root element and insert the current element from the array.
Continue inserting and removing elements in the min-heap until we have processed all elements in the array.
After processing all the elements, the min-heap will contain the k largest elements, and its root will be the kth largest element, which we return.
Let’s look at the code for this solution:
import heapq

def find_kth_largest(nums, k):
    # Initialize an empty list
    k_numbers_min_heap = []
    # Add first k elements to the list
    for i in range(k):
        k_numbers_min_heap.append(nums[i])
    # Convert the list into a min-heap
    heapq.heapify(k_numbers_min_heap)
    # Loop through the remaining elements in the 'nums' array
    for i in range(k, len(nums)):
        # Compare the current element with the minimum
        # element (root) of the min-heap
        if nums[i] > k_numbers_min_heap[0]:
            # Remove the smallest element
            heapq.heappop(k_numbers_min_heap)
            # Add the current element
            heapq.heappush(k_numbers_min_heap, nums[i])
    # The root of the heap has the kth largest element
    return k_numbers_min_heap[0]

# Driver code
def main():
    input_array = [[1, 5, 12, 2, 11, 9, 7, 30, 20],
                   [5, 2, 9, -3, 7],
                   [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
                   [1, 4, 6, 0, 2],
                   [3, 5, 2, 3, 8, 5, 3]]
    k = [3, 1, 9, 1, 4]
    for i in range(len(input_array)):
        print(i + 1, ".\tInput array: ", input_array[i], sep="")
        print("\tValue of k: ", k[i], sep="")
        print("\tkth largest number: ", find_kth_largest(input_array[i], k[i]), sep="")
        print("-" * 100)

if __name__ == '__main__':
    main()
Now, let's look at another problem that can be solved using the Top K Elements pattern.
Given a list of points on a plane, where each point is represented by its (x, y) coordinates in a 2-D array, find the k closest points to the origin (0, 0).
Note: Here, the distance between two points on a plane is the Euclidean distance: √((x1 − x2)² + (y1 − y2)²).
When we are trying to find the k points closest to the origin, a max-heap of size k is the natural fit, even though we're looking for the smallest distances.
With a max-heap, we maintain the k closest points seen so far, with the farthest of them at the root.
Now, instead of comparing all k points with the next point from the list, we simply compare the point in the max-heap that is farthest from the origin with the next point from the list. If the next point is closer to the origin, it wins inclusion in the max-heap and pops the point it was compared with. If not, nothing changes.
In this way, at every step of the scan through the list, the max-heap acts like a sieve, picking out the top k points in terms of their distance from the origin.
The Euclidean distance between a point P(x, y) and the origin can be calculated as √(x² + y²).
Now that we can calculate the distance of each point from the origin, we need a way to order points inside the heap. We create a custom Point class and implement a custom less-than function in it (__lt__(self, other) in Python) for use by the heapify process. In this function, we compare the distances of the two points relative to the origin. Since Python's heapq module implements a min-heap, the point farther from the origin is treated as the "smaller" one; this keeps the farthest of the k points at the root, which is exactly the max-heap behavior we need. We'll iterate through the given list of points, and if we find one that is closer to the origin than the point at the root of the max-heap, we do the following two things:
Pop from the max-heap—that is, remove the point in the heap farthest from the origin.
Push the point that is closer to the origin onto the max-heap.
As we move through the given list of points, this ensures that we always have the k points closest to the origin in the max-heap.
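The solution below imports a Point class from a separate module that isn't shown in the snippet. A minimal version might look like the following sketch; note how __lt__ is inverted (the farther point counts as "smaller") so that Python's heapq min-heap behaves as a max-heap on distance:

```python
import heapq

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def distance_from_origin(self):
        # Euclidean distance from (0, 0)
        return (self.x ** 2 + self.y ** 2) ** 0.5

    def __lt__(self, other):
        # Inverted comparison: the point FARTHER from the origin counts as
        # "smaller", so heapq keeps the farthest point at the root
        return self.distance_from_origin() > other.distance_from_origin()

    def __str__(self):
        return f"[{self.x}, {self.y}]"


# With this ordering, heapify places the farthest point at index 0
pts = [Point(1, 3), Point(3, 4), Point(2, -1)]
heapq.heapify(pts)
print(pts[0])  # [3, 4] -- the farthest of the three points
```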
Let’s look at the code for this solution below:
from point import Point
import heapq

def k_closest(points, k):
    points_max_heap = []
    # put first 'k' points in the max heap
    for i in range(k):
        heapq.heappush(points_max_heap, points[i])
    # go through the remaining points of the input array, if a
    # point is closer to the origin than the top point of the
    # max-heap, remove the top point from heap and add the point
    # from the input array
    for i in range(k, len(points)):
        if points[i].distance_from_origin() \
                < points_max_heap[0].distance_from_origin():
            heapq.heappop(points_max_heap)
            heapq.heappush(points_max_heap, points[i])
    # the heap has 'k' points closest to the origin, return
    # them in a list
    return list(points_max_heap)

# Function used to convert list to string
def lst_to_str(lst):
    out = "["
    for i in range(len(lst) - 1):
        out += str(lst[i]) + ", "
    out += str(lst[len(lst) - 1]) + "]"
    return out

# Driver code
def main():
    points_one = [Point(1, 3), Point(3, 4), Point(2, -1)]
    points_two = [Point(1, 3), Point(2, 4), Point(2, -1), Point(-2, 2),
                  Point(5, 3), Point(3, -2)]
    points_three = [Point(1, 3), Point(5, 3), Point(3, -2), Point(-2, 2)]
    points_four = [Point(2, -1), Point(-2, 2), Point(1, 3), Point(2, 4)]
    points_five = [Point(1, 3), Point(2, 4), Point(2, -1), Point(-2, 2),
                   Point(5, 3), Point(3, -2), Point(5, 3), Point(3, -2)]
    k_list = [2, 3, 1, 4, 5]
    points = [points_one, points_two, points_three, points_four, points_five]
    for i in range(len(k_list)):
        result = k_closest(points[i], k_list[i])
        print(i + 1, ".\tSet of points: ", sep="", end='')
        print(lst_to_str(points[i]))
        print("\tk:", k_list[i])
        print("\tHere are the k =", k_list[i], "points closest to the",
              "origin (0, 0): ", end='')
        print(lst_to_str(result))
        print("-" * 100)

if __name__ == '__main__':
    main()
Now, let's move to the next example for Top K Elements.
Given an array of integers, arr, and an integer, k, return the k most frequent elements.
Note: You can return the answer in any order.
Finding the top k frequent elements can be done efficiently by combining a hash map with a min-heap of size k: the hash map counts frequencies, and the min-heap keeps only the k most frequent elements seen so far.
The hash map will store the element as the key, and its corresponding frequency in the array as the value. When inserting elements from the hash map into the min-heap, the following steps are taken:
We’ll store (frequency, element) pairs in the min-heap so that the heap orders the pairs by frequency, keeping the lowest frequency at the root.
We’ll make sure that if the size of the min-heap becomes greater than k, the pair with the lowest frequency is popped from the heap.
Once we have added the pairs from the hash map to the min-heap, the min-heap will contain only the pairs with the top k frequencies.
Let’s look at the code for this solution below:
from heapq import heappush, heappop

def top_k_frequent(arr, k):
    # find the frequency of each number
    num_frequency_map = {}
    for num in arr:
        num_frequency_map[num] = num_frequency_map.get(num, 0) + 1
    top_k_elements = []
    # go through all numbers of the num_frequency_map
    # and push them in the top_k_elements, which will have
    # top k frequent numbers. If the heap size is more than k,
    # we remove the smallest(top) number
    for num, frequency in num_frequency_map.items():
        heappush(top_k_elements, (frequency, num))
        if len(top_k_elements) > k:
            heappop(top_k_elements)
    # create a list of top k numbers
    top_numbers = []
    while top_k_elements:
        top_numbers.append(heappop(top_k_elements)[1])
    return top_numbers

# Driver code
def main():
    arr = [[1, 3, 5, 12, 11, 12, 11, 12, 5], [1, 3, 5, 14, 18, 14, 5],
           [2, 3, 4, 5, 6, 7, 7], [9, 8, 7, 6, 6, 5, 4, 3, 2, 1],
           [2, 4, 3, 2, 3, 4, 5, 4, 4, 4], [1, 1, 1, 1, 1, 1], [2, 3]]
    k = [3, 2, 1, 1, 3, 1, 2]
    for i in range(len(k)):
        print(i + 1, ". \t Input: (", arr[i], ", ", k[i], ")", sep="")
        print("\t Top", k[i], "frequent Elements: ",
              top_k_frequent(arr[i], k[i]))
        print("-" * 100)

if __name__ == '__main__':
    main()
With our understanding of Top K Elements established, let's discuss the next coding pattern.
Traditionally, when traversing a tree, one might start at the root node and visit each node, moving down one level at a time. This approach, although straightforward, may not always be efficient for certain types of tree-related problems. Naive solutions to some problems might require traversing the same nodes repeatedly, leading to inefficiencies in the traversal process.
The Depth-First Search (DFS) pattern operates by recursively traversing from the root node down to the deepest level of a tree, exploring as far as possible along each branch before backtracking. It follows a straightforward principle: explore one branch fully before moving on to the next. This recursive exploration continues until all nodes have been visited or until a specific condition is met. This technique is useful for tasks like finding the height or depth of a tree, determining the lowest common ancestor of two nodes, or generating different traversal orders such as preorder, inorder, and postorder.
There is another closely related technique called Breadth-First Search (BFS), which traverses nodes level by level, exploring all nodes at the current level before moving on to the next level. This approach prioritizes breadth over depth, making it particularly useful for problems that involve finding the shortest path between nodes, locating neighbors at a specific distance, or exploring all possible paths of a fixed length in a tree structure. In contrast, Depth-First Search (DFS) explores one branch as deeply as possible before backtracking, prioritizing depth over breadth.
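To make the contrast concrete, here is a minimal sketch (using a simple Node class of our own) of a BFS level-order traversal alongside a recursive DFS preorder traversal:

```python
from collections import deque

class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def bfs(root):
    """Visit nodes level by level using a queue (breadth over depth)."""
    order, queue = [], deque([root] if root else [])
    while queue:
        node = queue.popleft()
        order.append(node.val)
        if node.left:
            queue.append(node.left)
        if node.right:
            queue.append(node.right)
    return order

def dfs_preorder(root):
    """Explore one branch fully before backtracking (depth over breadth)."""
    if not root:
        return []
    return [root.val] + dfs_preorder(root.left) + dfs_preorder(root.right)

#       1
#      / \
#     2   3
#    / \
#   4   5
root = Node(1, Node(2, Node(4), Node(5)), Node(3))
print(bfs(root))           # [1, 2, 3, 4, 5]
print(dfs_preorder(root))  # [1, 2, 4, 5, 3]
```

Notice how BFS finishes level 2 (nodes 2 and 3) before touching level 3, while DFS runs all the way down the leftmost branch first.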
Let’s take a closer look at how the following examples demonstrate the usefulness of using Tree Depth-First Search to solve these problems efficiently:
Given the root node of a binary tree and two of its nodes, p and q, return the lowest common ancestor of p and q.
Note: The lowest common ancestor of two nodes, p and q, is defined as the lowest node in the binary tree that has both p and q as descendants. A node can also be a descendant of itself. For example, if q is a descendant of p, and we know that p is a descendant of itself, then p will be the lowest common ancestor of p and q.
We will use depth-first search to find the lowest common ancestor of p and q in the binary tree. The algorithm is as follows:
First, we initialize three tracking variables, mid, left, and right, to track whether p or q has been found.
Then, we traverse the binary tree recursively using depth-first search, starting from the root node.
If we find p or q during our traversal of the binary tree, we set the mid variable to TRUE and return mid.
The left tracking variable stores the result of the left subtree of the current node, and the right tracking variable stores the result of the right subtree. So, the results from the recursive calls are stored in their respective tracking variables.
Finally, during the traversal of the binary tree, if any two of the tracking variables, mid, left, or right, are TRUE, we set the current node as our answer node, because this node will be the lowest common ancestor of p and q.
To understand how a node becomes the lowest common ancestor when any two of the tracking variables are TRUE, consider what each variable represents. If the left and right variables are TRUE for a node, both input nodes are descendants of it, and therefore the node is their lowest common ancestor. If mid and either left or right are TRUE, then one of p or q is the current node itself and the other is its descendant. Since a node is an ancestor of itself, the lowest common ancestor of the input nodes is the current node.
Let’s look at the code for this solution below:
from BinaryTree import *

class Solution:
    def __init__(self):
        self.lca = None

    def lowest_common_ancestor(self, root, p, q):
        self.lowest_common_ancestor_rec(root, p, q)
        return self.lca

    # helper function to find the lowest common ancestor recursively
    def lowest_common_ancestor_rec(self, current_node, p, q):
        # if current_node does not exist
        if not current_node:
            return False
        # initialize tracking variables
        left, right, mid = False, False, False
        # check if either of the input nodes is the current_node
        if p == current_node or q == current_node:
            mid = True
        # traverse binary tree using depth-first search
        left = self.lowest_common_ancestor_rec(current_node.left, p, q)
        # if the lowest common ancestor has not been found, only then traverse the right subtree
        if not self.lca:
            right = self.lowest_common_ancestor_rec(current_node.right, p, q)
        # if any two of the tracking variables are true, set current_node as answer node
        if mid + left + right >= 2:
            self.lca = current_node
        # return true if any of the tracking variables is true
        return mid or left or right

# driver code
def main():
    input_trees = [[TreeNode(100), TreeNode(50), TreeNode(200), TreeNode(25), TreeNode(75), TreeNode(350)],
                   [TreeNode(100), TreeNode(200), TreeNode(75), TreeNode(50), TreeNode(25), TreeNode(350)],
                   [TreeNode(350), TreeNode(100), TreeNode(75), TreeNode(50), TreeNode(200), TreeNode(25)],
                   [TreeNode(100), TreeNode(50), TreeNode(200), TreeNode(25), TreeNode(75), TreeNode(350)],
                   [TreeNode(25), TreeNode(50), TreeNode(75), TreeNode(100), TreeNode(200), TreeNode(350)]]
    input_nodes = [[25, 75],
                   [50, 350],
                   [100, 200],
                   [50, 25],
                   [350, 200]]
    for i in range(len(input_trees)):
        solution = Solution()
        tree = BinaryTree(input_trees[i])
        print(i + 1, ".\tBinary tree:", sep="")
        display_tree(tree.root)
        print("\tp = ", input_nodes[i][0])
        print("\tq = ", input_nodes[i][1])
        lca = solution.lowest_common_ancestor(tree.root, tree.find(input_nodes[i][0]), tree.find(input_nodes[i][1]))
        print("\n\tLowest common ancestor: ", lca.data, sep="")
        print("-" * 100)

if __name__ == "__main__":
    main()
Now, let's look at another problem that can be solved using the Tree Depth-First Search pattern.
Given the root of a binary tree with n nodes, return the right-side view of the tree in the form of a list.
A right-side view of a binary tree is the data of the nodes that are visible when the tree is viewed from the right side.
To return the list of the right-side view of the tree, we will apply a recursive depth-first search (DFS) approach. Our main function first checks if the root is NULL, in which case it returns an empty list. Otherwise, we initialize an empty list, rside, to store the data of the tree's rightmost nodes. Since we need only one right-side element at each level, the length of the rside list is used to keep track of which levels have already been captured.
The recursive dfs() function takes three arguments, rside, node, and level, and checks whether the length of rside is equal to the current tree level. If it is, we add the node's value to the list.
Next, we iterate over the node's children, visiting the right child first. If a child is not NULL, we recursively call dfs() on it, incrementing the level by 1. Visiting the right child first ensures that the rightmost node of each level is the one that appends its value to the rside list, thereby increasing its length.
Finally, after completing the depth-first search, we return the rside list, containing the right-side view of the tree.
Now, let’s look at the code for this solution below:
from TreeNode import *
from BinaryTree import *

def right_side_view(root):
    # If the root is None, return an empty list
    if root is None:
        return []
    # Initialize list to store the right side nodes of the binary tree
    rside = []
    # Start depth-first search on the root node and 0th level
    dfs(root, 0, rside)
    return rside

# Apply depth-first search
def dfs(node, level, rside):
    # Check if the level is equal to the length of rside
    if level == len(rside):
        rside.append(node.data)
    # Iterate through the child nodes, first the right then the left child
    for child in [node.right, node.left]:
        if child:
            # Recursively call dfs on the child node
            dfs(child, level + 1, rside)

# Driver code
def main():
    inputs = [[TreeNode(1), TreeNode(2), TreeNode(3), None, None, TreeNode(4), TreeNode(5)],
              [TreeNode(1), TreeNode(2), None, TreeNode(3), None, TreeNode(4)],
              [TreeNode(1), TreeNode(2), TreeNode(3), TreeNode(4), TreeNode(5), TreeNode(6), TreeNode(7), TreeNode(8)],
              [TreeNode(1), TreeNode(2), TreeNode(3), TreeNode(4), TreeNode(5), TreeNode(6), None, TreeNode(8)],
              [TreeNode(1), TreeNode(2), None, TreeNode(3), TreeNode(4)]]
    # Create the binary trees using the BinaryTree class
    input_trees = []
    for list_of_nodes in inputs:
        tree = BinaryTree(list_of_nodes)
        input_trees.append(tree)
    for i in range(len(input_trees)):
        root = input_trees[i].root
        print(str(i + 1) + ". Binary tree:")
        display_tree(root)
        print("   Right side view: " + str(right_side_view(root)))
        print("-" * 100, "\n", sep="")

if __name__ == "__main__":
    main()
Now that we've covered the Tree Depth-First Search, let's move on to another frequently asked coding pattern.
For problems involving meeting times, or intervals of some nature, the Merge Intervals pattern is a powerful coding technique. This technique is particularly useful when we need to deal with a set of intervals and perform operations such as merging overlapping intervals or determining their intersections.
In this technique, we typically start by sorting the given intervals based on their start or end times, which helps efficiently identify the overlapping intervals. Once we have this interval information, we can swiftly perform the tasks based on the problem’s requirements. The Merge Intervals pattern has many applications in multiple scenarios, including scheduling algorithms, resource allocation problems, and calendar management systems. From analyzing time-based data to consolidating meeting schedules, this coding technique offers an elegant solution for handling interval-related operations effectively.
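Before the worked example, here is the pattern's core move in isolation: a minimal sketch (our own illustration) that merges overlapping intervals after sorting them by start time:

```python
def merge_intervals(intervals):
    """Merge all overlapping intervals in a list of [start, end] pairs."""
    if not intervals:
        return []
    # Sort by start time so that overlapping intervals become adjacent
    ordered = sorted(intervals, key=lambda interval: interval[0])
    merged = [list(ordered[0])]
    for start, end in ordered[1:]:
        if start <= merged[-1][1]:
            # Overlaps the last merged interval: extend its end if needed
            merged[-1][1] = max(merged[-1][1], end)
        else:
            # No overlap: start a new merged interval
            merged.append([start, end])
    return merged


print(merge_intervals([[1, 3], [2, 6], [8, 10], [15, 18]]))  # [[1, 6], [8, 10], [15, 18]]
```

Sorting costs O(n log n), after which a single linear pass resolves every overlap; this sort-then-scan shape recurs throughout the pattern.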
Let’s see how the following example illustrates the application of the Merge Intervals pattern to efficiently solve the given coding problem:
For two lists of closed intervals given as input, interval_list_a and interval_list_b, where each interval has its own start and end time, write a function that returns the intersection of the two interval lists.
For example, the intersection of the intervals [3, 8] and [5, 10] is [5, 8].
This problem shares two features with the Merge Intervals pattern: the lists of intervals are sorted, and the result requires comparing intervals to check for overlap. Because both lists are sorted, we can safely compare pairs of intervals (one from list A and one from list B), knowing that after every comparison we only need to move forward in the lists, without re-checking either list from the start.
The algorithm to solve this problem is as follows:
We’ll use two indices, i and j, to iterate through the intervals in the two lists, interval_list_a and interval_list_b, respectively.
To check whether there’s any intersecting point among the given intervals:
Take the starting times of the current pair of intervals from both lists and check which occurs later, storing it in a variable, say start.
Also, compare the ending times of the same pair of intervals and store the minimum end time in another variable, say end.
Next, we check whether interval_list_a[i] and interval_list_b[j] overlap by comparing start and end: the intervals overlap if start <= end.
If they overlap, the intersecting interval [start, end] is added to the resultant list, intersections.
After the comparison, we need to move forward in one of the two input lists. The decision is based on which of the two intervals being compared ends earlier. If the interval that ends first is in interval_list_a, we move forward in that list; otherwise, we move forward in interval_list_b.
Let’s look at the code for this solution below:
def intervals_intersection(interval_list_a, interval_list_b):
    # to store all intersecting intervals
    intersections = []
    # index "i" to iterate over the length of list a and index "j"
    # to iterate over the length of list b
    i = j = 0
    # while loop will break whenever either of the lists ends
    while i < len(interval_list_a) and j < len(interval_list_b):
        # Let's check if interval_list_a[i] intersects interval_list_b[j]
        # 1. start - the potential startpoint of the intersection
        # 2. end - the potential endpoint of the intersection
        start = max(interval_list_a[i][0], interval_list_b[j][0])
        end = min(interval_list_a[i][1], interval_list_b[j][1])
        # if this is an actual intersection
        if start <= end:
            intersections.append([start, end])  # add it to the list
        # Move forward in the list whose interval ends earlier
        if interval_list_a[i][1] < interval_list_b[j][1]:
            i += 1
        else:
            j += 1
    return intersections

# Driver code
def main():
    input_interval_list_a = [
        [[1, 2]],
        [[1, 4], [5, 6], [9, 15]],
        [[3, 6], [8, 16], [17, 25]],
        [[4, 7], [9, 16], [17, 28], [39, 50], [55, 66], [70, 89]],
        [[1, 3], [5, 6], [7, 8], [12, 15]],
    ]
    input_interval_list_b = [
        [[1, 2]],
        [[2, 4], [5, 7], [9, 15]],
        [[2, 3], [10, 15], [18, 23]],
        [[3, 6], [7, 8], [9, 10], [14, 19], [23, 33], [35, 40], [45, 59], [60, 64], [68, 76]],
        [[2, 4], [7, 10]],
    ]
    for i in range(len(input_interval_list_a)):
        print(i + 1, '.\t Interval List A: ', input_interval_list_a[i], sep="")
        print('\t Interval List B: ', input_interval_list_b[i], sep="")
        print("\t Intersecting intervals in 'A' and 'B' are: ",
              intervals_intersection(input_interval_list_a[i], input_interval_list_b[i]), sep="")
        print('-' * 100)

if __name__ == "__main__":
    main()
After understanding how to use the Merge Intervals pattern effectively, it's time to explore the next coding pattern.
Custom data structures are essentially modified versions of existing data structures tailored to address specific needs. We often need to go beyond standard data structures like arrays and hash tables to tackle unique challenges more effectively. For instance, a web crawler that processes numerous pages and URLs might use a specialized "URL queue" to manage these URLs efficiently, ensuring they are unique and prioritized based on relevance.

Custom data structures involve creating custom classes that encapsulate the functionality and properties needed to efficiently manage and manipulate the data. By designing data structures optimized for the problem domain, we can improve the performance and readability of our code while simplifying complex operations.

To determine if a problem can benefit from the Custom Data Structures pattern, consider scenarios where standard data structures like arrays, lists, or maps are not sufficient, or where specialized operations need to be performed frequently. Common problems suitable for this pattern include implementing priority queues, disjoint-set data structures, or specialized graph representations.
Let’s see how the following example illustrates the application of the Custom Data Structures pattern to efficiently solve the given coding problem:
Implement an LRU cache class with the following functions:
Init(capacity): Initializes an LRU cache with the capacity size.
Set(key, value): Adds a new key-value pair or updates an existing key with a new value.
Get(key): Returns the value of the key, or −1 if the key does not exist.
If the number of keys has reached the cache capacity, evict the least recently used key and add the new key.
As caches use relatively expensive, faster memory, they are not designed to store large data sets. Whenever the cache becomes full, we must evict some data from it. There are several caching algorithms to implement a cache eviction policy. LRU is a very simple and commonly used algorithm. The core concept of the LRU algorithm is to evict the oldest data from the cache to accommodate more data.
This problem can be solved efficiently if we combine two data structures and use their respective functionalities, as well as the way they interact with each other, to our advantage. A doubly linked list allows us to arrange nodes by the time they were last accessed. However, accessing a value in a linked list takes O(n) time, so we pair the list with a hash map that stores each key along with a pointer to that key's node in the list. This gives us O(1) lookups while the list maintains the usage order.
Here is the algorithm for the LRU cache:
Set:
If the element exists in the hash map, then update its value and move the corresponding linked list node to the head of the linked list.
Otherwise, if the cache is already full, remove the tail element from the doubly linked list. Then delete its hash map entry, add the new element at the head of the linked list, and add the new key-value pair to the hash map.
Get:
If the element exists in the hash map, move the corresponding linked list node to the head of the linked list and return the element value.
Otherwise, return -1.
Note that the doubly linked list keeps track of the most recently accessed elements. The element at the head of the doubly linked list is the most recently accessed element. All newly inserted elements (in Set) go to the head of the list. Similarly, any element accessed (in the Get operation) goes to the head of the list.
Let’s look at the code for this solution below:
```python
from linked_list import LinkedList

# We will use a linked list of integer pairs, where the first
# integer is the key and the second integer is the value
class LRUCache:
    # Initializes an LRU cache with the capacity size
    def __init__(self, capacity):
        self.cache_capacity = capacity
        self.cache_map = {}
        self.cache_list = LinkedList()

    # Returns the value of the key, or -1 if the key does not exist
    def get(self, key):
        # If the key doesn't exist, we return -1
        if key not in self.cache_map:
            return -1
        found_itr = self.cache_map[key]
        # If the key exists, we must move it to the front of the list
        self.cache_list.move_to_head(found_itr)
        return found_itr.pair[1]

    # Adds a new key-value pair or updates an existing key with a new value
    def set(self, key, value):
        # Check if the key exists in the cache hash map
        if key in self.cache_map:
            found_iter = self.cache_map[key]
            # Move the node corresponding to the key to the front of the list
            self.cache_list.move_to_head(found_iter)
            # Then update the value of the node
            found_iter.pair[1] = value
            return
        # If the key does not exist and the cache is full
        if len(self.cache_map) == self.cache_capacity:
            # We need to evict the LRU entry: get the key of the LRU node
            # (the first element of each cache entry is the key)
            key_tmp = self.cache_list.get_tail().pair[0]
            # This is why we stored a <key, value> pair in the cache list:
            # we would not have been able to get the key if we had just
            # stored the values
            # Remove the last node in the list
            self.cache_list.remove_tail()
            # Remove the entry from the cache
            del self.cache_map[key_tmp]
        # The insert_at_head function inserts a new element at the front
        # of the list in constant time
        self.cache_list.insert_at_head([key, value])
        # Map the key to the list head, since we added the
        # new element at the head of the list
        self.cache_map[key] = self.cache_list.get_head()

    def print(self):
        print("Cache current size: ", self.cache_list.size, ", ", end="")
        print("Cache contents: {", end="")
        node = self.cache_list.get_head()
        while node:
            print("{", str(node.pair[0]), ",", str(node.pair[1]), "}", end="")
            node = node.next
            if node:
                print(", ", end="")
        print("}")
        print("-" * 100, "\n")


def main():
    # Creating a cache of size 2
    cache_capacity = 2
    cache = LRUCache(cache_capacity)
    print("Initial state of cache")
    print("Cache capacity: " + str(cache_capacity))
    cache.print()

    keys = [10, 10, 15, 20, 15, 25, 5]
    values = ["20", "get", "25", "40", "get", "85", "5"]
    for i in range(len(keys)):
        if values[i] == "get":
            print("Getting by Key: ", keys[i])
            print("Cached value returned: ", cache.get(keys[i]))
        else:
            print("Setting cache: Key: ", keys[i], ", Value: ", values[i])
            cache.set(keys[i], int(values[i]))
        cache.print()


if __name__ == '__main__':
    main()
```
Now that we've explored the design and implementation of Custom Data Structures, let's explore the last, but certainly not the least, coding pattern from the list of frequently asked patterns by Meta.
The K-Way Merge pattern is a technique for merging multiple sorted data structures, like arrays and linked lists, into one. This technique extends the classic merge sort by not just merging two lists but several at once. We repeatedly pick the smallest (or largest for descending order) elements from each list and keep adding them to a new list until all are merged. We can do this efficiently using a min-heap, where we add the first element of each list to the heap. We keep replacing the top of the heap with the next element from the same list until all elements are merged into the new list. Another approach is grouping lists into pairs and merging them through two-way merges. We do this by merging each pair of lists and repeating until we end up with a single fully sorted merged list. Both methods help us merge multiple lists, ensuring our data stays sorted.
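To make the min-heap approach concrete before we turn to the linked-list solution, here's a short sketch over plain Python lists (the function name is illustrative). We seed the heap with the first element of every list, then repeatedly pop the smallest element and push its successor from the same list:

```python
import heapq

def merge_k_sorted(lists):
    """Merge multiple sorted lists using a min-heap of (value, list index, element index)."""
    merged = []
    # Seed the heap with the first element of each non-empty list
    heap = [(lst[0], i, 0) for i, lst in enumerate(lists) if lst]
    heapq.heapify(heap)
    while heap:
        value, i, j = heapq.heappop(heap)
        merged.append(value)
        # Replace the popped element with the next one from the same list
        if j + 1 < len(lists[i]):
            heapq.heappush(heap, (lists[i][j + 1], i, j + 1))
    return merged

print(merge_k_sorted([[1, 4, 7], [2, 5], [3, 6, 8]]))
# [1, 2, 3, 4, 5, 6, 7, 8]
```

With k lists and n total elements, the heap never holds more than k entries, so this runs in O(n log k) time.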
Let’s see how the following example illustrates the application of the K-Way Merge pattern to efficiently solve the given problem:
Given an array of k sorted linked lists, merge them all into a single sorted linked list.
In this solution, we iterate through the given list of sorted lists, progressively merging pairs of lists until only one merged list remains. We achieve this with a divide-and-conquer strategy that iteratively merges adjacent pairs of lists. This way, after the first pairing, we're left with half as many lists; after the next pass, a quarter; and so on, until a single sorted list remains.
To merge the adjacent pairs of lists, we use a helper function, merge_2_lists(head1, head2). It uses a dummy node to initialize the merged list and a prev pointer to track the last node added. Iterating through both lists simultaneously, it compares nodes from each list and attaches the smaller one to the merged list. It continues until one list is exhausted, then appends the remaining nodes from the non-empty list. Finally, it returns the head of the merged list. This approach ensures the merged list remains sorted throughout the merging process.
We keep merging the adjacent pairs of lists until all lists are merged into one. Finally, we return the head of the final merged list.
Let's look at the code for this solution below:
```python
from __future__ import print_function
from linked_list import LinkedList
from linked_list_node import LinkedListNode
from print_list import print_list_with_forward_arrow

# Helper function
def merge_2_lists(head1, head2):
    dummy = LinkedListNode(-1)
    prev = dummy  # Set the prev pointer to the dummy node
    # Traverse over the lists until both or one of them becomes null
    while head1 and head2:
        if head1.data <= head2.data:
            # If the l1 value is <= the l2 value, add the l1 node to the list
            prev.next = head1
            head1 = head1.next
        else:
            # If the l1 value is greater than the l2 value, add the l2 node to the list
            prev.next = head2
            head2 = head2.next
        prev = prev.next
    if head1 is not None:
        prev.next = head1
    else:
        prev.next = head2
    return dummy.next

# Main function
def merge_k_lists(lists):
    if len(lists) > 0:
        step = 1
        while step < len(lists):
            # Traversing over the lists in pairs to merge them
            for i in range(0, len(lists) - step, step * 2):
                lists[i].head = merge_2_lists(lists[i].head, lists[i + step].head)
            step *= 2
        return lists[0].head
    return

# Driver code
def main():
    inputlists = [
        [[21, 23, 42], [1, 2, 4]],
        [[11, 41, 51], [21, 23, 42]],
        [[2], [1, 2, 4], [25, 56, 66, 72]],
        [[11, 41, 51], [2], [2], [2], [1, 2, 4]],
        [[10, 30], [15, 25], [1, 7], [3, 9], [100, 300], [115, 125], [10, 70], [30, 90]],
    ]
    inp_num = 1
    for i in inputlists:
        print(inp_num, ".\tInput lists:", sep="")
        ll_lists = []
        for x in i:
            a = LinkedList()
            a.create_linked_list(x)
            ll_lists.append(a)
            print("\t", end="")
            print_list_with_forward_arrow(a.head)
            print()
        inp_num += 1
        print("\tMerged list: \n\t", end="")
        print_list_with_forward_arrow(merge_k_lists(ll_lists))
        print("\n", "-" * 100, sep="")


if __name__ == "__main__":
    main()
```
That wraps up our exploration of the coding patterns behind Meta's most frequently asked coding questions.
To ace your Meta interview, mastering the patterns we have just covered is essential. Understanding the patterns underlying the solutions you devise will not only help you tackle similar problems in the future but also demonstrate your depth of understanding to interviewers. We have explored some of the most common coding patterns through interview questions frequently asked by Meta, but it's just a start. Remember, practice makes perfect, so dedicate time to solving problems regularly and seek feedback to improve further. For deeper preparation, consider the following Educative courses, which cover a wide range of coding patterns, including Dynamic Programming, in multiple programming languages:
Moreover, if you are looking for a customized learning plan, take a look at the following paths by Educative:
With determination, preparation, and a solid grasp of coding patterns, you’ll be well-equipped to tackle any coding challenge that comes your way during the Meta interview process. Best of luck!
Free Resources