
Ace the Meta Interview: Must-Know Python Coding Problems

28 min read
May 31, 2024
Contents
Pattern 1: Stacks
Minimum Remove to Make Valid Parentheses
Solution
Pattern 2: Modified Binary Search
Random Pick with Weight
Solution
Pattern 3: Top K Elements
Kth Largest Element in an Array
Solution
K Closest Points to Origin
Solution
Top K Frequent Elements
Solution
Pattern 4: Tree Depth-First Search
Lowest Common Ancestor in a Binary Tree
Solution
Binary Tree Right Side View
Solution
Pattern 5: Merge Intervals
Interval List Intersections
Solution
Pattern 6: Custom Data Structures
Implement LRU Cache
Solution
Pattern 7: K-Way Merge
Merge K Sorted Lists
Solution

Despite Facebook's rebranding to Meta, the culture and hiring process for software engineers remain largely the same. Meta expects all engineers, regardless of their level or tenure, to voice opinions about strategic direction. The most successful candidates bring technical excellence and a drive to build boundary-pushing products.

Candidates will be able to showcase their leadership skills during the behavioral portion of the interview. However, first, they'll need to excel in the technical screen and coding sessions.

The purpose of the technical screen and coding sessions is to assess your proficiency in specific technical skills, as well as your underlying ability to solve complex problems. Of course, it’s important to aim for error-free solutions, but a slip-up isn’t the end of the world: Meta values your ability to catch mistakes and clearly narrate your problem-solving process.

So, how can you efficiently prepare to tackle Meta’s technical interview?

Focus on the patterns that underlie Meta’s most common coding problems.

After researching Meta’s approach to technical interviews, we’ve identified 7 patterns to help you ace your technical screen and coding sessions. Instead of worrying about a brand-new coding problem, you’ll be able to recognize the underlying pattern and apply the algorithms and strategies that will be most effective. 

In this blog, we’ll break down each pattern and provide examples of common coding problems that use it. Let’s dive in!

Pattern 1: Stacks#

A stack is a linear data structure that organizes and manages data in a Last In, First Out (LIFO) manner. This means the last element added to the stack is the first to be removed. Think of it like a stack of plates where you can only add or remove plates from the top.

Using stacks as a coding pattern involves the following fundamental operations:

| Operation | Time Complexity | Description |
|-----------|-----------------|-------------|
| Push | O(1) | Adds the element at the top of the stack. |
| Pop | O(1) | Removes and returns the element from the top of the stack. |
| Peek | O(1) | Returns the element at the top of the stack without removing it. |
| IsEmpty | O(1) | Checks whether the stack is empty. Returns TRUE if the stack is empty, FALSE otherwise. |
| Size | O(1) | Returns the total number of elements in the stack. |
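
As a quick illustration of the table above, here's a minimal sketch of these operations using a plain Python list, where the end of the list serves as the top of the stack:

# Minimal stack sketch using a Python list (top of stack = end of list)
stack = []
stack.append(10)            # Push: O(1) amortized
stack.append(20)
top = stack[-1]             # Peek: returns 20 without removing it
popped = stack.pop()        # Pop: removes and returns 20
is_empty = len(stack) == 0  # IsEmpty: False, 10 is still on the stack
size = len(stack)           # Size: 1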

Stacks are commonly used for tasks like expression evaluation, syntax parsing, or tracking state changes in algorithms. To identify if a problem can be solved using the Stacks pattern, look for scenarios where the last in, first out property is advantageous or where tracking state changes in a last in, first out manner is necessary. Examples of common interview problems that can be tackled using the Stacks pattern include evaluating arithmetic expressions, checking balanced parentheses, or implementing a browser’s back functionality.

Let’s see how the following example illustrates the application of the Stacks pattern to efficiently solve the given coding problem:

Minimum Remove to Make Valid Parentheses#

Given a string, s, that may have matched parentheses (where each opening parenthesis, (, has a corresponding closing parenthesis, )) and unmatched parentheses (where an opening parenthesis, (, has no corresponding closing parenthesis, )), remove the minimum number of parentheses so that the resulting string represents a valid parenthesization. For example, the string "a(b)" represents a valid parenthesization, while the string "a(b" doesn't because the opening parenthesis doesn't have a corresponding closing parenthesis.

Solution#

In this solution, we use the Stacks pattern to remove all the extra parentheses from the input string. We traverse the input string, and every time we encounter an opening parenthesis, we push it, along with its index, onto the stack and keep traversing. Meanwhile, whenever we find a closing parenthesis, we decide whether to push it onto the stack. For this, we check the stack as follows:

  • If the stack is not empty and the top stack element is an opening parenthesis, we pop it off. This means the popped opening parenthesis pairs with the current closing parenthesis, forming a valid pair of parentheses in the input string.

  • If the stack is empty or the top stack element is a closing parenthesis, we push the current closing parenthesis, along with its index, onto the stack.

After traversing the complete string, all the parentheses left in the stack are invalid. Since we stored each parenthesis's index as well, we can now use these indexes to remove those parentheses from the input string. We return the updated string as the output, representing a valid parenthesization.

Let's look at the code for this solution:

def min_remove_parentheses(s):
    stack = []
    s_list = list(s)
    for i, val in enumerate(s):
        # If the stack is not empty, the top element of the stack is an
        # opening parenthesis, and the current element is a closing parenthesis
        if stack and stack[-1][0] == '(' and val == ')':
            # Pop the opening parenthesis as it makes a valid pair
            # with the current closing parenthesis
            stack.pop()
        # If the current value is an opening or a closing parenthesis
        elif val == '(' or val == ')':
            # Push it onto the stack
            stack.append([val, i])
    # Remove the invalid parentheses
    for p in stack:
        s_list[p[1]] = ""
    # Convert the list back to a string
    result = ''.join(s_list)
    return result

# Driver code
def main():
    inputs = ["ar)ab(abc)abd(", "a)rt)lm(ikgh)", "aq)xy())qf(a(ba)q)",
              "(aw))kk())(w(aa)(bv(wt)r)", "(qi)(kl)((y(yt))(r(q(g)s)"]
    for i in range(len(inputs)):
        print(i + 1, ". Input: \"", inputs[i], "\"", sep="")
        print("   Valid parentheses, after minimum removal: \"",
              min_remove_parentheses(inputs[i]), "\"", sep="")
        print("-" * 100)

if __name__ == "__main__":
    main()

We've looked at how stacks can be used as a coding pattern to solve problems that require data to be processed in LIFO order. Now, let's move on to the next pattern.

Pattern 2: Modified Binary Search#

In many coding interviews, candidates often encounter problems where binary search comes in handy. It's known for its logarithmic time complexity, which makes it super efficient. However, it only works when the input data is already sorted. That's where the Modified Binary Search pattern steps in. It is an advanced adaptation of the traditional binary search algorithm, modified to handle more complex scenarios where elements may not strictly meet the standard sorted criteria. This pattern excels in efficiently locating elements or conditions that are not straightforward to find through linear searching, particularly when dealing with rotated arrays, finding boundaries, or solving the random pick with weight problem.

By dividing the search space in half, this method significantly reduces the time complexity to O(log n), making it highly effective for large datasets. Like the traditional binary search, it operates by comparing midpoints and systematically narrowing down the search range, but it incorporates additional logic to handle irregularities and special cases.

The adaptability of the Modified Binary Search pattern makes it a powerful tool in software development, enhancing the ability to manage and retrieve data efficiently in scenarios where direct comparisons and typical ordering do not apply. This pattern not only streamlines data retrieval processes but also aids in optimizing performance across various programming tasks.
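
To make the core idea concrete before the worked example, here's a minimal sketch (not tied to any one problem) of the loop most modified binary searches share: finding the first index whose value satisfies a condition, here the first element greater than or equal to a target in a sorted list:

def first_at_least(nums, target):
    # Search space is [low, high); shrink it by half each iteration
    low, high = 0, len(nums)
    while low < high:
        mid = low + (high - low) // 2
        if nums[mid] < target:
            low = mid + 1  # condition fails: answer lies to the right
        else:
            high = mid     # condition holds: mid could be the answer
    return low             # first index with nums[index] >= target

print(first_at_least([2, 4, 4, 7, 9], 4))  # 1
print(first_at_least([2, 4, 4, 7, 9], 8))  # 4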

Let’s see how the following example illustrates the application of the Modified Binary Search pattern to efficiently solve the given coding problem:

Random Pick with Weight#

We’re given an array of positive integers, weights, where weights[i] is the weight of the i-th index. The task here is to write a function, pick_index(), which performs weighted random selection to return an index from the weights array. The larger the value of weights[i], the heavier the weight is, and the higher the chances of its index being picked.

Suppose that the array consists of the weights [12, 84, 35]. In this case, the probabilities of picking the indexes will be as follows:

  • Index 0: 12/(12 + 84 + 35) = 9.2%

  • Index 1: 84/(12 + 84 + 35) = 64.1%

  • Index 2: 35/(12 + 84 + 35) = 26.7%

Note: Since we’re randomly choosing from the options, there is no guarantee that in any specific run of the program, any of the elements will be selected with the exact expected frequency.

Solution#

We can use the Modified Binary Search pattern to speed up the random index-picking process. It reduces the index-searching time to O(log n), which would otherwise be O(n) with linear scanning. We start by preprocessing the given weights by summing them to obtain the total weight. Then, we construct a prefix sum array where each index i stores the cumulative sum of weights up to index i. Next, we generate a random number between 1 and the total weight. Finally, we use binary search to find the index corresponding to the randomly generated number in the prefix sum array. This approach ensures that elements with higher weights have a proportionally higher chance of being selected while maintaining randomness.

Here’s how the algorithm works:

  • The __init__() method generates a list of cumulative sums from the given list of weights.

  • The pick_index() method returns a randomly selected index while considering the provided weights. It works as follows:

    • Generates a random number, target, between 1 and k, where k is the largest value in the list of cumulative sums of weights.

    • Uses binary search to find the index of the first cumulative sum greater than the random value. Initialize the low index to 0 and the high index to the length of the list of cumulative sums of weights. While the low index is less than the high index, the algorithm:

      • Calculates the mid index as low + (high - low) // 2.

      • If the cumulative sum at the mid index is less than or equal to the target, the low index is updated to mid + 1.

      • Otherwise, the high index is updated to mid.

    • At the end of the binary search, the low pointer will point to the index of the first cumulative sum greater than the target. Return this index as the chosen index. (A compact alternative using Python's bisect module is sketched after this list.)
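
As a side note, Python's standard library can stand in for the hand-rolled binary search: bisect.bisect_left on the cumulative sums finds the same index. A minimal sketch of that alternative, assuming the same running_sums list built in __init__():

import bisect
import random

def pick_index(running_sums, total_sum):
    # Find the first cumulative sum >= target; its index is the pick
    target = random.randint(1, total_sum)
    return bisect.bisect_left(running_sums, target)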

Let’s look at the code for this solution below:

import random

class RandomPickWithWeight:
    # Constructor
    def __init__(self, weights):
        # List to store running sums of weights
        self.running_sums = []
        # Variable to calculate a running sum
        running_sum = 0
        # Iterate through the given weights
        for w in weights:
            # Add the current weight to the running sum
            running_sum += w
            # Append the running sum to the running_sums list
            self.running_sums.append(running_sum)
        # Store the total sum of weights
        self.total_sum = running_sum

    # Method to pick an index based on the weights
    def pick_index(self):
        # Generate a random number between 1 and the total sum of the array
        target = random.randint(1, self.total_sum)
        # Initialize low and high variables for binary search
        low = 0
        high = len(self.running_sums)
        # Perform binary search to find the first value higher than the target
        while low < high:
            mid = low + (high - low) // 2
            if target > self.running_sums[mid]:
                low = mid + 1
            else:
                high = mid
        # Return the index (low) found
        return low

# Driver code
def main():
    counter = 900
    weights = [[1, 2, 3, 4, 5],
               [1, 12, 23, 34, 45, 56, 67, 78, 89, 90],
               [10, 20, 30, 40, 50],
               [1, 10, 23, 32, 41, 56, 62, 75, 87, 90],
               [12, 20, 35, 42, 55],
               [10, 10, 10, 10, 10],
               [10, 10, 20, 20, 20, 30],
               [1, 2, 3],
               [10, 20, 30, 40],
               [5, 10, 15, 20, 25, 30]]
    counts = {}
    for i in range(len(weights)):
        print(i + 1, ".\tList of weights: ", weights[i], ", pick_index() called ", counter, " times", "\n", sep="")
        for l in range(len(weights[i])):
            counts.setdefault(l, 0)
        sol = RandomPickWithWeight(weights[i])
        for j in range(counter):
            index = sol.pick_index()
            counts[index] += 1
        print("-" * 105)
        print("\t{:<10}{:<5}{:<10}{:<5}{:<15}{:<5}{:<20}{:<5}{:<15}".format(
            "Indexes", "|", "Weights", "|", "Occurrences", "|", "Actual Frequency", "|", "Expected Frequency"))
        print("-" * 105)
        for key, value in counts.items():
            print("\t{:<10}{:<5}{:<10}{:<5}{:<15}{:<5}{:<20}{:<5}{:<15}".format(key, "|", weights[i][key], "|", value, "|",
                  str(round((value / counter) * 100, 2)) + "%", "|", str(round(weights[i][key] / sum(weights[i]) * 100, 2)) + "%"))
        counts = {}
        print("\n", "-" * 105, "\n", sep="")

if __name__ == '__main__':
    main()

Now that we've discussed Modified Binary Search, let's turn our attention to another important coding pattern.

Pattern 3: Top K Elements#

Unlike other techniques that require sorting the entire data to find the top or bottom k elements, this technique uses a heap to access the required elements from unsorted data. It begins by processing the elements of the dataset and inserting them into the heap. As new elements are encountered, they are compared with those in the collection. If the new element is larger (or smaller, depending on whether we are finding the top k largest or smallest elements), it replaces one of the elements in the collection. This ensures that the collection always contains the top k elements seen so far, which makes the technique useful for handling large datasets or real-time data where performance matters. Using this pattern saves time and computational resources when solving problems such as finding the K Largest Elements in a Stream, the Top K Frequent Words, or the K Smallest Elements in an Array.
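
In Python, the heapq module already ships helpers built on exactly this idea; as a quick sketch of how they map to the pattern:

import heapq

nums = [1, 5, 12, 2, 11, 9, 7, 30, 20]
print(heapq.nlargest(3, nums))   # [30, 20, 12]
print(heapq.nsmallest(3, nums))  # [1, 2, 5]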

Let’s see how the following examples illustrate the application of the Top K Elements pattern to efficiently solve these problems:

Kth Largest Element in an Array#

Given an unsorted array, find the kth largest element.

Note: We need to find the kth largest element in the sorted order, not the kth distinct element.

Solution#

We can use a min-heap to efficiently return the kth largest number from the given unsorted array. A min-heap ensures that the minimum element is always at the root; we use it to hold the k largest elements seen so far, so the root of the heap is the kth largest element. Once we have processed all the elements of the array this way, we can simply return the root as the kth largest element.

The algorithm works as follows:

  • Insert the first k elements onto the min-heap.

  • For each subsequent element in the array, compare it with the root (minimum element) of the min-heap.

    • If the current element in the array is greater than the root element in our min-heap, remove the root element and insert the current element from the array.

    • Continue inserting and removing elements in the min-heap until we have processed all elements in the array.

  • After processing all the elements, the min-heap will contain the k largest elements from the input array. The smallest of them, the kth largest element, will be at the root of the min-heap.

Let’s look at the code for this solution:

import heapq

def find_kth_largest(nums, k):
    # Initialize an empty list
    k_numbers_min_heap = []
    # Add the first k elements to the list
    for i in range(k):
        k_numbers_min_heap.append(nums[i])
    # Convert the list into a min-heap
    heapq.heapify(k_numbers_min_heap)
    # Loop through the remaining elements in the 'nums' array
    for i in range(k, len(nums)):
        # Compare the current element with the minimum
        # element (root) of the min-heap
        if nums[i] > k_numbers_min_heap[0]:
            # Remove the smallest element
            heapq.heappop(k_numbers_min_heap)
            # Add the current element
            heapq.heappush(k_numbers_min_heap, nums[i])
    # The root of the heap has the kth largest element
    return k_numbers_min_heap[0]

# Driver code
def main():
    input_array = [
        [1, 5, 12, 2, 11, 9, 7, 30, 20],
        [5, 2, 9, -3, 7],
        [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
        [1, 4, 6, 0, 2],
        [3, 5, 2, 3, 8, 5, 3]
    ]
    k = [3, 1, 9, 1, 4]
    for i in range(len(input_array)):
        print(i + 1, ".\tInput array: ", input_array[i], sep="")
        print("\tValue of k: ", k[i], sep="")
        print("\tkth largest number: ", find_kth_largest(input_array[i], k[i]), sep="")
        print("-" * 100)

if __name__ == '__main__':
    main()

Now, let's look at another problem that can be solved using the Top K Elements pattern.

K Closest Points to Origin#

Given a list of points on a plane, where the plane is a 2-D array with (x, y) coordinates, find the k closest points to the origin (0, 0).

Note: Here, the distance between two points on a plane is the Euclidean distance: √(x² + y²)

Solution#

When we are trying to find the k closest points to a given point (e.g., the origin), we typically iterate through all points, calculating distances and comparing them to maintain the k closest points found so far. With a little organization of our set of k closest points, we can optimize this process: we store these points in a max-heap ordered by distance from the origin.

With a max-heap, we maintain the k closest points such that the farthest point among them is at the root of the heap. This means that we have O(1) access to the point, among these k points, that is farthest from the origin. Therefore, at each step, instead of iterating through all points to find the farthest one, we can simply access the root of the max-heap.

Now, instead of comparing all k points with the next point from the list, we simply compare the point in the max-heap that is farthest from the origin with the next point from the list. If the next point is closer to the origin, it wins inclusion in the max-heap and pops the point it was compared with. If not, nothing changes.

In this way, at every step of the scan through the list, the max-heap acts like a sieve, picking out the top k points in terms of their distance from the origin.

The Euclidean distance between a point P(x, y) and the origin can be calculated using the formula √(x² + y²).

Now that we can calculate the distance between (0, 0) and all the points, how will we find the k nearest points? As discussed above, the heap data structure is ideal for this purpose. We'll use a custom comparison function to define the order of the elements in the heap. Since we plan to populate the heap with coordinate pairs, we'll define a class Point and implement a custom less-than function, __lt__(self, other), in it for use by the heapify process. In this function, we compare the distances of the two points from the origin. Because Python's heapq implements a min-heap, the point farther from the origin is treated as "less than" the other point, which makes the heap behave as a max-heap on distance. We'll iterate through the given list of points, and if we find one that is closer to the origin than the point at the root of the max-heap, we do the following two things:

  • Pop from the max-heap—that is, remove the point in the heap farthest from the origin.

  • Push the point that is closer to the origin onto the max-heap.

As we move through the given list of points, this will ensure that we always have the k points in our heap that are the closest to the origin.
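
The point.py file isn't reproduced in this post, so here's a hypothetical sketch of what the Point class might look like. Note the inverted __lt__: since heapq is a min-heap, treating the farther point as "smaller" keeps the farthest of the k points at the root:

# point.py (hypothetical sketch; the actual file isn't shown in this post)
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def distance_from_origin(self):
        # Squared distance is enough for comparisons; sqrt is omitted
        return self.x * self.x + self.y * self.y

    # Inverted comparison: the farther point compares as "less than",
    # so heapq's min-heap keeps the farthest of the k points at the root
    def __lt__(self, other):
        return self.distance_from_origin() > other.distance_from_origin()

    def __str__(self):
        return "[" + str(self.x) + ", " + str(self.y) + "]"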

Let’s look at the code for this solution below:

main.py
point.py
from point import Point
import heapq

def k_closest(points, k):
    points_max_heap = []
    # Put the first 'k' points in the max-heap
    for i in range(k):
        heapq.heappush(points_max_heap, points[i])
    # Go through the remaining points of the input array; if a
    # point is closer to the origin than the top point of the
    # max-heap, remove the top point from the heap and add the
    # point from the input array
    for i in range(k, len(points)):
        if points[i].distance_from_origin() \
                < points_max_heap[0].distance_from_origin():
            heapq.heappop(points_max_heap)
            heapq.heappush(points_max_heap, points[i])
    # The heap has the 'k' points closest to the origin; return
    # them in a list
    return list(points_max_heap)

# Function used to convert a list to a string
def lst_to_str(lst):
    out = "["
    for i in range(len(lst) - 1):
        out += str(lst[i]) + ", "
    out += str(lst[len(lst) - 1]) + "]"
    return out

# Driver code
def main():
    points_one = [Point(1, 3), Point(3, 4), Point(2, -1)]
    points_two = [Point(1, 3), Point(2, 4), Point(2, -1), Point(-2, 2),
                  Point(5, 3), Point(3, -2)]
    points_three = [Point(1, 3), Point(5, 3), Point(3, -2), Point(-2, 2)]
    points_four = [Point(2, -1), Point(-2, 2), Point(1, 3), Point(2, 4)]
    points_five = [Point(1, 3), Point(2, 4), Point(2, -1), Point(-2, 2),
                   Point(5, 3), Point(3, -2), Point(5, 3), Point(3, -2)]
    k_list = [2, 3, 1, 4, 5]
    points = [points_one, points_two, points_three, points_four, points_five]
    for i in range(len(k_list)):
        result = k_closest(points[i], k_list[i])
        print(i + 1, ".\tSet of points: ", sep="", end='')
        print(lst_to_str(points[i]))
        print("\tk:", k_list[i])
        print("\tHere are the k =", k_list[i], "points closest to the",
              "origin (0, 0): ", end='')
        print(lst_to_str(result))
        print("-" * 100)

if __name__ == '__main__':
    main()

Now, let's move to the next example for Top K Elements.

Top K Frequent Elements#

Given an array of integers, arr, and an integer, k, return the k most frequent elements.

Note: You can return the answer in any order.

Solution#

Finding the top k frequent elements typically involves iterating through all elements. We can optimize this process using the Top K Elements pattern. We begin by determining the frequency of each element using a hash map. Once we have the frequency map, a min-heap is the best data structure to keep track of the top k frequencies. After storing the frequencies of the elements, we’ll iterate through the hash map, insert each element into our heap, and find the top k frequent elements.

The hash map will store the element as the key and its corresponding frequency in the array as the value. When inserting elements from the hash map into the min-heap, the following steps are taken:

  • We’ll store a pair (a, b) in the heap (this can be in the form of any data structure, like an array or a tuple), where a is the frequency of the element and b is the element itself. This ensures that the elements in the min-heap are always sorted by frequency.

  • We’ll make sure that if the size of the min-heap becomes greater than k, that is, there are more than k elements in the min-heap, we pop the pair with the lowest frequency from the min-heap. This ensures we always have the k most frequent elements in the min-heap.

Once we have added the pairs from the hash map to the min-heap, the min-heap will hold the pairs with the top k frequencies. We will traverse the min-heap and add the second element of each pair to a new array. This is done because we only need the top k frequent elements, not their respective frequencies. Finally, we will return this new array.
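
For comparison, Python's collections.Counter wraps this frequency-count-plus-heap selection in a single call (in CPython, most_common is itself built on heapq.nlargest); a minimal sketch:

from collections import Counter

arr = [1, 3, 5, 12, 11, 12, 11, 12, 5]
# Three most frequent elements; the order among ties may vary
print([num for num, freq in Counter(arr).most_common(3)])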

Let’s look at the code for this solution below:

from heapq import heappush, heappop

def top_k_frequent(arr, k):
    # Find the frequency of each number
    num_frequency_map = {}
    for num in arr:
        num_frequency_map[num] = num_frequency_map.get(num, 0) + 1
    top_k_elements = []
    # Go through all numbers of the num_frequency_map
    # and push them onto top_k_elements, which will hold the
    # top k frequent numbers. If the heap size exceeds k,
    # we remove the smallest (top) element
    for num, frequency in num_frequency_map.items():
        heappush(top_k_elements, (frequency, num))
        if len(top_k_elements) > k:
            heappop(top_k_elements)
    # Create a list of the top k numbers
    top_numbers = []
    while top_k_elements:
        top_numbers.append(heappop(top_k_elements)[1])
    return top_numbers

# Driver code
def main():
    arr = [[1, 3, 5, 12, 11, 12, 11, 12, 5], [1, 3, 5, 14, 18, 14, 5],
           [2, 3, 4, 5, 6, 7, 7], [9, 8, 7, 6, 6, 5, 4, 3, 2, 1],
           [2, 4, 3, 2, 3, 4, 5, 4, 4, 4], [1, 1, 1, 1, 1, 1], [2, 3]]
    k = [3, 2, 1, 1, 3, 1, 2]
    for i in range(len(k)):
        print(i + 1, ". \t Input: (", arr[i], ", ", k[i], ")", sep="")
        print("\t Top", k[i], "frequent elements: ",
              top_k_frequent(arr[i], k[i]))
        print("-" * 100)

if __name__ == '__main__':
    main()

With our understanding of Top K Elements established, let's discuss the next coding pattern.

Pattern 4: Tree Depth-First Search#

Traditionally, when traversing a tree, one might start at the root node and visit each node, moving down one level at a time. This approach, although straightforward, may not always be efficient for certain types of tree-related problems. Naive solutions to some problems might require traversing the same nodes repeatedly, leading to inefficiencies in the traversal process.

The Depth-First Search (DFS) pattern operates by recursively traversing from the root node down to the deepest level of a tree, exploring as far as possible along each branch before backtracking. It follows a straightforward principle: explore one branch fully before moving on to the next. This recursive exploration continues until all nodes have been visited or until a specific condition is met. This technique is useful for tasks like finding the height or depth of a tree, determining the lowest common ancestor of two nodes, or generating different traversal orders such as preorder, inorder, and postorder.

There is another closely related technique called Breadth-First Search (BFS), which traverses nodes level by level, exploring all nodes at the current level before moving on to the next level. This approach prioritizes breadth over depth, making it particularly useful for problems that involve finding the shortest path between nodes, locating neighbors at a specific distance, or exploring all possible paths of a fixed length in a tree structure. In contrast, Depth-First Search (DFS) explores one branch as deeply as possible before backtracking, prioritizing depth over breadth.
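
As a minimal illustration of the DFS idea, here's a preorder traversal sketch on a generic binary tree node (a hypothetical node with data, left, and right attributes, matching the TreeNode used later in this post, is assumed):

def dfs(node, visit):
    # Explore one branch fully before backtracking (preorder here)
    if node is None:
        return
    visit(node.data)        # process the current node
    dfs(node.left, visit)   # then its entire left subtree
    dfs(node.right, visit)  # and finally its entire right subtree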

Let’s take a closer look at how the following examples demonstrate the usefulness of using Tree Depth-First Search to solve these problems efficiently:

Lowest Common Ancestor in a Binary Tree#

Given the root node of a binary tree with n nodes, your task is to find the lowest common ancestor of two of its nodes, p and q.

Note: The lowest common ancestor of two nodes, p and q, is defined as the lowest node in the binary tree that has both p and q as descendants.

A node can also be a descendant of itself. For example, if q is a descendant of p, and we know that p is a descendant of itself, then p will be the lowest common ancestor of p and q.

Solution#

We will use the depth-first search to find the lowest common ancestor of p and q in the binary tree. The algorithm to find the lowest common ancestor of p and q is as follows:

  1. First, we initialize three tracking variables, mid, left, and right, to track whether p or q has been found.

  2. Then, we traverse the binary tree recursively using depth-first search starting from the root node.

  3. If we find p or q during our traversal of the binary tree, we set the mid variable to TRUE for that call.

  4. The left tracking variable stores the result of the left subtree of the current node, and the right tracking variable stores the result of the right subtree of the current node. So, the results from the recursive calls are stored in their respective tracking variables.

  5. Finally, during the traversal of the binary tree, if any two of the tracking variables, mid, left, or right, are TRUE, we set the current node as our answer node because this node will be the lowest common ancestor of p and q.

We need to understand the purpose of each of the tracking variables to answer the question of how a node becomes the lowest common ancestor if any two of the tracking variables are TRUE. If the left and right variables are TRUE for any node, it means that both nodes are descendants of the current node, and therefore, the current node is the lowest common ancestor of the two nodes. However, if mid and either one of the left or right variables are TRUE, then either p or q is the current node itself, and the other is the descendant of the current node. Since a node is an ancestor of itself, the lowest common ancestor of the input nodes is the current node.

Let’s look at the code for this solution below:

main.py
TreeNode.py
BinaryTree.py
from BinaryTree import *

class Solution:
    def __init__(self):
        self.lca = None

    def lowest_common_ancestor(self, root, p, q):
        self.lowest_common_ancestor_rec(root, p, q)
        return self.lca

    # Helper function to find the lowest common ancestor recursively
    def lowest_common_ancestor_rec(self, current_node, p, q):
        # If current_node does not exist
        if not current_node:
            return False
        # Initialize tracking variables
        left, right, mid = False, False, False
        # Check if either of the input nodes is the current_node
        if p == current_node or q == current_node:
            mid = True
        # Traverse the binary tree using depth-first search
        left = self.lowest_common_ancestor_rec(current_node.left, p, q)
        # If the lowest common ancestor has not been found, only then traverse the right subtree
        if not self.lca:
            right = self.lowest_common_ancestor_rec(current_node.right, p, q)
        # If any two of the tracking variables are true, set current_node as the answer node
        if mid + left + right >= 2:
            self.lca = current_node
        # Return true if any of the tracking variables is true
        return mid or left or right

# Driver code
def main():
    input_trees = [[TreeNode(100), TreeNode(50), TreeNode(200), TreeNode(25), TreeNode(75), TreeNode(350)],
                   [TreeNode(100), TreeNode(200), TreeNode(75), TreeNode(50), TreeNode(25), TreeNode(350)],
                   [TreeNode(350), TreeNode(100), TreeNode(75), TreeNode(50), TreeNode(200), TreeNode(25)],
                   [TreeNode(100), TreeNode(50), TreeNode(200), TreeNode(25), TreeNode(75), TreeNode(350)],
                   [TreeNode(25), TreeNode(50), TreeNode(75), TreeNode(100), TreeNode(200), TreeNode(350)]]
    input_nodes = [
        [25, 75],
        [50, 350],
        [100, 200],
        [50, 25],
        [350, 200]
    ]
    for i in range(len(input_trees)):
        solution = Solution()
        tree = BinaryTree(input_trees[i])
        print(i + 1, ".\tBinary tree:", sep="")
        display_tree(tree.root)
        print("\tp = ", input_nodes[i][0])
        print("\tq = ", input_nodes[i][1])
        lca = solution.lowest_common_ancestor(tree.root, tree.find(input_nodes[i][0]), tree.find(input_nodes[i][1]))
        print("\n\tLowest common ancestor: ", lca.data, sep="")
        print("-" * 100)

if __name__ == "__main__":
    main()

Now, let's look at another problem that can be solved using the Tree Depth-First Search pattern.

Binary Tree Right Side View#

Given the root of a binary tree with n nodes, return its right-side view in the form of a list.

A right-side view of a binary tree is the data of the nodes that are visible when the tree is viewed from the right side.

Solution#

To return the list of the right-side view of the tree, we will apply a recursive depth-first search (DFS) approach. Our main function will first check if the root is NULL, in which case it returns an empty list. If the root is not NULL, we will initialize an empty list, rside, to store the data of the tree’s rightmost nodes. Since we need only one right-side element at each level, the length of the rside list tells us whether the current level’s rightmost node has already been recorded.

The recursive DFS() function takes three arguments, node, level, and rside, and checks whether the length of rside is equal to the current tree level. If this is TRUE, it adds the node's value to the list.

Next, we’ll visit the node's children, with the right child visited first. If a child is not NULL, we’ll recursively call the DFS() function on that child, incrementing the tree level by 1. Because the right child is visited first, the first node reached at each level is its rightmost one, and it is this node's value that gets appended to the rside list, increasing its length.

Finally, after completing the depth-first search, we will return the rside list, containing the right-side view of the tree.

Now, let’s look at the code for this solution below:

main.py
TreeNode.py
BinaryTree.py
from TreeNode import *
from BinaryTree import *

def right_side_view(root):
    # If the root is None, return an empty list
    if root is None:
        return []
    # Initialize a list to store the right side nodes of the binary tree
    rside = []
    # Start depth-first search on the root node at level 0
    dfs(root, 0, rside)
    return rside

# Apply depth-first search
def dfs(node, level, rside):
    # Check if the level is equal to the length of rside
    if level == len(rside):
        rside.append(node.data)
    # Iterate through the child nodes, first the right and then the left child
    for child in [node.right, node.left]:
        if child:  # Recursively call dfs on the child node
            dfs(child, level + 1, rside)

# Driver code
def main():
    input = [
        [TreeNode(1), TreeNode(2), TreeNode(3), None, None, TreeNode(4), TreeNode(5)],
        [TreeNode(1), TreeNode(2), None, TreeNode(3), None, TreeNode(4)],
        [TreeNode(1), TreeNode(2), TreeNode(3), TreeNode(4), TreeNode(5), TreeNode(6), TreeNode(7), TreeNode(8)],
        [TreeNode(1), TreeNode(2), TreeNode(3), TreeNode(4), TreeNode(5), TreeNode(6), None, TreeNode(8)],
        [TreeNode(1), TreeNode(2), None, TreeNode(3), TreeNode(4)]
    ]
    # Create the binary trees using the BinaryTree class
    input_trees = []
    for list_of_nodes in input:
        tree = BinaryTree(list_of_nodes)
        input_trees.append(tree)
    for i in range(len(input_trees)):
        root = input_trees[i].root
        print(str(i + 1) + ". Binary tree:")
        display_tree(root)
        print("\n Right side view: " + str(right_side_view(root)))
        print("-" * 100, "\n", sep="")

if __name__ == "__main__":
    main()

Now that we've covered the Tree Depth-First Search, let's move on to another frequently asked coding pattern.

Pattern 5: Merge Intervals#

For problems involving meeting times, or intervals of some nature, the Merge Intervals pattern is a powerful coding technique. This technique is particularly useful when we need to deal with a set of intervals and perform operations such as merging overlapping intervals or determining their intersections. 

In this technique, we typically start by sorting the given intervals based on their start or end times, which helps efficiently identify the overlapping intervals. Once we have this interval information, we can swiftly perform the tasks based on the problem’s requirements. The Merge Intervals pattern has many applications in multiple scenarios, including scheduling algorithms, resource allocation problems, and calendar management systems. From analyzing time-based data to consolidating meeting schedules, this coding technique offers an elegant solution for handling interval-related operations effectively. 
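
As a minimal sketch of the pattern's most common form (merging overlapping intervals after sorting by start time, which is a different task than the intersection problem below):

def merge_intervals(intervals):
    if not intervals:
        return []
    # Sort by start time so overlapping intervals become adjacent
    intervals.sort(key=lambda interval: interval[0])
    merged = [intervals[0]]
    for start, end in intervals[1:]:
        if start <= merged[-1][1]:
            # Overlap: extend the last merged interval
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return merged

print(merge_intervals([[1, 4], [3, 6], [8, 10]]))  # [[1, 6], [8, 10]]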

Let’s see how the following example illustrates the application of the Merge Intervals pattern to efficiently solve the given coding problem:

Interval List Intersections#

For two lists of closed intervals given as input, interval_list_a and interval_list_b, where each interval has its own start and end time, write a function that returns the intersection of the two interval lists.

For example, the intersection of [3, 8] and [5, 10] is [5, 8].

Solution#

This problem shares two features with the Merge Intervals pattern: the lists of intervals are sorted, and the result requires comparing intervals to check for overlap. Taking advantage of the fact that both lists are sorted, we can safely compare pairs of intervals (one from list A and one from list B), knowing that after every comparison, we need only move forward in the lists without having to re-check either list from the start.

The algorithm to solve this problem is as follows:

  • We’ll use two indices, i and j, to iterate through the intervals in both lists, that is, interval_list_a and interval_list_b respectively.

  • To check whether there’s any intersecting point among the given intervals:

    • Take the starting times of the first pair of intervals from both lists and check which occurs later, storing it in a variable, say start.

    • Also, compare the ending times of the same pair of intervals from both lists and store the minimum end time in another variable, say end.

  • Next, we will check if interval_list_a[i] and interval_list_b[j] overlap by comparing the start and end times.

    • If the times overlap, then the intersecting time interval will be added to the resultant list, that is, intersections.

    • After the comparison, we need to move forward in one of the two input lists. The decision is taken based on which of the two intervals being compared ends earlier. If the interval that ends first is in interval_list_a, we move forward in that list, else, we move forward in interval_list_b.

Let’s look at the code for this solution below:

def intervals_intersection(interval_list_a, interval_list_b):
    # To store all intersecting intervals
    intersections = []
    # Index "i" iterates over the length of list a and index "j"
    # iterates over the length of list b
    i = j = 0
    # The while loop breaks whenever either of the lists ends
    while i < len(interval_list_a) and j < len(interval_list_b):
        # Check if interval_list_a[i] intersects interval_list_b[j]
        # 1. start - the potential start point of the intersection
        # 2. end - the potential end point of the intersection
        start = max(interval_list_a[i][0], interval_list_b[j][0])
        end = min(interval_list_a[i][1], interval_list_b[j][1])
        # If this is an actual intersection
        if start <= end:
            intersections.append([start, end])  # add it to the list
        # Move forward in the list whose interval ends earlier
        if interval_list_a[i][1] < interval_list_b[j][1]:
            i += 1
        else:
            j += 1
    return intersections

# Driver code
def main():
    input_interval_list_a = [
        [[1, 2]],
        [[1, 4], [5, 6], [9, 15]],
        [[3, 6], [8, 16], [17, 25]],
        [
            [4, 7],
            [9, 16],
            [17, 28],
            [39, 50],
            [55, 66],
            [70, 89],
        ],
        [
            [1, 3],
            [5, 6],
            [7, 8],
            [12, 15],
        ],
    ]
    input_interval_list_b = [
        [[1, 2]],
        [[2, 4], [5, 7], [9, 15]],
        [[2, 3], [10, 15], [18, 23]],
        [
            [3, 6],
            [7, 8],
            [9, 10],
            [14, 19],
            [23, 33],
            [35, 40],
            [45, 59],
            [60, 64],
            [68, 76],
        ],
        [[2, 4], [7, 10]],
    ]
    for i in range(len(input_interval_list_a)):
        print(i + 1, '.\t Interval List A: ', input_interval_list_a[i], sep="")
        print('\t Interval List B: ', input_interval_list_b[i], sep="")
        print("\t Intersecting intervals in 'A' and 'B' are: ",
              intervals_intersection(input_interval_list_a[i], input_interval_list_b[i]), sep="")
        print('-' * 100)

if __name__ == "__main__":
    main()

After understanding how to use the Merge Intervals pattern effectively, it's time to explore the next coding pattern.

Pattern 6: Custom Data Structures#

Custom data structures are essentially modified versions of existing data structures tailored to address specific needs. We often need to go beyond standard data structures like arrays and hash tables to tackle unique challenges more effectively. For instance, a web crawler that processes numerous pages and URLs might use a specialized "URL queue" to manage these URLs efficiently, ensuring they are unique and prioritized based on relevance.

Custom data structures involve creating custom classes that encapsulate the necessary functionality and properties needed to efficiently manage and manipulate the data. By designing data structures optimized for the problem domain, we can improve the performance and readability of our code while simplifying complex operations.

To determine if a problem can benefit from the Custom Data Structures pattern, consider scenarios where standard data structures like arrays, lists, or maps are not sufficient or where specialized operations need to be performed frequently. Common problems suitable for this pattern include implementing priority queues, disjoint-set data structures, or specialized graph representations.

Let’s see how the following example illustrates the application of the Custom Data Structures pattern to efficiently solve the given coding problem:

Implement LRU Cache#

Implement an LRU cache class with the following functions:

  • Init(capacity): Initializes an LRU cache with the capacity size.

  • Set(key, value): Adds a new key-value pair or updates an existing key with a new value.

  • Get(key): Returns the value of the key, or −1 if the key does not exist.

If the number of keys has reached the cache capacity, evict the least recently used key and add the new key.

As caches use relatively expensive, faster memory, they are not designed to store large data sets. Whenever the cache becomes full, we must evict some data from it. There are several caching algorithms to implement a cache eviction policy. LRU is a very simple and commonly used algorithm. The core concept of the LRU algorithm is to evict the oldest data from the cache to accommodate more data.

Solution#

This problem can be solved efficiently if we combine two data structures and use their respective functionalities, as well as the way they interact with each other, to our advantage. A doubly linked list allows us to arrange nodes by the time they were last accessed. However, accessing a value in a linked list is O(n). On the other hand, a hash table has O(1) lookup time but has no concept of recency. We can combine these two data structures to get the best of both.

Here is the algorithm for the LRU cache:

Set:

  • If the element exists in the hash map, then update its value and move the corresponding linked list node to the head of the linked list.

  • Otherwise, if the cache is already full, remove the tail element from the doubly linked list. Then delete its hash map entry, add the new element at the head of the linked list, and add the new key-value pair to the hash map.

Get:

  • If the element exists in the hash map, move the corresponding linked list node to the head of the linked list and return the element value.

  • Otherwise, return -1.

Note that the doubly linked list keeps track of the most recently accessed elements. The element at the head of the doubly linked list is the most recently accessed element. All newly inserted elements (in Set) go to the head of the list. Similarly, any element accessed (in the Get operation) goes to the head of the list.
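
Before walking through the linked-list version below, it's worth noting that Python's collections.OrderedDict can express the same algorithm very compactly, since it maintains insertion order and supports O(1) reordering; a minimal sketch of that alternative:

from collections import OrderedDict

class LRUCacheAlt:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()  # most recently used keys at the end

    def get(self, key):
        if key not in self.cache:
            return -1
        self.cache.move_to_end(key)  # mark as most recently used
        return self.cache[key]

    def set(self, key, value):
        if key in self.cache:
            self.cache.move_to_end(key)
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the least recently used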

Let’s look at the code for this solution below:

main.py
linked_list_node.py
linked_list.py
from linked_list import LinkedList

# We will use a linked list of integer pairs,
# where the first integer will be the key
# and the second integer will be the value
class LRUCache:
    # Initializes an LRU cache with the given capacity
    def __init__(self, capacity):
        self.cache_capacity = capacity
        self.cache_map = {}
        self.cache_list = LinkedList()

    # Returns the value of the key, or -1 if the key does not exist
    def get(self, key):
        # If the key doesn't exist, we return -1
        found_itr = None
        if key in self.cache_map:
            found_itr = self.cache_map[key]
        else:
            return -1
        list_iterator = found_itr
        # If the key exists, we must move it to the front of the list
        self.cache_list.move_to_head(found_itr)
        return list_iterator.pair[1]

    # Adds a new key-value pair or updates an existing key with a new value
    def set(self, key, value):
        # Check if the key exists in the cache hash map
        # If the key exists
        if key in self.cache_map:
            found_iter = self.cache_map[key]
            list_iterator = found_iter
            # Move the node corresponding to the key to the front of the list
            self.cache_list.move_to_head(found_iter)
            # We then update the value of the node
            list_iterator.pair[1] = value
            return
        # If the key does not exist and the cache is full
        if len(self.cache_map) == self.cache_capacity:
            # We will need to evict the LRU entry
            # Get the key of the LRU node
            # The first element of each cache entry is the key
            key_tmp = self.cache_list.get_tail().pair[0]
            # This is why we needed to store a <key, value> pair
            # in the cache_list. We would not have been able to get
            # the key if we had just stored the values
            # Remove the last node in the list
            self.cache_list.remove_tail()
            # Remove the entry from the cache
            del self.cache_map[key_tmp]
        # The insert_at_head function inserts a new element at the front
        # of the list in constant time
        self.cache_list.insert_at_head([key, value])
        # We set the value of the key as the list beginning
        # since we added the new element at the head of the list
        self.cache_map[key] = self.cache_list.get_head()

    def print(self):
        print("Cache current size: ", self.cache_list.size,
              ", ", end="")
        print("Cache contents: {", end="")
        node = self.cache_list.get_head()
        while node:
            print("{", str(node.pair[0]), ",", str(node.pair[1]),
                  "}", end="")
            node = node.next
            if node:
                print(", ", end="")
        print("}")
        print("-" * 100, "\n")

def main():
    # Creating a cache of size 2
    cache_capacity = 2
    cache = LRUCache(cache_capacity)
    print("Initial state of cache")
    print("Cache capacity: " + str(cache_capacity))
    cache.print()
    keys = [10, 10, 15, 20, 15, 25, 5]
    values = ["20", "get", "25", "40", "get", "85", "5"]
    for i in range(len(keys)):
        if values[i] == "get":
            print("Getting by Key: ", keys[i])
            print("Cached value returned: ", cache.get(keys[i]))
        else:
            print("Setting cache: Key: ", keys[i], ", Value: ", values[i])
            cache.set(keys[i], int(values[i]))
        cache.print()

if __name__ == '__main__':
    main()

Now that we've explored the design and implementation of Custom Data Structures, let's explore the last, but certainly not the least, coding pattern from the list of frequently asked patterns by Meta.

Pattern 7: K-Way Merge#

The K-Way Merge pattern is a technique for merging multiple sorted data structures, like arrays and linked lists, into one. This technique extends the classic merge sort by not just merging two lists but several at once. We repeatedly pick the smallest (or largest for descending order) elements from each list and keep adding them to a new list until all are merged. We can do this efficiently using a min-heap, where we add the first element of each list to the heap. We keep replacing the top of the heap with the next element from the same list until all elements are merged into the new list. Another approach is grouping lists into pairs and merging them through two-way merges. We do this by merging each pair of lists and repeating until we end up with a single fully sorted merged list. Both methods help us merge multiple lists, ensuring our data stays sorted. 
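
To see the min-heap approach in isolation, here's a minimal sketch that merges plain sorted Python lists (the linked-list solution below uses the pairwise approach instead); Python's heapq.merge does the same job in one call:

import heapq

def k_way_merge(lists):
    result = []
    # Seed the heap with (value, list index, element index) for each list
    heap = [(lst[0], i, 0) for i, lst in enumerate(lists) if lst]
    heapq.heapify(heap)
    while heap:
        value, i, j = heapq.heappop(heap)
        result.append(value)
        # Replace the popped value with the next element from the same list
        if j + 1 < len(lists[i]):
            heapq.heappush(heap, (lists[i][j + 1], i, j + 1))
    return result

print(k_way_merge([[1, 2, 4], [21, 23, 42], [25, 56]]))
# [1, 2, 4, 21, 23, 25, 42, 56]
print(list(heapq.merge([1, 2, 4], [21, 23, 42])))  # stdlib equivalent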

Let’s see how the following example illustrates the application of the K-Way Merge pattern to efficiently solve the given problem:

Merge K Sorted Lists#

Given an array of k sorted linked lists, merge them into a single sorted linked list and return the head of this linked list.

Solution#

In this solution, we iterate through the given list of sorted lists, progressively merging pairs of lists until only one merged list remains. We achieve this using a divide-and-conquer strategy that iteratively merges adjacent pairs of lists. This way, after the first pass, we're left with k/2 lists, then k/4, then k/8, and so on.

To merge each adjacent pair of lists, we use a helper function, merge_2_lists(head1, head2). It uses a dummy node to initialize the merged list and a prev pointer to track the last node added. Iterating through both lists simultaneously, it compares nodes from each list and attaches the smaller one to the merged list. It continues until one list is exhausted, then appends the remaining nodes from the non-empty list. Finally, it returns the head of the merged list. This approach ensures the merged list remains sorted throughout the merging process.

We keep merging the adjacent pairs of lists until all lists are merged into one. Finally, we return the head of the final merged list.

Let's look at the code for this solution below:

main.py
linked_list_node.py
linked_list.py
print_list.py
from __future__ import print_function
from linked_list import LinkedList
from linked_list_node import LinkedListNode
from print_list import print_list_with_forward_arrow

# Helper function
def merge_2_lists(head1, head2):
    dummy = LinkedListNode(-1)
    prev = dummy  # set the prev pointer to the dummy node
    # Traverse over the lists until both or one of them becomes null
    while head1 and head2:
        if head1.data <= head2.data:
            # If the l1 value is <= the l2 value, add the l1 node to the list
            prev.next = head1
            head1 = head1.next
        else:
            # If the l1 value is greater than the l2 value, add the l2 node to the list
            prev.next = head2
            head2 = head2.next
        prev = prev.next
    if head1 is not None:
        prev.next = head1
    else:
        prev.next = head2
    return dummy.next

# Main function
def merge_k_lists(lists):
    if len(lists) > 0:
        step = 1
        while step < len(lists):
            # Traverse over the lists in pairs to merge them
            for i in range(0, len(lists) - step, step * 2):
                lists[i].head = merge_2_lists(lists[i].head, lists[i + step].head)
            step *= 2
        return lists[0].head
    return

# Driver code
def main():
    inputlists = [[[21, 23, 42], [1, 2, 4]],
                  [[11, 41, 51], [21, 23, 42]],
                  [[2], [1, 2, 4], [25, 56, 66, 72]],
                  [[11, 41, 51], [2], [2], [2], [1, 2, 4]],
                  [[10, 30], [15, 25], [1, 7], [3, 9], [100, 300], [115, 125], [10, 70], [30, 90]]
                  ]
    inp_num = 1
    for i in inputlists:
        print(inp_num, ".\tInput lists:", sep="")
        ll_lists = []
        for x in i:
            a = LinkedList()
            a.create_linked_list(x)
            ll_lists.append(a)
            print("\t", end="")
            print_list_with_forward_arrow(a.head)
            print()
        inp_num += 1
        print("\tMerged list: \n\t", end="")
        print_list_with_forward_arrow(merge_k_lists(ll_lists))
        print("\n", "-" * 100, sep="")

if __name__ == "__main__":
    main()

That wraps up our exploration of the coding patterns behind Meta's most frequently asked coding questions.

To ace your Meta interview, mastering the patterns we have just discussed is important. Understanding the underlying patterns behind the solutions you devise will not only help you tackle similar problems in the future but also demonstrate your depth of understanding to interviewers. We have explored some of the most common coding patterns with the help of interview questions frequently asked by Meta, but it’s just a start. Remember, practice makes perfect, so dedicate time to solving problems regularly and seek feedback to improve further. For even better preparation, Educative’s interview prep courses cover a wide range of coding patterns, as well as dynamic programming patterns, in various programming languages.

Moreover, if you are looking for a customized learning plan, consider Educative’s dedicated interview preparation paths.

With determination, preparation, and a solid grasp of coding patterns, you’ll be well-equipped to tackle any coding challenge that comes your way during the Meta interview process. Best of luck!


Written By:
Minahil Yaser