
Frequently asked unweighted graph algorithms

Hassan Shahzad
Nov 25, 2024
13 min read

Graphs are fundamental structures in computer science and are especially relevant in interviews focused on graph algorithms as they help us solve real-world problems like finding the shortest route on a map, analyzing social networks, and scheduling tasks efficiently. At their core, graphs comprise vertices (points) and edges (lines connecting these points). Imagine a graph as a network of interconnected nodes—like cities linked by roads or friends in a social network.

This blog will discuss some frequently asked unweighted graph algorithms in technical interviews and their solutions. Understanding these problems can help you tackle real-world issues more effectively.

Types of graphs#

  1. Directed graphs: The edges have a direction. You can only travel from one vertex to another along the direction indicated by the arrow on the edge.

An example of a directed graph
  2. Undirected graphs: The edges in these graphs do not have a direction. You can travel both ways between vertices.

An example of an undirected graph
  3. Weighted graphs: Each edge has a weight or cost. This is useful for problems where you must find the least expensive or shortest path.

Weighted undirected graph
  4. Unweighted graphs: The edges do not have weights. All connections are considered equal.

Unweighted directed graph

How to represent graphs#

  1. Adjacency matrix: This is a 2D array in which the cell at row i and column j represents the edge between vertex i and vertex j. If there is an edge, the cell holds the value 1; otherwise, it holds 0. Checking whether an edge exists is also faster with an adjacency matrix than with the other representation because any cell can be accessed directly. Refer to the following illustration, where [A, D] = 1 represents an edge between the vertices A and D. (A short code sketch of both representations follows this list.)

Adjacency matrix
  2. Adjacency list: This is a list where each vertex has a sublist of all the vertices it is connected to. This method is more memory-efficient for graphs with fewer edges because an adjacency list only stores the neighbors of each vertex, unlike an adjacency matrix, where every pair of vertices is represented whether there is an edge between them or not. Refer to the following illustration, where the neighbors of each vertex are connected in the form of a list.

Adjacency list
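
To make the two representations concrete, here is a minimal Python sketch. It uses a small, made-up undirected graph (not one of the illustrations above) and builds both an adjacency matrix and an adjacency list from the same edge list:

# A small undirected graph (hypothetical example)
vertices = ['A', 'B', 'C', 'D']
edges = [('A', 'B'), ('A', 'D'), ('B', 'C')]

# Adjacency matrix: matrix[i][j] is 1 if there is an edge between vertex i and vertex j
index = {v: i for i, v in enumerate(vertices)}
matrix = [[0] * len(vertices) for _ in vertices]
for u, v in edges:
    matrix[index[u]][index[v]] = 1
    matrix[index[v]][index[u]] = 1  # Mirror the entry because the graph is undirected

# Adjacency list: each vertex maps to the list of its neighbors
adj_list = {v: [] for v in vertices}
for u, v in edges:
    adj_list[u].append(v)
    adj_list[v].append(u)

print(matrix)    # [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 0], [1, 0, 0, 0]]
print(adj_list)  # {'A': ['B', 'D'], 'B': ['A', 'C'], 'C': ['B'], 'D': ['A']}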

For those interested in a deeper dive, resources like Educative.io offer detailed courses covering unweighted graph algorithms and interview techniques, suitable for all levels:

Recommended resource

Algorithms for Coding Interviews in Python

With algorithms being one of the most common themes in coding interviews, having a firm grip on them can be the difference between being hired and not. After completing this comprehensive course, you'll have an in-depth understanding of different algorithm types in Python and be equipped with a simple process for approaching complexity analysis. As you progress, you'll be exposed to the most important algorithms you're likely to encounter in an interview. You'll work your way through over 50 interactive coding challenges and review detailed solutions for each problem. You'll walk away with the ability to build up to the optimal solution for addressing those tough coding interview questions head-on.

15hrs
Intermediate
42 Challenges
17 Quizzes

Graph traversals#

Graph traversals are important methods for investigating the vertices and edges of a graph. Examples of these traversals include breadth-first search (BFS) and depth-first search (DFS). These techniques assist in exploring every vertex in a graph, which is necessary for several applications, such as determining the shortest path, detecting cycles, or checking for connectivity between vertices. Let's look at the algorithm and a step-by-step solution for each:

Breadth-first search (BFS)#

Breadth-first search (BFS) is a key technique for examining the vertices and edges of a graph. It is especially helpful for finding shortest paths in unweighted graphs, and it works on both directed and undirected graphs. BFS traverses the graph layer by layer: starting from a source vertex, it visits all of that vertex's neighbors before moving on to the next level of vertices.

Let’s look at the step-by-step implementation of the algorithm:

  • Start with the source vertex. To prevent processing the same vertex more than once, use a set (or list) to mark vertices as visited and a queue to keep track of the vertices that still need to be examined.

  • Mark the source vertex as visited and add it to the queue.

  • Repeat the following steps while the queue is not empty:

    • Dequeue a vertex from the queue and append it to the output.

    • For each neighbor of the dequeued vertex:

      • If the neighbor has not been visited yet, mark it as visited and add it to the queue.

    • Continue these steps until no more vertices are left in the queue.

  • The BFS traversal is completed, and the list of vertices is returned in the order they were visited.

Let’s look at the following illustration to get a better understanding of the solution:

Initialization of an empty queue, visited set, and an output list to store the BFS.

Let’s look at the code for the algorithm we just discussed:

from collections import deque

def bfs(graph, source):
    # Set to track visited nodes
    visited = set()
    # Queue for BFS, initialized with the source node
    queue = deque([source])
    visited.add(source)
    # List to store the order of BFS traversal
    bfs_order = []
    # Continue until the queue is empty
    while queue:
        # Pop the leftmost element (FIFO) from the queue
        vertex = queue.popleft()
        # Append the vertex to the BFS result
        bfs_order.append(vertex)
        # Explore all the neighbors of the current vertex
        for neighbor in graph[vertex]:
            if neighbor not in visited:
                # Mark neighbor as visited and add it to the queue
                visited.add(neighbor)
                queue.append(neighbor)
    # Return the list of nodes in BFS order
    return bfs_order

# Driver code
def main():
    graph = {
        'A': ['D', 'C'],
        'B': ['E', 'F'],
        'C': ['A', 'F'],
        'D': ['A', 'F'],
        'E': ['B'],
        'F': ['B', 'D', 'C']
    }
    print("BFS:", bfs(graph, 'A'))

if __name__ == "__main__":
    main()
Breadth-first search (BFS)
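
Because BFS explores the graph level by level, the same traversal can be adapted to compute shortest path lengths (measured in edges) in an unweighted graph. The sketch below is one possible variant, not part of the original example; it records each vertex's distance from the source at the moment the vertex is first enqueued:

from collections import deque

def bfs_shortest_distances(graph, source):
    # Distance of each vertex (in edges) from the source; also serves as the visited set
    distance = {source: 0}
    queue = deque([source])
    while queue:
        vertex = queue.popleft()
        for neighbor in graph[vertex]:
            if neighbor not in distance:  # Unvisited neighbor
                distance[neighbor] = distance[vertex] + 1
                queue.append(neighbor)
    return distance

graph = {
    'A': ['D', 'C'],
    'B': ['E', 'F'],
    'C': ['A', 'F'],
    'D': ['A', 'F'],
    'E': ['B'],
    'F': ['B', 'D', 'C']
}
print(bfs_shortest_distances(graph, 'A'))
# {'A': 0, 'D': 1, 'C': 1, 'F': 2, 'B': 3, 'E': 4}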

Depth-first search (DFS)#

One of the most important methods for examining vertices and edges in a graph is depth-first search, or DFS. In contrast to BFS, which explores a graph level by level, DFS follows a single path as far as it can go before it starts backtracking. It's very helpful for detecting cycles, finding connected components, and solving problems that require exploring all possible paths. DFS can be used on both directed and undirected graphs, and it works by visiting a vertex and then recursively visiting its unvisited neighbors before returning to earlier vertices.

Let’s look at the step-by-step implementation of the algorithm:

  • Create a stack, push the starting vertex onto it, and mark it as visited.

  • Repeat the following steps while the stack is not empty:

    • Pop a vertex from the stack; if it has not been visited yet, mark it as visited and add it to the output.

    • Push each of its unvisited adjacent vertices onto the stack.

  • The DFS traversal is completed, and the list of vertices is returned in the order they were visited.

Let’s look at the following illustration to get a better understanding of the solution:

Initialize an empty stack and the visited set

Let’s look at the code for the algorithm we just discussed:

# Depth-First Search (DFS) using an iterative approach with a stack
def dfs(graph, source):
    visited = set()   # Set to keep track of visited nodes
    stack = [source]  # Stack for DFS, initialized with the source node
    dfs_result = []   # List to store the DFS traversal order
    # Continue until all nodes connected to the source are explored
    while stack:
        vertex = stack.pop()           # Pop the last node from the stack
        if vertex not in visited:      # If the node hasn't been visited yet
            dfs_result.append(vertex)  # Add the node to the DFS result
            visited.add(vertex)        # Mark the node as visited
            # Push all unvisited neighbors of the current node onto the stack
            for neighbor in graph[vertex]:
                if neighbor not in visited:
                    stack.append(neighbor)
    return dfs_result  # Return the order of nodes visited in DFS

# Driver code
def main():
    graph = {
        'A': ['D', 'C'],
        'B': ['E', 'F'],
        'C': ['A', 'F'],
        'D': ['A', 'F'],
        'E': ['B'],
        'F': ['B', 'D', 'C']
    }
    print("DFS:", dfs(graph, 'A'))

if __name__ == "__main__":
    main()
Depth-first search (DFS)
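
The version above uses an explicit stack; DFS is just as commonly written recursively, letting the call stack do the bookkeeping, which matches the "recursively visiting its unvisited neighbors" description. Here is an equivalent recursive sketch over the same example graph:

def dfs_recursive(graph, vertex, visited=None, order=None):
    # Initialize the visited set and traversal order on the first call
    if visited is None:
        visited, order = set(), []
    visited.add(vertex)
    order.append(vertex)
    # Recursively visit every unvisited neighbor
    for neighbor in graph[vertex]:
        if neighbor not in visited:
            dfs_recursive(graph, neighbor, visited, order)
    return order

graph = {
    'A': ['D', 'C'],
    'B': ['E', 'F'],
    'C': ['A', 'F'],
    'D': ['A', 'F'],
    'E': ['B'],
    'F': ['B', 'D', 'C']
}
print("Recursive DFS:", dfs_recursive(graph, 'A'))
# Recursive DFS: ['A', 'D', 'F', 'B', 'E', 'C']

Note that the visiting order can differ from the stack-based version: the recursion explores neighbors in list order, while the explicit stack pops them in reverse order. Both are valid DFS orders.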

Strongly connected components (SCC)#

A strongly connected component (SCC) in a directed graph is a maximal subgraph (one that cannot be enlarged by adding more vertices or edges without breaking the property) in which every vertex can be reached from every other vertex. Finding SCCs is important in many graph-related tasks, especially when analyzing the structure and connectivity of directed graphs. Let's look at some of the algorithms for finding strongly connected components:

Kosaraju’s algorithm#

Kosaraju's algorithm is a technique for locating every strongly connected component (SCC) in a directed graph. It detects these components by running DFS twice: once on the original graph and once on the transposed graph. The first DFS determines the order in which vertices should be processed; the second DFS, performed on the reversed graph in that order, reveals the SCCs.

Note: After running the first DFS, the graph is transposed, i.e., every edge is reversed to point in the opposite direction.

Let’s look at the step-by-step implementation of the algorithm:

  • Create a stack to store the vertices in the order they are finished in the first DFS traversal, and maintain a visited array to track which vertices have already been visited during the search.

  • Perform a DFS on the original graph.

  • Reverse all edges of the graph to create the transpose graph.

  • Initialize a visited array to keep track of visited vertices and process vertices in the reverse order of their completion in the DFS traversal of the original graph.

  • If an unvisited vertex is present, perform a DFS and mark all reachable vertices (those that can be visited by following the graph's edges) as part of the same SCC.

    • Pop a vertex from the stack.

    • If the vertex is unvisited, start a DFS from it in the transpose graph.

    • All vertices reached during this DFS form one strongly connected component.

  • Each DFS in the second pass identifies a strongly connected component.

  • Return the SCCs as they are found.

Let’s look at the following illustration to get a better understanding of the solution:

Initialize a graph

Let’s look at the code for the algorithm we just discussed:

def kosaraju(graph):
    stack = []       # Stack to store the order of finishing times of vertices
    visited = set()  # Set to keep track of visited vertices
    scc = []         # List to store the strongly connected components (SCCs)

    # First DFS to fill the stack with the finishing order of vertices
    def dfs(vertex):
        visited.add(vertex)             # Mark the vertex as visited
        for neighbor in graph[vertex]:  # Explore all its neighbors
            if neighbor not in visited:
                dfs(neighbor)           # Recursively visit unvisited neighbors
        stack.append(vertex)            # Push the vertex to the stack after exploring all neighbors

    # Second DFS on the reversed graph to collect SCCs
    def reverse_dfs(vertex, visited, current_scc):
        visited.add(vertex)                     # Mark the vertex as visited
        current_scc.append(vertex)              # Add the vertex to the current SCC
        for neighbor in reverse_graph[vertex]:  # Explore all neighbors in the reversed graph
            if neighbor not in visited:
                reverse_dfs(neighbor, visited, current_scc)  # Recursively visit neighbors

    # Step 1: Perform DFS on the original graph to fill the stack
    for vertex in graph:
        if vertex not in visited:
            dfs(vertex)

    # Step 2: Reverse the graph by reversing the edges
    reverse_graph = {vertex: [] for vertex in graph}  # Create a reversed graph structure
    for vertex in graph:
        for neighbor in graph[vertex]:
            reverse_graph[neighbor].append(vertex)    # Reverse the direction of edges

    # Step 3: Clear the visited set and process vertices in order of decreasing finishing times
    visited.clear()
    while stack:
        vertex = stack.pop()  # Get the vertex with the highest finishing time
        if vertex not in visited:
            current_scc = []  # To store the current strongly connected component
            reverse_dfs(vertex, visited, current_scc)  # Perform DFS on the reversed graph
            scc.append(current_scc)                    # Add the SCC to the result list

    return scc  # Return the list of all SCCs

# Driver code
def main():
    graph = {
        'A': ['B'],
        'B': ['E'],
        'C': ['B'],
        'D': ['B'],
        'E': ['C'],
        'F': ['E']
    }
    print("Strongly connected components:", kosaraju(graph))

if __name__ == "__main__":
    main()
Kosaraju’s algorithm

Tarjan’s algorithm#

Tarjan's algorithm finds the strongly connected components (SCCs) of a directed graph in a single DFS pass. It stores each vertex's discovery time and the lowest discovery time reachable from that vertex (its low-link value) in two arrays or lists, and it uses a stack to track the vertices on the current DFS path.

Let’s look at the step-by-step implementation of the algorithm:

  • Initialize a counter to track the discovery times of the vertices.

  • Create four data structures for the following purposes:

    • An array to store the discovery times of the vertices.

    • An array to store the lowest points reachable from each vertex.

    • An array to keep track of vertices that are currently in the stack.

    • A stack to keep track of the visited vertices.

  • For each vertex, if it has not been visited, call the recursive DFS function.

  • Assign a discovery time to the vertex and initialize its low-link value to that discovery time.

  • Push the vertex onto the stack and mark it as being on the stack.

  • For each adjacent vertex:

    • If the adjacent vertex has not been visited, recursively call the DFS function for the adjacent vertex.

    • After the recursive call returns, update the current vertex's low-link value with the minimum of its own low-link value and the adjacent vertex's low-link value.

    • If the adjacent vertex is already on the stack, update the current vertex's low-link value with the minimum of its own low-link value and the adjacent vertex's discovery time.

  • After visiting all adjacent vertices, if the low-link value of the current vertex is equal to its discovery time, the current vertex is the root of an SCC.

  • Pop vertices from the stack until the current vertex is reached, marking them as part of the same SCC.

  • Repeat the DFS traversal for all vertices to ensure all SCCs are found.

Let’s look at the code for the algorithm we just discussed:

def tarjans(graph):
    # Initialize variables for tracking indices and the recursion stack
    index = 0
    stack = []     # Stack to maintain the current path of visited nodes
    indices = {}   # Dictionary to store the discovery index of each node
    lowlinks = {}  # Dictionary to store the smallest reachable node from each node
    on_stack = {}  # Dictionary to check if a node is in the current stack
    sccs = []      # List to store the strongly connected components (SCCs)

    # Helper function to perform DFS and find SCCs
    def strong_connect(vertex):
        nonlocal index
        indices[vertex] = index   # Set discovery index for the current node
        lowlinks[vertex] = index  # Initially, the lowlink is the same as the index
        index += 1
        stack.append(vertex)      # Push the node onto the stack
        on_stack[vertex] = True   # Mark it as being on the stack

        # Visit all neighbors of the current node
        for neighbor in graph[vertex]:
            if neighbor not in indices:  # If neighbor hasn't been visited, recurse
                strong_connect(neighbor)
                lowlinks[vertex] = min(lowlinks[vertex], lowlinks[neighbor])  # Update lowlink
            elif on_stack[neighbor]:     # If neighbor is on the stack, it's part of the current SCC
                lowlinks[vertex] = min(lowlinks[vertex], indices[neighbor])

        # If the current node is a root of an SCC, pop the stack to form the SCC
        if lowlinks[vertex] == indices[vertex]:
            scc = []
            while True:
                neighbor = stack.pop()      # Pop nodes from the stack
                on_stack[neighbor] = False  # Mark them as no longer in the stack
                scc.append(neighbor)
                if neighbor == vertex:      # Stop when the current root is reached
                    break
            sccs.append(scc)  # Add the newly found SCC to the result

    # Perform DFS for each unvisited node
    for vertex in graph:
        if vertex not in indices:  # Only start a new DFS if the node hasn't been visited
            strong_connect(vertex)

    return sccs  # Return the list of strongly connected components

# Driver code
def main():
    graph = {
        'A': ['B'],
        'B': ['E'],
        'C': ['B'],
        'D': ['B'],
        'E': ['C'],
        'F': ['E']
    }
    print("Strongly connected components:", tarjans(graph))

if __name__ == "__main__":
    main()
Tarjan’s algorithm

Graph coloring#

Graph coloring is the process of assigning colors to a graph's vertices, using as few colors as possible, so that no two adjacent vertices share the same color. This technique plays an important role in resolving conflicts, such as when scheduling tasks or allocating compiler registers. By ensuring that adjacent vertices (representing conflicting tasks or resources) are assigned different colors, graph coloring helps manage resources and time efficiently.

Let’s look at the step-by-step implementation of the algorithm:

  • Begin with a graph where no vertices have been assigned a color.

  • Select any vertex in the graph and assign the first color to it. This is usually done by starting with color 1.

  • For each vertex in the graph until all vertices are colored:

    • Choose the next vertex. Look at all the vertices adjacent to it (connected by edges).

    • Check the colors already assigned to the adjacent vertices. Choose the lowest-numbered color that any adjacent vertices haven’t used and assign this color to the current vertex.

    • If every color used so far is already taken by an adjacent vertex, introduce the next new color.

  • The algorithm ends when all vertices have been assigned a color such that no two adjacent vertices have the same color.

Please note that this strategy provides a valid coloring but may not produce an optimal solution.
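
The steps above describe the greedy strategy, while the code later in this section solves the problem with backtracking, which tries every assignment using a fixed number of colors. As a point of comparison, here is a minimal sketch of the greedy approach just described; it always produces a valid coloring, though not necessarily one with the fewest possible colors:

def greedy_coloring(graph):
    colors = {}  # Maps each vertex to its assigned color (1, 2, 3, ...)
    for vertex in graph:
        # Collect the colors already used by neighbors of this vertex
        used = {colors[neighbor] for neighbor in graph[vertex] if neighbor in colors}
        # Pick the lowest-numbered color not used by any neighbor
        color = 1
        while color in used:
            color += 1
        colors[vertex] = color
    return colors

graph = {
    'A': ['B', 'C', 'D'],
    'B': ['A', 'D', 'F'],
    'C': ['A', 'D'],
    'D': ['A', 'B', 'C'],
    'E': ['F'],
    'F': ['B', 'E']
}
print("Greedy coloring:", greedy_coloring(graph))
# Greedy coloring: {'A': 1, 'B': 2, 'C': 2, 'D': 3, 'E': 1, 'F': 3}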

Initialize the graph and an empty map to store the assigned colors

Let’s look at the code for the algorithm we just discussed:

def is_safe(graph, vertex, color, assigned_colors):
    # Check if any neighbor of the current vertex has the same color
    for neighbor in graph[vertex]:
        if neighbor in assigned_colors and assigned_colors[neighbor] == color:
            return False  # Not safe to color with this color
    return True  # Safe to assign this color

def backtrack(graph, m, assigned_colors, vertex):
    # Base case: All vertices are colored
    if vertex == len(graph):
        return True
    # Try all possible colors (1 to m)
    for color in range(1, m + 1):
        # Check if it's safe to assign the current color
        if is_safe(graph, list(graph.keys())[vertex], color, assigned_colors):
            # Assign the color
            assigned_colors[list(graph.keys())[vertex]] = color
            # Recur to assign colors to the rest of the vertices
            if backtrack(graph, m, assigned_colors, vertex + 1):
                return True
            # Backtrack: Remove the color if it leads to no solution
            assigned_colors.pop(list(graph.keys())[vertex])
    return False  # No valid coloring found

def graph_coloring(graph):
    assigned_colors = {}  # Dictionary to store assigned colors
    m = len(graph)        # Maximum number of colors (equal to the number of vertices)
    # Try to color the graph using backtracking
    if backtrack(graph, m, assigned_colors, 0):
        return assigned_colors  # Return the valid coloring
    else:
        return None  # No valid coloring exists

# Driver code
def main():
    graph = {
        'A': ['B', 'C', 'D'],
        'B': ['A', 'D', 'F'],
        'C': ['A', 'D'],
        'D': ['A', 'B', 'C'],
        'E': ['F'],
        'F': ['B', 'E']
    }
    result = graph_coloring(graph)
    if result:
        print("Graph colors:", result)
    else:
        print("Solution does not exist")

if __name__ == "__main__":
    main()
Graph coloring

Hamiltonian path#

A Hamiltonian path is a path that visits every vertex in a graph exactly once; no vertex may be visited more than once. Hamiltonian paths can exist in both directed and undirected graphs and are important for tackling graph problems, especially when every vertex needs to be included in the solution.

Let’s look at the step-by-step implementation of the algorithm:

  • Begin with a recursive backtracking approach, a common method for exploring all possible paths.

    • Select a starting vertex, mark it as visited, and visit all adjacent vertices recursively.

  • To avoid unnecessary computation, use pruning techniques:

    • Skip paths that revisit vertices.

    • Backtrack immediately if a partial path cannot extend to a full Hamiltonian path.

  • When the current path includes every vertex exactly once, a Hamiltonian path has been found.

Let’s look at the code for the algorithm we just discussed:

def hamiltonian_path(graph, start):
    path = []        # Stores the path of vertices in the current attempt
    visited = set()  # Tracks the visited vertices

    # Helper function for backtracking through vertices
    def backtrack(vertex):
        if len(path) == len(graph):  # If the path includes all vertices, return True
            return True
        for neighbor in graph[vertex]:   # Explore neighbors of the current vertex
            if neighbor not in visited:  # Proceed only if the neighbor hasn't been visited
                visited.add(neighbor)    # Mark the neighbor as visited
                path.append(neighbor)    # Add the neighbor to the current path
                if backtrack(neighbor):  # Recur to check if this leads to a solution
                    return True
                path.pop()                # Backtrack by removing the last vertex from the path
                visited.remove(neighbor)  # Unmark the vertex as visited
        return False  # Return False if no Hamiltonian path is found

    visited.add(start)    # Mark the start vertex as visited
    path.append(start)    # Add the start vertex to the path
    if backtrack(start):  # If a valid Hamiltonian path is found, return it
        return path
    else:
        return None  # Return None if no Hamiltonian path exists

# Driver code
def main():
    graph = {
        'A': ['E'],
        'B': ['A', 'D', 'E'],
        'C': ['A', 'F'],
        'D': ['B'],
        'E': ['A', 'B', 'F'],
        'F': ['C', 'E']
    }
    for key in graph.keys():
        result = hamiltonian_path(graph, key)
        if result:
            print("Hamiltonian path:", result)
            break

if __name__ == "__main__":
    main()
Hamiltonian path

Time and space complexity#

When working with graph algorithms, it’s important to understand their time and space complexities. These metrics help us estimate the efficiency of algorithms and determine their suitability for different types of problems. Below is a comparison table of the time and space complexities for the unweighted graph algorithms discussed above:

Complexity Comparison

Graph Algorithm          | Time Complexity       | Space Complexity
-------------------------|-----------------------|---------------------
Breadth-first search     | O(V + E)              | O(V)
Depth-first search       | O(V + E)              | O(V)
Kosaraju's algorithm     | O(V + E)              | O(V + E)
Tarjan's algorithm       | O(V + E)              | O(V + E)
Graph coloring           | Backtracking: O(m^V)  | Backtracking: O(V)
Hamiltonian path/cycle   | O(V!)                 | O(V)

Here:

  • V = Number of vertices in the graph.

  • E = Number of edges in the graph.

  • m = Number of colors available.

Applications of unweighted graph algorithms#

Unweighted graph algorithms play an important role in solving computational problems across many domains. From network design and optimization to solving puzzles and scheduling tasks, these algorithms efficiently manage and analyze connections and relationships within data. Below is a table highlighting some key problems solved by different unweighted graph algorithms, with a brief description of each.

Graph Applications

Graph Algorithm          | Problem Name                          | Problem Description
-------------------------|---------------------------------------|--------------------
Breadth-first search     | Shortest path in an unweighted graph  | Finding the path between two vertices that uses the fewest edges.
Depth-first search (DFS) | Cycle detection                       | Identifying whether there are any cycles in a directed or undirected graph, which is critical in circuit design and scheduling.
Kosaraju's algorithm     | Strongly connected components (SCC)   | Identifying all strongly connected components in a directed graph, where each component is a subgraph in which any vertex can reach any other vertex.
Tarjan's algorithm       | Articulation points and bridges       | Identifying critical vertices and edges whose removal increases the number of connected components, which is useful in network reliability analysis.
Graph coloring           | Scheduling problems                   | Assigning tasks, exams, or resources to time slots or entities so that no conflicting tasks share the same slot.
Hamiltonian path/cycle   | Traveling salesperson problem         | Determining the shortest route that visits each city exactly once and returns to the origin city.

Conclusion#

In this blog, we have covered many important graph problems that are frequently asked in interviews and exams. They provide a strong foundation in graph theory and show up in practical applications like navigation systems, network design, and resource management. We encourage you to practice these problems regularly: doing so will improve your problem-solving skills and make you more confident when dealing with graph-related questions in interviews and exams.

Additional resources#

Educative offers in-depth courses and learning paths tailored for learners at all levels, whether you’re just starting or looking to refine your skills. Our platform equips you with the essential knowledge for coding interviews, including a focus on frequently asked unweighted graph algorithms. Educative.io provides the tools to excel in your next coding interview and advance your tech career, from understanding fundamental graph traversal techniques to mastering complex algorithms. Explore our top resources to help you prepare effectively for your interview.


  
