Graphs are fundamental structures in computer science and a frequent focus of technical interviews because they help us solve real-world problems such as finding the shortest route on a map, analyzing social networks, and scheduling tasks efficiently. At their core, graphs comprise vertices (points) and edges (lines connecting these points). Imagine a graph as a network of interconnected nodes, like cities linked by roads or friends in a social network.
This blog will discuss some frequently asked unweighted graph algorithms in technical interviews and their solutions. Understanding these problems can help you tackle real-world issues more effectively.
Directed graphs: The edges have a direction in these graphs. You can only travel from one vertex to another in the direction indicated by the arrow on the edge.
Undirected graphs: The edges in these graphs do not have a direction. You can travel both ways between vertices.
Weighted graphs: Each edge has a weight or cost. This is useful for problems where you must find the least expensive or shortest path.
Unweighted graphs: The edges do not have weights. All connections are considered equal.
Adjacency matrix: This is a 2D array in which the cell at row i and column j indicates whether there is an edge from vertex i to vertex j (and, in a weighted graph, stores that edge's weight).
Adjacency list: This is a list where each vertex has a sublist of all the vertices it is connected to. This method is more memory-efficient for graphs with fewer edges because an adjacency list only stores the neighbors for each vertex, unlike an adjacency matrix where every pair of vertices is represented, whether there is an edge between them or not. Refer to the following illustration, where the neighbors of each vertex are connected in the form of a list.
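To make the two representations concrete, here is a minimal sketch in Python; the vertex labels and edges below are illustrative examples rather than a graph taken from this blog:

# A small undirected, unweighted graph with vertices A, B, and C (illustrative example)
vertices = ['A', 'B', 'C']

# Adjacency list: each vertex maps to the list of vertices it is connected to
adj_list = {
    'A': ['B', 'C'],
    'B': ['A'],
    'C': ['A'],
}

# Adjacency matrix: adj_matrix[i][j] is 1 if an edge connects vertices[i] and vertices[j], else 0
adj_matrix = [
    [0, 1, 1],  # A is connected to B and C
    [1, 0, 0],  # B is connected to A
    [1, 0, 0],  # C is connected to A
]

# For a weighted graph, the list can store (neighbor, weight) pairs instead
weighted_adj_list = {
    'A': [('B', 4), ('C', 2)],
    'B': [('A', 4)],
    'C': [('A', 2)],
}

The matrix reserves a cell for every possible pair of vertices, while the list only stores the edges that actually exist, which is why the adjacency list is usually preferred for sparse graphs.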
For those interested in a deeper dive, resources like Educative.io offer detailed courses covering unweighted graph algorithms and interview techniques, suitable for all levels:
Recommended resource
With algorithms being one of the most common themes in coding interviews, having a firm grip on them can be the difference between being hired and not. After completing this comprehensive course, you'll have an in-depth understanding of different algorithm types in Python and be equipped with a simple process for approaching complexity analysis. As you progress, you'll be exposed to the most important algorithms you'll likely encounter in an interview. You'll work your way through over 50 interactive coding challenges and review detailed solutions for each problem. You'll walk away with the ability to build up to the optimal solution for addressing those tough coding interview questions head-on.
Graph traversals are important methods for investigating the vertices and edges of a graph. Examples of these traversals include breadth-first search (BFS) and depth-first search (DFS). These techniques assist in exploring every vertex in a graph, which is necessary for several applications, such as determining the shortest path, detecting cycles, or checking for connectivity between vertices. Let’s look at the algorithm and step-by-step solution of both of them:
Breadth-first search (BFS) is a key technique for examining the vertices and edges of a graph. It is especially helpful for finding the shortest path in unweighted graphs, and it works on both directed and undirected graphs. BFS travels the graph layer by layer: starting from the source vertex, it visits every neighbor of the current layer before proceeding to the next level of vertices.
Let’s look at the step-by-step implementation of the algorithm:
Start with the source vertex first. To prevent processing the same vertex more than once, use a list or set to mark vertices as visited and a queue to keep track of the vertices that need to be examined.
Mark the source vertex as visited and add it to the queue.
Repeat the following steps while the queue is not empty:
Dequeue the vertex from the queue and store it in the output.
For each neighbor of the removed vertex:
If the neighbor has not been visited yet, mark it as visited and add it to the queue.
Continue repeating these steps until no more vertices are left in the queue.
The BFS traversal is completed, and the list of vertices is returned in the order they were visited.
Let’s look at the following illustration to get a better understanding of the solution:
Let’s look at the code for the algorithm we just discussed:
from collections import deque

def bfs(graph, source):
    # Set to track visited nodes
    visited = set()
    # Queue for BFS, initialized with the source node
    queue = deque([source])
    visited.add(source)
    # List to store the order of BFS traversal
    bfs_order = []
    # Continue until the queue is empty
    while queue:
        # Pop the leftmost element (FIFO) from the queue
        vertex = queue.popleft()
        # Append the vertex to the BFS result
        bfs_order.append(vertex)
        # Explore all the neighbors of the current vertex
        for neighbor in graph[vertex]:
            if neighbor not in visited:
                # Mark neighbor as visited and add it to the queue
                visited.add(neighbor)
                queue.append(neighbor)
    # Return the list of nodes in BFS order
    return bfs_order

# Driver code
def main():
    graph = {
        'A': ['D', 'C'],
        'B': ['E', 'F'],
        'C': ['A', 'F'],
        'D': ['A', 'F'],
        'E': ['B'],
        'F': ['B', 'D', 'C']
    }
    print("BFS:", bfs(graph, 'A'))

if __name__ == "__main__":
    main()
Depth-first search (DFS) is one of the most important methods for examining the vertices and edges of a graph. In contrast to BFS, which explores a graph level by level, DFS follows a single path as far as it can go before backtracking. It is very helpful for detecting cycles, finding connected components, and solving problems that require exploring all possible paths (a short cycle-detection sketch follows the DFS code below). DFS can be used on directed and undirected graphs; it works by visiting a vertex and then recursively visiting its unvisited neighbors before returning to earlier vertices.
Let’s look at the step-by-step implementation of the algorithm:
Create a stack, push the starting vertex onto it, and mark it as visited.
Repeat the following steps while the stack is not empty:
Pop the vertex from the stack; if it has not been visited yet, mark it as visited and store it in the output.
Push each adjacent vertex that has not been visited onto the stack.
The DFS traversal is completed, and the list of vertices is returned in the order they were visited.
Let’s look at the following illustration to get a better understanding of the solution:
Let’s look at the code for the algorithm we just discussed:
# Depth-first search (DFS) using an iterative approach with a stack
def dfs(graph, source):
    visited = set()   # Set to keep track of visited nodes
    stack = [source]  # Stack for DFS, initialized with the source node
    dfs_result = []   # List to store the DFS traversal order

    # Continue until all nodes connected to the source are explored
    while stack:
        vertex = stack.pop()           # Pop the last node from the stack
        if vertex not in visited:      # If the node hasn't been visited yet
            dfs_result.append(vertex)  # Add the node to the DFS result
            visited.add(vertex)        # Mark the node as visited
            # Push all unvisited neighbors of the current node onto the stack
            for neighbor in graph[vertex]:
                if neighbor not in visited:
                    stack.append(neighbor)

    return dfs_result  # Return the order of nodes visited in DFS

# Driver code
def main():
    graph = {
        'A': ['D', 'C'],
        'B': ['E', 'F'],
        'C': ['A', 'F'],
        'D': ['A', 'F'],
        'E': ['B'],
        'F': ['B', 'D', 'C']
    }
    print("DFS:", dfs(graph, 'A'))

if __name__ == "__main__":
    main()
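As mentioned above, cycle detection is a common application of DFS. Here is a minimal sketch, under the assumption of an undirected graph stored as an adjacency list like the examples in this blog; the function name has_cycle and the parent-tracking approach are illustrative choices, not part of the original code:

# Minimal sketch: DFS-based cycle detection in an undirected graph.
# A cycle exists if DFS reaches an already visited vertex that is not the
# vertex it arrived from (its parent in the DFS tree).
def has_cycle(graph):
    visited = set()

    def dfs(vertex, parent):
        visited.add(vertex)
        for neighbor in graph[vertex]:
            if neighbor not in visited:
                if dfs(neighbor, vertex):
                    return True
            elif neighbor != parent:  # A visited neighbor other than the parent closes a cycle
                return True
        return False

    # Check every component, since the graph may be disconnected
    for vertex in graph:
        if vertex not in visited:
            if dfs(vertex, None):
                return True
    return False

# Example usage with the same undirected graph used for BFS and DFS above
graph = {
    'A': ['D', 'C'],
    'B': ['E', 'F'],
    'C': ['A', 'F'],
    'D': ['A', 'F'],
    'E': ['B'],
    'F': ['B', 'D', 'C']
}
print("Contains cycle:", has_cycle(graph))  # True: A-D-F-C-A forms a cycle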
A strongly connected component (SCC) in a directed graph is a maximal set of vertices in which every vertex can be reached from every other vertex within the set.
Kosaraju's algorithm is a technique for locating every strongly connected component (SCC) in a directed graph. It detects these components by running DFS twice: once on the original graph and once on the transposed graph. The first DFS determines the order in which vertices should be processed. The second DFS, run on the reversed graph in that order, reveals the SCCs.
Note: After running the first DFS, the graph is transposed, i.e., the direction of every edge is reversed.
Let’s look at the step-by-step implementation of the algorithm:
Create a stack to store the vertices in the order they are finished in the first DFS traversal, and maintain a visited array to track which vertices have already been visited during the search.
Perform a DFS on the original graph.
Reverse all edges of the graph to create the transpose graph.
Initialize a visited array to keep track of visited vertices and process vertices in the reverse order of their completion in the DFS traversal of the original graph.
If an unvisited vertex is present, perform a DFS from it and mark all reachable vertices as visited:
Pop a vertex from the stack.
If the vertex is unvisited, start a DFS from it in the transpose graph.
All vertices reached during this DFS form one strongly connected component.
Each DFS in the second pass identifies a strongly connected component.
Return the SCCs as they are found.
Let’s look at the following illustration to get a better understanding of the solution:
Let’s look at the code for the algorithm we just discussed:
def kosaraju(graph):
    stack = []       # Stack to store the order of finishing times of vertices
    visited = set()  # Set to keep track of visited vertices
    scc = []         # List to store the strongly connected components (SCCs)

    # First DFS to fill the stack with the finishing order of vertices
    def dfs(vertex):
        visited.add(vertex)             # Mark the vertex as visited
        for neighbor in graph[vertex]:  # Explore all its neighbors
            if neighbor not in visited:
                dfs(neighbor)           # Recursively visit unvisited neighbors
        stack.append(vertex)            # Push the vertex to the stack after exploring all neighbors

    # Second DFS on the reversed graph to collect SCCs
    def reverse_dfs(vertex, visited, current_scc):
        visited.add(vertex)                     # Mark the vertex as visited
        current_scc.append(vertex)              # Add vertex to the current SCC
        for neighbor in reverse_graph[vertex]:  # Explore all neighbors in the reversed graph
            if neighbor not in visited:
                reverse_dfs(neighbor, visited, current_scc)  # Recursively visit neighbors

    # Step 1: Perform DFS on the original graph to fill the stack
    for vertex in graph:
        if vertex not in visited:
            dfs(vertex)

    # Step 2: Reverse the graph by reversing the edges
    reverse_graph = {vertex: [] for vertex in graph}  # Create a reversed graph structure
    for vertex in graph:
        for neighbor in graph[vertex]:
            reverse_graph[neighbor].append(vertex)    # Reverse the direction of edges

    # Step 3: Clear the visited set and process vertices in order of decreasing finishing times
    visited.clear()
    while stack:
        vertex = stack.pop()  # Get the vertex with the highest finishing time
        if vertex not in visited:
            current_scc = []  # To store the current strongly connected component
            reverse_dfs(vertex, visited, current_scc)  # Perform DFS on the reversed graph
            scc.append(current_scc)                    # Add the SCC to the result list

    return scc  # Return the list of all SCCs

# Driver code
def main():
    graph = {
        'A': ['B'],
        'B': ['E'],
        'C': ['B'],
        'D': ['B'],
        'E': ['C'],
        'F': ['E']
    }
    print("Strongly connected components:", kosaraju(graph))

if __name__ == "__main__":
    main()
Tarjan's algorithm finds the strongly connected components (SCCs) of a directed graph in a single DFS pass. It stores each vertex's discovery time and the lowest discovery time reachable from that vertex (its low-link value) in two arrays or lists, and it uses a stack to keep track of the vertices currently being explored.
Let’s look at the step-by-step implementation of the algorithm:
Initialize a counter to track the discovery times of the vertices.
Create 4 data structures for the following purposes:
An array to store the discovery times of the vertices.
An array to store the lowest discovery time (low-link value) reachable from each vertex.
An array to keep track of vertices that are currently in the stack.
A stack to keep track of the visited vertices.
For each vertex, if it has not been visited, call the recursive DFS function.
Assign a discovery time to the vertex and initialize its low-link value to that discovery time.
Push the vertex onto the stack and mark it as being in it.
For each adjacent vertex:
If the adjacent vertex has not been visited, recursively call the DFS function for the adjacent vertex.
After the recursive call returns, update the current vertex's low-link value to the minimum of its own low-link value and the adjacent vertex's low-link value.
If the adjacent vertex is already in the stack, update the current vertex's low-link value to the minimum of its own low-link value and the adjacent vertex's discovery time.
After visiting all adjacent vertices, if the low-link value of the current vertex is equal to its discovery time, the current vertex is the root of an SCC.
Pop vertices from the stack until the current vertex is reached, marking them as part of the same SCC.
Repeat the DFS traversal for all vertices to ensure all SCCs are found.
Let’s look at the code for the algorithm we just discussed:
def tarjans(graph):
    # Initialize variables for tracking indices and the recursion stack
    index = 0
    stack = []     # Stack to maintain the current path of visited nodes
    indices = {}   # Dictionary to store the discovery index of each node
    lowlinks = {}  # Dictionary to store the smallest reachable node from each node
    on_stack = {}  # Dictionary to check if a node is in the current stack
    sccs = []      # List to store the strongly connected components (SCCs)

    # Helper function to perform DFS and find SCCs
    def strong_connect(vertex):
        nonlocal index
        indices[vertex] = index   # Set discovery index for the current node
        lowlinks[vertex] = index  # Initially, the lowlink is the same as the index
        index += 1
        stack.append(vertex)      # Push the node onto the stack
        on_stack[vertex] = True   # Mark it as being on the stack

        # Visit all neighbors of the current node
        for neighbor in graph[vertex]:
            if neighbor not in indices:  # If neighbor hasn't been visited, recurse
                strong_connect(neighbor)
                lowlinks[vertex] = min(lowlinks[vertex], lowlinks[neighbor])  # Update lowlink
            elif on_stack[neighbor]:     # If neighbor is on the stack, it's part of the current SCC
                lowlinks[vertex] = min(lowlinks[vertex], indices[neighbor])

        # If the current node is a root of an SCC, pop the stack to form the SCC
        if lowlinks[vertex] == indices[vertex]:
            scc = []
            while True:
                neighbor = stack.pop()      # Pop nodes from the stack
                on_stack[neighbor] = False  # Mark them as no longer in the stack
                scc.append(neighbor)
                if neighbor == vertex:      # Stop when the current root is reached
                    break
            sccs.append(scc)  # Add the newly found SCC to the result

    # Perform DFS for each unvisited node
    for vertex in graph:
        if vertex not in indices:  # Only start a new DFS if the node hasn't been visited
            strong_connect(vertex)

    return sccs  # Return the list of strongly connected components

# Driver code
def main():
    graph = {
        'A': ['B'],
        'B': ['E'],
        'C': ['B'],
        'D': ['B'],
        'E': ['C'],
        'F': ['E']
    }
    print("Strongly connected components:", tarjans(graph))

if __name__ == "__main__":
    main()
Graph coloring is the process of assigning colors to a graph's vertices, using as few colors as possible, so that no two adjacent vertices share the same color. This technique plays an important role in problems where conflicts must be avoided, such as scheduling tasks or allocating compiler registers. By ensuring that adjacent vertices (representing conflicting tasks or resources) are assigned different colors, graph coloring helps manage resources and time efficiently.
Let’s look at the step-by-step implementation of the algorithm:
Begin with a graph where no vertices have been assigned a color.
Select any vertex in the graph and assign the first color to it. This is usually done by starting with color 1.
For each vertex in the graph until all vertices are colored:
Choose the next vertex. Look at all the vertices adjacent to it (connected by edges).
Check the colors already assigned to the adjacent vertices. Choose the lowest-numbered color that none of the adjacent vertices uses and assign it to the current vertex.
If every color used so far already appears among the adjacent vertices, introduce the next new color.
The algorithm ends when all vertices have been assigned a color such that no two adjacent vertices have the same color.
Please note that this strategy provides a valid coloring but may not produce an optimal solution.
Let’s look at the code for the algorithm we just discussed:
def is_safe(graph, vertex, color, assigned_colors):
    # Check if any neighbor of the current vertex has the same color
    for neighbor in graph[vertex]:
        if neighbor in assigned_colors and assigned_colors[neighbor] == color:
            return False  # Not safe to color with this color
    return True  # Safe to assign this color

def backtrack(graph, m, assigned_colors, vertex):
    # Base case: All vertices are colored
    if vertex == len(graph):
        return True
    current = list(graph.keys())[vertex]  # Vertex to color at this position
    # Try all possible colors (1 to m)
    for color in range(1, m + 1):
        # Check if it's safe to assign the current color
        if is_safe(graph, current, color, assigned_colors):
            # Assign the color
            assigned_colors[current] = color
            # Recur to assign colors to the rest of the vertices
            if backtrack(graph, m, assigned_colors, vertex + 1):
                return True
            # Backtrack: Remove the color if it leads to no solution
            assigned_colors.pop(current)
    return False  # No valid coloring found

def graph_coloring(graph):
    assigned_colors = {}  # Dictionary to store assigned colors
    m = len(graph)        # Maximum number of colors (equal to number of vertices)
    # Try to color the graph using backtracking
    if backtrack(graph, m, assigned_colors, 0):
        return assigned_colors  # Return the valid coloring
    else:
        return None  # No valid coloring exists

# Driver code
def main():
    graph = {
        'A': ['B', 'C', 'D'],
        'B': ['A', 'D', 'F'],
        'C': ['A', 'D'],
        'D': ['A', 'B', 'C'],
        'E': ['F'],
        'F': ['B', 'E']
    }
    result = graph_coloring(graph)
    if result:
        print("Graph colors:", result)
    else:
        print("Solution does not exist")

if __name__ == "__main__":
    main()
A Hamiltonian path is a path that visits every vertex in a graph exactly once, so no vertex is repeated. Hamiltonian paths can exist in both directed and undirected graphs and are important for tackling graph problems in which every vertex must be included in the solution.
Let’s look at the step-by-step implementation of the algorithm:
Begin with a recursive backtracking approach, a common method for exploring all possible paths.
Select a starting vertex, mark it as visited, and visit all adjacent vertices recursively.
To avoid unnecessary computation, use pruning techniques:
Skip paths that revisit vertices.
Backtrack immediately if a partial path cannot extend to a full Hamiltonian path.
Once the path includes every vertex exactly once, a Hamiltonian path has been found.
Let’s look at the code for the algorithm we just discussed:
def hamiltonian_path(graph, start):
    path = []        # Stores the path of vertices in the current attempt
    visited = set()  # Tracks the visited vertices

    # Helper function for backtracking through vertices
    def backtrack(vertex):
        if len(path) == len(graph):  # If the path includes all vertices, return True
            return True
        for neighbor in graph[vertex]:     # Explore neighbors of the current vertex
            if neighbor not in visited:    # Proceed only if the neighbor hasn't been visited
                visited.add(neighbor)      # Mark the neighbor as visited
                path.append(neighbor)      # Add the neighbor to the current path
                if backtrack(neighbor):    # Recur to check if this leads to a solution
                    return True
                path.pop()                 # Backtrack by removing the last vertex from the path
                visited.remove(neighbor)   # Unmark the vertex as visited
        return False  # Return False if no Hamiltonian path is found

    visited.add(start)    # Mark the start vertex as visited
    path.append(start)    # Add the start vertex to the path
    if backtrack(start):  # If a valid Hamiltonian path is found, return it
        return path
    else:
        return None  # Return None if no Hamiltonian path exists

# Driver code
def main():
    graph = {
        'A': ['E'],
        'B': ['A', 'D', 'E'],
        'C': ['A', 'F'],
        'D': ['B'],
        'E': ['A', 'B', 'F'],
        'F': ['C', 'E']
    }
    for key in graph.keys():
        result = hamiltonian_path(graph, key)
        if result:
            print("Hamiltonian path:", result)
            break

if __name__ == "__main__":
    main()
When working with graph algorithms, it’s important to understand their time and space complexities. These metrics help us estimate the efficiency of algorithms and determine their suitability for different types of problems. Below is a comparison table of the time and space complexities for the unweighted graph algorithms discussed above:
Graph Algorithm | Time Complexity | Space Complexity |
Breadth-first search | O(V+E) | O(V) |
Depth-first search | O(V+E) | O(V) |
Kosaraju’s algorithm | O(V+E) | O(V+E) |
Tarjan’s algorithm | O(V+E) | O(V+E) |
Graph coloring | Backtracking: O(m^V) | Backtracking: O(V) |
Hamiltonian path/cycle | O(V!) | O(V) |
Here, V is the number of vertices in the graph, E is the number of edges, and m is the number of available colors.
Unweighted graph algorithms play an important role in solving computational problems across many domains. From network design and optimization to solving puzzles and scheduling tasks, these algorithms efficiently manage and analyze connections and relationships within data. Below is a table highlighting some key problems solved by different unweighted graph algorithms, with a brief description of each problem.
Graph Algorithm | Problem Name | Problem Description |
Breadth-first search | Shortest path in an unweighted graph | Finding the path with the fewest edges between two vertices in an unweighted graph (see the sketch after this table). |
Depth-first search (DFS) | Cycle detection | Identifying whether there are any cycles in a directed or undirected graph is critical in circuit design and scheduling. |
Kosaraju’s algorithm | Strongly connected components (SCC) | Identifying all strongly connected components in a directed graph, where each component is a subgraph where any vertex can reach any other vertex. |
Tarjan’s algorithm | Articulation points and bridges | Identifying critical vertices and edges whose removal increases the number of connected components in a graph is useful in network reliability analysis. |
Graph coloring | Scheduling problems | Assigning tasks, exams, or resources to time slots or entities such that no conflicting tasks are assigned in the same slot. |
Hamiltonian path/cycle | Traveling salesperson problem | Determining the shortest route that visits each city exactly once and returns to the origin city. |
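To make the first row of the table concrete, here is a minimal sketch of how BFS can recover a shortest path (fewest edges) between two vertices by remembering each vertex's predecessor. The function name shortest_path and the predecessor-map approach are illustrative choices, not code from the original blog:

from collections import deque

# Minimal sketch: BFS shortest path in an unweighted graph by tracking predecessors
def shortest_path(graph, source, target):
    predecessor = {source: None}  # Maps each discovered vertex to the vertex it was reached from
    queue = deque([source])
    while queue:
        vertex = queue.popleft()
        if vertex == target:
            # Walk back through the predecessors to rebuild the path
            path = []
            while vertex is not None:
                path.append(vertex)
                vertex = predecessor[vertex]
            return path[::-1]
        for neighbor in graph[vertex]:
            if neighbor not in predecessor:  # Discovered vertices double as the visited set
                predecessor[neighbor] = vertex
                queue.append(neighbor)
    return None  # No path exists between source and target

# Example usage with the undirected graph from the BFS section
graph = {
    'A': ['D', 'C'],
    'B': ['E', 'F'],
    'C': ['A', 'F'],
    'D': ['A', 'F'],
    'E': ['B'],
    'F': ['B', 'D', 'C']
}
print("Shortest path A -> B:", shortest_path(graph, 'A', 'B'))  # e.g., ['A', 'D', 'F', 'B']

Because BFS explores vertices in order of increasing distance from the source, the first time it reaches the target is guaranteed to be along a path with the fewest edges.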
In this blog, we have covered many important graph problems frequently asked in interviews and exams. These problems help us understand how to work with graphs and solve real-world issues. These problems are significant because they provide a strong foundation in graph theory and are often used in practical applications like navigation systems, network design, and resource management. We encourage you to practice these problems regularly. Practicing will improve your problem-solving skills and make you more confident in dealing with graph-related questions in interviews and exams.
Educative offers in-depth courses and learning paths tailored for learners at all levels, whether you’re just starting or looking to refine your skills. Our platform equips you with the essential knowledge for coding interviews, including a focus on frequently asked unweighted graph algorithms. Educative.io provides the tools to excel in your next coding interview and advance your tech career, from understanding fundamental graph traversal techniques to mastering complex algorithms. Explore our top resources to help you prepare effectively for your interview.