Extracting images or frames from a video is an essential task for purposes such as data analysis, object detection, and content creation.
In this Answer, we'll discuss a method to achieve this using Python and the OpenCV library.
Before diving into the code, ensure you have the following:
Python installed on the system.
The OpenCV and Matplotlib libraries installed. If not, they can be easily installed using pip:
pip install opencv-python matplotlib
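To confirm that both packages installed correctly, a quick version check like the one below should work; the printed version numbers will vary depending on the setup:

# Quick sanity check that both packages are importable; version numbers will vary
import cv2
import matplotlib

print("OpenCV:", cv2.__version__)
print("Matplotlib:", matplotlib.__version__)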
Before diving into the core steps, it's important to ensure that all necessary libraries are imported and set up:
import cv2
import os
import matplotlib.pyplot as plt
This step involves defining a function that will process the video, extract frames, and then display them:
def fetch_and_show_images(clip_path, save_dir, capture_interval=20):
    """
    Fetch images from a clip, save as individual files, then visualize using subplots.

    Parameters:
    - clip_path (str): Path to the video clip.
    - save_dir (str): Folder to save the fetched images.
    - capture_interval (int): Interval (in frames) between fetched images.
    """
    # Verify if save directory exists
    if not os.path.exists(save_dir):
        os.makedirs(save_dir)

    # Initialize video reader
    clip_reader = cv2.VideoCapture(clip_path)
    frame_total = int(clip_reader.get(cv2.CAP_PROP_FRAME_COUNT))

    print(f"Frames available in the clip: {frame_total}")
    print(f"Fetching every {capture_interval}th frame...")
Video frames read with OpenCV are in BGR format. For display purposes using Matplotlib, they need to be converted to RGB format:
    current_frame = 0
    fetched_count = 0
    image_list = []

    while True:
        status, image = clip_reader.read()

        # If reading frame fails, exit the loop
        if not status:
            break

        # Switch from BGR (OpenCV default) to RGB for visualization with matplotlib
        image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
Extracted frames are saved as images and then prepared for display using subplots:
        if current_frame % capture_interval == 0:
            image_filename = f"snapshot_{current_frame}.png"
            image_filepath = os.path.join(save_dir, image_filename)
            cv2.imwrite(image_filepath, image)
            image_list.append(image_rgb)
            fetched_count += 1
            print(f"Fetched frame {current_frame} as {image_filename}")

        current_frame += 1

    clip_reader.release()

    # Display images using subplots
    subplot_rows = len(image_list) // 2 + len(image_list) % 2
    figure, axes = plt.subplots(subplot_rows, 2, figsize=(10, 10))

    for idx, axis in enumerate(axes.ravel()):
        if idx < len(image_list):
            axis.imshow(image_list[idx])
            axis.set_title(f"Snapshot at {idx * capture_interval}")
            axis.axis('off')
        else:
            axis.axis('off')

    plt.tight_layout()
    plt.show()

    print(f"Process complete! Fetched {fetched_count} frames from the clip.")
Finally, to put everything into action, set the source video, specify the output directory, and call the function:
# Example usage:
clip_source = "video_sample.mp4"
save_location = "fetched_images"
fetch_and_show_images(clip_source, save_location, capture_interval=20)
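As a standalone aside, the BGR-to-RGB conversion used inside the function can be sanity-checked on a tiny synthetic image; the single pixel value below is made up purely for illustration:

# Standalone check of the BGR-to-RGB channel swap on a made-up 1x1 image
import cv2
import numpy as np

bgr_pixel = np.array([[[255, 0, 0]]], dtype=np.uint8)   # pure blue in BGR order
rgb_pixel = cv2.cvtColor(bgr_pixel, cv2.COLOR_BGR2RGB)

print(bgr_pixel[0, 0])  # [255   0   0]
print(rgb_pixel[0, 0])  # [  0   0 255] -- channel order reversed for Matplotlib

Matplotlib interprets the last axis of an image array as R, G, B, so skipping this conversion makes blues and reds appear swapped in the displayed frames.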
Here's the video we will be using in the code. As we can see, there are five frames in this video.
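If you are following along with a clip of your own, its frame count and frame rate can be checked up front with OpenCV's property getters; the file name below is just a placeholder:

# Optional: inspect a clip's basic properties before extracting frames
# ("video_sample.mp4" is a placeholder -- use your own file path)
import cv2

cap = cv2.VideoCapture("video_sample.mp4")
print("Frame count:", int(cap.get(cv2.CAP_PROP_FRAME_COUNT)))
print("FPS:", cap.get(cv2.CAP_PROP_FPS))
print("Resolution:", int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), "x",
      int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
cap.release()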
Here's the complete code to extract images from the video:
import cv2
import os
import matplotlib.pyplot as plt

def extract_and_display_frames(video_path, output_folder, frame_interval=20):
    """
    Extract frames from a video, save them as images, and display them using subplots.

    Parameters:
    - video_path (str): Path to the video file.
    - output_folder (str): Directory to save the extracted images.
    - frame_interval (int): Interval to capture frames.
    """

    # Ensure the output directory exists
    if not os.path.exists(output_folder):
        os.makedirs(output_folder)

    # Load the video
    video_obj = cv2.VideoCapture(video_path)
    total_frames = int(video_obj.get(cv2.CAP_PROP_FRAME_COUNT))

    print(f"Total frames in the video: {total_frames}")
    print(f"Extracting every {frame_interval}th frame...")

    frame_count = 0
    extracted_count = 0
    frames = []

    while True:
        ret, frame = video_obj.read()

        # If frame read is unsuccessful, break the loop
        if not ret:
            break

        if frame_count % frame_interval == 0:
            img_name = f"frame_{frame_count}.png"
            img_path = os.path.join(output_folder, img_name)

            # Convert frame from BGR to RGB for displaying with matplotlib
            frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

            cv2.imwrite(img_path, frame)
            frames.append(frame_rgb)
            extracted_count += 1
            print(f"Extracted frame {frame_count} as {img_name}")

        frame_count += 1

    video_obj.release()

    # Display frames using subplots
    rows = len(frames) // 2 + len(frames) % 2
    fig, axs = plt.subplots(rows, 2, figsize=(10, 10))

    for i, ax in enumerate(axs.ravel()):
        if i < len(frames):
            ax.imshow(frames[i])
            ax.set_title(f"Frame {i * frame_interval}")
            ax.axis('off')
        else:
            ax.axis('off')

    plt.tight_layout()
    plt.show()


# Example usage:
video_source = "video.mp4"
output_dir = "extracted_frames"
extract_and_display_frames(video_source, output_dir, frame_interval=20)
Here’s an explanation for the above code:
Lines 1–3: These lines import the necessary libraries. cv2 is OpenCV, used for computer vision tasks like reading videos and processing images. os helps manage the file system, such as checking if directories exist or creating them. matplotlib.pyplot is a plotting library used for displaying the extracted frames.
Line 5: The function extract_and_display_frames is defined. It is designed to extract frames from a given video and display them. It accepts three arguments:
video_path: the path to the video
output_folder: the output folder to save the extracted frames
frame_interval: the frequency (in frames) at which frames are extracted and displayed; a sketch after this walkthrough shows how to choose it based on the clip's frame rate
Lines 16–17: Before extracting frames, the function checks if the specified output directory exists. If not, it creates it.
Lines 23–24: Initial messages are printed to inform the user about the total number of frames in the video and the extraction interval.
Lines 30–39: The main loop starts to read through each frame of the video. If the current frame number is a multiple of the specified interval, it gets processed.
Line 42: The frame’s color space is converted from BGR (OpenCV’s default) to RGB, suitable for display using matplotlib.
Lines 44–47: The frame is saved as an image in the specified output directory, and the RGB version of the frame is added to a list for later display.
Lines 54–66: After processing the video, the extracted frames are displayed using matplotlib. The images are arranged in rows and columns, with two images per row.
Lines 70–72: The code concludes with an example of function usage. It sets a video source, specifies an output directory for the extracted frames, and calls the extract_and_display_frames function with an interval of 20 frames.
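One practical note on frame_interval: it is measured in frames, not seconds, so the same interval covers different amounts of time depending on the clip's frame rate. The sketch below derives an interval from a desired gap in seconds; the two-second gap and the file name are illustrative assumptions:

# Sketch: derive frame_interval from a desired gap in seconds
# (the 2-second gap and the file name are illustrative assumptions)
import cv2

cap = cv2.VideoCapture("video_sample.mp4")
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0   # fall back to 30 FPS if the property is unavailable
cap.release()

seconds_between_captures = 2
frame_interval = max(1, int(round(fps * seconds_between_captures)))
print(f"Capture every {frame_interval} frames (about {seconds_between_captures}s apart at {fps:.1f} FPS)")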
Extracting frames from a video is a simple yet powerful task that can be quickly accomplished with Python and OpenCV. Whether for analysis, content creation, or any other purpose, this method offers an efficient way to convert video data into a sequence of images.
Happy coding!
Note: To learn how to create a video from images, refer to this Answer.
Here's a matching activity to test what we've learned in this Answer. Match the given descriptions on the left side with their correct keywords on the right side:
Descriptions:
Computer vision library that provides tools for working with images and videos
OpenCV’s default color space for images
A plotting library for displaying images
Keywords:
Matplotlib
BGR
OpenCV