Privacy is becoming a top priority in digital media. Whether you are a content creator or a developer, knowing how to automatically obscure identities in videos is a valuable skill. In this tutorial, we will build a Python script that detects faces in a video and applies a Gaussian blur to them in real time using OpenCV and MediaPipe.
Why Use MediaPipe and OpenCV?
OpenCV is the industry standard for image processing, while MediaPipe (developed by Google) provides highly optimized, lightweight machine learning solutions for face detection. Together, they allow us to process video frames quickly without needing a heavy GPU setup.
Prerequisites
Before we begin, ensure you have Python installed on your system. You will need to install two libraries using pip:
pip install opencv-python mediapipe
The Implementation Logic
The workflow of our face-blurring tool is straightforward:
- Load the input video file.
- Initialize the MediaPipe Face Detection module.
- Loop through every frame of the video.
- Identify the bounding box coordinates for every face detected.
- Apply a blur filter only to those specific coordinates.
- Save the processed frames into a new video file.
The Python Code
Copy and save the following code as blur_faces.py. Make sure to have a sample video file in the same directory.
import cv2
import mediapipe as mp

def blur_faces(input_video, output_video):
    # Initialize MediaPipe Face Detection
    mp_face_detection = mp.solutions.face_detection
    face_detection = mp_face_detection.FaceDetection(
        model_selection=1, min_detection_confidence=0.5)

    # Open the input video
    cap = cv2.VideoCapture(input_video)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    fps = int(cap.get(cv2.CAP_PROP_FPS))

    # Define the video writer
    fourcc = cv2.VideoWriter_fourcc(*'mp4v')
    out = cv2.VideoWriter(output_video, fourcc, fps, (width, height))

    while cap.isOpened():
        success, frame = cap.read()
        if not success:
            break

        # Convert BGR to RGB for MediaPipe
        img_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        results = face_detection.process(img_rgb)

        if results.detections:
            for detection in results.detections:
                bboxC = detection.location_data.relative_bounding_box
                ih, iw, _ = frame.shape
                x, y = int(bboxC.xmin * iw), int(bboxC.ymin * ih)
                w, h = int(bboxC.width * iw), int(bboxC.height * ih)

                # Clamp the box so the ROI stays inside the frame;
                # an out-of-bounds slice would make GaussianBlur fail
                x, y = max(0, x), max(0, y)
                w, h = min(w, iw - x), min(h, ih - y)
                if w <= 0 or h <= 0:
                    continue

                # Extract and blur the face region
                face_roi = frame[y:y+h, x:x+w]
                blurred_face = cv2.GaussianBlur(face_roi, (99, 99), 30)
                frame[y:y+h, x:x+w] = blurred_face

        out.write(frame)
        cv2.imshow('Face Blurring...', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    cap.release()
    out.release()
    cv2.destroyAllWindows()

# Usage
blur_faces('input.mp4', 'output_blurred.mp4')
How the Code Works
1. Detection
The FaceDetection class from MediaPipe identifies faces and returns normalized coordinates. We multiply these by the frame's width and height to get the actual pixel locations.
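The conversion from normalized to pixel coordinates is plain arithmetic. A minimal sketch with hypothetical values (the frame size and box fractions below are made up for illustration):

```python
# Hypothetical frame size and MediaPipe-style normalized bounding box
frame_width, frame_height = 1280, 720

# MediaPipe reports values in the range [0, 1] relative to the frame
xmin, ymin, box_w, box_h = 0.25, 0.10, 0.20, 0.30

# Multiply by the frame dimensions to get pixel coordinates
x = int(xmin * frame_width)    # 320
y = int(ymin * frame_height)   # 72
w = int(box_w * frame_width)   # 256
h = int(box_h * frame_height)  # 216

print(x, y, w, h)  # 320 72 256 216
```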
2. Region of Interest (ROI)
Using NumPy slicing, we isolate the face area from the rest of the image. This allows us to apply the blur specifically to the face without affecting the background.
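A toy example of this slicing technique on a small NumPy array (standing in for an image) shows how writing the processed patch back changes only the selected region:

```python
import numpy as np

# Stand-in for a grayscale frame; a real frame would be H x W x 3
frame = np.zeros((6, 6), dtype=np.uint8)
x, y, w, h = 2, 1, 3, 2  # hypothetical face box

# Isolate the "face" area, process it, and write it back in place
roi = frame[y:y+h, x:x+w]
frame[y:y+h, x:x+w] = roi + 255

print(frame[1, 2])  # 255 -- inside the ROI, modified
print(frame[0, 0])  # 0   -- background left untouched
```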
3. Gaussian Blur
The cv2.GaussianBlur function does the actual blurring. The (99, 99) argument is the kernel size: the larger the values, the stronger the blur. Note that both values must be odd.
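The odd-size requirement is easier to see if you build a Gaussian kernel by hand: an odd length gives the kernel a single center tap, so the output pixel stays aligned with the input pixel. This sketch mirrors the idea, not OpenCV's exact coefficients:

```python
import numpy as np

def gaussian_kernel_1d(size, sigma):
    """Build a normalized 1-D Gaussian kernel (size must be odd)."""
    assert size % 2 == 1, "kernel size must be odd"
    half = size // 2
    xs = np.arange(-half, half + 1)          # symmetric around 0
    k = np.exp(-(xs ** 2) / (2 * sigma ** 2))
    return k / k.sum()                       # normalize: preserves brightness

k = gaussian_kernel_1d(5, 1.0)
print(round(k.sum(), 6))  # 1.0 -- weights sum to one
print(k.argmax())         # 2   -- the single center tap
```

A larger `size` (like the 99 used above) spreads the weights over more neighbors, which is why bigger kernels blur more heavily.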
Conclusion
With just a few lines of Python, you've created a functional tool to protect privacy in video content. This script can be further enhanced by adding a GUI using Tkinter or by deploying it as a web service using Flask or FastAPI.