Face Recognition Using OpenCV on Raspberry Pi 400


A face recognition system is a technology that can match human faces in digital images or video frames against a database of known faces. Although humans recognize faces without much effort, facial recognition is a challenging pattern recognition problem in computing: the system must identify a human face, which is three-dimensional and changes in appearance with lighting and expression, from a two-dimensional image. To accomplish this, a face recognition system performs four steps. First, face detection segments the face from the background of the image. Second, the segmented face image is aligned to account for face pose, image size, and photographic properties such as illumination and grayscale. The purpose of alignment is to enable accurate localization of facial features in the third step, feature extraction, in which features such as the eyes, nose, and mouth are located and measured to represent the face as a feature vector. Fourth, that feature vector is matched against the database of known faces.
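The fourth step, matching, is essentially a nearest-neighbour comparison of feature vectors. Below is a minimal pure-Python sketch of that idea, with made-up three-dimensional "embeddings" standing in for the 128-dimensional encodings the face_recognition library produces later in this tutorial:

```python
import math

def euclidean(a, b):
    # straight-line distance between two equal-length vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_face(known, candidate, tolerance=0.6):
    """Return the name of the closest known embedding within
    'tolerance', or 'Unknown' if nothing is close enough."""
    best_name, best_dist = "Unknown", tolerance
    for name, emb in known.items():
        d = euclidean(emb, candidate)
        if d < best_dist:  # strictly closer than the current best
            best_name, best_dist = name, d
    return best_name

# toy 3-D embeddings; real face encodings are 128-D
known = {"alice": [0.1, 0.2, 0.3], "bob": [0.9, 0.8, 0.7]}
print(match_face(known, [0.12, 0.18, 0.31]))  # close to alice
print(match_face(known, [5.0, 5.0, 5.0]))     # far from everyone -> Unknown
```

The 0.6 tolerance mirrors the default matching threshold used by the face_recognition library; a smaller value makes matching stricter.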


This video shows how I use OpenCV to make a simple face recognition system on Raspberry Pi 400.

Hardware Preparation

This is the list of items used in the video.

Sample Program

This is the Python 3 sample program for OpenCV face recognition on the Raspberry Pi. You can run it with the Thonny Python IDE.

#! /usr/bin/python
# import the necessary packages
from imutils.video import VideoStream
from imutils.video import FPS
import face_recognition
import imutils
import pickle
import time
import cv2

# Initialize 'currentname' to trigger only when a new person is identified.
currentname = "unknown"
# Determine faces from encodings.pickle file model created from train_model.py
encodingsP = "encodings.pickle"
# use this xml file
cascade = "haarcascade_frontalface_default.xml"

# load the known faces and embeddings along with OpenCV's Haar
# cascade for face detection
print("[INFO] loading encodings + face detector...")
data = pickle.loads(open(encodingsP, "rb").read())
detector = cv2.CascadeClassifier(cascade)

# initialize the video stream and allow the camera sensor to warm up
print("[INFO] starting video stream...")
vs = VideoStream(src=0).start()
#vs = VideoStream(usePiCamera=True).start()
time.sleep(2.0)

# start the FPS counter
fps = FPS().start()

# loop over frames from the video file stream
while True:
    # grab the frame from the threaded video stream and resize it
    # to 500px (to speed up processing)
    frame = vs.read()
    frame = imutils.resize(frame, width=500)

    # convert the input frame from (1) BGR to grayscale (for face
    # detection) and (2) from BGR to RGB (for face recognition)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

    # detect faces in the grayscale frame
    rects = detector.detectMultiScale(gray, scaleFactor=1.1,
        minNeighbors=5, minSize=(30, 30),
        flags=cv2.CASCADE_SCALE_IMAGE)

    # OpenCV returns bounding box coordinates in (x, y, w, h) order
    # but we need them in (top, right, bottom, left) order, so we
    # need to do a bit of reordering
    boxes = [(y, x + w, y + h, x) for (x, y, w, h) in rects]

    # compute the facial embeddings for each face bounding box
    encodings = face_recognition.face_encodings(rgb, boxes)
    names = []

    # loop over the facial embeddings
    for encoding in encodings:
        # attempt to match each face in the input image to our known
        # encodings
        matches = face_recognition.compare_faces(data["encodings"],
            encoding)
        name = "Unknown"  # if face is not recognized, then print Unknown

        # check to see if we have found a match
        if True in matches:
            # find the indexes of all matched faces then initialize a
            # dictionary to count the total number of times each face
            # was matched
            matchedIdxs = [i for (i, b) in enumerate(matches) if b]
            counts = {}

            # loop over the matched indexes and maintain a count for
            # each recognized face
            for i in matchedIdxs:
                name = data["names"][i]
                counts[name] = counts.get(name, 0) + 1

            # determine the recognized face with the largest number
            # of votes (note: in the event of an unlikely tie Python
            # will select first entry in the dictionary)
            name = max(counts, key=counts.get)

            # If someone in your dataset is identified, print their name on the screen
            if currentname != name:
                currentname = name
                print(currentname)

        # update the list of names
        names.append(name)

    # loop over the recognized faces
    for ((top, right, bottom, left), name) in zip(boxes, names):
        # draw the predicted face name on the image - color is in BGR
        cv2.rectangle(frame, (left, top), (right, bottom),
            (0, 255, 0), 2)
        y = top - 15 if top - 15 > 15 else top + 15
        cv2.putText(frame, name, (left, y), cv2.FONT_HERSHEY_SIMPLEX,
            .8, (255, 0, 0), 2)

    # display the image to our screen
    cv2.imshow("Facial Recognition is Running", frame)
    key = cv2.waitKey(1) & 0xFF

    # quit when 'q' key is pressed
    if key == ord("q"):
        break

    # update the FPS counter
    fps.update()

# stop the timer and display FPS information
fps.stop()
print("[INFO] elapsed time: {:.2f}".format(fps.elapsed()))
print("[INFO] approx. FPS: {:.2f}".format(fps.fps()))

# do a bit of cleanup
cv2.destroyAllWindows()
vs.stop()
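The name-voting step inside the recognition loop above can be exercised on its own. Here is a minimal pure-Python sketch, using a hypothetical matches list in place of the boolean output of face_recognition.compare_faces:

```python
def vote(matches, names):
    """Given boolean matches against the known encodings and the
    parallel list of names, return the name with the most votes,
    or 'Unknown' when nothing matched."""
    if True not in matches:
        return "Unknown"
    counts = {}
    for i, matched in enumerate(matches):
        if matched:
            counts[names[i]] = counts.get(names[i], 0) + 1
    # name with the largest number of votes wins
    return max(counts, key=counts.get)

# hypothetical result of comparing one face against five known
# encodings: three belong to "suad", two to "ana"
names = ["suad", "suad", "suad", "ana", "ana"]
print(vote([True, True, False, True, False], names))  # suad: 2 votes, ana: 1
print(vote([False] * 5, names))                       # no match -> Unknown
```

Voting across several stored encodings per person is what makes the recognition robust to a single bad comparison.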

Before running the recognition program above, build your dataset and encodings with the two programs below. The first captures your face photos: press SPACE to save a photo and ESC to exit.

import cv2

name = 'Suad'  # replace with your name; the folder dataset/<name> must already exist

cam = cv2.VideoCapture(0)

cv2.namedWindow("press space to take a photo", cv2.WINDOW_NORMAL)
cv2.resizeWindow("press space to take a photo", 500, 300)

img_counter = 0

while True:
    ret, frame = cam.read()
    if not ret:
        print("failed to grab frame")
        break
    cv2.imshow("press space to take a photo", frame)

    k = cv2.waitKey(1)
    if k % 256 == 27:
        # ESC pressed
        print("Escape hit, closing...")
        break
    elif k % 256 == 32:
        # SPACE pressed
        img_name = "dataset/" + name + "/image_{}.jpg".format(img_counter)
        cv2.imwrite(img_name, frame)
        print("{} written!".format(img_name))
        img_counter += 1

cam.release()
cv2.destroyAllWindows()
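The program above saves images under dataset/<name>/ and assumes that folder already exists. As a small convenience sketch (an assumption, not part of the original script), the path construction and folder creation can be wrapped in a helper:

```python
import os

def image_path(name, counter, root="dataset"):
    """Build the save path used by the capture script and make sure
    the per-person folder exists before writing."""
    folder = os.path.join(root, name)
    os.makedirs(folder, exist_ok=True)  # no-op if the folder already exists
    return os.path.join(folder, "image_{}.jpg".format(counter))

print(image_path("Suad", 0))  # e.g. dataset/Suad/image_0.jpg on Linux
```

With this helper, cv2.imwrite never fails just because the dataset folder was not created beforehand.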

Finally, train_model.py (referenced in the first program) processes every image in the dataset folder, computes the facial encodings, and serializes them to encodings.pickle.

#! /usr/bin/python
# import the necessary packages
from imutils import paths
import face_recognition
import pickle
import cv2
import os

# our images are located in the dataset folder
print("[INFO] start processing faces...")
imagePaths = list(paths.list_images("dataset"))

# initialize the list of known encodings and known names
knownEncodings = []
knownNames = []

# loop over the image paths
for (i, imagePath) in enumerate(imagePaths):
    # extract the person name from the image path
    print("[INFO] processing image {}/{}".format(i + 1,
        len(imagePaths)))
    name = imagePath.split(os.path.sep)[-2]

    # load the input image and convert it from BGR (OpenCV ordering)
    # to dlib ordering (RGB)
    image = cv2.imread(imagePath)
    rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

    # detect the (x, y)-coordinates of the bounding boxes
    # corresponding to each face in the input image
    boxes = face_recognition.face_locations(rgb,
        model="hog")

    # compute the facial embedding for the face
    encodings = face_recognition.face_encodings(rgb, boxes)

    # loop over the encodings
    for encoding in encodings:
        # add each encoding + name to our set of known names and
        # encodings
        knownEncodings.append(encoding)
        knownNames.append(name)

# dump the facial encodings + names to disk
print("[INFO] serializing encodings...")
data = {"encodings": knownEncodings, "names": knownNames}
f = open("encodings.pickle", "wb")
f.write(pickle.dumps(data))
f.close()
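You can verify the serialized file by loading it back. Here is a minimal round-trip sketch with toy values in place of the real 128-D encodings, written to a separate demo file so the real encodings.pickle is untouched:

```python
import pickle

# toy stand-ins for the 128-D encodings produced by face_recognition
data = {"encodings": [[0.1, 0.2], [0.3, 0.4]], "names": ["suad", "suad"]}

# serialize exactly as train_model.py does
with open("encodings_demo.pickle", "wb") as f:
    f.write(pickle.dumps(data))

# load it back the way the recognition program does
with open("encodings_demo.pickle", "rb") as f:
    loaded = pickle.loads(f.read())

print(len(loaded["encodings"]), "encodings for", set(loaded["names"]))
```

The recognition program expects exactly this dictionary shape: parallel "encodings" and "names" lists, one entry per training image in which a face was found.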



Thanks for reading this tutorial. If you have any technical inquiries, please post at Cytron Technical Forum.

Please be reminded, this tutorial is prepared for you to try and learn.
You are encouraged to improve the code for better application.
