UniFace: All-in-One Face Analysis Toolkit for Production
Face analysis has become essential in modern applications—from security systems and photo management to social media filters and identity verification. But building a robust face analysis pipeline traditionally meant juggling multiple libraries, dealing with complex dependencies, and wrestling with performance optimization across different hardware platforms.
UniFace is a comprehensive, production-ready face analysis library that brings together face detection, recognition, landmark detection, and attribute analysis under one unified API. Built on ONNX Runtime with automatic hardware acceleration support, UniFace delivers high-performance face analysis across Apple Silicon, NVIDIA GPUs, and CPU-only environments.
Core Capabilities
1. Face Detection
UniFace includes two state-of-the-art detection model families:
RetinaFace: Proven accuracy across diverse conditions with models ranging from ultra-lightweight (1.7MB) to high-accuracy variants. The default MNET_V2 model achieves 91.70% accuracy on the WIDER FACE Easy subset while remaining fast enough for real-time applications.
SCRFD: Excellent speed-accuracy tradeoffs, with the SCRFD_10G model reaching 95.16% on WIDER FACE Easy—a good fit for applications that demand maximum precision.
import cv2
from uniface import RetinaFace
detector = RetinaFace()
image = cv2.imread("photo.jpg")
faces = detector.detect(image)
for face in faces:
    print(f"Confidence: {face['confidence']:.2f}")
    print(f"BBox: {face['bbox']}")
    print(f"Landmarks: {face['landmarks']}")
2. Face Recognition
Compare and identify faces using industry-standard embedding models:
ArcFace: State-of-the-art face recognition using additive angular margin loss, trained on MS1M-V2 (5.8M images, 85K identities). Available in both MobileNet (8MB) and ResNet50 (166MB) variants.
MobileFace: Lightweight alternatives optimized for mobile and edge devices, with models as small as 1MB.
import cv2
import numpy as np
from uniface import RetinaFace, ArcFace
detector = RetinaFace()
recognizer = ArcFace()
# Compare two faces (image paths are illustrative)
image1 = cv2.imread("person_a.jpg")
image2 = cv2.imread("person_b.jpg")
faces1 = detector.detect(image1)
faces2 = detector.detect(image2)
emb1 = recognizer.get_normalized_embedding(image1, faces1[0]['landmarks'])
emb2 = recognizer.get_normalized_embedding(image2, faces2[0]['landmarks'])
similarity = np.dot(emb1, emb2.T)[0][0]
print(f"Similarity: {similarity:.4f}")
3. Facial Landmark Detection
Precise 106-point facial landmark localization for detailed face analysis and alignment. The landmarks cover face contour (33 points), eyebrows (18 points), nose (12 points), eyes (24 points), and mouth (19 points).
from uniface import RetinaFace, Landmark106
detector = RetinaFace()
landmarker = Landmark106()
faces = detector.detect(image)
landmarks = landmarker.get_landmarks(image, faces[0]['bbox'])
# Returns (106, 2) array of (x, y) coordinates
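To sanity-check the result, you can draw the points with plain OpenCV calls (this is not a UniFace utility), continuing the snippet above with cv2 imported and image loaded as in the detection example:
# Draw each of the 106 landmarks as a small green dot
for x, y in landmarks:
    cv2.circle(image, (int(x), int(y)), 1, (0, 255, 0), -1)
cv2.imwrite("landmarks.jpg", image)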
4. Attribute Analysis
Extract demographic and emotional attributes from detected faces:
Age & Gender Detection: Trained on CelebA dataset, providing age estimation and gender classification.
Emotion Detection: 7 or 8-class emotion recognition trained on AffectNet (Neutral, Happy, Sad, Surprise, Fear, Disgust, Anger, and optionally Contempt).
from uniface import RetinaFace, AgeGender
detector = RetinaFace()
age_gender = AgeGender()
faces = detector.detect(image)
gender, age = age_gender.predict(image, faces[0]['bbox'])
print(f"{gender}, {age} years old")
Hardware Acceleration
One of UniFace’s standout features is seamless hardware acceleration support:
Apple Silicon (M1/M2/M3/M4): Install with pip install uniface[silicon] to leverage CoreML acceleration, delivering 3-5x faster inference compared to CPU-only execution.
NVIDIA GPUs: Install with pip install uniface[gpu] for CUDA acceleration, perfect for server deployments and batch processing.
CPU Fallback: Works out of the box on any platform with automatic optimization.
The library automatically detects and uses the best available execution provider—no configuration needed.
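If you want to confirm which accelerators are visible in your environment, you can ask ONNX Runtime directly; this uses the onnxruntime package itself rather than any UniFace-specific API:
import onnxruntime as ort
# Typically includes 'CoreMLExecutionProvider' on Apple Silicon or
# 'CUDAExecutionProvider' on NVIDIA GPUs, with 'CPUExecutionProvider' as the fallback
print(ort.get_available_providers())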
Model Selection Guide
UniFace provides a comprehensive model zoo with clear guidance:
Mobile/Edge Devices: Use lightweight models like RetinaFace(MNET_025) (1.7MB) or MobileFace(MNET_V2) (4MB) for resource-constrained environments.
Real-Time Applications: Balance speed and accuracy with RetinaFace(MNET_V2) or SCRFD(SCRFD_500M) for webcam and video processing.
High-Accuracy Applications: Deploy SCRFD(SCRFD_10G) or ArcFace(RESNET) for security systems and identity verification where precision is paramount.
Server/Cloud Deployment: Leverage larger models with GPU acceleration for maximum throughput and accuracy in batch processing scenarios.
Performance Benchmarks
UniFace models deliver impressive accuracy on the WIDER FACE benchmark:
| Model | Easy | Medium | Hard | Size |
|---|---|---|---|---|
| RetinaFace MNET_V2 | 91.70% | 91.03% | 86.60% | 3.5MB |
| RetinaFace RESNET34 | 94.16% | 93.12% | 88.90% | 56MB |
| SCRFD 10G | 95.16% | 93.87% | 83.05% | 17MB |
Production-Ready Features
Clean, Intuitive API
from uniface import create_detector, create_recognizer
# Factory functions with sensible defaults
detector = create_detector('retinaface')
recognizer = create_recognizer('arcface')
# Or customize everything
detector = create_detector('scrfd',
                           model_name='scrfd_10g_kps',
                           conf_thresh=0.8,
                           input_size=(640, 640))
Automatic Model Management
Models are automatically downloaded on first use and cached in ~/.uniface/models/. SHA-256 checksums ensure integrity, and you can customize the cache location if needed.
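If you ever need to audit the cache yourself, the same checksum can be reproduced with the standard library. A minimal sketch; the file name is a hypothetical example rather than a guaranteed path:
import hashlib
from pathlib import Path
# Hypothetical cached model under the default cache directory
model_path = Path.home() / ".uniface" / "models" / "example_model.onnx"
digest = hashlib.sha256(model_path.read_bytes()).hexdigest()
print(f"{model_path.name}: {digest}")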
Visualization Utilities
Built-in drawing functions make it easy to visualize results:
from uniface.visualization import draw_detections
bboxes = [f['bbox'] for f in faces]
scores = [f['confidence'] for f in faces]
landmarks = [f['landmarks'] for f in faces]
draw_detections(image, bboxes, scores, landmarks, vis_threshold=0.6)
cv2.imwrite("output.jpg", image)
Real-World Use Cases
Face Search System
# Build database
database = {}
for person_id, image_path in person_images.items():
    image = cv2.imread(image_path)
    faces = detector.detect(image)
    if faces:
        embedding = recognizer.get_normalized_embedding(
            image, faces[0]['landmarks']
        )
        database[person_id] = embedding
# Search for a face
query_faces = detector.detect(query_image)
query_embedding = recognizer.get_normalized_embedding(
    query_image, query_faces[0]['landmarks']
)
# Find best match
best_match = max(database.items(),
                 key=lambda x: np.dot(query_embedding, x[1].T)[0][0])
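In production you usually also want to reject queries that match nobody well enough, rather than always returning the nearest gallery entry. A small extension of the example above; the 0.35 cutoff is an assumption to tune on your own data:
person_id, embedding = best_match
score = float(np.dot(query_embedding, embedding.T)[0][0])
if score >= 0.35:  # illustrative threshold
    print(f"Match: {person_id} (similarity {score:.4f})")
else:
    print("No confident match in the database")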
Real-Time Webcam Detection
import cv2
from uniface import RetinaFace
from uniface.visualization import draw_detections
detector = RetinaFace()
cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    faces = detector.detect(frame)
    bboxes = [f['bbox'] for f in faces]
    scores = [f['confidence'] for f in faces]
    landmarks = [f['landmarks'] for f in faces]
    draw_detections(frame, bboxes, scores, landmarks)
    cv2.imshow("Face Detection", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
Getting Started
Installation is straightforward:
# macOS (Apple Silicon)
pip install uniface[silicon]
# Linux/Windows with NVIDIA GPU
pip install uniface[gpu]
# CPU-only (all platforms)
pip install uniface
Then start detecting faces in seconds:
import cv2
from uniface import RetinaFace
detector = RetinaFace()
image = cv2.imread("photo.jpg")
faces = detector.detect(image)
for face in faces:
    print(f"Found face with confidence: {face['confidence']:.2f}")
Open Source
UniFace is open source under the MIT license and actively maintained on GitHub. The project includes comprehensive documentation, Jupyter notebook examples, and training code repositories for model reproduction.
The library builds on proven research:
- RetinaFace: Single-Shot Multi-Level Face Localisation in the Wild
- SCRFD: Sample and Computation Redistribution for Efficient Face Detection
- ArcFace: Additive Angular Margin Loss for Deep Face Recognition
Resources:
- GitHub: github.com/yakhyo/uniface
- PyPI: pypi.org/project/uniface
- Documentation: README.md
- Model Zoo: MODELS.md