Deep-Live-Cam: Real-Time Face Swapping With Just One Photo
What if you could swap your face in a live video call, in real time, using nothing but a single photograph?
That's exactly what Deep-Live-Cam does. And with over 91,000 GitHub stars and 13,000+ forks, it's become one of the most popular open-source AI projects in the world.
What Deep-Live-Cam Actually Does
Deep-Live-Cam performs real-time face swapping in video streams. You provide one source image (the face you want to wear), point it at a webcam or video feed, and it swaps your face live, in real time. No model training. No dataset collection. One image, three clicks.
The core pipeline:
- Face detection: identifies faces in each video frame
- Face swap: uses the InsightFace/inswapper model to replace detected faces with your source image
- Face enhancement: GFPGAN restores quality and detail post-swap
- Frame output: renders the result to a virtual camera or output stream
The entire process runs at interactive framerates on consumer hardware: NVIDIA GPUs via CUDA, Apple Silicon via CoreML, or even CPU-only (slower but functional).
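To make those four stages concrete, here is a minimal sketch of such a loop, not Deep-Live-Cam's actual code: it assumes the insightface, gfpgan, opencv-python, and pyvirtualcam packages, a webcam at index 0, and the two model files sitting in a models/ folder.

```python
# Sketch of a detect -> swap -> enhance -> output loop (illustrative only).
import cv2
import insightface
from insightface.app import FaceAnalysis
from gfpgan import GFPGANer
import pyvirtualcam

# Stage 1: face detection (buffalo_l bundle)
detector = FaceAnalysis(name="buffalo_l")
detector.prepare(ctx_id=0, det_size=(640, 640))

# Stage 2: face swap (inswapper)
swapper = insightface.model_zoo.get_model("models/inswapper_128.onnx")

# Stage 3: post-swap enhancement
enhancer = GFPGANer(model_path="models/GFPGANv1.4.pth", upscale=1)

# One source image is all the identity information needed
source = cv2.imread("source.jpg")
source_face = detector.get(source)[0]  # assumes a face is found

cap = cv2.VideoCapture(0)
with pyvirtualcam.Camera(width=640, height=480, fps=30) as cam:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.resize(frame, (640, 480))
        for face in detector.get(frame):
            frame = swapper.get(frame, face, source_face, paste_back=True)
        _, _, frame = enhancer.enhance(frame)
        # Stage 4: publish to a virtual camera (expects RGB)
        cam.send(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        cam.sleep_until_next_frame()
```

Running GFPGAN on every frame is the main cost; the real project offers toggles and optimizations that a bare-bones sketch like this omits.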
Why This Project Matters
Deep-Live-Cam sits at a fascinating intersection of capability and accessibility. A few years ago, creating a convincing face swap required:
- Hundreds or thousands of training images
- Hours of model training on expensive GPUs
- Significant technical expertise
- Post-processing and manual cleanup
Now it takes one photo and a few seconds of setup. The democratization curve here is steep and worth understanding.
The Technical Stack
Under the hood, Deep-Live-Cam combines several well-established models:
| Component | Model | Purpose |
|---|---|---|
| Face Swap | inswapper_128 (InsightFace) | Core face replacement |
| Face Enhance | GFPGANv1.4 | Post-swap quality restoration |
| Face Detection | InsightFace buffalo_l | Real-time face detection |
| Execution | ONNX Runtime | Cross-platform inference |
The ONNX Runtime backend means it runs across CUDA (NVIDIA), CoreML (Apple Silicon), DirectML (AMD/Intel on Windows), and plain CPU, making it genuinely cross-platform.
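As a rough illustration of how that works, an ONNX Runtime session takes an ordered provider list and uses the first one available on the machine (the model path here is just a local example):

```python
import onnxruntime as ort

# Keep only the providers this ORT build actually supports,
# in order of preference; ORT falls back down the list.
available = ort.get_available_providers()
preferred = ["CUDAExecutionProvider", "CoreMLExecutionProvider",
             "DmlExecutionProvider", "CPUExecutionProvider"]
providers = [p for p in preferred if p in available]

session = ort.InferenceSession("models/inswapper_128.onnx",
                               providers=providers)
print(session.get_providers())  # the providers actually in use
```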
Key Features
- Mouth Mask: retains your original mouth movements for natural-looking speech (a blending sketch follows this list)
- Face Mapping: apply different faces to multiple people in the same frame simultaneously
- Live Streaming: output to virtual cameras for use in Zoom, Teams, OBS, or any video app
- Video Processing: apply face swaps to pre-recorded videos
- Movie Mode: watch movies with any face swapped in, in real time
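The mouth-mask idea is straightforward to sketch: composite the original mouth pixels back over the swapped frame. A minimal illustration, not the project's implementation; it assumes you already have mouth landmark coordinates for a face (e.g. from insightface's landmark output), and the function name is hypothetical:

```python
import cv2
import numpy as np

def apply_mouth_mask(original, swapped, mouth_points):
    """Composite the original mouth region back onto the swapped frame.

    mouth_points: (N, 2) array of landmark coordinates around the mouth.
    """
    # Fill the convex hull of the mouth landmarks as a binary mask
    mask = np.zeros(original.shape[:2], dtype=np.uint8)
    hull = cv2.convexHull(mouth_points.astype(np.int32))
    cv2.fillConvexPoly(mask, hull, 255)
    # Feather the mask edge so the seam is invisible
    mask = cv2.GaussianBlur(mask, (15, 15), 0)
    alpha = (mask / 255.0)[..., None]
    # Alpha-blend: original mouth over the swapped face
    return (alpha * original + (1 - alpha) * swapped).astype(np.uint8)
```

The feathered edge is what keeps the composite from looking like a sticker pasted over the swap.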
The Ethics Question
The project includes built-in safeguards: NSFW content detection that blocks processing of inappropriate material. But the ethical dimensions of real-time deepfakes are significant:
Legitimate uses:
- Content creators animating custom characters
- Virtual try-on for fashion and entertainment
- Privacy protection in video calls
- Film and media production on a budget
- Accessibility: letting people with facial differences participate in video without anxiety
Risks:
- Identity fraud and impersonation
- Non-consensual deepfake creation
- Erosion of trust in video as evidence
- Potential for harassment and scams
The project maintainers are explicit: if you use a real person's face, get consent and label outputs as deepfakes. The AGPL-3.0 license also requires that derivative works remain open source, even when deployed as a network service.
How It Compares
Deep-Live-Cam isn't the only face-swapping tool, but it's arguably the most accessible real-time solution:
| Tool | Real-Time | Training Required | Open Source | Stars |
|---|---|---|---|---|
| Deep-Live-Cam | Yes | None (1 image) | Yes (AGPL-3.0) | 91K+ |
| DeepFaceLab | No | Hours of training | Yes | 48K+ |
| FaceFusion | No | None | Yes | 25K+ |
| Roop | No | None | Yes (archived) | 26K+ |
Deep-Live-Cam essentially evolved from the Roop project (now archived) and added the critical real-time capability that makes it practical for live use cases.
Getting Started
The simplest path:
```bash
git clone https://github.com/hacksider/Deep-Live-Cam.git
cd Deep-Live-Cam
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```
Download the two required models (GFPGANv1.4 and inswapper_128_fp16.onnx) from Hugging Face and drop them into the models/ folder; a scripted alternative is sketched below.
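One way to script the download is via the huggingface_hub client. The repository ID below is an assumption, so verify it against the links in the project README:

```python
# Hypothetical download helper -- confirm repo_id and filenames
# against the Deep-Live-Cam README before relying on this.
from huggingface_hub import hf_hub_download

for filename in ("GFPGANv1.4.pth", "inswapper_128_fp16.onnx"):
    hf_hub_download(
        repo_id="hacksider/deep-live-cam",  # assumed model mirror
        filename=filename,
        local_dir="models",
    )
```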
Then run:
```bash
python run.py                             # CPU mode
python run.py --execution-provider cuda   # NVIDIA GPU
```
For Apple Silicon users, install onnxruntime-silicon and launch with --execution-provider coreml for CoreML acceleration.
The project also offers pre-built binaries for Windows and Mac if youβd rather skip the manual setup.
The Bigger Picture
Deep-Live-Cam is a signal of where real-time AI is heading. The models are getting smaller, the inference is getting faster, and the barrier to entry is approaching zero.
What took a research lab in 2019 now runs on a laptop in 2026. The face-swapping capability itself is table stakes β the real question is what the next generation of real-time video manipulation looks like, and whether our detection and verification systems can keep pace.
For now, Deep-Live-Cam remains a remarkable piece of engineering: a single-image, real-time, cross-platform face swapper that anyone can run. Use it responsibly.
Source: github.com/hacksider/Deep-Live-Cam