Deep-Live-Cam: Real-Time Face Swapping With Just One Photo

By Prahlad Menon · 4 min read

What if you could swap your face in a live video call, in real time, using nothing but a single photograph?

That's exactly what Deep-Live-Cam does. And with over 91,000 GitHub stars and 13,000+ forks, it's become one of the most popular open-source AI projects in the world.

What Deep-Live-Cam Actually Does

Deep-Live-Cam performs real-time face swapping in video streams. You provide one source image (the face you want to wear), point it at a webcam or video feed, and it swaps your face live. No model training. No dataset collection. One image, three clicks.

The core pipeline:

  1. Face detection: identifies faces in each video frame
  2. Face swap: uses the InsightFace inswapper model to replace detected faces with your source image
  3. Face enhancement: GFPGAN restores quality and detail post-swap
  4. Frame output: renders the result to a virtual camera or output stream
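The steps above can be sketched as a per-frame loop. The function names below are illustrative stand-ins for the real detector, swapper, and enhancer, not Deep-Live-Cam's actual API:

```python
# Hypothetical sketch of the per-frame pipeline; every function here is an
# illustrative stand-in, not Deep-Live-Cam's real implementation.

def detect_faces(frame):
    # Stand-in for InsightFace detection: return face bounding boxes.
    return [(10, 10, 50, 50)]  # one fake face box for illustration

def swap_face(frame, box, source_face):
    # Stand-in for inswapper: paste the source identity into the box.
    frame = dict(frame)
    frame["swapped_regions"] = frame.get("swapped_regions", []) + [box]
    return frame

def enhance_face(frame, box):
    # Stand-in for GFPGAN: restore detail in the swapped region.
    frame = dict(frame)
    frame["enhanced"] = True
    return frame

def process_frame(frame, source_face):
    """One iteration of the detect -> swap -> enhance pipeline."""
    for box in detect_faces(frame):
        frame = swap_face(frame, box, source_face)
        frame = enhance_face(frame, box)
    return frame

result = process_frame({"pixels": "..."}, source_face="alice.jpg")
```

In the real project, each stage is a GPU-accelerated model call, but the control flow per frame follows this same detect, swap, enhance shape.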

The entire process runs at interactive framerates on consumer hardware: NVIDIA GPUs via CUDA, Apple Silicon via CoreML, or even CPU-only (slower but functional).

Why This Project Matters

Deep-Live-Cam sits at a fascinating intersection of capability and accessibility. A few years ago, creating a convincing face swap required:

  • Hundreds or thousands of training images
  • Hours of model training on expensive GPUs
  • Significant technical expertise
  • Post-processing and manual cleanup

Now it takes one photo and a few seconds of setup. The democratization curve here is steep and worth understanding.

The Technical Stack

Under the hood, Deep-Live-Cam combines several well-established models:

Component        Model                          Purpose
Face Swap        inswapper_128 (InsightFace)    Core face replacement
Face Enhance     GFPGANv1.4                     Post-swap quality restoration
Face Detection   InsightFace buffalo_l          Real-time face detection
Execution        ONNX Runtime                   Cross-platform inference

The ONNX Runtime backend means it runs across CUDA (NVIDIA), CoreML (Apple Silicon), DirectML (AMD/Intel on Windows), and plain CPU, making it genuinely cross-platform.
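A minimal sketch of how such a backend fallback might be chosen. The provider strings are ONNX Runtime's real names; the chooser function itself is a hypothetical illustration, not the project's code:

```python
# Preferred execution providers, fastest first. These names are the ones
# ONNX Runtime actually reports; the pick_provider helper is illustrative.

PREFERENCE = [
    "CUDAExecutionProvider",    # NVIDIA GPUs
    "CoreMLExecutionProvider",  # Apple Silicon
    "DmlExecutionProvider",     # DirectML (AMD/Intel on Windows)
    "CPUExecutionProvider",     # universal fallback
]

def pick_provider(available):
    """Return the fastest provider present on this machine."""
    for name in PREFERENCE:
        if name in available:
            return name
    raise RuntimeError("no usable execution provider")

# On a machine without a GPU, only the CPU provider is reported:
print(pick_provider(["CPUExecutionProvider"]))  # CPUExecutionProvider
```

In a real session you would pass the chosen name to `onnxruntime.InferenceSession(..., providers=[...])`; the ordered-preference pattern is what makes one codebase run everywhere.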

Key Features

  • Mouth Mask: retains your original mouth movements for natural-looking speech
  • Face Mapping: apply different faces to multiple people in the same frame simultaneously
  • Live Streaming: output to virtual cameras for use in Zoom, Teams, OBS, or any video app
  • Video Processing: process pre-recorded videos with face swaps
  • Movie Mode: watch movies with any face swapped in, in real time
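The Mouth Mask feature comes down to a simple composite: after the swap, the original frame's mouth region is copied back over the output so lip movements stay authentic. A tiny pure-Python illustration, using list-of-lists "frames" and a hypothetical `apply_mouth_mask` helper (not the project's actual implementation):

```python
# Illustrative mouth-mask composite: restore the original mouth pixels
# on top of the swapped frame. Frames are tiny 2D lists of "pixels".

def apply_mouth_mask(original, swapped, mouth_box):
    x0, y0, x1, y1 = mouth_box
    out = [row[:] for row in swapped]      # copy the swapped frame
    for y in range(y0, y1):
        for x in range(x0, x1):
            out[y][x] = original[y][x]     # restore original mouth pixels
    return out

original = [[1] * 4 for _ in range(4)]     # "real" frame, all 1s
swapped  = [[9] * 4 for _ in range(4)]     # face-swapped frame, all 9s
result = apply_mouth_mask(original, swapped, (1, 2, 3, 4))
# rows 2-3, cols 1-2 come from the original; everything else stays swapped
```

The real feature presumably blends with a soft-edged mask around detected mouth landmarks rather than a hard rectangle, but the compositing idea is the same.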

The Ethics Question

The project includes built-in safeguards: NSFW content detection that blocks processing of inappropriate material. But the ethical dimensions of real-time deepfakes are significant:

Legitimate uses:

  • Content creators animating custom characters
  • Virtual try-on for fashion and entertainment
  • Privacy protection in video calls
  • Film and media production on a budget
  • Accessibility: letting people with facial differences participate in video without anxiety

Risks:

  • Identity fraud and impersonation
  • Non-consensual deepfake creation
  • Erosion of trust in video as evidence
  • Potential for harassment and scams

The project maintainers are explicit: if you use a real person's face, get consent and label outputs as deepfakes. The AGPL-3.0 license also requires that derivative works, including those offered as network services, remain open source.

How It Compares

Deep-Live-Cam isn't the only face-swapping tool, but it's arguably the most accessible real-time solution:

Tool            Real-Time   Training Required   Open Source     Stars
Deep-Live-Cam   ✅          None (1 image)      ✅ AGPL-3.0     91K+
DeepFaceLab     ❌          Hours of training   ✅              48K+
FaceFusion      ✅          None                ✅              25K+
Roop            ❌          None                ✅ (archived)   26K+

Deep-Live-Cam essentially evolved from the Roop project (now archived) and added the critical real-time capability that makes it practical for live use cases.

Getting Started

The simplest path:

git clone https://github.com/hacksider/Deep-Live-Cam.git
cd Deep-Live-Cam
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt

Download the two required models (GFPGANv1.4.pth and inswapper_128_fp16.onnx) from Hugging Face and drop them in the models/ folder. Then:
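Before launching, a short script can confirm the downloads landed in the right place. This is a hypothetical convenience helper, not part of the project; the file names are the two assumed from the download step above:

```python
# Hypothetical pre-launch check: verify both model files exist in models/.
from pathlib import Path

REQUIRED_MODELS = ["GFPGANv1.4.pth", "inswapper_128_fp16.onnx"]

def missing_models(models_dir):
    """Return the names of required model files absent from models_dir."""
    d = Path(models_dir)
    return [m for m in REQUIRED_MODELS if not (d / m).exists()]

missing = missing_models("models")
if missing:
    print("Download these into models/:", ", ".join(missing))
```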

python run.py                          # CPU mode
python run.py --execution-provider cuda  # NVIDIA GPU

For Apple Silicon users, install onnxruntime-silicon for CoreML acceleration.

The project also offers pre-built binaries for Windows and Mac if you'd rather skip the manual setup.

The Bigger Picture

Deep-Live-Cam is a signal of where real-time AI is heading. The models are getting smaller, the inference is getting faster, and the barrier to entry is approaching zero.

What took a research lab in 2019 now runs on a laptop in 2026. The face-swapping capability itself is table stakes; the real question is what the next generation of real-time video manipulation looks like, and whether our detection and verification systems can keep pace.

For now, Deep-Live-Cam remains a remarkable piece of engineering: a single-image, real-time, cross-platform face swapper that anyone can run. Use it responsibly.

πŸ“ Source: github.com/hacksider/Deep-Live-Cam