RuView: See People Through Walls Using Only WiFi

By Prahlad Menon · 5 min read

What if your WiFi router could tell you someone just fell? Or that a person is trapped under rubble, breathing shallowly — without a single camera in the room?

That’s not a future scenario. It’s what RuView does today, and it runs on hardware that costs less than a pizza dinner.

The Idea: Physics as a Sensor

RuView is built on a research concept called WiFi DensePose, first demonstrated in academic work at Carnegie Mellon University. The core insight: when a human body moves through a space, it disturbs the WiFi signals filling that space. Those disturbances — measured as Channel State Information (CSI) across dozens of subcarriers — encode enough detail to reconstruct body position, breathing rate, and heart rate.

RuView takes that research insight and turns it into a practical edge system. No cloud required. No labeled training data needed. No cameras at all.

What It Can Actually Do

The capabilities are surprisingly broad:

| Feature | Detail |
| --- | --- |
| Pose estimation | Full body skeleton from CSI amplitude/phase data |
| Breathing detection | 6–30 breaths/min via bandpass filtering |
| Heart rate | 40–120 BPM via FFT peak detection |
| Presence sensing | Sub-1 ms latency using RSSI variance |
| Through-wall detection | Up to 5 m depth via Fresnel zone modeling |
| Multi-person tracking | Multiple people, independent vitals per person |
| Disaster response | START triage classification for trapped survivors |
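
The through-wall figure rests on Fresnel zone modeling: a body entering the first Fresnel zone of a transmitter–receiver link measurably perturbs the received signal. The radius formula below is standard RF propagation math, not RuView's own code; the 2.4 GHz frequency and 5 m link are illustrative values:

```python
import math

C = 299_792_458.0          # speed of light, m/s

def fresnel_radius(freq_hz, d1_m, d2_m, zone=1):
    """Radius of the n-th Fresnel zone at a point d1 from TX, d2 from RX."""
    wavelength = C / freq_hz
    return math.sqrt(zone * wavelength * d1_m * d2_m / (d1_m + d2_m))

# Midpoint of a 5 m link at 2.4 GHz: a body within ~0.4 m of the
# line of sight intrudes into the first Fresnel zone.
r1 = fresnel_radius(2.4e9, 2.5, 2.5)
print(f"first Fresnel zone radius ≈ {r1:.2f} m")
```

Anything inside that ellipsoid — including a person behind drywall — changes the path-length geometry and therefore the measured signal.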

The signal processing pipeline runs at 54,000 frames per second in Rust — fast enough for real-time pose visualization with almost no perceptible lag.
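
To make the vitals rows above concrete, here is a minimal sketch of bandpass filtering plus FFT peak detection on a simulated CSI amplitude stream. This is not RuView's actual pipeline (which is in Rust); the 50 Hz sample rate and band edges matching the table are assumptions for illustration:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 50.0                      # CSI sample rate (Hz); assumed for illustration
np.random.seed(0)
t = np.arange(0, 60, 1 / FS)   # 60 s window

# Synthetic CSI amplitude: breathing at 0.25 Hz (15 breaths/min),
# a weaker heartbeat component at 1.2 Hz (72 BPM), plus noise.
csi = (1.0 * np.sin(2 * np.pi * 0.25 * t)
       + 0.1 * np.sin(2 * np.pi * 1.2 * t)
       + 0.05 * np.random.randn(t.size))

def band_rate(signal, lo_hz, hi_hz, fs):
    """Bandpass the signal, then return the dominant frequency via FFT peak."""
    sos = butter(3, [lo_hz, hi_hz], btype="band", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, signal)
    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(filtered.size, d=1 / fs)
    return freqs[np.argmax(spectrum)]

breaths = band_rate(csi, 0.1, 0.5, FS) * 60    # 6-30 breaths/min band
heart = band_rate(csi, 0.67, 2.0, FS) * 60     # 40-120 BPM band
print(f"breathing ≈ {breaths:.0f}/min, heart ≈ {heart:.0f} BPM")
```

The same two-band structure explains why breathing and heart rate can be read from one signal: they occupy disjoint frequency ranges, so each bandpass isolates one physiological rhythm before the peak search.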

The Hardware: $54 Total

The full-featured setup requires CSI-capable hardware — standard consumer WiFi chips don’t expose raw CSI data. But the recommended stack is surprisingly affordable:

  • 3–6× ESP32-S3 nodes (~$9 each): the sensor mesh
  • An existing WiFi router: serves as the access point

That’s roughly $54 (six nodes at ~$9 each) for a full 360-degree room coverage system. Each ESP32 module runs independently — the central server is optional, used only for visualization and aggregation.

If you have zero hardware, you can still run RuView in RSSI-only mode on any laptop for coarse presence and motion detection. Or verify the signal processing pipeline without any hardware at all:

python v1/data/proof/verify.py
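
RSSI-only mode boils down to variance thresholding: an empty room produces a stable RSSI reading, while a moving body reshapes the multipath environment and inflates the variance. A minimal sketch, with a hypothetical window size and threshold (RuView's actual values and API will differ):

```python
from collections import deque
import random
import statistics

WINDOW = 32          # RSSI samples of history; illustrative value
THRESHOLD = 2.0      # variance (dBm^2) above which we declare motion; tune per room

class PresenceDetector:
    """Coarse presence/motion detection from RSSI variance alone."""
    def __init__(self):
        self.history = deque(maxlen=WINDOW)

    def update(self, rssi_dbm):
        self.history.append(rssi_dbm)
        if len(self.history) < WINDOW:
            return False                 # still filling the window
        return statistics.variance(self.history) > THRESHOLD

random.seed(1)
detector = PresenceDetector()
# Empty room: RSSI jitters slightly around -55 dBm.
quiet = [detector.update(-55 + random.gauss(0, 0.3)) for _ in range(64)]
# Someone walks through: multipath swings the RSSI by several dB.
motion = [detector.update(-55 + random.gauss(0, 3.0)) for _ in range(64)]
print(any(quiet[-16:]), any(motion[-16:]))
```

Because it needs only the RSSI value every WiFi chip already reports, this mode runs on any laptop — no CSI-capable hardware required.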

Self-Learning, No Labels Required

One of the most impressive aspects of RuView is its training approach. The system:

  1. Learns the RF signature of a room over time — walls, furniture, reflections
  2. Subtracts the environment to isolate human activity
  3. Improves automatically as it operates — no hand-tuning, no labeled datasets
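
Step 2 — subtracting the environment — can be sketched as a slowly adapting per-subcarrier baseline: static reflections accumulate into the baseline, while fast human motion does not, so the residual isolates the person. The constants and class names here are illustrative, not RuView's implementation:

```python
import numpy as np

ALPHA = 0.01   # slow adaptation rate; hypothetical value

class EnvironmentModel:
    """Learns a per-subcarrier CSI amplitude baseline and subtracts it."""
    def __init__(self, n_subcarriers):
        self.baseline = np.zeros(n_subcarriers)
        self.initialized = False

    def update(self, frame):
        if not self.initialized:
            self.baseline = frame.astype(float).copy()
            self.initialized = True
        else:
            # Exponential moving average: walls, furniture, and other
            # static reflections settle into the baseline over time.
            self.baseline = (1 - ALPHA) * self.baseline + ALPHA * frame
        return frame - self.baseline    # residual ≈ human-induced disturbance

np.random.seed(0)
model = EnvironmentModel(n_subcarriers=64)
static_room = 10 + np.random.randn(64) * 0.01     # fixed multipath profile
for _ in range(500):                              # burn in on an empty room
    model.update(static_room + np.random.randn(64) * 0.05)

person = static_room.copy()
person[20:30] += 2.0                              # a body perturbs some subcarriers
residual = model.update(person)
print(residual[20:30].mean(), residual[:20].mean())
```

No labels are needed for this step because the "label" is time itself: whatever persists is environment, whatever fluctuates is activity.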

The contrastive embedding model (documented in ADR-024) learns from raw WiFi data alone. The persistent field model (ADR-030) even detects signal drift over days and flags spoofing attempts.
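
The drift-versus-spoofing distinction can be illustrated with a long-lived reference signature: deviations within the noise envelope are absorbed as environmental drift, while abrupt jumps are flagged as anomalies. This is a simplified stand-in for the persistent field model described in ADR-030, with made-up constants:

```python
import numpy as np

DRIFT_ALPHA = 0.001     # very slow reference adaptation (hypothetical)
SPOOF_SIGMA = 6.0       # flag frames this far outside the noise envelope

class FieldMonitor:
    """Tracks a long-lived RF baseline; slow drift is absorbed, jumps are flagged."""
    def __init__(self, reference):
        self.reference = reference.astype(float).copy()
        self.noise = None            # typical frame deviation, learned online

    def observe(self, frame):
        deviation = np.linalg.norm(frame - self.reference)
        if self.noise is None:       # calibrate on the first frame
            self.noise = max(deviation, 1e-6)
            return False
        anomalous = deviation > SPOOF_SIGMA * self.noise
        if not anomalous:
            # absorb slow environmental drift into the reference
            self.reference += DRIFT_ALPHA * (frame - self.reference)
            self.noise = 0.99 * self.noise + 0.01 * deviation
        return anomalous

np.random.seed(0)
monitor = FieldMonitor(np.ones(64))
drift = 0.0
flags = []
for _ in range(500):                 # days of gradual drift, compressed
    drift += 1e-5
    flags.append(monitor.observe(np.ones(64) + drift + np.random.randn(64) * 0.01))
spoofed = monitor.observe(np.ones(64) + drift + 0.5)   # sudden, uniform jump
print(any(flags), spoofed)
```

The design choice is the separation of timescales: the reference adapts orders of magnitude more slowly than frames arrive, so only changes faster than any plausible environmental process stand out.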

The result: a model that transfers across rooms, buildings, and hardware variants — trained once, deployed anywhere.

Getting Started in 30 Seconds

If you want to try the signal processing stack without hardware, Docker is the fastest path:

docker pull ruvnet/wifi-densepose:latest
docker run -p 3000:3000 ruvnet/wifi-densepose:latest
# Open http://localhost:3000

For full functionality with live CSI capture, you’ll need the ESP32-S3 mesh or a research NIC (Intel 5300 / Atheros AR9580, ~$50–100 on eBay).

Privacy-First by Design

This is worth highlighting explicitly: RuView stores no video, no images, no personal identifiers. It processes WiFi signal perturbations — mathematical abstractions of human presence. There’s nothing to subpoena, nothing to leak from a breach.

For healthcare environments, elder care, disaster response, and smart buildings, this is a significant advantage over camera-based systems. The residents or patients being monitored don’t need to consent to being filmed — because they aren’t being filmed.

The Architecture

RuView is built on RuVector, a self-learning vector database. The full stack includes:

  • Rust core — 54K fps signal processing pipeline
  • Python ML layer — attention networks, graph algorithms, domain generalization
  • ESP32-S3 firmware — edge modules that run fully offline
  • Tauri v2 desktop app (WIP) — mesh visualization and OTA updates
  • Docker image — ruvnet/wifi-densepose for quick evaluation

The project has 48 Architecture Decision Records (ADRs) and 7 Domain-Driven Design models — unusually thorough documentation for an open-source project of this scale.

Why This Matters Now

The convergence of a few trends makes WiFi sensing practical in 2026:

  1. ESP32-S3 everywhere — CSI-capable chips are now commodity components
  2. Edge ML maturity — attention networks that fit on microcontrollers
  3. Rust for embedded — deterministic, safe systems code without a runtime
  4. Privacy regulation pressure — GDPR and similar laws making camera deployment expensive

RuView sits at the intersection of all four. It’s not a research demo — it’s a deployable system with Docker images, hardware guides, and a signal processing pipeline anyone can verify.

The Bigger Picture

RuView hints at something larger: the physical world as a sensor network, built from the radio signals already filling every room. WiFi, Bluetooth, cellular, even ambient RF — all of it encodes information about human presence and activity.

The question isn’t whether this sensing capability will become ubiquitous. It’s whether the systems built on it will be camera-based surveillance or privacy-respecting signal analysis.

RuView makes a clear architectural bet on the latter.

GitHub: ruvnet/RuView
Live demo: ruvnet.github.io/RuView
Docker: docker pull ruvnet/wifi-densepose:latest