The gaze panopticon


Can the gaze be turned into a collective power?


Year: Dec 2025
Role: Concept, Design, Development
Technologies: 3D Graphics, Real-time system, Face Detection
This interactive installation reimagines Foucault’s Panopticon through a radical reversal: the confined self transforms external surveillance into collective power. Audiences join via a shared web link (https://the-gaze-panopticon.onrender.com/audience.html) where real-time face detection tracks their attention. Each gaze is measured by duration and multiplied by the number of watchers, accumulating as tangible pressure on the confined self. As collective observation intensifies, the self channels this force to shatter the prison structure, taking us to a place where what was designed to control becomes the catalyst for liberation.

 Join the gaze here

 Experience being gazed at
#Interactive Installation
#Speculative Design
#Creative Coding





Concept

One person (the Experiencer) navigates a 3D space where they're confined within a digital Panopticon alongside a representation of their “Self.” Multiple Audience members watch through their screens, and their facial attention is tracked in real time using computer vision.

As collective observation intensifies, pressure accumulates: the Panopticon structure begins to shake, and the Self deforms under stress. When pressure reaches critical mass, the prison structure shatters, and the Self transforms into a luminous, free-floating form.

Design Goals

  • Create real-time networked interaction between one experiencer and multiple audience members
  • Use facial detection to quantify attention as measurable force
  • Visualize abstract concepts (pressure, surveillance) through 3D spatial experience
  • Build entirely web-based for accessibility—no app downloads required



Technical Architecture



Built with Three.js, Socket.io, face-api.js — entirely web-based, no installation required.
  • Face Detection: face-api.js tracks attention duration in real-time
  • 3D Morphing: Blender shape keys → Three.js morph targets (pressure 0→100)
  • WebSockets: Socket.io synchronizes state across all connected clients
  • Particles: ~5K-10K vertices explode with radial velocity at rupture


Development

Phase 1 - Basic 3D modeling
I created the geometry of the base object (the Self) and the scene (the Panopticon) in Blender, then used shape keys to define its deformation states. Three.js controls these morph targets in real time based on accumulated gaze pressure.
Interpolation: 0.0 (base shape) → 1.0 (fully deformed)



Phase 2 - Importing the 3D model into Three.js
To make the Self deform based on accumulated gaze pressure, I created an updateBlobMorph() function that maps pressure (0-100) to the shape key value (0.0-1.0).
  • At pressure = 0, morphTargetInfluences[0] = 0.0: the blob stays in its base form
  • At pressure = 50, morphTargetInfluences[0] = 0.5: the blob is 50% deformed
  • At pressure = 100, morphTargetInfluences[0] = 1.0: the blob is fully compressed
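The mapping above can be sketched as a small pure function (the function and mesh names here are illustrative, not the project's actual source):

```javascript
// Minimal sketch of the pressure-to-morph mapping.
// Maps pressure 0-100 onto a single shape-key influence in [0.0, 1.0].
function pressureToInfluence(pressure) {
  return Math.min(Math.max(pressure / 100, 0), 1); // clamp to [0, 1]
}

// In the scene this would run every frame, roughly:
// function updateBlobMorph(blobMesh, pressure) {
//   if (blobMesh.morphTargetInfluences) {
//     blobMesh.morphTargetInfluences[0] = pressureToInfluence(pressure);
//   }
// }
```

Clamping keeps the influence valid even if the server briefly reports pressure outside 0-100.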


Phase 3 - Using face detection to collect the gaze
The most challenging aspect was implementing reliable face detection in the browser. It took three complete attempts to find a working solution.

1st try - MediaPipe Face Mesh
Why it failed: Google's MediaPipe model promised 468 facial landmarks with high accuracy, but it relied on WebAssembly (WASM) binaries that failed to initialize.
2nd try - TensorFlow.js Face Landmarks
I pivoted to TensorFlow.js, which uses pure JavaScript/WebGL instead of WASM. The model loaded successfully and the camera feed worked, but detection consistently returned 0 faces.

3rd try - face-api.js
I finally found success with the vladmandic fork of face-api.js, a more stable, better-maintained version with a simpler API.
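A rough sketch of the detection loop using face-api.js (the model path, helper names, and callback are assumptions for illustration; `faceapi` is assumed to be loaded globally in the browser):

```javascript
// Treat any detected face as an active gaze (illustrative helper).
function isGazing(detections) {
  return detections.length > 0;
}

// Browser-side loop: poll the webcam video element for faces.
// '/models' is an assumed location for the face-api.js model weights.
async function startGazeDetection(video, onGazeChange) {
  await faceapi.nets.tinyFaceDetector.loadFromUri('/models');
  setInterval(async () => {
    const detections = await faceapi.detectAllFaces(
      video,
      new faceapi.TinyFaceDetectorOptions()
    );
    onGazeChange(isGazing(detections)); // drives gaze-start/hold/end events
  }, 100); // matches the 100 ms gaze-hold cadence described below
}
```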


Phase 4 - Building the Real-Time Communication System
To synchronize state across multiple users, I built a Node.js server using Socket.io. The server maintains a centralized gameState and broadcasts updates to all connected clients.

Gaze Event Handling
Face detection on the client emits three events: gaze-start, gaze-hold (every 100ms), and gaze-end. The server accumulates pressure accordingly.
  • Timing: 1 person gazing continuously = ~2 pressure/sec → 50 seconds to reach rupture (100)
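The accumulation rule can be sketched as follows, with the rate constant inferred from the stated timing (one continuous watcher ≈ 2 pressure/sec); the function names and Socket.io wiring are illustrative:

```javascript
// Server-side pressure rule: gaze force scales with the number of watchers.
const PRESSURE_PER_WATCHER_PER_SEC = 2;
const RUPTURE_THRESHOLD = 100;

function applyGazeHold(state, watcherCount, dtSeconds) {
  state.pressure = Math.min(
    RUPTURE_THRESHOLD,
    state.pressure + PRESSURE_PER_WATCHER_PER_SEC * watcherCount * dtSeconds
  );
  return state;
}

// In the real server this would run on each 'gaze-hold' event, roughly:
// io.on('connection', (socket) => {
//   socket.on('gaze-hold', () => {
//     applyGazeHold(gameState, watchers.size, 0.1); // 100 ms ticks
//     io.emit('state-update', gameState);
//   });
// });
```

With one watcher, 50 one-second ticks reach the rupture threshold; more watchers reach it proportionally faster.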

Phase 5 - More visual details

Each phase required distinct visual language to communicate psychological states. I implemented dynamic materials, particle systems, and animations responding to accumulated pressure.
Waiting / Stable (0-30 pressure)
  • Pristine metallic form
  • Subtle gray glow
  • Shape keys begin deforming

Critical (30-70 pressure)
  • Red/yellow flashing (20Hz)
  • Shake effects intensify
  • Visual tension builds

Rupture (70-100 pressure)
  • Structure explodes
  • ~5K particles with radial velocity
  • Cyan/magenta color gradient

Transmutation (manual trigger)
  • Self resets, high roughness
  • Zero metalness, emission 3.0
  • Gentle floating animation


Demo & Reflection

Join the gaze: https://the-gaze-panopticon.onrender.com/audience.html
Want to be gazed at?: https://the-gaze-panopticon.onrender.com/index.html

Effect Video


This project intentionally uses the tools of surveillance (cameras, algorithms, quantified attention) to imagine their opposite. This contradiction remains unresolved. But perhaps that's the point: transformation doesn't come from perfect solutions, but from the willingness to question the structures we inhabit.

The work reveals its own limitations. You watch through screens, mediated by the very tools of digital surveillance we live under. The face detection that enables "liberation" is the same technology used for mass surveillance in authoritarian states. But perhaps this liminal space, between constraint and freedom, between complicity and resistance, is exactly where transformation begins.



Next steps

  • Better user experience: responsive design for mobile and tablets
  • WebXR integration: possible VR/AR mode for a more immersive experience
  • Session persistence: database storage of game states and histories


References

Three.js Documentation
Socket.io Documentation
face-api.js GitHub
WebGL Fundamentals



Thank you! :)