Facehacking combines real-time face tracking with 3D projection mapping. A projector casts an image onto a performer's face while a tracker keeps that image locked to the wearer, even in motion. Think of it as a temporary tattoo made of light that can change on demand.
Fans of the gore found in slasher flicks can look forward to live performances with effects previously achievable only through CGI and stop-motion photography. The impact on other kinds of performances is equally interesting; the technology even appeared on the Intel stage at this year's CES.
Facehacking works by calibrating the face tracker to specific reference points on the wearer's face, which lets it follow the face's lines, contours and position. Using those points as guides, the system can project an image that both fits the face and follows it with pinpoint accuracy. In that respect, it resembles the ping-pong-ball motion-capture suits familiar from "making of" documentaries.
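The landmark-to-projection idea above can be sketched in a few lines. This is a hypothetical, simplified illustration (not the actual Omote pipeline): given three tracked landmarks, such as the eye corners and nose tip, it estimates the affine transform that carries the calibration pose to the current pose, which could then be used to warp the projected overlay so it stays pinned to the face.

```python
# Hypothetical sketch of landmark-based overlay alignment.
# Landmark choices and function names are illustrative assumptions,
# not part of the actual Facehacking/Omote system.

def estimate_affine(src, dst):
    """Return (M, t) such that M @ p + t maps each src point to its dst point.
    src, dst: three (x, y) tuples, e.g. calibration vs. tracked landmarks."""
    (x1, y1), (x2, y2), (x3, y3) = src
    # Basis vectors of the source landmark triangle.
    ax, ay = x2 - x1, y2 - y1
    bx, by = x3 - x1, y3 - y1
    det = ax * by - ay * bx
    if abs(det) < 1e-12:
        raise ValueError("degenerate landmark triangle")
    (u1, v1), (u2, v2), (u3, v3) = dst
    cx, cy = u2 - u1, v2 - v1   # images of the basis vectors
    dx, dy = u3 - u1, v3 - v1
    # Linear part M solves M @ [a b] = [c d].
    m11 = (cx * by - dx * ay) / det
    m12 = (ax * dx - bx * cx) / det
    m21 = (cy * by - dy * ay) / det
    m22 = (ax * dy - bx * cy) / det
    # Translation so the first landmark lands exactly on its target.
    tx = u1 - (m11 * x1 + m12 * y1)
    ty = v1 - (m21 * x1 + m22 * y1)
    return (m11, m12, m21, m22), (tx, ty)

def apply_affine(M, t, point):
    """Map one overlay coordinate into the current projector frame."""
    m11, m12, m21, m22 = M
    tx, ty = t
    x, y = point
    return (m11 * x + m12 * y + tx, m21 * x + m22 * y + ty)
```

A real system would track far more than three points and re-estimate the transform every frame, so the projected image deforms with the face rather than merely sliding along with it.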
Nobumichi Asai and other collaborators interested in digital design and makeup originated the idea of Facehacking during a summer 2014 project they dubbed "Omote." The video mapping from that initial run produced striking effects usually reserved for skillfully applied stage makeup. The most recent iteration, shown in a video released last week, expands the technique toward full-on masks.
Though Facehacking is still a concept at the demonstration stage, it's easy to envision live entertainment in the not-too-distant future where the technology affects audience and performer alike. Audiences would be treated to a new level of spectacle, with actors portraying a limitless range of characters and changing their appearance on a whim. For performers, mastering facial expressions that work well with projected imagery could soon be a skill in high demand.