Future Sketches Allows Guests to See Themselves in a Whole New Way
Future Sketches by Zach Lieberman, which made its ARTECHOUSE debut at our D.C. location in early 2020, is a playful examination of how we can create with technology and code to see possibilities for the future. Lieberman's medium of choice is creative coding: writing software as an artistic practice and playing with its rules.
Each of the galleries focuses on a theme, and the Face Lab highlights work exploring how technology sees and interacts with faces, and how we can see ourselves in a whole new way. The Face Lab is home to multiple interactive pieces, including an LED video wall of facial features, augmented reality filters, and a video essay that explores the dark side of facial recognition technology.
Más Que la Cara, created with artist Molmol Kuo, presents augmented reality "masks" over visitors' faces. The title translates from Spanish to "more than the face." Lieberman wanted to create a kind of living poster that visitors can bring their own face to. Software finds and tracks points on people's faces, identifying key areas like the mouth, nose, eyebrows, and contour, and then uses that data to create something intuitive, playful, and engaging. Around 40 different masks appear throughout the piece.
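To give a flavor of how a piece like this works under the hood, here is a minimal Python sketch of the idea: tracked face points come in, a mask design goes out. The landmark coordinates, region names, and shape format are all invented for illustration; the actual installation is custom creative-coding software, not this code.

```python
import random

# Hypothetical input: a face tracker would supply (x, y) pixel
# coordinates for key facial regions on each video frame.
landmarks = {
    "mouth": (320, 420),
    "nose": (320, 340),
    "left_eyebrow": (270, 260),
    "right_eyebrow": (370, 260),
}

NUM_MASKS = 40  # the piece cycles through roughly 40 mask designs


def build_mask(landmarks, mask_id):
    """Return a list of simple shapes anchored to tracked face points.

    Each mask design maps facial regions to graphic elements; here we
    just vary color and size by mask_id as a stand-in for real designs.
    """
    rng = random.Random(mask_id)  # seeded, so each design is repeatable
    color = (rng.randrange(256), rng.randrange(256), rng.randrange(256))
    shapes = []
    for region, (x, y) in landmarks.items():
        shapes.append({
            "region": region,
            "center": (x, y),       # the shape follows the tracked point
            "radius": 20 + rng.randrange(40),
            "color": color,
        })
    return shapes


mask = build_mask(landmarks, mask_id=7)
print(len(mask))  # one shape per tracked region
```

Because every shape is anchored to a tracked point, the mask follows the visitor's face as they move, which is what makes the "living poster" feel alive.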
The Expression Mirror (pictured above) tracks muscle movements in faces to sense emotion. The face-detection software follows 68 points on the face, and motion-recognition software interprets their movements as expressions. As you mold your face into different expressions, the system builds a database of eyes, noses, and mouths displaying the emotions the program has recognized. Guests can then head to the central kiosk of the Face Lab, where a diagnostic view of the Expression Mirror shows an outline of the viewer's face, which of the 68 points are registering, and how much of each emotion the program is detecting.
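The 68-point layout is a common convention in face-tracking software. As a toy illustration of how landmark positions can map to an expression, the sketch below measures how open the mouth is relative to face height and labels the result. The point names, threshold, and two-label output are simplifications invented here; the installation's actual emotion recognition is far richer.

```python
def mouth_openness(landmarks):
    """Ratio of vertical mouth opening to overall face height.

    `landmarks` is a dict of (x, y) points. In a real 68-point model
    these would come from a face tracker; here we use named toy points.
    """
    top = landmarks["upper_lip"][1]
    bottom = landmarks["lower_lip"][1]
    chin = landmarks["chin"][1]
    brow = landmarks["brow"][1]
    return (bottom - top) / (chin - brow)


def classify_expression(landmarks, threshold=0.15):
    """Very rough stand-in for the installation's emotion recognition."""
    return "surprised" if mouth_openness(landmarks) > threshold else "neutral"


# Two made-up faces: lips nearly closed vs. mouth wide open.
neutral_face = {"upper_lip": (0, 60), "lower_lip": (0, 63),
                "chin": (0, 100), "brow": (0, 0)}
open_face = {"upper_lip": (0, 55), "lower_lip": (0, 80),
             "chin": (0, 100), "brow": (0, 0)}

print(classify_expression(neutral_face))  # small opening ratio -> neutral
print(classify_expression(open_face))     # large opening ratio -> surprised
```

A real system would combine many such geometric measurements, or a trained classifier, across all 68 points; the kiosk's diagnostic view is essentially a window into that per-point data.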
In Reface, created by Lieberman and Golan Levin, visitors experience a playful form of portraiture that combines different parts of current and previous visitors' faces. Using face-tracking software to accurately align the different sections of faces, this piece records and redistributes short video clips of mouths, eyes, and brows. Blinking "edits" and advances the clips, giving everyone control over this fun take on identity and the possibilities for our own faces. The installation stores around 40 clips at a time before older clips are replaced by newer ones.
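The "around 40 clips, oldest replaced first" behavior is a classic rolling buffer, and the blink interaction is just a cursor that steps through it. Here is a small Python sketch of that mechanic, with clip names and the class itself invented for illustration (this is not the installation's code):

```python
from collections import deque


class RefaceBuffer:
    """Rolling store of short clips; the oldest falls out at capacity."""

    def __init__(self, capacity=40):  # ~40 clips, per the article
        self.clips = deque(maxlen=capacity)
        self.cursor = 0

    def record(self, clip):
        """Add a new clip; deque(maxlen=...) evicts the oldest for us."""
        self.clips.append(clip)

    def on_blink(self):
        """A detected blink advances to the next stored clip."""
        if not self.clips:
            return None
        self.cursor = (self.cursor + 1) % len(self.clips)
        return self.clips[self.cursor]


buf = RefaceBuffer(capacity=40)
for i in range(45):        # record 45 clips; only the newest 40 survive
    buf.record(f"clip-{i}")

print(len(buf.clips))      # 40
print(buf.on_blink())      # clip-6 (buffer starts at clip-5; cursor steps to 1)
```

The `deque(maxlen=...)` handles eviction automatically, which is exactly the "older clips are replaced by newer ones" behavior described above.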
And lastly, in Faces in Things, artist Robby Kraft used an algorithm typically used to average the faces of real people, applying it to around 2,500 images of inanimate objects tagged with the Instagram hashtag #FacesInThings. The piece features a dial that visitors can turn to show the image as more or less interpreted by the facial recognition algorithm. At one end you see the original tagged images; at the other is the fully interpreted "face" the computer created by averaging the faces it identified in those images.
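The two operations described, averaging many images and crossfading between an original and the average with a dial, can be sketched in a few lines of Python. The tiny 2x2 grayscale "images" below stand in for the ~2,500 tagged photos; the function names and dial range are assumptions for illustration, not Kraft's actual pipeline (which also aligns the detected faces before averaging).

```python
def average_images(images):
    """Pixel-wise mean of equally sized grayscale images (lists of rows)."""
    n = len(images)
    h, w = len(images[0]), len(images[0][0])
    return [[sum(img[r][c] for img in images) / n for c in range(w)]
            for r in range(h)]


def blend(original, averaged, dial):
    """Crossfade between a source image and the averaged 'face'.

    dial=0.0 shows the untouched image; dial=1.0 shows the full
    average, mirroring the physical dial on the installation.
    """
    return [[(1 - dial) * o + dial * a
             for o, a in zip(orow, arow)]
            for orow, arow in zip(original, averaged)]


# Three tiny 2x2 "images" standing in for the tagged photos.
imgs = [[[0, 0], [0, 0]],
        [[255, 255], [255, 255]],
        [[100, 100], [100, 100]]]

avg = average_images(imgs)
print(avg[0][0])                       # mean of 0, 255, and 100
print(blend(imgs[0], avg, 0.5)[0][0])  # halfway toward the average
```

Turning the dial simply sweeps that blend weight from 0 to 1, which is why the transition from "plain photo" to "averaged face" feels continuous.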
Want to give it a try for yourself? Explore Lieberman's filters on Instagram! Visit his page to try the filters, or create and upload your own effect and tag it with the hashtag #IGFaceLab, and we'll share our favorites throughout the week.