How exactly does one become an emotionally competent and reactive 3D avatar of oneself? I found myself in the midst of precisely 40 DSLR cameras strapped onto poles and aimed at my face, a heavy weight atop my head, while cycling through a series of emotions and sounds I wasn’t aware I was capable of. Not exactly a scene from a Tarkovsky film, but not unlike other ventures into life-like technology.
Recently, the London live-event design company Immersive presented Emotion Capture, a new technology created by Expressive AI in which, through the process described above, avatars are built to be emotionally responsive and as human-like as possible.
Expressive AI is a new company that combines mathematical algorithms with computer vision in order to create software personalities. They stress the human and organic element through a procedure they call “emotion capture.”
“Ours [is] based on human emotions,” says Sean Wilder, producer of this Emotion Capture session and Creative Director of Immersive. He emphasizes the human element both as what sets their technology apart from corporations like Disney and Facebook and as a nod to Elon Musk’s call for a human element to safeguard against AI.
Once the software personality is created, it is able to speak and express emotions “consistent with [the] personality of its human creator.” Since the newer work has not yet been disclosed to the public, you can view an early example built on similar principles by cofounder and emotion coach extraordinaire Chris Shaw below.
Rather than implementing Emotion Capture for the service industry, where it might function solely as a “chatbox for kids with autism,” who are “more responsive” and show “less fear and agitation when talking with a robot,” Wilder explains that the company now has a new focus.
Instead, the “aim [is] for [the] entertainment and arts aspect of it,” with dreams of a digital being reading the daily news on TV, complete with opinions and consciousness. “[You] can put parameters for avatar to do anything,” Wilder enthusiastically tells me.
This is the second shoot they have done, and it is far larger in scale, with a model casting call held over the course of two days. The shoot begins with a natural makeup look, since imperfections and other desired changes can be handled in post-production, which can lead to gray territory when it comes to the ethics of Photoshop for avatars, or, as Wilder calls it, “digital plastic surgery.”
The process with each model takes about 20 minutes, while post-production work and programming take anywhere from four to six weeks.
Personally, I didn’t realize I made the same face for anxiety as I did for contempt until Shaw asked me to make a face for each emotion, along with others including “interested,” “really really interested,” “despair,” and “annoyance” to the point of wanting to slap someone across the face. According to Shaw, the process can really capture the beauty of the human face and its emotions. Apparently, most of my own expressions come from various movements of my bottom eyelids, but upon seeing myself in a single-emotion scan, I guess I don’t have the same deadpan expression for everything:
It’s a strange thing, seeing yourself immortalized online through a virtual self that mimics you in action and form. What exactly can this lead to? Can 3D avatars be the new selfies? Expressive AI and Immersive certainly think so: they are trying to create a permanent setup for Emotion Capture. We may all soon have 3D avatar versions of ourselves interacting, modeling, and presenting, among other activities, on our screens.
Click here to learn more about Immersive.