Technology is constantly improving and evolving. With the rise of facial recognition software, it comes as no surprise that there are now ways to fool these complex algorithms. An AI technique developed by Adversa is claimed to fool facial recognition systems into identifying a picture of one person’s face as that of someone else by adding minute alterations, or noise, to the original image:
The company announced the technique on its website with a demonstration video showing an altered image of CEO Alex Polyakov fooling PimEyes, a publicly available facial recognition search engine, into misidentifying his face as that of Elon Musk.
To test this, I sent a photo of myself to the researchers, who ran it through their system and sent it back to me. I uploaded it to PimEyes, and now PimEyes thinks I’m Mark Zuckerberg.
Adversarial attacks against facial recognition systems have been improving for years, as have the defenses against them. But several factors distinguish Adversa AI’s attack, which the company has nicknamed Adversarial Octopus because it is “adaptable,” “stealthy,” and “precise.”
Other methods are “just hiding you, they’re not changing you to somebody else,” Polyakov told Motherboard.
And rather than adding noise to the image data on which models are trained in order to subvert that training—known as a poisoning attack—this technique involves altering the image that will be input into the facial recognition system and doesn’t require inside knowledge of how that system was trained.
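The general idea of an inference-time evasion attack can be sketched in a few lines. The toy example below is purely illustrative and is not Adversa’s proprietary method: it uses a made-up linear “identity score” in place of a real face recognition model, and an FGSM-style step (a well-known public technique) to nudge each pixel of the input image by at most a tiny amount `epsilon` so the score for a target identity rises, while the training data and the model itself are never touched.

```python
import numpy as np

# Illustrative sketch only -- a stand-in for a real facial recognition
# model, NOT the Adversarial Octopus technique described in the article.
rng = np.random.default_rng(0)

w = rng.normal(size=64)           # hypothetical model weights
image = rng.uniform(0, 1, 64)     # hypothetical flattened face image

def target_score(x):
    """Toy linear 'does this look like the target identity?' score."""
    return float(w @ x)

# Evasion attack: perturb the INPUT image at inference time.
# For this linear score, the gradient with respect to x is just w,
# so an FGSM-style step moves each pixel by epsilon in the sign of w.
epsilon = 0.05
adversarial = np.clip(image + epsilon * np.sign(w), 0.0, 1.0)

# The perturbed image scores higher for the target identity, yet no
# pixel differs from the original by more than epsilon.
print(target_score(image), target_score(adversarial))
```

The key contrast with a poisoning attack is visible in the code: nothing about `w` (the model) changes, and no knowledge of how the model was trained is needed beyond being able to query or approximate its gradient.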
Image via Vice