Artificial intelligence (AI) could hold the key to hiding personal photos from unwanted facial recognition software and scammers without compromising image quality.
A new study from Georgia Tech, published July 19 on the preprint database arXiv, details how researchers created an AI model called "Chameleon." The model generates a single, personalized "privacy protection (P-3) mask" from an individual's photo that prevents the person's face from being detected by unwanted facial scans. Instead, Chameleon causes a facial recognition scanner to identify the photo as a different person.
"Privacy-preserving data sharing and analytics like Chameleon can help foster governance and responsible adoption of AI technology and stimulate responsible science and innovation," lead study author Lin Liu, a professor of data and intelligence computing in Georgia Tech's School of Computer Science who developed the Chameleon model alongside other researchers, said in a statement.
Related: Large language models are not suitable for real-world use, scientists warn – even slight changes can cause their world models to collapse
From police cameras to Face ID on iPhones, facial recognition systems are now commonplace in everyday life. Unwanted or fraudulent scans, however, can let cybercriminals harvest images for fraud and stalking, or compile databases that are later used for unwanted ad targeting and cyberattacks.
Making masks
Image masking is not new, but existing systems often obscure important details in people's photos or degrade image quality by introducing digital artifacts. To overcome this, the researchers said, Chameleon has three distinctive features.
The first is cross-image optimization. Rather than creating a new mask for each image, Chameleon generates one P-3 mask per user. This means the system can protect users instantly, and it makes more efficient use of limited computing resources, which could come in handy if Chameleon is adopted for use on devices such as smartphones.
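The cross-image idea can be illustrated with a toy sketch. Everything below is an assumption-heavy stand-in rather than Chameleon's actual method: the "face encoder" is a random one-layer ReLU network, the "photos" are random feature vectors, and a single shared mask is tuned by gradient ascent so that it shifts the encoder's output for every photo of the user at once.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a face encoder (assumption: the real system targets
# deep face-recognition embeddings, not this random ReLU layer).
W = rng.normal(size=(8, 16))

def embed(x):
    return W @ np.maximum(x, 0.0)  # ReLU, then a linear projection

# Several "photos" of one user, reduced here to random feature vectors.
photos = [rng.normal(size=16) for _ in range(5)]

# A single shared mask, optimized across ALL of the user's photos at
# once -- the cross-image part: one P-3 mask protects every image.
mask = rng.normal(size=16) * 0.01
lr = 0.05
for _ in range(200):
    grad = np.zeros(16)
    for x in photos:
        diff = embed(x + mask) - embed(x)          # embedding shift
        grad += ((x + mask) > 0) * (W.T @ diff)    # chain rule through ReLU
    mask += lr * grad / len(photos)                # ascend the average shift
    mask = np.clip(mask, -0.5, 0.5)                # keep the mask subtle

# The same mask now displaces the embedding of every photo.
shifts = [np.linalg.norm(embed(x + mask) - embed(x)) for x in photos]
```

Because the mask is optimized once against the whole photo set, a new image of the same user needs no fresh optimization run, which is where the computing savings come from.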
Second, Chameleon has "perceptual optimization" built in: protected images are rendered automatically, without manual intervention or parameter tuning, so the visual quality of the protected facial image is maintained.
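One common, generic way to keep an adversarial mask visually negligible is to cap its size and project it back inside a distortion budget after every update. The sketch below shows only that standard trick, not Chameleon's perceptual optimization; the linear "encoder" and the budget `eps` are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(8, 16))   # toy linear "encoder" (assumption)
x = rng.normal(size=16)        # toy "photo"

eps = 0.3                      # perceptual budget: how much visible
                               # distortion of the image is tolerated
mask = rng.normal(size=16) * 0.01
for _ in range(200):
    # Gradient of the squared embedding shift ||W(x+mask) - Wx||^2.
    grad = 2.0 * W.T @ (W @ (x + mask) - W @ x)
    mask += 0.05 * grad                         # grow the shift ...
    n = np.linalg.norm(mask)
    if n > eps:                                 # ... but project back
        mask *= eps / n                         # onto the budget ball

final_norm = np.linalg.norm(mask)
```

The projection step is what balances the two goals: the mask grows only until it hits the perceptual budget, so image quality is preserved by construction rather than by hand-tuning each photo.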
The third feature hardens the P-3 mask so that it is robust enough to thwart unknown facial recognition models. This is done by integrating focal-diversity-optimized ensemble learning into the mask generation process. That is, it uses machine learning techniques that combine the predictions of multiple models to improve the mask's effectiveness.
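A rough sketch of the ensemble idea: optimize the mask against several surrogate recognizers at once, then check that it still perturbs a model it never saw. This is an assumption-laden toy; real ensemble members would be diverse deep models chosen by focal-diversity scoring, whereas here they are random linear encoders and the selection step is skipped entirely.

```python
import numpy as np

rng = np.random.default_rng(2)

# Three surrogate "recognizers" (stand-ins for a diverse model ensemble).
surrogates = [rng.normal(size=(8, 16)) for _ in range(3)]

x = rng.normal(size=16)             # one "photo"
mask = rng.normal(size=16) * 0.01

for _ in range(100):
    # Combine the gradients of ALL ensemble members, so the mask must
    # fool every surrogate at once instead of overfitting to one model.
    grad = np.zeros(16)
    for W in surrogates:
        grad += 2.0 * W.T @ (W @ (x + mask) - W @ x)
    mask += 0.05 * grad / len(surrogates)
    mask = np.clip(mask, -0.5, 0.5)

# A held-out recognizer the mask never saw during optimization: the
# shift it still produces is a (very rough) proxy for robustness
# against unknown facial recognition models.
W_unseen = rng.normal(size=(8, 16))
transfer_shift = np.linalg.norm(W_unseen @ (x + mask) - W_unseen @ x)
```

Averaging over members is the simplest way to combine the ensemble; the transfer to `W_unseen` is trivial for linear toys and is shown only to make the "unknown model" test concrete.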
Ultimately, the researchers hope to apply Chameleon’s obfuscation techniques beyond protecting individual users’ private images.
"We want to use these techniques to ensure that images are not used to train generative artificial intelligence models. We can prevent image information from being used without consent," said Tiansheng Huang, a doctoral student at Georgia Tech who also helped develop Chameleon.