AI fake face generators can be rewound to reveal the real faces they trained on


The work raises serious privacy concerns. “The AI community has a misleading sense of security when it shares trained deep neural network models,” says Jan Kautz, vice president of learning and perception research at Nvidia.

In theory, this type of attack could apply to other data linked to an individual, such as biometric or medical data. On the other hand, Webster notes that the technique could also be used by people to verify whether their data has been used to train an AI without their consent.

An artist could check whether their work has been used to train a GAN in a commercial tool, he says: “You could use a method like ours for evidence of copyright infringement.”

The process could also be used to ensure that GANs don’t expose private data in the first place. Before publishing a GAN, its creators could check whether its creations resemble real examples from the training data, using the same technique the researchers developed.
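As an illustration only, and not Webster’s actual method, such a pre-release check might amount to embedding generated and training images with an off-the-shelf feature extractor and flagging any generated sample that sits suspiciously close to a real training example. In the sketch below, the backbone (torchvision’s ResNet-18, untrained here), the random tensors standing in for real images, and the similarity threshold are all assumptions.

```python
# A minimal sketch, not Webster's actual method, of a pre-release memorization check:
# embed generated and training images in a feature space and flag near-duplicates.
import torch
import torch.nn.functional as F
from torchvision import models

# Generic image embedder: ResNet-18 backbone with the classifier head removed.
backbone = models.resnet18(weights=None)
extractor = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()

def embed(images: torch.Tensor) -> torch.Tensor:
    with torch.no_grad():
        return F.normalize(extractor(images).flatten(1), dim=1)

training_images = torch.rand(100, 3, 224, 224)   # stand-in for the GAN's training set
generated_images = torch.rand(8, 3, 224, 224)    # stand-in for the GAN's outputs

# Cosine similarity of each generated image to its nearest training image.
nearest_sim = (embed(generated_images) @ embed(training_images).T).max(dim=1).values

# Flag anything above an (arbitrary) similarity threshold for manual review.
print(nearest_sim > 0.95)
```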

However, this assumes that you can get that training data, Kautz says. He and his colleagues at Nvidia have come up with a different way to expose private data, including images of faces and other objects, medical data, and more, that doesn’t require access to training data at all.

Instead, they developed an algorithm that can recreate the data a trained model has been exposed to by reversing the steps the model goes through when processing that data. Take a trained image-recognition network: to identify what’s in an image, the network passes it through a series of layers of artificial neurons, with each layer extracting different levels of information, from abstract edges to shapes to more recognizable features.
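To make that layer-by-layer picture concrete, here is a minimal sketch in PyTorch that captures what each stage of a network computes as an image passes through it. The framework, the model (torchvision’s ResNet-18 standing in for “a trained image-recognition network”), and the layer names are assumptions, not details from the paper.

```python
# A minimal sketch: use forward hooks to record the intermediate activations
# that each stage of an image-recognition network produces.
import torch
from torchvision import models

model = models.resnet18(weights=None).eval()
activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Early stages carry edge-like features; later stages carry shapes and
# more recognizable object parts.
for name in ["layer1", "layer2", "layer3", "layer4"]:
    getattr(model, name).register_forward_hook(save_activation(name))

image = torch.rand(1, 3, 224, 224)   # stand-in for a real photo
with torch.no_grad():
    model(image)

for name, act in activations.items():
    print(name, tuple(act.shape))    # e.g. layer1 (1, 64, 56, 56)
```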

Kautz’s team found that they could interrupt a model in the middle of these steps and reverse its direction, recreating the input image from the model’s internal data. They tested the technique on a variety of common GAN and image recognition models. In a test, they showed that they could accurately recreate images from ImageNet, one of the most popular image recognition data sets.
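A generic way to get the flavor of this “rewinding” is feature inversion: freeze the network, record the internal activations it produced for a private image, then optimize a fresh image until it reproduces those same activations. The sketch below does exactly that; it is a baseline illustration, not the specific algorithm Kautz’s team describes, and the model, split layer, and hyperparameters are all assumptions.

```python
# A minimal feature-inversion sketch: recreate an input image from a model's
# intermediate activations by gradient descent on the input.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()
for p in model.parameters():
    p.requires_grad_(False)          # only the candidate image is optimized

captured = {}
model.layer2.register_forward_hook(lambda m, i, o: captured.update(feat=o))

# Activations recorded for the "private" input (here just random data).
private_image = torch.rand(1, 3, 224, 224)
model(private_image)
target_feat = captured["feat"].detach().clone()

# Start from noise and nudge it until it reproduces the same internal features.
recon = torch.rand(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([recon], lr=0.05)

for step in range(300):
    optimizer.zero_grad()
    model(recon)
    loss = F.mse_loss(captured["feat"], target_feat)
    loss.backward()
    optimizer.step()
    recon.data.clamp_(0, 1)          # keep pixels in a valid range

# `recon` now approximates the private image, recovered from activations alone.
```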

ImageNet images (top) along with recreations of those images made by rewinding an ImageNet-trained model (bottom)

As in Webster’s work, the recreated images closely resemble the real thing. “We were surprised by the final quality,” says Kautz.

The researchers argue that this type of attack is not simply hypothetical. Smartphones and other small devices are starting to use more AI. Because of battery and memory limitations, AI models are sometimes only half processed on the device itself, and the half-executed model is sent to the cloud for the final computing crunch, an approach known as split computing. Most researchers assume that split computing won’t reveal any private data from a person’s phone because only the AI model is shared, Kautz says. But his attack shows that this is not the case.
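In code, split computing can be as simple as cutting a network in two, as in the hypothetical sketch below (ResNet-18 and the split point are illustrative choices, not details from Kautz’s work): the device runs the first half, and the intermediate activations, rather than the photo itself, are what get shipped to the cloud.

```python
# A minimal split-computing sketch: the "device" runs the early layers,
# the "cloud" finishes the computation from the shipped activations.
import torch
from torchvision import models

model = models.resnet18(weights=None).eval()

# Device half: the stem plus the first two residual stages.
device_half = torch.nn.Sequential(
    model.conv1, model.bn1, model.relu, model.maxpool,
    model.layer1, model.layer2,
)
# Cloud half: remaining stages plus the classifier head.
cloud_half = torch.nn.Sequential(
    model.layer3, model.layer4, model.avgpool,
    torch.nn.Flatten(), model.fc,
)

photo = torch.rand(1, 3, 224, 224)          # a private photo on the phone
with torch.no_grad():
    activations = device_half(photo)        # this tensor is what leaves the device
    logits = cloud_half(activations)        # final "computing crunch" in the cloud

print(activations.shape, logits.shape)
```

The attack’s point is that the shipped `activations` tensor alone can be inverted back into something very close to the original photo.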

Kautz and his colleagues are now working on ways to prevent models from leaking private data. “We wanted to understand the risks to minimize vulnerabilities,” he says.

Although they use very different techniques, he believes that his work and Webster’s complement each other well. Webster’s team showed that private data can be found in the output of a model; Kautz’s team demonstrated that private data can be revealed by going the other way, recreating the input. “Exploring both directions is important to better understand how to prevent attacks,” says Kautz.


www.technologyreview.com
