How do you get neural networks to create stuff rather than simply recognize existing stuff?
The approach, known as a generative adversarial network, or GAN, takes two neural networks—the simplified mathematical models of the human brain that underpin most modern machine learning—and pits them against each other in a digital cat-and-mouse game.

Both networks are trained on the same data set. One, known as the generator, is tasked with creating variations on images it's already seen—perhaps a picture of a pedestrian with an extra arm. The second, known as the discriminator, is asked to identify whether the example it sees is like the images it has been trained on or a fake produced by the generator—basically, is that three-armed person likely to be real?

Over time, the generator can become so good at producing images that the discriminator can't spot fakes. Essentially, the generator has been taught to recognize, and then create, realistic-looking images of pedestrians.

The technology has become one of the most promising advances in AI in the past decade, able to help machines produce results that fool even humans.

GANs have been put to use creating realistic-sounding speech and photorealistic fake imagery. In one compelling example, researchers from chipmaker Nvidia primed a GAN with celebrity photographs to create hundreds of credible faces of people who don't exist. Another research group made not-unconvincing fake paintings that look like the works of van Gogh. Pushed further, GANs can reimagine images in different ways—making a sunny road appear snowy, or turning horses into zebras.

The results aren't always perfect: GANs can conjure up bicycles with two sets of handlebars, say, or faces with eyebrows in the wrong place.
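To make the generator-versus-discriminator game concrete, here is a minimal training-loop sketch. The article names no framework, dataset, or network sizes, so everything below is an assumption for illustration: PyTorch as the library, tiny fully connected networks, and a 784-dimensional data vector standing in for a flattened image. It is not the method used in the Nvidia or van Gogh work, just the basic adversarial setup the text describes.

```python
# Minimal GAN sketch (assumed framework: PyTorch; assumed sizes and architectures).
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # assumed: noise size and flattened 28x28 "image"

# Generator: turns random noise into a fake sample.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)

# Discriminator: outputs the probability that a sample is real.
discriminator = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # 1) Train the discriminator to call real samples "real" and fakes "fake".
    noise = torch.randn(batch_size, latent_dim)
    fake_batch = generator(noise).detach()  # don't update the generator here
    d_loss = bce(discriminator(real_batch), real_labels) + \
             bce(discriminator(fake_batch), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to produce fakes the discriminator labels "real".
    noise = torch.randn(batch_size, latent_dim)
    g_loss = bce(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Example call with placeholder data in place of real images:
# d_loss, g_loss = train_step(torch.randn(32, data_dim))
```

Repeating this step over many batches is the "cat-and-mouse game": as the discriminator gets better at spotting fakes, the generator is pushed to produce more convincing ones.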