You may have seen This Person Does Not Exist. (If not, check it out, it’s quick. Every face there is computer-generated.)
These were made by StyleGAN2, a type of neural network known as a Generative Adversarial Network (GAN). Most applications are either practical (characters in a video game) or worrying (propaganda or spam Twitter accounts).
But what can GANs tell us about ourselves? What would StyleGAN2 do if it were trying to generate your face? Would it make you? Would it make a long-lost relative? Or would it utterly fail?
Try it for yourself
… if you’re lucky. Because it uses GPUs, the site costs about $15/day to run, so it won’t be up forever. But maybe you’ll catch it while it’s up! If not, here’s how it looked (video sped up for demo purposes):
How well does it work?
Here are some demos on celebrities you may recognize. Note that I can’t claim credit for any of this; all this work was done by the team that built StyleGAN2, with some extra help from Mikael Christensen’s Google Colab notebook.
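For the technically curious: the site isn’t “recognizing” you so much as doing latent-space projection, i.e. running gradient descent on a latent vector until the (frozen) generator’s output looks like your photo. Here’s a rough PyTorch-flavored sketch of the idea; `generator`, `perceptual_loss`, and `w_dim` are stand-ins rather than the actual StyleGAN2 API, and the real projector also adds noise regularization, a learning-rate schedule, and a smarter starting latent.

```python
import torch

def project(generator, perceptual_loss, target_img, steps=1000, lr=0.1):
    """Find a latent w such that generator(w) looks like target_img."""
    # Start from a neutral latent. (The real projector samples many random
    # latents and averages them; zeros keeps the sketch short.)
    w = torch.zeros(1, generator.w_dim, requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)

    for _ in range(steps):
        synth_img = generator(w)                       # generator weights stay frozen
        loss = perceptual_loss(synth_img, target_img)  # "do these two images look alike?"
        opt.zero_grad()
        loss.backward()
        opt.step()

    return w.detach()  # the point in latent space closest to "you"
```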
Reflections on GANs and people
- Nobody has tried this and said “Yeah, that looks just like me.” At best, it looks like a distant cousin. I take this to mean that the space of faces StyleGAN2 can generate is big, but the space of real human faces is way bigger! And we’re so attuned to faces that even tiny differences make us think it’s a new person.
- It’s very sensitive to changes in face size and alignment. Fair enough; StyleGAN2 was trained to generate faces of a certain size with the eyes near the center of the crop (there’s a rough sketch of that alignment step after this list).
- If you try to mess with it (e.g. by leaning back so your face is small, or by putting two people’s faces in the frame, or by using the back of your head), it will still try to generate a face; this makes for really creepy nightmare faces. I still want someone to use GANs to make a horror movie.
- It might work better for white people than for minorities. I don’t know if that’s just my perception. I know it was trained on the FFHQ dataset, which “contains considerable variation in terms of age, ethnicity, and image background”, but I can’t tell more than that. This is a common refrain in ML, and it sucks. I hope it’s getting better; I know FFHQ is more diverse than CelebA (an earlier, widely used dataset), at least.
- (wait, is it good to be fakeable?)
- I’d apologize for being part of the problem, but I hope that the takeaways from this project are in the direction of “AI doesn’t actually work”.
- Speaking of takeaways: I’ve found it interesting to look at my near-selves. Every time I run this on my own face, the result looks different, and every time I don’t like it. This feels important: if I look at someone like me and instinctively don’t like him, what does that say about my relationship with myself?
- (maybe it just says, “uncanny-valley faces are uncanny-valley.”)
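About the alignment point above: FFHQ-style preprocessing roughly means “find the eyes, then rotate, scale, and crop so they land in a fixed spot.” Here’s a simplified sketch using dlib landmarks and OpenCV; the real FFHQ script is more careful (it uses the mouth too and pads the crop), and the exact positions and ratios below are my own guesses rather than the real constants.

```python
import cv2
import dlib
import numpy as np

# Assumes dlib's 68-point landmark model file has been downloaded locally.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def align(img, out_size=1024):
    """Rotate, scale, and crop img so the eyes sit at fixed positions."""
    face = detector(img)[0]  # just take the first detected face
    pts = predictor(img, face)
    pts = np.array([[p.x, p.y] for p in pts.parts()], dtype=np.float64)

    # The two eyes as they appear in the image (dlib 68-point indices).
    eye_l = pts[36:42].mean(axis=0)
    eye_r = pts[42:48].mean(axis=0)
    center = (eye_l + eye_r) / 2

    # Rotate so the eyes are level, scale so they're a fixed fraction of the
    # crop apart, and put their midpoint at a fixed spot in the output.
    dx, dy = eye_r[0] - eye_l[0], eye_r[1] - eye_l[1]
    angle = np.degrees(np.arctan2(dy, dx))
    scale = (0.4 * out_size) / np.hypot(dx, dy)  # eye spacing ~40% of width (a guess)
    M = cv2.getRotationMatrix2D((float(center[0]), float(center[1])), angle, scale)
    M[0, 2] += out_size / 2 - center[0]          # eyes horizontally centered
    M[1, 2] += out_size * 0.4 - center[1]        # eyes a bit above vertical center
    return cv2.warpAffine(img, M, (out_size, out_size))
```

Mess with this step (lean back so your face is tiny, put two faces in frame) and the crop turns to garbage, which is part of why the generator then produces nightmares.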
Exhibition
Shown at Gray Area in San Francisco, Mar 2-3, 2022.