I’m sorry, but the deadline has passed. You have missed the opportunity of a lifetime: the chance to become the first human to win a beauty contest judged by robots. Friday 15 January marked the last day to submit your selfie.
If, like me, you are picturing a demented space-age beauty pageant presided over by Jetsons-style cartoon robots, prepare for further disappointment. There will be no evening attire. No halting interviews with mechanised hosts. There won’t even be make-up.
The winners – five men and five women – will be crowned on Thursday. But this article is about the losers – the rest of us.
The rationale for a machine-judged beauty contest was high-minded enough, if you believed the announcement launching Beauty.AI in November: “We believe that in the nearest future, machines will be able to get a lot of vital medical information about people’s health by just processing their photos. Learning to estimate people’s attractiveness is the first small but crucial step to this future, because healthy people look more attractive despite their age and nationality.” In keeping with this air of legitimacy, partners included Microsoft and graphics chip maker Nvidia.
The instructions were simple: humans would submit their mugshots to be judged; programmers would submit their algorithms to do the judging. Entrants were instructed to keep their faces bare of make-up, glasses and even beards. And the rules for the programmers of beauty-finding algorithms? On specifics, they were left to their own devices.
In the absence of any official guidance, programmers have several options when teaching their AI to sort the hot from the not. One option is to select the characteristics you personally like. Here, Beauty.AI offered a few helpful pointers: maybe select for skin shininess, elasticity and “celebrity similarity”; select against dark under-eye circles and the presence of wrinkles.
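The pick-your-own-features approach boils down to a weighted checklist. Here is a minimal sketch of how such a scorer might look; every feature name and weight is hypothetical – Beauty.AI never published a formula – and the measurements are assumed to come from some upstream face-analysis step.

```python
# Hypothetical hand-tuned weights: favoured traits add to the score,
# penalised traits subtract. None of these numbers come from Beauty.AI.
WEIGHTS = {
    "skin_shininess": 0.3,        # favoured
    "skin_elasticity": 0.3,       # favoured
    "celebrity_similarity": 0.2,  # favoured
    "under_eye_circles": -0.4,    # penalised
    "wrinkle_density": -0.5,      # penalised
}

def beauty_score(features: dict) -> float:
    """Weighted sum of face measurements, each assumed to lie in [0, 1]."""
    return sum(WEIGHTS[name] * features.get(name, 0.0) for name in WEIGHTS)

# A shiny-skinned, lightly wrinkled face scores modestly above zero.
print(round(beauty_score({"skin_shininess": 0.8, "wrinkle_density": 0.2}), 2))
```

The arbitrariness is the point: every judgement the “objective” machine makes is baked in by whoever chose the weights.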
If that’s too subjective for you, you could take the crowdsourced, data-driven approach. A team at the Swiss Federal Institute of Technology (ETH Zurich) tried that in October, building an attractiveness-rating algorithm by training it on millions of “hot or not?” ratings. But this approach has problems of its own. The technology was bamboozled by… well, other technology: “we observed that [the most popular Instagram] filters lead to an increase in predicted hotness,” the authors wrote.
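That filter effect falls straight out of how such models are trained. As a toy illustration – the data and the single “brightness” feature here are invented, and the ETH Zurich system was a far larger model trained on millions of real ratings – fit predicted hotness to crowd scores with ordinary least squares, and anything that nudges the feature upward nudges the prediction with it:

```python
def fit_line(xs, ys):
    """Closed-form least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Invented training data: photo brightness vs. average crowd rating.
brightness = [0.2, 0.4, 0.6, 0.8]
ratings    = [4.0, 5.0, 6.0, 7.0]

a, b = fit_line(brightness, ratings)
predict = lambda x: a * x + b

# An Instagram-style filter that bumps brightness by 0.2 also bumps
# the predicted "hotness" of the very same face:
print(round(predict(0.5), 2), round(predict(0.5 + 0.2), 2))
```

The model has no notion of a face at all – only of whatever correlates with past ratings, filters included.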
Or you could go back to the contest’s cri de coeur: robust health. What better way to determine optimal appearance than the just-so stories that litter the back issues of evolutionary psychology journals? Your algorithm would prefer men with beards over a clean shave (beardiness being a sign of a good protector – but also prohibited for contest entrants!). For either gender, the horizontal distance between the eyes should measure some precise, evolutionarily sanctioned fraction of the face.
Never mind that the science on whether attractive people are in fact healthier is split, and that looks are easy enough to manipulate with subtle make-up or plastic surgery. And never mind that scientists have already started developing tools that read health signals like mood or heart rate straight from images, all without the need to find babes in big data.
Being beautiful offers other advantages, though. Attractive people earn more money, and are more likely to receive promotions or run successful companies.
That would explain why people are obsessed with finding external confirmation of their place in the beauty hierarchy.
Of course, that’s easier said than done. In 2014, US radio journalist Esther Honig sent her photo to Photoshop experts in 25 different countries and asked them to make her “beautiful”. Some lightened her skin, others sprayed her with digital tanner. Some recoloured her hair or her eyes; still others slathered her in make-up.
The results reiterated that when it comes to assessing beauty, there is no agreed standard. In an experiment published in October, psychologists at Wellesley College in Massachusetts asked more than 35,000 people to rate the attractiveness of different faces. They found that most people’s preferences depended on their own genetics and life experiences – even identical twins couldn’t agree. In the end, no individual’s preferences agreed more than 50 per cent with anyone else’s in the study.
So it’s not hard to see the appeal of outsourcing our conflicting opinions on objective beauty to a machine. It’s the same promise of god-like impartiality that makes artificial intelligence so alluring for other applications: picking stocks, driving cars, combing through piles of research. Unlike humans, AI doesn’t get hung up on its own fallible opinions. It alone can divine objective truth.
The problem is, the robot’s opinions are still programmed by humans. Even vaunted deep-learning algorithms are trained on human preferences.
But who cares if someone builds an attractiveness algorithm? Programmers already bake artificial intelligence into all kinds of frivolous things, like a neural network that cooks up its own nonsense scripts for Friends or a robot that mixes cocktails.
The problem is, when artificial intelligence judges us, by criteria known only to the person who wrote the program, strange things can happen.
In 2013, Harvard professor Latanya Sweeney discovered that searching her own name on Google returned ads suggesting she had an arrest record. “I was shocked,” she writes – she even paid the advertiser to confirm that no one else with her name had a record. A follow-up study found that names typically associated with the black community – Deshawn, Aisha, Tyrone – prompted more of the ads suggesting the person had an arrest record than typically white names did. On one site, a black-identifying name was 25 per cent more likely to get an ad suggestive of an arrest record.
What could explain a racist algorithm? Sweeney suggests the advertiser might have, maliciously or inadvertently, written templates for ads skewed to sample black-identifying names. Or users might have biased the algorithm towards those names by clicking ads for them more often.
The point is, while the ads probably weren’t the work of tech-savvy white supremacists, neither were they an objective assessment of race and gender. They were just a sad reflection of the world we live in. And while being relegated to the “not” pile can’t be compared with racism or sexism, the discrimination in all cases is offloaded to algorithms.
All about the money
Or maybe judgements on beauty are not so harmless. In a feature story on anorexia published last month in Slate, journalist Katy Waldman explores the science of the insula, a small region of the brain responsible for, among other things, monitoring your awareness of yourself and your own body. Brain scans indicate that people with anorexia have poorly functioning insulae. Instead of looking to their own internal assessments of their bodies, they become more reliant on outside approval, seeking validation from external signals like the number on the scale.
It’s not difficult to imagine what a hot-or-not algorithm to measure beauty might mean in the hands of a young or insecure person. Every teenager wants to know whether they’re a “5” or a “10”.
Modern technology is rife with signals that try to validate the worth of your existence: Fitbit steps, Instagram likes, Facebook friends, 500+ LinkedIn connections, inbox zero.
Why not throw another number onto the pile? Because when you pull the curtain back, you’ll always find someone standing there ready to put a hand in your pocket. The psychology of beauty might be shaky, but there’s a thriving industry based on it.
One of the groups behind the contest is, in fact, RYNKL, a start-up whose wrinkle-tracking app launched today. It tracks your wrinkles over time and lets you compare the progress with your friends – and, presumably, directs you to the nearest fix-it cream.
So to prepare for Thursday’s big reveal, put on your best not-surprised face.