A new Stanford University study claims that artificial intelligence can determine sexual orientation with high accuracy. By analyzing a single photograph, the study found, an algorithm could assess whether a person was gay or straight in 81 percent of cases for men and 74 percent for women. Accuracy increased when multiple images were provided.
LGBT groups are not pleased with the report.
In a joint statement released Friday, GLAAD and the Human Rights Campaign criticized the study as "dangerous" and "junk science." The LGBT groups accused the researchers of employing flawed methodology, including bi erasure and the exclusion of people of color from the analysis, and noted that the study had not been peer-reviewed.
James Heighington, GLAAD’s chief digital officer, stated:
Technology cannot identify someone’s sexual orientation. What their technology can recognize is a pattern that found a small subset of out white gay and lesbian people on dating sites who look similar. Those two findings should not be conflated. This research isn’t science or news, but it’s a description of beauty standards on dating sites that ignores huge segments of the LGBTQ community, including people of color, transgender people, older individuals, and other LGBTQ people who don’t want to post photos on dating sites.
At a time when minority groups are being targeted, these reckless findings could serve as a weapon to harm both heterosexuals who are inaccurately outed, as well as gay and lesbian people who are in situations where coming out is dangerous.
Ashland Johnson, HRC director of public education and research, added:
This is dangerously bad information that will likely be taken out of context, is based on flawed assumptions, and threatens the safety and privacy of LGBTQ and non-LGBTQ people alike. Imagine for a moment the potential consequences if this flawed research were used to support a brutal regime’s efforts to identify and/or persecute people they believed to be gay. Stanford should distance itself from such junk science rather than lending its name and credibility to research that is dangerously flawed and leaves the world — and in this case, millions of people’s lives — worse and less safe than before.
The statement also noted that "media headlines that claim AI can tell if someone is gay by looking at one photo of your face are factually inaccurate." (The Advocate's headline was "AI Can Tell If You're Gay From a Photo, and It's Terrifying.")
In response, the researchers of the study, Michael Kosinski and Yilun Wang, released their own statement alleging that HRC and GLAAD were waging a "smear campaign" against them that was a "knee-jerk dismissal of the scientific findings":
It really saddens us that the LGBTQ rights groups, HRC and GLAAD, who strived for so many years to protect the rights of the oppressed, are now engaged in a smear campaign against us with a real gusto.
They dismissed our paper as "junk science" based on the opinion of a lawyer and a marketer, who don’t have training in science. They spend their donors' money on a PR firm that calls journalists who covered this story, to bully them into including untruthful allegations against the paper. They lie to people that "Stanford has distanced itself from our results." They sent a press release full of counterfactual statements.
They assured people that "Technology cannot identify someone’s sexual orientation," but did not explain how they arrived at this conclusion.
Let’s be clear: Our paper can be wrong. In fact, despite evidence to the contrary, we hope that it is wrong. But only replication and science can debunk it — not spin doctors.
If our paper is indeed wrong, we sounded a false alarm. In good faith.
But what if our findings are right? Then GLAAD and HRC representatives’ knee-jerk dismissal of the scientific findings puts at risk the very people for whom their organizations strive to advocate.
Kosinski and Wang went on to dispute each point of criticism in HRC and GLAAD's press release. Notably, they said their study is indeed peer-reviewed and is set to be published in the Journal of Personality and Social Psychology.
The researchers also said they focused on white participants, and on only gay and straight sexual orientations, because those were the most represented identities in their database. "But this does not invalidate the findings of the study in any way," their statement read. The algorithm analyzed photographs of 14,000 Caucasian Americans taken from a dating website.
The researchers concluded their letter with a call to work together with HRC and GLAAD "toward the urgent common goal of protecting the rights and well-being of the LGBTQ community. We would be also delighted to address any criticism that they might have. Any scientific findings can be wrong, but dismissing them and their implications without due consideration could be dangerous and ill-informed."