Dr. Oliver Haimson and his colleagues didn’t set out to study how marginalized people feel about artificial intelligence. The team had been studying how transgender people understood augmented reality, but during data collection, the researchers kept hearing skepticism about AI instead.
“We didn’t even ask them about AI, but it just kept coming up,” says Haimson, an associate professor of information at the University of Michigan and author of the book Trans Technologies, which explores how tech is being used to address the needs and concerns of transgender people.
The persistent unease about AI led the team to pivot the project and conduct a large-scale, nationwide survey that examined perceptions of AI among different demographic groups. The results were first published in June in the research article, “AI Attitudes Among Marginalized Populations in the U.S.: Nonbinary, Transgender, and Disabled Individuals Report More Negative AI Attitudes.”
The findings revealed that nonbinary people held the most negative views of AI. Haimson suspects this comes down to their relationship to and understanding of categories.
“If you think about nonbinary identity, it’s really about rejecting categorization and rejecting being put into these boxes,” Haimson says. “And if you think about AI technologies, the only reason it works is because of these large-scale systems that are basically placing things into categories and boxes. And so to me, it feels like it’s just fundamentally at odds with nonbinary identity in a lot of ways.”
Many multiracial participants also had reservations about AI.
“People who don’t necessarily fit into these standard boxes in terms of gender or race are kind of going to be skeptical of a technology that’s forcing everything into boxes,” Haimson says.
According to the study, participants with disabilities, especially those who identified as neurodivergent or had mental health conditions, held more negative views of AI than those without these disabilities. Trans participants held more negative opinions of AI than cisgender participants did, and women reported lower trust in AI than men.
“AI harm experienced by gender minorities can be physical, psychological, social, and economic, resulting in algorithmic misgendering and violations of privacy and consent,” Haimson and his colleagues write in the article.
Consent is also a significant concern, he says.
“Sometimes [AI is] being used on you without your permission or consent. In employment context, health care context, all of these different day-to-day things, you don’t necessarily get to choose if AI is being used on you or not,” Haimson explains. “People seem to know that there’s a lot of data going in and there’s a lot of data going out, but there’s not any transparency about that.”
And while Haimson says the trans and nonbinary results weren’t surprising, he notes that having empirical evidence of these attitudes is important for further research.
Companies, including those building new AI programs, often don’t design technologies with trans and nonbinary people in mind, which is where many trans digital creators come in. Many technologies created by trans people focus on the community itself. An example Haimson gives is the ShotTraX app, which, as the name suggests, tracks hormone shot doses for those undergoing gender-affirming health care.
“There’s so many mainstream technologies that just don’t really think about trans users and nonbinary users,” Haimson says. “And so it’s a way of taking back this agency to create something that actually works for this community.”