Even as videos about real issues facing LGBT people get deemed unfit for youth consumption, YouTube has recently tried to explain why it allowed a significant amount of bizarre, if not outright disturbing, video to be pushed at children as family content.
A number of videos featuring baby dolls getting syringe shots in the butt, posted by users with names like My Disney Toys, started getting millions of plays this year, becoming some of the most popular content on the platform. By some estimates, those videos likely brought in hundreds of thousands of dollars, with YouTube pocketing 45 percent of the money, before the content was deemed potentially offensive. Likewise, a number of videos of clowns stalking children in their homes have proven to be major click draws. In those cases, critics say, the content wins traffic by being marketed as family-friendly. YouTube has started demonetizing much of that content as well.
Meanwhile, public relations problems around hate speech have dogged YouTube through most of 2017, with major brands pulling millions in advertising from the platform in March. As YouTube cleans up its act, though, the company assures the public it hasn't pocketed money made from hate, from torment of children, or from anything else it later bans.
Officials at Google, which owns YouTube, tell The Advocate that revenue raised on content later deemed inappropriate gets credited back to advertisers.
A spokesperson for the tech giant says the company isn't profiting from this. "When we find that we have made an error in letting ads run against content that doesn't comply with our ads policies, we immediately stop serving ads on that content," reads a statement to The Advocate. "When someone demonstrates a pattern of violations of our policies, we terminate their account, withhold any unpaid revenue, and credit our customers."
At least on a company-wide basis, the monetary problems seem to have passed. As Google reported a surge in advertising revenue through 2017, Credit Suisse announced that the boycott over hateful content was effectively over, and Raymond James analyst Aaron Kessler spotlighted YouTube advertising revenue growth as a major part of Google's 20-plus percent year-over-year growth in the third quarter.
Ultimately, Google took a $1 billion hit in revenue in the first quarter of 2017, which still meant a $24.5 billion intake, but its most recent report shows it collected a record $27.47 billion in revenue through the third quarter of 2017. The bulk of the company's revenue comes from advertising.
While the money rolls in, the company continues to feel pressure and criticism for hate speech, something Google executives have vowed to combat. Google press officials referred media to a post on the company blog from YouTube CEO Susan Wojcicki. "We took actions to protect our community against violent or extremist content, testing new systems to combat emerging and evolving threats," she writes. "We tightened our policies on what content can appear on our platform, or earn revenue for creators. We increased our enforcement teams. And we invested in powerful new machine learning technology to scale the efforts of our human moderators to take down videos and comments that violate our policies."
At the same time, sources inside Google say that machine learning contributed significantly to an embarrassing problem with the censorship of LGBT content earlier this year.
At the time, switching YouTube to "Safe Mode," a setting intended to screen out material inappropriate for children, pulled an enormous amount of content made by LGBT creators, including a number of videos that didn't appear remotely objectionable. That hit creators at the same time issues with the YouTube search algorithm led to decreased monetization all around.
Users like Megan Bacon-Evans, who operates the YouTube channel WhatWeganDidNext with wife Whitney Bacon-Evans, say their content was heavily impacted despite the fact that the channel did not discuss sex, drugs, violence, or many of the other things Safe Mode is intended to block. Since YouTube implemented changes in Safe Mode earlier this year, most of the content on the channel no longer gets screened, she says, but some posts, like a video of the couple's civil partnership, do get hidden. "Furthermore, the algorithm is still affecting us with views," Bacon-Evans says. "We are still not seeing the right amount of views to correlate with our subscriptions. It's evident that YouTube is not promoting or recommending our videos, and perhaps not alerting our subscribers. It's frustrating because we work very hard to film and edit content that we think our followers will enjoy."
On December 4, YouTube announced a series of measures it would take to better screen video. Officials say more human reviewers have been reviewing and removing content as needed. Since June, more than 150,000 videos have been removed for violent extremism, and thanks to machine learning from those efforts, YouTube reports that nearly 70 percent of such speech gets taken down within eight hours of its initial upload, and about 50 percent comes down within two hours.
Sources within the company acknowledge, though, that with an open sharing platform like YouTube, it's difficult to completely police content that can be uploaded directly to the site from all over the world.
And while these measures flag hateful content, they bring more risk for YouTube content creators who rely on monetization to pay their bills. Many videos spotlighted by critics as objectionable have now been demonetized, and anyone going to those videos must acknowledge the content might be objectionable before being allowed to view it.
As YouTube implements more screens, LGBT creators have reported increased problems. "More than likely, whenever I post a video with trans, LGBT, lesbian, or gay in the title, it gets flagged for being inappropriate, and it's stressful as hell," says YouTuber Arielle Scarcella.
In her blog post, Wojcicki voiced a commitment to policing bad actors exploiting the site, whether for spreading violent rhetoric or simply for "spamming" the platform with content masquerading as family-friendly. "We need an approach that does a better job determining which channels and videos should be eligible for advertising. We've heard loud and clear from creators that we have to be more accurate when it comes to reviewing content, so we don't demonetize videos (apply a 'yellow icon') by mistake," she writes. "We are planning to apply stricter criteria and conduct more manual curation, while also significantly ramping up our team of ad reviewers to ensure ads are only running where they should. This will help limit inaccurate demonetizations while giving creators more stability around their revenue. We will be talking to creators over the next few weeks to hone this new approach."