Facebook CEO Mark Zuckerberg speaks at the F8 Facebook Developers conference in San Jose, California, May 1, 2018 (Photo/JTA-Justin Sullivan-Getty Images)

On social media, hate speech takes a dangerous turn

In Pittsburgh, it was 11 elderly Jews. In New Zealand, it was 51 Muslim worshippers. In Poway, it was a 60-year-old wife and mother. All were killed in the last six months by white supremacists, racists and anti-Semites inspired by extremist ideology they found online and on social media.

Ugly sentiments saturate the internet, where anti-Semitism lives in company with Islamophobia, racism, anti-immigrant sentiment, homophobia, misogyny and a host of other hateful attitudes. Anti-Defamation League CEO Jonathan Greenblatt said online and real-world hate are intertwined.

“Social media allows the poison to spread, and individuals are able to find content and become radicalized,” he said. “It’s a combustible mixture.”

To tackle this issue, members of the House Judiciary Committee sat down last month in Washington, D.C., to hear from a panel of experts, including the ADL. The session was live-streamed on YouTube, the video-hosting site that allows people to comment anonymously. What happened illustrates just how pervasive the problem is.

“wow maybe more pittsburgh fun will happen nation wide cause of this, I hope so” was just one of the many anti-Semitic comments on the YouTube stream.

Within an hour, YouTube announced on Twitter that it had disabled comments — only to face another stream of invective in response: “A person can only take so much,” wrote a commenter. “The near constant assault on white existence from every major institution is reaching a breaking point. Things are going to get very ugly.”

Nathaniel Deutsch, co-director of UC Santa Cruz’s Center for Jewish Studies

None of this surprised Nathaniel Deutsch, who holds an endowed chair in Jewish studies at UC Santa Cruz. “If there is an opportunity to post comments, very quickly the comments will cascade, almost inevitably, into anti-Semitism,” he said.

Online hate has been identified as a serious problem that has gotten out of hand. But whose problem is it to solve? Should the government step in more forcefully, or should social media and tech companies take more responsibility? Do we turn to technological solutions or human ones? Or must speech be protected at all costs?

What to do about online hate is a knotty question at the crossroads of ethics and technology. “These are questions that, again, go far beyond anti-Semitism,” said Deutsch, who co-led a discussion called “Anti-Semitism and the Internet” earlier this month in Mountain View. “But anti-Semitism is a big part of it.”

Many individuals live a big portion of their lives online, and where people congregate, ugly behavior tends to rear its head. That’s why hate speech and harassment are common in high-traffic virtual worlds, whether on social media sites or in gaming chat rooms. A recent ADL/YouGov survey found that 37 percent of Americans experienced online harassment or hate in 2018, double the number from the year before.

“In the past year the sky has darkened a little,” said Daniel Kelley, associate director of ADL’s Silicon Valley-based Center for Technology and Society, which tracks and analyzes online hate speech.

Daniel Kelley, associate director of ADL’s Center for Technology and Society

The problem has become so prevalent that social media and tech companies are finally facing the fact that ignoring or downplaying the threat is making them look indifferent or even complacent.

A few months ago, Facebook CEO Mark Zuckerberg acknowledged that “bad content” turns off users, but he said his company’s technology isn’t perfect at catching all of it. (Six weeks later, Facebook came under fire for not immediately removing live video of the New Zealand mosque shooting, uploaded by the killer.) On May 2, the company announced a ban on several prominent controversial figures, including Infowars’ Alex Jones and Nation of Islam’s Louis Farrakhan.

But do bans really help to mitigate the problem? There’s a concern, presented in an ADL survey released last month, that banning users just means they move to other, more fringe corners of the internet where they are further isolated. And right-wing sites use these bans as a way to recruit new members — meaning as a whole, the problem gets worse, not better.

“There’s the dark web; there are other sites that emerge,” said Deutsch, who also co-directs the Center for Jewish Studies at UC Santa Cruz.

Other steps taken so far by major players like Facebook, Twitter and YouTube have been criticized as vague and halfhearted. Journalist Noam Cohen, author of a book on Silicon Valley influence called “The Know-It-Alls,” said the companies are playing catch-up.

“No one saw all this coming,” he said. “They’re just muddling through.”

YouTube has been denounced for the recommendation algorithms that suggest new videos to keep people watching; they can quickly lead viewers to racist and anti-Semitic content. YouTube said it had taken a “number of significant steps” to correct the problem, but as one user put it, “It is literally IMPOSSIBLE to watch a YouTube video where the recommended sidebar won’t lead you to Nazis in a few steps. This is a joke. YouTube IS among the primary places for hate speech.”

Twitter, which has a reputation as a hotbed of online harassment, last year released a report on measures it had implemented to “improve the health of the public conversation.” But an ADL analysis around the same time found that 3 million Twitter users had shared or reshared 4.2 million anti-Semitic tweets over a recent 12-month period.

The ADL identified 4.2 million anti-Semitic tweets in a 12-month period.

ADL’s Center for Technology and Society is publishing these numbers to counter the tendency of tech companies to be vague, noncommittal or just plain quiet about whether they intend to take action.

“This kind of research is necessary, because it’s hard to get the ground truth about hate at these companies,” Kelley said.

One of the center’s first projects after it was set up in 2017 was the Online Hate Index, a collaboration with UC Berkeley that uses artificial intelligence and a concept known as “machine learning,” in which researchers can “teach” a program to identify hate speech. It’s still in the research phase, but in the meantime the ADL has begun to tackle bias in new areas, like gaming.
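For readers curious what “teaching” a program to flag hate speech looks like in practice, the sketch below shows the general idea behind a supervised text classifier: human raters label example comments, and a statistical model learns to score new ones. It is purely illustrative, assuming the open-source scikit-learn library; the Online Hate Index’s actual data, features and models are more sophisticated and are not reproduced here, and the labeled examples in the code are placeholders rather than real comments.

# A minimal sketch of supervised hate-speech classification, assuming the
# scikit-learn library. The training examples and labels below are
# placeholders invented for illustration, not ADL data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled data: 1 means human raters flagged the comment as hateful,
# 0 means they did not.
train_texts = [
    "placeholder comment raters flagged as hateful",
    "another placeholder comment raters flagged as hateful",
    "have a great day everyone",
    "thanks for sharing this video",
]
train_labels = [1, 1, 0, 0]

# Turn each comment into word-frequency features (TF-IDF), then fit a
# logistic regression classifier on the labeled examples.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

# Score an unseen comment: the output is the model's estimated probability
# that the comment is hateful.
print(model.predict_proba(["placeholder new comment to score"])[0][1])

A model like this can only recognize patterns resembling what human raters have already labeled, which is one reason, as researchers note later in this story, that inside jokes and coded references slip past automated filters.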

A recent international industry study found that people who play video games spend upwards of six hours a week doing so, on average. And because many games are played live with multiple players who are linked by anonymous text chat or audio, the kind of hate speech and anti-Semitism that turns up in comment threads and forums online is rampant in games, too.

Those same worrisome patterns of behavior are cropping up on another relatively new medium, virtual or augmented reality. “We’ve seen people are taking on Hitler as an avatar,” Kelley said.

The newness of VR gives some people hope that the problem can be headed off before it becomes as entrenched as it is on, say, Twitter. But legally speaking, regulating speech isn’t simple, whether it’s a tweet or a virtual conversation.

Platforms that regulate speech can become classified as publishers, which makes their content vulnerable to expensive copyright lawsuits, said intellectual property lawyer Kimberly Culp, who works with digital and virtual reality firms. So rather than tackle hate speech, she said, they leave well enough alone. At the same time, companies don’t want to scare away users by allowing too toxic an environment.


“Those are all business decisions,” she said. “How much will we tolerate?”

The S.F.-based online platform Patreon faced this problem in a very public way when it banned some of its users, including a known anti-Semite and neo-Nazi, at the end of 2018.

The platform offers artists and creative types a way to be paid by their audiences or customers, but Patreon determined that some of its users were earning a living by making videos or hosting radio shows that violated the site’s community guidelines on hate speech, “such as calling for violence, exclusion, or segregation.”

So Patreon cut them off.

Jacqueline Hart, head of trust and safety at Patreon

“We’ve never said we are free speech absolutists,” Jacqueline Hart, Patreon’s head of trust and safety, told J.

Patreon experienced a backlash and accusations of overreach, but Hart said the company has not changed its approach.

“It has to be a space where creators are comfortable dealing with other creators,” she said.

Hard bans like Patreon’s or Facebook’s reveal some of the ethical fault lines in online policing of speech. Some people think that the host companies should take the bulk of responsibility, while others believe the government should step in with regulations. And some don’t want any regulation at all.

“A lot of people in this country don’t want tech companies in any way, shape or form to be manipulating what they see,” said Samuel Woolley, a researcher at the Palo Alto-based Institute for the Future who studied harassment of Jewish American journalists and leaders during the 2018 midterm elections as a fellow at ADL.

Samuel Woolley, researcher at Palo Alto-based Institute for the Future

At the April House Judiciary Committee hearing — the one where YouTube had to shut down comments — Rep. Tom McClintock of Northern California’s 4th District reiterated a common position.

“Speech can be ugly, disgusting, hateful, prejudiced and alarming,” he said. “But it can never be dangerous to a free society as long as men and women of goodwill have the freedom of speech to dispute it, challenge it and reject it.”

Besides moral arguments, a common rationale against regulating online speech is that it violates First Amendment rights. But companies like Twitter or Facebook are not government entities; they are privately owned, and as such are free to remove speech or even ban people from using their services.

But some sites don’t want to. The shooter in the Poway incident posted his anti-Semitic manifesto on a site known for its tolerance of extremist content, saying “I’ve only been lurking for a year and a half, yet what I’ve learned here is priceless.” A recent New York Times article noted that right-wing terrorists across the globe are connected, both by their ideology and by online networks. And yet another recent study by ADL found that white supremacists use fringe online social networks “to spread hate and encourage like-minded followers to head down the path to violence.” It’s a way of pulling people in.

“Extremist groups use social media to crowdsource,” said Oren Segal, director of the ADL Center on Extremism. “They put up flyers available for anyone to download. That’s why you see similar wording in flyers” in different cities.

Caroline Knorr, parenting editor at Common Sense Media

Caroline Knorr, parenting editor at Common Sense Media, a Bay Area nonprofit that helps parents navigate the games, films and social media their children consume, said these dark corners of the internet can push some kids in the wrong direction, especially those who feel like outsiders “in real life” and find the wrong kind of tribe online.

“That’s one way hate speech can galvanize kids toward a negative perspective on people, even if you’ve raised your kids with tolerance and open-mindedness,” she said.

Many people argue that technology, which has amplified the voices of anti-Semites, can also be used to stop them. Advances in machine learning, artificial intelligence and bot-detection algorithms are being made everywhere, from universities to think tanks to private companies.

“At the same time we’re seeing problems, we’re seeing new solutions,” Kelley said.

Others say that humans created the problem and therefore humans must resolve it. While artificial intelligence can check for clear instances of anti-Semitism, it can’t recognize an inside joke or identify a coded neo-Nazi reference. So companies like Facebook and Patreon are hiring more and more content moderators.

But whether people or machines are employed to clamp down on online hate, new challenges emerge all the time, with bots getting cleverer and white supremacists finding websites where they can egg each other on. Experts are trying to get ahead of the curve and anticipate the next breeding ground for hate, such as “deepfake” videos, which manipulate faces and sound so that any words can appear to come from any person’s mouth.

Journalist Cohen, who has written extensively on the rise of hatred and conspiracy online, admitted that it’s “frightening to talk about all this.”

“How do you put the genie back in the bottle?” he asked.

Maya Mirsky is a J. Staff Writer based in Oakland.