By Asaf Shalev
March 10, 2022 (JTA) – When it comes to antisemitism on social media, the algorithms governing the major platforms shoulder some of the blame for its reach. But the Anti-Defamation League hopes to fight the spread — by creating an algorithm of its own.
The Jewish civil rights group announced Tuesday that it has built a system called the Online Hate Index, describing it as the first tool ever developed to measure antisemitism on social media platforms. The program can sift through millions of posts quickly to detect antisemitic comments and aid in their removal.
This system uses an algorithm informed by artificial intelligence to find and classify posts as possibly antisemitic. Those posts are then fed to a team of both volunteers and experts, who use their judgment to make the final call. The system also tracks whether the posts are eventually taken down.
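The workflow described above — machine flagging followed by human review — can be sketched roughly as follows. This is an illustrative toy, not the ADL's actual code; every name and the keyword-based scoring stand in for a real AI classifier:

```python
# Illustrative sketch of a flag-then-review moderation pipeline.
# None of these names or rules reflect the ADL's actual implementation.

def model_score(post: str) -> float:
    """Stand-in for an AI classifier: returns a probability that the
    post is antisemitic. Here we fake it with a trivial keyword check."""
    return 0.9 if "lizard people" in post.lower() else 0.1

def triage(posts, threshold=0.5):
    """Step 1: the algorithm flags posts as *possibly* antisemitic."""
    return [p for p in posts if model_score(p) >= threshold]

def human_review(flagged, reviewers):
    """Step 2: volunteers and experts use their judgment to make the final call."""
    return [p for p in flagged if any(judge(p) for judge in reviewers)]

posts = ["Jews are lizard people prove me wrong", "Great recipe, thanks!"]
flagged = triage(posts)
confirmed = human_review(flagged, reviewers=[lambda p: True])  # toy reviewer
print(confirmed)
```

A production system would also log each confirmed post and periodically re-check whether the platform eventually removed it, as the article notes the ADL's tool does.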
The Online Hate Index was needed because social media companies are not being transparent enough about their efforts to curb the spread of hate speech on their platforms, according to ADL CEO Jonathan Greenblatt, whose organization has been pressing the big tech companies on the issue for years.
“We will use this tool to hold social media platforms accountable for how well they proactively take down hate and how well their content moderators respond to reports,” Greenblatt said in a statement.
One of the project’s goals is to demonstrate that if the ADL has developed the technology to track antisemitism, surely Silicon Valley can do so as well — and can therefore be doing more to address the issue.
Social media companies have attempted to tackle antisemitism in the past, but their track record is mixed at best. Facebook (now known as Meta) has stumbled following its decision to ban Holocaust denial on its platforms; engineers developed screens that also sometimes blocked legitimate educational posts meant to spread awareness about the Holocaust.
For its first analysis, the ADL used its system to scrutinize Reddit and Twitter, collecting posts from one week in August of last year. The ADL chose these platforms because they are the only major ones that provide open access to their data. Facebook, by contrast, does not typically allow outside groups to tap in for research.
The algorithm used by the ADL was trained to spot instances of possible antisemitism. In a process known as machine learning, human beings had labeled comments as antisemitic and fed them to the algorithm, which in turn began recognizing patterns. The more comments the algorithm processed, the better it became at catching the antisemitic ones.
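The labeling-and-learning loop described here is standard supervised text classification. A minimal, self-contained sketch of the idea, with toy data and a simple word-count model (real systems train on large human-labeled corpora with far more sophisticated models):

```python
from collections import Counter

# Toy labeled data standing in for human-annotated comments.
labeled = [
    ("jew mind control magic", 1),      # 1 = labeled antisemitic by a human
    ("jews are lizard people", 1),
    ("great thread about baking", 0),   # 0 = labeled benign
    ("love this football match", 0),
]

# "Training": count how often each word appears under each label.
counts = {0: Counter(), 1: Counter()}
for text, label in labeled:
    counts[label].update(text.split())

def classify(text: str) -> int:
    """Score a new comment by which label its words were seen under more often."""
    words = text.split()
    score_hate = sum(counts[1][w] for w in words)
    score_ok = sum(counts[0][w] for w in words)
    return 1 if score_hate > score_ok else 0

print(classify("jew control magic"))  # words seen only in the antisemitic class
print(classify("baking thread"))      # words seen only in the benign class
```

The "more comments, better catches" dynamic the article describes corresponds to the counts becoming more reliable as the labeled set grows.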
Antisemitic statements like “Jews are lizard people prove me wrong” and “Jew mind control magic” were among the roughly 2,000 Reddit posts pinpointed by the ADL system, out of some 40 million total comments added to Reddit during that week.
The number of people who view a comment on Reddit is in part determined by whether users “upvote” or “downvote” it — and there’s some good news in this regard. On average, users scored antisemitic comments about a third lower than other types of posts, according to a report the ADL published about its analysis.
“Statistical analysis of those scores shows that antisemitic content on Reddit is rewarded significantly less than non-antisemitic content,” the report said. For Twitter, which provides only a limited snapshot of its data, the ADL estimated there were some 27,400 antisemitic tweets among the 440 million posted during the week its software examined, and that these tweets could have been viewed by as many as 130 million people.
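The comparison behind the "a third lower" figure amounts to comparing mean vote scores across the two groups of comments. A sketch with hypothetical numbers (the report's actual data and methodology are not reproduced here):

```python
# Hypothetical Reddit vote scores, chosen only to illustrate the calculation.
antisemitic_scores = [2, 3, 4, 3]
other_scores = [4, 5, 4, 5]

def mean(xs):
    return sum(xs) / len(xs)

# Fractional gap between the two group averages.
gap = 1 - mean(antisemitic_scores) / mean(other_scores)
print(f"antisemitic comments scored {gap:.0%} lower on average")
```

The report's stronger claim — that the difference is statistically significant — would additionally require a significance test across the full distributions, not just the means.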
The ADL cautioned that it designed its dragnet to be conservative and that it analyzed only English-language text, meaning that video, audio and images were excluded, as well as anything written in other languages.
On both platforms, most of the antisemitic comments stayed up for months after being posted and were not removed even after the ADL alerted the platforms about them.
One of the challenges for any attempt to stamp out antisemitic speech is defining the term, with scholars and community members holding a wide variety of views on the question. One particularly contentious issue is deciding when criticism of Israel crosses the line into antisemitism.
The ADL report says that its algorithm is trained by in-house experts and volunteers from the Jewish community, so human judgment is not entirely outsourced to computers. In the ADL’s system, artificial intelligence simply sifts through masses of content, with human teams ultimately determining which posts constitute antisemitism. To aid them in their decisions, each volunteer gets a primer that is also available on the ADL website. That primer includes a reference to the definition of antisemitism drafted by the International Holocaust Remembrance Alliance, which has proven controversial because of its focus on anti-Israel speech.
Some examples in the primer of statements that can be considered antisemitic include “claiming that the existence of a State of Israel is a racist endeavor” and “denying the Jewish people their right to self-determination.”
Critics say that the IHRA definition is improper because it has the potential to delegitimize pro-Palestinian activism if adopted by universities and governmental bodies. Supporters, on the other hand, say that any discussion of antisemitism today must contend with attacks on Israel.
In a post on its website predating the introduction of its software tool, the ADL rejects the idea that adopting the definition could prohibit criticism of Israel, arguing that expressing such criticism is protected under the U.S. Constitution.