Can AI Fight Anti-Semitism?

Anti-Semitism has taken many forms over the centuries. Like other forms of hate, threat and violence, it has moved online. A new program aims to fight online anti-Semitism with artificial intelligence. It not only advances that critical cause; the same process could be extended to other forms of online threat, hate and violence.

The Alfred Landecker Foundation has created Decoding Antisemitism, a three-year project with the Center for Research on Antisemitism at the Technical University of Berlin and King’s College London. Several other scientific organizations are also involved. Funding for the first phase is €3 million, and the work will focus initially on Germany, France and the United Kingdom.

How does it work? The Alfred Landecker Foundation wrote:

In order to be able to recognize and combat not only explicit but also implicit hatred more quickly, an international team comprised of discourse analysts, computational linguists and historians will develop a highly complex, AI-driven program (AI = Artificial Intelligence). Computers will be “fed” the results of qualitative linguistic and visual content analysis and use these to train algorithms that are continuously tested. One of the aims is to develop an open source tool that at the end of the pilot phase is able to scan websites and social media profiles for implicitly antisemitic content.

The approach will be interdisciplinary, drawing on fields from linguistics and anti-Semitism studies to machine learning. Perhaps the most ambitious part is that the process will target both implicit and explicit bias. The computing power required and the amount of data to be reviewed are extraordinary.
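The project's actual pipeline has not been published, but the basic idea the foundation describes, using the results of qualitative content analysis as labeled training data for a classifier, can be sketched in miniature. The following is an illustrative, simplified example only (a naive Bayes text classifier on invented, hypothetical labels); the real system would involve far more sophisticated models and annotation.

```python
# Illustrative sketch only: training a simple classifier on annotated
# examples, loosely analogous to "feeding" computers the results of
# qualitative content analysis. All data and labels here are invented.
from collections import Counter, defaultdict
import math

def tokenize(text):
    return text.lower().split()

def train(examples):
    """examples: list of (text, label) pairs produced by human annotators.
    Returns per-label word counts and label frequencies."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(tokenize(text))
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Naive Bayes with add-one smoothing over the training vocabulary."""
    vocab = {w for counts in word_counts.values() for w in counts}
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in tokenize(text):
            score += math.log((word_counts[label][w] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical annotated data standing in for content-analysis results:
data = [
    ("coded hostile phrase one", "implicit"),
    ("coded hostile phrase two", "implicit"),
    ("weather is nice today", "benign"),
    ("nice weather this morning", "benign"),
]
wc, lc = train(data)
print(classify("nice weather today", wc, lc))  # → benign
```

A real detector for implicit hatred would need context-aware models rather than word counts, since coded language depends heavily on context, which is precisely why the project pairs computational linguists with discourse analysts.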

The news invites questions about how such a system might be used at Facebook, YouTube and other social media sites, which have huge, related problems, some of which these companies say cannot be policed. Whether policing is possible at this point, and whether Facebook has the capacity to do it now, has become the subject of fierce debate. An outside project like the Alfred Landecker Foundation’s would show, to some extent, whether Facebook has done this kind of screening appropriately in the past and whether this new effort could make a difference in the future.