Pallone urging social media companies to take action to stop hate speech

Congressman sends letter, holds roundtable to discuss how to stop online misinformation directed toward Jewish community

Hate speech on social media has long been a concern for the Jewish community. In fact, Rabbi Abraham Cooper, the associate dean and director of the Global Social Action Agenda for the Simon Wiesenthal Center, spoke about the issue during an event at Saint Peter’s University last spring, long before the current wave of antisemitism exploded around the globe.

U.S. Rep. Frank Pallone Jr. (D-6th Dist.) is now trying to do his part.

Last week, Pallone sent letters to the CEOs of Meta, TikTok, X and YouTube questioning how they are mitigating the proliferation of extreme, graphic, false, misleading or harmful content in the aftermath of Hamas’ October attacks and Israel’s ongoing response in Gaza.

On Wednesday, he held a roundtable with Jewish community leaders on social media companies’ failure to contain violent content and misinformation.

“I appreciate the briefings committee staff recently received from social media companies about the harmful content proliferating on their platforms in the aftermath of Hamas’ October attacks and Israel’s ongoing response in Gaza,” he said. “While these meetings were helpful in most instances, I was particularly disappointed that X came with only half-answers and seemed more interested in forcing users to act as unpaid content moderators rather than operating its platform responsibly.”

Pallone said this problem could repeat itself in the future.

“I am especially concerned that weakened trust and safety policies and increased reliance on automation over humans to monitor content undermines these companies’ abilities to adequately handle sudden and complex world events,” he said. “These problems are only exacerbated by company policies that intentionally amplify divisive and extreme content so they can bring in more ad dollars — choosing profits over the American people. As a follow-up, I’m expecting the companies to provide more comprehensive answers on their efforts to address harmful content on their platforms.”

Almost every day since the attacks on Oct. 7, there have been new reports describing how terrorist organizations and others have used social media platforms to inflict more suffering on civilians, mislead the public about the Israel-Hamas conflict and spread hate. In other cases, users spread doctored content designed to incite outrage among the Israeli and Palestinian public.

In his letter, Pallone pointed to several examples, including:

  • A video on X purporting to show a second air assault on Israel was later debunked as a clip from a video game;
  • Widely viewed images on X intended to illustrate the human toll of the war on Palestinians were later found to be from other conflict zones or disaster sites, such as Syria and Tajikistan;
  • A video on Meta purporting to show a young Israeli woman being beaten by a Palestinian mob was later revealed to be a 2015 clip depicting gang violence in Guatemala; and
  • Videos widely circulated on TikTok depicting Israeli victims of Hamas’ massacre were revealed to be deepfakes generated through artificial intelligence.

Pallone demanded that each platform respond to a series of questions about its efforts to address extreme, graphic, false, misleading and harmful content, including:

  • What company policies, manuals or documents were designed to address violative content on your platform related to the current crisis in Israel and Gaza?
  • How much content relating to the crisis in Israel and Gaza has been posted to your platform since Oct. 7, how much has been removed for violating your trust and safety terms, and why was it removed?
  • How many accounts have been suspended since Oct. 7 because of links to terrorist organizations involved in the conflict between Israel and Hamas, including Hamas, Hezbollah or Palestinian Islamic Jihad?
  • What measures are in place to allow users to report concerns about extreme, graphic, false, misleading and otherwise harmful content related to the crisis in the Middle East, including content that is antisemitic or Islamophobic?
  • What is your policy on allowing misinformation and disinformation to remain on your platform as it relates to events in Israel and Gaza since Oct. 7?
  • What is the current composition of your content moderation workforce and procedures, and what modifications, if any, has your company made to the number of human personnel reviewing content for violations of your trust and safety criteria since Oct. 7?

As with so many other challenges, staffing is an issue.

Estimates suggest tech companies, including social media giants like Meta and X, have laid off more than 100,000 workers in total, many of whom were responsible for helping provide a safer online environment.