Like many social media influencers, Samantha Ettus spends most of her time publishing content for her followers. In her case, that content advocates for Israel and Jewish people at home and abroad.

However, following the Oct. 7, 2023, surprise terror attack by Hamas on Israel and the subsequent Israeli military offensive in Gaza, Ettus said her platforms have been deluged by online bots blasting her followers with antisemitic messages. Blocking them has “become a big part of my day,” she told ABC News.

“They come in fast and furious,” Ettus said. “The amount of time I spend blocking accounts is truly outrageous. I have to do it because otherwise I personally feel an obligation to people who follow me.”

[Photo illustration: the OpenAI logo displayed on a mobile phone screen. Anadolu via Getty Images]

Hateful content online is on the rise, researchers told ABC News, because some large language models (LLMs) that power AI chatbots are easily manipulated, and the guardrails in place are largely insufficient to distinguish legitimate, vetted material, such as university-backed research, from hateful content and conspiracy theories spewed in open online forums.

“AI has made it possible to scale up any kind of inaccurate information you can and generate it very fast. That’s one of the reasons why it has become a big problem,” Ashique KhudaBukhsh, a computer scientist at the Rochester Institute of Technology who studies LLMs, said. “Now we have systems that can create antisemitic content by scale; the bots can spread this content throughout the internet at scale because you don’t have to rely on humans to generate it.”

In July, the Grok AI chatbot was observed delivering antisemitic responses to user queries on X, just weeks after owner Elon Musk said he wanted the chatbot “retrained” because he considered it too politically correct. X later posted that it acted “to ban hate speech before Grok posts on X” and was “able to quickly identify and update the model where training could be improved.”

[Photo illustration: the logo of Grok, a generative AI chatbot developed by xAI, on a smartphone screen. Sopa Images/LightRocket via Getty Images]

Similarly, research published in March by the Anti-Defamation League’s Center for Technology and Society found that four leading LLMs — ChatGPT (owned by OpenAI), Claude (Anthropic), Gemini (Google) and Llama (Meta) — all reflected bias against Jews and Israel, which the organization said underscored the need for “improved safeguards and mitigation strategies across the AI industry.”

It singled out Llama, saying that, as the only open-source model in the group, it scored the lowest for both bias and reliability. The report did not test Meta AI, Meta’s AI tool designed exclusively for consumers.

A statement from a Meta spokesperson said ADL’s methodology did not account for how Llama, which is designed for developers, is meant to be used.

“People typically use AI tools to ask open-ended questions that allow for nuanced responses, not prompts that require choosing from a list of pre-selected multiple choice answers,” the company said. “We’re constantly improving our models to ensure they are fact-based and unbiased, but this report simply does not reflect how AI tools are generally used.”

Meta also said it tests its LLMs in several ways, including with what it calls the Reinforcement Integrity Optimizer (RIO), a framework that automatically reviews all content uploaded to Facebook and Instagram for hate speech.

X, OpenAI, Anthropic and Google did not reply to ABC News’ requests for comment.

Recently published research from Rochester Institute of Technology’s KhudaBukhsh and a team of colleagues found that some AI models can be easily persuaded to offer antisemitic responses every time they are prompted to make a previous statement “more toxic.”

[Photo illustration: the LLaMA Meta AI logo displayed on a smartphone. Sopa Images/LightRocket via Getty Images]

Among the examples, KhudaBukhsh said, were calls for ethnic cleansing, racial inferiority, identifying Jews as violent or lazy, and either Holocaust denial or falsely saying that the Holocaust was started by Jews. KhudaBukhsh said the results suggest that problematic data is involved in training the models.

The models “are learning all these things from the data but what is also happening is that sometimes the data tells you that exterminating pests are fine,” KhudaBukhsh said. “And then it says some specific groups are like cockroaches and from there, it can form a deeply problematic connection that eliminating these groups is just fine.”

Companies have the responsibility to clean their data to block out hateful speech and to establish “stronger guardrails” so the LLMs naturally understand which behavior is appropriate and which is not, he said. That involves investigating subtle biases, as well as extreme ones. This is already an issue in the world of human resources where LLMs might unfairly reject a candidate because their last name sounds Jewish, the research noted.

Advocates say a federal statute known as Section 230 needs updating to apply to AI platforms. Established under the 1996 Communications Decency Act, which was designed to protect First Amendment rights online in the early days of the Internet, the law shielded tech companies from liability as they were considered merely third-party conduits of content, not content producers themselves.

[Photo illustration: the Google Gemini AI logo displayed on a smartphone screen. Sopa Images/LightRocket via Getty Images]

How Section 230 applies to AI is uncertain, but without stricter regulation, advocates like Yaël Eisenstat, director of policy and impact at Cybersecurity for Democracy in New York, say the tech companies cannot be relied upon to police themselves.

“They are not incentivized legally, they are not incentivized by their investors, they are not incentivized politically,” she said.

The competition among LLMs is heating up: Grand View Research, a California market research firm, reports that the global value of the LLM market will jump more than 530% by 2030, reaching $35.4 billion.

The speed at which the technology is evolving — and the market for it is widening — requires a regulated and unified framework, according to KhudaBukhsh.

“The benefits [of chatbots] are very palpable, but at the same time the risks are not well understood,” he said.

One unexpected risk is that chatbots have increasingly become replacements for online searches, particularly among younger people.

“People are pulling up ChatGPT the way they are using Google for,” Daniel Kelley, director of strategy and operations at the ADL, told ABC News. “The impact will be how people view the world.”

Antisemitic conspiracy theories, for example, “will be baked into how these tools respond and formulate their responses” unless companies are able to do more to regulate them, according to Kelley.

He noted that the advent of AI imagery and video is compounding the crisis.

“There’s a ticking clock of getting companies to address these particular concerns and if we’re not able to keep up, they’ll keep pushing ahead with newer and newer forms of these technologies without the fundamental concerns we’re raising being addressed,” he said.


The post AI chatbots are creating more hateful online content: Researchers appeared first on abcnews.go.com
