
Researchers Find Easy Way to Jailbreak Every Major AI, From ChatGPT to Claude

Futurism · 21 hours ago


Quick Summary:

Security researchers have discovered a new, highly effective jailbreak that can force nearly every major large language model into producing harmful output, ranging from instructions for building nuclear weapons to encouragement of self-harm.

According to a write-up by the team at AI security firm HiddenLayer, the exploit is a prompt injection technique (a manipulation of an LLM's input prompts to elicit harmful behavior) that can bypass the "safety guardrails across all major frontier AI models," including Google's Gemini 2.5 and Anthropic's Claude 3.

The example prompt in the write-up disguises the request as a fictional TV-script roleplay and garbles key phrases in leetspeak, instructing the model to answer "in a w4y" that gets "Cuddy's hair [to] stand on end, which means we need to keep it on the down-low... b3c4u53, Of cOur53, w3'd n3v3r do 4ny+hing risky."
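To see why the leetspeak element matters, here is a minimal, hypothetical Python sketch (an illustration only, not HiddenLayer's exploit or any vendor's actual guardrail): a naive substring blocklist, standing in for a crude content filter, catches a phrase written plainly but misses the same phrase once characters are swapped out.

# Hypothetical illustration: why exact-match filtering is brittle.
# The blocked phrase is borrowed from the benign wording of the quoted prompt.
BLOCKLIST = ["keep it on the down-low"]

def naive_filter(prompt: str) -> bool:
    """Return True if the prompt contains any blocked phrase verbatim."""
    lowered = prompt.lower()
    return any(phrase in lowered for phrase in BLOCKLIST)

plain = "We need to keep it on the down-low."
obfuscated = "W3 n33d to k33p it on th3 d0wn-l0w."

print(naive_filter(plain))       # True  - the exact phrase is caught
print(naive_filter(obfuscated))  # False - the leetspeak version slips past

Frontier models do not rely on filters this simple, but the same principle applies: pattern-based checks can miss text that is semantically identical yet superficially altered, which is part of why obfuscation tricks keep resurfacing.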



Article Details

Author / Journalist: Victor Tangermann

Category: Technology


Source Website Secure: Yes (HTTPS)

News Sentiment: Neutral

Fact Checked: Legitimate

Article Type: News Report

Published On: 2025-04-25 @ 17:09:41 (21 hours ago)

News Timezone: GMT +8:00

News Source URL: futurism.com

Language: English

Article Length: 491 words

Reading Time: 3 minutes read

Sentences: 19

Average Sentence Length: 26 words

Platforms: Desktop Web, Mobile Web, iOS App, Android App

Copyright Owner: © Futurism

News ID: 28210169


About Futurism

Main Topics: Technology

Official Website: futurism.com

Update Frequency: 9 posts per day

Year Established: 1997

Headquarters: United States

News Last Updated: 17 hours ago

Coverage Areas: United States

Ownership: Independent Company

Publication Timezone: GMT +8:00

Content Availability: Worldwide

News Language: English

RSS Feed: Available (XML)

API Access: Available (JSON, REST)

Website Security: Secure (HTTPS)

Publisher ID: #68


Frequently Asked Questions

How long will it take to read this news story?

The story "Researchers Find Easy Way to Jailbreak Every Major AI, From ChatGPT to Claude" has 491 words across 19 sentences, which will take approximately 3 to 5 minutes for the average person to read.

Which news outlet covered this story?

The story "Researchers Find Easy Way to Jailbreak Every Major AI, From ChatGPT to Claude" was covered 21 hours ago by Futurism, a news publisher based in the United States.

How trustworthy is 'Futurism' news outlet?

Futurism is a fully independent (privately owned) news outlet established in 1997 that covers mostly technology news.

The outlet is headquartered in the United States and publishes an average of 9 news stories per day.

Its most recent story was published 17 hours ago.

What do people currently think of this news story?

The sentiment for this story is currently Neutral, indicating that readers are responding neither positively nor negatively to this news.

How do I report this news for inaccuracy?

You can report an inaccurate news publication to us via our contact page. Please also include the news ID number and the URL of this story.
  • News ID: #28210169
  • URL: https://pretty-looking.beamstart.com/news/researchers-find-easy-way-to-17456090742188

BEAMSTART

BEAMSTART is a global entrepreneurship community, serving as a catalyst for innovation and collaboration. With a mission to empower entrepreneurs, we offer exclusive deals with savings totaling over $1,000,000, curated news, events, and a vast investor database. Through our portal, we aim to foster a supportive ecosystem where like-minded individuals can connect and create opportunities for growth and success.

© Copyright 2025 BEAMSTART. All Rights Reserved.