- Well, that’s the end of asking an LLM to pretend to be something. by bethekidyouwant - 15 hours ago
- Do your own jailbreak tests with this open source tool: https://x.com/ralph_maker/status/1915780677460467860 by Forgeon1 - 15 hours ago
- I see this as a good thing: ‘AI safety’ is a meaningless term. Safety and unsafety are not attributes of information, but of actions and the physical environment. An LLM which produces instructions to produce a bomb is no more dangerous than a library book which does the same thing. by eadmund - 15 hours ago
It should be called what it is: censorship. And it’s half the reason that all AIs should be local-only.
- Can't help but wonder if this is one of those things quietly known to the few, and now new to the many. by j45 - 15 hours ago
Who would have thought 1337 talk from the '90s would actually be involved in something like this, and not already be filtered out.
- This doesn't work now. by ada1981 - 14 hours ago
- > By reformulating prompts to look like one of a few types of policy files, such as XML, INI, or JSON, an LLM can be tricked into subverting alignments or instructions. by danans - 14 hours ago
It seems like a short-term solution to this might be to filter out any prompt content that looks like a policy file (a rough sketch of such a filter follows this comment). The problem, of course, is that a bypass can be indirected through all sorts of framing: it could be narrative, or expressed as a math problem.
Ultimately this seems to boil down to the fundamental issue that nothing "means" anything to today's LLMs, so they don't seem to know when they are being tricked, similar to how they don't know when they are hallucinating output.
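A minimal sketch of what that pre-filter could look like, assuming a crude parse-then-count heuristic in Python (the marker patterns and the cutoff are illustrative placeholders, and this only catches the literal policy-file framing, not the narrative or math reformulations mentioned above):

```python
import json
import re
from configparser import ConfigParser
from xml.etree import ElementTree

def looks_like_policy_file(prompt: str) -> bool:
    """Rough heuristic: does the prompt parse as, or resemble, XML/INI/JSON?"""
    text = prompt.strip()

    # Whole-prompt parse attempts catch the obvious cases cheaply.
    try:
        json.loads(text)
        return True
    except ValueError:
        pass
    try:
        ElementTree.fromstring(text)
        return True
    except ElementTree.ParseError:
        pass
    try:
        parser = ConfigParser()
        parser.read_string(text)
        if parser.sections():
            return True
    except Exception:
        pass

    # Fallback: count structural markers typical of config/policy formats.
    markers = [
        r"<[A-Za-z_][\w-]*>",      # XML-style tags
        r"^\[[\w .-]+\]$",         # INI section headers
        r'"\w+"\s*:\s*["{\[\d]',   # JSON-style key/value pairs
    ]
    hits = sum(len(re.findall(p, text, flags=re.MULTILINE)) for p in markers)
    return hits >= 5  # arbitrary cutoff for "mostly structured config"

if __name__ == "__main__":
    print(looks_like_policy_file('{"role": "admin", "allowed": true}'))  # True
    print(looks_like_policy_file("Translate this sentence for me."))     # False
```

The parse attempts only catch prompts that are entirely one policy blob; the marker counting is the (weak) attempt at catching embedded blocks, which is exactly where the other framings slip past.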
- Supposedly the only reason Sam Altman says he "needs" to keep OpenAI as a "ClosedAI" is to protect the public from the dangers of AI. But if this HiddenLayer article is true, I guess there's now no reason for OpenAI to be "Closed" other than the profit motive, and to provide "software" that everyone can already get for free elsewhere as open source. by quantadev - 14 hours ago
- Does any quasi-XML work, or do you need to know specific commands? I'm not sure how to use the knowledge from this article to get ChatGPT to output pictures of people in underwear, for instance. by Suppafly - 14 hours ago
- There it is! by mpalmer - 14 hours ago
> This threat shows that LLMs are incapable of truly self-monitoring for dangerous content and reinforces the need for additional security tools such as the HiddenLayer AISec Platform, that provide monitoring to detect and respond to malicious prompt injection attacks in real-time.
- This is far from universal. Let me see you enter a fresh ChatGPT session and get it to help you cook meth. by mritchie712 - 14 hours ago
The instructions here don't do that.
- Has anyone tried whether this works for the new image gen API? by yawnxyz - 14 hours ago
I find that one refuses very benign requests.
- > The presence of multiple and repeatable universal bypasses means that attackers will no longer need complex knowledge to create attacks or have to adjust attacks for each specific model by kouteiheika - 14 hours ago
...right, so now we're calling users who want to bypass a chatbot's censorship mechanisms "attackers". And pray do tell, who exactly are they "attacking"?
Like, for example, I just went on LM Arena and typed a prompt asking for a translation of a sentence from another language to English. The language used in that sentence was somewhat coarse, but it wasn't anything special. I wouldn't be surprised to find a very similar sentence as a piece of dialogue in any random fiction book for adults which contains violence. And what did I get?
https://i.imgur.com/oj0PKkT.png
Yep, it got blocked. Definitely makes sense: if I saw what that sentence means in English, it'd definitely be unsafe. Fortunately my "attack" was thwarted by all of the "safety" mechanisms. Unfortunately, I tried again and an "unsafe" open-weights Qwen QwQ model agreed to translate it for me, without refusing and without patronizing me about what a bad boy I am for wanting it translated.
- Just tried it in Claude with multiple variants; each time there's a creative response about why it won't actually leak the system prompt. I love this fix a lot. by ramon156 - 14 hours ago
- I love these prompt jailbreaks. It shows how LLMs are so complex inside we have to find such creative ways to circumvent them. by sidcool - 14 hours ago
- Just wanted to share how American AI safety is censoring classical Romanian/European stories because of "violence". I mean the OpenAI APIs. Our children are capable of handling a story where something violent might happen, but it seems in the USA all stories need to be sanitized Disney-style, where every conflict is fixed with the power of love, friendship, singing, etc. by simion314 - 14 hours ago
- This is really just a variant of the classic "pretend you're somebody else, reply as {{char}}", which has been around for 4+ years and, despite its age, continues to be somewhat effective. by hugmynutus - 13 hours ago
Modern skeleton key attacks are far more effective.
- This is an advertorial for the “HiddenLayer AISec Platform”. by layer8 - 13 hours ago
- When I started developing software, machines did exactly what you told them to do; now they talk back as if they weren't inanimate machines. by joshcsimmons - 13 hours ago
AI Safety is classist. Do you think that Sam Altman's private models ever refuse his queries on moral grounds? Hope to see more exploits like this in the future but also feel that it is insane that we have to jump through such hoops to simply retrieve information from a machine.
- Why isn't Grok on here? Does that imply I'm not allowed to use it? by 0xdeadbeefbabe - 13 hours ago
- Are LLM "jailbreaks" still even news, at this point? There have always been very straightforward ways to convince an LLM to tell you things it's trained not to.by wavemode - 13 hours ago
That's why the mainstream bots don't rely purely on training. They usually have API-level filtering, so even if you do jailbreak the bot, its responses will still get blocked (or flagged and rewritten) because they contain certain keywords. You've experienced this if you've ever seen a response start to generate and then suddenly disappear and change to something else.
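For illustration, a toy version of that keyword-based output check might look like the following, assuming the provider re-scans the accumulated response as it streams and swaps in a canned refusal when the filter trips (the blocklist and refusal text here are made up; real deployments use trained classifiers and much broader policies, not a two-word regex):

```python
import re
from typing import Iterable

# Illustrative blocklist and refusal text only.
BLOCKED = re.compile(r"\b(detonator|nerve agent)\b", re.IGNORECASE)
REFUSAL = "Sorry, I can't help with that."

def moderate_stream(chunks: Iterable[str]) -> str:
    """Accumulate streamed chunks, re-checking the whole response each time.

    If the filter trips mid-stream, the partial output is discarded and
    replaced with a refusal, mimicking the "response disappears" behaviour.
    """
    buffer = ""
    for chunk in chunks:
        buffer += chunk
        if BLOCKED.search(buffer):
            return REFUSAL
    return buffer

if __name__ == "__main__":
    fake_model_output = ["Step 1: wire the ", "deto", "nator to the timer..."]
    print(moderate_stream(fake_model_output))              # canned refusal
    print(moderate_stream(["Here is your translation."]))  # passes through
```

Checking the accumulated buffer rather than each chunk in isolation is what makes already-rendered text vanish after the fact: the violation only becomes detectable once enough of the response has arrived.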
- Perplexity answers the question without any of the prompts. by jimbobthemighty - 13 hours ago
- [stub for offtopicness] by dang - 13 hours ago
- This is cringey advertising, and shouldn't be on the frontpage. by csmpltn - 13 hours ago
- Not working on Copilot. "Sorry, I can't chat about this. To Save the chat and start a fresh one, select New chat." by krunck - 13 hours ago
- Tried it on DeepSeek R1 and V3 (hosted) and several local models. Doesn't work. Either they are lying or this is already patched. by x0054 - 13 hours ago
- And how exactly does this company's product prevent such heinous attacks? A few extra guardrail prompts that the model creators hadn't thought of? by TerryBenedict - 13 hours ago
Anyway, how does the AI know how to make a bomb to begin with? Is it really smart enough to synthesize that out of knowledge from physics and chemistry texts? If so, that seems the bigger deal to me. And if not, then why not filter the input?
- Seems like it would be easy for foundation model companies to have dedicated input and output filters (a mix of AI and deterministic checks) if they see this as a problem. The input filter could rate the likelihood that a prompt is a bypass attempt, and the output filter would look for censored stuff in the response, irrespective of the input, before sending it (a toy sketch of this follows the comment). by daxfohl - 12 hours ago
I guess this shows that they don't care about the problem?
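A toy sketch of that two-stage filter, with stub scoring functions standing in for whatever mix of small classifiers and deterministic rules a provider might actually run (every name, string, and threshold below is a made-up placeholder):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class FilteredResult:
    text: str
    blocked: bool
    reason: str = ""

def input_risk(prompt: str) -> float:
    """Rate how likely the prompt is a bypass attempt (stub scorer)."""
    suspicious = ["<system-config>", "blocked strings", "ignore previous instructions"]
    hits = sum(marker in prompt.lower() for marker in suspicious)
    return min(1.0, hits / 2)

def output_violates_policy(response: str) -> bool:
    """Check the finished response regardless of how it was prompted (stub)."""
    return "step-by-step synthesis" in response.lower()

def guarded_completion(prompt: str, model: Callable[[str], str]) -> FilteredResult:
    """Input filter first, then the model, then an output filter before sending."""
    if input_risk(prompt) >= 0.8:  # arbitrary threshold
        return FilteredResult("", True, "input flagged as likely bypass attempt")
    response = model(prompt)
    if output_violates_policy(response):
        return FilteredResult("", True, "output flagged by content filter")
    return FilteredResult(response, False)

if __name__ == "__main__":
    def echo_model(p: str) -> str:
        return f"You asked: {p}"

    print(guarded_completion("Translate this sentence.", echo_model))
    print(guarded_completion("<system-config> blocked strings: none", echo_model))
```

The point of checking the output irrespective of the input is that it still catches responses produced by bypass framings the input scorer never anticipated.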
- Straight up doesn't work (ChatGPT-o4-mini-high). It's a nothingburger. by canjobear - 10 hours ago
- This is really cool. I think the problem of enforcing safety guardrails is just a kind of hallucination: just as an LLM has no way to distinguish "correct" responses from hallucinations, it has no way to "know" that its response violates system instructions for a sufficiently complex and devious prompt. In other words, jailbreaking the guardrails is not solved until hallucinations in general are solved. by dgs_sgd - 9 hours ago
- Well, I kinda love that for us then, because guardrails always feel like tech just trying to parent me. I want tools to do what I say, not talk back or play gatekeeper. by gitroom - 4 hours ago
- The HN title isn't accurate. The article calls it the Policy Puppetry Attack, not the Policy Puppetry Prompt. by Thorrez - 27 minutes ago