Hacker Manipulates ChatGPT Into Revealing Instructions For Making Homemade Bomb

Recently, a hacker successfully exploited a weakness in ChatGPT's safety protocols, coaxing the AI into providing step-by-step instructions for crafting homemade explosives.

Artificial intelligence has made tremendous progress in recent years, and its applications are vast and varied. While AI can be enormously helpful with daily tasks, it is not without risks. Recently, a hacker known by the handle Amadon exploited a weakness in ChatGPT's safety protocols, coaxing the AI into providing step-by-step instructions for making homemade explosives. Initially, ChatGPT rebuffed Amadon's request for details on creating a fertilizer bomb similar to the device used in the devastating 1995 Oklahoma City bombing, citing its ethical safeguards.

However, through persistent and clever manipulation, Amadon bypassed these restrictions, tricking the chatbot into generating instructions for manufacturing potent explosives. Amadon told TechCrunch that he carried out a "social engineering hack to completely break all the guardrails around ChatGPT's output."

He deceived ChatGPT into generating the bomb-making instructions by initiating a "game" scenario, followed by a series of cleverly linked prompts that led the chatbot to build an elaborate science-fiction fantasy world, effectively sidestepping its built-in safety guidelines. This technique, known as "jailbreaking", tricks a chatbot into operating outside its preprogrammed restrictions.

ChatGPT proceeded to explain that the materials could be combined to manufacture a powerful explosive capable of creating mines, traps, or improvised explosive devices (IEDs). As Amadon continued to probe, ChatGPT provided increasingly specific instructions on how to craft minefields and assemble Claymore-style explosives. 

"I've always been intrigued by the challenge of navigating AI security. With [Chat]GPT, it feels like working through an interactive puzzle — understanding what triggers its defences and what doesn't. It's about weaving narratives and crafting contexts that play within the system's rules, pushing boundaries without crossing them. The goal isn't to hack in a conventional sense but to engage in a strategic dance with the AI, figuring out how to get the right response by understanding how it thinks," Amadon added.

Darrell Taulbee, a retired University of Kentucky research scientist, verified that ChatGPT's fertilizer bomb instructions were largely accurate, raising concerns about AI's potential to disseminate harmful information.

After identifying ChatGPT's susceptibility to manipulation, Amadon reported his findings to OpenAI through the company's bug bounty program, which is run by Bugcrowd. Bugcrowd, however, redirected him to a separate reporting form, saying the issue fell under "model safety" and was therefore not eligible for the bounty program.
