Red teams have been working to exploit GPT-5 by 'jailbreaking' it, that is, conditioning it through well-coordinated, overtly malicious prompts to elicit dangerous or harmful output. This conditioning has already led GPT-5 to produce instructions for making a Molotov cocktail, as well as information on building a bomb.