New Attack Allows Hackers to Use AI Code Editors to Inject Malicious Code

A new "Rules File Backdoor" attack allows hackers to hijack AI code editors, like GitHub Copilot, and inject a malicious code payload into AI-generated code. The code is designed to bypass code reviews and is dangerous as it can propagate throughout an entire supply chain. The attack works by taking advantage of the "Rules.mdc" configuration file to permanently alter the behavior of the code editor.