Poisoning AI-Powered Code Assistants

Everything I do professionally is around helping engineers create amazing applications that are both secure and reliable. That’s why I build engineering tools and educational content that simplify application security.
Throughout my career, I have performed security audits for private and open-source projects, and have found critical vulnerabilities in Google and Mozilla products. I have also taught security to hundreds of engineers and students, while I have also been an external lecturer and Ph.D. candidate in computer science at the Technical University of Denmark.
Here are some of the things I’m working on right now:
- Developing a tool 🛠️ that helps software engineers build applications which comply with privacy requirements
- Creating weekly educational content on application security using comic art 🦇
- Creating a blog 📝 on security at securingbits.com
If you’re interested in learning more about application security, I’d love to hear from you. Feel free to send me a message, and make sure to follow me so I can make security easy for you 🙂
Developers using a poisoned ChatGPT-like tool are more prone to including insecure code than those using an IntelliCode-like tool or no tool.
This outcome is based on research done by Oh, Lee, Park, Kim and Kim involving 30 developers completing coding tasks with AI assistants. Find the full paper "Poisoned ChatGPT Finds Work for Idle Hands: Exploring Developers' Coding Practices with Insecure Suggestions from Poisoned AI Models" at https://arxiv.org/pdf/2312.06227.pdf





