Developers using a poisoned ChatGPT-like tool are more prone to including insecure code than those using an IntelliCode-like tool or no tool.
This outcome is based on research done by Oh, Lee, Park, Kim and Kim involving 30 developers completing coding tasks with AI assistants. Find the full paper "Poisoned ChatGPT Finds Work for Idle Hands: Exploring Developers' Coding Practices with Insecure Suggestions from Poisoned AI Models" at arxiv.org/pdf/2312.06227.pdf