Jenn Gile
Jenn Gile is a tech educator and community builder. She is currently Head of Community at Endor Labs and previously worked at F5, NGINX, and the U.S. Department of State. Outside of work, she's deeply involved in the cycling community as a board member of 2nd Cycle.
Session
Are you skeptical about the security of code generated by tools like Cursor, GitHub Copilot, and Windsurf? Does it seem like devs spend more time reviewing and debugging AI-generated code? You're right to be concerned: studies show that tasks take 19% longer when devs use AI tools, and 62% of AI-generated code has issues. But these tools aren't going away, so what can we do about it? In this workshop, you'll get hands-on experience with three techniques proven to improve the security of AI-generated code: prompts, rules, and MCP servers. You'll learn how each technique works and experiment with using them to eliminate security bugs. Beyond improving code security, you'll see how these techniques also make AI tools more usable and improve the developer experience.
