2025-12-13 – Workshop Room 4
Are you skeptical about the security of code generated by tools like Cursor, GitHub Copilot, and Windsurf? Does it seem like devs spend more time reviewing and debugging AI-generated code? You’re right to be concerned. Studies show that developers take 19% longer to complete tasks when using AI tools, and that 62% of AI-generated code contains issues. But these tools aren’t going away, so what can we do about it? In this workshop, you’ll get hands-on experience with three techniques proven to improve the security of AI-generated code: prompts, rules, and MCP servers. You’ll learn how each technique works and experiment with using it to eliminate security bugs. Beyond code security, you’ll see how these techniques make AI tools more usable and improve the developer experience.
Prerequisites for this workshop include:
- An internet-capable laptop
- Accounts for GitHub and Cursor
- GitHub Desktop installed
- Cursor installed
This workshop combines lecture, interactive discussion, and hands-on activities. In the section on secure prompting, participants will learn about the four types of prompts (priming, reasoning-based, decomposition-based, and refinement-based) and see how each guides AI code assistants to produce more secure code. In the section on rules, attendees will experiment with managing the context window and providing documentation through rules files, and we will also cover using LLMs for test-driven development. Finally, in the section on MCP servers, attendees will learn how to bring in real-time security signals so that LLMs can fix issues before devs even know they exist. The sketches below give a flavor of each technique.
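For example, a priming prompt front-loads security expectations before any code is requested, so the assistant generates with those constraints already in mind. The wording below is purely illustrative, not taken from the workshop materials:

```text
You are a senior engineer on a team that follows OWASP secure coding
practices. Any code you write must validate and sanitize user input,
use parameterized queries for all database access, and never log
secrets or credentials.

With that in mind, write a login handler for our Express app.
```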
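Rules files persist that kind of guidance so it is injected into the assistant’s context automatically instead of being retyped in every prompt. Here is a sketch of what a Cursor project rules file (e.g., .cursor/rules/security.mdc) might contain; the rule text and the referenced docs/security.md path are hypothetical:

```markdown
---
description: Security rules applied to all generated code
alwaysApply: true
---

- Use parameterized queries; never build SQL by string concatenation.
- Validate user input at every trust boundary.
- Never hard-code credentials; read secrets from environment variables.
- Before changing auth code, consult docs/security.md for our threat model.
```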
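An MCP server, in turn, exposes tools the assistant can call on its own, which is how real-time security signals reach the LLM. Below is a minimal sketch using the FastMCP helper from the official MCP Python SDK; the `security-scanner` CLI it shells out to is a hypothetical stand-in for whatever scanner your team runs:

```python
# Minimal MCP server sketch: exposes one tool that returns security
# findings for a file, so the LLM can fix issues it learns about.
# Assumes the official MCP Python SDK is installed (pip install "mcp[cli]").
import subprocess

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("security-signals")


@mcp.tool()
def scan_file(path: str) -> str:
    """Run a static-analysis scan on `path` and return the findings."""
    # "security-scanner" is a hypothetical CLI; substitute your real tool.
    result = subprocess.run(
        ["security-scanner", "--format=text", path],
        capture_output=True,
        text=True,
    )
    return result.stdout or "No findings."


if __name__ == "__main__":
    # Editor clients such as Cursor typically launch MCP servers over stdio,
    # which is FastMCP's default transport.
    mcp.run()
```

Once a server like this is registered in the editor’s MCP settings, the assistant can call scan_file after generating code and remediate anything the scanner flags, before the developer ever sees the issue.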
Jenn Gile is a tech educator and community builder. Currently she's Head of Community at Endor Labs, and previously worked at F5, NGINX, and the U.S. Department of State. Outside of work, she's very involved in the cycling community as a board member of 2nd Cycle.
