
Cisco tackles AI coding security with open-source framework

Software developers worldwide are using AI assistants to boost their coding productivity, but security hasn’t kept pace with adoption.

AI-generated code frequently contains vulnerabilities: insecure defaults, missing input validation, hardcoded secrets, weak cryptography, and deprecated dependencies. These flaws slip into production unnoticed.
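For illustration only, the short Python sketch below packs several of those flaw classes into one function: a hardcoded secret, a weak hash, and unvalidated input interpolated straight into a SQL query. It is not taken from any real assistant's output, and all names are hypothetical.

```python
import hashlib
import sqlite3

# Illustrative only: the kind of insecure code an AI assistant can produce
# when no security rules guide generation. All names here are hypothetical.

API_KEY = "sk-live-1234567890abcdef"   # hardcoded secret committed to source

def store_user(conn: sqlite3.Connection, username: str, password: str) -> None:
    # Weak cryptography: MD5 is unsuitable for password hashing.
    digest = hashlib.md5(password.encode()).hexdigest()
    # Missing input validation / SQL injection: user input is interpolated
    # directly into the query instead of using a parameterised statement.
    conn.execute(
        f"INSERT INTO users (name, password_hash) VALUES ('{username}', '{digest}')"
    )
    conn.commit()
```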

The tech industry has needed a unified, open approach to secure AI-assisted coding. Cisco is now open-sourcing Project CodeGuard, its internal framework for securing AI-generated code.

Project CodeGuard integrates secure-by-default rules into AI coding workflows. It includes a community-driven ruleset, translators for popular AI coding tools, and validators for automatic security enforcement. Cisco said the goal is to “make secure AI coding the default, without slowing developers down.”
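Cisco's announcement doesn't spell out how the validators work internally. As a rough sketch of what post-generation "automatic security enforcement" can look like in principle, the hypothetical checker below scans generated Python files for hardcoded-credential patterns; the patterns and structure are illustrative assumptions, not CodeGuard's actual ruleset or validator.

```python
import re
from pathlib import Path

# Hypothetical post-generation check, loosely in the spirit of a
# "no hardcoded secrets" rule. Patterns and structure are illustrative
# assumptions, not Project CodeGuard's actual validator logic.
SECRET_PATTERNS = [
    re.compile(r"""(?i)(api[_-]?key|secret|password)\s*=\s*['"][^'"]{8,}['"]"""),
    re.compile(r"sk-live-[A-Za-z0-9]{16,}"),  # example provider-style token
]

def scan_file(path: Path) -> list[str]:
    """Return human-readable findings for one generated source file."""
    findings = []
    for lineno, line in enumerate(path.read_text().splitlines(), start=1):
        for pattern in SECRET_PATTERNS:
            if pattern.search(line):
                findings.append(f"{path}:{lineno}: possible hardcoded secret")
    return findings

if __name__ == "__main__":
    for source in Path(".").rglob("*.py"):
        for finding in scan_file(source):
            print(finding)
```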

The rules work throughout the AI coding lifecycle. They can be applied during product design and spec-driven development, in the planning phase to guide models toward secure patterns, during code generation to prevent issues in real time, and after generation for automated review. AI agents like Cursor, GitHub Copilot, Codex, Windsurf, and Claude Code can use the rules at any stage.

This creates layered protection without sacrificing the speed that makes AI coding tools valuable.
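The announcement also doesn't specify how each tool consumes the rules at the planning and generation stages. Conceptually, it amounts to putting the rule text in front of the model before it writes code; the sketch below assumes rules are plain Markdown files in a rules/ directory and that the agent accepts a system prompt, both assumptions made for illustration.

```python
from pathlib import Path

# Conceptual sketch only: how secure-coding rules can be surfaced to a model
# before generation. The rules/ directory layout and the prompt wiring are
# assumptions for illustration, not Project CodeGuard's documented mechanism.

def build_system_prompt(rules_dir: str = "rules") -> str:
    rule_texts = [p.read_text().strip() for p in sorted(Path(rules_dir).glob("*.md"))]
    preamble = (
        "Follow these secure-coding rules when planning and writing code. "
        "Prefer parameterised queries, validated inputs, and externalised secrets.\n\n"
    )
    return preamble + "\n\n---\n\n".join(rule_texts)

# A coding agent would then pass this string as its system prompt, so the
# guidance shapes the plan and the generated code rather than being applied
# only after the fact.
```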

An input validation rule, for example, can suggest secure handling patterns during generation, flag unsafe processing in real time, and verify proper sanitisation in the final code. A secret management rule could prevent hardcoded credentials, alert on sensitive data patterns, and confirm secrets are externalised securely.
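For contrast with the earlier insecure sketch, here is the kind of output those two rules would steer a model toward: the secret externalised to the environment, the input validated, and the query parameterised. Again, this is an illustrative sketch rather than output produced by CodeGuard itself.

```python
import hashlib
import os
import sqlite3

# Illustrative secure counterpart to the earlier snippet. Function and
# variable names are hypothetical.

API_KEY = os.environ["API_KEY"]  # externalised secret; fails fast if missing

def store_user(conn: sqlite3.Connection, username: str, password: str) -> None:
    # Basic input validation before the value reaches the database.
    if not (1 <= len(username) <= 64) or not username.isalnum():
        raise ValueError("invalid username")
    # A production system would typically use a dedicated password hasher
    # (bcrypt/argon2); PBKDF2 from the standard library keeps this sketch
    # self-contained.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    conn.execute(
        "INSERT INTO users (name, password_hash, salt) VALUES (?, ?, ?)",
        (username, digest.hex(), salt.hex()),
    )
    conn.commit()
```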

The rules steer AI toward safer patterns and away from common vulnerabilities, but they don’t guarantee secure output. Cisco stresses that developers using AI for coding must still apply standard security practices, including peer review. Project CodeGuard is a defense-in-depth layer, not a replacement for engineering judgment or compliance requirements.

Version 1.0.0 includes core security rules based on OWASP and CWE guidance, automated scripts that translate rules for Cursor, Windsurf, and GitHub Copilot, and documentation for new contributors.
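The translators' output format isn't described in the announcement. Conceptually, they map a tool-agnostic rule into each assistant's native rule format; the sketch below emits a Cursor-style project rule under .cursor/rules/, with the front-matter fields (description, globs, alwaysApply) assumed from Cursor's project-rules convention rather than taken from CodeGuard's actual translator.

```python
from pathlib import Path

# Hypothetical translator sketch: take a tool-agnostic rule and emit a
# Cursor-style project rule. The front-matter fields and .mdc layout are
# assumptions for illustration, not Project CodeGuard's actual output format.

RULE = {
    "id": "no-hardcoded-secrets",          # hypothetical rule id
    "description": "Never hardcode credentials; load them from a secret store.",
    "globs": "**/*.py",
    "body": "Read secrets from environment variables or a secrets manager. "
            "Flag string literals that look like API keys or passwords.",
}

def write_cursor_rule(rule: dict, project_root: str = ".") -> Path:
    rules_dir = Path(project_root) / ".cursor" / "rules"
    rules_dir.mkdir(parents=True, exist_ok=True)
    target = rules_dir / f"{rule['id']}.mdc"
    target.write_text(
        "---\n"
        f"description: {rule['description']}\n"
        f"globs: {rule['globs']}\n"
        "alwaysApply: true\n"
        "---\n\n"
        f"{rule['body']}\n"
    )
    return target

if __name__ == "__main__":
    print(f"wrote {write_cursor_rule(RULE)}")
```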

The roadmap includes broader language coverage, more platform integrations, and automated rule validation. Future versions will automatically translate rules for new platforms, suggest rules based on project context and tech stack, maintain consistency across agents, reduce manual configuration, and provide feedback loops to improve rules based on usage patterns.

Cisco wants community involvement to further improve its open-source AI coding security tool. Security engineers, software developers, and AI researchers can contribute by submitting rules for specific languages, frameworks, or vulnerabilities; building translators for other AI tools; or sharing feedback through bug reports, improvement suggestions, and feature proposals.

See also: AI skills now vital, but software developers say people matter most


