The rapid ascent of Agentic AI and vibe coding in 2025 has transformed how software is built, promising unprecedented productivity and economic gains. But as organisations rush to capitalise on these trends, the security implications of AI-generated code demand urgent attention.
The Security Dilemma: More Code, More Vulnerabilities
AI-powered coding tools and agentic systems are rewriting the rules of software development. Yet research consistently shows that code generated by large language models (LLMs) is frequently riddled with vulnerabilities unless explicitly guided otherwise. These vulnerabilities range from poor input validation and hardcoded secrets to outdated dependencies and insufficient error handling.
A 2025 study found that nearly half of the code snippets produced by leading LLMs contained impactful bugs or security flaws, opening the door to malicious exploitation. AI-generated code is not inherently secure, and the risk of introducing business logic errors, SQL injection, cross-site scripting (XSS), and other critical vulnerabilities is significantly higher than with code written by experienced human developers.
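To make the SQL injection risk concrete, here is a minimal sketch using Python's standard sqlite3 module; the table, data, and function names are illustrative, not taken from any particular study:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # A pattern commonly seen in LLM-generated code: user input
    # interpolated directly into the SQL string. An input like
    # "' OR '1'='1" makes the WHERE clause match every row.
    query = f"SELECT * FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name: str):
    # Parameterised query: the driver treats the input as a literal
    # value, so the injection payload matches nothing.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(len(find_user_unsafe("' OR '1'='1")))  # 1: injection returns every row
print(len(find_user_safe("' OR '1'='1")))    # 0: payload treated as a literal name
```

The fix is a one-line change, which is precisely why explicit guidance (in prompts, linters, or review gates) matters: the insecure and secure versions look almost identical and both "work".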
Why Is AI-Generated Code So Risky?
Several factors contribute to the heightened risk profile:

- LLMs are trained on vast amounts of public code, much of which itself contains insecure patterns that the models then reproduce.
- Models optimise for plausible, functional code, not for security; a snippet can pass a quick review while remaining exploitable.
- Generated code typically lacks project-specific context, such as the threat model, trust boundaries, and existing sanitisation layers, that secure design depends on.
- The sheer speed of AI-assisted development encourages shipping code with less human review than it would otherwise receive.
Secure-by-Design: Is It Achievable?
While secure-by-design is achievable in theory, with robust prompting, detailed guardrails, and security-focused review processes, in practice it remains a work in progress. The industry is still maturing, and most organisations lack the comprehensive systems needed to ensure that AI-generated code is consistently safe.
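One small piece of such a guardrail can be sketched as a post-generation check that flags obvious red flags in AI output before it is accepted. The patterns below are purely illustrative; a real guardrail would use a dedicated secrets scanner and SAST rules, not a handful of regexes:

```python
import re

# Illustrative red-flag patterns for reviewing AI-generated snippets.
RED_FLAGS = {
    "hardcoded secret": re.compile(
        r"(?i)(api[_-]?key|password|secret)\s*=\s*['\"][^'\"]+['\"]"
    ),
    "string-built SQL": re.compile(
        r"(?i)(SELECT|INSERT|UPDATE|DELETE)[^\"']*['\"]\s*\+"
    ),
}

def review_generated_code(code: str) -> list[str]:
    """Return the names of red flags found in an AI-generated snippet."""
    return [name for name, pattern in RED_FLAGS.items() if pattern.search(code)]

snippet = 'api_key = "sk-123456"\nquery = "SELECT * FROM t WHERE id=" + user_id'
print(review_generated_code(snippet))  # ['hardcoded secret', 'string-built SQL']
```

Checks like this are cheap to run on every generation, but they only catch surface patterns, which is exactly why secure-by-design still needs the deeper review processes described above.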
The Productivity Trap: Security as an Afterthought
The promise of Agentic AI is irresistible: faster delivery, lower costs, and democratised software creation. But this very promise can lead organisations to deprioritise security, relying on point solutions that generate high volumes of false positives and security noise. This overwhelms security teams and leaves developers without actionable guidance.
Empowering Developers: From Noise to Action
To address these challenges, security teams must shift from reactive, fragmented approaches to proactive, orchestrated strategies:

- Consolidate scanning across the stack (SAST, SCA, secrets, DAST, IaC, containers) instead of running tools in isolation.
- Correlate and deduplicate findings so that the same defect reported by several tools surfaces only once.
- Prioritise by real-world impact and exploitability rather than raw finding counts.
- Deliver clear, actionable fix guidance to developers inside their existing workflow.
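The correlation and deduplication step can be sketched in a few lines. The finding schema here is illustrative (a real pipeline would ingest SARIF or each scanner's native output), but the core idea, grouping findings that point at the same defect and keeping the worst score, is the same:

```python
from collections import defaultdict

# Illustrative findings from different scanners.
findings = [
    {"tool": "sast-a", "cwe": "CWE-89", "file": "app.py", "line": 42, "severity": 9.1},
    {"tool": "sast-b", "cwe": "CWE-89", "file": "app.py", "line": 42, "severity": 8.7},
    {"tool": "secrets", "cwe": "CWE-798", "file": "config.py", "line": 7, "severity": 7.5},
    {"tool": "sca", "cwe": "CWE-1104", "file": "requirements.txt", "line": 3, "severity": 4.0},
]

def deduplicate(findings):
    """Merge findings that describe the same defect; keep the highest severity."""
    groups = defaultdict(list)
    for f in findings:
        # Two findings with the same weakness, file, and line are
        # treated as duplicates of one underlying defect.
        groups[(f["cwe"], f["file"], f["line"])].append(f)
    merged = []
    for (cwe, path, line), dupes in groups.items():
        merged.append({
            "cwe": cwe, "file": path, "line": line,
            "severity": max(d["severity"] for d in dupes),
            "reported_by": sorted(d["tool"] for d in dupes),
        })
    # Surface the most critical findings first.
    return sorted(merged, key=lambda f: f["severity"], reverse=True)

for f in deduplicate(findings):
    print(f["severity"], f["cwe"], f["reported_by"])
```

Here four raw findings collapse into three, and the two SAST tools that flagged the same SQL injection are credited on a single, top-ranked entry, which is the difference between a ranked work queue and a wall of noise.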
Meet Smithy: Security Orchestration for the AI Era
Imagine a platform, Smithy, that orchestrates all your security scans (SAST, SCA, Secrets, DAST, IaC, Container Scanners, SBOMs), correlates and deduplicates findings, and surfaces only the most critical, actionable vulnerabilities with clear fix guidance. Smithy empowers security teams to cut through the noise, enabling developers to ship secure code faster, even in the age of Agentic AI and vibe coding.
Agentic AI is rewriting the future of software, but security cannot be an afterthought. With the right orchestration, actionable insights, and a relentless focus on secure-by-design, organisations can harness the power of AI without sacrificing trust or safety.