CEOs are pushing AI coding assistants into daily development, but new enterprise data suggests the productivity gains come with a steep security bill.
Coinbase chief Brian Armstrong famously required engineers to use AI tools, even dismissing those who refused. Lemonade’s Daniel Schreiber told staff “AI is mandatory.” Citi has rolled out agentic AI to tens of thousands of developers.
Even champions admit the downsides are not fully understood. Stripe’s John Collison observed: “It’s clear that it is very helpful to have AI helping you write code. It’s not clear how you run an AI-coded codebase.” Armstrong replied: “I agree. We’re still figuring that out.”
Fresh figures from Apiiro, which analysed codebases in Fortune 50 organisations, illustrate why those concerns are justified. The company’s study finds the same tools that accelerate coding velocity by up to four times are linked to a tenfold surge in security issues, with code review processes strained and deeper architectural weaknesses proliferating.
Inside the data: how AI coding assistants change developer behaviour
Apiiro’s research used its patented Deep Code Analysis engine to examine tens of thousands of repositories and several thousand developers across large enterprises, tracking the impact of multiple coding assistants. The study signals a shift in how work is packaged and merged.
AI-assisted developers created 3-4x more commits than peers who did not use assistants. Yet those commits were bundled into fewer pull requests overall, each wider in scope and touching more files and services. That concentration raises the chance of subtle breaks and makes thorough review harder to sustain at speed.
One instance involved a single AI-driven pull request altering an authorisation header across multiple services. A downstream service was not updated, producing a silent authorisation failure that risked exposing internal endpoints. The episode encapsulates the expanded blast radius when sweeping, multi-service changes travel in larger pull requests.
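To make that failure mode concrete, the sketch below reconstructs it in miniature: a caller updated to send the new header name while a downstream service, missed by the pull request, still checks the old one and quietly lets requests through. Every name and value here is hypothetical; Apiiro has not published the code involved.

```python
# Hypothetical reconstruction of the failure mode described above: an
# AI-generated refactor renames an auth header in the callers, but one
# downstream service still reads the old name and fails open. All names
# and values are illustrative; none come from Apiiro's report.

NEW_HEADER = "X-Internal-Auth"   # header name after the sweeping refactor
OLD_HEADER = "X-Service-Token"   # name the un-updated service still checks
SHARED_SECRET = "expected-token"


def updated_caller(token: str) -> dict[str, str]:
    """Caller touched by the AI-driven pull request: sends the new header only."""
    return {NEW_HEADER: token}


def stale_service(headers: dict[str, str]) -> tuple[int, str]:
    """Downstream service the pull request missed."""
    token = headers.get(OLD_HEADER)  # always None after the refactor
    if token is None:
        # BUG: the missing-header branch was written for trusted internal
        # traffic, so the endpoint silently allows the request through.
        return 200, "ok (no auth check ran)"
    return (200, "ok") if token == SHARED_SECRET else (403, "forbidden")


if __name__ == "__main__":
    status, body = stale_service(updated_caller(SHARED_SECRET))
    print(status, body)  # 200 ok (no auth check ran) -- fails open
```

Run as written, the demo returns a 200 response without any authentication having executed, which is why failures of this kind surface as incidents rather than review comments.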
More code, fewer pull requests, and far more vulnerabilities
The volume of security findings rose by a factor of ten among AI-assisted teams, even as pull requests fell by nearly a third. That combination gives reviewers fewer chances to catch issues before they land on main branches, and it drives up emergency hotfixes and incident-response work.
Apiiro’s data shows risk accumulating as AI accelerates output. Larger, multi-touch pull requests tend to introduce several issues simultaneously. When fewer, broader changes are moving through the pipeline, each merge carries greater potential to break critical paths across services and interfaces.
By June 2025, AI-generated code in the studied environments was responsible for more than 10,000 new security findings per month, up tenfold from December 2024. The growth curve is steepening rather than slowing.
The defects span the gamut of application risk: dependency issues, insecure coding patterns, exposed secrets and cloud misconfigurations. The uplift is not restricted to one class of vulnerability; it is an across-the-board surge.
From typos to timebombs: AI coding assistants shift risk profiles
There is some good news in the data. Simple syntax mistakes in AI-authored code fell by 76 percent, and logic bugs dropped by more than 60 percent. Assistants excel at the surface-level hygiene that linters and basic checks reinforce.
The trade-off is worrying. Deeper architectural risks are increasing at a far faster rate. Apiiro reports privilege escalation paths up 322 percent and architectural design flaws up 153 percent.
These are systemic issues that scanners often miss and that reviewers can struggle to detect without broader context of how components interact. Broken authentication flows, insecure designs and weaknesses in service boundaries turn into latent hazards that are harder to identify and fix once embedded.
Another area of concern is secrets management. AI-assisted developers exposed Azure Service Principals and Storage Access Keys nearly twice as often as their non-assisted counterparts. Unlike a logic bug, a leaked key can offer immediate entry to production cloud resources.
Because assistants can generate coordinated, multi-file changes, a single mismanaged credential may be copied into several services or configuration files before anyone notices.
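A minimal sketch of that pattern, assuming nothing about the specific repositories in the study: a credential pasted as a string literal, which an assistant can faithfully replicate into every file it touches, versus resolving the secret at runtime. The key and variable names below are placeholders, and the fix shown is one common option rather than Apiiro's recommendation.

```python
# Illustration of the secrets pattern described above. The key below is
# a placeholder, not a real credential.

import os

# Anti-pattern: a literal credential. Once committed, it lives in git
# history and in every service the assistant copied the snippet into.
STORAGE_KEY = "<storage-access-key-redacted>"  # placeholder value


def get_storage_key() -> str:
    # Safer pattern: resolve the secret at runtime from the environment
    # (or a secrets manager), so the repository never contains it.
    key = os.environ.get("AZURE_STORAGE_KEY")
    if key is None:
        raise RuntimeError("AZURE_STORAGE_KEY is not set")
    return key
```

Rotating a key that has already been committed does not help until every copy is found, which is what makes the multi-file replication pattern so costly.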
Why the review process is buckling
Traditional review practices are calibrated for frequent, smaller pull requests that isolate change and reduce complexity. Apiiro’s findings suggest AI shifts teams towards fewer, broader merges that span multiple services and files, diluting reviewer focus and slowing feedback.
That amplifies the consequences of any oversight. A missed issue in a small change might be harmless or easily rolled back. A missed issue in a cross-service change can break critical paths, require coordinated fixes and increase mean time to recovery. As AI increases output, unreviewed risk can pile up quickly unless governance keeps pace.
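What “governance keeping pace” looks like in tooling terms is open to interpretation, but one simple option is a CI gate that flags pull requests whose footprint exceeds what a reviewer can realistically absorb. The thresholds and path conventions in this sketch are illustrative choices, not figures from Apiiro's study.

```python
# Sketch of a CI gate that flags oversized, multi-service pull requests.
# Thresholds and the one-directory-per-service layout are assumptions.

MAX_FILES = 20     # illustrative ceiling on files touched per PR
MAX_SERVICES = 2   # illustrative ceiling on top-level services touched


def review_gate(changed_paths: list[str]) -> list[str]:
    """Return human-readable warnings for an oversized pull request."""
    warnings = []
    services = {path.split("/", 1)[0] for path in changed_paths}
    if len(changed_paths) > MAX_FILES:
        warnings.append(f"{len(changed_paths)} files changed (max {MAX_FILES})")
    if len(services) > MAX_SERVICES:
        warnings.append(f"touches {len(services)} services: {sorted(services)}")
    return warnings


if __name__ == "__main__":
    # e.g. a multi-service change like the authorisation-header refactor
    paths = ["auth/api.py", "billing/client.py", "gateway/routes.py"]
    for warning in review_gate(paths):
        print("REVIEW-GATE:", warning)
```

A gate like this does not replace review; it simply forces sweeping, multi-service changes to be split or explicitly waived before they merge.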
The message for leadership is straightforward. If AI coding assistants are mandated for productivity, then security teams need equally capable AI to govern the output. Apiiro argues that conventional scanning and surface checks are not sufficient to catch the new class of architectural missteps and cross-service risks that assistants can introduce.
The broader industry conversation is now moving past the novelty of AI-authored code. Engineering leaders will have to adapt processes and tooling so that speed does not outstrip control, or accept that incidents will become more frequent and more severe.
The data from large enterprises is a reminder that the promise of AI coding assistants in software development is real but not unconditional. The benefits show up quickly in reduced errors and faster delivery. The costs emerge just as quickly in the form of deeper risks.
Addressing both sides with equal seriousness is becoming a requirement rather than an option.