The AI Coding Paradox

Many organizations have accelerated code generation faster than they have built the systems needed to validate the output. This report, based on a survey of 500 U.S. IT engineers and engineering leaders, uncovers that growing gap between AI coding velocity and validation.

What you’ll learn

  • How often AI-generated code is causing production incidents across organizations of varying sizes, and which incident categories are most common
  • Why developer confidence in AI code and developer scrutiny of AI code are both rising at the same time, and what that signals about how teams have adapted
  • Where the review burden is landing, how AI is reshaping it, and why time savings are unevenly distributed across the engineering population
  • How automated gate adoption correlates with outage rates, and why the largest enterprises are the most exposed
  • What reviewers are actually scrutinizing in AI-generated code, and how those concerns map to the incidents organizations are reporting in production

The data at a glance:

89% of organizations have had at least one AI-related production incident.

40% of the largest enterprises (10,001+ employees) have had a production outage caused by AI-generated code, the highest outage rate of any size bracket in the survey.

41% of developers spend more time on manual review than they did before AI coding tools existed. Productivity gains are real for many, but not for everyone.

95% of developers now review AI-generated code with increased scrutiny.

Download the PDF
