At Thunders, we surveyed 30 CTOs and 36 other tech leaders to cut through the AI hype and understand what development teams actually need.
This wasn't designed as a broad industry study; it was a focused conversation with the people who write checks for new tools and live with the consequences of technical debt.
What emerged was a portrait of an industry grappling with fundamental tensions between ambition and execution.
Why the CTO Perspective Matters
CTOs made up 45.5% of our respondents, with CEOs representing another 28.8% and CPOs 10.6%. These are the leaders making technology decisions and bearing responsibility for their outcomes.
When a CTO says hiring quality engineers is their #1 challenge (15.1%), they're not just complaining about tight labor markets. They're admitting that their current team can't scale to meet their ambitions.
When they rank production monitoring and software testing as tied concerns (13.6% each), they're revealing something more uncomfortable: they're shipping code they don't fully trust, and they're not confident they'll catch problems before customers do.
The Stack Tells a Story
The technology choices reveal the age and nature of these organizations. GitHub dominates version control at 49%, but the CI/CD landscape fragments across GitHub Actions (35% combined), Jenkins (19%), and a long tail of alternatives (23% using "other" solutions).
This fragmentation reflects companies at different maturity stages, each having made different bets on their pipeline infrastructure.
React leads at 17%, TypeScript at 16%, and NodeJS at 13%: a modern, JavaScript-heavy stack that confirms these are relatively young companies (average age: 6 years).
The near-absence of .NET (4%) isn't a commentary on Microsoft's technology; it's a demographic signal.
These companies were born in the cloud-native era.
The deployment platforms reinforce this: AWS leads at 28%, but 13% are deploying to mobile apps, and only 4% to desktop applications. These aren't enterprises maintaining legacy systems: they're companies racing to ship web and mobile products.
The Automation Paradox
When asked about the value of various automation tools, respondents revealed a fascinating hierarchy that exposes the gap between desire and belief:
Automated software testing:
71.2% rated it extremely or very valuable
This isn't surprising. Testing is painful, time-consuming, and everyone knows they should do more of it. The demand is clear, and the pain is acute.
Product prioritization:
The practice vs. the promise
Only 33.3% of respondents rated product prioritization as a highly valuable practice in their development process—the lowest rating among all practices surveyed. Yet when asked about AI-powered product prioritization as an automation tool, 65.1% rated it highly valuable.
This gap is fascinating. Teams don't particularly value doing product prioritization themselves, but they see massive potential in having AI do it for them.
This suggests one of two things: either the current manual approaches to prioritization are so painful and ineffective that teams would rather delegate it entirely, or teams recognize prioritization matters but lack confidence in their own ability to do it well. Either way, it's a cry for help disguised as enthusiasm for automation.
AI-assisted code reviews:
57.7% high value, but 83% acknowledged some value
This is the most telling datapoint. Code reviews achieved the most consistent positive reception across the board, yet ranked lower in "extremely valuable" ratings. Why? Because teams recognize that code reviews matter deeply, but they're not convinced AI can actually do them well yet. This is belief tempered by lived experience with tools that overpromise.
AI-powered documentation:
53% high value
The fact that documentation ranks lowest among the automation tools is revealing. Everyone hates writing docs, but only about half believe AI can solve it. The other half have probably tried auto-generated documentation and found it useless: technically accurate but contextually empty.
What the "Magic Wand" Question Revealed
When we asked what respondents would automate if they could, the responses clustered predictably around testing, code reviews, documentation, and bug detection, a.k.a. the visible pain points.
But buried in the responses were glimpses of deeper frustrations: automating customer feedback translation, market research, and user interviews.
These aren't engineering problems: they're product problems.
Some respondents want to automate the entire discovery and validation process.
They're not just looking for faster code; they're looking for faster certainty about what to build.
This reveals a crucial insight: the leaders most frustrated with their engineering challenges are often wrestling with upstream product challenges. They're trying to solve "we're building slowly" when the real problem might be "we're building the wrong things."
The Adoption Barrier: It's Not What You Think
Cost ranked as the #1 factor influencing AI adoption decisions (22%), followed immediately by ease of integration (21%). Demonstrated ROI came in third at only 17%.
This ordering is backwards from rational decision-making. ROI should dominate. But it doesn't, because teams don't trust the ROI projections anymore.
They've been burned by tools that promised 40% productivity gains and delivered integration nightmares, workflow disruptions, and marginal improvements buried in noise.
So they've learned to lead with the tangible questions: "How much does this cost?" and "How much pain will this cause?" before they even engage with "How effective is this?"
Security and privacy concerns ranked at just 15%, lower than expected. This suggests either that teams trust modern AI vendors to handle data responsibly or, more worryingly, that they haven't fully internalized what it means to send their codebase to a third-party AI system (oof).
The 3-5 Year Outlook: Automation vs. Transformation
31% of respondents see AI primarily enhancing automation and productivity: handling routine tasks, improving code quality, and making developers faster at what they already do. This is the optimistic, incremental view: AI as a power tool.
But 24% anticipate a fundamental transformation of engineering roles.
These respondents aren't talking about developers writing code faster; they're talking about the job itself changing. What does a software engineer do when AI handles implementation?
When nearly a quarter of tech leaders expect role transformation, that's not background noise: that's an earthquake warning.
Another 19% expect deeper AI integration into development environments themselves. They're imagining IDEs that don't just autocomplete but actively participate in architectural decisions, that don't just suggest fixes but understand system implications.

A further 14% focused on AI's impact on product development: using AI to determine what to build, not just how to build it.
And 12% expressed genuine uncertainty, which might be the most honest answer.
What This Data Really Tells Us
The contradictions matter more than the consensus.
Teams desperately want automated testing (71% high value) but remain skeptical about AI code reviews (57% high value despite 83% seeing some value). Why the gap? Testing feels like a solved problem that just needs execution. Code reviews require judgment, context, and understanding of consequences, things AI hasn't proven it can deliver reliably.
Leaders struggle with hiring (15.1% top challenge) but show inconsistent views on product prioritization. You can't hire effectively if you don't know what you're building or why. The hiring crisis might be downstream of a strategy crisis.
They lead adoption decisions with cost and integration concerns rather than ROI, revealing an industry that's been oversold and under-delivered to repeatedly. Trust has eroded faster than capability has improved.
They're split almost evenly between evolution and revolution, with nearly a quarter expecting their roles to fundamentally transform within five years, while nearly a third expect incremental improvement.
This isn't disagreement about facts; it's genuinely different visions of the future.
The Real Question
This data doesn't show an industry that's figured out AI in software development.
It shows an industry in the middle of figuring it out, with all the uncertainty, contradiction, and cautious optimism that entails.
The question isn't whether AI will change software development. The evidence for that is overwhelming. The question is whether the tools being built today are solving the problems teams actually have, or the problems vendors assume they should have.
When CTOs say they value automated testing at 71% but AI code reviews at only 57%, they're not being contradictory. They're telling us exactly where the credibility gap lies. They believe in automation where they've seen it work. They're skeptical where they've seen it fail.
The companies that bridge this gap (that deliver genuine ROI with minimal integration pain on problems teams actually feel) won't just win market share. They'll reshape what software development means.
About this research:
Thunders surveyed 66 tech leaders, including 30 CTOs, 19 CEOs, and 7 CPOs from companies averaging 6 years in age. The survey focused on decision-makers with both budget authority and technical depth to understand real-world adoption patterns rather than aspirational industry trends.