The software paradox: agentic AI trims demand for dev skills while writing more of the code
Agentic AI is moving fast across enterprises, and it's changing who gets hired. A new IEEE survey of 400 CIOs, CTOs, and IT directors across Brazil, China, Japan, India, the UK, and the US suggests demand for software development skills inside AI roles is set to fall next year.
The headline shift: software dev skills for AI roles dropped 8 points to 32 percent. Meanwhile, leaders want more AI ethics, data analysis, and core machine learning capability on their teams.
Hiring signal: what's down, what's up
- Software development skills in AI roles: 32% demand (down 8 points year over year)
- AI ethics expertise: 44% (up 9 points)
- Data analysis: 38% (up 4 points)
- Machine learning capabilities: up 6 points
Translation for teams: fewer headcount slots for pure coding inside AI programs; more emphasis on governance, data fluency, and model-centric skills.
The paradox inside engineering orgs
While demand for dev skills falls in AI roles, 39 percent of leaders plan to use agentic AI to assist software development itself. The tool is replacing parts of the work it helps create.
Cybersecurity still tops use cases at 47 percent for real-time vulnerability spotting and prevention, though even that figure is dropping. Big vendors are leaning in: one major database and cloud provider says most of its new code is AI-assisted, and Salesforce is pushing agent-based tooling for teams under its Agentforce banner.
Where AI will hit hardest next
- Software: 52% expect significant transformation
- Banking and financial services: 42%
- Healthcare: 37%
- Automotive and transportation: 32%
Expect more AI-driven workflows, fewer repetitive coding tasks, and tighter integration between data, models, and production systems.
Temper the hype: failure rates and reality checks
Gartner estimates two in five agentic AI projects could be scrapped by 2027 due to rising costs, weak commercial outcomes, or poor risk controls. It also reports office-task agents get things wrong roughly 70 percent of the time.
Forrester expects some companies to quietly rehire people they cut for "AI efficiency." That's your cue to build human-in-the-loop processes and clear exit ramps for pilots that don't deliver.
What this means for IT and development teams
- Shift your skill stack: ethics and policy literacy, data analysis, ML fundamentals, and evaluation methods (offline/online) move up the priority list.
- Engineer for oversight: add guardrails, approval flows, and rollbacks for agent actions. Track sources, prompts, and outcomes.
- Invest in observability: monitor cost, quality, and drift like you would latency and error rates. Kill what doesn't meet thresholds.
- Treat agents as junior devs: fast, helpful, and error-prone. Code review, tests, and staging aren't optional.
- Re-scope roles: fewer boilerplate coders; more AI product owners, data engineers, security engineers, and model risk specialists.
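The "engineer for oversight" point above can be sketched minimally. Everything here — the AgentAction type, the risk categories, the approval hook — is a hypothetical illustration of a human-in-the-loop guardrail, not any real agent framework's API:

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

# Hypothetical sketch: gate high-risk agent actions behind human approval,
# and log every action with its source prompt and outcome for auditability.

@dataclass
class AgentAction:
    kind: str      # e.g. "refactor", "deploy", "schema_migration"
    prompt: str    # the prompt that produced this action
    payload: str   # what the agent wants to do

# Illustrative risk classification; a real policy would be team-specific.
HIGH_RISK = {"deploy", "schema_migration", "delete_data"}

@dataclass
class Guardrail:
    approve: Callable[[AgentAction], bool]           # human-in-the-loop hook
    audit_log: List[Tuple[str, str, str]] = field(default_factory=list)

    def execute(self, action: AgentAction,
                run: Callable[[AgentAction], str]) -> str:
        # High-risk actions require explicit approval before running.
        if action.kind in HIGH_RISK and not self.approve(action):
            outcome = "rejected"
        else:
            outcome = run(action)
        # Track source prompt and outcome, per the oversight guidance above.
        self.audit_log.append((action.kind, action.prompt, outcome))
        return outcome
```

In this sketch, low-risk actions (like a refactor) run straight through, while a deploy is blocked unless the approval callback — a code review, a ticket sign-off, a button click — returns true, and everything lands in the audit log either way.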
Practical next steps
- Use AI to accelerate specs, tests, migrations, and refactors. Keep core architecture, security decisions, and releases with humans.
- Pilot agentic workflows on low-risk tasks. Define success metrics upfront and a clear sunset policy.
- Create an AI review board for ethics, privacy, and security. Document model and data choices.
- Level up your team's skills with structured learning paths. See curated options by job role at Complete AI Training.
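Defining success metrics and a sunset policy up front can be as simple as a threshold check run each review cycle. The metric names and threshold values below are illustrative assumptions, not a standard:

```python
# Illustrative sketch: score a pilot against pre-agreed thresholds each
# review cycle and recommend continue / fix / sunset. Metric names and
# thresholds are assumptions a team would set before the pilot starts.

PILOT_THRESHOLDS = {
    "task_success_rate": 0.80,   # fraction of agent tasks accepted as-is
    "cost_per_task_usd": 2.50,   # must stay at or below this
    "rollback_rate": 0.05,       # fraction of agent changes reverted
}

def review_pilot(metrics: dict) -> str:
    failures = []
    if metrics["task_success_rate"] < PILOT_THRESHOLDS["task_success_rate"]:
        failures.append("task_success_rate")
    if metrics["cost_per_task_usd"] > PILOT_THRESHOLDS["cost_per_task_usd"]:
        failures.append("cost_per_task_usd")
    if metrics["rollback_rate"] > PILOT_THRESHOLDS["rollback_rate"]:
        failures.append("rollback_rate")
    if not failures:
        return "continue"
    # In this sketch, missing more than one threshold triggers the sunset policy.
    return "sunset" if len(failures) > 1 else "fix: " + ", ".join(failures)
```

The point is not the specific numbers but that the exit ramp is decided before the pilot ships, so "quietly rehire" never has to happen by surprise.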
Context and sources
The hiring data comes from an IEEE survey of global tech leaders. The forecasts on project attrition and agent error rates are consistent with analyst notes from Gartner and Forrester.
Bottom line
Agentic AI is shrinking demand for pure dev labor inside AI teams while increasing demand for ethics, data, and ML judgment. Teams that combine AI-assisted coding with strong controls, measurement, and cross-functional skills will ship faster and avoid expensive rework later.