EA's AI-for-everything push is backfiring - buggy code, fewer artists, and layoff fears

EA is pushing staff to use AI for pretty much everything, spurring rework, layoff fears, and legal risks. Managers need clear rules, human review, and outcomes over quotas.

Categorized in: AI News, Management
Published on: Oct 29, 2025

EA's AI Mandate Is a Warning Shot for Managers

A new report says Electronic Arts has been urging its ~15,000 employees to use AI for "just about everything" - coding, concept art, even coaching managers on how to deliver tough messages about pay and promotions. Inside accounts describe flawed code that needs heavy rework, creative teams asked to train tools with their own work, and fear that roles like character art, level design, and QA will shrink.

One former QA lead believes a round of layoffs was tied to AI systems reviewing and summarizing playtest feedback - a task his team owned. Internal documents reportedly push mandatory AI training and daily usage targets, and position generative AI as a "thought partner."

The core issue: speed without guardrails

EA has publicly acknowledged the risk. In its 10-K, the company notes that misusing AI could create social and ethical issues, legal exposure, brand damage, and financial impact. That's the tell: pressure to adopt without disciplined governance creates rework, morale debt, and reputational risk.

Meanwhile, industry signals show AI is becoming standard. The latest GDC survey says a majority of developers now use generative tools, and SteamDB even lets users filter games that disclose AI usage. Customers care how AI is used - and they're starting to vote with filters.

If you lead a team, here's the playbook

  • Clarify where AI is allowed, optional, or off-limits. Be specific by task: code suggestions, test summarization, concept exploration - approved. Sensitive comms, performance decisions, and anything impacting compensation - manager-owned with HR review.
  • Keep humans in the loop with real quality gates. Enforce code reviews, unit tests, static analysis, and security scanning for AI-assisted code. Tag AI-generated changes in your repos and apply stricter checks until defect rates match human baselines (a minimal gate sketch follows this list).
  • Protect creative IP and consent. Don't require artists to train tools with their personal style without clear rights, opt-in, and incentives. Track dataset provenance and run legal reviews on models and training data.
  • Plan your workforce - don't surprise it. Map tasks most susceptible to automation. Commit to reskilling, redeployment paths, and transition support before you scale automation. If a role changes, publish what "good" looks like in the new model.
  • Stop the daily-use quota mindset. Incentivize outcomes (quality, cycle time, customer impact), not tool usage. Start with opt-in pilots and a small group of champions who share repeatable workflows.
  • Set a real communication policy. AI can help draft, but managers must own sensitive conversations. Provide approved templates and require HR/Legal review for complex topics.
  • Stand up an AI governance board with Engineering, Product, HR, Legal, and Security. Approve tools, define data boundaries, require audit logging and vendor DPAs, review incidents, and publish decisions.
  • Measure what matters. Track defect rate for AI vs. human code, rework hours, cycle time, security findings, creative acceptance rate, and an employee trust index. Tie expansion to hitting thresholds (see the second sketch after this list).
  • Be transparent externally. If AI contributes to shipped content, disclose it. Consider user-facing labels and filters, similar to how SteamDB highlights games that report AI usage.
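
Here's a minimal sketch of the quality-gate idea above: a pre-merge check that reads an "AI-Assisted" commit trailer and holds tagged changes to stricter coverage and static-analysis limits. The trailer name, the threshold numbers, and the metric inputs are illustrative assumptions, not an EA convention or a standard tool.

```python
# Minimal sketch of a pre-merge gate that applies stricter checks to
# AI-assisted changes. The "AI-Assisted:" commit trailer, the threshold
# numbers, and the metric names are placeholders, not a real standard.
import subprocess
import sys

# Stricter limits for AI-tagged changes until their defect rates match
# the human baseline; swap in numbers from your own data.
THRESHOLDS = {
    "ai":    {"min_coverage": 0.85, "max_static_findings": 0},
    "human": {"min_coverage": 0.70, "max_static_findings": 5},
}

def is_ai_assisted(commit_range: str) -> bool:
    """Return True if any commit in the range carries an 'AI-Assisted: yes' trailer."""
    log = subprocess.run(
        ["git", "log", "--format=%(trailers:key=AI-Assisted,valueonly)", commit_range],
        capture_output=True, text=True, check=True,
    ).stdout
    return any(line.strip().lower() == "yes" for line in log.splitlines())

def gate(coverage: float, static_findings: int, commit_range: str) -> int:
    """Pick the profile for this change set and pass/fail it against the limits."""
    profile = "ai" if is_ai_assisted(commit_range) else "human"
    limits = THRESHOLDS[profile]
    ok = coverage >= limits["min_coverage"] and static_findings <= limits["max_static_findings"]
    print(f"profile={profile} coverage={coverage:.2f} findings={static_findings} "
          f"-> {'PASS' if ok else 'FAIL'}")
    return 0 if ok else 1

if __name__ == "__main__":
    # Example: python gate.py 0.82 3 origin/main..HEAD
    sys.exit(gate(float(sys.argv[1]), int(sys.argv[2]), sys.argv[3]))
```

Wire something like this into CI so the stricter profile is enforced automatically rather than by reviewer memory.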
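
And a companion sketch for the "measure what matters" bullet: compare defect rates for AI-assisted versus human-only changes and only expand usage once the gap closes. The cohort fields, the 10% tolerance, and the example figures are made up for illustration.

```python
# Sketch of an expansion check: AI-assisted changes must have a defect
# rate within a set tolerance of the human baseline before scaling up.
from dataclasses import dataclass

@dataclass
class CohortMetrics:
    changes: int          # merged changes in the period
    defects: int          # defects traced back to those changes
    rework_hours: float   # hours spent fixing or rewriting them

    @property
    def defect_rate(self) -> float:
        return self.defects / self.changes if self.changes else 0.0

def ready_to_expand(ai: CohortMetrics, human: CohortMetrics,
                    max_rate_ratio: float = 1.1) -> bool:
    """Expand AI usage only if its defect rate is within 10% of the human baseline."""
    if human.defect_rate == 0:
        return ai.defect_rate == 0
    return ai.defect_rate / human.defect_rate <= max_rate_ratio

# Illustrative numbers only.
ai = CohortMetrics(changes=120, defects=18, rework_hours=95.0)
human = CohortMetrics(changes=300, defects=30, rework_hours=160.0)

print(f"AI defect rate:    {ai.defect_rate:.2%}")
print(f"Human defect rate: {human.defect_rate:.2%}")
print("Expand AI usage" if ready_to_expand(ai, human) else "Hold at pilot scale")
```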

A simple 30/60/90 rollout

  • 30 days: Publish an AI use policy, inventory tools, run 2-3 narrow pilots (e.g., code review assist, test summarization, asset tagging). Launch baseline training for managers and ICs.
  • 60 days: Stand up an approved tools list, data controls, and audit logging. Add quality gates. Start reporting weekly metrics and incident reviews.
  • 90 days: Compare pilot metrics vs. baselines. Scale what meets quality targets. Pause or redesign what doesn't. Share outcomes and next steps with the whole org.

What the EA story teaches

  • AI isn't a silver bullet. Without clear guardrails, it just moves defects downstream.
  • People fear secret roadmaps. If employees think the plan is "use AI or be replaced," they'll resist. Show the path: skills to learn, roles to grow into, and safeguards for quality and ethics.
  • Customers watch signals. Filters and disclosures are becoming normal. Treat transparency as part of product quality.

Helpful references

Industry adoption data: GDC State of the Industry Survey
EA investor information: EA Investor Relations

Skill up your managers and teams

If you're formalizing AI roles, policies, and workflows, a structured path helps. Start with role-specific learning and practical certifications that map to your policy and tooling stack.

Bottom line

Push for outcomes, not blanket AI mandates. Define where AI helps, keep humans accountable, measure quality, and communicate the plan. That's how you get the upside without burning trust - or shipping avoidable flaws.

