HP Cuts 4,000-6,000 Jobs as AI Resets Product Development Priorities
HP plans to cut 4,000 to 6,000 roles globally by October 2028 as it leans harder on AI to speed product development and streamline operations. Its profit outlook missed analyst expectations, and the stock fell about 6%, signaling a reset in how the company will build and support products.
The reductions will land mostly in product development, internal operations, and customer support. HP expects about $1 billion in annual savings by 2028, with roughly $650 million in restructuring costs to get there.
What HP Announced
HP currently employs about 56,000 people. Over the next three years, it will shrink that number while increasing AI use across the product pipeline and support channels.
Leadership framed the move as a path to faster innovation, better customer outcomes, and higher productivity. The message to builders is clear: smaller teams, tighter focus, and aggressive use of AI-native workflows.
Why This Matters for Product Development
AI isn't a side project anymore. It's becoming the default way to design, validate, build, and support products with fewer people and faster cycles.
- Product teams will be expected to ship more with leaner headcount by embedding AI in discovery, research synthesis, prototyping, coding, QA, and support.
- Role profiles shift: more ML-fluent PMs and designers, platform-minded engineers, and AI ops tooling instead of manual coordination.
- Support and operations consolidate around automation, agentic workflows, and knowledge systems fed by product telemetry.
- Quality bars move from "does it work" to "is it reliable, safe, and cost-efficient at scale" with robust evals baked into CI/CD.
Broader Market Signals
HP is not alone. Clifford Chance cut 10% of business services roles in London, citing AI. PwC scaled back hiring targets. Klarna says AI-driven efficiency let it nearly halve its workforce over three years, largely through attrition.
An education research charity warned up to 3 million lower-skilled UK jobs could be at risk by 2035, with trades, machine operation, and admin roles most exposed. Meanwhile, U.S. tech firms continue layoffs as consumer spending cools and policy uncertainty lingers.
On the infrastructure side, cloud providers are buying massive amounts of memory to support companies such as Anthropic and OpenAI as they build advanced models. Rising memory prices could pressure margins for hardware makers such as HP, Dell, and Acer.
What Product Leaders Should Do Now
- Run a 90-day portfolio audit: identify where AI can speed discovery, reduce support volume, and compress cycle time. Prioritize use cases with measurable impact in under two quarters.
- Decide build vs. buy: centralize model access, data retrieval, and evals in a platform your teams can reuse. Avoid one-off tools that fragment data and quality.
- Instrument quality: create eval suites for accuracy, safety, latency, and cost per action. Make evals part of your CI pipeline and block deploys that regress (see the first sketch after this list).
- Tame cost drivers: model size, context length, and memory usage drive spend. Set budget guards in code, and use caching and smaller models where performance allows (second sketch below).
- Govern data: implement safe retrieval, PII redaction, and human-in-the-loop for sensitive actions. Log prompts, responses, and outcomes for audits (third sketch below).
- Refactor support: deflect with high-quality self-serve, AI agents for first contact, and smart routing to humans for edge cases. Feed learnings back into product.
- Reskill the team: upskill PMs, designers, and engineers on LLM patterns, RAG, evals, and prompt strategy. Shift hiring to fewer generalists with strong AI fluency.
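To make the eval gate concrete, here is a minimal sketch. The `eval_cases.json` file, the placeholder `run_model` function, and the accuracy and latency thresholds are all assumptions to swap for your own harness, not recommendations.

```python
# ci_eval_gate.py - fail the CI job when model quality or latency regresses.
# A sketch: eval_cases.json, run_model(), and the thresholds are placeholders.
import json
import statistics
import sys
import time

ACCURACY_FLOOR = 0.92      # block the deploy if accuracy drops below this
LATENCY_P95_MS = 1500      # ...or if p95 latency exceeds this budget


def run_model(prompt: str) -> str:
    """Placeholder: wire this to your real model endpoint or local runtime."""
    return "placeholder answer"


def main() -> int:
    with open("eval_cases.json") as f:
        cases = json.load(f)  # [{"prompt": ..., "expected": ...}, ...]

    correct, latencies = 0, []
    for case in cases:
        start = time.perf_counter()
        answer = run_model(case["prompt"])
        latencies.append((time.perf_counter() - start) * 1000)
        if case["expected"].strip().lower() in answer.strip().lower():
            correct += 1

    accuracy = correct / len(cases)
    p95 = statistics.quantiles(latencies, n=20)[-1]  # rough p95 in ms
    print(f"accuracy={accuracy:.3f} p95_latency_ms={p95:.0f}")

    if accuracy < ACCURACY_FLOOR or p95 > LATENCY_P95_MS:
        print("Eval gate failed: blocking deploy")
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Running this as a CI step and treating a non-zero exit code as a failed build is the "block deploys that regress" mechanic in its simplest form.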
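The budget guard and cache from "Tame cost drivers" can be a thin wrapper around the model call. In this sketch the per-token price, the per-request budget, and `call_llm` are illustrative placeholders, not real vendor pricing or a real client library.

```python
# cost_guard.py - cap per-request spend and reuse cached answers.
# A sketch: PRICE_PER_1K_TOKENS, REQUEST_BUDGET_USD, and call_llm() are
# illustrative placeholders.
import hashlib

PRICE_PER_1K_TOKENS = 0.002   # assumed blended price, $/1K tokens
REQUEST_BUDGET_USD = 0.01     # hard cap per request

_cache: dict[str, str] = {}


def call_llm(prompt: str, max_tokens: int) -> str:
    """Placeholder for the real model call."""
    return f"answer (generated within a {max_tokens}-token budget)"


def estimate_cost(prompt: str, max_tokens: int) -> float:
    # crude estimate: roughly 4 characters per token for the prompt
    prompt_tokens = len(prompt) / 4
    return (prompt_tokens + max_tokens) / 1000 * PRICE_PER_1K_TOKENS


def answer(prompt: str, max_tokens: int = 512) -> str:
    key = hashlib.sha256(f"{prompt}|{max_tokens}".encode()).hexdigest()
    if key in _cache:                      # serve repeats from cache for free
        return _cache[key]

    if estimate_cost(prompt, max_tokens) > REQUEST_BUDGET_USD:
        max_tokens = 128                   # degrade output size, don't overspend

    result = call_llm(prompt, max_tokens)
    _cache[key] = result
    return result


if __name__ == "__main__":
    print(answer("Summarize our returns policy for a customer."))
```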
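For "Govern data", one minimal pattern is to redact before a prompt leaves your boundary, route sensitive actions to a human, and log everything for audit. The regexes here cover only emails and simple phone numbers, and `audit_log.jsonl` is an illustrative path; this is a sketch, not a complete PII solution.

```python
# governed_call.py - redact obvious PII, keep humans in the loop for sensitive
# actions, and log prompt, response, and outcome for later audit.
# A sketch: the regexes and the audit_log.jsonl path are illustrative only.
import json
import re
import time

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")


def redact(text: str) -> str:
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)


def call_llm(prompt: str) -> str:
    """Placeholder for the real model call."""
    return "placeholder answer"


def governed_call(user_text: str, *, sensitive: bool = False) -> str:
    prompt = redact(user_text)
    if sensitive:
        # human-in-the-loop: queue for review instead of acting automatically
        outcome, response = "queued_for_review", "A specialist will follow up."
    else:
        outcome, response = "auto_answered", call_llm(prompt)

    with open("audit_log.jsonl", "a") as log:
        log.write(json.dumps({
            "ts": time.time(),
            "prompt": prompt,          # already redacted
            "response": response,
            "outcome": outcome,
        }) + "\n")
    return response


if __name__ == "__main__":
    print(governed_call("My email is jane@example.com and my screen is cracked."))
```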
Product Strategy Implications
HP posted $14.6 billion in revenue for its fiscal fourth quarter, which ended October 31, and says AI-enabled PCs made up over 30% of shipments. Demand is real, but it comes with BOM pressure from memory and compute.
- Differentiate with on-device features where possible: privacy, latency, offline use, and cost control matter as much as raw capability.
- Prototype local and hybrid inference paths to reduce serving costs and improve responsiveness.
- Design graceful degradation: fallback models, smaller contexts, or rule-based flows when budgets or networks are stressed (see the sketch after this list).
- Negotiate aggressively on memory and consider design choices that reduce memory footprint without killing UX.
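A sketch of the graceful-degradation idea: try the preferred path first, then fall back to a smaller or local model, then to a rule-based flow. The three handlers are placeholders, and the first one deliberately fails to show the degradation in action.

```python
# fallback_chain.py - try the preferred path first, degrade gracefully after that.
# A sketch: the handlers are stand-ins for a large hosted model, a smaller or
# on-device model, and a rule-based flow.
from collections.abc import Callable


def large_hosted_model(query: str) -> str:
    raise TimeoutError("simulated network or budget pressure")


def small_local_model(query: str) -> str:
    return f"[local model] short answer to: {query}"


def rule_based_flow(query: str) -> str:
    return "Here is a link to our troubleshooting guide."


FALLBACKS: list[Callable[[str], str]] = [
    large_hosted_model,   # best quality, highest cost and latency
    small_local_model,    # cheaper, on-device, lower latency
    rule_based_flow,      # always available, no model required
]


def respond(query: str) -> str:
    for handler in FALLBACKS:
        try:
            return handler(query)
        except Exception:
            continue              # degrade to the next tier
    return "Sorry, please try again later."


if __name__ == "__main__":
    print(respond("My laptop won't charge."))
```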
Risks to Manage
- Vendor lock-in and surprise pricing changes: abstract providers behind a broker layer (see the sketch after this list).
- Data leakage and compliance: strict access control, redaction, and audit trails.
- Model drift and unreliable outputs: continuous evals, regression thresholds, and shadow deployments.
- Over-automation: define clear handoff rules to humans and monitor customer sentiment closely.
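A sketch of the broker-layer idea from the first risk above. The two providers are hypothetical stand-ins; the point is that product code talks only to `Broker`, so swapping or re-ordering vendors becomes a configuration change rather than a rewrite.

```python
# model_broker.py - keep product code independent of any one model vendor.
# A sketch: ProviderA and ProviderB are stand-ins; wire them to real client
# libraries (and real pricing) in your own implementation.
from typing import Protocol


class ModelProvider(Protocol):
    name: str

    def complete(self, prompt: str) -> str: ...


class ProviderA:
    name = "provider_a"

    def complete(self, prompt: str) -> str:
        return f"[{self.name}] response to: {prompt}"


class ProviderB:
    name = "provider_b"

    def complete(self, prompt: str) -> str:
        return f"[{self.name}] response to: {prompt}"


class Broker:
    """Routes calls to the primary provider and fails over to the rest."""

    def __init__(self, providers: list[ModelProvider]) -> None:
        self.providers = providers

    def complete(self, prompt: str) -> str:
        last_error = None
        for provider in self.providers:
            try:
                return provider.complete(prompt)
            except Exception as err:      # pricing change, outage, rate limit
                last_error = err
        raise RuntimeError("all providers failed") from last_error


if __name__ == "__main__":
    broker = Broker([ProviderA(), ProviderB()])
    print(broker.complete("Draft a return-authorization email."))
```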
Team Skills That Matter Now
- LLM application patterns: retrieval, tool use, agents, and evals.
- Data pipeline basics: quality, freshness, and observability.
- AI-first UX: intent capture, error recovery, and transparency.
- Cost-aware engineering: budget caps, caching, and telemetry-informed tuning.
If you're building your upskilling plan, here's a curated starting point for product roles: AI upskilling paths by job.
Bottom Line
HP's cuts signal a shift to leaner teams using AI across the product lifecycle. For product leaders, the play is simple: ship faster, instrument quality and cost, and reskill your team. Those who do will keep velocity and margin while others fight their own tooling.