HP's AI Pivot: 4,000-6,000 Layoffs and What It Means for Product and Support Teams
HP plans to cut 4,000-6,000 roles globally by fiscal year 2028 as it leans harder into AI to streamline operations, improve efficiency, and lift customer satisfaction. The company expects about $1 billion in gross run-rate savings over three years.
Teams in internal operations, product development, and customer support will feel the biggest impact. This follows a February round in which HP let go of 1,000-2,000 employees as part of a broader restructure.
Why HP Is Moving Now
AI-enabled PCs are shaping demand: in the quarter ending October 31, more than 30% of HP's shipments were AI-enabled devices. That same AI demand has pushed up DRAM and NAND prices, an increase HP expects to feel in the second half of fiscal year 2026.
CEO Enrique Lores said the company is "taking a prudent approach" while qualifying lower-cost suppliers, reducing memory configurations, and taking price actions. Translation: tighter configs, tighter budgets, faster iteration.
If You Work in Product Development
- Prioritize AI features with a clear problem-solution fit and measurable business impact. Kill "nice to have" experiments early.
- Define success upfront: target metrics like feature adoption, engagement lift, defect rate reduction, and time-to-ship.
- Ship small, safe pilots first. Use offline evaluations and A/B tests before wide rollout.
- Keep a human-in-the-loop for risky use cases (recommendations, summarization, code generation). Build escalation paths inside the product.
- Tighten your data layer. You need clean event data, well-scoped schemas, and documented prompts to debug model behavior.
- Control costs: cache results, cap context lengths, and monitor unit economics (inference cost per active user or per task). A minimal sketch follows this list.
- Document everything: prompts, evaluation datasets, failure modes, and rollback runbooks. Don't make future releases guesswork.
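To make the cost-control point concrete, here is a minimal Python sketch of caching, context capping, and cost-per-task tracking. Everything here is illustrative: call_model stands in for whatever client your team uses, and the per-token prices and the four-characters-per-token estimate are placeholders, not vendor rates.

```python
import hashlib

# Illustrative numbers only: real per-token prices and context limits depend on
# the model and vendor your team actually uses.
PRICE_PER_1K_INPUT_TOKENS = 0.0005
PRICE_PER_1K_OUTPUT_TOKENS = 0.0015
MAX_CONTEXT_CHARS = 8000  # crude cap; use a real tokenizer if you have one

_cache = {}                            # prompt hash -> cached response
_spend = {"tasks": 0, "dollars": 0.0}  # running unit-economics tally

def call_model(prompt):
    """Stand-in for whatever model client your team uses."""
    return f"(model response to {len(prompt)} chars of prompt)"

def answer(prompt):
    prompt = prompt[-MAX_CONTEXT_CHARS:]              # cap context length
    key = hashlib.sha256(prompt.encode()).hexdigest()
    if key in _cache:                                 # cache hit: zero marginal cost
        return _cache[key]
    response = call_model(prompt)
    _cache[key] = response
    in_tok, out_tok = len(prompt) / 4, len(response) / 4  # rough ~4 chars per token
    _spend["tasks"] += 1
    _spend["dollars"] += (in_tok * PRICE_PER_1K_INPUT_TOKENS
                          + out_tok * PRICE_PER_1K_OUTPUT_TOKENS) / 1000
    return response

def cost_per_task():
    return _spend["dollars"] / max(_spend["tasks"], 1)

print(answer("Summarize this support ticket: printer offline after update"))
print(f"cost per task so far: ${cost_per_task():.6f}")
```

Even this crude version makes regressions visible: if cost_per_task() jumps after a prompt change, you find out before finance does.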
If You Work in Customer Support
- Use AI for first-response triage, summaries, and knowledge surfacing; keep humans on edge cases, complex billing, and escalations.
- Measure what matters: CSAT, first-contact resolution, and deflection quality, not just handle time.
- Build a living knowledge base. Tag sources, set freshness SLAs, and auto-summarize updates after each resolved ticket.
- Set guardrails: response tone, restricted topics, and compliance filters. Log every AI suggestion and final human decision for QA.
- Adopt a "two-pass" workflow for critical issues: AI draft, human verify. Keep red team tests for risky prompts and adversarial inputs.
- Train agents on prompt patterns, quick fact-checking, and privacy-safe workflows. The tool is only as good as the operator.
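To show what the two-pass workflow and decision logging might look like in practice, here is a minimal Python sketch. The draft_reply stub, the AUDIT_LOG path, and the restricted-topic list are placeholders for your own model client, logging pipeline, and policy.

```python
import json
import time

AUDIT_LOG = "ai_support_audit.jsonl"   # hypothetical log location
RESTRICTED_TOPICS = ("legal threat", "chargeback", "account deletion")  # example policy only

def draft_reply(ticket_text):
    """Stand-in for the model call that produces the first-pass draft."""
    return "Thanks for reaching out. Based on your description, try the following steps..."

def two_pass_reply(ticket_id, ticket_text, human_review):
    draft = draft_reply(ticket_text)
    # Guardrail: flag restricted topics so the human pass escalates instead of sending.
    escalate = any(topic in ticket_text.lower() for topic in RESTRICTED_TOPICS)
    final = human_review(draft, escalate)   # second pass: agent edits, approves, or rejects
    with open(AUDIT_LOG, "a") as log:       # log AI suggestion + human decision for QA
        log.write(json.dumps({
            "ticket_id": ticket_id,
            "ts": time.time(),
            "ai_draft": draft,
            "escalated": escalate,
            "final_reply": final,
        }) + "\n")
    return final

# The "human" pass is whatever callable your agent tooling exposes; here, a stub.
print(two_pass_reply("T-1042", "Where is my order?",
                     lambda draft, escalate: "ESCALATED" if escalate else draft))
```

The point of the log is QA sampling: reviewers can pull a random slice of tickets and compare the AI draft against what the agent actually sent.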
AI Works Until It Doesn't: Avoid "AI Debt"
AI debt is what piles up when teams rush deployments without the foundations: bad data, weak guardrails, unclear ownership, and no post-launch review. The costs show up as rework, outages, complaints, and brand damage.
- Pick the right problems: repetitive, high-volume, high-friction tasks with clear success metrics.
- Get data-ready: source of truth, access controls, retention rules, and PII handling.
- Pilot with controls: offline evals, canary launches, and kill switches (see the sketch after this list).
- Human-in-the-loop where risk is non-trivial. Define who approves, when, and how.
- Governance: prompt and version control, evaluation sets, incident-response playbooks, and model card documentation.
- Post-launch: monitor quality and cost weekly; run error sampling; ship fixes quickly.
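Here is a minimal sketch of the canary-plus-kill-switch idea, assuming per-user hash bucketing and an environment-variable flag; both names are hypothetical, and most feature-flag systems give you equivalents out of the box.

```python
import hashlib
import os

CANARY_PERCENT = 5                       # start small; widen only as quality holds
KILL_SWITCH_ENV = "AI_FEATURE_DISABLED"  # hypothetical flag ops can flip instantly

def ai_enabled(user_id):
    """Route a stable slice of users to the AI path, with a global kill switch."""
    if os.environ.get(KILL_SWITCH_ENV) == "1":
        return False                                      # everyone falls back to the old path
    bucket = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 100
    return bucket < CANARY_PERCENT                        # same user always lands in the same bucket

def handle_request(user_id, payload):
    if ai_enabled(user_id):
        return "ai_path: " + payload        # new AI-assisted flow (canary)
    return "baseline_path: " + payload      # existing, known-good flow

print(handle_request("user-123", "summarize ticket"))
```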
90-Day Action Plan
- Audit workflows in your team. List 5-10 tasks to automate or augment and rank by effort vs. impact.
- Stand up a small evaluation harness: test prompts, track accuracy, latency, and cost per task. A sketch follows this list.
- Ship two low-risk pilots: one internal (agent assist or dev tooling), one customer-facing with strict guardrails.
- Define success metrics and a review cadence. If a pilot misses its target, fix it or stop it.
- Create a shared prompt library and changelog so everyone learns from what works.
- Align with procurement on vendors, data privacy, and unit economics before you scale.
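As a starting point for the evaluation harness, here is a plain-Python sketch. The golden set, the keyword check, and the flat cost_per_call figure are stand-ins; replace them with anonymized real cases, your actual scoring rule, and metered spend.

```python
import statistics
import time

# Tiny golden set for illustration; in practice, pull anonymized real cases.
GOLDEN_SET = [
    {"prompt": "Summarize: printer goes offline after firmware update", "expected_keyword": "firmware"},
    {"prompt": "Summarize: laptop battery drains while in sleep mode", "expected_keyword": "battery"},
]

def call_model(prompt):
    """Stand-in for your real model client."""
    return prompt.lower()   # echoes the input so the demo scores itself

def run_eval(cost_per_call=0.002):
    """Report accuracy, median latency, and an assumed flat cost per task."""
    hits, latencies = 0, []
    for case in GOLDEN_SET:
        start = time.perf_counter()
        output = call_model(case["prompt"])
        latencies.append(time.perf_counter() - start)
        hits += case["expected_keyword"] in output.lower()
    return {
        "accuracy": hits / len(GOLDEN_SET),
        "median_latency_s": round(statistics.median(latencies), 6),
        "cost_per_task_usd": cost_per_call,   # replace with metered spend from billing data
    }

print(run_eval())
```

Run it on every prompt or model change and keep the results in the changelog next to the prompt library.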
Skills That Will Keep You Valuable
- For Support: prompt patterns for triage and summaries, knowledge-base structuring, AI QA sampling, privacy-safe workflows, and effective escalation.
- For Product: problem framing, evaluation design, prompt and retrieval engineering, cost/perf tuning, and shipping guardrails.
- For Both: data literacy, writing clear SOPs, incident response basics, and communicating trade-offs to stakeholders.
The Bigger Picture
Across tech, companies are replacing some work with AI, and the outcomes are mixed. The lesson is simple: execute with discipline, or pay for it later.
If you're in product or support, this moment rewards people who can ship useful AI safely and measurably. Keep your scope tight, your metrics honest, and your process visible. That's how you stay essential, regardless of the org chart churn.