HP moves AI off the cloud and onto your PC to protect your data

HP is banking on AI PCs that run models locally to keep data private and under control. Expect AI PCs to become the default as firms seek speed, lower costs, and compliance in Asia.

Published on: Dec 10, 2025

HP's AI PC play: local models, safer data, and a practical path for adoption

HP is steering its device strategy toward AI that runs on the device, not the cloud. Chief Commercial Officer David McQuarrie says the shift is about privacy, security, and control of data. His take: "In a world where sovereign data retention matters, people want to know that if they input data to a model, the model won't train on their data."

He expects AI PCs to become the default choice. As he put it, "Longer term, it will be impossible not to buy an AI PC, simply because there's so much power in them."

Why on-device AI matters now

Local models keep sensitive data on the machine and cut exposure to third-party services. They're faster for everyday tasks, cheaper at scale for heavy inference, and easier to govern in regulated markets. For leaders, this lowers risk while improving user experience: two levers that actually move adoption.

Smaller models, big edge for SMBs

McQuarrie highlights the power of compact, local models for small businesses and individual pros. They have valuable, context-rich data that doesn't need to leave their environment. This approach supports role-specific workflows, reduces latency, and strips out recurring cloud costs for routine inference.

Regulation is pulling AI to the edge, especially in Asia

Many Asian governments enforce rules that keep citizen data within their borders. China is strict on data residency. South Korea is building national AI capability with local players like Naver. Singapore is funding regionally tuned models such as SEA-LION, built for Southeast Asian languages and contexts (AI Singapore's SEA-LION).

HP's opening: Asia is small today, but it's growing fast

Asia is currently HP's smallest market, yet it grew the fastest in the last fiscal year. Sales from Asia-Pacific and Japan were up 7% to $13.3 billion, about a quarter of HP's $55.3 billion total. McQuarrie sees room to be "disruptive" across the region.

Adoption reality check

Interest in AI is high. Actual deployment is hard. Recent industry research shows many companies are still stuck in pilots and proofs of concept. Execution is the gap to close (McKinsey: State of AI).

McQuarrie believes adoption in Asia could move as fast as, or faster than, other regions. Consumer comfort with AI is higher in several Asian markets, which often spills into workplace behavior.

What this means for executives

  • Define your data boundaries. Classify data by sensitivity and residency rules. Decide what must stay on device, what can live on-prem, and what can go to the cloud.
  • Run focused pilots with AI PCs. Start with roles that suffer from latency or privacy hurdles (sales, support, finance). Track accuracy, response time, and user satisfaction.
  • Adopt smaller, local models for context-heavy tasks. Pair them with retrieval from approved internal sources so the model stays lean while answers stay relevant (see the retrieval sketch after this list).
  • Negotiate the right device specs. Prioritize strong NPUs, RAM, secure enclaves, on-device encryption, model sandboxing, and admin controls with detailed audit logs.
  • Plan edge model operations. Standardize versioning, update cadence, rollback, and telemetry that protects privacy. Treat local models like critical apps, not side projects.
  • Design for data sovereignty markets. Use local cloud partners where needed and keep content moderation, logging, and training data compliant with local law.
  • Build a simple trust policy. No auto-training on user inputs. Clear prompts, clear disclaimers, and automatic redaction. Keep humans in the loop for high-impact actions.
  • Budget with TCO, not hype. Compare cloud inference costs vs. on-device over 12-24 months, including downtime risk, offline capability, and support overhead (see the cost sketch after this list).
  • Make AI invisible in the workflow. Integrate features where work already happens. Reduce clicks; don't add new dashboards.
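
To make the local-model-plus-retrieval bullet concrete, here is a minimal Python sketch. The document store, the keyword-overlap ranking, and the run_local_model stub are illustrative assumptions for this article, not a specific vendor runtime or an HP product.

```python
# Sketch: retrieve context from approved internal documents, then pass it
# to a small on-device model. Everything here is a placeholder pattern.

from collections import Counter

# Assumed stand-in for an approved internal knowledge source.
APPROVED_DOCS = {
    "refund-policy": "Refunds are issued within 14 days of purchase ...",
    "support-hours": "Support is available 9am-6pm SGT on weekdays ...",
}

def retrieve(query: str, docs: dict[str, str], top_k: int = 1) -> list[str]:
    """Rank documents by simple keyword overlap with the query."""
    q_terms = Counter(query.lower().split())
    scored = []
    for name, text in docs.items():
        overlap = sum((Counter(text.lower().split()) & q_terms).values())
        scored.append((overlap, name, text))
    scored.sort(reverse=True)
    return [text for _, _, text in scored[:top_k]]

def run_local_model(prompt: str) -> str:
    # Placeholder for an on-device model call (e.g. a compact LLM served
    # locally). Swap in whatever runtime your AI PC fleet standardizes on.
    return f"[local model would answer here, given {len(prompt)} prompt chars]"

def answer(query: str) -> str:
    # Keep the model lean: only the retrieved context travels with the prompt.
    context = "\n".join(retrieve(query, APPROVED_DOCS))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return run_local_model(prompt)

print(answer("When do refunds get processed?"))
```

The point of the pattern is that neither the documents nor the prompt ever leave the device; only the retrieval logic and the model runtime need to be managed by IT.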
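
And a back-of-the-envelope version of the TCO comparison. Every figure below (per-user cloud spend, device premium, support overhead) is an assumed placeholder to show the shape of the calculation, not real pricing from HP or any cloud provider.

```python
# Rough TCO comparison sketch for the budgeting bullet above.
# All numbers are illustrative assumptions.

MONTHS = 24
USERS = 500

# Cloud path: assumed per-user monthly inference spend.
cloud_cost_per_user_month = 18.0
cloud_total = cloud_cost_per_user_month * USERS * MONTHS

# On-device path: assumed hardware premium per AI PC plus ongoing support.
device_premium_per_user = 300.0      # one-time uplift vs. a standard PC
support_cost_per_user_month = 3.0    # fleet management, model updates
device_total = (device_premium_per_user * USERS
                + support_cost_per_user_month * USERS * MONTHS)

print(f"Cloud inference over {MONTHS} months: ${cloud_total:,.0f}")
print(f"On-device over {MONTHS} months:       ${device_total:,.0f}")
```

Plug in your own contract rates and fleet size; the comparison only becomes meaningful once downtime risk, offline capability, and support overhead are priced into both columns.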

What to look for in an AI PC rollout

  • Hardware: NPU performance (TOPS), RAM/VRAM headroom, battery life under AI load, and thermal stability.
  • Security: Device encryption by default, secure enclaves for model/runtime, policy controls to block data egress.
  • Model stack: Support for compact LLMs and speech/vision models; easy updates without breaking IT policy.
  • IT controls: Centralized provisioning, telemetry with privacy controls, role-based access, and audit trails.
  • User experience: Sub-second responses for common tasks, offline capability, and features that feel native.

The end state: AI that "just works"

McQuarrie's north star is simple: make AI so seamless that people benefit without thinking about it. "The future of work is a device that makes your experience better and your productivity greater. The fact that we're using AI in the background? They don't need to know that."

If your roadmap leans on privacy, speed, and fit for local markets, on-device AI isn't a nice-to-have; it's the operating baseline.

Want a quick way to upskill your org on edge AI, AI PCs, and governance? Explore curated learning paths by role here: Complete AI Training - Courses by Job.

