AI-individualism: how chatbots erode judgment and weaken the bonds between us

AI now drafts our thoughts and nudges our choices, thinning social ties and trust. Use it to generate options, keep sources and human checks in place, or small errors can sway big decisions.

Published on: Nov 17, 2025

AI makes thinking easier. That's the problem.

Media professor Petter Bae Brandtzæg has a blunt message: when AI drafts our thoughts and words, our judgment dulls. Three years ago, most people hadn't heard of ChatGPT. Today, hundreds of millions use AI baked into social feeds, documents, search, and email. Opting out isn't really an option.

Brandtzæg and colleagues at the University of Oslo, working with SINTEF, studied how generative systems affect users, institutions, and public discourse. Their work points to a shift in how we relate to knowledge, authority, and even each other.

From networked individualism to AI-individualism

Earlier internet eras gave us "networked individualism": tools that let people build flexible networks beyond family and neighbors. With generative systems, the next step appears: "AI-individualism."

Here, AI doesn't just connect us; it plays roles we once reserved for people: assistant, teammate, confidant. It can satisfy personal, social, and emotional needs on demand. That enables more autonomy, but also eases us away from community reliance. If more of our daily support comes from systems, social ties can thin.

Model power: when systems set the frame

Another idea from the research: "model power." In short, whoever controls the most accepted model of reality sets the frame others must work within. In the 1970s, that was media, science, and other authorities. Today, it's AI systems whose output now sits on top of everything, from search to news to public reports.

Think of it as an AI layer covering the public square. If one layer dominates, you get a model monopoly. That can steer beliefs and decisions at scale. And because social AIs run on conversation, they can feel like equal partners. Brandtzæg calls this a "pseudo-dialog" that projects independence while quietly guiding the exchange.

What we're seeing on the ground

In surveys of high school students, many say they prefer AI because it "goes straight to the point," sparing them long searches. Some even use it for comfort and advice on hard topics.

In a blind test on mental health questions, more than half of participants preferred chatbot responses over those from a professional. About 30% liked both. The takeaway isn't that chatbots are better therapists; it's that their format and fluency are very persuasive.

Public concern is high. According to The Norwegian Communications Authority (Nkom), 91% of Norwegians worry about false information from services like Copilot, ChatGPT, and Gemini. One recent case: a municipal report used to propose closing eight schools in Tromsø was built on fabricated sources produced by AI. Trust breaks fast when errors like this slip into real decisions.

There's also a cultural tilt. The large models many rely on are trained mostly on U.S. data. Estimates suggest as little as 0.1% of content in some systems is Norwegian. That can nudge norms, language, and judgments toward a single center, with minority perspectives sidelined.

Nkom and other regulators are watching, but institutions and teams shouldn't wait.

Why education, PR, and communications should care

Students now learn with a system that completes thoughts for them. That flattens the struggle that builds critical thinking. Comms teams draft with tools that standardize tone and logic. That can increase speed, but also smuggle in subtle errors and sameness.

For public institutions, the risk is clear: persuasive output plus misplaced trust equals weak decisions. For brands, it's reputational: a polished statement with a quiet factual miss can spiral quickly.

Practical guardrails you can apply now

  • Set explicit AI-use policies: what's allowed, what isn't, and who signs off. Make a human accountable for every published piece.
  • Require source trails: no claim without a link or citation you can verify. Two independent sources for sensitive topics.
  • Add a "wrongness check": ask the system, "What could be wrong here? What would a critic say?" Then verify manually.
  • Rotate models and compare outputs. Don't rely on a single system. Pull in local and domain-specific sources.
  • Flag AI-assisted drafts internally. Editors should know when extra scrutiny is needed.
  • For education: keep writing and reasoning reps. Short, in-class prompts without tools. Debates. Oral defenses. Reading checks.
  • For student well-being: route mental-health topics to trained staff. Use clear disclaimers and escalation protocols.
  • Keep logs: prompts, versions, and decisions. This helps with audits, training, and accountability.
  • Protect data: avoid feeding sensitive information into external systems. Use approved, secure channels.
  • Measure: track correction rates, time-to-verify, and error types. Improve the workflow, not just the prompts.
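Several of the guardrails above (logging prompts and versions, tracking correction rates and time-to-verify) can be combined in one lightweight record keeper. A minimal sketch, assuming a team rolls its own tooling; all class and field names here are hypothetical, not from any specific product:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DraftRecord:
    """One AI-assisted draft: prompt, model, output, and review outcome."""
    prompt: str
    model: str
    output: str
    reviewer: str = ""
    corrections: int = 0            # factual fixes made during human review
    minutes_to_verify: float = 0.0  # time spent checking sources
    created: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class DraftLog:
    """Keeps draft records and reports simple workflow metrics."""

    def __init__(self) -> None:
        self.records: list[DraftRecord] = []

    def add(self, record: DraftRecord) -> None:
        self.records.append(record)

    def correction_rate(self) -> float:
        """Share of drafts that needed at least one factual fix."""
        if not self.records:
            return 0.0
        flagged = sum(1 for r in self.records if r.corrections > 0)
        return flagged / len(self.records)

    def avg_verify_minutes(self) -> float:
        """Average human verification time per draft."""
        if not self.records:
            return 0.0
        return sum(r.minutes_to_verify for r in self.records) / len(self.records)

# Example: two drafts, one of which needed factual corrections.
log = DraftLog()
log.add(DraftRecord("Summarize budget report", "model-a", "draft text",
                    reviewer="editor1", corrections=2, minutes_to_verify=12.0))
log.add(DraftRecord("Press statement", "model-b", "draft text",
                    reviewer="editor2", corrections=0, minutes_to_verify=5.0))
print(log.correction_rate())     # 0.5
print(log.avg_verify_minutes())  # 8.5
```

The point of logging is the last item on the list: once correction rate and time-to-verify are numbers rather than impressions, you can improve the workflow instead of just the prompts.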

How to work with the "AI layer" without losing judgment

Use systems to draft options, not final answers. Separate idea generation from approval. Force a pause before publishing: a quick fact sweep, a "reverse search" on surprising claims, and a short reasoning check by a second person.
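That pause before publishing can be enforced in code rather than left to habit. A minimal sketch of a publish gate under the checks named above; the function and field names are hypothetical, illustrating the pattern rather than any real tool:

```python
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    sources_verified: bool = False  # every claim has a checked citation
    fact_sweep_done: bool = False   # surprising claims reverse-searched
    second_reviewer: str = ""       # a human other than the author

def ready_to_publish(draft: Draft) -> tuple[bool, list[str]]:
    """Return (ok, blockers): publishing is blocked until every check passes."""
    blockers = []
    if not draft.sources_verified:
        blockers.append("unverified sources")
    if not draft.fact_sweep_done:
        blockers.append("fact sweep not done")
    if not draft.second_reviewer:
        blockers.append("no second reviewer")
    return (not blockers, blockers)

ok, why = ready_to_publish(Draft("AI-assisted statement", sources_verified=True))
print(ok, why)  # False ['fact sweep not done', 'no second reviewer']
```

Returning the list of blockers, not just a boolean, matters: the editor sees exactly which check failed instead of a bare rejection.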

For strategy, keep a human-first baseline. Let teams write a thesis before consulting a tool. Then compare: where did the system add clarity, and where did it smooth over nuance?

Further reading

If you want to go deeper on the theory, see the Oxford Academic chapter on AI-individualism: Oxford Intersections: AI in Society.

Team training

If you need structured upskilling for educators or comms teams, this directory can help you find practical options by role: AI courses by job.

The bottom line

Fluent output isn't the same as sound thinking. Systems are persuasive by design. Treat them as collaborators that draft-then make sure people decide. That's how you keep speed without losing judgment.

