IPRA Sounds Alarm on AI Risks to Children at UN Forum, Urges Responsible Design

At a UN event, IPRA warned that AI can expose kids to privacy breaches, bias, and harmful content. Writers should report clearly and transparently and follow ethical safeguards.

Categorized in: AI News Writers
Published on: Dec 03, 2025

IPRA Flags AI Risks for Children at UN Event - What Writers Need to Know

The International Public Relations Association (IPRA) UN Bureau took part in a Journalists and Writers Foundation (JWF) event at the United Nations focused on "Empowering Children's Rights," with youth voices centered on building a sustainable and inclusive future. The conversation zeroed in on the ethical risks Artificial Intelligence poses to young people, and on what responsible practice should look like.

Diallo Kiernon presented specific risks children face with AI: data misuse, privacy violations, and exposure to violent content. With AI moving into classrooms and homework tools, the call was clear: safeguards must keep up with usage.

IPRA noted that generative AI can help public relations, but it also opens the door to bias, misuse, deception, and harm, creating reputational risk for practitioners. In response, the association has issued guidance that interprets five of the 18 articles in its code of conduct for AI use in communications and beyond. You can review the IPRA Code of Conduct here: ipra.org.

Why this matters for writers

Writers inform parents, educators, and policy makers, and they influence how AI tools are adopted in schools and homes. Clear language, evidence over hype, and transparency about AI use in our own work are now baseline expectations. This isn't about fear; it's about accuracy, consent, and accountability.

Practical steps for your reporting and content

  • Disclose if AI assisted your draft or edits. Be specific about what it did.
  • Never include identifiable data about minors. Strip metadata from images and documents.
  • When covering "kid-safe" AI features, ask for independent testing or audits, not just marketing claims.
  • Push vendors for data policies: What's collected? How long is it kept? Can parents opt out?
  • Check for safety basics: default privacy settings, age-appropriate design, parental controls, and content filters.
  • Quote young people responsibly. Get informed consent, protect anonymity where needed, and avoid trauma triggers.
  • For PR content, request details on training data provenance, bias testing, and guardrails before you publish.
  • Include resources for parents and teachers at the end of your pieces, not just product links.

Policy and product design: what the event called for

Speakers urged ethical guidelines that set clear limits for AI use in learning environments. They also highlighted policy frameworks that drive responsible technology design: parental controls, screen time limits, and meaningful input from young people themselves.

Level up your AI workflow (safely)

If you're testing AI in your writing process, start with tools that respect privacy and offer transparent controls. A useful starting point is this curated list: AI tools for copywriting.

About the organizers

The New York-based Journalists and Writers Foundation is an international civil society organization focused on the culture of peace, human rights, and sustainable development. The statement confirming IPRA's participation was signed by Philip Sheppard, Secretary General.

