Metacritic pulls Resident Evil Requiem review amid alleged AI byline scandal
Metacritic has removed a Resident Evil Requiem review after reports indicated it was generated by AI and attributed to a writer who doesn't appear to exist. The review, hosted on Videogamer and credited to "Brian Merrygold," was a 543-word mess that raised immediate red flags.
Investigations found no online footprint for Merrygold prior to October 2025. Worse, the filename of his profile image reportedly began with "ChatGPT-Image-Oct-20-2025," a strong indication that both the byline and the headshot were fabricated.
Former Videogamer staff say most of the team was let go last week and replaced by AI systems. Other listed authors on the site show no history before October 2025 despite claiming years of experience. One editor linked the wrong Videogamer X/Twitter account on her profile, and her following list includes several accounts suspected to be AI, many spun up in the same timeframe.
"The RE Requiem review and a handful of other Videogamer reviews from 2026 have been removed from Metacritic," said Metacritic co-founder Marc Doyle. "Metacritic's policy is to never include an AI-generated critic review on Metacritic and if we discover that one has been posted, we'll remove it immediately and sever ties with that publication indefinitely pending a thorough investigation."
There are similar claims elsewhere. The former co-founder of Esports News UK, who left in 2025, alleges some authors on that site are fabricated as well. The story is still developing.
For writers and editors, this isn't a niche industry drama. It's a signal. Trust is a finite asset. Bylines are proof of work. If your brand ships AI-written criticism under fake names, you burn both.
Key takeaways for writers
- Your byline is a contract. It promises lived context, taste, and accountability. If you use AI, disclose it clearly. If you don't, protect your name with receipts.
- Aggregators have hard lines. Platforms like Metacritic will pull AI-generated reviews and cut ties. That fallout follows you.
- Readers can spot hollow prose. Nonsensical structure, generic claims, and mismatched details are credibility killers.
Rapid verification checklist (use it before you publish)
- Author identity: Search for prior clips, conference appearances, podcast panels, and social posts older than a year. No history? That's a stop sign.
- Profile image: Run a reverse image search. Check filenames and metadata for AI indicators. Stock or AI face? Reject.
- Claims vs. evidence: Demand links, citations, and first-hand notes. If the review reads like a plot summary and vibes, not reporting, escalate.
- Editorial trail: Keep a changelog: who edited it, when, and what changed. If you can't reconstruct the draft history, you can't defend the piece.
- Account hygiene: Profiles must link to the correct brand accounts. Cross-check follower/following lists for newly created, low-signal accounts.
- AI policy: Publish a clear, public standard for AI assistance, disclosure, and byline requirements. Enforce it.
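The identity and image checks above can be partly automated. Below is a minimal sketch of a red-flag screen for an author profile you've assembled by hand; the field names, the pattern list, and the `profile_red_flags` helper are illustrative assumptions, not part of any real verification tool. The filename pattern is modeled on the "ChatGPT-Image-Oct-20-2025" prefix reported in this case.

```python
import re
from datetime import date

# Filename fragments that suggest an AI-generated headshot. Illustrative
# list only; real screening should also use reverse image search and
# metadata inspection, as the checklist above says.
AI_IMAGE_PATTERNS = [r"chatgpt-image", r"dall[-_]?e", r"midjourney", r"stable[-_]?diffusion"]

def profile_red_flags(profile: dict, today: date) -> list[str]:
    """Return human-readable red flags for a hand-assembled author profile.

    Expected keys (all hypothetical): 'image_filename', 'earliest_post'
    (date of the oldest findable clip or post, or None),
    'claimed_years_experience', 'prior_clips'.
    """
    flags = []

    filename = profile.get("image_filename", "").lower()
    if any(re.search(p, filename) for p in AI_IMAGE_PATTERNS):
        flags.append(f"headshot filename looks AI-generated: {profile['image_filename']}")

    earliest = profile.get("earliest_post")
    if earliest is None:
        flags.append("no online footprint found at all")
    elif (today - earliest).days < 365:
        flags.append(f"no history older than a year (earliest: {earliest})")
        if profile.get("claimed_years_experience", 0) >= 2:
            flags.append("claimed experience contradicts visible history")

    if not profile.get("prior_clips"):
        flags.append("no prior published clips on record")
    return flags

# A profile matching the pattern described in the article.
suspect = {
    "image_filename": "ChatGPT-Image-Oct-20-2025.png",
    "earliest_post": date(2025, 10, 1),
    "claimed_years_experience": 5,
    "prior_clips": [],
}
for flag in profile_red_flags(suspect, today=date(2026, 1, 15)):
    print("STOP:", flag)
```

Any single flag here is a reason to pause, not proof of fabrication; the point is to make the "stop sign" checks above cheap enough that no byline skips them.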
If you run a newsroom or a solo shop
- Hire for voice, not volume. One sharp critic beats ten synthetic bylines.
- Set non-negotiables: Real names, verifiable histories, and reachable editors. No ghost mastheads. No AI headshots.
- Pre-aggregator checklist: Before syndication, confirm authorship, sourcing, and originality. Keep proof on file.
- Crisis plan: If a piece is challenged, respond with transparency: what happened, what you've removed, and how you'll prevent a repeat.
AI can help with drafts and research, but it doesn't replace authorship or ethics. Writers who protect their name, and publishers who protect their readers, win long term. Everyone else will get filtered out.
Want a deeper playbook on responsible AI use, workflow design, and byline integrity? Explore AI for Writers.
Further reading on coverage and context: Kotaku.