ASU researcher works to set global standards for detecting and correcting AI-generated media

Humans correctly identify AI-generated images only 51% of the time - no better than a coin flip. An ASU researcher is developing detection standards and "machine unlearning" tools to make synthetic media identifiable and fixable after release.

Categorized in: AI News, Science and Research
Published on: Apr 09, 2026

People can distinguish AI-generated images from authentic ones only about 51% of the time - roughly the accuracy of a coin flip. That finding, published last year in Communications of the ACM, signals a widening gap between what generative tools can create and what humans can verify.

The real-world costs are mounting. Online retailers face a surge of fraudulent returns using AI-generated product images. Deepfake-related financial losses exceeded $200 million in just three months of 2025. Yet systems for detecting or controlling synthetic media remain fragmented or absent.

Yezhou "YZ" Yang, an associate professor of computer science and engineering at Arizona State University, is working to close that gap by establishing technical standards that make AI-generated content identifiable and, in some cases, correctable.

Making AI-generated content detectable

Yang's approach centers on a straightforward concept: require generative AI systems to embed detectable signals - similar to digital fingerprints - directly into the content they produce.

"It's like a wireless protocol," Yang said. "If everyone agrees to the protocol, then every model generating images would embed something like a watermark that detectors can read later."

Yang's team began studying this problem in 2020, focusing on subtle statistical patterns left behind by generative models. These digital traces are invisible to humans but detectable by machines.
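
One concrete example of such a trace: many generators leave telltale energy in an image's high-frequency spectrum, an artifact of their upsampling layers. The sketch below computes a single spectral feature of the kind a trained classifier might consume; it illustrates the general idea and is not Yang's detector.

```python
# One family of "digital traces": unusual high-frequency spectral energy.
# This extracts a single feature; a real detector would feed features like
# this (or whole spectra) into a classifier trained on labeled images.
import numpy as np

def high_freq_energy_ratio(image: np.ndarray) -> float:
    """Fraction of spectral energy outside the central low-frequency band."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image.astype(np.float64)))) ** 2
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    ry, rx = h // 4, w // 4
    low = spectrum[cy - ry:cy + ry, cx - rx:cx + rx].sum()  # central band
    return 1.0 - low / spectrum.sum()

if __name__ == "__main__":
    img = np.random.default_rng(1).integers(0, 256, (128, 128))
    print(f"high-frequency energy ratio: {high_freq_energy_ratio(img):.3f}")
```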

The challenge: as models improve, those traces become harder to find. Detection risks becoming an ongoing technological arms race. That realization pushed Yang to explore solutions beyond detection alone.

Teaching AI systems to forget

Yang's newer work focuses on machine unlearning - teaching AI systems to selectively forget specific data, concepts, or behaviors without retraining the entire model from scratch.

This matters because retraining massive models can take months and cost millions. Unlearning methods target and remove unwanted information directly.
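
One widely used recipe, sketched below under the assumption of a standard PyTorch classifier, nudges the model's loss up on a "forget" batch while holding it down on a "retain" batch, so the fix is a short fine-tune rather than a full retrain. This is a generic illustration of unlearning, not the specific algorithms from Yang's lab.

```python
# Generic unlearning sketch: gradient ascent on data to forget, gradient
# descent on data to retain, applied to an already-trained model.
import torch
import torch.nn.functional as F

def unlearn_step(model, optimizer, forget_batch, retain_batch, alpha=0.5):
    """One update: raise loss on the forget set, keep it low on the retain set."""
    fx, fy = forget_batch
    rx, ry = retain_batch
    forget_loss = F.cross_entropy(model(fx), fy)
    retain_loss = F.cross_entropy(model(rx), ry)
    # Negative sign maximizes the forget loss; the retain term preserves utility.
    loss = -alpha * forget_loss + (1 - alpha) * retain_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return forget_loss.item(), retain_loss.item()

if __name__ == "__main__":
    model = torch.nn.Linear(8, 3)   # stand-in for a trained model
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    forget = (torch.randn(16, 8), torch.randint(0, 3, (16,)))
    retain = (torch.randn(16, 8), torch.randint(0, 3, (16,)))
    for _ in range(10):
        fl, rl = unlearn_step(model, opt, forget, retain)
    # Forget loss should climb while retain loss stays comparatively low.
    print(f"forget loss {fl:.2f}, retain loss {rl:.2f}")
```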

"Whatever data is learned - the good and the bad - it sticks," Yang said. "Unlearning gives us a way to go back and fix that."

Yang's group has developed two methods. The first, Robust Adversarial Concept Erasure (RACE), removes sensitive concepts like explicit imagery from generative models while resisting attempts to recover them through adversarial prompts. The second, EraseFlow, redirects models away from unwanted concepts while preserving overall image quality.
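
The common core of concept-erasure methods can be sketched as follows: fine-tune the model so that conditioning on the unwanted concept produces the same output as a neutral anchor prompt. The toy code below, using a stand-in denoiser, shows only that shared skeleton; RACE's adversarial robustness and EraseFlow's flow-based guidance are not represented here.

```python
# Schematic concept erasure: pull the denoiser's prediction for the target
# concept toward a frozen copy's prediction for a neutral anchor prompt.
# All module and variable names here are illustrative stand-ins.
import torch
import torch.nn.functional as F

def erase_concept_step(denoiser, frozen, optimizer, x_t, t, c_target, c_anchor):
    """One fine-tuning step redirecting the target concept to the anchor."""
    with torch.no_grad():
        anchor_pred = frozen(x_t, t, c_anchor)   # what "safe" output looks like
    target_pred = denoiser(x_t, t, c_target)     # current behavior on the concept
    loss = F.mse_loss(target_pred, anchor_pred)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    class ToyDenoiser(torch.nn.Module):
        """Tiny stand-in: maps flattened 'latents' plus a concept embedding."""
        def __init__(self):
            super().__init__()
            self.net = torch.nn.Linear(16 + 4, 16)
        def forward(self, x_t, t, c):
            return self.net(torch.cat([x_t, c], dim=-1))

    denoiser, frozen = ToyDenoiser(), ToyDenoiser()
    frozen.load_state_dict(denoiser.state_dict())  # frozen reference copy
    opt = torch.optim.Adam(denoiser.parameters(), lr=1e-2)
    x_t = torch.randn(8, 16)
    c_target, c_anchor = torch.randn(8, 4), torch.zeros(8, 4)
    for _ in range(50):
        loss = erase_concept_step(denoiser, frozen, opt, x_t, None, c_target, c_anchor)
    print(f"erasure loss after fine-tuning: {loss:.4f}")
```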

These approaches point toward AI systems that are not only transparent but also editable after deployment. Such capability could help companies comply with "right to be forgotten" laws, remove copyrighted material when licenses expire, or eliminate harmful biases discovered after release.

Building consensus beyond the lab

Technical advances mean little if they remain isolated in research labs. Yang collaborates with initiatives like the Coalition for Content Provenance and Authenticity and organizations such as the World Privacy Forum to shape international conversations around AI transparency and governance.

The goal is to create shared standards for how AI systems should behave across their entire lifecycle, not just at the moment of content creation.

"The technology starts with computer scientists," Yang said. "But the impact on society requires a much bigger conversation."

Ross Maciejewski, director of the School of Computing and Augmented Intelligence at ASU, said this collaborative approach is essential. "Addressing the risks of AI isn't just a technical problem. It's a societal one," Maciejewski said.

As AI-generated media becomes more realistic and widespread, the challenge extends beyond identifying what's fake. It requires systems that can not only identify synthetic content but also adapt, correct, and improve themselves over time.


