Researchers Propose Framework To Make Generative AI Safer
Researchers from Kookmin University and the University of British Columbia have developed a framework designed to improve safety controls for generative AI systems that create images and videos.
The proposal addresses a specific problem: generative AI can produce realistic visual content faster than efforts to control it and prevent misuse can keep pace. The framework aims to close that gap by strengthening existing safeguards.
For researchers and practitioners working with generative AI and LLM systems, the framework offers practical approaches to reducing harmful outputs. The work is relevant to anyone developing or deploying image and video generation tools.
The research comes as organizations across industries grapple with balancing the capabilities of generative AI against the need for safety measures. Universities have become key contributors to this work, offering independent assessments and technical solutions that inform both industry practice and policy.
Details about the specific mechanisms in the framework were not disclosed in available sources. Practitioners interested in the full methodology should contact the researchers directly or monitor academic publications from both institutions.