World Science Fiction Convention Faces Backlash Over AI Panelist Vetting
This summer's World Science Fiction Convention (WorldCon) in Seattle sparked controversy after using artificial intelligence to screen over 1,300 potential panelists. The organizers employed ChatGPT to gather background information on applicants, aiming to streamline the vetting process and save volunteer hours.
Kathy Bond, chair of the convention, explained that the AI received only the panelists' names and was prompted to evaluate any scandals related to issues like homophobia, transphobia, racism, harassment, sexual misconduct, sexism, and fraud. The AI was instructed to examine digital footprints, including social media, articles, blogs, and specifically file770.com. The outputs were then reviewed by staff for accuracy before decisions were made.
Why Authors and Fans Are Concerned
Despite assurances that AI alone did not determine panelist acceptance, many authors expressed strong opposition. David D. Levine criticized the approach for relying on AI trained on creators' work without permission or compensation. He also highlighted the environmental cost of running such models and argued that simpler, less controversial methods could have been used.
Jason Sanford shared his frustration, pointing out that his own stories were used without consent to train AI systems. He and others viewed vetting via AI as dismissive of the artists and writers who contribute to the community. Many panelists reported not giving permission for their information to be processed this way, sparking more than 100 critical comments on the subject.
Consequences and Apologies
Following the backlash, key figures including the Hugo administrator and deputy administrator resigned from their roles. On Friday, Bond issued a formal apology, admitting that the initial communication had failed to address the community's concerns and acknowledging the harm caused.
Bond and program head SunnyJim Morgan later clarified that all panelist reviews would be redone without AI. Morgan condemned the use of OpenAI's tools, describing their training methods as unethical and possibly illegal. He accepted responsibility for approving the AI use and apologized for the mistake.
The Broader Issue for Writers
The incident reflects wider tensions between creative professionals and AI. Many writers see AI as a threat to their livelihoods and rights, especially when training data is sourced without consent. Organizations like the Authors Guild are actively challenging AI companies to protect authors' copyrights and ensure fair compensation.
While some authors use AI for research or assistance, the distrust stems from concerns about copyright infringement and the accuracy of AI outputs. This case underlines the importance of transparency and respect for creators when integrating AI into professional workflows.
Next Steps for WorldCon and the Community
- WorldCon organizers are conducting a full re-evaluation of all panelists without AI involvement.
- They have committed to more careful consideration of community feedback moving forward.
- Writers and artists remain vigilant about AI's role and impact in creative spaces.
For writers interested in understanding AI technologies and their implications, exploring courses on AI ethics and tools can be valuable. Resources such as Complete AI Training offer practical insights into navigating AI responsibly in creative fields.