Sci-Fi Community Tightens Rules on Generative AI: What Creatives Need to Know
Key institutions in sci-fi and adjacent creative circles are drawing harder lines on generative AI. SFWA updated the Nebula Awards rules to exclude work created wholly or partly with large language models. San Diego Comic-Con revised its art show policy to ban AI-generated material. Bandcamp has also moved to restrict generative AI on its platform.
The new rules, in plain terms
- SFWA / Nebula Awards: Works written wholly or partly with generative tools are not eligible. If an LLM was used at any stage, the work can be disqualified.
- San Diego Comic-Con Art Show: "Material created by artificial intelligence (AI) partially or fully is not allowed at art shows."
- Bandcamp: Restrictions on generative AI apply on the platform, signaling broader policy shifts across creative marketplaces.
"Our approach and wording were wrong, and we apologize for the suffering and mistrust we caused." - SFWA Board
Why this is happening
Creators are pushing back on two fronts: ethics and creative value. There's concern about unlicensed training data and the erosion of authorship. Some argue these tools flatten voice and weaken narrative intent.
"If you are using any online search engines or computer products right now, you are most likely using something that runs on or is connected to an LLM." - Jason Sanford
"Because of this, we must be careful about how broadly we define LLM usage, especially since these generative AI products are being pushed by their corporations." - Jason Sanford
Comic-Con's shift
Artists flagged a loophole: the old policy blocked sales of AI art but still allowed it to be displayed. The rule is now clear: no AI material, partial or full, in art shows. Organizers have indicated the long-standing intent was simple: "NO! Plain and simple." - Glen Wotten
What this means for working creatives
- Contest and showcase entries: Assume zero tolerance for AI assistance. Even light AI edits or prompts can disqualify a piece.
- Client and platform work: Expect more contracts and TOS updates that restrict or require disclosure of AI use.
- Attribution and consent: Demand clear licenses for models and datasets. Be ready to prove your process.
Practical next steps
- Split your workflow: Maintain a contest-safe pipeline with no generative tools, and a separate AI-assisted pipeline for commercial or exploratory work. Label your files accordingly.
- Document everything: Save drafts, source files, and timestamps. Keep a short process log for each project.
- Clarify with organizers: Ask how they define "LLM usage." Do spellcheckers, AI-enabled search, or reference tools count? Get answers in writing.
- Mind your data sources: If you use AI anywhere, stick to tools with explicit, verifiable licensing and opt-in datasets.
- Update your disclosures: For clients and platforms that allow AI, include a clear note on what you used and where.
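For the "document everything" step, a lightweight script can do much of the work. The sketch below, a minimal illustration rather than any official tool, records a SHA-256 hash and UTC timestamp for each draft file so you can later show what existed and when; the directory layout and log filename are hypothetical.

```python
# Minimal provenance-log sketch: for each file in a drafts folder, append
# its name, SHA-256 hash, and a UTC timestamp to a JSON Lines log.
# "process_log.jsonl" and the draft directory are illustrative names.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def log_drafts(draft_dir: str, log_path: str = "process_log.jsonl") -> list[dict]:
    """Append a hash-and-timestamp entry for every file in draft_dir."""
    entries = []
    for f in sorted(Path(draft_dir).glob("*")):
        if not f.is_file():
            continue
        entries.append({
            "file": f.name,
            "sha256": hashlib.sha256(f.read_bytes()).hexdigest(),
            "logged_at": datetime.now(timezone.utc).isoformat(),
        })
    with open(log_path, "a", encoding="utf-8") as log:
        for entry in entries:
            log.write(json.dumps(entry) + "\n")
    return entries
```

Run it whenever you save a milestone draft; because the log is append-only, successive entries for the same file form a timeline of your revisions. Hashes prove a file's contents haven't changed since logging, though for stronger evidence you'd pair this with off-machine backups or version control.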
The bigger shift
This isn't a small policy tweak. It's a signal that communities are redrawing lines around originality, authorship, and fair credit. Expect other shows, contests, and platforms to adopt similar positions this year.
If AI is part of your toolkit, keep it out of contests and art shows with bans, and fortify your paper trail elsewhere. If you're staying fully human-made, make that explicit in submissions and listings.
Keep your edge without crossing new lines
Want structured ways to keep your skills sharp while staying compliant with contest and platform policies? Explore role-based learning paths here: Complete AI Training - Courses by Job. Training and development managers may find the AI Learning Path for Training & Development Managers especially useful for building governance and role-specific workflows.