Who is the author when AI writes? What writers need to decide now
AI can produce clean copy at scale. Yet the core question lingers: who gets credit when a machine drafts the words? A recent study in AI & Society compared public views from 2017-2018 to 2024-2025 and found that clarity hasn't caught up with capability. People still hesitate, split, and switch labels depending on how AI involvement is framed.
That uncertainty flows into your bylines, your contracts, your liability, and your readers' trust. So if the public won't settle the debate for you, you need a practical standard you can defend.
What changed (and what didn't) from 2017-2018 to 2024-2025
Participants were shown four conditions: a simple byline first, then disclosures about computer generation, human involvement, and organizational backing. Across both timeframes, no single attribution model won out. Some chose the human, some the system, some the team, some no author at all.
Two shifts stood out. Fewer people rejected the authorship question outright at the start, hinting at growing comfort with hybrid work. And once AI involvement was disclosed, more people credited the system itself, often based on shaky assumptions that these tools "understand" or write with intent.
The takeaway: better AI doesn't equal clearer authorship. People still tie authorship to intention, experience, and purpose, not just output. That's why the label "the AI wrote it" doesn't resolve responsibility, meaning, or accountability.
Why this matters to working writers
Authorship signals trust. Readers want to know who stands behind a claim, a tone, and a judgment. A vague "AI-assisted" tag can raise as many questions as it answers, especially in news, health, finance, or any piece that could mislead.
Authorship also points to liability. If harm occurs (misinformation, defamation, bad advice), someone owns the decision to publish. People hesitate to pin that on a model, and they're wary of faceless corporate labels. That puts the heat back on human editors, leads, and publishers.
On the legal side, many jurisdictions still anchor rights to human authors. In the U.S., the Copyright Office has stated that works without human authorship aren't protectable, and AI contributions must be disclosed at registration. See their guidance: copyright.gov/ai.
Your attribution playbook (use it as policy)
- Pick your default label. Examples:
- "Written by [Name], AI-assisted" (human drafted with tool support)
- "Draft generated by [Model], edited and verified by [Name]" (machine-first, human-final)
- "Reported by [Name]; AI used for transcription/summary" (limited assist)
- Define "assist." List the allowed uses: ideation, outline, first draft, translation, summarization, headline testing, or data cleanup. Ban use cases where your publication can't verify facts or intent.
- Keep an audit trail. Save prompts, versions, human edits, sources, and the model/version used. This protects you in disputes and improves future work (a minimal logging sketch follows this list).
- Assign human accountability. Name the editor-of-record who signs off. If risk is high, require a subject-matter check before publishing.
- Avoid anthropomorphism. Don't write "the AI thinks." Say "the model produced" or "the tool suggested." It keeps responsibility human.
- Clarify payments and rights. Set rates for editing AI drafts vs. original reporting. State who owns the final text and what must be disclosed to clients.
- Use consistent placement. Put disclosures where readers expect them: the byline area or an upfront note. Burying them erodes trust.
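
One way to keep that audit trail consistent is to log a structured record per piece instead of scattered notes. Here is a minimal sketch in Python; the field names, file paths, and example values are illustrative assumptions, not a standard, so adapt them to whatever your CMS or workflow tool actually stores.

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json

@dataclass
class AIUsageRecord:
    """One audit-trail entry per published piece (field names are illustrative)."""
    slug: str                                        # piece identifier in your CMS
    model: str                                       # model name/version you used
    tasks: list[str] = field(default_factory=list)   # allowed uses actually performed
    prompts_file: str = ""                           # where saved prompts/versions live
    sources_verified_by: str = ""                    # human who checked facts
    editor_of_record: str = ""                       # who signed off on publication
    disclosure_label: str = ""                       # exact byline/disclosure text used
    date_published: str = ""

def save_record(record: AIUsageRecord, path: str) -> None:
    """Append the record to a JSON-lines log so the history can be reconstructed later."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Hypothetical entry for a machine-first, human-final piece
save_record(
    AIUsageRecord(
        slug="q3-market-brief",
        model="example-model-v1",
        tasks=["outline", "first draft", "summarization"],
        prompts_file="logs/q3-market-brief-prompts.txt",
        sources_verified_by="J. Rivera",
        editor_of_record="A. Chen",
        disclosure_label="Draft generated by [Model], edited and verified by [Name]",
        date_published=str(date.today()),
    ),
    "logs/ai-usage.jsonl",
)
```

A flat, append-only log like this is enough for most disputes: it records who did what, with which tool, and what the reader was told.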
A quick decision guide
- If the piece expresses judgment, breaks news, or could cause harm: human author + explicit disclosure + editor-of-record.
- If the piece is low-risk (e.g., meta descriptions, alt text): tool note in workflow log; public disclosure if client policy requires.
- If AI drafted most of the text: credit the human editor who took responsibility; state the model's role.
- If you used AI for structure only (outline, headlines): "AI-assisted" is enough; keep your edit logs private unless requested. (The whole guide is codified in the sketch below.)
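
The same guide can be written down as policy rather than decided case by case. A minimal sketch, assuming a simple high-risk flag, a rough estimate of how much of the text the model drafted, and placeholder thresholds and return strings of my own; none of this comes from the study.

```python
def attribution_policy(high_risk: bool, ai_drafted_share: float,
                       structural_assist_only: bool) -> str:
    """Return the disclosure rule for a piece (thresholds and labels are illustrative)."""
    if high_risk:
        # Judgment, breaking news, or potential harm: full human accountability
        return "Human author + explicit disclosure + editor-of-record"
    if structural_assist_only:
        # Outline or headline help only
        return "'AI-assisted' label; keep edit logs private unless requested"
    if ai_drafted_share >= 0.5:
        # Machine-first draft, human-final edit
        return "Credit the human editor of record and state the model's role"
    # Low-risk, small assists (e.g. meta descriptions, alt text)
    return "Tool note in workflow log; public disclosure if client policy requires"

# Example: a mostly machine-drafted but low-risk piece
print(attribution_policy(high_risk=False, ai_drafted_share=0.7, structural_assist_only=False))
```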
Common traps to avoid
- Over-crediting the model. Tools don't carry intent or liability; you do.
- Under-disclosing. Readers forgive help; they don't forgive hidden hands.
- Vague language. "AI helped" means nothing without scope. Name the tasks.
- One-off decisions. Treat attribution as policy, not a case-by-case scramble.
The bigger signal from the study
Public perception is moving slowly. People accept that machines can output words but still link authorship to human purpose and responsibility. Disclosure by itself won't fix trust if it's unclear who stands behind the final cut.
Set your standard now. Decide how you credit, what you disclose, and who signs. Make it repeatable. Then write like it's all on you, because to your readers, it is.
Further resources
- Journal context: AI & Society
- U.S. copyright policy on AI: U.S. Copyright Office - Artificial Intelligence
- Tools for writers: AI tools for copywriting (shortlist and guides)