AI Ink: What Creatives Can Learn From Jason Van Tatenhove's Risky Bet on AI
AI Ink: Writing, Publishing, and Misinformation at the Dawn of the AI Age
Jason Van Tatenhove - Skyhorse Publishing - $49.99
Jason Van Tatenhove has lived inside the machine. He served as national media director for the Oath Keepers, testified before Congress about January 6, and later wrote Perils of Extremism. Then life hit hard: his wife, Shilo, died after a long illness. Grief pushed him toward AI as a crutch, and he let tools like ChatGPT, Jasper, and Grammarly into his writing process.
He admits he "gave himself permission to cheat." In AI Ink, he doubles down on that decision and argues that AI made him a better writer. To his credit, he's transparent: he footnotes AI contributions using a system he calls the Colorado-Asimov Ethical Citation Standard (CA-ECS). For creatives, that's the first useful takeaway: own your tools, and disclose their role.
The book's core: a fast tour through AI's rise
Van Tatenhove sketches the arc from Rosenblatt's perceptrons to backpropagation, machine learning, and deep learning. He highlights the 2017 transformer breakthrough from Google's "Attention Is All You Need," the architecture behind models like BERT and the generative systems that power ChatGPT. If you want a primer, this section is clear and quick.
For a deeper look at the architecture he cites, see the original paper: Attention Is All You Need.
Where the book gets uncomfortable
As AI scaled, so did the blowback. Van Tatenhove notes copyright concerns over datasets built from millions of books, and the flood of spam, scams, deepfakes, and junk content. He pushes back on doomsday warnings, such as Geoffrey Hinton's departure from Google and Eliezer Yudkowsky's call to halt development, arguing that assuming "smarter than human" means "hostile" is a leap. That optimism sets his tone.
But there's another ledger. Petra Molnar's The Walls Have Eyes shows how border tech dehumanizes refugees. Darren Byler's In the Camps documents how AI systems help control Xinjiang's Muslim minorities. These aren't hypotheticals; they're field reports.
The big blind spot: who actually calls the shots
Van Tatenhove's hope ("let's wield these tools for good") lands soft against the reality of corporate control. AI direction isn't decided by a town hall. It's set in boardrooms, optimized for profit, and rolled out at scale. Expect more speed, more capability, and more inequality unless incentives shift.
So, what should working writers do?
If you create for a living, you can't ignore AI. But you also can't outsource your voice, judgment, or ethics. Here's a practical operating system you can adopt today.
- Automate the grunt work: transcription, summaries, shot lists, data cleanup, and keyword clustering. Keep creative decisions in human hands.
- Separate idea gen from execution: use AI for prompts, outlines, and counterarguments. Draft in your own voice. You're the filter.
- Adopt "CA-ECS" principles: disclose when AI touched your work and how. Footnote prompts, models, and version numbers. Transparency builds trust.
- Run a two-pass fact check: first with search and primary sources, then a human pass for logic and tone. Assume the model will hallucinate.
- Build a source stack: a short list of expert books, papers, and databases you always verify against. Don't rely on model citations.
- Protect your IP: keep high-value drafts offline or in tools with clear data retention policies. Check training and usage terms before pasting.
- Create a misinformation fail-safe: log claims, add citations, and timestamp major updates. If you publish an error, correct publicly.
- Develop a distinct style guide: phrases you use, phrases you never use, narrative rhythm, and formatting rules. Feed it to your editor-human or AI.
- Measure output by outcomes, not word count: track reader retention, saves, replies, and conversion, then adjust prompts and process.
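Two of the steps above, disclosing AI contributions and keeping a timestamped claim log, are easy to operationalize. Here is a minimal Python sketch; the book names the CA-ECS standard, but the exact fields below (model, version, role, prompt) are an assumption about what such a disclosure record might hold, not the book's published schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical disclosure record in the spirit of CA-ECS: the fields
# here are an illustration, not the standard's actual schema.
@dataclass
class AIDisclosure:
    model: str    # e.g. "ChatGPT (GPT-4)"
    version: str  # model or app version used
    role: str     # what the tool did: outline, summary, edit pass
    prompt: str   # the prompt, or a short summary of it

    def footnote(self) -> str:
        """Render the record as a disclosure footnote."""
        return (f"AI assistance: {self.role} via {self.model} "
                f"v{self.version}. Prompt: \"{self.prompt}\"")

# A minimal claim log: each published claim gets a source and a UTC
# timestamp, so later corrections can be traced to a specific entry.
@dataclass
class ClaimLog:
    entries: list = field(default_factory=list)

    def record(self, claim: str, source: str) -> dict:
        entry = {
            "claim": claim,
            "source": source,
            "logged_at": datetime.now(timezone.utc).isoformat(),
        }
        self.entries.append(entry)
        return entry

log = ClaimLog()
log.record("The transformer architecture dates to 2017.",
           "Vaswani et al., 'Attention Is All You Need'")
```

Even a flat file or spreadsheet serves the same purpose; the point is that disclosure and verification become structured habits rather than afterthoughts.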
Where AI helps, and where it hurts
- Good bets: outlining multiple angles fast, condensing interviews, turning research into briefs, testing headlines, repurposing longform into social posts.
- Bad bets: publishing first drafts from a model, sensitive reporting without human verification, overwriting your tone with generic "AI speak," relying on unvetted facts.
Moral clarity for creatives
AI can make your workflow faster. It can also lower your standards if you let it. The responsible stance is simple: use AI to remove friction, not to replace taste, truth, or authorship.
Van Tatenhove is right about one thing: the future of AI is about humanity. But which humanity gets served depends on who profits and who's accountable. Your best defense is a clear process, verified sources, and a voice that can't be copied.
Next steps
- Study the transformer basics so your prompts and expectations improve: the transformer paper.
- Set up your creative stack and prompt library. For writers, start here: AI tools for copywriting and prompt engineering guides.
Use the tools. Keep your standards. Credit your sources. That's how you stay credible, and how you stay paid.