Military Writer Develops Method to Keep Humans in Creative Loop With AI
An Air Force instructor has created a prompt engineering technique designed to prevent writers and planners from offloading creative thinking to AI systems. The method, called HWIT (Here's What I Think), forces users to contribute their own ideas before asking an AI to generate output.
The problem HWIT addresses is real. When people work with large language models, they often shift the burden of creative and critical thought to the AI, a phenomenon called cognitive offloading. For military planning, academic writing, and other complex creative work, this poses a strategic risk.
Why AI Output Tends Toward Average
Large language models work by finding statistical patterns in massive datasets. They treat frequency as importance. This means their output naturally gravitates toward what's common in their training data, not what's novel or surprising.
The result: AI-generated writing often feels predictable. The same prompt entered multiple times produces variations on the same theme. For military strategy, this is a vulnerability. If adversaries can approximate the inputs a planner fed to an AI, they can predict the resulting plans.
Hallucination, sycophancy, and overused phrases all stem partly from this same statistical foundation. An AI anchors on a term and produces output correlated with related terms in its data, regardless of whether those associations are relevant or true.
How HWIT Works
HWIT expands the standard AI prompt from "Here's a request, now generate a response" to "Here's a request, here's my idea, now generate a response." Users specify a task, provide context, then articulate their own thinking before asking the AI to respond.
The template requires users to:
- Describe the final output needed
- Provide relevant background (audience, stakes, tone, constraints)
- State their own starting ideas, hunches, or angles
- Ask the AI to restate their ideas and pose clarifying questions
- Optionally request critical feedback or alternative framings
The AI must answer the user's questions before generating final output. This forces the human to stay engaged in the creative process rather than passively accepting what the system produces.
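The template steps above can be sketched as a small prompt builder. The section labels and wording here are illustrative assumptions, not the author's exact HWIT phrasing:

```python
def build_hwit_prompt(task, context, my_thinking, want_critique=False):
    """Assemble an HWIT-style prompt string.

    task        -- description of the final output needed
    context     -- audience, stakes, tone, constraints
    my_thinking -- the user's own starting ideas, hunches, or angles
    want_critique -- optionally ask for critical feedback / reframings

    The labels below are a sketch of the HWIT structure, not the
    instructor's verbatim template.
    """
    parts = [
        f"Task: {task}",
        f"Context: {context}",
        f"Here's what I think: {my_thinking}",
        # The key HWIT move: the AI must engage with the user's ideas
        # before producing any final output.
        "Before generating anything, restate my ideas in your own words "
        "and ask me clarifying questions.",
    ]
    if want_critique:
        parts.append("Also give critical feedback on my ideas and "
                     "suggest alternative framings.")
    return "\n\n".join(parts)
```

Because the user's own thinking is embedded in the prompt, a throwaway one-liner in `my_thinking` visibly weakens the whole request, which is the point of the method.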
The Mechanism Behind It
HWIT works similarly to retrieval augmented generation by cueing the AI to incorporate the user's thinking as external knowledge. It also resembles prompt chaining by inducing iterative conversation where the human, not the AI, must build on their own ideas.
The technique doesn't eliminate AI's role. Instead, it positions AI as a sounding board and critical partner, not a replacement for human judgment.
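The chained exchange can be sketched as a minimal two-turn loop. The `chat` callable here is a placeholder for any LLM API call, and the turn structure is an assumption about how an HWIT conversation might be wired up, not a prescribed implementation:

```python
from typing import Callable, Dict, List

Message = Dict[str, str]

def hwit_session(chat: Callable[[List[Message]], str],
                 hwit_prompt: str,
                 user_answers: str) -> str:
    """Run a two-turn HWIT-style exchange.

    chat         -- placeholder for an LLM call: takes a message history,
                    returns the assistant's reply (an assumption, not a
                    specific vendor API)
    hwit_prompt  -- the initial prompt containing the user's own thinking
    user_answers -- the human's replies to the AI's clarifying questions
    """
    history: List[Message] = [{"role": "user", "content": hwit_prompt}]

    # Turn 1: the AI restates the user's ideas and asks clarifying
    # questions instead of generating final output.
    clarifications = chat(history)
    history.append({"role": "assistant", "content": clarifications})

    # Turn 2: only after the human answers does the AI draft the output,
    # keeping the person in the creative loop.
    history.append({"role": "user",
                    "content": user_answers + "\n\nNow draft the final output."})
    return chat(history)
```

The human-authored `user_answers` step is what distinguishes this from ordinary prompt chaining: the iteration is driven by the person's thinking rather than by the model's own prior output.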
Clear Limitations
HWIT isn't a fix for all AI problems. It won't address hallucination entirely, though anchoring the AI to specific user ideas may reduce some instances. It won't correct bias in the AI's training data or challenge a user's own biases unless they deliberately ask it to.
The method also requires genuine cognitive effort. A vague or throwaway response in the "Here's what I think" section won't force meaningful participation. Lazy users can still offload responsibility if they choose to.
HWIT is overkill for routine tasks like summarizing notes, formatting slides, or checking grammar. It's designed for creative work: drafting military orders, developing strategic options, writing academic papers, or planning complex operations.
Relevance for Creatives
For creatives working with AI, HWIT addresses a core tension: how to use AI as a tool without letting it become a substitute for thinking. Writers, strategists, and planners can use the system to stress-test ideas, explore alternatives, and refine arguments while maintaining ownership of the creative work.
The technique assumes that writing is thinking. If HWIT requires writing (articulating your own ideas before asking the AI to respond), then it requires thinking. That distinction matters for anyone trying to avoid the trap of submitting AI-generated work as their own.
The Middle Ground
Complete rejection of AI carries its own risk. Professionals who refuse to learn how to work with these systems may fall behind in their industries. But uncritical adoption, treating AI as a creative replacement, undermines the higher-order thinking that creative and strategic work demands.
HWIT represents a middle path: using AI effectively without using it as a crutch. The method acknowledges that humans excel at adapting to uncertainty, imagining alternative futures, and making moral judgments. AI excels at speed and pattern recognition. Structured interaction between the two can produce better results than either working alone.
The catch is that this approach requires discipline. There's no shortcut that removes the need for humans to think hard about their own work before asking an AI to help develop it.