The Double Standard: Why AI-Assisted Writing Should Be Normalized Like AI-Assisted Coding
Engineers accept AI for coding but criticize it for writing. Here's why that double standard doesn't hold up, and why the human → AI → human workflow deserves the same legitimacy.
- Dr. Vivek Shilimkar
- 8 min read
The Double Standard
I’ve noticed something strange in technical communities. The same engineer who proudly uses GitHub Copilot, who asks Claude to refactor their functions, and who relies on AI to generate boilerplate code, will turn around and criticize someone for using AI to help draft an article.
The criticism usually sounds like this: “AI-generated content is low quality.” Or: “It floods the internet with spam.” Or simply: “It’s not really your work.”
Think about the absurdity of this for a moment. AI-assisted coding is widely accepted and used to build production software—systems that handle real user data, process financial transactions, and power critical infrastructure. Yet AI-assisted writing for sharing knowledge, which carries far less risk, is somehow considered unacceptable. This is beyond my comprehension.
But here’s the thing—when I use AI to help me write, I’m doing exactly what that engineer does when they code with AI assistance. I’m using a tool to amplify my thinking. I’m still responsible for the output. I’m still the one who decides what gets published. The AI doesn’t write for me; it writes with me.
So why is AI-assisted coding accepted while AI-assisted writing is treated with suspicion? I think it’s worth examining that double standard, because it reveals something interesting about how we think about creation, authorship, and the nature of technical work itself.
In a few years, AI-assisted writing will be as normalized as AI-assisted coding is today. I intend to be among the first to make that case—not by argument alone, but by demonstrating that the workflow produces work worth reading.
The Workflow: Human → AI → Human
Let me be clear about what AI-assisted writing actually looks like in practice, at least the way I do it.
It’s not: ask AI, copy output, publish.
It’s more like this:
- I think deeply about what I want to say. I identify the core argument, the structure, the audience, the tone. I know what points matter and what evidence supports them. This is the hardest part, and AI doesn’t do it for me.
- I ask an AI agent to draft. I provide context, constraints, examples of my writing style, specific requirements. The AI produces a first draft based on my direction.
- I review, edit, validate, and refine. This is not a quick pass. I read critically. I rewrite sections that don’t sound right. I verify facts. I adjust tone. I add details the AI couldn’t know. I remove generic phrases. I make sure every sentence reflects what I actually believe and can defend.
- I take full responsibility for what gets published. If there’s an error, it’s my error. If the argument is weak, that’s my failure. If the writing resonates, that’s because I shaped it to resonate.
This is a collaborative workflow, but the human is still the architect, the editor, and the final authority. The AI is a drafting assistant, not a replacement author.
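To make the shape of that workflow concrete, here is a minimal sketch in Python. Everything in it is hypothetical: `request_draft` stands in for whatever model API you use, and the review passes are stubs marking where the real human work happens. It illustrates the division of labor, not a real implementation.

```python
# A minimal sketch of the human -> AI -> human workflow described above.
# All names here are hypothetical placeholders, not a real library's API.

def request_draft(brief: dict) -> str:
    """Placeholder for an AI drafting call (e.g. an LLM API)."""
    return f"DRAFT: {brief['argument']} (for {brief['audience']}, tone: {brief['tone']})"

def human_review(draft: str, passes: list) -> str:
    """The human pass: every editing/validation step runs before publishing."""
    for edit_pass in passes:
        draft = edit_pass(draft)
    return draft

# Step 1: the human does the thinking up front.
brief = {
    "argument": "AI-assisted writing deserves the same legitimacy as AI-assisted coding",
    "audience": "engineers",
    "tone": "direct, first-person",
}

# Step 2: the AI produces a first draft from that direction.
draft = request_draft(brief)

# Step 3: the human reviews, edits, and validates (stubbed here).
verify_facts = lambda d: d            # replace with real fact-checking
remove_generic_phrases = lambda d: d  # replace with real line editing
article = human_review(draft, [verify_facts, remove_generic_phrases])

# Step 4: the human decides whether it ships, and owns the result.
ready_to_publish = len(article) > 0
```

The point of the sketch is where the logic lives: the brief and the review passes are human code; the drafting call is the only step the AI performs.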
Now compare that to how many engineers use AI for coding:
- Think about what the code should do
- Ask AI to generate a function or module
- Review the code, test it, refactor it, integrate it
- Take responsibility for what ships
It’s the same pattern. Human thinking → AI assistance → human validation and ownership.
So why is one celebrated and the other criticized?
Thinking vs. Typing
I think the confusion comes from conflating two different things: thinking and typing.
When you write code, the hard part is not typing the syntax. The hard part is understanding the problem, designing the solution, anticipating edge cases, making architectural decisions, and maintaining coherence across a complex system. Typing is just the mechanical step that translates thought into executable form.
AI assistance speeds up the typing part. It doesn’t eliminate the need for engineering judgment. No one thinks that using Copilot means you’re not really an engineer. We understand that the tool doesn’t replace the expertise—it amplifies it.
Writing is the same.
The hard part of writing is not putting words on the screen. It’s clarity of thought. It’s structuring an argument. It’s knowing your audience. It’s deciding what to include and what to leave out. It’s choosing the right level of detail. It’s making sure each paragraph flows logically from the last.
AI can help with the mechanical act of drafting, just as it helps with the mechanical act of typing code. But it doesn’t replace the thinking. It doesn’t decide what’s worth saying. It doesn’t absolve the author of responsibility for the final output.
If you accept AI-assisted coding, you should accept AI-assisted writing. They’re fundamentally the same pattern: using tools to accelerate the translation of thought into artifact, while retaining full human ownership of the result.
The Quality Concern
I understand the concern about low-quality AI spam. It’s real. There are people who generate thousands of SEO-optimized articles with no editorial oversight, no fact-checking, and no real human involvement beyond clicking “publish.”
But that’s not an argument against AI-assisted writing—it’s an argument against publishing without responsibility.
The same problem exists in code. There are developers who copy-paste Stack Overflow answers without understanding them. There are teams that ship AI-generated code that’s untested, insecure, or poorly designed. But we don’t conclude that AI-assisted coding is inherently bad. We conclude that irresponsible engineering is bad.
The solution is not to ban the tool. The solution is to hold people accountable for what they publish, regardless of how they created it.
When I publish an article, I’m making a claim: this is worth your time, this is accurate to the best of my knowledge, and I stand behind every word. If I fail to meet that standard, that’s on me—not on the tool I used.
A well-edited, carefully validated article written with AI assistance is infinitely more valuable than a sloppy, error-filled article written entirely by hand. What matters is the quality of the final output and the integrity of the process, not whether a particular tool was involved.
Ownership and Responsibility
Here’s what I think authorship actually means in a world where AI assists both coding and writing:
You own the output if you own the decisions.
If you decided what argument to make, if you validated the facts, if you refined the structure, if you rewrote the weak sections, if you’re prepared to defend every claim—then you’re the author. The fact that an AI helped you draft doesn’t change that.
This is true for code, and it’s true for writing.
When you use Copilot to generate a function, you don’t put “Co-authored-by: GitHub Copilot” in the commit message. You take ownership. You integrated it, you tested it, you shipped it. It’s your code.
The same principle applies to writing. If I use AI to help draft an article, but I’m the one who reviews it, validates it, refines it, and publishes it, then it’s my article. I’m the one responsible for its accuracy and its quality.
And that’s how it should be.
AI as a Force Multiplier
The real value of AI—in both coding and writing—is not that it replaces human effort. It’s that it amplifies it.
I can think through ideas faster because I don’t get stuck on phrasing. I can experiment with different structures without rewriting from scratch every time. I can focus on the hardest problems—what to say, how to say it, whether it’s true—because the mechanical drafting is accelerated.
This is the same reason engineers love AI-assisted coding. It doesn’t make you less of an engineer; it makes you a more productive one. You can focus on design, architecture, and correctness, while the AI handles boilerplate and repetitive patterns.
The goal is not to eliminate human involvement. The goal is to spend human effort where it matters most: on judgment, creativity, validation, and responsibility.
That’s what I do when I use AI to help me write. I’m not outsourcing my thinking. I’m accelerating the path from thought to finished article, so I can spend more time on what actually matters: making sure the argument is sound, the facts are correct, and the writing is clear.
Evaluate the Idea, Not the Tool
Ultimately, I think we need to shift how we evaluate content.
Don’t ask: “Was AI involved?”
Ask: “Is this accurate? Is this insightful? Is this worth my time?”
If the answer is yes, then it doesn’t matter whether the author used AI, just as it doesn’t matter whether a developer used Copilot.
If the answer is no, then it’s bad content—regardless of how it was created.
The double standard we’ve created around AI-assisted writing doesn’t make sense. It’s inconsistent with how we think about AI-assisted coding, and it misunderstands the nature of authorship in both domains.
Writing is thinking made visible, just as code is logic made executable. In both cases, AI can assist with the mechanical translation, but it cannot replace the human who decides what’s worth expressing and takes responsibility for the result.
I use AI agents to help me draft articles. I carefully review, edit, validate, and publish them. And I stand behind every word.
That’s legitimate authorship. And it should be normalized, just like AI-assisted coding already is.
Judge the ideas, not the tools. That’s the standard that actually matters.