You need an AI policy for docs and you need it now

Posted on Nov 10, 2025

The dam of AI-written doc contributions might be about to break. It’s already cracking for code, with posts wondering how to review a vibe-coded pull request consisting of nine thousand new lines of code. In the midst of what Tom Johnson describes as acceleration, docs-as-code writers wonder how to contain the seemingly inescapable wave that could bury their backlogs in AI slop. The answer could lie in taking a stance: crafting an AI policy for docs.

Even if all the major AI companies went bust tomorrow, as some tech augurs zealously predict, the products they developed would stay and become part of the computing landscape, their training and deployment commoditized to the point that you could build your own model for free and run it on your phone. With such a powerful tool for text generation at everyone’s disposal, it’s only a matter of time before folks start using it to contribute in docs-as-code settings.

So it begins

Tech writers can’t expect AI to go away or to leave their daily work untouched. Sadly, opposing the application of AI to docs authoring, while sometimes grounded in ethical and professional concerns, isn’t practical if what we seek is the survival of our jobs. Even worse is being indifferent to the changes happening around us and simply going with the flow: that’s the little death of the learning and diplomatic skills that make tech writers so adaptable.

If technical writing wants to survive as a craft, it’ll have to deal with yet another wave of technological change. To do so, it must fully participate and contribute.

To defend tech writing, own and define the rules of the game

Twenty years ago I practiced a bit of Aikido, a martial art where you defend yourself while also protecting the attacker from injury. In Aikido you don’t block attacks; you redirect the attacker’s force and use it to your advantage. Applied to AI and docs, this means creating guidelines and tools to generate docs through AI instead of forbidding its usage entirely or avoiding the issue. A policy would be our ude gaeshi.

Despite its bureaucratic ring, a policy is nothing but a declaration of intent, a stance articulated through principles and rules. Policies set the tone and the stage for working with new tools and methodologies, stating what is expected and what is or isn’t acceptable output. You can write an AI & Docs policy because you own your work, in the same way you’ve defined a style guide, an issue template, or the content strategy of your docs.

Before crafting those policies, though, you need to know how you want LLMs to interact with your work. Here I offer some ideas that might inform your own policy-making efforts.

Start by laying out the core principles governing docs and AI

The guidelines should open with the key principles governing the application of generative AI or LLMs to user-facing docs. The policies you develop have to lean on those principles. For example, you should make the responsibility and accountability of the human behind the machine a core principle: you, as the human initiator and overseer of AI work, are responsible for its output. Similarly, you should define who has the last word on quality (most likely you).

If the company you work for already has generative AI policies and guidelines, as it should, you can piggyback on them and develop AI & Docs policies as an extension, building on the core principles while providing more specific guidance and advice. Chances are, for example, that the company already maintains a list of approved and procured GenAI tools and models. If those policies don’t exist, setting your own can be an enlightening example for engineering.

Define what warrants human expertise

Contributors might feel lost when it comes to understanding when and where they can use LLMs for docs-related work. Assist them by providing a list of use cases focused on augmentation and efficiency. You might want to leave out, or explicitly forbid, using LLMs for certain scenarios, such as architectural work or drafting entire new doc sets from scratch. Given the age of exploration we’re living in, remember Aikido: say what can be done, not what cannot.

For example, you can state that generative AI can be used as an alternative to classic search and replace, or as a faster way of editing patterns than regular expressions. You can be explicit about usage that’s always permissible as long as the user validates the output, such as automatic completion through LLMs. Use your own experience to draw the line around the kinds of contributions you’d like to see.
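To make the “validate the output” requirement tangible, here’s a minimal sketch of what an approved, human-in-the-loop use case could look like. It assumes the OpenAI Python SDK, an API key in the environment, and a made-up renaming task (“FooCloud”); the names and model are placeholders, and the point is simply that the model proposes while the writer reviews and disposes.

```python
# Hypothetical sketch: use an LLM for a search-and-replace-style edit across
# the docs, but never write anything to disk until a human has reviewed the diff.
# Assumes the OpenAI Python SDK, an OPENAI_API_KEY in the environment, and a
# made-up renaming task for illustration.
import difflib
import pathlib

from openai import OpenAI

client = OpenAI()
PROMPT = (
    "Rename every occurrence of the deprecated product name 'FooCloud' to "
    "'Foo Cloud Platform'. Change nothing else. Return only the file content."
)

for path in pathlib.Path("docs").glob("**/*.md"):
    original = path.read_text(encoding="utf-8")
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: use whatever model your company has procured
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": original},
        ],
    )
    edited = response.choices[0].message.content
    # Print the diff instead of applying it: the human stays accountable for the output.
    diff = difflib.unified_diff(
        original.splitlines(),
        edited.splitlines(),
        fromfile=str(path),
        tofile=f"{path} (LLM suggestion)",
        lineterm="",
    )
    print("\n".join(diff))
```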

Review AI contributions like any other contribution

The opencontainers project is currently debating the adoption of AI policies. Part of the debate is devoted to whether AI contributions should be treated like any other, especially in an open source setting. The same question easily applies to documentation. Some commenters argue that contributions shouldn’t be treated differently because of their source. My personal take can be summarized by this comment posted by user alexchantavy:

As an open source maintainer, I don’t have an issue with AI; I have an issue with low quality slop whether it comes from a machine or from a human.

As a documentarian, you get to set the bar for quality in your docs. Docs made using LLMs or written with substantial AI intervention should go through the same review process as the rest. You might still want to know whether AI was involved, though: spotting hallucinations is easier when you know a pull request was authored with the help of AI, which leads me to the next recommendation.

Ask contributors to disclose and describe their use of AI, even if it’s undetectable

When someone documents how they used AI (which tool, what prompts, what worked, what failed), everyone benefits. Over time you start seeing patterns: this type of doc responds well to AI, that type needs more human work than it’s worth. We’re past the point where you can reliably spot AI-generated text, and detection tools are unreliable at best. So disclosure becomes a matter of professional honesty, not enforcement.

Useful disclosure includes the tool or model the contributor used, the initial prompt, how much editing was needed, what the AI got wrong, and whether they’d use AI for this again. This helps the team get better at AI-assisted workflows and helps you refine your policy over time. Don’t get too finicky, though: disclosure can be a delicate matter, especially in a climate of AI shaming. If you suspect AI was involved, gently remind the contributor to disclose it.
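If you want to nudge rather than police, a small CI step can do the gentle reminding for you. This is a hypothetical sketch: it assumes your pipeline passes the pull request description in a PR_BODY environment variable (GitHub Actions, for instance, can do that from the github.event context), and the keywords it looks for are placeholders you’d adapt to your own template.

```python
# Gentle, non-blocking reminder about AI disclosure. Hypothetical sketch: it
# assumes CI passes the pull request description in a PR_BODY environment
# variable and that disclosures mention one of a few placeholder keywords.
import os
import re

body = os.environ.get("PR_BODY", "")

if not re.search(r"ai disclosure|llm-assisted|written with ai", body, re.IGNORECASE):
    print(
        "Reminder: if you used an LLM for this change, please add a short "
        "'AI disclosure' note (tool or model, prompt, how much editing it needed)."
    )
# No sys.exit(1) here on purpose: this is a reminder, not a gate.
```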

Rely on deterministic safety nets and tests

Good old deterministic testing (dumb robots) is the best way to keep LLMs (smart robots) in check at volume. AI doesn’t get special treatment from your automated checks. Run everything through your existing quality gates: linters, link checkers, spell checkers, build tests. CI workflows don’t care who wrote the content. This frees up human reviewers to focus on what matters: style, structure, usefulness, impact, and so on.
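As an illustration of such a gate, here’s a minimal link checker in plain Python. In practice you’d likely reach for an off-the-shelf tool, but the sketch shows the property that matters: the check is deterministic and ignores who, or what, wrote the content. The docs folder and Markdown glob are assumptions about your repo layout.

```python
# Minimal deterministic quality gate: check that every absolute URL in the docs
# responds. Illustrative only; a real pipeline would likely use a dedicated
# link checker, but the principle is the same: the check ignores the author.
import pathlib
import re
import sys
import urllib.request

URL_PATTERN = re.compile(r"https?://[^\s)\"'>\]]+")
broken = []

for path in pathlib.Path("docs").glob("**/*.md"):
    for url in URL_PATTERN.findall(path.read_text(encoding="utf-8")):
        try:
            request = urllib.request.Request(url, method="HEAD")
            urllib.request.urlopen(request, timeout=10)
        except Exception as error:
            broken.append((path, url, error))

for path, url, error in broken:
    print(f"{path}: {url} -> {error}")

# Fail the CI job if any link is broken, regardless of who wrote it.
sys.exit(1 if broken else 0)
```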

If your current tools don’t catch AI failures, such as hallucinated commands, made-up API methods, or plausible but wrong syntax, it might be a good time to extend your test suite, treating docs as tests, as Manny Silva advocates. Treat docs with the same infrastructure rigor you apply to code. As I wrote in What docs as code really means:

You would never push code into production without a proper review and tests: don’t publish docs in a hurry either, because docs are infrastructure.
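One way to extend a test suite in that direction, sketched here under the assumption that your shell examples live in fenced bash or shell blocks under a docs folder, is a check that flags commands that don’t exist on the build machine. It’s naive and won’t catch every hallucination, but it catches the most blatant ones deterministically.

```python
# Illustrative docs-as-tests check: extract shell examples from Markdown and
# flag commands that aren't installed on this machine. Deterministic, cheap,
# and indifferent to whether a human or an LLM wrote the example.
import pathlib
import re
import shutil
import sys

# Matches fenced code blocks tagged bash, shell, or sh.
FENCE = re.compile(r"`{3}(?:bash|shell|sh)\n(.*?)`{3}", re.DOTALL)
missing = []

for path in pathlib.Path("docs").glob("**/*.md"):
    text = path.read_text(encoding="utf-8")
    for block in FENCE.findall(text):
        for line in block.splitlines():
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            # Naive: take the first word as the command; shell builtins like
            # 'export' will show up as false positives.
            command = line.split()[0]
            # shutil.which returns None when the command isn't on PATH.
            if shutil.which(command) is None:
                missing.append((path, command))

for path, command in missing:
    print(f"{path}: command not found: {command}")

sys.exit(1 if missing else 0)
```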

Let’s continue being the masters of technological adaptation

Technical writers have survived technological changes for decades. The advent of AI shouldn’t be different. If this reminds you of Realpolitik, you wouldn’t be wrong: LLM-generated contributions are already happening and there’s no going back. Writing an AI & Docs policy means taking full ownership of the process and reminding folks that documentation requires expertise and has high quality standards. That kind of strategic thinking is what will keep us afloat.

What I would find dangerous for technical writing right now is sitting on our hands, hoping that someone else sets the rules of the game. We can’t pretend this will sort itself out on its own, so start drafting your policy, your stance, and iterate as you learn more. Your policy will also act as a reminder of the value you bring to the organization. Worst case, it’ll open useful internal debates that you might otherwise have missed. We need that depth.