Code wikis are documentation theater as a service

Posted on Nov 15, 2025

Code Wiki, a new AI tool by Google, claims to generate a complete set of docs, including diagrams, from code repos. The landing page goes so far as to say “Stop documenting. No more stale docs. Ever”, a claim that made me stagger and reach for the nearest chair.

That these tools are laughably bad isn’t reassuring; their emergence hints at a deeper and more unsettling cultural problem.

Ain’t nobody got time for docs – let’s regurgitate words and boxes!

The way Code Wiki and DeepWiki work is as follows: you select a code repository and the LLM generates a seemingly complete set of docs. Both apps allow you to ask questions about the docs using an agent. The process takes around ten minutes. I’ve tried it on one of my pet projects and it produced an entire “wiki” full of dev docs. Magic, right?

As dev tools, they might fulfill the task of project summarization. They’re helpful when trying to explore poorly documented codebases and get a sense of how the pieces fit together, but they stop short of anything that could be called “docs” or “wiki”. I would describe their output as byproducts of an LLM digestion process (yes, blergh indeed).

It’s impressive for the first ten seconds, but as you start reading, you begin to notice that for most complex projects, the wiki is at best dry reference, at worst a dumpster fire of hallucinations, Escherian information architecture, and subtly wrong facts and diagrams. This is not a lazy dismissal: read the comments on Hacker News to get an idea of how dismal their results can be.

I can already see the pitch meetings where someone pulls up one of these wikis and says, “Look, we’ve got docs now”. The bar for what counts as documentation gets lower every time we pretend these outputs are good enough.

You can choose to insult your intelligence by calling them docs

What makes me furious with the heat of a thousand suns is the fact that some devs could be lured into thinking that these ersatz manuals can check the docs box at all. That some could find these self-inflicted wikis a valid replacement for human-made docs shows the same kind of ignorance (or cynicism) about technical writing that folks display when praising art made by AI.

Docs are a product. They require sustained and intelligent effort over a long period of time, as well as a community ready to contribute. Docs are the mirror and lore of your code, instruments that help you think about what you’re designing and developing. Docs serve purposes and needs. Great docs explain, guide, help, illustrate. They’re deeply flawed and human. They must be.

These wikis, on the other hand, are cargo cult documentation as a service. Their structure is like brutalist housing blocks randomly assembled from piles of bricks, their nav a barely plausible FrankenBar that always feels the same. Of course docs must have diagrams, they say, we’re not dummies. As I wrote in When docs become performance art, everybody loses:

Documentation theater […] is the act of creating documentation for the sake of it, so that a box can be checked, an issue closed, or a project completed. […] Tech comm is performative when it’s done to adhere to a certain form, criterion, or standard.

And so README files are written because everybody has them, docs are released because there’s a legal mandate to do so, and so on. In the most nightmarish cases, entire docs sites are created following frameworks because that’s The Right Way™.

While it’s very positive that developers see the lack of docs as an issue, these wikis are not a solution. Their net effect on documentation is harmful, their impact on the craft of technical writing potentially devastating.

Augmentation means opening doors faster, not crashing through them

It would be wrong to just stop at the surface. If products like DeepWiki gain traction, it’s also because developers need to interact more with knowledge and to break through the barrier of increasingly complex, increasingly abundant information. When speed and noise increase, the role of tech writers as curators, preserving what Rahel Anne Bailie describes as Content Integrity, is more important than ever.

Tom Johnson recently wrote that the tech industry is shifting to a narrative where “AI provides tools to speed up your work in a constantly accelerating environment”. Acceleration, in this context, means you can’t be bothered with docs work, at least according to the creators of Code Wiki. You end up sacrificing quality on the altar of delivering even more stuff for the sake of delivering it.

This is a case where some docs aren’t better than none, because they signify the triumph of intellectual laziness over trust and accountability. While I’m not concerned about their immediate impact (any enterprise customer would tear you to pieces if you shipped docs like those), I think they represent a trend that we must fight, because docs written entirely by AI are wrong.

Let’s draw the line: docs generated entirely by AI should never ship to users. You can use AI judiciously to audit your docs for gaps, to spot inconsistencies, to generate first drafts that you’ll edit later. But the moment you let any LLM have the final word on what users will read, you’ve abandoned the basic contract of docs: that someone who understands the system has taken responsibility for explaining it truthfully.