Skills are docs, and docs need tech writers
Not a month goes by without someone claiming they’ve killed tech writing. A few weeks ago it was CodeWiki and its docs theatre. Now it’s the turn of Claude Skills and their ecosystem of Markdown instructions served as if they were executable code or macros. Docs, though, are tougher than these announcements assume, because they’re the strongest signal in a sea of noise, and because they’re pretty much everywhere. Including, well, skills.
Here we go again. Someone is trying to reinvent docs under a new name
What made me grunt like a weary town sheriff this morning was a blog post titled How Claude Skills replaced our documentation, in which the author, who seems unaware of what technical writing actually is, explains how they replaced internal procedural docs with skills. The post then goes on to explain why skills are better than docs. Hold on to something. Are you ready? Here we go:
This isn’t documentation in the traditional sense. It’s an instruction set optimized for an AI reader.
Which is to say that skills are not docs because they are docs.
In fact, that’s what skills are: docs. They literally are docs with some frontmatter sprayed on top. You could describe them as convenient pre-packaged docs, but an LLM is not a compiler, and skills are not code. They’re prose, and producing good prose requires good writing skills, no matter the audience, artificial or human.
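To make the point concrete, here is roughly what a skill file looks like. The skill name, steps, and paths are illustrative, but the shape follows the published SKILL.md format: a YAML frontmatter block with a name and a description, followed by plain Markdown instructions.

```markdown
---
name: release-notes
description: Draft release notes from merged pull requests, following the changelog style guide.
---

# Drafting release notes

1. Collect the pull requests merged for the release milestone.
2. Group the changes under Added, Changed, and Fixed.
3. Write one sentence per change in present tense, linking the PR.
```

Strip the three lines of frontmatter and what remains is a perfectly ordinary how-to page. The YAML is the packaging; everything the model actually follows is prose.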
If your skills are anecdotally better than some of your docs (not all of them: skills only cover procedures, not concepts), it means your docs were bad in the first place. If AI does a better job of distilling procedures than you do, either you didn’t dedicate resources to docs or the docs you produced were poorly written or incomplete to begin with.
The author recognizes this towards the end, where they admit that skills are not a substitute for thinking, that they need maintenance, and that you can’t encode everything as a skill. I wonder if what prevents them from connecting the dots and understanding that good in-repo docs are actually superior to skills lies in the fact that docs aren’t the shiniest new thing.
There’s an irresistible, almost demoralizing irony in the fact that developers are discovering docs and accessibility only now due to AI. They needed docs and didn’t know it until they had at their disposal an ersatz user in the form of an LLM that asked for context. Treating AI as minions, they are now delighted to write for them. There’s danger around the corner, though.
There’s a huge risk in letting LLMs write skills without human supervision
When skills are created in a black box, they rely only on model knowledge (which might be outdated) and code (which knows nothing about user needs and concerns). The model can only guess what you’re trying to accomplish from the prompt you used to create the skill, which at times is just a sentence like “Create a skill to do this and that”.
If you let AI produce skills without supervision, you’re essentially letting entropy in by giving up on high-signal content made by humans who know what the things are about, that is, docs. Picture a scenario where LLMs create skills that other LLMs will use to create code that, in turn, will end up improving the skill: a telephone game that can only end in informational decay.
A study by Mohamed et al., LLM as a Broken Telephone, found that iterative LLM processing degrades factual content at a consistent rate, with information distorting measurably across chains. Keisha et al. wrote on knowledge collapse: in recursive synthetic training, factual accuracy deteriorates, producing outputs that read well but are quietly wrong.
The pattern extends to docs. An empirical study on architectural design decisions found that LLMs produce relevant descriptions of what code does but fall short of capturing why decisions were made. When design documents were provided as context, LLM-generated comments improved task completion by 28%; without them, design rationale was absent entirely.
Complaining about bad skills won’t make them go away, though, because the instinct behind them isn’t wrong; it’s the lack of writing expertise and editorial oversight that makes them dangerous. You solve this by claiming ownership.
Tech writers must co-own skills as a new channel for documentation
Skills are a different format for docs, one that pre-packages them into self-contained cartridges you can share and reuse in LLM contexts. Knowing this, documentarians must acknowledge that skills are needed and useful, and work to operationalize their generation, distribution, and maintenance based on the documentation.
Start with meta skills to ensure that new skills are made in accordance with existing docs, for example by calling a docs MCP server, and that they follow their style. A skill that creates new skills according to your editorial rules is a quality checkpoint that also improves skill performance (LLMs benefit from great docs as much as humans do).
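A meta skill can be as simple as a skill whose instructions govern how other skills get written. A sketch, where the MCP server name and the style-guide reference are placeholders for whatever your team actually runs:

```markdown
---
name: create-skill
description: Create new skills that comply with the documentation style guide and cite canonical docs.
---

# Creating a new skill

1. Query the docs MCP server for existing pages on the topic; link to them instead of restating them.
2. Follow the style guide for voice, terminology, and step formatting.
3. Keep the description short and scoped to a single task, so the model can select the skill reliably.
4. Flag any claim you cannot source from the docs for human review.
```

The point is that the editorial rules live in one reviewed place, instead of being re-guessed every time someone prompts “create a skill to do this and that”.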
Next, create agentic workflows that check whether skills drift from existing documentation (again using an MCP server) and open pull requests to fix them when they do. Use evals and skill validators in CI pipelines (I use Dachary’s) to ensure skills aren’t broken or hallucinating. These are safety nets for when meta skills aren’t used or skills aren’t updated.
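A validator in CI doesn’t have to be elaborate. Here is a minimal sketch of the structural half of the job: checking that a SKILL.md file has well-formed frontmatter, the required fields, and a non-empty body. The required keys and the length limit are assumptions, not a published contract; drift and hallucination checks would sit on top of this.

```python
import re

REQUIRED_KEYS = {"name", "description"}
MAX_DESCRIPTION = 500  # illustrative limit, tune to your own findings

def validate_skill(text: str) -> list[str]:
    """Return a list of problems found in a SKILL.md file; empty means it passed."""
    problems = []
    match = re.match(r"^---\n(.*?)\n---\n", text, re.DOTALL)
    if not match:
        return ["missing YAML frontmatter block"]
    # Naive key: value parsing, enough for flat frontmatter.
    fields = {}
    for line in match.group(1).splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    for key in sorted(REQUIRED_KEYS - fields.keys()):
        problems.append(f"missing required field: {key}")
    if len(fields.get("description", "")) > MAX_DESCRIPTION:
        problems.append("description too long for reliable skill selection")
    if not text[match.end():].strip():
        problems.append("skill body is empty")
    return problems
```

Wire it into CI so a malformed skill fails the build the same way a broken link in the docs would.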
Lastly, involve tech writers as consulted parties to make sure that skill knowledge is reflected in docs and to identify opportunities for skill creation and improvement. You can also promote skills in user-facing docs as an interactive, LLM-friendly complement to the docs themselves. Working on ways to “skillify” existing documentation is another good path.
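“Skillifying” a docs page can start as mechanically as wrapping it in frontmatter. A sketch, assuming each page opens with an H1 title followed by a summary line (the file layout and naming are hypothetical, not any tool’s convention):

```python
from pathlib import Path

def skillify(doc_path: str, out_dir: str) -> Path:
    """Wrap an existing Markdown docs page into a SKILL.md with frontmatter.

    Uses the H1 title as the skill name and the first paragraph line
    as the description; the page body is kept verbatim.
    """
    text = Path(doc_path).read_text(encoding="utf-8")
    lines = [line for line in text.splitlines() if line.strip()]
    if not lines or not lines[0].startswith("# "):
        raise ValueError(f"{doc_path}: expected an H1 title on the first line")
    title = lines[0][2:].strip()
    name = title.lower().replace(" ", "-")
    description = lines[1] if len(lines) > 1 else title
    skill_dir = Path(out_dir) / name
    skill_dir.mkdir(parents=True, exist_ok=True)
    out = skill_dir / "SKILL.md"
    out.write_text(
        f"---\nname: {name}\ndescription: {description}\n---\n\n{text}",
        encoding="utf-8",
    )
    return out
```

This only packages the doc; a human still decides whether the page is procedural enough to be worth packaging, which is exactly the editorial call the section argues for.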
Creating the right context is a shared concern
The problem of using LLMs without the right context is real, and on this the author of that post and I fully agree. The need for strong context curation and docs accessibility is such that tech writers and developers must work together to find solutions. As tech writers, our obligation is to pick up new habits to make that happen.
To the developers reading this: we’re not gatekeeping your skills. They’re a good idea executed without the people who’ve been solving this exact problem for decades. Invite us into your .claude/skills/ folder: you might be surprised at how much better those instructions get when someone who writes for a living has a look at them.