Own the prompt: Build your own tech writing tools using LLMs

Posted on Apr 26, 2025

While some developers wrinkle their noses at the sight of Copilot and similar AI-powered tools, tech writers find them to be great sidekicks. Creating a script to automate edits or content migrations takes at most a few minutes of tinkering. The same goes for code examples and snippets for dev documentation, enhancements to docs sites, and even wacky experiments in retrocomputing. With local LLMs running at decent speed on laptops, not even the carbon footprint is a concern.

That’s why last week, spurred by the appearance of a tool that claims to generate tutorials from code repos, I fired up Copilot and rage-created a playground for tech writers, Aikidocs. Tech writers must own the prompt, I thought, so I made a tool that allows anyone to define a set of files as the prompt, together with whatever local context they want to process. You configure the LLM you want to use, add API credentials, and run the script. That’s it. Here’s a demo using Gemini:

All the coding prompts I used to create Aikidocs

To get to the current state of Aikidocs, I started with the following prompt after selecting Claude 3.7 in Agent mode. Note that the folder already had a Git repo initialized, which is essential for rolling back changes in case the LLM strays (and they do, quite often); once you reach a point you’re happy with, you commit your changes and keep going.

I want to build a NodeJS script that takes context inside the “context” folder by default (or a path defined by the user when calling the command), compresses it and cleans it up, and sends it as context to any of the three major LLMs (Gemini, Claude, OpenAI) using the credentials stored in credentials.txt and the prompt stored in prompt.txt (by default – can be defined via CLI arguments). The script must output the response inside the output folder by default.

The previous prompt requires a few things: you have to know which technologies you want to use (and why), have a high-level overview of the architecture in your head, and have a specific idea of how user interaction should happen. Software technical writers usually have a good grasp of all these aspects because they routinely document them or are exposed to them in developer chats.

The prompt generated an initial set of files and folders. The tool was barely functional, so I continued with the following prompts to refine its state. Since setting the credentials is the only configuration step required, I spent some time making it easier. Claude stumbled when trying to understand what behavior I was aiming for, but it managed to find the right solution in the end.

Change the credentials logic so that, if a model isn’t explicitly indicated, the first model with credentials is used.

The new logic fails to detect if a valid key is set or not. We need a better automatic mechanism.
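The fallback behavior those two prompts ask for might look something like this. The provider order matches the tool’s three hosted LLMs; the key-validity heuristics are my assumptions about what “a better automatic mechanism” could check, not the code Claude produced.

```javascript
const PROVIDERS = ["gemini", "claude", "openai"];

// Reject empty values and obvious placeholders left in credentials.txt.
function looksValid(key) {
  if (typeof key !== "string") return false;
  const k = key.trim();
  return k.length > 20 && !/^(your|xxx|<)/i.test(k);
}

// If no model is explicitly requested, use the first provider whose key
// passes the validity check.
function pickProvider(credentials, requested) {
  if (requested) return requested; // an explicit choice always wins
  const found = PROVIDERS.find((p) => looksValid(credentials[p]));
  if (!found) throw new Error("No valid API key found in credentials.txt");
  return found;
}
```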

Do not show the output of the model in the console. Only produce a file inside the output folder.

Would it help to have a folder for prompt instructions instead of a prompt.txt file? Thinking of having different files, like a style guide, etc. Perhaps that’s context more than prompt?

I’ve created the prompts folder. Modify the Node script to account for the new prompt location and situation.

The script was now working as intended, but I wanted to explore ways of improving it, so I switched Copilot to Ask mode. In this case it did something many LLMs do: import unnecessary dependencies. Again, it’s on the human at the keyboard to understand whether the dependencies the LLM pulls in are truly required. They clearly weren’t in this case.

How would you improve this project? The aim is to provide an easy to use scripting platform for tech writers to explore prompting and context.

Cost estimation sounds interesting, especially before sending the request (user would have to confirm first.) How would you go about it?

Isn’t pricing available in each LLM API as an endpoint? I wonder.

Let’s just implement the token counting and confirmation message for now.

Still says that Error: inquirer.prompt is not a function. Remove the inquirer bits and dependencies and just use a classic prompt. Make the default option in the confirmation prompt be “yes”.

While quite competent and fast, LLMs require hand-holding. When I added support for local models through Ollama, Claude clumsily hardcoded some bits while failing to realize that API keys are not required for local models. It ended up doing what I wanted after I explained the situation and the pseudo user story I had in mind. Writing in a clear and effective way always helps.

I’d like to add the option of running the calls against local models using Ollama.

No API key should be required for ollama models.

We still want to be able to define the Ollama model name in the credentials. Currently it’s hardcoded (llama3.2) and that’s not OK.
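The distinction those prompts are driving at can be sketched as two small helpers: local Ollama models skip authentication entirely, and the model name comes from the credentials file instead of a hardcoded value. The credential field names here are illustrative assumptions, not Aikidocs’ actual schema.

```javascript
// Local Ollama models need no API key; hosted providers do.
function buildAuthHeaders(provider, credentials) {
  if (provider === "ollama") return {};
  const key = credentials[`${provider}Key`];
  if (!key) throw new Error(`Missing API key for ${provider}`);
  return { Authorization: `Bearer ${key}` };
}

// The Ollama model name is read from credentials, with llama3.2 kept
// only as a fallback rather than a hardcoded constant.
function resolveModel(provider, credentials) {
  if (provider === "ollama") return credentials.ollamaModel || "llama3.2";
  return credentials[`${provider}Model`];
}
```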

Aikidocs was now working as desired. The last bit was adding some documentation. I felt a bit guilty about not writing it myself – I’m a tech writer after all – but delegated the first draft to the LLM.

Create a README file with instructions for this project. Explain the folder structure, too, and add usage examples.

I edited the resulting README file manually for format and style, added some additional examples, slapped a license on it, and pushed the whole thing to GitHub. Voilà, the bets are placed!

What does owning the prompt mean for tech writers

When I say that tech writers should own the prompt that generates documentation, I mean two things: that they should design and maintain the prompts, and that they should spearhead docs automation initiatives themselves, as I suggested in my tech writing predictions for 2025. It’s not just about using LLMs at work or tolerating their existence: writers must lead the way and own the conversations with AIs around docs.

What Aikidocs aims to show is that you can work with an LLM as you would with a tech-savvy intern: you provide a style guide, concrete guidance, and source materials to get acceptable output on the other side of the black box. All the content created in those carefully fenced pens will follow your content strategy far more closely than if you let opinionated tools do it for you.

Kayce Basques expressed these thoughts very effectively in a comment on this post:

What we really mean is “own the end-to-end docs automation” and in that case we should just say that outright. My fear is that TWs will take that literally and only think about prompt engineering, when in reality we should be setting our sights much bigger and more broadly.

It’s not vibe coding: it’s LLM and AI surfing.