What to do when you're feeling AI anxiety as a tech writer
Some technical writers in my network are genuinely worried about their professional future in the AI age. Will large language models take my job, they wonder. Are we going to be replaced by GPT, they ask at meetups and in community forums. My short answer is “No”. My longer answer is “No, unless you reject the benefits of LLMs”. For my complete answer, keep reading this post.
By now, you should have some notion of what large language models are and how they work. LLMs are particularly powerful blenders of information. They don’t think, they lack motivation, and they can’t act on their own. A fun way of picturing LLMs is to imagine C3PO, the golden droid from Star Wars, sitting next to you, trying to be helpful when you ask it something. When it works, you want to pat its shoulder; when it doesn’t, it’s irritating. Whatever the outcome, calling it AI is poetic license.
Given the current state of LLM technology, replacing human writers entirely with LLMs, or docs with chatbots, is not what I’d call a wise decision. It may be tempting as a cost-cutting measure, especially if one falls into the trap of thinking that LLMs are sentient (they’re not). Instead of replacing writers with LLMs, a more promising direction is AI augmentation: enriching human-made docs with AI-generated content where it makes sense. That’s a C/Fe society, in Asimov’s words:
“Carbon is the basis of human life and iron of robot life. It becomes easy to speak of C/Fe when you wish to express a culture that combines the best of the two on an equal but parallel basis.”
—Isaac Asimov, The Caves of Steel
There’s lots of value in being human in this particular moment, in being the C in the C/Fe formula. This is as true for technical writers as it is for software developers, UX designers, copywriters, and other professions that use language as the primary tool of their work. What follows are some reasons why humans will always be needed in human-run enterprises that rely on LLMs.
- Humans must feed the machine. The more specific the task, the more LLMs require high-quality data, including documentation. If that’s not available, or all LLMs can consume is bad docs, the output will be similarly disappointing. Fine-tuning, quality assurance, training… all of that requires real human writers and editors. You can be one of them.
- Liability is for humans. Companies can’t afford to roll out grossly inaccurate, misleading, or untested documentation and software. In sectors like industrial machinery, energy, and pharmacology, a small mistake in an AI-generated document could lead to disastrous consequences. Humans are still necessary because they care, while machines don’t, because they can’t.
- Context matters more than content. The importance of disciplines such as information architecture and content strategy lies in the uniquely human capacity to take a step back and see the bigger picture, a task that exceeds the skills of LLMs. Writers don’t write in a vacuum: their content serves a purpose and has to adapt to ever-changing circumstances.
- One cannot augment what isn’t there. The quality of the output you get from LLMs is a function of the quality of your questions and inputs. AI augmentation can only make you a more efficient writer if you’re already a good writer. The same goes for coding and other skills. You need human writers in order to harness AI writing. Which leads me to the next point.
- Questions matter more than answers. The success of StackExchange or Quora lies in their ability to provide questions, not answers. Questions are hard. When an artificial agent can formulate valid questions and guess which ones are most relevant for a user based on a set of incoherent speech acts, I’ll be willing to concede that docs might be at risk. Until then, docs and AI will need each other.
So, stop shivering. Here’s what you should be doing instead:
- Think strategically, as in content strategy. The potential of AI content in certain scenarios, such as integration docs or code samples, is huge. Figure out where AI should interface in your information architecture and let the LLMs roam within the boundaries that you build for them. Shepherd AIs.
- Test your assumptions, test everything. It’s already common knowledge that the default output of LLMs is only good up to a point, if not outright unusable. Even my kids can tell when the stories GPT comes up with are lame. Stage A/B tests and user research to verify how good LLMs really are (see the first sketch after this list).
- Embrace metrics and docs observability. Don’t just unleash AI on a product and forget about it; instead, measure the impact of AI-generated or AI-edited content across your product and content properties, and see where it helps the most and where it could hurt your product’s credibility (see the second sketch after this list).
- Hire with AI augmentation in mind. As I explained in Hiring technical writers in a ChatGPT world, writing skills are based on the same pattern-matching and retrieval skills that LLMs mimic. Unless you expect writers to work offline on parchment, tolerate a certain degree of AI augmentation.
- Advocate for your craft at work. Tech writers spend only a fraction of their time writing; the rest goes to chasing subject-matter experts, organizing information, and more. Don’t let stakeholders think that the deliverable is your job: remind them how the cake is actually made.
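To make the A/B testing idea concrete, here’s a minimal sketch in Python. Everything in it is hypothetical: the bucketing scheme and the simulated “Was this page helpful?” votes stand in for data you’d collect from a real feedback widget or analytics tool.

```python
import hashlib
import random
from statistics import mean

def assign_variant(user_id: str) -> str:
    """Stable bucketing: the same reader always sees the same page variant."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return "human" if int(digest, 16) % 2 == 0 else "llm"

# Simulated "Was this page helpful?" votes: 1 = yes, 0 = no.
# Replace this loop with real feedback data from your docs site.
feedback = {"human": [], "llm": []}
for n in range(1000):
    variant = assign_variant(f"user-{n}")
    weights = [0.8, 0.2] if variant == "human" else [0.6, 0.4]
    feedback[variant].append(random.choices([1, 0], weights=weights)[0])

for variant, votes in feedback.items():
    print(f"{variant}: {len(votes)} readers, {mean(votes):.0%} found the page helpful")
```

The point isn’t the arithmetic: it’s that the comparison is deliberate, measurable, and repeatable every time you consider swapping a human-written page for an AI-generated one.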
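In the same hedged spirit, here’s what docs observability could look like at its simplest: tag every page with its provenance (human, AI-generated, or AI-edited) and roll up a quality signal per group. The sample pages and the tickets-per-visits proxy below are made up for illustration; feed the rollup whatever your analytics pipeline actually exports.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class PageStats:
    path: str
    provenance: str       # "human", "ai-generated", or "ai-edited"
    visits: int
    support_tickets: int  # tickets that link back to this page

# Hypothetical sample data; in practice, exported from your analytics stack.
pages = [
    PageStats("/docs/install", "human", 5200, 12),
    PageStats("/docs/api/errors", "ai-generated", 3100, 45),
    PageStats("/docs/tutorial", "ai-edited", 4800, 9),
]

rollup = defaultdict(lambda: {"visits": 0, "tickets": 0})
for page in pages:
    rollup[page.provenance]["visits"] += page.visits
    rollup[page.provenance]["tickets"] += page.support_tickets

for provenance, totals in rollup.items():
    # Tickets per 1,000 visits: a rough proxy for where content hurts.
    rate = 1000 * totals["tickets"] / totals["visits"]
    print(f"{provenance}: {rate:.1f} support tickets per 1,000 visits")
```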
Don’t fear WriterBot: it can be your best friend if you make room for it and think about how you can best partner with it. In the meantime, make yourself heard and start thinking about clever ways to integrate LLMs into your daily routine.