Webinar: What's Wrong with AI-Generated Docs
Today I discussed with Tom Johnson and Scott Abel how tech writers can use AI at work. It all started with my post What’s wrong with AI-generated docs, though we didn’t just focus on the negatives; in fact, we ended up acknowledging that, while AI has limitations, it’s also the most powerful productivity tool at our disposal. Here are some of the things I said during the webinar, transcribed and edited for clarity.
About the impact of AI on my work
“I feel like the areas where AI has had the biggest impact for me are supporting tasks, things around what I do, but not the actual writing. I very seldom ask AI to write stuff. It helps me more at the margins, doing things like completing fixes, writing that follows existing patterns, explaining a code base, or validating that something reads a certain way. So it’s all the things around my job that require feedback or a careful set of eyes. That’s where I feel AI has had an impact.”
About the impact of AI on our profession
“There’s a fundamental shift happening here. Picture the lone technical writer doing lots of manual work to get the docs out. That has defined their job for many years. Now we have LLMs helping us, potentially freeing us from all those manual steps. And this forces us to do two things. The first is to think of documentation as the product, and in that sense you need to peek under the hood and see how your docs release process works.
The second is to think about strategy, which is something writers have been avoiding for many, many years, with a very good excuse: I have to write the actual docs. Now they can produce those docs in a different way, but they have to think about all the moving parts of where they want the docs to be. Our job will look more and more like that of a strategist and a product owner of what we call docs.”
About supervising the work done by AIs
“The moment LLMs came out and this technology surfaced, I already had the feeling that our role was to become that of an editor. And I think that what we are doing is shepherding AI, limiting it to certain contexts. We are learning where it’s best to call it, how it’s best to feed it, and what to do with the output. So it looks very much like an editorial process, an editorial workflow where you provide some initial input, maybe some idea of what content to produce, and then you review it. There’s always that quality assurance, quality control side, the supervision.
AI is not really autonomous. It relies a lot on us. And I feel like there are days when, coding with AI or doing some assisted writing, I’m spending more time helping out the AI than doing the actual task that I’m asking the AI to do. But I take this as a learning process. I read this article the other day, Nobody knows how to build with AI yet, by a developer saying they haven’t quite figured out how to best work with AI. There were lots of comments around the fact that you have to spend lots of time, you have to learn how to talk to it, and when the model changes, you may have to change how you work, too. You have to learn how to optimize your time. But your presence is always mandatory.”
A reminder: AI is just a tool
“Here’s something I would like to remind people: LLMs are a tool. Sometimes we operate under the illusion that we are talking to a person. It’s just a tool, a computer program. So you can actually be quite ruthless with it. If it starts getting lost, just open a new session from scratch; don’t feel bad about it.”
Our duty as technical writers
“I think it’s our duty right now as technical writers to explore these technologies and let future employers know that we are actively exploring them, that we are aware of them and of their limitations, their pros and their cons. This informed position is the best one to have currently.”