Just when we thought that we wouldn’t be replaced by WriterBot, a mounting concern is ruining many a breakfast: Bad actors could still get hired as technical writers by feeding take-home assignments to ChatGPT and presenting the resulting soliloquy as their own. Never mind that ChatGPT content is often wrong or trite: Some think that it’d still fool hiring managers. Let me suggest two solutions to this robocalyptic scenario.
I was reluctant to write about this topic, because it entails a pessimistic view of both AI and people. Is there really a horde of job applicants willing to game a writing test to get a month or two of tenure? Unfortunately, yes: Fraud exists, and it’d be naive to dismiss the possibility of folks using ChatGPT to fake their way into a job. Hiring mistakes are very expensive, so of course you want to defend against AI-assisted scams. I get that.
You can’t stop progress, though. There is no Skynet to burn, nor infallible ways of detecting text authored by AIs (although there is some talk of text watermarking and detectors). If you think your writing test is vulnerable to scammers who use ChatGPT, I only see two possibilities: You either change your take-home tests or modify the way you think about technical writing. The latter, in my opinion, is the best way to go.
ChatGPT is extremely good at regurgitating existing knowledge. For example, you can ask it to document how to call an API endpoint or how to use a Python function. In a way, it’s the C-3PO version of Google Search. If your take-home test consists of explaining an existing technology, know that it’ll be highly vulnerable to AI. Even good old plagiarism might still be able to crack your test. Dump it.
Hiring writers only because of their writing is like hiring someone to be a ninja based solely on how well they chop stuff with a katana: important, but not the whole picture. Tech writers bring a diverse set of skills to their jobs, writing being but one of them. We often juggle multiple imperfect sources of information, adapting our tone to the situation while extracting feedback. What makes our job irreplaceable are the human bits.
Instead of going for generic requests, go for a realistic take-home assignment that reflects the complexities of the job. For example, you could provide four or five different and slightly contradictory artifacts (official docs, Jira tickets, CLI logs, etc.) and ask applicants not only to write docs using that little bundle of chaos, but also to write down what clarifications they’d ask SMEs for and how they’d manage feedback.
Another option is to switch the writer to an editor role: Ask applicants how they’d improve a bad document or the information architecture of an existing site. That sort of contextual awareness is beyond GPT’s reach for now. Editing requires an opinionated stance, something that a gelatinous hive mind doesn’t really possess – because it can’t, because a stance is the product of being a living, hungry thing that makes mistakes.
You could even ask applicants to document something GPT is unlikely to have much material about, like a fictitious piece of software or a random object that does something similarly random. What you can no longer do is go the traditional route and ask remote job applicants to do things that search engines and AI tools are better suited for. The only way of negating AI-based scams is to increase complexity or have applicants work in real time.
Even after all the above, a highly skilled operator of AIs could still crack your test. If that’s the case, you might as well hire that person, which brings me to the next option: To assume that the usage of AI agents like ChatGPT or GitHub Copilot will be commonplace at work, as happened with Stack Overflow and Google.
In The Rise of WriterBot I advocated for tighter integration of AI-assisted writing in technical writing workflows:
By removing the most repetitive and least engaging bits of the job, WriterBot would allow tech writers to humanize their work and focus on building relationships and improving the way products communicate.
Writing skills are based on the same pattern-matching and retrieval skills that ChatGPT mimics. Unless you expect writers to work offline on parchment, you have to tolerate a certain degree of hive mind influence. (I’m sure many writers and editors criticized automatic grammar checks as unfair when they appeared.)
You can be frank about this in your take-home assignments and ask candidates whether they used AI tools or not. Push things a little further and ask them to use ChatGPT in one of the exercises and explain how they edited the output or what they think is wrong with it. One more step and you could ask how they would use AI at work.
If a tool is better at something, wouldn’t you give it to qualified operators if you were running a business? Picture chainsaws vs. hatchets, or cars vs. rickshaws. The key here is what makes someone a qualified operator of an AI-based tool. To answer that, one must first accept that AI can enhance our abilities.
I don’t believe in the Singularity, nor in AIs taking over the planet. I do believe it’s time to start thinking of AI augmentation as the future of most technical jobs, including technical writing. It’s not about replacing people with robots (we really can’t); rather, it’s about embracing transhumanism.
You awake one morning to find your brain has another lobe functioning. Invisible, this auxiliary lobe answers your questions with information beyond the realm of your own memory, suggests plausible courses of action, and asks questions that help bring out relevant facts. You quickly come to rely on the new lobe so much that you stop wondering how it works. You just use it. This is the dream of artificial intelligence.
— Byte, April 1985
So, no, don’t cling too much to the idea that writers must do everything on their own. Instead, think hard about the complexities of the work you do and hire accordingly. Accept that parts of the work can be executed with the assistance of artificial intelligence, software, and other devices. But first, talk to Legal.