Do it yourself: User research for technical documentation

Jun. 24, 2022

As a technical writer, I often want to know what works and what doesn’t in the docs I’ve released, and whether the documentation is achieving its purpose. Analytics and feedback widgets can only get you so far: the best way of getting answers about the quality of your documentation is user research.

At its core, user research looks deceptively simple: you recruit external users, ask them questions, and analyze their responses. In this sense, user research is not unlike other forms of data gathering in the social sciences, such as anthropological studies, sociological surveys, or psychological experiments. What’s different is the object of study: products.

You can treat documentation as a product, too, and perform user research (or UX research) on it to answer pressing questions about your docs. You can mix user interviews, where you ask questions about a product or flow, with usability testing, which consists of tasks that the user must accomplish.

I strongly believe that we technical writers should be doing more user research. We’re uniquely positioned to study our documentation as part of the products, and we’re familiar with the issues caused by terminology and UX decisions. Through user research, you can answer questions about how real users read, navigate, and act on your docs.

Sometimes, user research can lead you to unexpected findings due to the sheer variety of hardware and software configurations among your user base. For example, you might discover that your docs are not ready for dark mode browser extensions, or that they don’t work well with screen readers. That is valuable information, too.

Here I summarize what worked for me in the past. I’m not a professional user researcher, but as a psychologist and technical writer, I’ve settled on a formula that helped me carry out small user research projects without neglecting my other duties.

Get help from UX researchers

This is step 0. If you’re lucky enough to have in-house UX researchers, start by asking them for help. UX researchers are brilliant, caring, and usually overworked folks: they’ll be delighted to hear that you want to do extra research, and will gladly guide you through the entire process, show you past results, and give you access to their resources, such as user pools and research guidelines.

Make it abundantly clear that you’ll be bearing the bulk of the burden, so that they can focus on validating your research plan and assisting you at each stage. It could be the start of a fantastic cross-functional collaboration.

Start with good research questions

Good research plans start with great questions. This is perhaps the most difficult bit of UX research, and an art in itself. Great research questions are ones that cannot be easily answered by available evidence and that focus on your core business objectives. They must be clear, relevant, and not obvious. If they make you feel slightly uncomfortable because they’d expose hard truths, then you’re on the right track.

Questions can come from a variety of sources, and the way they’re phrased matters, too. In general, you want users to open their minds and let you glimpse into their mental processes: open questions such as “How would you go about setting this up?” reveal far more than closed ones such as “Was the setup page helpful?”

You don’t need a detailed script: start from core topics and broad questions, then apply Socratic questioning to formulate follow-up questions on the go and dig deeper into findings, like a journalist conducting a semi-structured interview. Be ready to improvise!

Prepare your test and dogfood it

If you go for a hybrid UX research approach (user interview plus usability testing), carefully prepare your test scenario beforehand.

Engineers, sales, and product managers can help you nail down a test scenario that is relevant to success metrics, SLAs, or similar indicators. Above all, try the scenario yourself, and ask others at your company to try it out. You don’t want a user test to fail because users weren’t able to sign up for some reason, or because their stack is incompatible.

For example, to test some of my OpenTelemetry docs at Splunk, I created two Docker images, got them reviewed and tested by the team, and then tried out all the steps myself. Preparations included sending a doc with requirements to each user and checking with them whether they could follow the instructions.
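You can even automate part of that requirements check. Here’s a minimal sketch of a preflight script you could send to participants; the image name is a placeholder, not the setup I actually used:

```python
# Hypothetical preflight script to send to participants before a session.
# It verifies that Docker is installed, that the daemon responds, and that
# the test image can be pulled. IMAGE is a placeholder, not a real image.
import shutil
import subprocess
import sys

IMAGE = "example/docs-test:latest"  # placeholder test image

def check(cmd, failure_msg):
    try:
        subprocess.run(cmd, check=True, capture_output=True)
    except (FileNotFoundError, subprocess.CalledProcessError):
        sys.exit(f"Preflight failed: {failure_msg}")

if __name__ == "__main__":
    if shutil.which("docker") is None:
        sys.exit("Preflight failed: Docker is not installed.")
    check(["docker", "info"], "the Docker daemon is not running.")
    check(["docker", "pull", IMAGE], f"could not pull {IMAGE}.")
    print("All good! See you at the session.")
```

Asking users to run something like this a day before the session surfaces broken environments while there’s still time to fix them.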

Always have a Plan B ready for those cases where a user can’t complete the preparations. In the user research project I did at Splunk, I asked users to follow the steps in the documentation as if they were entering the commands in the container they couldn’t get to launch. Not the same thing, but still a thing.

Recruit great user candidates

Great questions amount to nothing if directed at the wrong profiles. To extract high-quality data, you need users who are relevant to your product and situation. While current customers might be ideal for testing new features, their experience might bias the results; internal users might be even more unpredictable due to the curse of knowledge. The best pool from which you can draw users is the outside world, but don’t recruit at random.

The number of users you need for user research is a common question. Most experts seem to agree that between 5 and 10 users is enough for most purposes: fewer than 5 might be insufficient, and more than 10 tends to be inefficient and repetitive. Nielsen Norman Group’s answer is 5.
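That recommendation has some math behind it. Nielsen’s problem-discovery formula estimates the share of usability problems that n users uncover as 1 − (1 − p)^n, where p is the probability that a single user hits a given problem (about 31% on average in Nielsen’s data, though it varies by product and task). A quick sketch:

```python
# Nielsen's problem-discovery formula: the expected share of usability
# problems found by n users is 1 - (1 - p)**n, where p is the probability
# that a single user runs into a given problem. Nielsen reported an
# average p of about 0.31, but it varies by product and task.

def problems_found(n_users: int, p: float = 0.31) -> float:
    """Expected proportion of usability problems discovered by n_users."""
    return 1 - (1 - p) ** n_users

if __name__ == "__main__":
    for n in (1, 3, 5, 10):
        print(f"{n:>2} users -> {problems_found(n):.0%} of problems")
    # With p = 0.31, five users already surface ~84% of the problems,
    # which is why 5 is such a popular answer.
```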

Be a gentle host

Users willing to share their thoughts are a treasure; they’re doing you a great favor, and your job is to make them feel comfortable. Introduce the session with a short explanation of why they’ve been summoned, explain what they’ll be asked to do, and remind them that there are no right or wrong answers. Tell users that they can speak their minds freely and that, in fact, constructive criticism is immensely valuable to you.

Users will frequently get stuck during a test or interview. If they can’t answer a question, clarify and reformulate it, or try to understand why they’re unable to answer (you might want to alter the question for future sessions). If they get stuck while trying to accomplish a task, provide some light clues, but not before asking open questions that might help them in their heuristic process (“What else could you do to advance?”). As with interview questions, you don’t want to suggest solutions.

Sometimes, an obstacle proves too hard to conquer. Don’t despair: skip it and move on, because it’s better to have partial data than no data at all. With this in mind, try designing your tests in a modular fashion, so that single steps can be skipped without jeopardizing the session.

Record everything

It goes without saying that you should record everything from the start, and that you must inform the user about it. Some services, like Zoom, generate cloud recordings right after the meeting and come with handy automated transcription tools. Don’t fully trust automatic transcriptions, though: they’re usually riddled with errors.

For long sessions, what I do is listen to the entire recording and transcribe the most significant passages, with timestamps that help me locate each fragment later on. A curated list of sentences and actions is usually more digestible than a full transcript. Plus, powerful statements from users provide quotes for your executive summary.
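If you keep those curated highlights in a structured form, you can also slice them by theme when writing the summary. A small sketch, with invented timestamps, themes, and quotes:

```python
# One way to keep curated highlights machine-readable: a list of
# (timestamp, theme, quote) tuples per session, printed as a digest
# grouped by theme. All entries below are invented placeholders.
from collections import defaultdict

notes = [
    ("00:04:12", "navigation", "I expected the install steps on the first page."),
    ("00:11:47", "terminology", "What does 'collector' mean here? Is it the agent?"),
    ("00:23:05", "navigation", "The sidebar resets every time I click a link."),
]

by_theme = defaultdict(list)
for timestamp, theme, quote in notes:
    by_theme[theme].append(f"[{timestamp}] {quote}")

for theme, quotes in sorted(by_theme.items()):
    print(f"## {theme}")
    print("\n".join(quotes), end="\n\n")
```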

Keep the links to each recording and transcript, as well as a local copy of the videos. Much like lab notebooks, recordings are proof of your research, and you should protect them accordingly.

Create an executive summary

Once you’re done with the sessions and have transcribed the videos, you should end up with a set of materials: recordings, transcripts, and curated notes with timestamps and quotes.

After a user research project, I like to structure my executive summary around themes and actions:

  1. Goals of the study.
  2. The user persona you used.
  3. How you recruited the users.
  4. A description of the UX test.
  5. Issues found during testing (yours or the product’s).
  6. Findings by theme, with user quotes or screenshots.
  7. Conclusions (Quick wins, Potential projects, Recommendations).
  8. Links to resources (recordings, test materials, etc.).

List anything that’s easy to fix in a “quick wins” or “low-hanging fruit” section, and make sure to create tickets or issues in your issue tracker so you don’t forget about them. More complex issues can become epics, projects, or initiatives to discuss with the team.
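If your tracker happens to be GitHub, even that step can be scripted. A rough sketch using the public REST API; the repository name, labels, and findings below are placeholders:

```python
# Hypothetical helper that files "quick wins" as GitHub issues so they
# aren't forgotten. Uses the REST endpoint POST /repos/{owner}/{repo}/issues
# and needs a token with repo scope in the GITHUB_TOKEN environment variable.
import os
import requests

REPO = "your-org/your-docs"  # placeholder repository
TOKEN = os.environ["GITHUB_TOKEN"]

quick_wins = [  # invented example findings
    "Add a copy button to code samples in the install guide",
    "Rename 'collector' to match the label used in the UI",
]

for title in quick_wins:
    resp = requests.post(
        f"https://api.github.com/repos/{REPO}/issues",
        headers={
            "Authorization": f"token {TOKEN}",
            "Accept": "application/vnd.github+json",
        },
        json={"title": title, "labels": ["docs", "quick-win"]},
    )
    resp.raise_for_status()
    print(f"Created issue: {resp.json()['html_url']}")
```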