Welcome to the latest instalment of my series on writing as user experience design. I’m discussing key principles of user-centered design with the aim of applying these ideas to writers’ contributions to UX and UI.
Let’s talk about testing. As UX writers, we’re well aware that we work in a space obsessed with evaluating usability. And we love it! We’re in this for the users, after all. We try to understand their every value, want, and need so we can craft perfect pieces of UX copy and microcopy.
Yet our niche of the UX world is still new. While UX designers and researchers have a marvellous array of methods at their disposal, I’m often left wondering how UX writers can test our words. How do we narrow in on our part of the design process? What testing methods are available to us right now?
I don’t just mean readability scores. The Flesch-Kincaid index and similar readability metrics can be a great baseline for user-friendly text, but they aren’t perfect. Nor can the algorithms powering these scores tell if your words are striking the right tone. As anyone writing in a particular brand voice can attest, every word needs to reflect the persona encoded in the internal style guide. Flesch-Kincaid can’t help you with that.
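For the curious, the grade-level formula itself is simple enough to sketch. This is a rough illustration, not a production readability checker: the syllable counter below is a naive vowel-group heuristic, and real tools use more careful tokenization.

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count runs of consecutive vowels (at least 1 per word).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    # Split into sentences and words with simple punctuation rules.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # Standard Flesch-Kincaid grade-level formula.
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59

print(flesch_kincaid_grade("The cat sat on the mat. It was happy."))
```

Notice what the formula measures: sentence length and syllable count, nothing else. A sentence can score as “easy” while being off-brand, confusing, or flat-out wrong.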
(I promise I’m not trying to be too harsh against readability scores. I use Hemingway all the time to keep my writing concise!)
Turns out, we aren’t starting from scratch when it comes to testing content and microcopy. Here are a few methods from around the web to help you start building testing into your process!
A note on participants: When testing comprehension, it’s important to recruit participants representative of your actual users. Proxy users aren’t enough. Running cloze tests on stakeholders from the marketing department or surveying engineers walking down the hallway won’t work here. These insiders bring company-specific assumptions and knowledge and can’t provide an accurate measure of comprehensibility.
Evaluate users’ comprehension with a cloze test, sometimes called a cloze deletion test. Based on the Gestalt theory of mental self-organization (“cloze” is related to “closure,” the mechanism by which individuals perceive an incomplete object as complete), this method was first proposed by W. L. Taylor in 1953.
Cloze tests are great for figuring out if your intended audience actually understands what you’ve written.
How to run a cloze test:
- Select a piece of text and replace every Nth word with blanks. A higher N value makes the test easier. In typical tests, N = 6. Of course, UX writers working with microcopy will need to have a much smaller N!
- Ask each participant to read the altered text and fill in the blanks with their best guesses about the missing words.
- Calculate the score: the percentage of correctly guessed words. Synonyms and misspellings are allowed (in fact, if a synonym recurs, maybe that’s the word you’ve been looking for). If your participants score 60% or higher on average, your text is reasonably comprehensible for this user profile.
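The steps above can be sketched in code. This is a minimal illustration with exact-match scoring only; in a real session you’d accept synonyms and misspellings by hand, as noted above.

```python
def make_cloze(text: str, n: int = 6) -> tuple[str, list[str]]:
    """Replace every nth word with a blank; return the gapped text and the answer key."""
    words = text.split()
    answers = []
    for i in range(n - 1, len(words), n):
        answers.append(words[i])
        words[i] = "_____"
    return " ".join(words), answers

def score_cloze(answer_key: list[str], guesses: list[str]) -> float:
    """Percentage of blanks guessed correctly (case-insensitive exact match)."""
    correct = sum(a.lower() == g.lower() for a, g in zip(answer_key, guesses))
    return 100 * correct / len(answer_key)

# Hypothetical microcopy example with a small n, since the string is short.
gapped, key = make_cloze("Enter your email address to receive a password reset link", n=3)
print(gapped)
```

Average the `score_cloze` results across participants; 60% or higher suggests the text is reasonably comprehensible for that user profile.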
Highlighter testing reveals how individual words and phrases make users feel about your product, one color-coded value pair at a time.

How to run highlighter testing:
- Select a piece of text and determine what you’re testing for. Code a set of highlighters according to these factors. In the linked example, participants highlighted words that made them feel more confident about the service in green and words that made them feel less confident about the service in red.
- Provide participants with a printout of the text, the coded highlighters, and clear instructions about marking the page. If you’re testing for more than one value pair, avoid confusion by providing the user with a clean printout and new instructions for each pair.
- Calculate the results. The colors provide an easily assessed visual indicator of what text works and what might need rewriting. If you’re a metrics person, you can also calculate statistics based on counts of color-coded lines.
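If you are that metrics person, the tallying is straightforward to automate once you’ve transcribed the marked-up pages. A hypothetical sketch — the phrases, colors, and session data are made up:

```python
from collections import Counter

def tally_highlights(sessions: list[dict[str, str]]) -> dict[str, Counter]:
    """Aggregate highlighter marks: each session maps a highlighted phrase to a color."""
    totals: dict[str, Counter] = {}
    for marks in sessions:
        for phrase, color in marks.items():
            totals.setdefault(phrase, Counter())[color] += 1
    return totals

# Hypothetical transcribed sessions: green = more confident, red = less confident.
sessions = [
    {"No hidden fees": "green", "Premium tier": "red"},
    {"No hidden fees": "green", "Premium tier": "green"},
]
totals = tally_highlights(sessions)
print(totals["Premium tier"])
```

Phrases with a lopsided red count are your rewriting candidates.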
Think back to your high school English class. Do you remember having weekly quizzes to make sure you were reading and understanding the assigned books? Your teacher was on to something.
Comprehension surveys are basically quizzes that test whether a user understands a piece of content. You can test long pieces, short strings, and everything in between.
How to run a comprehension survey:
- Determine the text you’d like to test. You can present it all at once, line-by-line, or a mix of both.
- Develop questions. A List Apart has great tips for writing high quality questions, including stating questions in positive form, providing only one correct or clearly best answer in a multiple choice list, and providing incorrect but plausible distractor answers. You might want to add an option for “I don’t know,” as random guessing isn’t helpful here.
- Provide your participant with the content and questionnaire. This isn’t the SAT, so keep things short — twenty minutes is more than enough time to gauge your user’s comprehension.
- Calculate the scores. If you use a survey platform like SurveyMonkey or a UX testing tool such as Optimal Workshop, you’ll get a handy score report automatically!
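If your tool doesn’t score for you, the arithmetic is easy to sketch by hand. A minimal example with hypothetical question IDs and answers; “I don’t know” responses are tracked separately, since they signal confusion rather than a wrong guess:

```python
def score_survey(answer_key: dict[str, str], responses: dict[str, str]) -> dict[str, float]:
    """Score a comprehension quiz: percent correct and percent 'I don't know'."""
    total = len(answer_key)
    correct = sum(responses.get(q) == a for q, a in answer_key.items())
    unsure = sum(responses.get(q) == "I don't know" for q in answer_key)
    return {"percent_correct": 100 * correct / total,
            "percent_unsure": 100 * unsure / total}

# Hypothetical answer key and one participant's responses.
key = {"q1": "A confirmation email is sent", "q2": "Within 24 hours"}
result = score_survey(key, {"q1": "A confirmation email is sent", "q2": "I don't know"})
print(result)
```

A high “unsure” rate is worth reporting alongside the raw score: it tells you which parts of the content left users with no mental model at all.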
I hope these testing methods are useful for you. If you’re a UX writer or content writer and have other ideas about measuring the effectiveness of your words, let the world know in the comments!
This article was originally published on Annie’s Medium page.