The other day, I saw a bad prototype. It came up during a design critique. We were going to start user testing that very prototype the next day. I wrote down my notes and mentioned them to the designer.
The problems I saw were:
- Typos.
- Simple design flaws (different colors for the same hierarchical information).
- Information that made no sense in the context.
For me, these were easy things to fix. I wanted to give the designer this feedback for two reasons:
- To make the design cleaner, and to encourage attention to detail.
- To ensure we were getting the right feedback from the users.
The second reason is more important to me. When I present a prototype to users, I would rather not spend the precious time we have with them confused about easily changed information, or about information that doesn't matter in the test. It can be a big waste of time, and participants can get stuck on these small inconsistencies.
What happened in this case? Unfortunately, the designs didn't get changed in time. We tested the older designs with the issues I mentioned. While we did get some great feedback on the user experience, some time was wasted on those smaller inconsistencies. I had to explain them away as small mistakes that don't matter. It made the interview feel less productive and less prepared. Almost every user noticed the issues I had asked to be changed.
Regardless, we still received great information from users, but it felt clunky to explain away the prototype. I know prototypes are supposed to be far from perfect, but this felt beyond the usual prototype spiel I give.
So, my biggest question in this case: is it okay for the researcher to request the changes I did? Or is it on the researcher to conduct the interview in a manner where these inconsistencies don't matter to the user?
When should user researchers give feedback?
This particular case made me question when, and at what level, user researchers should give feedback to their teams. I generally give feedback during the following opportunities:
- During the idea/concept phase
- During the prototype phase
- After a design is completed
- Synthesis from research sessions
I'm not entirely sure this list covers every opportunity, but it was my starting point. I don't want designers or other team members to think I am claiming to be an expert in UI/UX (or any other field, for that matter) or that I am overstepping boundaries.
Here is how I give feedback at each of these steps:
- During the idea/concept phase. I do my best to ensure teams come to me with ideas very early in the development process so we are able to test the viability of the idea with users. When they come to me, they generally bring solutions rather than problems. I ask them to articulate the problem they are trying to better understand and the questions they would ask to find out more. Some teams have come to me with a fully developed idea that I knew would not stick with users or solve any pain points. In one such case, I was new to the company, so I was forced to test it with users. It proved the point that it is important to do some upfront user testing before we arrive at fully built solutions based on assumptions. Now, when people come to me with solutions, I request that they go back to the drawing board and start with a user problem and the questions they would like to ask.
- During the prototype phase. This is similar to the example I gave above. I try to get a look at all prototypes before we put them in front of users. I will have the designer walk me through each screen, and I will point out any small inconsistencies. This gives the designer a second pair of eyes on the designs and helps ensure the design and flow make sense. Prototypes can still be "messy," as in low-fidelity, but they need to make sense. We don't want to waste time having users comment on small things that are insignificant to the usability test.
- After a design is completed. This is where the whole feedback concept starts to get tricky for me. Once a design has completed user testing and is off into the wild world of being "live," what do we do? Since it was already user-tested, do we have the right to give additional feedback? At this stage, I will wait a bit and then follow up with any feedback we are receiving on the particular design (or feature). If the design did not go through user testing, I will test it at this stage and do a heuristic evaluation to give some additional feedback to the designer.
- Synthesis from research sessions. And finally, synthesis. For some, this may be the most straightforward, but for me it can get complex. We all talk about how synthesis is one of the most important parts of the user research role, but it is rarely discussed in full. As I currently understand it, synthesis works as follows: we digest and analyze the research sessions, and then give "actionable recommendations" on what should come next. What does this mean? Are we telling people what to do? What are we recommending? This brings me to my next point…
At what level should we give feedback?
Since the first three opportunities for feedback are much more straightforward, I will focus on synthesis for this particular case. I have the following questions when it comes to giving feedback (specifically targeting synthesis):
- What should we be producing? Action items/recommendations?
- What are the action items/recommendations?
- How far should we go with action items/recommendations?
I truly believe user research isn't about giving people answers; it is about giving them tools to better contextualize something. We aren't meant to "tell people what to do." So, with this in mind, what are we supposed to be writing in terms of recommendations?
When I tested the aforementioned solution (which was a huge feature), it was glaringly obvious our users would not find it helpful or useful, and they would not pay for it. There were a few aspects of it they liked, but, by and large, it wasn't sticking with them. They simply were not interested and would rather have the company work on other features or improvements.
When I received those results, I wasn't entirely sure what to do. The solution was already halfway built, and the team had spent a good amount of time working on it. I decided to give my honest recommendation: stop working on this immediately and pivot to other, more impactful areas. I gave some ideas on how we could change the concept to better suit our users, but noted it should not be a high priority. I was lucky to have enough people on my side that I was met with little resistance.
However, when it comes to these tests, I always wonder at what level we should give this feedback and these recommendations. I often state the recommendations as problems instead of solutions:
"The user is unable to locate the 'pay' button or move to the next step" versus "move the 'pay' button higher on the page or make it a bolder color."
This still gives someone enough flexibility to make a better decision, without me telling them exactly what to do. However, as I mentioned, I also made an honest recommendation not to continue with a product. I'm not entirely sure where the balance is, but I am sure there is one. And I would still love to know…
How do you give feedback as a user researcher?
. . .
If you liked this article, you may also find these interesting:
- Burnout as a User Researcher
- User Research Isnât Black & White
- Benefits of Internal User Research
- How to Write a Generative Research Guide
If you are interested, please join the User Research Academy Slack Community for more updates, postings, and Q&A sessions :)
. . .
Originally posted on Nikki's Medium.