PeakUniversity: The Art of User Testing

Recently, Allison Reitz, an Optimization Specialist at PeakActivity, gave a PeakUniversity presentation to our employees on “The Art of User Testing.” We caught up with her afterward to gain more insight into what User Testing is, how and when it can be effectively implemented, and the unique challenges it presents. 

Q: What is User Testing?

A: A crucial component of the UX design and optimization process, User Testing is an early evaluation of proposed changes to a website’s design, content, features, and functionality, based on research with your customer base. A website is a living, breathing entity, and there will always be improvements to be made. Those improvements can be informed by User Testing.

Q: How is it different from other tests, such as A/B Testing?

A: A/B Testing focuses on quantitative analysis. User Testing primarily focuses on qualitative feedback. In other words, A/B tests and website analytics help you gather data about how users behave. User Testing helps answer why they behave that way.

Q: How many users do you need for User Testing?

A: When setting up a user test, it’s advisable to start with a small group of people. Typically, you will only need to speak to five respondents to identify most major issues that your website or mobile application may have. To find less common issues, you can recruit more respondents, but be careful not to cast the net too wide. Wading through 50-100+ user test responses can bog you down and limit your ability to respond quickly to high-impact problem areas.
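
To make the math behind that rule of thumb concrete, here is a minimal Python sketch (not part of Allison's talk) using the widely cited problem-discovery formula 1 − (1 − p)^n, where p is the chance that any single respondent hits a given issue. The p = 0.31 value is an assumption borrowed from classic usability research, not a PeakActivity figure:

```python
# Rough illustration: probability that at least one of n respondents
# encounters a given usability issue, using the common rule-of-thumb
# formula 1 - (1 - p)^n. The per-user detection rate p = 0.31 is an
# assumed value from widely cited usability research, not measured data.

def discovery_rate(n_users: int, p: float = 0.31) -> float:
    """Chance that an issue is seen by at least one of n_users."""
    return 1 - (1 - p) ** n_users

for n in (1, 3, 5, 10, 15):
    print(f"{n:>2} users -> ~{discovery_rate(n):.0%} of issues surfaced")
```

Under those assumptions, five respondents surface roughly 85% of issues, which is why the payoff from recruiting many more testers diminishes quickly.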

Q: What metrics will User Testing gather?

A: User tests specifically focus on qualitative data, such as: Why do users engage with a feature a certain way? What are their actual questions and concerns as they move through a checkout funnel? Does their understanding of a website page, feature, or product match what you intended, or are they misunderstanding something that causes later confusion?

Given that most user tests gather qualitative data, rather than quantitative, the metrics will be fundamentally different from those measured by A/B tests. But you can still compile your users’ responses in a way that provides quantitative guidance. For example, some User Testing platforms can generate heat maps that show where respondents scrolled or clicked most often, and what percent of respondents interacted in that way. 

Additionally, you can group similar responses together for rough estimates of how many users may be impacted by an issue. Say, 1 out of 5 users said they couldn’t locate the search menu, but 4 out of 5 had trouble entering their billing information. Maybe you prioritize fixing the issue that affected 80% of your respondents first, before addressing the issue that only 20% of respondents mentioned.
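
To see that triage step concretely, here is a minimal sketch with hypothetical issue tags matching the example above; in a real study, the tags would come from reviewing each respondent's recorded session or transcript:

```python
from collections import Counter

# Hypothetical tags assigned to each respondent's feedback; in practice
# these would come from reviewing the recorded sessions or transcripts.
responses = [
    ["billing_info_confusing"],
    ["billing_info_confusing", "search_hard_to_find"],
    ["billing_info_confusing"],
    ["billing_info_confusing"],
    [],  # this respondent completed the tasks without issues
]

issue_counts = Counter(tag for tags in responses for tag in tags)
total = len(responses)

# Rank issues by the share of respondents affected, highest first.
for issue, count in issue_counts.most_common():
    print(f"{issue}: {count}/{total} respondents ({count / total:.0%})")
```

Sorting by the share of respondents affected makes the 80%-versus-20% prioritization call obvious at a glance.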

Q: When should you deploy User Testing?

A: User Testing can provide valuable insight in situations where website traffic is too low for an A/B test to produce statistically significant results, or where you need meaningful results faster than an A/B test can deliver them. Or perhaps you’ve already identified an issue on your site, so you know the “how many” and the “how much,” but in order to resolve it, you need to better understand the “why.” User Testing can also be quite advantageous when you need up-front guidance on, or confirmation of, an idea before investing in a major development or design project.

Q: Overall, what challenges are unique to User Testing?

A: In User Testing, you’re dropping your respondents into a point in your website funnel, or you’re showing them a mockup or clickable prototype, and you’re asking, “What would you do if…?” Since your respondents are using their imagination to some degree, they may respond differently than how they would actually behave in real life.

To help combat this, it’s best to first ask a few questions that establish the user’s expectations: What do they expect to see on a specific page or step of the website? How do they expect a specific feature to work? Then show them the page or feature you want them to evaluate, and compare how they react to it, and interact with it, against the expectations they set at the outset.

It’s also helpful to keep in mind that how you ask a question can change the answer, or lead your respondents to answer a certain way. 

For example, “Why?” questions tend to trigger emotion-based responses — whether they’re defensive, or frustrated, or overly confident. The goal of User Testing, however, is to gather a more introspective response. So, instead of asking “Why did you click there?”, you might instead say, “What made you decide to click there?”

Even a question as simple and common as “How do you like this feature?” presumes that they do like it. Instead, ask users to respond on a scale with a defined low end and high end, such as: “On a scale of 1 (very difficult to use) to 5 (very easy to use), how would you rate this feature?”

Meet the Presenter:

Allison Reitz, born and raised in the Northeast, attended the University of Massachusetts Dartmouth, where she earned her degree in English – Writing, Communication, and Rhetoric. After a couple of years as a journalist, Allison switched fields and began working in user experience research and front-end development. She discovered a love of digging through analytics and observing user behavior to find answers to otherwise difficult questions. Speaking about her role at PeakActivity, she says, “I get to do what I love at a company that I’m growing to love more by the day.”

About PeakUniversity

PeakUniversity is a series of peer-to-peer, TED Talk-style presentations given by passionate subject matter experts to expand knowledge of, and generate interest in, each session’s topic.

Want to learn more about User Testing and how it can help your business?

Fill out the form below or visit our Web & eCommerce page for more information.