UX Research and Usability Testing – Designer vs. Developer #21

JENNY GOVE: Usability
testing is all about like, for me,
exposing the problems and exposing what works
really well for users and understanding why. MUSTAFA: Research is
not really like that. They’re just different tools. It’s like saying, is a hammer
better than a screwdriver? It depends on the thing
you’re trying to find out. [MUSIC PLAYING] So one thing I get asked
a lot about is people like the idea of research. They know it’s important. It’s like flossing. Everyone knows you should do
it, but they don’t know how. How does someone start
getting into doing research, like if they know next
to nothing about it? JENNY GOVE: If you’re
thinking about your product as you’re sort of developing
it at that stage of research, then it’s just great to get
people using it for the tasks that you’re planning it for. So even if that’s
friends, family– even if that’s
people in the office, it’s just great to get
your product exposed to people like that so that
you can watch what happens. And in usability
testing, we’re really looking for the problems
people fall into and what works well for them
and why it works well for them. And so having anybody
go through your product, you will see those things
as they work through. Over time, you’d
want to start testing with a broader range
of users and get out of those kind of
biases that come in with friends and family. But to start off and gain
confidence and start small, it’s a great place to start. MUSTAFA: So when you talk
about usability testing and the biases, is
there anything specific that you’re supposed
to look for? So you’ve designed
your app, you’ve given it to your
friends or family. Are there any, I suppose,
like, golden rules or things to look out for? Or does it not really
work like that? JENNY GOVE: Yeah,
well, what you make sure you do is kind of
set up the tasks that you want to be watching for. So basically, you have
people go through the tasks, and they often speak out loud
as they do it, depending on what
you’re testing for. And so you’ll be identifying
your sort of most critical user journeys usually, and have
them walk through that. And often, it's something
like the language you use (we call that content), the
words you've got on the button, or where you've positioned
something, that just doesn't make sense for most other people. It totally made sense for
you in your design work or your development work, but it
doesn’t work for other people. And so you’re looking
for that, and you’ve got them speaking out
loud so you can really understand kind of
why they’re coming at things in certain ways. And that’s the value
of usability testing, as opposed to looking at
data from logs or something. You’re really
understanding the why: why they get into
certain situations where they can’t move forward
or certain flows work really, really well. MUSTAFA: I mean, is there
like a scale of research? So like usability
testing feels like it’s like the midpoint
of a product line. JENNY GOVE: That’s
exactly it, yeah. MUSTAFA: But is there
like a before and after? Because I know there’s
also the debates of– was it user groups
that you have, where you’re asking
questions or surveys? I mean, how would you say
is like the cycle of doing research? JENNY GOVE: Right. MUSTAFA: What was
the first thing you– would you recommend before
even building the thing, or maybe when you’ve
got early stages? JENNY GOVE: Yeah. So right at the
beginning, we tend to do what we call
foundational research. That’s when we’re really
understanding the domain. We might use, also,
secondary research– see what else other
people have done. But we might also do
foundational research. Often, it involves more sort
of field research techniques– going out to where people are
doing that activity that you’re interested in supporting
in your product. So field visits, doing
observational work– contextual inquiry is
a particular technique where you’re with the person
throughout the day and asking kind of questions
as you go along to really understand why
they do certain things in their environment. So that’s the whole
foundational research stage. At some point, you
might want to understand sort of broad
representations of data. You might do survey work. I’d always recommend doing
some sort of qualitative work to begin with, whether that’s
your contextual inquiry, your interviews. That's especially important for
surveys, so that you've got the range of
answers that you would want to offer for a
particular survey question. If you come back and you've
asked this kind of closed-ended question with different
answer options next to it, but you've missed out a couple of
really significant things, then there are real
problems with your survey. So doing that qualitative
work to begin with is really, really helpful. And then you referred to– I think you were meaning
probably focus groups. MUSTAFA: Yeah. JENNY GOVE: That
sometimes get a bad rap. MUSTAFA: Yeah. JENNY GOVE: So focus groups
aren’t used that much in user research. Occasionally, I’ve used
them, and it’s really been more for my benefit
of finding out a domain. I kind of haven’t used them for
rigorous, representative data. But conducting a
focus group helps me understand like, these are
all the important topics that matter to people
within this domain, and then I can go forward
and do my other techniques. MUSTAFA: So it’s more about
boosting your own knowledge, rather than finding
information out about– JENNY GOVE: That’s
how I’ve used them. MUSTAFA: You have
this kind of always versus mentality in
tech, but I mean, do you think there is a
better way of research with qualitative
or quantitative? Or is that like a pointless
thing to be asking? That it’s really about
using those to just get a better understanding
of the problem that you’re trying to solve? So is one better than the
other, in your opinion, or is that– it doesn’t
really work like that? JENNY GOVE: They’re
actually very complementary and you need both. So when it comes to the stage
of doing usability testing, oftentimes, we might–
when we’re really into sort of initial
use of the product or we’ve got it out
there to beta testers or we’ve already launched
and we’re doing studies then, then you can collect logs data. Logs data is really
very, very useful. You can see where people are
dropping off on the page, for example, and why they’re
not converting, for example. But you can’t necessarily
always understand why. And so partnering that
with usability testing is really valuable so that we
can really understand the why. And that’s– our quantitative
data has huge numbers, and they’re all large numbers–
as large as we can get. And the data is as
representative as we can get. We really are very
concerned about that. In usability testing, it’s
not so much of a requirement to make sure that your
participants are truly representative of the
population because you’re looking for pain
points and you’re trying to understand why it is
that people have these problems so that you understand
their mental model and you avoid designing
in that way again. MUSTAFA: Yeah. I know one criticism
of usability testing, especially like we say,
sometimes, the sweet spot is between five and eight people. JENNY GOVE: Right. MUSTAFA: Is that enough,
like, to really solve the problems you’re
trying to solve, or is it not– usability testing
is not really like that? You’re not comparing
it to, say, surveys, where you’re doing 100 people. Usability tests are about
specific pain points. JENNY GOVE: That’s
exactly right. So in surveys, you have to
be really sure that you're getting as good a
sample as you can– as representative as you
can of the population that’s of interest to you, and there’s
various sampling techniques to do that correctly. Otherwise, there’s going to
be problems with your survey, as we’ve often seen in voting
and polling and that kind of thing. With usability testing,
we’re doing something else quite different, really. We’re using usability
testing to understand what problems people
have with the design as they go through, as they
try and complete their task. And so what happens
is often, you see– you start seeing the same
problems again and again, and you really learn
that, oh, this is kind of a problem for everyone. In fact, there can
be problems that you see right at the beginning
of running your studies, say, after even one or two
people, and you’re like, yes, of course. I should have seen that. Of course people think
of it in that way and of course they’re
dropping off at this point. You don’t need many, many
people to prove that. And so you tend to find
after five or eight, then you’ve seen the
majority of the problems. You’ve probably seen
80% of the problems. If you carry on testing,
you will see more problems over time, but you will have
caught all the main ones in the first five to eight,
and it will be definitely diminishing returns on that. MUSTAFA: So say I’m
designing a product. Is there a point where
I’ll pivot in the research? So say we’ve done two tests
on an app or a website. These really obvious
problems– the UI’s not obvious or whatever– comes up. Do you stop and
say, right, we’re going to adjust the UI
and the design thing and carry on testing again? Or does that muddy the waters? I mean, is it OK to do that? JENNY GOVE: Yeah. No, it’s totally OK to do that. And that’s actually a form of
testing called RITE usability testing (Rapid Iterative Testing and Evaluation), and there, we schedule
two or three people to come in, and then we have a
break like half a day or a day where the team
get together and talk about what they’ve seen
and how they can adjust it. And then you’ll get two or
three more people come in and do the same again,
and iterate on it. And it’s a great idea, really,
because you don’t really want to or need to see
kind of five, six users all fall into the same problem
if you all agree as a team that that problem
exists and needs fixing. There are other problems
where you’re not sure. Somebody goes one way and
somebody else goes another, and it’s really– there might be a few different
mental models out there, and you kind of
have to see how it goes over time to see
what the main issues are and what your team
feels need fixing. But certainly, there’s
another good reason to do it, as well– that those
obvious ones that you really want to fix might be sort of
hiding other problems that wouldn’t be exposed
if you didn’t fix those ones during the
course of the [INAUDIBLE] MUSTAFA: OK. The other thing– I remember you did the
research project called the 25 Principles of App Design. JENNY GOVE: Yeah. MUSTAFA: Sometimes known
as the Gove Principles. JENNY GOVE: Web first. We did web, and then we did
app, and we did a whole set on retail. MUSTAFA: I remember
speaking with you. It’s almost like
my mind was blown when you talked about you do
a pre-study before the study to test to see whether the
study is actually good. I mean, that kind of–
like, of course you do. Could you explain a
bit about like, what– is it called a pre-study
or pilot study? JENNY GOVE: Right, it’s
called a pilot study. And so for usability
testing, we– yeah, exactly like you say. We’re testing the test. We want to make sure that
the different tasks that we ask people are sort
of lining up properly and make sense for them. We want to make sure
that those flows work as we think they should at
the time in the product. It’s absolutely crucial to
do pilot testing beforehand. Yeah, and you can do
that with all techniques of user research. So there are many techniques
that we haven't also talked about. Diary studies
are something that's sometimes done. We can test the questions
that we're asking people in the diary study. Even surveys– it's important to
test your survey instrument out first, and we often do what
we call cognitive pre-testing, which we’ll sit
down with someone and go through the
questions, and make sure the questions mean to them
what we think they mean. MUSTAFA: Yeah. So just even going
back to the beginning. So you want to test. You’ve done some
tests with your family. What's the next thing? I mean, is there– do you really, at
that point, need to get a good researcher to
help you plan these things? Or is this something that
small teams and start-ups can do now, professionally,
to test their products? JENNY GOVE: Yeah. Well, there’s absolutely–
you can do it. Google Ventures puts
out some good guides for doing usability testing
and different sorts of user research testing. So that’s a good place to
look for how to do this from a startup perspective. You know, as a user
researcher, I personally don’t think companies hire
user researchers early enough. In order to get that kind of
foundational research done and really understand
the space, companies really need to be thinking about doing
that hire earlier than we’re doing it now. What tends to happen in
companies is that people hire– eventually, they realize
that they can’t do design without a designer, so
they hire a designer, and then the designer says,
well, it’s hard for me to do design without
the research findings and to really understand
the context I’m working in. So that’s the kind of pattern
it tends to happen in. It would be nice,
from my perspective, if it worked the
other way around, but that’s not to say that
people shouldn’t be doing user testing themselves. Obviously, you can get better. It’s not particularly
advocated that you should work with family and friends. It’s just a good
way to get started, and you will find issues
with your product. But it’s better to try– although you’re not aiming
for, in five to eight people, a representative sample
of the population, you do want to try and get
the same sort of people that you are aiming
your product for. So if you’re creating a
music app for teenagers, then it would be great
to test on teenagers that are the sort of target
users for your product. MUSTAFA: It's also important to
test for demographic, as well, like in terms of the
way users will behave in India will be very different
to the way they behave in America, right? JENNY GOVE: Right. And they’re under different
sort of constraints and different contexts
and different conditions with the kinds of
phones they use, the different
connectivity they have– all those kind of things. MUSTAFA: Cultural
things, which may be like how they respond
to UI and whatever. JENNY GOVE: Yeah. MUSTAFA: And is it
just basically you need to get out in the field? I mean, I think
one thing people do is they test too much in
front of their computer with their amazing MacBook
or amazing Wi-Fi connection. But how do you really empathize? Because I think research
is more about empathizing with the person you’re
trying to design for, right? JENNY GOVE: Right, absolutely. And so I think we need to take
account of the technological constraints that we
just talked about, making sure that we’re– if
we’re aiming for those kind of markets, that
we’re potentially– I know in India, we have a lot
better connectivity nowadays, but in certain
parts of the world, there’s a lot of the world
that’s still on 2G connections. So taking those kind of
considerations into account is important. But also, you’re right that
the cultural and different contextual factors can
be really important. MUSTAFA: So what if you’re
testing for users’ preferences over two designs? So if you’ve got
two separate things, how do you know you’re
testing for the right things or the person’s opinion and
what preferences they have? JENNY GOVE: Yeah, that’s
a really tricky one. I think that I’m not greatly
in favor of testing– using usability
testing for preferences because it’s such
a small sample. Potentially, the
sample might be biased. If you asked with
eight different people, you might come up with
different preferences, especially if you were
just adding up the numbers. I’m really not in favor of that
because it has those problems. You know, usability testing
isn’t really for that purpose. So I’m not against asking
about preferences in order to understand why people
have those preferences. I think that can be really
useful in usability studies. So understanding that they
need particular things, say, in Maps when
they’re navigating. They need particular landmarks. MUSTAFA: Yeah. JENNY GOVE: That’s really,
really interesting. But just trying to collect
raw numbers about whether they prefer design A rather than design B– MUSTAFA: Well, like they
preferred blue to red. I mean, is that– and
people lie, right? When you ask them
questions, they might tell you what they
think you want to know. JENNY GOVE: Absolutely. So this is why I’m
much more into, like, understanding them
completing a task and understanding, like,
what works for them and what doesn’t work for them
about completing that task. It’s really all about getting
that deeper understanding in usability studies. So yeah. Don’t add up the
numbers for preferences. But use that to understand
why the particular design is preferred over
another one and what we can do for future designs to
make it more useful for people. MUSTAFA: It's really about understanding
the context, rather than the personal opinion
of yes, well– JENNY GOVE: That’s right. Yeah. That’s right, and understanding
why their context leads to that view. SPEAKER 3: I try, as much as
possible, to avoid black magic. And whenever I’m reviewing any
code that any of the designers on my team are writing, we try
and avoid anything that’s– maybe it’s a little hack and
it makes it slightly more performant, but
the truth is if we want to evolve the
material design system, we need to be able to
build on top of the code, and each layer of
that code matters.
