
Archive for Bad Experiences

Your choice of words matters

Yet another reason why designers and business folk talk past each other: people who are purposefully misleading to get attention.

I came to this presentation from Google on their Quality Score measure because someone referred to it by saying “Quality Score is a measure of user experience”. It obviously piqued my interest, because it is precisely the qualitative character of user experience that makes it so hard to measure.

When you get to slide 4 you realize that Google knows better and defines Quality Score as “an automated measure of how relevant each of your keywords is to your ad text and to a user’s search query.”

It has nothing to do with measuring users’ experiences with anything whatsoever. I realize it sounds naive to be cranky about attention-grabbing people, but it baffles me that people do this: misuse the notion of user experience to mean anything at all that they want. It is such a cowardly move. Be bold, say what you want to say!

More than that, I worry that people just have no clue what they are talking about. Because if that is the case, it is even more worrisome. If people engaged at this level of discussion (i.e.: what measures to use) don’t understand something as basic as what user experience means (at its most basic, what PEOPLE experience when they INTERACT with something), then we’re all very far from making progress in the conversation about measuring success in the context of user experience.

Quant before Qual makes no sense. But it does.

As I continue to explore how designers can make better-informed decisions by leveraging information, number aversion is still issue #1. I talked about this already in my Interaction 10 presentation, but I’ve been digging deeper and have some other thoughts (check my presentation for some base assumptions).

If we agree that quantitative data, specifically the ever-popular web analytics, provides rich detail to tell you WHAT is happening, it is comforting to realize that the type of data gathering we already do – design research – provides the qualitative color to answer WHY those things are happening.

What I am finding, however, is that it is more valuable to START with the quantitative work, get to the WHATs, and ask WHYs based on those findings, rather than trying to figure out WHYs in exploratory mode (even if the WHATs will emerge at one point or another in this quest).

My point is that the exploratory route is not sustainable as an approach. It is inefficient to start digging deeper to answer the WHY questions if you don’t have a baseline of WHATs identified.

The problem is that it is not intuitive for designers to start where they are uncomfortable. We are super comfortable with qualitative approaches – they are our go-to tools because that’s what makes sense for design research. Quantitative research instruments, on the other hand, really help narrow things down, but they require you to understand those pesky numbers in order to a) dig in and get to concrete answers and b) understand what the data is saying so you can ask “why”.

In short, WHATs before WHYs is more efficient than WHYs before WHATs, but that requires designers to start with unfamiliar tools before applying familiar ones. If it were the other way around, I think it would be much easier for designers to bridge both approaches and come out the other end with more useful insights.

In other words, since we don’t particularly feel an attraction to numbers (to put it lightly), why would we start there? It’s such a leap from how we think about problems. I don’t believe designers reject the notion of starting with Quant approaches (WHATs) to expand with Qual approaches (WHYs); it’s just inherently counter-intuitive for them to think that way.

How can I help designers do this when it goes against their nature? That’s what I’m working on right now. More on this later.

Learning how to make UX decisions

I just had a great time recording a Userability Podcast where Jared Spool and Robert Hoekman answer my questions about how UX practitioners can learn to make good decisions about which methods to employ in their work.

[I'll update this with a link once it's published]

My question comes from an old concern: new practitioners are being introduced to User Experience Design and Research practices by being fed a multitude of methods, without much support in deciding which circumstances call for each one.

It is not sufficient just to know how a certain method works. Nor is it sufficient to have used that method once or twice. What is it about our experience as practitioners that makes us better or worse decision makers? How do we choose to dedicate the time and money for an 8-week-long project to produce personas instead of taking a different approach?

What distinguishes the practitioners who not only know methods and how to apply them, but consistently choose the methods that are most effective for a given problem?

A few years ago, Jared himself told me a story about an experiment where two distinct research teams (unaware of each other I believe) were given the exact same research goal and employed the same methodology to achieve it, and came up with different results and findings.

When that sort of thing happens, I wonder: Can we really trust our methods? But more importantly, if we accept that our methods are not really scientific and that we can’t really have a high level of confidence about the results we end up with, how do we choose one over another?

Somehow we just do. But some do better than others. Some do MUCH better than MANY others. If you have the opportunity to work with practitioners with enough experience and knowledge, you see excellent arguments for why to do A versus B under a given set of circumstances. So yes, only experience will help one make better choices, but everyone’s experiences are different.

As a way to educate new practitioners, we coach and mentor by teaching the methods and also giving advice such as “be flexible”, “don’t marry a particular process”, and “figure out what kind of problem you are trying to solve first”. All of that is excellent advice, but it is not strategic enough, and often not practical enough, to really help someone make a decision when faced with a new challenge.

Jared’s opinion is that our field is still too young and we haven’t yet been able to articulate the criteria we use in that decision-making process. I agree. However, it worries me that many think they are advancing in their practice because they know more, when in fact they have just learned new methods and don’t really have the skills to assess the risks and benefits of choosing one over another.

Being a runner gets you to the finish line; knowing which way to run wins the race. I really hope we become better equipped to pass on knowledge about how we make choices and why because, paraphrasing Jared, knowing a lot of recipes a restaurateur does not make.
