-- Main.llyons - 20 Jan 2010

Project 1

My group has decided to do a "where are the user design flaws" study of Application Y.2.

I know you asked some questions about usability vs learnability.
I understand the difference to be: learnability is when users are totally unfamiliar with the application
- then we would be testing how they learn it, for the first time

but that's not the case, so we're testing usability - how easy is it to use, or better yet, where is it NOT easy to use
- based on a group of users who are already familiar with the previous version, Application Y.1.

We plan to ask subjects to complete task A. The thing is, I know this is the design flaw. Unless I show them how to do it, I can almost guarantee that they won't be able to figure it out.
I see that you've written in the margins - "if you're testing usability, give them an overview"

I'm not sure how I should do that - if I show them how to use the tool, am I really testing usability?
Should I time them to see how long it takes in Application Y.1 vs Y.2 - without showing them how to do it in Y.2?

Seems like I might benefit from a Talk-Aloud, but I'm not sure what to do with that data?

If I had them do a survey - does that happen AFTER they try to list all students in their course?
Like, what did you think of the new design?

I guess I could have them do it for me once in Y.1 - and time them
-then show them (on paper?) step by step how to do it in Y.2
-then see if they can do it - and time them.
-then ask them to fill out a survey?

I guess my research question is "where are the design flaws in Y.2?" - but if I'm only measuring one thing,
is that enough?
I understand the difference to be: learnability is when users are totally unfamiliar with Application Y.2
- then we would be testing how they learn it, for the first time

Yes. A few thoughts on a learnability approach. It may not do much for you other than confirm what you already know - that a certain feature is hard to find. Which is ok - especially if what you want to use the data for is to convince your coworkers that a change is needed!

But there are a few details to consider. For example, when I first tried Y.1, there were a few features I literally had no idea how to find. I knew they were there (or should be there) but if someone was timing me the value would have been "infinity". So you might want to think of a reasonable benchmark before you decide to stop them - e.g., "can they find the option within 5 minutes?" just so they're not there forever. You could then find out how many people "run out the clock," and you could make it even a little more interesting by also measuring their experience with Y.1 (e.g., how many tasks they have created/managed with the system, how many years they have used it, how often they use it, etc.) to see if experience helps or hurts the ability to find the button (wouldn't it be interesting if novices had an easier time!).

Still, though - this is learnability. Why? Because those Y.1 features I mentioned having a hard time finding? Now I know where they are, so I can still get to them relatively quickly. Because Y is intended for repeated, regular use - it's not a one-time-use interface, like a web purchasing page may be - "usability" must really be considered in the context of the most usual form of use - namely, regular use. So, an option that may have been impossible for me to find - one that posed a learnability challenge - is now fast and easy for me to get to, so it might actually be considered perfectly usable. On the other hand, if an option is in an inconvenient location - one that slows me down to get to or to use even if I already know it is there - then that is a usability problem. Does that make the distinction clearer?
That's why I was suggesting giving a tour if what you're really interested in is the usability - if they know where everything is and it still takes them longer to get there than under the prior version, that's a usability problem (of course - you'd probably have to ask them to repeat the task several times until their completion time leveled off to get a more accurate measure of the usability). I hope that makes sense... let me know if not!
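The "run out the clock" tally described above can be sketched in a few lines. This is only an illustration - all the data, the 300-second cutoff, and the 3-year experience split are invented, not drawn from any real study:

```python
# Sketch: summarizing "find the option" times with a cutoff (censored data).
# All numbers below are hypothetical.

CUTOFF = 300  # seconds; participants are stopped at 5 minutes

# (seconds_to_find, years_of_Y1_experience); None = never found it
trials = [(42, 4), (None, 1), (180, 6), (None, 7), (95, 2), (310, 3)]

# Treat "stopped at the cutoff" and "never found it" the same way
timed_out = [t for t in trials if t[0] is None or t[0] >= CUTOFF]
finished = [t for t in trials if t[0] is not None and t[0] < CUTOFF]

print(f"{len(timed_out)} of {len(trials)} ran out the clock")
mean_time = sum(t[0] for t in finished) / len(finished)
print(f"mean time among finishers: {mean_time:.0f} s")

# Does experience help or hurt? Compare timeout rates for the two groups.
novices = [t for t in trials if t[1] <= 3]
veterans = [t for t in trials if t[1] > 3]
for label, group in [("novices", novices), ("veterans", veterans)]:
    n_out = sum(1 for t in group if t[0] is None or t[0] >= CUTOFF)
    print(f"{label}: {n_out}/{len(group)} timed out")
```

Reporting the timeout count separately from the mean matters: averaging in an "infinity" (or the cutoff value) would quietly distort the timing numbers.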

Seems like I might benefit from a Talk-Aloud, but I'm not sure what to do with that data?

This is a good idea if you are less interested in proving that there is a problem (see above) and are instead more interested in fixing the problem. Always a tough call - I mean, as a Y "expert" in your own right, you have a strong intuition that there is a problem - you're using your own expert evaluation technique! wink So unless you really need to justify that intuition to someone (a boss, a team), is it worth the time & effort to time how long (if ever) people take to find the button? Or would it be more useful to discover where experienced Y.1 users might expect to see that button (or buttons, or link - not sure what the actual UI components are, but I hope you follow the line of the argument)? That's where talk-aloud comes in - ask them to do the task, and ask them to explain where they will look for the option and why. If you get comments like:

"I'm looking in the sidebar menu for it - don't see it. OK, now I'm looking under the "X" link on the main page, huh, not there either. Going back"

As a moderator, you could take notes ("sidebar", "X link", ...) while listening to the user.

THEN, after the session, you can use those notes to ask them follow-ups:

"I noticed you first looked for the option in the sidebar - why was that the first place you looked?"
"Then you tried the "X" link. Why did you expect the option to be on that page?"

Hopefully, if you run enough subjects, you'll see patterns emerging - like a tendency to check the sidebar pretty early on (as either a first or second step). This might suggest that the option should be placed in that location.
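If the moderator's notes are recorded per subject, spotting those patterns can be as simple as a frequency count. A sketch - the location names and search paths below are invented for illustration:

```python
from collections import Counter

# Hypothetical talk-aloud notes: where each subject looked, in order.
subjects = [
    ["sidebar", "X link", "settings"],
    ["X link", "sidebar"],
    ["sidebar", "settings"],
    ["settings", "sidebar", "X link"],
]

# How often does each location appear as a first OR second step?
early_looks = Counter()
for path in subjects:
    early_looks.update(path[:2])

for location, count in early_looks.most_common():
    print(f"{location}: checked early by {count} of {len(subjects)} subjects")
```

With enough subjects, a location that dominates the early steps (here, the sidebar) is a strong candidate for where the option should live.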

If I had them do a survey - does that happen AFTER they try to list all students in their course?
Like, what did you think of the new design?

Yeah, sure - after would be better. You might want to focus the questions if it's a written survey - think about how to cast them as Likert items or multiple-choice questions.
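For Likert items, the usual summaries are a median and a frequency table rather than a mean, since the responses are ordinal. A sketch - the question wording and responses here are made up:

```python
from collections import Counter
from statistics import median

# "The new design made the task easier to complete."
# 1 = strongly disagree ... 5 = strongly agree (hypothetical responses)
responses = [4, 2, 5, 4, 3, 4, 2, 5]

print("median:", median(responses))            # robust summary for ordinal data
print("distribution:", sorted(Counter(responses).items()))
```

The distribution is often more informative than the single summary number - a bimodal split (some love it, some hate it) disappears entirely in a median or mean.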

I guess I could have them do it for me once in Y.1 - and time them
-then show them (on paper?) step by step how to do it in Y.2
-then see if they can do it - and time them.
-then ask them to fill out a survey?

You could show them how to do it with the actual interface. Remember - if you're looking at usability, the assumption is that no matter how counterintuitive the layout, the user will know where the option is; you're trying to gauge how efficiently they can use it. If you're interested in the learnability issue - see my first paragraph above.
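If you do ask users to repeat the task until their completion time levels off (as suggested earlier), you need a stopping rule. A minimal sketch, assuming a simple relative-change threshold - the 10% figure is an arbitrary choice for illustration, not a standard:

```python
# Sketch: deciding when a participant's completion time has "leveled off"
# across repeated trials. The 10% tolerance is hypothetical.

def leveled_off(times, tolerance=0.10):
    """True once the last two trial times differ by less than `tolerance`,
    relative to the earlier of the two."""
    if len(times) < 2:
        return False
    prev, last = times[-2], times[-1]
    return abs(last - prev) / prev < tolerance

trials = [88.0, 61.0, 45.0, 43.0]   # hypothetical seconds per repetition
print(leveled_off(trials))          # |43-45|/45 is ~4.4% -> True
```

Once the times plateau, the final trials approximate the "regular use" speed that usability (as opposed to learnability) is really about.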

Assignment 1

I'll post some of the questions we've gotten, & responses:

Question: Now I have a quick question on homework 1. You said that we can pick any interface we like. So I picked a website as an interface - an existing website, www.mapquest.com. Then I based a research question around that website, like "How easily or quickly can the user start their location search?" Am I doing the assignment correctly? Is this how I can go about doing the entire assignment? Any feedback would be great.

Answer: Mapquest is a great idea! I'd suggest a slightly different question, though - I'm guessing you want to know how long it takes to get through a search. This is ok, but think about what that means - it might not mean anything at all. For example: if you find out that the answer is 45 seconds on average, what does that tell you? What does that suggest to you as an HCI designer?

Now, if your goal is to meet a benchmark - say, "everyone can find their location within", umm, let's say, "35 seconds" - then you'd know something: you'd know that with an average of 45 most people don't meet your benchmark. But choosing a benchmark is not an arbitrary thing, and it is really hard for HCI researchers to do. I.e., why choose 35 seconds? There needs to be a rationale.

That said, many companies have an easy time creating rationales, because they're trying to make products that compete in the marketplace. If I'm trying to sell you, a grocery store owner, a new self-checkout system, I probably need to prove to you that it's as fast or faster than having a real human cashier do the transaction. I'd ask you what the average checkout speed is, averaged over the # of grocery items purchased, and if you didn't know, I'd measure it myself, and use that figure as my benchmark. Make a bit more sense, I hope? But since you aren't really trying to sell Mapquest to anyone, it's harder for you to find a benchmark that would mean anything.

Another idea: use a comparison. Compare the time it takes to get directions using Mapquest versus the time it takes to get the same directions with Google Maps. Then Google Maps is sort of like your benchmark. Or instead of measuring time you could measure the # of steps.

The reading assignments for the past few days should really help you figure this out - they are easy to read and have lots of examples. Good luck, and I hope that this helps.
Question: How should we form a participant group? I said my requirement would be a group of people who have knowledge of Internet Explorer, who travel a lot, and who come from various age groups. My participants are not limited to these but would definitely include these and more - say around 10-15 participants.

Answer: It depends on what your question is. Given what I wrote you just a moment ago: if you compare Mapquest against Google Maps and you're timing people, you need to think about whether or not your users have experience using either one, because if they are really, really used to one interface (say, Mapquest), it will seem like that interface is better even though that might not be true - the only reason they are faster is that they have a lot of practice. So you would need to think about how you could select people with certain experience levels or, if that is too impractical, how you could account for experience when you analyze your data. For example, you could ask people to rate their levels of expertise with the two interfaces, or ask them how often they use each one, and use that information to help understand where you might be getting funny dips or bumps in your timing data.
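One way to "account for experience when you analyze your data," as suggested above, is to compare the two interfaces within matched expertise bands rather than overall. A sketch - the interfaces are real sites, but every number, expertise rating, and band boundary below is invented:

```python
from statistics import mean

# Hypothetical trials: (interface, self-rated expertise 1-5, seconds to
# complete the directions task)
data = [
    ("mapquest", 5, 30), ("mapquest", 1, 70), ("mapquest", 4, 35),
    ("gmaps",    1, 55), ("gmaps",    5, 28), ("gmaps",    2, 50),
]

def summarize(interface, lo, hi):
    """Mean completion time for one interface within an expertise band."""
    times = [t for ui, exp, t in data if ui == interface and lo <= exp <= hi]
    return mean(times) if times else None

# Compare within expertise bands so heavy practice with one interface
# doesn't masquerade as a better design.
for band, (lo, hi) in [("novice (1-2)", (1, 2)), ("expert (4-5)", (4, 5))]:
    mq = summarize("mapquest", lo, hi)
    gm = summarize("gmaps", lo, hi)
    print(f"{band}: mapquest={mq}s, gmaps={gm}s")
```

If one interface only wins among its own experts, that is a practice effect, not a design advantage - which is exactly the confound the answer above warns about.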
Question: When we talk about an interface, does it only have to be software-based? For example, what I was planning on writing about was creating a machine in Metra train stations where passengers have to put their tickets into that machine, which checks their tickets, and only then can they get on the train - rather than an actual person checking their tickets during the ride. Almost like how the CTA does it. I don't know if I am making sense. Will this work? If not, can you give me more insights about how to pick a topic?
Answer: Because this is a CS class, you should probably stick to an interface that has a software component, even if the user is mostly interacting with physical items. Although there is a computerized component to the CTA ticket-taking machines, the interaction with that component is the least of a usability engineer's problems - for example, the size and shape of the card, the necessary orientation of the card, and the design of the rotating metal arms are probably more influential on the system's usability than the software is (which just reads a magnetic stripe).

For this course, I'd like you to pick something where there is at least a hope of improving the usability by altering the software component. For example, an ATM is similar in that many have users interacting with physical items (buttons), but small changes to the software part can dramatically improve usability. A good example was when ATMs first introduced the "Fast Cash" or "Fast Withdrawal" option - previously users had to go through several steps to take out money, the most commonly used feature of ATMs. Someone had the bright idea to make the most common use path more efficient, and voila, Fast Cash was born. I hope that helps.

Also - you should choose an already-existing interface. Don't create one - we'll do interface design later in the term.

The point of this assignment is to show that, once you've chosen a problem or question, you can select a good usability assessment method to address that problem or question. How do you choose a question?

My advice is to think about a piece of software that you use every day, perhaps one that gives you trouble. One example I gave in class was how to use a smart phone to find a good restaurant nearby: there are several apps that could be used, so how would I find out which one is the most efficient to use? Another example: I regularly have a hard time figuring out where various options are in MS Word menus. Is there a better way to organize the menus? How would I investigate this question?

Topic revision: r4 - 2010-02-04 - 00:59:16 - Main.lsound2