Lecture 11: Design and Prototyping

UI Hall of Fame or Shame?

Ghostview
Acrobat

  • Ghostview: dragging down moves the page up; no scrollbars
    • As if the whole page were the scrollbar
    • But no thumb is visible
    • As if dragging a lens over the page
    • But the lens doesn't actually move!
  • Acrobat: dragging down moves the page down
  • Each is a consistent direct manipulation
  • Using both creates inconsistency

On the left is Ghostview, a Unix program that displays Postscript files. Ghostview has no scrollbar. Instead, it scrolls by direct manipulation of the page image. Clicking and dragging the page downward moves the page image upward. That is, when you drag the mouse down, more of the page image comes into view at the bottom of the window. Does this make sense? What mental model, or physical analogy, does this correspond to?

On the right is Acrobat, which displays PDF files. Acrobat has scrollbars, but you can also directly manipulate the page image, as in Ghostview - only now clicking and dragging downward moves the page downward, revealing more page image at the top of the window. What mental model does this correspond to?

What if you used both Acrobat and Ghostview frequently (one for PDF, say, and the other for Postscript)?

Which model does a smartphone use for scrolling, and which model do scrollbars and mouse scrollwheels use?

UI Hall of Fame or Shame?

Xerox Star
Original Macintosh
Newer Macintosh

Let's look at scrolling some more. Scrollbars have evolved considerably over the history of graphical user interfaces.

The Xerox Star offered the earliest incarnation of a graphical user interface, and its scrollbar already had many of the features we recognize in modern scrollbars, such as a track containing a scrollbar thumb (the handle that you can drag up and down). Even its arrow buttons do the same thing - e.g., pushing on the top button makes more of the page appear at the top of the window, just like the modern scrollbar. But they're labeled the opposite way - the top button has a down arrow on it. Why is that? What mental model would lead you to call that button down? Is that consistent with the mental model of the scrollbar thumb?

Another interesting difference between the Star scrollbar and modern scrollbars is the - and + buttons, which move by whole pages. This functionality hasn't been eliminated from the modern scrollbar; instead, the modern scrollbar just drops the obvious affordance for it. You can still click on the track above or below the thumb in order to jump by whole pages.

What else has changed in modern scrollbars?

User-Centered Design

  • Iterative design
  • Early focus on users and tasks
  • Constant evaluation
The standard approach to designing user interfaces is user-centered design, which has three components.

Traditional Software Engineering Process: Waterfall Model

Let's contrast the iterative design process with another approach. The waterfall model was one of the earliest carefully-articulated design processes for software development. It models the design process as a sequence of stages. Each stage results in a concrete product - a requirements document, a design, a set of coded modules - that feeds into the next stage. Each stage also includes its own validation: the design is validated against the requirements, the code is validated (unit-tested) against the design, etc.

The biggest improvement of the waterfall model over previous (chaotic) approaches to software development is the discipline it puts on developers to think first, and code second. Requirements and designs generally precede the first line of code.

If you've taken a software engineering course, you've experienced this process yourself. The course staff probably handed you a set of requirements for the software you had to build --- e.g., the specification of a chat client or a pinball game. (In the real world, identifying these requirements would be part of your job as software developers.) You were then expected to meet certain milestones for each stage of your project, and each milestone had a concrete product: (1) a design document; (2) code modules that implemented certain functionality; (3) an integrated system.

Validation is not always sufficient, however; some problems aren't discovered until the next stage. Trying to code the design may reveal flaws in the design - e.g., that it can't be implemented in a way that meets the performance requirements. Trying to integrate may reveal bugs in the code that weren't exposed by unit tests. So the waterfall model implicitly needs feedback between stages.

The danger arises when a mistake in an early stage - such as a missing requirement - isn't discovered until a very late stage - like acceptance testing. Mistakes like this can force costly rework of the intervening stages. (That box labeled "Code" may look small, but you know from experience that it isn't!)

Waterfall Model Is Bad for UI Design

  • User interface design is risky
    • So we're likely to get it wrong
  • Users are not involved in validation until acceptance testing
    • So we won't find out until the end
  • UI flaws often cause changes in requirements and design
    • So we have to throw away carefully-written and tested code

Although the waterfall model is useful for some kinds of software development, it's very poorly suited to user interface development.

First, UI development is inherently risky. UI design is hard for all the reasons we discussed in the first class. (You are not the user; the user is always right, except when the user isn't; users aren't designers either.) We don't (yet) have an easy way to predict whether a UI design will succeed.

Second, in the usual way that the waterfall model is applied, users appear in the process in only two places: requirements analysis and acceptance testing. Hopefully we asked the users what they needed at the beginning (requirements analysis), but then we code happily away and don't check back with the users until we're ready to present them with a finished system. So if we screwed up the design, the waterfall process won't tell us until the end.

Third, when UI problems arise, they often require dramatic fixes: new requirements or new design. We saw in Lecture 1 that slapping on patches doesn't fix serious usability problems.

Iterative Design

  • We won't get it right the first time
  • Evaluation will force re-design
  • Eventually, converge to good solution
  • Design guidelines help reduce number and cost of iterations
  • Isn't this just like a repeated waterfall?

Iterative design offers a way to manage the inherent risk in user interface design. In iterative design, the software is refined by repeated trips around a design cycle: first imagining it (design), then realizing it physically (implementation), then testing it (evaluation).

In other words, we have to admit to ourselves that we aren't going to get it right on the first try, and plan for it. Using the results of evaluation, we redesign the interface, build new prototypes, and do more evaluation. Eventually, hopefully, the process produces a sufficiently usable interface.

Sometimes you just iterate until you're satisfied or run out of time and resources, but a more principled approach is to set usability goals for your system. For example, an e-commerce web site might set a goal that users should be able to complete a purchase in less than 30 seconds.

Many of the techniques we'll learn in this course are optimizations for the iterative design process: design guidelines reduce the number of iterations by helping us make better designs; cheap prototypes and discount evaluation techniques reduce the cost of each iteration. But even more important than these techniques is the basic realization that in general, you won't get it right the first time. If you learn nothing else about user interfaces from this class, I hope you learn this.

You might object to this, though. At a high level, iterative design just looks like the worst-case waterfall model, where we made it all the way from design to acceptance testing before discovering a design flaw that *forced* us to repeat the process. Is iterative design just saying that we're going to have to repeat the waterfall over and over and over? What's the trick here?

Spiral Model

  • Know early iterations will be discarded
  • So make them cheap
  • Storyboards, sketches, mock-ups
  • Low-fidelity prototypes
  • Just detailed enough for evaluation

The spiral model offers a way out of the dilemma. We build room for several iterations into our design process, and we do it by making the early iterations as cheap as possible.

The radial dimension of the spiral model corresponds to the cost of the iteration step - or, equivalently, its fidelity or accuracy. For example, an early implementation might be a paper sketch or mockup. It's low fidelity, only a pale shadow of what it would look and behave like as interactive software. But it's incredibly cheap to make, and we can evaluate it by showing it to users and asking them questions about it.

Iterative Design of User Interfaces

  • Early iterations use cheap prototypes
    • Parallel design is feasible: build & test multiple prototypes to explore design alternatives
  • Later iterations use richer implementations, after UI risk has been mitigated
  • More iterations generally mean better UI
  • Only mature iterations are seen by the world

Why is the spiral model a good idea? Risk is greatest in the early iterations, when we know the least. So we put our least commitment into the early implementations. Early prototypes are made to be thrown away. If we find ourselves with several design alternatives, we can build multiple prototypes (parallel design) and evaluate them, without much expense. The end of this reading will make more arguments for the value of parallel design.

After we have evaluated and redesigned several times, we have (hopefully) learned enough to avoid making a major UI design error. Then we actually implement the UI - which is to say, we build a prototype that we intend to keep. Then we evaluate it again, and refine it further.

The more iterations we can make, the more refinements in the design are possible. We're hill-climbing here, not exploring the design space randomly. We keep the parts of the design that work, and redesign the parts that don't. So we should get a better design if we can do more iterations.

Case Study of User-Centered Design:
The Olympic Message System

  • Cheap prototypes
    • Scenarios
    • User guides
    • Simulation (Wizard of Oz)
    • Prototyping tools (IBM Voice Toolkit)
  • Iterative design
    • 200 (!) iterations for user guide
  • Evaluation at every step
  • You are not the user
    • Non-English speakers had trouble with alphabetic entry on telephone keypad

The Olympic Message System is a classic demonstration of the effectiveness of user-centered design (Gould et al, “The 1984 Olympic Message System”, CACM, v30 n9, Sept 1987). The OMS designers used a variety of cheap prototypes: scenarios (stories envisioning a user interacting with the system), manuals, and simulation (in which the experimenter read the system's prompts aloud, and the user typed responses into a terminal). All of these prototypes could be (and were) shown to users to solicit reactions and feedback.

Iteration was pursued aggressively. The user guide went through 200 iterations!

A video about OMS can be found on YouTube. Check it out---it includes a mime demonstrating the system.

The OMS also has some interesting cases reinforcing the point that the designers cannot rely entirely on themselves for evaluating usability. Most prompts requested numeric input ("press 1, 2, or 3"), but some prompts needed alphabetic entry ("enter your three-letter country code"). Non-English speakers - particularly from countries with non-Latin languages - found this confusing, because, as one athlete reported in an early field test, "you have to read the keys differently." The designers didn't remove the alphabetic prompts, but they did change the user guide's examples to use only uppercase letters, just like the telephone keys.

Three Stages Today

  • Needfinding
  • Idea Generation
  • Prototyping

Needfinding

You are not the User. Who is?

Techniques for Understanding Users & Tasks

  • Interviews & observation
  • Contextual inquiry technique
    • Interviews & observation conducted "in context", i.e., with real people dealing with the real problem in the real environment
    • Establish a master-apprentice relationship
      • User shows how and talks about it
      • Interviewer watches and asks questions
  • Participatory design technique
    • Including a user directly on the design team

The best sources of information for needfinding are user interviews and direct observation. Usually, you'll have to observe how users currently solve the problem. For the OMS example, we would want to observe athletes interacting with each other, and with family and friends, while they're training for or competing in events. We would also want to interview the athletes, in order to better understand their goals.

A good survey of information-gathering techniques can be found in Need Finding Tools.

Contextual inquiry is a technique that combines interviewing and observation, in the user's actual work environment, discussing actual work products. Contextual inquiry fosters strong collaboration between the designers and the users. (Wixon, Holtzblatt & Knox, “Contextual design: an emergent view of system design”, CHI '90)

Participatory design includes users directly on the design team - participating in needfinding, proposing design ideas, helping with evaluation. This is particularly vital when the target users have much deeper domain knowledge than the design team. It would be unwise to build an interface for stock trading without an expert in stock trading on the team, for example.

Know Your User

  • Things to learn
    • Age, gender, culture, language
    • Education (literacy? numeracy?)
    • Physical limitations
    • Computer experience (typing? mouse?)
    • Motivation, attitude
    • Domain experience
    • Application experience
    • Work environment, social context
    • Relationships and communication patterns with other people
  • Pitfalls:
    • focusing on system design not users
      • “users should have phones”
    • describing what you want your users to be, rather than what they actually are
      • "Users should read English, be fluent in Swahili, right-handed, color-blind"
  • Needfinding can tell us if requirements we want to impose are reasonable

The reason for user analysis is straightforward: since you're not the user, you need to find out who the user actually is.

User analysis seems so obvious that it's often skipped. But failing to do it explicitly makes it easier to fall into the trap of assuming every user is like you. It's better to do some thinking and collect some information first.

Knowing about the user means knowing not just their individual characteristics, but also their situation. In what environment will they use your software? What else might be distracting their attention? What is the social context? A movie theater, a quiet library, inside a car, the deck of an aircraft carrier: the environment can place widely varying constraints on your user interface.

Other aspects of the user's situation include their relationship to other users in their organization, and typical communication patterns. Can users ask each other for help, or are they isolated? How do students relate differently to lab assistants, teaching assistants, and professors?

Many problems in needfinding are caused by jumping too quickly into a system design. This sometimes results in wishful thinking, rather than looking at reality. Saying "OMS users should all have touchtone phones" is stating a requirement on the system, not a characteristic of the existing users. One reason we do needfinding is to see whether these requirements can actually be satisfied, or whether we'd have to add something to the system to make sure they are satisfied. For example, maybe we'd have to offer touchtone phones to every athlete's friends and family...

Multiple Classes of Users

  • Many applications have several kinds of users
    • By role (student, teacher)
    • By characteristics (age, motivation)
  • Example: Olympic Message System
    • Athletes
    • Friends & family
    • Telephone operators
    • Sysadmins
Many, if not most, applications have to worry about multiple classes of users. Some user groups are defined by the roles that the user plays in the system: student, teacher, reader, editor. Other groups are defined by characteristics: age (teenagers, middle-aged, elderly); motivation (early adopters, frequent users, casual users). You have to decide which user groups are important for your problem, and do a user analysis for every class. The Olympic Message System case study we saw earlier in this reading identified several important user classes by role.

Identify the User's Goals

  • Identify the tasks involved in the problem
    • Decompose them into subtasks
    • Abstract them into goals
  • Example: Olympic Message System
    • send message to an athlete
    • find out if I have messages
    • listen to my messages

Common Errors in Needfinding

  • Bogging down in what users do now (concrete tasks), rather than why they do it (essential tasks or goals)
    • "Save file to disk"
    • vs. "Make sure my work is kept"
  • Thinking from the system's point of view, rather than the user's
    • "Notify user about appointment"
    • vs. "Get a notification about appointment"
  • Fixating too early on a UI design vision
    • "The system bell will ring to notify the user about an appointment..."
  • Duplicating a bad existing procedure in software
  • Failing to capture good aspects of existing procedure

The premature-system-design mindset can affect this part too. If you're writing down tasks from the system's point of view, like "Notify user about appointment", then you're writing requirements (what the system should do), not user goals. Sometimes this is merely semantics, and you can just write it the other way; but it may also mean you're focusing too much on what the system can do, rather than what the user wants. Tradeoffs between user goals and implementation feasibility are inevitable, but you don't want them to dominate your thinking at this early stage of the game.

Needfinding derived from observation may give too much weight to the way things are currently done. The steps of a current system are concrete tasks, like "save file to disk." But if we instead generalize that to a user goal, like "make sure my work is kept", then we have an essential task, which admits much richer design possibilities when it's time to translate this task into a user interface.

A danger of concrete analysis is that it might preserve tasks that are inefficient or could be done a completely different way in software. Suppose we observed users interacting with paper manuals. We'd see a lot of page flipping: "Find page N" might be an important subtask. We might naively conclude from this that an online manual should provide really good mechanisms for paging & scrolling, and that we should pour development effort into making those mechanisms as fast as possible. But page flipping is an artifact of physical books! It might pay off much more to have fast and effective searching and hyperlinking in an online manual. That's why it's important to focus on why users do what they do (the essential tasks), not just what they do (the concrete tasks).

Conversely, an incomplete analysis may fail to capture important aspects of the existing procedure. In one case, a dentist's office converted from manual billing to an automated system. But the office assistants didn't like the new system, because they were accustomed to keeping important notes on the paper forms, like "this patient's insurance takes longer than normal." The automated system provided no way to capture those kinds of annotations. That's why interviewing and observing real users is still important, even when what you're observing is the existing, concrete process.

Idea Generation

Generating Ideas

  • Step 1: identify problem (needfinding)
  • Step 2: come up with solution
  • First generate ideas individually
  • Then come together as a group and brainstorm
    • Starting with the group is less effective
    • People generate more diverse ideas separately
    • Be visual: write everything down on a board

After you collect information about the users and their goals, you'll have to identify a key problem that you're going to solve by building new software. Sometimes the problem will jump out at you; if so, great. If not, you'll need to generate some ideas for problems to solve. That means reading and thinking about all the information you've collected, and then doing some idea generation. These slides talk about the idea generation process. You'll find this useful not just at this stage, but also for the next step in your project, when you'll have to generate ideas for solutions to the problem you've identified.

Note that group brainstorming by itself is not the best approach. It's been shown that you'll generate more ideas if you and your teammates first think about the problem privately, write down your individual ideas, and then come together as a group to synthesize and build on each other's ideas. At top design firms like IDEO, if you don't bring at least 5 ideas to every ideation meeting, you won't last long as a designer.

IDEO's Rules for Brainstorming

  • Be visual
  • Defer judgment
  • Encourage wild ideas
  • Build on the ideas of others
  • Go for quantity
  • One conversation at a time
  • Stay focused on the topic
IDEO has developed a list of rules for good brainstorming as a group.

Keep Multiple Alternatives Around

  • Multiple designs lead to better results
  • Integrate ideas and explore a larger design space
  • Improves user feedback
    • People are better at comparing than at absolute judgment
    • Users are reluctant to criticize the only option
Don't fixate on one approach too early. Instead, keeping multiple alternatives on the table helps with all parts of the user-centered design process - design, implementation, and evaluation. Human beings need multiple alternatives to be creative and give good feedback. Here's some evidence.
  • For individual designers: designers produce designs that are more creative and divergent when they keep multiple designs around throughout the iterative process. They also feel more confident about their designs, and the resulting final design is objectively better. (Dow et al, “Parallel Prototyping Leads to Better Design Results, More Divergence, and Increased Self-Efficacy”, TOCHI, 2010).
  • For groups: when you're sharing ideas with a group, sharing multiple ideas is better than sharing your single favorite. The group is more likely to integrate parts of multiple ideas together, to explore more of the design space, and to provide more productive critiques. (Dow et al, “Prototyping Dynamics: Sharing Multiple Designs Improves Exploration, Group Rapport, and Results”, CHI 2011.)
  • For users: users give more constructive critiques when they're asked to use multiple alternative prototypes. (Tohidi et al, "Getting the Right Design and the Design Right: Testing Many Is Better Than One." CHI 2006.)
  • Two reasons why multiple alternatives help. First, humans are better at comparing things than they are at judging the absolute value of one thing in isolation. Second, presenting only one idea puts a lot of emotional weight on it, so the idea's presenter feels obliged to defend it, and others feel reluctant to criticize it.

Example: IDEO Shopping Cart

Watch this video about the design firm IDEO's process.
In the video, how does IDEO collect information from users and from observation? What problems and goals do they discover from their observations?

Prototyping

Now we're going to talk about prototyping: producing cheaper, less accurate renditions of your target interface. Prototyping is essential in the early iterations of a spiral design process, and it's useful in later iterations too.

Why Prototyping?

  • Cheaper, less accurate versions of your interface
  • Get feedback earlier, cheaper
  • Experiment with alternatives
  • Easier to change to fix flaws
  • Intention to throw away

We build prototypes for several reasons, all of which largely boil down to cost.

First, prototypes are much faster to build than finished implementations, so we can evaluate them sooner and get early feedback about the good and bad points of a design.

Second, if we have a design decision that is hard to resolve, we can build multiple prototypes embodying the different alternatives of the decision.

Third, if we discover problems in the design, a prototype can be changed more easily, for the same reasons it could be built faster. Prototypes are more malleable. Most important, if the design flaws are serious, a prototype can be thrown away. It's important not to commit strongly to design ideas in the early stages of design. Unfortunately, writing and debugging a lot of code creates a psychological sense of commitment which is hard to break. You don't want to throw away something you've worked hard on, so you're tempted to keep some of the code around, even if it really should be scrapped. (Alan Cooper, The Perils of Prototyping, 1994.)

Most of the prototyping techniques we'll see in this reading actually force you to throw the prototype away. For example, a paper mockup won't form any part of a finished software implementation. This is a good mindset to have in early iterations, since it maximizes your creative freedom.

Early Prototyping

Sketches

Paper Prototypes

Computer Mockups

Here are some examples of early-stage prototyping for graphical user interfaces. We'll talk about these techniques and more later in this reading.

Early Prototypes Can Detect Usability Problems

  • Even a sketch would have revealed many usability problems
  • No need for an interactive implementation
Remember this Hall of Shame candidate from the first class? This dialog's design problems would have been easy to catch if it were only tested as a simple paper sketch, in an early iteration of a spiral design. At that point, changing the design would have cost only another sketch, instead of a day of coding.

Prototype Fidelity

  • How accurate a rendition of intended design?
  • Low fidelity:
    • omit details
    • use cheaper materials
    • use different interaction techniques
  • High fidelity
    • more like finished product

Fidelity is Multidimensional

  • Breadth: % of features covered
    • Only enough features for certain tasks
    • Word processor could omit print, spell check
  • Depth: degree of functionality
    • Limited choices
    • canned responses
    • no error handling
  • Horizontal prototype: all breadth, no depth
  • Vertical prototype: deep implementation of narrow functionality
  • For UI, horizontal prototypes are more common, since usability is global

Fidelity is not just one-dimensional, however. Prototypes can be low- or high-fidelity in various different ways (Carolyn Snyder, *Paper Prototyping*, 2003).

Breadth refers to the fraction of the feature set represented by the prototype. A prototype that is low-fidelity in breadth might be missing many features, having only enough to accomplish certain specific tasks. A word processor prototype might omit printing and spell-checking, for example.

Depth refers to how deeply each feature is actually implemented. Is there a backend behind the prototype that's actually implementing the feature? Low-fidelity in depth may mean limited choices (e.g., you can't print double-sided), canned responses (always prints the same text, not what you actually typed), or lack of robustness and error handling (crashes if the printer is offline).

A diagrammatic way to visualize breadth and depth is shown (following Nielsen, *Usability Engineering*, p. 94). A horizontal prototype is all breadth, and little depth; it's basically a frontend with no backend. A vertical prototype is the converse: one area of the interface is implemented deeply. The question of whether to build a horizontal or vertical prototype depends on what risks you're trying to mitigate. In user interface design, horizontal prototypes are more common, since they address usability risk. But if some aspect of the application is a risky implementation - you're not sure if it can be implemented to meet the requirements - then you may want to build a vertical prototype to test that.

A special case lies at the intersection of a horizontal and a vertical prototype. A scenario shows how the frontend would look for a single concrete task.

More Dimensions of Fidelity

  • Look: appearance, graphic design
    • hand-drawn sketch
    • wireframe with same widgets as original
  • Feel: input method
    • pointing & writing on paper
    • feels very different from mouse & keyboard
Two more crucial dimensions of a prototype's fidelity are, loosely, its look and its feel. Look is the appearance of the prototype. A hand-sketched prototype is low-fidelity in look, compared to a prototype that uses the same widget set as the finished implementation. Feel refers to the physical methods by which the user interacts with the prototype. A user interacts with a paper mockup by pointing at things to represent mouse clicks, and writing on the paper to represent keyboard input. This is a low-fidelity feel for a desktop application (but it may not be far off for a tablet PC application).

Comparing Fidelity of Look & Feel

Lo-Fi

Hi-Fi

Here's the same dialog box in both low-fi and high-fi versions. How do they differ in the kinds of things you can test and get feedback about?

Paper Prototypes

Paper Prototype

  • Interactive paper mockup
    • Sketches of screen appearance
    • Paper pieces show windows, menus, dialog boxes
  • Interaction is natural
    • Pointing with a finger = mouse click
    • Writing = typing
  • A person simulates the computer's operation
    • Putting down & picking up pieces
    • Writing responses on the "screen"
    • Describing effects that are hard to show on paper
  • Low fidelity in look & feel
  • High fidelity in depth (person simulates the backend)

Paper prototypes are an excellent choice for early design iterations. A paper prototype is a physical mockup of the interface, mostly made of paper. It's usually hand-sketched on multiple pieces, with different pieces showing different menus, dialog boxes, or window elements.

The key difference between mere sketches and a paper prototype is interactivity. A paper prototype is brought to life by a design team member who simulates what the computer would do in response to the user's "clicks" and "keystrokes", by rearranging pieces, writing custom responses, and occasionally announcing some effects verbally that are too hard to show on paper. Because a paper prototype is actually interactive, you can actually user-test it: give users a task to do and watch how they do it.

A paper prototype is clearly low fidelity in both look and feel. But it can be arbitrarily high fidelity in breadth at very little cost (just sketching, which is part of design anyway). Best of all, paper prototypes can be high-fidelity in depth at little cost, since a human being is simulating the backend.

Much of the material about paper prototyping in this reading draws on the classic paper by Rettig, “Prototyping for tiny fingers” (CACM 1994), and Carolyn Snyder's book Paper Prototyping: The Fast and Easy Way to Design and Refine User Interfaces (Morgan Kaufmann, 2003).

Why Paper Prototyping?

  • Faster to build
    • Sketching is faster than programming
  • Easier to change
    • Easy to make changes between user tests, or even during a user test
    • No code investment - everything will be thrown away (except the design)
  • Focuses attention on big picture
    • Designer doesn't waste time on details
    • Customer makes more creative suggestions, not nitpicking
  • Nonprogrammers can help
    • Only kindergarten skills are required

But why use paper? And why hand sketching rather than a clean drawing from a drawing program?

Hand-sketching on paper is faster. You can draw many sketches in the same time it would take to draw one user interface with code. For most people, hand-sketching is also faster than using a drawing program to create the sketch.

Paper is easy to change. You can even change it during user testing. If part of the prototype was a problem for one user, you can scratch it out or replace it before the next user arrives. Surprisingly, paper is more malleable than digital bits in many ways.

Hand-sketched prototypes in particular are valuable because they focus attention on the issues that matter in early design without distracting anybody with details. When you're sketching by hand, you aren't bothered with details like font, color, alignment, whitespace, etc. In a drawing program, you would be faced with all these decisions, and you might spend a lot of time on them - time that would clearly be wasted if you have to throw away this design. Hand sketching also improves the feedback you get from users. They're less likely to nitpick about details that aren't relevant at this stage. They won't complain about the color scheme if there isn't one. More important, however, a hand-sketched design seems less finished, less set in stone, and more open to suggestions and improvements. Architects have known about this phenomenon for many years. If they show clean CAD drawings to their clients in the early design discussions, the clients are less able to discuss needs and requirements that may require radical changes in the design. In fact, many CAD tools have an option for rendering drawings with a "sketchy" look for precisely this reason.

A final advantage of paper prototyping: no special skills are required. So graphic designers, usability specialists, and even users can help create prototypes and operate them.

Tools for Paper Prototyping

  • White poster board (11"x14")
    • For background, window frame
  • Big (unlined) index cards (4"x6", 5"x8")
    • For menus, window contents, and dialog boxes
  • Restickable glue
    • For keeping pieces fixed
  • White correction tape
    • For text fields, checkboxes, short messages
  • Overhead transparencies
    • For highlighting, user "typing"
  • Photocopier
    • For making multiple blanks
  • Pens & markers, scissors, tape

Here are the elements of a paper prototyping toolkit.

Although standard (unlined) paper works fine, you'll get better results from sturdier products like poster board and index cards. Use poster board to draw a static background, usually a window frame. Then use index cards for the pieces you'll place on top of this background. You can cut the index cards down to size for menus and window internals.

Restickable Post-it Note glue, which comes in a roll-on stick, is a must. This glue lets you make all of your pieces sticky, so they stay where you put them.

Post-it correction tape is another useful tool. It's a roll of white tape with Post-it glue on one side. Correction tape is used for text fields, so that users can write on the prototype without changing it permanently. You peel off a length of tape, stick it on your prototype, let the user write into it, and then peel it off and throw it away. Correction tape comes in two widths, "2 line" and "6 line". The 2-line width is good for single-line text fields, and the 6-line width for text areas.

Overhead transparencies are useful for two purposes. First, you can make a selection highlighted by cutting a piece of transparency to size and coloring it with a transparency marker. Second, when you have a form with several text fields in it, it's easier to just lay a transparency over the form and let the users write on that, rather than sticking a piece of correction tape in every field.

If you have many similar elements in your prototype, a photocopier can save you time.

And, of course, the usual kindergarten equipment: pens, markers, scissors, tape.

Tips for Good Paper Prototypes

  • Make it larger than life
  • Make it monochrome
  • Replace tricky visual feedback with audible descriptions
    • Tooltips, drag & drop, animation, progress bar
  • Keep pieces organized
    • Use folders & open envelopes

A paper prototype should be larger than life-size. Remember that fingers are bigger than a mouse pointer, and people usually write bigger than 12 point. So it'll be easier to use your paper prototype if you scale it up a bit. It will also be easier to see from a distance, which is important because the prototype lies on the table, and because when you're testing users, there may be several observers taking notes who need to see what's going on. Big is good.

Don't worry too much about color in your prototype. Use a single color. It's simpler, and it won't distract attention from the important issues.

You don't have to render every visual effect on paper. Some things are just easier to say aloud: "the basketball is spinning." "A progress bar pops up: 20%, 50%, 75%, done." If your design supports tooltips, you can tell your users just to point at something and ask "What's this?", and you'll tell them what the tooltip would say. But if you actually want to test the tooltip messages, you should prototype them on paper.

Figure out a good scheme for organizing the little pieces of your prototype. One approach is a three-ring binder, with different screens on different pages. Most interfaces are not sequential, however, so a linear organization may be too simple. Two-pocket folders are good for storing big pieces, and letter envelopes (with the flap open) are quite handy for keeping menus.

Hand-Drawn or Not?

Eclipse
Web browser tool
Photo album

Here are some of the prototypes made by an earlier class. Should a paper prototype be hand-sketched or computer-drawn? Generally hand-sketching is better in early design, but sometimes realistic images can be constructive additions.

The first image is a prototype for an interface that will be integrated into an existing program (Eclipse), so the prototype is mostly constructed of modified Eclipse screenshots. The result is very clean and crisp, but also tiny - it's hard to read from a distance. It may also be harder for a test user to focus on commenting about the new parts of the interface, since the new features look just like Eclipse. A hybrid hand-sketched/screenshot interface might work even better.

The second image shows such a hybrid -- an interface designed to integrate into a web browser. Actual screenshots of web pages are used, mainly as props, to make the prototype more concrete and help the user visualize the interface better. Since web page layout isn't the problem the interface is trying to solve, there's no reason to hand-sketch a web page.

The third image shows a pure hand-sketched interface that might have benefited from such props -- a photo organizer could use real photographs to help the user think about what kinds of things they need to do with photographs. This prototype could also use a window frame - a big posterboard to serve as a static background.

Size Matters

Both of these prototypes have good window frames, but the big one is easier to read and manipulate.

The Importance of Writing Big and Dark

This prototype is even easier to read. Markers are better than pencil. (Whiteout and correction tape can fix mistakes as well as erasers can!) Color is also neat, but don't bother unless color is a design decision that needs to be tested, as it is in this prototype. If color doesn't really matter, monochromatic prototypes work just as well.

Post-it Glue and Transparencies are Good

The first prototype here has lots of little pieces that have trouble staying put. Post-it glue can help with that.

The second prototype is completely covered with a transparency. Users can write on it directly with dry-erase marker, which just wipes off - a much better approach than water-soluble transparency markers. With multiple layers of transparency, you can let the user write on the top layer, while you use a lower layer for computer messages, selection highlighting, and other effects.

Paper Allows Cheap Exploration

Paper prototype of a contact manager
  • Idea: show a social graph of contacts
  • Turned out not to be useful
  • So they threw it away
  • Code is a much bigger investment
Paper is great for prototyping features that would be difficult to implement. This project (a contact manager) originally envisioned showing your social network as a graph, but when they prototyped it, it turned out that it wasn't too useful. The cost of trying that feature on paper was trivial, so it was easy to throw it away. Trying it in code, however, would have taken much longer, and been much harder to discard.

Low-Fidelity Prototypes Aren't Always Paper

First Palm Pilot “prototype”
The spirit of low-fidelity prototyping is really about using cheap physical objects to simulate software. Paper makes sense for desktop and web UIs because they're flat. But other kinds of UI prototypes might use different materials. Jeff Hawkins carried a block of wood (not this one, but similar) around in his pocket as a prototype for the first PalmPilot. ([Interview here](http://www.designinginteractions.com/interviews/JeffHawkins))

Multiple Alternatives Generate Better Feedback

Making several prototypes and presenting them to the same user is a great idea. When a design is presented alongside others, people tend to be more ready to criticize it and point out problems, which is exactly what you want in the early stages of design. These three paper prototypes of a house thermostat were tested with users both singly and as a group of three, and people offered fewer positive comments when they saw the designs together than when they saw them alone.

Pictures from Tohidi, Buxton, Baecker, Sellen. "Getting the Right Design and the Design Right: Testing Many Is Better Than One." *CHI 2006*.

How to Test a Paper Prototype

  • Roles for design team
    • Computer
      • Simulates prototype
      • Doesn't give any feedback that the computer wouldn't
    • Facilitator
      • Presents interface and tasks to the user
      • Encourages user to "think aloud" by asking questions
      • Keeps user test from getting off track
    • Observer
      • Keeps mouth shut, sits on hands if necessary
      • Takes copious notes

Once you've built your prototype, you can put it in front of users and watch how they use it. We'll see much more about user testing in a later class, including ethical issues. But here's a quick discussion of user testing in the paper prototyping domain.

There are three roles for your design team to fill:

  1. The computer is the person responsible for making the prototype come alive. This person moves around the pieces, writes down responses, and generally does everything that a real computer would do. In particular, the computer should not do anything that a real computer wouldn't. Think mechanically, and respond mechanically.
  2. The facilitator is the human voice of the design team and the director of the testing session. The facilitator explains the purpose and process of the user study, obtains the user's informed consent, and presents the user study tasks one by one. While the user is working on a task, the facilitator tries to elicit verbal feedback from the user, particularly encouraging the user to "think aloud" by asking probing (but not leading) questions. The facilitator is responsible for keeping everybody disciplined and the user test on the right track.
  3. Everybody else in the room (aside from the user) is an observer. The most important rule about being an observer is to keep your mouth shut and watch. Don't offer help to the user, even if they're missing something obvious. Bite your tongue, sit on your hands, and just watch. The observers are the primary note takers, since the computer and the facilitator are usually too busy with their duties.

What You Can Learn from a Paper Prototype

  • Conceptual model
    • Do users understand it?
  • Functionality
    • Does it do what's needed? Missing features?
  • Navigation & task flow
    • Can users find their way around?
    • Are information preconditions met?
  • Terminology
    • Do users understand labels?
  • Screen contents
    • What needs to go on the screen?
Paper prototypes can reveal many usability problems that are important to find in early stages of design. Fixing some of these problems requires large changes in design. If users don't understand the metaphor or conceptual model of the interface, for example, the entire interface may need to be scrapped.

What You Can't Learn

  • Look: color, font, whitespace, etc
  • Feel: efficiency issues
  • Response time
  • Are small changes noticed?
    • Even the tiniest change to a paper prototype is clearly visible to user
  • Exploration vs. deliberation
    • Users are more deliberate with a paper prototype; they don't explore or thrash as much

But paper prototypes don't reveal every usability problem, because they are low-fidelity in several dimensions. Obviously, graphic design issues that depend on a high-fidelity look will not be discovered. Similarly, interaction issues that depend on a high-fidelity feel will also be missed. For example, problems like buttons that are too small, too close together, or too far away will not be detected in a paper prototype. The human computer of a paper prototype rarely reflects the speed of an implemented backend, so issues of response time - whether feedback appears quickly enough, or whether an entire task can be completed within a certain time constraint -- can't be tested either.

Paper prototypes don't help answer questions about whether subtle feedback will even be noticed. Will users notice that message down in the status bar, or the cursor change, or the highlight change? In the paper prototype, even the tiniest change is grossly visible, because a person's arm has to reach over the prototype and make the change. (If many changes happen at once, of course, then some of them may be overlooked even in a paper prototype. This is related to an interesting cognitive phenomenon called change blindness.)

There's an interesting qualitative distinction between the way users use paper prototypes and the way they use real interfaces. Experienced paper prototypers report that users are more deliberate with a paper prototype, apparently thinking more carefully about their actions. This may be partly due to the simulated computer's slow response; it may also be partly a social response, conscientiously trying to save the person doing the simulating from a lot of tedious and unnecessary paper shuffling. More deliberate users make fewer mistakes, which is bad, because you want to see the mistakes. Users are also less likely to randomly explore a paper prototype.

These drawbacks don't invalidate paper prototyping as a technique, but you should be aware of them. Several studies have shown that low-fidelity prototypes identify substantially the same usability problems as high-fidelity prototypes (Virzi, Sokolov & Karis, “Usability problem identification using both low- and high-fidelity prototypes”, CHI '96; Catani & Biers, “Usability evaluation and prototype fidelity”, Human Factors and Ergonomics Society 1998).

Try It

Exercise

  • Paper prototype an alarm clock
    • Digital or analog
    • Physical or phone app
    • Whatever you'd like to use
    • Your actual alarm clock, or an imaginary one
    • Be wild! Maybe an unusual use case?
  • Narrow the prototype's functionality
    • Show/set the alarm time
    • Turn the alarm on and off
  • Test it on your neighbor
    • "Set an alarm to wake me at 3pm"

Computer Prototypes

Computer Prototype

  • Interactive software simulation
  • High-fidelity in look & feel
  • Low-fidelity in depth
    • Paper prototype had a human simulating the backend; computer prototype usually doesn't
    • Computer prototype may be horizontal: covers most features, but no backend
    • Exception: Wizard of Oz prototyping is broad and deep (but hard)
So at some point we have to depart from paper and move our prototypes into software. A typical computer prototype is a horizontal prototype. It's high-fi in look and feel, but low-fi in depth - there's no backend behind it. Where a human being simulating a paper prototype can generate new content on the fly in response to unexpected user actions, a computer prototype cannot.

What You Can Learn From Computer Prototypes

  • Everything you learn from a paper prototype, plus:
  • Screen layout
    • Is it clear, overwhelming, distracting, complicated?
    • Can users find important elements?
  • Colors, fonts, icons, other elements
    • Well-chosen?
  • Interactive feedback
    • Do users notice & respond to status bar messages, cursor changes, and other feedback?
  • Efficiency issues
    • Controls big enough? Too close together? Scrolling list is too long?
Computer prototypes help us get a handle on the graphic design and dynamic feedback of the interface.

Why Use Prototyping Tools?

  • Faster than coding
  • No debugging
  • Easier to change or throw away
  • Avoid having your UI toolkit do your graphic design

One way to build a computer prototype is just to program it directly in an implementation language, like Java or C++, using a user interface toolkit, like Swing or MFC. If you don't hook in a backend, or use stubs instead of your real backend, then you've got a horizontal prototype.
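
For instance, here's a minimal sketch of what such a directly-coded horizontal prototype might look like in Java Swing. The search feature, widget layout, and canned response are all invented for illustration - the point is that the widgets are real but the backend is a stub.

```java
// Hypothetical horizontal prototype: real Swing widgets, stubbed backend.
import javax.swing.*;
import java.awt.*;

public class SearchPrototype {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(SearchPrototype::createAndShow);
    }

    private static void createAndShow() {
        JFrame frame = new JFrame("Search (prototype)");
        JTextField query = new JTextField(20);
        JButton search = new JButton("Search");
        JTextArea results = new JTextArea(8, 30);
        results.setEditable(false);

        // Stubbed backend: the same canned response regardless of the query,
        // so we can test layout and task flow without a real search engine.
        search.addActionListener(e ->
            results.setText("3 results found for \"" + query.getText() + "\"\n"
                + "(canned results - there is no real backend)"));

        JPanel top = new JPanel(new FlowLayout(FlowLayout.LEFT));
        top.add(new JLabel("Query:"));
        top.add(query);
        top.add(search);

        frame.setLayout(new BorderLayout());
        frame.add(top, BorderLayout.NORTH);
        frame.add(new JScrollPane(results), BorderLayout.CENTER);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.pack();
        frame.setVisible(true);
    }
}
```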

But it's often better to use a prototyping tool instead. Building an interface with a tool is usually faster than direct coding, and there's no code to debug. It's easier to change it, or even throw it away if your design turns out to be wrong. Recall Cooper's concerns about prototyping: your computer prototype may become so elaborate and precious that it becomes your final implementation, even though (from a software engineering point of view) it might be sloppily designed and unmaintainable.

Also, when you go directly from paper prototype to code, there's a tendency to let your UI toolkit handle all the graphic design for you. That's a mistake. For example, Java has layout managers that automatically arrange the components of an interface. Layout managers are powerful tools, but they produce horrible interfaces when casually or lazily used. A prototyping tool will help you envision your interface and get its graphic design right first, so that later when you move to code, you know what you're trying to persuade the layout manager to produce.

Even with a prototyping tool, computer prototypes can still be a tremendous amount of work. When drag & drop was being considered for Microsoft Excel, a couple of Microsoft summer interns were assigned to develop a prototype of the feature using Visual Basic. They found that they had to implement a substantial amount of basic spreadsheet functionality just to test drag & drop. It took two interns their entire summer to build the prototype that proved that drag & drop was useful. Actually adding the feature to Excel took a staff programmer only a week. This isn't a fair comparison, of course - maybe six intern-months was a cost worth paying to mitigate the risk of one fulltimer-week, and the interns certainly learned a lot. But building a computer prototype can be a slippery slope, so don't let it suck you in too deeply. Focus on what you want to test, i.e., the design risk you need to mitigate, and only prototype that.

Computer Prototyping Techniques

  • Storyboard
    • Sequence of painted screenshots
    • Sometimes connected by hyperlinks ("hotspots")
  • Form builder
    • Real windows assembled from a palette of widgets (buttons, text fields, labels, etc.)
  • Wizard of Oz
    • Computer frontend, human backend

There are two major techniques for building a computer prototype.

A storyboard is a sequence (a graph, really) of fixed screens. Each screen has one or more hotspots that you can click on to jump to another screen. Sometimes the transitions between screens also involve some animation in order to show a dynamic effect, like mouse-over feedback or drag-drop feedback.
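
To make that structure concrete, here's a tiny sketch (with hypothetical screen names, not the output of any particular tool) that models a storyboard as a graph of screens, where each hotspot names the screen it jumps to. A storyboarding tool is essentially an editor for this graph, plus a picture for each screen.

```java
// Hypothetical storyboard model: screens connected by named hotspots.
import java.util.Map;

public class Storyboard {
    // screen name -> (hotspot label -> target screen)
    static final Map<String, Map<String, String>> SCREENS = Map.of(
        "home",    Map.of("Compose", "compose", "Inbox", "inbox"),
        "compose", Map.of("Send", "sent", "Cancel", "home"),
        "inbox",   Map.of("Back", "home"),
        "sent",    Map.of("OK", "home"));

    public static void main(String[] args) {
        String current = "home";
        // Simulate one click path through the storyboard.
        for (String click : new String[]{"Compose", "Send", "OK"}) {
            current = SCREENS.get(current).getOrDefault(click, current);
            System.out.println("clicked " + click + " -> screen: " + current);
        }
    }
}
```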

A form builder is a tool for drawing real, working interfaces by dragging widgets from a palette and positioning them on a window.

A Wizard of Oz prototype is a kind of hybrid of a computer prototype and a paper prototype; the user interacts with a computer, but there's a human behind the scenes figuring out how the user interface should respond.

Storyboarding Tools

  • Photoshop
  • Balsamiq Mockup
  • Mockingbird

Photoshop is classically used for storyboarding (also called "wireframe" prototypes), but here are some other tools that are increasing in popularity. Balsamiq Mockup and Mockingbird each offer a drawing canvas and a palette of widget-like graphical objects that can be dragged onto it. These tools are different from form builders, however, in that the result is just a picture - the widgets aren't real, and they aren't functional.

These wireframe tools strive for some degree of "sketchiness" in their look, so they are really medium-fidelity tools: not as low fidelity as a hand sketch, but still not what the final interface will look like.

Pros & Cons of Storyboarding

  • Pros
    • You can draw anything
  • Cons
    • No text entry
    • Widgets aren't active
    • "Hunt for the hotspot"

The big advantage of storyboarding is similar to the advantage of paper: you can draw anything on a storyboard. That frees your creativity in ways that a form builder can't, with its fixed palette of widgets.

The disadvantages come from the storyboard's static nature. Some tools let you link the pictures together with hyperlinks, but even then all you can do is click, not really interact. Watching a real user in front of a storyboard often devolves into a game of "hunt for the hotspot", like children's software where the only point is to find things on the screen to click on and see what they do. The hunt-for-the-hotspot effect means that storyboards are largely useless for user testing, unlike paper prototypes. In general, horizontal computer prototypes are better evaluated with other techniques, like heuristic evaluation.

Form Builders

Build working GUIs from standard widgets
  • Mac Interface Builder
  • Qt Designer
  • FlexBuilder
  • Silverlight
  • Visual Basic

Pros & Cons of Form Builders

  • Pros
    • Actual controls, not just pictures of them
    • Can hook in some backend if you need it
      • But then you won't want to throw it away
  • Cons
    • Limits thinking to standard widgets
    • Less helpful for rich graphical interfaces

Unlike storyboards, form builders use actual working widgets, not just static pictures. So the widgets look the same as they will in the final implementation (assuming you're using a compatible form builder - a prototype in Visual Basic may not look like a final implementation in Java).

Also, since form builders usually have an implementation language underneath them - which may even be the same implementation language that you'll eventually use for your final interface -- you can also hook in as much or as little backend as you want.

On the down side, form builders give you a fixed palette of standard widgets, which limits your creativity as a designer, and which makes form builders far less useful for prototyping rich graphical interfaces, e.g., a circuit-drawing editor. Form builders are great for the menus and widgets that surround a graphical interface, but can't simulate the "insides" of the application window.

Wizard of Oz Prototype

  • Software simulation with a human in the loop to help
  • "Wizard of Oz" = "man behind the curtain"
    • Wizard is usually but not always hidden
  • Often used to simulate future technology
    • Speech recognition
    • Machine learning
  • Issues
    • Two UIs to worry about: user's and wizard's
    • Wizard has to be mechanical

Part of the power of paper prototypes is the depth you can achieve by having a human simulate the backend. A Wizard of Oz prototype also uses a human in the backend, but the frontend is an actual computer system instead of a paper mockup. The term Wizard of Oz comes from the movie of the same name, in which the wizard was a man hiding behind a curtain, controlling a massive and impressive display.

In a Wizard of Oz prototype, the "wizard" is usually but not always hidden from the user. Wizard of Oz prototypes are often used to simulate future technology that isn't available yet, particularly artificial intelligence. A famous example was the listening typewriter (Gould, Conti, & Hovanyecz, “Composing letters with a simulated listening typewriter”, *CACM* v26 n4, April 1983). This study sought to compare the effectiveness and acceptability of isolated-word speech recognition, which was the state of the art in the early 80's, with continuous speech recognition, which wasn't possible yet. The interface was a speech-operated text editor. Users looked at a screen and dictated into a microphone, which was connected to a typist (the wizard) in another room. Using a keyboard, the wizard operated the editor showing on the user's screen.

The wizard's skill was critical in this experiment. She could type 80 wpm, she practiced with the simulation for several weeks (with some iterative design on the simulator to improve her interface), and she was careful to type exactly what the user said, even exclamations and parenthetical comments or asides. The computer helped make her responses a more accurate simulation of computer speech recognition. It looked up every word she typed in a fixed dictionary, and any words that were not present were replaced with X's, to simulate misrecognition. Furthermore, in order to simulate the computer's ignorance of context, homophones were replaced with the most common spelling, so "done" replaced "dun", and "in" replaced "inn". The result was an extremely effective illusion. Most users were surprised when told (midway through the experiment) that a human was listening to them and doing the typing.
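
As a sketch of how that degradation might work (the dictionary and homophone table below are tiny stand-ins we invented for illustration; the actual study's dictionary was far larger), the simulator's filtering logic could look like this:

```java
// Toy simulation of the listening typewriter's output filter:
// out-of-dictionary words become X's, and homophones collapse to the
// most common spelling, ignoring context.
import java.util.*;

public class ListeningTypewriterFilter {
    // Hypothetical tiny dictionary and homophone table, for illustration only.
    static final Set<String> DICTIONARY =
        new HashSet<>(Arrays.asList("the", "deal", "is", "done", "check", "in"));
    static final Map<String, String> HOMOPHONES =
        Map.of("dun", "done", "inn", "in");

    static String simulate(String typedByWizard) {
        StringBuilder out = new StringBuilder();
        for (String word : typedByWizard.toLowerCase().split("\\s+")) {
            String w = HOMOPHONES.getOrDefault(word, word);
            if (!DICTIONARY.contains(w)) {
                w = "X".repeat(w.length());  // simulate misrecognition
            }
            out.append(w).append(' ');
        }
        return out.toString().trim();
    }

    public static void main(String[] args) {
        System.out.println(simulate("the deal is dun"));   // -> the deal is done
        System.out.println(simulate("please check inn"));  // -> XXXXXX check in
    }
}
```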

Thinking and acting mechanically is harder for a wizard than it is for a paper prototype simulator, because the tasks for which Wizard of Oz testing is used tend to be more "intelligent". It helps if the wizard is personally familiar with the capabilities of similar interfaces, so that a realistic simulation can be provided. (See Maulsby et al, ["Prototyping an intelligent agent through Wizard of Oz"](http://luca.bisognin.online.fr/icp/biblio/University_of_Calgary/maulsby93prototyping.pdf), CHI 1993.) It also helps if the wizard's interface can intentionally dumb down the responses, as was done in the Gould study.

A key challenge in designing a Wizard of Oz prototype is that you actually have two interfaces to worry about: the user's interface, which is presumably the one you're testing, and the wizard's.

Summary