Cognitive walkthrough
The cognitive walkthrough method is a usability inspection method used to identify usability issues in interactive systems, focusing on how easy it is for new users to accomplish tasks with the system. A cognitive walkthrough is task-specific, whereas heuristic evaluation takes a holistic view and can catch problems that this and other usability inspection methods miss. The method is rooted in the notion that users typically prefer to learn a system by using it to accomplish tasks, rather than, for example, by studying a manual. The method is prized for its ability to generate results quickly and at low cost, especially when compared to usability testing, and for the fact that it can be applied early in the design phase, before coding has even begun, which is rarely possible with usability testing.
Introduction
A cognitive walkthrough starts with a task analysis that specifies the sequence of steps or actions required by a user to accomplish a task, and the system responses to those actions. The designers and developers of the software then walk through the steps as a group, asking themselves a set of questions at each step. Data is gathered during the walkthrough, and afterwards a report of potential issues is compiled. Finally, the software is redesigned to address the issues identified.
The effectiveness of methods such as the cognitive walkthrough is hard to measure in applied settings, as there is very limited opportunity for controlled experiments during software development. Measurements typically involve comparing the number of usability problems found by different methods. However, Gray and Salzman called the validity of those studies into question in their 1998 paper "Damaged Merchandise", which demonstrated how difficult it is to measure the effectiveness of usability inspection methods. The consensus in the usability community is that the cognitive walkthrough method works well in a variety of settings and applications.
Streamlined cognitive walkthrough procedure
After the task analysis has been made, the participants perform the walkthrough:[1]
- Define inputs to the walkthrough: a usability specialist lays out the scenarios and produces an analysis of them by explaining the actions required to accomplish each task.
  - Identify users
  - Create a sample task for evaluation
  - Create action sequences for completing the tasks
  - Implementation of the interface
- Convene the walkthrough:
  - What are the goals of the walkthrough?
  - What will be done during the walkthrough?
  - What will not be done during the walkthrough?
  - Post ground rules; some common ground rules are:
    - No designing
    - No defending a design
    - No debating cognitive theory
    - The usability specialist is the leader of the session
  - Assign roles
  - Appeal for submission to leadership
- Walk through the action sequences for each task:
  - Participants perform the walkthrough by asking themselves a set of questions for each subtask. Typically, four questions are asked:
    - Will the user try to achieve the effect that the subtask has? E.g., does the user understand that this subtask is needed to reach their goal?
    - Will the user notice that the correct action is available? E.g., is the button visible?
    - Will the user understand that the wanted subtask can be achieved by the action? E.g., the right button is visible but the user does not understand the text and will therefore not click on it.
    - Does the user get appropriate feedback? Will the user know that they have done the right thing after performing the action?
  - By answering these questions for each subtask, usability problems will be noticed (a minimal sketch of how the answers can be recorded follows this list).
- Record any important information:
  - Learnability problems
  - Design ideas and gaps
  - Problems with the analysis of the task
- Revise the interface using what was learned in the walkthrough to address the problems identified.
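The recording and reporting steps above lend themselves to a simple structured form. The following is a minimal sketch in Python, not part of the published method: the class names (StepRecord, WalkthroughReport), the ticket-machine task, and the yes/no answers are illustrative assumptions; only the four questions are taken from the list above.

# A minimal, illustrative sketch of how the answers gathered during a
# walkthrough can be recorded and compiled into a report of potential
# problems. All names and example data here are hypothetical.
from dataclasses import dataclass, field
from typing import List, Tuple

QUESTIONS = (
    "Will the user try to achieve the effect of this subtask?",
    "Will the user notice that the correct action is available?",
    "Will the user understand that the action achieves the subtask?",
    "Does the user get appropriate feedback after the action?",
)

@dataclass
class StepRecord:
    """One step of the action sequence, with yes/no answers to the four questions."""
    description: str                        # e.g. "Press the 'Buy ticket' button"
    answers: Tuple[bool, bool, bool, bool]  # one answer per question, in order
    notes: str = ""                         # learnability problems, design ideas, gaps

    def problems(self) -> List[str]:
        # Any question answered "no" signals a potential usability problem.
        return [q for q, ok in zip(QUESTIONS, self.answers) if not ok]

@dataclass
class WalkthroughReport:
    task: str
    steps: List[StepRecord] = field(default_factory=list)

    def summary(self) -> List[tuple]:
        # Compile the post-walkthrough list of potential issues.
        return [(s.description, q, s.notes) for s in self.steps for q in s.problems()]

# Example: recording one step of a hypothetical ticket-machine task.
report = WalkthroughReport(task="Buy a single ticket")
report.steps.append(StepRecord(
    description="Select the fare type on the start screen",
    answers=(True, True, False, True),
    notes="Label 'Std. fare' is unlikely to be understood by a first-time user.",
))
for step, question, note in report.summary():
    print(f"{step}\n  failed: {question}\n  note: {note}")

In this sketch, the compiled summary corresponds to the report of potential issues that the team produces after the walkthrough and uses when revising the interface.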
The CW method does not take several social attributes into account. The method can succeed only if the usability specialist takes care to prepare the team for all the possibilities that may arise during the cognitive walkthrough; such preparation reinforces the ground rules and helps avoid the pitfalls that come with an ill-prepared team.
Common shortcomings
In teaching people to use the walkthrough method, Lewis & Rieman have found that there are two common misunderstandings:[2]
- The evaluator does not know how to perform the task themselves, so they stumble through the interface trying to discover the correct sequence of actions, and then they evaluate the stumbling process. (The evaluator should identify and perform the optimal action sequence.)
- The walkthrough method does not test real users on the system. The walkthrough will often identify many more problems than you would find with a single, unique user in a single test session.
There are social constraints that inhibit the cognitive walkthrough process, including time pressure, lengthy design discussions, and design defensiveness. Time pressure arises when design iterations occur late in the development process: the development team usually feels considerable pressure to implement specifications and may not think it has time to evaluate them properly. Many developers therefore feel that cognitive walkthroughs are inefficient because of the time they take. Lengthy design discussions arise when the team spends the walkthrough itself trying to resolve problems rather than waiting until the results have been compiled; evaluation time spent re-designing reduces the effectiveness of the walkthrough. Finally, designers may feel personally offended that their work is being evaluated at all. Because a walkthrough is likely to lead to more work on a project they are already under pressure to complete, designers may over-defend their design during the walkthrough, becoming argumentative and rejecting changes that seem obvious.
History
The method was developed in the early 1990s by Wharton, et al., and reached a large usability audience when it was published as a chapter in the seminal book Usability Inspection Methods, edited by Jakob Nielsen and Robert L. Mack.[3] The Wharton, et al. method required asking four questions at each step, along with extensive documentation of the analysis. In 2000 there was a resurgence of interest in the method in response to a CHI paper by Spencer, who described modifications that made it effective in a real software development setting. Spencer's streamlined method required asking only two questions at each step and involved creating less documentation. Spencer's paper followed the example set by Rowley, et al., who in their 1992 CHI paper "The Cognitive Jogthrough" described modifications to the method based on their experience applying it.[4]
The method was originally designed as a tool to evaluate interactive systems, such as postal kiosks, automated teller machines (ATMs), and interactive museum exhibits, where users would have little or no experience with the new technology. Since its creation, however, the method has been applied with success to complex systems such as CAD software and some software development tools, in order to understand the first experience of new users.
See also
- Cognitive dimensions, a framework for identifying and evaluating elements that affect the usability of an interface
- Comparison of usability evaluation methods
References
- ^ Spencer, Rick (2000). "The streamlined cognitive walkthrough method, working around social constraints encountered in a software development company". Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. The Hague, The Netherlands: ACM Press. pp. 353–359. doi:10.1145/332040.332456. ISBN 978-1-58113-216-8. S2CID 1157974.
- ^ Lewis, Clayton; Rieman, John (1994). "Section 4.1: Cognitive Walkthroughs". Task-Centered User Interface Design: A Practical Introduction. pp. 46–54. Retrieved April 10, 2019.
- ^ Wharton, Cathleen; Rieman, John; Lewis, Clayton; Polson, Peter (June 1994). "The cognitive walkthrough method: a practitioner's guide". In Nielsen, Jakob; Mack, Robert L. (eds.). Usability Inspection Methods. John Wiley & Sons. pp. 105–140. ISBN 978-0-471-01877-3. Retrieved 2020-02-11.
- ^ Rowley, David E; Rhoades, David G (1992). "The cognitive jogthrough: A fast-paced user interface evaluation procedure". Proceedings of the SIGCHI Conference on Human Factors in Computing Systems - CHI '92. pp. 389–395. doi:10.1145/142750.142869. ISBN 0897915135. S2CID 15888065.
Further reading
- Blackmon, M. H., Polson, P. G., Kitajima, M., & Lewis, C. (2002). Cognitive Walkthrough for the Web. CHI 2002, vol. 4, no. 1, pp. 463–470.
- Blackmon, M. H., Polson, P. G., & Kitajima, M. (2003). Repairing Usability Problems Identified by the Cognitive Walkthrough for the Web. CHI 2003, pp. 497–504.
- Dix, A., Finlay, J., Abowd, G. D., & Beale, R. (2004). Human-Computer Interaction (3rd ed.). Harlow, England: Pearson Education Limited. p. 321.
- Gabrielli, S., Mirabella, V., Kimani, S., & Catarci, T. (2005). Supporting Cognitive Walkthrough with Video Data: A Mobile Learning Evaluation Study. MobileHCI '05, pp. 77–82.
- Goillau, P., Woodward, V., Kelly, C., & Banks, G. (1998). Evaluation of virtual prototypes for air traffic control - the MACAW technique. In M. Hanson (Ed.), Contemporary Ergonomics 1998.
- Good, N. S., & Krekelberg, A. (2003). Usability and Privacy: A Study of KaZaA P2P File-Sharing. CHI 2003, vol. 5, no. 1, pp. 137–144.
- Gray, W. D., & Salzman, M. C. (1998). Damaged Merchandise? A Review of Experiments that Compare Usability Evaluation Methods. Human-Computer Interaction, vol. 13, no. 3, pp. 203–261.
- Gray, W. D., & Salzman, M. C. (1998). Repairing Damaged Merchandise: A Rejoinder. Human-Computer Interaction, vol. 13, no. 3, pp. 325–335.
- Hornbaek, K., & Frokjaer, E. (2005). Comparing Usability Problems and Redesign Proposals as Input to Practical Systems Development. CHI 2005, pp. 391–400.
- Jeffries, R., Miller, J. R., Wharton, C., & Uyeda, K. M. (1991). User Interface Evaluation in the Real World: A Comparison of Four Techniques. Conference on Human Factors in Computing Systems, pp. 119–124.
- Lewis, C., Polson, P., Wharton, C., & Rieman, J. (1990). Testing a Walkthrough Methodology for Theory-Based Design of Walk-Up-and-Use Interfaces. CHI '90 Proceedings, pp. 235–242.
- Mahatody, T., Sagar, M., & Kolski, C. (2010). State of the Art on the Cognitive Walkthrough Method, Its Variants and Evolutions. International Journal of Human-Computer Interaction, 26(8), 741–785.
- Rizzo, A., Marchigiani, E., & Andreadis, A. (1997). The AVANTI project: prototyping and evaluation with a cognitive walkthrough based on the Norman's model of action. In Proceedings of the 2nd Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques, pp. 305–309.
- Rowley, D. E., & Rhoades, D. G. (1992). The Cognitive Jogthrough: A Fast-Paced User Interface Evaluation Procedure. Proceedings of CHI '92, pp. 389–395.
- Sears, A. (1998). The Effect of Task Description Detail on Evaluator Performance with Cognitive Walkthroughs. CHI 1998, pp. 259–260.
- Spencer, R. (2000). The Streamlined Cognitive Walkthrough Method, Working Around Social Constraints Encountered in a Software Development Company. CHI 2000, vol. 2, issue 1, pp. 353–359.
- Wharton, C., Bradford, J., Jeffries, R., & Franzke, M. (1992). Applying Cognitive Walkthroughs to More Complex User Interfaces: Experiences, Issues and Recommendations. CHI '92, pp. 381–388.
Overview
Definition and Purpose
A cognitive walkthrough is a usability inspection method in which experts simulate the problem-solving process of a novice user by stepping through specified tasks on an interface, focusing on key cognitive actions such as forming goals, interpreting system responses, and selecting appropriate actions.[2] Developed as a theory-based evaluation technique, it emphasizes the analysis of how design elements support or hinder these cognitive processes without requiring actual user participation.[3]

The primary purpose of a cognitive walkthrough is to assess the learnability of a system for first-time users, determining how easily individuals can achieve intended goals through exploratory interaction rather than relying on manuals or prior training.[2] It identifies potential barriers to task completion, such as unclear feedback or unintuitive controls, by evaluating whether the interface aligns with users' natural problem-solving strategies during initial encounters.[1] This focus on novices distinguishes it from other usability methods that may prioritize expert efficiency or broad satisfaction metrics.[3]

At its core, the method rests on assumptions that users prefer trial-and-error learning through exploration, forming and refining incomplete goals based on their background knowledge and system cues, and that effective designs should facilitate intuitive action without extensive instruction.[2] These principles draw from cognitive theories of exploratory learning, positing that interfaces must provide sufficient guidance to bridge gaps in user understanding during early use.[4] For instance, in evaluating a login screen, experts might assess whether a novice can readily form the goal of entering credentials, interpret visual cues like labeled fields, and identify the submit button as the correct action, revealing any design flaws that could confuse first-time visitors.[1]

Key Principles
The cognitive walkthrough method is rooted in cognitive psychology, particularly drawing from the GOMS (Goals, Operators, Methods, Selection rules) model originally developed by Card, Moran, and Newell, as extended by Polson and Lewis to address exploratory learning in user interfaces.[5][2] This framework models user behavior as a hierarchy of goals pursued through operators (basic actions), methods (procedures to achieve goals), and selection rules (choices among methods), emphasizing how novices construct mental models of the system during initial interactions. The method incorporates theories of exploratory learning, positing that users without prior experience rely on system cues, trial-and-error, and feedback to form these models and accomplish tasks, rather than relying on memorized routines typical of experts.[2]

At the core of the cognitive walkthrough are four evaluation questions designed to probe the interface's support for user learning at each task step, derived from the cognitive processes in the GOMS-based theory:
- Will the user try to achieve the correct effect? This assesses whether the user's high-level goal aligns with the intended task outcome, based on the assumption that novices enter with realistic expectations shaped by the system's context.
- Will the user notice that the correct action is available? This examines visibility and discoverability, evaluating if interface elements (e.g., labels or controls) are salient enough for a novice to perceive without guidance.
- Will the user associate the correct action with the effect they are trying to achieve? This question focuses on action selection, determining if the user can link the available action to their goal before performing it, based on cues and prior knowledge.
- After the correct action is performed, will the user see that progress is being made toward the solution of the task? This verifies the adequacy of feedback mechanisms, ensuring novices receive clear signals indicating advancement toward the goal to update their mental model of the system state.[4]
Methodology
Preparation Steps
The preparation phase of a cognitive walkthrough is essential to ensure the evaluation targets realistic user experiences and remains focused on learnability for new users. This involves systematically defining the scope, gathering necessary materials, and assembling the evaluation team to simulate how novices would interact with the interface without prior experience. By establishing these elements upfront, evaluators can apply cognitive principles effectively during the walkthrough, avoiding assumptions based on expert knowledge.[2]

Defining the user profile begins with identifying a uniform population of potential users, particularly novices who lack specific experience with the system but possess basic relevant background knowledge, such as general computer literacy or familiarity with analogous tasks in daily life. For instance, in evaluating a banking application, the profile might specify users with no prior online banking history but moderate proficiency in using mobile devices for simple transactions. This step ensures the walkthrough simulates realistic assumptions about user goals and knowledge states, preventing evaluations from inadvertently favoring experienced users.[2][6]

Task selection follows, where evaluators choose 4-6 representative tasks that cover key functionalities and align with primary user goals, derived from contextual inquiries or requirements analysis. These tasks must be concrete and goal-oriented, such as "transfer funds between accounts in a mobile banking app" or "set up a new user profile in an email client", focusing on sequences that a novice might attempt independently. The selection prioritizes critical paths that exercise core interface features without overwhelming the analysis, ensuring comprehensive yet manageable coverage of the system's intended use cases.[2][6]

Interface documentation requires gathering detailed representations of the system, including prototypes, wireframes, or live implementations, along with outlined action sequences that describe the initial state, user actions, and expected system responses. For visual interfaces, this might involve annotated sketches or step-by-step flowcharts that detail button placements, menu options, and feedback mechanisms, presented without embedding expert biases to maintain an objective view of the novice perspective. This documentation serves as the foundation for simulating user interactions during the evaluation.[2]

Evaluator selection typically involves assembling 3-5 individuals with expertise in human-computer interaction (HCI), including UX practitioners, cognitive scientists, or domain specialists, to provide diverse insights while leveraging their understanding of user psychology. In team-based setups, roles may be assigned, such as a facilitator to guide discussions, a presenter to demonstrate the interface, and recorders to note observations, ensuring balanced participation and efficient proceedings. Peers or even designers can participate if they maintain objectivity regarding the specific interface elements under review.[2][6]

Finally, tools for the preparation include checklists structured around the four core questions derived from cognitive principles, such as whether the action is salient and whether user knowledge supports it, along with rating forms for assessing success likelihood and scenario descriptions that contextualize each task. These materials, often in printed or digital form, guide the team in documenting assumptions and potential issues systematically, facilitating a structured transition to the evaluation phase.[2][6]
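As a concrete illustration of these inputs, the sketch below collects a user profile, one representative task with its action sequence, evaluator roles, and the four-question checklist into a single structure. It is a minimal, hypothetical Python example: the banking-app values and field names are assumptions for illustration, not something prescribed by the method.

# Illustrative sketch of the inputs assembled during walkthrough preparation.
# All concrete values (the banking-app example) are hypothetical.
preparation = {
    "user_profile": {
        "experience": "no prior online banking",
        "background": "comfortable with basic smartphone use",
    },
    "tasks": [
        {
            "goal": "Transfer funds between accounts",
            "action_sequence": [
                "Open the 'Accounts' tab",
                "Choose the source account",
                "Tap 'Transfer' and enter the amount",
                "Confirm the transfer",
            ],
        },
        # ... typically 4-6 representative tasks covering core functionality
    ],
    "evaluators": [
        {"person": "facilitator", "role": "guides discussion and enforces ground rules"},
        {"person": "presenter", "role": "demonstrates the interface"},
        {"person": "recorder", "role": "notes answers and potential problems"},
    ],
    "checklist": [
        "Will the user try to achieve the right effect?",
        "Will the user notice that the correct action is available?",
        "Will the user associate the action with the desired effect?",
        "Will the user see that progress is being made?",
    ],
}

# During the session the team walks each action sequence, answering every
# checklist question for every action.
for task in preparation["tasks"]:
    print(f"{task['goal']}: {len(task['action_sequence'])} actions to evaluate")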
Conducting the Evaluation
The conducting phase of a cognitive walkthrough involves a team of usability experts simulating the cognitive processes of representative users as they attempt to learn and perform specified tasks on the interface under evaluation. This simulation focuses on the learnability of the design, with evaluators stepping through each task as if they were novice users, verbalizing their assumed thought processes to identify potential points of confusion or error. The process builds directly on the preparation phase by using the predefined task list and action sequences to guide the walkthrough.[2]

For each task, evaluators break it down into discrete sub-actions, such as forming the user's goal, selecting an appropriate action from available options, executing the action, and interpreting the resulting system state. At each sub-action, the team applies a set of four core questions derived from cognitive theory to assess the likelihood of user success:
- Will the user be attempting to achieve the right effect at this step (i.e., does the task align with the user's immediate goal)?
- Will the user notice that the correct action is available in the interface?
- Will the user know that this action will lead to the desired effect?
- If the user performs the correct action, will they receive appropriate feedback indicating progress toward their goal?
