Heuristic evaluation
A heuristic evaluation is a usability inspection method for computer software that helps to identify usability problems in the user interface design. It specifically involves evaluators examining the interface and judging its compliance with recognized usability principles (the "heuristics"). These evaluation methods are now widely taught and practiced in the new media sector, where user interfaces are often designed in a short space of time and on budgets that may not allow for other types of interface testing.
The main goal of heuristic evaluations is to identify any problems associated with the design of user interfaces. Usability consultants Rolf Molich and Jakob Nielsen developed this method on the basis of several years of experience in teaching and consulting about usability engineering. Heuristic evaluation is one of the most informal methods[1] of usability inspection in the field of human–computer interaction. There are many sets of usability design heuristics; they are not mutually exclusive and cover many of the same aspects of user interface design. Usability problems that are discovered are often categorized, frequently on a numeric scale, according to their estimated impact on user performance or acceptance. The heuristic evaluation is often conducted in the context of use cases (typical user tasks), to provide feedback to the developers on the extent to which the interface is likely to be compatible with the intended users' needs and preferences.
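As an illustration only, the short Python sketch below records each finding against the heuristic it violates and a numeric severity rating; the class and field names are hypothetical, not part of any standard tool, and the 0 to 4 scale mirrors Nielsen's published severity ratings, from "not a problem" to "usability catastrophe".

    from dataclasses import dataclass

    @dataclass
    class Finding:
        heuristic: str    # the usability principle the interface violates
        description: str  # what the evaluator observed
        severity: int     # 0 (not a problem) to 4 (usability catastrophe)

    findings = [
        Finding("Visibility of system status", "No progress indicator during upload", 3),
        Finding("Error prevention", "Delete button sits next to Save", 4),
    ]

    # Sort the report so the highest-impact problems reach developers first.
    for f in sorted(findings, key=lambda f: f.severity, reverse=True):
        print(f"[{f.severity}] {f.heuristic}: {f.description}")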
The simplicity of heuristic evaluation is beneficial at the early stages of design and prior to user-based testing. This usability inspection method does not rely on users, who can be burdensome to involve because of recruiting, scheduling, providing a place to perform the evaluation, and paying for participants' time. In the originally published report, Nielsen stated that four experiments showed that individual evaluators were "mostly quite bad" at doing heuristic evaluations and suggested that multiple evaluators were needed, with the results aggregated, to produce an acceptable review. Most heuristic evaluations can be accomplished in a matter of days. The time required varies with the size of the artifact, its complexity, the purpose of the review, the nature of the usability issues that arise in the review, and the competence of the reviewers. Heuristic evaluation prior to user testing is often conducted to identify areas to be included in the evaluation or to eliminate perceived design issues before user-based evaluation.
Although heuristic evaluation can uncover many major usability issues in a short period of time, a criticism that is often leveled is that results are highly influenced by the knowledge of the expert reviewer(s). This "one-sided" review consistently yields different results from software performance testing, with each type of testing uncovering a different set of problems.
Heuristic evaluations are conducted in a variety of ways depending on the scope and type of project. As a general rule, researched frameworks are used to reduce bias and maximize findings within an evaluation. Heuristic evaluation has various pros and cons, many of which depend on the resources and time available to the evaluation team.
Pros: Because the evaluator works through a very detailed list of criteria, the process is thorough and provides good feedback on areas that could be improved. In addition, since it is done by several people, the designer can get feedback from multiple perspectives. As it is a relatively straightforward process, there are fewer ethical and logistical concerns in organizing and executing the evaluation.
Cons: Since there is a specific set of criteria, the quality of the evaluation is limited by the skill and knowledge of the evaluators, which raises the further problem of finding people qualified to conduct the evaluation; this is less of an issue where experts and qualified evaluators are readily available. In addition, because the evaluations are largely personal observations, the results contain no hard data; the designer has to weigh the information and evaluations with these limitations in mind.
According to Nielsen, three to five evaluators are recommended within a study.[2] Having more than five evaluators does not necessarily increase the number of insights and may add more cost than benefit to the overall evaluation.
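A minimal sketch of the reasoning behind this recommendation, assuming the Nielsen–Landauer diminishing-returns model in which each evaluator independently finds a fixed share of the problems; the 35% per-evaluator detection rate is an illustrative figure (Nielsen reported that single evaluators found roughly a third of known problems), not a constant of the method:

    # Expected share of all usability problems found by i independent
    # evaluators, under the model found(i) = 1 - (1 - rate)**i.
    DETECTION_RATE = 0.35  # assumed per-evaluator detection probability

    def proportion_found(evaluators: int, rate: float = DETECTION_RATE) -> float:
        return 1 - (1 - rate) ** evaluators

    for i in range(1, 9):
        print(f"{i} evaluator(s): {proportion_found(i):.0%} of problems expected")
    # The curve climbs to roughly 73% at three evaluators and 88% at five,
    # then flattens, which is why extra evaluators add cost faster than insight.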
Heuristic evaluation must start individually, before results are aggregated, in order to reduce group confirmation bias.[2] Each evaluator should examine the prototype independently before entering group discussions to pool insights.
There are costs and benefits associated with adding an observer to an evaluation session.[2]
In a session without an observer, evaluators need to formalize their individual observations in a written report as they interact with the product or prototype. This option requires more time and effort from the evaluators, as well as further time for the conductors of the study to interpret the individual reports. However, it is less costly because it avoids the overhead of hiring observers.
With an observer, evaluators can provide their analysis verbally while observers transcribe and interpret their findings. This option reduces the workload on the evaluators and the time needed to interpret findings from multiple evaluators.
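As a sketch of the aggregation step that follows either variant (the evaluator names and problem phrasings here are invented for illustration; in practice the matching of equivalent findings is usually done by hand), the conductors of the study can tally how many evaluators independently reported the same problem:

    from collections import Counter

    # One written report per evaluator, reduced to short problem statements.
    reports = {
        "evaluator_1": ["no undo after delete", "jargon in error messages"],
        "evaluator_2": ["no undo after delete", "search hidden in menu"],
        "evaluator_3": ["jargon in error messages", "no undo after delete"],
    }

    tally = Counter(problem for problems in reports.values() for problem in problems)

    # Problems reported independently by several evaluators are strong
    # candidates for the top of the aggregated findings list.
    for problem, count in tally.most_common():
        print(f"{count}/{len(reports)} evaluators: {problem}")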
Jakob Nielsen's heuristics are probably the most widely used usability heuristics for user interface design. An early version of the heuristics appeared in two papers by Nielsen and Rolf Molich published in 1989–1990.[3][4] Nielsen published an updated set in 1994,[5] and the final set still in use today was published in 2005:[6]

1. Visibility of system status
2. Match between system and the real world
3. User control and freedom
4. Consistency and standards
5. Error prevention
6. Recognition rather than recall
7. Flexibility and efficiency of use
8. Aesthetic and minimalist design
9. Help users recognize, diagnose, and recover from errors
10. Help and documentation
Although Nielsen is considered the expert and field leader in heuristic evaluation, Jill Gerhardt-Powals developed a set of cognitive engineering principles for enhancing human-computer performance.[7] These heuristics, or principles, are similar to Nielsen's heuristics but take a more holistic approach to evaluation. Gerhardt-Powals' principles[8] are listed below.

1. Automate unwanted workload
2. Reduce uncertainty
3. Fuse data
4. Present new information with meaningful aids to interpretation
5. Use names that are conceptually related to function
6. Group data in consistently meaningful ways
7. Limit data-driven tasks
8. Include in the displays only that information needed by the user at a given time
9. Provide multiple coding of data when appropriate
10. Practice judicious redundancy
Ben Shneiderman's book Designing the User Interface: Strategies for Effective Human-Computer Interaction (1986), published a few years before Nielsen's work, covered his popular list of the "Eight Golden Rules".[9][10]
In 2000, Susan Weinschenk and Dean Barker[11] created a categorization of heuristics and guidelines used by several major providers into the following twenty types:[12]

1. User control
2. Human limitations
3. Modal integrity
4. Accommodation
5. Linguistic clarity
6. Aesthetic integrity
7. Simplicity
8. Predictability
9. Interpretation
10. Accuracy
11. Technical clarity
12. Flexibility
13. Fulfillment
14. Cultural propriety
15. Suitable tempo
16. Consistency
17. User support
18. Precision
19. Forgiveness
20. Responsiveness
For an application in a specific domain or culture, the heuristics mentioned above may not identify all potential usability problems.[13] This limitation arises because these heuristics cannot take into account the domain- and culture-specific features of an application, and it has led to the introduction of domain-specific and culture-specific heuristic evaluations.[14]