HMI quality

The quality of the Human-Machine Interaction (HMI) is a primary concern of usability engineering.

Definition

The HMI quality may be defined by the system utility (usefulness) in terms of the user tasks, obtained by task analysis. This contrasts with automated systems, whose quality is typically defined by attributes such as performance, reliability, and the recovery costs of the system units. Whereas the utility of an automated system depends primarily on its availability, performance, and reliability, the utility of the HMI is affected mainly by the user’s performance and reliability, in the context of the user’s expectations.

Attributes of the HMI quality

  • Performance. The time it takes for the users to evaluate the system state and decide what to do next is typically higher by an order of magnitude than the system response time. Instead of measuring the system response time, we should measure the time elapsed from the moment the user decides to perform a task until its completion. Typically, most of the elapsed time is wasted because the user fails to follow the operational procedures, attempting to recover from unwanted system responses to unexpected actions. Systems engineering should therefore focus on user productivity rather than on system performance.

Landauer, T. K., “The Trouble with Computers: Usefulness, Usability, and Productivity”, MIT Press, 1995
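The productivity measure suggested above — elapsed time from the user’s decision to task completion, rather than system response time — can be sketched as follows. The class and field names, and the sample numbers, are illustrative assumptions, not part of the source:

```python
from dataclasses import dataclass

@dataclass
class TaskRecord:
    """One user task, timestamps in seconds (illustrative names)."""
    decided_at: float      # moment the user decides to perform the task
    completed_at: float    # moment the task is completed
    system_busy: float     # total time the system spent responding

def system_share(records):
    """Fraction of the elapsed task time consumed by the system.

    A small value means most of the elapsed time is user time:
    evaluating the system state, deciding what to do next, and
    recovering from unexpected responses.
    """
    elapsed = sum(r.completed_at - r.decided_at for r in records)
    busy = sum(r.system_busy for r in records)
    return busy / elapsed

tasks = [
    TaskRecord(decided_at=0.0, completed_at=42.0, system_busy=3.5),
    TaskRecord(decided_at=60.0, completed_at=95.0, system_busy=2.0),
]
print(f"system share of task time: {system_share(tasks):.0%}")
```

Measuring the user’s share of each task directly, rather than the system response time, exposes where the elapsed time actually goes.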

  • Reliability. Operators fail in about 10% of the overall operation time, a rate higher by several orders of magnitude than that of the system. Instead of measuring component failure rates, such as by mean time between failures (MTBF), we should measure operational failure rates, such as the rate of near-accidents due to user errors. This is especially true for safety-critical systems, in which the costs of an accident are much higher than those of maintenance. Operational reliability is the key to task performance.
  • Resilience. The interaction may go out of sync, namely, the system might get into an exceptional state. Exceptional states include the failure of a system unit, or the result of a user’s action that does not match the interaction protocol. Resilience engineering methods may be applied to resume normal operation after reaching an exceptional state. Task-oriented systems engineering enables the definition of an interaction protocol, and the STAMP model (N. Leveson, STAMP: A framework for dynamic safety and risk management modeling) may be used to constrain the system operation according to the protocol.
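A minimal sketch of an interaction protocol enforced as an explicit state machine — the states, actions, and transitions here are hypothetical examples, not taken from the source. Any action outside the protocol drives the interaction into an explicit exceptional state, from which a recovery procedure can resume normal operation:

```python
# Allowed transitions of a hypothetical interaction protocol:
# (current state, user action) -> next state.
PROTOCOL = {
    ("idle", "start"): "configuring",
    ("configuring", "confirm"): "running",
    ("running", "stop"): "idle",
}

class Interaction:
    """Tracks the interaction state and constrains it to the protocol."""

    def __init__(self):
        self.state = "idle"

    def act(self, action):
        next_state = PROTOCOL.get((self.state, action))
        if next_state is None:
            # Out-of-protocol action: enter an explicit exceptional
            # state instead of continuing out of sync, so that
            # resilience procedures can resume normal operation.
            self.state = "exception"
        else:
            self.state = next_state
        return self.state

ia = Interaction()
ia.act("start")   # allowed: idle -> configuring
ia.act("stop")    # not allowed while configuring -> "exception"
```

Making the exceptional state explicit is what allows the rate of such situations to be measured rather than hidden.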

  • Recovery costs. Operators spend about 50% of the overall operation time recovering from failures, several orders of magnitude more than the system does. Instead of measuring maintenance costs, such as by mean time to repair (MTTR), we should measure the time it takes for the users to recover from system failures.
  • Logic. An application that is logical in its internal design and produces accurate results may nevertheless be difficult to use, because logic is not absolute: it is subjective, it is task related, and it changes over time. The developer’s logic typically applies to the internals of the application, which is why the user has difficulty following it.
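To make the reliability and recovery measures above concrete, here is a sketch of how an operational failure rate and the recovery share of operation time might be computed from an operation log. The event labels and numbers are illustrative assumptions:

```python
def operational_metrics(events, total_hours):
    """Compute operational (not component) metrics from a log.

    events: (kind, duration_hours) pairs, where kind is either
    'user_error' or 'recovery' (illustrative labels).
    Returns the operational failure rate (errors per hour) and the
    share of the operation time spent recovering from failures.
    """
    errors = sum(1 for kind, _ in events if kind == "user_error")
    recovery_time = sum(d for kind, d in events if kind == "recovery")
    return errors / total_hours, recovery_time / total_hours

# An 8-hour session with two user errors and their recovery periods.
log = [("user_error", 0.0), ("recovery", 1.5),
       ("user_error", 0.0), ("recovery", 2.5)]
rate, recovery_share = operational_metrics(log, total_hours=8.0)
```

In this fabricated example half of the operation time is spent on recovery, which is the kind of figure the MTTR-style component measures never show.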

Measures of the interaction quality

Common practices for evaluating the interaction quality include the use of opinion questionnaires and usability tests.

  • Opinion questionnaires focus on deciding whether the system needs usability testing.
  • Usability tests focus on identifying barriers to task performance.

Special methods have been developed to measure the rate of exceptional situations and the level of user confusion in these cases.
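Such measures might be computed from instrumented session logs roughly as follows. The event fields, and the use of post-exception hesitation time as a proxy for user confusion, are illustrative assumptions rather than the special methods referred to above:

```python
def exception_rate(session_events):
    """Fraction of user actions that ended in an exceptional situation.

    session_events: dicts like {"action": str, "exception": bool,
    "hesitation_s": float} -- field names are illustrative.
    """
    if not session_events:
        return 0.0
    return sum(e["exception"] for e in session_events) / len(session_events)

def mean_hesitation_after_exception(session_events):
    """Average pause (seconds) following an exceptional response,
    used here as a rough proxy for the level of user confusion."""
    pauses = [e["hesitation_s"] for e in session_events if e["exception"]]
    return sum(pauses) / len(pauses) if pauses else 0.0

session = [
    {"action": "open",  "exception": False, "hesitation_s": 0.5},
    {"action": "save",  "exception": True,  "hesitation_s": 6.0},
    {"action": "save",  "exception": True,  "hesitation_s": 4.0},
    {"action": "close", "exception": False, "hesitation_s": 0.2},
]
```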