Researcher: Dr. Laura Ikuma
Research institution: Louisiana State University
Champion:
Focus team:
Topic:
Project status:
Year ended: 2011
Project ID: 201001
Why should I care about this project?

Is there a difference between “bad” and “good” graphics? 

Abstract

Dr. Laura Ikuma of Louisiana State University examined different methods for testing the effectiveness of process graphics. A “state-of-the-art” graphic and one considered “poor” were used to assess whether different analytic methods could detect a difference between them. Several techniques were tested, including search time, mental workload, and eye tracking, and most identified performance differences between the two displays. It was decided that these methods should be turned into a toolkit for member companies.

Objective

Human reliability techniques such as THERP and CREAM have been developed to quantify the probability of operator error. However, these techniques are oriented toward a specific task (e.g., starting a compressor) and/or the investigation of specific incidents or events, not a general assessment of error probability. Checklists for evaluating performance shaping factors have been developed (e.g., the High Performance HMI Handbook and the attached procedure guidelines), but they do not indicate whether failure to meet the recommendations is significant enough to result in increased operator error. What is needed is an evaluation tool that indicates whether improvements in the human factors characteristics of the system will significantly reduce the number and/or severity of abnormal incidents.
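A purely illustrative sketch of the checklist-style evaluation described above, and of its limitation, is given below. The item names and the pass/fail threshold are hypothetical assumptions, not the project's tool or the Handbook's content; the point is that a simple count of unmet recommendations cannot by itself say whether the shortfall will increase operator error, which is the gap the needed evaluation tool would close.

```python
# Hypothetical sketch of a checklist-style evaluation of a process graphic.
# Item names and the "significance" threshold are illustrative assumptions only.

CHECKLIST = [
    "gray_background_low_saturation_colors",
    "alarm_colors_reserved_for_alarms",
    "key_variables_shown_with_trends",
    "consistent_navigation_hierarchy",
]

def evaluate_graphic(results: dict, max_unmet: int = 1) -> dict:
    """Count unmet recommendations and flag whether the display needs rework.

    `results` maps each checklist item to True (met) or False (unmet).
    The threshold `max_unmet` is arbitrary; nothing here links unmet items
    to the actual probability of operator error.
    """
    unmet = [item for item in CHECKLIST if not results.get(item, False)]
    return {
        "unmet_items": unmet,
        "needs_rework": len(unmet) > max_unmet,  # threshold is an assumption
    }

if __name__ == "__main__":
    scores = {item: True for item in CHECKLIST}
    scores["key_variables_shown_with_trends"] = False
    print(evaluate_graphic(scores))
```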

Driving questions
  1. How can we quantify operator performance for various interface designs? (A minimal example of computing such metrics is sketched after this list.)
  2. Did these metrics reveal any differences between interfaces and between workload levels?
  3. Can these metrics be used to measure operator performance? (How good are the metrics?)
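As a minimal illustration of driving question 1, the sketch below computes the two established performance metrics named in the deliverables (speed, here as search time, and accuracy) per interface. The trial-record format, field names, and example values are assumptions for illustration, not the project's data.

```python
# Minimal sketch: computing speed and accuracy metrics per interface.
# Trial records, field names, and values are hypothetical.
from statistics import mean

trials = [
    # interface, time to locate the abnormal variable (s), response correct?
    {"interface": "state_of_the_art", "search_time_s": 6.2,  "correct": True},
    {"interface": "state_of_the_art", "search_time_s": 7.8,  "correct": True},
    {"interface": "poor",             "search_time_s": 14.5, "correct": True},
    {"interface": "poor",             "search_time_s": 11.9, "correct": False},
]

def summarize(trials, interface):
    subset = [t for t in trials if t["interface"] == interface]
    return {
        "mean_search_time_s": mean(t["search_time_s"] for t in subset),
        "accuracy": mean(1.0 if t["correct"] else 0.0 for t in subset),
    }

for name in ("state_of_the_art", "poor"):
    print(name, summarize(trials, name))
```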
Background

The effects of workload level and display type on individual perceptions will be measured in terms of situational awareness, perceived workload, and eye movement. These perceptions may influence performance and will be analyzed for relationships with performance metrics (speed and accuracy). The selected measures can serve as a basis for future research studying the effects of training/experience levels, long work schedules, stress, and time of day.
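One way the display type × workload analysis described above might be run is sketched below. The long-format table, column names, and the choice of a two-way ANOVA plus a correlation (via statsmodels and scipy) are assumptions for illustration, not the project's actual analysis pipeline.

```python
# Sketch: effects of display type and workload level on a speed metric,
# plus a correlation between a perception measure and performance.
# Column names and libraries are assumptions, not the project's pipeline.
import pandas as pd
from scipy.stats import pearsonr
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Expected columns, one row per trial:
#   display ("good"/"poor"), workload ("low"/"high"),
#   search_time (s), perceived_workload (subjective rating)
df = pd.read_csv("trials.csv")  # hypothetical file

# Two-way ANOVA: main effects of display and workload, plus their interaction.
model = ols("search_time ~ C(display) * C(workload)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Relationship between a perception measure and a performance metric.
r, p = pearsonr(df["perceived_workload"], df["search_time"])
print(f"perceived workload vs. search time: r={r:.2f}, p={p:.3f}")
```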

Deliverables
  1. Analysis of human performance on two interfaces at different workload levels. The analysis will examine the previously established metrics of speed and accuracy as well as additional measurements that influence performance (situation awareness, eye movement, and subjective workload ratings); an eye-movement example is sketched after this list.
  2. Performance measurement manual documenting the metrics and measurements needed to assess performance and to develop experiments (PDF and web formats). File name: Metrics_guide_v4b.
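As a small illustration of the kind of eye-movement measurements listed in deliverable 1, the sketch below summarizes a fixation log into simple metrics. The log format, area-of-interest labels, and example values are hypothetical assumptions, not the project's recording setup.

```python
# Sketch: summarizing eye-tracker fixations into simple metrics.
# The log format, AOI labels, and example values are hypothetical.

fixations = [
    # (area of interest, fixation duration in ms), in chronological order
    ("overview_pane", 220), ("trend_chart", 340),
    ("alarm_banner", 180),  ("trend_chart", 410),
]

def eye_metrics(fixations, target_aoi):
    durations = [d for _, d in fixations]
    on_target = [d for aoi, d in fixations if aoi == target_aoi]
    # index of the first fixation that landed on the target AOI, if any
    first_hit = next(
        (i for i, (aoi, _) in enumerate(fixations) if aoi == target_aoi), None
    )
    return {
        "fixation_count": len(fixations),
        "mean_fixation_ms": sum(durations) / len(durations),
        "dwell_time_on_target_ms": sum(on_target),
        "fixations_before_target": first_hit,
    }

print(eye_metrics(fixations, target_aoi="trend_chart"))
```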