Research

Here you will find information about the research projects being undertaken and/or supported by the Formal Human Systems Laboratory. Please contact Matthew L. Bolton with any research opportunities you may have.

Current Sponsored Projects

Human Reliability Aware Formal Model Repair for Human-Automation Interaction

Sponsor: National Science Foundation

Project: 1918314

PI: Dr. Matthew L. Bolton

Human error is a major factor in failures across safety-critical domains. Such failures are very complex, with human errors often arising as a result of unexpected interactions between system automation and human behavior. Thus, researchers have investigated how formal methods tools and techniques, which have been developed to mathematically prove properties about complex computer systems, can be adapted to human-automation interaction (HAI) problems. These techniques are powerful and capable of discovering unexpected, critical human errors and system failures. However, existing techniques do not provide a means for fixing discovered human errors. Further, interface changes can both introduce new, unforeseen errors and create negative transfer effects, where changes that conflict with previously learned behaviors cause additional problems. This project will investigate a novel approach to HAI evaluation and repair that will help designers and analysts efficiently eliminate many kinds of potential interaction errors while minimizing the risk of introducing additional human errors. The developed methods will be validated in design cases of real safety-critical systems including an industrial furnace, nuclear power plant procedures, a radiation therapy machine, and pharmacy medication dispensing processes. The knowledge and tools produced in this research will be made available to researchers and designers and have potential applications to a wide range of safety-critical systems. This, in turn, will help avoid system disasters, prevent injuries, save lives, and protect critical resources across society.

The project is divided into three main thrusts. First, the team will develop a theoretically grounded method for scoring the likelihood that humans will behave erroneously for a given HAI design through a novel synthesis of formal methods, erroneous human behavior models, negative transfer theory, and human reliability analyses. Second, it will introduce a new theory of formal model repair in interactive systems that will underlie the development of methods for removing problematic HAI errors by adapting both human-machine interfaces and the workflow of the associated tasks. Third, the scoring and model repair methods will be combined to allow automated model repair to find design interventions that will reduce the likelihood of changes causing problematic human errors, using a database of common error patterns and solutions to be developed through the project. Across all three of these thrusts, the team will use human subject experiments, testing, and formal proofs to validate that the advances achieve their hypothesized capabilities. The work will lead to improved methods for evaluating human reliability aspects of interfaces, widen the application of formal methods to new contexts, and provide resources for researchers, designers, and engineers to improve the reliability of cyber-human systems.
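As a rough illustration of how the second and third thrusts fit together, the Python sketch below shows a repair loop that searches over candidate design edits, keeps only variants a verification step accepts, and ranks survivors by an error-likelihood score. All names and the scoring and checking functions are invented for illustration; this is not the project's actual tooling.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    design: object            # abstract interface/workflow model
    error_likelihood: float   # score from a human reliability analysis

def repair(design, neighbors, check, score, budget=100):
    """Greedy search over verified design edits, minimizing an error score."""
    best = Candidate(design, score(design))
    frontier = [design]
    for _ in range(budget):
        if not frontier:
            break
        current = frontier.pop()
        for variant in neighbors(current):  # small interface/workflow edits
            if not check(variant):          # verification rejects unsafe variants
                continue
            s = score(variant)              # e.g., negative-transfer-aware score
            if s < best.error_likelihood:
                best = Candidate(variant, s)
                frontier.append(variant)
    return best

# Toy usage: designs are sets of interface features; the "model checker"
# and scoring function are stand-ins.
result = repair(
    design=frozenset({"confirm_dialog"}),
    neighbors=lambda d: [d | {"undo"}, d - {"confirm_dialog"}],
    check=lambda d: bool(d),             # reject the empty design
    score=lambda d: 1.0 / (1 + len(d)),  # pretend more support = fewer errors
)
print(result)
```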

Measurement Scales of Psychometric Ratings

Sponsor: Air Force Research Lab / Universal Technology Corporation

Prime Contract FA8650-16-C-2642 / Subcontract 18-S8401-13-C1

PI: Dr. Matthew L. Bolton

The level of measurement of a psychometric scale can have profound implications for what mathematics can be meaningfully applied to it. In this project, we introduce a new method for determining the level of measurement of psychological concepts captured by psychometric rating scales. We will use this method to determine the level of measurement of three common human-automation interaction psychometric scales: human trust in automation, mental workload, and situation awareness. Results will be used to draw conclusions about the validity of the analyzed scales, individual differences in the levels of psychometric ratings, and the level of measurement of psychometric ratings in general.
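To make the stakes of level of measurement concrete, here is a small worked example of our own (not the project's method) showing why the ordinary mean is not meaningful for merely ordinal ratings:

```python
# On an ordinal scale, any order-preserving rescaling of the ratings is
# equally legitimate, so a statistic is only meaningful if its conclusions
# survive such rescalings. These ratings are invented for illustration.
ratings_a = [2, 2, 2, 2]   # e.g., four workload ratings of one interface
ratings_b = [1, 1, 1, 4]   # e.g., four workload ratings of another

def mean(xs):
    return sum(xs) / len(xs)

def rescale(x):
    return x ** 3          # order-preserving, hence legal at the ordinal level

print(mean(ratings_a) > mean(ratings_b))              # True  (2.00 > 1.75)
print(mean([rescale(x) for x in ratings_a]) >
      mean([rescale(x) for x in ratings_b]))          # False (8.00 < 16.75)

# The comparison of means flips under a legal ordinal transformation, so
# comparing means is only meaningful if the scale is at least interval.
```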


Previous Projects

A Formal Approach to Detecting and Correcting Simultaneous Masking in the IEC 60601-1-8 International Medical Alarm Standard

Sponsor: Agency for Healthcare Research and Quality

Contract: R18HS024679

PI: Dr. Matthew L. Bolton

The failure of humans to respond to auditory medical alarms has resulted in numerous patient injuries and deaths and is thus a major safety concern. This project will use a novel computational technique to identify how auditory medical alarms described in the IEC 60601-1-8 international standard can interact in ways that can make alarms unhearable and construct tools engineers can use to design alarms that avoid this problem. By ensuring that standard-compliant medical alarms are perceivable, humans will be more likely to respond to them and thus prevent patient death and injury.

Young Investigator Program: Preventing Complex Failures of Human Interactive Systems with Erroneous Behavior Generation and Robust Human Task Behavior Patterns

Sponsor: Army Research Office/Army Research Laboratory

PI: Dr. Matthew L. Bolton

Failures in complex systems often occur because of unexpected erroneous human interactions. Erroneous behavior classification systems (or taxonomies) are critical for understanding why and how erroneous human behaviors occur and predicting how they can cause system failures. In this work, Dr. Bolton is developing a novel erroneous behavior taxonomy based on where, why, and how human behavior deviates from a normative plan of actions. He plans to show that this taxonomy will subsume the leading existing erroneous behavior classification systems and resolve incompatibilities between them. Further, he will use this taxonomy as the basis for an erroneous behavior generation system that will be usable in computational safety analyses that will help engineers harden systems against erroneous human behavior. This work will give engineers unprecedented tools for understanding erroneous human behavior and predicting how it could contribute to system failure. It will thus have applications in safety-critical systems in medicine, aviation, and the military.
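As a simplified illustration of what erroneous behavior generation can look like (a sketch under our own assumptions, not Dr. Bolton's taxonomy), the following generates single-deviation variants of a normative action plan using classic error phenotypes: omissions, repetitions, and intrusions.

```python
# The plan and action names are invented for the example.
def erroneous_variants(plan, extraneous=()):
    """Yield single-deviation erroneous variants of a normative plan."""
    for i in range(len(plan)):
        yield plan[:i] + plan[i + 1:]            # omission of one action
        yield plan[:i] + [plan[i]] + plan[i:]    # repetition of one action
        for act in extraneous:
            yield plan[:i] + [act] + plan[i:]    # intrusion of a foreign action

normative = ["open_door", "load_sample", "close_door", "start_furnace"]
for variant in erroneous_variants(normative, extraneous=["start_furnace"]):
    print(variant)   # each variant could be explored in a formal safety analysis
```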

Scenario Development Through Computational and Formal Modeling for Verification and Validation of Authority and Autonomy (A&A) Constructs in Aviation

Sponsor: NASA Ames Research Center / Georgia Institute of Technology

PI: Dr. Amy Pritchett (Georgia Tech)

Institutional PI: Dr. Matthew L. Bolton

This project is investigating how formal verification with model checking can be used to do sensitivity analyses of agent-based simulation scenarios. This method is being used to evaluate human performance for different allocations of authority and autonomy in NextGen aviation concepts and systems.

EAGER: Automatically Generating Formal Human-computer Interface Designs from Task Analytic Models

Sponsor: National Science Foundation

PI: Dr. Matthew L. Bolton

The concurrent nature of human-computer interaction can result in situations, unanticipated by designers, in which usability is not properly maintained or human operators cannot complete the task goals the system was designed to support. Mathematical tools and techniques called formal methods exist for modeling and providing proof-based evaluations of different elements of HCI, including the human-computer interface, the human operator's task analytic behavior, and usability. Unfortunately, these approaches require engineers to create formal models of the interface designs, something that is not standard practice and is prone to modeling error. This work strives to address this problem by showing that a machine learning approach can be used to automatically generate formal human-computer interface designs guaranteed to always adhere to usability properties and support human operator tasks.
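The toy sketch below makes the generate-and-verify idea concrete. One caveat: the project itself uses machine learning, whereas this sketch simply enumerates tiny interface state machines and keeps those satisfying a stand-in usability property (the goal state remains reachable from every state). All states, actions, and the property are invented for illustration.

```python
from itertools import product

STATES = ["off", "standby", "on"]
ACTIONS = ["press", "hold"]

def reachable(delta, start, goal):
    """Check whether goal is reachable from start under transition map delta."""
    seen, frontier = {start}, [start]
    while frontier:
        s = frontier.pop()
        for a in ACTIONS:
            t = delta[(s, a)]
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return goal in seen

# Enumerate every deterministic transition function over STATES x ACTIONS
# and keep designs where the "on" goal is reachable from every state.
keys = [(s, a) for s in STATES for a in ACTIONS]
count = 0
for targets in product(STATES, repeat=len(keys)):
    delta = dict(zip(keys, targets))
    if all(reachable(delta, s, "on") for s in STATES):  # usability stand-in
        count += 1
print(count, "of", 3 ** len(keys), "candidate designs satisfy the property")
```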


Using Model Checking to Discover Human Perceptual Problems with Medical Alarm Interactions

PI: Dr. Matthew L. Bolton

Alarms are regularly used in medicine to alert individuals to conditions that require attention. In particular, telemetry monitoring describes a hospital process where a technician must respond to alarms that trigger in response to the monitored rhythms, heart rates, and oxygen saturations of remote patients. In this environment, the timing of a technician’s response is critical as seconds can mean the difference between life and death. Clearly, any situation that prevents a technician from perceiving and thus properly responding to an alarm could have extremely adverse effects on a patient. Unfortunately, in cases where multiple alarms can alert concurrently, alarms can interact in ways that render one or more of them imperceptible. This is known as auditory masking, a condition that can occur based on physical and temporal relationships between sounds. Detecting such problems can be difficult due to the highly concurrent nature of alarms. Model checking is an analysis tool that is specifically designed to find problems in models of concurrent systems using a form of automated theorem proving. This research will develop tools and techniques for using model checking to detect auditory masking conditions in alarms used in medical telemetry monitoring systems and investigate solutions to discovered problems.
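As a simplified illustration of the kind of exhaustive temporal analysis involved (the alarm parameters and masking criterion are invented; real masking also depends on the physical relationships between sounds, such as frequency and intensity), the sketch below searches the timeline of two periodic alarms for tone bursts of the quieter alarm that coincide with the louder one:

```python
def tones(period, burst, horizon):
    """Start/end times of each tone burst of a periodic alarm (seconds)."""
    return [(t, t + burst) for t in range(0, horizon, period)]

def overlaps(a, b):
    return a[0] < b[1] and b[0] < a[1]

def masked_bursts(loud, quiet, horizon=60):
    """Bursts of the quieter alarm that coincide with a louder burst."""
    return [q for q in tones(*quiet, horizon)
            if any(overlaps(q, l) for l in tones(*loud, horizon))]

# alarm = (period in s, burst length in s); here every occurrence of the
# quieter alarm coincides with the louder one, so it is never heard alone.
print(masked_bursts(loud=(5, 2), quiet=(10, 1)))
```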

Verification Models for Advanced Human-Automation Interaction in Safety Critical Flight Operations

Sponsor: European Space Agency

PI: Dr. Francisco Barreiro (IXION Industry and Aerospace)

Institutional PI: Dr. Matthew L. Bolton

Engineers have developed a number of tools and methods to help themselves eliminate design errors and minimize the risk of failure in safety critical systems. However, breakdowns in complex systems can still occur, often as a result of unanticipated interactions between system components. In particular, the increasing use of automation can result in unexpected, and potentially unsafe, human-automation interactions. Thus, for safety critical systems like those used in space and aerospace operations, there is a need for methods capable of providing safety guarantees for a system that considers both anticipated and unanticipated human-automation interactions. Formal verification, a method for mathematically proving whether a model of a system does or does not adhere to system safety properties, is capable of providing such guarantees. This project will develop novel methods for using formal verification to evaluate the safety of systems that rely on human-automation interaction. In particular, it will develop a technique for generating formally verifiable system safety properties from task analytic models of human behavior. The work will show how a model checker can be used to automatically verify (formally prove) if a model of the target system (which will contain automation behavior, human behavior, and their interaction) does or does not adhere to these properties. The developed method will be applied to two aerospace case studies.
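A minimal sketch of the property-generation idea follows; the task model, names, and notation are invented for illustration. Each task's trigger and goal become a temporal-logic obligation that a model checker could then verify against the system model.

```python
# Hypothetical task analytic model: each task has a triggering condition
# and a goal state it is meant to achieve.
task_model = {
    "deorbit_burn": {"trigger": "burn_cleared", "goal": "burn_complete"},
    "deploy_chute": {"trigger": "altitude_low", "goal": "chute_deployed"},
}

def properties(tasks):
    """Emit one LTL obligation per task: G (trigger -> F goal)."""
    return [f"G ({t['trigger']} -> F {t['goal']})" for t in tasks.values()]

for p in properties(task_model):
    print(p)   # each string would be handed to a model checker
```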

NEXTGENAA: Integrated Model Checking and Simulation of NextGen Authority and Autonomy

Sponsor: NASA Ames Research Center

PI: Dr. Ellen Bass (Drexel University)

Institutional PI: Dr. Matthew L. Bolton

NextGen systems are envisioned to be composed of human and automated agents interacting with dynamic flexibility in the allocation of authority and autonomy. The analysis of such concepts of operation requires methods for verifying and validating that the range of roles and responsibilities potentially assignable to the human and automated agents does not lead to unsafe situations. Agent-based simulation has shown promise toward modeling such complexity but requires a tradeoff between fidelity and the number of simulation runs that can be explored in a reasonable amount of time. Model checking techniques can verify that the modeled system meets safety properties, but they require that the component models are of sufficiently limited scope so as to run to completion. By analyzing simulation traces, model checking can also help to ensure that the simulation's design meets the intended analysis goals. Thus, leveraging both types of analysis methods can help to verify operational concepts addressing the allocation of authority and autonomy.
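As a toy illustration of checking a safety property over recorded simulation traces (the property and trace format are invented for the example), the sketch below flags any trace that reaches a state where both the pilot and the automation hold authority at once:

```python
# Each trace is a list of simulation states; the field names are invented.
traces = [
    [{"pilot_authority": True,  "auto_authority": False},
     {"pilot_authority": False, "auto_authority": True}],
    [{"pilot_authority": True,  "auto_authority": True}],   # conflicting state
]

def safe(trace):
    """Safety property: pilot and automation never hold authority at once."""
    return all(not (s["pilot_authority"] and s["auto_authority"]) for s in trace)

for i, t in enumerate(traces):
    print(f"trace {i}: {'ok' if safe(t) else 'VIOLATION'}")
```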