HFES Policy Statement on Autonomous and Semiautonomous Vehicles
 


Semiautonomous and highly autonomous vehicles have the potential to enhance the safety and efficiency of the American transportation system. However, automated driving technologies significantly affect human performance, potentially negating those benefits, and should be designed and tested to address human performance issues before being introduced onto public roads. The human performance issues that automated driving technologies could introduce include loss of driver engagement and low situation awareness,[1-3] poor understanding of and overreliance on automated systems,[4-6] and loss of manual skills needed for performance and decision-making.[7, 8]

The expectation that automated driving systems will necessarily enhance safety fails to take into account the significant effect these systems have on human performance. To summarize the results of more than 30 research studies on human-automation interaction, “the more automation is added to a system, and the more reliable and robust that automation is, the less likely that human operators overseeing the automation will be aware of critical information and able to take over manual control when needed.”[2] Further, automated vehicles are not currently fully reliable or capable of recognizing and avoiding all accident conditions. Although it is easy to point to accidents in which human drivers play a significant role, that view neglects the strong safety contribution that experienced and knowledgeable drivers make by avoiding accidents on a daily basis.[9] The availability bias in decision-making can lead designers and policy makers to overlook the potential for future automation errors and to see only the potential for automation to avoid driver errors.[10, 11] Failing to attend sufficiently to the potential for automation to degrade human performance, and to the need for drivers and other roadway users to develop accurate levels of trust in these systems, can significantly impact safety and undermine public acceptance of the technology.[12]

As policy makers seek to create a regulatory framework for the governance of these vehicles, HFES therefore endorses the following policy positions, each annotated with the SAE levels of automation to which it applies:
 
I. Automated vehicles require careful testing before deployment on public roads.

  1. The design, development, and testing of automated and semiautomated vehicles require careful assessment of human performance when people operate in conjunction with such systems. Autonomous and semiautonomous driving systems must be required to pass testing that demonstrates that the combined performance of the driver and the vehicle technology is as safe as or safer than human drivers alone across a wide range of driving and weather conditions. [SAE Level 2/3/4/5]

  2. Highly automated systems should perform at a level equivalent to that required of human drivers. In addition, such systems must be required to perform basic tasks that are currently performed by human drivers (including detection and identification of safety signage, and detection and avoidance of obstacles, vehicles, cyclists, and pedestrians). For fully autonomous vehicles [SAE Level 5], testing must include, at a minimum, an ability to detect and safely avoid obstacles, debris, pedestrians, bicyclists, vehicles, and animals, and manage other roadway conditions and hazards. It must include the ability to accurately detect and recognize roadway signage and signaling, even when that signage has been degraded by sun, weather, dirt, tree branches, and other factors common in the driving environment. [SAE Level 4/5]
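To illustrate the breadth of testing this position implies, the following sketch (in Python) enumerates a toy scenario matrix that crosses hazard types with signage-degradation and weather conditions. The condition lists and scenario structure are illustrative assumptions for this example, not a prescribed test standard.

    from itertools import product

    # Illustrative condition lists only; a real test standard would be far
    # larger and would be set by regulators, not by this sketch.
    HAZARDS = ["pedestrian", "bicyclist", "vehicle", "animal", "debris"]
    SIGNAGE = ["clean", "sun_glare", "weathered", "dirty", "occluded_by_branches"]
    WEATHER = ["clear", "rain", "snow", "fog", "night"]

    def build_test_matrix():
        """Cross every hazard with every signage and weather condition,
        yielding one scenario per combination."""
        for hazard, signage, weather in product(HAZARDS, SIGNAGE, WEATHER):
            yield {"hazard": hazard, "signage": signage, "weather": weather}

    scenarios = list(build_test_matrix())
    print(f"{len(scenarios)} scenarios")  # 5 x 5 x 5 = 125
    print(scenarios[0])

Even this toy matrix yields 125 distinct scenarios, which underscores why pre-deployment testing must be systematic rather than anecdotal.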

II. Automated vehicles should support the needs of human drivers and other users.
  1. The design of semiautomated vehicles must avoid known human performance issues[2] and provide effective mechanisms for human oversight and intervention. Semiautonomous vehicle systems must be required to demonstrate equivalent or improved safety both in situations in which the automation is reliable and in those in which it is not (i.e., safety must be established in automation failure conditions that involve resumption of control or override by human drivers). When the automation fails, or encounters situations it cannot handle, a safe transition to human control within the time available for accident avoidance is required, taking into account human decision-making and maneuvering time as well as the vigilance deficits that grow with automation reliability, robustness, and breadth of implementation across vehicle systems (see the time-budget sketch following this list). [SAE Level 2/3/4]

  2. The ability of the automation to function reliably in current and upcoming conditions should be clearly displayed to the driver. Driver interfaces should provide accurate situation awareness of the state of the vehicle and the external driving environment, as well as automation transparency: the automation’s current state, settings, and mode; highly salient warnings of automated mode transitions, including transitions to manual mode; and what the automation is aware of, how it is interpreting the data it receives, and its projected plans or intentions. [SAE Level 2/3/4]

  3. Remote-control stations for operating road vehicles must provide operator interfaces that support situation awareness of vehicle trajectories, systems, and states; the automation; automobile, cyclist, and pedestrian traffic; and environmental and road conditions equivalent to that of an in-vehicle driver, as well as the ability to avoid collisions. [SAE Level 1/2/3/4/5]

  4. Fully autonomous vehicles should accommodate people with disabilities. [SAE Level 4/5]
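To make the time-available requirement in item 1 concrete, the sketch below compares the time a disengaged driver needs to resume control (perception, decision, and maneuvering components, plus a vigilance penalty) against the time available before a hazard is reached. All numeric values are illustrative assumptions, not validated human-performance data.

    def time_available(distance_to_hazard_m: float, speed_mps: float) -> float:
        """Seconds until the vehicle reaches the hazard at constant speed."""
        return distance_to_hazard_m / speed_mps

    def takeover_time_needed(perception_s: float, decision_s: float,
                             maneuver_s: float, vigilance_penalty_s: float) -> float:
        """Total time a disengaged driver needs to resume safe manual control.
        The vigilance penalty models the added delay produced by prolonged,
        highly reliable automation (the out-of-the-loop problem)."""
        return perception_s + decision_s + maneuver_s + vigilance_penalty_s

    # Illustrative numbers only.
    available = time_available(distance_to_hazard_m=120.0, speed_mps=30.0)  # 4.0 s
    needed = takeover_time_needed(perception_s=1.0, decision_s=1.5,
                                  maneuver_s=1.0, vigilance_penalty_s=1.5)  # 5.0 s

    if needed > available:
        print(f"Unsafe handoff: needs {needed:.1f} s, only {available:.1f} s available")

The point of the sketch is the inequality, not the numbers: a handoff is acceptable only when the full human response chain, including vigilance deficits, fits within the time the situation actually allows.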

III. Automated vehicles should be safe and understandable.
  1. Automation reliability standards, and requirements defining the conditions that automated vehicle systems should be able to handle, must be established for each SAE level to support testing, training, and implementation approval. [SAE Level 2/3/4/5]

  2. Highly automated systems should include provisions for safe fallback states when the automation fails for any reason. Evaluation of these fallback states should consider the consequences of multiple vehicles seeking the same state (e.g., the same shoulder) at the same time; see the sketch following this list. [SAE Level 4/5]

  3. Automated systems should include features that allow them to communicate intended actions to cyclists, pedestrians, law enforcement, and other road users. [SAE Level 4/5]

  4. Automation design should make the underlying algorithms and their behavior interpretable so that the system’s capabilities and limits are clear to designers and policy makers. [SAE Level 2/3/4/5]
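A minimal sketch of the fallback-state concern in item 2, assuming a hypothetical shared registry of claimed stopping locations (e.g., coordinated over V2X); the claim interface is invented for illustration and does not correspond to any existing protocol.

    class FallbackRegistry:
        """Hypothetical registry of fallback stopping spots shared among
        nearby vehicles; the interface is assumed for illustration."""
        def __init__(self):
            self._claimed: set[str] = set()

        def try_claim(self, spot_id: str) -> bool:
            """Claim a stopping spot; returns False if already taken."""
            if spot_id in self._claimed:
                return False
            self._claimed.add(spot_id)
            return True

    def choose_fallback(registry: FallbackRegistry, ranked_spots: list[str]) -> str:
        """Take the best available spot; degrade to an in-lane stop with
        hazard lights only when every preferred spot is already claimed."""
        for spot in ranked_spots:
            if registry.try_claim(spot):
                return spot
        return "in_lane_stop_with_hazards"

    registry = FallbackRegistry()
    ranked = ["shoulder_km_12.3", "exit_ramp_13", "rest_area_14"]
    print(choose_fallback(registry, ranked))  # first vehicle gets the shoulder
    print(choose_fallback(registry, ranked))  # second vehicle gets the next spot

Without some such coordination, two failing vehicles can converge on the same shoulder at the same time, which is precisely the multi-vehicle consequence this position asks designers to evaluate.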

IV. Automated vehicles should be accompanied by detailed training for drivers.
  1. Automobile manufacturers should provide sufficient training on the capabilities, limitations, and behaviors of their automated and semiautomated systems (including the range of operational conditions they can handle) so that drivers develop the accurate mental model required for effective oversight of, and interaction with, them. New training should be provided for any automation updates made over the course of the system’s lifetime so that the automation’s behavior remains predictable to the driver. [SAE Level 2/3/4]

  2. Automated vehicle test drivers operating on public roadways should receive extensive training on the capabilities of the automation, as well as instruction in remaining vigilant and intervening rapidly. They should be provided with displays and controls that support this role, and with monitoring systems to ensure they remain vigilant and able to intervene rapidly (a minimal monitoring sketch follows). [SAE Level 2/3/4/5]
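As a minimal sketch of the monitoring idea in item 2: the threshold, the eyes-on-road signal, and the alert escalation below are illustrative assumptions; production driver-monitoring systems derive their limits from validated gaze and readiness measures.

    # Illustrative threshold only; real limits come from validated
    # human-performance data.
    MAX_EYES_OFF_ROAD_S = 2.0

    class VigilanceMonitor:
        """Tracks how long the test driver's gaze has been off the road and
        escalates an alert once the assumed threshold is exceeded."""
        def __init__(self):
            self._eyes_off_since: float | None = None

        def update(self, eyes_on_road: bool, now: float) -> str:
            if eyes_on_road:
                self._eyes_off_since = None
                return "ok"
            if self._eyes_off_since is None:
                self._eyes_off_since = now
            if now - self._eyes_off_since > MAX_EYES_OFF_ROAD_S:
                return "alert"  # escalate: chime, seat vibration, etc.
            return "watching"

    monitor = VigilanceMonitor()
    print(monitor.update(eyes_on_road=False, now=0.0))  # watching
    print(monitor.update(eyes_on_road=False, now=2.5))  # alert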

Approved September 30, 2018

References
 
1. Endsley, M. R. and E. O. Kiris, The out-of-the-loop performance problem and level of control in automation. Human Factors, 1995, 37(2): pp. 381-394.
2. Endsley, M. R., From here to autonomy: Lessons learned from human-automation research. Human Factors, 2017, 59(1): pp. 5-27.
3. Onnasch, L., et al., Human performance consequences of stages and levels of automation: An integrated meta-analysis. Human Factors, 2014, 56(3): pp. 476-488.
4. Lee, J. D. and K. A. See, Trust in automation: Designing for appropriate reliance. Human Factors, 2004, 46(1): pp. 50-80.
5. Sarter, N. B. and D. D. Woods, "How in the world did I ever get into that mode": Mode error and awareness in supervisory control. Human Factors, 1995, 37(1): pp. 5-19.
6. Schaefer, K. E., et al., A meta-analysis of factors influencing the development of trust in automation: Implications for understanding autonomy in future systems. Human Factors, 2016, 58(3): pp. 377-400.
7. Casner, S. M., et al., The retention of manual flying skills in the automated cockpit. Human Factors, 2014, 56(8): pp. 1506-1516.
8. Young, J. P., R. O. Fanjoy, and M. W. Suckow, Impact of glass cockpit experience on manual flight skills. Journal of Aviation/Aerospace Education and Research, 2006, 15(2): pp. 27-32.
9. Woods, D. D. and R. I. Cook, Incidents - Markers of Resilience or Brittleness, in Resilience Engineering: Concepts and Precepts, E. Hollnagel, D. D. Woods, and N. Leveson, Editors, 2006, Aldershot, UK: Ashgate.
10. Kahneman, D., P. Slovic, and A. Tversky, Judgment Under Uncertainty: Heuristics and Biases, 1982, Cambridge, UK: Cambridge University Press.
11. Levinthal, D. A. and J. G. March, The myopia of learning. Strategic Management Journal, 1993, 14: pp. 95-112.
12. Lee, J. D. and K. Kolodge, Understanding attitudes towards self-driving vehicles: Quantitative analysis of qualitative data. In Proceedings of the Human Factors and Ergonomics Society 62nd International Annual Meeting, 2018, pp. 1399-1403.