Human Factors and Automation

(Excerpt from Chapter 2)

The field of human factors consists of "the study of human beings and their interaction with products, environments and equipment in performing tasks and activities" [3]. Among the objectives of this field is to maximize system efficiency and human health, comfort, safety, and quality of life [62]. From a research perspective, this field studies the capabilities and limitations of humans and how these factors may determine design methods used to build anything from the simplest manual hand tool to complex interactive automated systems. From the application perspective, this field offers the opportunity to apply such methods to the design, evaluation, and commissioning of engineered systems [63].

Because the study of human factors focuses on how the human element affects the performance of a system within its goals and environment, this field draws from perspectives in systems theory. Aristotle refers to systems in his metaphysics as "all things which have several parts and in which the totality is not, as it were, a mere heap, but the whole is something beside the parts" [64], i.e., the whole is more than the sum of its parts. In the case of human factors, the performance of the system is evaluated in terms of the context of the human-machine system, defined as "a system in which an interaction occurs between people and other system components, such as hardware, software, tasks, environments, and work structures" [3]. A familiar case of a human-machine system composed of a human user and a personal computer is shown in Figure 1. The characteristics of the human being and computer are used to describe the interaction taking place between them. The description includes the sensory, cognitive, and motor characteristics of the human, which may be influenced by age, gender, and training. The computer description shows sensors and transducers as inputs; processor and memory as a counterpart to "thinking;" and visual, auditory, and tactile devices as outputs. The interaction, as described in Figure 1, is an interplay between the actuation of the computer system and the actions of the human.
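To make this interplay concrete, the closed loop between human and machine can be sketched in a few lines of code. This is only an illustrative sketch: the function names and the one-dimensional "state" are assumptions for the example, not elements of the referenced figure.

```python
def machine_output(state: int) -> str:
    """Machine output stage (visual/auditory/tactile): present the state to the human."""
    return f"display:{state}"

def human_decision(percept: str, goal: int) -> str:
    """Human stages (sensory -> cognitive -> motor): perceive the display,
    compare it with the goal, and choose an action."""
    shown = int(percept.split(":")[1])
    if shown < goal:
        return "increase"
    if shown > goal:
        return "decrease"
    return "stop"

def interaction_loop(state: int, goal: int, max_steps: int = 100) -> int:
    """Closed-loop interplay: machine actuation and human action alternate
    until the human judges the goal to be met."""
    for _ in range(max_steps):
        action = human_decision(machine_output(state), goal)
        if action == "stop":
            break
        state += 1 if action == "increase" else -1
    return state
```

The loop structure, not the toy arithmetic, is the point: each cycle passes through the machine's output devices, the human's perception and decision, and back through the machine's inputs.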

Figure 1: Example of a human-machine system [3].

The development of systems in engineering has traditionally been guided by the reductionist approach inspired by René Descartes in his Discourses [65], whereby instead of studying the behavior of a system as a whole, the analysis focuses on its components in isolation. The field of human factors aims to complement the reductionist approach by bringing in ideas from systems theory, concerning itself with both the behavioral/system-oriented approach and the constitutive/reductionist view of the system. Thus, human factors aims to use systems theory as a unifying framework for these two complementary perspectives [66].

The adoption of the systems approach began during World War II, when the complexity of military systems became a problem for their successful operation. The early stage of human-automation systems has been described in three phases, as shown in Figure 2 [2].

Figure 2: History of human-machine systems engineering [2].

The initial use of the human-machine systems concept is represented by Phase A. During this time, special attention was given to civilian and military aviation and to weapon systems. The concept also found application in the automotive and communication industries. Phase B reflects the period in which the human factors field began to borrow models from systems engineering to describe human performance; for example, concepts from control systems theory were used to describe and predict the performance of human operators. Finally, Phase C references a so-called "human-computer interaction" period [2], characterized by the use of computing power and automation. This phase dramatically changed the way in which humans and machines interacted, and such advances posed new challenges to both designers and operators. On the one hand, operators would perform less physical work while having more cognition-intensive interactions with automated systems. On the other hand, designers would have to consider how automated systems could help operators perceive, detect, think, and make decisions in real time [67, 68]. Consequently, human factors professionals needed to know more about the attributes of information processing and cognition in humans in order to integrate these considerations into their designs, leading to the emergence of cognitive engineering [69].
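A well-known instance of the control-theoretic operator models of this period (not detailed in this excerpt) is McRuer's crossover model, which approximates the combined dynamics of the human operator $Y_p(s)$ and the controlled element $Y_c(s)$ near the crossover frequency $\omega_c$ as an integrator with an effective time delay $\tau_e$:

$$ Y_p(s)\,Y_c(s) \approx \frac{\omega_c\, e^{-\tau_e s}}{s} $$

In other words, regardless of the plant being controlled, the human adapts so that the open-loop human-plus-machine system behaves like a delayed integrator in the crossover region.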

Cognitive engineering focuses on "complex, cognitive thinking, and knowledge-related aspects of human performance, whether carried out by humans or by machine agents" [70] and overlaps with the fields of cognitive science and artificial intelligence [3]. The relationship of the latter with cognitive engineering is illustrated in Figure 3 [2].

Figure 3: The trend of progress in human-supervised automation [3].

Artificial intelligence is described from the perspective of supervisory control in Figure 3 as the upper-right corner of the chart [8]. In this case, the automation is ideally intelligent, e.g., a robot with a high level of intelligence, able to operate in unstructured environments and pursue various goals, where the role of the human would be that of an observer, an assisted subject, or a peer. However, most systems require human supervision for their operation, as expressed by the spectrum of degrees of automation. Also called levels of automation (LOA), these range from the completely manual extreme to the fully automatic one, across tasks of varying complexity. Cognitive engineering finds its work domain in the intermediate range of this chart, i.e., in combinations of humans and machines operating at increasing degrees of automation. Two different levels of automation are shown in Figure 4: (a) Robonaut, a teleoperated robotic system [4]; and (b) RobuBOX-Kompai, an autonomous system that finds application in health care and assistive robotics [5].

Figure 4: (a) Robonaut [4]; (b) RobuBOX-Kompai [5].

Another way to represent the notion of LOA is shown in Figure 5 [6, 8]. Six types of supervisory control architectures are compared in Figure 5: Type 1 represents purely manual control, while Type 6 shows fully autonomous control. The intermediate architectures of supervisory control are characterized both in a strict formal sense (Types 3-5) and in a broader sense (Type 2) [8]. In the strict formal sense, human operators interact only intermittently with the computerized system, configuring operating conditions and adjusting settings through an interface. In the broader sense, the interface between the human and the machine produces an integrated display of data, and the system becomes telerobotic/teleoperated.
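The ordering of these six architectures can be encoded as a simple enumeration. The type names below are illustrative paraphrases of the spectrum just described, not the exact labels used in the referenced figure.

```python
from enum import IntEnum

class SupervisoryControlType(IntEnum):
    """Six supervisory control architectures, ordered from manual to autonomous.

    Names are assumptions for this sketch; only the 1..6 ordering and the
    strict-sense (3-5) vs. broad-sense (2) grouping come from the text."""
    MANUAL = 1                  # Type 1: purely manual control
    BROAD_SUPERVISORY = 2       # Type 2: integrated display, telerobotic/teleoperated
    STRICT_SUPERVISORY_LOW = 3  # Types 3-5: intermittent human interaction,
    STRICT_SUPERVISORY_MID = 4  #   configuring conditions and adjusting
    STRICT_SUPERVISORY_HIGH = 5 #   settings through an interface
    FULLY_AUTONOMOUS = 6        # Type 6: fully autonomous control

def is_strict_supervisory(t: SupervisoryControlType) -> bool:
    """Strict formal supervisory control covers Types 3-5."""
    return (SupervisoryControlType.STRICT_SUPERVISORY_LOW
            <= t <= SupervisoryControlType.STRICT_SUPERVISORY_HIGH)
```

Because `IntEnum` members compare as integers, the enumeration directly captures the idea of a spectrum: a higher value means a higher degree of automation.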

The elements common to all supervisory control architectures are the tasks to be considered and the human operator. The questions are: To what LOA should human-automation systems be developed? More specifically, how will automation technology complement and enhance humans in conducting their tasks and pursuing their goals? And how are the goals defined in each case? Depending on whether tasks are performed by humans or by machines, a wide variety of problems may arise. Such problems may be due to hardware/software design, to variations in human performance, or to the interaction between the human and the automation components.

Figure 5: Supervisory control in human-robot interaction [6].

Some variations in human performance may be attributed to human information-processing functions, such as perception, attention, working memory, and long-term memory, among others. These considerations, together with those related to interaction issues, pose challenges that are addressed in "Issues between Humans and Automation."