Humans and Automation

(Excerpt from Chapter 2)

Automation for the sake of automation, or merely because it is possible, can be entertaining and make for a curious academic exercise, but it is not always a good idea in practice. Automation is meant to support human work and activities, and not vice versa. Furthermore, it should maximize the extent to which energy may be utilized toward this purpose. The field of human-automation systems raises issues that, together with advances in automation technology, have evolved over time but nevertheless persist due to the presence of the human element [2]. The following paragraphs describe some of these issues.

Complexity: Emerging Behaviors

Automation is composed of sensors, decision elements, and actuators. These may form intricate networks built from predetermined relationships established intentionally by engineers and designers. In this case, complexity refers to emerging relationships and behaviors that result from the interaction of these elements in a particular context. Such relationships tend to be unanticipated and are studied only after the behavior has been expressed during tests. This is especially true for large-scale systems, like power grids, which are said to be highly dynamic in terms of (a) pace of change, (b) scale of operations, (c) integration of operations, (d) aggressive competition between elements, and (e) deregulation by government [78]. Furthermore, the distributed nature of these automated systems may lead to conflicting decision elements, each of which may pursue a different goal at the same time. This is known as the mixed-initiative problem which, when involving humans as decision elements, is called the mixed human and computer initiative problem [2]. One approach to this particular problem is the use of a "human-machine overseer" or a "meta-supervisor," which aims to coordinate behaviors and goals. One example of such an approach, drawn from artificial intelligence, is the subsumption architecture [79, 80, 81], which inspired the FAM-based agent architecture [60, 82] introduced in Subsection 1.1.2. Additionally, the human may be unable to develop a mental model of how the system works, which degrades human performance by limiting the extent to which operators comprehend the situation of the system and how its behavior may evolve over time [57].
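
To make the coordination idea concrete, the following is a minimal Python sketch of a subsumption-style arbiter in which higher-priority behaviors suppress the outputs of lower-priority ones. The behavior names, sensor fields, and commands are hypothetical illustrations; this is only a sketch of the layered-suppression idea, not the architecture of [79, 80, 81] or the FAM-based agent architecture itself.

# Minimal sketch of a subsumption-style coordinator: higher-priority
# behaviors suppress the output of lower-priority ones. Behavior names,
# sensor fields, and commands are hypothetical, for illustration only.

class Behavior:
    def __init__(self, name, applies, command):
        self.name = name
        self.applies = applies    # predicate over sensor readings
        self.command = command    # action issued when the predicate holds

    def propose(self, sensors):
        return self.command if self.applies(sensors) else None

def arbitrate(layers, sensors):
    """Return the command of the highest-priority applicable layer."""
    for behavior in layers:       # layers ordered from highest to lowest priority
        command = behavior.propose(sensors)
        if command is not None:
            return command
    return "idle"

# Layers ordered so that safety-critical behaviors subsume routine ones.
layers = [
    Behavior("avoid-obstacle", lambda s: s["range_m"] < 0.5, "stop-and-turn"),
    Behavior("follow-operator-goal", lambda s: s["goal_set"], "move-to-goal"),
    Behavior("wander", lambda s: True, "explore"),
]

print(arbitrate(layers, {"range_m": 0.3, "goal_set": True}))  # -> stop-and-turn
print(arbitrate(layers, {"range_m": 2.0, "goal_set": True}))  # -> move-to-goal

In this sketch, a goal pursued by a lower layer is simply overridden whenever a higher-priority condition holds, which is one simple way a "meta-supervisor" could resolve conflicting decision elements.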

Monitoring: A Burden to Humans

One result of applying automation in the workplace is the changing role of the human. Instead of manually conducting his/her work, the human trains to take on the role of supervisor. Ideally, supervision involves some form of interaction that maintains the cognitive engagement of the human. However, in many cases automation is designed to conduct repetitive tasks in a monotonous manner, becoming a source of boredom for the human supervisor [2]. This can be counterproductive for two reasons [57]: (1) if the system operates continuously and without anomalies, the human may divert his/her attention to other sources of information, perhaps even tunneling his/her attention to signals that will not help maintain situation awareness; and (2) it may mistakenly take the human out of the loop, putting the system at risk of failure by human error in the event of an anomaly. In such a case, it would be challenging for the human to regain control of the system due to his/her slow response in comparison to automation. In fact, it is said that the human nervous system is limited to a range of bandwidths far slower than that of automation: "At the low frequencies humans fail statistically, and at high frequencies, above 1 Hz, they fail reliably" [2].

Decision Support: Undertrust and Overtrust

Due to the supervisory role undertaken by humans, one artifact that becomes particularly convenient as part of the human interface is a decision aid, or decision support system. The issues of providing decision support originate from: (1) the inability of the engineer/developer to obtain a complete model of the controlled process, and (2) the non-existence of a design objective for the decision aid. The engineer/developer would ideally need a complete model of the dynamic system and an objective function in order to design and evaluate the decision support system. However, if these were available, the decision aid would not be necessary, because the system could then be fully automated. This is known as the Rosenborough Dilemma [83], which concludes that "in any system requiring a human operator, the objective validity of a specific decision aid can never be established." Another author, however, offers a way out of this dilemma by validating the decision aid in situations in which the human may make mistakes, demonstrating the motivation for using decision support. Even in use, there is no guarantee: the human operator may always decide whether or not to use the information offered by the decision aid depending on how suitable he/she finds it in any given situation [2]. This may lead to undertrust of the decision support system by the human operator, which may especially be the case if the decision aid frequently gives false alarms, i.e. becomes a nuisance, resulting in the cry-wolf syndrome. At the other extreme, routinely relying on decision aids may cause the human operator to develop a dependence on what the decision aid recommends, partially or totally abandoning his/her responsibilities and thus, through human error, potentially causing system failure.

Levels of Automation

As discussed in Subsection 2.1.1 with Figure 3, there is a range of degrees for the development of automated systems, from the completely manual to the fully autonomous. Most people, however, believe that systems can only be controlled either manually or automatically, and discard the possibility of having humans work with automation at various degrees. Such is the case in the domain of space exploration: most people take extreme positions, pitting fully robotic missions against manned ones, and do not highlight the advantages of having humans and automation collaborate in a shared mission [2]. This polarization is most probably due to the lower cost of conducting purely robotic missions. The question becomes: to what degree should a system be automated? The so-called technological imperative [8] has driven the trend to automate the easiest processes, leaving the remaining, and oftentimes more difficult, tasks to the human. Such trends may lead to situations in which humans find themselves executing tasks that are counterintuitive, or that are not directly related, producing other kinds of vulnerabilities in human performance. Such incoherences are said to result in the degradation of overall human-automation system performance, and lead to certain contradictions or "ironies" in the uses of automation [84, 85]. One way to discern which tasks are to be assigned to humans or to automation is to perform a function allocation, which can be guided by the MABA-MABA list [1], named from "Men-Are-Better-At, Machines-Are-Better-At."

Other researchers have preferred to decompose the human-automation system into sequential stages, i.e. information acquisition, analysis, action decision, and implementation, each of which can be automated to a certain degree [86]. Researchers following this perspective observe comparable degrees of automation across these various stages [8].
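
As a minimal sketch of this staged view, the following Python fragment represents a design as a degree of automation per stage rather than as a single overall level. The 1-10 scale and the example values are illustrative assumptions, not figures taken from [86] or [8].

# Hedged sketch: a human-automation design described by a degree of
# automation per processing stage. The stage names follow the text;
# the 1-10 scale and the example profile are illustrative assumptions.

from dataclasses import dataclass

STAGES = ("information_acquisition", "information_analysis",
          "action_decision", "action_implementation")

@dataclass
class AutomationProfile:
    levels: dict  # stage -> degree of automation, 1 (manual) to 10 (fully autonomous)

    def check(self):
        for stage in STAGES:
            level = self.levels.get(stage)
            assert level is not None and 1 <= level <= 10, f"missing or invalid level for {stage}"

# Example: strong support for sensing and analysis, while the action
# decision stays mostly with the human operator.
profile = AutomationProfile(levels={
    "information_acquisition": 8,
    "information_analysis": 7,
    "action_decision": 3,
    "action_implementation": 6,
})
profile.check()
print(profile.levels)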

User-Centered Automation

During the 1990s, user-centered design gained wide popularity as the most appropriate way to integrate humans and machines in various domains [87, 88, 89]. The main issue has been defining what user-centered design is. The research community has debated its meaning according to the respective fields of application. What researchers have agreed upon are some of the characteristics found in user-centered designs, along with objections to each.

Other authors have approached the definition of user-centered design by specifying what it is not [57]. For these authors, user-centered design does not mean: (1) asking users what they want and providing it, (2) presenting only the supposedly needed information to users, (3) providing a system that makes decisions for the user, nor (4) doing everything for the user at all times. Instead, the objective of the system is to: (1) organize technology around the goals, tasks, and abilities of the user; (2) support human information processing and how operators make decisions; and (3) maintain the user in the control loop with awareness of the state of the system. This approach makes use of situation awareness as a driver in the design of user-centered automation systems, and uses human error as the dependent variable to be minimized.

Model of the Human Component: A Limit to System Design

Initially motivated by computer science, the field of cognitive psychology has made efforts to model the human mind such that its interacting information-processing functions are analogous to those of computer systems [90, 91, 92]. These models focus mainly on describing how humans make use of perceptions, how they transform these perceptions to aid decision making, and how they finally perform an action [93]. However, these models do not take into account the interaction of the human with his/her environment, thus limiting their use for the design of human-automation systems. In contrast, models used in ecological interface design (EID) take the environment into consideration by looking more into the flow of information between the environment and the human and less into the details of the internal processing sequences [94, 95]. One outcome of the EID approach is the development of interfaces that bring control elements and displays within the reach of the human operator, mimicking dynamic relationships present in the environment and certain characteristics of how humans perceive them [74]. In contrast to the perspectives of cognitive psychology and EID, cognitive engineering presents a top-down approach [67]; it draws knowledge from these two bottom-up approaches and combines it with ideas from control theory and engineering in order to enable design methods that consider the overall system goals and constraints. Instead of focusing primarily on the interaction of the human with a physical system, cognitive engineering centers its analysis on knowledge structures both in the machine and inside the human mind [93]. The main challenges in human modeling consist of achieving a description of the mental models created by humans in different situations, defining the relationship between these models and the decision aids, and coping with the flexibility inherent in the human capacity to adapt and learn.

Uses of Automation and Domains of Application

The commissioning of automated systems has historically been led by their application to industry in a business-to-business fashion, i.e. a firm that specializes in equipment, procurement, and training offers its products and services to industrial and corporate customers. With advancements in automation technology, the availability of these systems has progressively found ground in the consumer market, including assistive robots for individuals with disabilities or ailments [96, 97, 98], home automation [99, 100], robotic kits [101, 102], and entertainment and toys [103, 104]. It is evident that automation technology will continue finding applications in ever more aspects of human activity, although the nature of the interaction between the human and the system may change.

The more traditional fields of application include process control, manufacturing, aviation and air traffic control, trains, ships, spacecraft, robotic vehicles, healthcare systems, battlefield command and control, office systems, and education [2]. All of these fields employ automation to varying degrees, and are found in different areas of Figure 3. As an example, some application domains are distributed across different areas in relation to more familiar references, as illustrated in Figure 7.

Figure 7: Human-automation system examples [8]

Figure 7 shows, for example, that prespecified tasks can be fully automated by present-day robots, replacing human workers, as shown in the lower-right corner. Such robots can be found in motor-vehicle assembly lines and other production lines. Other, more complex tasks, such as surgery, may employ automation but only to a limited degree. An example of such a system in healthcare is the da Vinci robot [105, 106, 107], which is increasingly used to perform critical tasks that require precision and minimal invasion of the patient's body for a faster recovery. A more recent and debated application of automation and robotics is found in unmanned aerial vehicles (UAVs) for military applications [108, 109, 110]. Questions arise about the human's role in the operation of UAVs, whether the human should always be in the loop for the deployment of weapons, and how mechanisms of adjustable autonomy may enable such a capability in an ethical way.

Still, other applications of human-automation systems are yet to be explored. Such is the case in the interaction of crowds with computer systems to enable functionalities or capacities not feasible by other means. These systems make use of crowds as sources of information and intelligence, i.e. crowdsourcing. One widespread application of crowdsourcing is found in CAPTCHA systems, in which computers implement what is called an inverse Turing test to detect whether or not a human is interacting with the system. Others include swarming dynamics to generate recommendations in social networks and online services such as Amazon, YouTube, MySpace, and Facebook [111]. Scientists, as well, are developing crowdsourcing tools to take advantage of the computational power latent in entire populations, implementing what is being called "citizen science" [112].
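
As a minimal sketch of how a crowd can serve as a source of information, the following Python fragment aggregates redundant labels from several contributors by simple majority vote. The tasks, contributors, and labels are hypothetical, and real crowdsourcing platforms typically use more elaborate quality-control schemes than this.

# Hedged sketch: aggregating crowd-supplied labels by majority vote.
# The tasks, contributor answers, and labels are made up for illustration.

from collections import Counter

def aggregate(labels_per_task):
    """Return the most common label for each task (simple majority vote)."""
    return {task: Counter(labels).most_common(1)[0][0]
            for task, labels in labels_per_task.items()}

# Each task (e.g., an image to classify) receives redundant answers
# from several crowd contributors.
crowd_answers = {
    "image_001": ["cat", "cat", "dog", "cat"],
    "image_002": ["dog", "dog", "dog"],
}

print(aggregate(crowd_answers))  # -> {'image_001': 'cat', 'image_002': 'dog'}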