With the rise of A.I. in our daily lives, questions around what it means for the future of society command our attention. These questions range from inquiries into the nature of humanity to explorations of A.I.'s complex effects on economies, with a healthy sprinkle of dystopian robot-overlord fear lurking in the collective imagination as a counterbalance.

The more we discuss and learn together, the more we realise how little we understand about our own intelligence, particularly how our thinking and reasoning impact the world around us.

A.I. is meant to assist people as they go about the world, so it stands to reason that our A.I.s will reflect the thinking and reasoning of their users. This raises a complicated question:

How do people contribute to bias in A.I.?

Cognition Gives Rise to Bias

Bias results from cognition, specifically how people think, reason, and react based on their knowledge and experiences. As people navigate the world they interact with information differently depending upon their knowledge base and the availability of mental resources.

Sometimes people process information via automatic, subconscious processes. In this mode we focus more on psychologically salient features of information to guide our responses, rather than logically important ones. A psychologically salient feature is one that draws someone’s attention, such as a bright color or a voice that sounds like someone we know.

In psychology literature, this type of thinking is associated with System 1, a faster and more reactive set of cognitive processes.

Other times people process information by dissecting the content in relation to what they know, integrating it into their knowledge base and then reacting to it.

Psychology literature identifies this type of information processing as part of System 2. Where System 1 focuses on psychologically salient features of information, System 2 focuses on logically important ones.

At first glance, these descriptions might lead one to think that System 1 responses could be more erroneous than those of System 2. However, psychologists have shown that System 1 produces accurate interpretations of information and logical responses in many circumstances, while System 2 can produce incorrect interpretations and poor responses from time to time (Evans, 2013).

As mentioned earlier, how people process information depends upon what they know and their ability in the moment to focus on and interpret information. Someone with limited knowledge of a domain will rely upon psychologically salient features of the information they encounter in order to draw analogies to what they do know.

Someone with expert domain knowledge will also focus on psychologically salient features of information, but in a way that helps them quickly locate necessary knowledge in their long term memory.

Despite a difference in available knowledge base, in both of these scenarios people seek to reduce the amount of mental resources they expend. Therefore, even if someone has available mental resources to engage System 2, they may not do so; and those who have fewer available mental resources will primarily engage System 1.

Regardless of which system someone uses to process information, patterns of thinking and reasoning may emerge that, while efficient for preserving mental resources, lead to inaccurate conclusions. These faulty patterns constitute the cognitive biases observed when people interact with an A.I.

Design Invites Bias

A.I. systems become refined through use, processing data and interacting with users. They primarily support people in activities that might otherwise limit their ability to perform other activities requiring dedicated mental resources. A.I. can perform complex activities more quickly and accurately than people. It can also perform mundane tasks faster than people, without any decrement in task performance.

Offloading complex and mundane tasks to an A.I. frees people to perform other tasks in their environment that do not require the assistance of an intelligent system. People do not need to interact with the A.I. until it produces task-related information for them to interpret and react to.

In an ideal world, people would consume the data provided via A.I. and react objectively to it, but reality is often different. People prefer to expend as little cognitive effort as possible, which leads to errors in how we interpret information, store it in memory, and retrieve it from memory. These errors arise from particular patterns that constitute cognitive biases.

What influences which biases appear, and when? Sometimes environmental factors make certain information more meaningful than the rest. Or we might not understand enough about the information we encounter to accurately determine what is important. Another reason is personal preference: some information will matter more to us because we like who or what presented it to us. These factors can appear at the time we encounter information, when we start to encode it to memory, or when we retrieve it from memory.


Although bias is tightly tied to our thinking and reasoning, ways exist to reduce it. Debiasing techniques disrupt erroneous thought processes behind bias through the use of incentives, nudges, or training to induce cognitive dissonance. Cognitive dissonance is mental discomfort arising from discrepancies between thoughts and actions. People will reduce cognitive dissonance either by changing their cognitions, reducing the importance of dissonant cognitions, or adding cognitions to reduce dissonance around actions. Each debiasing method induces cognitive dissonance differently, and with varying levels of success.

Incentives reward people for performing a desirable behavior, with rewards ranging from monetary gain to building social capital among peers. Not all incentives are created equal, however, as what motivates one person might not motivate another. Incentives work when the reward is personally relevant and not overwhelmingly difficult to obtain. Also, the success of incentives in offsetting cognitive biases is strongest when increased effort makes the difference between biased and unbiased processing (Soll, Milkman, & Payne, 2015).

Nudges use a carefully architected set of options to facilitate behavioral change. Desired choices appear as easy-to-perform default actions among other options, giving people the sense that a particular course is being recommended to them, but that they may still choose another option (de Ridder, 2014). Nudges tend to be most successful when they help people make informed decisions, whereas those that appear paternalistic or offer overly complex information tend to fail (Sunstein, 2017).

Training requires an expert source to teach people how to respond to domain-specific information. It can be used to teach strategies for evaluating information and situations, eventually leading to improved responses. Training produces improved decision-making on its own and when used in conjunction with the other debiasing techniques (Morewedge, Yoon, Scopelliti, Symborski, Korris, & Kassam, 2015).

Debiasing Interactions with A.I.

Since most interactions with A.I.s occur via a GUI, debiasing can be accomplished by leveraging existing information and interaction design techniques. This is made all the easier by the fact that, given how biases arise, they produce predictable errors (Morewedge et al., 2015). Such predictability could help further home in on the type of debiasing strategy to implement, and how.

Incentivizing Ideal A.I. Interactions

As mentioned earlier, incentives provide external motivation for behavior change. Some incentives are monetary, which may not be appropriate for some human-A.I. interactions. Others focus on building social capital among peers. If interactions with A.I.s could be linked back to particular users, and the details of those interactions made known to their peers, then users might be more motivated to always do their best when providing feedback to an A.I. Since incentives effectively motivate cognitive effort, they can address biases arising from a lack of attention to, and processing of, information presented by an A.I.

Nudging Users of A.I.

Nudges seek to balance gentle guidance toward an ideal outcome with the autonomy and awareness of individuals. A common A.I. interaction is the presentation of recommended actions to users. For some users, simply reordering a list of actions so that the recommended action is listed first and pre-selected might be enough to signal the desired action. However, some users might require further information before accepting the recommended action, so unobtrusive explanations are an important factor in a nudge's success.
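To make the reordering idea concrete, here is a minimal sketch of such a nudge. All names here (`Action`, `nudge_order`, the example labels and confidence scores) are illustrative assumptions, not from any real A.I. product:

```python
# Hypothetical sketch: presenting A.I. recommendations as a nudge.
from dataclasses import dataclass

@dataclass
class Action:
    label: str
    confidence: float  # the model's confidence in this action
    explanation: str   # short rationale, shown unobtrusively on demand

def nudge_order(actions):
    """Place the highest-confidence action first and mark it as the
    pre-selected default; users may still choose any other option."""
    ranked = sorted(actions, key=lambda a: a.confidence, reverse=True)
    return [{"action": a, "preselected": i == 0} for i, a in enumerate(ranked)]

options = nudge_order([
    Action("Ignore alert", 0.10, "Low-risk source"),
    Action("Quarantine file", 0.72, "Matches a known malware signature"),
    Action("Escalate to analyst", 0.18, "Ambiguous behavior"),
])
# The recommended action now appears first and pre-selected, while its
# explanation remains available for users who want more detail.
```

Keeping the other options visible and selectable is what distinguishes this from a forced default: the recommended course is signaled, but autonomy is preserved.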

Training Users of A.I.

Training teaches people how to react to information, and a necessary component of training is providing feedback to the trainee. Specialized A.I. systems, such as those used in cybersecurity, can provide feedback to users based upon the actions they take. Novices in a domain especially benefit from intelligent systems, as the feedback they receive improves their knowledge base and builds their skills quickly. Experts can also benefit from feedback: because A.I.s can quickly and efficiently analyze large volumes of complex data, an A.I. may surface insights unavailable to an expert at the moment they act, empowering them to take a different and more appropriate action.
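The feedback loop described above can be sketched in a few lines. Everything here is a hypothetical illustration, assuming a model that scores possible actions; the names (`score_actions`, `feedback_for`) and the fixed scores are stand-ins, not a real system:

```python
# Hypothetical sketch of action-level training feedback.
def score_actions(context):
    """Stand-in for a trained model; returns fixed scores for the demo."""
    return {"quarantine": 0.8, "ignore": 0.1, "escalate": 0.1}

def feedback_for(context, user_action):
    """Compare the user's action with the model's ranking and return
    a short message the interface can display as training feedback."""
    scores = score_actions(context)
    best = max(scores, key=scores.get)
    if user_action == best:
        return f"'{user_action}' matches the highest-scored action."
    return (f"You chose '{user_action}', but the model scored "
            f"'{best}' higher ({scores[best]:.0%}); review its rationale.")
```

For a novice, the mismatch message doubles as instruction; for an expert, it is a prompt to reconsider in light of data the model analyzed that they could not.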

Final Thoughts

Although people occasionally demonstrate biased thinking and reasoning, the ramifications of which could negatively influence any A.I.s they interact with consistently, that does not mean we should limit engagement with A.I. Rather, those designing A.I.s should educate themselves on the biases that commonly arise in whatever domain the A.I. intends to serve, and devise ways to reduce those biases through communication, interaction design, and visual design. Admittedly, this will not be easy, but if A.I. experts work together with user experience researchers, designers, and technical writers, then a cohesive product design strategy can be implemented to address and manage the potential for bias in human-A.I. interactions.


de Ridder, D. (2014). Nudging for beginners: A shortlist of issues in urgent need of research. The European Health Psychologist, 16(1), 2–6.

Evans, J. St. B.T. (2013). Dual-Process Theories of Deductive Reasoning: Facts and Fallacies. In The Oxford Handbook of Thinking and Reasoning (pp. 115–133). New York, NY: Oxford University Press.

Morewedge, C. K., Yoon, H., Scopelliti, I., Symborski, C. W., Korris, J. H., & Kassam, K. S. (2015). Debiasing decisions: Improved decision making with a single training intervention. Policy Insights from the Behavioral and Brain Sciences, 2(1), 129–140.

Soll, J. B., Milkman, K. L., & Payne, J. W. (2015). A user’s guide to debiasing. In G. Keren & G. Wu (Eds.), The Wiley Blackwell handbook of judgment and decision making (pp. 924–950). Chichester, UK: Wiley-Blackwell.

Sunstein, C. R. (2017). Nudges that fail. Behavioural Public Policy,1(1), 4–25. doi:https://doi.org/10.1017/bpp.2016.3

