What to keep in mind when designing for AI — a black box full of algorithms
With this article, I am sharing my personal lessons from creating an AI-enabled UI. I developed four baseline principles which can be applied when designing for AI-enabled user interfaces: 1) discovery and expectation management, 2) design for forgiveness, 3) data transparency and tailoring, and 4) privacy, security and control.
Alongside applying basic UX knowledge, keep these design principles in mind and you will have a good basis for your AI-enabled user interface.
AI promotes efficiency and convenience
Since the dawn of the industrial age, we have been using technology to automate and eliminate labor-intensive tasks in order to become more efficient. Technology makes it possible for humans to have more convenient lifestyles.
AI is an emerging technology that promotes efficiency and convenience. It's revolutionizing the way people interact with machines. Companies generally treat AI as a technology that removes the human from the equation, improving convenience and productivity. A recent example is Google AI Assistant (using Google Duplex technology), which takes away the "labor-intensive task" of making phone calls and empowers the human to be more efficient.
Genuine AI is far beyond the field's current capabilities
What's more, Google Duplex is really a showcase of what is currently possible (and thus what is not yet possible) with the technology, rather than purely an exercise in human-centered design. The announcement of Google Duplex can be seen as a reminder that genuine AI is far beyond the field's current capabilities, even at a company with the best AI experts, enormous quantities of data, and huge computing power. It also stirs up discussion about the boundaries of what people actually want an AI to execute on their behalf.
So what’s AI?
Artificial Intelligence (AI) is still considered new (while it actually emerged in the 1950s). People can be anxious about AI because it's a black box full of their privacy-sensitive data that runs on algorithms with undiscovered possibilities.
The term artificial intelligence (AI) is applied when a machine mimics cognitive functions such as learning and problem-solving. AI is programmed by humans (with their biases) to complete certain tasks, and it tries to become as effective at them as possible. Moreover, AI doesn't really "understand" or "learn" like humans do: it follows the instructions it is programmed with and improves itself along the way.
What AI does well is quickly process repetitive, complex, and focused tasks. AI mimics "learning" by being fed large quantities of data, much as humans learn by being "fed" experiences. Stimulated by "big data", AI can learn how to find and discover. It creates a path of least resistance to reach its goal, building on brute-force algorithms.
AI and its weaknesses
AI also has its weaknesses, like not (yet) being able to understand nuance or context. The AI behind a photo library has trouble recognizing photos with depth or photos that are upside down. This AI can be tricked, too: Google's research team showed that if a sticker (the "psychedelic toaster sticker") is added to a scene with other objects, the AI will disregard those objects and classify the scene as "a toaster".
Moreover, the AI behind the Roomba failed to recognize puppy feces and, as a result, made the Roomba run over them and smear the floor with feces (read the funny story here).
Designing against a black box full of algorithms
The examples above occurred because the AI was not fed crucial "small data" and therefore was not programmed and prepared for these "sad paths": unexpected scenarios. That's why it's important for designers not only to uncover ideal scenarios (where everything goes as expected, i.e. no errors) but, more crucially, to discover and prepare for the unexpected ones. The complex challenge is that (human) designers have to design against a black box of algorithms: a black box that offers a powerful set of options, with a human left to discover those possibilities.
Types of AI
According to Chris Noessel (Designing for Agentive Technology, 2017), AI can be broken into three categories:
- Narrow artificial intelligence — intelligence that can learn and infer, but cannot generalize. Narrow intelligence is focused on one task. This is the AI that exists today.
- General intelligence — a computer intelligence which would be like that of a human brain.
- Super intelligence — intelligence that would far surpass that of humans.
To avoid confusion, in this article I use the abbreviation AI to refer to narrow artificial intelligence.
What are AI-enabled user interfaces?
For AI to be used by humans, a user interface (UI) is required. The UI is the medium where interactions between humans and machines occur.
So what are AI-enabled UIs? They are interfaces with simulated cognitive functions that facilitate the interaction between humans and machines.
Examples of AI-enabled user interfaces are Amazon Alexa, Nest Thermostat, Jarvis, IBM Watson, iRobot Roomba, Netflix, and Spotify.
These examples are mere manifestations. To be clear, AI can be implemented anywhere, such as in a search engine, behind your photo library, or in a vacuum cleaner. The UI depends on how the AI fits the task and the channel.
Within an AI-enabled UI, the AI understands users' commands and presents the user with the best possible predictions based on all the data it has available. The interaction with the AI should be simple, efficient, and easy to use so users can accomplish their goals. The challenge designers now face is how to design AI-enabled UIs that users can trust with their personal data.
Developing AI-enabled UI at Deloitte
The AI tool functions as an agent for the audit experts by applying brute-force algorithms to big data at high speed. The AI removes the manual, time-consuming, and tedious tasks embedded in auditors' current jobs, such as dossier cross-checking. Moreover, the AI is able to analyze vast numbers of documents, read text, find trends, and generate predictions. It monitors and remembers auditors' behaviors and choices, and learns from them for next time. The tool converts big data into insightful information for the auditor.
Nonetheless, being an auditor requires deep professional knowledge and experience in the audit process. The AI tool still needs the auditor to understand the context of the data. Therefore the AI, being an agent, leaves the final decision to the auditor.
Design principles for AI-enabled UI
When designing the interface for the audit AI-tool, I was challenged to apply AI principles to my user experience (UX) design process. I developed four baseline principles which can be applied when designing for AI-enabled user interfaces. These principles are based on exploratory and benchmark research I did.
There are four categories of principles to implement:
- discovery and expectation management,
- design for forgiveness,
- data transparency and tailoring, and
- privacy, security and control.
1. Discovery and expectation management — Set user expectations well in order to avoid false expectations
1. Users should be aware of what the tool can and cannot do — People are still unfamiliar with AI, so the design needs to provide more guidance. Manage expectations: let users know the possibilities of the tool, how the AI learns, and what they need to do to accomplish their goals. Integrate this into the onboarding process.
For example, the only expectation I have of a pet feeder is to have it feed my cat multiple times per day without my physical presence. However, Petnet’s smart feeder offers me more than that. The AI uses my cat’s weight, age, breed, and activity level to match the cat with portion sizes.
2. Users should expect most benefit from minimal input — Design the product so the user can expect a valuable product for "natural input". Natural input can be defined as input that feels natural to the user, and thus feels like (nearly) zero input. The product should be easy to use and let users accomplish their goals simply and efficiently.
In the previous cat example, the pet feeder only has to be set up once. Additionally, it sends an alert when it's running low on food. The only task I have is to fill it with kibble when it prompts me to do so.
3. Prepare for undiscovered and unexpected usage — The use of AI in people's daily lives is still new. Users will discover possibilities of using the technology in ways it was not designed for. That's why designing for discovery is crucial: really invest time in discovering the (unexpected) possibilities of the AI tool's usage.
For example, I once lost a small earring. I have carpeting and could not locate it. I eventually emptied the Roomba's bin and let it roam the room. A few moments later, I found my earring in the bin.
4. Educate the user about the unexpected — The AI will make mistakes. Human designers are prone to error and are not all-knowing. It's a safe bet that users will encounter scenarios that designers have not (yet) discovered and included in the algorithms. Educate users and be honest about the possibility that they might encounter sad-path scenarios.
For example, if you ask Siri the following: "Show me the nearest fast food restaurant which is not McDonald's", Siri will sadly show you *drumroll* the location of the nearest McDonald's. This is because Siri is programmed to recognize words such as "nearest", "fast food restaurant", and "McDonald's"; Apple did not account for the possibility that users might ask Siri not to show them something.
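This failure mode is easy to reproduce. A matcher that only scores keyword overlap has no concept of negation, so the excluded place can still win. The sketch below is purely illustrative (the function names and the crude "everything after the negation word" rule are my own assumptions, not Apple's implementation):

```python
def naive_match(query, places):
    """Score places by keyword overlap, ignoring negation entirely."""
    words = set(query.lower().split())
    return max(places, key=lambda p: len(words & set(p.lower().split())))

def negation_aware_match(query, places):
    """Exclude any place whose name appears after a negation word."""
    words = query.lower().split()
    negated = set()
    for i, w in enumerate(words):
        if w in ("not", "except", "without"):
            negated.update(words[i + 1:])  # crude: flag everything after the negation
    candidates = [p for p in places if not (set(p.lower().split()) & negated)]
    return naive_match(query, candidates) if candidates else None

places = ["McDonald's", "Burger King"]
query = "nearest fast food restaurant which is not McDonald's"
naive_match(query, places)           # "McDonald's": the one place the user excluded
negation_aware_match(query, places)  # "Burger King"
```

Real assistants need far more sophisticated language understanding, but the point stands: the sad path exists because negation was never part of the matching logic.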
2. Design for forgiveness — The AI will make mistakes. Design the UI so users are inclined to forgive it.
1. Design the tool in a way that users will forgive it when it makes mistakes — One way to design for forgiveness is a UI that simulates creatures or objects humans are already naturally inclined to forgive, like children or animals. Examples are care robot "Alice", which resembles a grandchild, and therapeutic robot Paro, a stuffed animal seal. Deloitte created "AIME", which is deliberately designed not to resemble a living creature.
Apple did not design Siri for users to forgive her. Siri is designed as an adult assistant. She sounds like an adult and she speaks like an adult. Therefore, users are less inclined to forgive this adult assistant when she cannot “understand” simple commands.
2. Design delightful features to increase the likelihood of forgiveness — Part of designing for forgiveness is to offer users delightful features that make them forgive the AI when it makes mistakes. An example is designing humor into the UI, as with Siri and Alexa.
Cocorobo is a vacuum cleaner whose delightful feature is that it can talk to people. In one case, this helped a Japanese man feel less isolated, since the robot greeted him every time he came home (to an empty house). I bet he would forgive his little friend if it smeared his house with puppy feces.
3. Design ability to use AI without internet connectivity — Avoid designing a UI that relies solely on internet connectivity. Users should gain value from the AI regardless of whether it's connected to the internet.
The pet feeder, Google Assistant, Roomba, Spotify, and Netflix are all examples of AI-enabled UIs that can still deliver value without internet connectivity. Voice assistants Siri and Alexa do not function without it.
3. Data transparency and tailoring — Be transparent about collecting data and offer users the ability to tailor it.
1. The AI should be transparent in what data it has of the user — The AI possesses data about the user. Given concerns about privacy and data leaks, be transparent with users and offer them the ability to monitor the AI's data and activity.
It's not necessary to show the algorithms to the user, but it is important to state which data is used in order for the AI to act as an agent for the user.
For example, Amazon Alexa knows when you go to sleep or when you’re out of town. If she gets hacked and this information gets into the wrong hands (like burglars), it’s something to be anxious about. That’s why it’s so important to address the privacy issue here and be transparent about what data the AI has.
2. Users should be able to provide input so the AI can learn — Offer users the ability to tailor the collected data, since the AI cannot apply reason and logic within context. Machines need humans to provide context via feedback. Design ways for users to provide this input.
In the Google Translate Community, Google lets users manually translate and confirm words and sentences. This helps the machine learn and understand context when translating sentences from one language to another. Thus the AI is continuously "learning", since it is fed large quantities of tailored data at high speed. This way it keeps getting better at translating.
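The underlying pattern is a human-in-the-loop correction store: user-confirmed translations take priority over the machine's guess, and each confirmation becomes labeled data for the next prediction. A minimal sketch, with class and method names that are my own invention (not Google's API):

```python
class FeedbackTranslator:
    """Machine translation baseline with a human-in-the-loop correction store."""

    def __init__(self, machine_dictionary):
        self.machine = dict(machine_dictionary)  # baseline model output
        self.corrections = {}                    # user-confirmed translations

    def translate(self, phrase):
        # Confirmed human feedback always wins over the machine's guess;
        # unknown phrases fall through unchanged.
        return self.corrections.get(phrase, self.machine.get(phrase, phrase))

    def confirm(self, phrase, translation):
        """A community user supplies or confirms the correct translation."""
        self.corrections[phrase] = translation

t = FeedbackTranslator({"gezellig": "cozy"})
t.translate("gezellig")             # machine guess: "cozy"
t.confirm("gezellig", "convivial")  # a user tailors the output
t.translate("gezellig")             # now "convivial"
```

A production system would feed corrections back into model retraining rather than a lookup table, but the design choice is the same: the UI gives users a channel to supply context the machine lacks.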
In the example of the Roomba and the puppy, the Roomba could ask what the problem was and use this input to learn for next time. This way the interaction would benefit both the user and the robot.
3. Users should be able to adjust what AI has learned — The AI configures itself through machine learning and by monitoring the user's behavior. However, the AI will make mistakes and will output predictions the user does not desire. Therefore, besides designing for discovery and forgiveness, offer the user ways to tailor predictions, e.g. by adjusting what the AI has learned. This is what makes AI unique: it can completely forget what it has learned, whereas for humans an experience stays, and biases and assumptions persist until proven otherwise.
E.g. you own a Netflix account and your friend does not. Your friend watches movies on your account that you don't like, so the algorithm's output changes due to your friend's behavior. In this case, you would want to adjust the predictions by deleting your friend's preferences/history.
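"Adjusting what the AI has learned" can be as simple as letting users exclude certain events from the data the predictions are computed from. A hypothetical sketch (not Netflix's actual system) where watch events are tagged by profile and recommendations are just genre counts:

```python
from collections import Counter

def recommend(history, exclude_profiles=()):
    """Rank genres by watch count, optionally ignoring some profiles' events."""
    counts = Counter(
        genre for profile, genre in history if profile not in exclude_profiles
    )
    return [genre for genre, _ in counts.most_common()]

history = [
    ("me", "sci-fi"), ("me", "sci-fi"), ("me", "documentary"),
    ("friend", "horror"), ("friend", "horror"), ("friend", "horror"),
]

recommend(history)                               # horror first, skewed by the friend
recommend(history, exclude_profiles={"friend"})  # sci-fi first again
```

The UI-level principle is the filter parameter: the user decides which behavior the AI is allowed to learn from, and the "learned" skew disappears the moment the offending data is excluded.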
4. Privacy, security and control — Gain trust by driving privacy, security and the ability to control the AI.
1. Design top notch security for users to trust AI with personal data — Current technologies let users secure and lock their personal data by means of face and voice recognition, fingerprints, and two-factor authentication (e.g. a combination of passwords and passcodes sent through calls and text messages). Design this into the UI not only to make users feel the AI is secure, but to actually secure their data and protect their privacy. A mere passcode won't be enough. When AI-enabled UIs drive privacy and security, users are more likely to trust them.
What I noticed when jumping on the cryptocurrency train was the importance of privacy and security in the UIs that handle my money. I did not even mind the two-factor authentication setups, because I knew it was for my own good.
Knowing that the AI will hold crucial, personal, privacy-sensitive data of mine, as a user I expect the UI to do its very best to protect that data.
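A stronger alternative to passcodes sent over calls and texts is the time-based one-time password (TOTP) that authenticator apps generate. The sketch below follows the standard algorithm from RFC 6238, assuming a secret shared between the user's device and the server:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, timestep=30, digits=6, now=None):
    """Time-based one-time password (RFC 6238, HMAC-SHA1 variant)."""
    # Both sides derive the same counter from the current 30-second window.
    counter = int((time.time() if now is None else now) // timestep)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    # Dynamic truncation: pick 4 bytes at an offset taken from the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Device and server share the secret; a matching 6-digit code proves
# possession of the device without ever sending the secret over the network.
totp(b"12345678901234567890", now=59)  # "287082" (RFC test vector at T=59)
```

Because the code changes every 30 seconds and never transmits the secret itself, it is harder to intercept than an SMS passcode, which is why it pairs well with the "top notch security" this principle calls for.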
2. Prove delivery on promises by offering test runs — Especially when a product is new, users want to test whether it can really deliver what it promises to do. When the user knows the product indeed delivers what it promised to do, the user will trust the product more.
I did this when I configured my Roomba to vacuum my house automatically every day. The first time I set this up, at 14:46, I scheduled it to run automatically at 14:47, because I wanted to check whether it would indeed clean at the scheduled time. After I saw that it did, I now trust the Roomba to do this for me every day at 12:00 while I am at work.
3. Design ability for users to intervene and take over control — Design the tool in such a way that the user can take control of it by means of input, anytime they want.
I can press two buttons on my pet feeder to dispense kibble any time I want. Moreover, I can press the "clean" button to make the Roomba start or stop cleaning (at an unscheduled time), or physically move the Roomba to a spot I want it to clean.
4. AI should learn from user's intervention — When a user intervenes and takes control of the AI, let the AI learn from this behavior. The AI should remember the intervention in order to give better output next time.
For example, the Nest thermostat learns what you like when you turn it up or down at different times. It remembers your interventions and creates a personalized schedule for you.
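The core of "learning from interventions" can be sketched very simply: record each manual override against its time slot, and predict from those overrides when they exist, falling back to a default otherwise. This is my own toy model, not Nest's algorithm:

```python
from collections import defaultdict

class LearningThermostat:
    """Learns a temperature schedule from the user's manual overrides."""

    def __init__(self, default_temp=20):
        self.default_temp = default_temp
        self.overrides = defaultdict(list)  # hour of day -> set temperatures

    def user_sets(self, hour, temp):
        """User intervenes; remember the intervention for that hour."""
        self.overrides[hour].append(temp)

    def target(self, hour):
        """Predicted target: average of past overrides, else the default."""
        temps = self.overrides[hour]
        return sum(temps) / len(temps) if temps else self.default_temp

t = LearningThermostat()
t.user_sets(7, 22)   # user turns it up two mornings in a row
t.user_sets(7, 24)
t.target(7)          # 23.0: learned morning preference
t.target(13)         # 20: no interventions yet, default applies
```

The design point is that the intervention itself is the training signal: every time the user takes over, the schedule gets closer to what they actually want.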
5. AI should not do anything without user's consent — The AI should ask for review and permission before executing tasks with significant consequences. Yes, the AI should be proactive; however, the user is still the final decision maker and should confirm whether they want the AI to act proactively.
E.g. Digit is a platform that analyzes your spending habits. It acts as a chatbot and proactively asks your consent to transfer money so that you can save.
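This consent gate is a simple, reusable pattern: the AI proposes, the user disposes. A minimal sketch (the function and parameter names are illustrative, not Digit's API), where the consent prompt is injected as a callback so any UI can supply it:

```python
def transfer_with_consent(amount, ask_user):
    """Propose an action proactively, but execute only after explicit consent."""
    proposal = f"Transfer ${amount} to savings?"
    if ask_user(proposal):  # the user remains the final decision maker
        return f"transferred ${amount}"
    return "cancelled"

# In a real UI, ask_user would show a confirmation dialog or chatbot prompt.
transfer_with_consent(35, ask_user=lambda msg: True)   # 'transferred $35'
transfer_with_consent(35, ask_user=lambda msg: False)  # 'cancelled'
```

Keeping the consequential action behind the callback makes it structurally impossible for the AI to act without the user's confirmation.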
6. AI should notify users of system errors — Notify the user when there are hurdles. As in any interface, the machine should be clear about what it needs from the user. The AI can also offer the user solutions to fix the error.
When my Roomba gets stuck, it makes a beeping noise to let me know there’s something wrong. If it did not beep, I would not know it stopped working.
When designing for AI, always have the user in mind. Make sure the product is easy to use, useful, efficient and will be trusted by users.
To sum up, the design principles for AI-enabled UI are:
1. Discovery and expectation management
Set user expectations well in order to avoid false expectations.
* Users should be aware of what the tool can and cannot do
* Users should expect most benefit from minimal input
* Prepare for undiscovered and unexpected usage
* Educate the user about the unexpected
2. Design for forgiveness
The AI will make mistakes. Design the UI so users are inclined to forgive it.
* Design the tool in a way that users will forgive it when it makes mistakes
* Design delightful features to increase the likelihood of forgiveness
* Design ability to use AI without internet connectivity
3. Data transparency and tailoring
Be transparent about collecting data and offer users the ability to tailor it.
* The AI should be transparent in what data it has of the user
* Users should be able to provide input so the AI can learn
* Users should be able to adjust what AI has learned
4. Privacy, security and control
Gain trust by driving privacy, security and the ability to control the AI.
* Design top notch security for users to trust AI with personal data
* Prove delivery on promises by offering test runs
* Design ability for users to intervene and take over control
* AI should learn from user's intervention
* AI should not do anything without user’s consent
* AI should notify users of system errors
Alongside applying basic UX knowledge, keep these design principles in mind and you will have a good basis for your AI-enabled user interface.