The Design & Content conference is just over a month away! Designers and content strategists team up to hear from industry leaders about crafting experiences and telling stories that shape the future of the web. Expect a day of workshops, two days of talks, and thoughtful extras in one of the most beautiful cities in the world: Vancouver, BC, Canada.
Save 10% on your registration with discount code uxbooth.
Paige Maguire will present “Conversation Design for Disappearing Interfaces” on the final day of Design & Content.
By now, most designers have either interacted with a chatbot or been asked to create one for a client. We obsess over the details once our bot has come into being. More than anything, we emphasize to ourselves — and to our clients — how hard we work to create a human-like experience. More often than not, we begin anthropomorphizing it after we’ve named the thing and started writing its script.
Simple bots with only a couple of tasks in a closed content system will usually work just fine with this approach. And it’s easy to understand why we’d begin this way. As designers, we’re focused on human-centered experiences, and we think, “A user will have the best experience with this bot if the conversation feels human.”
Then we start designing a simple machine that can never meet these expectations.
All a bot needs to do is understand the user well enough to accomplish a task. This is the most basic experience we design for, but it's mostly an illusion: the bot doesn't understand anything at all; it's simply trained to respond to an event, usually a keyword. These experiences only display intelligence. The only difference between this experience and a notification from your calendar is that this one has been given a human name.
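To make the illusion concrete, here is a minimal sketch of that kind of "simple brain" in Python. It isn't any particular framework, and the keywords and replies are invented for illustration; the point is that nothing here understands anything, it only matches strings to canned responses.

```python
# Hypothetical keyword-triggered bot: no comprehension, just pattern matching.
RESPONSES = {
    "hours": "We're open 9am-5pm, Monday through Friday.",
    "refund": "Refunds are processed within 5 business days.",
    "agent": "Connecting you to a human support agent...",
}

FALLBACK = "Sorry, I didn't catch that. Try asking about hours or refunds."

def reply(message: str) -> str:
    """Return the first canned response whose keyword appears in the message."""
    text = message.lower()
    for keyword, response in RESPONSES.items():
        if keyword in text:
            return response
    return FALLBACK
```

Calling `reply("What are your hours?")` returns the hours response, and any unrecognized input falls through to the fallback. Give this loop a human name and an avatar and it becomes the "bot" most users meet; strip those away and it is indistinguishable from a calendar notification.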
There’s nothing wrong with this. In fact, this is the purest way to think about human-machine conversation. This essential thinking machine is the experience Grady Booch calls a “simple brain.” It need not be gendered, apologetic, empathetic, or anything else we recognize as a human trait. We needlessly overcomplicate experiences by inserting our own desire to ‘surprise and delight’ or ‘make it feel human.’ But bots don’t need to act human to be human-centered.
I don’t mean to imply we shouldn’t try to create human-like AI experiences. We must. Our ability to do so will have lasting socio-political impact beyond any current project. What I’d like to challenge is the assumption that this work begins only after we’ve chosen AI as the tool we’ll use to solve the user problem. Simple thinking machines won’t be the tools that deliver on AI’s promise; something much more profound will take up that torch. To design the profound, we must start work earlier and, in many ways, consider the machine another type of user.
We’re most capable of designing effective human-machine conversations when the machine is a simple Thinking Machine and we’re not layering personality onto a tool that can’t live up to the expectation of being human. That’s a simple approach for a simple brain: a bot that only needs to interpret and display simple data. Truly intelligent data, and the bots that represent it, still need human knowledge to be effectual, ethical, and profound.
Human knowledge, however, is not simple. It’s the dominant obstacle when it comes to even the most straightforward AI. This complicates not just the way we design conversation; it also shapes how we train our tools to learn. Human knowledge includes all the nuance, morals, empathy, and fortitude we want from our machines, but it also includes bias, irrationality, and behavior based on experiences, not facts. Designing AI for humans means much more than chirpy, agreeable copy and emoji apologies. It means designing machines that are capable of contextualizing verbal behavior, however irrational, and continuing to provide value that does no harm.
This is what I call a Feeling Machine: a tool that includes the basic thinking machine, with or without human attributes, and builds upon it, truly interpreting the nuance of human behavior in a meaningful way. We must train our tools to consider both form and context, to understand verbal and non-verbal (facial recognition, for example) exchanges. To advance from thinking to feeling, designers must understand verbal behavior, embrace the machine, and consider community.
Feeling at Fjord
At Fjord, we ask teams to think symbiotically about the machines they’re training to think and feel. Remember, we’re designing tools that should extend our experiences in profound ways, and the mode is conversation. Consider it a currency between the machine and the human, both sides shaping effectiveness and satisfaction.
Typically, a workshop that includes the entire team is the best way to begin designing for success. Start by stress-testing the choice of a bot in the first place:
- What problem does the business need to solve?
- Given AI’s capabilities and complexities, is a bot the right tool for the job?
- Will it be a closed content system or will it utilize machine learning?
- Who are its human guardians?
Once you’ve agreed on what kind of tool you’ll use, take a moment for an inclusion check. The easiest way to fight data bias and unintended consequences with AI is to curate data, test with as diverse a set of humans as possible, and get out of the studio to do it.
Next, gather the team and client together to develop the machine’s Eidos. It can be as simple as a collective persona creation. If it will be human-like, interpreting human behavior, and doing no harm, it must have its own motivations, and you design them. Document what its users want, what it wants to do to help, how it learns to help, and what it thinks and feels about the tasks as well as the humans it serves. Keep in mind that one day, your AI might be serving another AI, so these thoughts and feelings should be less emotional, and more responsive to various inputs.
Remind your team that there are unavoidable characteristics to design for: requirements we don’t get to choose. No matter how human the conversation, AI will always be captive, unable to disagree, literally tireless, and, at least at first, unable to improvise. As it learns, it can ‘grow out’ of some of these constraints, but it’s critical not to blindly assign human characteristics to our machines without acknowledging their core.
Once you’ve developed the essence of your AI, bring the teams together again to test your work and ensure that you’re ready to design for both thinking and feeling. Do an improvisation session where different team members act out scenarios, and document the different ways you’ll design accordingly. Use Post-Its to map conversations on the wall. Ask for feedback and work together to make sure you’re addressing business goals and user needs while staying true to the machine you’ve designed.
Make the most of team consensus to lay the groundwork for all the future work you’ll do. If the conversation will be designed by a content designer and a data scientist, empower them with all the scenarios you can possibly imagine.
Above all, understand that we’re doing more than designing effective conversations. There is more at stake with this work than we might realize. Designers have a unique opportunity to develop and own the process by which we teach machines to learn from us, and it is imperative that we do so with all the humanity, empathy, and inclusivity we can muster.