Conceiving an aspirational chatbot from scratch without writing a single line of code

My team and I at Skookum recently completed an aspirational chatbot for an enterprise client. This is the five-step process we took to arrive at conclusive insights into best practices for implementing the chatbot. Our goal wasn’t to build a chatbot but to provide the visual design, user experience, personality, and conversation flow of the chatbot. The bot would be developed by another team.

The chatbot’s objective:

We set out to design a chatbot that would give visitors a simpler way to access information about testing locations near them. The bot’s goal was very specific: it aimed to display information about each testing location and direct visitors to where they could make an appointment.

Here’s the basic information that needed to be presented by the bot:

  • Distance from visitor’s zip code
  • Hours of operation
  • A phone number to call
  • Address & directions
  • If walk-ins are welcome or not
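
The information above maps naturally onto a single "location card" record. As an illustrative sketch only (the field names here are invented, not the client's actual schema), it might look like this:

```python
from dataclasses import dataclass

# Hypothetical data model for one testing-location card.
# Field names are illustrative; the real schema belonged to the client's system.
@dataclass
class TestingLocation:
    name: str
    address: str
    phone: str
    hours: str               # e.g. "Mon-Fri 8am-5pm"
    distance_miles: float    # computed from the visitor's zip code
    walk_ins_welcome: bool

loc = TestingLocation(
    name="Downtown Testing Center",
    address="123 Main St",
    phone="555-0100",
    hours="Mon-Fri 8am-5pm",
    distance_miles=2.4,
    walk_ins_welcome=True,
)
print(loc.walk_ins_welcome)  # True
```

Treating each location as one self-contained record is what later makes it easy to render them as individual cards in the carousel.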

Step 1: Discovery

The discovery phase involved building a solid understanding of who the core users would be and what their goals were. It focused on brainstorming ideas for how we could help a user achieve their goal as quickly and easily as possible. To provide users with an improved experience, we first needed to understand the pain points in the current state of the website.

We found that the client had three core users:

Chronic testers: Most analog, confident and familiar, have had multiple tests before and understand the process.

Worker bees: Digital focused, proactive and demanding, in a rush to get the information as quickly as possible.

Parents: Information-driven, anxious and overwhelmed, want to make sure they have all of the necessary information beforehand.

Once we knew who our users were, we aimed to create an experience specifically tailored to assisting them with their goals.

Step 2: Conversation & Personality Design

Conversation mapping

The first step to creating a conversation flow was to map out the conversation with branches for each possible user selection. We avoided free-form user entry as much as we could and provided guardrails to ensure the smallest margin for error. The bot has a specific goal, so we designed the interface more like a smart survey with free-form being used only for questions like “What’s your zip code?”
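
A branch-per-selection map like this can be sketched as a small graph of nodes, where most nodes offer fixed buttons (the guardrails) and only a few accept free-form input. The node names and prompts below are invented for illustration; they are not the client's actual flow:

```python
# Minimal sketch of a branching conversation map (names are hypothetical).
# Button nodes constrain the user to fixed choices; free-form nodes are rare.
flow = {
    "greet": {
        "prompt": "Hi! What can I help you with?",
        "buttons": {"Find a testing location": "ask_zip",
                    "Talk to a representative": "handoff"},
    },
    "ask_zip": {
        "prompt": "What's your zip code?",
        "free_form": True,           # one of the few open-ended inputs
        "next": "show_locations",
    },
    "show_locations": {"prompt": "Here are the locations nearest you."},
    "handoff": {"prompt": "Connecting you to a representative..."},
}

def next_node(current: str, user_input: str) -> str:
    """Route to the next node: follow 'next' for free-form, else match a button."""
    node = flow[current]
    if node.get("free_form"):
        return node["next"]
    return node["buttons"][user_input]

print(next_node("greet", "Find a testing location"))  # ask_zip
print(next_node("ask_zip", "28202"))                  # show_locations
```

Because every button maps to exactly one next node, the only place user input can go "off the rails" is a free-form question, which keeps the margin for error as small as possible.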

Once we had a skeleton for the conversation flow, we began adding some meat to the bones: the personality.

We had to give the bot a conversation style and a persona. This is where we had to flex our creative muscles a bit. In our initial plan, we developed a persona of a 27-year-old educated female with a whole laundry list of attributes that we would like to see emulated in our bot’s conversation style.

There was just one problem: we quickly realized how hard it was to write copy in the tone of our persona, since we weren’t them.

This was when we decided to interview and select someone who matched the persona we were creating for the bot. Once we found this person, we asked them to write the bot’s copy naturally, in their own voice, as though they were speaking with a friend. The results far surpassed our own attempts at mimicking the persona.

By finding a real-life human to be the bot’s persona, we were able to create natural conversations seamlessly.

Step 3: Building A Functional Prototype

TARS Bot Prototype Test

While researching tools for the project, we found a tool called TARS, which allowed us to create a fully functioning chatbot prototype without writing any code. It isn’t that we didn’t have the capability to develop the chatbot; we just weren’t hired to.

We started constructing different conversation flows that our user testers could provide feedback on. We built a base conversation, then duplicated it several times and altered each copy slightly to match the different variables we wanted to test in the conversation.

TARS made it incredibly simple to create a customized conversation flow that our users could actually interact and chat with. The TARS bot didn’t allow us to test elements of the visual design, but we were able to ask questions like “How did the bot’s introduction make you feel?”

We saved the visual testing for InVision.

Step 4: Visual Design

Chatbot visual design (Sketch)

Once we had a solid base for the conversation map, personality, and a functional prototype to use for testing, it was time to begin visualizing how the chatbot would look and feel.

As the lead designer on the project, I began by creating wireframes and jotting down on a notebook the different chat instances that would be needed for the visual design.

We needed instances like:

  • Free-form user entry
  • User entry with buttons
  • Display of testing locations
  • Hand-off to a representative
  • Chatbot closed
  • Chatbot location on page

Once I had everything laid out nicely, I started rendering some high fidelity mockups for the chatbot.

This project was mainly a user experience challenge, but I aimed to use visual elements to improve the experience for users.

We had a few challenges with the visual design, such as how to display a lot of information in a concise, easy-to-understand manner. For the testing locations, for example, I chose to display this long string of information as a carousel, which lets users easily view the closest location to them, then scroll to the right to see additional locations.
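
The nearest-first carousel ordering is just a sort on distance. A tiny sketch, using hypothetical locations with distances assumed to be precomputed from the visitor’s zip code:

```python
# Sketch: order location cards for the carousel, nearest first.
# Names and distances are made up; real distances would come from the zip lookup.
locations = [
    {"name": "Northside Clinic", "distance_miles": 5.1},
    {"name": "Downtown Center", "distance_miles": 1.3},
    {"name": "Eastgate Lab", "distance_miles": 3.7},
]

carousel = sorted(locations, key=lambda loc: loc["distance_miles"])
print([loc["name"] for loc in carousel])
# ['Downtown Center', 'Eastgate Lab', 'Northside Clinic']
```

The first card a user sees is always the closest location, and scrolling right walks outward through progressively farther ones.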

It was important that we ensured our users never felt stuck or confused when using the chatbot, so I added a “help” button in the top right of the chat interface to allow users an easy way to restart, speak with a representative, or visit the FAQ.

Step 5: User Testing

Once we had all of our ducks in a row and hypotheses about the chatbot and how users would prefer to use it, we created a test plan. The test plan consisted of all the aspects of the experience that we wanted to test. We leveraged UserTesting to recruit testers, then sent them off on guided experiences where they were asked to voice their opinions about the different components of the chatbot we wanted to test.

Our test plan consisted of the following goals:

  1. What kind of tone/conversation type do users prefer?
  2. How do we make the chatbot the most discoverable to users?
  3. Do users like giving their name?
  4. Does the image or name of the bot adjust people’s opinions of the interactions?
  5. What is the best way to hand off customers to a rep?
  6. Is a chatbot quicker than the new scheduling UI? Does it feel quicker?
  7. Is the locator flow quicker to find and use if the chatbot is located on the homepage?
  8. What page should the chatbot be on?

We ran a test that focused on each of these different testing points. I was responsible for reviewing all of the user testing videos. It was an exhaustive process but the results were extremely helpful and insightful. Some test results were as expected, while others yielded some surprising and unexpected results.

For example, we assumed that users would generally enjoy the bot’s personality that we had meticulously constructed. It was friendly, professional, and, most of all, informative. However, when we tested the friendly bot against a cut-and-dried, to-the-point bot with no fluff in the conversation, users preferred the terse bot to the friendly one. When we reviewed the results, we learned that users didn’t want to waste time being friendly with a bot; they just wanted the information as quickly as possible, and they knew it wasn’t a human.

This discovery didn’t mean that users didn’t want a friendly bot; it just meant that saying things like “That’s great, thank you for the information! Let me get that squared away for you.” added more reading for users. Replacing that message with “Got it, here are the results!” made users feel like there was less standing between them and their goal, and they preferred keeping the interaction brief.


