For consumers, the continuing growth of voice UX is undoubtedly a welcome development – as evidenced by the proliferation of platforms and devices integrating the tech, and the increasing number of people getting acclimated to using voice. But for web and application designers, voice interaction represents, perhaps, the biggest UX challenge since the dawn of the touchscreen age.
By now, everyone is familiar with all the quirky names of voice assistants—Siri, Cortana, Bixby, and Alexa—a testament to the new level voice UI has reached in the last few years.
The technology only looks to soar higher, with its capabilities penetrating more and more digital ecosystems.
What is Voice UI?
A voice user interface (VUI), essentially, gives users the ability to speak to a device and gives the device the capability, in turn, to understand and act upon their commands.
From asking Siri to set an alarm for 5 a.m. the next morning to asking Alexa to play a favorite show on Amazon Prime, using voice commands instead of typing has become increasingly natural for users.
The Rise Of Voice
The rise of voice can be credited mainly to the evolution of AI and cloud computing capabilities. With machine learning and the leaps seen in natural language processing, technology can now interpret human speech more accurately and in real time, while also taking note of (and learning from) individual users’ speech tendencies.
According to Shawn DuBravac of the Consumer Technology Association, the world has seen more progress in VUI in the past 30 months than in the previous 30 years, adding that, ultimately, vocal computing is replacing the traditional graphical user interface.
Despite the significant leaps VUI has taken over the past few years, the reality is that the tech is still in its nascent stages. As such, while it’s well on its way to mainstream use, it’s still uncertain whether this voice revolution will lead to a total paradigm shift in the way people interact with their devices.
What seems apparent for the meantime, though, is a shift toward multi-modal interfaces. Compared to voice-only interfaces (as with the Amazon Echo and Google Home), a multi-modal interface has the added benefit of presenting information on a screen – the next step in UX design toward more frictionless interaction.
For web designers, this means realizing three key aspects.
The importance of words
Words matter again.
In recent years, web design has been increasingly dependent on visuals to catch users’ attention. But with the inevitable integration of voice in web design, designers will again need to pay far closer attention to detail when it comes to language.
Among other things, this includes rethinking site navigation and information architecture based on voice UX.
For example, a person on an e-commerce site who wants to reorder the last thing they purchased can’t be expected to go through a process that mirrors clicking through tabs. If the person were to say, “Reorder last item purchased,” designers should ensure the site can map that phrase to exactly that action.
Ideally, there would be some sort of standardized command phrases and keywords, but expecting users to try and remember what could be a litany of commands defeats the entire purpose of VUI.
Additionally, the site must be able to understand words that have different meanings depending on the context in which they are used – much as people do in real life.
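To make the idea concrete, here is a minimal sketch of how varied phrasings might resolve to a single action. The intent names and regex patterns below are hypothetical illustrations, not a real VUI framework – production systems would use natural language understanding rather than hand-written patterns.

```python
import re

# Hypothetical mapping from phrasing patterns to one "reorder" intent.
# The point is that many different utterances should resolve to the
# same action, so users never need to memorize exact command phrases.
INTENT_PATTERNS = {
    "reorder_last": [
        r"reorder (my )?last (item|purchase)",
        r"buy (that|the last thing) again",
        r"order the same (thing|item) as last time",
    ],
}

def match_intent(utterance: str):
    """Return the first intent whose pattern matches the utterance, else None."""
    text = utterance.lower().strip()
    for intent, patterns in INTENT_PATTERNS.items():
        if any(re.search(pattern, text) for pattern in patterns):
            return intent
    return None

print(match_intent("Reorder last item purchased"))  # prints: reorder_last
print(match_intent("show me shoes"))                # prints: None
```

The design choice here is deliberate: the user speaks naturally, and the interface absorbs the variation, rather than forcing users to learn a litany of fixed commands.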
Understanding user intent and adaptability
Even though voice on the web operates within a multi-modal interface, interpreting commands consistently across the visual and voice layers remains a crucial aspect of web UX design.
One thing designers could do is build predictive dialogue based on a user’s previous actions – essentially anticipating user intent at different points in the conversation.
For example, if a user visits a hotel’s website and immediately asks to go to the gallery, the site can then ask whether the person wants to book the room shown in the photo currently on-screen. Or, if a person asks to check Airbnb availability in Chicago, the site could immediately ask which dates are preferred and return a filtered search of properties in the Chicago area.
It won’t always be this simple, though, so it’s important that designers are able to prompt users with the right questions at the right time to elicit appropriate responses from users – ultimately making voice site navigation as seamless as can be.
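One common way to structure this kind of prompting is slot filling: the interface tracks which details a request still lacks and asks for them one at a time. The sketch below is a simplified illustration under that assumption; the intent name, slot names, and prompt wording are all made up for the example.

```python
# Hypothetical required details ("slots") for a stay-search intent.
REQUIRED_SLOTS = {
    "search_stays": ["city", "check_in", "check_out"],
}

# Follow-up question to ask for each missing slot.
PROMPTS = {
    "city": "Which city are you looking at?",
    "check_in": "What date would you like to check in?",
    "check_out": "And when will you check out?",
}

def next_prompt(intent: str, slots: dict):
    """Return the question for the first missing slot, or None if complete."""
    for slot in REQUIRED_SLOTS[intent]:
        if slot not in slots:
            return PROMPTS[slot]
    return None  # all details gathered; the site can run the filtered search

# User says: "Check availability in Chicago" -> city known, dates missing.
print(next_prompt("search_stays", {"city": "Chicago"}))
# prints: What date would you like to check in?
```

Asking only for what is missing, at the moment it becomes relevant, is exactly the “right question at the right time” behavior described above.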
Engagement and personalization
Despite the development of digital ecosystems that enable the use of VUI, there remains the possibility that the novelty of voice interaction will wear off for the everyday user. This presents both a challenge and an opportunity for web designers to maintain user engagement. In fact, a part of the human brain called the nucleus accumbens lights up when people crave something – the same part that is highly stimulated by unpredictability.
As such, instead of letting users mindlessly navigate a familiar site, designers can use voice UX to introduce variability that continually engages users. For example, error messages could be crafted in a way that’s not only less annoying but also gets users back on track while presenting additional options.
Additionally, once enough data is collected from the user, designers could offer personalized voice UX based on, for instance, a particular user’s way of speaking and preferences.
And the personalization can go both ways, with branding being a prospectively powerful tool designers can integrate into the entire experience.
Yes, voice-activated systems have come a long way since Siri first came into the fold (with some figures suggesting voice assistants are now more than 90 percent accurate). But correctly understanding speech input still doesn’t always translate to accurate results.
As noted by Social Media Today, seemingly mundane queries like “I need a doctor” can either result in a list of nearby doctors or a Wikipedia entry on doctors. And expectedly, more complex queries, at times, fall further off the cliff.
Of course, the multi-modal interface entailed by voice UX on the web has the benefit of taking away some of the confusion. But that doesn’t mean users won’t be turned off when voice UX adds more friction than existed before it came along.
At this developmental stage, popular cloud voice services support only a limited number of languages. Should the voice revolution in UX continue, designers will have to address this if they have ambitions of providing a seamless voice UX worldwide.
At least for now, VUI appears to be a transition phase towards a more frictionless interaction – something that all new technology is designed to do.
But the reality is that, despite being projected to enjoy mainstream use in the coming years, it’s still uncertain how far the boundaries of VUI will be pushed – both by developers and by users. Will its novelty wear off? Can the technology be improved enough before it does?
Though questions still surround this promising tech, that doesn’t mean designers can relax and wait for the phase to pass, as risking unpleasant interactions is something brands can rarely afford.
Not everyone needs to get in on the action right now, but designers should pay close attention to the developments surrounding voice UX. UX has always aimed to make interactions as close to the real world as possible, and voice has the potential to make that a reality.