The whole misunderstanding was preventable

And now we come back to Hanlon’s Razor.

This particular incident did not come about as the result of sinister machinations. That doesn’t mean it isn’t a significant problem, nor that there isn’t a hint of the sinister lurking beneath the surface.

Let’s start with the obvious: the tech industry needs a viral rumor about Orwellian censorship like it needs a hole in the head. Entirely through their own doing, the big tech brands currently enjoy a reputation approaching that of Bank of America in 2009. Zuckerberg experiments on his users. Twitter enlisted a rogue’s gallery of villainous organizations to form its “Trust and Safety” Council. Google demonetizes harmless channels. Every one of them collaborates with the Chinese government, which still hangs pictures of a certain clown-haired mass-murderer on its buildings. Little wonder the tech giants are increasingly seen as the evil corporations from 1980s cyberpunk flicks.

Combine that with the fact that there is a general apprehension about the notion that we are in a “post-truth world” (we aren’t), and you have a powder keg awaiting some scandal to ignite it.

Your typical designer or product manager can’t do much about that whole issue, unfortunately. Nor can designers prevent programmers from building sloppy algorithms that lead to misleading errors. You’re just going to have to work around these realities by minimizing the chances that a design blunder will be misconstrued as a digital atrocity and tank your employer’s stock value, or even put them on the hit list of a regulatory agency.

Here are three takeaways from this story that you ignore at your peril.

Modes are a minefield

Dan Saffer, author of Microinteractions, refers to modes as “a fork in the rules”. When a particular interaction can behave in multiple ways depending on a certain variable, it has multiple modes. One ubiquitous mode is “edit mode”, found everywhere from the iPhone’s text message screen to Medium’s articles. In edit mode, you no longer merely consume the information on the screen, but you can alter it.

Misunderstanding which mode you are in can cause all sorts of problems. If you don’t realize you are in “edit” mode, you might accidentally delete a text message thread. If you don’t realize your keyboard is in Caps Lock mode, you might send off a reply that makes you look UNHINGED.
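To make Saffer’s “fork in the rules” concrete, here is a minimal TypeScript sketch. Every name in it is hypothetical, invented for illustration; the point is that the exact same tap gesture is harmless in one mode and destructive in the other.

```typescript
// A mode is a fork in the rules: one gesture, two behaviors,
// selected by a hidden state variable. All names are hypothetical.

type Mode = "view" | "edit";

interface Message {
  id: number;
  text: string;
}

class MessageList {
  private mode: Mode = "view";

  constructor(private messages: Message[]) {}

  setMode(mode: Mode): void {
    this.mode = mode;
  }

  // The same tap forks on the mode variable.
  onTap(id: number): void {
    if (this.mode === "view") {
      console.log(`Opening message ${id} for reading`);
    } else {
      // In edit mode, the identical gesture is destructive.
      this.messages = this.messages.filter((m) => m.id !== id);
      console.log(`Deleted message ${id}`);
    }
  }
}

const list = new MessageList([{ id: 1, text: "hi" }]);
list.onTap(1);         // view mode: harmless
list.setMode("edit");
list.onTap(1);         // edit mode: the same tap now deletes
```

Nothing in the gesture itself tells the user which branch they are about to take. Only the interface’s mode indicator can do that, which is why ambiguous indicators are so dangerous.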

The case of the altered comment is a clear-cut example of mode confusion. The user thought he was in original-text mode, when in fact he was in Russian-to-English translation mode. Simply making the active mode obvious to the user could have prevented the entire fiasco.

Currently, this is how Chrome indicates that you are in translation mode:

That’s it. A single, tiny mystery meat button in the URL bar. What are the chances that the user will even notice it? Why not show text that states “Translating Russian to English”?
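As a sketch of what a plainly-worded alternative could look like (the banner and its copy are my invention, not Chrome’s actual implementation):

```typescript
// A sketch of a mode indicator that states the mode in words
// instead of hiding it behind an unlabeled icon. Hypothetical UI,
// standard DOM APIs only.

function translationBanner(from: string, to: string): HTMLElement {
  const banner = document.createElement("div");
  banner.setAttribute("role", "status"); // announced by screen readers
  banner.textContent = `Translating ${from} to ${to} `;

  const showOriginal = document.createElement("button");
  showOriginal.textContent = "Show original";
  showOriginal.addEventListener("click", () => {
    // Hand control back to the user: one click exits translation mode.
    banner.remove();
    // restoreOriginalText(); // hypothetical hook into the page
  });

  banner.appendChild(showOriginal);
  return banner;
}

document.body.prepend(translationBanner("Russian", "English"));
```

Labeling the mode in words also gives the user an obvious one-click way out, which matters for the next point.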

The user is in control

What I gather from this particular incident is that the user had no idea translation was even a possible explanation for what he saw, which suggests he never asked for the page to be translated. I don’t know enough about the underlying software to say whether he ever manually set the browser to always translate Russian to English, but if he did not, that is a big problem.

The more you take control away from the user, the less they will understand the way your system works, and the less they will trust it. In the short term, you may believe you are taking a load off the user’s shoulders, but over time they will begin to question what is happening, especially if anything goes wrong, which it will.
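In concrete terms, that means an automatic behavior like translation should be gated on a choice the user actually made. A minimal sketch, assuming a hypothetical preferences object rather than any real browser API:

```typescript
// Keep the user in control: never flip the "always translate"
// switch silently. Record an explicit choice and honor it.
// TranslatePrefs and the dialog copy are hypothetical.

interface TranslatePrefs {
  alwaysTranslate: Record<string, boolean>; // keyed by source language
}

const prefs: TranslatePrefs = { alwaysTranslate: {} };

function maybeTranslate(sourceLang: string, targetLang: string): void {
  if (prefs.alwaysTranslate[sourceLang]) {
    console.log(`Auto-translating ${sourceLang} to ${targetLang} (user opted in)`);
    return;
  }
  // Ask instead of assuming; the answer is stored only when explicit.
  const optIn = window.confirm(
    `This page is in ${sourceLang}. Always translate to ${targetLang}?`
  );
  prefs.alwaysTranslate[sourceLang] = optIn;
  if (optIn) {
    console.log(`Translating ${sourceLang} to ${targetLang}`);
  } else {
    console.log("Showing original text");
  }
}

maybeTranslate("Russian", "English");
```

The point is not the confirm dialog itself; it is that the system’s behavior is always traceable to something the user actually did.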

The product is not your personal soapbox

While political motivations played no role in this incident, users’ distrust of YouTube and Google has partly political origins. And it isn’t just about corporate-level skullduggery. Tech products are often designed in ways that transparently betray the creators’ personal values, and even impose them on the user.

One infamous example is Apple’s autocorrect. If you have ever wanted to throw your phone at a wall because it corrected you to “ducking” for the 300th time, you know what I’m talking about. Apple does not let you train autocorrect and autocomplete to reflect your own linguistic values. Is it such an outrageous stretch to go from the soft censorship of autocorrect to the hard censorship of public comment Bowdlerization?

Even though the user was mistaken in this case, Google got themselves into this mess. Lucky for them, the rumor was quickly squelched and that was that. They might not be so lucky next time.



Source link https://uxplanet.org/youtube---your-comments-yet-ca7cf2ef22db?source=rss—-819cc2aaeee0—4
