The other day I was logged into my Gmail account when I saw a small message overlaid on one of my emails asking if I wanted to follow up on it. It had been five days since I sent the email. A Google bot had crawled it, read it, understood I was asking the recipient a question, and logged that I hadn't gotten a response yet, so it prompted me to follow up. Was the interaction helpful? Sure. Did it give me the chilling feeling that my privacy had been thoroughly invaded? Definitely.

I think many of us in 2018 understand, on an intellectual level, that the data we host online is not private. We are reminded of this by updated privacy disclaimers, breaking news stories, court cases, and lots and lots of social media ads clearly geared towards our individual interests. Yet I believe there is still an element of cognitive dissonance among many users: a sense that the information stored in our password-protected accounts retains some semblance of privacy, which allows us to interact with these applications in a blissful and somewhat uninhibited manner. Interactions that challenge that perception, even when helpful, can be unsettling. Yes, it was nice of Google to remind me to follow up with my colleague, but it was equally unnerving to be so blatantly confronted with the fact that Google was reading, understanding, and tracking the content of my emails.

I am writing this article not to engage in a debate over the ethics of collecting users' personal data. Rather, given our current reality of interacting daily with these applications, I am concerned with creating an environment that allows users to engage comfortably, with minimal voyeuristic reminders of how public our data really is. For those users who either don't realize how much data is recorded and mined from our interactions or simply prefer not to think about it on a regular basis, interactions like the one previously mentioned break that fourth wall and can change the experience of using an application from enjoyable to unnerving.

So where does the balance lie between useful, targeted interactions based on collected personal data and respecting, at least, the semblance of user privacy? Data collection is happening whether we like it or not, and the only real way to stay off the grid is to disconnect completely from all data-sharing applications (and probably from the internet entirely). For many of these applications, data and analytics farming is their main source of income. And many of the interactions that personal profiles enable are incredibly helpful. However, though on some level we understand that Facebook knows just about everything there is to know about us, we don't need that fact slapped in our faces on a regular basis.

I have been exploring a couple of possible solutions to this issue.

Asking permission:

If you own a smartphone, you may have noticed that after you install a new application, the OS asks whether the app is allowed to send you notifications. Depending on the application and how you use it, you may decide to allow or deny those notifications, but either way, an interesting emotional exchange happens during that interaction: a respectful nod to your personal boundaries on behalf of the OS and the application you are using. Asking permission can go a long way toward establishing a trusting, symbiotic relationship between users and the applications they engage with. If Google had asked whether I'd like to receive helpful notifications about my personal emails, I may not have been so taken aback by the sudden reminder that my email data is not mine alone. Consider it similar to a doctor obtaining consent before performing a medical procedure on a patient. The procedure is beneficial, and the patient will in all likelihood grant consent because they'd like its benefits, yet acknowledging the patient's autonomy creates an environment of respect and mutual understanding between doctor and patient.
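As a rough illustration, here is what the "ask first" pattern might look like in a web app, using the browser's standard Notification API. This is a minimal sketch, not Google's actual implementation; the feature toggle `enableFollowUpNudges` is hypothetical and exists only to show the flow.

```typescript
// Hypothetical app-level toggle; in a real app this would flip a
// stored user preference and start surfacing follow-up reminders.
function enableFollowUpNudges(): void {
  console.log("Follow-up nudges enabled");
}

async function offerFollowUpNudges(): Promise<void> {
  // Respect a decision the user has already made; only ask once.
  if (Notification.permission !== "default") {
    return;
  }

  // The browser surfaces its own permission prompt here, and the
  // user's answer ("granted" / "denied") is persisted by the
  // browser/OS, not by the app.
  const decision = await Notification.requestPermission();

  if (decision === "granted") {
    enableFollowUpNudges();
  }
}
```

The key point is that nothing happens without the user's explicit say-so, and the prompt itself is the respectful nod described above.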

The truth is that applications already do something akin to this on a broad scale when they prompt users to agree to privacy policies. Yet those documents are often long, wordy, arduous to navigate, and difficult to fully understand. Agreeing to the legalese covers these applications from a legal standpoint but does little to reassure the user or create a respectful data-sharing atmosphere.

[Image: Informing users of new features and asking for opt-ins.]

Respecting users’ desire to opt out:

One downside, for applications, of requesting permission for features is that users may decide they do not want to take advantage of those features. This is often a difficult thing for organizations to accept. They develop interactions based on user feedback, extensive testing, and a firm belief that the new interactions will eventually make the end user's life easier. Though this may frequently be true, perhaps coercing users into adopting new and initially invasive interactions comes at the hefty price of user comfort and trust in an application. I posit that, in the name of mutual respect, it might be worth it for applications to forgo coerced adoption of features that might make users uncomfortable (such as interactions in Google that remind users that bots are reading our content) and instead give users the opportunity to make educated decisions about the interactions they'd like to use (see above image). That is not to say that consent needs to be granted for every new feature an app rolls out, but it may be worthwhile for companies to test the emotional component of an interaction, as well as its overall usability, before implementing the feature across the board. If a new interaction, though useful, feels invasive, an application can let users choose their sense of privacy over the new feature as an act of respect, or at least offer an option to opt out later if they find an interaction too prying.
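To make this concrete, here is a minimal sketch of how an application might track per-feature consent so that nothing privacy-sensitive turns on without an explicit opt-in, and opting out remains available afterwards. All names here (`FeaturePreferences`, `"smart-follow-up"`) are hypothetical; a real implementation would persist these choices to the user's settings rather than keep them in memory.

```typescript
// Three states: never asked, explicitly opted in, explicitly opted out.
type Consent = "unasked" | "opted_in" | "opted_out";

class FeaturePreferences {
  private prefs = new Map<string, Consent>();

  // A feature is active only after an explicit opt-in.
  isEnabled(feature: string): boolean {
    return this.prefs.get(feature) === "opted_in";
  }

  // Only prompt users we haven't asked yet; never nag a "no".
  shouldPrompt(feature: string): boolean {
    return (this.prefs.get(feature) ?? "unasked") === "unasked";
  }

  optIn(feature: string): void {
    this.prefs.set(feature, "opted_in");
  }

  // Opting out stays available even after an earlier opt-in.
  optOut(feature: string): void {
    this.prefs.set(feature, "opted_out");
  }
}

// Usage: only surface the follow-up nudge if the user said yes.
const prefs = new FeaturePreferences();
if (prefs.shouldPrompt("smart-follow-up")) {
  // ...show an explanatory opt-in dialog instead of silently enabling it
}
```

The design choice worth noting is the third state: distinguishing "never asked" from "declined" is what lets an app introduce a feature respectfully without repeatedly pestering users who have already said no.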

As a designer, I often have to make the call, on behalf of my application's users, as to whether a feature will be forced upon them or gently suggested. It isn't always an easy call, and I've often found there can be a lot of push-back from users when they are coerced into using new features or new interfaces in general. Perhaps it's time for us as designers to start respecting that, at least for features that give our users the feeling we are invading their privacy, users should have a choice as to whether they want to engage with the semi-voyeuristic atmosphere those interactions can create. I may sound like I am advocating for blissful ignorance, and in some ways I am, or at least for users' ability to remain as ignorant as they want to be. At the end of the day, the trust and comfort between a user and the application he or she is using is paramount to creating a symbiotic environment that keeps users coming back for more. Without that trust, an application can be incredibly useful and innovative, but if it makes its users uneasy, that unease is a serious threat to its longevity and a potential cause of its ultimate demise.



Source: https://uxdesign.cc/-less-creepy-applications-f8c95c7b93f5?source=rss—-138adf9c44c—4
