Part 2

Painting by me :)!

Our lives are increasingly intertwined with technology, and they* say, “With great power comes great responsibility.” So, in my last article, I proposed use-related risk analysis as a way of ensuring that we create a sustainable and trusting relationship with technology rather than going ‘Oops!’ later. As promised, here is the follow-up article on the details of conducting risk analysis to identify and address use-related risks for products and their user interfaces.

A Risk Manager works with a multi-disciplinary team to ensure that the risk analysis takes place with the right people and with proper follow-up. The team could consist of a user researcher and/or usability engineer, a domain specialist, a design lead, a systems engineer, a marketing specialist, a data officer, a privacy officer, users, etc.

If the team gets too big, you can even split it up by theme, e.g., safety, privacy, data security, etc.


The groundwork: User Research, Use Scenarios, and Task Analysis

As always, start with user research. Only by understanding your users, the context of use, and the use environments can you discover use-related risks. Create workflows, varied use scenarios in different contexts of use (including worst-case, sub-optimal, and non-happy situations :)), and a step-by-step task analysis that you can use during the risk analysis session.

Ask users for real-life examples of their interactions with technology that felt ‘unsafe’ or ‘wrong’ or ‘uncomfortable’.

The Rearview Mirror, m’dear: Desk Risk Analysis

Before the risk analysis session, you can already start to uncover potential risks by looking at events that may have happened with previous releases, competitors’ products, or comparable products/websites. Especially for privacy-related matters, there has been plenty of news recently to learn from, e.g., Facebook’s child messenger app or Google’s smart city.

Look at articles, news, incidents, court cases, or even read or watch science fiction stories set in the future. For example, in the movie Fahrenheit 451, based on Ray Bradbury’s novel, technology and use interactions are used to control the masses: people may write on social media only through an emoticon-like graphical language, hard-copy books are burned and only specific electronic books are allowed, and historical archives are rewritten to misinform people and skew opinions. I hear that Black Mirror is a futuristic Netflix series that could provide fodder for such sessions too (I am yet to gather the courage to watch it :D).

Plan the risk analysis session(s)

Once you and the team have done some background research and have been individually terrified/entertained/inspired/excited etc., you can then share that feeling with a bigger group. So come together for multi-disciplinary risk analysis sessions. In general, I prefer breaking down the session(s) into the following steps:

Session 1. Identify and list all the potential risks that come to mind

  • In this session, you can first go through the overall risks that team members might already have on their minds and note those down. For example, if your website automatically tags photos and you suggest tags for people from outside the users’ own network, you run the risk of creating privacy breaches or, in the worst case, even safety issues.
  • Then, for each scenario, you can go through the step-by-step task analysis and list the other risks that come to mind.
  • Don’t worry about solutions at this point; just list the issues that come to mind. If solutions do come to mind, note them down as comments and move on.
  • It is very important to think about the context: the same interaction may have different consequences in different contexts. For example, a loud alarm for an incoming message while driving at 30 kmph in good weather would affect you differently than while driving at 120 kmph and changing lanes on a busy highway.
  • Sometimes the risk may not originate from the user interface interaction, but it may still need to be communicated to the user; this too should be reflected in your risk analysis. For example, if you are designing an autonomous car, what happens if the car’s algorithm ‘catches’ a bug that leads to erratic driving? While the error here is not necessarily related to the user interface, the design would need to consider whether the car should communicate the error to the user, and perhaps to other ‘users’ or cars in the neighborhood.
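The listing step above can be sketched as a simple risk register. This is only an illustration of the idea, not a standard format; the field names and the example entry are my own:

```python
from dataclasses import dataclass, field

# A minimal risk register entry, reflecting the points above: each risk is
# tied to a scenario, a task step, and the context of use, and any solution
# ideas that surface early are parked as comments rather than acted on.

@dataclass
class Risk:
    scenario: str            # the use scenario in which the risk appears
    task_step: str           # step from the task analysis where it surfaces
    description: str         # what could go wrong
    context: str             # same interaction, different consequences
    theme: str = "safety"    # e.g. safety, privacy, data security
    solution_notes: list = field(default_factory=list)  # parked ideas only

register = []
register.append(Risk(
    scenario="driving at 120 kmph, changing lanes on a busy highway",
    task_step="message arrives, alarm sounds",
    description="loud alarm startles the driver mid lane change",
    context="busy highway, high speed",
    theme="safety",
    solution_notes=["consider softening alerts above a speed threshold"],
))

# Group by theme so a large team can split up into themed sessions
by_theme = {}
for risk in register:
    by_theme.setdefault(risk.theme, []).append(risk)

print(sorted(by_theme))  # ['safety']
```

Keeping `solution_notes` as a separate field mirrors the advice above: solutions are captured so they are not lost, but they stay out of the way until the later sessions.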

Session 2. Evaluate each risk and prioritize based on the consequences

Session 3. With a multi-disciplinary team, discuss user interface or system-level solutions

My reviewer friend(s) refuse to read articles that are ‘too’ long, so for the details of Session 2 and Session 3, look out for the final article in this trilogy :)!

In conclusion,

  • Do user research and ask users about technology interactions that felt ‘unsafe’ or ‘wrong’ or ‘uncomfortable’. Also look at incidents with previous products, competitors, or similar interfaces.
  • The Risk Manager plans sessions with a multi-disciplinary team to identify potential risks by going through the use scenarios and task analysis.
  • Pay attention to context, and to risks that don’t originate from the user interface but where solutions still need to communicate or interact with the user.

I am curious: are you already using any such approach at your work?

(*I heard this in Spider-Man but apparently it could be much older; look here :))


