A few months ago, I was passionately arguing with a friend about the amateurish behavior of big web companies: they create or update design features without giving any thought to how those features might be used or what risks could be involved. For example, if users have stored personal information on your website and you are conducting research using their profiles, what risks could that create? To err is human, but not even looking at your design or processes in terms of risks to safety, privacy, or freedom of choice is unprofessional. This matters especially now that technology is an integral part of our lives and goes way beyond ‘just entertainment’ or ‘a way to say hello to a friend’ to potentially influencing political and economic landscapes.
Later, I felt a bit guilty that I may have come on too strong. However, another friend told me that my irritated rant had inspired her to start a conversation about using Risk Analysis for user interfaces at her work, and that her team is now working with this approach. Ah, a tiny bit of relief from my guilt ;)!
In general, product companies are required to analyze risks related to their products and mitigate them (e.g., under the Consumer Product Safety Act). Healthcare companies are explicitly required to assess risks related to the user interface of their devices and to mitigate them through design or engineering to help avoid injury (FDA, IEC 62366-1 & ISO 14971).
However, given that technology can now affect physical safety as well as freedom of thought, choice, and privacy, I would propose that systematic Risk Analysis of user interfaces become part of any hardware or software product development. My intent here is not to rant against technological advancement, but the opposite: to acknowledge our increasingly intimate relationship with technology and to propose Risk Analysis as part of the effort to make that relationship sustainable. Companies working on any sort of consumer product should take the initiative to look thoroughly at their products and assess any use-related risks, whether in the interface or the back-end.
We may not be able to eliminate all risks completely, but that is no excuse for skipping our due diligence: we should reduce them as much as possible and acknowledge any remaining risks to users.
As an example, taking heed of Eli Pariser’s thoughts on Filter Bubbles, risk analysis of a news website might reveal that ‘customized’ reading lists based on a user’s reading history could lead them to read only one-sided articles on sensitive issues, which could further polarize communities. One mitigation could be for the interface to always present such articles alongside articles offering differing points of view, or to always provide multiple points of view within the same article, to help create a more complete and hopefully less polarized picture.
Of course, as the example above shows, and as I have experienced, risk analysis usually makes us evaluate our core values — what sells vs. what role our product plays in society. And reflection is usually a good thing, no :)?
In case you are somewhat convinced about the merits of trying risk analysis systematically, here is a quick summary; watch out for follow-up articles with more details.
So what does Risk Analysis for the user interface look like?
- A risk owner or manager and a multi-disciplinary team are essential for covering risks from multiple points of view, e.g., user researcher and/or usability engineer, domain specialist (if applicable), design lead, system engineer, privacy officer, marketing lead, data officer, etc.
- With this team, you do a step-by-step analysis of your user interactions and functionalities under varied use scenarios and try to identify any errors or risky situations that may have adverse consequences for users’ safety, freedom, privacy, and so on. Read more on multi-disciplinary teamwork sessions here.
- Then you evaluate each of these risks in terms of severity of consequences.
- Finally, design ways to minimize, avoid, or eliminate these risks using design, engineering, processes, policy, etc. The idea is to reduce or eliminate the chance that something goes wrong and, if it does happen, to ensure the consequences are not severe for the user.
- As our intimacy with technology increases, risk analysis could be a helpful way of identifying and addressing use-related risks with technology, even beyond physical injury.
- Plan risk analysis sessions with a multi-disciplinary team to: a) List all potential risks b) Evaluate each risk and prioritize based on severity of consequences c) Discuss solutions for risks in terms of design, engineering, processes, policy…
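For readers who like a concrete artifact, the steps above (list, evaluate and prioritize, mitigate) can be sketched as a minimal risk register. This is only an illustration, not a tool or standard: the `Risk` structure, the 1-to-5 severity scale, and the example risks are all hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical severity scale: 1 (minor annoyance) to 5 (serious harm),
# agreed on by the multi-disciplinary team.
@dataclass
class Risk:
    description: str
    severity: int                       # 1..5
    mitigations: list = field(default_factory=list)

def prioritize(risks):
    """Order risks so the most severe consequences are addressed first."""
    return sorted(risks, key=lambda r: r.severity, reverse=True)

# a) List potential risks identified in the team session (illustrative examples)
register = [
    Risk("Personalized feed shows only one-sided articles", severity=4),
    Risk("Profile data reused for research without clear consent", severity=5),
]

# c) Attach mitigations: design, engineering, process, or policy measures
register[0].mitigations.append("Surface articles with differing viewpoints")
register[1].mitigations.append("Require explicit opt-in before research use")

# b) Evaluate and prioritize by severity of consequences
for risk in prioritize(register):
    print(risk.severity, risk.description)
```

Even a plain spreadsheet serves the same purpose; the point is that risks, their severity, and their mitigations are written down in one place the whole team can review.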
I am curious to hear if this could be useful in your work, or if you are already using such an approach and I am preaching to the choir :)?