Design has a long and rich history. Humanity has been creating and ‘designing’ tools, products, and systems for people since before we were able to record it. Design as we have come to think of it started to take hold in the early 1900s and has grown by leaps and bounds since then. Many people, systems, and events have shaped us and helped us grow as a practice.

For example, most in the field would probably point to Xerox PARC as a seminal point for user experience design. A lot of great products and concepts emanated from the work done there, and they helped shape some of the most well-known companies that exist today.

If your background is in Human Factors, you would probably point to the research coming out of the US Army (before the Air Force existed as a separate branch) as a pivotal moment in the history of design: the moment researchers started figuring out that finding successful pilots was more a design issue than a personnel selection and training issue.

And with this broad history, there are a lot of lessons that all designers should carry forward. Yet in many cases it feels like we haven’t learned from our history at all. All too often it seems that we as an industry have failed to capture, teach, and learn the fundamental lessons that give design the potential we all talk about.

We as an industry have failed to capture, teach, and learn the fundamental lessons that give design the potential we all talk about.

I want to take some time here to highlight some key moments, groups, and people that are not commonly talked about in the history of design. These (and many others left unnamed) have provided valuable lessons that all designers could use and that deserve far more attention.

Three Mile Island Incident

The incident at Three Mile Island had a big impact in showing how important good design is to helping people understand and head off problems before they become critical.

In March of 1979, the Three Mile Island Nuclear Facility in Pennsylvania experienced an incident in one of its reactors (#2). After a series of events, a valve got stuck open, but the sensor reported that it had closed. As operations continued, some of the reactor coolant was able to escape. Since the indicator showed that the valve was closed, operators were unable to identify the issue and took actions that made the problem worse; they were unable to form a good mental model of the state of the plant. It took several hours and a new shift of employees to diagnose the situation and resolve the issue.

Beyond instilling fear of nuclear power as a viable energy source, the incident had a number of implications. A key outcome that affects us as designers is that it “stimulated an international and multidisciplinary process of inquiry about how complex systems fail and about cognitive work of operators handling evolving emergencies” (from David D. Woods’s chapter ‘On the Origins of Cognitive Systems Engineering: Personal Reflections’ in the book ‘Cognitive Systems Engineering: A Future for a Changing World’). Governments and industries started to better recognize that how people think about a domain or problem space affects how they work in that space. From this incident, combined with a broader examination of system design practices and the early development of AI, the entire field of Cognitive Systems Engineering was born.

Cognitive Systems Engineering (CSE) seeks to augment systems engineering to consider the human (cognitive) implications as part of the system. People play an essential role in most systems, whether acting, deciding, or managing — even ‘autonomous’ systems need a human in the loop in at least a supervisory role. Given the complexity of the world (and design is crossing into ever more complex fields every day) and our key role in it, it makes sense to marry what we know about people (how people decode, make sense of, and act on the information available) with an understanding of the problems we are trying to solve and build systems from this joint perspective.

This has provided a wealth of value to complex domains. Although early work started with nuclear power plants, CSE quickly broadened into other complex domains, including military command and control, space flight (NASA), and finance, among others. You may not have heard of it, but User Experience folks would be well served to look at the research and work done in CSE and extract some lessons. (Note: Cognitive Systems Engineer was my first job title, so I clearly see value here.)

By studying CSE, you will learn about Cognitive Work Analysis and its variants. Cognitive Work Analysis helps to capture the goals and information needs of a complex domain (not of particular users, but of the system as a whole) and serves as a valuable tool for structuring user research insights. Through it, we can better understand how people make decisions, especially under pressure. CSE shows us that how people interpret information depends on how we encode it (mixed in with whatever personal history that person brings as context).
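To make that a bit more concrete, here is a minimal sketch of one common Cognitive Work Analysis artifact, an abstraction hierarchy, written in Python. The domain (a simplified home-heating system), the level contents, and the helper function are all invented for illustration; they are not taken from the CSE literature or from any particular project.

```python
# A rough sketch of an abstraction hierarchy, one Cognitive Work Analysis
# artifact, for a deliberately simplified home-heating domain.
# All domain content and names below are invented for illustration.

abstraction_hierarchy = {
    "functional purpose":   ["Keep occupants comfortable", "Limit energy cost"],
    "abstract function":    ["Balance heat generated against heat lost"],
    "generalized function": ["Heat production", "Heat distribution", "Temperature sensing"],
    "physical function":    ["Burn fuel in furnace", "Circulate air with fan", "Read thermostat"],
    "physical form":        ["Furnace unit", "Ductwork and fan", "Wall thermostat"],
}

# Means-ends links: which lower-level functions serve each higher-level node.
# Walking these links is one way to surface the information people need in
# order to judge whether a purpose is actually being met.
means_ends = {
    "Keep occupants comfortable": ["Balance heat generated against heat lost"],
    "Balance heat generated against heat lost": [
        "Heat production", "Heat distribution", "Temperature sensing",
    ],
}

def information_needs(purpose):
    """Collect every function that serves the given purpose, directly or indirectly."""
    needs, frontier = [], [purpose]
    while frontier:
        node = frontier.pop()
        for child in means_ends.get(node, []):
            needs.append(child)
            frontier.append(child)
    return needs

print(information_needs("Keep occupants comfortable"))
# ['Balance heat generated against heat lost', 'Heat production',
#  'Heat distribution', 'Temperature sensing']
```

Even a toy structure like this makes the point: the interesting design questions live in the links between levels, because those links tell you what information someone needs to see to judge whether a higher-level purpose is being met.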

CSE has provided a lot of valuable research and concepts that the larger design community should be utilizing (e.g., longshot displays, decision ladders, functional abstraction networks). Likewise, CSE has a lot to learn from the design community. At the end of this article, I list several books that may serve as good starting points for exploring Cognitive Systems Engineering.

The Challenger Explosion

How you present information matters. The Challenger explosion, for example, might have been averted if the engineers had been able to convince NASA that the shuttle was in danger. This chart would have made the dangers associated with the launch clear.

If you’ve studied Dr. Edward Tufte at all (you should), you might have come across this one before. In 1986, NASA, its astronauts, and the world experienced a tragedy as the Challenger space shuttle exploded during launch, and all 7 astronauts aboard perished. A seal in the shuttle’s solid rocket boosters (the O-ring) was designed to operate within a certain temperature range, but the launch took place on a day when temperatures were below this safe range.

The shuttle engineers knew the launch was dangerous and tried to tell leadership, but couldn’t convince them to delay it to another day. This incident highlights one essential thing: how you present your information to an audience is critical in determining how well the audience gets the message.

How you present your information to an audience is critical in determining how well the audience gets the message.

This is valuable on two (probably more) fronts. First, it reveals the importance of data presentation, often in the form of a chart or visualization. Pretty, evocative images are great, but the goal of communication is to transmit the message you want to send with as little loss of information as possible. The better you capture the information needs of the decision-makers, the more likely you are to have an impact (how the designer encodes information shapes how people will decode it).

Tufte’s chart (created after the incident and shown in redesigned form above) shows the trend of O-ring damage at different temperatures. It makes a compelling case that the cold temperature on launch day had a high potential to cause system failure. The engineers that day didn’t have this visualization, and thus were unable to make as compelling an argument, though they tried.
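To make the encoding point tangible, here is a minimal sketch of how a damage-versus-temperature chart like that could be built today, in Python with matplotlib. The data points and the launch-day temperature line are illustrative placeholders, not the actual pre-Challenger flight records or Tufte’s figures.

```python
# A minimal sketch of a damage-vs-temperature chart in the spirit of
# Tufte's redesign. All numbers below are illustrative placeholders,
# not the real pre-Challenger flight records.
import matplotlib.pyplot as plt

# (temperature at launch in °F, O-ring damage index) -- hypothetical values
prior_flights = [(53, 11), (57, 4), (63, 2), (66, 0), (70, 1), (75, 2), (81, 0)]

temps = [t for t, _ in prior_flights]
damage = [d for _, d in prior_flights]

fig, ax = plt.subplots()
ax.scatter(temps, damage)
ax.axvline(31, linestyle="--",
           label="Forecast launch-day temperature (illustrative)")
ax.set_xlabel("Temperature at launch (°F)")
ax.set_ylabel("O-ring damage index")
ax.set_title("O-ring damage vs. launch temperature (illustrative data)")
ax.legend()
plt.show()
```

Plotting every prior flight against the forecast launch-day temperature puts the decision-makers’ question, what happens this far outside our experience, directly in front of them, which is exactly the argument the engineers struggled to make in words.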

The other valuable lesson is that convincing people to take the right action is as important as knowing/defining what the best course of action is. This is something that Mike Monteiro preaches all the time. The best ideas mean nothing if they never get past the idea stage. A lot of good tools have died on the vine because we haven’t been able to convince the people who control the money that it’s worth their time. Conversely, a lot of crap products have come to market because people know how to make a convincing argument.

A lot of good tools have died on the vine because we haven’t been able to convince the people who control the money that it’s worth their time.

The goal of design is not to show people what they want. Our job is to convey the information people need in a way that effectively gets the message across. We are trying to drive action (or sometimes inaction), and we need to figure out ways to align user needs, context, and preconceptions with the real-world implications.

The Airline Industry

The airline industry has a lot of lessons for designers on how to make better systems — and a few that design could teach them.

Sometimes it boggles the mind that humans have mastered flight. We were not built to fly, but we have engineered solutions to make it possible. It has been a bumpy ride: the industry’s history is littered with crashes, yet each year it gets safer and safer. 2017 was by far the safest year for airlines, with only 44 total deaths. This progression from dangerous to safe has a lot to teach us as designers of systems.

First, the airline industry is constantly learning from its failures. Every incident, accident, and near-miss is meticulously recorded and inspected. The FAA implemented the Aviation Safety Reporting System (ASRS), administered by NASA, which allows people to report incidents anonymously and without fear of reprisal. They truly want to learn from failure (but they do not actively seek to fail; take note, all those who want to ‘fail fast’). Any crew member who identifies something potentially dangerous, or that places the aircraft at risk, has an avenue to report it. The FAA can compile this data, identify patterns, and recommend changes to make the industry safer. When our systems fail, we should have a similar mechanism in place. We should be able to learn from our collective failures rather than carry those failures individually.

We should be able to learn from our collective failures rather than carry those failures individually.

Second, the airline industry has learned a valuable lesson about how important every single employee is to successful, safe execution. There are a number of documented accidents in which crew members knew of a potential problem but failed to speak up because of a presumed hierarchy, i.e., the pilot is the authority on the aircraft (often called ‘the captain is god’).

Now, every member of a team is seen as equal. Every person on the team has important information; they should be encouraged to contribute, and their insights should be considered and valued. We should emulate this and understand that even the most senior designers can make errors. Everyone on the team can point to something and provide valuable input. Good ideas can come from anyone. We could learn some lessons from the airline industry’s ‘Crew Resource Management’ to be better designers.

We have also learned a lot about human-automation teaming from the airline industry, although the implications have not propagated nearly enough inside and outside the industry. While crashes are significantly down, some still do occur. And when they do, there is often an issue with human-automation coordination. Inevitably, the investigators pin the fault on the pilots.

The Air France crash was a recent example of problems in the design of human-automation teams.

The crash of Air France 447 is an example of this. The standard scenario unfolds like this: 1) an unexpected or unlikely event occurs, 2) the automation doesn’t know how to respond, 3) the automation hands control over to the pilots, 4) the pilots are unable to interpret the situation in time, 5) the pilots take the incorrect action, and 6) the plane crashes. Because the pilots erred, they are blamed. Investigators will talk in terms of more training or more attentiveness, but this misses the point.

In all systems, at some point, a human is involved in a decision that leads to the problem. Investigators will continue to look through the cascading failures until they find a person who could have made a better decision. They use hindsight (always 20/20) to declare ‘of course person X should have done Y’. But as the industry’s own embrace of crew resource management shows, all parts of a system are responsible for safety. This includes the design of the human-automation team and the design of the interface between the two. Pilots should never be handed a problem without warning while lacking the information they need to act appropriately. But this is what we see in most accidents.

People don’t fail and cause accidents. Systems fail. People are just one part of that…

People don’t fail and cause accidents. Systems fail. People are just one part of that even if they take the action that leads to the failure. It’s easy to blame the user, but that is short-sighted and will not help anyone learn and make things better. We as designers, both in aviation and beyond, must understand this, and design accordingly. As automation continues to become central to our lives, getting this coordination correct will be hugely important for successful system design.

Build a Better Future

Design has a rich history, and there is so much to learn from it. And yet, designers continue to make the same mistakes that have been made for decades (I am sure I’ve done it myself) or to rediscover solutions that haven’t been novel since the 90s.

We need to look into our history, study it, and make sure that we continue to grow the field. That way, we don’t repeat history; we build off of it.

Some CSE / decision-making resources to check out:

Cognitive Systems Engineering by Jens Rasmussen, Annelise Mark Pejtersen, and L. P. Goodstein

Joint Cognitive Systems by Erik Hollnagel and David D. Woods

What Matters by Dr. John Flach and Fred Voorhorst

Sources of Power: How People Make Decisions by Gary Klein

Thanks for reading. For more of my UX ramblings, follow me on Twitter: @bkenna1 or here on Medium.


