Every designer receives feedback on their designs when working with others. But not every designer receives actionable feedback they’re confident applying to their design. Most of the time they get unactionable feedback that frustrates and confuses them.
What’s the difference between the two and how can designers prevent unactionable feedback?
Unactionable feedback is vague: it tells you there’s a problem, but not what that problem is. Designers get frustrated because they’re given no direction on how to improve the design. This makes it harder for them to meet expectations and tackle problems.
Not only that, but designers are told to make changes, but aren’t told how those changes will improve the user experience. Nor are designers given specific reasons behind the claims made against their design. They’re expected to go along with the feedback they get no matter what.
Designers aren’t machines; they’re advocates for the user. Their job is to ensure that every change they make to the design benefits the user. It’s the critic’s responsibility to be specific with their criticisms. But it’s the designer’s responsibility to encourage and prompt critics to do this.
Biased feedback that isn’t vetted through others is also unactionable. Applying unvetted feedback to your design will lead to revisions that fail to tackle the real problem. In some cases, it could also make your design worse because you’re making changes based on incorrect information.
Imagine working hard on one part of a design only to realize the issue wasn’t with that part. Time was wasted and no progress was made all because you applied unvetted feedback. Without verification from others, you’ll spend time working on phantom problems rather than real ones.
Actionable feedback is what designers need more of from their team. It’s feedback that specifies why an aspect of your design isn’t meeting user needs, which needs it doesn’t meet, and how to meet them.
Specificity prevents designers from misinterpreting the feedback and doing needless work. It gives them clear direction on what the problem is and what to do to fix it. But specific feedback can still be subjective if it’s not vetted for objectivity.
Actionable feedback is also free of bias. If the feedback you get is coming from only one person, it’s biased. You need a group of people to vet your design to check if the majority experience the same issue. This group can consist of co-workers, clients, or users.
If the majority do experience the same issue, it’s likely you’re getting objective feedback that you should follow. If the majority don’t experience the same issue, it’s likely you’re getting subjective feedback that you should not follow.
Design vetting allows you to check criticisms for objectivity. For example, you may get criticism that the text on your interface is difficult to read. You don’t know if this critic’s claim is objective until you get feedback from more people. If the majority also have trouble reading your text, you can conclude that the criticism is objective.
You also have to get specific with the issue. Why is the text hard to read? Is it the font size, color, typeface, or something else? How does the font size affect user behavior? Is their behavior meeting their needs? Design vetting offers ways to encourage and prompt critics to give you the specifics you need to make informed decisions.
When the majority verify that they experience the same issue, that’s when the criticism is objective and feedback is actionable. If the majority experience the issue differently, the criticism is subjective and should be discarded.
Design vetting has three stages to ensure the feedback you get is specific, objective, and focused on user needs.
Stage 1: Criteria
The first stage is where you create criteria for vetting. It will be based on universal design principles. Your team will use this to identify aspects of the design that violate the criteria. This will keep their criticisms relevant and focused on user needs.
Stage 2: Validity
The second stage is where you check to see if each criticism you get includes the specifics and reasoning behind it. If there aren’t any, it’s your job to solicit it from the critic. Once a criticism includes the specifics and reasoning behind it, it’s valid. However, validity does not mean it’s objective.
Stage 3: Objectivity
The third stage is where you check to see if everyone else on your team agrees with the criticism. Their agreement will be based on their own experience, not their vote. If the majority has the same experience with a criticism, that criticism is objective and should be applied to the design.
Learn More About Design Vetting
Design vetting is easy to do once you learn how to do it. The book Design Vetting: A Process for Getting Actionable Feedback On Your Designs will guide you through each stage. It includes examples of design vetting sessions to show you what they look like.
It’s time for designers to use a better process to get the actionable feedback they need. With design vetting, they’ll make objective design decisions and collaborate better with their team. This is what every designer needs when they’re designing for users and working with their team.