When I started working on the project, it was in an advanced stage of development, but it was becoming more and more apparent that the initial assumptions did not hold up in the real environment, making the experience troublesome and confusing instead of simple and straightforward. The beacon-based navigation used in the project was not precise enough to sufficiently support a person. Sighted users could verify erroneous notifications from the system: it's pretty obvious you would ignore a suggestion to "walk straight ten meters" when facing the train tracks. For a blind person, such a situation might have grave consequences. Additionally, the complicated calibration process used to determine the direction the user was facing was very cumbersome and tended to fail at critical moments. So it was a matter of smoothing out those imperfections and potential errors in the user's positioning, and making the system more helpful and usable.

Together with the team, we started thinking and searching for a suitable solution, keeping in mind the main idea, the time frame, and the technical limitations of the already established infrastructure (the beacons and their management system). That brought us back to one of our earlier ideas that had never been fully explored. Before we started working on the implementation, we needed the new theories put to the test. While doing research, we stumbled upon a project named Wayfindr, which seemed to be going down a very similar route.

The solution chosen by Wayfindr gave us the validation we needed at that stage. With the base idea in place, we decided to track the user's position using zones instead of points. From the system's perspective, the change was in the logic: instead of following a specific target in space, we kept a history of the target's zone visits, and if there was no history, we built the route based on the starting zone.
If a user "appears" in one of the zones on the platform (meaning there is no history of this user's visits to other available zones), we assumed that this user had arrived by train, was currently on the platform, and wanted to get out of the station. By analogy, if a user "appears" at the entrance/exit zone (with no history from other zones), we could safely assume this user wanted to get to the platform and onto a train. So as a starting point we chose these two scenarios, which we thought were the most common situations (a sketch of this inference follows the list):

  1. The user arrives by train, is on the platform and wants to leave the station.
  2. The user enters the station from the street and wants to get to the platform.
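
As a rough illustration of that "no history" rule, here is a minimal Kotlin sketch. The zone kinds, intent names, and zone identifiers are my assumptions for the example, not the project's actual data model.

```kotlin
// Minimal sketch of the "appears with no history" inference.
// Zone kinds and intent names are illustrative assumptions.

enum class ZoneKind { PLATFORM, ENTRANCE, OTHER }

data class Zone(val id: String, val kind: ZoneKind)

enum class Intent { LEAVE_STATION, REACH_PLATFORM, UNKNOWN }

// If the user appears in a zone with no prior visit history,
// infer the most likely goal from the kind of the starting zone.
fun inferIntent(history: List<Zone>, current: Zone): Intent {
    if (history.isNotEmpty()) return Intent.UNKNOWN // direction logic applies instead
    return when (current.kind) {
        ZoneKind.PLATFORM -> Intent.LEAVE_STATION  // scenario 1: arrived by train
        ZoneKind.ENTRANCE -> Intent.REACH_PLATFORM // scenario 2: came in from the street
        ZoneKind.OTHER    -> Intent.UNKNOWN
    }
}

fun main() {
    val platform = Zone("platform-1", ZoneKind.PLATFORM)
    println(inferIntent(emptyList(), platform)) // LEAVE_STATION
}
```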

Because we were building a test stage with a solution aimed mainly at blind people, we didn't consider situations where a new user appears in the middle of a planned scenario: a blind person would most probably keep the application running the whole time, not turn it on and off.

A typical goal in the first scenario is "I want to get out of the station using the exit I prefer" (if there is more than one exit). In the second, "I want to get to the platform and board a train going in the direction I need" (the station where we ran the tests has only one line running through it; with more lines, an additional element would let the user select a specific line). A sketch of these two request types follows.
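
Modelled as data, those two goals might look like the following Kotlin sketch; the field names and the sealed-class shape are assumptions for illustration, not the project's actual API.

```kotlin
// Illustrative model of the two user goals; names are assumed.

sealed class RouteRequest {
    // Scenario 1: leave the station via a preferred exit.
    data class ToExit(val exitName: String) : RouteRequest()

    // Scenario 2: reach the platform for a given direction
    // (and, at multi-line stations, a specific line).
    data class ToPlatform(val direction: String, val line: String? = null) : RouteRequest()
}

fun main() {
    val leaving = RouteRequest.ToExit(exitName = "north exit")
    val boarding = RouteRequest.ToPlatform(direction = "city centre")
    println(listOf(leaving, boarding))
}
```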

Next, we needed to decide how to divide the station into zones. We wanted the critical elements of the station's interior (escalators, elevators, security, toilets, etc.) to be included in specific zones, with each zone's description written in the most straightforward language possible while still carrying as much information as the user needs. A sketch of such a zone record follows.
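
A zone could then be little more than an identifier, the features it contains, and the message read aloud on entry. This is a hedged sketch; the feature set and the wording are invented for the example.

```kotlin
// Hypothetical zone record; feature names and text are illustrative.

enum class Feature { ESCALATOR, ELEVATOR, SECURITY, TOILETS, EXIT }

data class StationZone(
    val id: String,
    val features: Set<Feature>,
    // The notification read aloud when the user enters the zone:
    // plain language, kept as short as the content allows.
    val description: String
)

fun main() {
    val hall = StationZone(
        id = "hall-east",
        features = setOf(Feature.ESCALATOR, Feature.TOILETS),
        description = "Main hall, east side. The escalator to the platform is ahead; toilets are on your right."
    )
    println(hall.description)
}
```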

After we drew the zones on paper, we had to write down scenarios for the possible routes users could take to and from the platform (including variations with trips to the toilet, using the elevators, etc.). Since the device couldn't give us the user's direction of movement, we had to come up with a way to determine that direction ourselves, even if only as a rough approximation.
With the history of zones (which zones the user entered and left, and in what order), we expected to be able to determine with substantial probability the direction the user was taking and estimate the target, which in turn let us send the user the information they needed at specific moments. The sketch below shows the idea.
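
One simple way to get that approximation, assuming the zones along a route can be ordered from entrance to platform, is to compare the positions of the last two visited zones. The route layout below is hypothetical; the real station map was more involved.

```kotlin
// Direction estimate from an ordered zone history.
// The route layout here is a made-up example.

enum class Direction { TOWARD_PLATFORM, TOWARD_EXIT, UNKNOWN }

// Zones listed in order from the street entrance to the platform.
val routeOrder = listOf("entrance", "hall", "stairs", "platform")

// Moving to a higher index on the route means heading toward the
// platform; a lower index means heading toward the exit.
fun estimateDirection(visited: List<String>): Direction {
    if (visited.size < 2) return Direction.UNKNOWN
    val prev = routeOrder.indexOf(visited[visited.size - 2])
    val curr = routeOrder.indexOf(visited.last())
    if (prev == -1 || curr == -1 || prev == curr) return Direction.UNKNOWN
    return if (curr > prev) Direction.TOWARD_PLATFORM else Direction.TOWARD_EXIT
}

fun main() {
    println(estimateDirection(listOf("entrance", "hall"))) // TOWARD_PLATFORM
    println(estimateDirection(listOf("stairs", "hall")))   // TOWARD_EXIT
}
```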

Once the scripts were ready, they needed a trial by fire. We met with a group of smartphone users from Polski Związek Niewidomych (the Polish Blind Association) who were willing to help us. We "became" our application, reading out the notifications the actual app would send in a specific zone and following a particular scenario while walking with our testers. Observing their reactions and listening to their comments, we took notes that we later incorporated into the scripts: what did and did not work, what they thought of the specific language we used, what was utterly useless, what needed to be changed or rewritten, and what could be cut without losing functionality, shortening each message to the bare minimum.

It quickly became apparent that one of our main initial errors was trying to describe the area in too much detail. We had turned our messages into essays that wasted the user's time: they had to listen to the whole story instead of moving forward. It was something like an interface animation that's just too long, where you can't skip it or do anything except wait for it to finish.

Then we had to incorporate the changes and fixes from the initial tests, create the zones and the supporting mechanics inside our system for the mobile app to use, and run another series of tests, this time with the actual device.

The last phase of the project was a final clean-up of the messages' contents and testing the application without any help or support: just the blind user and their device.

And that’s where my involvement with the project came to an end.



Source link https://uxdesign.cc/-for-the-blind-535160e8b31e?source=rss—-138adf9c44c—4
