For some time I had wanted to learn the basics of mobile development, and to do so I needed a project to practice on. It had to be something personal and relatable, something I could test on my own or with friends and family.
After rummaging through my old notes, I chose cooking recipes, because using them has always been difficult for me. I felt as though following a recipe cripples cooking in some way. A quick reflection narrowed it down to three pain points:
- Constantly switching back and forth between preparation and instructions;
- Handling the book or device containing the instructions with dirty or wet hands;
- Searching for a particular place in the instructions.
Goals

- To make a better interface for a cooking recipe, one that takes the context of use into account.
- To build it as a hybrid mobile app.
I took a recipe and broke it down into simple actions. Each action/step becomes an individual screen, and the user navigates through them with a swipe, a simple gesture that does not require accuracy and so can be performed with dirty hands.
I also provided each step with an illustration, to make the steps easier to tell apart.
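The step-per-screen navigation above can be sketched as a small piece of logic. This is a minimal, hypothetical sketch, not the actual app code: the class and method names are invented, and the generous swipe threshold illustrates the "does not require accuracy" point.

```javascript
// Hypothetical sketch of swipe-driven step navigation.
// Names (StepNavigator, onSwipe) are illustrative, not from the original app.
class StepNavigator {
  constructor(steps) {
    this.steps = steps; // one recipe step per screen
    this.index = 0;     // currently visible step
  }
  current() { return this.steps[this.index]; }
  // dx: horizontal finger travel in px. A generous threshold keeps the
  // gesture forgiving, so it still works with wet or dirty hands.
  onSwipe(dx, threshold = 60) {
    if (dx <= -threshold) {
      this.index = Math.min(this.index + 1, this.steps.length - 1);
    } else if (dx >= threshold) {
      this.index = Math.max(this.index - 1, 0);
    }
    return this.current();
  }
}

const nav = new StepNavigator(['Chop onions', 'Heat the pan', 'Fry until golden']);
nav.onSwipe(-120);          // swipe left → advance to the next step
console.log(nav.current()); // "Heat the pan"
```

In a Cordova app, `onSwipe` would be wired to touch events; clamping at both ends means an over-enthusiastic swipe never leaves the recipe.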
This was the first app I developed, and choosing the instrument/framework was hell. At first I tried a native approach, but concluded it was too time-consuming and that I needed something where I could use my existing knowledge of HTML/CSS. Then I looked at React Native, but after a week or two it was clear it wasn't built for custom layouts. Finally, I chose Cordova. The entire development of the application took about two months, most of it spent switching frameworks and reading documentation.
I also tried to animate my illustrations to make them more appealing, but when I tested the app with my friends, even small animations seemed to distract them, so I removed almost all of them.
Missing the point
During development I realised that, while carried away by the illustrations and their animation, I had missed that a mobile device is not the best format for a recipe. Yes, it became easier to navigate between steps, but it was far from seamless.
After working through scenarios, I usually wireframe or sketch out interfaces and layouts. With voice UIs, there are no interfaces or layouts to wireframe. Instead, I wrote the script that would be the interaction foundation for my app.
As you probably know, the main building blocks of voice interfaces are invocations (what the person says) and responses (what the system replies). I already had the list of basic responses: Post-it notes with steps on them. To understand what people would say to my app, I simulated the process using the prototalking technique, with a person playing the role of the voice-enabled device and responding to the user based on the voice interface script. Reading the conversation flows aloud helped to make sure they sounded natural and conversational.
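The invocation/response pairing described above can be written down as data before any platform work begins. The sketch below is an invented example of such a script, assuming a naive keyword match; the phrases are illustrative, not the article's actual script.

```javascript
// A minimal sketch of a voice script as data: invocations (what the person
// might say) mapped to responses (what the app replies). All phrases here
// are invented examples, not the actual script from the project.
const script = [
  { invocations: ['next', 'what now', 'done'], response: 'Heat the pan over medium heat.' },
  { invocations: ['repeat', 'say that again'], response: 'Chop two onions finely.' },
  { invocations: ['how much', 'quantity'],     response: 'You need two onions.' },
];

// Match a spoken utterance against the invocation phrases (naive substring
// match; a real platform does this with trained intent recognition).
function respond(utterance) {
  const text = utterance.toLowerCase();
  const entry = script.find(e => e.invocations.some(p => text.includes(p)));
  return entry ? entry.response : "Sorry, I didn't get that.";
}

console.log(respond('OK, done!')); // "Heat the pan over medium heat."
```

Keeping the script as plain data makes the prototalking exercise easy: the person playing the device simply reads the matching response aloud.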
With the script in hand, I was ready to build something. After a couple of days of research, I chose a mix of Actions on Google and Dialogflow.
Developing a voice app is a relatively simple, template-driven process, with straightforward instructions provided by both Amazon and Google in an attempt to establish their platforms. Google has extensive materials on the do's and don'ts of designing voice experiences.
Navigation and tone
As this was my first practical experience scripting a voice interface, I was surprised to see how much the language shaped the navigation. At first, I saw two ways the instructions could be delivered:
- The application steers the user through the process. Like a preschool teacher, it constantly checks on you and guides each of your steps.
- The application waits for the user to say something. This one is more like a university professor: it is there to help if needed, but it is your job to actually do things.
At first, I wanted to try to guide users. This approach gave me the opportunity to write a more distinctive voice and personality; I had an image of a cheeky and witty cook, helping and following the cooking process. But it proved impossible due to privacy constraints: the device can't stay active between steps and can't say anything on its own; it always responds to what the user says.
According to Google policy, an action should close after the dialog is finished. But you can't prepare a dish in one go. Thus I faced a situation unusual for a GUI designer: to finish a recipe, the user needs to open the app several times. I was afraid this would ruin the experience, but testing showed that, despite being a flaw, the app is still fun to use.
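One way to make the multi-session flow above tolerable is to remember each user's place in the recipe, so every new invocation resumes where the last one stopped. The sketch below is a hypothetical illustration: a `Map` stands in for real per-user persistent storage (e.g. a database keyed by user id), and the function names are invented.

```javascript
// Hypothetical sketch of resuming a recipe across separate sessions,
// since the action closes after each dialog. The Map is a stand-in for
// real persistent storage; names (resumeStep, completeStep) are invented.
const progress = new Map(); // userId → index of the next step to read

// Where should this user's next session start?
function resumeStep(userId) {
  return progress.has(userId) ? progress.get(userId) : 0;
}

// Record that the user finished a step before the dialog closed.
function completeStep(userId, stepIndex) {
  progress.set(userId, stepIndex + 1);
}

completeStep('user-42', 0);         // user finished step 0, then the action closed
console.log(resumeStep('user-42')); // next session resumes at step 1
```

With progress persisted, repeatedly reopening the app becomes a pause/resume cycle rather than starting the recipe over.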
Source link https://blog.prototypr.io/searching-for-a-medium-for-a-recipe-c707ccfab23a?source=rss—-eb297ea1161a—4