Snapchat is partnering with Amazon to bring visual-search-based commerce, or AR-Commerce, to the intrepid messaging platform. Can it work? I tested it out.

Figure 1. Amazon’s app scan points attempt to create a search query via my iPhone’s camera.

Snapchat recently announced the pilot of an innovative visual shopping feature, in partnership with the uncontested leader in shopping experience innovation in the west, Amazon. Put simply, the intent of the feature is to allow you to buy any item that you see in a snap or photo with a tap or two, using its native camera and content features.

Sounds cool, right? I thought so too — in 2009.

Believe it or not, this is actually a feature that my team and I proposed to a client as an innovation play way back in 2009. We knew it would be next to impossible (when has that ever stopped an ad agency from pitching an idea before?), as we would be stretching what image recognition could do at the time, and AI was nowhere near ready. Our solve was a Mechanical Turk-style operation that would bridge the technology gap on both fronts.

We lost the pitch. Though points were scored for cool factor, there wasn’t a belief that what I’m calling AR-Commerce could deliver a good enough experience. The client was probably right.

Like any good digital experience, AR-Commerce is actually a combination of dozens of capabilities and technologies, including native device hardware integration, a tricky user experience, image recognition, AI-driven search algorithms, and then logistics and fulfillment. Lining these particular dominoes up in a delightful way is far from trivial. Hence the Star Trek factor of this offering.
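To make the shape of that pipeline concrete, here is a minimal sketch of how a shoot-to-buy flow might hang together. Everything in it is hypothetical: the Detection and Listing types, the build_query and search_catalog helpers, and the confidence threshold are my own illustrative assumptions, not anything Amazon or Snapchat has published.

```python
# A hypothetical shoot-to-buy pipeline, sketched in plain Python.
# None of these types, names, or thresholds come from Amazon or Snapchat;
# they only illustrate how the pieces described above might line up.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Detection:
    """What an image-recognition pass might return for one photo."""
    labels: List[str]                    # e.g. ["seltzer", "12 oz", "cans"]
    brand_text: Optional[str] = None     # OCR'd text such as "Polar"
    confidence: float = 0.0


@dataclass
class Listing:
    """One product in a (toy) catalog."""
    title: str
    price: float
    sponsored: bool = False


def build_query(detection: Detection, min_confidence: float = 0.6) -> str:
    """Turn recognized attributes into a text search query."""
    if detection.confidence < min_confidence:
        raise ValueError("Item not found")   # the failed-scan path
    terms = list(detection.labels)
    if detection.brand_text:
        terms.insert(0, detection.brand_text)
    return " ".join(terms)


def search_catalog(query: str, catalog: List[Listing]) -> List[Listing]:
    """Naive keyword match against the catalog, sponsored listings first."""
    words = query.lower().split()
    hits = [item for item in catalog
            if any(w in item.title.lower() for w in words)]
    return sorted(hits, key=lambda item: not item.sponsored)


if __name__ == "__main__":
    catalog = [
        Listing("Polar Seltzer Orange Vanilla 12 oz Cans, 12-Pack", 5.99),
        Listing("Polar Bottle Insulated Water Bottle - 12oz", 12.99),
        Listing("Perrier Sparkling Water, 1 L", 1.79, sponsored=True),
    ]
    scan = Detection(labels=["seltzer", "12 oz", "cans"],
                     brand_text="Polar", confidence=0.82)
    for listing in search_catalog(build_query(scan), catalog):
        print(listing.title)
```

Even this toy version hints at where the real thing can go wrong: a weak detection, an over- or under-specified query, or a ranking step that lets sponsorship outrun relevance.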

Even so, you likely have this feature on your phone already, if you have the Amazon app. It isn't clear whether this is the exact technology Amazon will deploy within Snapchat, but it seems to match the feature set being discussed publicly.

As someone who once tried to pitch this idea, I was of course eager to try it myself when I first learned about it a little less than a year ago. At the time, I was summarily disappointed: it was inaccurate, hard to use, and clumsy. I will say I was delighted that Amazon was developing it and had it in the wild already. Given the nature of the feature and AI in general, I assumed it was a learning platform that still had a lot to learn. Fast forward to the Snapchat news, and I was even more excited, thinking the learning machine had learned and was ready to be indispensable.

The Experiment

There was one way to know. I set out on a completely unscientific, though very realistic field study on the matter.

The field: my desk. The study: can it do what it says it does?

This turned out to be a three-part study. I focused (pun partially intended) on objects in separate categories that seemed like good candidates. I also made it easy on the machine, using only items with text on them. It turns out that my full field notes would make this article too long for LinkedIn, so you can check my detailed observations and the full blow-by-blow of the tests here on Medium, if you like.

Note: I didn’t check in advance whether Amazon sells the actual items. I decided the failover logic would be more interesting to expose, and I didn’t want to risk “tainting” the outcome with search history. Here are the tests.

Test 1

Test 1 was a can of Polar brand 100% Natural Orange Vanilla Flavor Seltzer (don’t judge, it’s like a creamsicle in every can). This would cover the e-commerce-hungry CPG (FMCG) category.

Figure 2. Scanning… Scanning… Scanning…

Try 1. My first shot/scan went well. I was duly and predictably distracted by the circular countdown meter and the blinking “scanning” points, which gave it a sci-fi feel that landed somewhere between 1990s Mission: Impossible and (also 1990s) Austin Powers. Though I joke, I appreciated the developers exposing the points: it lends the experience legitimacy and gives the impression the machine is working hard.

After a few seconds, and much sooner than the still-animating progress bar suggested, I was happy to get a positive match. After tapping to reveal it, I was less happy to see the accuracy. See for yourself in Figure 3.

Figure 3. Cute results, but not what I was after.

While I do like a well-made water bottle, the “Polar Bottle Insulated Water Bottle — 12oz” was pretty far off the mark. Even more interesting are the choices the machine made about the meaning of the search and the metadata tags. It resolved to “Polar(R) Seltzers” (check), “12 Oz. Cans” (check), “24/Pack” (huh? For the record, it came from a 12-pack purchased at the local Stop & Shop). So the actual search was pretty accurate. If not for the “24/Pack,” the machine would have had a good shot. Even more curious, though, is that the machine stripped out some of the good parameters it had created (namely “Seltzers” and “Cans”), and the result, it seems, was the unwanted water bottle.

Perhaps more bizarre are the misleading tags it inferred: “Nature,” “Men,” “Natural.” Natural I get. But the application of “Nature” makes me start to think that the very ontology of this machine is off. And “Men”? That had to come from my own (logged-in) profile. I’m fully aware that, while delicious, there’s almost nothing traditionally “manly” about this product or flavor profile. As I scrolled, it wasn’t until result number 15 that I encountered some Polar brand seltzer, though those were liter bottles. And not the delectable Orange Vanilla I crave.

Try 2. And so I tried again. This time, it rendered a slightly different, broader set of parameters and returned better results. This set began with a sponsored post from Polar for a 12-pack of unflavored liter bottles. Some scrolling eventually yielded Orange Vanilla liter bottles. It’s also interesting to see that Perrier had outbid Polar on its own results page, taking the first position above the results.

Attempts Needed: 2
Outcome: Though I eventually found the product variety, it took multiple tries and I never found it in 12 oz. cans.
Unscientific, Mostly Arbitrary Score: D

Test 2

The next test was a PaperPro stapler.

I thought this one would be a softball for the machine. Knowing Amazon likes its algorithms, I assumed the fact that I had bought this exact item from Amazon about three months earlier would positively skew the results, keeping the venerated office accessory the AI equivalent of “front of mind.”

The object is distinctive in its coloring and even more so in its prominent, large logo, although the latter is somewhat low-fidelity. I did want to test the feature’s sensitivity, so I took shots with the edge of the object interacting with the edge of the frame. My thinking was that if this is going to work in Snapchat, it had better be fairly forgiving.

Figure 4. The scan for Try 3.

Try 1 & Try 2. The experience for this one was disappointing. Two initial scans yielded “Item not found” messages. I was pretty surprised, as it really does look like a stapler. Looking again at the scan, I could see the points were focusing on reflections from the nearby window, not the object itself.

Figure 5. Initial search results.

Try 3. So, I flipped on the flash (easy from within the app) and tried again. This got me some results, but not the right ones: the tags searched for were “black,” “pocket,” and “knife.” While my stapler is pretty badass, it isn’t a switchblade, and I fear Katsu and James Brand both had their sponsored-result bids squandered on this result. See Figure 5.

But there was hope: observe the “stapler” tag in the “See More” box. I believe the machine had a hunch that the object was actually stapler-shaped, or that the word “PaperPro” was swaying it.

Once I tapped it, I saw a panoply of staplers: over 9,000, actually. My trusty PaperPro (in black, not yellow) was result #7. Not bad out of 9,000, but I thought it would be #1.

I was also really surprised that my purchase history did not appear to influence the search results, in what would have been an algorithmic play that strikes me as very Amazon. It is not only a plausible but a likely scenario that a user might see a product they bought on Amazon, decide they need another, and go to the camera as a search path, especially for an object where brand might be less important.

Attempts Needed: 3, plus a filtering tag tap.
Outcome: I eventually found it (in a different color), but three tries is not going to cut it with an instant-gratification-driven Snapchat user.
Unscientific, Mostly Arbitrary Score: F

Test 3

I finished my experiment with a Mophie charging pack. Simple. Rectangular. Red. Should be a no-brainer for the learned machine.

Figure 6. First scan nails it.

Like the PaperPro, I thought this last test would also be straightforward for the machine: the object in question is a red, rectangular brick prominently emblazoned with the Mophie branding. In all fairness, my Mophie is a Product(RED) edition, and I didn’t expect to see that exact result. But I also hoped it would not throw the algo off, given the simplicity of the challenge.

Try 1. The machine triumphed in this test, returning acceptable and accurate results in both category and brand on the first try. I found a similarly featured product within three results.

It did raise an eyebrow, though, since Mophie also held the sponsored results (both the brand store and the product), which made me wonder if the accuracy was due only to the search terms purchased. But the fact that the actual search executed was for “Mophie” vindicated the machine, in my opinion. This was by far the most successful test.

Attempts Needed: 1
Outcome: Similar product found, with good results on the first try, with just a bit of scrolling.
Unscientific, Mostly Arbitrary Score: A

Final Judgement

Overall, I have to say this feature isn’t ready for prime time (no pun intended this time), and I think it will disappoint the slightly needy and very sensitive Snapchat audience.

So my final Unscientific, Mostly Arbitrary Score is the dreaded D-.

Is it useful? Not yet. Could it be? Definitely.

And I think the Snapchat + Amazon collab is a killer app (if we still say that these days). The idea that you could buy the thing your friend snapped you (or they were wearing in their snap) seems to be a highly frictionless, actionable referral path that could go hand-in-hand with the inexorable vanity that both consumes and drives that platform.

As something of an aside, though, I think the question of how sponsored results and pages show up is fairly important. In such a utility-based application, where there is limited scannable screen space, it seems disingenuous for the app to show you a thing that’s like the thing you took a picture of, but sponsored, rather than the thing you actually took the picture of. Of course, all sponsored search results are like that, but in this case the search result should eventually be the most accurate result possible, as it is at its most basic a computer saying “here is what you showed me you want to buy.” Compare that to a typical Google search (“use words to describe to me what you want to buy”) or even a voice search (“use words to tell me what you want to buy”), and it should be much more accurate, or at least the expectation of accuracy will be higher. It will be interesting to see users’ tolerance for results that do not look like what they showed the computer, because those results are sponsored.

Just the same, I know I look forward to the day when the shoot-to-buy path is as powerful as the promise. Then we’ll see true AR-Commerce.

What do you think?


