The Suggested Web is Killing Discovery

I found myself opening Pinterest today and staring at my homepage for a good 30 seconds, and it looked something like this:

In those 30 seconds, I formed a mental model of myself: definitely a designer, cares about print, is into typography, a little bit into food. And while this might not be very far from true, there was something about this suggested Pinterest feed that made me very uncomfortable. In an attempt to pin down what caused this discomfort, I penned my thoughts on suggested feeds and recommendations, and what they're doing to us.

The suggested feed is everywhere. Your Facebook feed, the people Twitter wants you to follow, the pictures Instagram shows you, the products Amazon thinks you'll buy, Tumblr, Reddit, Medium, even your search engine. It's hidden behind terms like personalization, customization or adaptive interfaces. It's like this machine knows you, and what can be better than that? More often than not, we find ourselves appreciating the machine learning us, and adapting. More often than not, I find myself appreciating the person who wrote the algorithm. More often than not, it is so useful. My feed better reflects my interests, and I get better deals on stuff I like. But I often find myself questioning: am I seeing what I like, or am I just liking what I see?

What I'm trying to point at here is that this great algorithm, which knows a lot about me, does a good job of pointing me to things I would like, based on my actions in the past. But I know for a fact that it doesn't know me. It doesn't understand me. I have the privilege of knowing how these systems work, where this data comes from, and how it's analyzed. I know they're getting better by the day; I know that where they are right now was unfathomable a while back. But I also know they're far from what our services portray them to be. And I'm afraid I will fall for it. Human-machine interaction is great, and I appreciate it, but it is still not human-human interaction, and it is trying to convince me that it is, which is inherently a dishonest thing to do!

Algorithms do not understand the complexity of human interaction. Text analyzers do not understand the complexity of human conversation. I can see you squirming in your seat at this ungrateful user who cannot appreciate the outstanding abilities of their services. But no, I do see the great intention behind it. I see the usability it brings to the table. But I still stand by my point: it's not human. A very good example is a recent experience I had with Google Hangouts. Hangouts scans my conversations, and gives useful suggestions once in a while. One such feature is location sharing: when someone asks where you are, Hangouts prompts you (and provides a handy shortcut) to send your location. While all this seems great, in a recent conversation with someone I hadn't spoken to in a while, they asked where I'd been for so long. Hangouts of course prompted me to send them my location, and I laughed at how actually sending it would make for a great joke.

The problem here is that our machines are too literal, and we are not. When I ask my code where I am, it tells me 65.946472, -25.488281. When I ask my professor where I am, she tells me I'm doing alright with the course, and have improved over the semester. Our culture has evolved over centuries, and keeps on evolving. So do our machines, but they evolve differently. Our perspectives shift, our biases change, we develop new kinds of relationships. Our machines become faster, detect more things, store more data. This disconnect in evolution is disconcerting, and here's why: my machine tells me who I am, and what I like. That's not who I am. But I start believing it. So I spend an hour on Pinterest looking up more of these typography images. But the side of me that wanted to look at stationery is dead now, and I am made to believe that it never existed.
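To make that literalness concrete, here is a minimal sketch of a keyword-triggered suggestion engine in the spirit of the Hangouts prompt above. This is purely an illustrative toy under my own assumptions (the pattern, the function name, and the canned reply are all invented), not how Hangouts actually works:

```python
import re
from typing import Optional

# A naive keyword-triggered suggestion engine, loosely in the spirit of the
# Hangouts location prompt described above. Purely illustrative; this is
# NOT Hangouts' actual implementation.
LOCATION_QUESTION = re.compile(r"\bwhere\b.*\byou\b", re.IGNORECASE)

def suggest(message: str) -> Optional[str]:
    """Offer a location-share shortcut if the message looks like 'where are you?'."""
    if LOCATION_QUESTION.search(message):
        return "Share your location? (65.946472, -25.488281)"
    return None

# The literal question triggers a sensible suggestion...
print(suggest("Hey, where are you?"))
# ...but so does the figurative one, which no human would answer with coordinates.
print(suggest("Where have you been for so long?"))
```

Both messages match the same surface pattern, which is exactly the gap between parsing words and understanding intent.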

Hence, services, especially those involving a search engine, lack discovery. There is no challenge. It's all served to me on a plate, and this serving seems so perfect that I fail to look outside the plate. Eli Pariser does a great job speaking about how these filter bubbles trap our viewpoints. Remember the pleasure of opening Nat Geo magazines and being amazed by the new kind of flying fish you saw? That's been replaced by a machine that learns that you love otters, and will show you more otters and different types of otters, till you're fully convinced that all you exist for is otters. And while otters are great, I just want to know about the fish too, because that's how my learning evolves. So essentially, my suggested feed makes me go on clicking related content, and branches like this (left). Before the era of these systems, it branched like this (right).

Change in content branching: suggested content (left) vs. non-suggested content (right)

The latter is arguably less depth of knowledge, more scattered knowledge, and possibly a chunk of irrelevant content. But it is definitely greater breadth, explored out of free will, and most definitely a better reflection of me.
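To make the difference concrete, here is a toy simulation of the two branching patterns. Everything in it is invented for illustration: topics sit on a line, a "related" click lands near the previous topic, and a free click can land anywhere; no real recommender works this simply.

```python
import random

TOPICS = list(range(100))   # 100 hypothetical topics laid out on a line
CLICKS = 50                 # clicks in one browsing session

def suggested_session(start: int) -> set:
    """Suggested feed: each click leads to a topic 'related' (close) to the last one."""
    seen, current = {start}, start
    for _ in range(CLICKS):
        current = max(0, min(99, current + random.randint(-2, 2)))
        seen.add(current)
    return seen  # deep and narrow

def free_session(start: int) -> set:
    """Free discovery: each click can land on any topic at all."""
    seen = {start}
    for _ in range(CLICKS):
        seen.add(random.choice(TOPICS))
    return seen  # shallow and broad

random.seed(0)
print("distinct topics, suggested feed:", len(suggested_session(50)))  # roughly a dozen
print("distinct topics, free discovery:", len(free_session(50)))       # roughly forty
```

The numbers land where the diagrams suggest: the related-content walk stays within a dozen or so neighboring topics, while free browsing covers a large slice of the space, traded against depth in any one of them.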

All this said, let's take a step back and look at the benefits of suggestive services. There lies a certain delight in knowing that the system is so personalized. There is great convenience in my application knowing what food I like. Moreover, this isn't even a machine-only thing. We all remember walking into our regular bistro, being greeted by the waitress we know, and being asked if we'll have the usual. This is a waitress who knows you, who has learnt what you order over the years, and who predicts what you'll order. My only response to her is pleasant surprise. Isn't she doing exactly what a learning system that makes recommendations does? Why is it, then, that I receive her so well, but a machine doing this scares me? Here's why: because I trust her. There is a sense of trust around information that we establish with the humans we interact with. This trust is an essential part of all relationship building. There is no fine print coming with this human. I assume that this human is not doing anything else with my information. I assume that there isn't an intelligent system running behind her. Moreover, I assume, in most cases, that there is no advertisement hidden behind her.

Which brings me to my next point. The amount of advertising that drives recommendations on our personalized services is immense. So while my eBay landing page knows I'm looking for cameras, its suggestions are not for me. They're for Canon, where the money is coming from. My colleagues in advertising ask: what is wrong with this? It's proven to work. It increases sales. It increases revenue. More so, it increases engagement, which suggests the user is happy. So it's a win-win? But it's not. My preferences are no longer my preferences, but the preferences of the brand that pays for the services I use. Now here's the debate: I end up spending more time on the website, but the time I spend is driven by an agent that I'm not even aware of. I, as a user, am engaged. But is this a meaningful engagement?

We humans have a tendency to adapt. We adapt even before we realize it. For a very long time now, our behavior has been adapting to computer usage, and this has been shown by several studies like this, this, this and this. These studies talk about lack of empathy in interaction, the use of IM abbreviations in conversation, changing patterns in communication, even changing relationships. Now imagine us adapting to a suggestive service, say your predictive IM service. Imagine we get as literal as our predictive text. Imagine the sarcasm in our conversation dying. Imagine no uncertainty in speech, no interpretation, no alternate meanings. All humor in our conversation would die. We would soon start speaking in machine-readable language (aka code). We would soon respond to 'where have you been' with geographic coordinates. While this is largely an exaggeration, even the possibility of us inching towards the loss of the beauty that lies in the complexity of human communication is frightening. It's not that we necessarily have to be the ones inching; our machines could very well be inching towards us, getting more capable of understanding humans. But all these studies clearly show that it's a two-way movement, and we are going to meet midway. It's a race of who's moving faster now, and I sincerely hope it's not us.

* * *

Related read: The Filter Bubble by Eli Pariser

* * *

I do understand that this point of view is debatable and that there are many schools of thought around it. I'm still attempting to consolidate mine to better convey my point. I would also find it fruitful to write a post from the viewpoint of the good changes machine learning has brought about in these services, to present a fresh perspective. So I'd love to know your thoughts about this.


Author: Safinah Ali

