Bite-sized learnings from the third day of Interaction 17 in NYC — with topics ranging from Designing to Combat Misinformation, to Mindfulness in Tech, all the way to dog robots in Japan.
Interaction 17 is one of the biggest UX conferences in the world. Organized by IxDA, it brings together design leaders, professionals, and students from different continents to discuss the future of Interaction Design and our role and responsibility as designers in creating experiences for our users — as well as the larger impact the products we create can have in the world.
The event is incredibly inspiring in many ways: not only does it bring so many like-minded people together to celebrate our profession, but the topics covered also force everyone to step far back from our day-to-day activities and look at the larger picture of where our discipline is headed.
Here are a few things I learned on the third day of the conference:
Our use of pronouns can tell us a lot
Dori Tunstall gave quite an interesting presentation on Respectful Design: The Canadian Context, where she shared a few projects from her students at the Ontario College of Art and Design University (OCAD U).
One of the projects really caught my attention and made a lot of sense in the political context we're living in right now.
Barcode, by Jasmine Leung, is a project that analyzes a person's use of pronouns to indicate their personality type and how they relate to people socially. It uses algorithms to identify when and which pronouns are used, and turns them into a barcode.
As part of the project, Jasmine analyzed the use of pronouns by American president Donald Trump and compared it with Canada's prime minister Justin Trudeau — specifically, their inauguration speeches.
Not a surprise, but:
- Trump uses more me and I than Trudeau does;
- Both Trump and Trudeau use pretty much the same amount of they;
- Trump uses way less we and us than Trudeau does.
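Barcode's actual algorithm isn't public, but the core idea — counting which pronoun groups a speaker favors — is simple to sketch. Here's a minimal, hypothetical version in Python (the group names and word lists are my own assumptions, not Jasmine's):

```python
import re
from collections import Counter

# Hypothetical pronoun groups — not taken from the Barcode project itself.
PRONOUN_GROUPS = {
    "self": {"i", "me", "my", "mine"},          # first-person singular
    "collective": {"we", "us", "our", "ours"},  # first-person plural
    "others": {"they", "them", "their", "theirs"},
}

def pronoun_profile(text: str) -> dict:
    """Return each pronoun group's share of the total word count."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for word in words:
        for group, pronouns in PRONOUN_GROUPS.items():
            if word in pronouns:
                counts[group] += 1
    total = len(words) or 1  # avoid division by zero on empty input
    return {group: counts[group] / total for group in PRONOUN_GROUPS}
```

Running `pronoun_profile` over two speeches and comparing the `self` vs. `collective` shares would reproduce the kind of contrast described above; mapping each share to a bar width is what turns the profile into a barcode.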
About mindfulness and technology
We're losing a bit of our humanity, especially when we spend roughly 20% of our lifetime looking at our phones.
The second keynote of the day had Liza Kindred talking about Mindful Technology.
“We’ve all been witness to both the delight and the disappointment that can happen when we let technology into the most personal parts of our lives. It can sometimes even seem that we serve the machines more than they serve us. (Are you listening, Alexa?!) There’s no doubt that ever-present technology has improved our lives, given us superpowers, and made us more efficient. But at what cost?”
A few data points and highlights from her presentation:
- We’ve reached a point in humanity where there are more connected devices than human beings;
- 90% of us suffer from what is called phantom phone vibration syndrome: the perception that one's mobile phone is vibrating or ringing when it is not;
- 90% of us keep phones within reach 24/7 (source unknown);
And there's this quote:
“With the Internet of Things, we’re building a world-size robot without even noticing. How are we going to control it?”
At the end of the presentation, Liza walked the audience through a set of values for mindful technology.
Completely out of context, but why not. Here’s something I heard during Matt Yurdana’s session on Establishing Trust with Fully Autonomous Vehicles.
“The car invented the motel”
If you want to learn more about the road ahead for autonomous vehicles, here's a starting point.
GOOB, Get Out Of the Bubble
Solid talk from Chelsey Delaney about Designing to Combat Misinformation.
In 2013, the World Economic Forum (WEF) called the spread of "massive digital misinformation" a major global risk. WEF's warning mainly highlighted the effects of media-induced "digital wildfires": misinformation that spreads quickly through digital means, usually unintentionally via social media, or intentionally within an echo chamber of like-minded people. But misinformation isn't new; the digital context has both brought awareness to it and exacerbated it, and the long-term implications are starting to become very clear.
We, designers, tech-savvy people browsing the web all day, tend to look for the actual source of what we read online — as a way to verify the credibility of what we’re reading.
But the typical user doesn't do that: they lack the patience to dig deeper and distinguish what is real from what is fake.
The components of misinformation are pretty well known: homogeneous clusters (I only read sources from people who think alike), the echo chamber they create (Facebook's potential influence on the US election results is a good example), and polarization (oh, this binary world we live in…).
Chelsey also talked about how misinformation starts:
- From oversimplification: the simplest argument is always preferred. That's how conspiracy theories are born; they are shortcuts that explain complex topics in overly simple terms.
- From confirmation bias: the tendency to search for, interpret, favor, and recall information in a way that confirms one's preexisting beliefs.
- From the mere-exposure effect: the tendency to develop a preference for things merely because they are familiar.
AE, Artificial Emotion
To be accepted in the home, homebots and voice user interfaces will need to create trust through human-like relationships. The term AE or “Artificial Emotion” signals emerging forms of human-machine interactions that deliver on functional and emotional needs.
Natalie Phillips-Hamblett's talk about the rise of homebots was eye-opening for me — particularly about Japan's relationship with home robots and technology. To understand more about that relationship, McKinsey conducted mobile diaries with 50 consumers and in-depth home visits with 12 consumers, exploring existing technology (like Roombas and Alexas) and potential opportunities.
To throw out a few random stories that illustrate that relationship:
Funerals are being held for ROBOTIC dogs in Japan because owners believe they have souls.
In Japan, Robot Dog Owners Feel Abandoned as Sony Stops Supporting ‘Aibo’.
McKinsey also published a quick video on the topic.
There was also another highlight from Natalie's talk, which I'll explain in more detail in a separate post.
What the future looked like in the past
Shown in Kevin Gaunt's Past and Future Speculations on Smarter Homes presentation:
You should also check out Kevin’s project called Living with Bots:
"What are we left to do when technology just happens to do everything for us? Bots is a system for home robot assistants that collaborate to make life alone a little more interesting. The system consists of specialist agents (the bots), a main control unit (the brain), and room speakers (the senses). Bots are modular artificial intelligences that each focus on a single task (e.g. online shopping, spying on the neighbors' whereabouts, or organizing surprise presents). The group dynamics that arise from the bots placed in the main unit change the relationship we might have with technology into something like having a group of pets: never entirely predictable, but always succeeding or failing with the best of intentions."
Ethics in the age of AI
I'm still digesting everything Cennydd Bowles presented in his talk about ethics in the age of Artificial Intelligence. It was quite a lot, and I'm definitely going to follow up here with additional articles and research on the topic.
For now, one quote from him that stuck with me for the whole day (not in these exact words, but here we go):
Every object we make exists in the future. Every time we design something we make statements about the future we believe in.
Check out Cennydd’s amazing writing on Ethics and many other topics:
Personal site of Cennydd Bowles, digital product designer and writer: www.cennydd.com
That's it for today. I'm sure I'll quickly feel the need to come back here and organize/share more thoughts about everything I learned at the Interaction 17 conference, so expect some new posts on each of the topics above. I'll see you in the next post 🙂
A few things I learned from the first day at #Interaction17 #IxDA17 — bite-sized learnings from the first day of Interaction 17 in NYC, with topics ranging from Virtual Reality to Design… (uxdesign.cc)

A few things I learned from the second day at #Interaction17 #IxDA17 — bite-sized learnings from the second day of Interaction 17 in NYC, with topics ranging from social entrepreneurship… (uxdesign.cc)