Now’s the time to get serious about Design Research

Photo by rawpixel.com on Unsplash

The problem

“How do we get users to trust artificially intelligent machines to identify and gather reliable information for them? And, how do we make these selection processes transparent and accessible to users?”

In the past few years, several large technology companies (most notably Facebook, Apple and Google) have released news feeds curated and powered by artificially intelligent machines. This relatively new use of deep learning and artificial intelligence comes with a set of important challenges common to all of them. The most basic of these is how to get users to trust artificial intelligence to identify and gather reliable information for them. Recently, the consequences of not addressing these issues have become clearer than ever. Technology companies like Google and Facebook have to assume responsibility for the incredible power they have over the distribution and selection of information. The current situation is messy and dysfunctional, but it is not inevitable. Human-led innovations and modifications to the user experience have the power to fix many of these issues and prevent even scarier problems from happening in the future.

What follows is a summary of the current situation, a definition of user needs and pain points, and a discussion of potential opportunities for reform. I define some simple research frameworks and explanatory models to show how we might begin to think through these complex issues. I hope they can serve as a starting point for user experience designers and researchers to define the problems at hand and leverage their design expertise to propose solutions.

My contention is that rigorous and creative design research has the potential to address these fundamental threats to democracy and to the user experience.

Does this photograph help us understand how AI and deep learning really work? Image credit: Clint Adair on Unsplash

Solutions: Models for thinking about the problem and frameworks for research

How do we get people to trust AI (generally but more specifically in the context of news)?

  • By making basic information about artificial intelligence, deep learning and artificial neural networks accessible, legible, and useful to a wide range of people.
  • By showing people how artificially intelligent machines work through a problem and come up with an answer.
  • By explaining what safeguards, limits and systems of checks exist to monitor and control artificially intelligent machines.
  • By demonstrating the benefits of using artificially intelligent machines for specific tasks.
  • By establishing a clear feedback loop through which individuals can “teach” artificially intelligent machines to perform tasks better (a toy sketch of such a loop follows the affinity map below).
My version of an affinity map that illustrates how people understand and problematize AI
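
To make that last bullet more concrete, here is a minimal, hypothetical sketch of such a feedback loop: a toy ranker that nudges per-topic weights up or down when a user taps “show more” or “show fewer” stories like this. The class, signals and numbers are purely illustrative and do not describe any real product’s code.

```python
# Hypothetical sketch: a toy news ranker that adjusts per-topic weights from
# explicit user feedback ("show more" / "show fewer" stories like this).
# All names and numbers are illustrative, not a real product API.
from collections import defaultdict


class FeedbackAwareRanker:
    def __init__(self):
        # Every topic starts at a neutral weight of 1.0.
        self.topic_weights = defaultdict(lambda: 1.0)

    def apply_feedback(self, topic, signal, step=0.1):
        """signal is +1 ("show more") or -1 ("show fewer")."""
        new_weight = self.topic_weights[topic] + step * signal
        # Clamp the weight so a topic is dampened rather than silently erased,
        # which keeps the effect of feedback visible and explainable to the user.
        self.topic_weights[topic] = min(2.0, max(0.2, new_weight))

    def rank(self, stories):
        """stories: list of dicts with 'title', 'topic' and 'relevance' (0-1)."""
        return sorted(
            stories,
            key=lambda s: s["relevance"] * self.topic_weights[s["topic"]],
            reverse=True,
        )


ranker = FeedbackAwareRanker()
ranker.apply_feedback("celebrity gossip", signal=-1)
ranker.apply_feedback("local politics", signal=+1)
feed = ranker.rank([
    {"title": "City council votes on housing tonight", "topic": "local politics", "relevance": 0.7},
    {"title": "Star spotted at downtown cafe", "topic": "celebrity gossip", "relevance": 0.8},
])
print([story["title"] for story in feed])
```

The design point is not the arithmetic but the interaction pattern: each user action has a small, bounded effect that the interface can surface back to the person, which is what makes the loop feel like teaching rather than shouting into a black box.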

How do we best explain complex learning systems and neural networks?

  • Through different iterations of models or representations designed to explain qualitatively how artificially intelligent machines parse, analyze and interpret different sorts of data.
  • We might experiment with different visual, action-based (participatory) and semantic models to communicate essential information to people; a minimal code sketch of one such model follows this list.
  • We might also identify which type of explanatory model makes the most sense for different sub-groups of people (for example, children as opposed to adults).
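
As one concrete example of an explanatory model, the LIME package (listed in the resources at the end of this post) produces word-level explanations of a single prediction. The sketch below is a minimal illustration only: the tiny training set, the “reliable/unreliable” labels and the sample headline are invented for demonstration, and a real study would pair this kind of output with the visual and semantic formats discussed above.

```python
# Minimal sketch of a word-level explanation for a toy news classifier, using
# the LIME package (see resources below). The training data and labels are
# purely illustrative; a real system would use a vetted corpus and model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

# Toy stand-in for a reliability classifier (0 = unreliable, 1 = reliable).
headlines = [
    "Scientists publish peer-reviewed study on climate data",
    "Government report details quarterly economic figures",
    "Local hospital opens new pediatric wing",
    "Miracle cure doctors don't want you to know about",
    "Shocking secret they are hiding from you",
    "You won't believe this one weird trick",
]
labels = [1, 1, 1, 0, 0, 0]
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(headlines, labels)

# Ask LIME which words pushed the prediction toward each class.
explainer = LimeTextExplainer(class_names=["unreliable", "reliable"])
explanation = explainer.explain_instance(
    "Shocking miracle trick doctors are hiding",
    classifier.predict_proba,
    num_features=5,
)
for word, weight in explanation.as_list():
    print(f"{word:>10}  {weight:+.3f}")
```

The output is a short list of words with signed weights showing which terms pushed the prediction toward “reliable” or “unreliable”: exactly the kind of artifact that could be put in front of participants in research sessions to test which explanations people actually find legible and trustworthy.
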
Michał Parzuchowski on Unsplash

Challenges

  • How do we get different users to trust AI’s capability and reliability?
  • How do we go about explaining how AI-powered news feeds work?
  • How can we give users power over what news they see?
  • How do we make algorithms more transparent and accountable without revealing proprietary functions and software?
  • And more specifically related to news, how do we encourage users to step out of their comfort zone or “media/filter bubble” without ignoring their preferences?
  • How can we identify clearly fake or unreliable news? Who will make that determination and with what criteria?

Constraints

  • We cannot reveal everything about how different algorithms work. For example, we would not want to reveal how a company programmed its artificially intelligent products, and we would not want to give away all of an algorithm’s functions at any point.
  • We cannot overload interfaces with additional information or complicate the user experience too much. The user has to be given the choice to investigate without feeling overwhelmed by options.
  • We cannot assume anything about users’ basic knowledge of artificially intelligent systems and deep learning. People do not yet share a common understanding of AI.
  • We cannot ignore or automatically delegitimize users’ fears and worries about the potential dangers of AI. We have to address these concerns head on and present users with transparent explanations rather than dismiss them as retrograde or Luddite.
Photo by JESHOOTS.COM on Unsplash

A simplified research proposal

The benefits and ROI of this project will be amplified by the team’s breadth of focus. The optimal team should include designers, researchers, developers, product managers and business stakeholders at all stages of research, iteration, and production.

  1. The first step should involve interviewing experts on artificial intelligence, including computer scientists, social scientists and anthropologists. “Outsiders” to the tech industry should also be included; the most important of these groups will be journalists, network representatives and media consultants. The team should sift through secondary research material and present their findings to one another. At the end of this broad discovery stage, everyone on the team should share a common understanding of how artificial intelligence works, how it is developed and what types of challenges its users, developers and designers face. This will ensure that the team moves forward with a basic level of shared knowledge, assumptions and language to define and describe AI and machine learning, as well as the role played by data and algorithms.
  2. We also need to get a better sense of how different users interact with AI-powered news feeds. For this, I would suggest immersive ethnographic field interviews and shadowing users for a few hours. We would also benefit from observing user behavior over a longer period (one or multiple weeks) to see how different news cycles play out on these platforms. For this, diary studies would be ideal.
  3. Taking the insights from this initial stage of research, we should compile user personas and intricate user stories, condense the data gathered in visual ways (through user journeys, empathy maps and affinity mapping) and come up with a refined set of challenges and constraints informed by these results. At this point the team should be able to empathize deeply with users’ problems and explain succinctly what types of problems and challenges users typically experience when using AI-curated news feeds.
  4. We might take this new set of challenges and constraints to begin exploring potential solutions and synthesizing them in the form of matrices, opportunity areas, and K-P feasibility studies.
  5. At this point (or even before), it would be wise to begin iterating on different solutions to produce lo-fi wireframes and prototypes, content inventories and user flows.

What might a solution look like?

If you’ve been following tech and design news, you have no doubt heard about Facebook’s ongoing challenges and attempts to reform its news sharing ecosystem. Designers at Facebook have been applauded for their smart proposed fixes. But in the aftermath of the Cambridge Analytica scandal and the 2016 election, we have to remember that this issue is far from resolved.

To learn more about Facebook’s design solutions, see:

  • This medium post from Facebook designers and researchers
  • This Co.Design article by Stephanie Nguyen on empowering users against fake news
  • This Fast Co report by Harry McCracken on Facebook’s attempt to repair its relationship with traditional media outlets
  • This Co.Design article by Mark Wilson on a design concept for Facebook’s news feed by designers out of the agency NewDealDesign

Resources

On explainability and artificial intelligence:

  • Bonsai’s Medium post based on their San Francisco talk about deep learning and explainability
  • DARPA’s report on Explainable Artificial Intelligence (XAI)
  • LIME project on GitHub — code and documentation for a package that explains machine learning classifiers
  • The Verge’s Casey Newton on explainability and Facebook’s news feed

On different ways to communicate how AI works:

  • “Generating Visual Explanations” by Hendricks, Akata, et al.
  • “Why Should I Trust You?” by Ribeiro, Singh, et al.
  • OpenAI’s blog post on “Interpretable Machine Learning through Teaching” (link to the paper in the post)

On ways that engineers and developers can reinforce better patterns of learning for AI machines:

  • OpenAI’s charter
  • OpenAI’s blog post on “Gathering Human Feedback”
  • Github code and documentation for rl-teacher (a web app that uses actual human feedback to reinforce AI learning)

Author: Julia Dufosse
