The Future, Crowdsourced –

An Open Letter to the Design Community


The smartphone in a typical middle schooler's pocket is more powerful than the most powerful supercomputer of 1985, a machine three feet tall and five feet wide that was itself far smaller than its less capable predecessors. The leaps in technology that led from one to the other were the products of a multitude of curious brains that latched onto every little development, applied imagination and expertise, and moved us all forward, one communal lurch at a time.

Now we’re entering another technological sea change, and this one is going to push designers to rethink everything about the user experience and about our own processes. I’m talking, of course, about machine learning and virtual reality. Software is soon going to look and feel a lot different than it does today, and while the user population may perceive the transformation as gradual and natural, the design community knows how much work it will take to shape and package the next evolution.

And, again, it won’t be the work of any lone hero. In the same way that Darwin and Wallace arrived independently at the theory of natural selection, the next phase of software will arise from multiple discovery, the collective work of many independent minds focused on a single puzzle.

For that reason, I want to start a conversation with the design community at large. Maybe our hive mind can scale creativity in the same way P2P computing scales processing power. I’m going to share with you the Big Questions my team is exploring right now, and ask you to unpack your imagination and engage with us in the comments section.

So let’s get started.

Designers used to design a webpage and an engineer would build it. Then dynamic design came along and brought more variability, but the experience was still fairly static. With machine learning, a software application can not only ingest data but also act on the results, delivering a personalized experience that changes to meet every user’s needs on the fly. A software package with one million users may generate a different interface for each one of them, so how can a designer accommodate that fluidity? And when a software application resides in the virtual world, will it even have an interface at all?

At Adobe, designers aren’t designing concrete experiences anymore. Instead, we analyze aggregate usage data to understand what users are doing so we can help them be more efficient. But with all the computing power available to us today, efficiency shouldn’t be the only goal; we also want to give people the flexibility to arrive at ideas they may not have come up with on their own. Think of this as computer-assisted creativity. To serve these goals, we need to design in broad strokes, focusing more on how the application will deliver personalized choices than on whether the user will click on a button.

When a computer is making so many choices for the user, it may seem that designers would no longer be necessary, but that fear has proved unfounded. In fact, we’re discovering that designing this way expands our roles as designers, freeing us from the limitations of the interface and coupling our work more closely with that of the engineering team.

At least, in theory.

In reality, we’ve used machine learning to understand content for a long time. In 1950, Alan Turing proposed the Turing Test, which measures a computer’s ability to pass as human. In the mid-1980s, an artificial neural network called NETtalk learned to pronounce English text, babbling its way toward speech much as a baby does. In 1997, IBM’s Deep Blue beat the reigning world chess champion. And so forth, up to today, when we can have conversational experiences and interact with our computers through movements.

All these earlier advancements were game-changers, but none of them were the end game. Now the next step is in our hands, and we need to figure out how machine learning can create an overall better experience, one that provides magical moments of learning, or inspiration, or productivity. We haven’t even reached the frontier yet; we’re still rolling toward it in our covered wagons, figuring out the route as we go.

Right now, the problem we’re trying to solve is centered more around abstract inputs, where users don’t even know they’re giving the machine information. You’ve already experienced this if you use Facebook; Facebook’s system learns who you interact with, what pages you like, and which ads you respond to. The more the system learns about you, the more it can learn about you; it offers you slightly different choices to decide what to offer you next. You’re constantly being A/B tested and you probably don’t even notice. Don’t be embarrassed — nobody else is noticing, either, which is an affirmation of the Facebook team’s design skills.
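Facebook’s actual system is proprietary, but the explore/exploit loop described above — offering slightly different choices to decide what to offer next — is commonly modeled as a multi-armed bandit. Here is a minimal epsilon-greedy sketch; all names and the reward scheme are hypothetical illustrations, not any real product’s API:

```python
import random


class EpsilonGreedyBandit:
    """Choose between content variants: mostly exploit the best-known
    variant, but occasionally explore others so the system keeps learning."""

    def __init__(self, variants, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {v: 0 for v in variants}    # times each variant was shown
        self.rewards = {v: 0.0 for v in variants}  # cumulative engagement signal

    def choose(self):
        # With probability epsilon, explore a random variant...
        if random.random() < self.epsilon:
            return random.choice(list(self.counts))
        # ...otherwise exploit the variant with the best average reward so far.
        def avg(v):
            return self.rewards[v] / self.counts[v] if self.counts[v] else 0.0
        return max(self.counts, key=avg)

    def record(self, variant, reward):
        # reward: 1.0 if the user engaged with what was shown, 0.0 otherwise
        self.counts[variant] += 1
        self.rewards[variant] += reward
```

The user never fills in a form; every click (or non-click) is the input, which is exactly the kind of abstract signal the paragraph above describes.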

That’s the direction we need to explore with other types of software. If interfaces are going to be built in broad strokes (“the software needs to learn this”) rather than fine details (“the user needs to click here”), we need to move the design process into earlier phases of software development — much earlier. Instead of giving engineers a single path through the experience, we might have conversations with them about how the design can change based on what the machine can learn. And while today we work with images and specs, in the next few years, we may no longer work with look and feel at all, and instead work with movement or sound. We won’t develop a set of visuals; we’ll write a description or record a video. We won’t build a prototype; we’ll give rules that define starting points for what we want the machine to learn.

That’s a lot of maybe and perhaps, but we do know one thing for sure: we’re going to have the opportunity to embrace a different thought process and push the boundaries of our creativity and our technology — without even knowing where those boundaries lie.

There’s another aspect of machine learning that designers have to consider as well. Not only can the technology learn from user activities, it can recognize what a user might do better and offer a teaching moment. For instance, maybe an upcoming iteration of Illustrator will notice when a user only chooses certain brushes and suggest another brush style, along with a micro-tutorial on how to use it effectively. At the same time, not all users want that kind of distraction; production designers who know the software inside out may just want their software to be a workhorse, and machine learning can serve their needs by moving extraneous functions out of their way.
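No such Illustrator feature exists; purely as a hypothetical sketch of the logic, a teaching-moment trigger might watch usage counts, suggest an untried tool once habits are established, and stay silent for users who opt out:

```python
from collections import Counter


def suggest_brush(usage, all_brushes, suggestions_enabled=True):
    """Return a brush worth a micro-tutorial, or None.

    usage: Counter mapping brush name -> times used.
    Suggest nothing until the user's habits are clear (enough total usage),
    and nothing at all if the user has opted out of suggestions.
    """
    if not suggestions_enabled or sum(usage.values()) < 20:
        return None  # too little signal yet, or the user wants a workhorse, not a tutor
    untried = [b for b in all_brushes if usage[b] == 0]
    return untried[0] if untried else None
```

The opt-out flag captures the second half of the paragraph: for production designers who know the software inside out, the same learning can be used to move extraneous functions out of the way instead of surfacing them.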

Your Turn

These are the questions my team is exploring now, and the more we dig into the unknowns, the more questions we unearth. We know we’re not alone; many others in the design community are wrestling with the same questions. We’d like to hear yours, and we’d like to ask a few of our own:

1. Are you considering machine learning in your design process?

2. How do you see software changing in the future?

3. What do you think your role will be in the near future?

4. How would you like to see machine learning incorporated into the tools you use in your own design process?

5. Has there ever been a more exciting time to be a designer?

Thanks, everyone!


Author: Jamie Myrold
