There is a lot of conversation happening around Artificial Intelligence, Machine Learning, and using algorithms to shape the future of Design and the role of the designer. But how will all of this change the way we work in the near future?
“The end is near”, according to specialists in robotics and artificial intelligence. Not the end of the world itself, but the fact that robots will take over a portion of the jobs currently held by humans.
Futurist Thomas Frey, for example, predicted in a TEDx talk that 2 billion jobs will have disappeared by 2030. To put that into perspective, that number represents half of all the jobs in the world.
Yes. Because of robots.
Uber’s self-driving cars, Amazon’s delivery drones, chatbots that replace customer service representatives — the robotic revolution is only getting started.
But what about designers? Are we in trouble? How are robots, artificial intelligence and machine learning technology going to affect our careers as designers in the long term?
Robots are not replacing designers
Well, at least not in the near future.
You probably remember the announcement of The Grid a few years ago: a website design and development system (à la Squarespace) powered by artificial intelligence, where site modules and other interaction patterns design themselves, without the need for a designer.
A few months later the world started to see the first websites designed by The Grid.
“I think your jobs are safe, designers”, says a reddit comment about what people started calling The Grid fiasco.
The vast majority of the jobs that will be taken over by robots — at least in the next decade or so — are blue-collar jobs: drivers, receptionists, scribes, and other professionals whose tasks tend to be repetitive and subject to automation.
When you look at Design, things are a bit more complex than that. Humans have a unique ability to set context for our designs and to empathize with the people who use them.
- You don’t decide whether your app’s menu will be exposed or hidden under a hamburger icon simply based on the number of items it contains.
- You don’t decide whether you are creating a 2 or 3-column grid on your site solely based on the size and amount of images you want to display.
- Most designers I know do not decide the font color based on a “color psychology bible” of sorts.
The truth is: it’s more likely that designers and robots start working side by side in the near future.
Instead of a problem, a series of opportunities.
Let’s talk about them.
1. Training the AI to automate legwork
Yes, a designer’s job sometimes includes legwork. Just by looking at what my team is working on at any given time, I can tell that approximately 20% of their time is spent solving problems that could easily be automated by a robot with artificial intelligence.
Cropping assets, resizing images, color-correcting photos — some of these tasks cannot be automated by a simple Photoshop Action, since they require human curation: eyes able to make quick decisions as the task happens.
But what if we could teach an AI to do that job for us?
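For illustration only, here is a toy version of what "teaching" that job could look like. Instead of a trained model, a simple brightness-variance score stands in for "what looks important", and the crop window with the highest score wins; the image is just a grid of grayscale values, not any real tool's API.

```python
# A minimal sketch of "learned" cropping: score candidate crop windows by a
# saliency proxy (local brightness variance) and keep the highest-scoring one.
# A real system would replace crop_score with a trained model.
from statistics import pvariance

def crop_score(image, top, left, size):
    """Proxy for 'interestingness': pixel variance inside the window."""
    window = [image[r][c]
              for r in range(top, top + size)
              for c in range(left, left + size)]
    return pvariance(window)

def best_crop(image, size):
    """Slide a size x size window over the image and return the best window."""
    rows, cols = len(image), len(image[0])
    candidates = [(crop_score(image, t, l, size), t, l)
                  for t in range(rows - size + 1)
                  for l in range(cols - size + 1)]
    score, top, left = max(candidates)
    return [row[left:left + size] for row in image[top:top + size]]

# Flat background (value 10) with a high-contrast "subject" in one corner.
img = [[10] * 6 for _ in range(6)]
img[4][4], img[4][5], img[5][4], img[5][5] = 200, 220, 210, 230

crop = best_crop(img, 3)  # the winning 3x3 crop contains the bright subject
```

The interesting part is not the scoring function — it is that once the score comes from a model trained on human choices, the same loop starts reproducing a designer's judgment at scale.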
Adobe has recently announced Sensei, its Artificial Intelligence that will help designers become more efficient at what they do.
Adobe Scene Stitch, for example, identifies patterns in an image to help designers patch, edit, or even completely reinvent certain scenes.
Or Context-Aware Crop, which ensures the subject of a photo never gets cropped out by accident.
Or Netflix’s automated translation, which speeds up the process of content localization. When the team needs to create multiple banners for a show in different languages, all the designer has to do is review hundreds of layout options previously created by machines and approve or reject each one.
More recently, Airbnb announced a technology that is able to identify paper sketches from designers and turn them into code, almost in real time.
“The time required to test an idea should be zero. We believe that, within the next few years, emerging technology will allow teams to design new products in an expressive and intuitive way, while simultaneously eliminating hurdles from the product development process.” — Benjamin Wilkins, Airbnb
These small optimizations can free up the designer’s time to let them think about more strategic product decisions — the ones computers will need many more decades to learn.
2. Creating smarter, more modular design systems
Artificial intelligence can help make your design system even more robust. If you’re not familiar with the term, a Design System is a series of patterns, modules and elements that, combined, build the design language of a brand or product.
From enterprises to startups, companies are relying on design systems more and more to keep their products consistent for users. Teams at Salesforce, GE, Airbnb, WeWork, Google, Atlassian, and IBM — just to mention a few — are redefining how design teams work together on design systems.
Now imagine adding an intelligence layer to these systems that can analyze metrics on how users interact with each of these elements, and immediately “understand” which one works best for each function. The more this AI learns about what’s working and what isn’t, the more it can start to optimize each of these modules to make sure they deliver better results.
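A toy sketch of that "intelligence layer" idea: an epsilon-greedy bandit that learns which of two hypothetical button variants converts best. The variant names and click rates below are invented for the simulation; they are not any specific product's data.

```python
# Epsilon-greedy bandit over design-system module variants: mostly show the
# best-known variant, occasionally explore, and let the click data decide.
import random

random.seed(42)

VARIANTS = ["solid-button", "ghost-button"]               # hypothetical modules
TRUE_RATE = {"solid-button": 0.20, "ghost-button": 0.05}  # unknown to the agent

shows = {v: 0 for v in VARIANTS}
clicks = {v: 0 for v in VARIANTS}

def pick_variant(epsilon=0.1):
    """Mostly exploit the best-known variant; sometimes explore the others."""
    if random.random() < epsilon or not all(shows.values()):
        return random.choice(VARIANTS)
    return max(VARIANTS, key=lambda v: clicks[v] / shows[v])

for _ in range(3000):                     # simulated user sessions
    v = pick_variant()
    shows[v] += 1
    if random.random() < TRUE_RATE[v]:    # simulated click
        clicks[v] += 1

best = max(VARIANTS, key=lambda v: clicks[v] / shows[v])
```

After a few thousand sessions the system routes most traffic to the stronger variant on its own — exactly the kind of module-level optimization no designer wants to babysit by hand.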
Website builders like Wix and Squarespace have started to incorporate some of these technologies to help users make micro design decisions. Unlike the ambitious Grid, these tools are invisibly incorporating AI into their workflows to assist with secondary, mundane design decisions.
Sites won’t start designing themselves in the short term, but they can definitely require less maintenance and optimization effort from humans.
3. Creating generative visual styles
You have probably seen tools like the Artisto or Prisma apps, which apply intelligent filters to photos and videos based on image recognition tech. The technology identifies whether a photo contains a human face or a lemon pie, for example, and determines the best visual effect to apply to each.
There’s a whole generation of apps like these, powered by tech that can create dynamic and generative visual styles.
Another example of auto-generated visual elements powered by AI is AutoDraw, one of Google’s AI experiments that auto-completes your sketches and turns them into more polished versions of what you are capable of sketching with a mouse in a few seconds. This is only possible because of machine learning: the more people engage with the tool and draw their own sketches, the more the artificial intelligence learns about what users are trying to draw.
Technologies like these make design more accessible to more people. Designers (and non-designers) can increase the quality and polish of what they are trying to create, without spending a lot of energy doing it — another example of AI being used as an assistive tech, not one trying to steal your job.
Want another example? Dynamic logos like the one below (from Brazilian mobile carrier Oi), with color and format variations generated by an algorithm, are also a big trend when it comes to branding and AI.
4. Personalizing the user experience
Websites are getting smarter and taking multiple user data points into consideration to enable more personalized experiences for visitors: time of day, where users are coming from, type of device they are accessing from, day of the week — and an ever-growing list of datapoints and signals users don’t even know about. Triangulating all these factors can give you creative insights into what users are more likely to be looking for when they land on your site.
But decisions about which signals to look at, and what to do with that information, have always been made manually by a team of strategists, designers, and technologists thinking through possible use cases and scenarios.
When machines start taking over that part of the process, the ability to scale use cases and make them hyper-personalized will become more viable and accessible for more companies.
More personalization in the user experience usually means more relevance for users, which leads to better conversion rates.
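As a sketch of how those signals might be triangulated — with made-up segment rules standing in for the learned model a real system would use:

```python
# Rule-based personalization from a few user signals. The signal names and
# layout decisions below are hypothetical, chosen only to illustrate the idea
# of mapping context (time, device, referrer) to design choices.
def personalize(hour: int, device: str, referrer: str) -> dict:
    """Map user signals to layout decisions for a landing page."""
    layout = {"hero": "default", "cta": "Learn more", "columns": 3}
    if device == "mobile":
        layout["columns"] = 1            # simplify the grid on small screens
    if referrer == "pricing-newsletter":
        layout["cta"] = "See plans"      # visitor already knows the product
    if hour >= 22 or hour < 6:
        layout["hero"] = "dark"          # late-night visitors get a dark hero
    return layout

night_mobile = personalize(hour=23, device="mobile", referrer="search")
```

A machine-learned version replaces these hand-written `if` statements with rules inferred from conversion data — which is precisely what makes hyper-personalization scale beyond what a strategy team can enumerate.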
5. Analyzing huge amounts of data
There are more and more systems out there: sites, apps, digital services. And more users. Every time a user interacts with one of these systems, data is generated. Lots of data. The growth of business intelligence has only begun; data analysis processes will become increasingly complex, cross-referencing more refined and more valuable data sets that will help designers and product owners make more informed design decisions.
In the near future, a lot of the process of collecting and analyzing data will be done by artificial intelligence. That doesn’t mean we will need fewer analytics specialists; it means the same number of analysts will be able to produce far more refined (and deeper) analyses of users’ interactions with a product or service.
Bonus: methods such as A/B testing can start to happen automatically, without human mediation. Machines will be able to:
- Identify potential areas of optimization in the product;
- Understand how this optimization can happen (replace a word? change a button’s color? reorder modules on a page?);
- Implement that change and run the A/B test;
- Analyze the results and decide which version is performing best;
- Update the product with the new design, and then restart the cycle.
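The cycle above, reduced to a toy simulation. Variant generation and traffic are faked with random numbers here; a real system would ship each variant to live users and read the results from analytics.

```python
# Simulated self-optimizing loop: propose a tweak, A/B test it against the
# current design, keep the winner, repeat. Conversion rates are simulated.
import random

random.seed(7)

def run_ab_test(rate_a: float, rate_b: float, sessions: int = 2000) -> str:
    """Steps 3-4: expose both variants and pick the better performer."""
    conv_a = sum(random.random() < rate_a for _ in range(sessions))
    conv_b = sum(random.random() < rate_b for _ in range(sessions))
    return "A" if conv_a >= conv_b else "B"

current_rate = 0.05                       # conversion rate of the live design
for cycle in range(3):                    # step 5: restart the cycle
    # Steps 1-2: "identify and implement" a tweak (here, a random perturbation).
    candidate_rate = current_rate + random.uniform(-0.01, 0.02)
    if run_ab_test(current_rate, candidate_rate) == "B":
        current_rate = candidate_rate     # ship the winning design
```

Even this crude loop tends to ratchet the conversion rate upward over cycles — the designer's remaining job is deciding which tweaks are worth proposing in the first place.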
More and more we’ll hear about “websites that optimize themselves”. Machines will do most of the work, while designers will become the strategists orchestrating all the optimization work.
6. Using AI to enhance the user experience
This is the most promising yet least explored category. Experiences powered by artificial intelligence have only started to pop up, and it won’t take long until smart experiences become the new norm.
A few examples:
Facebook uses AI to understand the content of the images you upload. Two practical applications here: 1. Facebook can “read” the content of a photo to users with visual impairments who browse the platform using screen-reader software; and 2. knowing what’s in the photo, Facebook can target you with more relevant advertising (and charge advertisers more, of course).
Google has recently updated Translate to incorporate elements of AI in the way sentences are analyzed and translated.
As TechCrunch reported in “Google’s smarter, A.I.-powered translation system expands to more languages”: “Last fall, Google introduced a new system for machine-assisted language translations, Google Neural Machine Translation…”
More recently, Google announced its visual search technology, Google Lens, which also uses AI to recognize what the user’s camera is pointing at and display content that might be relevant to them.
Not to mention chatbots and virtual assistants, which are becoming increasingly smart and capable of holding more natural conversations with users. What these recent advancements in AI mean for you as a designer is that you’ll soon have many more options to pull from when scripting conversations between users and machines.
You might have noticed that in none of the examples above do robots “steal” your job as a designer. Technology can help us automate repetitive tasks and free up our time to focus on the more strategic side of design, making sure we design experiences that are more personalized, relevant, smart, and efficient for people.
It’s time we start identifying these opportunities to work with technology — not against it, not afraid of it.