Product people build things that serve a purpose and solve a problem. Unfortunately, when it comes to machine learning, many teams forget this.
We need a way to set guidelines for how non-deterministic, intelligent algorithms should work. You can't sprinkle machine learning on a product like magic dust and assume it will be transformative.
At Philosophie, we use prototyping to quickly try more solutions for our clients, which helps them reduce the likelihood of failure when going to market. For machine learning, we have started to experiment with building non-coded prototypes that simulate what machine learning could do. Transitioning from prototype to building, however, is not as straightforward. We needed a good way to do this and found inspiration in a research technique…
What is the computer thinking, hearing, and seeing?
When reading this intro to a Designing AI MOOC, part of slide 9 struck me in particular:
Where else do we care what something is thinking, listening, or seeing to better understand how to create a solution?
Empathy Mapping for people.
What is it REALLY thinking, hearing, and seeing?
You can take this in a very literal sense of how an intelligent algorithm can turn raw video feeds (or LIDAR or whatever) into symbolic information to use, as Chris Noessel does in his Designing Agentive Technology.
I think this goes too far for most purposes, especially since companies like Google, Facebook, and Apple are investing heavily in the basics. The most recent example is Apple's Core ML. We can bet that these basic methods will be commoditized going forward.
Unless you are in research, the real focus should be on what differentiates your product and gives it meaning. Not finding a better way to detect the difference between cats and dogs in ImageNet images.
Empathy Mapping mapped to machine learning
I have come across a few different forms of Empathy Mapping, but I find the following components most helpful: what they do, what they sense, what they say, what they think, and how they feel.
In past engagements, we have used Empathy Maps to kick off persona exercises. For example, we did Empathy Mapping with a group of dispatchers for a field service operations company. This allowed us to understand what things really stress them out and make their jobs hard to do. These were ripe opportunities to build innovative solutions.
Here is what we have at Philosophie for Empathy Mapping:
I have interpreted these Empathy Map quadrants for machine learning as follows:
- Does — the suggested information they could surface, the actions they could take, or the recommendations they could make.
- Senses — the information, history, outcomes, and other context they need to do their work. The outcomes are particularly important in understanding what data needs to be labeled. This includes human feedback delivered directly to the intelligent algorithm through the user experience of the system (e.g. buttons, text entry, etc.), which matters especially for supervised learning algorithms.
- Says — communication with the humans they work with to expose why they are making the decisions they do. This is key to how they build trust with the people they work with.
- Thinks — heuristics that are learned from people doing the job today that are less fuzzy. They are the hard and fast rules that should be obeyed, at least for now.
- Feels — what do we consider 'good' and 'bad?' Machine learning algorithms learn by evolving toward a state of minimal error. This is related to the information needed for outcomes (e.g. labeled data) and is more fuzzy. This is where the intelligent algorithm can find novel and unknown correlations in the sensed data.
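The five quadrants above can be captured as a plain data structure so the group's ideas are easy to collect and prioritize later. This is a hypothetical sketch — the class and field names are my own, not an established tool — with example entries drawn from the field service dispatching domain discussed in this post:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: the five Empathy Map quadrants for an
# intelligent agent, stored as simple lists of ideas.
@dataclass
class AgentEmpathyMap:
    does: list = field(default_factory=list)    # actions and recommendations
    senses: list = field(default_factory=list)  # inputs, context, labeled outcomes
    says: list = field(default_factory=list)    # explanations that build trust
    thinks: list = field(default_factory=list)  # hard-and-fast heuristics
    feels: list = field(default_factory=list)   # 'good'/'bad' signals to learn against

# Example entries for a field service dispatching agent.
agent = AgentEmpathyMap(
    does=["order needed parts for the job"],
    senses=["tech's job schedules"],
    says=["shows thought process for decision making"],
    thinks=["doesn't assign techs to jobs outside of their branches"],
    feels=["Bad: return trips"],
)
```

Keeping each quadrant as a list makes the later affinitizing and voting steps straightforward: every sticky note becomes one string.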
After you have identified important problems and ideated possible solutions, you can pull together the Empathy Map for the agent.
Here is what you can do in a group setting:
- Draw a grid of saying, doing, thinking, and feeling — a cheesy robot head is optional and may be looked down upon by designers
- Generate ideas for each section for 3 minutes privately
- Affinitize and de-duplicate similar concepts, if you need to
- Dot vote on which ones are most important to your problem
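The dot-voting step above amounts to a simple tally: count each idea's dots and rank. Here is a minimal sketch with made-up idea names and votes:

```python
from collections import Counter

# Hypothetical dot votes: each entry is one dot a participant
# placed on an idea from the grid.
votes = [
    "order needed parts",        # participant 1
    "show thought process",
    "order needed parts",        # participant 2
    "avoid return trips",
    "order needed parts",        # participant 3
]

# Rank ideas by vote count to produce the prioritized discussion list.
prioritized = [idea for idea, _ in Counter(votes).most_common()]
print(prioritized[0])  # "order needed parts" leads with 3 dots
```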
From here you will have a prioritized list to start the discussion on how to implement the system.
You may remember Patrick from a previous post. We repeated the Empathy Map for him:
Here are some examples from the exercise and how we would use them when considering implementation of the system:
- Do — “order needed parts for the job” which helps the dispatcher by automatically doing things that are necessary to get the job ready for assignment.
- Sense — “tech’s job schedules” which will allow Patrick to make recommendations on who is available to take jobs (and qualified, etc.)
- Say — “shows thought process for decision making” which builds trust with the dispatcher on why Patrick is doing something.
- Think — “doesn’t assign techs to jobs outside of their branches” which is an important heuristic for the company.
- Feel — “Bad: return trips” which identifies a key error of wasteful return trips by the field tech. If we don’t assign the right person, with the right expertise, the right parts, etc. a return trip is usually required.
People specialize too
One hazard with this exercise is that we try to give the machine too many things to do, sense, say, think, and feel. Just as people specialize when they work together, we should keep our machine learning focused on the key aspects of the work that help humans toward their purpose.
A symptom is having many user experience touch points for the intelligent algorithm, or trying to solve too many problems at once. I have seen this when prototyping and developing chatbots that try to answer too many different types of requests. It ends up being a huge mess that doesn't do anything particularly well.
Kevin Kelly, founding executive editor of Wired magazine, a former editor/publisher of the Whole Earth Review, and a futurist writer I have a lot of respect for, had this to say about AGI:
You cannot optimize every dimension. You can only have tradeoffs. You can’t have a general multi-purpose unit outperform specialized functions. A big “do everything” mind can’t do everything as well as those things done by specialized agents.
If the Empathy Map starts to get too complex, break it up into smaller intelligent algorithms that each focus on a particular problem. It is what your engineering team will most likely do when building it anyway.
Next steps after the Empathy Map
Once you have the Empathy Map there are five key activities that should take place:
- It is most important to realize that you will not get a finished and fully functional model right away. The biggest roadblock is having good data or the ability to get feedback from people. Prioritize this first.
- This output is helpful when creating user stories for the intelligent algorithm. The user stories can, and probably should, involve multiple people when it comes to outlining how the machine learning interacts with them.
- You, or someone on your team, needs to understand what types of machine learning models will work for your needs. It may not even be machine learning specifically. In some cases a simple regression, like fitting a linear function to the data, will be good enough to get started.
- It is important to continue to evolve your understanding of the intelligent algorithm's needs while building. Personas are great artifacts for people, and a similar approach could be used for intelligent algorithms.
- You should consider how your intelligent algorithm progressively changes its involvement from watching to recommending to taking action directly. The end goal is for the person to step in only to veto a decision.
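On the point above that a simple regression may be good enough to start: here is a minimal least-squares line fit in plain Python. The numbers are made up for the sketch — imagine predicting dispatcher effort from job volume:

```python
# Made-up data: x = jobs per day, y = hours of dispatcher effort.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [1.1, 1.9, 3.2, 3.9, 5.1]

# Ordinary least squares for a line y = slope * x + intercept.
n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n
slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
         / sum((xi - mean_x) ** 2 for xi in x))
intercept = mean_y - slope * mean_x

# Extrapolate to a sixth job per day.
predicted = slope * 6.0 + intercept
print(round(predicted, 1))
```

If a baseline like this already moves the needle, you can ship it and layer in heavier machine learning once you have the labeled outcomes to feed it.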
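The watching → recommending → acting progression can be sketched as a tiny state machine. The names here are my own assumptions for illustration, not an established API:

```python
from enum import Enum

class Autonomy(Enum):
    WATCH = 1      # observe and learn; take no action
    RECOMMEND = 2  # suggest actions for the human to approve
    ACT = 3        # act directly; the human steps in only to veto

def next_level(level: Autonomy) -> Autonomy:
    # Advance one step of involvement, capped at acting directly.
    return Autonomy(min(level.value + 1, Autonomy.ACT.value))

print(next_level(Autonomy.WATCH).name)  # RECOMMEND
```

Gating the promotion from one level to the next on measured error rates is one way to earn the trust the "Says" quadrant is meant to build.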
Just rocks we tricked…
We should be amazed by what computers can do:
This tweet, combined with purpose-driven computing, should be taken even further:
Without human purpose, a computer is just a rock that we tricked into thinking.
The key is to make sure the right purpose is still understood and how it relates to the creation of the system. Don’t get stuck with a rock that doesn’t help you meet your purpose.