“AI is great! It allows us to outsource time-consuming tasks, makes us more efficient, can generate text and images, and even takes notes for us.”
“AI is terrible! Its results are full of inaccuracies and it’s taking jobs away from humans. Even when it transcribes voice notes, it still gets things wrong.”
Which of these two views do you lean towards?
Reality Check
We need to keep firmly in mind that AI is a tool. It’s a computer program, and it’s not inherently good or evil. Right now, AI is not a mature computer application like MS Word or Adobe Acrobat. It’s still in its growth stage – its difficult teen years, if you will.
Because AI isn’t fully mature, it’s still hard to figure out how best to use it. And in human-centered professions like CX, extra caution is warranted: how can we use AI without becoming fully dependent on it? In other words, can we have our AI tools and still stay human-centric throughout our processes?
At CX by Design, we believe the answer is yes – you can use AI without losing sight of the humans at the center of the process. In this newsletter, we’ll consider just one aspect of human-centered AI: using AI to improve processes rather than to replace workers.
Just Another Tool in the CX and UX Toolbox
We doubt any CXers are bothered by tools like Asana, Mural, or Figma. In fact, most of us favor these programs because they make it easy to organize, collaborate, and communicate during projects – especially on distributed teams.
In the ‘AI is terrible’ scenario above, did you get the idea that the imaginary speaker saw AI as a replacement, a threat? A lot of workers do – and to be fair, we have seen AI cut into a long list of job functions. But could this be a case of misapplied AI? Instead of relying on AI to take over job functions and cut out the human element, can we humans use AI responsibly?
What might responsible, human-centric AI usage look like?
A UX designer asks AI to come up with a few design concepts for a new feature, then combines and refines facets from several of the concepts to create a wireframe to test with human users.
A CX consultant asks AI to summarize recent research from a trusted source so they know the main points. Then the consultant decides whether to read the entire report.
A UI designer uses an AI tool to analyze their design for possible accessibility issues before submitting it for user testing.
A UX writer uses AI to quickly draft microcopy for a wireframe, then evaluates and adjusts the text to match the desired brand voice and tone.
An animator uses AI-generated images along with actual photographs for reference.
What doesn’t responsible AI use look like?
A UX designer asks AI to come up with a few design ideas for a new feature, then picks one and passes it off as their own work.
A CX consultant asks AI to summarize recent research without specifying the source and uses these ‘findings’ without verification.
A UI designer uses an AI tool to analyze their design for possible accessibility issues and submits it without any human user testing.
A UX writer uses AI to quickly draft microcopy for an app, but doesn’t make any changes to the text in later iterations.
An animator uses AI to generate an animation without acknowledging it or uses AI-generated images without verifying them.
In the above irresponsible scenarios, AI is replacing the human/user element. It’s shortcutting the process, but at the wrong point: instead of being used to increase quality, it’s being used mainly to cut time and costs.
In responsible AI use, humans and users are not cut out of the picture. They’re still fully participating in iteration, design, and testing; AI has simply done some of the grunt work first. Yes, the process has been streamlined, but the human-centric formula is still intact. Most critically, AI’s contributions are being tested and verified by actual humans. The focus has shifted from using AI to cut costs to having humans use it to improve quality – just like any other piece of software.
There’s More to Learn About Human-Centric AI
Author Michelle Stansbury urges readers to treat AI like an intern: give it tasks, but supervise it. Train it. Check its work. In other words, don’t use AI to replace humans; instead, use it to make humans’ tasks a little easier and simpler.
Of course, there’s a lot more to be said on this subject; this is just the proverbial tip of the AI-HCD iceberg. Check back with us next month, when we’ll be posting the full article on our blog and discussing five more tips for human-centric AI usage.