Five UX Guidelines to Embed AI in Your Enterprise Applications

AI is infiltrating our everyday lives at a rapidly increasing speed. We are often either unaware of, or simply take for granted, the incredible breadth of capability that AI technology provides. Something as mundane as an email spam filter uses machine learning to detect patterns in spam messages, while similar machine learning concepts allow self-driving cars to navigate safely from Point A to Point B. Use cases within modern enterprises also continue to grow. As you roll out AI capabilities in your enterprise applications, you may wonder how to create a seamless user experience that blends traditional application functionality with AI-powered capabilities.

In my recent post about how to integrate AI APIs into your enterprise applications, I explained how easy it can be to begin adding AI functionality to your applications. It is also important to address how implementing AI affects your application’s user experience, particularly with regard to trust and control between your users and the AI-powered application. We are, after all, asking a machine to make decisions or complete tasks that would typically be handled by a human. Without insight into why or how a machine is making a decision for us, we must rethink the ramifications of that choice. AI governance, which concerns itself with the how and why of AI decision making, is still in its infancy.

As Andrew Moore explains, “at its core, AI is about automating judgments that have previously been the exclusive domains of humans. This is a significant challenge unto itself, of course, but it brings with it significant risk as well. Increasing effort, for instance, is required to make the decisions of AI systems more transparent and understandable in human terms.” 

In this article, I outline five UX guidelines that product managers need to consider when integrating AI capabilities into their enterprise applications.


1) Notify users about which tasks involve AI automation

A few years ago I wrote an article about Ambient UX, which described how massive amounts of personal data combined with machine learning algorithms would enable computers to constantly make “invisible” micro-adjustments to the systems around us. The environment around us would simply adjust to our needs. 

While this might sound alluring in everyday life, having a machine perform tasks in our workplace without our knowledge could lead to unintended consequences. This is especially true for users completing repetitive tasks within enterprise applications, who rely on consistency and predictability in their work environments. Enterprise applications should clearly indicate which parts of the application automate tasks using AI. This enables users to adjust their expectations appropriately and anticipate data or functionality variations that may be outside of their control.
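As a rough illustration, an application could attach a small notice to any task that AI handles. The sketch below is in TypeScript; the `TaskInfo` shape and `aiAutomationNotice` helper are hypothetical, meant only to show where such an indicator would hang off task metadata.

```typescript
// Hypothetical metadata flagging which tasks in the application are automated by AI.
interface TaskInfo {
  name: string;
  automatedByAI: boolean;
}

// Returns the indicator text to show next to a task, or null when the task
// is handled entirely by traditional application logic.
function aiAutomationNotice(task: TaskInfo): string | null {
  if (!task.automatedByAI) {
    return null;
  }
  return `"${task.name}" is completed automatically by an AI model.`;
}

// Example: surface the notice next to an AI-automated step.
console.log(aiAutomationNotice({ name: "Invoice categorization", automatedByAI: true }));
```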


2) Indicate which data is derived from AI

Unleashing machine learning algorithms on large datasets may reveal conclusions that otherwise would have gone unnoticed. This is often compared to human “gut instinct”: our subconscious draws a conclusion, but we aren’t necessarily able to explain how we arrived at it. While we may expect this from humans, it is less commonly accepted for machines to have this capacity. Any data generated by a machine learning algorithm, as opposed to traditional data processing, should therefore be differentiated visually. Users can then decide for themselves whether they want to trust this data in their work. Rather than concealing the automation, offer layers of visibility that let your users focus on the human-centered tasks while remaining aware of what the AI is working on in parallel.
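One way to support this kind of visual differentiation is to track provenance alongside each value, so the UI can style AI-derived fields distinctly. This is a minimal TypeScript sketch; the `FieldValue` shape, the provenance labels, and the CSS class names are all hypothetical.

```typescript
// Hypothetical wrapper that records where a value came from, so the UI can
// render AI-derived fields differently from directly entered data.
type DataProvenance = "user-entered" | "system-calculated" | "ai-generated";

interface FieldValue<T> {
  value: T;
  provenance: DataProvenance;
  confidence?: number; // only meaningful for AI-generated values
}

// Map provenance to a CSS class so AI-generated values get a distinct
// visual treatment (e.g. an icon or tinted background).
function provenanceClass(field: FieldValue<unknown>): string {
  switch (field.provenance) {
    case "ai-generated":
      return "field--ai-generated";
    case "system-calculated":
      return "field--calculated";
    default:
      return "field--user-entered";
  }
}

const forecastedRevenue: FieldValue<number> = {
  value: 1_250_000,
  provenance: "ai-generated",
  confidence: 0.82,
};

console.log(provenanceClass(forecastedRevenue)); // "field--ai-generated"
```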

For example, Textio provides real-time input on job descriptions as they are being written. It is essential that any changes to the text are presented in a clear yet unobtrusive way. The application can finish sentences and complete entire paragraphs based on only a few words, which can be jarring to the user. In the example below, Textio moves the user’s input into a text bubble while placing its suggested text inline in the content. The suggested text is also marked with an icon at the end.


3) Let robots be robots; Let humans be humans

AI automation frees your users to focus more on responsibilities related to customer experience, employee engagement, and workplace culture. The UX of your applications should reflect this by allowing your users to focus on the humanistic tasks that emphasize their strengths over machines.

For example, a customer support system might use AI to suggest responses to common customer requests. Instead of taking a traditional support-message user flow and layering AI functionality on top of it, consider reinventing the entire flow so it is centered on human review and moderation of the automated AI responses. This approach emphasizes the human strength of monitoring and fine-tuning the AI’s suggestions, rather than keeping an outdated flow in place where humans manage tasks the AI could handle.
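A review-centered data model might look something like the TypeScript sketch below; the `SuggestedReply` shape and `resolveReply` helper are hypothetical, intended only to show the human acting as moderator rather than author.

```typescript
// Hypothetical model for a support workflow centered on human review of
// AI-suggested replies, rather than humans drafting every reply themselves.
type ReviewStatus = "pending" | "approved" | "edited" | "rejected";

interface SuggestedReply {
  ticketId: string;
  draft: string;        // text proposed by the AI
  status: ReviewStatus;
  finalText?: string;   // what the human actually sends
}

// The human's job is to moderate, not to type: approve, tweak, or reject.
function resolveReply(reply: SuggestedReply, edit?: string): SuggestedReply {
  if (edit !== undefined && edit !== reply.draft) {
    return { ...reply, status: "edited", finalText: edit };
  }
  return { ...reply, status: "approved", finalText: reply.draft };
}

const suggestion: SuggestedReply = {
  ticketId: "T-1042",
  draft: "Thanks for reaching out. Your refund was issued today.",
  status: "pending",
};

console.log(resolveReply(suggestion).status); // "approved"
```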


4) Account for inconsistency in results

After years of working with traditional software, users have become accustomed to submitting a given input to an application and receiving a predictable output. That is the rule with traditional algorithms, but machine learning decisions typically happen in a black box that can produce very different results. There are myriad reasons this may happen, from a massive number of input parameters to evolving machine learning models. For example, researchers at Mount Sinai’s Icahn School of Medicine found that machine learning algorithms that diagnosed pneumonia well in their own chest x-rays did not work as well when applied to images from other hospitals. The smallest variations in how x-rays were taken and processed produced vastly different outcomes once the images were fed through the AI engine. Another example most consumers have experienced is saying the same thing twice to a voice assistant and receiving two very different responses. The machine learning models that power our voice assistants are continuously tweaked and improved, meaning the same input can produce a completely different output tomorrow than it does today.

When users engage with a machine learning algorithm, they need to understand that results might seem inconsistent, much like a human might respond differently to a request based on thousands of variables in that moment. Providing transparency into the variables that influenced a decision will help users understand the system. Layering some educational components into the user experience to explain how AI data is generated can help bridge the gap as well.
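For instance, a prediction could be delivered together with its confidence, model version, and top contributing factors, so the UI can explain apparently inconsistent results. The shape below is a hypothetical TypeScript sketch, not any specific vendor’s API.

```typescript
// Hypothetical response shape for an ML-backed prediction that exposes the
// factors behind a decision, so the UI can explain apparent inconsistency.
interface ExplainedPrediction {
  label: string;          // the predicted outcome
  confidence: number;     // 0..1
  modelVersion: string;   // results may shift as models are retrained
  topFactors: { feature: string; weight: number }[];
}

// Builds a short, human-readable explanation to render next to the prediction.
function explain(p: ExplainedPrediction): string {
  const factors = p.topFactors
    .map((f) => `${f.feature} (${Math.round(f.weight * 100)}%)`)
    .join(", ");
  return `${p.label}, ${Math.round(p.confidence * 100)}% confidence ` +
    `(model ${p.modelVersion}). Driven mainly by: ${factors}.`;
}

console.log(
  explain({
    label: "Likely to churn",
    confidence: 0.74,
    modelVersion: "2024-06-01",
    topFactors: [
      { feature: "support tickets in last 30 days", weight: 0.41 },
      { feature: "declining logins", weight: 0.33 },
    ],
  })
);
```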


5) Provide an escape route for when AI fails

With all the sensationalism around AI capabilities, it’s important that we acknowledge the limits of machine learning and convey those to our users as well. Our machine learning algorithms will occasionally get things wrong. When this happens, we must ensure that users are not stuck in a dead-end flow but are offered alternate ways to complete their tasks or override AI-generated data. Think of Tesla’s Autopilot, which allows the driver to take over control of the car at any point. The AI does not fight back; it knows to get out of the way and let the human be in control.

Whenever the AI makes a decision on behalf of the user, or AI-generated data feeds into traditional logic, giving users a way to complete their tasks manually or retract an AI decision will help build trust and satisfaction. Consider a conversational AI system such as a chatbot: here the escape route can be as simple as asking for confirmation before proceeding with or canceling a requested action. The cancellation must be simple enough to execute that the error rate stays low; otherwise users will be doubly frustrated that the AI is not working as expected.
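A minimal sketch of such an escape route in TypeScript, assuming a hypothetical `askUser` prompt and a proposed action supplied by the AI:

```typescript
// Hypothetical confirmation step for a chatbot: the AI proposes an action,
// but the user can always confirm, cancel, or escalate to a human.
type UserChoice = "confirm" | "cancel" | "talk-to-human";

interface ProposedAction {
  description: string;           // e.g. "Cancel order #8831"
  execute: () => Promise<void>;  // only runs after explicit confirmation
}

async function confirmBeforeActing(
  action: ProposedAction,
  askUser: (prompt: string) => Promise<UserChoice>
): Promise<string> {
  const choice = await askUser(
    `I can do this for you: "${action.description}". Should I go ahead?`
  );

  switch (choice) {
    case "confirm":
      await action.execute();
      return "Done. Let me know if you need anything else.";
    case "talk-to-human":
      return "No problem, connecting you with a support agent now.";
    default: // "cancel" or anything unexpected: never a dead end
      return "Okay, I won't do that. What would you like to do instead?";
  }
}
```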


Summary

By applying these five UX guidelines, you can increase your users’ adoption of AI functionality in your enterprise web and mobile apps, while reducing learning curves and friction as users get used to working with AI. If you want to take advantage of AI in your digital products but don’t know where to start, check out our top five ways to employ AI in your enterprise web and mobile app strategy.