Five core design principles for explainable AI

Amid the hype around AI and the concerns about its ethical development, design thinking may not seem like it should be high on the list of priorities. But core design principles can help create AI systems that are interpretable and user-friendly.

As we interact more and more with AI systems in everyday life, we need algorithms that are transparent, fair and explainable. When applying for a bank loan, for example, the applicant deserves to understand whether a decision was made algorithmically and how any algorithm reached its decision. They should also be able to request a human-led review or exercise a right of reply.

Explainable AI is key to ensuring that users have confidence in a system and its judgements. But this explainability also needs to be balanced with usability. There are design principles that can help developers and businesses achieve both of these goals, giving users an interaction with AI systems that they can trust.

Here are five core design principles that AI systems designers should consider:

Start with the User

As with all technology builds, design should be guided by the user experience that you hope to achieve. That means assessing how a task is done and focussing on the key moment of interaction and how that experience can be enhanced with AI.

At the same time, it is essential that users retain an element of control over the system. Even though parts of the system are automated and are dealing with volumes of data that humans cannot process at speed, the user must have control of the interaction. For example, the system might let users intervene, contribute, or provide feedback. Users are more open to understanding something that is working with them rather than for them.

Set Expectations

AI, understandably, can mean a lot of things to a lot of different people, so it is especially important to craft an interaction that makes clear what the system does, which elements are automated, and what the user should expect. That means making it clear that the user is interacting with something powered by AI, and what the potential limits of that AI may be.

For example, consider an AI system used to estimate the waiting time in a queue for an online customer chat service. The AI will likely base its prediction on typical interaction times and the number of people waiting. But it only takes one atypical interaction to throw that timing off. Making it clear that it’s a prediction could be as simple as phrasing it as such – “Based on typical waiting times, your query will be answered in 10 minutes.” Or the system might add other qualifying information to help the user assess the waiting time – “Based on typical waiting times, your query will be answered in 10 minutes. You are number 4 in the queue.”
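
As a rough illustration, here is how such qualified phrasing might be assembled in code; the function name and inputs are hypothetical, a sketch rather than any production system.

    # A minimal sketch of framing a model output as an estimate with
    # qualifying context. All names and values here are hypothetical.

    def format_wait_estimate(predicted_minutes: int, queue_position: int) -> str:
        """Phrase the prediction as a prediction, and add context the user
        can use to judge it for themselves."""
        return (
            f"Based on typical waiting times, your query will be answered "
            f"in {predicted_minutes} minutes. "
            f"You are number {queue_position} in the queue."
        )

    print(format_wait_estimate(10, 4))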

Communicate Confidence

Linked to setting expectations, users should be provided with an intuitive understanding of the output of the system, along with the system’s confidence in its output. This can include what data is used, which features the AI system focussed on to produce outputs, and how those outputs are formulated, all presented as a seamless part of the user flow through the interface. Particularly with AI tools that act as decision support systems, it is essential to communicate the confidence of each output in a manner that is relevant for and digestible by the user, enabling users to select from multiple outputs where there is ambiguity.

For example, banks might have an AI-powered system designed to analyse transactions and identify those that could be associated with fraud. In such a case, to demonstrate the system’s confidence, it would not be enough to simply say “I predict transaction XYZ is associated with fraud”. You would want it to say: “I am 97% certain that transaction XYZ is associated with fraud, based on ABC data”. This explicit statement of the system’s confidence in its own predictions allows users to pick out decisions they may want to interrogate further.
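
A sketch of what that might look like in code, assuming a hypothetical classifier that returns a probability score alongside a reference to its supporting evidence:

    # A minimal sketch of turning a raw probability into an explicit,
    # user-facing confidence statement. The transaction ID and evidence
    # label are illustrative assumptions, not a real bank's system.

    def explain_fraud_flag(transaction_id: str, probability: float, evidence: str) -> str:
        """State the system's confidence explicitly so users can decide
        which predictions to interrogate further."""
        return (
            f"I am {probability:.0%} certain that transaction {transaction_id} "
            f"is associated with fraud, based on {evidence}."
        )

    print(explain_fraud_flag("XYZ", 0.97, "ABC data"))
    # I am 97% certain that transaction XYZ is associated with fraud, based on ABC data.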

Fail Gracefully

AI is inherently probabilistic, so it should be designed for error and uncertainty. In practice, that means outputs with less confidence should be presented as such, clearly distinguished from answers that have high confidence.

Think of interactions with chatbots, where being sent down the wrong branch of the decision tree can be frustrating and time consuming. For example, if the chatbot interprets your query as a customer service enquiry rather than a delivery enquiry, the next set of options will likely have nothing to do with the problem you want to solve. Far better for the chatbot to say, “I’m sorry, I’m not entirely certain what you’re looking for. Is your problem with your order or with your delivery?”
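
As a sketch, graceful failure can be as simple as a confidence threshold below which the system stops guessing and asks; the intent labels, scores and threshold here are illustrative assumptions:

    # A minimal sketch of failing gracefully: below an assumed confidence
    # threshold, the bot asks a clarifying question instead of committing
    # to a branch of the decision tree.

    CONFIDENCE_THRESHOLD = 0.75  # illustrative cut-off, tuned per application

    def route_query(intent: str, confidence: float) -> str:
        if confidence >= CONFIDENCE_THRESHOLD:
            return f"Routing you to the '{intent}' flow."
        # Low confidence: admit uncertainty and offer explicit choices.
        return (
            "I'm sorry, I'm not entirely certain what you're looking for. "
            "Is your problem with your order or with your delivery?"
        )

    print(route_query("customer_service", 0.55))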

Share Your Process

AI and machine learning rely on datasets, whether large or small, to make decisions. Making it clear how this data is gathered, processed, and learned from is a vital part of the transparency of the system. You may want to offer visual representations of the learning loop to help non-technical users understand how the system works. With expert users, there are few better ways of building trust than open-sourcing the code and models.
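
One lightweight way to share the process, sketched below, is to publish a summary of the data pipeline and learning loop alongside the system, in the spirit of a model card; every field and value shown is an illustrative assumption, not a description of any real system:

    # A minimal sketch of a machine-readable "model card" that describes
    # how the system gathers, processes and learns from its data.
    # All fields and values are hypothetical placeholders.

    MODEL_CARD = {
        "data_gathered": "customer transactions, collected with consent",
        "processing": "anonymised, deduplicated and normalised before training",
        "learning_loop": "retrained monthly; drift reviewed by a human analyst",
        "source_code": "open-sourced for expert review",
    }

    def describe_process() -> str:
        """Render the card as plain text for non-technical users."""
        return "\n".join(f"{key}: {value}" for key, value in MODEL_CARD.items())

    print(describe_process())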

For users to adopt AI systems with confidence and trust, those systems have to be both explainable and user-friendly. But when you design from the beginning with explainable AI in mind, usability and explainability become intrinsic parts of the AI system, and often improve the process itself.
