Designing for AI: Trust. 8 ways to infuse trust into your AI… | by Arin Bhowmick


At IBM, we’re building software solutions that help our users make smarter decisions faster. In the world of data and artificial intelligence (AI), it all comes down to designing products our users can trust enough to help them make those important decisions.

This focus on trust goes beyond data security and validation; it’s about helping our users understand their data, providing relevant recommendations when they need them, and empowering them to create solutions they can be confident in.

As we designed our end-to-end AI platform, IBM Cloud Pak for Data, as well as a diverse set of AI offerings and solutions in our IBM Watson portfolio, we focused on the following 8 principles for establishing trust within AI experiences.

At IBM, we believe that good design does not sacrifice transparency and that imperceptible AI is not ethical AI. When designing for AI, you should never hide the decision-making process, and your users should always be aware that they are interacting with an AI. To do this, you need to bring explainability into every AI experience so that your users understand the conclusions and recommendations made by the AI.

We’ve established a set of ethical guidelines related to designing for AI called Everyday Ethics for AI that outlines fundamental ways for you to bring explainability into your AI experiences.

  1. Allow for questions. A user should be able to ask why an AI is doing what it’s doing on an ongoing basis. This should be clear and upfront in the user interface.
  2. Decision-making processes must be reviewable, especially if the AI is working with highly sensitive personal data like personally identifiable information, protected health information, and/or biometric data.
  3. When an AI is assisting users with making any highly sensitive decisions, the AI must be able to provide them with a sufficient explanation of recommendations, the data used, and the reasoning behind the recommendations.
  4. Teams should have access to a record of an AI’s decision processes and be open to verification of those processes.

Humans are inherently biased, and since humans build AI systems, there’s a good chance that human bias gets embedded into the systems we create. It’s our responsibility to minimize algorithmic bias through continuous research and data collection that represents a diverse population. Fairness, like explainability, should be standard practice when infusing products and services with AI. This means that whenever sensitive data is involved, you should design AI experiences that not only minimize bias but also help your users do the same. You can see this in the bias detector within Watson OpenScale, where users are alerted to potential bias in data sets.

In this example, you can see that the age 65–105 group did not receive as many favorable outcomes as the other groups. Because the rate falls below the acceptable level, Watson OpenScale marked it with an alert.

There’s also great work being done on AI Fairness 360, an open-source toolkit that helps teams examine, report, and mitigate discrimination and bias in their machine learning models. The best part? You can use its metrics and datasets to detect bias in your own AI experiences today.

AI Fairness 360 is an open source toolkit that developers can use to mitigate discrimination and bias in their machine learning models.
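The kind of check described above can be sketched in a few lines. This is a minimal, illustrative version of a group-fairness metric that toolkits like AI Fairness 360 call “disparate impact” — the ratio of favorable-outcome rates between an unprivileged and a privileged group. The group names, the sample decisions, and the 0.8 threshold (the common “four-fifths rule”) are all illustrative assumptions, not OpenScale’s actual implementation.

```python
# Sketch of a disparate-impact check: compare favorable-outcome rates
# between two groups and alert when the ratio falls below a threshold.
# All data and the 0.8 cutoff are hypothetical, for illustration only.

def favorable_rate(outcomes):
    """Fraction of favorable (truthy) outcomes in a group."""
    return sum(1 for o in outcomes if o) / len(outcomes)

def disparate_impact(unprivileged, privileged):
    """Ratio of favorable rates; values well below 1.0 suggest bias."""
    return favorable_rate(unprivileged) / favorable_rate(privileged)

# Hypothetical loan decisions (True = approved) for two age groups.
age_18_64 = [True, True, True, False, True, True, False, True]       # 75% favorable
age_65_105 = [True, False, False, False, True, False, False, False]  # 25% favorable

ratio = disparate_impact(age_65_105, age_18_64)
print(f"disparate impact: {ratio:.2f}")  # 0.25 / 0.75 ≈ 0.33
if ratio < 0.8:  # "four-fifths rule" threshold
    print("alert: favorable-outcome rate for age 65-105 is below threshold")
```

A monitoring tool would run a check like this continuously against live scoring data rather than a static list, but the underlying comparison is the same.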

Walking the voice and tone tightrope is a real challenge for anybody designing or writing for AI. It’s all about finding the balance between too technical and overly simplified. The language that you use within your experiences can go a long way when it comes to building trust with your users. We’ve found that it is best to be succinct and value-driven, and use straightforward language. It’s equally important that you don’t personify the AI because…it isn’t a person. At IBM, this means paying close attention to the language we use when our users are directly interacting with Watson. For example, instead of saying “What can I help you with?”, the Watson avatar should lead with something personal and user-focused like, “What do you want help with next?”

If you use a suite of products, you expect the same key commands and icons to behave consistently as you move between products. Well, the same goes for AI experiences.

When we’re designing with AI, we should intentionally keep common elements consistent from product to product. To do this, you can define and leverage AI design patterns. We’ve established universal patterns that apply to any moment where Watson is providing guidance or insight. This consistent look and feel ensures that your users don’t have to relearn a new language every time they open a product with AI capabilities.

Examples of AI design patterns developed for IBM’s products.

Predictability is established through consistency. As you continue to deliver these transparent and easily-recognizable AI moments, you’ll get to the point where your users will grow accustomed to AI working alongside them. Ultimately, the need to overtly highlight these experiences will diminish because they will start to understand the possibilities and limits of AI. When you iterate on your experiences, always consider the future and how your users’ understanding and comfort with AI will evolve over time. It’s your job to guide them along the path toward AI maturity, meet them where they’re at, and avoid as many unknowns and surprises as possible.

To continuously educate your user and meet them where they’re at along their journey to AI, we recommend leaning on the principles of progressive disclosure. As designers, it’s our job to account for our users’ needs and serve up relevant guidance or content when they need it. We’ve found that most users crave a high-level understanding of what’s going on, but not all of them want to delve into the mechanics of AI. From in-product guidance to expert-level documentation, be sure to consider the moments where your users might need to dive deeper and those times when the complexity might just be too much.

When you’re designing for trust within AI experiences, it’s important to think about the unique ways AI can be used to help your users accomplish their goals. One way to do this is to think about clarity. Ask yourself: “How can AI help our users see beyond the obvious?” and, on the design side: “What do we need to do to make sure everything within this AI experience is clear and consumable?” Often, this comes down to seeing beyond the obvious and translating complex insights into plain language.

Insights presented in plain language within Cognos Analytics.

At IBM, this sense of clarity is extremely relevant within Cognos Analytics, an analytics experience that lets our users explore, visualize, and share insights from their data. With a little help from AI, advanced pattern detection points out interesting relationships our users might not have known were there. And every visualization is accompanied by statistical insights presented in plain language.

In the world of AI and machine learning, accuracy is key. As you design solutions, always be sure to showcase accuracy and relevancy, so your users can clearly understand how confident the model is in the prediction it made. For a model with two output classes (cat vs. tiger), a confidence score of 51% is barely better than flipping a coin, while a confidence score of 99% indicates that the model is very certain of its judgment. Knowing the model’s confidence will help your users gauge how much trust they should place in the recommendation.
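To make the 51%-vs-99% contrast concrete, here is a small sketch of how a classifier’s raw scores typically become the confidence values a user sees, via a softmax. The class labels and score values are illustrative assumptions, not output from a real Watson model.

```python
# Sketch: turning raw two-class scores (logits) into confidence values
# with a softmax. Scores and labels are hypothetical, for illustration.
import math

def softmax(scores):
    """Convert raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

labels = ["cat", "tiger"]

# Nearly tied scores: ~51% confidence, barely better than a coin flip.
near_coin_flip = softmax([0.02, -0.02])

# One score dominates: ~99% confidence, the model is very certain.
confident = softmax([4.6, 0.0])

print(dict(zip(labels, near_coin_flip)))
print(dict(zip(labels, confident)))
```

Surfacing the probability itself, rather than just the winning label, is what lets users decide how much weight to give the recommendation.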

This example of a visual recognition model distinguishes lions from dogs. Notice that when the dog is wearing a lion costume, the confidence isn’t as high, but the model still got the right answer.

When it comes to delivering AI experiences — or any experience, for that matter — a never-ending commitment to co-creation is the key to creating something that your users will trust. This means working alongside your users to identify and design for their real needs. At IBM, we’ve developed a robust user research practice centered around co-creation with our users.

And the only way to successfully guide your users along their journey to AI is through collaboration, innovation, and trust.


