#2 - The explainable AI (xAI) cheat sheet for product designers
The new frontier in human-machine harmony
“Without explainability, artificial intelligence remains a mysterious tool, leaving us in the dark about its inner workings and potential biases.”
👋 Welcome to the second issue of Creative Currents! Get ready to be inspired and informed as we dive into the latest trends, tips, and insights from the world of explainable AI. Whether you're a seasoned product designer or just starting out, this newsletter is your go-to resource for all things around creating human-centered products that people ❤️
So, sit back, relax, and let's embark on this creative journey together!
What is explainable AI (xAI), and why should you care?
As a product designer working with different stakeholders, XAI helps you understand and communicate how AI systems make decisions. It enables the following:
👉 Transparency and Trust: XAI makes the reasoning of AI systems visible and understandable to humans. This transparency fosters trust in AI systems and allows users to validate the outcomes, increasing their acceptance and adoption.
👉 Accountability and Ethics: XAI supports accountability and ethical considerations in AI applications. When AI systems make decisions that impact individuals or society, it is crucial to understand how those decisions are reached. XAI helps identify potential biases, discrimination, or unfairness in AI models, making it easier to address and rectify such issues.
👉 Error Detection and Debugging: XAI techniques aid in error detection and debugging of AI models. By providing explanations, XAI helps identify potential errors or biases in the data, training process, or model architecture. It allows developers and researchers to understand why an AI system might be producing incorrect or unexpected results, enabling them to make necessary improvements and fixes (see the short sketch after this list).
👉 Regulatory Compliance: XAI is becoming increasingly important due to regulatory requirements in various sectors. In domains like healthcare, finance, and legal systems, where AI is used to make critical decisions, regulations often demand explanations for those decisions. XAI techniques provide a means to comply with such regulations by offering interpretable and understandable explanations for AI-generated outputs.
👉 User Adoption and Acceptance: XAI plays a significant role in facilitating user adoption and acceptance of AI systems. Many people are skeptical or hesitant to trust AI systems when the decision-making process is opaque and difficult to comprehend. By making that process legible, XAI lowers this barrier to adoption.
👉 Learning and Improvement: XAI helps in the continuous improvement of AI systems. Explanations generated by XAI techniques can be used to refine and enhance the underlying AI models. By analyzing the explanations and user feedback, developers can identify areas where the model can be improved, leading to better performance, reduced biases, and increased accuracy.
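To make the debugging point concrete, here is a minimal sketch of explanation-driven debugging, assuming a scikit-learn workflow on synthetic data (the dataset and model are illustrative). Permutation importance, a simple model-agnostic explanation technique, shows which features the model leans on; an implausibly dominant feature (say, an ID column) is a classic sign of data leakage:

```python
# Minimal sketch: using a simple explanation technique to debug a model.
# Assumes a scikit-learn setup with synthetic data for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much test accuracy drops;
# a large drop means the model depends heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```

If one feature's score dwarfs the rest, that is your cue to inspect the data pipeline before shipping the model.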
Picks from the Editorial Team 🤌
//1 AI Can Be Both Accurate and Transparent (HBR - LINK) - The article discusses the use of opaque algorithms and the risks associated with their lack of explainability. It challenges the assumption that unexplainable algorithms are inherently more accurate and presents research showing that simple, transparent models can be just as accurate as complex black-box models in many cases. The authors provide steps for organizations to consider before adopting a black-box approach, emphasizing the importance of knowing the data, users, organization, and regulations involved. They highlight the value of explainability and transparency in building trust and reducing biases in AI systems.
//2 Explaining the Unexplainable: Explainable AI for UX (LINK) - The article emphasizes the significance of XAI in addressing issues like illegitimate conclusions, bias and fairness, model monitoring and optimization, human trust and adoption of AI, and compliance with regulations. It also highlights the importance of tailoring explanations to different user groups and being mindful of human biases. The role of user experience (UX) professionals in designing ML explanations that are understandable, satisfactory, useful, and trustworthy is discussed. The article concludes by emphasizing the need for early involvement of UX in ML application design to ensure a positive user experience.
Source uxpamagazine.org
//3 Explainable AI for designers (LINK) - The link provides strategic design elements for explaining AI systems from a design perspective and is highly relevant for application in practice.
User: Who are your users and why do they need an explanation?
Context: When do users need an explanation?
Methods: What kind of explanation should be used?
Evaluation: How can explanations be assessed and validated?
Source uxai.design
HOW to apply it in practice 🛠️
👉 Guidance for choosing the right algorithm: AI explainability spans many different algorithms, each capturing a different way of explaining, which can make selecting the right one for a given application daunting. The following decision tree will help you choose (LINK); a minimal code sketch of the idea appears below.
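As a companion to the decision tree, here is a minimal, hypothetical sketch of such branching logic in Python. The inputs (scope, model access, user question) and the technique families it returns are illustrative assumptions, not a definitive taxonomy:

```python
def suggest_xai_technique(scope: str, model_access: str, question: str) -> str:
    """Illustrative decision tree for picking an XAI technique family.

    scope: "global" (explain the whole model) or "local" (one prediction)
    model_access: "white-box" (internals available) or "black-box"
    question: the user's question, used here only to detect contrastive needs
    """
    if scope == "global":
        if model_access == "white-box":
            # Internals are inspectable: read the model directly
            return "intrinsic interpretation (e.g., linear weights, tree rules)"
        # Approximate the black box with a simpler, readable model
        return "global surrogate model (e.g., decision-tree approximation)"
    # Local scope: explain a single prediction
    if "why not" in question.lower() or "change" in question.lower():
        # 'Why not X?' and 'How do I get X?' call for contrastive answers
        return "counterfactual explanation"
    return "local feature attribution (e.g., LIME- or SHAP-style)"

# Example: a designer probing a single loan-denial decision
print(suggest_xai_technique("local", "black-box", "Why not approved?"))
```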
👉 Ask the right question to different types of users: One way to assess a user's need for explainability is to imagine the questions they would ask in order to understand the AI system. Below are several typical tasks users perform with AI, along with the questions they might ask to accomplish them (LINK)
Source medium.com
👉 Match user questions with different XAI techniques: Common user questions to AI have been summarized into the following nine groups. Once the question is clear, the right explanation can be defined.
HOW: Inquiring about the overarching logic or methodology employed by the AI, to get a comprehensive picture
WHY: Inquiring about the underlying rationale behind a particular prediction
WHY NOT: Inquiring why the prediction deviates from an anticipated or preferred outcome
HOW TO CHANGE TO BE THAT: Inquiring about potential modifications to the input that would yield a different prediction
HOW TO REMAIN TO BE THIS: Inquiring about the permissible alterations to the input while maintaining the same prediction
WHAT IF: Inquiring how the prediction changes when the input changes
PERFORMANCE: Inquiring about how accurate and reliable the AI's predictions are
DATA: Inquiring about the nature of the training data used by the AI
OUTPUT: Inquiring about the expected implications or potential applications of the AI's output
The following table matches each question group with example XAI techniques and makes a great cheat sheet for product designers (LINK); a minimal lookup sketch follows below.
Source medium.com
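As a code-flavored companion to that chart, here is a minimal lookup sketch in Python. The question-to-technique pairings follow common practice (feature attribution for "why", counterfactuals for "why not"), but treat them as illustrative defaults rather than a definitive mapping:

```python
# Hypothetical question-to-technique lookup, mirroring the cheat sheet above.
QUESTION_TO_TECHNIQUE = {
    "HOW": "global explanation (e.g., surrogate model, feature importance)",
    "WHY": "local feature attribution (e.g., LIME- or SHAP-style)",
    "WHY NOT": "contrastive / counterfactual explanation",
    "HOW TO CHANGE TO BE THAT": "counterfactual with actionable feature changes",
    "HOW TO REMAIN TO BE THIS": "anchor-style sufficient conditions",
    "WHAT IF": "what-if / sensitivity analysis on input features",
    "PERFORMANCE": "model performance report (accuracy, confidence, limitations)",
    "DATA": "training-data documentation (e.g., datasheets for datasets)",
    "OUTPUT": "output description (what the system produces and how to use it)",
}

def explain_strategy(question_group: str) -> str:
    # Fall back to a generic suggestion for unmapped questions
    return QUESTION_TO_TECHNIQUE.get(
        question_group.upper(), "start with a local feature attribution"
    )

print(explain_strategy("why not"))  # -> contrastive / counterfactual explanation
```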
HCAI research & key takeaways 📚
👉 Questioning the AI: Informing Design Practices for Explainable AI User Experiences - The authors developed an algorithm-informed XAI question bank to probe user needs for explainability. The study provides insights into the design space of XAI, supports design practices, identifies opportunities for future XAI work, and offers an extended XAI question bank for user-centered XAI creation (LINK)
👉 A Multidisciplinary Survey and Framework for Design and Evaluation of Explainable AI Systems - The article presents a framework that categorizes design goals and evaluation methods, facilitating multidisciplinary XAI teams. It provides step-by-step design guidelines and evaluation methods, along with summarized tables of recommendations for various goals in XAI research (LINK)
Learn 🎓
An executive’s guide to AI / Model explainability (LINK)
Narratives and visuals used for explanation (LINK)
Explainability: Local post-hoc explanation (LINK)
Keynote @ IUI (2019) from David Gunning, who coined the term XAI (YouTube LINK)
Introduction to Explainable AI (YouTube LINK)
Unclassifieds
Highlights from CHI '23 (Microsoft - LINK)
HAX Toolkit - Hands-on tools for AI builders to create effective and responsible human-AI experiences (Microsoft - LINK)
Struggling with a design challenge? Let’s jump on a call
Share the Creative Currents newsletter with at least five friends in your network, then book 👋 ([email protected]) your free consultation session with a design expert
Support the Creative Currents newsletter
Forward it to your product design friends and encourage them to Subscribe
Have product design-related topics like events, newsletters, tools, jobs, or articles you'd like to share with our subscribers? Share Now
Sponsor the next edition and become part of the network