Week 13 - Responsible AI Patterns: The Explainable Predictions Pattern and the Fairness Lens Pattern

Lecture recording here.

Introduction

This week we look at two responsible AI patterns: the explainable predictions pattern and the fairness lens pattern. The explainable predictions pattern applies model explainability techniques to understand how and why models make predictions, and to improve user trust in machine learning systems. The fairness lens pattern uses tools to identify bias in datasets before training and to evaluate trained models through a fairness lens, ensuring that model predictions are equitable across different groups of users and different scenarios.

This week we also take a look at a general approach to design patterns.

Videos

Explainable Predictions Pattern
Interpretable vs Explainable Machine Learning

Assignment(s)

Assignment 6 - Investigation of Design Patterns for Machine Learning

The Explainable Predictions Design Pattern

The Rationale

The Explainable Predictions design pattern aims to provide transparent and interpretable explanations for the predictions or decisions made by machine learning models. As machine learning models are increasingly used in critical domains such as healthcare, finance, and autonomous systems, it becomes crucial to ensure trust and transparency in the decision-making process. Machine learning models are, in general, black boxes, yet a clear understanding of model behavior is essential for diagnosing errors, identifying potential biases, and deciding whether a model can responsibly be employed. Introducing explainability into machine learning is a major factor in Responsible AI. Hence, the key idea of this pattern is to interpret machine learning models so that we understand why and how they made their predictions.

A loosely similar idea appears in the Linux error-code system. When a system error occurs in Linux, an error code is assigned to the global variable errno. These codes are known to the software but are not human readable. See Standard Linux Error Codes. A human-readable form of a code can be obtained by calling the function strerror with errno as its argument; the result is a text representation of the error.
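
To make the analogy concrete, here is a short C++ snippet (not one of the linked examples) that uses the standard errno and strerror facilities; the file path is a deliberately invalid placeholder:

  #include <cerrno>    // errno
  #include <cstdio>    // std::fopen, std::fclose
  #include <cstring>   // std::strerror
  #include <iostream>

  int main() {
      // Attempt to open a file that does not exist, so the call fails.
      std::FILE* f = std::fopen("/no/such/file", "r");
      if (f == nullptr) {
          // errno holds the numeric code; strerror maps it to readable text.
          std::cout << "errno = " << errno
                    << " (" << std::strerror(errno) << ")\n";
          // On Linux this typically prints: errno = 2 (No such file or directory)
      } else {
          std::fclose(f);
      }
      return 0;
  }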

The UML

Here is a UML diagram for the explainable predictions pattern:

  +------------------+
  | ExplainableModel |
  +------------------+
  | - Model          |
  +------------------+
  | + explain()      |
  | + predict()      |
  +------------------+
In this UML diagram, the Explainable Predictions Design Pattern consists of a single component:
  1. ExplainableModel: The ExplainableModel component represents a machine learning model that is designed to provide explainable predictions. It encapsulates the underlying model and provides two main methods: predict(), which returns the model's prediction for a given input, and explain(), which describes how and why that prediction was made.
The ExplainableModel component facilitates the development and usage of machine learning models that prioritize explainability. By incorporating the Explainable Predictions Design Pattern, data scientists and developers can create models that not only provide accurate predictions but also offer insights into the reasoning behind those predictions.
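
As a minimal sketch of this component, the hypothetical class below wraps an arbitrary scoring function and implements explain() as a simple perturbation probe (nudge each feature, report how much the prediction moves); the linked code examples may differ in detail:

  #include <functional>
  #include <iostream>
  #include <vector>

  // Sketch of the ExplainableModel component: it encapsulates the
  // underlying model and exposes predict() and explain().
  class ExplainableModel {
  public:
      explicit ExplainableModel(std::function<double(const std::vector<double>&)> model)
          : model_(std::move(model)) {}

      double predict(const std::vector<double>& x) const { return model_(x); }

      // Explanation as per-feature sensitivities: how much the prediction
      // changes when each feature is nudged by eps.
      std::vector<double> explain(const std::vector<double>& x, double eps = 0.01) const {
          std::vector<double> sensitivity;
          const double base = model_(x);
          for (size_t i = 0; i < x.size(); ++i) {
              std::vector<double> nudged = x;
              nudged[i] += eps;
              sensitivity.push_back((model_(nudged) - base) / eps);
          }
          return sensitivity;
      }

  private:
      std::function<double(const std::vector<double>&)> model_;  // the wrapped model
  };

  int main() {
      // From the caller's point of view, the scorer is a black box.
      ExplainableModel em([](const std::vector<double>& x) {
          return 3.0 * x[0] - 2.0 * x[1];
      });
      std::vector<double> x{1.0, 2.0};
      std::cout << "prediction = " << em.predict(x) << "\n";  // -1
      for (double s : em.explain(x))
          std::cout << "sensitivity: " << s << "\n";          // ~3 and ~-2
  }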

Code Example - Explainable Predictions

Below is a simple code example illustrating the explainable predictions pattern:
C++: ExplainablePredictions.cpp.
C#: ExplainablePredictions.cs.
Java: ExplainablePredictions.java.
Python: ExplainablePredictions.py.

Common Usage

The Explainable Predictions design pattern is gaining importance in the software industry as the need for interpretability and transparency in machine learning models increases. Here are some common usages of the Explainable Predictions design pattern in industry:

  1. Regulatory Compliance: In industries where regulatory compliance is crucial, such as finance, healthcare, and insurance, the Explainable Predictions pattern is used to provide explanations for the predictions made by machine learning models. This helps organizations meet regulatory requirements and ensures that the decision-making process is transparent and accountable.
  2. Risk Assessment and Fraud Detection: Explainable Predictions are extensively used in risk assessment and fraud detection systems. By providing explanations for the factors contributing to a prediction or identifying the important features that influenced the decision, organizations can understand the reasons behind risk scores or fraud alerts. This enables better decision-making and helps in identifying and addressing potential vulnerabilities or biases in the models.
  3. Credit Scoring and Loan Approval: In the financial industry, Explainable Predictions are used to provide transparency in credit scoring and loan approval systems. By explaining the factors that led to a particular credit score or loan decision, organizations can provide customers with understandable justifications and build trust. It also helps in identifying potential biases and discriminatory practices.
  4. Healthcare Diagnosis and Treatment: In healthcare, Explainable Predictions play a vital role in diagnostic systems and treatment recommendation engines. By providing explanations for the predictions made by the models, healthcare professionals can better understand the reasoning behind the recommendations. This facilitates collaboration between clinicians and machine learning models, leading to improved patient care and better-informed decision-making.
  5. Customer Relationship Management: In industries like e-commerce and customer service, Explainable Predictions are used to enhance customer relationship management. By providing explanations for product recommendations, personalized offers, or customer segmentation, organizations can improve customer satisfaction and loyalty. Customers gain insights into why certain recommendations are made, which helps build trust and improves the overall customer experience.
  6. Human Resources and Talent Management: Explainable Predictions are used in human resources and talent management systems to support decision-making in areas such as candidate screening, performance evaluation, and employee retention. By explaining the factors influencing decisions, organizations can ensure fairness, reduce bias, and improve transparency in these processes.
  7. Algorithmic Auditing and Ethical AI: Explainable Predictions are an important component of algorithmic auditing and ethical AI initiatives. Organizations use this pattern to assess the fairness, transparency, and ethical implications of their machine learning models. By providing explanations for predictions, potential biases and discriminatory patterns can be identified, leading to fairer and more ethical decision-making.

Code Problem - Complex Predictions

The PredictionModel abstract base class has three pure virtual methods: loadModel, predict, and explain. These methods are responsible for loading the model, making predictions, and providing explanations, respectively. The base class also declares two additional pure virtual methods, preprocessData and postprocessResults, which represent the steps for preprocessing the data before prediction and post-processing the prediction results.

The ConcretePredictionModel class is updated to implement these additional methods. It performs data preprocessing before making the prediction and performs post-processing on the prediction results.

The ExplainablePredictionModel wrapper class is also updated to delegate the preprocessData and postprocessResults methods to the wrapped model, similar to the existing delegation for other methods.

In the main function, we demonstrate the complete prediction workflow using the explainableModel. We load the model, preprocess the data, make a prediction, explain the prediction, and post-process the prediction results.
PredictionModel.h,
ConcretePredictionModel.h,
ExplainablePredictionModel.h,
ComplexPredictions.cpp.
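
For orientation, here is a compact, self-contained sketch of the structure those files implement; the scaling and averaging logic is a toy stand-in, not the linked code:

  #include <iostream>
  #include <string>
  #include <vector>

  // Abstract base class with the five pure virtual methods described above.
  class PredictionModel {
  public:
      virtual ~PredictionModel() = default;
      virtual void loadModel(const std::string& path) = 0;
      virtual std::vector<double> preprocessData(const std::vector<double>& raw) = 0;
      virtual double predict(const std::vector<double>& data) = 0;
      virtual std::string explain(const std::vector<double>& data) = 0;
      virtual double postprocessResults(double prediction) = 0;
  };

  class ConcretePredictionModel : public PredictionModel {
  public:
      void loadModel(const std::string& path) override {
          std::cout << "Loading model from " << path << "\n";
      }
      std::vector<double> preprocessData(const std::vector<double>& raw) override {
          std::vector<double> scaled;                  // toy step: scale into [0,1]
          for (double v : raw) scaled.push_back(v / 100.0);
          return scaled;
      }
      double predict(const std::vector<double>& data) override {
          double sum = 0.0;                            // toy model: mean of inputs
          for (double v : data) sum += v;
          return data.empty() ? 0.0 : sum / data.size();
      }
      std::string explain(const std::vector<double>& data) override {
          return "Prediction is the mean of " + std::to_string(data.size()) +
                 " preprocessed features.";
      }
      double postprocessResults(double prediction) override {
          return prediction * 100.0;                   // undo the scaling
      }
  };

  // Wrapper that delegates every step to the wrapped model.
  class ExplainablePredictionModel : public PredictionModel {
  public:
      explicit ExplainablePredictionModel(PredictionModel& m) : model_(m) {}
      void loadModel(const std::string& p) override { model_.loadModel(p); }
      std::vector<double> preprocessData(const std::vector<double>& r) override {
          return model_.preprocessData(r);
      }
      double predict(const std::vector<double>& d) override { return model_.predict(d); }
      std::string explain(const std::vector<double>& d) override { return model_.explain(d); }
      double postprocessResults(double p) override { return model_.postprocessResults(p); }
  private:
      PredictionModel& model_;
  };

  int main() {
      ConcretePredictionModel concrete;
      ExplainablePredictionModel explainableModel(concrete);

      // The complete workflow: load, preprocess, predict, explain, post-process.
      explainableModel.loadModel("model.bin");
      std::vector<double> data = explainableModel.preprocessData({40.0, 60.0, 80.0});
      double prediction = explainableModel.predict(data);
      std::cout << explainableModel.explain(data) << "\n";
      std::cout << "Final result: " << explainableModel.postprocessResults(prediction) << "\n";
  }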

Code Problem - Linear Regression Explainer

The following code implements a linear regression model and an explainer class to provide explanations for predictions.
MLModel.h,
ExplainablePrediction.h,
LinearRegressionModel.h,
LinearRegressionExplainer.h,
LRExplainerMain.cpp.
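
As a rough sketch of what such a pair of classes might look like (the weights here are fixed illustrative values, not the linked code), note that for a linear model each feature's contribution to the prediction is simply its weight times its value:

  #include <iostream>
  #include <vector>

  // Minimal linear regression model: y = bias + sum(w_i * x_i).
  class LinearRegressionModel {
  public:
      LinearRegressionModel(std::vector<double> weights, double bias)
          : weights_(std::move(weights)), bias_(bias) {}

      double predict(const std::vector<double>& x) const {
          double y = bias_;
          for (size_t i = 0; i < weights_.size() && i < x.size(); ++i)
              y += weights_[i] * x[i];
          return y;
      }
      const std::vector<double>& weights() const { return weights_; }
      double bias() const { return bias_; }

  private:
      std::vector<double> weights_;
      double bias_;
  };

  // Explainer: reports each feature's additive contribution w_i * x_i.
  class LinearRegressionExplainer {
  public:
      explicit LinearRegressionExplainer(const LinearRegressionModel& m) : model_(m) {}

      void explain(const std::vector<double>& x) const {
          std::cout << "bias contributes " << model_.bias() << "\n";
          for (size_t i = 0; i < model_.weights().size() && i < x.size(); ++i)
              std::cout << "feature " << i << " contributes "
                        << model_.weights()[i] * x[i] << "\n";
      }

  private:
      const LinearRegressionModel& model_;
  };

  int main() {
      LinearRegressionModel model({2.0, -0.5}, 1.0);
      std::vector<double> x{3.0, 4.0};
      std::cout << "prediction = " << model.predict(x) << "\n";  // 1 + 6 - 2 = 5
      LinearRegressionExplainer(model).explain(x);
  }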

The Fairness Lens Design Pattern

The Rationale

The Fairness Lens design pattern is about making sure that machine-learning models behave fairly toward different groups of people, especially when the model influences important decisions (loans, hiring, policing, healthcare, school admissions, etc.). The Fairness Lens ML design pattern exists because models often inherit human biases in the data, so fairness cannot be assumed and must be explicitly measured across different demographic groups using definitions appropriate to the domain. It prevents real-world harm and maintains trust by ensuring no group is systematically disadvantaged, and it treats fairness as an ongoing responsibility throughout data collection, model training, deployment, and drift monitoring. The pattern also aligns ML systems with ethical expectations and growing regulatory requirements in sectors like finance, healthcare, employment, and public services.

The UML

Here is a UML for the fairness lens design pattern where MLSystem uses a Model and a FairnessLens, FairnessLens runs one or more FairnessMetrics on different Groups, and Results go into a FairnessReport with per-group MetricResults:
Fairness Lens UML

Code Example - Fairness Lens

The code below shows a tiny version of the Fairness Lens idea. We have a Model that makes basic predictions by checking whether a score is above 0.5. Two groups (Group A and Group B) each have a few scores, and the FairnessMetric calculates how many of those scores the model predicts as positive. The FairnessLens then applies this metric to each group and prints the results. Because Group A has mostly high scores, it receives more positive predictions than Group B, which has only low scores. This difference illustrates how a fairness check can reveal unequal outcomes between groups, even in a very simple system.
C++: FairnessLens.cpp,
C#: FairnessLens.cs,
Java: FairnessLensSimple.java,
Python: FairnessLens.py
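
For readers without the linked files at hand, here is a minimal sketch along the lines of that description (the group names and scores are illustrative):

  #include <iostream>
  #include <string>
  #include <vector>

  struct Model {
      bool predict(double score) const { return score > 0.5; }  // simple threshold
  };

  struct Group {
      std::string name;
      std::vector<double> scores;
  };

  // Metric: fraction of a group's members receiving a positive prediction.
  struct FairnessMetric {
      double positiveRate(const Model& model, const Group& g) const {
          if (g.scores.empty()) return 0.0;
          int positives = 0;
          for (double s : g.scores)
              if (model.predict(s)) ++positives;
          return static_cast<double>(positives) / g.scores.size();
      }
  };

  // The lens applies the metric to every group and reports the results.
  struct FairnessLens {
      void audit(const Model& model, const std::vector<Group>& groups) const {
          FairnessMetric metric;
          for (const Group& g : groups)
              std::cout << g.name << " positive rate: "
                        << metric.positiveRate(model, g) << "\n";
      }
  };

  int main() {
      Model model;
      std::vector<Group> groups = {
          {"Group A", {0.9, 0.8, 0.7, 0.4}},   // mostly high scores
          {"Group B", {0.2, 0.3, 0.1, 0.4}},   // only low scores
      };
      FairnessLens().audit(model, groups);
      // Group A gets rate 0.75, Group B gets rate 0 -- exactly the kind of
      // gap the fairness check is designed to surface.
  }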

Common Usage

Below are some common usages of the Fairness Lens design pattern in industry.

Finance and Banking: In finance, banks and lenders use fairness lenses to check whether their loan approval, credit scoring, and fraud detection models treat different groups (such as age, gender, or neighborhood) fairly, and to prove to regulators that the models do not unfairly deny credit or set worse terms for protected groups.

Hiring and HR: In hiring and human resources, companies apply fairness checks to resume screeners, interview scorers, and job-matching tools to see if candidates from different backgrounds get similar chances, and to adjust or redesign models that might quietly favor one group over another.

Healthcare and Medical AI: In healthcare, hospitals and health-tech companies use fairness lenses to compare model accuracy and treatment recommendations across groups (for example different ages, genders, or skin tones) so that diagnostic tools and risk scores do not systematically miss or mistreat certain patients.

Public Sector and Government Services: Governments and public agencies use fairness evaluations for algorithms that support decisions about benefits, housing, policing, and immigration, checking whether people from different communities receive similar treatment and changing or limiting models that create unfair outcomes.

Online Platforms and Advertising: Large online platforms apply fairness lenses to recommendation systems, search ranking, and ad delivery, to see whether certain groups or creators are being shown less often or given fewer opportunities, and to adjust ranking or targeting rules when they detect unfair patterns.

Insurance: Insurance companies use fairness checks to analyze pricing and risk models for auto, home, or life insurance so that people with similar real-world risk are treated similarly, and to make sure their algorithms do not indirectly use sensitive attributes like gender or race to set higher premiums.

Retail and Customer Analytics: In retail, fairness lenses are used to inspect personalized pricing, promotions, and customer scoring models so that customers in certain locations, income levels, or demographic groups are not consistently offered worse deals or poorer service compared to others.

AI Platforms and Tools Vendors: Big technology companies that provide ML tools and cloud services build fairness-lens components (such as fairness metrics, dashboards, and reports) into their platforms so that their customers—banks, hospitals, retailers, and governments—can routinely test and document the fairness of the models they deploy.

Code Problem - Fairness ML Auditor

Here is a UML for the fairness machine learning auditor:
Fairness ML Auditor UML

Code that demonstrates the main ideas from the UML (MLSystem, DataSet, Model, FairnessLens, FairnessMetric, BiasMitigationStrategy, FairnessReport, Group, DataStream, MetricResult) can be seen below:
BasicDataTypes.h, the structure types for Group, MetricResult, and DataPoint,
DataSet.h, the DataSet class,
Model.h, the Model class,
FairnessMetric.h, the FairnessMetric class,
DemographicParityMetric.h, the DemographicParityMetric class,
BiasMitigationStrategy.h, the BiasMitigationStrategy class,
ThresholdAdjustmentStrategy.h, the ThresholdAdjustmentStrategy class,
FairnessReport.h, the FairnessReport class,
DataStream.h, the DataStream class,
FairnessLens.h, the FairnessLens class,
MLSystem.h, the MLSystem class,
FairMLAuditor.cpp, the main function.
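
As one concrete illustration of the mitigation side, here is a hypothetical sketch of the idea behind a ThresholdAdjustmentStrategy: lower a disadvantaged group's decision threshold until its positive rate matches a target. The scores and the simple grid search are illustrative assumptions, not the linked implementation:

  #include <algorithm>
  #include <cmath>
  #include <iostream>
  #include <vector>

  // Fraction of scores classified positive at the given threshold.
  double positiveRate(const std::vector<double>& scores, double threshold) {
      int pos = std::count_if(scores.begin(), scores.end(),
                              [threshold](double s) { return s > threshold; });
      return scores.empty() ? 0.0 : static_cast<double>(pos) / scores.size();
  }

  // Grid-search the threshold whose positive rate is closest to the target.
  double adjustThreshold(const std::vector<double>& scores, double target) {
      double best = 0.5, bestGap = 1.0;
      for (double t = 0.0; t <= 1.0; t += 0.01) {
          double gap = std::abs(positiveRate(scores, t) - target);
          if (gap < bestGap) { bestGap = gap; best = t; }
      }
      return best;
  }

  int main() {
      std::vector<double> groupA{0.9, 0.8, 0.7, 0.4};
      std::vector<double> groupB{0.45, 0.3, 0.35, 0.2};

      double targetRate = positiveRate(groupA, 0.5);   // 0.75 with the default cut
      double newThreshold = adjustThreshold(groupB, targetRate);

      std::cout << "Group B rate at 0.5: " << positiveRate(groupB, 0.5) << "\n";
      std::cout << "Adjusted threshold: " << newThreshold
                << " gives rate " << positiveRate(groupB, newThreshold) << "\n";
  }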

Design Patterns - A Generalization

This concludes our study of design patterns, both standard and specific to machine learning. Having studied them, we can notice a common theme running through all of them: on one side there is an interface to some concrete components (or several interfaces to several sets of concrete components), and on the other side there is a manager or director (or server or publisher) of some sort that works with those components. There may even be several managers/directors/servers/publishers, each accessible through its own interface. See the diagram below:
Design Patterns, a Generalization
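
As a rough code rendering of that general shape (the names Component and Director here are generic placeholders, not taken from any single pattern):

  #include <iostream>
  #include <memory>
  #include <vector>

  // The interface to the concrete components.
  class Component {
  public:
      virtual ~Component() = default;
      virtual void operate() = 0;
  };

  class ConcreteComponentA : public Component {
  public:
      void operate() override { std::cout << "Component A operating\n"; }
  };

  class ConcreteComponentB : public Component {
  public:
      void operate() override { std::cout << "Component B operating\n"; }
  };

  // The manager/director: clients talk to it, and it works with the
  // components only through their shared interface.
  class Director {
  public:
      void add(std::unique_ptr<Component> c) { components_.push_back(std::move(c)); }
      void run() {
          for (auto& c : components_) c->operate();
      }
  private:
      std::vector<std::unique_ptr<Component>> components_;
  };

  int main() {
      Director director;
      director.add(std::make_unique<ConcreteComponentA>());
      director.add(std::make_unique<ConcreteComponentB>());
      director.run();
  }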

Compare the above generalization to our standard design patterns. The UMLs can be seen at:
Week 1: Singleton Pattern
Factory Design Pattern
Week 2: Abstract Factory Pattern
Builder Pattern
Prototype Pattern
Week 3: Adapter Pattern
Bridge Pattern
Week 4: Decorator Pattern
Composite Pattern
Week 5: Proxy Pattern
Flyweight Pattern
Week 6: Observer Pattern
Strategy Pattern
Week 7: Template Pattern
Chain of Responsibility Pattern
Compare this generalization to the machine learning design patterns. See the lecture notes at:
Week 8,
Week 9,
Week 10,
Week 11,
Week 12,
Week 13.