Generating counterfactual explanations with any ML model

The goal of this notebook is to show how to generate CFs for ML models built with frameworks other than TensorFlow or PyTorch. This is a work in progress; here we show how to generate diverse CFs using three model-agnostic methods:

1. Independent random sampling of features
2. Genetic algorithm
3. Querying a KD tree

We use scikit-learn models for demonstration.
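
For reference, all three methods use the same Data and Model objects; only the method argument passed to dice_ml.Dice changes (see the corresponding cells later in this notebook):

# the three model-agnostic explainers used later in this notebook
# exp_random  = dice_ml.Dice(d, m, method="random")   # 1. independent random sampling
# exp_genetic = dice_ml.Dice(d, m, method="genetic")  # 2. genetic algorithm
# exp_KD      = dice_ml.Dice(d, m, method="kdtree")   # 3. KD-tree query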

1. Independent random sampling of features

[1]:
%load_ext autoreload
%autoreload 2
[2]:
# import DiCE
import dice_ml
from dice_ml.utils import helpers # helper functions

import numpy as np
import pandas as pd
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report, accuracy_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.ensemble import RandomForestClassifier

Loading dataset

We use the “adult” income dataset from the UCI Machine Learning Repository (https://archive.ics.uci.edu/ml/datasets/adult). For demonstration purposes, we transform the data as described in the dice_ml.utils.helpers module.

[3]:
dataset = helpers.load_adult_income_dataset()
[4]:
dataset.head()
[4]:
age workclass education marital_status occupation race gender hours_per_week income
0 28 Private Bachelors Single White-Collar White Female 60 0
1 30 Self-Employed Assoc Married Professional White Male 65 1
2 32 Private Some-college Married White-Collar White Male 50 0
3 20 Private Some-college Single Service White Female 35 0
4 41 Self-Employed Some-college Married White-Collar White Male 50 0
[5]:
d = dice_ml.Data(dataframe=dataset, continuous_features=['age', 'hours_per_week'], outcome_name='income')

Training a custom ML model

Below, we build an ML model using scikit-learn to demonstrate how our methods can work with any sklearn model.

[6]:
target = dataset["income"]
# Split data into train and test
from sklearn.model_selection import train_test_split
datasetX = dataset.drop("income", axis=1)
x_train, x_test, y_train, y_test = train_test_split(datasetX,
                                                    target,
                                                    test_size = 0.2,
                                                    random_state=0,
                                                    stratify=target)

numerical=["age", "hours_per_week"]
categorical = x_train.columns.difference(numerical)
from sklearn.compose import ColumnTransformer

# We create the preprocessing pipelines for both numeric and categorical data.
numeric_transformer = Pipeline(steps=[
    ('scaler', StandardScaler())])

categorical_transformer = Pipeline(steps=[
    ('onehot', OneHotEncoder(handle_unknown='ignore'))])

transformations = ColumnTransformer(
    transformers=[
        ('num', numeric_transformer, numerical),
        ('cat', categorical_transformer, categorical)])

# Append classifier to preprocessing pipeline.
# Now we have a full prediction pipeline.
clf = Pipeline(steps=[('preprocessor', transformations),
                      ('classifier', RandomForestClassifier())])
model = clf.fit(x_train, y_train)
/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/sklearn/ensemble/forest.py:245: FutureWarning: The default value of n_estimators will change from 10 in version 0.20 to 100 in 0.22.
  "10 in version 0.20 to 100 in 0.22.", FutureWarning)
[7]:
# provide the trained ML model to DiCE's model object
backend = 'sklearn'
m = dice_ml.Model(model=model, backend=backend)

Generate diverse counterfactuals

[8]:
# initiate DiCE
exp_random = dice_ml.Dice(d, m, method="random")
[9]:
query_instances = x_train[4:6]
[10]:
# generate counterfactuals
dice_exp_random = exp_random.generate_counterfactuals(query_instances, total_CFs=2, desired_class="opposite", verbose=False)

[11]:
dice_exp_random.visualize_as_dataframe(show_only_changes=True)
Query instance (original outcome : 0)
age workclass education marital_status occupation race gender hours_per_week income
0 27 Private School Single Blue-Collar White Male 40 0

Diverse Counterfactual set (new outcome: 1.0)
age workclass education marital_status occupation race gender hours_per_week income
0 47.0 Self-Employed Some-college Married Professional - Female 19.0 1
1 70.0 Government Doctorate Separated Sales Other - 84.0 1
Query instance (original outcome : 1)
age workclass education marital_status occupation race gender hours_per_week income
0 31 Self-Employed Some-college Married Sales Other Male 60 1

Diverse Counterfactual set (new outcome: 0.0)
age workclass education marital_status occupation race gender hours_per_week income
0 23.0 Government Doctorate Separated - - - 2.0 0
1 51.0 - School Divorced Service White Female - 0

It can be observed that the random sampling method produces less sparse CFs (more features changed per CF) compared to DiCE's current implementation. The sparsity issue with random sampling worsens as total_CFs increases.
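
To see this quantitatively, you can count how many features each counterfactual changes relative to its query instance. The following is a minimal sketch; it assumes the object returned by generate_counterfactuals exposes a cf_examples_list whose entries have a final_cfs_df attribute (true for recent dice-ml versions; adapt the attribute names if yours differ):

# Sketch: count changed features per counterfactual (fewer changes = sparser CF).
# Note: exact equality is sensitive to dtype differences (e.g. 27 vs 27.0);
# cast both sides consistently if your CF dataframe stores values as strings.
for (_, query), cf_example in zip(query_instances.iterrows(),
                                  dice_exp_random.cf_examples_list):
    cfs = cf_example.final_cfs_df.drop(columns=['income'])
    n_changed = (cfs != query).sum(axis=1)
    print("features changed per CF:", n_changed.tolist())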

Further, different sets of counterfactuals can be generated with different random seeds.

[12]:
# generate counterfactuals
dice_exp_random = exp_random.generate_counterfactuals(query_instances, total_CFs=4, desired_class="opposite", random_seed=9)  # default random seed is 17
[13]:
dice_exp_random.visualize_as_dataframe(show_only_changes=True)
Query instance (original outcome : 0)
age workclass education marital_status occupation race gender hours_per_week income
0 27 Private School Single Blue-Collar White Male 40 0

Diverse Counterfactual set (new outcome: 1.0)
age workclass education marital_status occupation race gender hours_per_week income
0 36.0 Government Prof-school Separated Sales Other - 22.0 -
1 80.0 - Bachelors - White-Collar - Female 81.0 -
2 73.0 Self-Employed Some-college Married Professional - Female 48.0 1
3 73.0 Government Prof-school Separated Sales Other - 50.0 1
Query instance (original outcome : 1)
age workclass education marital_status occupation race gender hours_per_week income
0 31 Self-Employed Some-college Married Sales Other Male 60 1

Diverse Counterfactual set (new outcome: 0.0)
age workclass education marital_status occupation race gender hours_per_week income
0 21.0 Government Doctorate Separated - - - 14.0 0
1 81.0 Private Bachelors Single White-Collar White Female 30.0 0
2 - - School Divorced Service White Female 84.0 0
3 47.0 Private Bachelors Single White-Collar White Female 19.0 0

Selecting the features to vary

Here, you can restrict DiCE to vary only the features that it makes sense to vary, using the features_to_vary parameter.

[14]:
# generate counterfactuals
dice_exp_random = exp_random.generate_counterfactuals(query_instances, total_CFs=4, desired_class="opposite",
                                       features_to_vary=['workclass','education','occupation','hours_per_week'])
[15]:
dice_exp_random.visualize_as_dataframe(show_only_changes=True)
Query instance (original outcome : 0)
age workclass education marital_status occupation race gender hours_per_week income
0 27 Private School Single Blue-Collar White Male 40 0

Diverse Counterfactual set (new outcome: 1.0)
age workclass education marital_status occupation race gender hours_per_week income
0 - - Bachelors - White-Collar - - 81.0 1
1 - - Bachelors - White-Collar - - 98.0 1
2 - - Bachelors - White-Collar - - 95.0 1
3 - - Assoc - White-Collar - - 88.0 1
Query instance (original outcome : 1)
age workclass education marital_status occupation race gender hours_per_week income
0 31 Self-Employed Some-college Married Sales Other Male 60 1

Diverse Counterfactual set (new outcome: 0.0)
age workclass education marital_status occupation race gender hours_per_week income
0 - Government Prof-school - Other/Unknown - - 24.0 0
1 - Government Prof-school - Other/Unknown - - 17.0 0
2 - Private Assoc - White-Collar - - - 0
3 - Government Doctorate - - - - 84.0 0

Choosing feature ranges

Since the features are sampled randomly, they can freely vary across their range. In the example below, we show how the range of continuous features can be controlled using the permitted_range parameter, which can now be passed during CF generation.

[16]:
# generate counterfactuals
dice_exp_random = exp_random.generate_counterfactuals(query_instances, total_CFs=4, desired_class="opposite",
                                       permitted_range={'age':[22,50],'hours_per_week':[40,60]})
[17]:
dice_exp_random.visualize_as_dataframe(show_only_changes=True)
Query instance (original outcome : 0)
age workclass education marital_status occupation race gender hours_per_week income
0 27 Private School Single Blue-Collar White Male 40 0

Diverse Counterfactual set (new outcome: 1.0)
age workclass education marital_status occupation race gender hours_per_week income
0 - Self-Employed Some-college Married Professional - Female 51.0 1
1 31.0 - Bachelors - White-Collar - Female 57.0 1
2 33.0 Government Prof-school Separated Other/Unknown Other - 46.0 -
3 36.0 Government Prof-school Separated Other/Unknown Other - 47.0 1
Query instance (original outcome : 1)
age workclass education marital_status occupation race gender hours_per_week income
0 31 Self-Employed Some-college Married Sales Other Male 60 1

Diverse Counterfactual set (new outcome: 0.0)
age workclass education marital_status occupation race gender hours_per_week income
0 39.0 Government Doctorate Separated - - - 40.0 0
1 - Private Assoc - Professional White Female 40.0 0
2 40.0 Government Prof-school Separated Other/Unknown - - - 0
3 26.0 Other/Unknown HS-grad Divorced Blue-Collar - - - 0

2. Genetic Algorithm

Here, we show how DiCE can be used to generate CFs for any ML model by using a genetic algorithm to find counterfactuals close to the query point. The genetic algorithm converges quickly and promotes diverse counterfactuals.
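
For intuition, here is a toy sketch of the genetic-algorithm idea for continuous features only. This is not DiCE's actual implementation; predict_fn (a function mapping a 2-D feature array to class labels), query, and feature_ranges are placeholders you would supply:

import numpy as np

def genetic_cf_sketch(predict_fn, query, feature_ranges, desired_class,
                      pop_size=50, generations=20, mutation_rate=0.3, seed=0):
    # Toy GA: evolve candidates towards the desired class while staying close
    # to the query instance. Continuous features only; illustration, not DiCE.
    rng = np.random.default_rng(seed)
    query = np.asarray(query, dtype=float)
    lo = np.array([r[0] for r in feature_ranges], dtype=float)
    hi = np.array([r[1] for r in feature_ranges], dtype=float)

    def fitness(pop):
        validity = (predict_fn(pop) == desired_class).astype(float)       # reach the desired class
        proximity = -np.abs(pop - query).mean(axis=1) / (hi - lo).mean()  # stay close to the query
        return validity + proximity

    pop = rng.uniform(lo, hi, size=(pop_size, len(query)))                # random initial population
    for _ in range(generations):
        parents = pop[np.argsort(fitness(pop))[-pop_size // 2:]]          # keep the fitter half
        # uniform crossover between pairs of random parents, then random mutation
        p1 = parents[rng.integers(len(parents), size=pop_size - len(parents))]
        p2 = parents[rng.integers(len(parents), size=pop_size - len(parents))]
        children = np.where(rng.random(p1.shape) < 0.5, p1, p2)
        mutate = rng.random(children.shape) < mutation_rate
        children = np.where(mutate, rng.uniform(lo, hi, size=children.shape), children)
        pop = np.vstack([parents, children])
    return pop[np.argmax(fitness(pop))]                                   # fittest candidate found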

Training a custom ML model

Currently, the genetic algorithm method works with scikit-learn models. We will use the same model as trained earlier in the notebook. Support for TensorFlow 1 & 2 and PyTorch will be added soon.

Generate diverse counterfactuals

[18]:
# initiate DiceGenetic
exp_genetic = dice_ml.Dice(d, m, method='genetic')
[19]:
# generate counterfactuals
dice_exp_genetic = exp_genetic.generate_counterfactuals(query_instances, total_CFs=4, desired_class=0, verbose=True)
Initializing initial parameters to the genetic algorithm...
Initialization complete! Generating counterfactuals...
Diverse Counterfactuals found! total time taken: 00 min 03 sec
Initializing initial parameters to the genetic algorithm...
Initialization complete! Generating counterfactuals...
Diverse Counterfactuals found! total time taken: 00 min 03 sec
[20]:
dice_exp_genetic.visualize_as_dataframe(show_only_changes=True)
Query instance (original outcome : 0)
age workclass education marital_status occupation race gender hours_per_week income
0 27 Private School Single Blue-Collar White Male 40 0

Diverse Counterfactual set (new outcome: 0)
age workclass education marital_status occupation race gender hours_per_week income
0 - - - - - - - - -
1 - - - - - - - - -
2 - - - - - - - - -
3 - - - - - - - - -
Query instance (original outcome : 1)
age workclass education marital_status occupation race gender hours_per_week income
0 31 Self-Employed Some-college Married Sales Other Male 60 1

Diverse Counterfactual set (new outcome: 0)
age workclass education marital_status occupation race gender hours_per_week income
0 - Private - - Blue-Collar White - - 0
1 30.0 Private - - White-Collar White - - 0
2 - Private - - Blue-Collar White - - 0
3 - - HS-grad - Blue-Collar White - - 0

You can also restrict the genetic algorithm to vary only the features that you wish to vary.

[21]:
# generate counterfactuals
dice_exp_genetic = exp_genetic.generate_counterfactuals(query_instances, total_CFs=2, desired_class=0,
                                       features_to_vary=['workclass','education','occupation','hours_per_week'])
dice_exp_genetic.visualize_as_dataframe(show_only_changes=True)
Query instance (original outcome : 0)
age workclass education marital_status occupation race gender hours_per_week income
0 27 Private School Single Blue-Collar White Male 40 0

Diverse Counterfactual set (new outcome: 0)
age workclass education marital_status occupation race gender hours_per_week income
0 - - - - - - - - -
1 - - - - - - - - -
Query instance (original outcome : 1)
age workclass education marital_status occupation race gender hours_per_week income
0 31 Self-Employed Some-college Married Sales Other Male 60 1

Diverse Counterfactual set (new outcome: 0)
age workclass education marital_status occupation race gender hours_per_week income
0 - - - - Blue-Collar - - - 0
1 - - - - Blue-Collar - - - 0

You can also constrain the continuous features to vary only within a permitted range.

[22]:
# generate counterfactuals
dice_exp_genetic = exp_genetic.generate_counterfactuals(query_instances, total_CFs=2, desired_class=0,
                                                        permitted_range={'age':[22,50],'hours_per_week':[40,60]})
dice_exp_genetic.visualize_as_dataframe(show_only_changes=True)
Query instance (original outcome : 0)
age workclass education marital_status occupation race gender hours_per_week income
0 27 Private School Single Blue-Collar White Male 40 0

Diverse Counterfactual set (new outcome: 0)
age workclass education marital_status occupation race gender hours_per_week income
0 - - - - - - - - -
1 - - - - - - - - -
Query instance (original outcome : 1)
age workclass education marital_status occupation race gender hours_per_week income
0 31 Self-Employed Some-college Married Sales Other Male 60 1

Diverse Counterfactual set (new outcome: 0)
age workclass education marital_status occupation race gender hours_per_week income
0 - Private - - Blue-Collar White - - 0
1 - Private - - Blue-Collar White - - 0

3. Querying a KD Tree

Here, we show how DiCE can be used to generate CFs for any ML model by finding the closest points in the dataset that yield the desired class as output. We do this efficiently by building KD trees for each class and querying the KD tree of the desired class for the k closest counterfactuals in the dataset. Finding the closest points from the training data itself helps ensure that the displayed counterfactuals are feasible.
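
The core idea can be sketched directly with scikit-learn's KDTree (a simplified illustration, not DiCE's implementation; it reuses model, x_train and query_instances from the earlier cells and assumes the desired class is 1):

from sklearn.neighbors import KDTree

desired_class = 1
# encode the training data with the pipeline's preprocessor; densify if the
# one-hot encoder returned a sparse matrix
encoded = model.named_steps['preprocessor'].transform(x_train)
encoded = encoded.toarray() if hasattr(encoded, 'toarray') else encoded
# build a KD tree only over points the model predicts as the desired class
mask = model.predict(x_train) == desired_class
tree = KDTree(encoded[mask])
# query the tree for the k closest same-class training points
q_encoded = model.named_steps['preprocessor'].transform(query_instances)
q_encoded = q_encoded.toarray() if hasattr(q_encoded, 'toarray') else q_encoded
dist, idx = tree.query(q_encoded, k=4)
candidate_cfs = x_train[mask].iloc[idx[0]]   # 4 nearest candidates for the first query instance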

Training a custom ML model

Currently, the KD tree method works with scikit-learn models. Again, we will use the same model as trained earlier in the notebook. Support for TensorFlow 1 & 2 and PyTorch will be added soon.

Generate diverse counterfactuals

[23]:
# initiate DiceKD
exp_KD = dice_ml.Dice(d, m, method='kdtree')
[24]:
# generate counterfactuals
dice_exp_KD = exp_KD.generate_counterfactuals(query_instances, total_CFs=4, desired_class="opposite")
[25]:
dice_exp_KD.visualize_as_dataframe(show_only_changes=True)
Query instance (original outcome : 0)
age workclass education marital_status occupation race gender hours_per_week income
0 27 Private School Single Blue-Collar White Male 40 0

Diverse Counterfactual set (new outcome: 1.0)
age workclass education marital_status occupation race gender hours_per_week income
0 26.0 - Bachelors Married - - - - 1
1 - - Assoc Married White-Collar - - - 1
2 - Government Some-college - Service - - - 1
3 - - Some-college Married - Other - - 1
Query instance (original outcome : 1)
age workclass education marital_status occupation race gender hours_per_week income
0 31 Self-Employed Some-college Married Sales Other Male 60 1

Diverse Counterfactual set (new outcome: 0.0)
age workclass education marital_status occupation race gender hours_per_week income
0 32.0 - - - - - - - 0
1 - - - - - White - - 0
2 - - - - Blue-Collar - - - 0
3 - - Assoc - Professional - - - 0

Selecting the features to vary

Here, again, you can restrict the search to vary only the features you wish to vary. Please note that the output counterfactuals come only from the training data; if you want other counterfactuals, use the random or genetic method.

[26]:
# generate counterfactuals
dice_exp_KD = exp_KD.generate_counterfactuals(query_instances, total_CFs=4, desired_class="opposite",
                                       features_to_vary=['age', 'workclass','education','occupation','hours_per_week'])
[27]:
dice_exp_KD.visualize_as_dataframe(show_only_changes=True)
Query instance (original outcome : 0)
age workclass education marital_status occupation race gender hours_per_week income
0 27 Private School Single Blue-Collar White Male 40 0

Diverse Counterfactual set (new outcome: 1.0)
age workclass education marital_status occupation race gender hours_per_week income
0 - Government Some-college - Service - - - 1
Query instance (original outcome : 1)
age workclass education marital_status occupation race gender hours_per_week income
0 31 Self-Employed Some-college Married Sales Other Male 60 1

Diverse Counterfactual set (new outcome: 0.0)
age workclass education marital_status occupation race gender hours_per_week income
0 32.0 - - - - - - - 0
1 - - - - Blue-Collar - - - 0
2 - - Assoc - Professional - - - 0
3 - Private Bachelors - White-Collar - - - 0

Selecting the feature ranges

Here, you can control the ranges of the continuous features. Since the counterfactuals are drawn from the training data, a narrow permitted range may leave no valid counterfactuals, as happens for the first query instance below.

[28]:
# generate counterfactuals
dice_exp_KD = exp_KD.generate_counterfactuals(query_instances, total_CFs=5, desired_class="opposite",
                                                        permitted_range={'age':[30,50],'hours_per_week':[40,60]})
dice_exp_KD.visualize_as_dataframe(show_only_changes=True)
Query instance (original outcome : 0)
age workclass education marital_status occupation race gender hours_per_week income
0 27 Private School Single Blue-Collar White Male 40 0

No counterfactuals found!
Query instance (original outcome : 1)
age workclass education marital_status occupation race gender hours_per_week income
0 31 Self-Employed Some-college Married Sales Other Male 60 1

Diverse Counterfactual set (new outcome: 0.0)
age workclass education marital_status occupation race gender hours_per_week income
0 32.0 - - - - - - - 0
1 - - - - - White - - 0
2 - - Assoc - Professional - - - 0
3 - - Assoc - - White - - 0
4 - Private Bachelors - - White - - 0