How to explain neural networks using SHAP

Neural networks are fascinating and very powerful tools for data scientists, but they have a major flaw: they are black boxes that are hard to explain. Out of the box, they don’t give us any information about feature importance. Fortunately, there is a powerful approach we can use to interpret every model, even neural networks: the SHAP approach.

Let’s see how to use it to explain and interpret a neural network in Python.

What is SHAP?

SHAP stands for SHapley Additive exPlanations. It’s a way to calculate the impact of each feature on the value of the target variable. The idea is to consider each feature as a player and the dataset as a team. Each player gives their contribution to the result of the team, and the sum of these contributions gives us the value of the target variable given particular values of the features (i.e. given a particular record).

The main concept is that the impact of a feature doesn’t depend on that feature alone, but on the entire set of features in the dataset. So, SHAP calculates the impact of every feature on the target variable (called the SHAP value) using combinatorial calculus, retraining the model over all the combinations of features that contain the one we are considering. The average absolute value of a feature’s impact on the target variable can then be used as a measure of its importance.
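
To make this concrete, here is a minimal, purely illustrative sketch of the exact Shapley computation for a toy two-feature "model" (the payoff dictionary and feature names are invented for this example; real SHAP implementations approximate this calculation instead of enumerating every coalition):

from itertools import combinations
from math import factorial

def exact_shapley(value, features):
    """Exact Shapley value of each feature, where `value` maps a
    coalition (frozenset of features) to the model's output."""
    n = len(features)
    phi = {}
    for i in features:
        others = [f for f in features if f != i]
        contribution = 0.0
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                S = frozenset(subset)
                # Weight of this coalition in the Shapley formula.
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                contribution += weight * (value(S | {i}) - value(S))
        phi[i] = contribution
    return phi

# Toy payoff: the model's output for every possible coalition of features.
payoff = {frozenset(): 0, frozenset({'a'}): 10,
          frozenset({'b'}): 20, frozenset({'a', 'b'}): 50}
print(exact_shapley(payoff.get, ['a', 'b']))  # {'a': 20.0, 'b': 30.0}

Note how the two values add up to the output of the full coalition (50): this additive property is exactly what the force plot later in this post relies on.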

A very clear explanation of SHAP is given in this great article.

The benefit of SHAP is that it doesn’t care about the model we use. In fact, it is a model-agnostic approach. So, it’s perfect for explaining models that don’t provide their own interpretation of feature importance, like neural networks.

Let’s see how to use SHAP in Python with neural networks.

An example in Python with neural networks

In this example, we are going to calculate feature impact with SHAP for a neural network built in Python with scikit-learn. In real-life cases, you’d probably use Keras to build your neural network, but the concept is exactly the same.

For this example, we are going to use the diabetes dataset of scikit-learn, which is a regression dataset.

Let’s first install the shap library.

!pip install shap

Then, let’s import it and other useful libraries.

import shap
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split

Now we can load our dataset and the feature names, which will be useful later.

X, y = load_diabetes(return_X_y=True)
features = load_diabetes()['feature_names']

We can now split our dataset into training and test sets.

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)

Now we have to create our model. Since we are working with a neural network, we must scale the features first. For this example, I’ll use a standard scaler. The model itself is a feedforward neural network with 5 neurons in the hidden layer, a maximum of 10,000 training iterations, a logistic activation function and an inverse-scaling learning rate. In real life, you would optimize these hyperparameters properly before settling on such values.

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(5,),
                 activation='logistic',
                 max_iter=10000,
                 learning_rate='invscaling',
                 random_state=0)
)

We can now fit our model.

model.fit(X_train, y_train)

Now comes the SHAP part. First of all, we need to create an object called the explainer. It takes as input the predict method of our model and the training dataset. In order to stay model-agnostic, SHAP perturbs the points of the training dataset and calculates the impact of these perturbations on the model’s predictions. It’s a type of resampling technique, whose number of samples is set later. This approach is related to another famous approach called LIME, which has been shown to be a special case of the SHAP framework. The result is a statistical estimate of the SHAP values.

So, first of all, let’s define the explainer object.

explainer = shap.KernelExplainer(model.predict, X_train)
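
As an optional aside, not something this example strictly needs: KernelExplainer can be slow when the whole training set is used as background data. A common trick is to summarize the background with shap.kmeans (the 10 centroids below are an arbitrary choice):

# Optional: summarize the background data with k-means to speed up
# KernelExplainer; 10 centroids is an arbitrary choice here.
background = shap.kmeans(X_train, 10)
explainer = shap.KernelExplainer(model.predict, background)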

Now we can calculate the SHAP values. Remember that they are calculated by resampling the training dataset and computing the impact of these perturbations, so we have to define a proper number of samples. For this example, I’ll use 100 samples.

Then, the impact is calculated on the test dataset.

shap_values = explainer.shap_values(X_test, nsamples=100)

A nice progress bar appears and shows the progress of the calculation, which can be quite slow.

At the end, we get an (n_samples, n_features) NumPy array. Each element is the SHAP value of that feature for that record. Remember that SHAP values are calculated for each feature and for each record.
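
Before plotting anything, we can already turn this array into a global importance ranking using the mean absolute SHAP value per feature mentioned earlier (a small sketch reusing the variables defined above):

import numpy as np

# Global importance: average absolute SHAP value of each feature.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(features, importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.2f}")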

Now we can plot what is called a “summary plot”. Let’s first plot it and then comment on the results.

shap.summary_plot(shap_values, X_test, feature_names=features)

Each point in every row is a record of the test dataset. The features are sorted from the most important to the least important. We can see that s5 is the most important feature. The higher the value of this feature, the more positive its impact on the target; the lower the value, the more negative its contribution.
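
If you prefer a plain importance ranking to this beeswarm view, the same function can also draw a bar chart of mean absolute SHAP values via its plot_type argument:

# Bar chart of mean absolute SHAP values, one bar per feature.
shap.summary_plot(shap_values, X_test, feature_names=features, plot_type='bar')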

Let’s now dig deeper into a particular record, for example the first one. A very useful plot we can draw here is called the force plot.

shap.initjs()
shap.force_plot(explainer.expected_value,
                shap_values[0, :],
                X_test[0, :],
                feature_names=features)

113.90 is the predicted value. The base value is the average value of the target variable across all the records. Each stripe shows the impact of its feature in pushing the value of the target variable farther from or closer to the base value. Red stripes show features that push the value towards higher values; blue stripes show features that push the value towards lower values. The wider a stripe, the larger (in absolute value) the contribution. The sum of these contributions pushes the value of the target variable from the base value to the final, predicted value.

As we can see, for this particular record, the bmi, bp, s2, sex and s5 values make a positive contribution to the predicted value. s5 is still the most important variable for this record, because its contribution is the widest (it has the largest stripe). The only variable with a negative contribution is s1, but it’s not strong enough to move the predicted value below the base value. So, since the total positive contribution (red stripes) is larger than the negative contribution (blue stripe), the final value is greater than the base value. That’s how SHAP works.
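
We can check this additive decomposition directly in code; this is just a sanity check on the objects defined above, and the two numbers should agree up to the sampling error of KernelExplainer:

# Additivity: base value + sum of this record's SHAP values = prediction.
prediction = model.predict(X_test[0:1])[0]
reconstruction = explainer.expected_value + shap_values[0, :].sum()
print(prediction, reconstruction)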

As we can see, we learn several things about feature importance just by reading these charts. We don’t care about the model we are using, because SHAP is a model-agnostic approach; we just care about how the features impact the predicted value. This is very helpful for explaining black-box models like, in this example, neural networks.

We could never achieve such knowledge of our dataset just by looking at the weights of our neural network, and that’s why SHAP is such a useful approach.

Conclusions

SHAP is a very powerful approach when it comes to explaining models that are not able to give us their own interpretation of feature importance. Such models are, for example, neural networks and KNN. Although this method is quite powerful, there’s no free lunch: we pay for it with computationally expensive calculations that we must be aware of.
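
If that cost becomes a bottleneck, two knobs are worth knowing; in this sketch the subset size and nsamples are arbitrary choices that trade accuracy for speed:

# Explain only the first 50 test records, with fewer resamples per record.
# Smaller numbers mean faster but noisier SHAP estimates.
shap_values_fast = explainer.shap_values(X_test[:50], nsamples=50)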
