The SHAP library [2] explains a model's behaviour by asking, for every prediction and every feature, how the prediction would change if that feature were removed from the model. The so-called SHAP values are the answer.
Since we built our LTR models using LambdaMART (based on Multiple Additive Regression Trees), we used the TreeExplainer [3], an algorithm that computes SHAP values for trees and ensembles of trees in polynomial time.
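Below is a minimal, self-contained sketch of this step. The feature names, synthetic data, and hyper-parameters are purely illustrative (not the ones from our models); it simply trains a small LightGBM LambdaMART ranker and feeds it to the TreeExplainer.

```python
import numpy as np
import pandas as pd
import lightgbm as lgb
import shap

# Synthetic query-document features and graded relevance labels (illustrative only)
rng = np.random.default_rng(42)
X = pd.DataFrame(rng.random((200, 4)),
                 columns=["bm25", "title_match", "freshness", "popularity"])
y = rng.integers(0, 4, size=200)   # relevance grades 0-3
group = [50, 50, 50, 50]           # 4 queries, 50 documents each

# LambdaMART: gradient-boosted regression trees with the lambdarank objective
model = lgb.LGBMRanker(objective="lambdarank", n_estimators=50)
model.fit(X, y, group=group)

# TreeExplainer computes exact SHAP values for tree ensembles in polynomial time
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)   # shape: (n_samples, n_features)
```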
TreeSHAP provides several different types of plots, each one highlighting a specific aspect of the model. The graphs are rendered in Python with Matplotlib, a widely used visualization library.
SUMMARY PLOT
The summary plot gives us Global Interpretability. Calling the shap.summary_plot function with plot_type="bar" produces the feature importance plot (features ranked in descending order) with mean(|SHAP value|) on the x-axis.
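Continuing from the sketch above (the shap_values and X variables are the illustrative ones defined there), the bar summary plot can be produced like this:

```python
# Rank features by mean(|SHAP value|) across all predictions and show a bar chart
shap.summary_plot(shap_values, X, plot_type="bar")
```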