pyreal.explainers.GlobalFeatureImportance#

class pyreal.explainers.GlobalFeatureImportance(model, x_train_orig=None, e_algorithm=None, shap_type=None, **kwargs)[source]#

Generic GlobalFeatureImportance wrapper

A GlobalFeatureImportance object wraps multiple global feature-based explanation algorithms. If no specific algorithm is requested, one will be chosen automatically based on the information given. Currently, only SHAP is supported.

Parameters:
  • model (string filepath or model object) – Filepath to the pickled model to explain, or model object with .predict() function

  • x_train_orig (dataframe of shape (n_instances, x_orig_feature_count)) – The training set for the explainer

  • e_algorithm (string, one of ["shap", "permutation"]) – Explanation algorithm to use. If none, one will be chosen automatically based on model type

  • shap_type (string, one of ["kernel", "linear"]) – Type of shap algorithm to use, if e_algorithm="shap".

  • **kwargs – see LocalFeatureContributionsBase args
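Example (a minimal sketch, not part of this reference: the scikit-learn model, the toy DataFrame, and its column names are illustrative assumptions):

import pandas as pd
from sklearn.linear_model import LogisticRegression
from pyreal.explainers import GlobalFeatureImportance

# Toy training data and model; any model object with a .predict() function works
x_train = pd.DataFrame({"age": [25, 32, 47, 51], "income": [40, 55, 72, 88]})
y_train = pd.Series([0, 0, 1, 1])
model = LogisticRegression().fit(x_train, y_train)

explainer = GlobalFeatureImportance(
    model=model,            # model object with a .predict() function
    x_train_orig=x_train,   # training set for the explainer
    e_algorithm="shap",     # request SHAP explicitly (optional)
    shap_type="linear",     # linear SHAP for a linear model (optional)
)
explainer.fit()             # fit the explainer before calling produce()
importance = explainer.produce()
print(importance)

If e_algorithm and shap_type are omitted, an algorithm is chosen automatically based on the model type.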

__init__(model, x_train_orig=None, e_algorithm=None, shap_type=None, **kwargs)[source]#

Generic ExplainerBase object

Parameters:
  • model (string filepath or model object) – Filepath to the pickled model to explain, or model object with a .predict() function. model.predict() should return a single value prediction for each input. Classification models should return the index or the class; if the latter, the classes parameter should be provided.

  • x_train_orig (DataFrame of shape (n_instances, x_orig_feature_count)) – The training set for the explainer. If None, it must be provided separately when fitting

  • y_train (Series of shape (n_instances,)) – The y values for the dataset

  • e_algorithm (string) – Algorithm to use, if applicable

  • feature_descriptions (dict) – Interpretable descriptions of each feature

  • classes (array) – List of class names returned by the model, in the order that the internal model considers them, if applicable. Can be automatically extracted if the model is an sklearn classifier. None if the model is not a classifier

  • class_descriptions (dict) – Interpretable descriptions of each class. None if the model is not a classifier

  • transformers (transformer object or list of transformer objects) – Transformer(s) used by the Explainer.

  • fit_on_init (Boolean) – If True, fit the explainer on initialization. If False, self.fit() must be called manually before produce() is called

  • training_size (Integer) – If given, sample a training set of this size from x_train_orig and use it to train the explainer instead of the entire x_train_orig.

  • return_original_explanation (Boolean) – If True, return the explanation originally generated without any transformations

  • fit_transformers (Boolean) – If True, fit transformers on x_train_orig. Requires x_train_orig to not be None

  • openai_api_key (string) – OpenAI API key. Required for GPT narrative explanations, unless openai_client is provided

  • openai_client (openai.Client) – OpenAI client object, with API key already set. If provided, openai_api_key is ignored
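Example (a hedged continuation of the sketch above, showing several of the optional arguments described here; the description strings and class labels are illustrative assumptions):

explainer = GlobalFeatureImportance(
    model=model,
    x_train_orig=x_train,
    y_train=y_train,
    feature_descriptions={"age": "Age in years", "income": "Annual income (thousands)"},
    classes=[0, 1],                                     # class order used by the model
    class_descriptions={0: "rejected", 1: "approved"},
    fit_on_init=True,                                   # no separate fit() call needed
)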

Methods

__init__(model[, x_train_orig, e_algorithm, ...])

Generic ExplainerBase object

evaluate_model(scorer[, x_orig, y])

Evaluate the model using a chosen scorer algorithm.

evaluate_variation([with_fit, explanations, ...])

Evaluate the variation of the explanations generated by this Explainer.

feature_description(feature_name)

Returns the interpretable description associated with a feature

fit([x_train_orig, y_train])

Fit this explainer object

model_predict(x_orig)

Predict on x_orig using the model and return the result

model_predict_on_algorithm(x_algorithm)

Predict on x_algorithm using the model and return the result

model_predict_proba(x_orig)

Return the output probabilities of each class for x_orig

produce([x_orig, disable_feature_descriptions])

Return the explanation, in the interpretable feature space with feature descriptions applied.

produce_explanation(**kwargs)

Gets the raw explanation.

produce_explanation_interpret(x_orig, **kwargs)

Produce an interpretable explanation and corresponding values

transform_explanation(explanation[, x_orig])

Transform the explanation into its interpretable form, by running the algorithm transformers' "inverse_transform_explanation" and the interpretable transformers' "transform_explanation" functions.

transform_to_x_algorithm(x_orig)

Transform x_orig to x_algorithm, using the algorithm transformers

transform_to_x_interpret(x_orig)

Transform x_orig to x_interpret, using the interpret transformers

transform_to_x_model(x_orig)

Transform x_orig to x_model, using the model transformers

transform_x_from_algorithm_to_model(x_algorithm)

Transform x_algorithm to x_model, using the model transformers
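
Example (a hedged end-to-end sketch, continuing the example above and exercising only methods listed in this table; the printed values and shapes are not guaranteed by this reference):

preds = explainer.model_predict(x_train)              # predictions on the original feature space
importance = explainer.produce()                       # global feature importance explanation
x_model = explainer.transform_to_x_model(x_train)      # apply the model transformers
print(explainer.feature_description("age"))            # interpretable description of one feature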