pyreal.explainers.SimilarExamples
- class pyreal.explainers.SimilarExamples(model, x_train_orig=None, standardize=False, fast=True, **kwargs)
SimilarExamples object.
A SimilarExamples object generates example-based explanations using the Nearest Neighbors algorithm.
SimilarExamples explainers expect data to be entirely numeric.
- Parameters:
model (string filepath or model object) – Filepath to the pickled model to explain, or model object with a .predict() function.
x_train_orig (DataFrame of size (n_instances, n_features)) – Training set in original form.
standardize (Boolean) – If True, standardize the data when selecting similar examples.
fast (Boolean) – If True, use a faster algorithm to compute the neighbors. Set to False if you run into trouble with the faiss library.
**kwargs – see base Explainer args
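A minimal usage sketch based on the parameters above. The dataset, column names, and scikit-learn model are hypothetical stand-ins for any all-numeric training set and any model with a .predict() function.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

from pyreal.explainers import SimilarExamples

# Hypothetical all-numeric training data (SimilarExamples expects numeric data)
x_train = pd.DataFrame({"age": [25, 32, 47, 51], "income": [40, 55, 80, 62]})
y_train = pd.Series([0, 0, 1, 1])

model = LogisticRegression().fit(x_train, y_train)  # any object with .predict()

explainer = SimilarExamples(
    model,                 # model object (a pickled-model filepath also works)
    x_train_orig=x_train,  # training set in original form
    y_train=y_train,       # base Explainer argument (see __init__ below)
    standardize=True,      # standardize data when selecting similar examples
    fast=False,            # set to False if the faiss library causes trouble
)
explainer.fit()
explanation = explainer.produce(x_train.iloc[[0]])  # similar examples for one row
```

Per the fast parameter above, fast=False trades speed for avoiding faiss-related issues, which is convenient for small datasets like this one.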
- __init__(model, x_train_orig=None, standardize=False, fast=True, **kwargs)
Generic ExplainerBase object
- Parameters:
model (string filepath or model object) – Filepath to the pickled model to explain, or model object with a .predict() function. model.predict() should return a single value prediction for each input. Classification models should return the index or class; if the latter, the classes parameter should be provided.
x_train_orig (DataFrame of shape (n_instances, x_orig_feature_count)) – The training set for the explainer. If None, it must be provided separately when fitting.
y_train (Series of shape (n_instances,)) – The y values for the dataset
e_algorithm (string) – Algorithm to use, if applicable
feature_descriptions (dict) – Interpretable descriptions of each feature
classes (array) – List of class names returned by the model, in the order that the internal model considers them, if applicable. Can be automatically extracted if the model is an sklearn classifier. None if the model is not a classifier.
class_descriptions (dict) – Interpretable descriptions of each class. None if the model is not a classifier.
transformers (transformer object or list of transformer objects) – Transformer(s) used by the Explainer.
fit_on_init (Boolean) – If True, fit the explainer on initialization. If False, self.fit() must be called manually before produce() is called.
training_size (Integer) – If given, sample a training set of this size from x_train_orig and use it to train the explainer instead of the entire x_train_orig.
return_original_explanation (Boolean) – If True, return the explanation as originally generated, without any transformations.
fit_transformers (Boolean) – If True, fit the transformers on x_train_orig. Requires that x_train_orig is not None.
openai_api_key (string) – OpenAI API key. Required for GPT narrative explanations unless an OpenAI client is provided.
llm (LLM model object) – Local LLM object or LLM client object to use to generate narratives. One of llm or openai_api_key must be provided.
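The sketch below illustrates several of the base Explainer arguments listed above, applied to SimilarExamples. The pickled-model path, feature descriptions, and class labels are assumptions for illustration; only arguments documented here are used.

```python
import pickle

import pandas as pd
from sklearn.linear_model import LogisticRegression

from pyreal.explainers import SimilarExamples

# Hypothetical numeric training data
x_train = pd.DataFrame({"age": [25, 32, 47, 51], "income": [40, 55, 80, 62]})
y_train = pd.Series([0, 0, 1, 1])

# Pickle a trained model to demonstrate the string-filepath form of `model`
with open("model.pkl", "wb") as f:
    pickle.dump(LogisticRegression().fit(x_train, y_train), f)

explainer = SimilarExamples(
    "model.pkl",                       # filepath to the pickled model
    x_train_orig=x_train,
    y_train=y_train,
    feature_descriptions={"age": "Age in years", "income": "Annual income ($1000s)"},
    classes=[0, 1],                    # class order used by the internal model
    class_descriptions={0: "Class 0", 1: "Class 1"},  # assumed class-to-label mapping
    fit_on_init=True,                  # fit during __init__, so no explicit fit() call
)
explanation = explainer.produce(x_train.iloc[[0]])
```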
Methods
__init__(model[, x_train_orig, standardize, ...]) – Generic ExplainerBase object.
evaluate_model(scorer[, x_orig, y]) – Evaluate the model using a chosen scorer algorithm.
evaluate_variation([with_fit, explanations, ...]) – Evaluate the variation of the explanations generated by this Explainer.
feature_description(feature_name) – Returns the interpretable description associated with a feature.
fit([x_train_orig, y_train]) – Fit the explainer.
model_predict(x_orig) – Predict on x_orig using the model and return the result.
model_predict_on_algorithm(x_algorithm) – Predict on x_algorithm using the model and return the result.
model_predict_proba(x_orig) – Return the output probabilities of each class for x_orig.
produce([x_orig, disable_feature_descriptions]) – Return the explanation in the interpretable feature space, with feature descriptions applied.
produce_explanation(x_orig, **kwargs) – Unused for similar examples explainers, as explanations are produced directly in the interpretable feature space.
produce_explanation_interpret(x_orig[, ...]) – Get the n nearest neighbors to x_orig.
transform_explanation(explanation[, x_orig]) – Transform the explanation into its interpretable form by running the algorithm transformers' "inverse_transform_explanation" and the interpretable transformers' "transform_explanation" functions.
transform_to_x_algorithm(x_orig) – Transform x_orig to x_algorithm, using the algorithm transformers.
transform_to_x_interpret(x_orig) – Transform x_orig to x_interpret, using the interpret transformers.
transform_to_x_model(x_orig) – Transform x_orig to x_model, using the model transformers.
transform_x_from_algorithm_to_model(x_algorithm) – Transform x_algorithm to x_model, using the model transformers.
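A short sketch exercising a few of the methods above on a fitted explainer. The data, model, and feature description are hypothetical; only the call signatures documented in this table are used.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

from pyreal.explainers import SimilarExamples

# Hypothetical numeric data and model, for illustration only
x_train = pd.DataFrame({"age": [25, 32, 47, 51], "income": [40, 55, 80, 62]})
y_train = pd.Series([0, 0, 1, 1])
model = LogisticRegression().fit(x_train, y_train)

explainer = SimilarExamples(
    model,
    x_train_orig=x_train,
    y_train=y_train,
    feature_descriptions={"age": "Age in years"},
    fast=False,
)
explainer.fit()                              # fit([x_train_orig, y_train])

row = x_train.iloc[[0]]
print(explainer.model_predict(row))          # model prediction(s) for the row
print(explainer.feature_description("age"))  # interpretable description of "age"
explanation = explainer.produce(row)         # similar examples, interpretable space
```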