pyreal.transformers.NarrativeTransformer
- class pyreal.transformers.NarrativeTransformer(llm=None, openai_api_key=None, num_features=5, gpt_model_type='gpt-4o', context_description='', max_tokens=200, training_examples=None, **kwargs)[source]
Transforms explanations to narrative (natural-language) form.
- __init__(llm=None, openai_api_key=None, num_features=5, gpt_model_type='gpt-4o', context_description='', max_tokens=200, training_examples=None, **kwargs)[source]
Transforms explanations to narrative (natural-language) form.
- Parameters:
llm (LLM model object) – Local LLM object or LLM client object to use to generate narratives. One of llm or openai_api_key must be provided.
openai_api_key (string) – OpenAI API key to use
x_orig (DataFrame of shape (n_instances, n_features)) – Input to explain
num_features (int) – Number of features to include in the explanation, when relevant. If None, all features are included.
gpt_model_type (string) – OpenAI model to use to generate the explanation, if passing in an OpenAI API key
context_description (string) – Description of the model’s prediction task, in sentence format. This is passed to the LLM and may help produce more accurate explanations. For example: “The model predicts the price of houses.”
max_tokens (int) – Maximum number of tokens to use in the explanation
training_examples (dictionary of string: list of tuples) – Few-shot training examples. Keys are explanation types (currently supported: “feature_contributions”); values are lists of tuples, where the first element is the input to the model and the second element is the example output. Use the RealApp train_llm functions to populate this.
- Returns:
- list of strings of length n_instances
Narrative version of the feature contribution explanation, one item per instance.
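The shape of the `training_examples` parameter can be sketched as plain Python data. The keys, tuple structure, and use of RealApp's train_llm functions are taken from the parameter description above; the example input string and narrative output below are hypothetical placeholders, not values produced by pyreal:

```python
# Hypothetical few-shot training examples for the "feature_contributions"
# explanation type. Keys are explanation types; values are lists of
# (example model input, example narrative output) tuples. In practice,
# the RealApp train_llm functions can populate this instead of writing
# the examples by hand.
training_examples = {
    "feature_contributions": [
        (
            "house size (sq ft): 2400, contribution: +31000; "
            "distance to school (mi): 3.2, contribution: -4500",
            "The large size of the house (2,400 sq ft) raised the predicted "
            "price by about $31,000, while its distance from the nearest "
            "school lowered the prediction slightly.",
        ),
    ],
}

# Each value is a list of 2-tuples: (input to the model, example output).
example_input, example_output = training_examples["feature_contributions"][0]
```

This dictionary would be passed as the `training_examples` argument of the constructor, or supplied later through `set_training_examples`.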
Methods
__init__
([llm, openai_api_key, ...])Transforms explanations to narrative (natural-language) form.
data_transform
(x)Transforms data x to a new feature space.
fit
(x, **params)Fit this transformer to data
fit_transform
(x, **fit_params)Fits this transformer to data and then transforms the same data
inverse_data_transform
(x_new)Wrapper for inverse_transform.
inverse_transform
(x_new)Transforms data x_new from new feature space back into the original feature space.
inverse_transform_explanation
(explanation)Transforms the explanation from the second feature space handled by this transformer to the first.
inverse_transform_explanation_additive_feature_contribution
(...)Inverse transforms additive feature contribution explanations
inverse_transform_explanation_additive_feature_importance
(...)Inverse transforms additive feature importance explanations
inverse_transform_explanation_decision_tree
(...)Inverse transforms decision-tree explanations
inverse_transform_explanation_example
(...)Inverse transforms example-based explanations
inverse_transform_explanation_feature_based
(...)Inverse transforms feature-based explanations
inverse_transform_explanation_feature_contribution
(...)Inverse transforms feature contribution explanations
inverse_transform_explanation_feature_importance
(...)Inverse transforms feature importance explanations
inverse_transform_explanation_similar_example
(...)Inverse transforms similar-example-based explanations
parse_feature_contribution_explanation_for_llm
(...)
set_flags
([model, interpret, algorithm])
set_training_examples
(explanation_type, ...)Set examples of narrative explanations for the requested explanation type.
transform
(x)Wrapper for data_transform.
transform_explanation
(explanation)Transforms the explanation from the first feature space handled by this transformer to the second.
transform_explanation_additive_feature_contribution
(...)Transforms additive feature contribution explanations
transform_explanation_additive_feature_importance
(...)Transforms additive feature importance explanations
transform_explanation_decision_tree
(explanation)Transforms decision-tree explanations
transform_explanation_example
(explanation)Transforms example-based explanations
transform_explanation_feature_based
(explanation)Transforms feature-based explanations
transform_explanation_feature_contribution
(...)Transforms feature contribution explanations
transform_explanation_feature_importance
(...)Transforms feature importance explanations
transform_explanation_similar_example
(...)Transforms similar-example-based explanations
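The fit/transform/inverse-transform contract described in the table above can be illustrated with a minimal stand-in class. This is a hypothetical sketch of the interface shape only (`fit`, `data_transform`, `transform` as a wrapper for `data_transform`, `fit_transform`, `inverse_transform`); the `ScaleTransformer` class and its scaling behavior are invented for illustration and are not part of pyreal:

```python
# Hypothetical stand-in mirroring the documented interface: `transform`
# wraps `data_transform`, which maps data into a new feature space, and
# `inverse_transform` maps it back. This is NOT pyreal's Transformer base
# class, just an illustration of the method contract.
class ScaleTransformer:
    def __init__(self, factor=1.0):
        self.factor = factor

    def fit(self, x, **params):
        # Fit this transformer to data (nothing to learn for a fixed scale).
        return self

    def data_transform(self, x):
        # Transforms data x to a new feature space.
        return [v * self.factor for v in x]

    def transform(self, x):
        # Wrapper for data_transform.
        return self.data_transform(x)

    def fit_transform(self, x, **fit_params):
        # Fits this transformer to data, then transforms the same data.
        return self.fit(x, **fit_params).transform(x)

    def inverse_transform(self, x_new):
        # Transforms x_new from the new feature space back to the original.
        return [v / self.factor for v in x_new]


t = ScaleTransformer(factor=10.0)
scaled = t.fit_transform([1.0, 2.0])
restored = t.inverse_transform(scaled)
```

NarrativeTransformer follows this same interface, with the explanation-specific `transform_explanation_*` and `inverse_transform_explanation_*` methods layered on top to move explanations between feature spaces.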