pyreal.transformers.OneHotEncoder#

class pyreal.transformers.OneHotEncoder(columns=None, handle_unknown='error', **kwargs)[source]#

One-hot encodes categorical feature values

__init__(columns=None, handle_unknown='error', **kwargs)[source]#

Initializes the base one-hot encoder

Parameters:
  • columns (list, None, or "object_columns") – List of columns to apply one-hot encoding to. If None, all columns will be encoded. If “object_columns”, all columns with an object dtype will be automatically encoded.

  • handle_unknown (one of "error", "ignore", "infrequent_if_exist") – How to handle unknown categories encountered during transform. “error” will raise an error, “ignore” will ignore the unknown category, and “infrequent_if_exist” will treat the unknown category as if it were an infrequent category.
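
A minimal usage sketch (the data, column names, and values below are hypothetical, and the exact names of the generated one-hot columns depend on the underlying encoder):

    import pandas as pd
    from pyreal.transformers import OneHotEncoder

    # Toy data with one categorical column and one numeric column
    x = pd.DataFrame({"color": ["red", "blue", "red"], "size": [1, 2, 3]})

    # Encode only the "color" column; unseen categories at transform time are ignored
    encoder = OneHotEncoder(columns=["color"], handle_unknown="ignore")
    x_encoded = encoder.fit_transform(x)

    # Recover the original categorical column from the encoded data
    x_restored = encoder.inverse_transform(x_encoded)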

Methods

__init__([columns, handle_unknown])

Initializes the base one-hot encoder

data_transform(x)

One-hot encode x.

fit(x, **params)

Fit this transformer to data

fit_transform(x, **fit_params)

Fits this transformer to data and then transforms the same data

inverse_data_transform(x_new)

Transforms one-hot encoded data x_new back into the original feature space.

inverse_transform(x_new)

Transforms data x_new from new feature space back into the original feature space.

inverse_transform_explanation(explanation)

Transforms the explanation from the second feature space handled by this transformer to the first.

inverse_transform_explanation_additive_feature_contribution(...)

Combine the contributions of one-hot-encoded features through adding to get the contributions of the original categorical feature (see the sketch after this method list).

inverse_transform_explanation_additive_feature_importance(...)

Combine the importances of one-hot-encoded features through adding to get the importances of the original categorical feature.

inverse_transform_explanation_decision_tree(...)

Features cannot be decoded in existing decision trees, so raise a BreakingTransformError

inverse_transform_explanation_example(...)

Inverse transforms example-based explanations

inverse_transform_explanation_feature_based(...)

For non-additive feature-based explanations, the contributions or importances of the one-hot encoded features cannot be combined.

inverse_transform_explanation_feature_contribution(...)

Inverse transforms feature contribution explanations

inverse_transform_explanation_feature_importance(...)

Inverse transforms feature importance explanations

inverse_transform_explanation_similar_example(...)

Inverse transforms similar-example-based explanations

set_flags([model, interpret, algorithm])

transform(x)

Wrapper for data_transform.

transform_explanation(explanation)

Transforms the explanation from the first feature space handled by this transformer to the second.

transform_explanation_additive_feature_contribution(...)

Transforms additive feature contribution explanations

transform_explanation_additive_feature_importance(...)

Transforms additive feature importance explanations

transform_explanation_decision_tree(explanation)

Features cannot be encoded in existing decision trees, so raise a BreakingTransformError

transform_explanation_example(explanation)

Transforms example-based explanations

transform_explanation_feature_based(explanation)

For feature-based explanations, the contributions or importances of categorical features cannot be split into per-category features.

transform_explanation_feature_contribution(...)

Transforms feature contribution explanations

transform_explanation_feature_importance(...)

Transforms feature importance explanations

transform_explanation_similar_example(...)

Transforms similar-example-based explanations
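
The additive explanation methods above combine per-category values back into a single value for the original categorical feature. A small sketch of that aggregation idea (the column names and contribution values are hypothetical; this illustrates only the concept, not the method's internal implementation):

    import pandas as pd

    # Hypothetical per-column contributions produced in the one-hot encoded space
    contributions = pd.DataFrame(
        {"color_red": [0.10], "color_blue": [-0.05], "size": [0.30]}
    )

    # Summing the one-hot columns gives one contribution for the original "color"
    # feature, mirroring what
    # inverse_transform_explanation_additive_feature_contribution does
    color_total = contributions[["color_red", "color_blue"]].sum(axis=1)
    combined = pd.DataFrame({"color": color_total, "size": contributions["size"]})
    print(combined)  # color: 0.05, size: 0.30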