pyreal.transformers.Aggregator
- class pyreal.transformers.Aggregator(mappings, func='sum', drop_original=True, missing='ignore', **kwargs)
Aggregate features into a single parent feature
- __init__(mappings, func='sum', drop_original=True, missing='ignore', **kwargs)
Initialize a new Aggregator object
- Parameters:
mappings (Mappings) – A Mappings object representing the column relationships (see Mappings.generate_mappings to produce one)
func (callable or one of ["sum", "mean", "max", "min", "remove"]) – The function to use to aggregate the features. If set to “remove”, the parent feature will be given None values (use in cases where you want to aggregate explanations on features, but no valid aggregation exists)
drop_original (bool) – Whether to drop the original features after aggregation
missing (str) – How to handle features that appear in the mappings but are missing from the data being transformed. One of [“ignore”, “raise”]. If “ignore”, parent features will be made from whichever child features exist; if no child features exist, the parent feature will not be added. If “raise”, an error will be raised if any child features are missing.
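For reference, a minimal construction sketch exercising these parameters is shown below. The feature names are purely illustrative, and the keyword passed to Mappings.generate_mappings is an assumption about that method's input format (consult its documentation for the supported forms).

```python
from pyreal.transformers import Aggregator, Mappings

# Hypothetical parent/child relationship: parent "income" is built from two
# child columns. The one_to_many keyword is an assumption about the input
# format accepted by Mappings.generate_mappings.
mappings = Mappings.generate_mappings(one_to_many={"income": ["salary", "bonus"]})

# Sum the child columns into the parent and drop the children afterwards
sum_agg = Aggregator(mappings, func="sum", drop_original=True, missing="ignore")

# func="remove" gives the parent feature None values; useful when explanations
# on the child features should still be aggregated but no valid data
# aggregation exists
placeholder_agg = Aggregator(mappings, func="remove")

# missing="raise" errors out if any child column named in the mappings is
# absent from the data
strict_agg = Aggregator(mappings, func="mean", missing="raise")
```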
Methods
__init__(mappings[, func, drop_original, ...])
    Initialize a new Aggregator object.
data_transform(X)
    Transform the input data, aggregating the features as specified in the mappings.
fit(x, **params)
    Fit this transformer to data.
fit_transform(x, **fit_params)
    Fits this transformer to data and then transforms the same data.
inverse_data_transform(x_new)
    Transforms data x_new from new feature space back into the original feature space.
inverse_transform(x_new)
    Wrapper for inverse_data_transform.
inverse_transform_explanation(x_new)
    Reversing aggregations on explanations is not currently supported.
inverse_transform_explanation_additive_feature_contribution(...)
    Inverse transforms additive feature contribution explanations.
inverse_transform_explanation_additive_feature_importance(...)
    Inverse transforms additive feature importance explanations.
inverse_transform_explanation_decision_tree(...)
    Inverse transforms decision-tree explanations.
inverse_transform_explanation_example(...)
    Inverse transforms example-based explanations.
inverse_transform_explanation_feature_based(...)
    Inverse transforms feature-based explanations.
inverse_transform_explanation_feature_contribution(...)
    Inverse transforms feature contribution explanations.
inverse_transform_explanation_feature_importance(...)
    Inverse transforms feature importance explanations.
inverse_transform_explanation_similar_example(...)
    Inverse transforms similar-example-based explanations.
set_flags([model, interpret, algorithm])
transform(x)
    Wrapper for data_transform.
transform_explanation(explanation)
    Transforms the explanation from the first feature space handled by this transformer to the second.
transform_explanation_additive_feature_contribution(...)
    Sum together the contributions in explanation from the child features to get the parent features.
transform_explanation_additive_feature_importance(...)
    Sum together the importances in explanation from the child features to get the parent features.
transform_explanation_decision_tree(explanation)
    Transforms decision-tree explanations.
transform_explanation_example(explanation)
    Transforms example-based explanations.
transform_explanation_feature_based(explanation)
    Transforms feature-based explanations.
transform_explanation_feature_contribution(...)
    Transforms feature contribution explanations.
transform_explanation_feature_importance(...)
    Transforms feature importance explanations.
transform_explanation_similar_example(...)
    Transforms example-based explanations.