- fixed Python 3.7 support;
- added support for XGBoost > 0.6a2;
- fixed deprecation warnings in numpy >= 1.14;
- documentation, type annotation and test improvements.
- backwards incompatible: DataFrame objects with explanations no longer use indexes and pivot tables; they are now just plain DataFrames;
- a new method for inspecting black-box models is added (Permutation Importance; see the sketch after this list);
- transform_feature_names is implemented for sklearn's MinMaxScaler, StandardScaler, MaxAbsScaler and RobustScaler;
- zero and negative feature importances are no longer hidden;
- fixed compatibility with scikit-learn 0.19;
- fixed compatibility with LightGBM master (2.0.5 and 2.0.6 are still unsupported - there are bugs in LightGBM);
- documentation, testing and type annotation improvements.
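A minimal sketch of the new Permutation Importance method; the estimator and iris data below are illustrative, not part of the release::

    import eli5
    from eli5.sklearn import PermutationImportance
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # importances are the mean score drops when each feature column
    # is shuffled, measured on held-out data with the fitted estimator
    perm = PermutationImportance(model, random_state=0).fit(X_test, y_test)
    eli5.show_weights(perm)  # eli5.explain_weights(perm) outside notebooks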
- better pandas.DataFrame integration: eli5.format_as_dataframe and eli5.format_as_dataframes functions allow exporting explanations to pandas.DataFrames;
- eli5.explain_prediction() now shows the predicted class for binary classifiers (previously it was always showing the positive class);
- eli5.explain_prediction() now supports targets=[<class>] for binary classifiers; e.g. to show the result as seen for the negative class, you can use eli5.explain_prediction(..., targets=[False]) (see the sketch after this list);
- eli5.explain_weights() is supported for libsvm-based linear estimators from sklearn.svm: SVC(kernel='linear') (only binary classification), NuSVC(kernel='linear') (only binary classification), SVR(kernel='linear'), NuSVR(kernel='linear') and OneClassSVM(kernel='linear');
- fixed eli5.explain_weights() for LightGBM estimators in Python 2 when importance_type is 'split' or 'weight';
- testing improvements.
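A sketch of the new binary-classifier behaviour and the DataFrame export; the toy corpus is made up for illustration::

    import eli5
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    docs = ["good movie", "bad movie", "great film", "awful film"]
    labels = [True, False, True, False]
    vec = CountVectorizer()
    clf = LogisticRegression().fit(vec.fit_transform(docs), labels)

    # the predicted class is shown by default; ask for the negative
    # class explicitly instead
    eli5.explain_prediction(clf, "a good film", vec=vec, targets=[False])

    # export an explanation to a pandas.DataFrame
    df = eli5.format_as_dataframe(eli5.explain_weights(clf, vec=vec))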
- fixed eli5.explain_weights() for XGBoost models trained on pandas.DataFrame (see the sketch after this list);
- fixed eli5.explain_weights() for LightGBM models trained on pandas.DataFrame;
- fixed an issue with eli5.explain_prediction() for XGBoost models trained on pandas.DataFrame when feature names contain dots;
- testing improvements.
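A sketch of the fixed workflow, assuming xgboost is installed; the DataFrame (with dots in column names) is made up::

    import eli5
    import pandas as pd
    from xgboost import XGBClassifier

    X = pd.DataFrame({"sepal.length": [5.1, 4.9, 6.2, 5.9],
                      "sepal.width": [3.5, 3.0, 2.9, 3.0]})
    y = [0, 0, 1, 1]

    model = XGBClassifier(n_estimators=10).fit(X, y)
    eli5.explain_weights(model)                # DataFrame column names are used
    eli5.explain_prediction(model, X.iloc[0])  # dotted feature names work too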
- Better pandas support in eli5.explain_prediction() for xgboost, sklearn, LightGBM and lightning.
- Better scikit-learn Pipeline support in eli5.explain_weights(): it is now possible to pass a Pipeline object directly (see the sketch after this list). Currently only SelectorMixin-based transformers, FeatureUnion and transformers with get_feature_names are supported, but users can register other transformers; the built-in list of supported transformers will be expanded in the future. See Transformation pipelines for more.
- Inverting of HashingVectorizer is now supported inside FeatureUnion via eli5.sklearn.unhashing.invert_hashing_and_fit(). See Reversing hashing trick.
- Fixed compatibility with Jupyter Notebook >= 5.0.0.
- Fixed eli5.explain_weights() for Lasso regression with a single feature and no intercept.
- Fixed unhashing support in Python 2.x.
- Documentation and testing improvements.
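A sketch of the new Pipeline support; the toy corpus is for illustration only::

    import eli5
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.feature_selection import SelectKBest, chi2
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline

    docs = ["good movie", "bad movie", "great film", "awful film"]
    labels = [1, 0, 1, 0]

    pipe = Pipeline([
        ("vec", CountVectorizer()),          # provides get_feature_names
        ("select", SelectKBest(chi2, k=3)),  # SelectorMixin-based
        ("clf", LogisticRegression()),
    ]).fit(docs, labels)

    # the whole Pipeline is passed directly; feature names are
    # propagated through the supported transformers
    eli5.explain_weights(pipe)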
- bug fix: eli5 should remain importable if xgboost is available but not installed correctly.
- feature contribution calculation fixed for eli5.xgboost.explain_prediction_xgboost.
- eli5.explain_prediction(): new 'top_targets' argument allows displaying only predictions with the highest or lowest scores;
- eli5.explain_weights() allows customizing the way feature importances are computed for XGBClassifier and XGBRegressor using the importance_type argument (see docs for the eli5 XGBoost support, and the sketch after this list);
- eli5.explain_weights() uses gain for XGBClassifier and XGBRegressor feature importances by default; this method is a better indication of what's going on, and it makes results more compatible with feature importances displayed for scikit-learn gradient boosting methods.
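A sketch of both new arguments; iris data is used only for illustration, and xgboost must be installed for the first part::

    import eli5
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from xgboost import XGBClassifier

    X, y = load_iris(return_X_y=True)

    xgb = XGBClassifier(n_estimators=10).fit(X, y)
    # 'gain' is now the default; 'weight' restores the old behaviour
    eli5.explain_weights(xgb, importance_type='weight')

    clf = LogisticRegression().fit(X, y)
    # show only the two highest-scoring classes for this example
    eli5.explain_prediction(clf, X[0], top_targets=2)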
- packaging fix: scikit-learn is added to install_requires in setup.py.
- eli5.explain_prediction() works for XGBClassifier, XGBRegressor from XGBoost and for ExtraTreesClassifier, ExtraTreesRegressor, GradientBoostingClassifier, GradientBoostingRegressor, RandomForestClassifier, RandomForestRegressor, DecisionTreeClassifier and DecisionTreeRegressor from scikit-learn. Explanation method is based on http://blog.datadive.net/interpreting-random-forests/ .
- eli5.explain_weights() now supports tree-based regressors from scikit-learn: DecisionTreeRegressor, AdaBoostRegressor, GradientBoostingRegressor, RandomForestRegressor and ExtraTreesRegressor.
- eli5.explain_weights() works for XGBRegressor;
- new TextExplainer class allows explaining predictions of black-box text classification pipelines using the LIME algorithm (see the sketch after this list); many improvements in eli5.lime.
- rendering performance is improved;
- the number of remaining feature importances is shown when the feature importance table is truncated;
- styling of feature importances tables is fixed;
- eli5.explain_prediction() supports more linear estimators from scikit-learn: HuberRegressor, LarsCV, LassoCV, LassoLars, LassoLarsCV, LassoLarsIC, OrthogonalMatchingPursuit, OrthogonalMatchingPursuitCV, PassiveAggressiveRegressor, RidgeClassifier, RidgeClassifierCV, TheilSenRegressor.
- text-based formatting of decision trees is changed: for binary classification trees only the probability of the "true" class is printed, not both probabilities as before.
- eli5.explain_weights() supports feature_filter in addition to feature_re for filtering features, and eli5.explain_prediction() now also supports both of these arguments;
- 'Weight' column is renamed to 'Contribution' in the output of eli5.explain_prediction();
- new show_feature_values=True formatter argument allows displaying input feature values;
- fixed an issue with analyzer='char_wb' highlighting at the start of the text.
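A sketch of TextExplainer usage; the pipeline and corpus are made up for illustration::

    import eli5
    from eli5.lime import TextExplainer
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    docs = ["good movie", "bad movie", "great film", "awful film"]
    labels = [1, 0, 1, 0]
    pipe = make_pipeline(TfidfVectorizer(), LogisticRegression())
    pipe.fit(docs, labels)

    # LIME: fit a local white-box model around a single document
    te = TextExplainer(random_state=42)
    te.fit("a great film", pipe.predict_proba)
    te.show_prediction()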
- packaging fixes: require attrs > 16.0.0, fixed README rendering.
- HTML output;
- IPython integration;
- JSON output;
- visualization of scikit-learn text vectorizers;
- sklearn-crfsuite support;
- lightning support;
- eli5.lime improvements: samplers for non-text data, bug fixes, docs;
- HashingVectorizer is supported for regression tasks;
- performance improvements - feature names are lazy;
- sklearn ElasticNetCV and RidgeCV support;
- it is now possible to customize formatting output - show/hide sections, change layout;
- sklearn OneVsRestClassifier support;
- sklearn DecisionTreeClassifier visualization (text-based or svg-based);
- dropped support for scikit-learn < 0.18;
- basic mypy type annotations;
- feature_re argument allows showing only a subset of features;
- target_names argument allows changing display names of targets/classes;
- targets argument allows showing a subset of targets/classes and changing their display order (see the sketch after this list);
- documentation, more examples.
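A sketch of the display arguments mentioned above; the feature and class names are illustrative::

    import eli5
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression

    X, y = load_iris(return_X_y=True)
    clf = LogisticRegression().fit(X, y)

    # rename classes, show a subset of them in a fixed order,
    # and keep only features whose name matches a regexp
    eli5.show_weights(
        clf,
        feature_names=["sepal length", "sepal width",
                       "petal length", "petal width"],
        target_names={0: "setosa", 1: "versicolor", 2: "virginica"},
        targets=[2, 0],
        feature_re="petal",
    )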
- Candidate features in eli5.sklearn.InvertableHashingVectorizer are ordered by their frequency; the first candidate is always positive.
- HashingVectorizer support in explain_prediction (see the sketch after this list);
- added an option to pass a coefficient scaling array; it is useful if you want to compare coefficients for features whose scale or sign differs in the input;
- bug fix: classifier weights are no longer changed by eli5 functions.
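A sketch of HashingVectorizer support in explain_prediction; the data is made up::

    import eli5
    from sklearn.feature_extraction.text import HashingVectorizer
    from sklearn.linear_model import SGDClassifier

    docs = ["good movie", "bad movie", "great film", "awful film"]
    labels = [1, 0, 1, 0]

    vec = HashingVectorizer(n_features=2 ** 10)
    clf = SGDClassifier(random_state=0).fit(vec.transform(docs), labels)

    # words in the document are highlighted even though features are hashed
    eli5.explain_prediction(clf, "a great film", vec=vec)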
- eli5.sklearn.InvertableHashingVectorizer and eli5.sklearn.FeatureUnhasher allow recovering feature names for pipelines which use HashingVectorizer or FeatureHasher;
- added support for scikit-learn linear regression models (ElasticNet, Lars, Lasso, LinearRegression, LinearSVR, Ridge, SGDRegressor);
- doc and vec arguments are swapped in the explain_prediction function; vec can now be omitted if an example is already vectorized (see the sketch after this list);
- fixed issue with dense feature vectors;
- all class_names arguments are renamed to target_names;
- feature name guessing is fixed for scikit-learn ensemble estimators;
- testing improvements.
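A sketch of the resulting call convention: the estimator comes first, then the example, and vec is omitted when the example is already a feature vector (iris data used for illustration)::

    import eli5
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression

    X, y = load_iris(return_X_y=True)
    clf = LogisticRegression().fit(X, y)

    # the example is already vectorized, so no vectorizer is passed;
    # auto-generated feature names (x0, x1, ...) are used
    eli5.explain_prediction(clf, X[0])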
- support for any black-box classifier using the LIME (http://arxiv.org/abs/1602.04938) algorithm; text data support is built-in;
- 'vectorized' argument for sklearn.explain_prediction; it allows passing an example which is already vectorized (see the sketch after this list);
- allow passing feature_names explicitly;
- support for classifiers without a get_feature_names method, using auto-generated feature names.
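A sketch of the 'vectorized' argument with a made-up corpus: the example is passed already vectorized, and the vectorizer is used only to recover feature names::

    import eli5
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    docs = ["good movie", "bad movie", "great film", "awful film"]
    labels = [1, 0, 1, 0]

    vec = CountVectorizer()
    X = vec.fit_transform(docs)
    clf = LogisticRegression().fit(X, labels)

    # X[0] is not passed through vec again; vec only supplies names
    eli5.explain_prediction(clf, X[0], vec=vec, vectorized=True)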
- the 'top' argument of explain_prediction can be a tuple (num_positive, num_negative) (see the sketch after this list);
- classifier name is no longer printed by default;
- added eli5.sklearn.explain_prediction to explain individual examples;
- fixed numpy warning.
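A sketch of the tuple form of 'top', with a made-up corpus::

    import eli5
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    docs = ["good movie", "bad movie", "great film", "awful film"]
    labels = [1, 0, 1, 0]
    vec = CountVectorizer()
    clf = LogisticRegression().fit(vec.fit_transform(docs), labels)

    # keep at most 3 positive and 1 negative feature in the output
    eli5.explain_prediction(clf, "a great film", vec=vec, top=(3, 1))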