
Issue with Categorical Feature Encoding in Binary Classification #2636

Open
ArtemBoltaev opened this issue Apr 16, 2024 · 1 comment

@ArtemBoltaev
Hello,

First, I would like to express my appreciation for the CatBoost library; it has been a fantastic tool for numerous machine learning tasks. However, I've encountered an encoding anomaly with categorical features that I cannot explain.

Reproduction Steps:
I created a simplified dataset for a binary classification task with a single categorical feature that has three unique values. Two of these values correspond to conversions in the dataset, while the third has zero conversions. Using CatBoost "out of the box," the model fails to differentiate between the categories; i.e., it outputs the same prediction for all feature values at test time.

What I've Tried:

  1. I consulted the documentation and searched Google for insights.
  2. I exported the model to Python code and reverse-engineered it. I found that the feature hash was not being calculated, which triggers the `if bucket is None:` branch in the `calc_ctr()` method, which then falls back to `ctr.calc(0, 0)`.
  3. Changing `simple_ctr` from `Borders` to `Buckets`, or increasing `CtrBorderCount`, appears to differentiate the classes correctly.

Attachments:
I am attaching a Jupyter notebook with the example for your reference. catboost_debug_encoding.ipynb.zip

Could you please help me understand why the default settings fail to distinguish between these categories, and suggest possible steps to resolve this?

Thank you for your assistance and for developing such a powerful tool.

@ek-ak (Collaborator) commented May 6, 2024

Hello!
It seems that in your case (there are very few distinct values in your categorical feature), the best option is to use one-hot encoded features (set the `one_hot_max_size` option to 100). We will check why this is not the default behaviour in your case.
