_catboost.CatBoostError: /src/catboost/catboost/libs/model/model.cpp:564: Too many features in model, ask catboost team for support #2648

Open
yuzumei opened this issue Apr 24, 2024 · 3 comments

Comments

@yuzumei

yuzumei commented Apr 24, 2024

Problem: _catboost.CatBoostError: /src/catboost/catboost/libs/model/model.cpp:564: Too many features in model, ask catboost team for support
catboost version: 1.2.3
Operating System: Ubuntu 20.04.5
CPU: Intel(R) Xeon(R) Platinum 8163 CPU @ 2.50GHz
GPU: Tesla-V100-32G

Log:
bestTest = 0.08237376672
bestIteration = 12065
Shrink model to first 12066 iterations.
Traceback (most recent call last):
  File "/root/code/leads_mining/dlc_train_file.py", line 183, in <module>
    model = train_cbm_model(train_df, train_type)
  File "/root/code/leads_mining/dlc_train_file.py", line 126, in train_cbm_model
    model.fit(X_train, y_train, eval_set=[(X_test, y_test)], early_stopping_rounds=100,
  File "/home/pai/lib/python3.9/site-packages/catboost/core.py", line 5201, in fit
    self._fit(X, y, cat_features, text_features, embedding_features, None, sample_weight, None, None, None, None, baseline, use_best_model,
  File "/home/pai/lib/python3.9/site-packages/catboost/core.py", line 2396, in _fit
    self._train(
  File "/home/pai/lib/python3.9/site-packages/catboost/core.py", line 1776, in _train
    self._object._train(train_pool, test_pool, params, allow_clear_pool, init_model._object if init_model else None)
  File "_catboost.pyx", line 4833, in _catboost._CatBoost._train
  File "_catboost.pyx", line 4882, in _catboost._CatBoost._train
_catboost.CatBoostError: /src/catboost/catboost/libs/model/model.cpp:564: Too many features in model, ask catboost team for support

The model only reported this error after training had finished. After checking the source code, I found that the error was reported because featureIndex.FeatureIdx <= 0xffff. Since training itself runs to completion, why is the error only reported at the end?
Is there a way to find out the value of featureIndex.FeatureIdx before model training starts?
Finally, could you please explain what the featureIndex.FeatureIdx field means? Thank you.

@andrey-khropov
Member

> After checking the source code, I found that the error was reported because featureIndex.FeatureIdx <= 0xffff.

It is exactly the opposite: the condition featureIndex.FeatureIdx <= 0xffff is checked, and if it is false then the error is reported.

> Finally, could you please explain what the featureIndex.FeatureIdx field means? Thank you.

You can see it in the code in model.cpp.

Basically, all the different binary splits in the trees of the trained model are grouped into "features". These correspond one-to-one to real features (including features implicitly derived from the original categorical, text and embedding features) when the number of splits for a particular feature is less than MAX_VALUES_PER_BIN (which is 254); otherwise the original feature is split into subfeatures with at most MAX_VALUES_PER_BIN splits per subfeature.
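
For illustration, here is a minimal sketch of the grouping arithmetic described above (this is not CatBoost source code; `num_subfeatures` is a hypothetical helper):

```python
# Illustration (not CatBoost source): splits are grouped into model
# "features" holding at most MAX_VALUES_PER_BIN splits each.
import math

MAX_VALUES_PER_BIN = 254

def num_subfeatures(num_splits: int) -> int:
    """How many model (sub)features are needed to hold this many splits."""
    if num_splits < MAX_VALUES_PER_BIN:
        return 1  # fits into a single model feature
    return math.ceil(num_splits / MAX_VALUES_PER_BIN)

# A feature with 1000 distinct splits occupies ceil(1000 / 254) = 4
# subfeatures in the saved model; each subfeature consumes a FeatureIdx slot.
print(num_subfeatures(1000))  # -> 4
```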

I suspect that the issue here is that too many derived features from categorical features are implicitly generated and used during the training.

Do you have categorical features, and if the answer is positive - how many? How many distinct values (categories) do they have?
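
A quick way to check, as a sketch (the tiny DataFrame is a stand-in for the real training data):

```python
# Sketch: count categorical features and their cardinalities.
import pandas as pd

df = pd.DataFrame({              # stand-in for the real training data
    "city": ["a", "b", "a", "c"],
    "plan": ["x", "x", "y", "y"],
    "age":  [21, 35, 42, 28],
})
cat_cols = df.select_dtypes("object").columns  # or your explicit cat_features list

print(f"{len(cat_cols)} categorical features")
print(df[cat_cols].nunique().sort_values(ascending=False))
```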

@yuzumei
Author

yuzumei commented Apr 26, 2024

> Do you have categorical features, and if the answer is positive - how many? How many distinct values (categories) do they have?

I have almost 900 categorical features. Each feature generally has 2 to 30 distinct values, and never more than 300. I checked some previous issues and found that reducing max_ctr_complexity can solve this problem, but that greatly affects the accuracy of the model.
What I want to know is: is featureIndex.FeatureIdx fixed before model training starts, or will it continue to increase as the model is trained? Can I make training stop as soon as it exceeds 0xffff? Otherwise a lot of time is wasted, because the error is not reported until training is complete.
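
Rough arithmetic suggests why ~900 categorical features can exceed the limit once feature combinations come into play. A back-of-the-envelope sketch based only on the numbers above, not on CatBoost internals:

```python
# Rough estimate: with max_ctr_complexity >= 2, CatBoost can build CTR
# features over combinations of categorical features, so the number of
# candidate derived features grows combinatorially.
import math

n_cat = 900                  # categorical features reported above
FEATURE_IDX_LIMIT = 0xFFFF   # 65535, the FeatureIdx limit checked in model.cpp

pairs = math.comb(n_cat, 2)  # 2-feature combinations: 404550
print(pairs, pairs > FEATURE_IDX_LIMIT)  # 404550 True

# Even if only a fraction of these combinations end up as splits in the
# trees, the model's feature index can pass 0xffff.
```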

@ek-ak
Collaborator

ek-ak commented Apr 27, 2024

Hello!

> or will it continue to increase as the model is trained

Yes. A new counter feature can be generated for each new split: https://catboost.ai/en/docs/concepts/algorithm-main-stages_cat-to-numberic
You can use the model-size-reg option to decrease the number of resulting features; it should solve your problem.
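
In the Python package this option is exposed as the model_size_reg parameter. A minimal sketch of how it might be applied (the tiny synthetic dataset and the chosen regularization value are illustrative assumptions):

```python
# Sketch: model size regularization shrinks the number of derived
# (mostly CTR) features kept in the final model.
import pandas as pd
from catboost import CatBoostClassifier

df = pd.DataFrame({
    "city":  ["a", "b", "a", "c", "b", "c", "a", "b"],
    "plan":  ["x", "x", "y", "y", "x", "y", "y", "x"],
    "label": [0, 1, 0, 1, 0, 1, 1, 0],
})
X, y = df[["city", "plan"]], df["label"]

model = CatBoostClassifier(
    iterations=50,
    model_size_reg=1.0,      # default is 0.5; larger values -> smaller model
    # max_ctr_complexity=2,  # optionally also cap CTR feature combinations
    verbose=False,
)
model.fit(X, y, cat_features=["city", "plan"])
```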
