_catboost.CatBoostError: /src/catboost/catboost/libs/model/model.cpp:564: Too many features in model, ask catboost team for support #2648
It is exactly the opposite: the error is reported when the condition featureIndex.FeatureIdx <= 0xffff does not hold.
You can see it in the code in model.cpp. Basically, all of the distinct binary splits in the trees of the trained model are grouped into "features". Features correspond to real features (including features implicitly derived from the original categorical, text and embedding features), provided the number of splits for a particular feature stays below the limit.

I suspect that the issue here is that too many derived features are implicitly generated from the categorical features and used during training. Do you have categorical features, and if so, how many? How many distinct values (categories) do they have?
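The combinatorial blow-up the maintainer describes can be illustrated with a rough count (an illustrative estimate, not CatBoost's exact internal accounting; the real set of CTR combinations depends on the training parameters and data). With max_ctr_complexity = k, CTR features can be derived from combinations of up to k categorical features, and with ~900 categorical features that count dwarfs the 0xffff feature-index limit:

```python
import math

# Rough upper bound on the number of categorical-feature combinations
# that derived CTR features could be built from. Illustrative only.
n_cat_features = 900  # as reported in this thread

for k in (1, 2, 3):  # candidate max_ctr_complexity values
    combos = sum(math.comb(n_cat_features, i) for i in range(1, k + 1))
    print(f"max_ctr_complexity={k}: up to {combos} feature combinations")

# Even at complexity 2 there are over 400,000 combinations, far beyond
# the 0xffff (= 65535) feature-index limit checked in model.cpp, if many
# of them end up used by splits in the final model.
```

This is only a ceiling; CatBoost will not materialize every combination, but it suggests why the limit can be exceeded with many categorical features.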
I have almost 900 categorical features. Most have between 2 and 30 distinct values each, and none has more than 300. I checked some previous issues and found that reducing max_ctr_complexity can solve this problem, but it significantly hurts the model's accuracy.
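One mitigation sometimes suggested for this situation (a sketch, not advice from this thread; the parameter values are assumptions to tune): since most of these features have few distinct values, raising one_hot_max_size makes CatBoost one-hot encode them instead of building CTR combinations for them, which may cost less accuracy than lowering max_ctr_complexity alone:

```python
# Illustrative CatBoost parameters (values are assumptions, tune for your data):
params = {
    # One-hot encode any categorical feature with <= 300 distinct values
    # instead of generating CTR features for it.
    "one_hot_max_size": 300,
    # Limit CTR features to combinations of at most 2 categorical features.
    "max_ctr_complexity": 2,
}
# Passed to the usual constructor, e.g.:
# model = CatBoostClassifier(**params)
```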
Hello!
Problem: _catboost.CatBoostError: /src/catboost/catboost/libs/model/model.cpp:564: Too many features in model, ask catboost team for support
catboost version: 1.2.3
Operating System: Ubuntu 20.04.5
CPU: Intel(R) Xeon(R) Platinum 8163 CPU @ 2.50GHz
GPU: Tesla-V100-32G
Log:
bestTest = 0.08237376672
bestIteration = 12065
Shrink model to first 12066 iterations.
Traceback (most recent call last):
File "/root/code/leads_mining/dlc_train_file.py", line 183, in
model = train_cbm_model(train_df, train_type)
File "/root/code/leads_mining/dlc_train_file.py", line 126, in train_cbm_model
model.fit(X_train, y_train, eval_set=[(X_test, y_test)], early_stopping_rounds=100,
File "/home/pai/lib/python3.9/site-packages/catboost/core.py", line 5201, in fit
self._fit(X, y, cat_features, text_features, embedding_features, None, sample_weight, None, None, None, None, baseline, use_best_model,
File "/home/pai/lib/python3.9/site-packages/catboost/core.py", line 2396, in _fit
self._train(
File "/home/pai/lib/python3.9/site-packages/catboost/core.py", line 1776, in _train
self._object._train(train_pool, test_pool, params, allow_clear_pool, init_model._object if init_model else None)
File "_catboost.pyx", line 4833, in _catboost._CatBoost._train
File "_catboost.pyx", line 4882, in _catboost._CatBoost._train
_catboost.CatBoostError: /src/catboost/catboost/libs/model/model.cpp:564: Too many features in model, ask catboost team for support
The error is only reported after training finishes. After checking the source code, I found that it is raised by the check featureIndex.FeatureIdx <= 0xffff. Since training itself runs to completion, why is the error reported only at the end?
Is there a way to find out what the value of featureIndex.FeatureIdx will be before training starts?
Finally, could you please explain what the featureIndex.FeatureIdx field means? Thank you.
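The 0xffff constant suggests the model format stores the feature index in a 16-bit field, so any index above 65535 cannot be represented, and the guard fires only when the trained ensemble is converted into the final model, after training has already completed. The sketch below paraphrases that kind of guard in Python; it is an assumption based on the error message and constant, not the actual CatBoost source:

```python
# Sketch of the kind of guard behind the error (assumed from the 0xffff
# constant in the message, not copied from CatBoost's model.cpp).
FEATURE_IDX_LIMIT = 0xFFFF  # 65535, the largest value a 16-bit index holds

def check_feature_index(feature_idx: int) -> None:
    """Raise if a feature index cannot fit the (assumed) 16-bit field."""
    if not feature_idx <= FEATURE_IDX_LIMIT:
        raise RuntimeError(
            "Too many features in model, ask catboost team for support"
        )

check_feature_index(65535)      # at the limit: passes
try:
    check_feature_index(65536)  # one past the limit: raises
except RuntimeError as e:
    print(e)
```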