
[BUG] ETSformer training on GPU #160

Open
Sharaddition opened this issue Oct 13, 2023 · 0 comments
The ETSformerForecaster trainer does not work in GPU notebooks.

Error:

indices should be either on cpu or on the same device as the indexed tensor (cpu)
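For context, this is a standard PyTorch error raised when a tensor is indexed with an index tensor that lives on a different device. A minimal sketch that reproduces the same message (this is illustrative only, not Merlion's actual code):

import torch

# A CPU tensor indexed with a CUDA index tensor triggers the error seen above.
x = torch.arange(10)                       # indexed tensor stays on CPU
idx = torch.tensor([1, 3], device="cuda")  # index tensor on GPU
y = x[idx]  # RuntimeError: indices should be either on cpu or on the same
            # device as the indexed tensor (cpu)

# The usual fix is to move one side so both tensors share a device:
y = x.to(idx.device)[idx]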

To Reproduce

# Taken from Dashboard code
import pandas as pd

from merlion.models.factory import ModelFactory
from merlion.utils import TimeSeries

# forecast_steps, n_past, exog_columns, target_column, feature_columns,
# train_df and test_df are defined earlier in the dashboard code.
params = {'max_forecast_steps': forecast_steps, 'n_past': n_past, 'use_gpu': True}
model_class = ModelFactory.get_model_class('ETSformerForecaster')
model = model_class(model_class.config_class(**params))

# EXOG DATA
if model.supports_exog and len(exog_columns) > 0:
    print('Exog Support: True')
    exog_ts = TimeSeries.from_pd(pd.concat((train_df.loc[:, exog_columns], test_df.loc[:, exog_columns])))
    train_df = train_df.loc[:, [target_column] + feature_columns]
    test_df = test_df.loc[:, [target_column] + feature_columns]
else:
    print('Exog Support: False')
    exog_ts = None

train_ts = TimeSeries.from_pd(train_df)
predictions = model.train(train_ts, exog_data=exog_ts)

Device:

  • Colab/Kaggle Notebook
  • Merlion Version 2.0.2

Additional context
The model trains fine on CPU but, as expected, takes a long time to train on large data.

Screenshot: etsformer-issue (error traceback image attached in the original issue)
