
Evaluation of multiple windows took too long #3143

Open
moghadas76 opened this issue Mar 19, 2024 · 1 comment
Labels
question Further information is requested

Comments

@moghadas76

Description

Evaluating multiple windows took far too long (25 minutes!). I am tired of this package. It is real garbage!

To Reproduce

import json

import numpy as np
import pandas as pd

from gluonts.dataset.common import Dataset
from gluonts.dataset.field_names import FieldName
from gluonts.dataset.split import split
from gluonts.dataset.util import period_index
from gluonts.evaluation import Evaluator

def _to_dataframe(input_label) -> pd.DataFrame:
    # Concatenate the input and label parts of a test instance into one DataFrame.
    start = input_label[0][FieldName.START]
    targets = [entry[FieldName.TARGET] for entry in input_label]
    full_target = np.concatenate(targets, axis=-1)
    index = period_index(
        {FieldName.START: start, FieldName.TARGET: full_target}
    )
    return pd.DataFrame(full_target.transpose(), index=index)

def _make_evaluation_predictions(
    dataset: Dataset,
    predictor,
    num_samples: int = 100,
):
    window_length = predictor.prediction_length + predictor.lead_time
    N_WINDOWS = 50
    # Keep the last N_WINDOWS * window_length points of each series for testing.
    _, test_template = split(dataset, offset=-window_length * N_WINDOWS)
    test_data = test_template.generate_instances(window_length, windows=N_WINDOWS)

    return (
        predictor.predict(test_data.input, num_samples=num_samples),
        map(_to_dataframe, test_data),
    )

forecast_it, ts_it = _make_evaluation_predictions(
    dataset=dataset_04.test,  # test dataset
    predictor=predictor,  # predictor
    num_samples=200,  # number of sample paths we want for evaluation
)
evaluator = Evaluator(
    custom_eval_fn={
        # `metrics` is a user-defined module providing mae/mape/rmse functions
        "mae_sota": [metrics.mae, "mean", "median"],
        "mape_sota": [metrics.mape, "mean", "median"],
        "rmse_sota": [metrics.rmse, "mean", "median"],
    }
)
agg_m, item_m = evaluator(
    ts_iterator=ts_it,
    fcst_iterator=forecast_it,
    num_series=len(dataset_04.test),
)
print(json.dumps(agg_m, indent=4))
print(agg_m["mae_sota"], agg_m["mape_sota"], agg_m["rmse_sota"])
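
One way to narrow down where the 25 minutes are spent (not part of the original report, just a sketch assuming the same dataset_04, predictor, and metrics setup as above): materialize the forecasts first, so that model inference and metric aggregation are timed separately. The num_workers argument of Evaluator, which recent GluonTS releases expose for parallelizing the per-series metric computation, is also worth trying; treat both as suggestions rather than confirmed fixes.

import time

forecast_it, ts_it = _make_evaluation_predictions(
    dataset=dataset_04.test,
    predictor=predictor,
    num_samples=200,
)

# Draw all sample paths now, so inference time is measured on its own.
t0 = time.time()
forecasts = list(forecast_it)
series = list(ts_it)
print(f"inference: {time.time() - t0:.1f}s for {len(series)} test instances")

# Metric computation only; num_workers > 0 spreads the per-series work
# across worker processes (assumed to be available in this GluonTS version).
t0 = time.time()
evaluator = Evaluator(num_workers=4)
agg_m, item_m = evaluator(iter(series), iter(forecasts), num_series=len(series))
print(f"evaluation: {time.time() - t0:.1f}s")

If the first number dominates, reducing num_samples (e.g. from 200 to 100) is the most direct lever; if the second dominates, the Evaluator configuration is the place to look.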

Environment

  • Operating system: 22.04
  • Python version: 3.11
  • GluonTS version: 0.13.3
  • MXNet version:

(Add as much information about your environment as possible, e.g. dependencies versions.)

@moghadas76 moghadas76 added the bug Something isn't working label Mar 19, 2024
@lostella
Contributor

lostella commented Apr 5, 2024

Hi @moghadas76, can you share details about the test data that you're using and the model? Without that, it's hard to tell whether there's an actual issue here.
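
For reference, the kind of detail asked for here can be collected with a short snippet. This is a sketch assuming the dataset_04 and predictor objects from the report above, and that calculate_dataset_statistics is available in gluonts.dataset.stats in the installed release:

from gluonts.dataset.stats import calculate_dataset_statistics

# Number of series, target lengths, frequency, etc. of the test dataset.
print(calculate_dataset_statistics(dataset_04.test))

# Which predictor class is being used and the horizon it forecasts.
print(type(predictor).__name__, predictor.prediction_length, predictor.lead_time)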

@lostella lostella added question Further information is requested and removed bug Something isn't working labels Apr 5, 2024