MemoryError #65
**Issue:**

I face a MemoryError trying to train a big model. Is there any way to train using all available GPUs? I am using a Linux machine (gcloud) with 8 GPUs.

```
Traceback (most recent call last):
  File "main.py", line 445, in <module>
    train_model(parameters, args.dataset)
  File "main.py", line 73, in train_model
    dataset = build_dataset(params)
  File "/mnt/sdc200/nmt-keras/data_engine/prepare_data.py", line 229, in build_dataset
    saveDataset(ds, params['DATASET_STORE_PATH'])
  File "/mnt/sdc200/nmt-keras/src/keras-wrapper/keras_wrapper/dataset.py", line 52, in saveDataset
    pk.dump(dataset, open(store_path, 'wb'), protocol=-1)
MemoryError
```

**Comment:**

It seems the problem is not the GPU memory, but the RAM. How big is the dataset? Can you provide the full output of the training process?
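Since the traceback fails inside `pickle.dump` of the whole dataset, the relevant limit is host RAM at save time, not GPU memory. The sketch below is a hypothetical stand-alone helper (not part of nmt-keras; `save_dataset`, the stand-in dataset, and the path are assumptions) that mirrors the failing `pk.dump(..., protocol=-1)` call and reports the process's peak resident memory, which is one way to confirm the maintainer's RAM diagnosis:

```python
import os
import pickle
import resource  # Unix-only; used to read peak resident set size


def save_dataset(dataset, store_path):
    """Pickle `dataset` to `store_path` and return peak RSS in kilobytes.

    Hypothetical helper for diagnosis only: `dataset` stands in for the
    keras_wrapper Dataset object; any large Python object will do.
    """
    with open(store_path, "wb") as f:
        # protocol=-1 selects the highest available pickle protocol,
        # matching the saveDataset call in the traceback.
        pickle.dump(dataset, f, protocol=-1)
    # On Linux, ru_maxrss is reported in kilobytes.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss


if __name__ == "__main__":
    data = list(range(1_000_000))  # stand-in for a large dataset
    peak_kb = save_dataset(data, "/tmp/dataset.pkl")
    size = os.path.getsize("/tmp/dataset.pkl")
    print(f"peak RSS: {peak_kb} kB, pickle size on disk: {size} bytes")
```

If the reported peak approaches the machine's physical RAM while building or saving the dataset, the `MemoryError` is expected regardless of how many GPUs are available.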