TypeError: iteration over a 0-d tensor #25

Open
philtomson opened this issue Sep 20, 2018 · 4 comments

Comments


philtomson commented Sep 20, 2018

I tried running:

python main.py --network_type rnn --dataset ptb --controller_optim adam --controller_lr 0.00035 --shared_optim sgd --shared_lr 20.0 --entropy_coeff 0.0001

But got:


2018-09-20 11:34:56,560:INFO::[*] Make directories : logs/ptb_2018-09-20_11-34-56
2018-09-20 11:35:01,015:INFO::regularizing:
2018-09-20 11:35:06,032:INFO::# of parameters: 146,014,000
2018-09-20 11:35:06,127:INFO::[*] MODEL dir: logs/ptb_2018-09-20_11-34-56
2018-09-20 11:35:06,128:INFO::[*] PARAM path: logs/ptb_2018-09-20_11-34-56/params.json
/home/phil/anaconda3/envs/conda_env/lib/python3.7/site-packages/torch/nn/functional.py:995: UserWarning: nn.functional.tanh is deprecated. Use torch.tanh instead.
  warnings.warn("nn.functional.tanh is deprecated. Use torch.tanh instead.")
/home/phil/anaconda3/envs/conda_env/lib/python3.7/site-packages/torch/nn/functional.py:1006: UserWarning: nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.
  warnings.warn("nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.")
Traceback (most recent call last):
  File "main.py", line 48, in <module>
    main(args)
  File "main.py", line 34, in main
    trnr.train()
  File "/home/phil/devel/ENAS-pytorch/trainer.py", line 216, in train
    self.train_shared()
  File "/home/phil/devel/ENAS-pytorch/trainer.py", line 297, in train_shared
    hidden = utils.detach(hidden)
  File "/home/phil/devel/ENAS-pytorch/utils.py", line 130, in detach
    return tuple(detach(v) for v in h)
  File "/home/phil/devel/ENAS-pytorch/utils.py", line 130, in <genexpr>
    return tuple(detach(v) for v in h)
  File "/home/phil/devel/ENAS-pytorch/utils.py", line 130, in detach
    return tuple(detach(v) for v in h)
  File "/home/phil/devel/ENAS-pytorch/utils.py", line 130, in <genexpr>
    return tuple(detach(v) for v in h)
  File "/home/phil/devel/ENAS-pytorch/utils.py", line 130, in detach
    return tuple(detach(v) for v in h)
  File "/home/phil/anaconda3/envs/conda_env/lib/python3.7/site-packages/torch/tensor.py", line 381, in __iter__
    raise TypeError('iteration over a 0-d tensor')
TypeError: iteration over a 0-d tensor

Running PyTorch 0.4.1 on Python 3.7 (also tried Python 3.6.6 with PyTorch 0.4.1 and had the same issue).
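For anyone who wants to see the failure in isolation, here is a minimal sketch (independent of this repo) of what the last traceback frame reports under PyTorch >= 0.4:

import torch

scalar = torch.tensor(0.2035)   # a 0-d (scalar) tensor, like the hidden-state element the recursion reaches
print(scalar.dim())             # 0

try:
    for v in scalar:            # Tensor.__iter__ rejects 0-d tensors
        pass
except TypeError as e:
    print(e)                    # "iteration over a 0-d tensor"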

@philtomson (Author)

Debugging this a bit: it calls utils.detach() and ends up at:

-> return tuple(detach(v) for v in h)
  /home/phil/devel/ENAS-pytorch/utils.py(131)detach()
-> return tuple(detach(v) for v in h)
> /home/phil/.virtualenvs/p3.6/lib/python3.6/site-packages/torch/tensor.py(380)__iter__()
-> if self.dim() == 0:
(Pdb) pp self
tensor(0.2035, device='cuda:0', grad_fn=<SelectBackward>)
(Pdb) p self.dim()
0

So since it's a scalar and dim() is 0, it's going to raise a TypeError:

(Pdb) ll
373  	    def __iter__(self):
374  	        # NB: we use 'imap' and not 'map' here, so that in Python 2 we get a
375  	        # generator and don't eagerly perform all the indexes.  This could
376  	        # save us work, and also helps keep trace ordering deterministic
377  	        # (e.g., if you zip(*hiddens), the eager map will force all the
378  	        # indexes of hiddens[0] before hiddens[1], while the generator
379  	        # map will interleave them.)
380  ->	        if self.dim() == 0:
381  	            raise TypeError('iteration over a 0-d tensor')
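For context, the helper in utils.py presumably follows the classic repackage-hidden pattern; this is a sketch reconstructed from the traceback, not the exact source. Under PyTorch >= 0.4, Variable was merged into Tensor, so the type(h) == Variable check never matches; every tensor then falls through to the iteration branch, and the recursion eventually reaches a 0-d element:

from torch.autograd import Variable

def detach(h):
    # Pre-0.4 pattern: unwrap Variables, otherwise recurse into what is
    # assumed to be a container of Variables.
    if type(h) == Variable:
        return Variable(h.data)
    else:
        return tuple(detach(v) for v in h)   # the line 130 seen in the traceback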

@philtomson (Author)

If I'm understanding the response to issue #22 correctly:
PyTorch 0.4.1 is not supported (needs to be 0.3.1?)

@philtomson (Author)

Just an FYI for folks wanting to run this code under PyTorch >= 0.4.0: in trainer.py you should not use the call to utils.detach(); instead, call detach_() on hidden:

Change from:

hidden = utils.detach(hidden)

to:

hidden.detach_()
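One caveat with the in-place form: detach_() only works when hidden is a single tensor. If a model hands back a tuple of hidden states (as nn.LSTM does), a small recursive helper that checks for tensors first is more robust; a minimal sketch, assuming PyTorch >= 0.4:

import torch

def detach(h):
    # Detach tensors directly; only recurse into containers.
    if isinstance(h, torch.Tensor):
        return h.detach()
    return tuple(detach(v) for v in h)

hidden = detach(hidden)   # drop-in replacement for utils.detach(hidden)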


usmanhidral commented Aug 31, 2020

I resolved this error by changing this function in the fastai lstm_rnn file.

Before:

def repackage_var(h):
    return Variable(h.data) if type(h) == Variable else tuple(repackage_var(v) for v in h)

After:

def repackage_var(h):
    if not isinstance(h, list) and not isinstance(h, tuple) and len(h.size()) == 0:
        h = torch.tensor([[h.data]])
    return Variable(h.data) if type(h) == Variable else tuple(repackage_var(v) for v in h)
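For what it's worth, the added guard just lifts a scalar to a 2-d tensor before the tuple branch tries to iterate it. A quick illustration with a hypothetical value:

import torch

h = torch.tensor(0.5)                # 0-d scalar; iterating it would raise the TypeError
if not isinstance(h, (list, tuple)) and len(h.size()) == 0:
    h = torch.tensor([[h.data]])     # now shape (1, 1), so iteration is safe
print(h.size())                      # torch.Size([1, 1])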
