bug help!!!!! #222

Open
quantumliang opened this issue Jan 19, 2024 · 2 comments
Labels
bug Something isn't working

Comments


quantumliang commented Jan 19, 2024

 0%|                                        | 1/2303 [00:06<4:13:39,  6.61s/it
  File "/home/zl/miniconda3/envs/pytorch_gpu/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/zl/quantum_molecular_generate/architecture/torchquantum_model.py", line 75, in forward
    tqf.cnot(self.device, wires=[k, 0])
  File "/home/zl/miniconda3/envs/pytorch_gpu/lib/python3.8/site-packages/torchquantum-0.1.7-py3.8.egg/torchquantum/functional/functionals.py", line 2072, in cnot
    gate_wrapper(
  File "/home/zl/miniconda3/envs/pytorch_gpu/lib/python3.8/site-packages/torchquantum-0.1.7-py3.8.egg/torchquantum/functional/functionals.py", line 372, in gate_wrapper
    q_device.states = apply_unitary_bmm(state, matrix, wires)
  File "/home/zl/miniconda3/envs/pytorch_gpu/lib/python3.8/site-packages/torchquantum-0.1.7-py3.8.egg/torchquantum/functional/functionals.py", line 251, in apply_unitary_bmm
    new_state = mat.expand(expand_shape).bmm(permuted)
  File "/home/zl/miniconda3/envs/pytorch_gpu/lib/python3.8/site-packages/torch/fx/traceback.py", line 57, in format_stack
    return traceback.format_stack()
 (Triggered internally at ../torch/csrc/autograd/python_anomaly_mode.cpp:114.)
  Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
  0%|                                        | 1/2303 [00:12<7:50:40, 12.27s/it]
Traceback (most recent call last):
  File "/home/zl/quantum_molecular_generate/architecture/train.py", line 61, in <module>
    loss.backward()
  File "/home/zl/miniconda3/envs/pytorch_gpu/lib/python3.8/site-packages/torch/_tensor.py", line 488, in backward
    torch.autograd.backward(
  File "/home/zl/miniconda3/envs/pytorch_gpu/lib/python3.8/site-packages/torch/autograd/__init__.py", line 197, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.

This happens while I run my code. I don't want to solve it with retain_graph=True, because that makes training slower and slower. So what am I supposed to do?
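(For context, this RuntimeError can be reproduced in plain PyTorch whenever a tensor built in one iteration's graph is reused in the next iteration; the sketch below is illustrative and is not the reporter's code.)

import torch

# A tensor carried over between iterations keeps its autograd graph, so the
# second backward() walks into the graph that was already freed by the first.
w = torch.randn(3, requires_grad=True)
state = torch.ones(3)

for step in range(2):
    state = state * w   # step 1 still references the graph built in step 0
    loss = state.sum()
    loss.backward()     # raises the error above on the second iteration

# Detaching the carried-over tensor between iterations (state = state.detach())
# avoids the error without retain_graph=True.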

@GenericP3rson
Collaborator

Hi! Could you provide a minimal example of the code so we can try to help debug?

@GenericP3rson GenericP3rson added the bug Something isn't working label Jan 27, 2024
@nikhilkhatri

I had this bug too. In my code it happened because I had a QuantumDevice whose state I didn't reset at the beginning of forward. This meant that the final state of the qdevice from the previous training iteration (with its autograd graph still attached) became the initial state for the next iteration.

Making sure all QuantumDevice states were reset at the beginning of forward fixed this for me.
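
For anyone hitting the same thing, here is a minimal sketch of that fix (the module structure and wire counts are made up, and it assumes the torchquantum 0.1.x API where QuantumDevice.reset_states(bsz) re-initialises the state to |0...0>):

import torch
import torchquantum as tq
import torchquantum.functional as tqf

class Model(tq.QuantumModule):
    def __init__(self, n_wires=4):
        super().__init__()
        self.n_wires = n_wires
        self.q_device = tq.QuantumDevice(n_wires=n_wires)
        self.rx = tq.RX(has_params=True, trainable=True)
        self.measure = tq.MeasureAll(tq.PauliZ)

    def forward(self, x):
        # Reset the device for the current batch; without this, the state
        # (and its autograd graph) from the previous training iteration is
        # carried over, which triggers the "backward a second time" error.
        self.q_device.reset_states(bsz=x.shape[0])

        self.rx(self.q_device, wires=0)
        for k in range(1, self.n_wires):
            tqf.cnot(self.q_device, wires=[k, 0])
        return self.measure(self.q_device)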
