Make GlobalPhase not differentiable #5620
Conversation
Thanks for this @Tarun-Kumar07. For the failures due to errors: that would be expected, and we should shift the measurement to expectation values. For the other failures: those are legitimately different results, so we can safely say we are getting wrong results in that case 😢 I'll investigate.
I left a couple of small comments and one major suggestion: could we set `GlobalPhase.grad_method = "F"`? This will produce unnecessary shifted tapes for expectation values and probabilities, but it will avoid wrong results when differentiating `qml.state` with `finite_diff` and `param_shift`.
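For context, `grad_method = "F"` makes the gradient fall back to numerical finite differences rather than an analytic recipe. A minimal sketch of a central finite difference (simplified; PennyLane's actual `finite_diff` transform is far more general):

```python
import numpy as np

def central_finite_diff(f, theta, h=1e-6):
    # Central-difference approximation to f'(theta); this is the kind of
    # numerical fallback that grad_method = "F" selects.
    return (f(theta + h) - f(theta - h)) / (2 * h)

# Sanity check on a function with a known derivative: d/dx sin(x) = cos(x).
print(central_finite_diff(np.sin, 0.3))  # approximately np.cos(0.3)
```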
tests/templates/test_state_preparations/test_mottonen_state_prep.py (resolved review threads, outdated)
@Tarun-Kumar07 @albi3ro Not sure you got to this yet, but it seems that the decomposition of those state preparation methods handles special parameter values differently than other values, which makes the derivative wrong at those special values. Basically, the decomposition does something like the following:

```python
def compute_decomposition(theta, wires):
    if not qml.math.is_abstract(theta) and qml.math.isclose(theta, 0):
        return []
    return [qml.RZ(theta, wires)]
```

This is correct, but it does not have the correct parameter-shift derivative at 0. This looks like an independent bug to me, and one that could theoretically be hiding across the codebase for other ops as well.
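To illustrate why a special-cased decomposition breaks the shift rule, here is a toy NumPy model (not PennyLane itself, and not the actual ops in this PR): the expectation ⟨Y⟩ after RX(θ)|0⟩ equals -sin(θ), and dropping the gate at θ = 0 turns the circuit into a constant that the shifted evaluations can no longer probe.

```python
import numpy as np

def param_shift(f, theta):
    # Standard two-term parameter-shift rule for a Pauli rotation.
    return 0.5 * (f(theta + np.pi / 2) - f(theta - np.pi / 2))

# Full circuit: <Y> after RX(theta)|0> is -sin(theta); its derivative at 0 is -1.
full = lambda t: -np.sin(t)

def decompose(theta):
    # Mimics a decomposition that special-cases theta == 0: the gate is
    # dropped entirely, leaving a constant circuit with no trainable parameter.
    if np.isclose(theta, 0.0):
        return lambda t: 0.0
    return full

print(param_shift(full, 0.0))             # -1.0 (correct derivative)
print(param_shift(decompose(0.0), 0.0))   # 0.0 (wrong: the gate was removed)
```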
**Context:**

When using the following state preparation methods (`AmplitudeEmbedding`, `StatePrep`, `MottonenStatePreparation`) with `jit` and `grad`, the error `ValueError: need at least one array to stack` was encountered.

**Description of the Change:**

All state preparation strategies used `GlobalPhase` under the hood, which caused the above error. After this PR, `GlobalPhase` may not be differentiable anymore, as its `grad_method` is set to `None`.

**Benefits:**

**Possible Drawbacks:**

**Related GitHub Issues:**

It fixes #5541
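A minimal mock of what the change means mechanically (hypothetical classes; this mirrors but does not reproduce PennyLane's real class hierarchy): gradient machinery skips parameters of operations whose `grad_method` is `None`.

```python
# Hypothetical mock illustrating the PR's change; the names echo PennyLane
# but the classes below are stand-ins for this sketch only.
class Operation:
    grad_method = "A"   # "A": analytic (parameter-shift) by default

class RZ(Operation):
    pass                # inherits "A": still differentiable

class GlobalPhase(Operation):
    grad_method = None  # after this PR: excluded from differentiation

def differentiable(op):
    # A gradient transform would skip ops whose grad_method is None.
    return op.grad_method is not None

print(differentiable(RZ()), differentiable(GlobalPhase()))  # True False
```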