
Make the default shift size h of finite_diff and spsa_grad depend on shots #5568

Draft: wants to merge 5 commits into master
Conversation

@dwierichs (Contributor) commented Apr 24, 2024

Context:
Finite differences use a shift parameter h by which input parameters are shifted. The optimal choice of this parameter (bias-variance tradeoff) is not known a priori, but it is well-known that analytic and shot-based computations have very different optimal settings.
The current default of 1e-7 is a poor choice for shot-based evaluation because of the large variance incurred by sampling; for shots=None it remains a reasonable setting.

The same holds for spsa_grad, which uses a finite difference recipe under the hood as well.
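
An illustrative note (not from the PR text itself): for a first-order forward difference with $N$ shots per evaluation and single-shot variance $\sigma^2$, the mean-squared error of the gradient estimate roughly decomposes as

$$\mathrm{MSE}(h) \approx \underbrace{O(h^2)}_{\text{bias}^2} + \underbrace{\frac{2\sigma^2}{N h^2}}_{\text{shot noise}},$$

so the optimal shift scales like $h^\ast \sim (\sigma^2/N)^{1/4}$ and grows with the noise level, which is why a shift as small as 1e-7 is essentially never reasonable at finite shots.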

Description of the Change:
Introduce helper functions in finite_diff and spsa_grad that set the default value of h to 0.1 when the differentiated tape has shots != None. If shots = None, the previous default values (1e-7 for finite_diff, 1e-5 for spsa_grad) are kept, and a user-specified h of course overrides these defaults; see the sketch below.
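
A minimal sketch of the intended logic (the helper name and signature here are hypothetical; `tape.shots.total_shots` assumes PennyLane's `Shots` interface, and the actual code in this PR may differ):

```python
# Hypothetical sketch, not the literal implementation in this PR.
def _process_h(h, tape, analytic_default):
    """Resolve the finite-difference shift size for a given tape.

    ``analytic_default`` would be 1e-7 for ``finite_diff`` and
    1e-5 for ``spsa_grad``.
    """
    if h is not None:
        # A user-specified h always takes precedence over any default.
        return h
    if tape.shots.total_shots is None:
        # Analytic execution: keep the previous tiny default shift.
        return analytic_default
    # Finite shots: a much coarser shift keeps the sampling variance
    # of (f(x + h) - f(x)) / h under control.
    return 0.1
```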

Benefits:
Reasonable results from the gradient transforms when shots are used and h is not specified manually (see the usage sketch after this list).
This matters in particular when param_shift falls back to finite_diff, where no h value can be passed.
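
For illustration, a usage sketch assuming this PR were merged (the device settings and circuit are made up for the example):

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1, shots=1000)

@qml.qnode(dev)
def circuit(x):
    qml.RX(x, wires=0)
    return qml.expval(qml.PauliZ(0))

params = np.array(0.5, requires_grad=True)

# No h given: with this change, the shots-aware default h=0.1 would be
# used; with the old default h=1e-7, the result was dominated by shot noise.
grad = qml.gradients.finite_diff(circuit)(params)
```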

Possible Drawbacks:
The new defaults are still somewhat arbitrary, but so were the previous ones.

Related GitHub Issues:
#904
Regarding this issue: a considerably more involved relationship between shots and shift size is proposed in the discussion there. Given that unknown parameters would be needed to implement the equation suggested there, and that it only holds for a particular finite-difference recipe (and thus not for other approx_order, n, or strategy settings), implementing such a complex heuristic seems infeasible. This PR is a coarse-grained fix that prevents PennyLane from running almost-surely unreasonable circuits by default.
@glassnotes @co9olguy

[sc-4095]

Comment on lines -361 to +369:
- >>> qml.gradients.finite_diff(circuit, h=10e-2)(params)
+ >>> qml.gradients.finite_diff(circuit, h=0.1)(params)
@dwierichs (Contributor, Author) commented:
Not a change: 10e-2 equals 0.1.

@@ -787,6 +779,7 @@ def test_var_expectation_values(self, approx_order, strategy, validate):
     def test_prob_expectation_values(self, approx_order, strategy, validate):
         """Tests correct output shape and evaluation for a tape
         with prob and expval outputs"""
+        np.random.seed(239)
@dwierichs (Contributor, Author) commented:
This test failed randomly while testing locally, so I added a seed.

codecov bot commented Apr 24, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 99.68%. Comparing base (10d59e7) to head (1f7834d).
Report is 1 commit behind head on master.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #5568      +/-   ##
==========================================
- Coverage   99.69%   99.68%   -0.01%     
==========================================
  Files         410      410              
  Lines       38247    37966     -281     
==========================================
- Hits        38130    37848     -282     
- Misses        117      118       +1     


@dwierichs added the "do not merge ⚠️ Do not merge the pull request until this label is removed" label on Apr 24, 2024
@dwierichs (Contributor, Author) commented:
It was decided not to implement this change. Good to close, @trbromley @co9olguy?
