Add a Derivative metagridder to predict derivatives #245

Draft · leouieda wants to merge 12 commits into main

Conversation

@leouieda (Member) commented Mar 17, 2020

The `verde.Gradient` class is a meta-gridder that calculates directional derivatives from other fitted estimators. It takes as arguments the estimator, a finite-difference step, and a direction vector. Calling `Gradient.predict` calculates the derivative in the given direction using centred finite-differences. Since it inherits from `BaseGridder`, grids, scatters, and profiles of the gradient can also be calculated without any extra code.
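For reference, the centred finite-difference logic boils down to something like this (a minimal sketch, not the actual code in this PR; the function and variable names are illustrative):

import numpy as np

def predict_derivative(estimator, coordinates, step, direction):
    # Normalize so only the orientation of the direction vector matters
    direction = np.asarray(direction, dtype=float)
    direction = direction / np.linalg.norm(direction)
    # Shift the coordinates forward and backward along the direction
    forward = tuple(c + step * d for c, d in zip(coordinates, direction))
    backward = tuple(c - step * d for c, d in zip(coordinates, direction))
    # Centred difference: (f(x + h*v) - f(x - h*v)) / (2*h)
    return (estimator.predict(forward) - estimator.predict(backward)) / (2 * step)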

Fixes #115

Reminders:

  • Run make format and make check to make sure the code follows the style guide.
  • Add tests for new features or tests that would have caught the bug that you're fixing.
  • Add new public functions/methods/classes to doc/api/index.rst and the base __init__.py file for the package.
  • Write detailed docstrings for all functions/classes/methods. It often helps to design better code if you write the docstrings first.
  • If adding new functionality, add an example to the docstring, gallery, and/or tutorials.
  • Add your full name, affiliation, and ORCID (optional) to the AUTHORS.md file (if you haven't already) in case you'd like to be listed as an author on the Zenodo archive of the next release.

@leouieda (Member, Author)

@santisoler @jessepisel (and anyone else, really) this still needs tests and documentation but I'd love some feedback on the API before committing to it.

This is how one would use verde.Gradient to predict derivatives of scattered data:

import matplotlib.pyplot as plt
import numpy as np
import verde as vd

# Make synthetic data
data = vd.datasets.CheckerBoard(
    region=(0, 5000, 0, 2500), w_east=5000, w_north=2500
).scatter(size=700, random_state=0)

# Fit a spline to the data
spline = vd.Spline().fit((data.easting, data.northing), data.scalars)

# Predict finite-difference derivatives in East and North directions on regular grids
east_deriv = vd.Gradient(spline, step=10, direction=(1, 0)).grid(spacing=50)
north_deriv = vd.Gradient(spline, step=10, direction=(0, 1)).grid(spacing=50)

# Plot the original data and derivatives
fig, (ax1, ax2, ax3) = plt.subplots(3, 1, figsize=(9, 10), sharex=True)
ax1.set_title("Original data")
tmp = ax1.scatter(data.easting, data.northing, c=data.scalars, cmap="RdBu_r")
plt.colorbar(tmp, ax=ax1, label="data unit")
ax1.set_ylabel("northing")
east_deriv.scalars.plot.pcolormesh(ax=ax2, cbar_kwargs=dict(label="data unit / m"))
ax2.set_title("East derivative")
ax2.set_xlabel("")
north_deriv.scalars.plot.pcolormesh(ax=ax3, cbar_kwargs=dict(label="data unit / m"))
ax3.set_title("North derivative")
fig.tight_layout()
plt.show()

[Figure: scatter of the original checkerboard data above grids of the East and North derivatives]

The implementation is actually quite straightforward and much easier than what was initially proposed in #115. That proposal was to add a `predict_derivative` method, which would require either adding arguments to `BaseGridder.grid` and others or creating copies of them for the derivatives. This implementation requires none of that and seems to be really stable (I'm actually surprised by the results above). It also generalizes to N dimensions, depending on how many coordinates are given and the length of `direction`.

@jessepisel (Contributor)

I think the API is pretty straightforward for the class. As you state, it should generalize to N dimensions pretty easily. The only thing that might be confusing is the direction tuple, but normalizing it like you did should deal with non-orthogonal directions. The example is succinct and makes sense from a code and visual standpoint as well. Nice work!

@leouieda (Member, Author)

> The only thing that might be confusing is the direction tuple, but normalizing it like you did should deal with non-orthogonal directions

Yeah, I thought about that for a while. It's much easier to think in terms of azimuth but it doesn't generalize to higher dimensions. I also thought of asking for normalized direction vectors but it seemed like an easy source of errors (forgetting to normalize). In the end, I can easily do the normalization internally and just ask for a vector pointing somewhere.

The nice thing about this is that it should work with anything following the Verde API, so you could `vd.Chain` a `BlockReduce` + `Trend` + `Spline` and calculate the gradient of that whole thing. In principle at least.
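In principle, something like this (an untested sketch; the spacing and degree values are placeholders):

import numpy as np
import verde as vd

# Fit the whole pipeline once, then differentiate the fitted chain
chain = vd.Chain([
    ("reduce", vd.BlockReduce(np.mean, spacing=100)),
    ("trend", vd.Trend(degree=1)),
    ("spline", vd.Spline()),
])
chain.fit((data.easting, data.northing), data.scalars)
east_deriv = vd.Gradient(chain, step=10, direction=(1, 0)).grid(spacing=50)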

@santisoler (Member)

I also like how this new class can integrate into the existing API. Great design @leouieda!

I'm a little bit dubious about the need for the predictor. I see the point of having the spline when you start with a scatter of points (gridding should be done before computing derivatives). But what happens if you already have your grid? How could this class be used in that case?

I'm thinking that for very computationally demanding predictions (a big EQL, for example), it's better to grid the data only once, save the grid, and then process it afterwards.

@leouieda (Member, Author)

> I'm a little bit dubious about the need for the predictor. I see the point of having the spline when you start with a scatter of points (gridding should be done before computing derivatives).

In this case, the gridding is not even necessary: the spline is used to calculate the derivatives directly. Most cases we deal with in Verde involve turning scatters into grids, so if you want a grid of the values and a grid of the derivatives, you can make both from the original data.

> But what happens if you already have your grid? How could this class be used in that case?

Then this class is kind of useless. If you start off with a grid, then Verde isn't really that useful: most of what we have assumes scatters or works better with them (`BlockReduce`, `Trend`, all of that). So if you have a grid and want the derivatives, you can do the finite-differences yourself (it's not particularly hard).
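For example, with `numpy.gradient` (a rough sketch, assuming an `xarray.Dataset` like the ones Verde produces, with a `scalars` variable on regular `northing`/`easting` coordinates):

import numpy as np

# Spacing along each dimension, taken from the grid coordinates
spacing_north = float(grid.northing[1] - grid.northing[0])
spacing_east = float(grid.easting[1] - grid.easting[0])
# numpy.gradient returns one derivative per dimension (rows first, then columns)
north_deriv, east_deriv = np.gradient(grid.scalars.values, spacing_north, spacing_east)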

> I'm thinking that for very computationally demanding predictions (a big EQL, for example), it's better to grid the data only once, save the grid, and then process it afterwards.

That might be the case, but prediction is usually faster than fitting. Still, this class doesn't cover every use case, and it might be better to generate one grid and do the finite-differences on it. We can add a function for that or leave it up to the user.

The main advantage of the class is that you can do finite-differences with a step that's smaller than the grid spacing and you don't have to deal with the edges. But it inherits all the limitations of our Green's-functions-based methods.

@leouieda (Member, Author)

Also, for some applications you might want the derivatives at arbitrary points, in which case this is the only way. For example, in Euler Deconvolution we need the data and derivatives at the same points and don't really require grids. So it would be best to predict derivatives on the observation points and not rely on gridded products.
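Concretely, something like this (a sketch, reusing `spline` and `data` from the example above):

import verde as vd

# Derivative at the original observation points instead of on a grid
east_deriv = vd.Gradient(spline, step=10, direction=(1, 0)).predict(
    (data.easting, data.northing)
)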

@ahartikainen

Is there a way to select a good step-size for the gradient?

I just wonder how this will behave in a couple of cases:

  • hard boundaries
  • high curvature
  • high curvature with a kink
  • saddle point / flat surface --> gradient == 0

@leouieda (Member, Author)

> Is there a way to select a good step-size for the gradient?

Very good question. I wondered that myself when trying to set a default for the argument. I didn't find anything readily available in the literature, but I also didn't dig very deep. If you know of something, I'd be happy to try to implement it.
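Not from the literature, but one generic numerics heuristic would be to halve the step until the predictions converge. A hypothetical sketch (`spline` and `coordinates` as in the example above; the starting step and tolerance are arbitrary):

import numpy as np
import verde as vd

step = 100.0
previous = vd.Gradient(spline, step=step, direction=(1, 0)).predict(coordinates)
while step > 1.0:
    step /= 2
    current = vd.Gradient(spline, step=step, direction=(1, 0)).predict(coordinates)
    if np.allclose(current, previous, rtol=1e-3):
        break  # derivative stopped changing within tolerance
    previous = current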

> hard boundaries

Those would probably not be recovered, though they shouldn't cause a problem. The derivative is coming from the spline, which is smooth by definition. I'll try to make a test case to see what happens if I shift half the data by a step function.

> high curvature
> high curvature with a kink

It might be unstable, as any finite-difference would be, but I'm not sure what we can do about it. If this design is what we decide to include, then I'll make a tutorial exploring those cases to see how it behaves.

> saddle point / flat surface --> gradient == 0

These should be fine unless there is a lot of noise in the data. But even then, that can be overcome by strongly damping the spline (which is one of the great things about this method).
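For instance (a sketch; the damping value is arbitrary, just to illustrate):

import verde as vd

# A strongly damped spline smooths out noise before differentiation
damped = vd.Spline(damping=1e-4).fit((data.easting, data.northing), data.scalars)
deriv = vd.Gradient(damped, step=10, direction=(1, 0)).grid(spacing=50)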

@leouieda (Member, Author)

Things to do before finishing this PR:

  • Fill in docstrings of the new class
  • Add an example with real data (strain from the GPS data?)
  • Test passing a Vector to Gradient (already did the other way around)
  • Add tutorial for gradient calculations exploring some of the questions raised above

@leouieda mentioned this pull request on Apr 6, 2020
@leouieda (Member, Author)

Now that I'm using this a bit I realise a few things:

  • Right now, we have to fit the estimator first and then make the derivatives to avoid fitting repeatedly.
  • Not fitting the Gradient feels strange.
  • We usually want more than 1 derivative, for example when doing anything in gravity and magnetics or calculating divergence.

A possibly better implementation would be:

  • Make the current Gradient class a Derivative instead (which is what it really is)
  • Make a Gradient class that is actually the gradient vector. It can take an estimator and fit it only once. Then use the Derivative to calculate the 2 or more derivatives for the gradient. It would subclass Vector probably.

@leouieda (Member, Author)

Actually, forget that last message. That makes things way more complicated. Though I still want to rename to Derivative since it's not a gradient.

@mtb-za commented Jun 19, 2020

Taking a look at using this, it seems fairly intuitive to use `vd.derivative.Derivative`, and it looks like it closely matches the output I get from a different method (although I was working with gridded data there, and here with reduced data). Putting it into a chain does not complain, but might not really be what is wanted if one needs both easting and northing derivatives?

That is:

processing_chain = vd.Chain(
    [
        ("reduce", vd.BlockReduce(np.mean, spacing=the_spacing)),
        ("easting_derivative", vd.derivative.Derivative(processing_chain, step=10, direction=(1, 0))),
        ("northing_derivative", vd.derivative.Derivative(processing_chain, step=10, direction=(0, 1))),
    ]
)

will not get you a northing derivative and an easting derivative, but the northing derivative of the easting derivative of the block-reduced data (at least as I understand the `Chain` function). It might be worth flagging that using it in a `vd.Chain` is a possible source of unintended error?

How would one get a vertical derivative? My instinct was to pass (0, 0, 1), but this fails because it has the wrong number of dimensions. This might be because the Chain only has two dimensions? How should this case be handled, since I suspect it might be fairly commonly desired.

@leouieda changed the title from "Add a Gradient metagridder to predict derivatives" to "Add a Derivative metagridder to predict derivatives" on Oct 6, 2020
@leouieda (Member, Author) commented Oct 6, 2020

Hi @mtb-za, trying to revive this after a while. Thanks for your comments!

> Putting it into a chain does not complain, but might not really be what is wanted if one needs both easting and northing derivatives?

The Chain is for fitting things successively on each element's output. So if you want to calculate the x-derivative of the y-derivative you could:

processing_chain = vd.Chain(
    [
        ("reduce", vd.BlockReduce(np.mean, spacing=the_spacing)),
        ("easting_derivative", vd.Derivative(vd.Spline(), step=10, direction=(1, 0))),
        ("northing_derivative", vd.Derivative(vd.Spline(), step=10, direction=(0, 1))),
    ]
)

Passing in the processing_chain to the elements of the processing_chain would cause crazy things to happen 🙂

If you want to calculate both derivatives, you can use `Vector` instead:

processing_chain = vd.Chain(
    [
        ("reduce", vd.BlockReduce(np.mean, spacing=the_spacing)),
        ("spline", vd.Spline(damping=1e-6, mindist=1e3)),
    ]
)
processing_chain.fit(coordinates, data, weights)

gradient = vd.Vector([
    vd.Derivative(processing_chain, step=10, direction=(1, 0)),
    vd.Derivative(processing_chain, step=10, direction=(0, 1)),
])

grid_derivatives = gradient.grid(region=[....], spacing=....)

@leouieda (Member, Author) commented Oct 6, 2020

> How would one get a vertical derivative?

Vertical derivatives are only possible for potential fields, since they rely on Laplace's equation, so for Verde it's impossible. But you should be able to do it in Harmonica with `EQLHarmonic` instead of `Spline` (which I should test before merging this, actually).
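An untested sketch of what that might look like (assuming Harmonica's `EQLHarmonic` with Verde-compatible `fit`/`predict`, and 3D coordinate arrays `easting`, `northing`, `upward`; all names and parameter values are illustrative):

import harmonica as hm
import verde as vd

# Equivalent-layer model: harmonic, so a vertical derivative makes physical sense
eql = hm.EQLHarmonic().fit((easting, northing, upward), gravity_data)
# With three coordinates, a third direction component selects the vertical
vertical_deriv = vd.Derivative(eql, step=10, direction=(0, 0, 1)).predict(
    (easting, northing, upward)
)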

@leouieda marked this pull request as draft on March 17, 2021
@leouieda (Member, Author)

Calculating strain rate from GPS data would be a great example of this in action. See equations and results in https://nhess.copernicus.org/articles/9/1177/2009/nhess-9-1177-2009.pdf

The main problem will be passing `VectorSpline2D` to this class. I just have to make sure it works for calculating the derivative of each vector component.
