version v0.7.0 roadmap #195

Open
wsavran opened this issue Aug 2, 2022 · 1 comment
wsavran commented Aug 2, 2022

  1. HDF5 support for catalogs
  2. implement sampling for catalog-based tests @Serra314
@Serra314

We discovered that a difference between the number of events in the simulated catalogues composing a forecast and the number of observed events can cause problems for the M-test.

The main reason is that we estimate the score probability distribution by calculating the score between the union of the simulated catalogues and each individual simulated catalogue. This means that if the simulated catalogues contain an average of N events, we are estimating the score distribution for samples of length N. If the number of observations differs from N, then the score between the union of the simulated catalogues and the observed events comes from a different distribution than the one we have estimated using the forecast. This leads to γ values that, instead of being uniformly distributed, are heavily concentrated around 1 or 0 (depending on whether we are overestimating or underestimating the number of events).
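To make the issue concrete, here is a rough, self-contained sketch (not the pyCSEP implementation): `gr_magnitudes`, `magnitude_score`, the per-event log-score, the bin edges, and the number of simulated catalogues are all placeholders chosen for illustration, but any score that sums over events behaves the same way.

```python
import numpy as np

rng = np.random.default_rng(42)

def gr_magnitudes(n, b=1.0, m_min=4.0):
    # GR magnitudes: exponential above m_min with rate b * ln(10).
    return m_min + rng.exponential(scale=1.0 / (b * np.log(10)), size=n)

def magnitude_score(sample_mags, union_mags, bins):
    # Placeholder score: joint log-score of a sample under the binned magnitude
    # PMF of the union catalogue (any score that sums over events shows the
    # same dependence on sample size).
    counts, _ = np.histogram(union_mags, bins=bins)
    pmf = (counts + 1e-10) / (counts + 1e-10).sum()
    idx = np.clip(np.digitize(sample_mags, bins) - 1, 0, len(pmf) - 1)
    return -np.log(pmf[idx]).sum()

bins = np.arange(4.0, 9.05, 0.1)

# Forecast: 500 simulated catalogues with ~1000 events each; observation: 100 events.
sims = [gr_magnitudes(rng.poisson(1000)) for _ in range(500)]
union = np.concatenate(sims)
obs = gr_magnitudes(100)

# Current approach: null distribution of the score built from the simulated catalogues.
null_scores = np.array([magnitude_score(s, union, bins) for s in sims])
obs_score = magnitude_score(obs, union, bins)

# gamma as the fraction of null scores at or below the observed score (one common
# convention).  The observed score sums over 100 events, the null scores over
# ~1000, so gamma collapses towards 0 even though both samples share the same
# magnitude distribution.
gamma = np.mean(null_scores <= obs_score)
print(gamma)
```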

One way to solve this is to estimate the score probability distribution using samples drawn from the union of the simulated catalogues, in a bootstrap fashion, instead of using the simulated catalogues themselves. Sampling from the union of the forecast a number of events equal to the number observed, and calculating the score between each bootstrap sample and the union of the simulated catalogues, yields a sample of score values under the null hypothesis that the forecast and the observations come from the same distribution. The γ values obtained in this way are correct, and the approach can be applied to many different scores.
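Continuing the sketch above (same placeholder score and helper names), the bootstrap construction of the null distribution could look roughly like this:

```python
# Continuing from the sketch above: build the null distribution by resampling,
# with replacement, n_obs events at a time from the union of the simulated
# catalogues, so that the null scores and the observed score are sums over the
# same number of events.
n_obs = len(obs)
n_boot = 1000

boot_scores = np.array([
    magnitude_score(rng.choice(union, size=n_obs, replace=True), union, bins)
    for _ in range(n_boot)
])

# gamma is now approximately uniform when the forecast and the observation come
# from the same magnitude distribution.
gamma_boot = np.mean(boot_scores <= obs_score)
print(gamma_boot)
```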

Below is an example in which the simulated magnitudes and the observed ones both come from a GR law with a b-value equal to 1. I used an observed sample with 100 events, while each simulated catalogue has a number of events drawn from a Poisson distribution with mean 1000. I calculated the γ values for 1000 different observed samples against the same forecast. They correctly look uniform in all cases.

[figure: b1]
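A rough way to reproduce this check with the placeholder score from the sketches above is to compute γ for many independent observed samples against the same bootstrap null distribution and inspect the result for uniformity:

```python
# Continuing from the sketches above: 1000 independent observed samples of 100
# events, all drawn from a GR law with b-value 1, scored against the same
# forecast and the same bootstrap null distribution.
gammas = np.array([
    np.mean(boot_scores <= magnitude_score(gr_magnitudes(100), union, bins))
    for _ in range(1000)
])

# With matching b-values the gamma values should look roughly uniform on [0, 1];
# a histogram (or a KS test against the uniform distribution) is a quick check.
print(np.histogram(gammas, bins=10, range=(0.0, 1.0))[0])
```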

The plots below show instead the cases where the forecasts come from a GR law with a different b-value, while the observations still come from a GR law with a b-value equal to 1. We can see how the distribution of γ values departs from uniformity. The faster it departs from uniformity as the b-value changes, the more sensitive the score is to inconsistencies in the magnitude distribution.

[figure: Rplot]
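The sensitivity experiment could be sketched along the same lines, regenerating the forecast with a shifted b-value while keeping the observations at b = 1 (again only an illustration built on the made-up helpers above):

```python
# Continuing from the sketches above: forecasts generated with a shifted b-value,
# observations still drawn with b = 1.
for b_forecast in (0.9, 1.1, 1.3):
    sims_b = [gr_magnitudes(rng.poisson(1000), b=b_forecast) for _ in range(500)]
    union_b = np.concatenate(sims_b)
    boot_b = np.array([
        magnitude_score(rng.choice(union_b, size=100, replace=True), union_b, bins)
        for _ in range(1000)
    ])
    gammas_b = np.array([
        np.mean(boot_b <= magnitude_score(gr_magnitudes(100, b=1.0), union_b, bins))
        for _ in range(1000)
    ])
    # The faster gamma piles up near 0 or 1 as b_forecast moves away from 1, the
    # more sensitive the score is to the mismatch in the magnitude distribution.
    print(b_forecast, np.histogram(gammas_b, bins=10, range=(0.0, 1.0))[0])
```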
