task1

Deadline: October 31, 23:59.

Save your notebook as task1/SurnameTask1.ipynb

IMPORTANT: the code must not be written in PyTorch/TensorFlow. For deep learning use JAX.

  1. [Reporter: Parviz Karimov] Compare the Kronecker-factored Laplace approximation and the standard Laplace approximation (in 3 variants: full covariance, diagonal covariance, scalar covariance) for logistic regression on a synthetic dataset whose parameters are drawn from a Gaussian distribution with a full-rank covariance matrix. The hyperparameters must be optimized. Estimate: model performance, approximation quality, quality of covariance recovery, and speed of the method for different dataset sizes and dimensionalities. (A minimal Laplace-approximation sketch in JAX is given after the task list.)
  2. [Reporter: Kseniia Petrushina] Compare ELBO and standard Laplace approximation methods (in 3 variants: full covariance, diagonal covariance, scalar covariance) for linear regression on a synthetic dataset whose parameters are drawn from a Gaussian distribution with a full-rank covariance matrix. Estimate the quality of approximation, the speed of approximation, and the KL divergence between the approximating distribution and the posterior distribution (a closed-form Gaussian KL sketch is given after the task list).
  3. [Reporter: Vladimirov Eduard] Analyze the naive method of hyperparameter optimization from Graves (2011). Adapt it to full and diagonal covariance matrices. Estimate the covariance estimation quality on datasets generated with different covariance matrix types.
  4. [Reporter: Boeva Galina] Run ELBO optimization with different parameter sampling strategies for a neural network (an MLP with at least 3 layers). Dataset: MNIST (or a more complex dataset). Parameter sampling strategies (see the reparametrization sketch after the task list):
    • One sample per batch (see Graves, 2011)
    • One sample per batch element (naively)
    • One sample per batch element using local reparametrization
  5. [Reporter: Dmitry Protasov] Compare approximations of a Gaussian distribution and of a Gaussian mixture obtained with SGD, SGLD, and Stein variational gradient descent (a minimal SVGD sketch is given after the task list). For each method, optimize the hyperparameters and show how the quality depends on the iteration number.
  6. [Reporter: Marat Khusainov] Compute and compare the ELBO for a model whose parameters are distributed according to a Gaussian mixture. The ELBO must be computed both with the simple estimator (Graves, 2011) and with the implicit reparametrization trick.

  7. [Reporter: Gavrilyuk Alexander] Reproduce Figure 28.6 from MacKay's book. Dataset: an sklearn dataset. Prior: a normal distribution with scalar covariance. Models: several linear regression models:

    1. with optimal hyperparameters
    2. with a lowered variance of the prior
    3. with a biased mean of the prior
  8. [Reporter: Tyurikov Maksim] Compare the approximations obtained with the Laplace approximation, variational inference, and expectation propagation for:
  • a Gaussian mixture (visualize for different mixture parameters)
  • a Beta distribution with α < 1 and β < 1 (visualize for different parameters)
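
For the Laplace-approximation tasks (1 and 2), the following is a minimal sketch, not a reference implementation: it assumes a small synthetic logistic-regression dataset with labels in {-1, +1}, a fixed prior precision `alpha`, and plain gradient descent for the MAP estimate; all function names and shapes are illustrative.

```python
import jax
import jax.numpy as jnp

def neg_log_posterior(w, X, y, alpha=1.0):
    # Negative log-posterior: logistic loss (labels in {-1, +1}) plus a Gaussian prior
    # with precision alpha.
    logits = X @ w
    nll = jnp.sum(jnp.logaddexp(0.0, -y * logits))
    return nll + 0.5 * alpha * jnp.sum(w ** 2)

def laplace_fit(X, y, alpha=1.0, steps=500, lr=0.1):
    # Find the MAP estimate with plain gradient descent, then take the Hessian at the mode
    # as the precision of the Gaussian approximation.
    w = jnp.zeros(X.shape[1])
    grad_fn = jax.jit(jax.grad(neg_log_posterior))
    for _ in range(steps):
        w = w - lr * grad_fn(w, X, y, alpha)
    H = jax.hessian(neg_log_posterior)(w, X, y, alpha)
    cov_full = jnp.linalg.inv(H)                                     # full covariance
    cov_diag = jnp.diag(1.0 / jnp.diag(H))                           # diagonal covariance
    cov_scalar = (H.shape[0] / jnp.trace(H)) * jnp.eye(H.shape[0])   # scalar (isotropic) covariance
    return w, cov_full, cov_diag, cov_scalar

# Usage on toy data (shapes are illustrative):
key_x, key_w, key_n = jax.random.split(jax.random.PRNGKey(0), 3)
X = jax.random.normal(key_x, (200, 5))
w_true = jax.random.normal(key_w, (5,))
y = jnp.sign(X @ w_true + 0.1 * jax.random.normal(key_n, (200,)))
w_map, cov_full, cov_diag, cov_scalar = laplace_fit(X, y)
```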
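
For scoring how well an approximate Gaussian posterior matches the exact one (task 2), the closed-form KL divergence between two multivariate Gaussians can be used; the helper below is a sketch with illustrative names.

```python
import jax.numpy as jnp

def kl_mvn(mu0, cov0, mu1, cov1):
    # KL( N(mu0, cov0) || N(mu1, cov1) ) in closed form.
    d = mu0.shape[0]
    cov1_inv = jnp.linalg.inv(cov1)
    diff = mu1 - mu0
    trace_term = jnp.trace(cov1_inv @ cov0)
    quad_term = diff @ cov1_inv @ diff
    logdet_term = jnp.linalg.slogdet(cov1)[1] - jnp.linalg.slogdet(cov0)[1]
    return 0.5 * (trace_term + quad_term - d + logdet_term)
```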
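
For task 4, the sketch below contrasts the one-sample-per-batch strategy with the local reparametrization trick for a single linear layer with a factorized Gaussian over the weights; shapes and function names are assumptions, and the KL penalty of the ELBO is included for completeness.

```python
import jax
import jax.numpy as jnp

def weight_sample_linear(key, X, w_mu, w_logvar):
    # Naive strategy: a single weight sample shared by the whole batch (Graves, 2011 style).
    eps = jax.random.normal(key, w_mu.shape)
    w = w_mu + eps * jnp.exp(0.5 * w_logvar)
    return X @ w

def local_reparam_linear(key, X, w_mu, w_logvar):
    # Local reparametrization: sample the pre-activations directly, so every batch element
    # effectively gets its own weight sample at the cost of one noise draw per activation.
    act_mu = X @ w_mu
    act_var = (X ** 2) @ jnp.exp(w_logvar)
    eps = jax.random.normal(key, act_mu.shape)
    return act_mu + eps * jnp.sqrt(act_var)

def kl_diag_gaussian(w_mu, w_logvar, prior_var=1.0):
    # KL( N(w_mu, diag(exp(w_logvar))) || N(0, prior_var * I) ), the penalty term of the ELBO.
    var = jnp.exp(w_logvar)
    return 0.5 * jnp.sum(var / prior_var + w_mu ** 2 / prior_var
                         - 1.0 - w_logvar + jnp.log(prior_var))
```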
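
For task 5, here is a minimal Stein variational gradient descent update on a 1D two-component Gaussian mixture, assuming an RBF kernel with a fixed bandwidth; the mixture parameters and step count are purely illustrative.

```python
import jax
import jax.numpy as jnp

def log_target(x):
    # Unnormalized log-density of a 1D two-component Gaussian mixture (illustrative parameters).
    return jnp.log(0.5 * jnp.exp(-0.5 * (x + 2.0) ** 2) + 0.5 * jnp.exp(-0.5 * (x - 2.0) ** 2))

def svgd_step(particles, lr=0.1, bandwidth=1.0):
    # One SVGD update: phi(x_i) = mean_j [ k(x_j, x_i) * grad log p(x_j) + grad_{x_j} k(x_j, x_i) ].
    diffs = particles[:, None] - particles[None, :]          # diffs[j, i] = x_j - x_i
    k = jnp.exp(-0.5 * diffs ** 2 / bandwidth ** 2)          # RBF kernel matrix
    grad_logp = jax.vmap(jax.grad(log_target))(particles)    # score of the target at each particle
    grad_k = -diffs / bandwidth ** 2 * k                     # derivative of k w.r.t. its first argument
    phi = (k @ grad_logp + jnp.sum(grad_k, axis=0)) / particles.shape[0]
    return particles + lr * phi

# Usage: start from a standard normal cloud and iterate.
particles = jax.random.normal(jax.random.PRNGKey(0), (100,))
for _ in range(500):
    particles = svgd_step(particles)
```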