
Releases: pavlin-policar/openTSNE

v1.0.1

29 Nov 14:50

Changes

  • setup.py maintenance (#249)
  • drop Python 3.6 support (#249)
  • correctly implement dof parameter in exact BH implementation (#246)
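
The dof parameter generalizes the heavy-tailed output kernel: with degrees of freedom α, similarities fall off as (1 + d²/α)^(-(α+1)/2), and α = 1 recovers the usual Cauchy kernel. A minimal numpy sketch of this kernel (the function name is illustrative, not openTSNE's internal API):

```python
import numpy as np

def t_kernel(sq_dists, dof=1.0):
    """Heavy-tailed output similarity generalized by the degrees-of-freedom
    parameter `dof`; dof=1 gives the standard t-SNE (Cauchy) kernel."""
    return (1.0 + sq_dists / dof) ** (-(dof + 1.0) / 2.0)

d2 = np.array([0.0, 1.0, 4.0])
standard = t_kernel(d2, dof=1.0)  # equivalent to 1 / (1 + d^2)
heavy = t_kernel(d2, dof=0.5)     # slower decay at large distances
```

Smaller dof values decay more slowly at large distances, which tends to pull cluster interiors tighter while pushing clusters further apart; #246 makes the exact Barnes-Hut implementation honor this parameter.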

v1.0.0

24 May 13:51

Given the longtime stability of openTSNE, it is only fitting that we release a v1.0.0.

Changes

  • Various documentation fixes involving initialization, momentum, and learning rate (#243)
  • Include Python 3.11 in the test and build matrix
  • Uniform affinity kernel now supports mean and max mode (#242)
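
The mean and max modes presumably control how the directed kNN affinities are symmetrized; the sketch below shows the two conventions in numpy (the actual parameter names and mechanics in #242 may differ):

```python
import numpy as np

# Directed kNN affinities are generally asymmetric: j may be among i's
# nearest neighbors without the reverse holding.
P = np.array([[0.0, 0.5, 0.5],
              [0.0, 0.0, 1.0],
              [0.5, 0.5, 0.0]])

P_mean = (P + P.T) / 2       # "mean": average the two directions
P_max = np.maximum(P, P.T)   # "max": keep an edge if either direction has it
```

With "max", a one-directional neighbor relation keeps its full weight instead of being halved.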

v0.7.1

20 Feb 15:45

Bug Fixes

  • (urgent) Fix memory error on data with duplicated rows (#236)

v0.7.0

15 Feb 15:07

Changes

  • By default, we now add jitter to non-random initialization schemes. This has almost no effect on the resulting visualizations, but helps avoid potential problems when points are initialized at identical positions (#225)
  • By default, the learning rate is now calculated as N/exaggeration. This speeds up convergence of the resulting embedding. Note that the learning rate during the EE phase will differ from the learning rate during the standard phase. Additionally, we set momentum=0.8 in both phases. Before, it was 0.5 during EE and 0.8 during the standard phase. This, again, speeds up convergence. (#220)
  • Add PrecomputedAffinities to wrap square affinity matrices (#217)
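
The new learning-rate and momentum defaults described above can be sketched in a few lines of plain Python (illustrative only; openTSNE computes these internally):

```python
def auto_learning_rate(n_samples, exaggeration):
    """Learning rate now scales with data set size and with the
    exaggeration factor of the current phase."""
    return n_samples / exaggeration

n = 10_000
lr_early = auto_learning_rate(n, exaggeration=12)    # early exaggeration phase
lr_standard = auto_learning_rate(n, exaggeration=1)  # standard phase
momentum = 0.8  # now used in both phases (previously 0.5 during EE)
```

Because the exaggeration factor differs between the two phases, so does the learning rate, which is exactly the behavior noted in #220.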

Build changes

  • Build universal2 macOS wheels, enabling ARM support (#226)

Bug Fixes

  • Fix BH collapse for smaller data sets (#235)
  • Fix updates in optimizer not being stored correctly between optimization calls (#229)
  • Fix inplace=True optimization changing the initializations themselves in some rare use-cases (#225)

As usual, a special thanks to @dkobak for helping with practically all of these bugs/changes.

v0.6.2

18 Mar 13:55

Changes

  • By default, we now use the MultiscaleMixture affinity model, enabling us to pass in a list of perplexities instead of a single perplexity value. This is fully backwards compatible.
  • Previously, perplexity values were adjusted to the dataset, e.g. passing perplexity=100 with N=150 would set TSNE.perplexity to 50. The value is now kept as given, and the corrected value is stored in a new effective_perplexity_ attribute (following the convention from scikit-learn).
  • Fix bug where interpolation grid was being prepared even when using BH optimization during transform.
  • Enable calling .transform with precomputed distances. In this case, the data matrix will be assumed to be a distance matrix.
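
The perplexity correction above can be sketched as follows (the exact rule openTSNE applies may differ slightly; this follows the N=150 example in the notes):

```python
def effective_perplexity(perplexity, n_samples):
    """Cap perplexity at roughly a third of the data set size, since each
    point needs about 3 * perplexity neighbors. The requested value stays
    on the estimator; the capped value goes into effective_perplexity_."""
    return min(perplexity, n_samples / 3)

capped = effective_perplexity(100, 150)   # 50.0, while .perplexity stays 100
unchanged = effective_perplexity(30, 10_000)  # 30, no correction needed
```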

Build changes

  • Build with oldest-supported-numpy
  • Build linux wheels on manylinux2014 instead of manylinux2010, following numpy's example
  • Build macOS wheels on the macos-10.15 Azure VM instead of macos-10.14
  • Fix a potential problem with clang-13, which (unlike earlier versions) actually performs optimizations that assume no infinities under the -ffast-math flag

v0.6.0

25 Apr 19:32

Changes:

  • Remove affinities from TSNE construction; custom affinities and initializations can now be passed to the .fit method. This improves the API when dealing with non-tabular data. This is not backwards compatible.
  • Add metric="precomputed". This includes the addition of openTSNE.nearest_neighbors.PrecomputedDistanceMatrix and openTSNE.nearest_neighbors.PrecomputedNeighbors.
  • Add knn_index parameter to openTSNE.affinity classes.
  • Add (less-than-ideal) workaround for pickling Annoy objects.
  • Extend the range of recommended FFTW boxes up to 1000.
  • Remove deprecated openTSNE.nearest_neighbors.BallTree.
  • Remove deprecated openTSNE.callbacks.ErrorLogger.
  • Remove deprecated TSNE.neighbors_method property.
  • Add and set as default negative_gradient_method="auto".
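
metric="precomputed" expects a square matrix of pairwise distances: symmetric, with zeros on the diagonal. A minimal numpy sketch of constructing one (the TSNE call in the final comment follows the release note above and is not executed here):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))

# Pairwise Euclidean distances via ||x - y||^2 = ||x||^2 - 2 x.y + ||y||^2,
# clamped at zero to guard against tiny negative values from rounding.
sq_norms = (X ** 2).sum(axis=1)
D = np.sqrt(np.maximum(sq_norms[:, None] - 2 * X @ X.T + sq_norms[None, :], 0))

# This matrix can then be passed as the data itself, e.g.:
#   TSNE(metric="precomputed").fit(D)
```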

v0.5.0

24 Dec 11:58

Main changes:

  • Build wheels for macOS target 10.6
  • Update to Annoy v1.17.0; this should result in much faster multi-threaded performance

v0.4.0

04 May 12:45

Major changes:

  • Remove the numba dependency and switch to Annoy for nearest-neighbor search. pynndescent is now optional and can be used if installed manually.
  • Massively speed-up transform by keeping reference interpolation grid fixed. Limit new points to circle centered around reference embedding.
  • Implement variable degrees of freedom.

Minor changes:

  • Add spectral initialization using diffusion maps.
  • Replace cumbersome ErrorLogger callback with the verbose flag.
  • Change the default number of iterations to 750.
  • Add learning_rate="auto" option.
  • Remove the min_grad_norm parameter.

Bugfixes:

  • Fix case where KL divergence was sometimes reported as NaN.

Replace FFTW with numpy's FFT

11 Sep 09:56

In order to make usage as simple as possible and remove the external dependency on FFTW (which previously had to be installed locally), this update replaces FFTW with numpy's FFT.