Release Note Finetuner 0.7.1

This release covers Finetuner version 0.7.1, including dependencies finetuner-api 0.5.0 and finetuner-core 0.12.6.

This release contains 2 new features, 3 refactorings, 3 bug fixes, and 4 documentation improvements.

🆕 Features

Support SphereFace Loss Functions (#664)

SphereFace loss functions were first formulated for computer vision tasks, specifically face recognition. Finetuner supports two variants of this loss function: ArcFaceLoss and CosFaceLoss. Instead of minimizing the distance between positive pairs and maximizing the distance between negative pairs, the SphereFace loss functions compare each sample with an estimate of the center point of each class's embeddings.
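For reference, ArcFaceLoss applies a scale s and an additive angular margin m to the angle θ_y between a sample's embedding and the estimated center of its class y. This sketch follows the original ArcFace formulation; the exact implementation in Finetuner may differ in detail:

$$
\mathcal{L}_{\mathrm{ArcFace}} = -\log\frac{e^{\,s\cos(\theta_y + m)}}{e^{\,s\cos(\theta_y + m)} + \sum_{j \neq y} e^{\,s\cos\theta_j}}
$$

CosFaceLoss applies the margin to the cosine itself, using s(cos θ_y − m) in place of s cos(θ_y + m).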

As with all supported loss functions, you can use them by passing their name to the loss parameter of the fit function.

run = finetuner.fit(
    ...,
    loss='ArcFaceLoss',
    ...
)

To track and refine our estimate of the class center points across batches, these SphereFace loss functions require an additional optimizer during training. By default, the type of optimizer used will be the same as the one used for the model itself, but you can also choose a different optimizer for your loss function using the loss_optimizer parameter.

run = finetuner.fit(
    ...,
    loss='ArcFaceLoss',
    loss_optimizer='Adam',
    loss_optimizer_options={'weight_decay': 0.01},
)

Support Continuing Training from an Artifact of a Previous Run (#668)

If you want to start fine-tuning from a model produced by a previous Run, or you have collected new training data and want to use it to continue training, this is now possible. To use this feature, pass the artifact id of the model you want to continue training from via the model_artifact parameter of the fit function:

train_data = 'path/to/another/data.csv'
new_run = finetuner.fit(
    model='efficientnet_b0',
    train_data=train_data,
    model_artifact=previous_run.artifact_id,
)
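If the previous Run object is no longer in memory, you can retrieve it by name first. Here is a minimal sketch, assuming the run was named 'my-previous-run' (a hypothetical name) and using the client's get_run function:

import finetuner

finetuner.login()
previous_run = finetuner.get_run('my-previous-run')  # hypothetical run name
new_run = finetuner.fit(
    model='efficientnet_b0',
    train_data='path/to/another/data.csv',
    model_artifact=previous_run.artifact_id,
)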

⚙ Refactoring

Removing ResNet-based CLIP Models (#662)

Due to low usage, we have removed the CLIP models that are based on ResNet.

Add the EfficientNet B7 Model (#662)

For image-to-image search, we now support EfficientNet B7 as a backbone model.
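You can select it like any other backbone; a minimal sketch, assuming the model name follows the same pattern as efficientnet_b0 used elsewhere in these notes:

run = finetuner.fit(
    model='efficientnet_b7',  # assumed name, following the efficientnet_b0 pattern
    train_data='path/to/data.csv',
)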

Increase Upload Size of CSV Files for cloud.jina.ai

For Web UI users, we have increased the maximum upload file size from 1MB to 32MB. Python client users have always been able to upload much larger datasets and are unaffected by this change.

🐞 Bug Fixes

Solve Dependency Problem in MLFlow Callback

A new SQLAlchemy release caused the MLFlow callback to behave incorrectly in some cases. This release fixes the problem.

Prevent Errors Caused by an Invalid num_items_per_class Parameter

Some loss functions do not use the num_items_per_class parameter. Previously, it was possible to set this parameter in a way that was incompatible with the rest of the configuration, causing Finetuner to fail even though the parameter had no effect. Now the parameter is only validated if it is actually used, and loss functions that do not use it ignore it completely.
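For example, a configuration like the following no longer raises an error. This is a minimal sketch, assuming ArcFaceLoss is among the losses that ignore the parameter, with an illustrative batch_size that num_items_per_class does not divide evenly:

run = finetuner.fit(
    ...,
    loss='ArcFaceLoss',      # compares samples to class centers, so class-balanced batches are not needed
    batch_size=128,
    num_items_per_class=3,   # incompatible with batch_size=128, but now ignored rather than failing
)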

Solve Problems with the Login Function in Jupyter Notebooks (#672)

Sometimes, when calling finetuner.login() in a Jupyter notebook, the login would appear successful even though Finetuner did not always behave correctly afterwards. Previously, users had to call finetuner.login(force=True) to be sure they were correctly logged in. This problem has been resolved, and finetuner.login() now works correctly without the force flag.
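In a notebook cell, a plain login is now sufficient:

import finetuner

finetuner.login()  # force=True is no longer needed in Jupyter notebooks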

📗 Documentation Improvements

Add a Documentation Page for Loss Functions and Pooling (#664)

We added a new page to our documentation that explains several loss functions and the pooling options in more detail.

Add a Section about Finetuner Articles (#669)

We added a list of articles that make use of Finetuner to our README, providing more insight into using Finetuner in practice.

Add a Folder for Example CSV files (#663)

If you need example training datasets that have already been prepared for use in Finetuner, you can look at the dataset folder in our repository.

Proofread the Documentation to Fix Typos and Broken Links (#661, #666)

We repaired broken links and fixed typos found in the Finetuner documentation.

🤟 Contributors

We would like to thank all contributors to this release: