Is your feature request related to a problem? Please describe.
Our computer vision solution is missing an important accuracy-improving feature: a fine-tuning round after the initial transfer-learning training.
Describe the use case
In the past, I've followed this TensorFlow tutorial to train an image classifier. It is a transfer learning technique with two training rounds:
1. Training: add a classification head to a pretrained base model. The base model is frozen and only the classification layers are trainable. This is what we have today.
2. Fine-tuning: unfreeze the later layers of the pretrained base model and continue training. The number of layers to unfreeze depends on the task similarity and dataset size relative to the pretrained encoder.

The later layers encode high-level features of the image, while the early ones encode low-level features (corners, edges, colors, gradients, etc.). It's not a good idea to fine-tune the early layers because the model will overfit those generic features.
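To make the two rounds concrete, here is a minimal Keras sketch in the spirit of that tutorial. The base architecture (MobileNetV2), the layer cutoff, the learning rates, and the placeholder datasets are illustrative assumptions on my part, not something this request prescribes:

```python
import tensorflow as tf

NUM_CLASSES = 5          # hypothetical; depends on the dataset
IMG_SHAPE = (160, 160, 3)

# Placeholder data so the sketch runs end to end; use a real dataset instead.
train_ds = val_ds = tf.data.Dataset.from_tensor_slices(
    (tf.zeros((8, 160, 160, 3)), tf.zeros((8,), dtype=tf.int32))
).batch(4)

# Round 1: feature extraction. The pretrained base is frozen, so only the
# new classification head learns.
base_model = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SHAPE, include_top=False, weights="imagenet")
base_model.trainable = False

model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES),
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(train_ds, validation_data=val_ds, epochs=3)

# Round 2: fine-tuning. Unfreeze the later layers of the base model and keep
# training with a much lower learning rate so the pretrained weights are
# only nudged, not destroyed.
base_model.trainable = True
fine_tune_at = 100       # illustrative cutoff; everything below stays frozen
for layer in base_model.layers[:fine_tune_at]:
    layer.trainable = False

# Recompile so the new trainable flags take effect.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(train_ds, validation_data=val_ds, epochs=3)
```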
Describe the solution you'd like
There are two ways to implement this:
Both require a way to choose which layers to freeze. There are two approaches to freezing layers:
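For illustration, a freezing policy could be expressed by layer index (unfreeze the last n layers) or by layer name pattern. The helpers below are hypothetical sketches of those two options, not a proposed API:

```python
import re
import tensorflow as tf

def unfreeze_last_n(base_model: tf.keras.Model, n: int) -> None:
    """Unfreeze only the last `n` layers (assumes 1 <= n <= len(base_model.layers))."""
    base_model.trainable = True   # mark everything trainable...
    cutoff = len(base_model.layers) - n
    for layer in base_model.layers[:cutoff]:
        layer.trainable = False   # ...then re-freeze the early layers

def unfreeze_matching(base_model: tf.keras.Model, pattern: str) -> None:
    """Unfreeze only layers whose names match a regex, e.g. r"block_1[3-6]" for MobileNetV2."""
    base_model.trainable = True
    for layer in base_model.layers:
        layer.trainable = re.search(pattern, layer.name) is not None
```

Either way, the model has to be recompiled after the trainable flags change for the new freezing to take effect.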
Describe alternatives you've considered
The alternative is to keep things as they are and rely on users to implement their own 2-round transfer-learning job.