flatnet

Results of experiments with entropy regularization to reduce false positive classifications, using a toy CNN classifier for pixelated MNIST images.

Entropy regularization for a multi-class image classification problem is given by

$$L(\mathbf{i}, \mathbf{l}) = L_{\iota}(\mathbf{i}, \mathbf{l}) + \kappa \sum_{k} \sum_{j} p(l_j; i_k, \mathbf{l}) \log p(l_j; i_k, \mathbf{l})$$

where $L_{\iota}(\mathbf{i}, \mathbf{l})$ is the original loss of the model given a batch of input images $\mathbf{i}$ and labels $\mathbf{l}$, $\kappa$ is the regularization strength, and $p(l_j; i_k, \mathbf{l})$ is the probability of the $j$-th label given the $k$-th image.
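The second term is the negative entropy of the predicted distribution summed over the batch, so minimizing the regularized loss pushes the model towards flatter, less over-confident outputs. A minimal PyTorch sketch of this loss follows; the function name, the default value of $\kappa$, and averaging the entropy term over the batch are assumptions for illustration, not the repository's exact implementation.

```python
import torch.nn.functional as F

def entropy_regularized_loss(logits, labels, kappa=0.1):
    """Cross-entropy loss plus an entropy (confidence) penalty.

    logits: (batch, num_labels) raw model outputs
    labels: (batch,)            integer class labels
    kappa:  regularization strength (illustrative default)
    """
    base_loss = F.cross_entropy(logits, labels)    # L_iota(i, l)
    log_probs = F.log_softmax(logits, dim=-1)      # log p(l_j; i_k, l)
    probs = log_probs.exp()                        # p(l_j; i_k, l)
    # sum_j p log p is the negative entropy; weighting it by kappa
    # penalizes peaked, over-confident output distributions.
    neg_entropy = (probs * log_probs).sum(dim=-1).mean()
    return base_loss + kappa * neg_entropy
```

With $\kappa = 0$ this reduces to the ordinary cross-entropy objective.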

The context for this work is the use of such models as a source of rewards for reinforcement learning. The original application was fine-tuning CLIP models so that they produce less noise and smoother semantic-entropy reward trajectories. The expectation is that this leads to denser rewards, reduced semantic bias on random images (fewer misclassifications, i.e. improved specificity), and possibly reduced class preference in CLIP's outputs.


This repository presents the results of training an MNIST classifier with this entropy regularization in tandem with an augmented training dataset, where the added samples are random images that should ideally be classified with equal probability across all labels; a sketch of this setup is given below.
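As an illustration of how such an augmented batch could be handled, the sketch below mixes random noise images into each batch and drives the model towards a uniform output on them via a KL term. The noise distribution, mixing ratio, and all names are assumptions; see the code for the actual setup.

```python
import torch
import torch.nn.functional as F

def augmented_batch_loss(model, images, labels, num_labels=10, noise_fraction=0.5):
    """Supervised loss on real MNIST images plus a uniform-target term
    on random noise images (all names and ratios here are illustrative)."""
    # Standard supervised term on the real images.
    loss = F.cross_entropy(model(images), labels)

    # Random images with no true class: target the uniform distribution,
    # so every label gets probability 1 / num_labels.
    n_noise = max(1, int(noise_fraction * images.size(0)))
    noise = torch.rand(n_noise, *images.shape[1:], device=images.device)
    noise_log_probs = F.log_softmax(model(noise), dim=-1)
    uniform = torch.full_like(noise_log_probs, 1.0 / num_labels)
    noise_loss = F.kl_div(noise_log_probs, uniform, reduction="batchmean")

    return loss + noise_loss
```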

The results can be visualized here.
Click here to see the model summary and comparisons.
For architecture details of flatnet, please see the code.
