
Add text style transfer (#166) #263

Open · wants to merge 21 commits into master

Conversation

swapnull7 (Collaborator) commented Dec 4, 2019

  • initial commit

  • bug fixes and adjusting conv inputs

  • separate forward function for Discriminator and Generator and disable Gen training for debugging (see the sketch after this list)

  • remove debugger statement

  • texar bug fix

  • detaching stuff before accumulating

  • refactor and add component as optional parameter

  • Add optimizer for and backprop against encoder

  • Add in README
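The bullets above mention separate forward paths for the discriminator and the generator, detaching tensors before accumulating, and a `component` parameter. Below is a minimal PyTorch sketch of that pattern; the class name, layer choices, and losses are illustrative assumptions, not the code in this PR.

```python
import torch
import torch.nn as nn


class StyleTransferModel(nn.Module):
    """Illustrative only: one model with separate forward paths for the
    discriminator and the generator, selected by `component`."""

    def __init__(self, vocab_size: int = 1000, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Embedding(vocab_size, hidden)
        self.generator = nn.Linear(hidden, vocab_size)
        self.discriminator = nn.Linear(hidden, 2)

    def forward_d(self, tokens: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Detach the encoder output so discriminator updates do not
        # backpropagate into the encoder/generator.
        feats = self.encoder(tokens).mean(dim=1).detach()
        logits = self.discriminator(feats)
        return nn.functional.cross_entropy(logits, labels)

    def forward_g(self, tokens: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(tokens).mean(dim=1)
        recon = self.generator(feats)
        # Toy reconstruction-style loss against the first token, purely for illustration.
        return nn.functional.cross_entropy(recon, tokens[:, 0])

    def forward(self, tokens, labels, component: str = "g"):
        # `component` selects which loss to compute, mirroring the
        # "component as optional parameter" refactor mentioned above.
        if component == "d":
            return self.forward_d(tokens, labels)
        return self.forward_g(tokens)
```

With this split, a discriminator optimizer can own only discriminator parameters and a generator optimizer the encoder/generator parameters, so each backward pass updates a single component.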

codecov bot commented Dec 5, 2019

Codecov Report

Merging #263 into master will decrease coverage by 0.04%.
The diff coverage is 42.10%.


@@            Coverage Diff             @@
##           master     #263      +/-   ##
==========================================
- Coverage   82.53%   82.48%   -0.05%     
==========================================
  Files         205      206       +1     
  Lines       15829    15848      +19     
==========================================
+ Hits        13064    13072       +8     
- Misses       2765     2776      +11     
Impacted Files                     Coverage Δ
texar/torch/utils/variables.py     38.88% <38.88%> (ø)
texar/torch/utils/__init__.py      100.00% <100.00%> (ø)

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 885255c...3776f4b.

* more fixes to eval mode

* create optimizers so that they can be saved (see the sketch after this list)

* fix typo

* linting issues

* add type annotation for encoder

* fix linting

* Isolate AE in training

* works after changing the learning rate

* remove debugger
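The "create optimizers so that they can be saved" commit points at a standard PyTorch checkpointing pattern: keep a handle on every optimizer so its state dict can be serialized next to the model weights. A minimal sketch, with module names, learning rates, and the file name assumed for illustration rather than taken from this PR:

```python
import torch
import torch.nn as nn

# Stand-ins for the generator/autoencoder and discriminator components.
generator = nn.Linear(16, 16)
discriminator = nn.Linear(16, 2)

optim_g = torch.optim.Adam(generator.parameters(), lr=5e-4)
optim_d = torch.optim.Adam(discriminator.parameters(), lr=5e-4)

# Save the optimizer state dicts alongside the model weights so that a
# resumed run keeps Adam's moment estimates.
torch.save({
    "generator": generator.state_dict(),
    "discriminator": discriminator.state_dict(),
    "optim_g": optim_g.state_dict(),
    "optim_d": optim_d.state_dict(),
}, "checkpoint.pt")

# Restore.
ckpt = torch.load("checkpoint.pt")
generator.load_state_dict(ckpt["generator"])
discriminator.load_state_dict(ckpt["discriminator"])
optim_g.load_state_dict(ckpt["optim_g"])
optim_d.load_state_dict(ckpt["optim_d"])
```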
swapnull7 marked this pull request as ready for review on December 8, 2019, 06:00.
swapnull7 self-assigned this on Dec 9, 2019.
gpengzhi (Collaborator) commented Dec 9, 2019

Please merge master into your branch to pass the codecov test.

huzecong (Collaborator) commented Dec 9, 2019

You might also want to change the PR title; it should reference #166 instead.

train_op_g.zero_grad()
step += 1

vals_d = model(batch, gamma_, lambda_g_, mode="train",
gpengzhi (Collaborator) commented Dec 9, 2019

Which dataset do you use here? train_g? It seems that train_d is used in texar-tf. Why does such a difference exist?

swapnull7 (Collaborator, Author) replied

For training, I use the same dataset because it is tricky to switch datasets between steps. Since the discriminator and generator are trained separately, I've found this doesn't really affect the results.

swapnull7 (Collaborator, Author) replied

Removing the 2 iterators and keeping just one train iterator.

Collaborator replied

If that is the case, we do not need to have train_d in the iterator (?)
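As discussed above, the example keeps a single train iterator and alternates discriminator and generator updates on the same batch. A minimal sketch of one such step, assuming a model with a `component` argument like the hypothetical sketch under the PR description; `train_op_g` follows the snippet above, while `train_op_d` is an assumed counterpart for the discriminator.

```python
def train_step(model, batch_tokens, batch_labels, train_op_d, train_op_g,
               gamma_, lambda_g_):
    """One alternating update on a single batch from the one train iterator.

    gamma_ and lambda_g_ are only passed through here for parity with the
    example's call signature; the toy model from the earlier sketch ignores them.
    """
    # Discriminator step: the encoder output is detached inside the model's
    # discriminator path, so only discriminator parameters receive gradients.
    train_op_d.zero_grad()
    loss_d = model(batch_tokens, batch_labels, component="d")
    loss_d.backward()
    train_op_d.step()

    # Generator / autoencoder step on the same batch.
    train_op_g.zero_grad()
    loss_g = model(batch_tokens, batch_labels, component="g")
    loss_g.backward()
    train_op_g.step()

    return loss_d.item(), loss_g.item()
```

Because each optimizer zeroes only its own gradients and the discriminator path detaches the shared encoder output, feeding both steps from the same iterator does not make the updates interfere, which matches the reasoning above.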

hparams=config.model['opt']
)

def _train_epoch(gamma_, lambda_g_, epoch, verbose=True):
Collaborator commented

In texar-tf, both train_d and train_g are used in the _train_epoch, but we only use train_g in this function. Did I miss something important?
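For context, the _train_epoch in the snippet above receives gamma_ and lambda_g_ per epoch. In the texar-tf example these are roughly set by an outer loop that first pretrains the autoencoder (lambda_g of 0) and then decays gamma; the sketch below illustrates that kind of schedule with assumed hyperparameter names and values, and a placeholder _train_epoch body, not the actual config or code in this PR.

```python
# Illustrative outer loop; hyperparameter names and values are assumptions.
pretrain_nepochs = 10        # autoencoder-only phase ("Isolate AE in training")
max_nepochs = 15
gamma, gamma_decay = 1.0, 0.5
lambda_g = 0.1


def _train_epoch(gamma_, lambda_g_, epoch, verbose=True):
    # Placeholder body; the real function iterates over the single train
    # iterator and calls the model in "train" mode with these values.
    if verbose:
        print(f"epoch {epoch}: gamma={gamma_:.3f}, lambda_g={lambda_g_}")


for epoch in range(1, max_nepochs + 1):
    if epoch <= pretrain_nepochs:
        gamma_, lambda_g_ = 1.0, 0.0          # pretrain the autoencoder only
    else:
        # Decay the softmax temperature and enable the generator loss weight.
        gamma_ = max(0.001, gamma * gamma_decay ** (epoch - pretrain_nepochs))
        lambda_g_ = lambda_g
    _train_epoch(gamma_, lambda_g_, epoch)
```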

swapnull7 changed the title from "Add text style transfer (#1)" to "Add text style transfer (#166)" on Dec 11, 2019.