
About freeze batch norm #15

Open
fyhfly opened this issue Jan 12, 2021 · 3 comments

Comments

@fyhfly

fyhfly commented Jan 12, 2021

Hi, sorry for bothering you.
I've been wondering for a long time why you froze batch norm when training GASDA with the pretrained F_s, F_t, and CycleGAN.
If batch norm is frozen, which parameters are optimized during training?

@sshan-zhao
Owner

> Hi, sorry for bothering you.
> I've been wondering for a long time why you froze batch norm when training GASDA with the pretrained F_s, F_t, and CycleGAN.
> If batch norm is frozen, which parameters are optimized during training?

  1. First, freezing BN reduces the required memory. Second, we use a small batch size when training GASDA. Third, when training GASDA, all parts have already been pre-trained, so the BN layers can be fixed.
  2. The parameters in the convolutional layers.
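
For anyone landing here later, a minimal PyTorch sketch of what freezing BN typically looks like (an illustration, not the repo's exact code; the helper name `freeze_bn` is hypothetical):

```python
import torch.nn as nn

def freeze_bn(model: nn.Module) -> None:
    """Freeze all BatchNorm layers: fix running stats and affine params."""
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            m.eval()                  # use the stored running mean/var
            for p in m.parameters():  # gamma (weight) and beta (bias)
                p.requires_grad = False
```

Note that a later call to `model.train()` flips BN back to training mode, so a freeze like this is typically reapplied after each `train()` call. The optimizer is then built over only the parameters that still require gradients, e.g. `filter(lambda p: p.requires_grad, model.parameters())`, i.e. the convolutional weights and biases from point 2.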

@fyhfly
Author

fyhfly commented Feb 3, 2021

> Hi, sorry for bothering you.
> I've been wondering for a long time why you froze batch norm when training GASDA with the pretrained F_s, F_t, and CycleGAN.
> If batch norm is frozen, which parameters are optimized during training?
>
>   1. First, freezing BN reduces the required memory. Second, we use a small batch size when training GASDA. Third, when training GASDA, all parts have already been pre-trained, so the BN layers can be fixed.
>   2. The parameters in the convolutional layers.

The parameters in convolutional layers are weights and biases. If BN is frozen, in my view, the weights and biases will not change any more, so the network's performance will not improve.
When running gasda_model.py, 2 task nets (depth generation), 2 trans nets (CycleGAN), and 4 discriminators are trained. You freeze BN in the 2 task nets; does that mean those 2 nets won't be optimized?
That's what I don't understand.
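
For readers with the same doubt: freezing BN does not stop gradients to the convolutional weights, so the task nets are still optimized. A hypothetical check (not from the repo) illustrates this:

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8), nn.ReLU())

# Freeze the BN layer: eval mode fixes the running stats, and
# requires_grad=False stops updates to its gamma/beta parameters.
net[1].eval()
for p in net[1].parameters():
    p.requires_grad = False

loss = net(torch.randn(2, 3, 16, 16)).sum()
loss.backward()

print(net[0].weight.grad is not None)  # True: conv weights still get gradients
print(net[1].weight.grad)              # None: frozen BN affine params do not
```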

@sshan-zhao
Owner

sshan-zhao commented Feb 3, 2021 via email
