
The training speed is so slow #4

Open
birham-red-bd opened this issue Sep 7, 2018 · 3 comments

@birham-red-bd

Thanks for sharing your code. I compiled this version of Caffe, but when I tried to train the model the training speed was very slow, even though I used two Titan X GPUs. I cannot figure out why. Could you help me out? Thanks very much.

@herr99441

I also encountered the same problem. Training is particularly slow. Have you solved this problem? @tf24-karatzhong

@peterpaniff

It contains DDB (depth-wise dense block) subnetworks. As in DenseNet, the large number of concatenation operations leads to heavy GPU memory usage, which slows down the training process.
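
To make the memory argument concrete, here is a hypothetical fragment of a DenseNet-style block written as Caffe prototxt. The layer names and parameters are made up for illustration and are not taken from this repository; the point is only that each Concat layer takes every earlier feature map as a bottom, so the concatenated blobs that must be kept around for the backward pass grow with the depth of the block.

```
# Illustrative sketch only: names and sizes are placeholders.
layer {
  name: "ddb1_conv1"
  type: "Convolution"
  bottom: "ddb1_input"
  top: "ddb1_conv1"
  convolution_param { num_output: 32 kernel_size: 3 pad: 1 }
}
layer {
  name: "ddb1_concat1"
  type: "Concat"
  bottom: "ddb1_input"      # earlier feature maps are reused as inputs
  bottom: "ddb1_conv1"
  top: "ddb1_concat1"
  concat_param { axis: 1 }  # concatenate along the channel axis
}
layer {
  name: "ddb1_conv2"
  type: "Convolution"
  bottom: "ddb1_concat1"
  top: "ddb1_conv2"
  convolution_param { num_output: 32 kernel_size: 3 pad: 1 }
}
layer {
  name: "ddb1_concat2"
  type: "Concat"
  bottom: "ddb1_concat1"    # the already-concatenated blob keeps growing
  bottom: "ddb1_conv2"
  top: "ddb1_concat2"
  concat_param { axis: 1 }
}
```

Every additional layer adds another, wider Concat blob, so activation memory grows roughly quadratically with the number of layers in a block, which is why dense blocks tend to be far more memory-hungry (and often slower to train) than plain sequential stacks.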

@QingxinWx

Maybe you can try setting "iter_size" to 1 or 2.
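
For reference, iter_size is a standard Caffe solver setting: gradients are accumulated over iter_size forward/backward passes before each weight update, so every reported iteration does iter_size times the work, and the effective batch size is batch_size (from the data layer) x iter_size x number of GPUs. Lowering it speeds up each iteration but also shrinks the effective batch size. A minimal sketch of where it lives in solver.prototxt follows; the paths and hyperparameter values are placeholders, not the settings shipped with this repository.

```
# Hypothetical solver.prototxt fragment -- values are placeholders.
net: "models/train.prototxt"
base_lr: 0.001
momentum: 0.9
weight_decay: 0.0005
lr_policy: "step"
gamma: 0.1
stepsize: 40000
max_iter: 120000
iter_size: 1        # accumulate gradients over 1 pass per iteration (the default)
snapshot: 10000
snapshot_prefix: "snapshots/model"
solver_mode: GPU
```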
