
how do you get pretrained model #19

Open
yuyang16101066 opened this issue May 27, 2022 · 5 comments

Comments

@yuyang16101066

Thanks for your great work. I would like to know how you obtained the pretrained models, e.g. video_swin_tiny_pretrained.pth. In my understanding, this is different from joint training with the Ref-COCO/+/g datasets.

@wjn922
Owner

wjn922 commented May 27, 2022

Hi, the pretrained model is indeed different from the joint training model. The pretrained models are trained only on the Ref-COCO/+/g datasets at the image level (setting num_frames=1).
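The image-level pretraining described above can be sketched as follows. `image_as_clip` is a hypothetical helper (not from the repo) illustrating how, with num_frames=1, a single image can pass through a video pipeline as a one-frame clip:

```python
import numpy as np

def image_as_clip(image: np.ndarray) -> np.ndarray:
    """Wrap a single image of shape (C, H, W) as a one-frame video clip
    of shape (T=1, C, H, W), so image datasets like Ref-COCO/+/g can be
    fed through a video model unchanged. Hypothetical helper for
    illustration only."""
    return image[np.newaxis, ...]  # prepend a temporal axis of length 1

img = np.zeros((3, 224, 224), dtype=np.float32)  # dummy (C, H, W) image
clip = image_as_clip(img)
print(clip.shape)  # (1, 3, 224, 224)
```

This mirrors the common trick of treating static images as one-frame videos so that image-level and video-level training share the same model code.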

@nero1342

Hi, did you use the pretrained models on the RefCOCO/+/g datasets for the joint model? And isn't there an imbalance when jointly training between 3M expressions from RefCOCO/+/g and 13k expressions from RefYTVOS?

@wjn922
Owner

wjn922 commented May 30, 2022

We do not use the pretrained model for joint training. We also do not adopt balanced sampling between RefCOCO/+/g and RefYTVOS, even though their scales are different.
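The unbalanced joint sampling described above can be sketched as follows (dataset names and sizes here are illustrative placeholders, not the repo's actual dataloader): drawing uniformly from a plain concatenation means each dataset contributes in proportion to its size, so the larger image dataset dominates.

```python
import random

def concat_sample(datasets, rng):
    """Draw one item uniformly from the simple concatenation of datasets,
    with no balancing: each dataset is hit in proportion to its length.
    Hypothetical sketch of an unbalanced joint dataloader."""
    sizes = [len(d) for d in datasets]
    idx = rng.randrange(sum(sizes))
    for d, n in zip(datasets, sizes):
        if idx < n:
            return d[idx]
        idx -= n

# Toy stand-ins for the two sources, scaled 300:13 like the thread's numbers
refcoco = [("refcoco", i) for i in range(300)]
refytvos = [("refytvos", i) for i in range(13)]

rng = random.Random(0)
draws = [concat_sample([refcoco, refytvos], rng)[0] for _ in range(1000)]
print(draws.count("refcoco") / len(draws))  # roughly 300/313, i.e. about 0.96
```

A balanced sampler would instead pick the dataset first (e.g. 50/50) and then an index within it; the answer above confirms the authors did not do this.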

@zhenghao977

@wjn922 Thanks for your great work! I would like to know the epochs / learning rate / lr_drop settings used for getting the pretrained model on Ref-COCO.

@wjn922
Owner

wjn922 commented Aug 3, 2022

@zhenghao977 We use 32 V100 GPUs for the pretrained models. Training runs for 12 epochs in total, with the learning rate dropping at the 8th and 10th epochs. The learning rate keeps the default setting, and the batch size is set to 2 per GPU. Please refer to #7 for the pretraining script.
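The schedule described above (12 epochs, drops at epochs 8 and 10, 32 GPUs × 2 samples = effective batch size 64) can be sketched as a step schedule. The base learning rate and decay factor below are illustrative placeholders, not the repo's confirmed defaults:

```python
def lr_at_epoch(epoch, base_lr=1e-4, milestones=(8, 10), gamma=0.1):
    """Step learning-rate schedule matching the described setup:
    12 total epochs, lr multiplied by gamma at each milestone epoch.
    base_lr and gamma are hypothetical values for illustration."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr

# Effective batch size from the thread: 32 GPUs, 2 samples per GPU
effective_batch = 32 * 2
print(effective_batch)  # 64

for e in range(12):
    print(e, lr_at_epoch(e))
```

This is the same shape of schedule that PyTorch's `torch.optim.lr_scheduler.MultiStepLR` implements with `milestones=[8, 10]`.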
