How do you get the pretrained model? #19
Comments
Hi, the pretrained model is indeed different from the joint training model. The pretrained models are trained only on the Ref-COCO/+/g datasets at the image level (setting num_frames=1).
Hi, did you use the pretrained models on the RefCOCO/+/g datasets for the joint model? And is training imbalanced when jointly training on 3M expressions from RefCOCO/+/g and 13k expressions from Ref-YTVOS?
We do not use the pretrained model for joint training. We do not adopt balanced sampling of RefCOCO/+/g and Ref-YTVOS, even though their scales are different.
@wjn922 Thanks for your great work! I would like to know the epochs / learning rate / lr_drop used to obtain the pretrained model on Ref-COCO.
@zhenghao977 We use 32 V100 GPUs for the pretrained models. The total number of epochs is 12, and the learning rate drops at the 8th and 10th epochs. The learning rate keeps the same as the default setting, and the batch size is set to 2 on each GPU. Please refer to #7 for the pretraining script.
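The recipe above (32 GPUs, batch size 2 per GPU, 12 epochs, lr drops at epochs 8 and 10) can be summarized in a small sketch. Note this is an illustrative assumption, not the repo's actual config keys or script: the dict names, the step-decay factor of 0.1, and the `base_lr` value are all hypothetical; the author only says the learning rate "keeps the same as the default setting".

```python
# Hypothetical summary of the pretraining recipe described in the thread.
# All names and the gamma=0.1 decay factor are illustrative assumptions.

pretrain_cfg = {
    "num_gpus": 32,          # V100s, as stated by the author
    "batch_per_gpu": 2,
    "epochs": 12,
    "lr_drop_epochs": [8, 10],
    "num_frames": 1,         # image-level pretraining on Ref-COCO/+/g
}

def effective_batch_size(cfg):
    """Global batch size across all GPUs."""
    return cfg["num_gpus"] * cfg["batch_per_gpu"]

def lr_at_epoch(epoch, base_lr, drops, gamma=0.1):
    """Step-decay schedule: multiply lr by gamma at each passed drop epoch."""
    lr = base_lr
    for d in drops:
        if epoch >= d:
            lr *= gamma
    return lr

print(effective_batch_size(pretrain_cfg))  # 64
```

With these numbers the effective global batch size is 32 × 2 = 64, which is worth keeping in mind if you reproduce the pretraining on fewer GPUs and need to rescale the learning rate.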
Thanks for your great work. I would like to know how you get the pretrained model, e.g. video_swin_tiny_pretrained.pth. In my understanding, it is different from joint training with the Ref-COCO/+/g datasets.