Question about the uniform matcher #23
Hi. For the second question, you can refer to the answer here.
Thanks for your answer! I wonder whether it is suitable for lightweight models such as YOLOv4-tiny. In my experiments the results are not good: I simply changed the backbone to YOLOv4-tiny's, and the mAP only reached 13.8 with an input size of 320. Could you give any suggestions?
Hi, we have not trained tiny models before, but I am happy to help you get reasonable results. Could you provide more details about your modifications? The backbone file, pre-trained models, and config file would be helpful.
I simply changed the backbone following YOLOv4-tiny, and deleted the 512 anchor because of the limited image size. Also, the activation is replaced by LeakyReLU. The other settings are the same as CSPDarkNet53-DC5.
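As an aside, the anchor change described above can be illustrated with a small, hypothetical snippet. The list of scales mirrors Detectron2's `MODEL.ANCHOR_GENERATOR.SIZES` convention, but the variable names here are my own assumption, not the repo's actual config:

```python
# Hypothetical illustration of dropping the largest anchor scale when
# training at a small input size (e.g. 320): a 512-pixel anchor can
# never fit inside a 320x320 image, so it is filtered out.
anchor_sizes = [[32, 64, 128, 256, 512]]

small_input_anchor_sizes = [[s for s in scales if s <= 256]
                            for scales in anchor_sizes]
print(small_input_anchor_sizes)  # → [[32, 64, 128, 256]]
```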
Ok, I will try it.
Thanks for your reply! Another question: are multi-scale training and SWA included?
Multi-scale training is supported by Detectron2. You can refer to this repo for SWA. Results for multi-scale training and SWA are not included in this repo; you can try them yourself.
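For reference, "choice"-style multi-scale training (the scheme Detectron2 supports via its input configuration) samples one short-edge size per iteration from a fixed list and resizes the image so its shorter side matches it. The sketch below is a minimal, self-contained illustration assuming the 512-to-768 range mentioned later in this thread; the helper names are illustrative, not Detectron2's API:

```python
import random

# Short-edge sizes to sample from each training iteration
# (assumed range, matching the 512-768 sizes mentioned in the thread).
MIN_SIZE_TRAIN = (512, 544, 576, 608, 640, 672, 704, 736, 768)

def sample_train_size(rng=random):
    """Pick one short-edge size per iteration ('choice' sampling)."""
    return rng.choice(MIN_SIZE_TRAIN)

def resize_shorter_side(h, w, target):
    """Scale (h, w) so the shorter side equals `target`,
    keeping the aspect ratio."""
    scale = target / min(h, w)
    return round(h * scale), round(w * scale)

print(resize_shorter_side(480, 640, 320))  # → (320, 427)
```

Because the scale changes every iteration, the detector sees objects at many effective sizes during training, which is also why a model trained only on large scales degrades when tested at 320.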
Thanks a lot! I find that when I change the test image size from 608 to 320, the performance drops a lot: mAP drops from 43.2 to 34.5. The degradation is significant for small and medium objects (small-object mAP drops from 22.8 to 11.8; medium-object mAP drops from 47.2 to 36.4). Compared to YOLOv4 with an input size of 320, YOLOF's small-object detection is not satisfying. Are there any suggestions to improve it?
You may need to re-train YOLOF with small image sizes. The provided pre-trained model is trained with relatively large image sizes (from 512 to 768), which makes it unsuitable for testing at image size 320.
According to your code, the uniform matcher seems to calculate the L1 distance between pred_bbox/anchor and targets across all images in the batch, but I think it should be computed within a single image. Another question: I do not understand how the anchor indices and the pred-box indices are fused. Why simply add the two indices? https://github.com/megvii-model/YOLOF/blob/61a8accf957dceef11ea8029f121922b5f60901e/playground/detection/coco/yolof/yolof_base/uniform_matcher.py#L77
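To make the question concrete, here is a minimal, hypothetical sketch of the uniform top-k idea; it is my simplification, not the repo's `UniformMatcher`. Each ground-truth box selects its k nearest predicted boxes and, separately, its k nearest anchors by L1 distance; the union of the two index sets forms the positives, which is why the two sets of indices can be merged directly:

```python
# Hypothetical sketch of uniform top-k matching by L1 center distance.
# All names here are illustrative; boxes are simplified to (cx, cy) centers.

def l1(a, b):
    """L1 distance between two (cx, cy) centers."""
    return sum(abs(x - y) for x, y in zip(a, b))

def uniform_match(pred_boxes, anchors, targets, k=1):
    """Each target selects its k nearest predicted boxes and its k
    nearest anchors; the union of both index sets is the positive set,
    so the two lists of indices can simply be merged."""
    positives = set()
    for t in targets:
        by_pred = sorted(range(len(pred_boxes)), key=lambda i: l1(pred_boxes[i], t))
        by_anchor = sorted(range(len(anchors)), key=lambda i: l1(anchors[i], t))
        positives.update(by_pred[:k])    # indices chosen via predictions
        positives.update(by_anchor[:k])  # indices chosen via anchors
    return positives

print(sorted(uniform_match(
    pred_boxes=[(0, 0), (10, 10), (5, 5)],
    anchors=[(9, 9), (1, 1), (4, 4)],
    targets=[(10, 10)],
    k=1)))  # → [0, 1]
```

If a batched implementation computes the cost over all images at once, it would need to recover per-image matches afterwards (for example via a per-image index offset); whether the repo's code does exactly that is precisely what the question above asks the maintainers to confirm.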