
Bad transfer learning result when fine-tuning on iNaturalist 2019 (not IN1K) #77

Open
cssddnnc9527 opened this issue Dec 30, 2021 · 7 comments


@cssddnnc9527

cssddnnc9527 commented Dec 30, 2021

Dear Author,

First, thank you for your great contribution.

When fine-tuning on IN1K from the pre-trained model that was also trained on IN1K, the result is close to the paper's:
[Screenshot from 2021-12-30 14-13-49]

But if I fine-tune on iNaturalist with the same pre-trained model and the same fine-tuning parameters listed on your GitHub page, the result is really bad:
[Screenshot from 2021-12-30 14-14-16]

What do you think the possible reason might be?
Looking forward to your reply; thanks in advance!

BTW, iNaturalist has about 260,000 images across 1,010 classes. The train and val data are not pre-separated in iNaturalist, so I divided them following the IN1K ratio (96% train, 4% val).

In addition, do you plan to implement fine-tuning code for object detection and semantic segmentation?
If so, how long would we need to wait? Thanks again!

@zsddd

zsddd commented Jan 2, 2022

Hello, could you tell me how you loaded the author's pre-trained model during fine-tuning? Could you share the relevant code? Thanks!

@pengzhiliang
Owner

@cssddnnc9527 Thanks for your kind words!

Are the pre-trained weights loaded correctly?

@pengzhiliang
Owner

@zsddd --finetune "/path/to/model_weight"
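For anyone with the same question, here is a minimal sketch of how the `--finetune` flag fits into a fine-tuning invocation. The script name and the other flags are assumptions based on the repo's README; check it for the exact names, and adjust the paths and class count to your dataset (1,010 classes for iNaturalist 2019):

```shell
# Hypothetical invocation; verify script name and flags against the repo README.
python run_class_finetuning.py \
    --model vit_base_patch16_224 \
    --finetune /path/to/pretrained_checkpoint.pth \
    --data_path /path/to/inaturalist2019 \
    --nb_classes 1010 \
    --output_dir /path/to/output
```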

@cssddnnc9527
Author

@pengzhiliang

Thanks for your reply!

I found the root cause: it's a data-split problem. I split the data roughly, instead of splitting within every class.
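For reference, the per-class (stratified) 96%/4% split described above can be sketched as follows. This is a minimal illustration with stdlib only; the function and variable names are made up for the example:

```python
import random
from collections import defaultdict

def stratified_split(samples, val_ratio=0.04, seed=0):
    """Split (path, label) pairs so every class keeps ~val_ratio of its images in val."""
    by_class = defaultdict(list)
    for path, label in samples:
        by_class[label].append(path)

    rng = random.Random(seed)
    train, val = [], []
    for label, paths in by_class.items():
        rng.shuffle(paths)
        # Keep at least one validation image per class, even for rare classes.
        n_val = max(1, round(len(paths) * val_ratio))
        val += [(p, label) for p in paths[:n_val]]
        train += [(p, label) for p in paths[n_val:]]
    return train, val

# Usage: a fake dataset with 3 classes of 100 images each.
samples = [(f"img_{c}_{i}.jpg", c) for c in range(3) for i in range(100)]
train, val = stratified_split(samples)
# Each class contributes 4 images to val and 96 to train.
```

Splitting class by class (rather than one global shuffle over a possibly class-ordered file list) is what guarantees every class appears in both splits.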

In addition, do you plan to implement fine-tuning code for object detection and semantic segmentation?
If so, how long would we need to wait? Thanks again!

@pengzhiliang
Owner

Hello~ I am sorry, but transferring it to downstream tasks like semantic segmentation is not on my schedule right now.

It is not a hard job, though; please refer to semantic_segmentation in BEiT.

@insomniaaac

> Hello~ I am sorry, but transferring it to downstream tasks like semantic segmentation is not on my schedule right now. It is not a hard job, though; please refer to semantic_segmentation in BEiT.

Would you mind giving a short tutorial? I am not familiar with the mmsegmentation library, and it is confusing to me.
I'm sorry to take up your time, but I'd really appreciate a short tutorial.
Thanks again!~

@idansc

idansc commented Oct 27, 2022

> @pengzhiliang
>
> Thanks for your reply!
>
> I found the root cause: it's a data-split problem. I split the data roughly, instead of splitting within every class.
>
> In addition, do you plan to implement fine-tuning code for object detection and semantic segmentation? If so, how long would we need to wait? Thanks again!

Hey, in case you've managed to fine-tune on iNaturalist, would you mind sharing the weights?
Thanks in advance
