Is Mixup necessary for MAE fine-tuning? #99

Open
SCU-zly opened this issue Dec 5, 2023 · 0 comments
Comments

SCU-zly commented Dec 5, 2023

Your work is excellent! However, when I tried fine-tuning on my downstream regression task, I found that the Mixup function takes the parameter num_classes=args.nb_classes. Due to the special nature of my task, my output target has only one dimension, so there are two possible solutions for me:

  • Naively set num_classes to 1.
  • Disable the Mixup method entirely.

Besides, if I keep Mixup, I also need to find a proper loss function for my regression task (a rough sketch of both options follows below). I'm not sure which solution to choose, or whether there is a better one. I'm looking forward to your answer!
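For concreteness, here is a minimal sketch of what I mean. It assumes the fine-tuning script builds Mixup via timm as in the MAE reference code; `regression_mixup` is my own placeholder, not a function from the repo.

```python
import torch

# Option 1 (naive num_classes=1) seems problematic: timm's Mixup
# one-hot encodes integer class targets, so a continuous regression
# target cannot pass through it unchanged.

# Option 2: bypass timm's Mixup and mix inputs/targets by hand,
# keeping the targets continuous.
def regression_mixup(x, y, alpha=0.8):
    """Mix a batch of inputs and continuous targets with one shared lambda."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    index = torch.randperm(x.size(0), device=x.device)
    x_mixed = lam * x + (1.0 - lam) * x[index]
    y_mixed = lam * y + (1.0 - lam) * y[index]
    return x_mixed, y_mixed

# In the training loop, this would replace the mixup_fn call, e.g.:
# samples, targets = regression_mixup(samples, targets, alpha=args.mixup)
```

If this variant is sound, the criterion could presumably stay a plain regression loss such as torch.nn.MSELoss, since the mixed target is the same convex combination as the mixed inputs, and no soft-target cross-entropy would be needed.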