Reset min_max normalization values at the start of validation epoch #2153
📝 Description
Currently, the min_max values used for normalization are updated every validation epoch. Because the update compares new values against old ones, it carries stale values over from previous epochs:

```python
self.max = torch.max(self.max, torch.max(predictions))
```
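To illustrate the problem, here is a minimal sketch (plain Python instead of torch, with a hypothetical `MinMaxTracker` class standing in for the real metric) of how a running max that is never reset keeps stale values across validation epochs:

```python
class MinMaxTracker:
    """Hypothetical stand-in for a running min/max normalization metric."""

    def __init__(self):
        self.min = float("inf")
        self.max = float("-inf")

    def update(self, predictions):
        # Mirrors: self.max = torch.max(self.max, torch.max(predictions))
        self.min = min(self.min, min(predictions))
        self.max = max(self.max, max(predictions))


tracker = MinMaxTracker()
tracker.update([0.1, 0.9])  # validation epoch 1
tracker.update([0.2, 0.5])  # validation epoch 2: max stays 0.9
print(tracker.max)          # 0.9 — a stale value from epoch 1, not 0.5
```

Normalizing epoch-2 predictions against the stale 0.9 maximum compresses their range, which is the behavior this PR addresses.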
This does not appear to affect most models, but with EfficientAD at least, the final anomaly maps after training are normalized incorrectly (see #2139 or this discussion).
This fix resets the min_max values at the beginning of every validation epoch, ensuring that only values from the current epoch are used for normalization.
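The reset described above can be sketched as follows. This is not the exact patch; the class and attribute names are assumptions, and the hook name follows PyTorch Lightning's `on_validation_epoch_start` callback convention:

```python
class MinMaxMetric:
    """Hypothetical running min/max metric with a per-epoch reset."""

    def __init__(self):
        self.reset()

    def reset(self):
        # Restore the initial sentinel values so any real prediction
        # replaces them on the first update of the epoch.
        self.min = float("inf")
        self.max = float("-inf")

    def update(self, predictions):
        self.min = min(self.min, min(predictions))
        self.max = max(self.max, max(predictions))

    def on_validation_epoch_start(self):
        # The fix: drop values accumulated in previous epochs.
        self.reset()


metric = MinMaxMetric()
metric.update([0.1, 0.9])           # validation epoch 1
metric.on_validation_epoch_start()  # reset before epoch 2
metric.update([0.2, 0.5])           # validation epoch 2
print(metric.max)                   # 0.5 — only current-epoch values remain
```

With the reset in place, normalization bounds always reflect the most recent validation epoch rather than the union of all epochs seen so far.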
🛠️ Fixes #2027, thank you @CarlosNacher for pointing this problem out! Partially fixes #2139.