Address minor WinCLIP issues #1889

Merged 2 commits on Mar 21, 2024
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -14,6 +14,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).

### Fixed

- Use right interpolation method in WinCLIP resize (<https://github.com/openvinotoolkit/anomalib/pull/1889>)
- 🐞 Fix the error if the device in masks_to_boxes is neither CPU nor CUDA by @danylo-boiko in https://github.com/openvinotoolkit/anomalib/pull/1839

## [v1.0.0] - 2024-02-29
6 changes: 2 additions & 4 deletions src/anomalib/models/image/winclip/README.md
@@ -22,13 +22,11 @@ WinCLIP is a zero-shot model, which means that we can directly evaluate the mode

### 0-Shot

`anomalib test --model WinClip --data MVTec --data.image_size 240 --data.normalization clip`
`anomalib test --model WinClip --data MVTec`

### 1-Shot

`anomalib test --model WinClip --model.k_shot 1 --data MVTec --data.image_size 240 --data.normalization clip`

> **Note:** The `data.image_size` and `data.normalization` parameters must be set to the above values to match the configuration in which the pre-trained CLIP model weights were obtained.
`anomalib test --model WinClip --model.k_shot 1 --data MVTec`

## Parameters

4 changes: 2 additions & 2 deletions src/anomalib/models/image/winclip/lightning_model.py
@@ -13,7 +13,7 @@

import torch
from torch.utils.data import DataLoader
from torchvision.transforms.v2 import Compose, Normalize, Resize, Transform
from torchvision.transforms.v2 import Compose, InterpolationMode, Normalize, Resize, Transform

from anomalib import LearningType
from anomalib.data.predict import PredictDataset
@@ -174,7 +174,7 @@ def configure_transforms(self, image_size: tuple[int, int] | None = None) -> Tra
logger.warning("Image size is not used in WinCLIP. The input image size is determined by the model.")
return Compose(
[
Resize((240, 240), antialias=True),
Resize((240, 240), antialias=True, interpolation=InterpolationMode.BICUBIC),
Normalize(mean=(0.48145466, 0.4578275, 0.40821073), std=(0.26862954, 0.26130258, 0.27577711)),
],
)