[Feat] Add Accuracy #6
Conversation
mmeval/classification/accuracy.py
Outdated
for pred, label in zip(predictions, labels):
    self._results.append((pred, label))
Better to keep only the top-k results in the intermediate results.
The intermediate results can be very large if the number of classes is large.
Updated, `corrects` is now used as the intermediate results.
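To put the memory concern in perspective, here is a rough back-of-the-envelope sketch. The sample and class counts below are illustrative assumptions, not numbers from this PR:

```python
# Illustrative sizes: 10k samples, 1k classes, up to top-5 accuracy.
num_samples, num_classes, maxk = 10_000, 1_000, 5

# Buffering raw float64 prediction scores for every class per sample...
full_scores_bytes = num_samples * num_classes * 8   # 80 MB
# ...versus buffering one boolean "correct" flag per top-k slot.
corrects_bytes = num_samples * maxk * 1             # 50 KB

print(full_scores_bytes // corrects_bytes)  # → 1600
```

So storing per-sample correctness flags instead of raw scores shrinks the buffer by a factor of `num_classes / maxk`, which matters for distributed evaluation where intermediate results are gathered across processes.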
mmeval/classification/accuracy.py
Outdated
for pred, label in zip(predictions, labels):
    corrects = self.compute_correct(pred, label)
    self._results.append(corrects)
Why calculate corrects one by one instead of in a batch?
Updated, corrects are now calculated in a batch.
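A minimal sketch of what a batched corrects computation might look like with NumPy broadcasting. The function name, shapes, and example values are illustrative, not the PR's actual implementation:

```python
import numpy as np

def compute_corrects(predictions: np.ndarray, labels: np.ndarray,
                     maxk: int) -> np.ndarray:
    # Indices of the maxk highest scores per sample, in descending order.
    topk_indices = np.argsort(-predictions, axis=1)[:, :maxk]
    # Boolean (num_samples, maxk): does top-k slot j hold the true label?
    hits = topk_indices == labels[:, None]
    # Cumulative flags: a sample is "correct at top-k" if the label
    # appears anywhere among its first k predictions.
    return np.cumsum(hits, axis=1).astype(bool)

preds = np.array([[0.1, 0.7, 0.2],
                  [0.5, 0.2, 0.3]])
labels = np.array([1, 2])
corrects = compute_corrects(preds, labels, maxk=2)
print(corrects[:, 0].mean(), corrects[:, 1].mean())  # → 0.5 1.0
```

The whole batch is handled by a single `argsort` and one broadcasted comparison, avoiding a Python-level loop over samples; top-k accuracy is then just a column mean over the returned flags.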
if torch is not None:
    values, indices = _torch_topk(torch.from_numpy(inputs), k, dim=axis)
    return values.numpy(), indices.numpy()
How fast is the PyTorch implementation compared with NumPy?
This depends on the size of the input. The larger the size of the input, the greater the speedup.
The following is a simple benchmark.
import time

import numpy as np
import torch

def numpy_topk(inputs, k, axis=None, use_torch=False):
    if use_torch:
        values, indices = torch.from_numpy(inputs).topk(k, dim=axis)
        return values.numpy(), indices.numpy()
    # Sort the negated inputs so the first k indices are the k largest,
    # matching the semantics of torch.topk.
    indices = np.argsort(-inputs, axis=axis)
    indices = np.take(indices, np.arange(k), axis=axis)
    values = np.take_along_axis(inputs, indices, axis=axis)
    return values, indices

def test(shape, k, axis):
    print('Test setting: ', shape, k, axis)
    arr = np.random.rand(*shape)
    t1 = time.time()
    numpy_topk(arr, k, axis, use_torch=False)
    t2 = time.time()
    numpy_topk(arr, k, axis, use_torch=True)
    t3 = time.time()
    print(f'custom impl numpy topk cost: {t2 - t1}')
    print(f'torch impl numpy topk cost: {t3 - t2}')

if __name__ == "__main__":
    test((100, 100), k=4, axis=1)
    test((100, 1000), k=4, axis=1)
    test((100, 1000), k=10, axis=1)
    test((1000, 1000), k=4, axis=1)
    test((10000, 1000), k=4, axis=1)
Got the following outputs:
Test setting: (100, 100) 4 1
custom impl numpy topk cost: 0.0005044937133789062
torch impl numpy topk cost: 0.0046596527099609375
Test setting: (100, 1000) 4 1
custom impl numpy topk cost: 0.005578756332397461
torch impl numpy topk cost: 0.004395961761474609
Test setting: (100, 1000) 10 1
custom impl numpy topk cost: 0.005606651306152344
torch impl numpy topk cost: 0.0003619194030761719
Test setting: (1000, 1000) 4 1
custom impl numpy topk cost: 0.05330657958984375
torch impl numpy topk cost: 0.001957416534423828
Test setting: (10000, 1000) 4 1
custom impl numpy topk cost: 0.5171260833740234
torch impl numpy topk cost: 0.0032396316528320312
mmeval/classification/accuracy.py
Outdated
@overload  # type: ignore
@dispatch
def compute_corrects(
Maybe make it a private method if it won't be used by users.
Done
Motivation
Add an accuracy metric and its test case.
Modification
MMCls