GhostNet: More Features from Cheap Operations - 75.7% top-1 (better than MobileNetV3) #4418
Comments
@WongKinYiu Hi, it can be trained in 2 weeks on a GeForce RTX 2070. Maybe it can be fast on CPU/Neurochips (OpenCV-dnn).
@AlexeyAB thank you!
@WongKinYiu I just added dropout after avg-pooling. So if you have already started training, you can download the new cfg-file and continue training.
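For reference, the change described above (dropout inserted after global average pooling) would look roughly like this at the classifier tail of the cfg. This is a sketch only; the exact probability value and surrounding layers here are assumptions, not the committed ghostnet.cfg.txt:

```
[convolutional]
filters=1000
size=1
stride=1
pad=1
activation=linear

[avgpool]

# dropout added after avg-pooling
[dropout]
probability=.25

[softmax]
groups=1
```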
I tested darknet-19 on CPU and it takes 1.3 seconds per image (test image size 500*374). What could be the problem?
Did you build darknet with (OPENMP=1 AVX=1)?
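For anyone hitting the same CPU slowdown, a minimal sketch of the optimized rebuild, assuming the Makefile flags named in this thread:

```
# Rebuild darknet with CPU optimizations enabled
git clone https://github.com/AlexeyAB/darknet.git
cd darknet
make clean
# OPENMP=1 enables multi-threading, AVX=1 enables AVX vector instructions
make OPENMP=1 AVX=1
```

Without these flags, darknet falls back to single-threaded, non-vectorized CPU code, which can explain second-scale per-image latency.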
Thanks, I will update my code and try again.
top-1 1.5%, top-5 5.6%. |
@WongKinYiu Thanks!
@WongKinYiu This repo may help: https://github.com/d-li14/ghostnet.pytorch
@iamhankai thank you very much.
@WongKinYiu I have tested with your cfg/weights. The result is almost the same as yours.
@WongKinYiu Thanks! @rsek147 Can you attach your cfg-file?
@AlexeyAB @WongKinYiu I think ghostnet.cfg.txt is wrong; it can be compared against https://github.com/d-li14/ghostnet.pytorch/blob/master/ghostnet.py. I used the Ghost module in MobileNetV3-Small, and it gets 20% top-1 after 20,000 iterations with batch size 256.
@AlexeyAB hi! Thanks!
@WongKinYiu Hi, have you retrained GhostNet since? If so, can you share the .cfg and .weights files?
I did not get a good result.
paper: https://arxiv.org/abs/1911.11907v1
source: https://github.com/iamhankai/ghostnet
model: ghostnet.cfg.txt
GPU GeForce RTX 2070 - Darknet framework (GPU=1 CUDNN=1 CUDNN_HALF=1)
CPU Intel Core i7 6700k - Darknet framework (OPENMP=1 AVX=1)
Comparison table: #4203 (comment)
Maybe better than MobileNetV3, EfficientNet, MixNet, etc.: huawei-noah/Efficient-AI-Backbones#1
We measure the actual inference speed on an ARM-based mobile phone using the TFLite tool, in single-threaded mode with batch size 1.
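As background on why the Ghost module is cheap: it replaces an ordinary convolution with a thin "primary" convolution that produces only 1/s of the output channels, then generates the remaining (ghost) channels with cheap depthwise operations. A back-of-envelope sketch of the FLOPs saving, following the cost model in the paper (the shapes and the s/d values below are illustrative assumptions, not taken from ghostnet.cfg.txt):

```python
def conv_flops(c_in, c_out, h, w, k):
    """FLOPs of an ordinary k x k convolution producing c_out x h x w."""
    return c_out * h * w * c_in * k * k

def ghost_module_flops(c_in, c_out, h, w, k, s, d):
    """FLOPs of a Ghost module: a primary conv producing c_out/s intrinsic
    feature maps, then (s-1) cheap d x d depthwise ops per intrinsic map."""
    m = c_out // s                       # intrinsic channels
    primary = m * h * w * c_in * k * k   # ordinary conv part
    cheap = (s - 1) * m * h * w * d * d  # depthwise "ghost" part
    return primary + cheap

# Example: 112x112 feature map, 3x3 kernels, s = 2 (half the outputs are ghosts)
dense = conv_flops(64, 128, 112, 112, 3)
ghost = ghost_module_flops(64, 128, 112, 112, 3, s=2, d=3)
print(f"speed-up ratio ~ {dense / ghost:.1f}x")  # prints "speed-up ratio ~ 2.0x"
```

The ratio approaches s whenever the input channel count is much larger than s, which is why the paper reports roughly 2x theoretical savings at s = 2.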