As part of the CS6480: Topics in Vision and Learning course taught by Dr. Vineeth Balasubramanian, I will upload a research paper summary every week. The papers are mostly related to my research area, Knowledge Distillation for Model Compression.
- Week 1: StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks [paper] [summary]
- Week 2: Distilling the Knowledge in a Neural Network [paper] [summary]
- Week 3: A Gift from Knowledge Distillation: Fast Optimization, Network Minimization and Transfer Learning [paper] [summary]
- Week 4: Learning Efficient Object Detection Models with Knowledge Distillation [paper] [summary]
- Week 5: Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization [paper] [summary]
- Week 6: Paying More Attention to Attention: Improving the Performance of (Student) CNNs via Attention Transfer [paper] [summary]
- Week 7: FitNets: Hints For Thin Deep Nets [paper] [summary]
- Week 8: Deep Model Compression: Distilling Knowledge from Noisy Teachers [paper] [summary]
- Week 9: Data-free Knowledge Distillation for Deep Neural Networks [paper] [summary]
- Week 10: Model Distillation with Knowledge Transfer from Face Classification to Alignment and Verification [paper] [summary]
Note: The course is over.