Learning to Learn how to Learn: Self-Adaptive Visual Navigation using Meta-Learning (https://arxiv.org/abs/1812.00971)
Customisable Unified Physical Simulations (CUPS) for Reinforcement Learning. Experiments run in the ai2thor environment (http://ai2thor.allenai.org/) using, e.g., A3C, RainbowDQN, and A3C_GA (Gated Attention multi-modal fusion) for Task-Oriented Language Grounding, i.e. tasks specified by natural-language instructions such as "Pick up the Cup or else"; see the environment-interaction and gated-attention sketches after this list.
Evaluating pre-trained navigation agents under corruptions
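Since the repositories above all build on the AI2-THOR simulator, a minimal environment-interaction sketch may help orient readers. It uses the public `ai2thor.controller.Controller` API; the scene name, grid size, and action sequence are illustrative assumptions, not taken from any repository listed here.

```python
# Minimal AI2-THOR interaction sketch (assumes `pip install ai2thor`).
# Scene, grid size, and actions below are illustrative choices only.
from ai2thor.controller import Controller

controller = Controller(scene="FloorPlan1", gridSize=0.25)

for action in ["MoveAhead", "RotateRight", "MoveAhead"]:
    event = controller.step(action=action)
    # event.frame holds the egocentric RGB observation (H x W x 3 numpy array);
    # event.metadata reports whether the action succeeded and the full scene state.
    print(action, event.metadata["lastActionSuccess"])

controller.stop()
```

The A3C_GA agent mentioned in the CUPS entry fuses the visual observation with the instruction via gated attention. Below is a rough sketch of that fusion step (in the spirit of Chaplot et al., "Gated-Attention Architectures for Task-Oriented Language Grounding"); the class name, dimensions, and layer choices are assumptions for illustration, not the repository's actual code.

```python
import torch
import torch.nn as nn

class GatedAttention(nn.Module):
    """Channel-wise gating of visual features by an instruction embedding (sketch)."""

    def __init__(self, instr_dim: int, vision_channels: int):
        super().__init__()
        # Project the instruction embedding to one gate value per visual channel.
        self.gate = nn.Linear(instr_dim, vision_channels)

    def forward(self, vision_feats: torch.Tensor, instr_emb: torch.Tensor) -> torch.Tensor:
        # vision_feats: (B, C, H, W) CNN feature maps; instr_emb: (B, instr_dim).
        g = torch.sigmoid(self.gate(instr_emb))              # (B, C) attention gates in [0, 1]
        return vision_feats * g.unsqueeze(-1).unsqueeze(-1)  # broadcast gates over H and W
```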