Multi-Task Learning
To Read
- Beyond Shared Hierarchies: Deep Multitask Learning through Soft Layer Ordering
- Pseudo-task Augmentation: From Deep Multitask Learning to Intratask Sharing—and Back
- Zero-Shot Task Generalization with Multi-Task Deep Reinforcement Learning
- Multi-task Learning for Continuous Control
- Multi-Goal Reinforcement Learning: Challenging Robotics Environments and Request for Research
- DiGrad: Multi-Task Reinforcement Learning with Shared Actions
- Distral: Robust Multitask Reinforcement Learning
- End-to-End Video Captioning with Multitask Reinforcement Learning
- RL2: Fast Reinforcement Learning via Slow Reinforcement Learning
Read
- Vision Based Multi-task Manipulation for Inexpensive Robots Using End-to-End Learning from Demonstration
- Includes a diagram for how to add a GAN to the network as an auxiliary task
- Multi-Task Learning Objectives for Natural Language Processing
- Specifically about NLP, but some ideas might be useful for MaLPi.
- Auxiliary tasks should complement the main task.
- Adversarial loss
- Ganin, Y., & Lempitsky, V. (2015). Unsupervised Domain Adaptation by Backpropagation. In Proceedings of the 32nd International Conference on Machine Learning (Vol. 37).
- Domain-Adversarial Training of Neural Networks
- Other auxiliary-task examples: predicting the next frame in video; Grounded Language Learning in a Simulated 3D World.
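The adversarial loss above comes from Ganin & Lempitsky's gradient reversal layer: an identity in the forward pass whose backward pass negates (and scales) the gradient, so the shared features are trained to confuse the domain classifier. A minimal sketch of just that layer, with no autograd framework (the lambda value is a hypothetical choice):

```python
import numpy as np

LAMBDA = 0.1  # reversal strength; a hyperparameter, value here is arbitrary

def grl_forward(x):
    # Identity in the forward pass: features flow unchanged
    # into the domain-classifier head.
    return x

def grl_backward(grad_from_domain_head):
    # In the backward pass the gradient is negated and scaled before
    # flowing into the shared feature extractor, so the extractor is
    # pushed to *increase* the domain classifier's loss.
    return -LAMBDA * grad_from_domain_head

x = np.array([1.0, 2.0])
out = grl_forward(x)                      # unchanged: [1.0, 2.0]
grad = grl_backward(np.array([0.5, -0.5]))  # negated and scaled
```

In a real training loop this sits between the shared encoder and the domain head, while the main-task head backpropagates normally, giving the combined adversarial objective.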
- Hierarchical and Interpretable Skill Acquisition in Multi-Task Reinforcement Learning
- Multi-Task Sequence To Sequence Learning
- All of the examples are for text related tasks.
- Sequence auto-encoders were one of the auxiliary tasks they used which showed benefit.
- MultiNet: Multi-Modal Multi-Task Learning for Autonomous Driving
- They allow the driver to override the NN during autonomous operation, rather than having the expert edit collected data after the fact.
- They collect two stereo pairs of images 33ms apart, then use those four images to predict ten control decisions (power plus steering) over the next 330ms. Only the final control decision is executed; the other nine are treated as 'side tasks', i.e. auxiliary tasks.
- They insert a binary modality input after the first convolutional layer, before the second. Modality would be something like driving at home versus at the track.
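A minimal numpy sketch of the input/output arrangement described in the MultiNet notes above: four stacked camera frames in, a binary modality flag injected after the first layer, ten (power, steering) pairs out, with only the last pair executed. All shapes are hypothetical, and dense layers stand in for the conv stack to keep the sketch short.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: four stacked 64x64 grayscale frames
# (two stereo pairs captured 33ms apart), flattened.
N_FRAMES, H, W = 4, 64, 64
HIDDEN = 128
N_STEPS = 10      # ten control decisions over the next 330ms
N_CONTROLS = 2    # (power, steering) per step

# Dense layers stand in for the convolutional stack in this sketch.
W1 = rng.normal(0.0, 0.01, (N_FRAMES * H * W, HIDDEN))
W2 = rng.normal(0.0, 0.01, (HIDDEN + 1, N_STEPS * N_CONTROLS))

def forward(frames, modality):
    """frames: (4, 64, 64) array; modality: 0 or 1 (e.g. home vs. track)."""
    h = np.tanh(frames.reshape(-1) @ W1)
    # Binary modality input injected after the first layer, mirroring
    # MultiNet's insertion between the first and second conv layers.
    h = np.concatenate([h, [float(modality)]])
    return (h @ W2).reshape(N_STEPS, N_CONTROLS)

frames = rng.normal(size=(N_FRAMES, H, W))
controls = forward(frames, modality=1)   # shape (10, 2)
# All ten steps receive a training loss, but only the final
# decision is executed; the earlier nine act as side tasks.
executed = controls[-1]                  # shape (2,)
```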
- An Overview of Multi-Task Learning in Deep Neural Networks