Reducing the Computational Cost of Deep Reinforcement Learning Research
Pablo Samuel Castro, Staff Software Engineer at Google Research, writes on the Google AI Blog:

It is widely accepted that the enormous growth of deep reinforcement learning research, which combines traditional reinforcement learning with deep neural networks, began with the publication of the seminal DQN algorithm. That paper demonstrated the potential of the combination, showing that it could produce agents capable of playing a number of Atari 2600 games very effectively. Since then, several approaches have built on and improved the original DQN. The popular Rainbow algorithm combined a number of these advances to achieve state-of-the-art performance on the ALE benchmark. This advance, however, came at a very high computational cost, which has the unfortunate side effect of widening the gap between those with ample access to computational resources and those without.

In "Revisiting Rainbow: Promoting more Insightful and Inclusive Deep Reinforcement Learning Research," to be presented at ICML 2021, we revisit this algorithm on a set of small- and medium-sized tasks. We first discuss the computational cost associated with the Rainbow algorithm. We then explore how the same conclusions about the benefits of combining the various algorithmic components can be reached with smaller-scale experiments, and generalize that idea to how research done on a smaller computational budget can provide valuable scientific insights.
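For readers unfamiliar with DQN, the core idea it contributed can be sketched in a few lines: the agent learns by regressing toward a one-step temporal-difference target. The function name, discount factor, and toy values below are illustrative assumptions, not taken from the post or the paper.

```python
import numpy as np

def dqn_target(reward, next_q_values, gamma=0.99, done=False):
    """One-step DQN target: r + gamma * max_a' Q(s', a').

    next_q_values: the network's action-value estimates for the next state
    (a plain array here; in DQN this comes from a target network).
    """
    if done:
        # Terminal transitions bootstrap nothing beyond the reward.
        return reward
    return reward + gamma * np.max(next_q_values)

# Toy transition: reward 1.0, three estimated next-state action values.
target = dqn_target(1.0, np.array([0.5, 2.0, 1.0]))
print(target)  # 1.0 + 0.99 * 2.0 = 2.98
```

Rainbow keeps this basic recipe but layers several independently proposed improvements (e.g. double Q-learning, prioritized replay, distributional targets) on top of it, which is precisely the combination the paper re-examines at smaller scale.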
from Slashdot https://ift.tt/36zRh0g