Learning How to Play Atari Games Through Deep Neural Networks
Briefly

Arthur Samuel's 1959 checkers agent was groundbreaking: it could learn to outperform its own creator. Its core principle, simulating candidate moves and selecting the most advantageous one, remains foundational for AI in games. After checkers came milestones such as Deep Blue's defeat of Garry Kasparov in chess and TD-Gammon's innovative strategies derived from neural networks. The DQN approach, presented by Mnih et al. in 2013, combines Deep Neural Networks with TD-Learning; although newer methods have since surpassed it, it was a foundational step toward deep reinforcement learning.
"The checkers' agent tries to follow the idea of simulating every possible move given the current situation and selecting the most advantageous one."
"In this post, we explore one such approach: the DQN approach introduced in 2013 by Mnih et al, in which playing Atari games is approached through a synthesis of Deep Neural Networks and TD-Learning."
Read at towardsdatascience.com