Feb. 4, 2022, 7:04 p.m. | NandaKishore Joshi

Towards Data Science - Medium towardsdatascience.com

Part 2 — Building a deep Q-network to play Gridworld — Catastrophic Forgetting and Experience Replay

In this article, let's talk about a problem with the vanilla Q-learning model: catastrophic forgetting. We will solve it using experience replay and see the improvement this brings to playing Gridworld.

Welcome to the second part of the Deep Q-network tutorials. This is a continuation of Part 1. If you have not read Part 1, I strongly suggest you go …
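For context before diving in, the core idea of experience replay is to store past transitions in a buffer and train on random mini-batches drawn from it, so consecutive, highly correlated Gridworld moves no longer dominate each update. Below is a minimal sketch of such a buffer; the class name, capacity, and batch size are illustrative assumptions, not the article's exact code.

```python
import random
from collections import deque

class ReplayBuffer:
    """Minimal experience replay buffer (illustrative sketch)."""

    def __init__(self, capacity=1000):
        # Oldest transitions are evicted first once capacity is reached.
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        # Store one transition as a (s, a, r, s', done) tuple.
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=200):
        # Uniformly sample a mini-batch of past transitions and
        # return them grouped by component for batched training.
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states, dones = zip(*batch)
        return states, actions, rewards, next_states, dones

    def __len__(self):
        return len(self.buffer)
```

In a training loop, each step's transition is pushed into the buffer, and once it holds enough entries, a random batch is sampled for the Q-network update instead of training on the latest move alone.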

Tags: data science, deep learning, deep Q-learning, machine learning, reinforcement learning
