Web: https://www.reddit.com/r/reinforcementlearning/comments/ujwiai/sequence_length_in_lstm/

May 6, 2022, 8:28 p.m. | /u/No_Possibility_7588

Reinforcement Learning reddit.com

In this PPO-LSTM architecture, there is a sequence length variable ([https://github.com/MarcoMeter/recurrent-ppo-truncated-bptt/blob/9206a97b7546ec62e668eaf67ae6d4b752e0f0ee/model.py#L79](https://github.com/MarcoMeter/recurrent-ppo-truncated-bptt/blob/9206a97b7546ec62e668eaf67ae6d4b752e0f0ee/model.py#L79)). If you look at the configs, it is set to 8 for every environment ([https://github.com/MarcoMeter/recurrent-ppo-truncated-bptt/blob/9206a97b7546ec62e668eaf67ae6d4b752e0f0ee/configs.py#L15](https://github.com/MarcoMeter/recurrent-ppo-truncated-bptt/blob/9206a97b7546ec62e668eaf67ae6d4b752e0f0ee/configs.py#L15)). It is described as the "length of the fed sequences". Could someone give an example that makes clearer what this refers to? Thanks!
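A minimal sketch of what a "fed sequence" of length 8 could mean in truncated BPTT (my own illustration, not the repository's code; the 20-step episode, tensor sizes, and zero hidden states are assumptions for the example): a collected episode is cut into chunks of 8 consecutive time steps, the last chunk is zero-padded, and the LSTM is unrolled over exactly those 8 steps during the PPO update, so gradients do not flow across chunk boundaries.

```python
import torch
import torch.nn as nn

sequence_length = 8          # the config value in question
obs_dim, hidden_dim = 4, 16  # assumed sizes for illustration
episode_len = 20             # assumed length of one collected episode

obs = torch.randn(episode_len, obs_dim)  # observations of one episode

# Split the episode into sequences of length 8, zero-padding the remainder.
chunks = list(torch.split(obs, sequence_length, dim=0))
pad = sequence_length - chunks[-1].shape[0]
if pad > 0:
    chunks[-1] = torch.cat([chunks[-1], torch.zeros(pad, obs_dim)], dim=0)
batch = torch.stack(chunks)              # (num_sequences, 8, obs_dim)
print(batch.shape)                       # torch.Size([3, 8, 4])

lstm = nn.LSTM(obs_dim, hidden_dim, batch_first=True)

# Each sequence starts from a recorded (here: zero) hidden state; BPTT is
# truncated to the 8 steps inside each sequence, never across sequences.
h0 = torch.zeros(1, batch.shape[0], hidden_dim)
c0 = torch.zeros(1, batch.shape[0], hidden_dim)
out, _ = lstm(batch, (h0, c0))           # (num_sequences, 8, hidden_dim)
```

So with sequence_length = 8, the 20-step episode above yields three training sequences of 8 steps each (8, 8, and 4 + padding), and each PPO minibatch is built from such fixed-length sequences rather than whole episodes.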

Tags: lstm, reinforcementlearning
