July 20, 2023, 2:43 p.m. | /u/Then-Pineapple-3697

Deep Learning | www.reddit.com

Hi All, I’ve been watching the CS224N NLP lectures from Stanford on YouTube. In Lecture 9, the instructor says that RNNs suffer from being unparallelizable, which makes intuitive sense to me. However, it is clearly possible to batch-process inputs to an RNN during training, as evidenced by the support for this in libraries such as PyTorch and TensorFlow. What am I missing here? Is there a distinction between parallelization and batching that I’m not understanding?
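One way to make the distinction concrete is a minimal PyTorch sketch (the sizes and the explicit `nn.RNNCell` loop here are illustrative, not code from the lecture):

```python
import torch
import torch.nn as nn

# Illustrative sizes: 32 sequences, 100 time steps each.
batch_size, seq_len, input_dim, hidden_dim = 32, 100, 16, 64

cell = nn.RNNCell(input_dim, hidden_dim)
x = torch.randn(batch_size, seq_len, input_dim)   # (batch, time, features)
h = torch.zeros(batch_size, hidden_dim)           # initial hidden state

# All 32 sequences advance together at each step (batch parallelism),
# but the loop over the 100 time steps is inherently sequential:
# computing h_t requires h_{t-1}.
for t in range(seq_len):
    h = cell(x[:, t, :], h)  # one batched matmul per time step
```

Batching parallelizes across sequences within a single time step, but the loop over `t` cannot be parallelized because each hidden state depends on the previous one; that sequential dependence along the time axis is presumably the unparallelizability the lecture refers to, and it is what architectures like the Transformer remove.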
