July 20, 2023, 2:43 p.m. | /u/Then-Pineapple-3697

Deep Learning www.reddit.com

Hi All, I’ve been watching the CS224N NLP lectures from Stanford on YouTube. In Lecture 9, the instructor says that RNNs suffer from being unparallelizable, which makes intuitive sense to me. However, it is clearly possible to batch-process inputs to an RNN during training, as evidenced by the support for this in libraries such as PyTorch and TensorFlow. What am I missing here? Is there a distinction between parallelization and batching that I’m not understanding?
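
To make the distinction I'm asking about concrete, here is a minimal sketch (my own, not from the lecture) of what I understand batching vs. parallelization over time to mean in PyTorch:

```python
import torch
import torch.nn as nn

batch_size, seq_len, input_size, hidden_size = 32, 100, 64, 128
x = torch.randn(batch_size, seq_len, input_size)  # a whole batch of sequences

cell = nn.RNNCell(input_size, hidden_size)
h = torch.zeros(batch_size, hidden_size)

# Batching: each step processes all 32 sequences at once (one big matmul),
# so the batch dimension is handled in parallel.
# Sequential bottleneck: step t needs the hidden state from step t-1,
# so this loop over the 100 time steps cannot be parallelized.
for t in range(seq_len):
    h = cell(x[:, t, :], h)  # h_t = f(x_t, h_{t-1})
```

So the batch dimension goes through each step together, but the time steps still run one after another because h_t depends on h_{t-1}; that serial loop over time is what I take the lecture to mean by "unparallelizable". Is that the right way to read it?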
