June 1, 2023, 8:36 a.m. | /u/Calcifer777

Deep Learning www.reddit.com

Hi all, I'm trying to build a seq2seq model with attention, and I'm stuck on the decoder implementation.

I have the encoder embeddings with dimension (N, L_e, E) and the decoder inputs with dimension (N, L_d, E). N is the batch size; L_e and L_d are the encoder and decoder sequence lengths; and E is the embedding size.

I'm working in PyTorch; I would like to apply an nn.MultiheadAttention layer to the encoder embeddings and pass them to the …
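For what it's worth, a minimal sketch of the cross-attention step being described might look like the following, assuming batch_first=True (so the layer accepts (N, L, E)-shaped tensors rather than the default (L, N, E)) and an illustrative num_heads=8; the tensors here are random placeholders for the encoder embeddings and decoder inputs:

```python
import torch
import torch.nn as nn

N, L_e, L_d, E = 32, 10, 7, 64  # batch, encoder len, decoder len, embed dim

encoder_out = torch.randn(N, L_e, E)  # encoder embeddings
decoder_in = torch.randn(N, L_d, E)   # decoder inputs

# batch_first=True so inputs are (N, L, E) instead of the default (L, N, E)
cross_attn = nn.MultiheadAttention(embed_dim=E, num_heads=8, batch_first=True)

# cross-attention: the decoder states query the encoder outputs
attn_out, attn_weights = cross_attn(
    query=decoder_in,    # (N, L_d, E)
    key=encoder_out,     # (N, L_e, E)
    value=encoder_out,   # (N, L_e, E)
)
print(attn_out.shape)      # torch.Size([32, 7, 64])
print(attn_weights.shape)  # torch.Size([32, 7, 10]), averaged over heads
```

The key point is that the decoder inputs go in as the query while the encoder embeddings serve as both key and value, so the output keeps the decoder's sequence length L_d.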

