July 8, 2023, 8:47 a.m. | /u/David202023

Deep Learning www.reddit.com

I have numeric signals from two sensors, and I would like to learn a mapping between them with a sequence-to-sequence autoencoder. I used the transformer architecture, and it seems to be learning: the loss decreases over time on both the training and validation sets, and when I run the decoder with both of its inputs, the memory (that is, `encoder(source)`) and the `target`, it returns values very close to `target`, with correspondingly low MSE.

Nevertheless, in the inference stage, results …
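The setup described above evaluates the decoder with the ground-truth `target` as input (teacher forcing), whereas at inference the decoder only has its own previous outputs. A minimal toy sketch of that difference, with a hypothetical `decoder_step` (the function name and the mean-based "model" are illustrative assumptions, not the poster's actual network):

```python
def decoder_step(memory, prefix):
    """Toy next-step prediction from encoder memory plus the sequence so far.

    Stands in for one call to a trained transformer decoder; here it just
    averages its inputs so the control flow is easy to follow.
    """
    vals = list(memory) + list(prefix)
    return sum(vals) / len(vals)

def teacher_forced(memory, target):
    # At each step the GROUND-TRUTH prefix target[:t] is fed in,
    # as during training, so per-step errors cannot accumulate.
    return [decoder_step(memory, target[:t]) for t in range(1, len(target) + 1)]

def autoregressive(memory, start, steps):
    # At inference only the model's OWN previous outputs are available,
    # so any per-step error is fed back into every later step.
    out = [start]
    for _ in range(steps):
        out.append(decoder_step(memory, out))
    return out[1:]

memory = [0.1, 0.2, 0.3]          # stands in for encoder(source)
target = [0.1, 0.2, 0.3, 0.4]     # stands in for the target sequence

tf_preds = teacher_forced(memory, target)
ar_preds = autoregressive(memory, target[0], len(target))
```

Even with the same "model", `tf_preds` and `ar_preds` diverge after the first step, which is one common reason a seq2seq model looks accurate when evaluated with the target supplied but degrades when decoding freely.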

