The encoder part of a transformer learns even after removing positional encoding. Thoughts?
March 2, 2024, 2:46 p.m. | /u/mono1110
Deep Learning www.reddit.com
Initially, positional encoding was added. The model overfitted (I'll work on mitigating that). Then I got curious about what would happen if I removed the positional encoding.
The model still overfitted, i.e. it still learned the training data.
Any thoughts on why?
Thanks.
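For context on why this can happen: self-attention without positional encoding treats its input as an unordered set, so after pooling, the model behaves like a learned bag-of-words classifier, which can still have plenty of capacity to fit (and overfit) a classification dataset. Below is a minimal, hypothetical PyTorch sketch illustrating this (the single-layer encoder, dimensions, and sentiment-style classification setup are assumptions suggested by the post's tags, not the poster's actual code):

```python
# Sketch: a transformer encoder with NO positional encoding.
# Without positions, self-attention is permutation-equivariant,
# so after mean pooling the logits are permutation-INVARIANT:
# the model sees a bag of tokens, which is often enough to overfit.
import torch
import torch.nn as nn

class NoPosEncoder(nn.Module):
    def __init__(self, vocab_size=10_000, d_model=64, n_heads=4, n_classes=2):
        super().__init__()
        # Token embeddings only -- deliberately no positional encoding added.
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=128, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=1)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, tokens):                # tokens: (batch, seq_len) int64
        h = self.encoder(self.embed(tokens))  # no positional signal anywhere
        return self.head(h.mean(dim=1))       # mean pool -> order-independent logits

model = NoPosEncoder().eval()  # eval() disables dropout so the check is deterministic
x = torch.randint(0, 10_000, (2, 16))
perm = torch.randperm(16)
# Shuffling the token order leaves the output unchanged (up to float noise):
assert torch.allclose(model(x), model(x[:, perm]), atol=1e-5)
```

If that assertion holds for your architecture too, the model never had access to word order in the first place, so removing the positional encoding costs it nothing on a task it can solve from token identities alone.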