Oct. 3, 2022, 1:12 a.m. | Michael Hoyer, Shahram Eivazi, Sebastian Otte

cs.LG updates on arXiv.org arxiv.org

Training recurrent neural networks is predominantly achieved via
backpropagation through time (BPTT). However, this algorithm is not an optimal
solution from either a biological or a computational perspective. A more
efficient and biologically plausible alternative to BPTT is e-prop. We
investigate the applicability of e-prop to long short-term memory networks
(LSTMs), for both supervised and reinforcement learning (RL) tasks. We show
that e-prop is a suitable optimization algorithm for LSTMs by comparing it to
BPTT on two benchmarks for supervised learning. This …

arxiv lstm traces training
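The abstract contrasts BPTT, which requires a full backward pass through the unrolled network, with e-prop, which accumulates local eligibility traces forward in time and combines them with an online learning signal. As a minimal sketch of that idea (a single tanh recurrent unit of my own construction, not the paper's LSTM formulation), the trace `e_t = dh_t/dw` can be propagated forward step by step; for one unit the trace is exact, whereas full e-prop drops cross-neuron recurrent terms:

```python
import numpy as np

def forward_eprop_grad(w, u, xs, ys):
    """Gradient of sum_t 0.5*(h_t - y_t)^2 w.r.t. the recurrent
    weight w, computed forward in time with an eligibility trace
    (no backward pass through time). h_t = tanh(w*h_{t-1} + u*x_t).
    Exact for this single unit; e-prop's approximation only matters
    when multiple recurrent neurons interact."""
    h, e, grad, loss = 0.0, 0.0, 0.0, 0.0
    for x, y in zip(xs, ys):
        h_new = np.tanh(w * h + u * x)
        # eligibility trace update: e_t = (1 - h_t^2) * (h_{t-1} + w * e_{t-1})
        e = (1.0 - h_new**2) * (h + w * e)
        h = h_new
        # instantaneous learning signal: dE_t/dh_t for squared error
        L = h - y
        grad += L * e
        loss += 0.5 * (h - y) ** 2
    return loss, grad

# sanity check against a central finite difference
rng = np.random.default_rng(0)
xs, ys = rng.normal(size=10), rng.normal(size=10)
w, u, eps = 0.5, 0.8, 1e-6
_, g = forward_eprop_grad(w, u, xs, ys)
lp, _ = forward_eprop_grad(w + eps, u, xs, ys)
lm, _ = forward_eprop_grad(w - eps, u, xs, ys)
print(abs(g - (lp - lm) / (2 * eps)) < 1e-5)
```

Because the trace is carried forward, memory cost is constant in sequence length, which is the computational advantage the abstract alludes to; the learning signal `L` here is the local squared-error derivative, standing in for the broadcast learning signal of the full algorithm.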
