Using attention in Seq2Seq image
May 19, 2022, 9:35 p.m. | /u/Black_Beard53
Computer Vision www.reddit.com
I am trying to implement an encoder-decoder architecture with an attention mechanism (not self-attention) for image sequences instead of text. So far I have only found resources that deal with image-to-text. Has anyone worked on this before, or does anyone know of resources that would be helpful?
I am thinking of using a CNN to get flattened image vectors, feeding them to the encoder-decoder module sequentially, and training the model to obtain a latent representation …
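One way to sketch the idea described above, assuming PyTorch: a small CNN flattens each frame into a feature vector, a GRU encoder consumes the frame vectors, and a GRU decoder attends over the encoder outputs with additive (Bahdanau-style) attention. All layer sizes, names, and the choice of GRU over LSTM are illustrative assumptions, not a prescribed design.

```python
import torch
import torch.nn as nn

class FrameEncoder(nn.Module):
    """Tiny CNN that flattens each frame into a feature vector (sizes are hypothetical)."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),              # fixed 4x4 spatial output
        )
        self.fc = nn.Linear(32 * 4 * 4, feat_dim)

    def forward(self, x):                         # x: (B, T, 3, H, W)
        B, T = x.shape[:2]
        f = self.conv(x.flatten(0, 1))            # (B*T, 32, 4, 4)
        return self.fc(f.flatten(1)).view(B, T, -1)  # (B, T, feat_dim)

class AttnSeq2Seq(nn.Module):
    """Encoder-decoder over frame features with additive attention (not self-attention)."""
    def __init__(self, feat_dim=128, hid=256):
        super().__init__()
        self.frames = FrameEncoder(feat_dim)
        self.encoder = nn.GRU(feat_dim, hid, batch_first=True)
        self.decoder = nn.GRUCell(feat_dim + hid, hid)
        self.attn_W = nn.Linear(hid, hid)         # scores encoder states
        self.attn_U = nn.Linear(hid, hid)         # scores decoder state
        self.attn_v = nn.Linear(hid, 1)
        self.out = nn.Linear(hid, feat_dim)       # predicts the next latent frame vector

    def forward(self, x, steps):
        enc_out, h = self.encoder(self.frames(x))  # enc_out: (B, T, hid)
        h = h.squeeze(0)                           # (B, hid)
        y = torch.zeros(x.size(0), self.out.out_features, device=x.device)
        preds = []
        for _ in range(steps):
            # additive attention: score each encoder step against the decoder state
            scores = self.attn_v(torch.tanh(
                self.attn_W(enc_out) + self.attn_U(h).unsqueeze(1))).squeeze(-1)
            ctx = (scores.softmax(-1).unsqueeze(-1) * enc_out).sum(1)  # (B, hid)
            h = self.decoder(torch.cat([y, ctx], dim=-1), h)
            y = self.out(h)
            preds.append(y)
        return torch.stack(preds, 1)               # (B, steps, feat_dim)
```

Usage: `AttnSeq2Seq()(torch.randn(2, 5, 3, 64, 64), steps=3)` maps a batch of 5-frame sequences to 3 predicted latent vectors of shape `(2, 3, 128)`; a decoder head mapping latents back to images would be trained on top for sequence-to-sequence prediction.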