[D] Interview with Tri Dao, Stanford: On FlashAttention and sparsity, quantization, and efficient inference
Feb. 12, 2024, 8:20 p.m. | /u/thejashGI
Machine Learning | www.reddit.com
Topics covered include:
* Taking a contrarian bet on recurrent connections over attention
* Using data augmentation to encode knowledge into models
* Designing algorithms that take advantage of hardware
Listen to the conversation:
* [Spotify](https://open.spotify.com/show/1hikWa5LWDQJwXtz5LoeVn)
* [Apple Podcasts](https://podcasts.apple.com/us/podcast/generally-intelligent/id1544921720)
* [Pocket Casts](https://pca.st/ewh266dr)
* [Highlights and referenced papers](https://imbue.com/podcast/2024-02-08-podcast-episode-33-tri-dao/)