FlashAttention 2: making Transformers 800% faster w/o approximation - with Tri Dao of Together AI
July 26, 2023, 4:46 p.m. | Alessio Fanelli
Latent Space www.latent.space
Tags: approximation, architecture, dao, faster, future, industry, inside, lab, life, making, research, standard, stanford, together, transformers
Jobs in AI, ML, Big Data
Data Architect
@ University of Texas at Austin | Austin, TX
Data ETL Engineer
@ University of Texas at Austin | Austin, TX
Lead GNSS Data Scientist
@ Lurra Systems | Melbourne
Senior Machine Learning Engineer (MLOps)
@ Promaton | Remote, Europe
Data Engineer - AWS
@ 3Pillar Global | Costa Rica
Cost Controller / Data Analyst - India
@ John Cockerill | Mumbai, India