FlashConv: Speeding up State Space Models
Jan. 23, 2023, 7:11 p.m. | Dan Fu and Tri Dao
Together Blog (www.together.xyz)
FlashConv speeds up state space models (SSMs), enabling the training of SSM-based language models up to 2.7B parameters (with almost no attention) that run inference 1.6X faster than Transformers.
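To illustrate the operation FlashConv accelerates: an SSM layer can be applied to a sequence as one long 1-D convolution of the input with the SSM's kernel, and computing that convolution with FFTs reduces the cost from O(N^2) to O(N log N) in sequence length N. The sketch below is a minimal NumPy illustration of FFT-based causal convolution, not Together's implementation; the function name and shapes are hypothetical.

```python
import numpy as np

def fft_conv(u, k):
    """Causal 1-D convolution of input u with kernel k via FFT.

    Zero-padding to 2*n turns circular convolution into linear
    convolution; truncating to n keeps the causal part.
    This is the O(N log N) operation that FlashConv speeds up on GPUs.
    """
    n = len(u)
    fft_len = 2 * n  # pad to avoid circular wrap-around
    u_f = np.fft.rfft(u, n=fft_len)
    k_f = np.fft.rfft(k, n=fft_len)
    return np.fft.irfft(u_f * k_f, n=fft_len)[:n]

# Sanity check against direct convolution on random data
rng = np.random.default_rng(0)
u = rng.standard_normal(1024)
k = rng.standard_normal(1024)
assert np.allclose(fft_conv(u, k), np.convolve(u, k)[:1024])
```

For long sequences the FFT path is dramatically cheaper than the direct O(N^2) convolution, which is why making it fast on GPU hardware matters for scaling SSM language models.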