Jan. 23, 2023, 7:11 p.m. | Dan Fu and Tri Dao

Together Blog | www.together.xyz

Introducing FlashConv, a new technique for speeding up state space models
(SSMs). FlashConv enables training SSM-based language models of up to 2.7B
parameters (with almost no attention) and running inference 1.6X faster than
Transformers.
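For context, the core primitive FlashConv accelerates is the long convolution inside an SSM layer, which can be computed in O(N log N) time via the FFT rather than O(N^2) as in attention. Below is a minimal PyTorch sketch of that naive FFT-based convolution; the function name `fft_conv` and the tensor shapes are illustrative assumptions, and FlashConv's actual contribution is making this primitive run fast on GPUs, not this reference version.

```python
import torch

def fft_conv(u: torch.Tensor, k: torch.Tensor) -> torch.Tensor:
    """Causal long convolution via FFT: the primitive an SSM layer computes.

    This is a naive reference sketch, not FlashConv itself.
    u: input sequence, shape (batch, seqlen)
    k: convolution kernel derived from the SSM, shape (seqlen,)
    """
    seqlen = u.shape[-1]
    # Zero-pad to 2x length so the circular FFT convolution
    # matches a causal linear convolution.
    fft_size = 2 * seqlen
    u_f = torch.fft.rfft(u, n=fft_size)
    k_f = torch.fft.rfft(k, n=fft_size)
    # Pointwise multiply in frequency space, invert, truncate to seqlen.
    y = torch.fft.irfft(u_f * k_f, n=fft_size)[..., :seqlen]
    return y

# Example: a batch of 4 sequences of length 1024.
u = torch.randn(4, 1024)
k = torch.randn(1024)
y = fft_conv(u, k)  # shape (4, 1024)
```

The appeal of this formulation is the asymptotics: the FFT makes the cost scale near-linearly in sequence length, which is what lets SSM-based models avoid attention for most layers.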
