April 4, 2024, 3:06 a.m. | Synced (syncedreview.com)

A research team from Huawei and Peking University introduces DiJiang, a Frequency Domain Kernelization approach that converts a pretrained Transformer into a linear-complexity attention model with minimal training overhead. The resulting model achieves performance comparable to LLaMA2-7B across various benchmarks at just 1/50th of the training cost.
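In broad strokes, a frequency-domain kernelization replaces quadratic softmax attention with a factorized feature-map approximation, so attention cost scales linearly with sequence length. The sketch below illustrates that general recipe using a Discrete Cosine Transform feature map; it is a minimal illustration under assumed details, not the paper's implementation, and the helper names (`feature_map`, `linear_attention`) are hypothetical.

```python
# Minimal sketch of frequency-domain kernelized linear attention.
# Assumption: a DCT-based positive feature map stands in for the paper's
# kernel; this is illustrative only, not DiJiang's exact construction.
import numpy as np
from scipy.fft import dct


def feature_map(x):
    """Hypothetical DCT-based positive feature map."""
    z = dct(x, type=2, axis=-1, norm="ortho")  # project to the frequency domain
    return np.exp(z - z.max(axis=-1, keepdims=True))  # positive, numerically stable


def linear_attention(Q, K, V):
    """Kernelized attention: O(n) in sequence length instead of O(n^2)."""
    phi_q = feature_map(Q)                     # (n, d)
    phi_k = feature_map(K)                     # (n, d)
    kv = phi_k.T @ V                           # (d, d_v) summary of all keys/values
    z = phi_q @ phi_k.sum(axis=0)[:, None]     # (n, 1) normalizer
    return (phi_q @ kv) / (z + 1e-6)


# Toy usage: a 1024-token sequence with head dimension 64.
rng = np.random.default_rng(0)
n, d = 1024, 64
Q, K, V = rng.normal(size=(3, n, d))
out = linear_attention(Q, K, V)
print(out.shape)  # (1024, 64)
```

The key design point is associativity: once softmax is replaced by a factorized kernel, `phi(K).T @ V` can be computed a single time as a small (d, d_v) summary, so the (n, n) attention matrix is never materialized.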


The post Huawei & Peking U’s DiJiang: A Transformer Achieving LLaMA2-7B Performance at 1/50th the Training Cost first appeared on Synced.

