Huawei & Peking U’s DiJiang: A Transformer Achieving LLaMA2-7B Performance at 1/50th the Training Cost
Synced (syncedreview.com)
A research team from Huawei and Peking University introduces DiJiang, a Frequency Domain Kernelization approach that converts a standard Transformer into a linear-complexity attention model with minimal additional training, achieving performance on par with LLaMA2-7B across various benchmarks at just 1/50th of the training cost.
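At its core, DiJiang belongs to the family of kernelized linear-attention methods: instead of computing the full softmax attention matrix, queries and keys are passed through a feature map and the product is associated right-to-left, dropping the cost from quadratic to linear in sequence length. Below is a minimal sketch of that general pattern, assuming a simple DCT-based positive feature map as an illustrative stand-in for the paper's frequency-domain mapping; the function names here are hypothetical and not from the DiJiang codebase.

```python
import numpy as np
from scipy.fft import dct

def dct_feature_map(x):
    """Illustrative frequency-domain feature map: a type-II DCT followed
    by an exponential, so features are positive and a kernel approximation
    of softmax attention is possible. A simplified stand-in for DiJiang's
    mapping, not the paper's exact formulation."""
    f = dct(x, type=2, norm="ortho", axis=-1)
    return np.exp(f - f.max(axis=-1, keepdims=True))  # row-wise stabilization

def linear_attention(Q, K, V):
    """Kernelized attention: softmax(Q K^T) V is approximated by
    phi(Q) (phi(K)^T V). Associating the product right-to-left costs
    O(N * d * d_v) instead of the O(N^2 * d) of standard attention."""
    phi_q, phi_k = dct_feature_map(Q), dct_feature_map(K)
    kv = phi_k.T @ V                  # (d, d_v): one pass over the sequence
    z = phi_q @ phi_k.sum(axis=0)     # (N,): per-query normalizer
    return (phi_q @ kv) / z[:, None]

# Toy check: sequence length 1024, head dimension 64.
rng = np.random.default_rng(0)
N, d = 1024, 64
Q, K, V = (0.1 * rng.standard_normal((N, d)) for _ in range(3))
print(linear_attention(Q, K, V).shape)  # (1024, 64)
```

Because `phi_k.T @ V` is computed once and reused for every query, sequence length enters the cost only linearly, which is what makes adapting an existing quadratic-attention model so cheap relative to training one from scratch.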