ETH Zurich’s UltraFastBERT Realizes 78x Speedup for Language Models
Synced syncedreview.com
In the new paper Exponentially Faster Language Modelling, an ETH Zurich research team introduces UltraFastBERT, a variant of the BERT architecture. UltraFastBERT replaces the standard feedforward layers with fast feedforward networks (FFFs), which activate only a small fraction of their neurons for each inference, yielding a 78x speedup over an optimized baseline feedforward implementation.
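The core idea behind fast feedforward networks is conditional execution: a binary tree of single-neuron routing nodes selects one small leaf network per input, so only a logarithmic number of neurons are evaluated instead of the full layer. The following is a minimal NumPy sketch of this routing scheme; all class and parameter names (`FastFeedforward`, `leaf_width`, random placeholder weights) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

class FastFeedforward:
    """Minimal sketch of a fast feedforward (FFF) layer.

    A depth-d binary tree of single-neuron "router" nodes picks one of
    2**d small leaf networks per input, so inference touches only O(d)
    neurons rather than the whole layer. Weights are random placeholders;
    shapes and names are illustrative, not taken from the paper's code.
    """

    def __init__(self, dim, depth, leaf_width, rng=None):
        rng = rng or np.random.default_rng(0)
        self.depth = depth
        n_nodes = 2 ** depth - 1            # internal routing neurons
        n_leaves = 2 ** depth               # small leaf networks
        self.node_w = rng.standard_normal((n_nodes, dim)) / np.sqrt(dim)
        self.leaf_w1 = rng.standard_normal((n_leaves, leaf_width, dim)) / np.sqrt(dim)
        self.leaf_w2 = rng.standard_normal((n_leaves, dim, leaf_width)) / np.sqrt(leaf_width)

    def forward(self, x):
        # Descend the tree: each routing neuron's sign picks a child
        # (heap-style indexing: children of node i are 2i+1 and 2i+2).
        node = 0
        for _ in range(self.depth):
            go_right = self.node_w[node] @ x > 0
            node = 2 * node + (2 if go_right else 1)
        leaf = node - (2 ** self.depth - 1)  # index among the leaves
        # Only this one small leaf network is evaluated.
        h = np.maximum(self.leaf_w1[leaf] @ x, 0.0)  # ReLU
        return self.leaf_w2[leaf] @ h
```

With depth 11, such a tree evaluates only 11 routing neurons plus one leaf per token, which is the kind of sparsity that makes the reported speedup possible at inference time.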