Nov. 28, 2023, 12:30 a.m. | Sana Hassan

MarkTechPost www.marktechpost.com

Researchers at ETH Zurich developed UltraFastBERT to address the problem of reducing the number of neurons used during inference while maintaining performance on par with comparable models. They achieved this through fast feedforward networks (FFFs), which deliver a significant speedup over baseline feedforward implementations. The existing methods have been supported by the […]
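The speedup comes from conditional execution: instead of activating every neuron in a dense feedforward layer, an FFF organizes its neurons in a binary tree and evaluates only the neurons along one root-to-leaf path per input. Below is a minimal NumPy sketch of that idea; the shapes, the ReLU activation, and the sign-based routing are illustrative assumptions, not the authors' exact design (UltraFastBERT's FFFs use GELU and are trained end-to-end).

```python
import numpy as np

def fff_forward(x, w_in, w_out, depth):
    """Walk one root-to-leaf path of a fast-feedforward layer.

    Only `depth` of the 2**depth - 1 neurons are touched per input,
    which is the source of the inference speedup. This is a sketch
    under simplifying assumptions, not the paper's implementation.
    """
    node = 0
    y = np.zeros(w_out.shape[1])
    for _ in range(depth):
        pre = float(x @ w_in[node])              # this node's pre-activation
        y += max(pre, 0.0) * w_out[node]         # neuron output (ReLU here)
        node = 2 * node + (2 if pre > 0 else 1)  # route left/right by sign
    return y

rng = np.random.default_rng(0)
d, depth = 8, 4                            # 2**4 - 1 = 15 neurons, 4 used
w_in = rng.normal(size=(2**depth - 1, d))  # per-neuron input weights
w_out = rng.normal(size=(2**depth - 1, d)) # per-neuron output weights
x = rng.normal(size=d)
y = fff_forward(x, w_in, w_out, depth)
```

At depth 12, a tree of 4095 neurons evaluates only 12 per input, which is roughly the 0.3% figure in the headline above.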


The post ETH Zurich Researchers Introduce UltraFastBERT: A BERT Variant that Uses 0.3% of its Neurons during Inference while Performing on Par with Similar …

