Aug. 1, 2022, 1:11 a.m. | Lukas Hauzenberger, Navid Rekabsaz

cs.CL updates on arXiv.org arxiv.org

In recent years, language models have achieved state-of-the-art performance on
a wide variety of natural language processing tasks. As these models
continuously grow in size, it becomes increasingly important to explore
methods for making them more storage-efficient. At the same time, their
increasing cognitive abilities heighten the danger that societal biases present
in datasets are implicitly encoded in the model weights. We propose an
architecture that addresses both of these challenges at the same time …
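The truncated abstract does not spell out the architecture, so the following is only a hedged sketch of the general idea behind storage-efficient adaptation: keep a frozen base model and store just a sparse per-task parameter diff. The variable names and the 5% sparsity level are illustrative assumptions, not the paper's confirmed method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen base weights shared across all tasks (e.g. one linear layer).
W_base = rng.standard_normal((64, 64))

# Hypothetical sparse diff: only ~5% of entries are allowed to change.
mask = rng.random(W_base.shape) < 0.05
delta = rng.standard_normal(W_base.shape) * mask

# Effective task-specific weights are reconstructed on demand;
# untouched entries remain exactly equal to the base weights.
W_task = W_base + delta

# Storage cost of the diff: only the indices and values of nonzero
# entries are kept, instead of a full copy of the weight matrix.
nonzero = int(mask.sum())
dense_params = W_base.size
print(nonzero, dense_params)
```

Under this sketch, serving many tasks (or many debiased variants of one model) costs one shared dense model plus a small sparse diff per task, rather than a full model copy each.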

