March 29, 2024, 4:41 a.m. | Isaac Roberts, Alexander Schulz, Luca Hermes, Barbara Hammer

cs.LG updates on arXiv.org

arXiv:2403.18872v1 Announce Type: new
Abstract: Attention based Large Language Models (LLMs) are the state-of-the-art in natural language processing (NLP). The two most common architectures are encoders such as BERT, and decoders like the GPT models. Despite the success of encoder models, on which we focus in this work, they also bear several risks, including issues with bias or their susceptibility for adversarial attacks, signifying the necessity for explainable AI to detect such issues. While there does exist various local explainability …
