The Effect of Model Size on LLM Post-hoc Explainability via LIME
May 10, 2024, 4:42 a.m. | Henning Heyen, Amy Widdicombe, Noah Y. Siegel, Maria Perez-Ortiz, Philip Treleaven
Source: cs.LG updates on arXiv.org (arxiv.org)
Abstract: Large language models (LLMs) are becoming bigger to boost performance. However, little is known about how explainability is affected by this trend. This work explores LIME explanations for DeBERTaV3 models of four different sizes on natural language inference (NLI) and zero-shot classification (ZSC) tasks. We evaluate the explanations based on their faithfulness to the models' internal decision processes and their plausibility, i.e. their agreement with human explanations. The key finding is that increased model size …
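The abstract's core technique, LIME, explains a single prediction by perturbing the input (here, dropping words), querying the black-box model on the perturbations, and fitting a locally weighted linear surrogate whose coefficients serve as per-word importances. A minimal self-contained sketch of that loop, using a toy keyword-based classifier as a hypothetical stand-in for the paper's DeBERTaV3 models (the function names and the classifier are illustrative assumptions, not the authors' code):

```python
import numpy as np

def toy_classifier(texts):
    # Hypothetical stand-in for a DeBERTaV3 NLI/ZSC model: the probability
    # of the target class rises with occurrences of the word "great".
    probs = []
    for t in texts:
        score = sum(w == "great" for w in t.split())
        probs.append(1.0 / (1.0 + np.exp(-2.0 * (score - 0.5))))
    return np.array(probs)

def lime_text(text, classifier, n_samples=500, kernel_width=0.75, seed=0):
    """Minimal LIME sketch: perturb by dropping words, weight samples by
    proximity to the original text, fit a linear surrogate by weighted
    least squares, and return per-word importance coefficients."""
    rng = np.random.default_rng(seed)
    words = text.split()
    d = len(words)
    # Binary masks: 1 keeps a word, 0 drops it; first row = original text.
    masks = rng.integers(0, 2, size=(n_samples, d))
    masks[0] = 1
    texts = [" ".join(w for w, m in zip(words, row) if m) for row in masks]
    y = classifier(texts)
    # Exponential kernel on the fraction of words removed.
    dist = 1.0 - masks.sum(axis=1) / d
    sample_w = np.exp(-(dist ** 2) / kernel_width ** 2)
    # Weighted least squares: solve (X^T W X) beta = X^T W y.
    X = np.hstack([np.ones((n_samples, 1)), masks.astype(float)])
    W = np.diag(sample_w)
    beta = np.linalg.lstsq(X.T @ W @ X, X.T @ W @ y, rcond=None)[0]
    return dict(zip(words, beta[1:]))  # skip the intercept

weights = lime_text("the movie was great fun", toy_classifier)
```

In this sketch, `weights["great"]` dominates because dropping that word is the only perturbation that moves the toy classifier's output; the paper's faithfulness evaluation asks, in effect, how well such surrogate weights track the real model's behavior as model size grows.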