Feb. 29, 2024, 7:41 p.m. | Shubham Sharma

AI News | VentureBeat venturebeat.com

On the HellaSwag LLM benchmark, which evaluates commonsense natural language inference, H2O-Danube-1.8B achieved an accuracy of 69.58%, placing it just behind Stability AI's 1.6-billion-parameter Stable LM 2 model.

