Feb. 27, 2024, 5:50 a.m. | Jupinder Parmar, Shrimai Prabhumoye, Joseph Jennings, Mostofa Patwary, Sandeep Subramanian, Dan Su, Chen Zhu, Deepak Narayanan, Aastha Jhunjhunwala, et al.

cs.CL updates on arXiv.org (arxiv.org)

arXiv:2402.16819v1 Announce Type: new
Abstract: We introduce Nemotron-4 15B, a 15-billion-parameter large multilingual language model trained on 8 trillion text tokens. Nemotron-4 15B demonstrates strong performance when assessed on English, multilingual, and coding tasks: it outperforms all existing similarly sized open models on 4 of 7 downstream evaluation areas and performs competitively with the leading open models in the remaining ones. Specifically, Nemotron-4 15B exhibits the best multilingual capabilities of all similarly sized models, even outperforming models over four times …
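As a rough check on the scale claim, the abstract's two figures (15B parameters, 8T tokens) imply a data-to-parameter ratio far above the roughly 20 tokens per parameter suggested by Chinchilla-style compute-optimal scaling. The short sketch below works through that arithmetic; the 20:1 heuristic is a well-known outside reference point (Hoffmann et al., 2022), not a number from this paper.

```python
# Back-of-the-envelope check of Nemotron-4 15B's data-to-parameter ratio.
# PARAMS and TOKENS come from the abstract; CHINCHILLA_RATIO is the widely
# cited compute-optimal heuristic, not a figure from this paper.

PARAMS = 15e9          # 15 billion parameters
TOKENS = 8e12          # 8 trillion training tokens
CHINCHILLA_RATIO = 20  # ~20 tokens per parameter (Hoffmann et al., 2022)

ratio = TOKENS / PARAMS
print(f"tokens per parameter: {ratio:.0f}")          # ~533
print(f"vs. Chinchilla-optimal: {ratio / CHINCHILLA_RATIO:.0f}x more data")  # ~27x
```

Training well past the compute-optimal point like this trades extra training compute for a stronger model at a fixed parameter count, which is consistent with the abstract's emphasis on outperforming much larger models.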
