Sept. 28, 2023, 2:04 a.m. | Synced


In the new paper "Effective Long-Context Scaling of Foundation Models", a Meta AI research team presents a series of long-context LLMs built via continual pretraining from Llama 2. The models support effective context windows of up to 32,768 tokens and outperform all existing open-source models.
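The post does not describe how the context window was extended, but a common ingredient in this kind of long-context continual pretraining is adjusting the rotary position embedding (RoPE) that Llama 2 uses, so that attention behaves sensibly at positions far beyond the original 4,096-token window. The sketch below is a minimal illustration of that idea under the standard RoPE formulation; the function names and the larger base value of 500,000 are illustrative assumptions, not details taken from the paper.

```python
import torch

def rope_inverse_frequencies(head_dim: int, base: float) -> torch.Tensor:
    # One inverse frequency per pair of channels, as in standard RoPE.
    return 1.0 / (base ** (torch.arange(0, head_dim, 2, dtype=torch.float32) / head_dim))

def rope_angles(seq_len: int, head_dim: int, base: float) -> torch.Tensor:
    # Angle matrix of shape (seq_len, head_dim // 2) used to rotate query/key pairs.
    inv_freq = rope_inverse_frequencies(head_dim, base)
    positions = torch.arange(seq_len, dtype=torch.float32)
    return torch.outer(positions, inv_freq)

# Llama 2's default RoPE base is 10,000 over a 4,096-token window; a larger
# base (500,000 here, purely illustrative) slows the per-dimension rotation so
# the same formulation can be continually pretrained on 32,768-token sequences.
short_ctx = rope_angles(seq_len=4_096, head_dim=128, base=10_000.0)
long_ctx = rope_angles(seq_len=32_768, head_dim=128, base=500_000.0)
print(short_ctx.shape, long_ctx.shape)  # torch.Size([4096, 64]) torch.Size([32768, 64])
```

Because a larger base lowers every rotation frequency, a 32,768-token sequence sweeps roughly the same angular range that a much shorter sequence did under the original base, which is why this kind of adjustment is a natural starting point before continuing pretraining on longer documents.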


The post Meta AI’s Long-Context LLMs: Redefining the Landscape of Natural Language Processing first appeared on Synced.
