Microsoft's Orca 2 LLM Outperforms Models That Are 10x Larger
Dec. 12, 2023, 2 p.m. | Anthony Alford
InfoQ - AI, ML & Data Engineering www.infoq.com
Microsoft Research released its Orca 2 LLM, a fine-tuned version of Llama 2 that performs as well as or better than models that contain 10x the number of parameters. Orca 2 uses a synthetic training dataset and a new technique called Prompt Erasure to achieve this performance.
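The summary's mention of Prompt Erasure can be illustrated with a minimal, hypothetical sketch: the teacher model generates its answer under a detailed, strategy-specific system prompt, but that prompt is erased from the student's training example and replaced with a generic one, so the student must learn to choose a reasoning strategy on its own. All names and the example data below are illustrative, not from Orca 2's actual pipeline.

```python
# Hedged sketch of the Prompt Erasure idea: strip the teacher's detailed
# system prompt from each training example and substitute a generic prompt,
# while keeping the teacher's carefully reasoned response as the target.
# Field names and contents are invented for illustration.

GENERIC_PROMPT = "You are a helpful assistant."

def erase_prompt(example: dict) -> dict:
    """Replace the detailed teacher system prompt with a generic one."""
    return {
        "system": GENERIC_PROMPT,         # detailed instructions erased
        "user": example["user"],          # task unchanged
        "response": example["response"],  # teacher's reasoned answer kept
    }

teacher_example = {
    "system": "Solve step by step: restate the problem, then reason carefully.",
    "user": "If a train travels 120 km in 2 hours, what is its average speed?",
    "response": "Distance is 120 km over 2 hours, so the average speed is 60 km/h.",
}

student_example = erase_prompt(teacher_example)
```

The student is then fine-tuned on `student_example`, seeing only the generic prompt but the strategy-rich answer, which is how the smaller model can internalize reasoning behaviors it was never explicitly instructed to use.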