Microsoft’s phi-1.5 Challenges LLMs’ Scaling Law, Showcases the Crucial Role of a ‘Textbook Quality’ Dataset
Synced (syncedreview.com)
A Microsoft research team introduces phi-1.5, a 1.3-billion-parameter model trained on a comparatively modest dataset of 30 billion tokens, yet remarkably delivering performance that rivals models five times its size. Moreover, it outperforms most non-frontier LLMs on complex reasoning tasks.
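For readers who want to try the model, here is a minimal sketch of querying phi-1.5 with the Hugging Face transformers library. It assumes the checkpoint is published on the Hub under the id microsoft/phi-1_5 and that transformers and torch are installed; older transformers versions may additionally require trust_remote_code=True.

```python
# Minimal sketch: load phi-1.5 and run a short reasoning prompt.
# The Hub id "microsoft/phi-1_5" is assumed; adjust if it differs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-1_5"  # assumed Hub id for the 1.3B checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float32,  # 1.3B parameters fits on a single GPU or CPU
)

# A grade-school reasoning prompt of the kind the article highlights.
prompt = "If Alice has 3 apples and buys 5 more, how many apples does she have?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```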