Minerva: Solving Quantitative Reasoning Problems with Language Models
June 30, 2022, 4:34 p.m. | Google AI (noreply@blogger.com)
Google AI Blog ai.googleblog.com
Language models have demonstrated remarkable performance on a variety of natural language tasks. Indeed, a general lesson from many works, including BERT, GPT-3, Gopher, and PaLM, is that neural networks trained on diverse data at large scale in a self-supervised way can perform well across a wide range of tasks.
Quantitative reasoning is one area in which language models still fall far …
Tags: deep learning, education, language, language models, minerva, natural language processing, reasoning, self-supervised learning