Jan. 18, 2024, 7:34 p.m. | /u/APaperADay

r/MachineLearning — www.reddit.com

**arXiv**: [https://arxiv.org/abs/2310.10971](https://arxiv.org/abs/2310.10971)

**OpenReview**:

[https://openreview.net/forum?id=lJYAkDVnRU](https://openreview.net/forum?id=lJYAkDVnRU)

[https://openreview.net/forum?id=SAu298HU2I](https://openreview.net/forum?id=SAu298HU2I)

**Abstract**:

>Large Language Models like ChatGPT demonstrate a remarkable capacity to learn new concepts during inference without any fine-tuning. However, visual models trained to detect new objects during inference have been unable to replicate this ability, and instead either perform poorly or require meta-training and/or fine-tuning on similar objects. In this work, we propose a meta-learning algorithm that emulates Large Language Models by learning new visual concepts during inference without fine-tuning. Our approach leverages a …
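The abstract is truncated before the method is described, so the paper's actual algorithm is not reproduced here. As a rough, hedged illustration of the general idea — learning a new visual concept at inference time with a frozen backbone and no gradient updates — the sketch below uses a simple nearest-prototype classifier over precomputed embeddings. This is a generic in-context baseline, not the authors' meta-learning algorithm; the embedding vectors and labels are toy assumptions.

```python
import numpy as np

def learn_concepts(support_embeddings, support_labels):
    """'Learn' novel concepts at inference time by building one prototype
    (the mean embedding) per label. No weights are updated, so the
    feature extractor that produced the embeddings stays frozen."""
    prototypes = {}
    for label in set(support_labels):
        vecs = [e for e, l in zip(support_embeddings, support_labels) if l == label]
        prototypes[label] = np.mean(vecs, axis=0)
    return prototypes

def classify(query_embedding, prototypes):
    """Assign the query to the nearest prototype by cosine similarity."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    return max(prototypes, key=lambda lbl: cos(query_embedding, prototypes[lbl]))

# Toy usage: two made-up concepts in a 2-D embedding space.
emb_cat = np.array([1.0, 0.1])
emb_dog = np.array([0.1, 1.0])
protos = learn_concepts([emb_cat, emb_cat, emb_dog], ["cat", "cat", "dog"])
pred = classify(np.array([0.9, 0.2]), protos)
```

The point of the sketch is only the *shape* of the problem: new classes arrive as a handful of labeled examples at inference, and classification happens immediately, with no fine-tuning step.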

