Feb. 29, 2024, 5:42 a.m. | Robert Tjarko Lange, Yingtao Tian, Yujin Tang

cs.LG updates on arXiv.org arxiv.org

arXiv:2402.18381v1 Announce Type: cross
Abstract: Large Transformer models are capable of implementing a plethora of so-called in-context learning algorithms. These include gradient descent, classification, sequence completion, transformation, and improvement. In this work, we investigate whether large language models (LLMs), which never explicitly encountered the task of black-box optimization, are in principle capable of implementing evolutionary optimization algorithms. While previous works have solely focused on language-based task specification, we move forward and focus on the zero-shot application of LLMs to black-box …
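The core idea the abstract points at — querying an LLM inside an ask/evaluate/tell evolution-strategy loop on a black-box objective — can be illustrated with a minimal sketch. This is not the paper's actual method: the llm_propose placeholder, the sphere objective, and all loop parameters below are hypothetical stand-ins; a real implementation would replace the placeholder with an LLM call whose prompt serializes the ranked candidates and parses the reply into a new candidate vector.

import random

def sphere(x):
    # Toy black-box objective: minimize the sum of squares.
    return sum(v * v for v in x)

def llm_propose(elite, dim):
    # Hypothetical placeholder for an LLM query. In the zero-shot setting
    # described in the abstract, one would encode the ranked elite solutions
    # in a prompt and decode the model's reply as a new candidate.
    # Here we simply perturb a random elite member as a stand-in.
    parent = random.choice(elite)
    return [v + random.gauss(0.0, 0.1) for v in parent]

def llm_evolution_strategy(dim=5, pop_size=16, elite_size=4, generations=50):
    # Initialize a random population of candidate solutions.
    population = [[random.uniform(-1, 1) for _ in range(dim)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Rank candidates by fitness (lower is better for minimization).
        population.sort(key=sphere)
        elite = population[:elite_size]
        # Ask the (placeholder) LLM for offspring conditioned on the elite.
        offspring = [llm_propose(elite, dim)
                     for _ in range(pop_size - elite_size)]
        population = elite + offspring
    return min(population, key=sphere)

best = llm_evolution_strategy()
print("best fitness:", sphere(best))

The sketch follows a standard elitist ask/tell structure; the interesting part in the paper's setting is that the proposal step is delegated to a language model rather than to a hand-designed mutation or recombination operator.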
