Large Language Models Can Automatically Engineer Features for Few-Shot Tabular Learning
April 16, 2024, 4:42 a.m. | Sungwon Han, Jinsung Yoon, Sercan O Arik, Tomas Pfister
cs.LG updates on arXiv.org
Abstract: Large Language Models (LLMs), with their remarkable ability to tackle challenging and unseen reasoning problems, hold immense potential for tabular learning, which is vital for many real-world applications. In this paper, we propose a novel in-context learning framework, FeatLLM, which employs LLMs as feature engineers to produce an input data set that is optimally suited for tabular predictions. The generated features are used to infer class likelihood with a simple downstream machine learning model, such …
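The core idea in the abstract can be sketched in a few lines: an LLM reads the task description and proposes feature-engineering rules, those rules are applied to each row to produce a compact binary feature set, and a simple model trained on a handful of labeled examples scores class likelihood. The sketch below is a minimal illustration, not the paper's implementation; the hard-coded `rules` list stands in for conditions an LLM would actually generate, and the toy "high income" task, field names, and frequency-based scorer are all assumptions.

```python
# Hypothetical LLM-proposed feature rules for a toy "income > 50k" task.
# In FeatLLM these would be parsed from LLM output, not written by hand.
rules = [
    ("has_degree", lambda row: row["education_years"] >= 16),
    ("senior", lambda row: row["age"] >= 45),
    ("long_hours", lambda row: row["hours_per_week"] > 40),
]

def featurize(row):
    """Apply each rule to a raw row, yielding a binary feature vector."""
    return [int(fn(row)) for _, fn in rules]

def fit_class_weights(rows, labels):
    """Simple downstream model: per-class feature frequencies estimated
    from a few labeled examples (the few-shot set)."""
    weights = {}
    for cls in set(labels):
        feats = [featurize(r) for r, y in zip(rows, labels) if y == cls]
        weights[cls] = [sum(col) / len(feats) for col in zip(*feats)]
    return weights

def predict(row, weights):
    """Score each class as the dot product of its weights with the
    binary features, and return the highest-scoring class."""
    x = featurize(row)
    scores = {cls: sum(w * xi for w, xi in zip(ws, x))
              for cls, ws in weights.items()}
    return max(scores, key=scores.get)
```

For example, fitting on four labeled rows and predicting a fifth stays entirely in the LLM-derived feature space, which is what lets the downstream model be this simple.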