Large Language Models Can Automatically Engineer Features for Few-Shot Tabular Learning
April 16, 2024, 4:42 a.m. | Sungwon Han, Jinsung Yoon, Sercan O Arik, Tomas Pfister
cs.LG updates on arXiv.org
Abstract: Large Language Models (LLMs), with their remarkable ability to tackle challenging and unseen reasoning problems, hold immense potential for tabular learning, which is vital for many real-world applications. In this paper, we propose a novel in-context learning framework, FeatLLM, which employs LLMs as feature engineers to produce an input data set that is optimally suited for tabular predictions. The generated features are used to infer class likelihood with a simple downstream machine learning model, such …
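The abstract describes a two-stage idea: an LLM proposes feature-engineering rules for a tabular task, and a simple downstream model combines the resulting features into class likelihoods. A minimal sketch of that pipeline, assuming hypothetical rule names, thresholds, and a logistic scorer as the downstream model (all illustrative, not from the paper):

```python
import math

def llm_proposed_rules():
    # Stand-in for LLM output: in the real framework, rules like these
    # would be parsed from the LLM's in-context responses.
    # Names and thresholds are hypothetical.
    return [
        ("age_over_40", lambda row: row["age"] > 40),
        ("high_income", lambda row: row["income"] > 50_000),
    ]

def featurize(rows, rules):
    # Map each raw row to a binary feature vector via the proposed rules.
    return [[1.0 if fn(row) else 0.0 for _, fn in rules] for row in rows]

def predict_proba(features, weights, bias=0.0):
    # A simple downstream model: linear score plus logistic link.
    return [
        1.0 / (1.0 + math.exp(-(sum(w * f for w, f in zip(weights, x)) + bias)))
        for x in features
    ]

rows = [
    {"age": 55, "income": 80_000},
    {"age": 25, "income": 20_000},
]
rules = llm_proposed_rules()
X = featurize(rows, rules)
probs = predict_proba(X, weights=[1.0, 1.0], bias=-1.0)
```

The point of the sketch is the division of labor: the expensive LLM call happens once to derive reusable rules, while inference runs through a cheap, interpretable model on the engineered features.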