Feb. 29, 2024, 5:43 a.m. | Terence Jie Chua, Wenhan Yu, Jun Zhao, Kwok-Yan Lam

cs.LG updates on arXiv.org

arXiv:2310.17491v2 Announce Type: replace
Abstract: The emergence of foundation models, including language and vision models, has reshaped the AI landscape, offering capabilities across a wide range of applications. Deploying and fine-tuning these large models, such as GPT-3 and BERT, presents significant challenges. We introduce Emulator-Assisted Tuning (EAT) combined with Parameter-Efficient Fine-Tuning (PEFT) to form Parameter-Efficient Emulator-Assisted Tuning (PEAT). We further extend this approach to federated learning as Federated PEAT (FedPEAT). FedPEAT uses adapters, emulators, and PEFT for federated model tuning, …
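The truncated abstract names the ingredients (a frozen emulator standing in for the full backbone, trainable adapters, and PEFT-style federation) but does not specify the architecture or protocol. The sketch below is a minimal, assumption-laden illustration of that general pattern, not the paper's method: each client trains only a LoRA-style adapter on top of a frozen emulator block, and the server aggregates adapter weights with FedAvg. All names here (`LoRAAdapter`, `EmulatorBlock`, `fedavg`, the rank and dimensions) are hypothetical choices for illustration.

```python
# Illustrative sketch only; the paper's actual EAT/FedPEAT design is not
# described in the truncated abstract. Names and hyperparameters are assumed.
import torch
import torch.nn as nn

class LoRAAdapter(nn.Module):
    """Low-rank trainable adapter: y = (x @ A @ B) * scale."""
    def __init__(self, dim: int, rank: int = 4, scale: float = 1.0):
        super().__init__()
        self.A = nn.Parameter(torch.randn(dim, rank) * 0.01)
        self.B = nn.Parameter(torch.zeros(rank, dim))
        self.scale = scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ self.A @ self.B * self.scale

class EmulatorBlock(nn.Module):
    """Frozen stand-in for a backbone block (here just one linear layer).
    In a real system the server would distribute a compressed emulator."""
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        for p in self.proj.parameters():
            p.requires_grad = False  # emulator stays frozen on the client

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(x)

class ClientModel(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        self.emulator = EmulatorBlock(dim)
        self.adapter = LoRAAdapter(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Adapter output is added as a correction to the frozen emulator path.
        return self.emulator(x) + self.adapter(x)

def adapter_state(model: ClientModel) -> dict:
    """Only the small adapter weights leave the device each federated round."""
    return {k: v.detach().clone() for k, v in model.adapter.state_dict().items()}

def fedavg(states: list[dict]) -> dict:
    """Server-side FedAvg over clients' adapter weights."""
    return {k: torch.stack([s[k] for s in states]).mean(dim=0) for k in states[0]}

if __name__ == "__main__":
    clients = [ClientModel() for _ in range(3)]
    x = torch.randn(8, 64)
    for m in clients:
        # One local step per client on a toy objective.
        opt = torch.optim.SGD(m.adapter.parameters(), lr=1e-2)
        loss = m(x).pow(2).mean()
        loss.backward()
        opt.step()
    global_adapter = fedavg([adapter_state(m) for m in clients])
    for m in clients:
        m.adapter.load_state_dict(global_adapter)  # broadcast aggregated adapter
```

The point of the pattern is communication and memory efficiency: clients never hold or transmit the full backbone, only the frozen emulator (received once) and the tiny adapter (exchanged each round).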
