Multitask Learning via Shared Features: Algorithms and Hardness. (arXiv:2209.03112v1 [cs.LG])
Sept. 8, 2022, 1:11 a.m. | Konstantina Bairaktari, Guy Blanc, Li-Yang Tan, Jonathan Ullman, Lydia Zakynthinou
cs.LG updates on arXiv.org arxiv.org
We investigate the computational efficiency of multitask learning of Boolean
functions over the $d$-dimensional hypercube that are related by means of a
feature representation of size $k \ll d$ shared across all tasks. We present a
polynomial-time multitask learning algorithm for the concept class of
halfspaces with margin $\gamma$, which is based on a simultaneous boosting
technique and requires only $\textrm{poly}(k/\gamma)$ samples per task and
$\textrm{poly}(k\log(d)/\gamma)$ samples in total.
In addition, we prove a computational separation, showing that assuming there
exists …
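The setting above can be illustrated with a toy simulation. This is not the paper's simultaneous-boosting algorithm; it is a minimal sketch, assuming the shared size-$k$ representation is a linear projection (an assumption for illustration) and that the learner already has access to it. Each task is a halfspace with margin $\gamma$ over the $k$ shared features, and a classic perceptron run in the $k$-dimensional space learns each task from few samples, independently of the ambient dimension $d$. All names (`sample_task`, `perceptron`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n_tasks = 100, 5, 3   # ambient dim d, shared feature dim k << d
gamma = 0.5                 # relative margin enforced during sampling

# Hypothetical shared feature map: a random linear projection stands in
# for the size-k representation shared across tasks (an assumption; the
# paper's abstract does not commit to a linear map).
P = rng.standard_normal((k, d)) / np.sqrt(d)

# Each task is a unit-norm halfspace over the k shared features.
task_w = rng.standard_normal((n_tasks, k))
task_w /= np.linalg.norm(task_w, axis=1, keepdims=True)

def sample_task(w_star, n=200):
    """Draw points whose label margin in shared-feature space is >= gamma."""
    X, y = [], []
    while len(X) < n:
        x = rng.standard_normal(d)
        z = P @ x
        m = w_star @ z
        if abs(m) >= gamma * np.linalg.norm(z):  # enforce the margin
            X.append(x)
            y.append(np.sign(m))
    return np.array(X), np.array(y)

def perceptron(Z, y, epochs=100):
    """Classic perceptron; converges on margin-separable data."""
    w = np.zeros(Z.shape[1])
    for _ in range(epochs):
        for z, label in zip(Z, y):
            if label * (w @ z) <= 0:
                w += label * z
    return w

accs = []
for w_star in task_w:
    X, y = sample_task(w_star)
    Z = X @ P.T                  # learner works in the k-dim shared space
    w_hat = perceptron(Z, y)
    accs.append(float(np.mean(np.sign(Z @ w_hat) == y)))
```

The number of samples and perceptron mistakes here depends only on $k$ and $\gamma$, not on $d$; the paper's harder problem, which this sketch sidesteps, is *finding* the shared representation efficiently from pooled data across tasks.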