April 16, 2024, 4:41 a.m. | Nicolai Dorka, Janusz Marecki, Ammar Anwar

cs.LG updates on arXiv.org

arXiv:2404.08755v1 Announce Type: new
Abstract: Addressing the challenge of a digital assistant capable of executing a wide array of user tasks, our research focuses on the realm of instruction-based mobile device control. We leverage recent advancements in large language models (LLMs) and present a visual language model (VLM) that can fulfill diverse tasks on mobile devices. Our model functions by interacting solely with the user interface (UI). It uses the visual input from the device screen and mimics human-like interactions, …

