Jan. 31, 2024, 3:43 p.m. | Robert Konrad, Nitish Padmanaban, J. Gabriel Buckmaster, Kevin C. Boyle, Gordon Wetzstein

cs.CV updates on arXiv.org

Multimodal large language models (LMMs) excel in world knowledge and problem-solving abilities. Through the use of a world-facing camera and contextual AI, emerging smart accessories aim to provide a seamless interface between humans and LMMs. Yet, these wearable computing systems lack an understanding of the user's attention. We introduce GazeGPT as a new user interaction paradigm for contextual AI. GazeGPT uses eye tracking to help the LMM understand which object in the world-facing camera view a user is paying attention …
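To make the paradigm concrete, here is a minimal sketch of gaze-directed visual prompting in the spirit the abstract describes: crop the world-facing camera frame around the tracked gaze point so the model reasons about the attended object rather than the whole scene. The crop size, the gaze-coordinate format, and the idea of sending only the cropped region to the model are illustrative assumptions, not the paper's actual implementation.

```python
from PIL import Image

def gaze_crop(frame: Image.Image, gaze_xy: tuple[float, float],
              crop_size: int = 336) -> Image.Image:
    """Return a square crop centered on the gaze point, clamped to the frame.

    crop_size=336 is an assumed value (a common vision-encoder input size),
    not taken from the paper.
    """
    x, y = gaze_xy
    half = crop_size // 2
    left = max(0, min(int(x) - half, frame.width - crop_size))
    top = max(0, min(int(y) - half, frame.height - crop_size))
    return frame.crop((left, top, left + crop_size, top + crop_size))

if __name__ == "__main__":
    # Stand-in for a world-facing camera frame and an eye-tracker estimate.
    frame = Image.new("RGB", (1280, 720))
    region = gaze_crop(frame, gaze_xy=(640.0, 360.0))
    # `region` would then be passed to a multimodal model together with the
    # user's question, so the answer concerns the gazed-at object.
    print(region.size)  # (336, 336)
```

The design choice here is that gaze acts as an implicit pointing device: instead of the user describing which object they mean, the attended region is selected automatically and the rest of the scene is discarded before the model query.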
