June 15, 2023, 2:08 a.m. | Synced

In the new paper From Pixels to UI Actions: Learning to Follow Instructions via Graphical User Interfaces, a research team from Google and DeepMind proposes PIX2ACT, a Transformer-based image-to-text model that generates outputs corresponding to mouse and keyboard actions based solely on pixel-based screenshots of graphical user interfaces (GUIs).
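The loop this describes — screenshot pixels in, an action expressed as text out, executed against the GUI — can be sketched in a few lines of Python. The snippet below is a minimal illustration only: the model and environment interfaces (model.generate, env.screenshot, env.execute) and the action-string grammar ("click X Y", "type TEXT") are assumptions made for this sketch, not the paper's actual API.

    # Minimal sketch of a pixels-to-actions agent loop.
    # All interface names here are hypothetical stand-ins,
    # not PIX2ACT's real implementation.

    from dataclasses import dataclass

    @dataclass
    class Action:
        command: str      # e.g. "click", "type", "press"
        args: list[str]   # e.g. ["32", "117"] for click coordinates

    def parse_action(action_text: str) -> Action:
        """Turn a decoded action string into a structured command.

        The model emits plain text; an executor needs structure. A
        grammar as simple as "<command> <args...>" suffices here.
        """
        tokens = action_text.strip().split()
        if not tokens:
            raise ValueError("empty action string")
        return Action(command=tokens[0], args=tokens[1:])

    def episode(model, env, instruction: str, max_steps: int = 20):
        """Run one instruction-following episode, one action per step.

        `model.generate`, `env.screenshot`, and `env.execute` are
        assumed interfaces for the image-to-text model and the GUI.
        """
        for _ in range(max_steps):
            pixels = env.screenshot()            # raw GUI pixels only
            text = model.generate(image=pixels,  # decode action as text
                                  prompt=instruction)
            action = parse_action(text)
            done = env.execute(action.command, action.args)
            if done:
                break

Decoding actions as plain text is what lets a generic image-to-text Transformer drive a GUI: no widget tree or DOM access is required, only the rendered pixels and a parser for the decoded action strings.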

