April 23, 2024, 4:43 a.m. | Yue Jiang, Changkong Zhou, Vikas Garg, Antti Oulasvirta

cs.LG updates on arXiv.org

arXiv:2404.13521v1 Announce Type: cross
Abstract: Present-day graphical user interfaces (GUIs) exhibit diverse arrangements of text, graphics, and interactive elements such as buttons and menus, but representations of GUIs have not kept up. They do not encapsulate both semantic and visuo-spatial relationships among elements. To seize machine learning's potential for GUIs more efficiently, Graph4GUI exploits graph neural networks to capture individual elements' properties and their semantic-visuo-spatial constraints in a layout. The learned representation demonstrated its effectiveness in multiple tasks, especially generating …
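To make the idea concrete, here is a minimal sketch (not the paper's implementation) of the kind of structure Graph4GUI operates on: a GUI layout represented as a graph whose nodes carry per-element properties and whose edges encode coarse spatial relations derived from bounding boxes. All names and the relation rule are illustrative assumptions.

```python
# Illustrative sketch, not Graph4GUI's actual code: a GUI layout as a graph
# with element nodes and spatial-relation edges, the kind of input a GNN
# could consume.
from dataclasses import dataclass, field

@dataclass
class GUIElement:
    kind: str        # e.g. "button", "menu", "text"
    bbox: tuple      # (x, y, width, height)

@dataclass
class GUIGraph:
    nodes: dict = field(default_factory=dict)   # element id -> GUIElement
    edges: list = field(default_factory=list)   # (src, dst, relation)

    def add_element(self, eid, kind, bbox):
        self.nodes[eid] = GUIElement(kind, bbox)

    def add_relation(self, src, dst):
        """Derive a coarse spatial relation from the two bounding boxes
        (a stand-in for the paper's semantic-visuo-spatial constraints)."""
        a, b = self.nodes[src].bbox, self.nodes[dst].bbox
        if b[0] >= a[0] + a[2]:
            relation = "right-of"
        elif b[1] >= a[1] + a[3]:
            relation = "below"
        else:
            relation = "overlaps"
        self.edges.append((src, dst, relation))

g = GUIGraph()
g.add_element("ok", "button", (0, 0, 80, 30))
g.add_element("cancel", "button", (100, 0, 80, 30))
g.add_relation("ok", "cancel")
print(g.edges)  # [('ok', 'cancel', 'right-of')]
```

A GNN would then propagate messages along these relation edges so each element's learned embedding reflects both its own properties and its layout context.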

