April 2, 2024, 3:32 p.m. | /u/NuseAI

Artificial Intelligence www.reddit.com

- Apple researchers have developed an AI system called ReALM that can understand screen context and ambiguous references, improving interactions with voice assistants.

- ReALM reconstructs the screen by converting parsed on-screen entities and their positions into a textual representation, an approach that reportedly outperforms GPT-4 on reference-resolution tasks.

- Apple is investing in making Siri more conversational and context-aware through this research.

- However, automated parsing of screens has limitations, especially with complex visual references.

- Apple is catching up in AI research but faces stiff competition from tech …
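The core idea the summary describes, rendering parsed on-screen entities as plain text so a language model can resolve references to them, can be sketched roughly as follows. This is an illustrative sketch, not Apple's implementation: the entity fields, the global index tags, and the line-grouping tolerance are all assumptions for the example.

```python
# Sketch: convert parsed on-screen entities (label + normalized x/y
# position) into a textual screen representation, ordered top-to-bottom
# and left-to-right, with entities on similar vertical positions grouped
# onto the same text line. Field names and tolerance are assumptions.

def entities_to_text(entities, line_tolerance=0.02):
    """Render entities as indexed text lines a language model can cite."""
    ordered = sorted(entities, key=lambda e: (e["y"], e["x"]))

    # Group entities whose vertical positions are close into one line.
    lines, current, last_y = [], [], None
    for ent in ordered:
        if last_y is not None and abs(ent["y"] - last_y) > line_tolerance:
            lines.append(current)
            current = []
        current.append(ent)
        last_y = ent["y"]
    if current:
        lines.append(current)

    # Tag each entity with a unique index so an ambiguous reference
    # ("call that number") can be resolved to a specific entity.
    rendered, idx = [], 0
    for line in lines:
        parts = []
        for ent in line:
            parts.append(f"[{idx}] {ent['label']}")
            idx += 1
        rendered.append(" ".join(parts))
    return "\n".join(rendered)


# Hypothetical parsed screen for a business listing:
screen = [
    {"label": "Call 555-0123", "x": 0.1, "y": 0.30},
    {"label": "Directions",    "x": 0.6, "y": 0.30},
    {"label": "Joe's Pizza",   "x": 0.1, "y": 0.10},
]
print(entities_to_text(screen))
# [0] Joe's Pizza
# [1] Call 555-0123 [2] Directions
```

The textual form is then prepended to the user's request, so the assistant can answer "call that number" by pointing at entity `[1]`. This sidesteps vision models entirely, which is also why the approach struggles with the complex visual references noted above.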

