TAAT: Think and Act from Arbitrary Texts in Text2Motion
April 24, 2024, 4:45 a.m. | Runqi Wang, Caoyuan Ma, GuoPeng Li, Zheng Wang
cs.CV updates on arXiv.org
Abstract: Text2Motion aims to generate human motions from texts. Existing datasets rely on the assumption that texts include action labels (such as "walk, bend, and pick up"), which is not flexible for practical scenarios. This paper redefines the problem under a more realistic assumption: the texts are arbitrary. Specifically, arbitrary texts include existing action texts composed of action labels (e.g., "A person walks and bends to pick up something") and newly introduced scene texts without explicit …
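To make the distinction in the abstract concrete, here is a minimal sketch (not the authors' code or dataset format) contrasting an action text, where the motion labels are spelled out, with a scene text, where the motion must be inferred. The class, field names, and the scene-text example are hypothetical illustrations.

```python
from dataclasses import dataclass


@dataclass
class Text2MotionSample:
    text: str                 # natural-language prompt describing the motion
    has_action_label: bool    # whether explicit action words ("walk", "bend") appear


# Existing datasets: the action labels are stated directly in the text.
action_text = Text2MotionSample(
    text="A person walks and bends to pick up something.",
    has_action_label=True,
)

# The "arbitrary text" setting adds scene texts: no explicit action label is
# given, so the model must infer a plausible motion (hypothetical example).
scene_text = Text2MotionSample(
    text="A person notices a coin on the floor in front of them.",
    has_action_label=False,
)
```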