Nov. 5, 2023, 6:43 a.m. | Patrick Perrine, Trevor Kirkby

cs.LG updates on arXiv.org arxiv.org

Digitally synthesizing human motion is an inherently complex process, which
can create obstacles in application areas such as virtual reality. We offer a
new approach for predicting human motion: KP-RNN, a neural network that
integrates easily with existing image processing and generation pipelines. We
use Take The Lead, a new human motion dataset of performance art, together
with the Everybody Dance Now motion generation pipeline, to demonstrate the
effectiveness of KP-RNN's motion predictions. We have found …
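The abstract describes predicting future human motion from pose keypoints with a recurrent network. As a rough illustration of that idea only, the sketch below runs a single-step Elman RNN over a sequence of flattened 2D keypoints and emits the next frame; the skeleton size, layer width, and random (untrained) weights are all assumptions, not the paper's actual KP-RNN architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

N_KEYPOINTS = 17          # COCO-style skeleton size (assumption)
D_IN = N_KEYPOINTS * 2    # (x, y) per keypoint, flattened
D_HIDDEN = 64             # hidden width chosen for illustration

# Randomly initialized weights stand in for trained parameters.
W_xh = rng.normal(0, 0.1, (D_HIDDEN, D_IN))
W_hh = rng.normal(0, 0.1, (D_HIDDEN, D_HIDDEN))
W_hy = rng.normal(0, 0.1, (D_IN, D_HIDDEN))
b_h = np.zeros(D_HIDDEN)
b_y = np.zeros(D_IN)

def predict_next_keypoints(frames):
    """Run an Elman RNN over a (T, D_IN) sequence of flattened
    keypoints and return the predicted next frame, shape (D_IN,)."""
    h = np.zeros(D_HIDDEN)
    for x in frames:
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)
    return W_hy @ h + b_y

# Usage: 30 past frames of pose keypoints -> one predicted frame.
past = rng.normal(0.0, 1.0, (30, D_IN))
next_frame = predict_next_keypoints(past)
print(next_frame.shape)  # (34,)
```

In a pipeline like Everybody Dance Now, a predicted keypoint frame of this shape would then feed the image generation stage in place of a detected pose.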

