Sept. 2, 2022, 1:14 a.m. | Di Yang, Yaohui Wang, Antitza Dantcheva, Lorenzo Garattoni, Gianpiero Francesca, Francois Bremond

cs.CV updates on arXiv.org

Current self-supervised approaches for skeleton action representation
learning often focus on constrained scenarios, where videos and skeleton data
are recorded in laboratory settings. When dealing with estimated skeleton data
in real-world videos, such methods perform poorly due to the large variations
across subjects and camera viewpoints. To address this issue, we introduce ViA,
a novel View-Invariant Autoencoder for self-supervised skeleton action
representation learning. ViA leverages motion retargeting between different
human performers as a pretext task, in order to disentangle the …
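The motion-retargeting pretext task described above can be sketched with a toy linear autoencoder: encode a skeleton sequence into a time-varying motion code plus a time-pooled, performer-specific code, then decode the motion of one performer with the body code of another. All dimensions, layer shapes, and the pooling choice below are illustrative assumptions, not details from the paper; the real ViA model uses learned deep networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumptions for illustration, not from the paper):
J = 17           # skeleton joints (3D coordinates each)
T = 32           # frames per sequence
D_MOTION = 16    # view/subject-invariant motion code size
D_CHAR = 8       # performer-specific "character" code size

# Random linear maps standing in for the learned encoder/decoder networks.
W_motion = rng.normal(size=(J * 3, D_MOTION)) * 0.1
W_char = rng.normal(size=(J * 3, D_CHAR)) * 0.1
W_dec = rng.normal(size=(D_MOTION + D_CHAR, J * 3)) * 0.1

def encode(seq):
    """Split a (T, J*3) skeleton sequence into a per-frame motion code
    and a single time-pooled character code."""
    motion = seq @ W_motion                 # (T, D_MOTION)
    character = seq.mean(axis=0) @ W_char   # (D_CHAR,), pooled over time
    return motion, character

def decode(motion, character):
    """Reconstruct a skeleton sequence from a motion code and a
    character code broadcast across all frames."""
    char_tiled = np.tile(character, (motion.shape[0], 1))
    return np.concatenate([motion, char_tiled], axis=1) @ W_dec  # (T, J*3)

# Two performers: the pretext task retargets A's motion onto B's body.
seq_a = rng.normal(size=(T, J * 3))
seq_b = rng.normal(size=(T, J * 3))

motion_a, _ = encode(seq_a)
_, char_b = encode(seq_b)
retargeted = decode(motion_a, char_b)   # "B performing A's action"

# Training would minimize reconstruction losses so that the motion code
# carries only action information and the character code only body/view
# information; here we just compute a self-reconstruction error.
loss = float(np.mean((decode(*encode(seq_a)) - seq_a) ** 2))
```

After training with such cross-reconstruction objectives, the motion code alone can serve as the view-invariant action representation for downstream recognition.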

