Self-Supervised Video Representation Learning via Latent Time Navigation. (arXiv:2305.06437v1 [cs.CV])
cs.CV updates on arXiv.org
Self-supervised video representation learning typically aims to maximize the
similarity between different temporal segments of one video, in order to enforce
feature persistence over time. This leads to a loss of pertinent information
related to temporal relationships, rendering actions such as 'enter' and 'leave'
indistinguishable. To mitigate this limitation, we propose Latent Time
Navigation (LTN), a time-parameterized contrastive learning strategy that is
streamlined to capture fine-grained motions. Specifically, we maximize the
representation similarity between different video segments from one video,
while …
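The time-parameterized contrastive objective described above can be illustrated with a minimal sketch. This is not the paper's implementation: the InfoNCE loss below and the `time_shift` offset (standing in for a learned latent time direction) are assumptions for illustration only.

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """Minimal InfoNCE contrastive loss over L2-normalized embeddings."""
    def norm(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)
    a, p, n = norm(anchor), norm(positive), norm(negatives)
    pos_sim = np.exp(a @ p / temperature)          # similarity to the positive
    neg_sim = np.exp(n @ a / temperature).sum()    # similarities to negatives
    return -np.log(pos_sim / (pos_sim + neg_sim))

# Toy setup: two segments of the same video form the positive pair;
# the second segment is offset by a hypothetical latent "time direction",
# mimicking time-awareness in the representation space. Segments from
# other videos serve as negatives.
rng = np.random.default_rng(0)
d = 8
seg_a = rng.normal(size=d)
time_shift = 0.1 * rng.normal(size=d)   # hypothetical latent time offset
seg_b = seg_a + time_shift              # same video, later segment
others = rng.normal(size=(4, d))        # segments from other videos
loss = info_nce(seg_a, seg_b, others)
print(float(loss))
```

Because the positive pair stays close in latent space while remaining separated along the time direction, the loss pulls same-video segments together without collapsing their temporal ordering.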