Multimodal and Force-Matched Imitation Learning with a See-Through Visuotactile Sensor
June 27, 2024, 4:46 a.m. | Trevor Ablett, Oliver Limoyo, Adam Sigal, Affan Jilani, Jonathan Kelly, Kaleem Siddiqi, Francois Hogan, Gregory Dudek
cs.LG updates on arXiv.org arxiv.org
Abstract: Contact-rich tasks continue to present a variety of challenges for robotic manipulation. In this work, we leverage a multimodal visuotactile sensor within the framework of imitation learning (IL) to perform contact-rich tasks that involve relative motion (slipping/sliding) between the end-effector and object. We introduce two algorithmic contributions, tactile force matching and learned mode switching, as complementary methods for improving IL. Tactile force matching enhances kinesthetic teaching by reading approximate forces during the demonstration and …