An MRP Formulation for Supervised Learning: Generalized Temporal Difference Learning Models
April 25, 2024, 7:42 p.m. | Yangchen Pan, Junfeng Wen, Chenjun Xiao, Philip Torr
cs.LG updates on arXiv.org
Abstract: In traditional statistical learning, data points are usually assumed to be independently and identically distributed (i.i.d.) following an unknown probability distribution. This paper presents a contrasting viewpoint, treating data points as interconnected and employing a Markov reward process (MRP) to model the data. We reformulate typical supervised learning as an on-policy policy evaluation problem within reinforcement learning (RL), and introduce a generalized temporal difference (TD) learning algorithm as a solution. Theoretically, our analysis draws connections between …
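To make the abstract's framing concrete, below is a minimal sketch of the general idea: data points are chained into a trajectory, labels are folded into per-step rewards, and a linear model is fit with a TD(0)-style semi-gradient update instead of an i.i.d. regression loss. The reward construction, discount factor, chaining order, and all variable names here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Hypothetical illustration: supervised learning viewed as on-policy policy
# evaluation on a Markov reward process (MRP). Successive data points act as
# successive "states"; the reward is built from the labels so that the value
# function's fixed point coincides with the regression target.
# gamma, alpha, and the random chaining order are assumptions for this sketch.

rng = np.random.default_rng(0)

# Toy regression data: y = X @ w_true + noise
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

gamma = 0.3      # discount factor coupling successive data points
alpha = 0.05     # step size
w = np.zeros(d)  # weights of the linear value function f(x) = w @ x

for epoch in range(50):
    order = rng.permutation(n)          # one pass = one sampled "trajectory"
    for t in range(n - 1):
        i, j = order[t], order[t + 1]   # current and next "state"
        # Reward chosen so V(x_i) = y_i solves the Bellman equation:
        # y_i = r + gamma * y_j  =>  r = y_i - gamma * y_j
        r = y[i] - gamma * y[j]
        td_error = r + gamma * (X[j] @ w) - (X[i] @ w)
        w += alpha * td_error * X[i]    # semi-gradient TD(0) update

print("recovered weights:", np.round(w, 3))
print("true weights:     ", np.round(w_true, 3))
```

With gamma = 0, the update reduces to ordinary stochastic least squares on i.i.d. samples; a nonzero gamma couples each update to the next point in the chain, which is the contrast with the i.i.d. assumption that the abstract highlights.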