May 1, 2024, 4:42 a.m. | Nandhini Swaminathan, David Danks

cs.LG updates on arXiv.org arxiv.org

arXiv:2404.19256v1 Announce Type: cross
Abstract: As AI increasingly integrates with human decision-making, we must carefully consider interactions between the two. In particular, current approaches focus on optimizing individual agent actions but often overlook the nuances of collective intelligence. Group dynamics might require that one agent (e.g., the AI system) compensate for biases and errors in another agent (e.g., the human), but this compensation should be carefully developed. We provide a theoretical framework for algorithmic compensation that synthesizes game theory and …
