Cross-Loss Influence Functions to Explain Deep Network Representations. (arXiv:2012.01685v2 [cs.LG] UPDATED)
May 5, 2022, 1:12 a.m. | Andrew Silva, Rohit Chopra, Matthew Gombolay
cs.LG updates on arXiv.org
As machine learning is increasingly deployed in the real world, it is
paramount that we develop the tools necessary to analyze the decision-making of
the models we train and deploy to end-users. Recently, researchers have shown
that influence functions, a statistical measure of sample impact, can
approximate the effects of training samples on classification accuracy for deep
neural networks. However, this prior work only applies to supervised learning,
where training and testing share an objective function. No approaches currently
exist …
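The influence-function idea the abstract refers to can be sketched numerically. A minimal, hedged example (not the authors' method; this follows the classic supervised-learning formulation the abstract says prior work is limited to): the influence of up-weighting a training point z on the loss at a test point is approximated by -∇L(z_test)ᵀ H⁻¹ ∇L(z), where H is the Hessian of the training objective. The sketch below uses L2-regularized logistic regression, where gradients and the Hessian have closed forms; all data and hyperparameters are synthetic assumptions.

```python
import numpy as np

# Influence-function sketch for L2-regularized logistic regression.
# Synthetic data; all names and constants here are illustrative.
rng = np.random.default_rng(0)
n, d, lam = 200, 5, 0.1

X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true + 0.1 * rng.normal(size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit the model by plain gradient descent (sufficient for a sketch).
theta = np.zeros(d)
for _ in range(2000):
    p = sigmoid(X @ theta)
    theta -= 0.5 * (X.T @ (p - y) / n + lam * theta)

def grad_loss(x, y_i, theta):
    # Per-sample gradient of the (unregularized) log loss.
    return (sigmoid(x @ theta) - y_i) * x

# Hessian of the full training objective at the fitted theta.
p = sigmoid(X @ theta)
H = (X.T * (p * (1 - p))) @ X / n + lam * np.eye(d)

# Influence of every training point on the loss at one test point
# (here, training point 0 is reused as the test point):
#   I(z, z_test) = -grad L(z_test)^T  H^{-1}  grad L(z)
x_test, y_test = X[0], y[0]
H_inv_g = np.linalg.solve(H, grad_loss(x_test, y_test, theta))
influences = np.array(
    [-H_inv_g @ grad_loss(X[i], y[i], theta) for i in range(n)]
)
```

Negative influence means up-weighting that training point would decrease the test loss; a point's influence on its own loss is always non-positive because H is positive definite. The abstract's point is that this machinery assumes training and testing share one objective, which breaks down outside supervised learning.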
More from arxiv.org / cs.LG updates on arXiv.org
Regularization by Texts for Latent Diffusion Inverse Solvers
arxiv.org
When can transformers reason with abstract symbols?
arxiv.org
Jobs in AI, ML, Big Data
Data Scientist (m/f/x/d)
@ Symanto Research GmbH & Co. KG | Spain, Germany
Enterprise Data Quality, Senior Analyst
@ Toyota North America | Plano
Data Analyst & Audit Management Software (AMS) Coordinator
@ World Vision | Philippines - Home Working
Product Manager Power BI Platform Tech I&E Operational Insights
@ ING | HBP (Amsterdam - Haarlerbergpark)
Sr. Director, Software Engineering, Clinical Data Strategy
@ Moderna | USA-Washington-Seattle-1099 Stewart Street
Data Engineer (Data as a Service)
@ Xplor | Atlanta, GA, United States