Sept. 28, 2022, 8:32 a.m. | AI & Data Today

AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion www.aidatatoday.com

Not all algorithms are explainable. So does that mean it's acceptable to provide no explanation of how your AI system reached its decision if you're using one of those "black box" algorithms? The answer should obviously be no. So what do you do, then, when creating Ethical and Responsible AI systems, to address this issue of explainable and interpretable AI? In this episode of the AI Today podcast, hosts Kathleen Walch and Ron Schmelzer discuss …

interpretable ai podcast responsible ai series
