Sept. 28, 2022, 8:32 a.m. | AI & Data Today

AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion www.aidatatoday.com

Not all algorithms are explainable. Does that mean it's acceptable to provide no explanation of how your AI system reached its decision if you're using one of those "black box" algorithms? The answer should obviously be no. So what do you do, then, when creating Ethical and Responsible AI systems to address this issue of explainable and interpretable AI? In this episode of the AI Today podcast, hosts Kathleen Walch and Ron Schmelzer discuss …
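The episode keeps the discussion at the level of principles, but as a rough illustration of what a post-hoc explanation for a black-box model can look like, the sketch below uses permutation feature importance on a synthetic dataset. The model, data, and parameters are illustrative assumptions, not anything specified by the hosts.

```python
# A minimal sketch (not from the episode): one common post-hoc way to explain
# a "black box" model is permutation feature importance -- shuffle each input
# feature and measure how much the model's held-out score degrades.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for whatever data the black-box model was trained on.
X, y = make_classification(n_samples=1_000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box": an ensemble whose individual decisions are hard to read.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Model-agnostic explanation: rank features by how much shuffling them
# hurts test accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1]:
    print(f"feature {idx}: {result.importances_mean[idx]:.3f} "
          f"+/- {result.importances_std[idx]:.3f}")
```

Techniques like this don't make the model itself interpretable, which is part of why the distinction between explainability and interpretability matters in the episode's discussion.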

interpretable ai podcast responsible ai series
