April 27, 2024, 8:47 p.m. | /u/thisisnotadrill66

Deep Learning | www.reddit.com

I am a CS graduate and have a basic understanding of machine learning algorithms. Recently I came across a concept that captured my attention: explainable neural networks. I get that neural networks are something of a black box, but how can they be made explainable? In terms of weights and biases, what does "explainable" actually mean?

Thank you!
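For illustration, one common sense of "explainable" is input attribution: rather than interpreting individual weights, you ask how much each input feature influenced a particular prediction, e.g. via the gradient of the output with respect to the input (saliency). Below is a minimal sketch with a tiny hand-rolled network; the architecture and all weights are hypothetical, chosen only to show the idea:

```python
import numpy as np

# Hypothetical tiny network (not from the post): y = w2 . relu(W1 x + b1) + b2
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))
b1 = np.zeros(3)
w2 = rng.normal(size=3)
b2 = 0.0

def forward(x):
    h = np.maximum(W1 @ x + b1, 0.0)  # hidden layer with ReLU
    return w2 @ h + b2                # scalar output

def saliency(x):
    # Gradient of the output w.r.t. each input feature.
    # A large |dy/dx_i| means feature i strongly influenced this prediction.
    pre = W1 @ x + b1
    mask = (pre > 0).astype(float)    # ReLU derivative
    return (w2 * mask) @ W1           # chain rule: dy/dx, shape (4,)

x = np.array([1.0, -0.5, 2.0, 0.3])
print(saliency(x))  # one attribution score per input feature
```

So "explainable" here does not mean reading meaning off individual weights; it means producing a per-input relevance score for a given prediction. More elaborate methods (LIME, SHAP, integrated gradients) refine the same basic question.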

