April 27, 2024, 8:47 p.m. | /u/thisisnotadrill66

Deep Learning www.reddit.com

I am a CS graduate and have a basic understanding of machine learning algorithms. Recently I came across a concept that captured my attention: explainable neural networks. I get that neural networks are kind of a black box, but how can they be explainable? I mean, in terms of weights and biases, what does "explainable" mean?

Thank you!
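[Since the question asks what "explainable" means in terms of weights and biases, here is a minimal sketch of one common interpretation: gradient-based attribution (a "saliency map"), where the gradient of the predicted class score with respect to the input tells you which input features the prediction is most sensitive to. The network architecture and input below are purely illustrative, not from the original post; this assumes PyTorch.]

```python
import torch
import torch.nn as nn

# A tiny feedforward network; the architecture is purely illustrative.
model = nn.Sequential(
    nn.Linear(4, 8),
    nn.ReLU(),
    nn.Linear(8, 2),
)

# One hypothetical input with 4 features; we track gradients w.r.t. it.
x = torch.randn(1, 4, requires_grad=True)

# Forward pass, then pick the score of the predicted class.
logits = model(x)
score = logits[0, logits.argmax()]

# Backprop the class score to the *input* (not the weights): large
# gradient magnitudes mark features the prediction is most sensitive to.
score.backward()
saliency = x.grad.abs().squeeze()
print(saliency)
```

[Note this explains an individual prediction rather than the weights themselves; other families of methods, e.g. LIME/SHAP-style surrogate models or inherently interpretable architectures, approach "explainable" differently.]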

