June 26, 2022, 1:52 a.m. | /u/Laddenvore

Deep Learning | www.reddit.com

To clarify my question with a (semi) made-up example: say we trained a DL model to take some input data and predict some outcome (e.g., an amino acid sequence to predict protein conformation). Say we also have very little idea how to relate the outcome to the predictors. Then we applied some form of interpretability method to the DL model, and this led to novel theory/insight (e.g., certain amino acid subsequences are highly likely to appear on the surface of the …
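As a concrete illustration of what "some form of interpretability" might look like in this scenario, here is a minimal sketch of gradient-based saliency on a toy sequence model. Everything here is a hypothetical stand-in (random weights, random data, a made-up `ToySeqModel`), not a real protein-structure predictor; the point is only the pattern of attributing the model's output back to input positions.

```python
# Minimal sketch: gradient-based saliency on a hypothetical toy sequence
# model. All names, shapes, and data here are illustrative assumptions,
# not a real amino-acid-to-conformation model.
import torch
import torch.nn as nn

VOCAB = 20      # 20 standard amino acids
SEQ_LEN = 50
EMBED_DIM = 16

class ToySeqModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMBED_DIM)
        self.conv = nn.Conv1d(EMBED_DIM, 32, kernel_size=5, padding=2)
        self.head = nn.Linear(32, 1)   # scalar "outcome" per sequence

    def forward(self, emb):
        # Takes embeddings directly so we can take gradients w.r.t. them.
        h = self.conv(emb.transpose(1, 2)).relu().mean(dim=2)
        return self.head(h).squeeze(-1)

model = ToySeqModel()
seq = torch.randint(0, VOCAB, (1, SEQ_LEN))        # one random "protein"
emb = model.embed(seq).detach().requires_grad_(True)

out = model(emb).sum()
out.backward()                                      # d(outcome) / d(embedding)

# Per-position saliency: L2 norm of the gradient at each residue.
saliency = emb.grad.norm(dim=-1).squeeze(0)         # shape: (SEQ_LEN,)
top = saliency.topk(5).indices.tolist()
print("positions with highest attribution:", sorted(top))
```

In the question's framing, the follow-up step would be to look for recurring subsequences among high-attribution positions across many inputs and turn that pattern into a testable hypothesis about the underlying biology.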

Tags: case studies, deep learning, insight, interpretation, theory
