March 26, 2024, 4:44 a.m. | Hormoz Shahrzad, Babak Hodjat, Risto Miikkulainen

cs.LG updates on arXiv.org

arXiv:2204.10438v4 Announce Type: replace-cross
Abstract: Most AI systems are black boxes generating reasonable outputs for given inputs. Some domains, however, have explainability and trustworthiness requirements that cannot be directly met by these approaches. Various methods have therefore been developed to interpret black-box models after training. This paper advocates an alternative approach where the models are transparent and explainable to begin with. This approach, EVOTER, evolves rule-sets based on simple logical expressions. The approach is evaluated in several prediction/classification and prescription/policy …
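The abstract only sketches the core idea: evolving rule-sets built from simple logical expressions so the model is transparent from the start. As a rough illustration of that general style of search (not the authors' EVOTER implementation; the rule representation, fitness function, and evolutionary operators below are assumptions for a toy example), a minimal sketch in Python:

```python
# Minimal sketch of evolving rule-sets of simple logical expressions.
# Illustration only, not the EVOTER implementation: the rule encoding,
# fitness, and operators here are assumptions for a toy problem.
import random

random.seed(0)

# Toy binary-classification data: label is 1 when x0 > 0.5 and x1 < 0.3.
def make_data(n=200):
    data = []
    for _ in range(n):
        x = [random.random() for _ in range(3)]
        y = int(x[0] > 0.5 and x[1] < 0.3)
        data.append((x, y))
    return data

# A rule is (feature index, comparison, threshold, predicted class).
# A rule-set is an ordered list of rules plus a default class.
def random_rule():
    return (random.randrange(3), random.choice(("<", ">")),
            random.random(), random.randint(0, 1))

def random_ruleset(max_rules=4):
    return ([random_rule() for _ in range(random.randint(1, max_rules))],
            random.randint(0, 1))

def predict(ruleset, x):
    rules, default = ruleset
    for feat, op, thr, cls in rules:  # first matching rule wins
        if (x[feat] < thr) if op == "<" else (x[feat] > thr):
            return cls
    return default

def fitness(ruleset, data):
    return sum(predict(ruleset, x) == y for x, y in data) / len(data)

def mutate(ruleset):
    rules, default = ruleset
    rules = [random_rule() if random.random() < 0.3 else r for r in rules]
    if random.random() < 0.2:
        rules = rules + [random_rule()]
    if random.random() < 0.1:
        default = 1 - default
    return (rules, default)

def evolve(data, pop_size=50, generations=40):
    pop = [random_ruleset() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda rs: fitness(rs, data), reverse=True)
        survivors = pop[: pop_size // 2]  # truncation selection
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=lambda rs: fitness(rs, data))

data = make_data()
best = evolve(data)
print("accuracy:", fitness(best, data))
for feat, op, thr, cls in best[0]:
    print(f"IF x{feat} {op} {thr:.2f} THEN class {cls}")
print("ELSE class", best[1])
```

The evolved rule-set can be read directly (e.g. "IF x0 > 0.52 THEN class 1"), which is the transparency property the abstract contrasts with post-hoc interpretation of black-box models.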

