Feb. 7, 2024, 7:02 p.m. | /u/uberdev

Machine Learning www.reddit.com

I'm writing about the ethical considerations of AI and looking for authoritative (or at least comprehensive) frameworks for AI ethics and principles. I'm specifically focused on AI alignment, harm reduction, and AI risks and their mitigation, but I'm open to all topics around AI ethics. So far I've found:

[Asilomar AI Principles](https://futureoflife.org/open-letter/ai-principles/)

[European Commission's Ethics guidelines for trustworthy AI](https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai)

[The IEEE Global Initiative On Ethics Of Autonomous And Intelligent Systems](https://standards.ieee.org/industry-connections/ec/autonomous-systems/)

Are there any canonical frameworks for AI ethics? Are there others that should …

