Feb. 7, 2024, 7:02 p.m. | /u/uberdev

Machine Learning www.reddit.com

I'm writing about the ethical considerations of AI and looking for authoritative (or at least comprehensive) frameworks for AI ethics and principles. I'm specifically focused on AI alignment, harm reduction, and AI risk mitigation, but I'm open to all topics around AI ethics. So far I've found:

[Asilomar AI Principles](https://futureoflife.org/open-letter/ai-principles/)

[European Commission's Ethics guidelines for trustworthy AI](https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai)

[The IEEE Global Initiative On Ethics Of Autonomous And Intelligent Systems](https://standards.ieee.org/industry-connections/ec/autonomous-systems/)

Are there any canonical frameworks for AI ethics? Are there others that should …
