May 16, 2024, 6:25 p.m. | Maxwell Zeff


OpenAI launched its Superalignment team almost a year ago with the ultimate goal of controlling hypothetical super-intelligent AI systems and preventing them from turning against humans. Naturally, many people were concerned—why did a team like this need to exist in the first place? Now, something more concerning has…


