Jan. 5, 2024, 10:41 p.m. | /u/NuseAI

Artificial Intelligence www.reddit.com

- The lack of technical comprehension of AI risks in both the automotive industry and government is concerning.

- Both language models and self-driving cars make decisions by statistical reasoning, but while a failing language model merely produces nonsense, a failing self-driving car can be deadly.

- Human errors in coding have replaced human errors in operation: faulty software in autonomous vehicles has already caused crashes.

- AI failure modes are difficult to predict, leading to unexpected behaviors like phantom braking in …

