Jan. 31, 2024, 5:30 p.m. | Matthew S. Smith

IEEE Spectrum spectrum.ieee.org



AI’s inclusivity problem is no secret. According to the ACLU, AI systems can perpetuate housing discrimination and bias in the justice system, among other harms. Bias in the data an AI model relies on is reproduced in its results.

Large language models (LLMs) share this problem; they can reproduce bias in medical settings and perpetuate harmful stereotypes. To combat this, the New York-based FutureSum AI is building Latimer, the first “racially inclusive large language …

