Jan. 31, 2024, 5:30 p.m. | Matthew S. Smith

IEEE Spectrum spectrum.ieee.org



AI’s inclusivity problem is no secret. According to the ACLU, AI systems can perpetuate housing discrimination and bias in the justice system, among other harms. Bias in the data an AI model relies on is reproduced in its results.

Large language models (LLMs) share this problem: they can reproduce bias in medical settings and perpetuate harmful stereotypes, among other harms. To combat that, the New York-based FutureSum AI is building Latimer, the first “racially inclusive large language …

