March 28, 2024, 1:41 p.m.

Mozilla Foundation Blog

Mozilla research found that AI-text detection tools are not always as reliable as they claim. Further, researchers found that large language models like ChatGPT can be successfully prompted to produce more 'human-sounding' text.


As we wrote previously, generative AI presents new threats to the health of our information ecosystem. The major AI players recognize the risks their services present: OpenAI published a paper on the threat of automated influence operations, and its policy prohibits the use of ChatGPT for …

