all AI news
Quoting Sam Altman, via Marvin von Hagen
May 25, 2023, 11:03 p.m. |
Simon Willison's Weblog simonwillison.net
A whole new paradigm would be needed to solve prompt injections 10/10 times – It may well be that LLMs can never be used for certain purposes. We're working on some new approaches, and it looks like synthetic data will be a key element in preventing prompt injections.
— Sam Altman, via Marvin von Hagen
ai data generativeai llms openai paradigm prompt promptinjection prompt injections sam sam altman security synthetic synthetic data
More from simonwillison.net / Simon Willison's Weblog
Merge pull request #1757 from simonw/heic-heif
1 day, 1 hour ago |
simonwillison.net
Wrap text at specified width
1 day, 3 hours ago |
simonwillison.net
llm cmd undo last git commit - a new plugin for LLM
2 days, 15 hours ago |
simonwillison.net
Jobs in AI, ML, Big Data
Senior ML Researcher - 3D Geometry Processing | 3D Shape Generation | 3D Mesh Data
@ Promaton | Europe
Principal Data Engineer
@ RS21 | Remote
SQL/Power BI Developer
@ ICF | Virginia Remote Office (VA99)
Senior Machine Learning Engineer (Canada Remote)
@ Fullscript | Ottawa, ON
Software Engineer - MLOps.
@ Renesas Electronics | Toyosu, Japan
Junior Data Scientist / Artificial Intelligence consultant
@ Deloitte | Luxembourg, LU