Feb. 6, 2024, 3:10 p.m. | Stephanie Palazzolo

The Information www.theinformation.com

Put aside, for a moment, all of the scary talk about bad actors theoretically using large language models to build bombs or bioweapons. A more urgent threat, says investor Rama Sekhar, is AI models that could leak sensitive corporate data, or hackers triggering ChatGPT service outages. Sekhar is a longtime cybersecurity investor who joined Menlo Ventures as a partner last month after many years at Norwest Venture Partners.

He isn’t the only one making this argument. Last week, for …
