Feb. 6, 2024, 3:10 p.m. | Stephanie Palazzolo

The Information www.theinformation.com

Put aside the scary talk about bad actors theoretically using large language models to build bombs or bioweapons for a moment. A more urgent threat, says investor Rama Sekhar, is AI models that could leak sensitive corporate data, or hackers triggering ChatGPT service outages. Sekhar is a longtime cybersecurity investor who joined Menlo Ventures as a partner last month after many years at Norwest Venture Partners.

He isn’t the only one making this argument. Last week, for …
