Feb. 6, 2024, 3:52 a.m. | /u/RobertWF_47

Data Science www.reddit.com

Read a job posting from a biotech firm that's looking for candidates with experience manipulating data with **trillions** of records.

I can't fathom working with datasets that big. Depending on the number of variables, I'd think it'd be more convenient to draw a random sample?
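For what it's worth, drawing a uniform random sample from a dataset too large to load into memory is a classic streaming problem. One standard approach is reservoir sampling, which takes a single pass over the records and keeps only `k` of them in memory at any time. A minimal sketch in Python (the function name and parameters here are illustrative, not from any particular library):

```python
import random

def reservoir_sample(stream, k, seed=None):
    """Draw a uniform random sample of k items from a stream of
    unknown (possibly huge) length, in one pass with O(k) memory."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            # Fill the reservoir with the first k items.
            reservoir.append(item)
        else:
            # Replace a random slot with probability k/(i+1); this keeps
            # every item's overall inclusion probability equal to k/n.
            j = rng.randint(0, i)
            if j < k:
                reservoir[j] = item
    return reservoir

# Example: sample 10 records from a "stream" of a million IDs.
sample = reservoir_sample(range(1_000_000), 10, seed=42)
print(sample)
```

The same idea scales to billions or trillions of records because memory use depends only on the sample size, not the stream length; in practice you'd read the records in chunks from disk or a database cursor rather than materializing them.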

