all AI news
Analyzing datasets with trillions of records?
Feb. 6, 2024, 3:52 a.m. | /u/RobertWF_47
Data Science www.reddit.com
I can't fathom working with datasets that big. Depending on the number of variables, I would think it'd be more convenient to draw a random sample?
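For sampling from a dataset too large to hold in memory, one standard approach is reservoir sampling, which draws a uniform random sample in a single streaming pass. A minimal sketch in Python (the function name and the toy stream are illustrative, not from the post):

```python
import random

def reservoir_sample(stream, k, seed=None):
    """Keep a uniform random sample of k items from a stream of
    unknown (possibly huge) length, using only O(k) memory."""
    rng = random.Random(seed)
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            # Fill the reservoir with the first k items.
            sample.append(item)
        else:
            # Replace a random slot with probability k/(i+1),
            # which keeps every item equally likely to survive.
            j = rng.randint(0, i)
            if j < k:
                sample[j] = item
    return sample

# Example: sample 5 records from a million-record "stream"
# without ever materializing the whole dataset.
subset = reservoir_sample(range(1_000_000), 5, seed=42)
```

This scales to arbitrarily many records because memory use depends only on the sample size, not the stream length; for trillions of rows the same idea is usually run distributed (e.g. per-partition reservoirs that are merged).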
Tags: big, biotech, data, datascience, datasets, experience, job, random, records, sample, think, variables