[R] RedPajama-Data-v2: an Open Dataset with 30 Trillion Tokens for Training Large Language Models
Oct. 30, 2023, 9:55 p.m. | /u/APaperADay
Machine Learning www.reddit.com
**Hugging Face**: [https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2)
**GitHub**: [https://github.com/togethercomputer/RedPajama-Data](https://github.com/togethercomputer/RedPajama-Data)
**Description**:
>RedPajama-V2 is an open dataset for training large language models. The dataset includes over 100B text documents drawn from 84 CommonCrawl snapshots and processed with the CCNet pipeline. Of these, 30B documents in the corpus additionally come with quality signals, and 20B documents are deduplicated.
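The deduplicated subset is produced at web scale with the project's own tooling (documented in the GitHub repo above), but the core idea of exact deduplication can be illustrated with simple content hashing. This is a toy sketch, not the RedPajama pipeline; the `dedup_exact` helper and the sample documents are hypothetical:

```python
import hashlib

def dedup_exact(documents):
    """Keep only the first occurrence of each distinct document text."""
    seen = set()
    unique = []
    for doc in documents:
        # Hash the normalized text so the seen-set stays small
        # even when individual documents are long.
        digest = hashlib.sha256(doc.strip().lower().encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

docs = [
    "The quick brown fox.",
    "An entirely different document.",
    "The quick brown fox.",  # exact duplicate, will be dropped
]
print(dedup_exact(docs))  # → the two distinct documents, in original order
```

Real web-scale pipelines replace the in-memory set with probabilistic structures (e.g. Bloom filters) and add fuzzy matching, since near-duplicates with trivial edits would slip past an exact hash.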