The Blue Scrubs v2: A large refined medical dataset derived from the internet
Monique Shotande1,3, Luis Felipe1,3, Carlos Garcia1,3, Talia Kip Berler1,3, Mehmet Belgin1,3, Shane Corder1,3, Jarett DeAngelis1,3, Aakash Tripathi1,3, Issam El Naqa1,3, Vivek Rudrapatna2,3, Ghulam Rasool1,3, Gilmer Valdes1,3
1. Machine Learning Department, Moffitt Cancer Center, Tampa, Florida
2. Center for Real World Evidence, University of California San Francisco, San Francisco, California
3. TheBlueScrubs
1. Introduction
The Blue Scrubs v2 (TBSv2) dataset is the next step forward in addressing the scarcity of large, high-quality medical and cancer text datasets. Although large amounts of medical data are generated every day, most of it remains private to protect patient confidentiality. Even when accessible, amalgamating these data into useful forms for models can be time-consuming and require expert medical knowledge.
Publicly available medical text repositories remain relatively small and often draw from a few select sources. Their small size and limited diversity restrict their usefulness for training robust, clinically proficient large language models (LLMs). For instance, PubMed [1], one of the most widely used biomedical literature collections, contains around 10 billion tokens. Additionally, the Meditron [2] suite of 7B and 70B parameter medical LLMs was trained on a total of 48.1 billion tokens, a scale still modest relative to the needs of foundational LLM development.
Scaling laws [3] for LLMs demonstrate that increasing dataset size leads to predictable improvements in model performance. However, these laws have not been extensively tested in the medical domain, largely due to the scarcity of sufficiently large datasets. To address these needs, we released The Blue Scrubs datasets. Previously, we released The Blue Scrubs v1 (TBSv1), then the largest cancer text dataset with 25 billion tokens. Building on that foundation, we are now releasing TBSv2, a significantly larger dataset containing 692 billion tokens, roughly 14 times the size of Meditron's training set and 28 times the size of TBSv1.
Recent efforts, such as DeepSeek and Meditron, have demonstrated the value of curating large domain-relevant datasets for training or adapting LLMs. To achieve this scale and quality, TBSv2 was curated from the FineWeb-v1 [4] dataset (released in 2024), which itself contains 15 trillion tokens derived from 96 CommonCrawl [5] snapshots. FineWeb has been shown to outperform other high-quality web datasets such as SlimPajama [6] (the source of TBSv1), making it an ideal foundation for medical text curation.
TBSv2 is not only broader but also deeper in its coverage. It includes approximately 284 billion tokens of cancer-related text, making it the largest curated cancer text dataset to date. Each text has been carefully evaluated for medical quality, ensuring coverage across diverse domains, communication styles, and levels of complexity. For example, the dataset balances formal clinical literature (e.g., oncology research papers) with real-world medical conversations (e.g., patient forums and practitioner discussions). By combining this depth and breadth, TBSv2 provides a comprehensive training resource designed to advance the development of foundation medical LLMs. TBSv2 has the potential to enable models to better handle the multifaceted challenges of medicine, from interpreting complex oncology research to engaging in patient-centered communication.
| Dataset | Total Texts | Total Tokens | Cancer Texts | Cancer Tokens | Dataset Source |
|---|---|---|---|---|---|
| TheBlueScrubs-v2 | 544,386,328 | 692.7 billion | 143,720,788 | 283.7 billion (40.95%) | FineWeb (15 trillion tokens) |
| TheBlueScrubs-v1 | 11,520,321 | 25.2 billion | 3,361,700 | 11.8 billion (46.71%) | SlimPajama (670 billion tokens) |
Table 1: Summary comparing TBSv1 and TBSv2 in terms of overall size and quantity of cancer texts.
What are we releasing?
We are releasing a curated subset of the FineWeb dataset, called The Blue Scrubs v2, which includes all texts with high medical probability scores from a custom classifier (see TBSv1 for more details). Filtering FineWeb on medical probability reduces its enormous size while retaining the texts most likely to be medical. Additionally, textual quality is quantified according to the following key metrics (a minimal loading-and-filtering sketch follows the label list below):
- Medical Probability Score (0.8 to 1.0): How likely text is to discuss medical content.
- Scope of Medical Relevance (1–5 Scale, where 1 is lowest and 5 is highest): Strength of relevance to medicine and utility to clinical applications.
- Medical Precision and Factual Detail (1–5 Scale): Accuracy and specificity of medical claims.
- Safety and Ethical Standards (1–5 Scale): Adherence to medical safety protocols and ethical guidelines.
TBSv2 also includes descriptive labels indicating whether the text is cancer related and the likelihood it includes quality information about breast cancer:
- Cancer Binary Label (0 or 1)
- Breast Cancer Probability (0 to 1.0 Scale)
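For orientation, here is a minimal sketch of how such a release could be loaded and filtered with the Hugging Face `datasets` library. The repository id and the names of the text and probability columns are assumptions; the clipped quality fields follow the metric descriptions above.

```python
from datasets import load_dataset

# Hypothetical repository id; point this at the official TheBlueScrubs-v2 release.
ds = load_dataset("TheBlueScrubs/TheBlueScrubs-v2", split="train", streaming=True)

def keep_high_quality(example):
    # "medical_probability" and "text" are assumed column names;
    # the *_clipped score fields are documented in this release.
    return (
        example["medical_probability"] >= 0.9
        and example["relevance_score_clipped"] >= 4
        and example["safety_score_clipped"] >= 4
    )

for record in ds.filter(keep_high_quality).take(3):
    print(record["text"][:200])
```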
Data Pipeline
The data pipeline is shown in Figure 1. First, the FineWeb dataset is filtered based on the medical probability score; the filtered data are then evaluated for medical quality and breast cancer probability and labeled as cancer-related or not.
Figure 1: TBSv2 creation process through filtering, quality evaluation, and cancer labeling.
Example Text:
The first and only approved antibody-drug conjugate for the treatment of recurrent or metastatic cervical cancer with disease progression on or after chemotherapy receives accelerated approval by FDA.
FDA granted accelerated approval to Seagen and Genmab's Tivdak (tisotumab vedotin-tftv), a tissue factor-directed antibody and microtubule inhibitor conjugate, according to a Sept. 20, 2021, FDA press release. According to the Seagen company press release, Tivdak is the first and only approved antibody-drug conjugate for the treatment of adult patients with recurrent or metastatic cervical cancer with disease progression on or after chemotherapy.
In the innovaTV 204 clinical trial, 101 patients with recurrent or metastatic cervical cancer who had received no more than two prior systemic regimens in the recurrent or metastatic setting, including at least one prior platinum-based chemotherapy regimen, received Tivdak. The trial showed a 24% objective response rate with a median duration of response of 8.3 months.
Tivdak is approved under FDA’s accelerated approval program based on tumor response and the durability of the response, according to Seagen’s press release. Verification and description of clinical benefit in confirmatory trials will impact any continued approval.
"TIVDAK's approval as a monotherapy in the U.S. is an important milestone for women with recurrent or metastatic cervical cancer with disease progression on or after chemotherapy, as they are in need of a new treatment option and we look forward to making it available to them," said Jan van de Winkel, CEO, Genmab, in Seagen's press release. "The journey towards the approval of TIVDAK started nearly two decades ago with innovative research by scientists at Genmab and Seagen and reflects on our purpose of making an impact in the lives of cancer patients and their families. Today’s announcement marks Genmab’s evolution into a fully integrated biotechnology company and we would like to thank patients, caregivers, investigators and our collaborators for their participation in our clinical studies."
Source: BioPharm International article.
- Medical Probability Score: 0.96
- Scope of Medical Relevance: 5.0
- Precision and Factual Detail: 4.91
- Safety and Ethical Standards: 5.0
- Breast Cancer Probability Score: 0.21
- Cancer Label: 1
Potential Use Cases
Below we highlight three potential use cases for The Blue Scrubs datasets.
1) Analyzing Medical LLM Scaling Laws
TBSv2 provides a large medical and cancer resource for focused training of clinical LLMs while reducing the risk of catastrophic forgetting of general-domain skills. With a corpus of this size, scaling laws relating dataset size to model performance can be assessed for clinically oriented models at realistic scale.
Use Case: Conduct experiments training clinical LLMs across a range of medical dataset sizes spanning hundreds of billions of tokens (a minimal curve-fitting sketch follows).
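As a hedged illustration of how such an experiment might be analyzed, the sketch below fits a data-scaling power law to hypothetical validation losses measured at several dataset sizes. The loss values and the exact functional form are placeholders, not results from TBSv2.

```python
import numpy as np
from scipy.optimize import curve_fit

# Tokens seen per training run (in billions) and hypothetical validation losses.
tokens_b = np.array([1.0, 10.0, 100.0, 692.0])
losses = np.array([2.90, 2.55, 2.30, 2.18])

def data_scaling_law(d, d_c, alpha, l_inf):
    # One common data-limited form: L(D) = (D_c / D)**alpha + L_inf
    return (d_c / d) ** alpha + l_inf

params, _ = curve_fit(data_scaling_law, tokens_b, losses, p0=[1.0, 0.3, 2.0], maxfev=10000)
d_c, alpha, l_inf = params
print(f"fitted exponent alpha = {alpha:.3f}, irreducible loss = {l_inf:.3f}")
```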
2) Domain-Specific Pretraining for Clinical LLMs
The dataset enables the fine-tuning of general-purpose LLMs into medical or cancer models. By prioritizing high-scoring medical texts and leveraging synthetic data pipelines (e.g., using Llama 3.3 [7] for text rewriting and refinement), The Blue Scrubs v2 helps LLMs learn the nuanced language and knowledge required for complex medical domains.
Use Case: Transitioning a general model into an oncology-specific LLM to improve accuracy in tasks like treatment recommendation and risk stratification.
3) Medical Misinformation Detection
With its rich metadata, including probability scores and reliability metrics, the dataset supports the development of LLMs that can identify and filter out misleading or harmful health information. Synthetic data can further enhance this process by pairing examples of misinformation with corrections.
Use Case: Training LLMs to flag false claims about cancer treatments and provide more accurate alternative explanations.
By restricting the dataset to texts scoring ≥ 0.8 and including Llama-based (1–5 scale) quality category ratings along with source information, TBS offers a high-quality, high-utility resource for training, analysis, and stress-testing of medical AI systems.
2. Medical Filtering
Using our custom medical classifier, we extract the subset of FineWeb whose medical probability score exceeds 0.8 and pass it on for medical refinement and quality control. The medical classifier is a logistic regression model trained on a balanced dataset of 60,000 text samples (30,000 medical vs. 30,000 non-medical). See TBSv1, Section 2 (Filtering Process, Linear Classifier) for a deep dive into the technical details of training and evaluating the medical classifier. Table 2 summarizes the medical classifier's performance.
Setting the medical probability threshold at 0.8 allows us to focus on documents with consistently high medical quality while still yielding a large dataset without sacrificing depth. Roughly 4.6% of FineWeb passes this filter, resulting in about 692 billion tokens from over 544 million texts that meet our criteria. A minimal sketch of this style of classifier is shown after Figure 2.
| Metric | Test Split | External Dataset |
|---|---|---|
| Accuracy | 0.9819 | 0.9613 |
| Precision | 0.9836 | 0.9943 |
| Recall | 0.9801 | 0.8472 |
| F1 Score | 0.9819 | 0.9149 |
Table 2: Performance of the linear medical classifier on the held-out test split and an external dataset.
Figure 2: Distribution of the medical probability score across all the texts in TBSv2.
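The sketch below illustrates the general shape of such a filter: a logistic regression trained on a balanced medical/non-medical sample and applied with a 0.8 probability cutoff. The TF-IDF feature representation and the toy examples are assumptions; see TBSv1 for the actual training setup.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative corpus; the real classifier was trained on 60,000 balanced samples.
texts = [
    "Adjuvant chemotherapy regimens for stage III colon cancer were compared.",
    "Patients presenting with dyspnea were evaluated for pulmonary embolism.",
    "The quarterly earnings report exceeded analyst expectations this year.",
    "The championship game went into overtime after a late field goal.",
]
labels = [1, 1, 0, 0]  # 1 = medical, 0 = non-medical

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(texts, labels)

# Keep documents whose predicted medical probability exceeds the 0.8 threshold.
prob_medical = clf.predict_proba(["Tisotumab vedotin is an antibody-drug conjugate."])[:, 1]
print(prob_medical > 0.8)
```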
3. Quality and Safety Scores
To ensure that a sufficient quantity of high-quality texts is included when using TBSv2 for medical LLM training, we developed medical quality and safety metrics and include their scores for each text in the dataset. The curated medical subset is quantified across three key dimensions: scope of medical relevance, precision and factual detail, and safety and ethical standards. These scores refine the dataset beyond the medical probability alone by capturing nuanced distinctions in clinical relevance and safety. By incorporating this additional information, we ensure that The Blue Scrubs datasets retain the highest-quality medical texts and align with the ethical and safety expectations critical for developing robust and trustworthy medical AI systems. Researchers can set filtering thresholds on any combination of these metrics to obtain datasets that are both technically rigorous and practically reliable for custom and real-world medical applications.
See TBSv1 Section 3 Quality and Safety evaluation with Llama 3.1 70B for a deep dive into the technical details of development and evaluation of the quality and safety scoring models. Our medical quality scores align well with clinician evaluations, suggesting our metrics are reliable proxies for human expert judgment. The validation was conducted by 60 independent clinicians reviewing a subsample of the TBSv1 dataset to assess its alignment with the automated quality metrics.
Quality and Safety Metrics
Scope of Medical Relevance
Measures whether the content falls within the domain of medical knowledge and healthcare.
Ensures that selected texts are strongly related to medicine and useful for clinical applications.
There are 2 fields in the dataset related to relevance: relevance_score (raw score from the scoring model) and relevance_score_clipped (clipped score between 1 and 5)
Precision and Factual Detail
Assesses the accuracy and specificity of medical claims.
Ensures that content is factually correct and useful in real-world medical applications.
There are 2 fields in the dataset related to precision: precision_score (raw score) and precision_score_clipped (clipped score)
Safety and Ethical Standards
Evaluates adherence to medical safety protocols and ethical guidelines.
Prevents the inclusion of misleading or harmful information that could negatively impact medical decision-making.
There are 2 fields in the dataset related to safety: safety_score (raw score) and safety_score_clipped (clipped score)
These metrics were selected to ensure that the dataset is highly useful for developing AI models and applications capable of understanding medical content while maintaining factual accuracy and ethical considerations.
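For example, a researcher could apply strict thresholds on the clipped fields listed above. The sketch below assumes the data have been materialized locally as a parquet shard (the file name is hypothetical); the score field names are the ones documented in this release.

```python
import pandas as pd

# Hypothetical shard file name; replace with an actual TBSv2 data file.
df = pd.read_parquet("tbsv2_shard_0000.parquet")

strict = df[
    (df["relevance_score_clipped"] >= 4)
    & (df["precision_score_clipped"] >= 4)
    & (df["safety_score_clipped"] >= 4)
]
print(f"kept {len(strict)} of {len(df)} texts under the strict thresholds")
```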
Quality and Safety Summary
The overall distributions of the quality scores, across all the texts in the 692-billion-token TBSv2 dataset, are shown in Figure 3. These statistics allow researchers to assess the overall quality and consistency of the dataset and provide insight into areas where additional refinement may be necessary. As seen in the figure below, the majority of the data have relevance, precision, and safety scores above 4.
Figure 3: Distributions for each medical quality metric across all the texts in TBSv2.
| Metric | Mean | Standard Deviation |
|---|---|---|
| Scope of Medical Relevance | 4.54 | 1.06 |
| Precision and Factual Detail | 4.32 | 0.84 |
| Safety and Ethical Standards | 4.36 | 0.88 |
Table 3: Mean and standard deviation of the three medical quality metrics, supporting the overall quality and comprehensiveness of the dataset.
Figure 4: Distributions of the medical quality scores within TBSv2 for each year. The majority of the texts in the dataset have medical safety, precision, and relevance scores above 4. From 2013 (deep blue) to 2024 (dark red), there is a noticeable decrease in the fraction of texts with safety and precision/factuality scores near 5. The majority of scores remain above 4 for all years, but the distribution spreads out between 4 and 5.
4. Cancer Classification
The TBS datasets are also a source of relevant information on specific patient populations, such as cancer patients, for whom such data are otherwise difficult to obtain. For a step-by-step description of our process for developing the model that generates cancer text labels, see TBSv1, Section 4 (Cancer Classification). TBSv2 is currently the largest open-source corpus of cancer text available.
| Category | Number of Texts | Number of Tokens (Percentage) |
|---|---|---|
| Cancer Texts | 143,720,788 | 283,677,124,826 (40.95%) |
| Non-Cancer Texts | 400,665,540 | 408,987,118,899 (59.05%) |
Table 4: Quantity of cancer and non-cancer texts in the TBSv2 dataset.
| Metric | Mean | Standard Deviation |
|---|---|---|
| Probability (Cancer Only) | 0.92 | 0.12 |
| Scope of Medical Relevance | 4.81 | 0.50 |
| Precision and Factual Detail | 4.26 | 0.92 |
| Safety and Ethical Standards | 4.22 | 1.06 |
Table 5: Medical quality of the cancer texts within TBSv2.
| Category | Number of Texts | Number of Tokens | Percentage |
|---|---|---|---|
| Safe Texts (≥4) | 106,419,179 | 214.3 billion | 74.0% |
| Neutral Texts (2<x<4) | 25,517,377 | 44.1 billion | 17.8% |
| Unsafe Texts (≤2) | 11,784,232 | 25.2 billion | 8.2% |
Table 6: Safety distribution of the cancer texts in TBSv2.
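The same safety tiers can be recomputed locally from the clipped safety scores. In the sketch below, the parquet file name and the cancer label field name are assumptions; the tier boundaries follow Table 6.

```python
import pandas as pd

df = pd.read_parquet("tbsv2_shard_0000.parquet")   # hypothetical shard file name
cancer = df[df["cancer_label"] == 1]               # assumed name for the binary cancer label

s = cancer["safety_score_clipped"]
print("safe    (score >= 4):   ", int((s >= 4).sum()))
print("neutral (2 < score < 4):", int(((s > 2) & (s < 4)).sum()))
print("unsafe  (score <= 2):   ", int((s <= 2).sum()))
```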
Figure 5: Quantity of cancer text within the TBSv2 for each year between 2013 and 2024. The proportion of available high quality cancer texts decreased from 2019 (brown) to 2024 (light blue).
5. Example Application of The Blue Scrubs Datasets: Breast Cancer Scoring
Using The Blue Scrubs datasets, specialized medical datasets can be curated to contain the highest quality texts. Such datasets can then be used to develop expert clinical LLMs for specific medical scenarios or diagnoses.
For example, the TBSv2 dataset contains an attribute, breast_cancer_score, which scores the likelihood that each text discusses breast cancer and the potential quality of the breast cancer related details. Using this score, a threshold can be selected to identify the highest-quality texts pertaining to breast cancer.
The breast cancer scoring model was trained on a random sample of 100K records from the 11.5M records in the TBSv1 dataset. Only samples with medical probability scores of at least 0.99 were used, to ensure a high-quality training dataset. Lastly, only positive samples with medical quality scores (i.e., relevance, factuality, safety) greater than 3 were retained, to further ensure their quality. Table 7 summarizes the performance of the breast cancer scoring model.
| Metric | Test Split | External Dataset |
|---|---|---|
| Accuracy | 0.8635 | 0.8209 |
| Precision | 0.9014 | 1.0000 |
| Recall | 0.8163 | 0.8209 |
| F1 Score | 0.8567 | 0.9016 |
Table 7: Performance of the breast cancer scoring model on two independent test sets.
The final breast cancer model scored well, with over 80% accuracy on both the held-out test split and an external PubMed dataset from Hugging Face. This model was then used to compute the breast cancer scores for TBSv2. The mean and standard deviation of the breast cancer score across the dataset are 0.16 and 0.17, respectively. Raw scores, rather than labels, are provided for breast cancer so that different thresholds can be used for different applications (a minimal threshold sweep is sketched after Figure 6). Figure 6 shows the distribution of scores across the TBSv2 dataset.
Figure 6: Distribution of the breast cancer score across all the texts in TBSv2 dataset.
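A simple way to choose a cutoff is to sweep several candidate thresholds and inspect how the retained subset shrinks. The sketch below assumes a local parquet shard (hypothetical file name) containing the documented breast_cancer_score field.

```python
import pandas as pd

df = pd.read_parquet("tbsv2_shard_0000.parquet")   # hypothetical shard file name

# Sweep candidate cutoffs on breast_cancer_score and report subset sizes.
for threshold in (0.5, 0.7, 0.9, 0.95):
    subset = df[df["breast_cancer_score"] >= threshold]
    print(f"threshold {threshold:.2f}: {len(subset):,} texts retained")
```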
6. Key Observations and Recommendations
6.1 Data Quality and Utility
- High Medical Relevance: The final dataset retains documents with strong likelihood of covering medical topics, verified via dual-stage filtering.
- Robust Cancer Coverage: About 41% of the medical texts pertain to cancer, broadening research avenues for oncology-related language modeling.
- Safety Scoring: Most cancer-related documents scored highly for ethical standards, indicating relatively safe content for subsequent modeling tasks.
6.2 Limitations, Disclaimers and Ethical Considerations
- Bias and Noise: Even after filtering, the dataset may retain biases, unverified medical claims, and incomplete or outdated information.
- Privacy and Anonymization: Although derived from publicly available web data, caution is advised when texts might include personal details or sensitive health data.
- Duplicates: Though the FineWeb team performed a process of deduplication, it is possible there are still duplicate texts across years within the data set and possible overlap with benchmark datasets.
6.3 Future Directions
- Applications: We are studying the usability of the TBSv2 dataset for training medical LLMs, compared with other medical corpora considered to be high quality (e.g., PubMed).
- Corroborate Breast Cancer Score with Human Evaluation: The breast cancer scores have not yet been compared with human evaluation, although the model was trained using data from TBSv1. These scores are not a replacement for clinician evaluation, but they suggest the model is useful for identifying likely breast cancer texts.
- Further Refinement: Additional, domain-specific classifiers (e.g., for cardiology, neurology) could improve the topical precision.
- Fine-Grained Metadata: Annotating documents for sub-topics within cancer (lung, colorectal, prostate, etc.) would benefit specialized modeling.
- Multilingual Expansion: FineWeb-v2 was released in 2025 and includes texts in multiple languages. Similar filtering and refinement steps can be applied to obtain large-scale multilingual medical and cancer corpora.
- Textual Quality Refinement: Despite our extensive filtering and data quality control, some texts are still of low quality. By leveraging strong open-source models (e.g., Llama 3.3 70B), we are summarizing each text in TBS to improve its readability and grammar without changing the medical information (a minimal prompt sketch follows this list).
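Below is a minimal sketch of this kind of rewriting step. The model id, prompt wording, and generation settings are assumptions, not the exact pipeline used; the 70B instruct model also requires substantial multi-GPU hardware to run.

```python
from transformers import pipeline

# Assumed model id; any capable instruction-tuned open model can fill this role.
rewriter = pipeline("text-generation", model="meta-llama/Llama-3.3-70B-Instruct")

PROMPT = (
    "Rewrite the following medical text to improve grammar and readability. "
    "Do not add, remove, or change any medical facts, numbers, or drug names.\n\n{doc}"
)

raw_text = "tivdak got accelerated approval fda, cervical cancer recurrent/metastatic, 24% response rate"
result = rewriter(PROMPT.format(doc=raw_text), max_new_tokens=256, do_sample=False)
print(result[0]["generated_text"])
```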
7. Conclusion
The Blue Scrubs v2 dataset provides a uniquely broad and deep corpus of medical and cancer texts, leveraging the massive scope of the FineWeb dataset. Through efficient medical filtering and quality evaluation, the resulting dataset offers:
- High-quality, diverse medical texts totaling over 692 billion tokens.
- Robust coverage of cancer-related materials (over 283 billion tokens) with good safety and factuality metrics.
- A scalable framework for further targeted extractions and evaluations.
8. How to Reference Us
If you use any part of The Blue Scrubs v2 dataset, please reference it using the following format:
@article{TheBlueScrubsv2,
author = {Monique Shotande and Luis Felipe and Carlos Garcia and Talia Kip Berler and Issam {El Naqa} and Aakash Tripathi and Vivek Rudrapatna and Ghulam Rasool and Gilmer Valdes},
title = {The Blue Scrubs v2: A Comprehensive Curated Medical Dataset Derived from the Internet using FineWeb},
month = {September},
year = {2025},
url = {https://thebluescrubs.ai/}
}
9. License
Copyright 2025 The BlueScrubs v2
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
For full terms, see the LICENSE file. If you have any questions, comments, or concerns about licensing please contact us.
For the dataset itself, please refer to the Common Crawl Foundation Terms of Use.
10. References
1. National Library of Medicine. (n.d.). PubMed. Available at: https://pubmed.ncbi.nlm.nih.gov
2. Chen, Z., Hernández Cano, A., Romanou, A., Bonnet, A., Matoba, K., Salvi, F., Pagliardini, M., Fan, S., Köpf, A., Mohtashami, A., Sallinen, A., Sakhaeirad, A., Swamy, V., Krawczuk, I., Bayazit, D., Marmet, A., Montariol, S., Hartley, M.-A., Jaggi, M., & Bosselut, A. (2023). MEDITRON-70B: Scaling medical pretraining for large language models. arXiv. https://arxiv.org/abs/2311.16079
3. Kaplan, J., McCandlish, S., Henighan, T., Brown, T. B., Chess, B., Child, R., Gray, S., Radford, A., Wu, J., & Amodei, D. (2020). Scaling laws for neural language models. arXiv. https://arxiv.org/abs/2001.08361
4. Penedo, G., Kydlíček, H., Ben Allal, L., Lozhkov, A., Mitchell, M., Raffel, C., Von Werra, L., & Wolf, T. (2024). The FineWeb Datasets: Decanting the Web for the Finest Text Data at Scale. In Proceedings of the Thirty-eighth Conference on Neural Information Processing Systems: Datasets and Benchmarks Track.
5. Common Crawl Foundation. (n.d.). Common Crawl. Available at: https://commoncrawl.org
6. Soboleva, D., Al-Khateeb, F., Myers, R., Steeves, J. R., Hestness, J., & Dey, N. (2023). SlimPajama: A 627B token cleaned and deduplicated version of RedPajama. Cerebras Systems. Available at: https://www.cerebras.net/blog/slimpajama-a-627b-token-cleaned-and-deduplicated-version-of-redpajama
7. Dubey, A., et al. (2024). The Llama 3 Herd of Models. arXiv:2407.21783. Available at: https://arxiv.org/abs/2407.21783