# LOOPerSet: A Large-Scale Dataset for Data-Driven Polyhedral Optimization

## Dataset Description
LOOPerSet is a large-scale public dataset for machine learning-based compiler optimization. It provides labeled performance data for training and evaluating models that predict the effects of code transformations.
The dataset contains over 28 million labeled data points derived from approximately 220,000 unique, synthetically generated loop nests. Each data point consists of a program, a specific sequence of applied loop transformations (e.g., fusion, tiling, skewing, parallelization), and its resulting ground-truth performance measurement.
Transformation sequences were generated using a polyhedral compilation framework to ensure they were legal and semantics-preserving. LOOPerSet was originally created to train the cost model for the LOOPer autoscheduler (PACT '25). For a full description of the generation process and a diversity analysis, please see our companion paper on arXiv.
## Supported Tasks
The dataset can be used for several research applications in machine learning and compilers:
- Performance Prediction: The dataset's primary use case. Train a model to map a program's features and a candidate optimization schedule to a predicted performance value (e.g., execution time or speedup). This forms the core of a learned cost model for guiding compiler optimization.
- Schedule Ranking: A learning-to-rank task where a model learns to order a set of candidate schedules for a given program based on their relative performance.
- Compiler Heuristic Discovery: A data analysis task to discover new optimization heuristics by finding correlations between program features and the effectiveness of transformation sequences.
- Program Representation Learning: Develop and evaluate novel methods for featurizing programs, computer code, and transformation schedules, such as learning dense vector embeddings.
- Transfer Learning for Hardware Portability: A general-purpose cost model can be pre-trained on LOOPerSet and then fine-tuned on a much smaller, target-specific dataset, significantly reducing the data collection cost for new architectures.

## Dataset Configurations
The dataset is provided in two configurations:
- `full`: The complete ~28 million point dataset (composed of ~220k programs), available as a single `train` split.
- `pact25_split`: A 10-million-point version used to train the LOOPer cost model, pre-split into `train` (90%) and `validation` (10%) sets for reproducibility. This 10M set is a subset of the 28M one.

## How to Use
The dataset files are stored in .jsonl.gz format (gzipped JSON Lines), where each line is a complete JSON object representing one program.
Below, we provide a simple method to download the files and stream the data in Python.

### Installation
You will need the huggingface-hub library to download the files from the repository.
```bash
pip install huggingface-hub
```

### Step 1: Download the Data Files
The dataset is available in two configurations, with the following approximate file sizes:
| File | Compressed Size | Decompressed Size |
|---|---|---|
| `looperset_full.jsonl.gz` | ~3.7 GB | ~34 GB |
| `looperset_pact25_train.jsonl.gz` | ~1.2 GB | ~22 GB |
| `looperset_pact25_validation.jsonl.gz` | ~146 MB | ~5.3 GB |
First, use the hf_hub_download function to fetch the dataset files you need.
```python
from huggingface_hub import hf_hub_download

REPO_ID = "Mascinissa/LOOPerSet"

# --- Option 1: Download the full 28M dataset ---
full_dataset_path = hf_hub_download(
    repo_id=REPO_ID,
    filename="data/looperset_full.jsonl.gz",
    repo_type="dataset",
)
print(f"Full dataset downloaded to: {full_dataset_path}")

# --- Option 2: Download the PACT '25 splits ---
pact25_train_path = hf_hub_download(
    repo_id=REPO_ID,
    filename="data/pact25/looperset_pact25_train.jsonl.gz",
    repo_type="dataset",
)
pact25_validation_path = hf_hub_download(
    repo_id=REPO_ID,
    filename="data/pact25/looperset_pact25_validation.jsonl.gz",
    repo_type="dataset",
)
print(f"PACT'25 train split downloaded to: {pact25_train_path}")
print(f"PACT'25 validation split downloaded to: {pact25_validation_path}")
```
### Step 2: Stream and Parse the Data
Due to the large size of the dataset, we recommend streaming the data using a generator function.
The following function reads a .jsonl.gz file line-by-line.
```python
import gzip
import json

def stream_jsonl_gz(file_path):
    """
    Generator function to stream and parse a .jsonl.gz file.
    Yields one JSON object (as a Python dict) at a time.
    """
    with gzip.open(file_path, 'rt', encoding='utf-8') as f:
        for line in f:
            yield json.loads(line)

# --- Example: Iterate through the pact25_split training set ---
# (Assuming you have run the download code from Step 1)
data_stream = stream_jsonl_gz(pact25_train_path)

print("First 3 programs from the stream:")
for i, program in enumerate(data_stream):
    if i >= 3:
        break
    print(f"\n--- Program {i+1}: {program['program_name']} ---")
    print(f"  Initial time: {program['initial_execution_time']:.4f} ms")
    print(f"  Number of schedules: {len(program['schedules_list'])}")
```
### Example 1: Generating Training Examples
Each record in LOOPerSet represents a single program. This program contains a list of all schedules (optimization sequences) that were evaluated for it. To create training examples, one must iterate through each program and then through its schedules_list.
Here is how you can use the streamer to create (program, schedule, performance) tuples.
```python
import numpy as np

# (pact25_train_path is defined in the download step)
data_stream = stream_jsonl_gz(pact25_train_path)

training_examples = []
for processed_count, program in enumerate(data_stream):
    # iterate over the first 100 programs only
    if processed_count >= 100:
        break

    program_features = program['program_annotation']
    initial_time = program['initial_execution_time']

    for schedule in program['schedules_list']:
        schedule_features = schedule  # Or a subset of its fields

        # The label is the median of the 30 execution times.
        # Here we compute speedup over the un-optimized version.
        median_time = np.median(schedule['execution_times'])
        speedup = initial_time / median_time

        training_examples.append({
            "program_features": program_features,
            "schedule_features": schedule_features,
            "speedup": speedup
        })

print(f"Created {len(training_examples)} tuples from {processed_count} programs.")
```
### Example 2: Finding the Best Schedule per Program
The following example shows how to find the best speedup achieved for each program:
```python
import numpy as np

# (pact25_train_path is defined in the download step)
data_stream = stream_jsonl_gz(pact25_train_path)

# Iterate through a few programs and find the best schedule for each
num_programs_to_process = 5

for processed_count, program in enumerate(data_stream):
    if processed_count >= num_programs_to_process:
        break

    program_name = program['program_name']
    initial_time = program['initial_execution_time']

    # Handle cases where the initial run might have failed
    if initial_time is None:
        print(f"\nProgram: {program_name} has no initial time. Skipping.")
        continue

    best_schedule_info = None
    min_time = initial_time

    for schedule in program['schedules_list']:
        # Ensure execution times are valid before calculating the median
        if not schedule.get('execution_times'):
            continue
        current_time = np.median(schedule['execution_times'])
        if current_time < min_time:
            min_time = current_time
            best_schedule_info = schedule['sched_str']

    speedup = initial_time / min_time if min_time > 0 else float('inf')

    print(f"\nProgram: {program_name}")
    print(f"  - Initial Time: {initial_time:.4f} ms")
    if best_schedule_info:
        print(f"  - Best Found Time: {min_time:.4f} ms (Speedup: {speedup:.2f}x)")
        print(f"  - Best Schedule: {best_schedule_info}")
    else:
        print("  - No better schedule found in the dataset.")
```
## Dataset Structure

Each row in the dataset represents a single synthetic program and contains all optimization schedules explored for it.

A sample JSONL entry:
```json
{
  "program_name": "function12345",
  "program_annotation": {
    "memory_size": 4.19,
    "iterators": { "...": "..." },
    "computations": { "...": "..." },
    "buffers": { "...": "..." }
  },
  "initial_execution_time": 1393.751,
  "schedules_list": [
    {
      "execution_times": [451.234, 465.112, 458.543, "..."],
      "sched_str": "F({C0,C1},1)T2({C0},L2,L3,32,32)...",
      "fusions": [["comp00", "comp01", 1]],
      "tree_structure": { "...": "..." },
      "comp00": {
        "tiling": {"tiling_depth": 2, "tiling_dims": ["i0", "i1"], "tiling_factors": [32, 32]},
        "unrolling_factor": null,
        "parallelized_dim": null,
        "transformations_list": [ [1, 0, 1, 0, "..."] ]
      },
      "comp01": {
        "...": "..."
      }
    },
    { "...": "..." }
  ]
}
```
### Top-Level Fields

- `program_name` (string): A unique identifier for the synthetic program (e.g., "function684979").
- `program_annotation` (dict): A detailed, structured representation of the original, untransformed program. This serves as the primary source for program feature engineering.
- `initial_execution_time` (float): The median execution time (in ms) of the program before any optimizations.
- `schedules_list` (list of dicts): A list of all optimization sequences explored for this program. Each dictionary in the list details a unique schedule and its performance.
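Reusing the streamer from Step 2, these top-level fields are already enough to compute quick corpus statistics. A minimal sketch:

```python
# Summary statistics over the first 1,000 records, using only top-level fields.
n_programs = 0
n_schedules = 0
n_missing_baseline = 0

for program in stream_jsonl_gz(pact25_train_path):
    if n_programs >= 1000:
        break
    n_programs += 1
    n_schedules += len(program["schedules_list"])
    if program["initial_execution_time"] is None:
        n_missing_baseline += 1

print(f"{n_schedules} schedules across {n_programs} programs "
      f"({n_missing_baseline} programs without a baseline time)")
```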
### The `program_annotation` Dictionary
This object contains all the static information about the source program.
- `memory_size` (float): The total memory footprint of all buffers in megabytes.
- `iterators` (dict): Contains the full loop nest hierarchy of the program. Each key is an iterator name (e.g., `i0`), and the value contains its `lower_bound`, `upper_bound`, `parent_iterator`, and `child_iterators`.
- `computations` (dict): Contains all computational statements. Each key is a computation name (e.g., `comp00`), and the value contains its properties, including:
  - `iterators`: The list of loops this computation is nested in.
  - `write_access_relation`: A string representing the write access pattern.
  - `accesses`: A list of all read memory accesses.
  - `expression_representation`: A tree-based representation of the arithmetic expression.
- `buffers` (dict): Contains metadata for all data arrays (buffers) used in the program, including their dimensions, data types, and whether they are inputs or outputs.
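As an example of working with this structure, the following sketch walks the `iterators` hierarchy to compute each loop's nesting depth. It relies only on the `parent_iterator` field described above; the helper name is ours:

```python
def loop_nest_depths(program_annotation):
    """Compute the nesting depth of every iterator by following parent links."""
    iterators = program_annotation["iterators"]

    def depth(name):
        d = 1
        parent = iterators[name].get("parent_iterator")
        while parent:  # walk up until a root iterator is reached
            d += 1
            parent = iterators[parent].get("parent_iterator")
        return d

    return {name: depth(name) for name in iterators}

# Example (assuming `program` comes from the stream in Step 2):
# print(loop_nest_depths(program["program_annotation"]))
```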
### The `schedules_list` Entries
Each element in this list represents one complete optimization schedule applied to the program.
- `execution_times` (list of float): A list of 30 raw execution time measurements (in ms) for this specific schedule. The ground-truth label for ML models is typically derived from this list (e.g., by taking the median).
- `sched_str` (string): A human-readable summary string of the transformations applied in this schedule (e.g., `I(L0,L1)P(L0)U(L3,8)`).
- `fusions` (list): A list detailing any loop fusion transformations. Each entry is a list of `[comp_1, comp_2, fusion_level]`.
- `tree_structure` (dict): Represents the program's loop nest structure after fusion has been applied.
- Computation-specific transformations (dict): For each computation in the program (e.g., `comp00`, `comp01`), there is a key holding a dictionary of the transformations applied to it:
  - `tiling` (dict): Details on tiling, including `tiling_depth`, `tiling_dims`, and `tiling_factors`.
  - `unrolling_factor` (int): The factor used for loop unrolling (if applied).
  - `parallelized_dim` (string): The name of the loop that was parallelized (if applied).
  - `transformations_list` (list): Each element in the list is a vector representing one affine transformation (interchange, reversal, or skewing). The order of vectors defines the order of application.
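As a concrete illustration of how these fields fit together, the sketch below builds a short summary of one `schedules_list` entry. It touches only the keys documented above; the helper name and the output layout are ours:

```python
def summarize_schedule(schedule, computation_names):
    """Build a short, human-readable summary of one schedules_list entry."""
    summary = {"sched_str": schedule.get("sched_str"),
               "n_fusions": len(schedule.get("fusions") or [])}
    for comp in computation_names:               # e.g. ["comp00", "comp01"]
        info = schedule.get(comp, {})
        tiling = info.get("tiling")
        summary[comp] = {
            "tiled": tiling is not None,
            "tiling_factors": tiling["tiling_factors"] if tiling else None,
            "unrolling_factor": info.get("unrolling_factor"),
            "parallelized_dim": info.get("parallelized_dim"),
            "n_affine_transforms": len(info.get("transformations_list") or []),
        }
    return summary

# Example (assuming `program` comes from the stream in Step 2):
# comps = list(program["program_annotation"]["computations"].keys())
# print(summarize_schedule(program["schedules_list"][0], comps))
```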
#### `transformations_list` format
Each element in the list is a fixed-length (16-element) integer vector representing one affine transformation. The order of vectors in the list determines the order of application.

The first element of the vector (`vector[0]`) is a type tag that specifies the transformation:
- `1`: Loop Interchange
- `2`: Loop Reversal
- `3`: Loop Skewing

The meaning of the subsequent elements depends on the type tag:
- If type is `1` (Interchange): `vector[1]` and `vector[2]` specify the two loop levels (as integer indices) to be interchanged. Other elements are unused.
- If type is `2` (Reversal): `vector[3]` specifies the loop level (as an integer index) to be reversed. Other elements are unused.
- If type is `3` (Skewing): `vector[4]`, `vector[5]`, and `vector[6]` specify the three loop levels (as integer indices) involved in the skewing transformation. `vector[7]` through `vector[15]` specify the nine integer parameters of the 3x3 skewing submatrix.
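As an illustration, a small helper can decode such a vector into a readable description by following the layout above (the function name and output strings are ours, not part of the dataset):

```python
def decode_transformation(vector):
    """Decode one 16-element transformation vector into a readable description."""
    tag = vector[0]
    if tag == 1:   # interchange
        return f"Interchange loops {vector[1]} and {vector[2]}"
    if tag == 2:   # reversal
        return f"Reverse loop {vector[3]}"
    if tag == 3:   # skewing
        loops = vector[4:7]
        matrix = [vector[7:10], vector[10:13], vector[13:16]]
        return f"Skew loops {loops} with 3x3 matrix {matrix}"
    return f"Unknown transformation tag {tag}"

# Example:
print(decode_transformation([1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]))
# -> "Interchange loops 0 and 1"
```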
## Dataset Creation

### Generation Pipeline
The data was generated using a three-stage pipeline:
- Synthetic Program Generation: A randomized generator created a diverse corpus of polyhedral programs with varied loop structures, memory access patterns, and computational complexities.
- Transformation Space Sampling: We used the beam search algorithm from the LOOPer autoscheduler to explore and sample meaningful optimization sequences for each program. This "relevance-guided" strategy ensures the dataset focuses on transformations a real-world compiler would consider.
- Performance Label Generation: Each `(program, schedule)` pair was compiled with Tiramisu and executed on a dual-socket Intel Xeon E5-2695 v2 system. Each version was run up to 30 times to collect a stable distribution of execution times.
### Diversity Analysis
A quantitative diversity analysis was performed to validate the dataset's quality. Using normalized Tree Edit Distance (nTED) to measure structural similarity between programs, the analysis showed that:
- LOOPerSet does not contain any accidental replications of PolyBench benchmarks.
- The dataset covers a broader and more varied structural space than existing benchmark suites.
Full details are available in our companion paper.
## Citation Information
If you use this dataset, please cite the following paper:
```bibtex
@misc{merouani2025looperset,
      title={LOOPerSet: A Large-Scale Dataset for Data-Driven Polyhedral Compiler Optimization},
      author={Massinissa Merouani and Afif Boudaoud and Riyadh Baghdadi},
      year={2025},
      eprint={2510.10209},
      archivePrefix={arXiv},
      primaryClass={cs.PL},
      url={https://arxiv.org/abs/2510.10209},
}
```
If you are building upon or comparing against the LOOPer cost model, please cite our PACT '25 paper:
```bibtex
@misc{merouani24looper,
      title={LOOPer: A Learned Automatic Code Optimizer For Polyhedral Compilers},
      author={Massinissa Merouani and Khaled Afif Boudaoud and Iheb Nassim Aouadj and Nassim Tchoulak and Islem Kara Bernou and Hamza Benyamina and Fatima Benbouzid-Si Tayeb and Karima Benatchba and Hugh Leather and Riyadh Baghdadi},
      year={2025},
      eprint={2403.11522},
      archivePrefix={arXiv},
      primaryClass={cs.PL},
      url={https://arxiv.org/abs/2403.11522},
}
```
## License
This dataset is licensed under the Creative Commons Attribution 4.0 International (CC-BY 4.0) License.