Dataset Preview
| text |
|---|
| 107,82,13,11 |
| 752,132,330,276 |
| 533,357,361,351 |
| … |

End of preview.
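Each preview row is a comma-separated quadruple; TrackingNet annotations follow the common `x,y,width,height` convention, with one bounding box per frame. A minimal parsing sketch (the helper name `parse_box` is illustrative, not part of the devkit):

```python
def parse_box(line):
    """Parse one annotation line 'x,y,w,h' into a tuple of ints."""
    x, y, w, h = (int(v) for v in line.strip().split(","))
    return x, y, w, h

print(parse_box("107,82,13,11"))  # (107, 82, 13, 11)
```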
TrackingNet devkit
This repository contains the data from the paper "TrackingNet: A Large-Scale Dataset and Benchmark for Object Tracking in the Wild".
Download from HuggingFace
Download splits
from huggingface_hub import snapshot_download
# Download TRAIN_0 split (90GB)
snapshot_download(repo_id="SilvioGiancola/TrackingNet", 
                  repo_type="dataset", revision="main",
                  local_dir="TrackingNet_HF", 
                  allow_patterns="*TRAIN_0/*")
# Download TEST split (35GB)
snapshot_download(repo_id="SilvioGiancola/TrackingNet", 
                  repo_type="dataset", revision="main",
                  local_dir="TrackingNet_HF", 
                  allow_patterns="*TEST/*")
# Download all TRAIN splits (1.2TB)
snapshot_download(repo_id="SilvioGiancola/TrackingNet", 
                  repo_type="dataset", revision="main",
                  local_dir="TrackingNet_HF", 
                  allow_patterns="*TRAIN*")
TrackingNet pip package
conda create -n TrackingNet python pip
pip install TrackingNet
Utility functions for TrackingNet
from TrackingNet.utils import getListSplit, getListSequence
# Get the list of codenames for the 12 training splits and the testing split
TrackingNetSplits = getListSplit()
print(TrackingNetSplits)
# returns ["TEST", "TRAIN_0", "TRAIN_1", "TRAIN_2", "TRAIN_3", "TRAIN_4", "TRAIN_5", "TRAIN_6", "TRAIN_7", "TRAIN_8", "TRAIN_9", "TRAIN_10", "TRAIN_11"]
# Get lists of tracking sequences
print(getListSequence(split=TrackingNetSplits[1])) # returns the tracking sequences in that split
print(getListSequence(split="TEST")) # returns the tracking sequences for testing
print(getListSequence(split=["TRAIN_0", "TRAIN_1"])) # returns the tracking sequences for train splits 0 and 1
print(getListSequence(split="TRAIN")) # returns the tracking sequences for all train splits
Downloading TrackingNet
from TrackingNet.Downloader import TrackingNetDownloader
from TrackingNet.utils import getListSplit
downloader = TrackingNetDownloader(LocalDirectory="path/to/TrackingNet")
for split in getListSplit():
    downloader.downloadSplit(split)
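Once a split is on disk, its per-sequence annotation files can be enumerated. A minimal sketch, assuming a hypothetical layout in which each split folder contains an `anno/` subfolder with one `.txt` file per tracking sequence (the function name `list_annotations` is illustrative, not part of the devkit):

```python
import os

def list_annotations(split_dir):
    """Return the sorted annotation filenames in a split folder.

    Assumes (hypothetically) that each split contains an `anno/`
    subfolder holding one .txt file per tracking sequence.
    """
    anno_dir = os.path.join(split_dir, "anno")
    if not os.path.isdir(anno_dir):
        return []
    return sorted(f for f in os.listdir(anno_dir) if f.endswith(".txt"))
```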