
Dataset Card for Endovis2017

Dataset Description

Dataset Summary

The Endovis2017 dataset contains preprocessed data for surgical instrument segmentation in robotic endoscopic procedures. This dataset was part of the MICCAI 2017 EndoVis Challenge for robotic instrument segmentation.

The dataset includes high-resolution images from the da Vinci surgical system along with pixel-level segmentation annotations for surgical instruments. It is designed for training and evaluating computer vision models for surgical scene understanding and instrument tracking.

Supported Tasks

  • Image Segmentation: Pixel-level segmentation of surgical instruments in endoscopic images
  • Medical Image Analysis: Understanding surgical scenes and instrument types
  • Computer-Assisted Surgery: Real-time instrument detection and tracking

Languages

Not applicable (image dataset)

Dataset Structure

Data Instances

Each instance in the dataset contains:

{
    'image': PIL.Image,          # RGB endoscopic image
    'label': PIL.Image,          # Segmentation mask (grayscale)
    'image_id': str,             # Unique identifier
    'file_name': str,            # Original filename
    'split': str,                # 'train' or 'val'
    'relative_path': str,        # Path relative to dataset root
    'sequence_id': int           # Sequence/video ID (0 for train, 1-4 for val)
}

Data Fields

  • image: RGB endoscopic image (512 px wide in this preprocessed release)
  • label: Grayscale segmentation mask matching image dimensions
  • image_id: Unique string identifier for the image
  • file_name: Original filename (e.g., "seq_5_frame149.bmp")
  • split: Dataset split ("train" or "val")
  • relative_path: Path relative to dataset root directory
  • sequence_id: Integer identifying the surgical sequence (0 for training, 1-4 for validation sequences)
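Image ids in this release follow a `seq_<n>_frame<m>` naming pattern (e.g. `seq_5_frame149`). A small helper (illustrative only; `parse_image_id` is not part of the dataset or the `datasets` API) can recover the sequence and frame numbers:

```python
import re

def parse_image_id(image_id: str):
    """Split an id such as 'seq_5_frame149' into (sequence, frame) ints."""
    match = re.fullmatch(r"seq_(\d+)_frame(\d+)", image_id)
    if match is None:
        raise ValueError(f"unexpected image_id format: {image_id!r}")
    return int(match.group(1)), int(match.group(2))

print(parse_image_id("seq_5_frame149"))  # (5, 149)
```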

Data Splits

| Split | Examples |
| ----- | -------- |
| train | 1,800 |
| val   | 901 |
| **Total** | **2,701** |

The training set contains images from multiple surgical procedures, while the validation set is organized into 4 different sequences (val1-val4) representing different surgical scenarios.

Dataset Creation

Source Data

The dataset originates from the 2017 Robotic Instrument Segmentation Challenge held at MICCAI 2017.

Original Source: Zenodo Repository

Data Collection

Images were captured using the da Vinci surgical system during robotic-assisted surgical procedures. The dataset includes various instrument types and surgical scenarios to ensure model generalization.

Annotations

Pixel-level segmentation masks were manually annotated by experts. The annotations include:

  • Binary segmentation (instrument vs. background)
  • Part-level segmentation (shaft, wrist, claspers)
  • Instrument type classification
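As a sketch of the binary case, a grayscale mask can be collapsed to instrument-vs-background with NumPy. The 2×2 `mask` array below is a synthetic stand-in for a real mask, and the assumption that 0 encodes background should be checked against the actual annotations:

```python
import numpy as np

# Synthetic stand-in: np.array(label) on a real grayscale PIL mask
# yields an H x W array of class ids like this one
mask = np.array([[0, 1], [2, 0]], dtype=np.uint8)

# Collapse all nonzero class ids to a single "instrument" class
binary = (mask > 0).astype(np.uint8)   # 1 = instrument, 0 = background
print(binary.tolist())  # [[0, 1], [1, 0]]
```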

Personal and Sensitive Information

The dataset contains surgical video frames but does not include patient-identifiable information. All images show only the surgical field and instruments, not patients.

Considerations for Using the Data

Social Impact

This dataset enables research in computer-assisted surgery and robotic surgery, which can potentially:

  • Improve surgical outcomes through better instrument tracking
  • Enable automated surgical skill assessment
  • Advance autonomous surgical robotics

Bias and Limitations

  • Limited to da Vinci surgical system (may not generalize to other platforms)
  • Contains only certain types of surgical procedures
  • Annotation quality may vary across different sequences
  • Dataset size is relatively small compared to natural image datasets

Recommendations

Users should:

  • Test models on multiple surgical systems if deploying in production
  • Consider domain adaptation techniques for different surgical contexts
  • Validate performance on institution-specific data before clinical use
  • Be aware of potential biases toward specific instrument types and surgical scenarios

Usage

Loading the Dataset

from datasets import load_dataset

# Download and cache the full dataset
dataset = load_dataset("tyluan/Endovis2017")

# Access splits
train_data = dataset['train']
val_data = dataset['val']

# Get a sample
sample = train_data[0]
image = sample['image']  # PIL Image
label = sample['label']  # PIL Image (segmentation mask)

print(f"Image size: {image.size}")
print(f"Label size: {label.size}")

Streaming Mode (No Download)

For quick exploration without downloading the entire dataset:

from datasets import load_dataset

# Stream the dataset
dataset = load_dataset("tyluan/Endovis2017", streaming=True)

# Iterate over samples
for sample in dataset['train']:
    image = sample['image']
    label = sample['label']
    # Process sample...
    break  # Just show first sample

Using with PyTorch

from datasets import load_dataset
from torch.utils.data import DataLoader
from torchvision import transforms

# Load dataset
dataset = load_dataset("tyluan/Endovis2017", split="train")

# Define transforms (masks need nearest-neighbour resizing so class
# ids are not blended together by interpolation)
image_transform = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
])
label_transform = transforms.Compose([
    transforms.Resize((256, 256),
                      interpolation=transforms.InterpolationMode.NEAREST),
    transforms.PILToTensor(),
])

# Apply transforms
def apply_transforms(example):
    example['image'] = image_transform(example['image'])
    example['label'] = label_transform(example['label'])
    return example

dataset = dataset.map(apply_transforms)
dataset.set_format(type='torch', columns=['image', 'label'])

# Create DataLoader
dataloader = DataLoader(dataset, batch_size=8, shuffle=True)

# Iterate
for batch in dataloader:
    images = batch['image']  # Shape: [8, 3, 256, 256]
    labels = batch['label']  # Shape: [8, 1, 256, 256]
    # Train your model...
    break
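To evaluate predictions against these masks, a minimal intersection-over-union helper could look like the sketch below (`binary_iou` is not part of `datasets` or this dataset's tooling; it assumes both inputs are binary masks of the same shape):

```python
import numpy as np

def binary_iou(pred, target, eps=1e-7):
    """Intersection-over-union between two binary masks of equal shape."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    # eps guards against division by zero when both masks are empty
    return float((intersection + eps) / (union + eps))

pred = np.array([[1, 1], [0, 0]])
target = np.array([[1, 0], [0, 0]])
print(round(binary_iou(pred, target), 3))  # 0.5
```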

Integration with EasyMedSeg

This dataset is part of the EasyMedSeg framework:

from dataloader.image import Endovis2017Dataset

# Download mode (recommended)
dataset = Endovis2017Dataset(
    mode='download',
    split='train',
    hf_repo_id='tyluan/Endovis2017'
)

# Streaming mode
from dataloader.image import Endovis2017StreamingDataset

streaming_dataset = Endovis2017StreamingDataset(
    split='val',
    shuffle=True
)

Additional Information

Dataset Curators

Original dataset curated by the MICCAI 2017 EndoVis Challenge organizers.

HuggingFace version prepared by the EasyMedSeg team.

Licensing Information

This dataset is licensed under Creative Commons Attribution 4.0 International (CC BY 4.0).

When using this dataset, you must give appropriate credit to the original creators, provide a link to the license, and indicate if changes were made, as required by CC BY 4.0.

Citation Information

If you use this dataset in your research, please cite:

@article{allan2019endovis,
  title={2017 Robotic Instrument Segmentation Challenge},
  author={Allan, Max and Shvets, Alex and Kurmann, Thomas and Zhang, Zichen and Duggal, Rahul and Su, Yun-Hsuan and Rieke, Nicola and Laina, Iro and Kalavakonda, Niveditha and Bodenstedt, Sebastian and others},
  journal={arXiv preprint arXiv:1902.06426},
  year={2019}
}

Contributions

Thanks to:

  • MICCAI 2017 EndoVis Challenge organizers for creating the dataset
  • Original annotators for high-quality segmentation masks
  • EasyMedSeg team for preparing the HuggingFace version

Contact

For questions or issues with this HuggingFace version, please open an issue in the EasyMedSeg repository.

For questions about the original dataset, refer to the challenge website or the Zenodo repository.
