---
license: apache-2.0
language:
- en
---
# STEP: A Unified Spiking Transformer Evaluation Platform for Fair and Reproducible Benchmarking

<p align="center">
  <img src="imgs/STEP.jpg" alt="mp" style="width: 30%; max-width: 600px; min-width: 200px;" />
</p>

<p align="center">
  <img src="https://img.shields.io/badge/python-3.8%20%7C%203.9%20|%203.10-blue" alt="python"/>
  <img src="https://img.shields.io/badge/framework-BrainCog-blue" alt="Braincog"/>
  <img src="https://img.shields.io/badge/version-1.0.0-green" alt="Version"/>
  <img src="https://img.shields.io/badge/-continuous_integration-red" alt="Contiguous"/>
</p>

## ⚡ Introduction
This repository is the official **checkpoint repository** for STEP. It stores the model checkpoints, training logs, and corresponding configuration files (`.yaml`) used in STEP, so that researchers and other interested users can easily run, reproduce, and compare Spiking Transformer models.

For the complete STEP framework, including source code and tutorials, please refer to the [official GitHub repository](https://github.com/Fancyssc/STEP).
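
If you prefer to fetch individual files programmatically rather than browsing the file listing, the standard `huggingface_hub` client works here. The snippet below is an illustrative sketch only: the repository id and both file names are placeholders, so substitute the actual entries shown on this model page.

```python
# Illustrative sketch: download a STEP checkpoint and its config from the Hub.
# NOTE: repo_id and both file names are placeholders -- replace them with the
# actual repository id and file names listed on this model page.
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="user/STEP-checkpoints",       # placeholder repo id
    filename="spikformer_cifar10.pth",     # placeholder checkpoint name
)
cfg_path = hf_hub_download(
    repo_id="user/STEP-checkpoints",       # placeholder repo id
    filename="spikformer_cifar10.yaml",    # placeholder config name
)
print(ckpt_path, cfg_path)
```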

<!-- 
Built on top of **[BrainCog](https://github.com/BrainCog-X/Brain-Cog)**, this repository reproduces state-of-the-art Spiking Transformer models and offers a unified pipeline for **classification, segmentation, and object detection**. By standardizing data loaders, training routines, and logging, it enables fair, reproducible comparisons while remaining easy to extend with new models or tasks.

- **Modular Design** – Swap neuron models, encodings, or attention blocks with a few lines of code.
- **Multi-Task Ready** – Shared backbone, task-specific heads; evaluate *once*, report *everywhere*.
- **Cross-Framework Compatibility** – Runs on BrainCog, SpikingJelly, or BrainPy with a thin adapter layer.
- **End-to-End Reproducibility** – Version-locked configs and CI scripts guarantee "one-command" reruns.   -->

<!-- ### 📂 Task-Specific READMEs

| Task | Documentation |
|------|---------------|
| Classification | [cls/Readme.md](cls/Readme.md) |
| Segmentation   | [seg/Readme.md](seg/Readme.md) |
| Detection      | [det/Readme.md](det/Readme.md) |
 -->
<!-- ## 🔑 Key Features of STEP

<p align="center">
  <img src="imgs/bench.png" alt="mp" style="width: 75%; max-width: 600px; min-width: 200px;" />
</p>

- **Unified Benchmark for Spiking Transformers**  
  STEP offers a single, coherent platform for evaluating classification, segmentation, and detection models, removing fragmented evaluation pipelines and simplifying comparison across studies.

- **Highly Modular Architecture**  
  All major blocks (neuron models, input encodings, attention variants, surrogate gradients, and task heads) are implemented as swappable modules. Researchers can prototype new ideas by mixing and matching components without rewriting the training loop.

- **Broad Dataset Compatibility**  
  Out-of-the-box support spans static vision (ImageNet, CIFAR10/100), event-based neuromorphic data (DVS-CIFAR10, N-Caltech101), and sequential benchmarks. Data loaders follow a common interface, so adding a new dataset is typically a ~50-line effort.

- **Multi-Task Adaptation**  
  Built-in pipelines extend beyond image classification to dense prediction tasks. STEP seamlessly plugs Spiking Transformers into MMSeg (segmentation) and MMDet (object detection) heads such as FCN and FPN, enabling fair cross-task studies with minimal glue code.

- **Backend-Agnostic Implementation**  
  A thin abstraction layer makes the same model definition runnable on SpikingJelly, BrainCog, or BrainPy. This widens hardware and software coverage while promoting reproducible results across laboratories.

- **Reproducibility & Best-Practice Templates**  
  Every experiment ships with version-locked configs, deterministic seeds, and logging utilities. CI scripts validate that reported numbers can be reproduced with a single command, fostering transparent comparison and faster iteration.

> **TL;DR** STEP lowers the barrier to building, training, and fairly benchmarking Spiking Transformers, accelerating progress toward practical neuromorphic vision systems.
## Repository Structure

```plaintext
Spiking-Transformer-Benchmark/
├── cls/               # Classification submodule
│   ├── README.md
│   ├── configs/
│   ├── datasets/
│   └── ...
├── seg/               # Segmentation submodule
│   ├── README.md
│   ├── configs/
│   ├── mmseg/
│   └── ...
├── det/               # Object detection submodule
│   ├── README.md
│   ├── configs/
│   ├── mmdet/
│   └── ...
└── README.md
``` --> 

## 🚀 Quick Start

To get started, clone the STEP code repository and follow its setup instructions to install the required dependencies:

```bash
git clone https://github.com/Fancyssc/STEP.git
cd STEP
```
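
Once you have a checkpoint and its `.yaml` config from this repository, loading them is plain PyTorch plus PyYAML. The sketch below makes a few assumptions: the file names are placeholders, the checkpoint is a regular state dict (possibly wrapped under a `state_dict` key), and the model itself must be built from the STEP codebase (the `build_model` helper shown is hypothetical).

```python
# Minimal sketch, assuming a plain PyTorch state dict and a YAML config.
# File names are placeholders; the model class comes from the STEP codebase.
import torch
import yaml

with open("spikformer_cifar10.yaml") as f:      # placeholder config file
    cfg = yaml.safe_load(f)

state = torch.load("spikformer_cifar10.pth",    # placeholder checkpoint
                   map_location="cpu")
# Some training pipelines nest the weights under a "state_dict" (or "model")
# key; unwrap it if present.
if isinstance(state, dict) and "state_dict" in state:
    state = state["state_dict"]

# model = build_model(cfg)          # hypothetical helper from the STEP code
# model.load_state_dict(state)
print(f"config keys: {list(cfg)[:5]}, tensors: {len(state)}")
```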
<!-- 
### BrainCog Installation
For the BrainCog framework, we recommend installing it directly from GitHub with the following command:
```bash
pip install git+https://github.com/braincog-X/Brain-Cog.git
```


For the **seg** and **cls** tasks, different environment requirements apply. Please refer to the corresponding README files in each subdirectory for details.

> **Prerequisites**: Python 3.8 or above, PyTorch, and BrainCog.
 -->

## Contact & Collaboration

- **Questions or Feedback**  
  If you run into any issues, have questions about STEP, or simply want to share suggestions, please open a GitHub Issue or start a discussion thread. We monitor the repository regularly and aim to respond within a few business days.

- **Integrate Your Model**  
  Have an exciting Spiking Transformer variant or related module you'd like to see supported? We welcome external contributions! Open an Issue describing your model, its licensing, and any specific requirements, or email the maintainers. We'll coordinate with you to add the necessary adapters, documentation, and tests.

We look forward to working with the community to make STEP an ever-stronger platform for neuromorphic research.

## 📝 Citation
```bibtex
@misc{shen2025stepunifiedspikingtransformer,
      title={STEP: A Unified Spiking Transformer Evaluation Platform for Fair and Reproducible Benchmarking}, 
      author={Sicheng Shen and Dongcheng Zhao and Linghao Feng and Zeyang Yue and Jindong Li and Tenglong Li and Guobin Shen and Yi Zeng},
      year={2025},
      eprint={2505.11151},
      archivePrefix={arXiv},
      primaryClass={cs.NE},
      url={https://arxiv.org/abs/2505.11151}, 
}
```