update dataset links

README.md CHANGED

@@ -257,14 +257,6 @@ model-index:
 
 ---
 
-## External Benchmark
-
-This model, **SpaceOm**, was presented in the paper [Spatial Mental Modeling from Limited Views](https://huggingface.co/papers/2506.21458).
-
-Project Page: [https://mll-lab-nu.github.io/mind-cube](https://mll-lab-nu.github.io/mind-cube)
-
-Code: [https://github.com/mll-lab-nu/MindCube](https://github.com/mll-lab-nu/MindCube)
-
 [](https://remyx.ai/?model_id=SpaceThinker-Qwen2.5VL-3B&sha256=abc123def4567890abc123def4567890abc123def4567890abc123def4567890)
 
 # SpaceOm
@@ -275,6 +267,7 @@ Code: [https://github.com/mll-lab-nu/MindCube](https://github.com/mll-lab-nu/Min
 
 - [🧠 Model Overview](#model-overview)
 - [📊 Evaluation & Benchmarks](#model-evaluation)
+- [🌐 External Benchmark](#external-benchmark)
 - [🏃‍♂️ Running SpaceOm](#running-spaceom)
 - [🏋️‍♂️ Training Configuration](#training-spaceom)
 - [📂 Dataset Info](#dataset-info)
@@ -286,8 +279,8 @@ Code: [https://github.com/mll-lab-nu/MindCube](https://github.com/mll-lab-nu/Min
 **SpaceOm** improves over **SpaceThinker** by adding:
 
 * the target module `o_proj` in LoRA fine-tuning
-* **SpaceOm** [dataset](https://huggingface.co/datasets/
-* **Robo2VLM-Reasoning** [dataset](https://huggingface.co/datasets/
+* **SpaceOm** [dataset](https://huggingface.co/datasets/remyxai/SpaceOm) for longer reasoning traces
+* **Robo2VLM-Reasoning** [dataset](https://huggingface.co/datasets/remyxai/Robo2VLM-Reasoning) for more robotics domain and MCVQA examples
 
 
 The choice to include `o_proj` among the target modules in LoRA finetuning was inspired by the study [here](https://arxiv.org/pdf/2505.20993v1), which argues for
@@ -555,7 +548,13 @@ the tested cog map output settings.
 See the [results](https://huggingface.co/datasets/salma-remyx/SpaceOm_MindCube_Results/tree/main) of the [MindCube benchmark](https://arxiv.org/pdf/2506.21458) evaluation from [Spatial Mental Modeling from Limited Views](https://arxiv.org/pdf/2506.21458).
 
 
+## External Benchmark
+
+This model, **SpaceOm**, was presented in the paper [Spatial Mental Modeling from Limited Views](https://huggingface.co/papers/2506.21458).
 
+Project Page: [https://mll-lab-nu.github.io/mind-cube](https://mll-lab-nu.github.io/mind-cube)
+
+Code: [https://github.com/mll-lab-nu/MindCube](https://github.com/mll-lab-nu/MindCube)
 
 ## Limitations
 
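The diff above adds `o_proj` (the attention output projection) to the LoRA target modules. A minimal sketch of what such an adapter configuration could look like, using PEFT-style keyword names; the rank, alpha, and dropout values here are assumptions for illustration, not the SpaceOm training recipe:

```python
# Hypothetical LoRA adapter settings. The only detail taken from the model
# card is that `o_proj` is included among the target modules alongside the
# usual q/k/v attention projections; all numeric values are assumed.
lora_config = {
    "r": 16,                  # adapter rank (assumed value)
    "lora_alpha": 32,         # scaling factor (assumed value)
    "lora_dropout": 0.05,     # (assumed value)
    "target_modules": ["q_proj", "k_proj", "v_proj", "o_proj"],
}

# With Hugging Face PEFT installed, these keys map onto LoraConfig directly:
#   from peft import LoraConfig
#   config = LoraConfig(task_type="CAUSAL_LM", **lora_config)
print("o_proj" in lora_config["target_modules"])  # → True
```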