---
license: other
license_name: adobe-research-license
license_link: LICENSE
language:
- en
pretty_name: NoLiMa
---

# NoLiMa: Long-Context Evaluation Beyond Literal Matching
This repository contains the data associated with our **ICML 2025** paper, "[NoLiMa: Long-Context Evaluation Beyond Literal Matching](https://arxiv.org/abs/2502.05167)".

## Abstract
Recent large language models (LLMs) support long contexts ranging from 128K to 1M tokens. A popular method for evaluating these capabilities is the needle-in-a-haystack (NIAH) test, which involves retrieving a "needle" (relevant information) from a "haystack" (long irrelevant context). Extensions of this approach include increasing distractors, fact chaining, and in-context reasoning. However, in these benchmarks, models can exploit existing literal matches between the needle and haystack to simplify the task. To address this, we introduce **NoLiMa**, a benchmark extending NIAH with a carefully designed needle set, where questions and needles have **minimal lexical overlap, requiring models to infer latent associations to locate the needle within the haystack**. We evaluate 12 popular LLMs that claim to support contexts of at least 128K tokens. While they perform well in short contexts (<1K), performance degrades significantly as context length increases. At 32K, for instance, 10 models drop below 50% of their strong short-length baselines. Even GPT-4o, one of the top-performing exceptions, experiences a reduction from an almost-perfect baseline of 99.3% to 69.7%. Our analysis suggests these declines stem from the increased difficulty the attention mechanism faces in longer contexts when literal matches are absent, making it harder to retrieve relevant information.

## Results
| Models               | Claimed Length | Effective Length | Base Score<br>(×0.85: Threshold) | 1K  | 2K  | 4K  | 8K  | 16K | 32K |
|----------------------|:-------------:|:---------------:|:-----------------------:|:---:|:---:|:---:|:---:|:---:|:---:|
| GPT-4o              | 128K          | 8K              | 99.3 (84.4)             | <ins>98.1</ins> | <ins>98.0</ins> | <ins>95.7</ins> | <ins>89.2</ins> | 81.6 | 69.7 |
| Llama 3.3 70B       | 128K          | 2K              | 97.3 (82.7)             | <ins>94.2</ins> | <ins>87.4</ins> | 81.5 | 72.1 | 59.5 | *42.7* |
| Llama 3.1 405B      | 128K          | 2K              | 94.7 (80.5)             | <ins>89.0</ins> | <ins>85.0</ins> | 74.5 | 60.1 | 48.4 | *38.0* |
| Llama 3.1 70B       | 128K          | 2K              | 94.5 (80.3)             | <ins>91.0</ins> | <ins>81.8</ins> | 71.2 | 62.7 | 51.8 | *43.2* |
| Gemini 1.5 Pro      | 2M            | 2K              | 92.6 (78.7)             | <ins>86.4</ins> | <ins>82.7</ins> | 75.4 | 63.9 | 55.5 | 48.2 |
| Jamba 1.5 Mini      | 256K          | <1K             | 92.4 (78.6)             | 76.3 | 74.1 | 70.8 | 62.2 | 52.7 | *43.6* |
| Command R+          | 128K          | <1K             | 90.9 (77.3)             | 77.0 | 73.5 | 66.3 | *39.5* | *21.3* | *7.4* |
| Llama 4 Maverick 🆕 | 1M            | 2K             | 90.1 (76.6)             | <ins>81.6</ins>  | <ins>78.3</ins> | 68.8 | ⏳ | ⏳ | ⏳ |
| Gemini Flash 2.0 🆕 | 1M            | 4K             | 89.4 (76.0)             | <ins>87.7</ins> | <ins>87.5</ins> | <ins>77.9</ins> | 64.7 | 48.2 | *41.0* |
| Gemma 3 27B 🆕      | 128K          | <1K             | 88.6 (75.3)             | 73.3 | 65.6 | 48.1 | *32.7* | *20.2* | *9.5* |
| Mistral Large 2     | 128K          | 2K              | 87.9 (74.7)             | <ins>86.1</ins> | <ins>85.5</ins> | 73.3 | 51.5 | *32.6* | *18.7* |
| Claude 3.5 Sonnet   | 200K          | 4K              | 87.6 (74.4)             | <ins>85.4</ins> | <ins>84.0</ins> | <ins>77.6</ins> | 61.7 | 45.7 | *29.8* |
| Gemma 3 12B 🆕      | 128K          | 1K              | 87.4 (74.3)             | <ins>74.7</ins> | 61.8 | *39.9* | *27.4* | *16.8* | *7.3* |
| Gemini 1.5 Flash    | 1M            | <1K             | 84.7 (72.0)             | 68.6 | 61.6 | 51.0 | 44.4 | *35.5* | *28.6* |
| GPT-4o mini         | 128K          | <1K             | 84.9 (72.2)             | 67.7 | 58.2 | 44.1 | *32.6* | *20.6* | *13.7* |
| Llama 4 Scout 🆕    | 10M           | 1K              | 81.7 (69.4)             | <ins>72.3</ins> | 61.8 | 50.8 | *35.5* | *26.9* | *21.6* |
| Llama 3.1 8B        | 128K          | 1K              | 76.7 (65.2)             | <ins>65.7</ins> | 54.4 | 44.1 | *31.9* | *22.6* | *14.2* |
| Gemma 3 4B 🆕       | 128K          | <1K              | 73.6 (62.6)             | 50.3 | *35.3* | *16.4* | *7.5* | *2.3* | *0.9* |

This table presents the performance results of selected models on NoLiMa tests. The **base score** represents a model’s accuracy on the task at short contexts (250, 500, and 1K tokens) and serves as a controlled reference to measure performance degradation at longer contexts.
The **effective length** is defined as the longest context where a model maintains at least 85% of its base score. Scores above this threshold are <ins>underlined</ins>, while scores dropping below 50% of the base score are *italicized*.
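
As a worked example of this definition, here is a small sketch (illustrative only, not part of the released evaluation code) that recovers the effective length from one row of the table:

```python
# Illustrative helper: reproduces the "effective length" definition above
# from a model's per-context-length scores. Names and structure are ours.
BASE_FRACTION = 0.85  # effective-length threshold: >= 85% of the base score

def effective_length(base_score: float, scores: dict[int, float]) -> int | None:
    """Longest context length whose score stays at or above 85% of the base score."""
    passing = [length for length, score in scores.items()
               if score >= BASE_FRACTION * base_score]
    return max(passing) if passing else None

# GPT-4o row from the table above (scores keyed by context length in tokens).
gpt4o_scores = {1_000: 98.1, 2_000: 98.0, 4_000: 95.7, 8_000: 89.2, 16_000: 81.6, 32_000: 69.7}
print(effective_length(99.3, gpt4o_scores))  # -> 8000, matching the reported 8K
```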

#### ✨ Updates:

- [2025-04-10]: Added evaluation results for the Gemma 3 models (4B/12B/27B), Gemini 2.0 Flash, and Llama 4 Scout. (Llama 4 Maverick evaluation in progress... ⏳)

### NoLiMa-Hard Results
| Models                | Base Score | 4K  | 8K  | 16K | 32K |
|-----------------------|:---------:|:---:|:---:|:---:|:---:|
| **Llama 3.3 70B**     |           |     |     |     |     |
| - w/o CoT            | 98.3       | 55.5 | *37.2* | *16.7* | *8.9* |
| - w/ CoT             | 97.1       | 73.0 | 51.2 | *31.8* | *10.1* |
| **Reasoning Models**  |           |     |     |     |     |
| GPT-o1               | 99.9       | 92.0 | 78.0 | 60.1 | *31.1* |
| GPT-o3 Mini          | 98.8       | 52.8 | *36.9* | *25.5* | *18.9* |
| DeepSeek R1-Distill-Llama-70B   | 99.9       | 91.4 | 75.5 | *49.4* | *20.7* |

This table presents the performance results of selected reasoning models on **NoLiMa-Hard**, a subset of the original NoLiMa needle set containing the 10 most challenging question-needle pairs from previous evaluations. 
Scores dropping below 50% of the base score are *italicized*.

## Evaluation

This Hugging Face repository contains all the data needed to evaluate models on the NoLiMa benchmark, including the NoLiMa needle set and the haystacks.

To access the evaluation script and more information, please refer to the [NoLiMa GitHub repository](https://github.com/adobe-research/NoLiMa).
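
To fetch the data locally, the following is a minimal sketch using `huggingface_hub` (an assumption on tooling; the repository id below is a placeholder, so substitute this dataset's actual Hub path):

```python
# Minimal download sketch. Assumes the `huggingface_hub` package is installed.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="ORG_OR_USER/NoLiMa",  # placeholder id; use this dataset's actual Hub path
    repo_type="dataset",
    local_dir="./NoLiMa_data",
)
print(local_path)  # directory containing haystack/ and needlesets/
```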

## Dataset Structure
The dataset is structured as follows (a short loading sketch follows the list):
- `haystack/` : Contains the haystack data
    - `books.tar.gz` : Contains the books used to generate the haystacks. It can also be used to create new shuffled haystacks.
    - `rand_shuffle/` : Contains the shuffled haystacks that were used in the evaluation.
- `needlesets/` : Contains the NoLiMa needle sets:
    - `needle_set.json` : The main NoLiMa needle set.
    - `needle_set_hard.json` : The NoLiMa-Hard needle set; a subset of the main needle set containing the 10 most challenging question-needle pairs.
    - `needle_set_ONLYDirect.json` : The main needle set with only direct questions.
    - `needle_set_MC.json` : The main needle set formatted as multiple-choice questions.
    - `needle_set_w_CoT.json` : The main needle set with CoT task templates.
    - `needle_set_w_distractor.json` : The main needle set with distractors.
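
A short loading sketch for the layout above (illustrative only: it unpacks the books archive and opens the main needle set without assuming anything about the JSON schema; paths match the download sketch in the previous section):

```python
# Illustrative sketch: unpack the haystack books and peek at the main needle set.
import json
import tarfile
from pathlib import Path

data_root = Path("./NoLiMa_data")  # wherever the dataset files were placed

# Extract the books used to build (or re-shuffle) haystacks.
with tarfile.open(data_root / "haystack" / "books.tar.gz", "r:gz") as tar:
    tar.extractall(data_root / "haystack" / "books")

# Load the main needle set; only the top-level structure is inspected here,
# since the exact schema is documented in the GitHub repository.
with open(data_root / "needlesets" / "needle_set.json", encoding="utf-8") as f:
    needle_set = json.load(f)

print(type(needle_set).__name__, len(needle_set))
```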

## GitHub Repo & Paper
For more information about **NoLiMa**, refer to:

- 📄 Paper: "[NoLiMa: Long-Context Evaluation Beyond Literal Matching](https://arxiv.org/abs/2502.05167)"
- 🔗 GitHub Repo: [NoLiMa GitHub repository](https://github.com/adobe-research/NoLiMa)

## License

The evaluation code and needle set data are licensed under the [Adobe Research License](LICENSE). The license prohibits commercial use and allows non-commercial research use. For details about the haystack data, please refer to the [haystack/LICENSES.md](haystack/LICENSES.md) file.

## Cite
If you use the **NoLiMa** dataset, filtering pipeline, or code, please cite the paper:
```bibtex
@inproceedings{modarressi2025nolima,
  title={NoLiMa: Long-Context Evaluation Beyond Literal Matching},
  author={Modarressi, Ali and Deilamsalehy, Hanieh and Dernoncourt, Franck and Bui, Trung and Rossi, Ryan A. and Yoon, Seunghyun and Schütze, Hinrich},
  booktitle={Forty-second International Conference on Machine Learning},
  year={2025},
  url={https://arxiv.org/abs/2502.05167}
}
```