Clémentine committed
Commit · b838ed1
1 Parent(s): 01d1bbb
text reorg
content.py CHANGED (+5 -4)
@@ -3,12 +3,13 @@ TITLE = """<h1 align="center" id="space-title">GAIA Leaderboard</h1>"""
 INTRODUCTION_TEXT = """
 GAIA is a benchmark which aims at evaluating next-generation LLMs (LLMs with augmented capabilities due to added tooling, efficient prompting, access to search, etc). (See our paper for more details.)
 
-##
-GAIA is made of more than 450 non-trivial questions with an unambiguous answer, requiring different levels of tooling and autonomy to solve.
+## Data
+GAIA is made of more than 450 non-trivial questions with an unambiguous answer, requiring different levels of tooling and autonomy to solve.
+It is therefore divided into 3 levels, where level 1 should be breakable by very good LLMs, and level 3 indicates a strong jump in model capabilities. Each level is divided into a fully public dev set for validation, and a test set with private answers and metadata.
 
-
+GAIA data can be found in this space (https://huggingface.co/datasets/gaia-benchmark/GAIA). Questions are contained in `metadata.jsonl`. Some questions come with an additional file, which can be found in the same folder and whose id is given in the field `file_name`.
 
-
+## Submissions
 Results can be submitted for both validation and test. Scores are expressed as the percentage of correct answers for a given split.
 
 We expect submissions to be json-line files with the following format. The first two fields are mandatory, `reasoning_trace` is optional:
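
For readers of this diff, a minimal sketch of loading the questions described in the new "Data" paragraph with the `datasets` library. The config name `2023_all` and the column names `Question`, `Level` and `file_name` are assumptions based on the public dataset card, not part of this commit.

```python
# Hedged sketch: load the GAIA questions referenced in the "Data" paragraph.
# Assumptions (not stated in this commit): the gated dataset exposes a
# "2023_all" config and columns named "Question", "Level" and "file_name".
from datasets import load_dataset

# The dataset is gated, so this may require `huggingface-cli login` first.
gaia = load_dataset("gaia-benchmark/GAIA", "2023_all")

for row in gaia["validation"]:
    question = row["Question"]     # task statement from metadata.jsonl
    level = row["Level"]           # 1, 2 or 3
    attachment = row["file_name"]  # empty when no extra file accompanies the question
    print(level, question[:80], attachment)
```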
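And a hedged sketch of writing a submission in the json-lines layout described in the "Submissions" paragraph. Only `reasoning_trace` is named in this hunk; the two mandatory field names used below (`task_id`, `model_answer`) are hypothetical placeholders for illustration.

```python
# Hedged sketch of a json-lines submission file. Only `reasoning_trace` is
# named in this diff; "task_id" and "model_answer" are assumed field names.
import json

predictions = [
    {
        "task_id": "example-task-id",         # assumed mandatory field
        "model_answer": "42",                 # assumed mandatory field
        "reasoning_trace": "optional notes",  # optional per the introduction text
    },
]

with open("submission.jsonl", "w", encoding="utf-8") as f:
    for row in predictions:
        f.write(json.dumps(row) + "\n")  # one JSON object per line
```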