ABOUT_TEXT = """
# About the AfroBench Leaderboard

The **AfroBench Leaderboard** is a platform for evaluating multilingual language models across **64 African languages** and more than **15 diverse NLP tasks**. These tasks span **classification**, **reasoning**, **question answering**, **summarization**, and **machine translation**, and are grounded in more than **22 benchmark datasets** focused on low-resource and underrepresented languages.

The goals of this leaderboard are to:

- Highlight the performance of LLMs on African languages.
- Support diagnostic and task-level evaluation across different LLMs.
- Enable fair comparisons between open-source and closed models using both the full and lite subsets of the benchmark.

This leaderboard supports two main views:

- **AfroBench**: the full evaluation benchmark, organized by task, subtask, and dataset.
- **AfroBench-Lite**: a lightweight subset of the benchmark with a consistent set of languages across tasks, designed for efficient evaluation.

Each score is computed as the average across all selected columns and views, allowing flexible filtering and analysis.
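As an illustrative sketch of that averaging (hypothetical model names, column names, and scores, not the leaderboard's actual data), a per-model score over a chosen set of columns might look like:

```python
import pandas as pd

# Hypothetical score table: rows are models, columns are per-task scores.
scores = pd.DataFrame(
    {
        "model": ["model-a", "model-b"],
        "mt": [41.0, 37.5],
        "qa": [62.0, 58.0],
        "summ": [28.5, 30.0],
    }
).set_index("model")

selected = ["mt", "qa"]              # columns selected in the UI
avg = scores[selected].mean(axis=1)  # per-model average over the selection
print(avg)  # model-a: 51.5, model-b: 47.75
```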
---

## More Information

To learn more about the benchmark, datasets, task definitions, and evaluation procedures, please visit the official project site:

[AfroBench Website](https://mcgill-nlp.github.io/AfroBench/index.html)

You can also explore:

- [AfroBench Paper on arXiv](https://arxiv.org/abs/2311.07978)
- [AfroBench GitHub Repository](https://github.com/McGill-NLP/AfroBench)
"""
SUBMISSION_TEXT = """
Details on how to submit to the Hugging Face leaderboard are coming soon!
In the meantime, see how to run the evaluation in our [GitHub codebase](https://github.com/McGill-NLP/AfroBench).
"""
SUBMISSION_TEXT_2 = """
"""
SUBMISSION_TEXT_3 = """
"""