from dataclasses import dataclass
from enum import Enum


@dataclass
class Task:
    benchmark: str
    metric: str
    col_name: str


# Init: update with your specific task keys
class Tasks(Enum):
    # task_key in the json file, metric_key in the json file, name to display in the leaderboard
    task0 = Task("logiqa", "delta_abs", "LogiQA Δ")
    task1 = Task("logiqa2", "delta_abs", "LogiQA2 Δ")
    task2 = Task("lsat-ar", "delta_abs", "LSAT-ar Δ")
    task3 = Task("lsat-lr", "delta_abs", "LSAT-lr Δ")
    task4 = Task("lsat-rc", "delta_abs", "LSAT-rc Δ")


# METRICS = list(set([task.value.metric for task in Tasks]))
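# Illustrative only (an addition, not part of the original template): the Tasks enum
# can be iterated in the same way to derive the leaderboard's display columns, e.g.:
# COLS = [task.value.col_name for task in Tasks]  # ["LogiQA Δ", "LogiQA2 Δ", ...]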
logo1_url = "https://raw.githubusercontent.com/logikon-ai/cot-eval/main/assets/AI2_Logo_Square.png"
logo2_url = "https://raw.githubusercontent.com/logikon-ai/cot-eval/main/assets/logo_logikon_notext_withborder.png"
LOGOS = f'<div style="display: flex; justify-content: center;"><a href="https://allenai.org/"><img src="{logo1_url}" alt="AI2" style="width: 30vw; min-width: 20px; max-width: 60px;"></a> <a href="https://logikon.ai"><img src="{logo2_url}" alt="Logikon AI" style="width: 30vw; min-width: 20px; max-width: 60px; margin-left: 10px;"></a></div>'

# Your leaderboard name
TITLE = f'<h1 align="center" id="space-title"> Open CoT Leaderboard</h1> {LOGOS}'
# What does your leaderboard evaluate?
INTRODUCTION_TEXT = """
The Open CoT Leaderboard tracks the reasoning skills of LLMs, measured as their ability to generate **effective chain-of-thought reasoning traces**.

The leaderboard reports **accuracy gains** achieved by using [chain-of-thought](https://logikon.ai/docs/delib_prompting) (CoT), i.e.: _accuracy gain Δ_ = _accuracy with CoT_ − _accuracy w/o CoT_.

Detailed model-specific results can be explored with the [Open CoT Dashboard](https://huggingface.co/spaces/cot-leaderboard/open-cot-dashboard). See the "About" tab for background information and motivation.
"""
# Which evaluations are you running? How can people reproduce what you have?
LLM_BENCHMARKS_TEXT = f"""
## How it works (roughly)

To assess the reasoning skill of a given `model`, we carry out the following steps for each `task` (test dataset) and different CoT `regimes`. (A CoT `regime` consists of a prompt chain and the decoding parameters used to generate a reasoning trace.)

1. `model` generates CoT reasoning traces for all problems in the test dataset according to `regime`.
2. `model` answers the test dataset problems, and we record the resulting _baseline accuracy_.
3. `model` answers the test dataset problems _with the reasoning traces appended_ to the prompt, and we record the resulting _CoT accuracy_.
4. We compute the _accuracy gain Δ_ = _CoT accuracy_ − _baseline accuracy_ for the given `model`, `task`, and `regime`.

Each `regime` yields a different _accuracy gain Δ_, and the leaderboard reports (for every `model`/`task`) the best Δ achieved by any regime, as the sketch below illustrates. All models are evaluated against the same set of regimes.
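For concreteness, here is a minimal, illustrative sketch of how Δ is computed and aggregated over regimes (the function and variable names are ours for illustration, not the `cot-eval` API):

```python
# Illustrative sketch only; names are hypothetical.
def accuracy_gain(cot_accuracy: float, baseline_accuracy: float) -> float:
    # accuracy gain Δ = CoT accuracy − baseline accuracy
    return cot_accuracy - baseline_accuracy

# One Δ per CoT regime for a given model/task pair ...
deltas = [accuracy_gain(0.58, 0.52), accuracy_gain(0.55, 0.52), accuracy_gain(0.61, 0.52)]
# ... and the leaderboard reports the best of these gains.
best_delta = max(deltas)
```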
A notebook with detailed result exploration and visualization is available [here](https://github.com/logikon-ai/cot-eval/blob/main/notebooks/CoT_Leaderboard_Results_Exploration.ipynb).

## How is it different from other leaderboards?

Performance leaderboards like the [🤗 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) or [YALL](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard) do a great job in ranking models according to task performance.

Unlike these leaderboards, the Open CoT Leaderboard assesses a model's ability to effectively reason about a `task`:

### 🤗 Open LLM Leaderboard

* a. Can `model` solve `task`?
* b. Metric: absolute accuracy.
* c. Measures `task` performance.
* d. Covers a broad spectrum of `tasks`.

### Open CoT Leaderboard

* a. Can `model` do CoT to improve in `task`?
* b. Metric: relative accuracy gain.
* c. Measures ability to reason (about `task`).
* d. Focuses on critical thinking `tasks`.

## Test dataset selection (`tasks`)
The test dataset problems in the CoT Leaderboard can be solved through clear thinking alone; no specific knowledge is required to do so. They are subsets of the [AGIEval benchmark](https://github.com/ruixiangcui/AGIEval) and are re-published as [`logikon-bench`](https://huggingface.co/datasets/logikon/logikon-bench). The `logiqa` dataset has been newly translated from Chinese to English. The snippet below shows one way to load the test data.
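As an illustration, the test data can be loaded with the `datasets` library (a hedged sketch: we assume the subset names match the task keys listed above and that the evaluation split is called `test`):

```python
# Illustrative only; subset and split names are assumptions.
from datasets import load_dataset

logiqa = load_dataset("logikon/logikon-bench", "logiqa", split="test")
print(logiqa[0])  # inspect the first problem
```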
## Reproducibility

To learn more about the evaluation pipeline and to reproduce our results, check out the [cot-eval](https://github.com/logikon-ai/cot-eval) repository.

## Acknowledgements

We're grateful to community members for running evaluations and reporting results. To contribute, join the [`cot-leaderboard`](https://huggingface.co/cot-leaderboard) organization.
"""
EVALUATION_QUEUE_TEXT = """
## Some good practices before submitting a model

### 1) Make sure you can load your model and tokenizer with `vLLM`:

```python
from vllm import LLM, SamplingParams

prompts = [
    "Hello, my name is",
    "The president of the United States is",
    "The capital of France is",
    "The future of AI is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="<USER>/<MODEL>")
outputs = llm.generate(prompts, sampling_params)
```

If this step fails, follow the error messages to debug your model before submitting it. It's likely your model has been improperly uploaded.

Note: make sure your model is public!

### 2) Convert your model weights to [safetensors](https://huggingface.co/docs/safetensors/index)

It's a new format for storing weights which is safer and faster to load and use. It will also allow us to add the number of parameters of your model to the `Extended Viewer`!
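For instance, here is a minimal sketch of re-saving an existing checkpoint with safetensors serialization via `transformers` (the model id and output path are placeholders):

```python
# Illustrative sketch; "<USER>/<MODEL>" and the output directory are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("<USER>/<MODEL>")
tokenizer = AutoTokenizer.from_pretrained("<USER>/<MODEL>")

model.save_pretrained("model-safetensors", safe_serialization=True)
tokenizer.save_pretrained("model-safetensors")
```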
### 3) Make sure your model has an open license!

This is a leaderboard for Open LLMs, and we'd love for as many people as possible to know they can use your model 🤗

### 4) Fill up your model card

When we add extra information about models to the leaderboard, it will be automatically taken from the model card.

## Is your model stuck in the pending queue?

We're populating the Open CoT Leaderboard step by step. The idea is to grow a diverse and informative sample of the LLM space. Plus, with limited compute, we're currently prioritizing models that are popular, promising, and relatively small.
"""
CITATION_BUTTON_LABEL = "Copy the following snippet to cite these results"
CITATION_BUTTON_TEXT = r"""
Betz, G., Cacean, S., and Richardson, K. (2024). Open CoT Leaderboard. Retrieved from https://huggingface.co/spaces/logikon/open_cot_leaderboard
"""