GPQA Benchmark Evaluation
To reproduce the GPQA benchmark results reported in the paper, follow these steps:
- Clone the official OpenHands repository:
git clone https://github.com/All-Hands-AI/OpenHands.git
- Checkout the commit used for the evaluation:
git checkout 5a1ecbb50584c740ab4c1ae1bcafc32f29c2556a
- Apply the patch to reproduce the exact evaluation results:
git apply reproducibility.patch
- Follow the instructions in the README.md file of the evaluation/gpqa directory (https://github.com/All-Hands-AI/OpenHands/tree/main/evaluation/gpqa) to run the evaluation. From the root of the OpenHands repo, run:
./evaluation/gpqa/scripts/run_infer.sh [model_config_name] [num_samples_eval] [data_split] [AgentClass]
The arguments are:
- `model_config_name`: the model configuration name from `config.toml` that you want to evaluate. You can use any model you set up in `config.toml`.
- `num_samples_eval`: the number of samples to evaluate (useful for testing and debugging).
- `data_split`: the data split to evaluate on. Must be one of `gpqa_main`, `gpqa_diamond`, `gpqa_experts`, or `gpqa_extended`. Defaults to `gpqa_diamond`, as in the paper.
- `AgentClass`: the agent class to use for evaluation. Currently only `CodeActAgent` is supported.
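For reference, here is a minimal sketch of what a model entry in `config.toml` might look like, assuming the standard OpenHands `[llm.<name>]` configuration format; the group name `eval_gpt4` and all values below are placeholders, not part of the original instructions:

```toml
# Hypothetical LLM configuration group; the name "eval_gpt4" and all
# values are placeholders -- adjust them to your provider and credentials.
[llm.eval_gpt4]
model = "gpt-4-1106-preview"
api_key = "your-api-key"
temperature = 0.0
```

With such a configuration, a concrete invocation following the usage above (here evaluating 2 samples of the diamond split with the hypothetical `eval_gpt4` config) would look like:

```bash
./evaluation/gpqa/scripts/run_infer.sh eval_gpt4 2 gpqa_diamond CodeActAgent
```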