---
license: mit
task_categories:
- other
tags:
- llm
- large-language-models
- model-analysis
- performance-benchmarking
- architecture
- benchmark
---
# From Parameters to Performance: A Data-Driven Study on LLM Structure and Development
This dataset is the official companion to the paper "From Parameters to Performance: A Data-Driven Study on LLM Structure and Development". It provides a comprehensive collection of structural configurations and performance metrics for a wide range of open-source Large Language Models (LLMs), enabling data-driven research on how structural choices impact model performance.
- Paper: https://huggingface.co/papers/2509.18136
- Code: https://github.com/DX0369/llm-structure-performance
## Abstract
Large language models (LLMs) have achieved remarkable success across various domains, driving significant technological advancements and innovations. Despite the rapid growth in model scale and capability, systematic, data-driven research on how structural configurations affect performance remains scarce. To address this gap, we present a large-scale dataset encompassing diverse open-source LLM structures and their performance across multiple benchmarks. Leveraging this dataset, we conduct a systematic, data mining-driven analysis to validate and quantify the relationship between structural configurations and performance. Our study begins with a review of the historical development of LLMs and an exploration of potential future trends. We then analyze how various structural choices impact performance across benchmarks and further corroborate our findings using mechanistic interpretability techniques. By providing data-driven insights into LLM optimization, our work aims to guide the targeted development and application of future models. We will release our dataset at this https URL
## Main Contributions
- Large-Scale LLM Structure and Performance Dataset: We introduce a comprehensive dataset containing structural configurations and performance metrics for a wide range of open-source LLMs. The dataset is available on the Hugging Face Hub (see the loading sketch at the end of this list).
- Quantitative Study on Structure's Impact: We provide a large-scale, quantitative validation of how structural configurations (e.g., layer depth, FFN size) influence LLM performance across different benchmarks.
- Mechanistic Interpretability Validation: We use layer-pruning and gradient analysis techniques to validate and provide deeper insights into our data-driven findings.
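As a quick way to explore the released data, the following minimal sketch downloads one CSV from the Hub and loads it with pandas. The repository ID is a placeholder and the file path is assumed from the repository layout described below, so adjust both to the actual dataset repository.

```python
# Minimal sketch: download a dataset CSV from the Hugging Face Hub and load it with pandas.
# The repo_id is a placeholder -- replace it with this dataset's actual repository ID.
import pandas as pd
from huggingface_hub import hf_hub_download

csv_path = hf_hub_download(
    repo_id="<this-dataset-repo-id>",                 # hypothetical placeholder
    filename="file/merge_performance_parameter.csv",  # path assumed from the repo layout
    repo_type="dataset",
)
df = pd.read_csv(csv_path)
print(df.shape)
print(df.columns.tolist()[:10])  # inspect the first few structural/performance columns
```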
## Usage and Reproduction
Follow these steps to reproduce the analysis and figures from the paper.
### Step 1: Prepare the Data
The repository includes two of the three required data files (`file/merge_performance_parameter.csv` and `file/performance.csv`). The third file, `file/model_info.csv`, can be downloaded from: .
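Before moving on, a quick sanity check such as the following minimal sketch (assuming only the three file paths listed above) confirms that all data files are present and load cleanly.

```python
# Minimal sketch: verify the three expected CSV files exist and load them with pandas.
from pathlib import Path
import pandas as pd

data_dir = Path("file")
expected = [
    "merge_performance_parameter.csv",
    "performance.csv",
    "model_info.csv",
]

frames = {}
for name in expected:
    path = data_dir / name
    if not path.exists():
        raise FileNotFoundError(f"Missing data file: {path}")
    frames[name] = pd.read_csv(path)
    print(f"{name}: {frames[name].shape[0]} rows, {frames[name].shape[1]} columns")
```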
#### (Optional) Fetching New Model Data
The `data_obtain.py` script is provided for users who wish to gather the latest model information directly from the Hugging Face Hub. This step is not necessary to reproduce the original paper's results.
1. Create `models_list.txt`: This file should contain the list of Hugging Face model IDs you want to analyze, with one ID per line. The `data_obtain.py` script will read from this file.
2. Set Your Hugging Face Token: For reliable access to the Hugging Face API, set your token as an environment variable.

   ```bash
   export HF_TOKEN='your_hf_token_here'
   ```

3. Run the data fetching script: This will create the `file/model_info.csv` file (an illustrative sketch of the kind of Hub queries involved follows this list).

   ```bash
   python data_obtain.py
   ```
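The real fetching logic lives in `data_obtain.py` in the code repository; purely as an illustration of the kind of Hub queries involved, here is a hedged sketch that reads `models_list.txt` and writes basic metadata to `file/model_info.csv`. The selected fields are illustrative assumptions, not the exact schema the paper's scripts produce.

```python
# Illustrative sketch (not the actual data_obtain.py): fetch basic model metadata
# from the Hugging Face Hub for each ID listed in models_list.txt.
import csv
import os
from huggingface_hub import HfApi

api = HfApi(token=os.environ.get("HF_TOKEN"))

with open("models_list.txt") as f:
    model_ids = [line.strip() for line in f if line.strip()]

rows = []
for model_id in model_ids:
    info = api.model_info(model_id)
    rows.append(
        {
            "model_id": model_id,
            "downloads": info.downloads,
            "likes": info.likes,
            "created_at": info.created_at,  # illustrative fields only
        }
    )

if not rows:
    raise SystemExit("models_list.txt was empty")

os.makedirs("file", exist_ok=True)
with open("file/model_info.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
```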
### Step 2: Run the Analysis Notebooks
Once all data files are in the `file/` directory, you can run the Jupyter Notebooks to perform the analysis and generate the visualizations. We recommend using Jupyter Lab or Jupyter Notebook.
1. Launch Jupyter:

   ```bash
   jupyter lab
   ```

2. Run `analysis.ipynb`: Open and run the cells in this notebook to reproduce the analysis and visualizations.
3. Run `regression.ipynb`: Open and run the cells in this notebook to reproduce the regression experiments (a hypothetical sketch of such a regression follows below).
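To give a flavor of what a regression experiment on this data can look like, here is a minimal, self-contained sketch fitting a linear model from structural features to a benchmark score. The column names `num_layers`, `ffn_size`, and `benchmark_score` are hypothetical placeholders; substitute the actual columns of `file/merge_performance_parameter.csv`.

```python
# Minimal sketch of a structure-to-performance regression.
# Column names below are hypothetical placeholders; substitute the real ones
# from file/merge_performance_parameter.csv.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

df = pd.read_csv("file/merge_performance_parameter.csv")

feature_cols = ["num_layers", "ffn_size"]  # assumed structural features
target_col = "benchmark_score"             # assumed performance column

data = df.dropna(subset=feature_cols + [target_col])
X_train, X_test, y_train, y_test = train_test_split(
    data[feature_cols], data[target_col], test_size=0.2, random_state=42
)

model = LinearRegression().fit(X_train, y_train)
print("R^2 on held-out models:", r2_score(y_test, model.predict(X_test)))
print("Coefficients:", dict(zip(feature_cols, model.coef_)))
```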