ToolMind: Synthesizing Complex Tool-Use Trajectories via Graph Sampling and Multi-Agent Simulation
ToolMind is a large open-source tool-use dataset with reasoning traces, designed to advance the reasoning and tool-calling capabilities of agentic LLMs. It comprises over 160k turns synthesized from more than 20k tools. By organizing functions as nodes in a graph and sampling paths over that graph, we construct complex, high-quality user intents. Each trajectory is then synthesized in a multi-agent fashion, with the user and the tools simulated by language models. Finally, we perform inference answering and correctness filtering on every round of each trajectory with a thinking model, keeping only the correct and valuable turns. Models fine-tuned on ToolMind achieve promising improvements over baselines on Tau-bench, Tau2-bench, and BFCL-v4 agentic.
- Technical Report - coming soon
Synthesis pipeline
Data collection and augmentation
- We collect a wide variety of functions from open-source datasets, including xlam-function-calling-60k, glaive-function-calling-v2, and ToolACE. Each function is expected to have defined inputs and outputs; however, the original definitions are often incomplete — for instance, some functions do not explicitly specify the output parameter types. To unify them within a common representation space, we use powerful LMs to complete the descriptions and types of all input and output parameters, and then vectorize them using the embedding model Conan-embedding-v1.
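As an illustrative sketch of this step (the actual pipeline uses a strong LM for schema completion and Conan-embedding-v1 for vectorization; the placeholder defaults and hash-based `embed` below are simplified stand-ins, not the real models):

```python
import hashlib
import numpy as np

def complete_schema(fn):
    """Fill in missing parameter types/descriptions.
    (In the real pipeline a strong LM writes these; here we use placeholders.)"""
    for side in ("inputs", "outputs"):
        for p in fn.setdefault(side, []):
            p.setdefault("type", "string")
            p.setdefault("description", f"{p['name']} parameter of {fn['name']}")
    return fn

def embed(text, dim=64):
    """Toy stand-in for an embedding model: hash words into a unit vector."""
    v = np.zeros(dim)
    for w in text.lower().split():
        v[int(hashlib.md5(w.encode()).hexdigest(), 16) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v
```

In practice each input and output parameter description would be embedded separately, so that parameter-level similarities can be computed in the next step.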
Graph construction
- Based on the unified vector representations of functions, we further construct a function graph to capture their potential relationships. We regard each function as a node and construct edges based on the vector similarity between output and input parameters. Specifically, an edge is established when an output parameter of one function is semantically similar to an input parameter of another function. In this way, we build a function graph whose edges represent transitive relationships between functions. In addition, to increase the diversity of edges and the overall topology, we add a small fraction of random edges.
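The edge-construction rule can be sketched as follows, assuming one unit-normalized embedding per function's inputs and outputs (the real pipeline works at parameter level; the threshold and random-edge probability are illustrative):

```python
import numpy as np

def build_function_graph(out_vecs, in_vecs, threshold=0.8,
                         random_edge_prob=0.0, rng=None):
    """Directed edge u -> v when u's output embedding is similar to
    v's input embedding (vectors assumed unit-normalized, so the dot
    product equals cosine similarity). A small random-edge probability
    diversifies the topology."""
    rng = rng or np.random.default_rng(0)
    edges = set()
    names = list(out_vecs)
    for u in names:
        for v in names:
            if u == v:
                continue
            sim = float(np.dot(out_vecs[u], in_vecs[v]))
            if sim >= threshold or rng.random() < random_edge_prob:
                edges.add((u, v))
    return edges
```

For example, if `search_flights` outputs a flight ID that `book_flight` takes as input, their embeddings are similar and the edge `search_flights -> book_flight` is created, but not the reverse.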
Random walk sampling
- After constructing the graph, function chains are sampled using a random walk of length 5–20. Meanwhile, to avoid oversampling specific functions, we restrict the number of visits to each node.
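A minimal version of this capped random walk might look like the following (the per-node visit cap and chain count are illustrative parameters, not the paper's exact values):

```python
import random
from collections import defaultdict

def sample_chains(edges, n_chains, min_len=5, max_len=20, max_visits=3, seed=0):
    """Sample function chains by random walk over a directed edge set,
    capping how often each node may be visited across all walks to
    avoid oversampling popular functions."""
    rng = random.Random(seed)
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
    visits = defaultdict(int)
    chains = []
    nodes = sorted({u for u, _ in edges})
    for _ in range(n_chains):
        node = rng.choice(nodes)
        chain, target = [node], rng.randint(min_len, max_len)
        visits[node] += 1
        while len(chain) < target:
            options = [v for v in adj[chain[-1]] if visits[v] < max_visits]
            if not options:          # all successors exhausted their quota
                break
            node = rng.choice(options)
            visits[node] += 1
            chain.append(node)
        if len(chain) >= min_len:    # discard walks that stalled too early
            chains.append(chain)
    return chains
```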
Multi-agent trajectory synthesis
- Following the sampling of function chains, we synthesize the user intention without enforcing task completion in chain order. The trajectory is then generated through a multi-agent simulation, where three models simulate a user, an assistant, and a function: the user poses questions according to the synthesized intention, the assistant responds, and the function provides simulated tool responses.
- To construct correct procedural steps under various scenarios (including correction of errors from earlier turns) and to retain only the valid rounds of each multi-turn interaction, we leverage a thinking model to generate answers and perform quality filtering for each turn within the synthesized traces.
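The three-role simulation loop can be sketched as below. Here `chat(role, history)` stands for any LM call, and the `"CALL "` prefix is a hypothetical tool-call convention used only for this sketch; the real pipeline's prompts, stopping criteria, and message format are not specified here:

```python
def synthesize_trajectory(intent, chat, max_turns=8):
    """Multi-agent simulation: a user model poses questions toward `intent`,
    an assistant model responds, and a tool model fabricates tool results.
    `chat(role, history)` is any LM callable; returning None ends the dialogue."""
    history = [{"role": "system", "content": f"User intent: {intent}"}]
    for _ in range(max_turns):
        user_msg = chat("user", history)
        if user_msg is None:                      # user is satisfied; stop
            break
        history.append({"role": "user", "content": user_msg})
        assistant_msg = chat("assistant", history)
        history.append({"role": "assistant", "content": assistant_msg})
        if assistant_msg.startswith("CALL "):     # hypothetical tool-call marker
            history.append({"role": "tool", "content": chat("tool", history)})
    return history
```

A thinking model would then re-answer and score each round of `history`, keeping only the turns judged correct and valuable.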
Hybrid Training with Augmented Open-Source Data
- In addition to the synthesized trajectories, we also incorporated a large amount of processed open-source data, including xlam-function-calling-60k, When2Call, glaive-function-calling-v2, ToolACE, BUTTONInstruct, APIGen-MT-5k, and the Tau-bench training set. The processing steps involved quality filtering and response reconstruction. Experimental results demonstrate that both our synthesized data and the post-processed open-source data contribute significantly to the performance improvements.
- Note that our data is segmented at assistant messages, so during training the loss is computed only on the last assistant message of each sample.
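This loss-masking rule is simple to state in code; the sketch below works on role-labeled messages (token-level masking in a real trainer would expand the per-message mask to the corresponding token spans):

```python
def last_assistant_loss_mask(messages):
    """Per-message loss mask: only the final assistant message of a sample
    contributes to the training loss; everything else is context."""
    last = max((i for i, m in enumerate(messages) if m["role"] == "assistant"),
               default=None)
    return [i == last for i in range(len(messages))]
```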
Overall Performance
For the tau2-bench evaluation, we use gpt-4o as the simulated user.
| Model | Tau2-airline | Tau2-retail | Tau2-telecom | BFCL-v4 | BFCL-v4-agentic |
|---|---|---|---|---|---|
| qwen3-8b (FC) | 32.0 | 43.9 | 28.1 | 42.21 | 14.35 |
| with ToolMind(36w) | 48.0 | 59.6 | 31.6 | 46.92 | 20.97 |
| qwen3-14b (FC) | 36.0 | 52.6 | 33.3 | 45.14 | 16.90 |
| with ToolMind(36w) | 56.0 | 59.6 | 31.6 | 50.54 | 26.67 |
Ablation Study
| Model | Tau2-airline | Tau2-retail | Tau2-telecom | BFCL-v4 | BFCL-v4-agentic |
|---|---|---|---|---|---|
| qwen3-8b (FC) | 32.0 | 43.9 | 28.1 | 42.21 | 14.35 |
| with Augmented Open-Source Data (20w) | 44.0 | 57.9 | 24.6 | 45.88 | 20.22 |
| with Synthesized Data(16w) | 42.0 | 43.0 | 31.6 | 46.87 | 24.37 |
| with ToolMind (36w) | 48.0 | 59.6 | 31.6 | 46.92 | 20.97 |
Dataset Statistics
The following shows length statistics of the synthetic data, including conversation length, the number of user turns per conversation, and token length.
Limitations
While we place great emphasis on safety during training and strive to ensure that the model's outputs align with ethical and legal requirements, its size and probabilistic nature mean that unexpected outputs cannot be completely ruled out. These may include harmful content such as bias or discrimination. Please do not propagate such content. We assume no responsibility for any consequences resulting from the dissemination of inappropriate information.
Other Information
If you find our dataset useful or want to use it in your projects, please kindly cite this Hugging Face project. If you have any questions, please raise an issue or contact us at nanbeige@126.com.