---
license: cc-by-4.0
task_categories:
- graph-ml
- tabular-classification
- tabular-regression
language:
- en
tags:
- database-analysis
- graph-similarity
- federated-learning
- schema-matching
- wikidata
size_categories:
- 10B<n<100B
---
# WikiDBGraph Dataset
WikiDBGraph is a comprehensive dataset for database graph analysis, containing structural and semantic properties of 100,000 Wikidata-derived databases. The dataset includes graph representations, similarity metrics, community structures, and various statistical properties designed for federated learning research and database schema matching tasks.
## Dataset Overview
This dataset provides graph-based analysis of database schemas, enabling research in:
- **Database similarity and matching**: Finding structurally and semantically similar databases
- **Federated learning**: Training machine learning models across distributed database pairs
- **Graph analysis**: Community detection, connected components, and structural properties
- **Schema analysis**: Statistical properties of database schemas including cardinality, entropy, and sparsity
### Statistics
- **Total Databases**: 100,000
- **Total Edges**: 17,858,194 (at threshold 0.94)
- **Connected Components**: 6,109
- **Communities**: 6,133
- **Largest Component**: 10,703 nodes
- **Modularity Score**: 0.5366
## Dataset Structure
The dataset consists of 15 files organized into six categories:
### 1. Graph Structure Files
#### `graph_raw_0.94.dgl`
DGL (Deep Graph Library) graph file containing the complete database similarity graph.
**Structure**:
- **Nodes**: 100,000 database IDs
- **Edges**: 17,858,194 pairs with similarity ≥ 0.94
- **Node Data**:
- `embedding`: 768-dimensional node embeddings (if available)
- **Edge Data**:
- `weight`: Edge similarity scores (float32)
- `gt_edge`: Ground truth edge labels (float32)
**Loading**:
```python
import dgl
import torch
# Load the graph
graphs, _ = dgl.load_graphs('graph_raw_0.94.dgl')
graph = graphs[0]
# Access nodes and edges
num_nodes = graph.num_nodes() # 100000
num_edges = graph.num_edges() # 17858194
# Access edge data
src, dst = graph.edges()
edge_weights = graph.edata['weight']
edge_labels = graph.edata['gt_edge']
# Access node embeddings (if available)
if 'embedding' in graph.ndata:
    node_embeddings = graph.ndata['embedding']  # shape: (100000, 768)
```
#### `database_embeddings.pt`
PyTorch tensor file containing pre-computed 768-dimensional embeddings for all databases.
**Structure**:
- Tensor shape: `(100000, 768)`
- Data type: float32
- Embeddings generated with the BGE (BAAI General Embedding) model
**Loading**:
```python
import torch
embeddings = torch.load('database_embeddings.pt', weights_only=True)
print(embeddings.shape) # torch.Size([100000, 768])
# Get embedding for specific database
db_idx = 42
db_embedding = embeddings[db_idx]
```
### 2. Edge Files (Database Pair Relationships)
#### `filtered_edges_threshold_0.94.csv`
Main edge list with database pairs having similarity ≥ 0.94.
**Columns**:
- `src` (float): Source database ID
- `tgt` (float): Target database ID
- `similarity` (float): Cosine similarity score [0.94, 1.0]
- `label` (float): Ground truth label (0.0 or 1.0)
- `edge` (int): Edge indicator (always 1)
**Loading**:
```python
import pandas as pd
edges = pd.read_csv('filtered_edges_threshold_0.94.csv')
print(f"Total edges: {len(edges):,}")
# Find highly similar pairs
high_sim = edges[edges['similarity'] >= 0.99]
print(f"Pairs with similarity ≥ 0.99: {len(high_sim):,}")
```
**Example rows**:
```
src tgt similarity label edge
26218.0 44011.0 0.9896456 0.0 1
26218.0 44102.0 0.9908572 0.0 1
```
#### `edges_list_th0.6713.csv`
Extended edge list with lower similarity threshold (≥ 0.6713).
**Columns**:
- `src` (str): Source database ID (padded format, e.g., "00000")
- `tgt` (str): Target database ID (padded format)
- `similarity` (float): Cosine similarity score [0.6713, 1.0]
- `label` (float): Ground truth label
**Loading**:
```python
import pandas as pd
# Database IDs are zero-padded strings; read them as str to keep the leading zeros
edges = pd.read_csv('edges_list_th0.6713.csv', dtype={'src': str, 'tgt': str})
print(edges['src'].head())  # "00000", "00000", "00000", ...
# Filter by similarity threshold
threshold = 0.90
filtered = edges[edges['similarity'] >= threshold]
```
#### `edge_structural_properties_GED_0.94.csv`
Detailed structural properties for database pairs at threshold 0.94.
**Columns**:
- `db_id1` (int): First database ID
- `db_id2` (int): Second database ID
- `jaccard_table_names` (float): Jaccard similarity of table names [0.0, 1.0]
- `jaccard_columns` (float): Jaccard similarity of column names [0.0, 1.0]
- `jaccard_data_types` (float): Jaccard similarity of data types [0.0, 1.0]
- `hellinger_distance_data_types` (float): Hellinger distance between data type distributions
- `graph_edit_distance` (float): Graph edit distance between schemas
- `common_tables` (int): Number of common table names
- `common_columns` (int): Number of common column names
- `common_data_types` (int): Number of common data types
**Loading**:
```python
import pandas as pd
edge_props = pd.read_csv('edge_structural_properties_GED_0.94.csv')
# Find pairs with high structural similarity
high_jaccard = edge_props[edge_props['jaccard_columns'] >= 0.5]
print(f"Pairs with ≥50% column overlap: {len(high_jaccard):,}")
# Analyze graph edit distance
print(f"Mean GED: {edge_props['graph_edit_distance'].mean():.2f}")
print(f"Median GED: {edge_props['graph_edit_distance'].median():.2f}")
```
#### `distdiv_results.csv`
Distribution divergence metrics for database pairs.
**Columns**:
- `src` (int): Source database ID
- `tgt` (int): Target database ID
- `distdiv` (float): Distribution divergence score
- `overlap_ratio` (float): Column overlap ratio [0.0, 1.0]
- `shared_column_count` (int): Number of shared columns
**Loading**:
```python
import pandas as pd
distdiv = pd.read_csv('distdiv_results.csv')
# Find pairs with low divergence (more similar distributions)
similar_dist = distdiv[distdiv['distdiv'] < 15.0]
# Analyze overlap patterns
high_overlap = distdiv[distdiv['overlap_ratio'] >= 0.3]
print(f"Pairs with ≥30% overlap: {len(high_overlap):,}")
```
#### `all_join_size_results_est.csv`
Estimated join sizes for databases (cardinality estimation).
**Columns**:
- `db_id` (int): Database ID
- `all_join_size` (float): Estimated size of full outer join across all tables
**Loading**:
```python
import pandas as pd
join_sizes = pd.read_csv('all_join_size_results_est.csv')
# Analyze join complexity
print(f"Mean join size: {join_sizes['all_join_size'].mean():.2f}")
print(f"Max join size: {join_sizes['all_join_size'].max():.2f}")
# Large databases (complex joins)
large_dbs = join_sizes[join_sizes['all_join_size'] > 1000]
```
### 3. Node Files (Database Properties)
#### `node_structural_properties.csv`
Comprehensive structural properties for each database schema.
**Columns**:
- `db_id` (int): Database ID
- `num_tables` (int): Number of tables in the database
- `num_columns` (int): Total number of columns across all tables
- `foreign_key_density` (float): Ratio of foreign keys to possible relationships
- `avg_table_connectivity` (float): Average number of connections per table
- `median_table_connectivity` (float): Median connections per table
- `min_table_connectivity` (float): Minimum connections for any table
- `max_table_connectivity` (float): Maximum connections for any table
- `data_type_proportions` (str): JSON string with data type distribution
- `data_types` (str): JSON string with count of each data type
- `wikidata_properties` (int): Number of Wikidata properties used
**Loading**:
```python
import pandas as pd
import json
node_props = pd.read_csv('node_structural_properties.csv')
# Parse JSON columns
node_props['data_type_dist'] = node_props['data_type_proportions'].apply(
    lambda x: json.loads(x.replace("'", '"'))
)
# Analyze database complexity
complex_dbs = node_props[node_props['num_tables'] > 10]
print(f"Databases with >10 tables: {len(complex_dbs):,}")
# Foreign key density analysis
print(f"Mean FK density: {node_props['foreign_key_density'].mean():.4f}")
```
**Example row**:
```
db_id: 88880
num_tables: 2
num_columns: 24
foreign_key_density: 0.0833
avg_table_connectivity: 1.5
data_type_proportions: {'string': 0.417, 'wikibase-entityid': 0.583}
```
#### `data_volume.csv`
Storage size information for each database.
**Columns**:
- `db_id` (str/int): Database ID (may have leading zeros)
- `volume_bytes` (int): Total data volume in bytes
**Loading**:
```python
import pandas as pd
volumes = pd.read_csv('data_volume.csv')
# Convert to more readable units
volumes['volume_mb'] = volumes['volume_bytes'] / (1024 * 1024)
volumes['volume_gb'] = volumes['volume_bytes'] / (1024 * 1024 * 1024)
# Find largest databases
top_10 = volumes.nlargest(10, 'volume_bytes')
print(top_10[['db_id', 'volume_gb']])
```
### 4. Column-Level Statistics
#### `column_cardinality.csv`
Distinct value counts for all columns.
**Columns**:
- `db_id` (str/int): Database ID
- `table_name` (str): Table name
- `column_name` (str): Column name
- `n_distinct` (int): Number of distinct values
**Loading**:
```python
import pandas as pd
cardinality = pd.read_csv('column_cardinality.csv')
# High cardinality columns (potentially good as keys)
high_card = cardinality[cardinality['n_distinct'] > 100]
# Analyze cardinality distribution
print(f"Mean distinct values: {cardinality['n_distinct'].mean():.2f}")
print(f"Median distinct values: {cardinality['n_distinct'].median():.2f}")
```
**Example rows**:
```
db_id table_name column_name n_distinct
6 scholarly_articles article_title 275
6 scholarly_articles article_description 197
6 scholarly_articles pub_med_id 269
```
#### `column_entropy.csv`
Shannon entropy for column value distributions.
**Columns**:
- `db_id` (str): Database ID (padded format)
- `table_name` (str): Table name
- `column_name` (str): Column name
- `entropy` (float): Shannon entropy value [0.0, ∞)
**Loading**:
```python
import pandas as pd
entropy = pd.read_csv('column_entropy.csv')
# High entropy columns (high information content)
high_entropy = entropy[entropy['entropy'] > 3.0]
# Low entropy columns (low diversity)
low_entropy = entropy[entropy['entropy'] < 0.5]
# Distribution analysis
print(f"Mean entropy: {entropy['entropy'].mean():.3f}")
```
**Example rows**:
```
db_id table_name column_name entropy
00001 descendants_of_john_i full_name 3.322
00001 descendants_of_john_i gender 0.881
00001 descendants_of_john_i father_name 0.000
```
#### `column_sparsity.csv`
Missing value ratios for all columns.
**Columns**:
- `db_id` (str): Database ID (padded format)
- `table_name` (str): Table name
- `column_name` (str): Column name
- `sparsity` (float): Ratio of missing values [0.0, 1.0]
**Loading**:
```python
import pandas as pd
sparsity = pd.read_csv('column_sparsity.csv')
# Dense columns (few missing values)
dense = sparsity[sparsity['sparsity'] < 0.1]
# Sparse columns (many missing values)
sparse = sparsity[sparsity['sparsity'] > 0.5]
# Quality assessment
print(f"Columns with >50% missing: {len(sparse):,}")
print(f"Mean sparsity: {sparsity['sparsity'].mean():.3f}")
```
**Example rows**:
```
db_id table_name column_name sparsity
00009 FamousPencilMoustacheWearers Name 0.000
00009 FamousPencilMoustacheWearers Biography 0.000
00009 FamousPencilMoustacheWearers ViafId 0.222
```
### 5. Clustering and Community Files
#### `community_assignment_0.94.csv`
Community detection results produced with the Louvain algorithm.
**Columns**:
- `node_id` (int): Database ID
- `partition` (int): Community/partition ID
**Loading**:
```python
import pandas as pd
communities = pd.read_csv('community_assignment_0.94.csv')
# Analyze community structure
community_sizes = communities['partition'].value_counts()
print(f"Number of communities: {len(community_sizes)}")
print(f"Largest community size: {community_sizes.max()}")
# Get databases in a specific community
community_1 = communities[communities['partition'] == 1]['node_id'].tolist()
```
**Statistics**:
- Total communities: 6,133
- Largest community: 4,825 nodes
- Modularity: 0.5366
#### `cluster_assignments_dim2_sz100_msNone.csv`
Cluster assignments computed on a 2-dimensional reduction of the database embeddings (e.g., via t-SNE or UMAP).
**Columns**:
- `db_id` (int): Database ID
- `cluster` (int): Cluster ID
**Loading**:
```python
import pandas as pd
clusters = pd.read_csv('cluster_assignments_dim2_sz100_msNone.csv')
# Analyze cluster distribution
cluster_sizes = clusters['cluster'].value_counts()
print(f"Number of clusters: {len(cluster_sizes)}")
# Get databases in a specific cluster
cluster_9 = clusters[clusters['cluster'] == 9]['db_id'].tolist()
```
### 6. Analysis Reports
#### `analysis_0.94_report.txt`
Comprehensive text report of graph analysis at threshold 0.94.
**Contents**:
- Graph statistics (nodes, edges)
- Connected components analysis
- Community detection results
- Top components and communities by size
**Loading**:
```python
with open('analysis_0.94_report.txt', 'r') as f:
    report = f.read()
print(report)
```
**Key Metrics**:
- Total Nodes: 100,000
- Total Edges: 17,858,194
- Connected Components: 6,109
- Largest Component: 10,703 nodes
- Communities: 6,133
- Modularity: 0.5366
## Usage Examples
### Example 1: Finding Similar Database Pairs
```python
import pandas as pd
# Load edges with high similarity
edges = pd.read_csv('filtered_edges_threshold_0.94.csv')
# Find database pairs with similarity > 0.98
high_sim_pairs = edges[edges['similarity'] >= 0.98]
print(f"Found {len(high_sim_pairs)} pairs with similarity ≥ 0.98")
# Get top 10 most similar pairs
top_pairs = edges.nlargest(10, 'similarity')
for idx, row in top_pairs.iterrows():
    print(f"DB {int(row['src'])} ↔ DB {int(row['tgt'])}: {row['similarity']:.4f}")
```
### Example 2: Analyzing Database Properties
```python
import pandas as pd
import json
# Load node properties
nodes = pd.read_csv('node_structural_properties.csv')
# Find complex databases
complex_dbs = nodes[
    (nodes['num_tables'] > 10) &
    (nodes['num_columns'] > 100)
]
print(f"Complex databases: {len(complex_dbs)}")
# Analyze data type distribution
for idx, row in complex_dbs.head().iterrows():
    db_id = row['db_id']
    types = json.loads(row['data_types'].replace("'", '"'))
    print(f"DB {db_id}: {types}")
```
### Example 3: Loading and Analyzing the Graph
```python
import dgl
import torch
import pandas as pd
# Load DGL graph
graphs, _ = dgl.load_graphs('graph_raw_0.94.dgl')
graph = graphs[0]
# Basic statistics
print(f"Nodes: {graph.num_nodes():,}")
print(f"Edges: {graph.num_edges():,}")
# Analyze degree distribution
in_degrees = graph.in_degrees()
out_degrees = graph.out_degrees()
print(f"Average in-degree: {in_degrees.float().mean():.2f}")
print(f"Average out-degree: {out_degrees.float().mean():.2f}")
# Find highly connected nodes
top_nodes = torch.topk(in_degrees, k=10)
print(f"Top 10 most connected databases: {top_nodes.indices.tolist()}")
```
### Example 4: Federated Learning Pair Selection
```python
import pandas as pd
# Load edges and structural properties
edges = pd.read_csv('filtered_edges_threshold_0.94.csv')
edge_props = pd.read_csv('edge_structural_properties_GED_0.94.csv')
# Merge on the pair IDs (cast the float src/tgt columns to int so the key dtypes match)
edges[['src', 'tgt']] = edges[['src', 'tgt']].astype(int)
pairs = edges.merge(
    edge_props,
    left_on=['src', 'tgt'],
    right_on=['db_id1', 'db_id2'],
    how='inner'
)
# Select pairs for federated learning
# Criteria: high similarity + high column overlap + low GED
fl_candidates = pairs[
    (pairs['similarity'] >= 0.98) &
    (pairs['jaccard_columns'] >= 0.4) &
    (pairs['graph_edit_distance'] <= 3.0)
]
print(f"FL candidate pairs: {len(fl_candidates)}")
# Sample pairs for experiments
sample = fl_candidates.sample(n=100, random_state=42)
```
### Example 5: Column Statistics Analysis
```python
import pandas as pd
# Load column-level statistics; db_id formats differ across files (e.g. "6" vs "00001"),
# so read db_id as a string and zero-pad it before merging
cardinality = pd.read_csv('column_cardinality.csv', dtype={'db_id': str})
entropy = pd.read_csv('column_entropy.csv', dtype={'db_id': str})
sparsity = pd.read_csv('column_sparsity.csv', dtype={'db_id': str})
for df in (cardinality, entropy, sparsity):
    df['db_id'] = df['db_id'].str.zfill(5)
# Merge on (db_id, table_name, column_name)
merged = cardinality.merge(entropy, on=['db_id', 'table_name', 'column_name'])
merged = merged.merge(sparsity, on=['db_id', 'table_name', 'column_name'])
# Find high-quality columns for machine learning
# Criteria: high cardinality, high entropy, low sparsity
quality_columns = merged[
    (merged['n_distinct'] > 50) &
    (merged['entropy'] > 2.0) &
    (merged['sparsity'] < 0.1)
]
print(f"High-quality columns: {len(quality_columns)}")
```
### Example 6: Community Analysis
```python
import pandas as pd
# Load community assignments
communities = pd.read_csv('community_assignment_0.94.csv')
nodes = pd.read_csv('node_structural_properties.csv')
# Merge to get properties by community
community_props = communities.merge(
    nodes,
    left_on='node_id',
    right_on='db_id'
)
# Analyze each community
for comm_id in community_props['partition'].unique()[:5]:
    comm_data = community_props[community_props['partition'] == comm_id]
    print(f"\nCommunity {comm_id}:")
    print(f"  Size: {len(comm_data)}")
    print(f"  Avg tables: {comm_data['num_tables'].mean():.2f}")
    print(f"  Avg columns: {comm_data['num_columns'].mean():.2f}")
```
## Applications
### 1. Federated Learning Research
Use the similarity graph to identify database pairs for federated learning experiments. The high-similarity pairs (≥0.98) are ideal for horizontal federated learning scenarios.
### 2. Schema Matching
Leverage structural properties and similarity metrics for automated schema matching and integration tasks.
### 3. Database Clustering
Use embeddings and community detection results to group similar databases for analysis or optimization.
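As one option (not the procedure used to produce `cluster_assignments_dim2_sz100_msNone.csv`), the precomputed embeddings can be clustered directly, for example with scikit-learn's KMeans; the number of clusters below is an arbitrary illustrative choice.
```python
import collections
import torch
from sklearn.cluster import KMeans

# Load the precomputed 768-dimensional database embeddings
embeddings = torch.load('database_embeddings.pt', weights_only=True).numpy()

# Cluster databases in embedding space (k=50 is arbitrary, not a dataset parameter)
kmeans = KMeans(n_clusters=50, random_state=42, n_init=10)
labels = kmeans.fit_predict(embeddings)

# Inspect the five largest clusters
print(collections.Counter(labels).most_common(5))
```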
### 4. Data Quality Assessment
Column-level statistics (cardinality, entropy, sparsity) enable systematic data quality evaluation across large database collections.
### 5. Graph Neural Networks
The DGL graph format is ready for training GNN models for link prediction, node classification, or graph classification tasks.
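For example, a minimal (untrained) GraphSAGE encoder over the similarity graph could look like the sketch below; the two-layer architecture and hidden size are illustrative choices, not part of the dataset.
```python
import dgl
import dgl.nn as dglnn
import torch
import torch.nn as nn
import torch.nn.functional as F

class SAGE(nn.Module):
    """Two-layer GraphSAGE encoder producing per-database representations."""
    def __init__(self, in_feats, hid_feats):
        super().__init__()
        self.conv1 = dglnn.SAGEConv(in_feats, hid_feats, aggregator_type='mean')
        self.conv2 = dglnn.SAGEConv(hid_feats, hid_feats, aggregator_type='mean')

    def forward(self, g, x):
        h = F.relu(self.conv1(g, x))
        return self.conv2(g, h)

graphs, _ = dgl.load_graphs('graph_raw_0.94.dgl')
g = dgl.add_self_loop(graphs[0])  # self-loops so isolated nodes still receive a message

# Use stored node embeddings as input features if present, otherwise random features
if 'embedding' in g.ndata:
    feats = g.ndata['embedding'].float()
else:
    feats = torch.randn(g.num_nodes(), 768)

model = SAGE(feats.shape[1], 128)
node_repr = model(g, feats)  # shape: (100000, 128)
```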
## Technical Details
### Similarity Computation
- **Method**: BGE (BAAI General Embedding) model for semantic embeddings
- **Metric**: Cosine similarity (a recomputation sketch is shown below)
- **Thresholds**: Multiple thresholds available (0.6713, 0.94, 0.96)
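For instance, pairwise cosine similarities can be recomputed from the stored embeddings; the sketch below assumes the rows of `database_embeddings.pt` are aligned with the integer database IDs.
```python
import torch
import torch.nn.functional as F

emb = torch.load('database_embeddings.pt', weights_only=True)
emb = F.normalize(emb, dim=1)  # L2-normalize so a dot product equals cosine similarity

# Cosine similarity between two specific databases
i, j = 26218, 44011
print(f"sim({i}, {j}) = {(emb[i] @ emb[j]).item():.4f}")

# Similarities of database i against all databases (one row of the similarity matrix)
sims = emb @ emb[i]          # shape: (100000,)
top = torch.topk(sims, k=6)  # database i itself plus its 5 nearest neighbors
print(top.indices.tolist())
```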
### Graph Construction
- **Nodes**: Database IDs (0 to 99,999)
- **Edges**: Database pairs with similarity above threshold
- **Edge weights**: Cosine similarity scores
- **Format**: DGL binary format for efficient loading (see the reconstruction sketch below)
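As a rough sketch, the graph can be rebuilt from the edge list CSV; the shipped `graph_raw_0.94.dgl` may differ in edge ordering or in whether reverse edges are stored.
```python
import dgl
import pandas as pd
import torch

edges = pd.read_csv('filtered_edges_threshold_0.94.csv')
src = torch.tensor(edges['src'].astype(int).values)
dst = torch.tensor(edges['tgt'].astype(int).values)

# Build the graph over all 100,000 databases (isolated nodes included)
g = dgl.graph((src, dst), num_nodes=100_000)
g.edata['weight'] = torch.tensor(edges['similarity'].values, dtype=torch.float32)
g.edata['gt_edge'] = torch.tensor(edges['label'].values, dtype=torch.float32)
print(g)
```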
### Community Detection
- **Algorithm**: Louvain method
- **Modularity**: 0.5366 (indicates well-defined communities; a NetworkX re-check is sketched below)
- **Resolution**: Default parameter
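The reported modularity can be re-checked from the released files with NetworkX, as sketched below; note that this loads the full ~17.9M-edge graph into memory and assumes the community assignment covers every node.
```python
import networkx as nx
import pandas as pd

edges = pd.read_csv('filtered_edges_threshold_0.94.csv')
parts = pd.read_csv('community_assignment_0.94.csv')

G = nx.Graph()
G.add_nodes_from(parts['node_id'])
G.add_edges_from(zip(edges['src'].astype(int), edges['tgt'].astype(int)))

# Group nodes by partition and recompute modularity over that partition
communities = parts.groupby('partition')['node_id'].apply(set).tolist()
Q = nx.community.modularity(G, communities)
print(f"Modularity: {Q:.4f}")  # expected to be close to the reported 0.5366
```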
### Data Processing Pipeline
1. Schema extraction from Wikidata databases
2. Semantic embedding generation using BGE (sketched below)
3. Similarity computation across all pairs
4. Graph construction and filtering
5. Property extraction and statistical analysis
6. Community detection and clustering
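A hypothetical version of steps 2-3 is sketched below. The exact BGE checkpoint and schema serialization used for the released embeddings are not specified here; the sketch assumes the `sentence-transformers` library and the `BAAI/bge-base-en-v1.5` checkpoint (768-dimensional) purely for illustration.
```python
from sentence_transformers import SentenceTransformer

# Assumed checkpoint; the released embeddings may come from a different BGE variant
model = SentenceTransformer('BAAI/bge-base-en-v1.5')

# Toy schema serializations standing in for real database schemas
schemas = [
    "database: scholarly_articles | columns: article_title, article_description, pub_med_id",
    "database: descendants_of_john_i | columns: full_name, gender, father_name",
]
emb = model.encode(schemas, normalize_embeddings=True)  # shape: (2, 768)

# With normalized embeddings, cosine similarity is a plain dot product
print(float(emb[0] @ emb[1]))
```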
## Data Format Standards
### Database ID Formats
- **Integer IDs**: Used in most files (0-99999)
- **Padded strings**: Used in some files (e.g., "00000", "00001")
- **Conversion**: `str(db_id).zfill(5)` for integer to padded string (see the helpers below)
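A minimal pair of helpers for moving between the two ID formats, based on the conversion rule above:
```python
def to_padded(db_id: int) -> str:
    """Integer ID -> zero-padded string, e.g. 42 -> '00042'."""
    return str(db_id).zfill(5)

def to_int(db_id: str) -> int:
    """Zero-padded string ID -> integer, e.g. '00042' -> 42."""
    return int(db_id)

assert to_padded(42) == '00042'
assert to_int('00042') == 42
```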
### Missing Values
- Numerical columns: May contain `NaN` or `-0.0`
- String columns: Empty strings or missing entries
- Sparsity column: Explicit ratio of missing values
### Data Types
- `float32`: Similarity scores, weights, entropy
- `float64`: Statistical measures, ratios
- `int64`: Counts, IDs
- `string`: Names, identifiers
## File Size Information
Approximate file sizes:
- `graph_raw_0.94.dgl`: ~2.5 GB
- `database_embeddings.pt`: ~300 MB
- `filtered_edges_threshold_0.94.csv`: ~800 MB
- `edge_structural_properties_GED_0.94.csv`: ~400 MB
- `node_structural_properties.csv`: ~50 MB
- Column statistics CSVs: ~20-50 MB each
- Other files: <10 MB each
## Citation
If you use this dataset in your research, please cite:
```bibtex
@article{wu2025wikidbgraph,
title={WikiDBGraph: Large-Scale Database Graph of Wikidata for Collaborative Learning},
author={Wu, Zhaomin and Wang, Ziyang and He, Bingsheng},
journal={arXiv preprint arXiv:2505.16635},
year={2025}
}
```
## License
This dataset is licensed under Creative Commons Attribution 4.0 International (CC BY 4.0).
## Acknowledgments
This dataset is derived from Wikidata and builds upon the WikiDBGraph system for graph-based database analysis and federated learning. We acknowledge the Wikidata community for providing the underlying data infrastructure.