---
license: mit
language:
  - en
tags:
  - conversations
  - tagging
  - embeddings
  - bittensor
pretty_name: Bittensor Conversational Tagging and Embedding
size_categories:
  - 10M<n<100M
---
# ReadyAI - Bittensor Conversational Tagging and Embedding Dataset
ReadyAI is an open-source initiative focused on low-cost, resource-minimal pipelines for structuring raw data for AI applications.
This dataset is part of the ReadyAI Conversational Genome Project, leveraging the Bittensor decentralized network.
AI runs on structured data, and this dataset bridges the gap between raw conversation transcripts and structured, vectorized semantic tags.
You can find more about our subnet on GitHub here.
## Dataset Overview
This dataset contains annotated conversation transcripts with:
- Human-readable semantic tags
- Embedding vectors contextualized to each conversation
- Participant metadata
It is ideal for:
- Conversational AI training
- Dialogue understanding research
- Retrieval-augmented generation (RAG)
- Semantic search
- Fine-tuning large language models (LLMs)
## Dataset Structure
The dataset consists of four main components:
### 1. `data/bittensor-conversational-tags-and-embeddings-part-*.parquet` - Tag Embeddings and Metadata
Each Parquet file contains rows with:
| Column | Type | Description | 
|---|---|---|
| c_guid | int64 | Unique conversation group ID | 
| tag_id | int64 | Unique identifier for the tag | 
| tag | string | Semantic tag (e.g., "climate change") | 
| vector | list of float32 | Embedding vector representing the tag's meaning in the conversation's context | 
Files are split into ~1GB chunks for efficient loading and streaming.
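As a rough illustration of how the `vector` column can be used (this snippet is not part of the dataset tooling; it assumes one split has been downloaded locally and that `numpy` is installed alongside `pandas`/`pyarrow`), two tag embeddings can be compared with cosine similarity:

```python
import numpy as np
import pandas as pd

# Load one split locally (path as documented in "How to Use" below).
df = pd.read_parquet("data/bittensor-conversational-tags-and-embeddings-part-0000.parquet")

# Compare the first two tag embeddings with cosine similarity.
a = np.asarray(df.loc[0, "vector"], dtype=np.float32)
b = np.asarray(df.loc[1, "vector"], dtype=np.float32)
cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print(f"{df.loc[0, 'tag']!r} vs {df.loc[1, 'tag']!r}: cosine similarity = {cosine:.3f}")
```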
### 2. `tag_to_id.parquet` - Tag Mapping
Mapping between tag IDs and human-readable tags.
| Column | Type | Description | 
|---|---|---|
| tag_id | int64 | Unique tag ID | 
| tag | string | Semantic tag text | 
Useful for reverse-mapping tag IDs in model outputs back to human-readable tags.
### 3. `conversations_to_tags.parquet` - Conversation-to-Tag Mappings
Links conversations to their associated semantic tags.
| Column | Type | Description | 
|---|---|---|
| c_guid | int64 | Conversation group ID | 
| tag_ids | list of int64 | List of tag IDs relevant to the conversation | 
Useful for supervised training, retrieval tasks, or semantic labeling.
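For example (a minimal sketch, assuming the two Parquet files above have been downloaded locally), the per-conversation `tag_ids` lists can be flattened into one row per `(c_guid, tag)` pair with pandas:

```python
import pandas as pd

df_mapping = pd.read_parquet("conversations_to_tags.parquet")
tag_dict = pd.read_parquet("tag_to_id.parquet")

# One row per (c_guid, tag_id), then attach the human-readable tag text.
long_form = (
    df_mapping.explode("tag_ids")
    .rename(columns={"tag_ids": "tag_id"})
    .astype({"tag_id": "int64"})
    .merge(tag_dict, on="tag_id", how="left")
)
print(long_form.head())
```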
### 4. `conversations.parquet` - Full Conversation Text and Participants
Contains the raw multi-turn dialogue and metadata.
| Column | Type | Description | 
|---|---|---|
| c_guid | int64 | Conversation group ID | 
| transcript | string | Full conversation text | 
| participants | list of strings | List of speaker identifiers | 
Useful for dialogue modeling, multi-speaker AI, or fine-tuning.
## How to Use
### Install dependencies

```bash
pip install pandas pyarrow
```
### Load a single Parquet split

```python
import pandas as pd

df = pd.read_parquet("data/bittensor-conversational-tags-and-embeddings-part-0000.parquet")
print(df.head())
```
### Load all tag splits

```python
import pandas as pd
import glob

files = sorted(glob.glob("data/bittensor-conversational-tags-and-embeddings-part-*.parquet"))
df_tags = pd.concat((pd.read_parquet(f) for f in files), ignore_index=True)
print(f"Loaded {len(df_tags)} tag records.")
```
### Load the tag dictionary

```python
tag_dict = pd.read_parquet("tag_to_id.parquet")
print(tag_dict.head())
```
### Load the conversation-to-tags mapping

```python
df_mapping = pd.read_parquet("conversations_to_tags.parquet")
print(df_mapping.head())
```
### Load full conversations and metadata

```python
df_conversations = pd.read_parquet("conversations.parquet")
print(df_conversations.head())
```
## Example: Reconstruct Tags for a Conversation
```python
# Build tag lookup
tag_lookup = dict(zip(tag_dict['tag_id'], tag_dict['tag']))

# Pick a conversation
sample = df_mapping.iloc[0]
c_guid = sample['c_guid']
tag_ids = sample['tag_ids']

# Translate tag IDs to human-readable tags
tags = [tag_lookup.get(tid, "Unknown") for tid in tag_ids]
print(f"Conversation {c_guid} has tags: {tags}")
```
## Handling Split Files
| Situation | Strategy | 
|---|---|
| Enough RAM | Use pd.concat() to merge splits | 
| Low memory | Process each split one-by-one (see the sketch below) | 
| Hugging Face datasets | Use streaming mode | 
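For the low-memory case, one option (a sketch using pyarrow's batch reader, not a loader shipped with this dataset) is to stream each split in record batches instead of reading it whole:

```python
import glob
import pyarrow.parquet as pq

for path in sorted(glob.glob("data/bittensor-conversational-tags-and-embeddings-part-*.parquet")):
    parquet_file = pq.ParquetFile(path)
    # Read roughly 10k rows at a time; select only the columns you need.
    for batch in parquet_file.iter_batches(batch_size=10_000, columns=["c_guid", "tag_id", "tag"]):
        chunk = batch.to_pandas()
        # ... process `chunk` here ...
```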
### Example: streaming with Hugging Face datasets
```python
from datasets import load_dataset

# Stream the dataset directly
dataset = load_dataset(
    "ReadyAi/bittensor-conversational-tags-and-embeddings",
    split="train",
    streaming=True
)

for example in dataset:
    print(example)
    break
```
## License

MIT License

- Free to use and modify.
- Commercial redistribution without permission is prohibited.
## Credits
Built using contributions from Bittensor conversational miners and the ReadyAI open-source community.
## Summary
| Component | Description | 
|---|---|
| data/bittensor-conversational-tags-and-embeddings-part-*.parquet | Semantic tags and their contextual embeddings | 
| tag_to_id.parquet | Dictionary mapping of tag IDs to text | 
| conversations_to_tags.parquet | Links conversations to tags | 
| conversations.parquet | Full multi-turn dialogue with participant metadata |