
Tokenizer Card for Ansh-128k!

The Ansh-128k tokenizer is trained on a dataset covering the 22 official Indic languages and English. We propose the name Ansh because this tokenizer is designed to meticulously identify every essential token (Ansh in Sanskrit) of our diverse Indic languages. This model is an improved version of Ansh-160k, which was trained on 18 Indic languages and English.


Model Description

India is a vast, multilingual country with 22 official languages and more than 1,700 languages and dialects. Many of these languages share words with one another, sometimes even across language families. To capitalize on this observation, we trained our tokenizer with a vocabulary size of 128,000 (128k) on Wikipedia articles and the Sangraha dataset in 22 Indic languages and English, using the Byte-Pair Encoding (BPE) algorithm. When compared on fertility scores against popular open-source tokenizers trained on multilingual Indic data, our model outperformed them in 9 Indic languages.
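As a rough illustration of how BPE builds such a vocabulary, the toy sketch below repeatedly merges the most frequent adjacent symbol pair. This is not the actual training code (which ran over the full Wikipedia and Sangraha corpora); the word frequencies and merge count are made up for demonstration.

```python
from collections import Counter

def bpe_merges(words, num_merges):
    """Toy BPE: learn merge rules from a word-frequency dict."""
    # Represent each word as a tuple of symbols (initially single characters).
    vocab = {tuple(w): f for w, f in words.items()}
    merges = []
    for _ in range(num_merges):
        # Count all adjacent symbol pairs, weighted by word frequency.
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Apply the chosen merge to every word in the vocabulary.
        new_vocab = {}
        for symbols, freq in vocab.items():
            merged, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    merged.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    merged.append(symbols[i])
                    i += 1
            new_vocab[tuple(merged)] = freq
        vocab = new_vocab
    return merges, vocab

merges, vocab = bpe_merges({"low": 5, "lower": 2, "lowest": 2}, num_merges=2)
print(merges)  # [('l', 'o'), ('lo', 'w')]
```

The real tokenizer applies the same idea at byte level with 128k merges, which is what lets frequent subwords shared across Indic languages become single tokens.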

How to Get Started with the Model 👨🏻‍💻

Use the code below to get started with the model.

from transformers import AutoTokenizer

try:
    tokenizer = AutoTokenizer.from_pretrained("LingoIITGN/Ansh-128k")
    print("Tokenizer loaded successfully!")
except Exception as e:
    print(f"Error loading tokenizer: {e}")
    print("Please ensure you have the correct model name and are connected to the internet.")
    exit()

input_text = "Hello, world! This is an example of how to use the tokenizer."
# input_text = 'मुझे यह presentation कल morning तक submit करना है।'  # code-mixed Hindi-English
# input_text = 'What is the capital city of India?'

encoded_input = tokenizer.encode(input_text)
print("\nOriginal Text:", input_text)
print("Encoded (Token IDs):", encoded_input)

decoded_output = tokenizer.decode(encoded_input)
print("Decoded Text:", decoded_output)

Evaluation

[More Information Needed]

Results 🏆

Comparison of fertility scores among popular open-source tokenizers trained on multilingual Indic languages and the Ansh-128k tokenizer, across the 22 Indic languages and English (lower is better).

| Language  | Ansh-128k | MuRIL | IndicBERTv2 | Ansh-160k | Llama-3.1 | NLLB  | XLM-RoBERTa | Gemma | Sarvam-1 |
|-----------|-----------|-------|-------------|-----------|-----------|-------|-------------|-------|----------|
| Tamil     | 1.915     | 1.844 | 1.790       | 1.899     | 11.941    | 2.742 | 2.486       | 2.524 | 2.590    |
| Kannada   | 1.909     | 1.953 | 1.815       | 1.862     | 14.239    | 2.846 | 2.507       | 3.349 | 2.654    |
| Malayalam | 2.210     | 2.337 | 2.177       | 2.236     | 16.064    | 3.406 | 2.968       | 3.612 | 3.363    |
| Maithili  | 1.474     | 1.832 | 1.695       | 1.561     | 3.246     | 1.955 | 2.133       | 2.152 | 2.503    |
| Konkani   | 1.941     | 2.491 | 2.221       | 2.072     | 4.037     | 2.617 | 2.581       | 2.727 | 2.992    |
| Telugu    | 1.940     | 2.069 | 1.873       | 2.010     | 13.240    | 2.859 | 2.552       | 3.143 | 2.693    |
| Odia      | 1.546     | 1.714 | 1.539       | 1.587     | 15.535    | 2.149 | 2.196       | 4.523 | 2.494    |
| Bengali   | 1.542     | 1.442 | 1.461       | 1.509     | 8.200     | 2.205 | 2.140       | 1.767 | 2.045    |
| Nepali    | 1.376     | 1.413 | 1.411       | 1.428     | 3.611     | 1.898 | 1.643       | 2.027 | 2.358    |
| Punjabi   | 1.415     | 1.420 | 1.341       | 1.434     | 7.855     | 1.843 | 1.798       | 2.789 | 1.726    |
| Urdu      | 1.285     | 1.314 | 1.393       | 1.270     | 3.003     | 1.589 | 1.430       | 1.687 | 8.417    |
| Hindi     | 1.245     | 1.276 | 1.272       | 1.246     | 2.757     | 1.546 | 1.525       | 1.442 | 1.480    |
| Gujarati  | 1.537     | 1.587 | 1.459       | 1.495     | 9.651     | 2.145 | 2.062       | 2.358 | 2.093    |
| Kashmiri  | 1.540     | 2.131 | 2.646       | 1.619     | 4.026     | 2.849 | 2.985       | 3.053 | 9.248    |
| Marathi   | 1.585     | 1.579 | 1.521       | 1.573     | 4.010     | 2.207 | 2.011       | 2.012 | 1.979    |
| Sindhi    | 1.300     | 1.354 | 1.630       | 1.333     | 2.938     | 1.621 | 1.532       | 2.101 | 8.165    |
| Assamese  | 1.662     | 1.770 | 1.686       | 1.724     | 8.051     | 2.191 | 2.875       | 2.728 | 4.334    |
| Sanskrit  | 2.444     | 2.855 | 2.732       | 2.470     | 5.034     | 3.453 | 3.344       | 3.562 | 3.949    |
| Bodo      | 1.486     | 2.761 | 1.886       | 2.499     | 3.855     | 3.008 | 3.068       | 3.057 | 3.136    |
| Santhali  | 1.333     | 1.144 | 1.966       | 4.538     | 13.456    | 2.994 | 2.095       | 5.634 | 14.402   |
| Dogri     | 1.539     | 1.512 | 1.457       | 1.525     | 2.810     | 1.721 | 1.717      | 1.658 | 1.789    |
| Manipuri  | 4.416     | 1.436 | 2.497       | 4.407     | 13.184    | 2.237 | 2.326       | 9.272 | 13.496   |
| English   | 1.545     | 1.368 | 1.373       | 1.449     | 1.384     | 1.480 | 1.470       | 1.415 | 1.743    |
| Overall   | 1.641     | 1.899 | 1.893       | 2.348     | 6.024     | 2.498 | 2.439       | 3.123 | 5.963    |
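Fertility here is the average number of tokens a tokenizer produces per word, so lower scores mean more compact segmentation. Below is a minimal sketch of how such a score can be computed, assuming whitespace word splitting; the `toy_tokenize` stand-in is made up for illustration, and in practice you would pass `tokenizer.encode` instead.

```python
def fertility(tokenize, texts):
    """Average tokens per whitespace-separated word over a corpus."""
    total_tokens = sum(len(tokenize(t)) for t in texts)
    total_words = sum(len(t.split()) for t in texts)
    return total_tokens / total_words

# Demo with a trivial two-character chunker standing in for a real tokenizer;
# with the loaded model you would call fertility(tokenizer.encode, texts).
def toy_tokenize(text):
    return [w[i:i + 2] for w in text.split() for i in range(0, len(w), 2)]

score = fertility(toy_tokenize, ["What is the capital city of India?"])
print(round(score, 3))  # 15 tokens over 7 words -> 2.143
```

A fertility of 1.0 would mean every word maps to exactly one token, which is why the best scores in the table above cluster just above 1 for languages the vocabulary covers well.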

Model Card Contact ✉️

Lingo Research Group at IIT Gandhinagar, India
Mail at: lingo@iitgn.ac.in
