---
library_name: transformers
tags:
- transformers.js
- tokenizers
---

# text-davinci-003 Tokenizer

A 🤗-compatible version of the **text-davinci-003 tokenizer** (adapted from [openai/tiktoken](https://github.com/openai/tiktoken)). This means it can be used with Hugging Face libraries including [Transformers](https://github.com/huggingface/transformers), [Tokenizers](https://github.com/huggingface/tokenizers), and [Transformers.js](https://github.com/xenova/transformers.js).

## Example usage:

### Transformers/Tokenizers
```py
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained('Xenova/text-davinci-003')
assert tokenizer.encode('hello world') == [31373, 995]
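
# Not part of the original example: decoding the IDs should round-trip
# back to the input string (GPT2TokenizerFast also exposes decode()).
assert tokenizer.decode([31373, 995]) == 'hello world'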
```
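
The same repository should also load with the standalone [Tokenizers](https://github.com/huggingface/tokenizers) library; a minimal sketch, assuming the repository ships a `tokenizer.json`:

```py
from tokenizers import Tokenizer

# Loads tokenizer.json directly from the Hugging Face Hub.
tokenizer = Tokenizer.from_pretrained('Xenova/text-davinci-003')

# encode() returns an Encoding; its ids should match the example above.
assert tokenizer.encode('hello world').ids == [31373, 995]
```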

### Transformers.js
```js
import { AutoTokenizer } from '@xenova/transformers';

const tokenizer = await AutoTokenizer.from_pretrained('Xenova/text-davinci-003');
const tokens = tokenizer.encode('hello world'); // [31373, 995]
```
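
Since this tokenizer is adapted from [openai/tiktoken](https://github.com/openai/tiktoken), its output can also be cross-checked against the original; a minimal sketch, assuming `tiktoken` is installed (`pip install tiktoken`):

```py
import tiktoken

# tiktoken resolves text-davinci-003 to its p50k_base encoding.
enc = tiktoken.encoding_for_model('text-davinci-003')

# The IDs should match the Hugging Face examples above.
assert enc.encode('hello world') == [31373, 995]
```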