---
language: en
license: apache-2.0
model_name: roberta-sequence-classification-9.onnx
tags:
- validated
- text
- machine_comprehension
- roberta
---
<!--- SPDX-License-Identifier: Apache-2.0 -->

# RoBERTa

## Use cases
Transformer-based language model for text generation.

## Description
RoBERTa builds on BERT’s language masking strategy and modifies key hyperparameters in BERT, including removing BERT’s next-sentence pretraining objective, and training with much larger mini-batches and learning rates. RoBERTa was also trained on an order of magnitude more data than BERT, for a longer amount of time. This allows RoBERTa representations to generalize even better to downstream tasks compared to BERT.

## Model

|Model|Download|Download (with sample test data)|ONNX version|Opset version|Accuracy|
|-------------|-------------|-------------|-------------|-------------|-------------|
|RoBERTa-BASE|[499 MB](model/roberta-base-11.onnx)|[295 MB](model/roberta-base-11.tar.gz)|1.6|11|88.5|
|RoBERTa-SequenceClassification|[499 MB](model/roberta-sequence-classification-9.onnx)|[432 MB](model/roberta-sequence-classification-9.tar.gz)|1.6|9|MCC of [0.85](dependencies/roberta-sequence-classification-validation.ipynb)|

## Source
* PyTorch RoBERTa => ONNX RoBERTa
* PyTorch RoBERTa + script changes => ONNX RoBERTa-SequenceClassification

## Conversion
Here is the [benchmark script](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/python/tools/transformers/run_benchmark.sh) that was used for exporting the RoBERTa-BASE model.

A tutorial for converting the RoBERTa-SequenceClassification model can be found in the [conversion](https://github.com/SeldonIO/seldon-models/blob/master/pytorch/moviesentiment_roberta/pytorch-roberta-onnx.ipynb) notebook.

The official Hugging Face tool for converting transformers models to ONNX can be found [here](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_graph_to_onnx.py).

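As a rough illustration of what the conversion notebook does, a minimal PyTorch-to-ONNX export of the sequence-classification model might look like the sketch below. The checkpoint name, output path, opset, and dynamic-axis names here are illustrative assumptions, not the exact settings used to produce the published model.

```python
import torch
from transformers import RobertaForSequenceClassification, RobertaTokenizer

# Load a RoBERTa sequence-classification model and its tokenizer; in practice the
# fine-tuned sentiment checkpoint from the conversion notebook would be loaded here.
model = RobertaForSequenceClassification.from_pretrained('roberta-base', return_dict=False)
model.eval()
tokenizer = RobertaTokenizer.from_pretrained('roberta-base')

# A single dummy input is enough to trace the graph for export.
dummy_input = torch.tensor(
    tokenizer.encode("This film is so good", add_special_tokens=True)
).unsqueeze(0)

torch.onnx.export(
    model,
    dummy_input,
    "roberta-sequence-classification.onnx",  # illustrative output path
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch_size", 1: "sequence_length"},
                  "output": {0: "batch_size"}},
    opset_version=9,
)
```
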
## Inference
We used [ONNX Runtime](https://github.com/microsoft/onnxruntime) to perform inference.

A tutorial for running inference on the RoBERTa-SequenceClassification model with ONNX Runtime can be found in the [inference](dependencies/roberta-inference.ipynb) notebook.

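For a quick end-to-end check outside the notebook, a minimal ONNX Runtime session over the sequence-classification model looks roughly like this (the input text is just an example, and the model path assumes the file from the table above has been downloaded locally):

```python
import numpy as np
import onnxruntime as ort
from transformers import RobertaTokenizer

# Tokenize the text into the int64 input_ids array the model expects.
tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
input_ids = np.array(
    [tokenizer.encode("This film is so good", add_special_tokens=True)],
    dtype=np.int64,
)

# Load the ONNX model and run a forward pass.
session = ort.InferenceSession("roberta-sequence-classification-9.onnx")
input_name = session.get_inputs()[0].name
ort_out = session.run(None, {input_name: input_ids})[0]  # shape: [batch_size, 2]

print("Predicted class:", int(np.argmax(ort_out)))       # 0 = negative, 1 = positive
```
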
### Input
input_ids: Indices of input tokens in the vocabulary. It is an int64 tensor of dynamic shape (batch_size, sequence_length). The text is tokenized with RobertaTokenizer.

For the RoBERTa-BASE model:
Input is a sequence of words as a string. Example: "Text to encode: Hello, World"

For the RoBERTa-SequenceClassification model:
Input is a sequence of words as a string expressing a sentiment. Example: "This film is so good"

### Preprocessing
For both the RoBERTa-BASE and RoBERTa-SequenceClassification models, use tokenizer.encode() to encode the input text:
```python
import torch
import numpy as np
from transformers import RobertaTokenizer

# Tokenize the text, add the special <s> and </s> tokens, and add a batch dimension.
text = "This film is so good"
tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
input_ids = torch.tensor(tokenizer.encode(text, add_special_tokens=True)).unsqueeze(0)  # Batch size 1
```

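Note that ONNX Runtime consumes NumPy arrays rather than torch tensors, so when the encoded input is passed to a session it is typically converted first, for example:

```python
# ONNX Runtime expects an int64 NumPy array rather than a torch tensor.
ort_input = input_ids.numpy().astype(np.int64)
```
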
### Output
For the RoBERTa-BASE model:
The output is two float32 tensors of shapes ```[batch_size, seq_len, 768]``` and ```[batch_size, 768]```.

For the RoBERTa-SequenceClassification model:
The output is a float32 tensor of shape ```[batch_size, 2]```, one score per sentiment class.

### Postprocessing
For the RoBERTa-BASE model:
```python
# ort_out[0] holds the last hidden states, shape [batch_size, seq_len, 768].
last_hidden_states = ort_out[0]
```

For the RoBERTa-SequenceClassification model:
Print the sentiment prediction:
```python
# ort_out contains the [batch_size, 2] scores; argmax selects the predicted class.
pred = np.argmax(ort_out)
if pred == 0:
    print("Prediction: negative")
elif pred == 1:
    print("Prediction: positive")
```

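If class probabilities are wanted in addition to the hard label, a softmax over the two scores can be applied. This is a small optional sketch, not part of the original notebook:

```python
# Normalize the [batch_size, 2] scores into class probabilities with a softmax.
logits = np.asarray(ort_out).reshape(-1, 2)
probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
probs = probs / probs.sum(axis=-1, keepdims=True)
print("P(negative) = %.3f, P(positive) = %.3f" % (probs[0, 0], probs[0, 1]))
```
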
## Dataset
The RoBERTa-BASE model was trained on five datasets:
* [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books;
* [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables, and headers);
* [CC-News](https://commoncrawl.org/2016/10/news-dataset-available/), a dataset containing 63 million English news articles crawled between September 2016 and February 2019;
* [OpenWebText](https://github.com/jcpeterson/openwebtext), an open-source recreation of the WebText dataset used to train GPT-2;
* [Stories](https://arxiv.org/abs/1806.02847), a dataset containing a subset of CommonCrawl data filtered to match the story-like style of Winograd schemas.

Pretrained RoBERTa-BASE model weights can be downloaded [here](https://s3.amazonaws.com/models.huggingface.co/bert/roberta-base-pytorch_model.bin).

RoBERTa-SequenceClassification model weights can be downloaded [here](https://storage.googleapis.com/seldon-models/pytorch/moviesentiment_roberta/pytorch_model.bin).

## Validation accuracy
[GLUE (Wang et al., 2019)](https://gluebenchmark.com/) (dev set, single model, single-task finetuning)

|Model|MNLI|QNLI|QQP|RTE|SST-2|MRPC|CoLA|STS-B|
|-------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|
|```roberta.base```|87.6|92.8|91.9|78.7|94.8|90.2|63.6|91.2|

Metric and benchmarking details are provided by [fairseq](https://github.com/pytorch/fairseq/tree/master/examples/roberta).

## Publication/Attribution
* [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/pdf/1907.11692.pdf). Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov

## References
* The RoBERTa-SequenceClassification model is converted directly from [seldon-models/pytorch](https://github.com/SeldonIO/seldon-models/blob/master/pytorch/moviesentiment_roberta/pytorch-roberta-onnx.ipynb)
* [Accelerate your NLP pipelines using Hugging Face Transformers and ONNX Runtime](https://medium.com/microsoftazure/accelerate-your-nlp-pipelines-using-hugging-face-transformers-and-onnx-runtime-2443578f4333)

## Contributors
[Kundana Pillari](https://github.com/kundanapillari)

## License
Apache 2.0