Update README.md
README.md CHANGED
tags:
- text-generation
---

# ScriptGPT-small

## 🖊️ Model description

ScriptGPT-small is a language model trained on a dataset of scripts from 100 YouTube videos covering a range of content domains.

ScriptGPT-small is a causal language transformer that resembles the GPT-2 architecture: as a causal language model, it predicts the probability of a sequence of words from the words that precede it. It generates a probability distribution over the next word given the previous words, without incorporating future words.
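
For intuition, here is a minimal sketch of that next-word distribution in code. It assumes the checkpoint loads with the standard `transformers` auto classes (as in the usage section below); the prompt string is an arbitrary example.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("SRDdev/ScriptGPT-small")
model = AutoModelForCausalLM.from_pretrained("SRDdev/ScriptGPT-small")

prompt = "Welcome back to the channel, today we"  # arbitrary example prompt
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

# The next-word distribution is read off the last position only,
# so it depends on the preceding words and never on future ones.
next_word_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_word_probs, k=5)
for prob, token_id in zip(top.values.tolist(), top.indices.tolist()):
    print(f"{tokenizer.decode([token_id])!r}: {prob:.3f}")
```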

The goal of ScriptGPT-small is to generate scripts for YouTube videos that are coherent, informative, and engaging. This can be useful for content creators who are looking for inspiration or who want to automate the process of writing video scripts. To use ScriptGPT-small, provide a prompt or a starting sentence, and the model will generate a sequence of words that follows the context and style of the training data.

__Models__

- [Script_GPT](https://huggingface.co/SRDdev/Script_GPT) : AI content model
- [ScriptGPT-small](https://huggingface.co/SRDdev/ScriptGPT-small) : generalized content model

More models are coming soon...

## 🛒 Intended uses

The intended uses of ScriptGPT-small include generating scripts for videos, providing inspiration for content creators, and automating the process of generating video scripts.

## 📝 How to use

You can use this model directly with a pipeline for text generation.

1. __Load Model__
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("SRDdev/ScriptGPT-small")
model = AutoModelForCausalLM.from_pretrained("SRDdev/ScriptGPT-small")
```

2. __Pipeline__
```python
from transformers import pipeline

# Build a text-generation pipeline from the model and tokenizer loaded above
generator = pipeline('text-generation', model=model, tokenizer=tokenizer)

context = "Cooking red sauce pasta"
length_to_generate = 250

script = generator(context, max_length=length_to_generate, do_sample=True)[0]['generated_text']

script
```

<p style="opacity: 0.8">The model may generate random information as it is still in beta version</p>
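
Because `do_sample=True` makes the output stochastic, fixing a seed and tuning the sampling parameters can make results easier to compare. The snippet below is an optional sketch, not part of the model's documented usage; it reuses `generator`, `context`, and `length_to_generate` from the pipeline step above, and the parameter values are only examples.

```python
from transformers import set_seed

set_seed(42)  # make the sampled script reproducible across runs

# temperature / top_p are standard sampling knobs forwarded to generate()
candidates = generator(
    context,
    max_length=length_to_generate,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
    num_return_sequences=2,  # draw two candidate scripts to compare
)
for i, out in enumerate(candidates):
    print(f"--- candidate {i} ---")
    print(out["generated_text"])
```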

## 🎈 Limitations and bias

> The model is trained on YouTube scripts and will work better for that domain. It may also generate random information, and users should be aware of that and cross-validate the results.

## Citations

```
@model{
  Name = Shreyas Dixit,
  framework = PyTorch
}
```