Update app.py
app.py (CHANGED)
@@ -9,9 +9,9 @@ model_cache = {}
 
 # Available models
 AVAILABLE_MODELS = {
-    "Nous-1-4B": "
-    "Nous-1-8B": "
-    "Nous-1-2B": "
+    "Nous-1-4B": "NoemaResearch/Nous-1-4B",
+    "Nous-1-8B": "NoemaResearch/Nous-1-8B",
+    "Nous-1-2B": "NoemaResearch/Nous-1-2B",
 }
 
 @spaces.GPU
@@ -190,7 +190,7 @@ def create_interface():
 gr.Markdown("""
 # π Nous-1 Model Chat Interface
 
-Chat with the Nous-1 models by
+Chat with the Nous-1 models by Noema Research.
 
 **Available Models:**
 - Nous-1-4B (4 billion parameters)
@@ -286,11 +286,11 @@ def create_interface():
 ---
 
 ### About the Nous-1 Models
-**Nous-1-2B**: 2 billion parameter model by
+**Nous-1-2B**: 2 billion parameter model by Noema Research, designed for fast inference
 
-**Nous-1-4B**: 4 billion parameter model by
+**Nous-1-4B**: 4 billion parameter model by Noema Research, optimised for efficient conversation and text generation
 
-**Nous-1-8B**: 8 billion parameter model by
+**Nous-1-8B**: 8 billion parameter model by Noema Research, offering enhanced capabilities and better performance for complex tasks
 
 All models are designed for conversational AI and support various text generation tasks. The 8B model provides more sophisticated responses but requires more computational resources.
 
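The first hunk header shows a module-level model_cache = {} next to the updated AVAILABLE_MODELS mapping, with the @spaces.GPU decorator as trailing context. Below is a minimal sketch of how these pieces are commonly wired together on a ZeroGPU Space; the generate helper, its parameters, and the transformers loading calls are illustrative assumptions and are not taken from this commit.

# Hypothetical sketch, not code from the diff: lazy, cached loading of the
# repos listed in AVAILABLE_MODELS behind the @spaces.GPU decorator.
import spaces
from transformers import AutoModelForCausalLM, AutoTokenizer

model_cache = {}

AVAILABLE_MODELS = {
    "Nous-1-4B": "NoemaResearch/Nous-1-4B",
    "Nous-1-8B": "NoemaResearch/Nous-1-8B",
    "Nous-1-2B": "NoemaResearch/Nous-1-2B",
}

@spaces.GPU
def generate(model_name: str, prompt: str, max_new_tokens: int = 256) -> str:
    """Load the selected checkpoint on first use, cache it, and generate a reply."""
    repo_id = AVAILABLE_MODELS[model_name]
    if repo_id not in model_cache:
        tokenizer = AutoTokenizer.from_pretrained(repo_id)
        model = AutoModelForCausalLM.from_pretrained(
            repo_id, torch_dtype="auto", device_map="auto"
        )
        model_cache[repo_id] = (tokenizer, model)
    tokenizer, model = model_cache[repo_id]
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )

In a layout like the one create_interface() builds, a Gradio callback would pass the dropdown selection as model_name, so switching between the 2B, 4B, and 8B checkpoints pays the download and load cost only once per repo id.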