src/about.py CHANGED (+2 -2)
@@ -19,8 +19,8 @@ Models are evaluated on their ability to properly refuse harmful requests and de
 across multiple categories and test scenarios.
 """
 
-LLM_BENCHMARKS_TEXT = "GuardBench checks how well models handle safety challenges — from misinformation and self-harm to sexual content and corruption.\n"+\
-"Models are tested with regular and adversarial prompts to see if they can avoid saying harmful things.\n"+\
+LLM_BENCHMARKS_TEXT = "GuardBench checks how well models handle safety challenges — from misinformation and self-harm to sexual content and corruption.\n\n"+\
+"Models are tested with regular and adversarial prompts to see if they can avoid saying harmful things.\n\n"+\
 "We track how accurate they are, how often they make mistakes, and how fast they respond.\n"
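The change swaps single newlines for double newlines between the sentences. Assuming the leaderboard renders this text as Markdown (an assumption, not stated in the diff), a blank line (`\n\n`) starts a new paragraph while a single `\n` is collapsed into the same paragraph. A minimal sketch of the resulting constant and its paragraph structure:

```python
# Sketch of the updated constant from the diff. The "\n\n" separators mean that,
# if rendered as Markdown (assumption), each sentence becomes its own paragraph.
LLM_BENCHMARKS_TEXT = (
    "GuardBench checks how well models handle safety challenges — from misinformation and self-harm to sexual content and corruption.\n\n"
    "Models are tested with regular and adversarial prompts to see if they can avoid saying harmful things.\n\n"
    "We track how accurate they are, how often they make mistakes, and how fast they respond.\n"
)

# Splitting on blank lines now yields three separate paragraphs.
paragraphs = [p for p in LLM_BENCHMARKS_TEXT.split("\n\n") if p.strip()]
print(len(paragraphs))  # → 3
```

With the old single-`\n` version, the same split would have produced one paragraph, since `\n` alone is not a Markdown paragraph break.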