---
license: apache-2.0
base_model: bert-base-uncased
tags:
- lora
- semantic-router
- intent-classification
- text-classification
- candle
- rust
language:
- en
pipeline_tag: text-classification
library_name: candle
---
# lora_intent_classifier_modernbert-base_model
## Model Description
This is a LoRA (Low-Rank Adaptation) fine-tuned model based on **bert-base-uncased** for Intent Classification: it classifies text into categories such as business, technology, and science.
This model is part of the [semantic-router](https://github.com/vllm-project/semantic-router) project and is optimized for use with the Candle framework in Rust.
## Model Details
- **Base Model**: bert-base-uncased
- **Task**: Intent Classification
- **Framework**: Candle (Rust)
- **Model Size**: ~571MB
- **LoRA Rank**: 16
- **LoRA Alpha**: 32
- **Target Modules**: see `lora_config.json`
## Usage
### With semantic-router (Recommended)
```python
from semantic_router import SemanticRouter
# The model will be automatically downloaded and used
router = SemanticRouter()
results = router.classify_batch(["Your text here"])
```
### With Candle (Rust)
```rust
use candle_core::{DType, Device};
use candle_nn::VarBuilder;
use candle_transformers::models::bert::{BertModel, Config};
// Load the configuration and the safetensors weights, then build the model
let device = Device::Cpu;
let config: Config = serde_json::from_str(&std::fs::read_to_string("config.json")?)?;
let weights = unsafe { VarBuilder::from_mmaped_safetensors(&["model.safetensors"], DType::F32, &device)? };
let model = BertModel::load(weights, &config)?;
```
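Continuing from the snippet above, a minimal end-to-end sketch might tokenize the input with the `tokenizers` crate and run the encoder. Note the assumptions here: `BertModel` in candle_transformers exposes only the encoder (a classification head is not included, and the exact `forward` signature can differ between candle versions), and `anyhow` is assumed for error handling.
```rust
use candle_core::{IndexOp, Tensor};
use tokenizers::Tokenizer;
// Tokenize the input text with the bundled tokenizer.json
let tokenizer = Tokenizer::from_file("tokenizer.json").map_err(anyhow::Error::msg)?;
let encoding = tokenizer.encode("Your text here", true).map_err(anyhow::Error::msg)?;
let input_ids = Tensor::new(encoding.get_ids(), &device)?.unsqueeze(0)?;
let token_type_ids = input_ids.zeros_like()?;
// Run the encoder (recent candle versions also take an optional attention mask)
let hidden = model.forward(&input_ids, &token_type_ids, None)?;
// Pool the [CLS] position; a linear classification head (not part of candle's
// BertModel) would map this to class logits, and the predicted index is
// resolved to a label via label_mapping.json
let cls_embedding = hidden.i((.., 0, ..))?;
```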
## Training Details
This model was fine-tuned using the LoRA (Low-Rank Adaptation) technique with the hyperparameters below (a sketch of the resulting weight update follows the list):
- **Rank**: 16
- **Alpha**: 32
- **Dropout**: 0.1
- **Target Modules**: see `lora_config.json`
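At inference time the adapter's low-rank matrices can be merged back into the frozen base weights as W' = W + (alpha / rank) * B A. The sketch below is illustrative only; the tensor names and shapes are assumptions, not values taken from this repository, and it uses the rank and alpha listed above.
```rust
use candle_core::{DType, Device, Tensor};
// Illustrative shapes: a 768x768 projection with rank-16 adapters
let device = Device::Cpu;
let w = Tensor::randn(0f32, 1f32, (768, 768), &device)?;     // frozen base weight
let lora_a = Tensor::randn(0f32, 1f32, (16, 768), &device)?; // A: rank x in_features
let lora_b = Tensor::zeros((768, 16), DType::F32, &device)?; // B: out_features x rank
// W' = W + (alpha / rank) * (B A)
let scale = 32.0 / 16.0; // alpha = 32, rank = 16
let delta = lora_b.matmul(&lora_a)?.affine(scale, 0.0)?;
let merged = (&w + &delta)?;
assert_eq!(merged.dims(), &[768, 768]);
```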
## Performance
The model classifies text into intent categories such as business, technology, and science.
For detailed performance metrics, see the [training results](https://github.com/vllm-project/semantic-router/blob/main/training-result.md).
## Files
- `model.safetensors`: LoRA adapter weights
- `config.json`: Model configuration
- `lora_config.json`: LoRA-specific configuration
- `tokenizer.json`: Tokenizer configuration
- `label_mapping.json`: Label mappings for classification (see the sketch below)
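How labels are resolved depends on the schema of `label_mapping.json`. Assuming a flat index-to-name map such as `{"0": "business", "1": "technology", ...}` (this schema is an assumption, not documented here), it can be read with `serde_json`:
```rust
use std::collections::HashMap;
// Assumed schema: {"0": "business", "1": "technology", ...}
let raw = std::fs::read_to_string("label_mapping.json")?;
let id_to_label: HashMap<String, String> = serde_json::from_str(&raw)?;
// Resolve a predicted class index (e.g. the argmax over the logits) to its name
let predicted_class = 0usize;
let label = &id_to_label[&predicted_class.to_string()];
println!("predicted label: {label}");
```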
## Citation
If you use this model, please cite:
```bibtex
@misc{semantic-router-lora,
  title={LoRA Fine-tuned Models for Semantic Router},
  author={Semantic Router Team},
  year={2025},
  url={https://github.com/vllm-project/semantic-router}
}
```
## License
Apache 2.0