---
language:
  - en
tags:
  - sentence-transformers
  - cross-encoder
  - reranker
  - generated_from_trainer
  - dataset_size:942069
  - loss:PrecomputedDistillationLoss
base_model: jhu-clsp/ettin-encoder-68m
datasets:
  - dleemiller/all-nli-distill
pipeline_tag: text-classification
library_name: sentence-transformers
metrics:
  - f1_macro
  - f1_micro
  - f1_weighted
model-index:
  - name: CrossEncoder based on jhu-clsp/ettin-encoder-68m
    results:
      - task:
          type: cross-encoder-classification
          name: Cross Encoder Classification
        dataset:
          name: AllNLI dev
          type: AllNLI-dev
        metrics:
          - type: f1_macro
            value: 0.8974451466908199
            name: F1 Macro
          - type: f1_micro
            value: 0.8976446049753268
            name: F1 Micro
          - type: f1_weighted
            value: 0.8979023442463457
            name: F1 Weighted
      - task:
          type: cross-encoder-classification
          name: Cross Encoder Classification
        dataset:
          name: AllNLI test
          type: AllNLI-test
        metrics:
          - type: f1_macro
            value: 0.8959668105702971
            name: F1 Macro
          - type: f1_micro
            value: 0.8961640211640212
            name: F1 Micro
          - type: f1_weighted
            value: 0.8964174910602712
            name: F1 Weighted
license: mit
---

# EttinX Cross-Encoder: Natural Language Inference (NLI)

This cross-encoder performs sequence classification over contradiction/neutral/entailment labels, and is drop-in compatible with comparable Sentence Transformers cross-encoders.

To train this model, I added teacher logits from the `dleemiller/ModernCE-large-nli` model to the AllNLI dataset, published as `dleemiller/all-nli-distill`. This distillation significantly improves performance over standard training.
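For intuition, here is a minimal sketch of what a precomputed-logit distillation objective looks like. This is an illustration only, not the exact `PrecomputedDistillationLoss` used in training, and the `alpha` and `T` values are assumptions:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, alpha=0.5, T=2.0):
    """Blend a KL term against precomputed teacher logits with
    cross-entropy on the gold labels (illustrative values only)."""
    # Soften both distributions with a temperature T.
    teacher_log_probs = F.log_softmax(teacher_logits / T, dim=-1)
    student_log_probs = F.log_softmax(student_logits / T, dim=-1)
    # KL divergence between student and teacher soft targets,
    # rescaled by T^2 to keep gradient magnitudes comparable.
    kd = F.kl_div(student_log_probs, teacher_log_probs,
                  reduction="batchmean", log_target=True) * (T * T)
    # Standard cross-entropy against the gold NLI labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```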

This 68M-parameter architecture is based on ModernBERT and is an excellent candidate for lightweight CPU inference.


## Features

- **High performing:** Achieves 87.98% and 89.67% micro-F1 on MNLI mismatched and the SNLI test set, respectively.
- **Efficient architecture:** Based on the Ettin-68m encoder design (68M parameters), offering fast inference speeds.
- **Extended context length:** Processes sequences up to 8192 tokens, well suited for evaluating LLM outputs.

## Performance

| Model | MNLI Mismatched | SNLI Test | Context Length | # Parameters |
|---|---|---|---|---|
| dleemiller/ModernCE-large-nli | 0.9202 | 0.9110 | 8192 | 395M |
| dleemiller/ModernCE-base-nli | 0.9034 | 0.9025 | 8192 | 149M |
| cross-encoder/nli-deberta-v3-large | 0.9049 | 0.9220 | 512 | 435M |
| cross-encoder/nli-deberta-v3-base | 0.9004 | 0.9234 | 512 | 184M |
| dleemiller/EttinX-nli-s | 0.8798 | 0.8967 | 8192 | 68M |
| cross-encoder/nli-distilroberta-base | 0.8398 | 0.8838 | 512 | 82M |
| dleemiller/EttinX-nli-xs | 0.8380 | 0.8820 | 8192 | 32M |
| dleemiller/EttinX-nli-xxs | 0.8047 | 0.8695 | 8192 | 17M |

### Extended NLI Evaluation Results

F1-micro scores (equivalent to accuracy for single-label classification; see the snippet after the table) for each dataset.

| Model | finecat | mnli | mnli_mismatched | snli | anli_r1 | anli_r2 | anli_r3 | wanli | lingnli |
|---|---|---|---|---|---|---|---|---|---|
| dleemiller/finecat-nli-l | 0.8152 | 0.9088 | 0.9217 | 0.9259 | 0.7400 | 0.5230 | 0.5150 | 0.7424 | 0.8689 |
| tasksource/ModernBERT-large-nli | 0.7959 | 0.8983 | 0.9229 | 0.9188 | 0.7260 | 0.5110 | 0.4925 | 0.6978 | 0.8504 |
| dleemiller/ModernCE-large-nli | 0.7811 | 0.9088 | 0.9205 | 0.9273 | 0.6630 | 0.4860 | 0.4408 | 0.6576 | 0.8566 |
| tasksource/ModernBERT-base-nli | 0.7595 | 0.8685 | 0.8979 | 0.8915 | 0.6300 | 0.4820 | 0.4192 | 0.6632 | 0.8118 |
| dleemiller/ModernCE-base-nli | 0.7533 | 0.8923 | 0.9035 | 0.9187 | 0.5240 | 0.3950 | 0.3333 | 0.6464 | 0.8282 |
| dleemiller/EttinX-nli-s | 0.7251 | 0.8765 | 0.8798 | 0.9128 | 0.3360 | 0.2790 | 0.3083 | 0.6234 | 0.8012 |
| dleemiller/EttinX-nli-xs | 0.7013 | 0.8376 | 0.8380 | 0.8979 | 0.2780 | 0.2840 | 0.2800 | 0.5838 | 0.7521 |
| dleemiller/EttinX-nli-xxs | 0.6842 | 0.7988 | 0.8047 | 0.8851 | 0.2590 | 0.3060 | 0.2992 | 0.5426 | 0.7018 |
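As a quick sanity check on the metric: micro-averaged F1 reduces to plain accuracy for single-label multi-class predictions, because every misclassification contributes exactly one false positive and one false negative. A minimal sketch with made-up labels:

```python
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 1, 2, 2, 1, 0]  # made-up gold labels
y_pred = [0, 2, 2, 1, 1, 2]  # made-up predictions (3 of 6 correct)
acc = accuracy_score(y_true, y_pred)            # 0.5
f1 = f1_score(y_true, y_pred, average="micro")  # 0.5
assert abs(acc - f1) < 1e-12
```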

## Usage

To use EttinX for NLI tasks, load the model with the `sentence-transformers` library:

```python
from sentence_transformers import CrossEncoder

# Load EttinX model
model = CrossEncoder("dleemiller/EttinX-nli-s")

scores = model.predict([
    ('A man is eating pizza', 'A man eats something'),
    ('A black race car starts up in front of a crowd of people.', 'A man is driving down a lonely road.')
])

# Convert scores to labels
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)]
# ['entailment', 'contradiction']
```
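The scores above are unnormalized; if you want class probabilities instead, you can apply a softmax yourself (a minimal sketch using scipy):

```python
from scipy.special import softmax

# Per-pair probabilities over (contradiction, entailment, neutral).
probs = softmax(scores, axis=1)
```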

## Training Details

### Pretraining

We initialize the model from the `jhu-clsp/ettin-encoder-68m` weights.

Details (see the sketch after this list):

- Batch size: 256
- Learning rate: 1e-4
- Attention dropout: 0.1
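For illustration, a minimal sketch of loading the base encoder with these settings. This is not the exact training script, and the `attention_dropout` field assumes the ModernBERT-style config used by the Ettin encoders:

```python
from transformers import AutoConfig, AutoModelForSequenceClassification

# Hypothetical setup mirroring the listed hyperparameters.
config = AutoConfig.from_pretrained(
    "jhu-clsp/ettin-encoder-68m",
    num_labels=3,           # contradiction / entailment / neutral
    attention_dropout=0.1,  # attention dropout from the list above
)
model = AutoModelForSequenceClassification.from_pretrained(
    "jhu-clsp/ettin-encoder-68m", config=config
)
```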

### Fine-Tuning

Fine-tuning was performed on the `dleemiller/all-nli-distill` dataset.
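To inspect the distillation data yourself, you can load it with the `datasets` library (a minimal sketch; the split name and column layout are assumptions):

```python
from datasets import load_dataset

# Load the distillation dataset; the "train" split name is an assumption.
ds = load_dataset("dleemiller/all-nli-distill", split="train")
print(ds[0])  # expect premise/hypothesis text, a gold label, and teacher logits
```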

### Validation Results

The model achieved the following test-set micro-F1 performance after fine-tuning:

- MNLI Mismatched: 0.8798
- SNLI: 0.8967

## Model Card

- **Architecture:** Ettin-encoder-68m
- **Fine-tuning data:** `dleemiller/all-nli-distill`

## Thank You

Thanks to the Johns Hopkins CLSP team for providing the Ettin encoder models, and to the Sentence Transformers team for their leadership in transformer encoder models.


## Citation

If you use this model in your research, please cite:

```bibtex
@misc{ettinxnli2025,
  author = {Miller, D. Lee},
  title = {EttinX NLI: An NLI cross encoder model},
  year = {2025},
  publisher = {Hugging Face Hub},
  url = {https://huggingface.co/dleemiller/EttinX-nli-s},
}
```

## License

This model is licensed under the MIT License.