Model Card

We release metatune-gpt20b, an open-weight fine-tuned version of OpenAI's gpt-oss-20b model and one of the first publicly released recursive self-improving AI models. In each round, the model:

  • Generates new training data for itself,
  • Evaluates its own performance,
  • Adjusts its own hyperparameters based on improvement metrics, and
  • Fine-tunes itself automatically using Unsloth SFT techniques (a conceptual sketch of this loop follows).
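
A conceptual sketch of one self-improvement round, assuming the four stages above are supplied as callables. All names here are illustrative placeholders, not the released training code:

def self_improvement_round(model, generate_data, evaluate, adjust_hparams, finetune):
    # 1. The model generates new training data for itself.
    synthetic_data = generate_data(model)
    # 2. Its current performance is measured.
    score = evaluate(model)
    # 3. Hyperparameters are adjusted based on the improvement metric.
    hparams = adjust_hparams(score)
    # 4. The model is fine-tuned (e.g., Unsloth SFT) on the new data.
    return finetune(model, synthetic_data, hparams)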

Use cases:

  • Scientific and mathematical understanding at a postdoctoral level
    • Topics: Euler–Lagrange equation, vector calculus, statistical mechanics
  • Coding

Guardrails:

  • In general, set reasoning = "high"; this usually helps prevent jailbreaking and prompt injection.
  • Run openai/gpt-oss-safeguard-20b as a guardrail in front of this model (a minimal sketch follows).
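
A minimal sketch of that ordering, assuming both models are hosted locally with Transformers. The policy wording and the refusal check are illustrative; gpt-oss-safeguard expects a safety policy you supply, and its exact output format depends on that policy:

from transformers import pipeline

# Screen the user prompt with the safeguard model before the main model sees it.
guard = pipeline("text-generation", model="openai/gpt-oss-safeguard-20b", device_map="auto")
main = pipeline("text-generation", model="EpistemeAI/metatune-gpt20b", device_map="auto")

def guarded_generate(user_prompt: str, policy: str) -> str:
    # The safeguard model classifies content against the supplied policy.
    verdict = guard(
        [{"role": "system", "content": policy},
         {"role": "user", "content": user_prompt}],
        max_new_tokens=128,
    )[0]["generated_text"][-1]["content"]
    if "violation" in verdict.lower():  # illustrative check; parse per your policy format
        return "Request blocked by safety policy."
    return main(
        [{"role": "user", "content": user_prompt}],
        max_new_tokens=1024,
    )[0]["generated_text"][-1]["content"]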

Inference examples

Transformers

You can use metatune-gpt20b with Transformers, just like the base gpt-oss models. If you use the Transformers chat template, it will automatically apply the harmony response format. If you use model.generate directly, you need to apply the harmony format manually via the chat template, or use OpenAI's openai-harmony package.

To get started, install the necessary dependencies to set up your environment:

pip install -U transformers kernels torch 

For Google Colab (free/Pro)

!pip install -q --upgrade torch

!pip install -q transformers triton==3.4 kernels

!pip uninstall -q torchvision torchaudio -y

Once set up, you can run the model with the snippet below:

from transformers import pipeline

model_id = "EpistemeAI/metatune-gpt20b"

# Build a text-generation pipeline; the chat template applies the harmony format.
pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Derive the Euler–Lagrange equation from the principle of stationary action."},
]

outputs = pipe(
    messages,
    max_new_tokens=3000,
)
# The last message in generated_text is the assistant's reply.
print(outputs[0]["generated_text"][-1])
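
If you prefer model.generate over the pipeline, a minimal sketch that applies the chat template (and therefore the harmony format) manually:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EpistemeAI/metatune-gpt20b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Explain the divergence theorem."}]
# apply_chat_template renders the messages into the harmony response format.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))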

Reasoning levels

You can adjust the reasoning level that suits your task across three levels:

  • Low: Fast responses for general dialogue.
  • Medium: Balanced speed and detail.
  • High: Deep and detailed analysis.

The reasoning level can be set in the system prompt, e.g., "Reasoning: high".
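
For example, using the messages format from the Transformers snippet above:

messages = [
    {"role": "system", "content": "Reasoning: high"},
    {"role": "user", "content": "Derive the Euler–Lagrange equation from the principle of stationary action."},
]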

Tool use

The gpt-oss models are excellent for:

  • Web browsing (using built-in browsing tools)
  • Function calling with defined schemas
  • Agentic operations like browser tasks
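
A minimal function-calling sketch using the standard tools argument of apply_chat_template; the get_weather function and its stubbed return value are illustrative, not part of this model's release:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EpistemeAI/metatune-gpt20b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

def get_weather(city: str) -> str:
    """
    Get the current weather for a city.

    Args:
        city: Name of the city to look up.
    """
    return "sunny, 22 °C"  # illustrative stub

messages = [{"role": "user", "content": "What is the weather in Paris?"}]
inputs = tokenizer.apply_chat_template(
    messages,
    tools=[get_weather],  # the schema is extracted from the signature and docstring
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))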

Fine-tuning

Both gpt-oss models can be fine-tuned for a variety of specialized use cases.

This smaller model gpt-oss-20b can be fine-tuned on consumer hardware, whereas the larger gpt-oss-120b can be fine-tuned on a single H100 node.
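
A minimal Unsloth SFT sketch for the 20B model, assuming a local JSONL dataset with a "text" column. The LoRA settings, step count, and dataset here are illustrative, not the recipe used to train this model:

from unsloth import FastLanguageModel
from trl import SFTConfig, SFTTrainer
from datasets import load_dataset

# Load the 4-bit base that this model was fine-tuned from.
model, tokenizer = FastLanguageModel.from_pretrained(
    "unsloth/gpt-oss-20b-unsloth-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)
# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("json", data_files="train.jsonl", split="train")  # expects a "text" column

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",
        per_device_train_batch_size=2,
        max_steps=60,
        output_dir="outputs",
    ),
)
trainer.train()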

Benchmark

These benchmarks are a current snapshot rather than final numbers; the recursive fine-tuning technique means the model self-improves over time. Evaluation configuration:

hf (pretrained=EpistemeAI/metatune-gpt20b-R0,parallelize=True,dtype=bfloat16), gen_kwargs: (temperature=1,top_p=1,max_new_tokens=1000), limit: 30.0, num_fewshot: 5, batch_size: 1

Tasks                       metatune   MiniMax M1 80k   Llama 4 Maverick
gsm8k_cot                   0.91       -                -
gpqa_diamond_cot_n_shot     0.722      0.70             0.67
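
The configuration line above comes from lm-evaluation-harness; a command along these lines should reproduce the run (assuming lm-eval is installed):

lm_eval --model hf \
    --model_args pretrained=EpistemeAI/metatune-gpt20b-R0,parallelize=True,dtype=bfloat16 \
    --tasks gsm8k_cot,gpqa_diamond_cot_n_shot \
    --num_fewshot 5 --batch_size 1 --limit 30 \
    --gen_kwargs temperature=1,top_p=1,max_new_tokens=1000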

Thank you

  • OpenAI
  • Unsloth
  • Google Colab
  • Nvidia for A100

Uploaded finetuned model

  • Developed by: EpistemeAI
  • License: apache-2.0
  • Finetuned from model: unsloth/gpt-oss-20b-unsloth-bnb-4bit
  • Model size: 22B params (safetensors, BF16 / U8 tensors)

This gpt-oss model was trained 2x faster with Unsloth and Hugging Face's TRL library.
