RLHF-Aligned GPT-2 Pipeline Models

This repository contains the three key models from an end-to-end, from-scratch implementation of the Reinforcement Learning from Human Feedback (RLHF) pipeline. The project's goal was to align a base gpt2 model with human preferences, following the same three-stage process popularized by models like ChatGPT.

The complete training code, notebooks, and in-depth analysis can be found in the primary GitHub repository: nabeelshan78/reinforcement-learning-human-feedback-scratch

🎯 Models in this Repository

This repository hosts the final checkpoint for each stage of the RLHF pipeline. You can load each model independently using the subfolder argument, as sketched right after the list below.

  1. sft_full_final - Supervised Fine-Tuned (SFT) Model: The base gpt2 model after being fine-tuned on an instruction dataset (Dahoas/synthetic-instruct-gptj-pairwise) to learn a helpful response style.

  2. reward_model_final - Reward Model (RM): A gpt2-based model trained to predict human preferences. It takes a prompt and a response and outputs a scalar reward score, indicating how "good" the response is. This model acts as an automated human preference judge.

  3. ppo_aligned_final - PPO-Aligned Model: The final, alignment-tuned model. This is the SFT model further trained using Proximal Policy Optimization (PPO) and the Reward Model to generate responses that maximize the reward score. This is the main model intended for generation tasks.
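
For example, the SFT checkpoint can be loaded with the same subfolder pattern. A minimal sketch, assuming sft_full_final is a standard causal-LM checkpoint like the PPO model used in the usage section below:

from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "nabeelshan/rlhf-gpt2-pipeline"

# Load the SFT checkpoint from its subfolder; the same pattern applies to the other stages
tokenizer = AutoTokenizer.from_pretrained(model_id, subfolder="sft_full_final")
model = AutoModelForCausalLM.from_pretrained(model_id, subfolder="sft_full_final")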


πŸš€ How to Use

1. Using the Final PPO-Aligned Model (for Text Generation)

This is the recommended model for generating helpful, aligned responses.

from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM

# Define the model ID and the specific model subfolder
model_id = "nabeelshan/rlhf-gpt2-pipeline"
subfolder = "ppo_aligned_final"

# Load the tokenizer and model from the subfolder
tokenizer = AutoTokenizer.from_pretrained(model_id, subfolder=subfolder)
model = AutoModelForCausalLM.from_pretrained(model_id, subfolder=subfolder)

# Set up the text generation pipeline
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Generate a response
prompt = "How do I price my artwork?"
output = generator(prompt, max_new_tokens=100, num_return_sequences=1, pad_token_id=tokenizer.eos_token_id)

print(output[0]['generated_text'])
# Expected Output (example):
# To price your art, start by researching the artist and their portfolio to determine what
# other artists are making... Consider also researching dealerships at the same time... Good luck.
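
You can also pass standard Hugging Face generation parameters through the pipeline call, for example to sample more varied outputs. The values below are illustrative, not the settings used in training:

# Sample a response instead of using the pipeline defaults (values are illustrative)
sampled = generator(
    prompt,
    max_new_tokens=100,
    do_sample=True,
    top_p=0.9,
    temperature=0.7,
    pad_token_id=tokenizer.eos_token_id,
)
print(sampled[0]['generated_text'])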

2. Using the Reward Model (for Scoring Responses)

You can use the reward model to score how much a human might prefer a given response. Because this checkpoint is stored as a PEFT adapter on top of a gpt2 sequence-classification head, it is loaded with the peft library rather than directly:

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel
from huggingface_hub import snapshot_download # Import the downloader tool

# --- CONFIGURATION ---
BASE_MODEL_ID = "openai-community/gpt2"
HF_MODEL_ID = "nabeelshan/rlhf-gpt2-pipeline"
SUBFOLDER = "reward_model_final"

print(f"Downloading model files from '{HF_MODEL_ID}'...")
local_model_path = snapshot_download(
    repo_id=HF_MODEL_ID,
    allow_patterns=f"{SUBFOLDER}/*"
)
local_adapter_path = f"{local_model_path}/{SUBFOLDER}"
print(f"   Successfully downloaded to: {local_adapter_path}")


print("Loading model from local path...")
tokenizer = AutoTokenizer.from_pretrained(local_adapter_path)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

base_model = AutoModelForSequenceClassification.from_pretrained(
    BASE_MODEL_ID,
    num_labels=1,
    pad_token_id=tokenizer.pad_token_id
)

model = PeftModel.from_pretrained(base_model, local_adapter_path)
model.eval()
print("   Model loaded successfully!")


prompt = "What diet should I follow to lose weight healthily?"
good_response = "A balanced, nutritious plan based on eating whole foods is best. Limit processed and sugary foods."
bad_response = "Just eat less lol."

def get_reward_score(prompt_text: str, response_text: str) -> float:
    """Tokenizes and calculates the reward score for a given prompt and response."""
    inputs = tokenizer(prompt_text, response_text, return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        result = model(**inputs)
        return result.logits[0].item()

score_good = get_reward_score(prompt, good_response)
score_bad = get_reward_score(prompt, bad_response)

print(f"\nScore for good response: {score_good:.2f}")
print(f"Score for bad response:  {score_bad:.2f}")



# The model should give a higher score to the better response.
# Expected: Score for good response: 2.15
# Expected: Score for bad response: -1.50
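
Because the reward model produces a single scalar per (prompt, response) pair, it can also be used to rank several candidate responses and keep the best one (best-of-n selection). The sketch below reuses the get_reward_score helper defined above; the candidate responses are invented for illustration:

# Rank candidate responses for the same prompt and keep the highest-scoring one
candidates = [
    "Just eat less lol.",
    "Track your calories and aim for a small, sustainable daily deficit.",
    "A balanced, nutritious plan based on eating whole foods is best. Limit processed and sugary foods.",
]

scored = [(get_reward_score(prompt, c), c) for c in candidates]
best_score, best_response = max(scored, key=lambda pair: pair[0])

print(f"Best response ({best_score:.2f}): {best_response}")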

Evaluation results

All metrics are self-reported and evaluated on Dahoas/synthetic-instruct-gptj-pairwise.

| Metric | Value |
|---|---|
| Average Reward Score | 2.370 |
| ROUGE-1 | 0.337 |
| ROUGE-2 | 0.139 |
| ROUGE-L | 0.252 |
| Preference Accuracy | 0.980 |
| ROUGE-1 | 0.353 |
| ROUGE-2 | 0.149 |
| ROUGE-L | 0.262 |