---

title: PromptTune
emoji: 🐠
colorFrom: indigo
colorTo: green
sdk: gradio
sdk_version: 5.48.0
app_file: app/gradio_interface.py
pinned: false
license: mit
short_description: MLOps for Prompt Engineering and Continuous Improvement.
---


# 🎵 PromptTune

**MLOps Toolkit for Interactive Prompt Engineering and Optimization**

PromptTune, also known as the Intelligent Prompt Optimizer (IPO-Meta), demonstrates a zero-GPU MLOps pipeline that uses LLM orchestration to automatically improve a system prompt based on continuous user feedback.

Hugging Face Spaces configuration reference: https://huggingface.co/docs/hub/spaces-config-reference

---

## 📖 Introduction

**PromptTune** is a modular MLOps toolkit for experimenting with, optimizing, and managing LLM prompts. It provides a streamlined interface for rewriting prompts, collecting feedback, and iteratively improving prompt performance, all while maintaining robust, auditable records of prompt changes and user interactions.

---
## 🚀 Features

**🤖 LLM Orchestration & Rewriting:** Leverages a **Meta-LLM** via the OpenRouter API to transform vague user inputs into highly structured, actionable system prompts, ensuring high-quality responses from the final **Task-LLM** (see the sketch after this feature list).

**♻️ Continuous Prompt Learning:** Implements a zero-GPU, feedback-driven loop in which enough **negative user ratings (Rating: 0)** automatically trigger the optimization workflow.

**⚙️ MLOps Deployment Pipeline:** Uses scheduled **GitHub Actions** to run the core Python script, automatically versioning, committing, and deploying the newly refined system prompt configuration back to the main branch.

**💾 Versioned Configuration Management:** Maintains a single source of truth for the active system prompt (`master_prompt.json`), ensuring **reproducibility** and enabling rollbacks.

**💻 Gradio Interface & Data Collection:** Provides a simple, Python-native web interface for user interaction and securely logs all raw feedback to inform the next nightly deployment cycle.

**📊 Observability Log:** Includes a dedicated status file (`status_log.txt`) that records the date and time of the last successful prompt deployment, giving a clear audit trail.
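
The orchestration and rewrite step can be pictured as a single chat-completion call. The snippet below is a minimal sketch, not the project's actual `core_logic.py`: it assumes the `openai` Python client pointed at OpenRouter's OpenAI-compatible endpoint, a placeholder model name, placeholder meta-instructions, and an `OPENROUTER_API_KEY` already set in the environment.

```python
# Minimal sketch of the Meta-LLM rewrite step (illustrative, not the project's code).
# Assumes the `openai` client against OpenRouter's OpenAI-compatible endpoint;
# the model name and instructions are placeholders, not project defaults.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

META_INSTRUCTIONS = (
    "You are a prompt engineer. Rewrite the user's vague request into a "
    "clear, structured system prompt for a downstream assistant."
)

def rewrite_prompt(vague_input: str, model: str = "openai/gpt-4o-mini") -> str:
    """Ask the Meta-LLM to turn a vague user input into a structured system prompt."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": META_INSTRUCTIONS},
            {"role": "user", "content": vague_input},
        ],
    )
    return response.choices[0].message.content

# The returned text would then serve as the Task-LLM's system prompt.
print(rewrite_prompt("help me write better emails"))
```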

---

## 🚀 Installation

1. **Clone the repository:**
   ```bash
   git clone https://github.com/your-username/promptTune.git
   cd promptTune
   ```

2. **Set up a Python environment:**
   ```bash
   python3 -m venv venv
   source venv/bin/activate
   ```

3. **Install dependencies:**
   ```bash
   pip install -r requirements.txt
   ```

4. **Configure environment variables:**
   - Create a `.env` file in the project root and add your OpenRouter (or other OpenAI-compatible) API key:
     ```
     OPENROUTER_API_KEY=your_api_key_here
     ```
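If the code loads this key with `python-dotenv` (an assumption; check `requirements.txt` for the actual dependency), startup would look roughly like this:

```python
# Illustrative only: load the API key from .env at startup.
# Assumes the python-dotenv package; adjust if the project wires config differently.
import os

from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory
api_key = os.getenv("OPENROUTER_API_KEY")
if not api_key:
    raise RuntimeError("OPENROUTER_API_KEY is not set; add it to your .env file.")
```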


---

## ⚡ Usage

### 1. **Run the Gradio Web App**
   ```bash
   python -m app.gradio_interface
   ```
   - **Interact:** Enter prompts, view responses, and provide feedback via the web UI.
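
For orientation, a stripped-down interface with feedback logging might look like the sketch below. It is not the project's `gradio_interface.py`: the callback names, the placeholder response function, the feedback file layout, and the 👍/👎-to-rating mapping are assumptions.

```python
# Illustrative Gradio sketch: prompt in, response out, 👍/👎 feedback logged to JSON.
# Function names and the feedback schema are assumptions, not the project's code.
import datetime
import json
import pathlib

import gradio as gr

FEEDBACK_FILE = pathlib.Path("data/feedback_log.json")

def answer(user_prompt: str) -> str:
    # Placeholder; the real app would call the Task-LLM here (see app/core_logic.py).
    return f"(model response to: {user_prompt})"

def log_feedback(user_prompt: str, response: str, rating: int) -> str:
    # Append one feedback record to the JSON log read by the optimization script.
    FEEDBACK_FILE.parent.mkdir(exist_ok=True)
    entries = json.loads(FEEDBACK_FILE.read_text()) if FEEDBACK_FILE.exists() else []
    entries.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "prompt": user_prompt,
        "response": response,
        "rating": rating,  # 1 = positive, 0 = negative
    })
    FEEDBACK_FILE.write_text(json.dumps(entries, indent=2))
    return "Thanks for the feedback!"

with gr.Blocks() as demo:
    prompt_box = gr.Textbox(label="Your prompt")
    response_box = gr.Textbox(label="Response")
    status = gr.Markdown()
    gr.Button("Submit").click(answer, prompt_box, response_box)
    gr.Button("👍").click(lambda p, r: log_feedback(p, r, 1), [prompt_box, response_box], status)
    gr.Button("👎").click(lambda p, r: log_feedback(p, r, 0), [prompt_box, response_box], status)

if __name__ == "__main__":
    demo.launch()
```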

### 2. **Optimize Prompts via Script**
   ```bash
   python scripts/optimize_prompt.py
   ```
   - This script reviews feedback logs and updates the master prompt for improved results.
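
Conceptually, the script counts negative ratings and, once a threshold is reached, obtains a revised system prompt and writes it back to `master_prompt.json` for the next deployment. The sketch below is a simplified stand-in for `scripts/optimize_prompt.py`; the threshold value, the JSON schema, and the placeholder revision function are assumptions.

```python
# Simplified stand-in for scripts/optimize_prompt.py (illustrative only).
# The threshold, file schema, and revision logic are assumptions.
import json
import pathlib

FEEDBACK_FILE = pathlib.Path("data/feedback_log.json")
MASTER_PROMPT_FILE = pathlib.Path("data/master_prompt.json")
NEGATIVE_THRESHOLD = 5  # hypothetical trigger threshold

def propose_revision(current_prompt: str, bad_inputs: list[str]) -> str:
    # Placeholder: in the real pipeline this step would call the Meta-LLM
    # (see the OpenRouter sketch in the Features section).
    return current_prompt + "\n\n# TODO: revision informed by negative feedback"

def optimize() -> None:
    feedback = json.loads(FEEDBACK_FILE.read_text()) if FEEDBACK_FILE.exists() else []
    negatives = [entry for entry in feedback if entry.get("rating") == 0]
    if len(negatives) < NEGATIVE_THRESHOLD:
        print(f"Only {len(negatives)} negative ratings; skipping optimization.")
        return

    current = json.loads(MASTER_PROMPT_FILE.read_text())
    revised = propose_revision(current["system_prompt"], [e["prompt"] for e in negatives])
    MASTER_PROMPT_FILE.write_text(json.dumps({"system_prompt": revised}, indent=2))
    print("master_prompt.json updated; the GitHub Actions workflow commits and deploys it.")

if __name__ == "__main__":
    optimize()
```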

### 3. **Project Structure**
   ```
   promptTune/
   ├── app/
   │   ├── __init__.py
   │   ├── core_logic.py
   │   └── gradio_interface.py
   ├── data/
   │   ├── feedback_log.json
   │   └── master_prompt.json
   └── scripts/
       └── optimize_prompt.py
   ```

---

## 🤝 Contributing

We welcome contributions! To get started:

1. Fork the repository.
2. Create a branch for your feature or fix (`git checkout -b feature-name`).
3. Commit your changes.
4. Submit a pull request with a clear description.

**Please ensure all code is well-documented and tested.**

---

## 📄 License

This project is licensed under the [MIT License](LICENSE).

---

> **Maintained by [Manisankarrr](https://github.com/Manisankarrr)**



🔗 GitHub Repo: https://github.com/Manisankarrr/promptTune