---
title: Build AI Agent Text-to-SQL
emoji: 🗣️
colorFrom: green
colorTo: blue
sdk: static
pinned: false
---
# Intelligent Text-to-SQL Agent with `smolagents`
This project demonstrates building a robust Text-to-SQL AI agent using the `smolagents` framework, capable of translating natural language queries into SQL, executing them, and intelligently processing the results, including handling complex scenarios like table joins.
## Table of Contents
- [Intelligent Text-to-SQL Agent with `smolagents`](#intelligent-text-to-sql-agent-with-smolagents)
- [Table of Contents](#table-of-contents)
- [Why an AI Agent for Text-to-SQL?](#why-an-ai-agent-for-text-to-sql)
- [Features](#features)
- [Installation](#installation)
- [Project Structure](#project-structure)
- [Usage](#usage)
  - [Setup and Dependencies](#setup-and-dependencies)
  - [Database Initialization](#database-initialization)
  - [Creating the SQL Tool](#creating-the-sql-tool)
  - [Instantiating the Agent (Single Table)](#instantiating-the-agent-single-table)
  - [Querying the Agent: Single Table](#querying-the-agent-single-table)
  - [Extending for Table Joins](#extending-for-table-joins)
  - [Querying the Agent: Multi-Table](#querying-the-agent-multi-table)
- [How it Works](#how-it-works)
- [Key Concepts Demonstrated](#key-concepts-demonstrated)
- [Contributing](#contributing)
- [License](#license)
## Why an AI Agent for Text-to-SQL?
Traditional Text-to-SQL pipelines often suffer from brittleness:
- **Syntactic Errors:** Generated SQL queries might be invalid, leading to execution failures.
- **Semantic Errors:** Even syntactically valid queries can return wrong or irrelevant results without any error message, so failures are silent and the answers may be misleading.
**An agent-based system overcomes these limitations by:**
- **Critical Inspection:** Analyzing query outputs and execution logs.
- **Self-Correction:** Identifying errors or suboptimal results and iteratively refining the SQL query or subsequent processing steps.
- **Enhanced Robustness:** Providing a more reliable and intelligent way to interact with databases from natural language.
## Features
- **Natural Language to SQL:** Translates user questions into executable SQL queries.
- **Database Interaction:** Executes SQL queries against an in-memory SQLite database.
- **Intelligent Parsing:** Processes and extracts relevant information from SQL query results.
- **Self-Correction:** Learns from execution errors and refines its approach.
- **Multi-Table Querying:** Supports questions requiring joins across multiple tables.
- **LLM Flexibility:** Integrates with various Large Language Models (LLMs) via `smolagents`.
## Installation
To get started, clone this repository and install the required dependencies:
```bash
git clone https://github.com/your-username/text-to-sql-agent.git
cd text-to-sql-agent
pip install smolagents python-dotenv sqlalchemy --upgrade -q
```
**Note:** To interact with Large Language Models via inference providers (e.g., Hugging Face Inference API), you'll need a valid authentication token set as an environment variable, typically `HF_TOKEN`.
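For example, you can export the token in your shell or keep it in a `.env` file that `python-dotenv` loads at startup (the token value below is a placeholder):

```bash
# Option 1: export for the current shell session (placeholder value)
export HF_TOKEN=hf_xxxxxxxxxxxxxxxxxxxx

# Option 2: store it in a .env file picked up by load_dotenv() (do not commit this file)
echo "HF_TOKEN=hf_xxxxxxxxxxxxxxxxxxxx" > .env
```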
## Project Structure
The core logic of this project is encapsulated in `text_to_sql.py`.
```text
.
βββ README.md
βββ text_to_sql.py
```
## Usage
This section walks through the `text_to_sql.py` script, explaining each part of building and using the agent.
### Setup and Dependencies
First, load your environment variables, including your LLM token.
```python
# text_to_sql.py
from dotenv import load_dotenv
load_dotenv()
from sqlalchemy import (
    create_engine,
    MetaData,
    Table,
    Column,
    String,
    Integer,
    Float,
    insert,
    inspect,
    text,
)
from smolagents import tool, CodeAgent, InferenceClientModel
# ... (rest of the code)
```
### Database Initialization
We set up an in-memory SQLite database using SQLAlchemy, defining `receipts` and `waiters` tables and populating them with sample data.
```python
# text_to_sql.py
engine = create_engine("sqlite:///:memory:")
metadata_obj = MetaData()

def insert_rows_into_table(rows, table, engine=engine):
    for row in rows:
        stmt = insert(table).values(**row)
        with engine.begin() as connection:
            connection.execute(stmt)

# Define the 'receipts' table
receipts = Table(
    "receipts",
    metadata_obj,
    Column("receipt_id", Integer, primary_key=True),
    Column("customer_name", String(255)),  # Adjusted from String(16) for longer names
    Column("price", Float),
    Column("tip", Float),
)
metadata_obj.create_all(engine)

# Sample data for 'receipts'
rows = [
    {"receipt_id": 1, "customer_name": "Alan Payne", "price": 12.06, "tip": 1.20},
    {"receipt_id": 2, "customer_name": "Alex Mason", "price": 23.86, "tip": 0.24},
    {"receipt_id": 3, "customer_name": "Woodrow Wilson", "price": 53.43, "tip": 5.43},
    {"receipt_id": 4, "customer_name": "Margaret James", "price": 21.11, "tip": 1.00},
]
insert_rows_into_table(rows, receipts)

# Print table schema (for LLM context)
inspector = inspect(engine)
columns_info = [(col["name"], col["type"]) for col in inspector.get_columns("receipts")]
table_description = "Columns:\n" + "\n".join([f"  - {name}: {col_type}" for name, col_type in columns_info])
print(table_description)
```
**Output:**
```
Columns:
  - receipt_id: INTEGER
  - customer_name: VARCHAR(255)
  - price: FLOAT
  - tip: FLOAT
```
### Creating the SQL Tool
The `sql_engine` function acts as the agent's interface to the database. Its detailed docstring provides the LLM with crucial information about its functionality and the database schema.
```python
# text_to_sql.py
@tool
def sql_engine(query: str) -> str:
    """
    Enables execution of SQL queries against the database.
    Outputs the query results as a formatted string.

    Known tables and their column structures:

    Table 'receipts':
      Columns:
        - receipt_id: INTEGER (Primary Key)
        - customer_name: VARCHAR(255)
        - price: FLOAT
        - tip: FLOAT

    Args:
        query: The precise SQL query string to be executed.
            Example: "SELECT customer_name FROM receipts WHERE price > 10.0;"
    """
    output = ""
    with engine.connect() as con:
        rows = con.execute(text(query))
        for row in rows:
            output += "\n" + str(row)
    return output
```
### Instantiating the Agent (Single Table)
We create a `CodeAgent` and provide it with the `sql_engine` tool and an LLM (e.g., `meta-llama/Llama-3.1-8B-Instruct`).
```python
# text_to_sql.py
agent = CodeAgent(
    tools=[sql_engine],
    model=InferenceClientModel(model_id="meta-llama/Llama-3.1-8B-Instruct"),
)
```
### Querying the Agent: Single Table
Now, we can ask the agent a question and observe its problem-solving process, including self-correction.
```python
# text_to_sql.py
agent.run("Can you give me the name of the client who got the most expensive receipt?")
```
**Expected Agent Output (summarized):**
The agent will attempt several SQL queries, potentially encountering syntax errors or parsing issues with the raw string output from `sql_engine`. Through iterative self-correction, it will eventually generate and execute `SELECT MAX(price), customer_name FROM receipts ORDER BY price DESC LIMIT 1`, parse the result `(53.43, 'Woodrow Wilson')`, and identify 'Woodrow Wilson'.
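To sanity-check this outside the agent loop, you can call the tool directly, since objects created with `@tool` remain callable like plain functions. This is just a quick verification sketch; the exact query the agent converges on may differ between runs:

```python
# Direct tool call (no agent) to confirm what the single-table answer should be.
result = sql_engine("SELECT customer_name, price FROM receipts ORDER BY price DESC LIMIT 1")
print(result)  # prints the matching row, e.g. ('Woodrow Wilson', 53.43)
```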
### Extending for Table Joins
To handle more complex queries, we add a `waiters` table and update the `sql_engine` tool's description to include its schema.
```python
# text_to_sql.py
# Define the 'waiters' table
waiters = Table(
    "waiters",
    metadata_obj,
    Column("receipt_id", Integer, primary_key=True),
    Column("waiter_name", String(16), primary_key=True),
)
metadata_obj.create_all(engine)
# Sample data for 'waiters'
rows = [
    {"receipt_id": 1, "waiter_name": "Corey Johnson"},
    {"receipt_id": 2, "waiter_name": "Michael Watts"},
    {"receipt_id": 3, "waiter_name": "Michael Watts"},
    {"receipt_id": 4, "waiter_name": "Margaret James"},
]
insert_rows_into_table(rows, waiters)
# Update the tool's description to include the new table
updated_description = """This tool allows performing SQL queries on the database, returning results as a string.
It can access the following tables:"""
inspector = inspect(engine)
for table in ["receipts", "waiters"]:
    columns_info = [(col["name"], col["type"]) for col in inspector.get_columns(table)]
    table_description = f"Table '{table}':\n"
    table_description += "  Columns:\n" + "\n".join([f"    - {name}: {col_type}" for name, col_type in columns_info])
    updated_description += "\n\n" + table_description
print(updated_description)
sql_engine.description = updated_description # Update the tool's description
```
**Output:**
```
This tool allows performing SQL queries on the database, returning results as a string.
It can access the following tables:

Table 'receipts':
  Columns:
    - receipt_id: INTEGER
    - customer_name: VARCHAR(255)
    - price: FLOAT
    - tip: FLOAT

Table 'waiters':
  Columns:
    - receipt_id: INTEGER
    - waiter_name: VARCHAR(16)
```
### Querying the Agent: Multi-Table
We switch to a more powerful LLM (`Qwen/Qwen2.5-Coder-32B-Instruct`) for this harder task.
```python
# text_to_sql.py
agent = CodeAgent(
    tools=[sql_engine],
    model=InferenceClientModel(model_id="Qwen/Qwen2.5-Coder-32B-Instruct"),
)
agent.run("Which waiter received the highest total amount in tips?")
```
**Expected Agent Output (summarized):**
The agent will formulate a SQL query to join `waiters` and `receipts` tables (e.g., `SELECT w.waiter_name, r.tip FROM waiters w JOIN receipts r ON w.receipt_id = r.receipt_id`). It will then process the results in Python to sum tips per waiter and identify "Michael Watts" as having the highest total tips.
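For comparison, the whole computation can also be pushed into a single SQL statement. Here is a sketch that runs such a query directly through the tool to cross-check the agent's answer (the agent itself may choose a different split between SQL and Python):

```python
# Join, aggregate, and rank entirely in SQL to verify the multi-table answer.
query = """
SELECT w.waiter_name, SUM(r.tip) AS total_tips
FROM waiters AS w
JOIN receipts AS r ON w.receipt_id = r.receipt_id
GROUP BY w.waiter_name
ORDER BY total_tips DESC
LIMIT 1
"""
print(sql_engine(query))  # expected waiter: 'Michael Watts' (total tips ~ 5.67)
```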
## How it Works
The `smolagents` `CodeAgent` operates on the **ReAct (Reasoning + Acting)** framework:
1. **Reasoning (LLM as Brain):** A Large Language Model (e.g., Llama-3.1, Qwen2.5) interprets the natural language prompt and decides on a course of action.
2. **Acting (Tools as Hands):** If an external interaction is needed (like querying a database), the LLM generates Python code to call a registered `@tool` (e.g., `sql_engine("...")`). The tool's `docstring` (description) is critical for the LLM to understand its capabilities.
3. **Observation & Feedback:** The generated code is executed. The output (e.g., database results, error messages) is fed back to the LLM.
4. **Self-Correction & Iteration:** The LLM analyzes the feedback. If there's an error or the result is unsatisfactory, it refines its reasoning and generates new code, iterating until the task is complete or deemed unfeasible.
This iterative process allows the agent to solve complex problems and recover from errors, making it more robust than traditional direct translation methods.
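Because the tool's description is part of what the LLM reasons over, it can be useful to inspect that metadata directly. A minimal sketch, assuming the standard `smolagents` tool attributes (`name`, `inputs`) alongside the `description` field updated earlier:

```python
# Inspect the metadata the agent surfaces to the LLM for the registered tool.
# (Attribute names assumed from the smolagents Tool interface.)
print(sql_engine.name)         # tool name the LLM calls, i.e. "sql_engine"
print(sql_engine.description)  # the schema text we overwrote above
print(sql_engine.inputs)       # declared argument names and types
```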
## Key Concepts Demonstrated
- **Agentic Frameworks:** Using `smolagents` to orchestrate LLM interactions and tool use.
- **Tool Creation:** Defining custom Python functions as tools for agents, complete with detailed descriptions.
- **Dynamic Tool Descriptions:** Updating tool information to reflect changes in available data (e.g., new database tables).
- **LLM Integration:** Leveraging various LLMs for different levels of reasoning complexity.
- **SQLAlchemy:** Programmatically interacting with databases in Python.
- **ReAct Paradigm:** The iterative cycle of reasoning, acting, and observation that enables self-correction.
## Contributing
Feel free to open issues or submit pull requests if you have suggestions or improvements!
## License
This project is open-sourced under the MIT License. See the `LICENSE` file for more details.