---
title: Build AI Agent Text-to-SQL
emoji: 🗣️
colorFrom: green
colorTo: blue
sdk: static
pinned: false
---


# Intelligent Text-to-SQL Agent with smolagents

This project demonstrates building a robust Text-to-SQL AI agent with the `smolagents` framework, capable of translating natural-language questions into SQL, executing the queries, and intelligently processing the results, including complex scenarios such as table joins.

## Table of Contents

- [Why an AI Agent for Text-to-SQL?](#why-an-ai-agent-for-text-to-sql)
- [Features](#features)
- [Installation](#installation)
- [Project Structure](#project-structure)
- [Usage](#usage)
- [How it Works](#how-it-works)
- [Key Concepts Demonstrated](#key-concepts-demonstrated)
- [Contributing](#contributing)
- [License](#license)

## Why an AI Agent for Text-to-SQL?

Traditional Text-to-SQL pipelines often suffer from brittleness:

- **Syntactic errors:** Generated SQL may be invalid, causing execution failures.
- **Semantic errors:** Even syntactically correct queries can return incorrect or irrelevant results with no error message, producing silent failures and misleading answers.
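A silent semantic failure is easy to reproduce. Here is a minimal, self-contained sketch (plain `sqlite3`, independent of the project code) in which a syntactically valid query runs without error yet answers the wrong question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE receipts (customer_name TEXT, price REAL)")
con.executemany(
    "INSERT INTO receipts VALUES (?, ?)",
    [("Alan Payne", 12.06), ("Woodrow Wilson", 53.43)],
)

# Intended question: "who has the most expensive receipt?"
# This query is syntactically valid but sorts ascending, so it silently
# returns the *cheapest* receipt -- no error message hints at the mistake.
row = con.execute(
    "SELECT customer_name FROM receipts ORDER BY price LIMIT 1"
).fetchone()
print(row[0])  # -> Alan Payne (wrong answer, zero errors)
```

A plain pipeline would pass this answer straight through; an agent can notice that the result contradicts the intent and retry.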

An agent-based system overcomes these limitations by:

- **Critical inspection:** Analyzing query outputs and execution logs.
- **Self-correction:** Identifying errors or suboptimal results and iteratively refining the SQL query or subsequent processing steps.
- **Enhanced robustness:** Providing a more reliable, intelligent way to query databases from natural language.

## Features

- **Natural language to SQL:** Translates user questions into executable SQL queries.
- **Database interaction:** Executes SQL queries against an in-memory SQLite database.
- **Intelligent parsing:** Processes and extracts relevant information from SQL query results.
- **Self-correction:** Learns from execution errors and refines its approach.
- **Multi-table querying:** Supports questions requiring joins across multiple tables.
- **LLM flexibility:** Integrates with various Large Language Models (LLMs) via `smolagents`.

## Installation

To get started, clone this repository and install the required dependencies:

```bash
git clone https://github.com/your-username/text-to-sql-agent.git
cd text-to-sql-agent
pip install smolagents python-dotenv sqlalchemy --upgrade -q
```

**Note:** To interact with Large Language Models via inference providers (e.g., Hugging Face Inference API), you'll need a valid authentication token set as an environment variable, typically `HF_TOKEN`.

## Project Structure

The core logic of this project is encapsulated in `text_to_sql.py`.

```text
.
β”œβ”€β”€ README.md
└── text_to_sql.py
```

## Usage

This section walks through the `text_to_sql.py` script, explaining each part of building and using the agent.

### Setup and Dependencies

First, load your environment variables, including your LLM token.

```python
# text_to_sql.py
from dotenv import load_dotenv
load_dotenv()

from sqlalchemy import (
    create_engine,
    MetaData,
    Table,
    Column,
    String,
    Integer,
    Float,
    insert,
    inspect,
    text,
)
from smolagents import tool, CodeAgent, InferenceClientModel

# ... (rest of the code)
```

### Database Initialization

We set up an in-memory SQLite database using SQLAlchemy, defining `receipts` and `waiters` tables and populating them with sample data.

```python
# text_to_sql.py
engine = create_engine("sqlite:///:memory:")
metadata_obj = MetaData()

def insert_rows_into_table(rows, table, engine=engine):
    for row in rows:
        stmt = insert(table).values(**row)
        with engine.begin() as connection:
            connection.execute(stmt)

# Define the 'receipts' table
receipts = Table(
    "receipts",
    metadata_obj,
    Column("receipt_id", Integer, primary_key=True),
    Column("customer_name", String(255)), # Adjusted from String(16) for longer names
    Column("price", Float),
    Column("tip", Float),
)
metadata_obj.create_all(engine)

# Sample data for 'receipts'
rows = [
    {"receipt_id": 1, "customer_name": "Alan Payne", "price": 12.06, "tip": 1.20},
    {"receipt_id": 2, "customer_name": "Alex Mason", "price": 23.86, "tip": 0.24},
    {"receipt_id": 3, "customer_name": "Woodrow Wilson", "price": 53.43, "tip": 5.43},
    {"receipt_id": 4, "customer_name": "Margaret James", "price": 21.11, "tip": 1.00},
]
insert_rows_into_table(rows, receipts)

# Print table schema (for LLM context)
inspector = inspect(engine)
columns_info = [(col["name"], col["type"]) for col in inspector.get_columns("receipts")]
table_description = "Columns:\n" + "\n".join([f"  - {name}: {col_type}" for name, col_type in columns_info])
print(table_description)
```

**Output:**

```text
Columns:
  - receipt_id: INTEGER
  - customer_name: VARCHAR(255)
  - price: FLOAT
  - tip: FLOAT
```

### Creating the SQL Tool

The `sql_engine` function acts as the agent's interface to the database. Its detailed docstring provides the LLM with crucial information about its functionality and the database schema.

```python
# text_to_sql.py
@tool
def sql_engine(query: str) -> str:
    """
    Enables execution of SQL queries against the database.
    Outputs the query results as a formatted string.

    Known tables and their column structures:
    Table 'receipts':
      Columns:
        - receipt_id: INTEGER (Primary Key)
        - customer_name: VARCHAR(255)
        - price: FLOAT
        - tip: FLOAT

    Args:
        query: The precise SQL query string to be executed.
               Example: "SELECT customer_name FROM receipts WHERE price > 10.0;"
    """
    output = ""
    with engine.connect() as con:
        rows = con.execute(text(query))
        for row in rows:
            output += "\n" + str(row)
    return output
```

### Instantiating the Agent (Single Table)

We create a `CodeAgent` and provide it with the `sql_engine` tool and an LLM (e.g., `meta-llama/Llama-3.1-8B-Instruct`).

```python
# text_to_sql.py
agent = CodeAgent(
    tools=[sql_engine],
    model=InferenceClientModel(model_id="meta-llama/Llama-3.1-8B-Instruct"),
)
```

### Querying the Agent: Single Table

Now, we can ask the agent a question and observe its problem-solving process, including self-correction.

```python
# text_to_sql.py
agent.run("Can you give me the name of the client who got the most expensive receipt?")
```

**Expected agent output (summarized):** The agent will attempt several SQL queries, potentially hitting syntax errors or stumbling while parsing the raw string output of `sql_engine`. Through iterative self-correction, it will eventually generate and execute `SELECT MAX(price), customer_name FROM receipts ORDER BY price DESC LIMIT 1`, parse the result `(53.43, 'Woodrow Wilson')`, and answer "Woodrow Wilson".
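The final query can be sanity-checked outside the agent. This standalone sketch (plain `sqlite3` with the same sample data, independent of the project code) reproduces the result:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE receipts (receipt_id INTEGER PRIMARY KEY, "
    "customer_name TEXT, price REAL, tip REAL)"
)
con.executemany(
    "INSERT INTO receipts VALUES (?, ?, ?, ?)",
    [
        (1, "Alan Payne", 12.06, 1.20),
        (2, "Alex Mason", 23.86, 0.24),
        (3, "Woodrow Wilson", 53.43, 5.43),
        (4, "Margaret James", 21.11, 1.00),
    ],
)

# The query the agent converges on in the walkthrough above.
# (SQLite guarantees that a bare column next to MAX() comes from
# the row where the maximum occurs.)
row = con.execute(
    "SELECT MAX(price), customer_name FROM receipts ORDER BY price DESC LIMIT 1"
).fetchone()
print(row)  # -> (53.43, 'Woodrow Wilson')
```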

### Extending for Table Joins

To handle more complex queries, we add a `waiters` table and update the `sql_engine` tool's description to include its schema.

```python
# text_to_sql.py
# Define the 'waiters' table
waiters = Table(
    "waiters",
    metadata_obj,
    Column("receipt_id", Integer, primary_key=True),
    Column("waiter_name", String(16), primary_key=True),
)
metadata_obj.create_all(engine)

# Sample data for 'waiters'
rows = [
    {"receipt_id": 1, "waiter_name": "Corey Johnson"},
    {"receipt_id": 2, "waiter_name": "Michael Watts"},
    {"receipt_id": 3, "waiter_name": "Michael Watts"},
    {"receipt_id": 4, "waiter_name": "Margaret James"},
]
insert_rows_into_table(rows, waiters)

# Update the tool's description to include the new table
updated_description = """This tool allows performing SQL queries on the database, returning results as a string.
It can access the following tables:"""

inspector = inspect(engine)
for table in ["receipts", "waiters"]:
    columns_info = [(col["name"], col["type"]) for col in inspector.get_columns(table)]
    table_description = f"Table '{table}':\n"
    table_description += "  Columns:\n" + "\n".join([f"    - {name}: {col_type}" for name, col_type in columns_info])
    updated_description += "\n\n" + table_description

print(updated_description)
sql_engine.description = updated_description  # Update the tool's description
```

**Output:**

```text
This tool allows performing SQL queries on the database, returning results as a string.
It can access the following tables:

Table 'receipts':
  Columns:
    - receipt_id: INTEGER
    - customer_name: VARCHAR(255)
    - price: FLOAT
    - tip: FLOAT

Table 'waiters':
  Columns:
    - receipt_id: INTEGER
    - waiter_name: VARCHAR(16)
```

### Querying the Agent: Multi-Table

We switch to a more capable LLM (`Qwen/Qwen2.5-Coder-32B-Instruct`) for this harder task.

```python
# text_to_sql.py
agent = CodeAgent(
    tools=[sql_engine],
    model=InferenceClientModel(model_id="Qwen/Qwen2.5-Coder-32B-Instruct"),
)

agent.run("Which waiter received the highest total amount in tips?")
```

**Expected agent output (summarized):** The agent formulates a SQL query joining the `waiters` and `receipts` tables (e.g., `SELECT w.waiter_name, r.tip FROM waiters w JOIN receipts r ON w.receipt_id = r.receipt_id`), then processes the results in Python to sum tips per waiter, identifying "Michael Watts" as the waiter with the highest total tips.
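The same answer can also be produced by a single SQL statement. This standalone `sqlite3` sketch (same sample data, independent of the agent code) confirms the expected result:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE receipts (receipt_id INTEGER PRIMARY KEY, tip REAL)")
con.execute("CREATE TABLE waiters (receipt_id INTEGER, waiter_name TEXT)")
con.executemany("INSERT INTO receipts VALUES (?, ?)",
                [(1, 1.20), (2, 0.24), (3, 5.43), (4, 1.00)])
con.executemany("INSERT INTO waiters VALUES (?, ?)",
                [(1, "Corey Johnson"), (2, "Michael Watts"),
                 (3, "Michael Watts"), (4, "Margaret James")])

# Join, sum tips per waiter, and keep only the top earner.
row = con.execute(
    "SELECT w.waiter_name, SUM(r.tip) AS total_tips "
    "FROM waiters w JOIN receipts r ON w.receipt_id = r.receipt_id "
    "GROUP BY w.waiter_name ORDER BY total_tips DESC LIMIT 1"
).fetchone()
print(row[0])  # -> Michael Watts (total tips 0.24 + 5.43 = 5.67)
```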

## How it Works

The smolagents CodeAgent operates on the ReAct (Reasoning + Acting) framework:

1. **Reasoning (LLM as brain):** A Large Language Model (e.g., Llama-3.1, Qwen2.5) interprets the natural-language prompt and decides on a course of action.
2. **Acting (tools as hands):** When an external interaction is needed (such as querying a database), the LLM generates Python code that calls a registered `@tool` (e.g., `sql_engine("...")`). The tool's docstring (description) is critical for the LLM to understand its capabilities.
3. **Observation and feedback:** The generated code is executed, and its output (database results, error messages) is fed back to the LLM.
4. **Self-correction and iteration:** The LLM analyzes the feedback. If there is an error or the result is unsatisfactory, it refines its reasoning and generates new code, iterating until the task is complete or deemed infeasible.

This iterative process allows the agent to solve complex problems and recover from errors, making it more robust than traditional direct translation methods.
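The loop above can be sketched in a few lines. In this toy version the LLM is replaced by a hard-coded stub that first emits a typo and then corrects itself after observing the error; everything here is illustrative and is not the actual `smolagents` internals:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE receipts (customer_name TEXT, price REAL)")
con.execute("INSERT INTO receipts VALUES ('Woodrow Wilson', 53.43)")

def stub_llm(observations):
    """Stand-in for the model: retries with a fix once it sees the error."""
    if any("no such column" in o for o in observations):
        return "SELECT customer_name FROM receipts ORDER BY price DESC LIMIT 1"
    return "SELECT custmer_name FROM receipts ORDER BY price DESC LIMIT 1"  # typo

observations, result = [], None
for _ in range(5):                                  # bounded ReAct loop
    query = stub_llm(observations)                  # Reasoning: propose an action
    try:
        result = con.execute(query).fetchone()      # Acting: run the "tool"
        break                                       # success -> stop iterating
    except sqlite3.OperationalError as exc:
        observations.append(str(exc))               # Observation: feed error back

print(result[0])  # -> Woodrow Wilson (after one self-correction)
```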

## Key Concepts Demonstrated

- **Agentic frameworks:** Using `smolagents` to orchestrate LLM interactions and tool use.
- **Tool creation:** Defining custom Python functions as tools for agents, complete with detailed descriptions.
- **Dynamic tool descriptions:** Updating tool information to reflect changes in available data (e.g., new database tables).
- **LLM integration:** Leveraging different LLMs for different levels of reasoning complexity.
- **SQLAlchemy:** Programmatically interacting with databases in Python.
- **ReAct paradigm:** The iterative cycle of reasoning, acting, and observing that enables self-correction.

## Contributing

Feel free to open issues or submit pull requests if you have suggestions or improvements!

## License

This project is open-sourced under the MIT License. See the LICENSE file for more details.