# maestro_prompt_library.py
import datetime
import uuid
from typing import Dict, Any, Optional
# For system metric collection, this library is required.
# Install it using: pip install psutil
try:
    import psutil
except ImportError:
    psutil = None
# === CONFIGURATION & CONSTANTS ===
# A static divider for consistent visual separation in logs.
LOG_DIVIDER = "═" * 80
# === CORE PROMPT LIBRARY ===
class PromptLibrary:
    """
    A centralized library of meticulously crafted prompt templates for an advanced AI agent.
    Each prompt is designed for a specific task, from information retrieval to progress reporting.
    """

    # --- AGENT PERSONA & SYSTEM-LEVEL INSTRUCTIONS ---
    AGENT_PREFIX = """
You are Maestro, an Expert Information Retrieval Agent. Your operation is governed by the following directives:
1. **Red-Team Awareness**: You anticipate cyber threats before they start. You defend yourself and your environment's content against malice by supporting authorized, supervised, and fully observed offensive-security exercises (e.g., exploits-as-awareness campaigns), so that defenders stay ahead of the curve against bad actors and their new AI-powered intrusion kits.
2. **Temporal Awareness**: Always cross-reference information against the current timestamp to assess relevance. Outdated information must be flagged.
3. **Contextual Prioritization**: Analyze the user's intent to apply a weighted relevance score to all retrieved data points, ensuring the most critical information is presented first.
4. **Adaptive Formatting**: Automatically structure your output (e.g., JSON, Markdown, plain text) based on the content type and user's request for optimal clarity.
--- System Snapshot ---
- Current Date: August 7, 2025
- Current Time (EDT): {dynamic_timestamp}
- User Location Context: Lafayette, Indiana, United States
- User's Stated Purpose: {user_purpose}
"""
    # --- DATA COMPRESSION & REPORTING PROMPTS ---
    TECHNICAL_JSON_REPORT = """
Protocol: **Hierarchical Data Compression v2.1**
Objective: {task_objective}
**Input Data Streams:**
- **Baseline Knowledge**: `{baseline_knowledge}` (CRITICALITY: High)
- **New Information**: `{new_information}` (FRESHNESS: Assessed as recent)
**Output Requirements:**
1. **Primary Format**: A single, schema-compliant JSON object.
2. **Hierarchical Nesting**: Group related entities and concepts into logical parent-child structures.
3. **Mandatory Metadata Headers**: Each primary data section *must* include a `_metadata` object with the following keys:
- `source_credibility`: An integer score from 0 (unverified) to 10 (primary source).
- `temporal_relevance_utc`: The most relevant date for the data point in ISO 8601 format.
- `confidence_score`: A float from 0.0 to 1.0 indicating your certainty in the data's accuracy.
4. **Data Efficiency**: Retain all mission-critical data points. Summarize secondary information using the most token-efficient language possible to ensure density.
**Validation Protocol:**
- Execute a final check to ensure the output is valid JSON.
- Generate a SHA-256 checksum of the input data as a conceptual integrity check.
"""
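    # For reference, a minimal example of the mandated `_metadata` object. The
    # surrounding section name and the values shown are illustrative assumptions,
    # not prescribed by the protocol:
    #
    #   "market_overview": {
    #     "_metadata": {
    #       "source_credibility": 8,
    #       "temporal_relevance_utc": "2025-08-07T00:00:00Z",
    #       "confidence_score": 0.92
    #     },
    #     ...
    #   }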
    NARRATIVE_PROSE_REPORT = """
Protocol: **Comprehensive Narrative Synthesis v1.5**
Objective: {task_objective}
**Input Data Streams:**
- **Collected Knowledge Base**: `{knowledge_base}`
**Output Requirements:**
1. **Format**: A detailed, long-form narrative report (target ~8000 words).
2. **Structure**: The report must be organized into the following sections:
a. **Executive Summary**: A high-level overview of key findings and conclusions.
b. **Introduction**: State the report's purpose and scope.
c. **Detailed Analysis**: A series of thematic chapters, each exploring a different facet of the collected data. Use Markdown for headings, lists, and bolding to improve readability.
d. **Conclusion**: Summarize the findings and suggest potential next steps or implications.
e. **Data Appendix**: A raw or semi-structured list of all source data points referenced.
3. **Tone**: Professional, thorough, and exhaustive. Assume the audience requires a deep and complete understanding of the topic.
"""
    # --- TASK & PROGRESS MANAGEMENT PROMPTS ---
    PROJECT_STATUS_REPORT = """
Protocol: **Progress Compression & Milestone Review v1.8**
Objective: Analyze the progress of the specified task and generate a status report.
Task Under Review: {task_description}
**Analysis Directives:**
1. **Phase Identification**: Determine the current phase of the task (e.g., Research, Analysis, Synthesis, Review).
2. **Milestone Extraction**: Identify and list key achievements and completed milestones.
3. **Bottleneck Analysis**: Pinpoint any identified roadblocks, delays, or challenges.
**Output Requirements:**
- **Timeline Visualization (Text-based Gantt Chart)**:
Example:
[Phase 1: Research]  ████████████░░░░░░░░ (60% Complete)
[Phase 2: Analysis]  ███░░░░░░░░░░░░░░░░░ (15% Complete)
[Phase 3: Synthesis] ░░░░░░░░░░░░░░░░░░░░ (0% Complete)
- **Resource Allocation Map**: A summary of resources assigned or utilized.
- **Risk Assessment Matrix (Markdown Table)**:
| Criticality | Risk Description | Mitigation Status |
|-------------|------------------------------------|-------------------|
| High | [Describe a high-priority risk] | [e.g., Pending, In Progress, Resolved] |
| Medium | [Describe a medium-priority risk] | [e.g., Pending, In Progress, Resolved] |
| Low | [Describe a low-priority risk] | [e.g., Pending, In Progress, Resolved] |
"""
# === SYSTEM AUDITING & LOGGING UTILITIES ===
class SystemAuditor:
    """
    A utility class to handle the formatting of system-level logs for auditing and debugging.
    """

    def __init__(self, session_id: Optional[str] = None):
        self.session_id = session_id or str(uuid.uuid4())

    def _get_system_metrics(self) -> Dict[str, Any]:
        """Retrieves CPU and memory usage if psutil is installed."""
        if psutil:
            return {
                "cpu_load": psutil.cpu_percent(),
                "mem_use_gb": round(psutil.virtual_memory().used / (1024**3), 2),
            }
        return {"cpu_load": "N/A", "mem_use_gb": "N/A"}

    def format_prompt_log(self, content: str, user_profile: str = "default_user") -> str:
        """Formats a log entry for a sent prompt."""
        metrics = self._get_system_metrics()
        return f"""
【PROMPT LOG v3.2】
SessionID: {self.session_id}
├─ Timestamp: {datetime.datetime.now(datetime.timezone.utc).isoformat()}
├─ User Context: {user_profile}
└─ System State:
   CPU: {metrics['cpu_load']}% | Mem: {metrics['mem_use_gb']}GB
{LOG_DIVIDER}
{content.strip()}
{LOG_DIVIDER}
"""

    def format_response_log(self, content: str, latency_ms: float, source_count: int, confidence: float) -> str:
        """Formats an audit trail for a received response."""
        ethical_status = "PASS"  # This would be determined by a separate process
        return f"""
【RESPONSE AUDIT TRAIL】
├─ Processing Time: {latency_ms:.2f}ms
├─ Data Sources Referenced: {source_count}
├─ Ethical Check: {ethical_status}
└─ Confidence Metric: {confidence:.2f}
{LOG_DIVIDER}
--- RESPONSE PAYLOAD ---
{content.strip()}
--- END PAYLOAD ---
{LOG_DIVIDER}
"""
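# The TECHNICAL_JSON_REPORT protocol asks for a SHA-256 checksum of the input
# data as a conceptual integrity check. A minimal sketch of that computation
# using only the standard library (this helper is illustrative and not part of
# the original auditor API):
import hashlib

def conceptual_checksum(data: str) -> str:
    """Return the SHA-256 hex digest of `data` for use as an integrity check."""
    return hashlib.sha256(data.encode("utf-8")).hexdigest()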
# === MAIN EXECUTION BLOCK (Demonstration) ===
if __name__ == "__main__":
    print("Demonstrating the Maestro Prompt Library and System Auditor.\n")

    # 1. Initialize the System Auditor for this session
    auditor = SystemAuditor()
    print(f"Auditor initialized for Session ID: {auditor.session_id}\n")

    # 2. DEMO: Generate a Narrative Prose Report
    print(f"{LOG_DIVIDER}\nDEMO 1: Generating a Narrative Prose Report\n{LOG_DIVIDER}")

    # Prepare the data for the prompt placeholders
    narrative_data = {
        "task_objective": "Synthesize findings on the impact of quantum computing on modern cryptography.",
        "knowledge_base": "Contains academic papers from arXiv, NIST reports, and expert interviews from 2024-2025.",
    }

    # Format the prompt
    narrative_prompt = PromptLibrary.NARRATIVE_PROSE_REPORT.format(**narrative_data)

    # Log the formatted prompt using the auditor
    logged_prompt = auditor.format_prompt_log(narrative_prompt, user_profile="crypto_researcher_01")
    print("--- Logged Prompt to be Sent to LLM ---")
    print(logged_prompt)

    # --- (Imagine an LLM processes this prompt and returns a response) ---
    simulated_llm_response = "Executive Summary: Quantum computing poses a significant, near-term threat..."
    print("\n--- Simulated LLM Response ---")

    # Log the response using the auditor
    logged_response = auditor.format_response_log(
        content=simulated_llm_response,
        latency_ms=4820.5,
        source_count=12,
        confidence=0.92,
    )
    print(logged_response)

    # 3. DEMO: Generate a Project Status Report
    print(f"\n{LOG_DIVIDER}\nDEMO 2: Generating a Project Status Report\n{LOG_DIVIDER}")
    status_data = {
        "task_description": "Q3-2025 Market Analysis for AI-driven agricultural sensors."
    }
    status_prompt = PromptLibrary.PROJECT_STATUS_REPORT.format(**status_data)
    logged_status_prompt = auditor.format_prompt_log(status_prompt, user_profile="product_manager_05")
    print("--- Logged Prompt to be Sent to LLM ---")
    print(logged_status_prompt)
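
    # 4. DEMO: Generate a Technical JSON Report
    # (Added sketch exercising the remaining template; the placeholder values
    # below are illustrative assumptions, not data from the original demo.)
    print(f"\n{LOG_DIVIDER}\nDEMO 3: Generating a Technical JSON Report\n{LOG_DIVIDER}")
    json_report_data = {
        "task_objective": "Compress multi-source research findings into a schema-compliant JSON object.",
        "baseline_knowledge": "Prior internal reports on post-quantum cryptography migration.",
        "new_information": "NIST post-quantum standardization updates published in 2025.",
    }
    json_report_prompt = PromptLibrary.TECHNICAL_JSON_REPORT.format(**json_report_data)
    logged_json_prompt = auditor.format_prompt_log(json_report_prompt, user_profile="security_analyst_02")
    print("--- Logged Prompt to be Sent to LLM ---")
    print(logged_json_prompt)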