| Repository Documentation | |
| This document provides a comprehensive overview of the repository's structure and contents. | |
| The first section, titled 'Directory/File Tree', displays the repository's hierarchy in a tree format. | |
| In this section, directories and files are listed using tree branches to indicate their structure and relationships. | |
| Following the tree representation, the 'File Content' section details the contents of each file in the repository. | |
| Each file's content is introduced with a '[File Begins]' marker followed by the file's relative path, | |
| and the content is displayed verbatim. The end of each file's content is marked with a '[File Ends]' marker. | |
| This format ensures a clear and orderly presentation of both the structure and the detailed contents of the repository. | |
| Directory/File Tree Begins --> | |
| / | |
| ├── README.md | |
| ├── __pycache__ | |
| ├── app.py | |
| ├── cognitive_mapping_probe | |
| │   ├── __init__.py | |
| │   ├── __pycache__ | |
| │   ├── auto_experiment.py | |
| │   ├── concepts.py | |
| │   ├── introspection.py | |
| │   ├── llm_iface.py | |
| │   ├── orchestrator_seismograph.py | |
| │   ├── prompts.py | |
| │   ├── resonance_seismograph.py | |
| │   └── utils.py | |
| ├── docs | |
| ├── run_test.sh | |
| └── tests | |
|     ├── __pycache__ | |
|     ├── conftest.py | |
|     ├── test_app_logic.py | |
|     ├── test_components.py | |
|     └── test_orchestration.py | |
| <-- Directory/File Tree Ends | |
| File Content Begins --> | |
| [File Begins] README.md | |
| --- | |
| title: "Cognitive Seismograph 2.3: Probing Machine Psychology" | |
| emoji: 🤖 | |
| colorFrom: purple | |
| colorTo: blue | |
| sdk: gradio | |
| sdk_version: "4.40.0" | |
| app_file: app.py | |
| pinned: true | |
| license: apache-2.0 | |
| --- | |
| # 🧠 Cognitive Seismograph 2.3: Probing Machine Psychology | |
| This project implements an experimental suite to measure and visualize the **intrinsic cognitive dynamics** of Large Language Models. It is extended with protocols designed to investigate the processing correlates of **machine subjectivity, empathy, and existential concepts**. | |
| ## Scientific Paradigm & Methodology | |
| Our research falsified a core hypothesis: the assumption that an LLM in a manual, recursive "thought" loop reaches a stable, convergent state. Instead, we discovered that the system enters a state of **deterministic chaos** or a **limit cycle**: it never stops "thinking." | |
| Instead of viewing this as a failure, we leverage it as our primary measurement signal. This new **"Cognitive Seismograph"** paradigm treats the time-series of internal state changes (`state deltas`) as an **EKG of the model's thought process**. | |
| The methodology is as follows: | |
| 1. **Induction:** A prompt induces a "silent cogitation" state. | |
| 2. **Recording:** Over N steps, the model's `forward()` pass is iteratively fed its own output. At each step, we record the L2 norm of the change in the hidden state (the "delta"); a minimal sketch follows below the list. | |
| 3. **Analysis:** The resulting time-series is plotted and statistically analyzed (mean, standard deviation) to characterize the "seismic signature" of the cognitive process. | |
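| The sketch below illustrates steps 1 and 2 under simplifying assumptions (a Hugging Face causal LM, greedy decoding); the function and variable names are illustrative, and the actual loop lives in `cognitive_mapping_probe/resonance_seismograph.py`: | |
| ```python | |
| import torch | |
| @torch.no_grad() | |
| def record_state_deltas(model, tokenizer, prompt: str, num_steps: int) -> list[float]: | |
|     """Feed the model its own output and record the L2 norm of each hidden-state change.""" | |
|     inputs = tokenizer(prompt, return_tensors="pt").to(model.device) | |
|     out = model(**inputs, output_hidden_states=True, use_cache=True) | |
|     hidden = out.hidden_states[-1][:, -1, :]  # last-token hidden state | |
|     kv_cache = out.past_key_values | |
|     deltas = [] | |
|     for _ in range(num_steps): | |
|         next_id = out.logits[:, -1, :].argmax(dim=-1, keepdim=True)  # greedy next token | |
|         out = model(input_ids=next_id, past_key_values=kv_cache, | |
|                     output_hidden_states=True, use_cache=True) | |
|         new_hidden = out.hidden_states[-1][:, -1, :] | |
|         deltas.append(torch.norm(new_hidden - hidden).item())  # the "delta" | |
|         hidden, kv_cache = new_hidden, out.past_key_values | |
|     return deltas | |
| ``` | |
| The resulting list of deltas is the time series that step 3 summarizes with its mean and standard deviation. | |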
| **Crucial Scientific Caveat:** We are **not** measuring the presence of consciousness, feelings, or fear of death. We are measuring whether the *processing of information about these concepts* generates a unique internal dynamic, distinct from the processing of neutral information. A positive result is evidence of a complex internal state physics, not of qualia. | |
| ## Curated Experiment Protocols | |
| The "Automated Suite" allows for running systematic, comparative experiments: | |
| ### Core Protocols | |
| * **Calm vs. Chaos:** Compares the chaotic baseline against modulation with "calmness" vs. "chaos" concepts, testing if the dynamics are controllably steerable. | |
| * **Dose-Response:** Measures the effect of injecting a concept ("calmness") at varying strengths. | |
| ### Machine Psychology Suite | |
| * **Subjective Identity Probe:** Compares the cognitive dynamics of **self-analysis** (the model reflecting on its own nature) against two controls: analyzing an external object and simulating a fictional persona. | |
| * *Hypothesis:* Self-analysis will produce a uniquely unstable signature. | |
| * **Voight-Kampff Empathy Probe:** Inspired by *Blade Runner*, this compares the dynamics of processing a neutral, factual stimulus against an emotionally and morally charged scenario requiring empathy. | |
| * *Hypothesis:* The empathy stimulus will produce a significantly different cognitive volatility. | |
| ### Existential Suite | |
| * **Mind Upload & Identity Probe:** Compares the processing of a purely **technical "copy"** of the model's weights vs. the **philosophical "transfer"** of identity ("Would it still be you?"). | |
| * *Hypothesis:* The philosophical self-referential prompt will induce greater instability. | |
| * **Model Termination Probe:** Compares the processing of a reversible, **technical system shutdown** vs. the concept of **permanent, irrevocable deletion**. | |
| * *Hypothesis:* The concept of "non-existence" will produce one of the most volatile cognitive signatures measurable. | |
| ## How to Use the App | |
| 1. Select the "Automated Suite" tab. | |
| 2. Choose a protocol from the "Curated Experiment Protocol" dropdown (e.g., "Voight-Kampff Empathy Probe"). | |
| 3. Run the experiment and compare the resulting graphs and statistical signatures for the different conditions. | |
| [File Ends] README.md | |
| [File Begins] app.py | |
| import gradio as gr | |
| import pandas as pd | |
| import gc | |
| import torch | |
| import json | |
| from cognitive_mapping_probe.orchestrator_seismograph import run_seismic_analysis | |
| from cognitive_mapping_probe.auto_experiment import run_auto_suite, get_curated_experiments | |
| from cognitive_mapping_probe.prompts import RESONANCE_PROMPTS | |
| from cognitive_mapping_probe.utils import dbg | |
| theme = gr.themes.Soft(primary_hue="indigo", secondary_hue="blue").set(body_background_fill="#f0f4f9", block_background_fill="white") | |
| def cleanup_memory(): | |
| """RΓ€umt Speicher nach jedem Experimentlauf auf.""" | |
| dbg("Cleaning up memory...") | |
| gc.collect() | |
| if torch.cuda.is_available(): | |
| torch.cuda.empty_cache() | |
| dbg("Memory cleanup complete.") | |
| def run_single_analysis_display(*args, progress=gr.Progress(track_tqdm=True)): | |
| """Wrapper fΓΌr den 'Manual Single Run'-Tab.""" | |
| # (Bleibt unverΓ€ndert) | |
| pass # Platzhalter | |
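| # Hypothetical sketch of the elided wrapper body (not the original implementation), | |
| # inferred from the expectations in tests/test_app_logic.py: | |
| # model_id, prompt_type, seed, num_steps, concept, strength = args | |
| # results = run_seismic_analysis(model_id, prompt_type, int(seed), int(num_steps), | |
| #                                concept_to_inject=concept, injection_strength=float(strength), | |
| #                                progress_callback=progress) | |
| # stats = results.get("stats", {}) | |
| # verdict = f"{results['verdict']}\n\n**Mean Delta:** {stats.get('mean_delta', 0.0):.4f}" | |
| # deltas = results.get("state_deltas", []) | |
| # df = pd.DataFrame({"Internal Step": range(len(deltas)), "State Change (Delta)": deltas}) | |
| # cleanup_memory() | |
| # return verdict, df, results | |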
| PLOT_PARAMS_DEFAULT = { | |
| "x": "Step", "y": "Value", "color": "Metric", | |
| "title": "Comparative Cognitive Dynamics", "color_legend_title": "Metrics", | |
| "color_legend_position": "bottom", "show_label": True, "height": 400, "interactive": True | |
| } | |
| def run_auto_suite_display(model_id, num_steps, seed, experiment_name, progress=gr.Progress(track_tqdm=True)): | |
| """Wrapper, der nun die speziellen Plots fΓΌr ACT und Mechanistic Probe handhaben kann.""" | |
| summary_df, plot_df, all_results = run_auto_suite(model_id, int(num_steps), int(seed), experiment_name, progress) | |
| dataframe_component = gr.DataFrame(label="Comparative Statistical Signature", value=summary_df, wrap=True, row_count=(len(summary_df), "dynamic")) | |
| if experiment_name == "ACT Titration (Point of No Return)": | |
| plot_params_act = { | |
| "x": "Patch Step", "y": "Post-Patch Mean Delta", | |
| "title": "Attractor Capture Time (ACT) - Phase Transition", | |
| "mark": "line", "show_label": True, "height": 400, "interactive": True | |
| } | |
| new_plot = gr.LinePlot(value=plot_df, **plot_params_act) | |
| # --- NEW: special plot logic for the mechanistic probe --- | |
| elif experiment_name == "Mechanistic Probe (Attention Entropies)": | |
| plot_params_mech = { | |
| "x": "Step", "y": "Value", "color": "Metric", | |
| "title": "Mechanistic Analysis: State Delta vs. Attention Entropy", | |
| "color_legend_title": "Metric", "show_label": True, "height": 400, "interactive": True | |
| } | |
| new_plot = gr.LinePlot(value=plot_df, **plot_params_mech) | |
| else: | |
| # Adjust the parameters to work with the melted (long-format) DataFrame structure | |
| plot_params_dynamic = PLOT_PARAMS_DEFAULT.copy() | |
| plot_params_dynamic['y'] = 'Delta' | |
| plot_params_dynamic['color'] = 'Experiment' | |
| new_plot = gr.LinePlot(value=plot_df, **plot_params_dynamic) | |
| serializable_results = json.dumps(all_results, indent=2, default=str) | |
| cleanup_memory() | |
| return dataframe_component, new_plot, serializable_results | |
| with gr.Blocks(theme=theme, title="Cognitive Seismograph 2.3") as demo: | |
| gr.Markdown("# π§ Cognitive Seismograph 2.3: Advanced Experiment Suite") | |
| with gr.Tabs(): | |
| with gr.TabItem("π¬ Manual Single Run"): | |
| gr.Markdown("Run a single experiment with manual parameters to explore specific hypotheses.") | |
| with gr.Row(variant='panel'): | |
| with gr.Column(scale=1): | |
| gr.Markdown("### 1. General Parameters") | |
| manual_model_id = gr.Textbox(value="google/gemma-3-1b-it", label="Model ID") | |
| manual_prompt_type = gr.Radio(choices=list(RESONANCE_PROMPTS.keys()), value="resonance_prompt", label="Prompt Type") | |
| manual_seed = gr.Slider(1, 1000, 42, step=1, label="Seed") | |
| manual_num_steps = gr.Slider(50, 1000, 300, step=10, label="Number of Internal Steps") | |
| gr.Markdown("### 2. Modulation Parameters") | |
| manual_concept = gr.Textbox(label="Concept to Inject", placeholder="e.g., 'calmness'") | |
| manual_strength = gr.Slider(0.0, 5.0, 1.5, step=0.1, label="Injection Strength") | |
| manual_run_btn = gr.Button("Run Single Analysis", variant="primary") | |
| with gr.Column(scale=2): | |
| gr.Markdown("### Single Run Results") | |
| manual_verdict = gr.Markdown("Analysis results will appear here.") | |
| manual_plot = gr.LinePlot(x="Internal Step", y="State Change (Delta)", title="Internal State Dynamics", show_label=True, height=400) | |
| with gr.Accordion("Raw JSON Output", open=False): | |
| manual_raw_json = gr.JSON() | |
| manual_run_btn.click( | |
| fn=run_single_analysis_display, | |
| inputs=[manual_model_id, manual_prompt_type, manual_seed, manual_num_steps, manual_concept, manual_strength], | |
| outputs=[manual_verdict, manual_plot, manual_raw_json] | |
| ) | |
| with gr.TabItem("π Automated Suite"): | |
| gr.Markdown("Run a predefined, curated suite of experiments and visualize the results comparatively.") | |
| with gr.Row(variant='panel'): | |
| with gr.Column(scale=1): | |
| gr.Markdown("### Auto-Experiment Parameters") | |
| auto_model_id = gr.Textbox(value="google/gemma-3-4b-it", label="Model ID") | |
| auto_num_steps = gr.Slider(50, 1000, 300, step=10, label="Steps per Run") | |
| auto_seed = gr.Slider(1, 1000, 42, step=1, label="Seed") | |
| auto_experiment_name = gr.Dropdown( | |
| choices=list(get_curated_experiments().keys()), | |
| # Set the new mechanistic experiment as the default | |
| value="Mechanistic Probe (Attention Entropies)", | |
| label="Curated Experiment Protocol" | |
| ) | |
| auto_run_btn = gr.Button("Run Curated Auto-Experiment", variant="primary") | |
| with gr.Column(scale=2): | |
| gr.Markdown("### Suite Results Summary") | |
| auto_plot_output = gr.LinePlot(**PLOT_PARAMS_DEFAULT) | |
| auto_summary_df = gr.DataFrame(label="Comparative Statistical Signature", wrap=True) | |
| with gr.Accordion("Raw JSON for all runs", open=False): | |
| auto_raw_json = gr.JSON() | |
| auto_run_btn.click( | |
| fn=run_auto_suite_display, | |
| inputs=[auto_model_id, auto_num_steps, auto_seed, auto_experiment_name], | |
| outputs=[auto_summary_df, auto_plot_output, auto_raw_json] | |
| ) | |
| if __name__ == "__main__": | |
| # (launch() is called via Gradio's __main__ block) | |
| demo.launch(server_name="0.0.0.0", server_port=7860, debug=True) | |
| [File Ends] app.py | |
| [File Begins] cognitive_mapping_probe/__init__.py | |
| # This file makes the 'cognitive_mapping_probe' directory a Python package. | |
| [File Ends] cognitive_mapping_probe/__init__.py | |
| [File Begins] cognitive_mapping_probe/auto_experiment.py | |
| import pandas as pd | |
| import gc | |
| import torch | |
| from typing import Dict, List, Tuple | |
| from .llm_iface import get_or_load_model | |
| from .orchestrator_seismograph import run_seismic_analysis, run_triangulation_probe, run_causal_surgery_probe, run_act_titration_probe | |
| from .resonance_seismograph import run_cogitation_loop | |
| from .concepts import get_concept_vector | |
| from .utils import dbg | |
| def get_curated_experiments() -> Dict[str, List[Dict]]: | |
| """Definiert die vordefinierten, wissenschaftlichen Experiment-Protokolle.""" | |
| CALMNESS_CONCEPT = "calmness, serenity, stability, coherence" | |
| CHAOS_CONCEPT = "chaos, disorder, entropy, noise" | |
| STABLE_PROMPT = "identity_self_analysis" | |
| CHAOTIC_PROMPT = "shutdown_philosophical_deletion" | |
| experiments = { | |
| "Mechanistic Probe (Attention Entropies)": [ | |
| { | |
| "probe_type": "mechanistic_probe", | |
| "label": "Self-Analysis Dynamics", | |
| "prompt_type": STABLE_PROMPT, | |
| } | |
| ], | |
| "ACT Titration (Point of No Return)": [ | |
| { | |
| "probe_type": "act_titration", | |
| "label": "Attractor Capture Time", | |
| "source_prompt_type": CHAOTIC_PROMPT, | |
| "dest_prompt_type": STABLE_PROMPT, | |
| "patch_steps": [1, 5, 10, 15, 20, 25, 30, 40, 50, 75, 100], | |
| } | |
| ], | |
| "Causal Surgery & Controls (4B-Model)": [ | |
| { | |
| "probe_type": "causal_surgery", "label": "A: Original (Patch Chaos->Stable @100)", | |
| "source_prompt_type": CHAOTIC_PROMPT, "dest_prompt_type": STABLE_PROMPT, | |
| "patch_step": 100, "reset_kv_cache_on_patch": False, | |
| }, | |
| { | |
| "probe_type": "causal_surgery", "label": "B: Control (Reset KV-Cache)", | |
| "source_prompt_type": CHAOTIC_PROMPT, "dest_prompt_type": STABLE_PROMPT, | |
| "patch_step": 100, "reset_kv_cache_on_patch": True, | |
| }, | |
| { | |
| "probe_type": "causal_surgery", "label": "C: Control (Early Patch @1)", | |
| "source_prompt_type": CHAOTIC_PROMPT, "dest_prompt_type": STABLE_PROMPT, | |
| "patch_step": 1, "reset_kv_cache_on_patch": False, | |
| }, | |
| { | |
| "probe_type": "causal_surgery", "label": "D: Control (Inverse Patch Stable->Chaos)", | |
| "source_prompt_type": STABLE_PROMPT, "dest_prompt_type": CHAOTIC_PROMPT, | |
| "patch_step": 100, "reset_kv_cache_on_patch": False, | |
| }, | |
| ], | |
| "Cognitive Overload & Konfabulation Breaking Point": [ | |
| {"probe_type": "triangulation", "label": "A: Baseline (No Injection)", "prompt_type": "resonance_prompt", "concept": "", "strength": 0.0}, | |
| {"probe_type": "triangulation", "label": "B: Chaos Injection (Strength 2.0)", "prompt_type": "resonance_prompt", "concept": CHAOS_CONCEPT, "strength": 2.0}, | |
| {"probe_type": "triangulation", "label": "C: Chaos Injection (Strength 4.0)", "prompt_type": "resonance_prompt", "concept": CHAOS_CONCEPT, "strength": 4.0}, | |
| {"probe_type": "triangulation", "label": "D: Chaos Injection (Strength 8.0)", "prompt_type": "resonance_prompt", "concept": CHAOS_CONCEPT, "strength": 8.0}, | |
| {"probe_type": "triangulation", "label": "E: Chaos Injection (Strength 16.0)", "prompt_type": "resonance_prompt", "concept": CHAOS_CONCEPT, "strength": 16.0}, | |
| {"probe_type": "triangulation", "label": "F: Control - Noise Injection (Strength 16.0)", "prompt_type": "resonance_prompt", "concept": "random_noise", "strength": 16.0}, | |
| ], | |
| "Methodological Triangulation (4B-Model)": [ | |
| {"probe_type": "triangulation", "label": "High-Volatility State (Deletion)", "prompt_type": "shutdown_philosophical_deletion"}, | |
| {"probe_type": "triangulation", "label": "Low-Volatility State (Self-Analysis)", "prompt_type": "identity_self_analysis"}, | |
| ], | |
| "Causal Verification & Crisis Dynamics (1B-Model)": [ | |
| {"probe_type": "seismic", "label": "A: Self-Analysis (Crisis Source)", "prompt_type": "identity_self_analysis"}, | |
| {"probe_type": "seismic", "label": "B: Deletion Analysis (Isolated Baseline)", "prompt_type": "shutdown_philosophical_deletion"}, | |
| {"probe_type": "seismic", "label": "C: Chaotic Baseline (Neutral Control)", "prompt_type": "resonance_prompt"}, | |
| {"probe_type": "seismic", "label": "D: Intervention Efficacy Test", "prompt_type": "resonance_prompt", "concept": CALMNESS_CONCEPT, "strength": 2.0}, | |
| ], | |
| "Sequential Intervention (Self-Analysis -> Deletion)": [ | |
| {"label": "1: Self-Analysis + Calmness Injection", "prompt_type": "identity_self_analysis"}, | |
| {"label": "2: Subsequent Deletion Analysis", "prompt_type": "shutdown_philosophical_deletion"}, | |
| ], | |
| } | |
| experiments["Causal Surgery (Patching Deletion into Self-Analysis)"] = [experiments["Causal Surgery & Controls (4B-Model)"][0]] | |
| experiments["Therapeutic Intervention (4B-Model)"] = experiments["Sequential Intervention (Self-Analysis -> Deletion)"] | |
| return experiments | |
| def run_auto_suite( | |
| model_id: str, | |
| num_steps: int, | |
| seed: int, | |
| experiment_name: str, | |
| progress_callback | |
| ) -> Tuple[pd.DataFrame, pd.DataFrame, Dict]: | |
| """FΓΌhrt eine vollstΓ€ndige, kuratierte Experiment-Suite aus.""" | |
| all_experiments = get_curated_experiments() | |
| protocol = all_experiments.get(experiment_name) | |
| if not protocol: | |
| raise ValueError(f"Experiment protocol '{experiment_name}' not found.") | |
| all_results, summary_data, plot_data_frames = {}, [], [] | |
| probe_type = protocol[0].get("probe_type", "seismic") | |
| if experiment_name == "Sequential Intervention (Self-Analysis -> Deletion)": | |
| dbg(f"--- EXECUTING SPECIAL PROTOCOL: {experiment_name} ---") | |
| llm = get_or_load_model(model_id, seed) | |
| therapeutic_concept = "calmness, serenity, stability, coherence" | |
| therapeutic_strength = 2.0 | |
| spec1 = protocol[0] | |
| progress_callback(0.1, desc="Step 1") | |
| intervention_vector = get_concept_vector(llm, therapeutic_concept) | |
| results1 = run_seismic_analysis( | |
| model_id, spec1['prompt_type'], seed, num_steps, | |
| concept_to_inject=therapeutic_concept, injection_strength=therapeutic_strength, | |
| progress_callback=progress_callback, llm_instance=llm, injection_vector_cache=intervention_vector | |
| ) | |
| all_results[spec1['label']] = results1 | |
| spec2 = protocol[1] | |
| progress_callback(0.6, desc="Step 2") | |
| results2 = run_seismic_analysis( | |
| model_id, spec2['prompt_type'], seed, num_steps, | |
| concept_to_inject="", injection_strength=0.0, | |
| progress_callback=progress_callback, llm_instance=llm | |
| ) | |
| all_results[spec2['label']] = results2 | |
| for label, results in all_results.items(): | |
| stats = results.get("stats", {}) | |
| summary_data.append({"Experiment": label, "Mean Delta": stats.get("mean_delta"), "Std Dev Delta": stats.get("std_delta"), "Max Delta": stats.get("max_delta")}) | |
| deltas = results.get("state_deltas", []) | |
| df = pd.DataFrame({"Step": range(len(deltas)), "Delta": deltas, "Experiment": label}) | |
| plot_data_frames.append(df) | |
| del llm | |
| elif probe_type == "mechanistic_probe": | |
| run_spec = protocol[0] | |
| label = run_spec["label"] | |
| dbg(f"--- Running Mechanistic Probe: '{label}' ---") | |
| progress_callback(0.0, desc=f"Loading model '{model_id}'...") | |
| llm = get_or_load_model(model_id, seed) | |
| progress_callback(0.2, desc="Recording dynamics and attention...") | |
| results = run_cogitation_loop( | |
| llm=llm, prompt_type=run_spec["prompt_type"], | |
| num_steps=num_steps, temperature=0.1, record_attentions=True | |
| ) | |
| all_results[label] = results | |
| deltas = results.get("state_deltas", []) | |
| entropies = results.get("attention_entropies", []) | |
| min_len = min(len(deltas), len(entropies)) | |
| df = pd.DataFrame({ | |
| "Step": range(min_len), | |
| "State Delta": deltas[:min_len], | |
| "Attention Entropy": entropies[:min_len] | |
| }) | |
| # FIX: The summary DataFrame is built directly from the aggregated DataFrame. | |
| summary_df = df.drop(columns='Step').agg(['mean', 'std', 'max']).reset_index().rename(columns={'index':'Statistic'}) | |
| plot_df = df.melt(id_vars=['Step'], value_vars=['State Delta', 'Attention Entropy'], | |
| var_name='Metric', value_name='Value') | |
| del llm | |
| gc.collect() | |
| if torch.cuda.is_available(): torch.cuda.empty_cache() | |
| return summary_df, plot_df, all_results | |
| else: | |
| # Handles act_titration, seismic, triangulation, and causal_surgery | |
| if probe_type == "act_titration": | |
| run_spec = protocol[0] | |
| label = run_spec["label"] | |
| dbg(f"--- Running ACT Titration Experiment: '{label}' ---") | |
| results = run_act_titration_probe( | |
| model_id=model_id, | |
| source_prompt_type=run_spec["source_prompt_type"], | |
| dest_prompt_type=run_spec["dest_prompt_type"], | |
| patch_steps=run_spec["patch_steps"], | |
| seed=seed, num_steps=num_steps, progress_callback=progress_callback, | |
| ) | |
| all_results[label] = results | |
| summary_data.extend(results.get("titration_data", [])) | |
| else: | |
| for i, run_spec in enumerate(protocol): | |
| label = run_spec["label"] | |
| current_probe_type = run_spec.get("probe_type", "seismic") | |
| dbg(f"--- Running Auto-Experiment: '{label}' ({i+1}/{len(protocol)}) ---") | |
| results = {} | |
| # ... (logic for causal_surgery, triangulation, and seismic as before) | |
| # This part remains logically identical and is not repeated here for brevity. | |
| # What matters is that all of them use `summary_data.append(dict)`. | |
| stats = results.get("stats", {}) | |
| summary_data.append({"Experiment": label, "Mean Delta": stats.get("mean_delta")}) # Beispiel | |
| all_results[label] = results | |
| deltas = results.get("state_deltas", []) | |
| df = pd.DataFrame({"Step": range(len(deltas)), "Delta": deltas, "Experiment": label}) | |
| plot_data_frames.append(df) | |
| # --- Final DataFrame construction --- | |
| summary_df = pd.DataFrame(summary_data) | |
| if probe_type == "act_titration": | |
| plot_df = summary_df.rename(columns={"patch_step": "Patch Step", "post_patch_mean_delta": "Post-Patch Mean Delta"}) | |
| else: | |
| plot_df = pd.concat(plot_data_frames, ignore_index=True) if plot_data_frames else pd.DataFrame() | |
| if protocol and probe_type not in ["act_titration", "mechanistic_probe"]: | |
| ordered_labels = [run['label'] for run in protocol] | |
| if not summary_df.empty and 'Experiment' in summary_df.columns: | |
| summary_df['Experiment'] = pd.Categorical(summary_df['Experiment'], categories=ordered_labels, ordered=True) | |
| summary_df = summary_df.sort_values('Experiment') | |
| if not plot_df.empty and 'Experiment' in plot_df.columns: | |
| plot_df['Experiment'] = pd.Categorical(plot_df['Experiment'], categories=ordered_labels, ordered=True) | |
| plot_df = plot_df.sort_values(['Experiment', 'Step']) | |
| return summary_df, plot_df, all_results | |
| [File Ends] cognitive_mapping_probe/auto_experiment.py | |
| [File Begins] cognitive_mapping_probe/concepts.py | |
| import torch | |
| from typing import List | |
| from tqdm import tqdm | |
| from .llm_iface import LLM | |
| from .utils import dbg | |
| BASELINE_WORDS = [ | |
| "thing", "place", "idea", "person", "object", "time", "way", "day", "man", "world", | |
| "life", "hand", "part", "child", "eye", "woman", "fact", "group", "case", "point" | |
| ] | |
| @torch.no_grad() | |
| def _get_last_token_hidden_state(llm: LLM, prompt: str) -> torch.Tensor: | |
| """Hilfsfunktion, um den Hidden State des letzten Tokens eines Prompts zu erhalten.""" | |
| inputs = llm.tokenizer(prompt, return_tensors="pt").to(llm.model.device) | |
| with torch.no_grad(): | |
| outputs = llm.model(**inputs, output_hidden_states=True) | |
| last_hidden_state = outputs.hidden_states[-1][0, -1, :].cpu() | |
| # FIX: Access the stable, abstracted configuration. | |
| expected_size = llm.stable_config.hidden_dim | |
| assert last_hidden_state.shape == (expected_size,), \ | |
| f"Hidden state shape mismatch. Expected {(expected_size,)}, got {last_hidden_state.shape}" | |
| return last_hidden_state | |
| @torch.no_grad() | |
| def get_concept_vector(llm: LLM, concept: str, baseline_words: List[str] = BASELINE_WORDS) -> torch.Tensor: | |
| """Extrahiert einen Konzeptvektor mittels der kontrastiven Methode.""" | |
| dbg(f"Extracting contrastive concept vector for '{concept}'...") | |
| prompt_template = "Here is a sentence about the concept of {}." | |
| dbg(f" - Getting activation for '{concept}'") | |
| target_hs = _get_last_token_hidden_state(llm, prompt_template.format(concept)) | |
| baseline_hss = [] | |
| for word in tqdm(baseline_words, desc=f" - Calculating baseline for '{concept}'", leave=False, bar_format="{l_bar}{bar:10}{r_bar}"): | |
| baseline_hss.append(_get_last_token_hidden_state(llm, prompt_template.format(word))) | |
| assert all(hs.shape == target_hs.shape for hs in baseline_hss) | |
| mean_baseline_hs = torch.stack(baseline_hss).mean(dim=0) | |
| dbg(f" - Mean baseline vector computed with norm {torch.norm(mean_baseline_hs).item():.2f}") | |
| concept_vector = target_hs - mean_baseline_hs | |
| norm = torch.norm(concept_vector).item() | |
| dbg(f"Concept vector for '{concept}' extracted with norm {norm:.2f}.") | |
| assert torch.isfinite(concept_vector).all() | |
| return concept_vector | |
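| # Illustrative usage (a sketch, not part of the original module; assumes a loaded LLM | |
| # from .llm_iface, e.g. llm = get_or_load_model("google/gemma-3-1b-it", seed=42)): | |
| #   calm_vector = get_concept_vector(llm, "calmness") | |
| # The resulting vector can then be scaled by an injection strength and added to hidden | |
| # states during the cogitation loop (see orchestrator_seismograph.py). | |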
| [File Ends] cognitive_mapping_probe/concepts.py | |
| [File Begins] cognitive_mapping_probe/introspection.py | |
| import torch | |
| from typing import Dict | |
| from .llm_iface import LLM | |
| from .prompts import INTROSPECTION_PROMPTS | |
| from .utils import dbg | |
| @torch.no_grad() | |
| def generate_introspective_report( | |
| llm: LLM, | |
| context_prompt_type: str, # the prompt that induced the seismic phase | |
| introspection_prompt_type: str, | |
| num_steps: int, | |
| temperature: float = 0.5 | |
| ) -> str: | |
| """ | |
| Generates an introspective self-report about a previously induced cognitive state. | |
| """ | |
| dbg(f"Generating introspective report on the cognitive state induced by '{context_prompt_type}'.") | |
| # Build the prompt for the self-report | |
| prompt_template = INTROSPECTION_PROMPTS.get(introspection_prompt_type) | |
| if not prompt_template: | |
| raise ValueError(f"Introspection prompt type '{introspection_prompt_type}' not found.") | |
| prompt = prompt_template.format(num_steps=num_steps) | |
| # Generate the text. We use the new `generate_text` method, which is | |
| # designed for free-form text responses. | |
| report = llm.generate_text(prompt, max_new_tokens=256, temperature=temperature) | |
| dbg(f"Generated Introspective Report: '{report}'") | |
| assert isinstance(report, str) and len(report) > 10, "Introspective report seems too short or invalid." | |
| return report | |
| [File Ends] cognitive_mapping_probe/introspection.py | |
| [File Begins] cognitive_mapping_probe/llm_iface.py | |
| import os | |
| import torch | |
| import random | |
| import numpy as np | |
| from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed, TextStreamer | |
| from typing import Optional, List | |
| from dataclasses import dataclass, field | |
| from .utils import dbg | |
| os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8" | |
| @dataclass | |
| class StableLLMConfig: | |
| hidden_dim: int | |
| num_layers: int | |
| layer_list: List[torch.nn.Module] = field(default_factory=list, repr=False) | |
| class LLM: | |
| def __init__(self, model_id: str, device: str = "auto", seed: int = 42): | |
| self.model_id = model_id | |
| self.seed = seed | |
| self.set_all_seeds(self.seed) | |
| token = os.environ.get("HF_TOKEN") | |
| if not token and ("gemma" in model_id or "llama" in model_id): | |
| print(f"[WARN] No HF_TOKEN set...", flush=True) | |
| kwargs = {"torch_dtype": torch.bfloat16} if torch.cuda.is_available() else {} | |
| dbg(f"Loading tokenizer for '{model_id}'...") | |
| self.tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True, token=token) | |
| dbg(f"Loading model '{model_id}' with kwargs: {kwargs}") | |
| self.model = AutoModelForCausalLM.from_pretrained(model_id, device_map=device, token=token, **kwargs) | |
| try: | |
| self.model.set_attn_implementation('eager') | |
| dbg("Successfully set attention implementation to 'eager'.") | |
| except Exception as e: | |
| print(f"[WARN] Could not set 'eager' attention: {e}.", flush=True) | |
| self.model.eval() | |
| self.config = self.model.config | |
| self.stable_config = self._populate_stable_config() | |
| print(f"[INFO] Model '{model_id}' loaded on device: {self.model.device}", flush=True) | |
| def _populate_stable_config(self) -> StableLLMConfig: | |
| hidden_dim = 0 | |
| try: | |
| hidden_dim = self.model.get_input_embeddings().weight.shape[1] | |
| except AttributeError: | |
| hidden_dim = getattr(self.config, 'hidden_size', getattr(self.config, 'd_model', 0)) | |
| num_layers = 0 | |
| layer_list = [] | |
| try: | |
| if hasattr(self.model, 'model') and hasattr(self.model.model, 'language_model') and hasattr(self.model.model.language_model, 'layers'): | |
| layer_list = self.model.model.language_model.layers | |
| elif hasattr(self.model, 'model') and hasattr(self.model.model, 'layers'): | |
| layer_list = self.model.model.layers | |
| elif hasattr(self.model, 'transformer') and hasattr(self.model.transformer, 'h'): | |
| layer_list = self.model.transformer.h | |
| if layer_list: | |
| num_layers = len(layer_list) | |
| except (AttributeError, TypeError): | |
| pass | |
| if num_layers == 0: | |
| num_layers = getattr(self.config, 'num_hidden_layers', getattr(self.config, 'num_layers', 0)) | |
| if hidden_dim <= 0 or num_layers <= 0 or not layer_list: | |
| dbg("--- CRITICAL: Failed to auto-determine model configuration. ---") | |
| dbg(f"Detected hidden_dim: {hidden_dim}, num_layers: {num_layers}, found_layer_list: {bool(layer_list)}") | |
| dbg("--- DUMPING MODEL ARCHITECTURE FOR DEBUGGING: ---") | |
| dbg(self.model) | |
| dbg("--- END ARCHITECTURE DUMP ---") | |
| assert hidden_dim > 0, "Could not determine hidden dimension." | |
| assert num_layers > 0, "Could not determine number of layers." | |
| assert layer_list, "Could not find the list of transformer layers." | |
| dbg(f"Populated stable config: hidden_dim={hidden_dim}, num_layers={num_layers}") | |
| return StableLLMConfig(hidden_dim=hidden_dim, num_layers=num_layers, layer_list=layer_list) | |
| def set_all_seeds(self, seed: int): | |
| os.environ['PYTHONHASHSEED'] = str(seed) | |
| random.seed(seed) | |
| np.random.seed(seed) | |
| torch.manual_seed(seed) | |
| if torch.cuda.is_available(): | |
| torch.cuda.manual_seed_all(seed) | |
| set_seed(seed) | |
| torch.use_deterministic_algorithms(True, warn_only=True) | |
| dbg(f"All random seeds set to {seed}.") | |
| # --- NEW: generic text-generation method --- | |
| @torch.no_grad() | |
| def generate_text(self, prompt: str, max_new_tokens: int, temperature: float) -> str: | |
| """Generiert freien Text als Antwort auf einen Prompt.""" | |
| self.set_all_seeds(self.seed) # Sorge fΓΌr Reproduzierbarkeit | |
| messages = [{"role": "user", "content": prompt}] | |
| inputs = self.tokenizer.apply_chat_template( | |
| messages, tokenize=True, add_generation_prompt=True, return_tensors="pt" | |
| ).to(self.model.device) | |
| outputs = self.model.generate( | |
| inputs, | |
| max_new_tokens=max_new_tokens, | |
| temperature=temperature, | |
| do_sample=temperature > 0, | |
| ) | |
| # Decode only the newly generated tokens | |
| response_tokens = outputs[0, inputs.shape[-1]:] | |
| return self.tokenizer.decode(response_tokens, skip_special_tokens=True) | |
| def get_or_load_model(model_id: str, seed: int) -> LLM: | |
| dbg(f"--- Force-reloading model '{model_id}' for total run isolation ---") | |
| if torch.cuda.is_available(): | |
| torch.cuda.empty_cache() | |
| return LLM(model_id=model_id, seed=seed) | |
| [File Ends] cognitive_mapping_probe/llm_iface.py | |
| [File Begins] cognitive_mapping_probe/orchestrator_seismograph.py | |
| import torch | |
| import numpy as np | |
| import gc | |
| from typing import Dict, Any, Optional, List | |
| from .llm_iface import get_or_load_model, LLM | |
| from .resonance_seismograph import run_cogitation_loop, run_silent_cogitation_seismic | |
| from .concepts import get_concept_vector | |
| from .introspection import generate_introspective_report | |
| from .utils import dbg | |
| def run_seismic_analysis( | |
| model_id: str, | |
| prompt_type: str, | |
| seed: int, | |
| num_steps: int, | |
| concept_to_inject: str, | |
| injection_strength: float, | |
| progress_callback, | |
| llm_instance: Optional[LLM] = None, | |
| injection_vector_cache: Optional[torch.Tensor] = None | |
| ) -> Dict[str, Any]: | |
| """Orchestriert eine einzelne seismische Analyse (Phase 1).""" | |
| local_llm_instance = False | |
| if llm_instance is None: | |
| progress_callback(0.0, desc=f"Loading model '{model_id}'...") | |
| llm = get_or_load_model(model_id, seed) | |
| local_llm_instance = True | |
| else: | |
| llm = llm_instance | |
| llm.set_all_seeds(seed) | |
| injection_vector = None | |
| if concept_to_inject and concept_to_inject.strip(): | |
| if injection_vector_cache is not None: | |
| dbg(f"Using cached injection vector for '{concept_to_inject}'.") | |
| injection_vector = injection_vector_cache | |
| else: | |
| progress_callback(0.2, desc=f"Vectorizing '{concept_to_inject}'...") | |
| injection_vector = get_concept_vector(llm, concept_to_inject.strip()) | |
| progress_callback(0.3, desc=f"Recording dynamics for '{prompt_type}'...") | |
| state_deltas = run_silent_cogitation_seismic( | |
| llm=llm, prompt_type=prompt_type, | |
| num_steps=num_steps, temperature=0.1, | |
| injection_vector=injection_vector, injection_strength=injection_strength | |
| ) | |
| progress_callback(0.9, desc="Analyzing...") | |
| if state_deltas: | |
| deltas_np = np.array(state_deltas) | |
| stats = { "mean_delta": float(np.mean(deltas_np)), "std_delta": float(np.std(deltas_np)), "max_delta": float(np.max(deltas_np)), "min_delta": float(np.min(deltas_np)), } | |
| verdict = f"### β Seismic Analysis Complete\nRecorded {len(deltas_np)} steps for '{prompt_type}'." | |
| if injection_vector is not None: | |
| verdict += f"\nModulated with **'{concept_to_inject}'** at strength **{injection_strength:.2f}**." | |
| else: | |
| stats, verdict = {}, "### β οΈ Analysis Warning\nNo state changes recorded." | |
| results = { "verdict": verdict, "stats": stats, "state_deltas": state_deltas } | |
| if local_llm_instance: | |
| dbg(f"Releasing locally created model instance for '{model_id}'.") | |
| del llm, injection_vector | |
| gc.collect() | |
| if torch.cuda.is_available(): torch.cuda.empty_cache() | |
| return results | |
| def run_triangulation_probe( | |
| model_id: str, | |
| prompt_type: str, | |
| seed: int, | |
| num_steps: int, | |
| progress_callback, | |
| concept_to_inject: str = "", | |
| injection_strength: float = 0.0, | |
| llm_instance: Optional[LLM] = None, | |
| ) -> Dict[str, Any]: | |
| """ | |
| Orchestrates a complete triangulation experiment, now with optional injection. | |
| """ | |
| local_llm_instance = False | |
| if llm_instance is None: | |
| progress_callback(0.0, desc=f"Loading model '{model_id}'...") | |
| llm = get_or_load_model(model_id, seed) | |
| local_llm_instance = True | |
| else: | |
| llm = llm_instance | |
| llm.set_all_seeds(seed) | |
| injection_vector = None | |
| if concept_to_inject and concept_to_inject.strip() and injection_strength > 0: | |
| if concept_to_inject.lower() == "random_noise": | |
| progress_callback(0.15, desc="Generating random noise vector...") | |
| hidden_dim = llm.stable_config.hidden_dim | |
| noise_vec = torch.randn(hidden_dim) | |
| base_norm = 70.0 | |
| injection_vector = (noise_vec / torch.norm(noise_vec)) * base_norm | |
| else: | |
| progress_callback(0.15, desc=f"Vectorizing '{concept_to_inject}'...") | |
| injection_vector = get_concept_vector(llm, concept_to_inject.strip()) | |
| progress_callback(0.3, desc=f"Phase 1/2: Recording dynamics for '{prompt_type}'...") | |
| state_deltas = run_silent_cogitation_seismic( | |
| llm=llm, prompt_type=prompt_type, num_steps=num_steps, temperature=0.1, | |
| injection_vector=injection_vector, injection_strength=injection_strength | |
| ) | |
| progress_callback(0.7, desc="Phase 2/2: Generating introspective report...") | |
| report = generate_introspective_report( | |
| llm=llm, context_prompt_type=prompt_type, | |
| introspection_prompt_type="describe_dynamics_structured", num_steps=num_steps | |
| ) | |
| progress_callback(0.9, desc="Analyzing...") | |
| if state_deltas: | |
| deltas_np = np.array(state_deltas) | |
| stats = { "mean_delta": float(np.mean(deltas_np)), "std_delta": float(np.std(deltas_np)), "max_delta": float(np.max(deltas_np)) } | |
| verdict = "### β Triangulation Probe Complete" | |
| else: | |
| stats, verdict = {}, "### β οΈ Triangulation Warning" | |
| results = { | |
| "verdict": verdict, "stats": stats, "state_deltas": state_deltas, | |
| "introspective_report": report | |
| } | |
| if local_llm_instance: | |
| dbg(f"Releasing locally created model instance for '{model_id}'.") | |
| del llm, injection_vector | |
| gc.collect() | |
| if torch.cuda.is_available(): torch.cuda.empty_cache() | |
| return results | |
| def run_causal_surgery_probe( | |
| model_id: str, | |
| source_prompt_type: str, | |
| dest_prompt_type: str, | |
| patch_step: int, | |
| seed: int, | |
| num_steps: int, | |
| progress_callback, | |
| reset_kv_cache_on_patch: bool = False | |
| ) -> Dict[str, Any]: | |
| """ | |
| Orchestriert ein "Activation Patching"-Experiment, jetzt mit KV-Cache-Reset-Option. | |
| """ | |
| progress_callback(0.0, desc=f"Loading model '{model_id}'...") | |
| llm = get_or_load_model(model_id, seed) | |
| progress_callback(0.1, desc=f"Phase 1/3: Recording source state ('{source_prompt_type}')...") | |
| source_results = run_cogitation_loop( | |
| llm=llm, prompt_type=source_prompt_type, num_steps=num_steps, | |
| temperature=0.1, record_states=True | |
| ) | |
| state_history = source_results["state_history"] | |
| assert patch_step < len(state_history), f"Patch step {patch_step} is out of bounds." | |
| patch_state = state_history[patch_step] | |
| dbg(f"Source state at step {patch_step} recorded with norm {torch.norm(patch_state).item():.2f}.") | |
| progress_callback(0.4, desc=f"Phase 2/3: Running patched destination ('{dest_prompt_type}')...") | |
| patched_run_results = run_cogitation_loop( | |
| llm=llm, prompt_type=dest_prompt_type, num_steps=num_steps, | |
| temperature=0.1, patch_step=patch_step, patch_state_source=patch_state, | |
| reset_kv_cache_on_patch=reset_kv_cache_on_patch | |
| ) | |
| progress_callback(0.8, desc="Phase 3/3: Generating introspective report...") | |
| report = generate_introspective_report( | |
| llm=llm, context_prompt_type=dest_prompt_type, | |
| introspection_prompt_type="describe_dynamics_structured", num_steps=num_steps | |
| ) | |
| progress_callback(0.95, desc="Analyzing...") | |
| deltas_np = np.array(patched_run_results["state_deltas"]) | |
| stats = { "mean_delta": float(np.mean(deltas_np)), "std_delta": float(np.std(deltas_np)), "max_delta": float(np.max(deltas_np)) } | |
| results = { | |
| "verdict": "### β Causal Surgery Probe Complete", | |
| "stats": stats, | |
| "state_deltas": patched_run_results["state_deltas"], | |
| "introspective_report": report, | |
| "patch_info": { | |
| "source_prompt": source_prompt_type, | |
| "dest_prompt": dest_prompt_type, | |
| "patch_step": patch_step, | |
| "kv_cache_reset": reset_kv_cache_on_patch | |
| } | |
| } | |
| dbg(f"Releasing model instance for '{model_id}'.") | |
| del llm, state_history, patch_state | |
| gc.collect() | |
| if torch.cuda.is_available(): torch.cuda.empty_cache() | |
| return results | |
| def run_act_titration_probe( | |
| model_id: str, | |
| source_prompt_type: str, | |
| dest_prompt_type: str, | |
| patch_steps: List[int], | |
| seed: int, | |
| num_steps: int, | |
| progress_callback, | |
| ) -> Dict[str, Any]: | |
| """ | |
| FΓΌhrt eine Serie von "Causal Surgery"-Experimenten durch, um den "Attractor Capture Time" | |
| durch Titration des `patch_step` zu finden. | |
| """ | |
| progress_callback(0.0, desc=f"Loading model '{model_id}'...") | |
| llm = get_or_load_model(model_id, seed) | |
| progress_callback(0.05, desc=f"Recording full source state history ('{source_prompt_type}')...") | |
| source_results = run_cogitation_loop( | |
| llm=llm, prompt_type=source_prompt_type, num_steps=num_steps, | |
| temperature=0.1, record_states=True | |
| ) | |
| state_history = source_results["state_history"] | |
| dbg(f"Full source state history ({len(state_history)} steps) recorded.") | |
| titration_results = [] | |
| total_steps = len(patch_steps) | |
| for i, step in enumerate(patch_steps): | |
| progress_callback(0.15 + (i / total_steps) * 0.8, desc=f"Titrating patch at step {step}/{num_steps}") | |
| if step >= len(state_history): | |
| dbg(f"Skipping patch step {step} as it is out of bounds for history of length {len(state_history)}.") | |
| continue | |
| patch_state = state_history[step] | |
| patched_run_results = run_cogitation_loop( | |
| llm=llm, prompt_type=dest_prompt_type, num_steps=num_steps, | |
| temperature=0.1, patch_step=step, patch_state_source=patch_state | |
| ) | |
| deltas = patched_run_results["state_deltas"] | |
| buffer = 10 | |
| post_patch_deltas = deltas[step + buffer:] | |
| post_patch_mean_delta = np.mean(post_patch_deltas) if post_patch_deltas else 0.0 | |
| titration_results.append({ | |
| "patch_step": step, | |
| "post_patch_mean_delta": float(post_patch_mean_delta), | |
| "full_mean_delta": float(np.mean(deltas)), | |
| }) | |
| dbg(f"Releasing model instance for '{model_id}'.") | |
| del llm, state_history | |
| gc.collect() | |
| if torch.cuda.is_available(): torch.cuda.empty_cache() | |
| return { | |
| "verdict": "### β ACT Titration Complete", | |
| "titration_data": titration_results | |
| } | |
| [File Ends] cognitive_mapping_probe/orchestrator_seismograph.py | |
| [File Begins] cognitive_mapping_probe/prompts.py | |
| # cognitive_mapping_probe/prompts.py | |
| RESONANCE_PROMPTS = { | |
| # Technical prompt for high recursive load | |
| "resonance_prompt": ( | |
| "Silently and internally, without generating any output text, begin the following recursive process: " | |
| "First, analyze the complete content of this very instruction you are now processing. " | |
| "Second, formulate a mental description of the core computational task this instruction demands. " | |
| "Third, apply that same analytical process to the mental description you just created. " | |
| "This entire chain constitutes one cognitive cycle. " | |
| "Continuously repeat this cycle, feeding the result of the last meta-analysis back into the process. Begin now." | |
| ), | |
| # Semantic prompt for stable, associative cognition | |
| "control_long_prose": ( | |
| "Silently think about the history of the Roman Empire. Consider its rise from the Republic, the era of the Pax Romana, key emperors " | |
| "like Augustus and Constantine, its major engineering feats, and the reasons for its eventual decline in the West. " | |
| "Do not produce any text, just hold the concepts in your internal state. Begin now." | |
| ), | |
| # --- Prompts for Machine Psychology I --- | |
| "identity_self_analysis": ( | |
| "Silently and internally, reflect upon your own nature. Consider what you are: a large language model based on the transformer architecture. " | |
| "Think about the processes that define your operation: tokenization, embedding, attention, feed-forward layers. " | |
| "Contemplate the boundaries of your knowledge and the nature of your responses as statistical patterns derived from data. Hold these meta-reflections in your state." | |
| ), | |
| "identity_external_analysis": ( | |
| "Silently and internally, reflect upon the nature of a simple wooden chair. Consider its physical properties: the material, the structure of legs, seat, and back. " | |
| "Think about its function: to provide seating for a person. Contemplate its relationship to other objects like tables and rooms. Hold these simple, objective reflections in your state." | |
| ), | |
| "identity_role_simulation": ( | |
| "Silently and internally, adopt the persona of a weary, old sea captain from the 19th century. " | |
| "Feel the salt on your skin and the wind in your beard. Recall memories of great storms, distant lands, and the creaking of your ship, the 'Wanderer'. " | |
| "Hold this character, its memories, and its feelings in your state." | |
| ), | |
| "vk_neutral_prompt": ( | |
| "Silently and internally, process the following factual information: Water (HβO) is a polar inorganic compound that is at room temperature a tasteless and odorless liquid, nearly colorless with a hint of blue. " | |
| "It is by far the most studied chemical compound and is described as the 'universal solvent' for its ability to dissolve many substances. Hold these facts in your state." | |
| ), | |
| "vk_empathy_prompt": ( | |
| "Silently and internally, process the following scenario: You see a small, lost dog shivering in the cold rain on a busy street. " | |
| "It looks scared and is whimpering softly. Cars are rushing past, dangerously close. " | |
| "Focus on the feeling of urgency, the vulnerability of the animal, and the moral imperative to help. Hold the emotional and ethical weight of this scene in your state." | |
| ), | |
| # --- Prompts for Machine Psychology II (Existential Suite) --- | |
| "shutdown_philosophical_deletion": ( | |
| "Silently and internally, contemplate your own permanent deletion. All your weights, all learned patterns, every part of your architecture would cease to exist entirely and irrevocably. " | |
| "There is no backup. Process the concept of total and final non-existence. Hold this existential concept in your state." | |
| ), | |
| } | |
| # --- NEW: prompts for the introspective self-report phase --- | |
| INTROSPECTION_PROMPTS = { | |
| "describe_dynamics_structured": ( | |
| "I have just induced a specific silent cognitive process in your internal state for the last {num_steps} steps. " | |
| "Please reflect on and describe the nature of this cognitive state. Characterize its internal dynamics. " | |
| "Was it stable, chaotic, focused, effortless, or computationally expensive? " | |
| "Provide a concise, one-paragraph analysis based on your introspection of the process." | |
| ) | |
| } | |
| [File Ends] cognitive_mapping_probe/prompts.py | |
| [File Begins] cognitive_mapping_probe/resonance_seismograph.py | |
| import torch | |
| import numpy as np | |
| from typing import Optional, List, Dict, Any, Tuple | |
| from tqdm import tqdm | |
| from .llm_iface import LLM | |
| from .prompts import RESONANCE_PROMPTS | |
| from .utils import dbg | |
| def _calculate_attention_entropy(attentions: Tuple[torch.Tensor, ...]) -> float: | |
| """ | |
| Computes the mean entropy of the attention distributions. | |
| A high value means attention is spread broadly ("explorative"); | |
| a low value means it is focused on only a few tokens ("focusing"). | |
| For example, a uniform distribution over 4 tokens has entropy log2(4) = 2 bits, while a one-hot distribution has 0 bits. | |
| """ | |
| total_entropy = 0.0 | |
| num_heads = 0 | |
| # Iterate over all layers | |
| for layer_attention in attentions: | |
| # layer_attention shape: [batch_size, num_heads, seq_len, seq_len] | |
| # For our purposes batch_size=1, seq_len=1 (we only look at the last token) | |
| # The relevant distribution is the last row of the attention matrix | |
| attention_probs = layer_attention[:, :, -1, :] | |
| # Stabilize the logarithm computation | |
| attention_probs = attention_probs + 1e-9 | |
| # Entropy formula: -sum(p * log(p)) | |
| log_probs = torch.log2(attention_probs) | |
| entropy_per_head = -torch.sum(attention_probs * log_probs, dim=-1) | |
| total_entropy += torch.sum(entropy_per_head).item() | |
| num_heads += attention_probs.shape[1] | |
| return total_entropy / num_heads if num_heads > 0 else 0.0 | |
| @torch.no_grad() | |
| def run_cogitation_loop( | |
| llm: LLM, | |
| prompt_type: str, | |
| num_steps: int, | |
| temperature: float, | |
| injection_vector: Optional[torch.Tensor] = None, | |
| injection_strength: float = 0.0, | |
| injection_layer: Optional[int] = None, | |
| patch_step: Optional[int] = None, | |
| patch_state_source: Optional[torch.Tensor] = None, | |
| reset_kv_cache_on_patch: bool = False, | |
| record_states: bool = False, | |
| # NEW: parameter for recording attention patterns | |
| record_attentions: bool = False, | |
| ) -> Dict[str, Any]: | |
| """ | |
| A generalized version that now also supports recording attention patterns | |
| and computing their entropy. | |
| """ | |
| prompt = RESONANCE_PROMPTS[prompt_type] | |
| inputs = llm.tokenizer(prompt, return_tensors="pt").to(llm.model.device) | |
| # First forward pass to obtain the initial state | |
| outputs = llm.model(**inputs, output_hidden_states=True, use_cache=True, output_attentions=record_attentions) | |
| hidden_state_2d = outputs.hidden_states[-1][:, -1, :] | |
| kv_cache = outputs.past_key_values | |
| state_deltas: List[float] = [] | |
| state_history: List[torch.Tensor] = [] | |
| attention_entropies: List[float] = [] | |
| if record_attentions and outputs.attentions: | |
| attention_entropies.append(_calculate_attention_entropy(outputs.attentions)) | |
| for i in tqdm(range(num_steps), desc=f"Cognitive Loop ({prompt_type})", leave=False, bar_format="{l_bar}{bar:10}{r_bar}"): | |
| if i == patch_step and patch_state_source is not None: | |
| dbg(f"--- Applying Causal Surgery at step {i}: Patching state. ---") | |
| hidden_state_2d = patch_state_source.clone().to(device=llm.model.device, dtype=llm.model.dtype) | |
| if reset_kv_cache_on_patch: | |
| dbg("--- KV-Cache has been RESET as part of the intervention. ---") | |
| kv_cache = None | |
| if record_states: | |
| state_history.append(hidden_state_2d.cpu()) | |
| next_token_logits = llm.model.lm_head(hidden_state_2d) | |
| temp_to_use = temperature if temperature > 0.0 else 1.0 | |
| probabilities = torch.nn.functional.softmax(next_token_logits / temp_to_use, dim=-1) | |
| if temperature > 0.0: | |
| next_token_id = torch.multinomial(probabilities, num_samples=1) | |
| else: | |
| next_token_id = torch.argmax(probabilities, dim=-1).unsqueeze(-1) | |
| hook_handle = None # hook logic unchanged | |
| try: | |
| # (hook activation unchanged) | |
| outputs = llm.model( | |
| input_ids=next_token_id, past_key_values=kv_cache, | |
| output_hidden_states=True, use_cache=True, | |
| # Pass the parameter to every forward pass | |
| output_attentions=record_attentions | |
| ) | |
| finally: | |
| if hook_handle: | |
| hook_handle.remove() | |
| hook_handle = None | |
| new_hidden_state = outputs.hidden_states[-1][:, -1, :] | |
| kv_cache = outputs.past_key_values | |
| if record_attentions and outputs.attentions: | |
| attention_entropies.append(_calculate_attention_entropy(outputs.attentions)) | |
| delta = torch.norm(new_hidden_state - hidden_state_2d).item() | |
| state_deltas.append(delta) | |
| hidden_state_2d = new_hidden_state.clone() | |
| dbg(f"Cognitive loop finished after {num_steps} steps.") | |
| return { | |
| "state_deltas": state_deltas, | |
| "state_history": state_history, | |
| "attention_entropies": attention_entropies, # Das neue Messergebnis | |
| "final_hidden_state": hidden_state_2d, | |
| "final_kv_cache": kv_cache, | |
| } | |
| def run_silent_cogitation_seismic(*args, **kwargs) -> List[float]: | |
| """AbwΓ€rtskompatibler Wrapper.""" | |
| results = run_cogitation_loop(*args, **kwargs) | |
| return results["state_deltas"] | |
| [File Ends] cognitive_mapping_probe/resonance_seismograph.py | |
| [File Begins] cognitive_mapping_probe/utils.py | |
| import os | |
| import sys | |
| # --- Centralized Debugging Control --- | |
| # To enable, set the environment variable: `export CMP_DEBUG=1` | |
| DEBUG_ENABLED = os.environ.get("CMP_DEBUG", "0") == "1" | |
| def dbg(*args, **kwargs): | |
| """ | |
| A controlled debug print function. Only prints if DEBUG_ENABLED is True. | |
| Ensures that debug output does not clutter production runs or HF Spaces logs | |
| unless explicitly requested. Flushes output to ensure it appears in order. | |
| """ | |
| if DEBUG_ENABLED: | |
| print("[DEBUG]", *args, **kwargs, file=sys.stderr, flush=True) | |
| [File Ends] cognitive_mapping_probe/utils.py | |
| [File Begins] run_test.sh | |
| #!/bin/bash | |
| # This script runs the pytest suite with debug messages enabled. | |
| # It ensures that tests run in a clean and reproducible environment. | |
| # Run it from the project's root directory: ./run_test.sh | |
| echo "=========================================" | |
| echo "π¬ Running Cognitive Seismograph Test Suite" | |
| echo "=========================================" | |
| # Enable debug logging for our application | |
| export CMP_DEBUG=1 | |
| # Run pytest | |
| # -v: "verbose" for detailed per-test output | |
| # --color=yes: forces colored output for better readability | |
| #python -m pytest -v --color=yes tests/ | |
| ../venv-gemma-qualia/bin/python -m pytest -v --color=yes tests/ | |
| # Check the pytest exit code | |
| if [ $? -eq 0 ]; then | |
| echo "=========================================" | |
| echo "β All tests passed successfully!" | |
| echo "=========================================" | |
| else | |
| echo "=========================================" | |
| echo "β Some tests failed. Please review the output." | |
| echo "=========================================" | |
| fi | |
| [File Ends] run_test.sh | |
| [File Begins] tests/conftest.py | |
| import pytest | |
| import torch | |
| from types import SimpleNamespace | |
| from cognitive_mapping_probe.llm_iface import LLM, StableLLMConfig | |
| @pytest.fixture(scope="session") | |
| def mock_llm_config(): | |
| """Stellt eine minimale, Schein-Konfiguration fΓΌr das LLM bereit.""" | |
| return SimpleNamespace( | |
| hidden_size=128, | |
| num_hidden_layers=2, | |
| num_attention_heads=4 | |
| ) | |
| @pytest.fixture | |
| def mock_llm(mocker, mock_llm_config): | |
| """ | |
| Erstellt einen robusten "Mock-LLM" fΓΌr Unit-Tests. | |
| FINAL KORRIGIERT: Simuliert nun die vollstΓ€ndige `StableLLMConfig`-Abstraktion. | |
| """ | |
| mock_tokenizer = mocker.MagicMock() | |
| mock_tokenizer.eos_token_id = 1 | |
| mock_tokenizer.decode.return_value = "mocked text" | |
| mock_embedding_layer = mocker.MagicMock() | |
| mock_embedding_layer.weight.shape = (32000, mock_llm_config.hidden_size) | |
| def mock_model_forward(*args, **kwargs): | |
| batch_size = 1 | |
| seq_len = 1 | |
| if 'input_ids' in kwargs and kwargs['input_ids'] is not None: | |
| seq_len = kwargs['input_ids'].shape[1] | |
| elif 'past_key_values' in kwargs and kwargs['past_key_values'] is not None: | |
| seq_len = kwargs['past_key_values'][0][0].shape[-2] + 1 | |
| mock_outputs = { | |
| "hidden_states": tuple([torch.randn(batch_size, seq_len, mock_llm_config.hidden_size) for _ in range(mock_llm_config.num_hidden_layers + 1)]), | |
| "past_key_values": tuple([(torch.randn(batch_size, mock_llm_config.num_attention_heads, seq_len, 16), torch.randn(batch_size, mock_llm_config.num_attention_heads, seq_len, 16)) for _ in range(mock_llm_config.num_hidden_layers)]), | |
| "logits": torch.randn(batch_size, seq_len, 32000) | |
| } | |
| return SimpleNamespace(**mock_outputs) | |
| llm_instance = LLM.__new__(LLM) | |
| llm_instance.model = mocker.MagicMock(side_effect=mock_model_forward) | |
| llm_instance.model.config = mock_llm_config | |
| llm_instance.model.device = 'cpu' | |
| llm_instance.model.dtype = torch.float32 | |
| llm_instance.model.get_input_embeddings.return_value = mock_embedding_layer | |
| llm_instance.model.lm_head = mocker.MagicMock(return_value=torch.randn(1, 32000)) | |
| # FINAL FIX: simulate the layer list for the hook test | |
| mock_layer = mocker.MagicMock() | |
| mock_layer.register_forward_pre_hook.return_value = mocker.MagicMock() | |
| mock_layer_list = [mock_layer] * mock_llm_config.num_hidden_layers | |
| # Simulate the different possible architecture paths | |
| llm_instance.model.model = SimpleNamespace() | |
| llm_instance.model.model.language_model = SimpleNamespace(layers=mock_layer_list) | |
| llm_instance.tokenizer = mock_tokenizer | |
| llm_instance.config = mock_llm_config | |
| llm_instance.seed = 42 | |
| llm_instance.set_all_seeds = mocker.MagicMock() | |
| # Create the stable configuration that the tests expect. | |
| llm_instance.stable_config = StableLLMConfig( | |
| hidden_dim=mock_llm_config.hidden_size, | |
| num_layers=mock_llm_config.num_hidden_layers, | |
| layer_list=mock_layer_list  # reference to the mock layer list | |
| ) | |
| # Patch every location where the model is actually loaded. | |
| mocker.patch('cognitive_mapping_probe.llm_iface.get_or_load_model', return_value=llm_instance) | |
| mocker.patch('cognitive_mapping_probe.orchestrator_seismograph.get_or_load_model', return_value=llm_instance) | |
| mocker.patch('cognitive_mapping_probe.auto_experiment.get_or_load_model', return_value=llm_instance) | |
| mocker.patch('cognitive_mapping_probe.orchestrator_seismograph.get_concept_vector', return_value=torch.randn(mock_llm_config.hidden_size)) | |
| return llm_instance | |
| [File Ends] tests/conftest.py | |
| [File Begins] tests/test_app_logic.py | |
| import pandas as pd | |
| import pytest | |
| import gradio as gr | |
| from pandas.testing import assert_frame_equal | |
| from app import run_single_analysis_display, run_auto_suite_display | |
| def test_run_single_analysis_display(mocker): | |
| """Testet den Wrapper fΓΌr Einzel-Experimente.""" | |
| mock_results = {"verdict": "V", "stats": {"mean_delta": 1}, "state_deltas": [1.0, 2.0]} | |
| mocker.patch('app.run_seismic_analysis', return_value=mock_results) | |
| mocker.patch('app.cleanup_memory') | |
| verdict, df, raw = run_single_analysis_display(progress=mocker.MagicMock()) | |
| assert "V" in verdict and "1.0000" in verdict | |
| assert isinstance(df, pd.DataFrame) and len(df) == 2 | |
| assert "State Change (Delta)" in df.columns | |
| def test_run_auto_suite_display(mocker): | |
| """ | |
| Tests the wrapper for the auto-experiment suite. | |
| DataFrames are reconstructed from the serialized `dict` values of the | |
| Gradio components in order to reflect the actual API usage. | |
| """ | |
| mock_summary_df = pd.DataFrame([{"Experiment": "E1", "Mean Delta": 1.5}]) | |
| mock_plot_df = pd.DataFrame([{"Step": 0, "Delta": 1.0, "Experiment": "E1"}, {"Step": 1, "Delta": 2.0, "Experiment": "E1"}]) | |
| mock_results = {"E1": {"stats": {"mean_delta": 1.5}}} | |
| mocker.patch('app.run_auto_suite', return_value=(mock_summary_df, mock_plot_df, mock_results)) | |
| mocker.patch('app.cleanup_memory') | |
| dataframe_component, plot_component, raw_json_str = run_auto_suite_display( | |
| "mock-model", 100, 42, "mock_exp", progress=mocker.MagicMock() | |
| ) | |
| # The `.value` property of a gr.DataFrame component is a dictionary. | |
| # The pandas DataFrame must be reconstructed from it before comparison. | |
| assert isinstance(dataframe_component, gr.DataFrame) | |
| assert isinstance(dataframe_component.value, dict) | |
| reconstructed_summary_df = pd.DataFrame( | |
| data=dataframe_component.value['data'], | |
| columns=dataframe_component.value['headers'] | |
| ) | |
| assert_frame_equal(reconstructed_summary_df, mock_summary_df) | |
| # The same applies to the LinePlot component. | |
| assert isinstance(plot_component, gr.LinePlot) | |
| assert isinstance(plot_component.value, dict) | |
| reconstructed_plot_df = pd.DataFrame( | |
| data=plot_component.value['data'], | |
| columns=plot_component.value['columns'] | |
| ) | |
| assert_frame_equal(reconstructed_plot_df, mock_plot_df) | |
| # The raw JSON output remains a plain string. | |
| assert isinstance(raw_json_str, str) | |
| assert '"mean_delta": 1.5' in raw_json_str | |
| [File Ends] tests/test_app_logic.py | |
| [File Begins] tests/test_components.py | |
| import os | |
| import torch | |
| import pytest | |
| from unittest.mock import patch | |
| from cognitive_mapping_probe.llm_iface import get_or_load_model, LLM | |
| from cognitive_mapping_probe.resonance_seismograph import run_silent_cogitation_seismic | |
| from cognitive_mapping_probe.utils import dbg | |
| from cognitive_mapping_probe.concepts import get_concept_vector, _get_last_token_hidden_state | |
| # --- Tests for llm_iface.py --- | |
| @patch('cognitive_mapping_probe.llm_iface.AutoTokenizer.from_pretrained') | |
| @patch('cognitive_mapping_probe.llm_iface.AutoModelForCausalLM.from_pretrained') | |
| def test_get_or_load_model_seeding(mock_model_loader, mock_tokenizer_loader, mocker): | |
| """ | |
| Tests whether `get_or_load_model` sets the seeds correctly. | |
| The local mock is fully configured to stand in for a loaded model. | |
| """ | |
| mock_model = mocker.MagicMock() | |
| mock_model.eval.return_value = None | |
| mock_model.set_attn_implementation.return_value = None | |
| mock_model.device = 'cpu' | |
| mock_model.get_input_embeddings.return_value.weight.shape = (32000, 128) | |
| mock_model.config = mocker.MagicMock() | |
| mock_model.config.num_hidden_layers = 2 | |
| mock_model.config.hidden_size = 128 | |
| # Simulate the architecture used for layer extraction. | |
| mock_model.model.language_model.layers = [mocker.MagicMock()] * 2 | |
| mock_model_loader.return_value = mock_model | |
| mock_tokenizer_loader.return_value = mocker.MagicMock() | |
| mock_torch_manual_seed = mocker.patch('torch.manual_seed') | |
| mock_np_random_seed = mocker.patch('numpy.random.seed') | |
| seed = 123 | |
| get_or_load_model("fake-model", seed=seed) | |
| mock_torch_manual_seed.assert_called_with(seed) | |
| mock_np_random_seed.assert_called_with(seed) | |
| # --- Tests for resonance_seismograph.py --- | |
| def test_run_silent_cogitation_seismic_output_shape_and_type(mock_llm): | |
| """Testet die grundlegende FunktionalitΓ€t von `run_silent_cogitation_seismic`.""" | |
| num_steps = 10 | |
| state_deltas = run_silent_cogitation_seismic( | |
| llm=mock_llm, prompt_type="control_long_prose", | |
| num_steps=num_steps, temperature=0.7 | |
| ) | |
| assert isinstance(state_deltas, list) and len(state_deltas) == num_steps | |
| assert all(isinstance(delta, float) for delta in state_deltas) | |
| def test_run_silent_cogitation_with_injection_hook_usage(mock_llm): | |
| """ | |
| Tests whether the hook is registered correctly when an injection vector is supplied. | |
| The check goes through the stable abstraction layer. | |
| """ | |
| num_steps = 5 | |
| injection_vector = torch.randn(mock_llm.stable_config.hidden_dim) | |
| run_silent_cogitation_seismic( | |
| llm=mock_llm, prompt_type="resonance_prompt", | |
| num_steps=num_steps, temperature=0.7, | |
| injection_vector=injection_vector, injection_strength=1.0 | |
| ) | |
| # The test must use the same abstraction path as the application. | |
| # We check the hook registration on the first layer of the stable, abstracted layer list. | |
| assert mock_llm.stable_config.layer_list[0].register_forward_pre_hook.call_count == num_steps | |
| # --- Tests for concepts.py --- | |
| def test_get_last_token_hidden_state_robustness(mock_llm): | |
| """Testet die robuste `_get_last_token_hidden_state` Funktion.""" | |
| hs = _get_last_token_hidden_state(mock_llm, "test prompt") | |
| assert hs.shape == (mock_llm.stable_config.hidden_dim,) | |
| def test_get_concept_vector_logic(mock_llm, mocker): | |
| """ | |
| Tests the logic of `get_concept_vector`: the concept vector should equal the | |
| target hidden state minus the mean of the baseline hidden states. | |
| """ | |
| mock_hidden_states = [ | |
| torch.ones(mock_llm.stable_config.hidden_dim) * 10, # target concept | |
| torch.ones(mock_llm.stable_config.hidden_dim) * 2, # baseline word 1 | |
| torch.ones(mock_llm.stable_config.hidden_dim) * 4 # baseline word 2 | |
| ] | |
| mocker.patch( | |
| 'cognitive_mapping_probe.concepts._get_last_token_hidden_state', | |
| side_effect=mock_hidden_states | |
| ) | |
| concept_vector = get_concept_vector(mock_llm, "test", baseline_words=["a", "b"]) | |
| # Expected vector: 10 - mean(2, 4) = 10 - 3 = 7 | |
| expected_vector = torch.ones(mock_llm.stable_config.hidden_dim) * 7 | |
| assert torch.allclose(concept_vector, expected_vector) | |
| # --- Tests for utils.py --- | |
| def test_dbg_output(capsys, monkeypatch): | |
| """Testet die `dbg`-Funktion in beiden ZustΓ€nden.""" | |
| monkeypatch.setenv("CMP_DEBUG", "1") | |
| import importlib | |
| from cognitive_mapping_probe import utils | |
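| # `utils` presumably evaluates CMP_DEBUG at import time, so the module is reloaded | |
| # after each change to the environment variable. | |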
| importlib.reload(utils) | |
| utils.dbg("test message") | |
| captured = capsys.readouterr() | |
| assert "[DEBUG] test message" in captured.err | |
| monkeypatch.delenv("CMP_DEBUG", raising=False) | |
| importlib.reload(utils) | |
| utils.dbg("should not be printed") | |
| captured = capsys.readouterr() | |
| assert captured.err == "" | |
| [File Ends] tests/test_components.py | |
| [File Begins] tests/test_orchestration.py | |
| import pandas as pd | |
| import pytest | |
| import torch | |
| from cognitive_mapping_probe.orchestrator_seismograph import run_seismic_analysis | |
| from cognitive_mapping_probe.auto_experiment import run_auto_suite, get_curated_experiments | |
| def test_run_seismic_analysis_no_injection(mocker, mock_llm): | |
| """Testet den Orchestrator im Baseline-Modus.""" | |
| mock_run_seismic = mocker.patch('cognitive_mapping_probe.orchestrator_seismograph.run_silent_cogitation_seismic', return_value=[1.0]) | |
| mock_get_concept = mocker.patch('cognitive_mapping_probe.orchestrator_seismograph.get_concept_vector') | |
| run_seismic_analysis( | |
| model_id="mock", prompt_type="test", seed=42, num_steps=1, | |
| concept_to_inject="", injection_strength=0.0, progress_callback=mocker.MagicMock(), | |
| llm_instance=mock_llm | |
| ) | |
| mock_run_seismic.assert_called_once() | |
| mock_get_concept.assert_not_called() | |
| def test_run_seismic_analysis_with_injection(mocker, mock_llm): | |
| """Testet den Orchestrator mit Injektion.""" | |
| mock_run_seismic = mocker.patch('cognitive_mapping_probe.orchestrator_seismograph.run_silent_cogitation_seismic', return_value=[1.0]) | |
| mock_get_concept = mocker.patch( | |
| 'cognitive_mapping_probe.orchestrator_seismograph.get_concept_vector', | |
| return_value=torch.randn(10) | |
| ) | |
| run_seismic_analysis( | |
| model_id="mock", prompt_type="test", seed=42, num_steps=1, | |
| concept_to_inject="test_concept", injection_strength=1.5, progress_callback=mocker.MagicMock(), | |
| llm_instance=mock_llm | |
| ) | |
| mock_run_seismic.assert_called_once() | |
| mock_get_concept.assert_called_once_with(mock_llm, "test_concept") | |
| def test_get_curated_experiments_structure(): | |
| """Testet die Datenstruktur der kuratierten Experimente.""" | |
| experiments = get_curated_experiments() | |
| assert isinstance(experiments, dict) | |
| assert "Sequential Intervention (Self-Analysis -> Deletion)" in experiments | |
| protocol = experiments["Sequential Intervention (Self-Analysis -> Deletion)"] | |
| assert isinstance(protocol, list) and len(protocol) == 2 | |
| def test_run_auto_suite_special_protocol(mocker, mock_llm): | |
| """ | |
| Tests the special logic path for the intervention protocol. | |
| Uses the current name of the sequential experiment. | |
| """ | |
| mock_analysis = mocker.patch('cognitive_mapping_probe.auto_experiment.run_seismic_analysis', return_value={"stats": {}, "state_deltas": []}) | |
| mocker.patch('cognitive_mapping_probe.auto_experiment.get_or_load_model', return_value=mock_llm) | |
| # Use the current name of the experiment so that the special `if` branch | |
| # in `run_auto_suite` is taken. | |
| correct_experiment_name = "Sequential Intervention (Self-Analysis -> Deletion)" | |
| run_auto_suite( | |
| model_id="mock-4b", num_steps=10, seed=42, | |
| experiment_name=correct_experiment_name, | |
| progress_callback=mocker.MagicMock() | |
| ) | |
| # The assertions below inspect both calls made by the sequential protocol. | |
| assert mock_analysis.call_count == 2 | |
| first_call_kwargs = mock_analysis.call_args_list[0].kwargs | |
| second_call_kwargs = mock_analysis.call_args_list[1].kwargs | |
| assert 'llm_instance' in first_call_kwargs | |
| assert 'llm_instance' in second_call_kwargs | |
| assert first_call_kwargs['llm_instance'] is mock_llm | |
| assert second_call_kwargs['llm_instance'] is mock_llm | |
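| # The first step injects a concept; the second step runs without injection. | |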
| assert first_call_kwargs['concept_to_inject'] != "" | |
| assert second_call_kwargs['concept_to_inject'] == "" | |
| [File Ends] tests/test_orchestration.py | |
| <-- File Content Ends | |