Koalar committed c0f75b6 (verified; parent: 13c6cad)

Update README.md

Files changed (1): README.md (+143 −132)

# KaLLaM - Motivational-Therapeutic Advisor

KaLLaM is a bilingual (Thai/English) multi-agent assistant for physical- and mental-health conversations. It orchestrates specialized agents (Supervisor, Doctor, Psychologist, Translator, Summarizer), persists state in SQLite, exposes Gradio front-ends, and ships datasets plus evaluation tooling for benchmarking models' psychological skills.
Finalist in PAN-SEA AI DEVELOPER CHALLENGE 2025 Round 2: Develop Deployable Solutions & Pitch.

---
title: {{title}}
emoji: {{emoji}}
colorFrom: {{colorFrom}}
colorTo: {{colorTo}}
sdk: {{sdk}}
sdk_version: "{{sdkVersion}}"
app_file: app.py
pinned: false
---

## Highlights
- Multi-agent orchestration that routes requests to domain specialists.
- Thai/English support backed by SEA-Lion translation services.
- Conversation persistence with export utilities for downstream analysis.
- Ready-to-run Gradio demo and developer interfaces.
- Evaluation scripts for MISC/BiMISC-style coding pipelines.

## Requirements
- Python 3.10 or newer (3.11+ recommended; Docker/App Runner images use 3.11).
- pip, virtualenv (or equivalent), and Git for local development.
- Access tokens for the external models you plan to call (SEA-Lion, Google Gemini, optional OpenAI or AWS Bedrock).

## Quick Start (Local)
1. Clone the repository and switch into it.
2. Create and activate a virtual environment:
   ```powershell
   python -m venv .venv
   .venv\Scripts\Activate.ps1
   ```
   ```bash
   python -m venv .venv
   source .venv/bin/activate
   ```
3. Install dependencies (editable mode keeps imports pointing at `src/`):
   ```bash
   python -m pip install --upgrade pip setuptools wheel
   pip install -e .[dev]
   ```
4. Create a `.env` file at the project root (see the next section) and populate the keys you have access to.
5. Launch one of the Gradio apps:
   ```bash
   python gui/chatbot_demo.py      # bilingual demo UI
   python gui/chatbot_dev_app.py   # Thai-first developer UI
   ```

The Gradio server binds to http://127.0.0.1:7860 by default; override via `GRADIO_SERVER_NAME` and `GRADIO_SERVER_PORT`.

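As a minimal sketch of how those two variables resolve (the `resolve_bind` helper is hypothetical, for illustration only; Gradio reads `GRADIO_SERVER_NAME` and `GRADIO_SERVER_PORT` natively):

```python
import os

def resolve_bind(env: dict) -> str:
    """Resolve the Gradio bind URL from environment variables,
    falling back to Gradio's documented defaults."""
    host = env.get("GRADIO_SERVER_NAME", "127.0.0.1")
    port = int(env.get("GRADIO_SERVER_PORT", "7860"))
    return f"http://{host}:{port}"

print(resolve_bind(dict(os.environ)))
print(resolve_bind({"GRADIO_SERVER_NAME": "0.0.0.0", "GRADIO_SERVER_PORT": "8080"}))  # http://0.0.0.0:8080
```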
## Environment Configuration
Configuration is loaded with `python-dotenv`, so any variables in `.env` are available at runtime. Define only the secrets relevant to the agents you intend to use.

**Core**
- `SEA_LION_API_KEY` *or* (`SEA_LION_GATEWAY_URL` + `SEA_LION_GATEWAY_TOKEN`) for SEA-Lion access.
- `SEA_LION_BASE_URL` (optional; defaults to `https://api.sea-lion.ai/v1`).
- `SEA_LION_MODEL_ID` to override the default SEA-Lion model.
- `GEMINI_API_KEY` for Doctor/Psychologist English responses.

**Optional integrations**
- `OPENAI_API_KEY` if you enable any OpenAI-backed tooling via `strands-agents`.
- `AWS_REGION` (and optionally `AWS_DEFAULT_REGION`) plus temporary credentials (`AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_SESSION_TOKEN`) when running Bedrock-backed flows.
- `AWS_ROLE_ARN` if you assume roles for Bedrock access.
- `NGROK_AUTHTOKEN` when tunnelling Gradio externally.
- `TAVILY_API_KEY` if you wire in search or retrieval plugins.

Example scaffold:
```env
SEA_LION_API_KEY=your-sea-lion-token
SEA_LION_MODEL_ID=aisingapore/Gemma-SEA-LION-v4-27B-IT
GEMINI_API_KEY=your-gemini-key
OPENAI_API_KEY=sk-your-openai-key
AWS_REGION=ap-southeast-2
# AWS_ACCESS_KEY_ID=...
# AWS_SECRET_ACCESS_KEY=...
# AWS_SESSION_TOKEN=...
```
Keep `.env` out of version control and rotate credentials regularly. You can validate temporary AWS credentials with `python test_credentials.py`.

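The `KEY=VALUE` semantics of that scaffold can be illustrated with a stdlib-only sketch (the `load_env` helper is hypothetical and simplified; in the app itself, `python-dotenv`'s `load_dotenv()` does the real work, including quoting and interpolation):

```python
def load_env(text: str) -> dict:
    """Parse KEY=VALUE lines, skipping blanks and '#' comments --
    a simplified stand-in for python-dotenv's load_dotenv."""
    env = {}
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

scaffold = """\
SEA_LION_API_KEY=your-sea-lion-token
SEA_LION_MODEL_ID=aisingapore/Gemma-SEA-LION-v4-27B-IT
# AWS_ACCESS_KEY_ID=...
"""
cfg = load_env(scaffold)
print(cfg["SEA_LION_MODEL_ID"])        # aisingapore/Gemma-SEA-LION-v4-27B-IT
print("AWS_ACCESS_KEY_ID" in cfg)      # False (commented out)
```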
## Running and Persistence
- Conversations, summaries, and metadata persist to `chatbot_data.db` (SQLite). The schema is created automatically on first run.
- Export session transcripts with `ChatbotManager.export_session_json()`; JSON files land in `exported_sessions/`.
- Logs are emitted per agent into `logs/` (daily files) and to stdout.

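The persistence idea can be pictured with a minimal sketch (the table name and columns here are hypothetical; the real schema lives under `src/kallam/infra` and is created for you on first run):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # the app itself uses chatbot_data.db on disk
conn.execute(
    """CREATE TABLE IF NOT EXISTS messages (
           session_id TEXT,
           role       TEXT,
           content    TEXT,
           created_at TEXT DEFAULT CURRENT_TIMESTAMP
       )"""
)
conn.execute(
    "INSERT INTO messages (session_id, role, content) VALUES (?, ?, ?)",
    ("demo-session", "user", "Hello"),
)
rows = conn.execute("SELECT role, content FROM messages").fetchall()
print(rows)  # [('user', 'Hello')]
```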
## Docker
Build and run the containerised Gradio app:
```bash
docker build -t kallam .
docker run --rm -p 8080:8080 --env-file .env kallam
```
Environment variables are read at runtime; use `--env-file` or `-e` flags to provide the required keys. Override the entry script with `APP_FILE`, for example `-e APP_FILE=gui/chatbot_dev_app.py`.

## AWS App Runner
The repo ships with `apprunner.yaml` for AWS App Runner's managed Python 3.11 runtime.
1. Push the code to a connected repository (GitHub or CodeCommit) or supply an archive.
2. In the App Runner console choose **Source code** -> **Managed runtime** and upload/select `apprunner.yaml`.
3. Configure AWS Secrets Manager references for the environment variables listed under `run.env` (SEA-Lion, Gemini, OpenAI, Ngrok, etc.).
4. Deploy. App Runner exposes the Gradio UI on the service URL and honours the `$PORT` variable (defaults to 8080).

For fully containerised deployments on App Runner, ECS, or EKS, build the Docker image and supply the same environment variables.

## Project Layout
```
project-root/
|-- src/kallam/
|   |-- app/                 # ChatbotManager facade
|   |-- domain/agents/       # Supervisor, Doctor, Psychologist, Translator, Summarizer, Orchestrator
|   |-- infra/               # SQLite stores, exporter, token counter
|   `-- infrastructure/      # Shared SEA-Lion configuration helpers
|-- gui/                     # Gradio demo and developer apps
|-- scripts/                 # Data prep and evaluation utilities
|-- data/                    # Sample datasets (gemini, human, orchestrated, SEA-Lion)
|-- exported_sessions/       # JSON exports created at runtime
|-- logs/                    # Runtime logs (generated)
|-- Dockerfile
|-- apprunner.yaml
|-- test_credentials.py
`-- README.md
```

## Development Tooling
- Run tests: `pytest -q`
- Lint: `ruff check src`
- Type-check: `mypy src`
- Token usage: see `src/kallam/infra/token_counter.py`
- Supervisor/translator fallbacks log warnings if credentials are missing.

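For a feel of what token accounting involves, a deliberately naive sketch (whitespace split only; the real counter in `src/kallam/infra/token_counter.py` is model-aware and will give different numbers):

```python
def rough_token_count(text: str) -> int:
    """Very rough token estimate via whitespace split -- a stand-in
    for the model-aware counter shipped with the project."""
    return len(text.split())

print(rough_token_count("How are you feeling today?"))  # 5
```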
## Scripts and Evaluation
The `scripts/` directory includes:
- `eng_silver_misc_coder.py` and `thai_silver_misc_coder.py` for SEA-Lion-powered coding pipelines.
- `model_evaluator.py` plus preprocessing and visualisation helpers (`ex_data_preprocessor.py`, `in_data_preprocessor.py`, `visualizer.ipynb`).

## Notes
### Proposal
Refer to `KaLLaM Proporsal.pdf` for more information about the project.
### Citation
See `Citation.md` for references and datasets.
### License
Apache License 2.0. Refer to `LICENSE` for full terms.