Model Card for Gemma-SEA-LION-v4-27B
Last updated: 2025-08-25
Gemma-SEA-LION-v4-27B is a multilingual model based on Gemma 3 (which supports over 100 languages). It has undergone continued pre-training on approximately 500B tokens sampled from a pool of over one trillion tokens spanning 11 SEA languages: Burmese, English, Indonesian, Khmer, Lao, Malay, Mandarin, Tagalog, Tamil, Thai and Vietnamese.
Gemma-SEA-LION-v4-27B inherits Gemma 3's:
- Large 128K context length
- Image and text understanding capabilities, including document comprehension, visual Q&A, and image-grounded reasoning
- Advanced function calling and structured outputs to allow for seamless integration into larger systems
Model Details
Model Description
SEA-LION stands for Southeast Asian Languages In One Network.
To create Gemma-SEA-LION-v4-27B, we performed continued pre-training in English and SEA languages on Gemma 3 27B IT, a decoder model built on the Gemma 3 architecture.
For tokenization, the model employs the default tokenizer used in Gemma 3 27B IT.
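The snippet below is a minimal sketch of loading that tokenizer from the model repository and counting tokens for a sample sentence; the sample text is purely illustrative.

```python
from transformers import AutoTokenizer

# The model ships with the same tokenizer as Gemma 3 27B IT.
tokenizer = AutoTokenizer.from_pretrained("aisingapore/Gemma-SEA-LION-v4-27B")

sample = "Selamat pagi, Asia Tenggara!"  # illustrative greeting
token_ids = tokenizer(sample)["input_ids"]
print(len(token_ids), token_ids[:10])  # token count and the first few IDs
```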
- Developed by: Products Pillar, AI Singapore
- Funded by: Singapore NRF
- Shared by: Products Pillar, AI Singapore
- Model type: Decoder
- Context length: 128k
- Language(s) (NLP): Burmese, English, Indonesian, Khmer, Lao, Malay, Mandarin, Tagalog, Tamil, Thai and Vietnamese
- License: Gemma Terms of Use
- Continued pretrained from model: Gemma-3-27B-IT
Model Sources
- Repository: https://github.com/aisingapore/sealion.git
Uses
Out-of-Scope Use
The model has not been aligned for safety. Developers and users should perform their own safety fine-tuning and related security measures. In no event shall the authors be held liable for any claims, damages, or other liabilities arising from the use of the released weights and codes.
Bias, Risks, and Limitations
The model was not tested for robustness against adversarial prompting. It is important for users to be aware that our model exhibits certain limitations that warrant consideration. Like many LLMs, the model can hallucinate and occasionally generates irrelevant content, introducing fictional elements that are not grounded in the provided context. Users should also exercise caution in interpreting and validating the model's responses due to the potential inconsistencies.
Limitations
In terms of vision capability, Gemma-SEA-LION-v4-27B underwent continued pre-training exclusively on the text backbone. As a result, its vision capabilities are expected to be comparable to those of Gemma 3 27B IT (google/gemma-3-27b-it) and may not exhibit significant improvements or differences in this area; see the image example under How to Get Started below.
How to Get Started with the Model
Use the code below to get started with the model using the 🤗 Transformers library.
```python
from transformers import pipeline
import torch

# Load the model as a chat-capable text-generation pipeline.
pipe = pipeline(
    "text-generation",
    model="aisingapore/Gemma-SEA-LION-v4-27B",
    device="cuda",
    torch_dtype=torch.bfloat16,
)

messages = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are a helpful assistant."}],
    },
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Write a poem on southeast asian countries in Indonesian."}
        ],
    },
]

# The pipeline applies the chat template and returns the full conversation;
# the last message holds the assistant's reply.
output = pipe(messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
```
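Since the model inherits Gemma 3's image understanding (see Uses and Limitations above), images can also be passed in through the chat format. Below is a minimal sketch using the image-text-to-text pipeline, assuming the checkpoint retains Gemma 3's vision components; the image URL is a placeholder.

```python
from transformers import pipeline
import torch

# Multimodal variant of the pipeline above: accepts images alongside text.
pipe = pipeline(
    "image-text-to-text",
    model="aisingapore/Gemma-SEA-LION-v4-27B",
    device="cuda",
    torch_dtype=torch.bfloat16,
)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/street-food.jpg"},  # placeholder URL
            {"type": "text", "text": "Describe this dish in Thai."},
        ],
    },
]

output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
```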
Training Details
Training Data
The dataset comprises Burmese, English, Indonesian, Khmer, Lao, Malay, Mandarin, Tagalog, Tamil, Thai and Vietnamese text, collected from a mixture of sources including web data, code, open-source datasets, and synthetically generated data, amounting to a total of 500 billion tokens.
The 500 billion tokens were sampled from a much larger pool of roughly one trillion tokens drawn from open-source datasets; the data mix shown below was determined to be optimal through our experiments.
| Language | Dataset Name | Total Tokens (B) | Percentage (%) | Total Percentage (%) |
|---|---|---|---|---|
| Code | StarCoder (OLMo 2 Version) | 50 | 10 | 10 |
| EN | Fineweb-Edu | 80 | 16 | 40 |
| | DCLM-OLMo2-HQ | 80 | 16 | |
| | Non-CC-EN | 40 | 8 | |
| ZH | SEA-LION Pile v1 | 13.5 | 2.7 | 9 |
| | Fineweb2 | 13.5 | 2.7 | |
| | Fineweb2-HQ | 4.5 | 0.9 | |
| VI | SEA-LION Pile v1 | 4.25 | 0.85 | 8.5 |
| | SEA-LION Pile v2 | 12.75 | 2.55 | |
| | Fineweb2 | 8.5 | 1.7 | |
| | Non-CC-VI | 17 | 3.4 | |
| ID | SEA-LION Pile v1 | 5.66 | 1.13 | 8.5 |
| | SEA-LION Pile v2 | 17 | 3.4 | |
| | Fineweb2 | 11.33 | 2.27 | |
| | Non-CC-ID | 8.5 | 1.7 | |
| TH | SEA-LION Pile v1 | 3.035 | 0.61 | 8.5 |
| | SEA-LION Pile v2 | 9.107 | 1.82 | |
| | Fineweb2 | 3.035 | 0.61 | |
| | WangChanBERTa | 3.035 | 0.61 | |
| | Dolmav1 | 3.035 | 0.61 | |
| | Non-CC-TH | 21.25 | 4.25 | |
| TL, TA, MS, KM, LO and MY | ALL_LANG | 77.5 | 15.5 | 15.5 |
Note:
- All token counts were computed using the Gemma 3 tokenizer.
- Pre-training was conducted with 8K-token sequence lengths.
- SEA-LION Pile v1 is processed from Common Crawl WET, which is published here. The main proportion is from the mC4 dataset (corpus link). The cutoff date of this version is September 2020.
- SEA-LION Pile v2 is processed from Common Crawl WARC from October 2020 to April 2024.
- Tamil news is sourced with permission from Seithi.
- We utilized synthetically generated data (0.5% of the mix) for the low-resource language Khmer.
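As a quick sanity check on the data mix above, the per-source token counts can be summed per language and compared against the stated 500B total and per-language percentages. A minimal sketch with values copied from the table (small rounding differences are expected):

```python
# Token counts in billions, copied from the data-mix table above.
mix_billion_tokens = {
    "Code": 50,
    "EN": 80 + 80 + 40,
    "ZH": 13.5 + 13.5 + 4.5,
    "VI": 4.25 + 12.75 + 8.5 + 17,
    "ID": 5.66 + 17 + 11.33 + 8.5,
    "TH": 3.035 + 9.107 + 3.035 + 3.035 + 3.035 + 21.25,
    "TL/TA/MS/KM/LO/MY": 77.5,
}

total = sum(mix_billion_tokens.values())  # close to the stated 500B
for lang, tokens in mix_billion_tokens.items():
    print(f"{lang:>18}: {tokens:7.2f}B ({100 * tokens / total:4.1f}%)")
print(f"{'Total':>18}: {total:7.2f}B")
```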
Training Procedure
Training Hyperparameters
- Training regime:
| Hyperparameter | Gemma-SEA-LION-v4-27B |
|---|---|
| Precision | bfloat16 |
| Optimizer | decoupled_adamw |
| Scheduler | CosineAnnealing |
| Learning Rate | 4.00E-08 |
| Global Batch Size | 1024 |
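To put the batch configuration in perspective, the tokens processed per optimizer step and the approximate number of steps for the 500B-token run can be derived from the 8K sequence length (see the training notes above) and the global batch size of 1024. A minimal sketch, assuming 8K means 8192 tokens per sequence:

```python
seq_len = 8192            # assumed 8K-token sequence length (see training notes)
global_batch_size = 1024  # from the hyperparameter table
total_tokens = 500e9      # approx. 500B continued pre-training tokens

tokens_per_step = seq_len * global_batch_size  # ~8.4M tokens per optimizer step
approx_steps = total_tokens / tokens_per_step  # ~60K optimizer steps
print(f"{tokens_per_step:,} tokens/step, ~{approx_steps:,.0f} steps")
```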
Results
For details on Gemma-SEA-LION-v4-27B-IT performance, please refer to the Leaderboard results on SEA-HELM.
Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: Nvidia H200 140GB GPUs
- Hours used: 214 hrs
- Cloud Provider: SMC H200
- Compute Region: Singapore
- Carbon Emitted: approx. 98 kg CO2e
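The calculator referenced in Lacoste et al. (2019) estimates emissions as GPU power draw × hours × grid carbon intensity. The sketch below only illustrates that formula; the per-GPU power draw, GPU count, and grid intensity are placeholder assumptions, not figures from this model card, so the output will not match the reported ~98 kg CO2e exactly.

```python
# Illustrative carbon estimate following the ML CO2 Impact methodology:
# emissions = power_draw_kW * num_gpus * hours * carbon_intensity (kg CO2e/kWh).
gpu_power_kw = 0.7      # assumed average draw per H200 GPU (placeholder)
num_gpus = 1            # placeholder; the card reports total hours, not GPU count
hours = 214             # from the model card
carbon_intensity = 0.4  # assumed kg CO2e per kWh for the compute region (placeholder)

energy_kwh = gpu_power_kw * num_gpus * hours
emissions_kg = energy_kwh * carbon_intensity
print(f"{energy_kwh:.1f} kWh, ~{emissions_kg:.1f} kg CO2e")
```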
More Information
This is the repository for the commercial instruction-tuned model. As noted under Out-of-Scope Use above, the model has not been aligned for safety; developers and users should perform their own safety fine-tuning and related security measures, and the authors accept no liability for any claims, damages, or other liabilities arising from the use of the released weights and code.
For more info, please contact us at sealion@aisingapore.org
Team
Antonyrex Sajeban, Chan Hok Teng Adwin, Cheng Zi Yi Nicholas, Choa Hsueh Mei Esther, Heng Jonathan, Huang Yuli, Hulagadri Adithya Venkatadri, Jann Railey Estrada Montalan, Kang Siow Wei Bryan, Lau Wayne, Lee Chwan Ren, Leong Wai Yi, Leong Wei Qi, Limkonchotiwat Peerat, Muhammad Ridzuan Bin Mokhtar, Nagarajan Karthik, Ng Boon Cheong Raymond, Ngee Chia Tai, Ngui Jian Gang, Nguyen Thanh Ngan, Ong Jin Jie Brandon, Ong Tat-Wee David, Ong Zhi Hao, Pereira Mark, Rengarajan Hamsawardhini, Susanto Yosephine, Sutaveephamochanon Anocha, Tan Choon Meng, Tan Chor Phin Evelyn, Tan Siao Wei Jessica, Teng Kok Wai Walter, Teo Eng Sipp Leslie, Tjhi William, Yeo Yeow Tong, Yong Xianbin, Liew Rachel, Liu Bing Jie Darius, Teo Wei Yi, Zhou Lin (NCS), Gopalakrishnan Roshan (NCS), Anda Cuahtemoc (NCS), Sri Devi Wijaya (NCS), Nandi Partha (NCS), Elliott Chris (Google), Mohseni Mohammadreza (Google), Sharan Mayank (Google), Wei Fanny (Google), Tang Jiuqiang (Google), Xu Xiang (Google), Yu Ting (Google), Loh Michelle (Google), Mangal Saurabh (Google), Mukherjee Pratyusha (Google), Sim Stephanie (Google)
Contact
sealion@aisingapore.org