🍨 Gelato - From Data Curation to Reinforcement Learning: Building a Strong Grounding Model for Computer-Use Agents
🍨 Gelato-30B-A3B (model) | 🖱️ Click-100k (dataset) | Training Instructions | Evaluation
We are releasing 🍨 Gelato-30B-A3B, a state-of-the-art grounding model for GUI computer-use tasks! Gelato is trained on our open-sourced Click-100k dataset and achieves 63.88% accuracy on ScreenSpot-Pro [3] and 67.19% / 73.40% on OS-World-G / OS-World-G (Refined) [4], surpassing prior specialized computer grounding models such as GTA1-32B [5] and much larger VLMs, including Qwen3-VL-235B-A22B-Instruct [10]. When combined with GPT-5, Gelato enables frontier-level agentic performance, placing TBD on the OS-World leaderboard at TBD accuracy.
For details on data curation and training, refer to our blog post.
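To browse the training data directly, Click-100k can be loaded with the Hugging Face datasets library. The snippet below is a minimal sketch: the repo id mlfoundations-cua-dev/Click-100k, the split name, and the field layout are assumptions inferred from the links above, so check the dataset card for the exact values.
from datasets import load_dataset

# Repo id and split are assumptions; see the dataset card for the actual values.
click100k = load_dataset("mlfoundations-cua-dev/Click-100k", split="train")
print(click100k)            # number of rows and column names
print(click100k[0].keys())  # fields of a single example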
Performance
Gelato-30B-A3B outperforms the SoTA specialized computer grounding model, GTA1-32B, and larger VLMs on the ScreenSpot-Pro and OS-World-G grounding benchmarks. When paired with GPT-5, Gelato as a computer-use agent attains a TBD success rate on OS-World, placing it TBD on the leaderboard.
| Model | Total Size | Activated Size | Open Source | ScreenSpot-V2 | ScreenSpot-Pro | OS-World-G |
|---|---|---|---|---|---|---|
| Qwen3-VL-30B-A3B-Instruct | 30 B | 3.3 B | ✅ | - | - | - |
| Qwen3-VL-235B-A22B-Instruct | 235 B | 22 B | ✅ | - | 62.0 | 66.7 |
| OpenCUA-72B | 72 B | - | ✅ | - | 60.8 | 59.2 |
| GTA1-32B | 32 B | - | ✅ | - | - | - |
| Gelato-30B-A3B | 30 B | 3.3 B | ✅ | - | 63.88 | 73.40 |
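For reference, accuracy on these grounding benchmarks is typically computed by checking whether the predicted point lands inside the ground-truth bounding box of the target element. The helper below is an illustrative sketch of that metric, not our evaluation code; the function names and the (left, top, right, bottom) box format are assumptions.
def point_in_bbox(pred_x, pred_y, bbox):
    """Return True if the predicted pixel point lies inside the target element's box."""
    left, top, right, bottom = bbox  # pixel coordinates; illustrative format
    return left <= pred_x <= right and top <= pred_y <= bottom

def grounding_accuracy(predicted_points, target_bboxes):
    """Fraction of examples whose predicted point falls inside the target box."""
    hits = sum(point_in_bbox(x, y, bbox) for (x, y), bbox in zip(predicted_points, target_bboxes))
    return hits / len(target_bboxes)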
Inference
Below is a code snippet demonstrating how to run grounding with our model. Given an image and an instruction, the model outputs coordinates normalized to the range [0, 1000].
from transformers import Qwen3VLMoeForConditionalGeneration, AutoProcessor
import re
from PIL import Image, ImageDraw
import requests
from io import BytesIO
def extract_coordinates(raw_string):
    """
    Extract the first (x, y) coordinate pair from the model output.

    Args:
        raw_string: str (e.g. "(100, 200)")
    Returns:
        (x, y): tuple of ints, e.g. (100, 200); (0, 0) if nothing could be parsed
    """
    try:
        matches = re.findall(r"\((-?\d*\.?\d+),\s*(-?\d*\.?\d+)\)", raw_string)
        # Parse via float first so outputs such as "(100.5, 200.0)" do not raise
        return tuple(int(float(value)) for value in matches[0])
    except (IndexError, ValueError):
        return 0, 0
def visualize_prediction(img, pred_x, pred_y, img_width, img_height):
    """
    Draw the predicted point on the image and save it to disk.

    Args:
        img: PIL.Image.Image
        pred_x: float, x coordinate normalized to [0, 1000]
        pred_y: float, y coordinate normalized to [0, 1000]
        img_width: int
        img_height: int
    """
    # Convert normalized [0, 1000] coordinates to absolute pixel coordinates
    pred_x = int((pred_x * img_width) / 1000)
    pred_y = int((pred_y * img_height) / 1000)
    draw = ImageDraw.Draw(img)
    r = 20
    draw.ellipse((pred_x - r, pred_y - r, pred_x + r, pred_y + r), outline="green", width=2)
    cross_len = 6
    draw.line((pred_x - cross_len, pred_y, pred_x + cross_len, pred_y), fill="green", width=2)
    draw.line((pred_x, pred_y - cross_len, pred_x, pred_y + cross_len), fill="green", width=2)
    img.save("predicted_coordinates.png")
    print(f"Predicted pixel coordinates: ({pred_x}, {pred_y})")
# Load the model and processor
MODEL_PATH = "mlfoundations-cua-dev/Gelato-30B-A3B"
model = Qwen3VLMoeForConditionalGeneration.from_pretrained(
    MODEL_PATH,
    device_map="auto",
    dtype="auto"
)
processor = AutoProcessor.from_pretrained(
    MODEL_PATH,
    max_pixels=10**7  # cap image size at 10 MP
)
url = "https://github.com/QwenLM/Qwen3-VL/raw/main/cookbooks/assets/computer_use/computer_use1.jpeg"
response = requests.get(url)
response.raise_for_status()  # fail early if the example image could not be downloaded
img = Image.open(BytesIO(response.content))
img_width, img_height = img.size
# Prepare messages
PROMPT = '''
You are an expert UI element locator. Given a GUI image and a user's element description, provide the coordinates of the specified element as a single (x,y) point. The image resolution is height {height} and width {width}. For elements with area, return the center point.
Output the coordinate pair exactly:
(x,y)
'''
PROMPT = PROMPT.strip().format(height=img_height, width=img_width)
messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": PROMPT},
            {
                "type": "image",
                "image": img,
            },
            {"type": "text", "text": "Reload the cache."},
        ],
    }
]
device = next(model.parameters()).device
inputs = processor.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt"
).to(device)
# Inference: generate the grounding prediction
generated_ids = model.generate(**inputs, max_new_tokens=128)
# Strip the prompt tokens so only the newly generated text is decoded
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
# Extract the coordinates from the output text
print(f"Model output: {output_text[0]}")
pred_x, pred_y = extract_coordinates(output_text[0])
# Convert the normalized coordinates to pixels and visualize the prediction
visualize_prediction(img, pred_x, pred_y, img_width, img_height)
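To turn the prediction into an action in an agent loop, map the normalized [0, 1000] coordinates to absolute screen pixels and dispatch a click. The sketch below assumes the desktop is controlled with pyautogui and that the screenshot matches the current screen resolution; swap in your own click backend as needed.
import pyautogui  # assumption: desktop control via pyautogui

screen_width, screen_height = pyautogui.size()
# Map the normalized [0, 1000] prediction to absolute screen pixels
click_x = int(pred_x * screen_width / 1000)
click_y = int(pred_y * screen_height / 1000)
pyautogui.click(click_x, click_y)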