How about running it with llama.cpp?

#1 by rosspanda0

It would be better for the Qwen team to develop their own adaptation of llama.cpp for VL models and then merge the code into the main branch.
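For reference, llama.cpp already runs other vision-language models through its multimodal example programs, so a merged adaptation would likely be used the same way. Below is a rough sketch of what such an invocation could look like once GGUF weights and a vision projector exist for this model; the binary name follows recent llama.cpp builds, and the model/projector file names are placeholders, not actual release artifacts.

```python
# Sketch: calling llama.cpp's multimodal example CLI from Python.
# Assumes a llama.cpp build that ships a multimodal binary
# (recent builds name it llama-mtmd-cli; older ones used llama-llava-cli)
# and that GGUF weights plus a vision projector (--mmproj) are available.
# File names below are placeholders, not real release artifacts.
import subprocess

cmd = [
    "./llama-mtmd-cli",                 # llama.cpp multimodal example binary
    "-m", "qwen-vl-q4_k_m.gguf",        # placeholder: quantized language model
    "--mmproj", "qwen-vl-mmproj.gguf",  # placeholder: vision projector weights
    "--image", "test.jpg",              # image to describe
    "-p", "Describe this image.",       # text prompt
]

result = subprocess.run(cmd, capture_output=True, text=True, check=True)
print(result.stdout)
```

Flag names here mirror llama.cpp's existing multimodal examples; check them against the build you actually use.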
