What data, and how much of it, was used to train the model?
Hi. I'm very interested in the question in the title. How much data, in tokens, was the model trained on? And does it have enough understanding of, for example, psychology?
Did you even read the description?
"...built for elite performance in coding and agentic tasks, all while maintaining powerful general intelligence."
The API is free, and so is their pro agent on their website, so try it out and see for yourself.
That said, I believe models with more parameters, like ling/ring-1t, could be much better at understanding things. This model is meant to understand what it has to execute and how, and to do it fast.
Hope that helps.
I was disappointed with its knowledge, which is why I came here. And yes, to clarify, I have read the whole text.
Yes, it's natural to be disappointed with its general knowledge, but for coding work it does a pretty good job.
Yeah, I know. So, again, I'm curious about its knowledge, which is why I'm asking about its dataset.