When Models Lie, We Learn: Multilingual Span-Level Hallucination Detection with PsiloQA • Paper • 2510.04849 • Published Oct 2025
<think> So let's replace this phrase with insult... </think> Lessons learned from generation of toxic texts with LLMs • Paper • 2509.08358 • Published Sep 10, 2025
Will It Still Be True Tomorrow? Multilingual Evergreen Question Classification to Improve Trustworthy QA • Paper • 2505.21115 • Published May 27, 2025
Through the Looking Glass: Common Sense Consistency Evaluation of Weird Images • Paper • 2505.07704 • Published May 12, 2025
Knowledge Packing • Collection • Models and datasets from the paper "How Much Knowledge Can You Pack into a LoRA Adapter without Harming LLM?" (https://arxiv.org/abs/2502.14502) • 9 items • Updated Feb 25, 2025
How Much Knowledge Can You Pack into a LoRA Adapter without Harming LLM? • Paper • 2502.14502 • Published Feb 20, 2025
SynthDetoxM: Modern LLMs are Few-Shot Parallel Detoxification Data Annotators • Paper • 2502.06394 • Published Feb 10, 2025