---
license: mit
task_categories:
- question-answering
- text-classification
language:
- en
arxiv: 2501.14851
---
# JustLogic: A Comprehensive Benchmark for Evaluating Deductive Reasoning in Large Language Models
JustLogic is a deductive reasoning dataset that is
- highly complex, capable of generating a diverse range of linguistic patterns, vocabulary, and argument structures;
- prior knowledge independent, eliminating the advantage of models possessing prior knowledge and ensuring that only deductive reasoning is used to answer questions; and
- structured to enable in-depth error analysis of the heterogeneous effects of reasoning depth and argument form on model accuracy.
## Dataset Format
- `premises`: List of premises in the question, in the form of a Python list.
- `paragraph`: A paragraph composed of the above premises. This is given as input to models.
- `conclusion`: The expected conclusion of the given premises.
- `question`: The statement whose truth-value models must determine.
- `label`: `True` | `False` | `Uncertain`
- `arg`: The argument structure.
- `statements`: Maps the symbols in `arg` to their corresponding natural language statements.
- `depth`: The argument depth of the given question.
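A minimal sketch of loading the dataset and inspecting these fields with the `datasets` library. The repository id and split name below are placeholders, not confirmed by this card; substitute the actual Hub path and split for this dataset.

```python
from datasets import load_dataset

# Placeholder repo id and split -- replace with the real ones for JustLogic.
ds = load_dataset("your-namespace/JustLogic", split="test")

example = ds[0]
print(example["paragraph"])   # natural-language premises given as model input
print(example["question"])    # statement whose truth-value must be determined
print(example["label"])       # one of "True", "False", "Uncertain"
print(example["arg"])         # symbolic argument structure
print(example["statements"])  # mapping from symbols in `arg` to statements
print(example["depth"])       # argument depth of the question
```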
## Dataset Construction
JustLogic is a synthetically generated dataset. The script to construct your own dataset can be found in the GitHub repo.
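To illustrate the general idea of synthetic generation (this is not the repo's actual script, just a toy sketch with invented templates), a depth-1 modus ponens question can be instantiated from natural-language fragments like so:

```python
import random

# Toy pool of natural-language statements; the real dataset draws on a far
# richer range of vocabulary and argument structures.
STATEMENTS = ["the committee approves the plan", "the river floods", "the alarm rings"]

def make_modus_ponens():
    """Instantiate one depth-1 modus ponens example in the JustLogic field schema."""
    p, q = random.sample(STATEMENTS, 2)
    premises = [f"If {p}, then {q}.", f"{p.capitalize()}."]
    return {
        "premises": premises,
        "paragraph": " ".join(premises),
        "conclusion": f"{q.capitalize()}.",
        "question": f"{q.capitalize()}.",
        "label": "True",
        "arg": "P -> Q; P; therefore Q",  # symbolic argument structure
        "statements": {"P": p, "Q": q},   # symbol-to-statement mapping
        "depth": 1,
    }

print(make_modus_ponens())
```

Deeper questions are produced analogously by chaining argument forms, so the conclusion of one step becomes a premise of the next.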
## Citation
```bibtex
@article{chen2025justlogic,
  title={JustLogic: A Comprehensive Benchmark for Evaluating Deductive Reasoning in Large Language Models},
  author={Chen, Michael K and Zhang, Xikun and Tao, Dacheng},
  journal={arXiv preprint arXiv:2501.14851},
  year={2025}
}
```