Order Doesn't Matter, But Reasoning Does: Training LLMs with Order-Centric Augmentation
Abstract: Logical reasoning is essential for large language models (LLMs) to ensure accurate and coherent inference. However, LLMs struggle with variations in reasoning order and fail to generalize across logically equivalent transformations, often relying on fixed sequential patterns rather than genuine logical understanding. To address this issue, we introduce an order-centric data augmentation framework based on commutativity in logical reasoning. We first randomly shu...
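The abstract is cut off mid-sentence, but the core idea it names, generating logically equivalent training variants by permuting commutative premises, can be sketched as below. This is a minimal illustration assuming a premise-list representation of each example; the function name shuffle_premises and the data format are hypothetical, not the paper's released code.

    import random

    def shuffle_premises(premises, conclusion, n_variants=3, seed=0):
        """Create order-shuffled variants of one reasoning example.

        Assumes the premises are logically commutative, i.e. any
        permutation preserves the entailed conclusion -- the property
        the paper's order-centric augmentation relies on.
        """
        rng = random.Random(seed)
        variants = []
        for _ in range(n_variants):
            perm = premises[:]          # copy so the original order is kept
            rng.shuffle(perm)           # permute premise order only
            variants.append({"premises": perm, "conclusion": conclusion})
        return variants

    # Hypothetical usage: each variant is a logically equivalent
    # training instance with a different premise order.
    variants = shuffle_premises(
        ["All birds can fly.", "Tweety is a bird."],
        "Tweety can fly.",
    )

Because the conclusion is unchanged across permutations, training on such variants discourages the model from keying on a fixed premise sequence.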