Leveraging Positional Bias of LLM In-Context Learning with Class-Few-Shot and Maj-Min Alternating Ordering
Citation: Szcześny, A. (2025). Leveraging positional bias of LLM in-context learning with class-few-shot and Maj-Min alternating ordering. Computational Science – ICCS 2025.
Local files: Vault Taiwan_Framework; markdown szczesnyLeveragingPositionalBias2025.md.
Core claim: Class-balanced few-shot examples combined with Maj-Min alternating ordering outperform standard random few-shot selection, yielding a gain of roughly five percentage points in minority-class F1.
- Mechanism: Two separable effects: (1) class balance alone (random class-few-shot) beats standard few-shot; (2) ordering adds a further gain on top of balance.
- Positional bias: Recency bias dominates overall accuracy, so Maj-Min Alt (majority first, minority last) works best globally.
- For minority-class recall specifically, Maj-Min Seq (minority examples clustered near the end of the prompt) outperforms alternating; the recency window matters more when the task is detecting a rare category (both orderings are sketched below).
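
A minimal sketch of the selection-plus-ordering pipeline, assuming binary classification with one majority and one minority class; the helper names (`sample_balanced`, `order_maj_min`, `build_prompt`) and the prompt format are illustrative, not taken from the paper:

```python
import random

def sample_balanced(pool, k_per_class, seed=0):
    """Class-few-shot: draw the same number of demonstrations from
    each class. `pool` maps label -> list of (text, label) pairs."""
    rng = random.Random(seed)
    return {label: rng.sample(examples, k_per_class)
            for label, examples in pool.items()}

def order_maj_min(majority, minority, mode="alt"):
    """Order class-balanced demonstrations for the prompt.

    mode="alt": Maj-Min alternating -- maj, min, maj, min, ...
                (majority first, minority last; best overall
                accuracy via recency bias, per the notes above).
    mode="seq": Maj-Min sequential -- all majority demos, then all
                minority demos clustered at the end of the prompt
                (best minority-class recall, per the notes above).
    """
    if mode == "alt":
        return [ex for pair in zip(majority, minority) for ex in pair]
    if mode == "seq":
        return majority + minority
    raise ValueError(f"unknown mode: {mode!r}")

def build_prompt(demos, query):
    """Render demonstrations plus the query as a plain
    text-classification prompt (format is illustrative only)."""
    blocks = [f"Text: {text}\nLabel: {label}" for text, label in demos]
    blocks.append(f"Text: {query}\nLabel:")
    return "\n\n".join(blocks)

# Hypothetical imbalanced pool: "common" is majority, "rare" minority.
pool = {
    "common": [(f"routine ticket {i}", "common") for i in range(10)],
    "rare":   [(f"fraud report {i}", "rare") for i in range(4)],
}
picked = sample_balanced(pool, k_per_class=2)
demos = order_maj_min(picked["common"], picked["rare"], mode="alt")
print(build_prompt(demos, "new ticket text"))
```

With two demonstrations per class, `mode="alt"` yields maj, min, maj, min and `mode="seq"` yields maj, maj, min, min; both end on minority examples, the recency position the notes credit for the minority-class gains.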