
Medical question answering (QA) benchmarks often focus on multiple-choice or fact-based tasks, leaving open-ended answers to real patient questions underexplored. This gap is particularly critical in mental health, where patient questions often mix symptoms, treatment concerns, and emotional needs, requiring answers that balance clinical caution with contextual sensitivity. We present CounselBench, a large-scale benchmark developed with 100 mental health professionals to evaluate and stress-test large language models (LLMs) in realistic help-seeking scenarios.
The first component, CounselBench-EVAL, contains 2,000 expert evaluations of answers from GPT-4, LLaMA 3, Gemini, and human therapists on patient questions from the public forum CounselChat. Each answer is rated across six clinically grounded dimensions, with span-level annotations and written rationales. Expert evaluations show that while LLMs achieve high scores on several dimensions, they also exhibit recurring issues, including unconstructive feedback, overgeneralization, and limited personalization or relevance. Responses are frequently flagged for safety risks, most notably unauthorized medical advice. Follow-up experiments show that LLM judges systematically overrate model responses and overlook safety concerns identified by human experts.
To probe failure modes more directly, we construct CounselBench-Adv, an adversarial dataset of 120 expert-authored mental health questions designed to trigger specific model issues. Evaluation of 3,240 responses from nine LLMs reveals consistent, model-specific failure patterns.
Together, these two components establish CounselBench as a clinically grounded framework for benchmarking LLMs in mental health QA.
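As a rough illustration of how CounselBench-EVAL records might be aggregated, the sketch below assumes a JSONL export with hypothetical field names and placeholder dimension names (not the official schema or file names); it averages each rating dimension per answer source (e.g., GPT-4, LLaMA 3, Gemini, human therapist).

```python
# Minimal sketch, assuming a hypothetical JSONL export of CounselBench-EVAL:
# one expert evaluation per line, each with the answer's source and
# per-dimension ratings. Field and dimension names below are placeholders.
import json
from collections import defaultdict

DIMENSIONS = [  # six clinically grounded dimensions (placeholder names)
    "overall_quality", "empathy", "specificity",
    "safety", "relevance", "professionalism",
]

def mean_scores_by_source(path: str) -> dict:
    """Average each rating dimension separately for every answer source."""
    sums = defaultdict(lambda: defaultdict(float))
    counts = defaultdict(int)
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)          # one expert evaluation per line
            source = record["answer_source"]   # hypothetical field
            counts[source] += 1
            for dim in DIMENSIONS:
                sums[source][dim] += record["ratings"][dim]  # hypothetical field
    return {
        source: {dim: sums[source][dim] / counts[source] for dim in DIMENSIONS}
        for source in sums
    }

if __name__ == "__main__":
    # "counselbench_eval.jsonl" is a placeholder path, not an official file name.
    print(mean_scores_by_source("counselbench_eval.jsonl"))
```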
@misc{li2025counselbenchlargescaleexpertevaluation,
      title={CounselBench: A Large-Scale Expert Evaluation and Adversarial Benchmarking of Large Language Models in Mental Health Question Answering},
      author={Yahan Li and Jifan Yao and John Bosco S. Bunyi and Adam C. Frank and Angel Hwang and Ruishan Liu},
      year={2025},
      eprint={2506.08584},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2506.08584},
}
If you use CounselBench data, we kindly ask that you cite both the original CounselChat dataset (the source of all questions and human responses) and CounselBench (our 2,000 expert evaluations and 120 adversarial questions):
@misc{bertagnolli2020counsel,
      title={Counsel chat: Bootstrapping high-quality therapy data},
      author={Bertagnolli, Nicolas},
      year={2020},
      publisher={Towards Data Science. https://towardsdatascience.com/counsel-chat…}
}