I’m excited to share that I’m co-organizing HEAL@CHI’26 — the third edition of our workshop on Human-Centered Evaluation and Auditing of Language Models, taking place at CHI 2026 in Barcelona.
From participant to organizer
After participating in and contributing to the first two iterations of HEAL at CHI 2024 and CHI 2025, I’m now part of the organizing team. It’s been a privilege to watch this community grow around one of the most pressing questions in our field: how do we evaluate language models in ways that actually center human needs?
This year’s theme: AI Agents-in-the-Loop
The theme for this year’s workshop reflects a shift we’re all witnessing — practitioners increasingly turning to AI agents as evaluation partners. This creates fascinating tensions:
- How do we maintain human-centered approaches while leveraging agent capabilities for scale and efficiency?
- Where should human judgment end and agent automation begin?
- How do we evaluate the evaluators — what does meta-evaluation of AI auditing agents look like?
- What safeguards preserve human agency while benefiting from automation?
These aren’t just theoretical questions. As AI agents become more capable of multi-step reasoning and tool use, the line between human and automated evaluation is blurring in ways that demand careful HCI thinking.
Key details
- Workshop dates: April 13–17, 2026
- Location: Barcelona, Spain (in-person at CHI 2026)
- Submission deadline: February 19, 2026 (AoE), extended from February 11, 2026
- Notification: February 25, 2026
- Format: 2–6 page papers (ACM double-column)
We welcome position papers, empirical studies, literature reviews, system demos, and encore submissions of published work.
Submit or get involved
If you’re working on LLM evaluation, auditing, or anything at the intersection of human-centered design and AI assessment, we’d love to see your work.
→ Submit on OpenReview | → Workshop website
See you in Barcelona! 🇪🇸