The International Digital Interfaces Consortium (IDIC) has published CLARITY-1.0, a proposed reporting standard that requires AI tools to disclose their expected cognitive load impact on users — not by measuring brain activity (thankfully), but by standardizing what systems demand from attention, verification, and decision-making.
CLARITY treats cognitive load as an interface and workflow property. The goal is comparability across tools, not perfect psychological truth.
| Dimension | Reported as | Why it matters |
|---|---|---|
| Interrupt Density (ID) | events/hour (range) | Context switching erodes deep work |
| Verification Burden (VB) | review actions/task | Safety requires checking; checking costs attention |
| Decision Compression (DC) | time window band | Defaults steer choices under time pressure |
| Accountability Shift (AS) | human/shared/system | Who owns outcomes defines risk |
The label is intentionally range-based to discourage false precision. Vendors must also publish a short method note describing how each estimate was derived.
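The four dimensions and the method-note requirement could be captured in a small data model. The sketch below is a hypothetical encoding, not anything CLARITY-1.0 itself specifies: every name (`ClarityLabel`, `validate`, the field names) is an assumption, and the ranges simply mirror the table's "reported as" column.

```python
from dataclasses import dataclass
from enum import Enum

class AccountabilityShift(Enum):
    """The three bands of the AS dimension (human / shared / system)."""
    HUMAN = "human"
    SHARED = "shared"
    SYSTEM = "system"

@dataclass
class ClarityLabel:
    """Hypothetical CLARITY-1.0 label; field names are illustrative, not normative."""
    interrupt_density: tuple[float, float]    # ID: events/hour, as a (low, high) range
    verification_burden: tuple[int, int]      # VB: review actions/task, as a range
    decision_compression: str                 # DC: time-window band, e.g. "30-120s"
    accountability_shift: AccountabilityShift # AS: who owns outcomes
    method_note: str                          # required: how the estimates were derived

    def validate(self) -> None:
        # Ranges must be ordered and non-negative; the method note is mandatory.
        for name in ("interrupt_density", "verification_burden"):
            lo, hi = getattr(self, name)
            if lo < 0 or lo > hi:
                raise ValueError(f"{name} must be a non-negative (low, high) range")
        if not self.method_note.strip():
            raise ValueError("CLARITY requires a method note")

# Example label for a hypothetical review-assistant tool.
label = ClarityLabel(
    interrupt_density=(4.0, 9.0),
    verification_burden=(2, 5),
    decision_compression="30-120s",
    accountability_shift=AccountabilityShift.SHARED,
    method_note="Ranges estimated from a two-week pilot log across 40 users.",
)
label.validate()
```

Making `validate` reject an empty method note mirrors the standard's pairing of every number with a disclosure of how it was produced.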
CLARITY-1.0 cites a pattern observed across workplace deployments: AI systems often increase throughput but shift hidden work onto users in the form of micro-decisions, review, verification, and monitoring. This "shadow labor" becomes visible only when tools are compared consistently across environments.
“We don’t need mind-reading. We need honest disclosure about how much attention a tool consumes to be used safely.” — Dr. Kaori Sato, IDIC Rapporteur
According to the bulletin, CLARITY labels are being piloted in three areas:
“If CLARITY becomes a checkbox, it’ll fail. If it becomes a procurement norm, it’ll change everything.” — Prof. Daniel Reyes, Human-Computer Interaction Lab