Standards Release · Interface Governance / Work Systems

CLARITY-1.0 Introduces the First Cross-Platform “Cognitive Load Label” for AI Tools

International Digital Interfaces Consortium (IDIC) · Standards Bulletin · January 2026 · Voluntary · Procurement-ready · Disclosure-first

The International Digital Interfaces Consortium (IDIC) has published CLARITY-1.0, a proposed reporting standard that requires AI tools to disclose their expected cognitive load impact on users. Rather than measuring brain activity (thankfully), the standard defines how to report what a system demands of users' attention, verification effort, and decision-making.

Core thesis: Modern AI tools can increase productivity while quietly increasing oversight burden. CLARITY-1.0 makes that tradeoff visible through standardized labels, ranges, and accountability declarations.

What CLARITY-1.0 Measures

CLARITY treats cognitive load as an interface + workflow property. The goal is comparability, not perfect psychological truth.

Dimension | Reported as | Why it matters
Interrupt Density (ID) | events/hour (range) | Context switching erodes deep work
Verification Burden (VB) | review actions/task | Safety requires checking; checking costs attention
Decision Compression (DC) | time window band | Defaults steer choices under time pressure
Accountability Shift (AS) | human/shared/system | Who owns outcomes defines risk
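As a concrete reference point, the sketch below encodes these four dimensions as a small data structure a vendor or buyer might use to exchange disclosures. It is a minimal Python sketch under assumed conventions: the field names, the band values for DC, and the overall banding are illustrative, not formats defined by CLARITY-1.0.

```python
# Hypothetical, minimal encoding of a CLARITY-1.0 disclosure.
# Field names and value sets are illustrative assumptions, not defined by the bulletin.
from dataclasses import dataclass
from typing import Literal, Tuple

@dataclass
class ClarityLabel:
    # Interrupt Density (ID): expected interruptions per hour, reported as a range
    interrupt_density: Tuple[int, int]        # e.g. (6, 11) events/hour
    # Verification Burden (VB): review actions required per task, reported as a range
    verification_burden: Tuple[int, int]      # e.g. (3, 5) review actions/task
    # Decision Compression (DC): how tightly the interface time-boxes choices
    decision_compression: Literal["relaxed", "standard", "compressed"]
    # Accountability Shift (AS): who owns outcomes when the tool is in the loop
    accountability_shift: Literal["human", "shared", "system"]
    # Overall band shown at the top of the label
    overall_load: Literal["low", "moderate", "high"]
```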
Example CLARITY Label (How Tools Would Display It)
CLARITY LABEL
Cognitive Load: Moderate
Based on typical use in long-session workflows. Ranges vary by expertise and task criticality.

Interrupts / hour (ID): 6–11
Review actions / task (VB): 3–5
Decision window (DC): Standard
Accountability shift (AS): Shared

The label is intentionally range-based to discourage fake precision. Vendors must also publish a short method note describing how estimates were derived.
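The label's two stated requirements, range-based reporting and an accompanying method note, lend themselves to a simple automated check that a buyer could run before accepting a disclosure. The Python sketch below is a hypothetical check under those assumptions; validate_disclosure and its rules are illustrative, not part of CLARITY-1.0.

```python
# Hypothetical pre-acceptance check for a CLARITY disclosure.
# Rules mirror the bulletin's intent (ranges, not point estimates; method note required);
# the function name and rule set are assumptions for illustration.
from typing import List, Tuple

def validate_disclosure(
    interrupt_density: Tuple[int, int],      # ID range, events/hour
    verification_burden: Tuple[int, int],    # VB range, review actions/task
    method_note: str,                        # vendor's short note on how estimates were derived
) -> List[str]:
    """Return a list of problems; an empty list means the disclosure passes."""
    problems: List[str] = []

    # Range-based reporting: reject inverted or collapsed ranges that fake precision.
    for name, (low, high) in [
        ("Interrupt Density (ID)", interrupt_density),
        ("Verification Burden (VB)", verification_burden),
    ]:
        if low > high:
            problems.append(f"{name}: range is inverted ({low}-{high})")
        elif low == high:
            problems.append(f"{name}: point estimate reported; a range is required")

    # A short method note must accompany the label.
    if not method_note.strip():
        problems.append("method note is missing")

    return problems

# Checking the example label above (ID 6-11, VB 3-5) with a placeholder method note
print(validate_disclosure((6, 11), (3, 5), "Estimates derived from pilot usage logs."))  # -> []
```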

Why This Standard Emerged Now

CLARITY-1.0 cites a pattern observed across workplace deployments: AI systems often increase throughput but shift hidden work onto users in the form of micro-decisions, review, verification, and monitoring. This “shadow labor” becomes visible only when tools are compared consistently across environments.

“We don’t need mind-reading. We need honest disclosure about how much attention a tool consumes to be used safely.” — Dr. Kaori Sato, IDIC Rapporteur

Early Adoption

According to the bulletin, CLARITY labels are being piloted in three areas:

Limitations & Critiques

“If CLARITY becomes a checkbox, it’ll fail. If it becomes a procurement norm, it’ll change everything.” — Prof. Daniel Reyes, Human-Computer Interaction Lab