AI Trust after the Explorer Era
In 2024, we argued that AI has a reputation problem and that trustworthy AI must produce evidence, support simplicity, remain human-accountable, and be operated with pragmatic skepticism. We placed the trust boundary at the decision and its evidence chain. Two articles published within the last 8 hours, one by an IDC-recognized leader in Intelligent Document Processing (IDP) and the other by the investment firm a16z, place that boundary at different points: within the governed enterprise workflow (the IDP vendor), or at the interface between global AI models and local institutions (a16z). In that sense, the current discussion extends our 2024 talk at ENISA CA Day outward.
a16z:
The first billion AI users were explorers and tech-optimists. But the Explorer era is over. [...] In this era, the only metric that mattered was Model Parity: Who topped the latest benchmark? Who has the most parameters?
All three positions agree that a display of raw model capability is insufficient in the trust discussion, yet each takes a different stance. The a16z article treats AI trust as a distribution problem and locates the bottleneck in local trust networks and sovereignty. The IDP vendor article treats trust as an enterprise execution problem: auditability, document integrity, compliance, traceability. On that view, AI's bottleneck is trustworthy enterprise data and governed execution. Our earlier CA Day position was narrower and more fundamental: AI has to strengthen evidence, reduce ambiguity, and remain accountable to the humans who own the result, especially when the result is backed by Generative AI.
Remedies for the diagnoses
The IDP vendor’s remedy is to push trust upstream. Their argument is that trustworthy AI starts before the model response, with document intelligence, reliable extraction, classification, governance, and data that can stand up to scrutiny in finance, public records, compliance processes, and government workflows. Their vision is operational. If AI is going to touch invoices, records, payments, or agency processes, then trust comes from auditable execution and from systems that can be examined after the fact.
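To make "auditable execution" concrete, here is a minimal sketch of a hash-chained audit log for a document-processing workflow. It is our own illustration of the general idea, not the vendor's product or API; every name in it (AuditLog, record_step, the invoice payload) is hypothetical.

```python
# Minimal sketch of auditable execution: each processing step is appended
# to a hash-chained log, so the workflow can be examined after the fact.
# All names here are hypothetical illustrations, not the IDP vendor's API.
import hashlib
import json
import time


class AuditLog:
    """Append-only log in which each entry commits to its predecessor."""

    def __init__(self) -> None:
        self.entries: list[dict] = []

    def record_step(self, actor: str, action: str, payload: dict) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {
            "timestamp": time.time(),
            "actor": actor,          # which system or person acted
            "action": action,        # e.g. "extract", "classify", "approve"
            "payload": payload,      # inputs/outputs kept for later scrutiny
            "prev_hash": prev_hash,  # chains this entry to the one before it
        }
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(body)
        return body

    def verify_chain(self) -> bool:
        """Recompute every hash; any tampered entry breaks the chain."""
        prev = "genesis"
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != entry["hash"]:
                return False
            prev = entry["hash"]
        return True


log = AuditLog()
log.record_step("ocr-service", "extract", {"doc": "invoice-4711", "total": "1290.00"})
log.record_step("reviewer@example.org", "approve", {"doc": "invoice-4711"})
assert log.verify_chain()
```

The point of the chaining is exactly the "examined after the fact" property: a reviewer, auditor, or regulator can replay the log and detect any retroactive edit without trusting the system that produced it.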
Our own remedy, outlined in the 2024 talk, is close in spirit but stricter in emphasis. We argued for evidence, simplicity, human accountability, human ownership of results, and human checks and verification. We also argued for knowing the model, choosing the right model for the job, and proceeding with pragmatic skepticism. The "road ahead: agentic systems" slide is especially relevant now because it shows that agents do not float above trust services, disconnected from them. They depend on permissions, security, governance, identity verification, secure communications, and signatures. That is a useful corrective to current agent rhetoric: OpenClaw's security debacles showed how quickly an "agent that can act" becomes a liability when agents backed by weak, unaligned LLMs run unrestrained and unchecked.
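As a thumbnail of that slide's point, the sketch below routes an agent's action through a permission check and signs the resulting record. It is an assumption-laden illustration, not OpenClaw's or any vendor's design; the names (PERMISSIONS, perform_action, the demo signing key) are ours, and a real deployment would use managed keys and real identity verification rather than a hard-coded secret.

```python
# Minimal sketch of an agent action gated by trust services: an identity
# and permission check before acting, and an HMAC signature over the
# result so the action stays attributable to a verifiable actor.
# All names are hypothetical illustrations, not a real agent framework.
import hashlib
import hmac

# Assumed permission table: which agent identity may perform which action.
PERMISSIONS = {
    "invoice-agent": {"read_invoice", "draft_payment"},
}

# In practice this key would live in a KMS, not in source code.
SIGNING_KEY = b"demo-key-for-illustration-only"


def perform_action(agent_id: str, action: str, payload: str) -> dict:
    # Permission check first: an agent with no explicit grant cannot act.
    allowed = PERMISSIONS.get(agent_id, set())
    if action not in allowed:
        raise PermissionError(f"{agent_id} is not permitted to {action}")

    record = f"{agent_id}|{action}|{payload}"
    # Sign the action record so it remains attributable after the fact.
    signature = hmac.new(SIGNING_KEY, record.encode(), hashlib.sha256).hexdigest()
    return {"record": record, "signature": signature}


def verify_action(result: dict) -> bool:
    expected = hmac.new(
        SIGNING_KEY, result["record"].encode(), hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(expected, result["signature"])


receipt = perform_action("invoice-agent", "draft_payment", "invoice-4711:1290.00")
assert verify_action(receipt)

# An unpermitted action fails before anything happens:
try:
    perform_action("invoice-agent", "delete_records", "all")
except PermissionError as exc:
    print(exc)
```

The design choice mirrors the slide: the agent's ability to act is bounded by externally managed permissions, and its output carries a verifiable signature, so accountability stays with identifiable humans and services rather than with the model.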
The a16z piece offers a different remedy. If AI adoption does not naturally benefit from strong social network effects, then it has to scale through trust effects. That means it reaches users through Identity Anchors, Trust Brokers, and Trust Translators: the people and institutions that already hold legitimacy in a local context. On that view, the real bottleneck is not just product quality, but Trust Latency: the time, risk, and uncertainty required before people are willing to rely on a system. The remedy is not to eliminate that latency with branding alone, but to route intelligence through actors who are already absorbing and amortizing it on behalf of others.
Conclusion
Seen together, the three remedies are complementary. a16z explains how AI gets invited in. The IDP vendor article explains how AI behaves once admitted into an enterprise workflow. Our earlier CA Day position explains what must remain true if the output is to matter in a trust service at all: the system must help users act with confidence, but also leave behind sufficient evidence, clear accountability, and room for human verification.
