Don't make traditional software. Build AI-native systems.
Despite the AI fever sweeping the industry, here's the paradox: users don't want to know anything about AI. They don't care about your agents, your RAG pipelines, or your fancy models. They want their problems solved. Period.
This brings us to a fundamental shift: UX is no longer about screens. Welcome to AX—Agentic Experience.
Just because your code works doesn't mean it's working. The new battleground? Context engineering and prompt design. The challenge? Building systems where uncertainty isn't a bug—it's a feature you design for.
The Paradigm Shift
We now need to:
- Design data flows, not just interfaces
- Anticipate AI errors and plan for graceful failures
- Build APIs with usability in mind (yes, APIs now need UX thinking)
- Consider AI trust and explainability from day one
Designers are becoming architects of invisible experiences, creating systems where humans and AI seamlessly collaborate without the human even noticing there's an AI involved.
Key Principles for Agentic UX
Here's what matters most:
1. Understand AI capabilities + limitations — Push to the bleeding edge of model capability. As Usama Bin Shafqat from the NotebookLM team puts it: "I feel like consistently, the most magical moments out of AI building come when I'm really, really, really just close to the edge of the model capability."
2. Handle edge cases and AI mistakes gracefully — Engineer for reliability, not just accuracy
3. Advocate for discoverable, scalable APIs — In a world of agentic experiences, your API is your interface
4. Prioritize data clarity and structure — Garbage in, garbage out applies 10x more with AI
5. Measure success across both human and AI interactions — Traditional metrics won't cut it anymore
Dynamic Interfaces, In Every Sense
AI enables interfaces that are truly dynamic—both vertically and horizontally. We're talking:
- Multimodal interactions across external devices
- Asynchronous experiences where the work happens in the background
- Human-in-the-loop task queues that meet users where they are
- Context-aware triggers that don't force users to navigate through tabs
The key insight? It's not about how long something takes; it's about how the wait feels. When the work happens in the background, duration matters less than perception.
Sounds great, right? But here's the catch: low adoption. You have to build with the user in mind, not with the AI in mind.
The Hidden Metric That Determines Success
AI product success depends on CAIR (Confidence in AI Results), not just technical accuracy. Instead of only asking "Is the AI accurate enough?", you also need to ask "Is CAIR high enough for adoption?"
CAIR = Value ÷ (Risk × Correction)
Components:
- Value: The benefit when AI works correctly
- Risk: The consequences of errors
- Correction: The effort required to fix mistakes
Design > Technical Precision: An AI with 85% accuracy + high CAIR beats an AI with 95% accuracy + low CAIR. Every. Single. Time.
Adoption is blocked by fear, not capability. Users need low perceived risk and high confidence.
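The CAIR formula above can be sketched in a few lines. This is a minimal illustration: the 1-10 scoring scale and the two example products are invented for the comparison, not taken from the source.

```python
# CAIR = Value / (Risk * Correction), as defined in the text.
# The 1-10 scales and example scores below are illustrative assumptions.

def cair(value: float, risk: float, correction: float) -> float:
    """Confidence in AI Results: the benefit when the AI works,
    divided by the cost of errors times the effort to fix them."""
    return value / (risk * correction)

# A highly accurate agent whose mistakes are irreversible and painful to fix...
precise_but_scary = cair(value=8, risk=9, correction=7)

# ...versus a less polished one with previews and one-click undo.
decent_with_undo = cair(value=8, risk=2, correction=1)

assert decent_with_undo > precise_but_scary
```

Note that the denominator is multiplicative: halving either risk or correction effort doubles CAIR, which is why design levers like undo often beat another point of model accuracy.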
Five Principles to Optimize CAIR
1. Strategic Human-in-the-Loop
Don't add human oversight everywhere (it kills value). Place it at key decision points instead.
Requiring approval for every suggestion destroys productivity. Requiring it before irreversible actions maintains both safety and utility.
The art is identifying where human oversight optimizes CAIR with minimal value dilution.
Practical application:
- ✅ Approval before irreversible actions
- ✅ Batch approval for similar actions
- ❌ Approval for every single AI suggestion
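The checklist above can be sketched as a routing function: only irreversible actions pause for approval, and similar pending actions are batched into a single prompt. The `Action` shape and the grouping-by-kind rule are illustrative assumptions.

```python
# Hypothetical sketch of strategic human-in-the-loop gating.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str          # e.g. "draft_email", "delete_records"
    irreversible: bool

def route(actions: list[Action]) -> dict:
    """Split actions into auto-executed ones and batched approval requests."""
    auto = [a for a in actions if not a.irreversible]
    # Batch irreversible actions by kind so the user approves once per
    # group, not once per suggestion.
    batches: dict[str, list[Action]] = {}
    for a in actions:
        if a.irreversible:
            batches.setdefault(a.kind, []).append(a)
    return {"auto_execute": auto, "needs_approval": batches}

plan = route([
    Action("draft_email", False),
    Action("delete_records", True),
    Action("delete_records", True),
])
assert len(plan["auto_execute"]) == 1
assert len(plan["needs_approval"]["delete_records"]) == 2  # one prompt covers both
```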
2. Reversibility
Objective: Reduce correction effort.
When users know they can easily undo an AI action, correction effort plummets. The psychological safety of a clear "escape route" transforms anxiety into confidence.
Adoption rates double simply by adding prominent undo capabilities.
Practical application:
- Prominent, visible undo button
- Reversible change history
- Clear rollback mechanisms
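A minimal undo-history sketch shows how little machinery the "escape route" requires. The class and its snapshot strategy are assumptions for illustration; real systems might store inverse operations instead of full snapshots.

```python
# Every AI edit is recorded with a snapshot so rollback is one step.

class ReversibleDocument:
    def __init__(self, text: str):
        self.text = text
        self._history: list[str] = []    # snapshots enable rollback

    def apply_ai_edit(self, new_text: str) -> None:
        self._history.append(self.text)  # save the escape route first
        self.text = new_text

    def undo(self) -> None:
        if self._history:
            self.text = self._history.pop()

doc = ReversibleDocument("Quarterly report, v1")
doc.apply_ai_edit("Quarterly report, rewritten by AI")
doc.undo()  # prominent, one-step rollback
assert doc.text == "Quarterly report, v1"
```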
3. Consequence Isolation
Eliminate risk during experimentation by creating safe spaces for AI exploration through sandboxes, previews, and draft modes. This separates the mental models of "trying" vs "deploying", effectively eliminating fear of consequences during exploration.
Sandbox environments consistently show 3-4x higher adoption rates.
Practical application:
- Sandbox mode for experimentation
- Preview interface before applying changes
- Clear separation between test and production environments
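A "preview before apply" flow can be sketched by running the same change against a sandbox copy by default, so trying and deploying stay separate mental models. The function name and state shape are invented for this example.

```python
# An AI-proposed change runs against a deep copy unless explicitly deployed.
import copy

def apply_change(state: dict, change: dict, *, sandbox: bool = True) -> dict:
    """Apply a proposed change; in sandbox mode production is untouched."""
    target = copy.deepcopy(state) if sandbox else state
    target.update(change)
    return target

production = {"pricing_tier": "standard"}
preview = apply_change(production, {"pricing_tier": "premium"})  # sandbox by default
assert production["pricing_tier"] == "standard"  # no consequences while exploring
assert preview["pricing_tier"] == "premium"      # user sees the result first
```

Making sandbox the default (and deployment the explicit opt-in) is the design choice that eliminates fear during exploration.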
4. Transparency
Reduce perceived risk. When users understand why AI made a decision, they can better evaluate its reliability (reducing perceived Risk) and identify specific issues to fix (reducing Correction effort).
Explanation features dramatically increase repeat usage because users can correct specific wrong assumptions instead of completely discarding AI outputs.
Practical application:
- Show AI reasoning
- Visible confidence scores
- Display data sources used
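The three items above amount to an output envelope: every AI result ships with its reasoning, a confidence score, and the sources used. The field names and example values are assumptions, not a prescribed schema.

```python
# Illustrative envelope so users can challenge a specific assumption
# instead of discarding the whole output.
from dataclasses import dataclass, field

@dataclass
class ExplainedResult:
    answer: str
    reasoning: str                  # why the AI decided this
    confidence: float               # surfaced to the user, 0-1
    sources: list[str] = field(default_factory=list)

result = ExplainedResult(
    answer="Renewal risk: high",
    reasoning="Usage dropped 40% over the last two billing cycles.",
    confidence=0.72,
    sources=["usage_events", "billing_history"],
)
assert 0.0 <= result.confidence <= 1.0
```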
5. Control Gradients
Allow users to calibrate CAIR to their personal comfort level. Start with low-risk features and progressively offer higher-value capabilities as trust builds.
This recognizes that everyone has different risk tolerance and creates a natural progression path.
Practical application:
- Progressive disclosure of functionalities
- User-configurable risk settings
- Feature unlocking based on built confidence
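A control gradient can be sketched as a ladder of capabilities that unlock as observed trust grows. The thresholds, feature names, and the 0-1 trust score are invented for illustration.

```python
# Capabilities unlock progressively as the user's trust score rises.

FEATURE_LADDER = [
    (0.0, "suggest_only"),          # low risk: AI proposes, user applies
    (0.5, "auto_apply_with_undo"),
    (0.8, "autonomous_batch"),      # high value, offered once trust is earned
]

def unlocked_features(trust_score: float) -> list[str]:
    """Return every capability at or below the user's trust level (0-1)."""
    return [name for threshold, name in FEATURE_LADDER if trust_score >= threshold]

assert unlocked_features(0.2) == ["suggest_only"]
assert unlocked_features(0.9) == [
    "suggest_only", "auto_apply_with_undo", "autonomous_batch",
]
```

Whether trust is inferred from usage or set explicitly by the user, the progression path itself is what creates the natural on-ramp.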
The D-Curve: Adaptive UX
Build UX/UI based on the user journey so power users can extract full value, but new users aren't overwhelmed.
Hide functionality so that new feature releases (e.g., Canvas Components) don't clutter or degrade the UX, but pair this with shortcuts that let users extract maximum value at maximum speed (e.g., one-tap intelligence).
Think of it as a learning curve designed in reverse—the interface adapts to the user's expertise level.
The Adoption Trifecta
Explainability
Users need traceability of AI-generated information because, generally, they distrust AI results. You need to explain how you arrived at that result. Without this, adoption stalls.
Configurability
To improve AI adoption, you need to give users a sense of control. This means:
- Allowing certain functionalities to be done manually (e.g., Pulse questions, Content focus)
- Providing configuration options that users can adjust
As users gain confidence in AI, you can progressively reduce configuration options. But start with more control, not less.
Editability
Users need to feel they can modify AI outputs. A mixed interface—balancing chat and dashboard elements—works particularly well in B2B contexts. The chat acts as a central unifying element, eliminating the need to navigate through multiple tabs.
The Autonomy Spectrum
Autonomy by definition leads to efficiency. The greater the degree of autonomy, the better the business case—but the harder the adoption. The key is knowing how and when to apply different levels.
Levels of Autonomy:
- Level 1 → Processing workflows (2+ seconds) where users can choose to wait
- Level 2 → User-initiated autonomy on pre-defined tasks (intentional autopilot mode)
- Level 3 → Agentic loops that iterate until the right response is produced
- Level 4 → Smart triggers that determine when autonomy is needed
- Level 5 → Continuous autonomous triggering
Each level requires different design considerations for trust and control.
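The spectrum can be encoded as data so a product routes each task to an appropriate level. The enum mirrors the list above; the trust mapping is an invented example of a per-level design consideration.

```python
# The five autonomy levels as an ordered enum, plus an illustrative
# rule for how much earned trust each level demands.
from enum import IntEnum

class Autonomy(IntEnum):
    WAITABLE_WORKFLOW = 1      # user can choose to wait (2+ seconds)
    INTENTIONAL_AUTOPILOT = 2  # user opts in for pre-defined tasks
    AGENTIC_LOOP = 3           # iterate until the right response
    SMART_TRIGGER = 4          # system decides when autonomy is needed
    CONTINUOUS = 5             # continuously autonomous triggering

def required_trust(level: Autonomy) -> str:
    """Higher autonomy should demand more earned trust before it is offered."""
    return {1: "low", 2: "medium", 3: "medium", 4: "high", 5: "high"}[level]

assert required_trust(Autonomy.CONTINUOUS) == "high"
```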
From Monitoring to Acting
AI enables extreme personalization through dynamic interfaces. We shift from deterministic interfaces (where we control what data is presented) to non-deterministic interfaces (where we're prescriptive about what data we give the AI and its objectives, then let it intelligently decide what to present).
The data we present must lead to productivity and, primarily, to action.
Personalization allows us to present the most relevant information users need to make decisions or take actions. Data monitoring becomes residual. The interface objective shifts from SEE to DO.
The future isn't about building better software with AI sprinkled on top. It's about building AI-native systems where the AI is invisible, the experience is seamless, and users feel more empowered than ever—even if they never know an AI is helping them.
That's the art of agentic experience design.