The Confirmation Pattern: AI That Asks Before Acting
We could make AI that acts without asking. We chose not to. Here is why the extra click matters.
TL;DR
Air previews every action and waits for your explicit approval before executing it. This post explains why we chose that confirmation pattern over autonomous execution, what the tradeoffs are, and why the extra click is worth it.
There is a growing temptation in AI product development to make assistants more autonomous. Let the AI handle things without constant human oversight. Trust the model to make the right decisions. Reduce friction by eliminating confirmation steps. We understand the appeal of this approach, and we deliberately chose not to take it.
Air uses what we call the confirmation pattern: before any action is taken on your behalf, you see exactly what is about to happen and explicitly approve it. This adds a step to every interaction, and we believe that step is essential.
The Case for Autonomous AI
Proponents of autonomous AI make reasonable arguments. Every confirmation step adds friction to the user experience. If the AI is accurate 99 percent of the time, requiring confirmation for every action means you are clicking through unnecessary approvals 99 times out of 100. That friction adds up and makes the assistant feel slow and cumbersome.
There is also an argument about trust. If you never let the AI act independently, you never learn to trust it. The relationship stays at the level of a tool that requires constant supervision rather than an assistant that can handle things on its own.
We have considered these arguments carefully. We still believe confirmation is the right default for AI actions, especially at this stage of AI development.
Why Autonomous AI Is Not Ready
Even highly accurate AI systems make mistakes. When those mistakes involve actions that affect your real life, the consequences can be significant.
Consider what happens if an AI sends a message to the wrong person. Maybe it interpreted "text Alex" as Alex Chen when you meant Alex Thompson. The message goes to someone who should not have received it. You might not even notice the error until the wrong Alex responds with confusion.
Or consider file operations. The AI moves what it thinks are old downloads to the trash, but one of those files was an important document you downloaded yesterday. You discover the mistake when you need the file and cannot find it.
Or calendar events. The AI schedules a meeting based on its interpretation of your request, but when you said "two" you meant 2pm the day after tomorrow, and it booked 2pm today.
Each of these mistakes has consequences. Embarrassment from messages sent to the wrong person. Lost work from deleted files. Scheduling conflicts from misplaced calendar events. And critically, many of these actions are difficult or impossible to undo.
The Confirmation Pattern in Practice
Air's confirmation pattern is simple: when you request an action, Air shows you a preview of exactly what will happen before it happens.
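To make the shape of this flow concrete, here is a minimal sketch in Swift. The names (`PendingAction`, `confirmAndRun`) and the structure are our illustration, not Air's actual implementation; the point it captures is that execution is only reachable through an explicit user decision on a rendered preview.

```swift
// A minimal sketch of the confirmation flow. Illustrative only, not Air's API.

protocol PendingAction {
    /// A human-readable description of exactly what will happen.
    var preview: String { get }
    /// Performs the real-world side effect. Called only after approval.
    func execute() throws
}

enum UserDecision {
    case approve
    case edit(any PendingAction)  // the user revised the action before approving
    case cancel
}

func confirmAndRun(_ action: any PendingAction,
                   ask: (String) -> UserDecision) throws {
    switch ask(action.preview) {
    case .approve:
        try action.execute()                  // the one tap
    case .edit(let revised):
        try confirmAndRun(revised, ask: ask)  // edited actions are previewed again
    case .cancel:
        break                                 // nothing happens at all
    }
}
```

One structural detail worth noting: an edited action goes back through the preview step rather than executing directly, so what you approve is always exactly what runs.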
For messages, you see the recipient, the message text, and the conversation thread. You can verify that Air found the right contact and composed the right message. One tap sends the message. Or you can edit the text, change the recipient, or cancel entirely.
For calendar events, you see the title, date, time, location, and attendees. You can verify that Air interpreted your natural language correctly. "Next Tuesday at 2" resolved to the date you intended. The meeting duration is what you expected. The attendees are the right people.
For file operations, you see a list of exactly which files will be affected and what will happen to them. Move operations show the source and destination. Rename operations show the before and after names. You can spot problems before they happen.
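Here is one hypothetical way to model those three previews. The types and field names are our illustration, not Air's data model; what matters is that the preview text is derived from the same data the action will execute with, so there is no gap between what you approved and what happens.

```swift
import Foundation

// Hypothetical preview payloads for the three action types described above.
enum FileOpKind { case move, rename, trash }

struct FileChange {
    let before: String  // source path, or the old name for a rename
    let after: String   // destination path, or the new name
}

enum ActionPreview {
    case message(recipient: String, text: String, thread: String)
    case calendarEvent(title: String, start: Date, durationMinutes: Int,
                       location: String?, attendees: [String])
    case fileOperation(kind: FileOpKind, changes: [FileChange])
}

// Rendering is a pure function of the payload: the user sees the exact
// recipient, date, or file list the action would operate on.
func render(_ preview: ActionPreview) -> String {
    switch preview {
    case .message(let recipient, let text, _):
        return "Send to \(recipient): \"\(text)\""
    case .calendarEvent(let title, let start, let minutes, let location, let attendees):
        let place = location.map { " at \($0)" } ?? ""
        return "\(title): \(start), \(minutes) min\(place), with \(attendees.joined(separator: ", "))"
    case .fileOperation(let kind, let changes):
        let lines = changes.map { "\($0.before) → \($0.after)" }
        return "\(kind):\n" + lines.joined(separator: "\n")
    }
}
```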
The preview takes about one second to review in most cases. That one second provides certainty that the action is correct and eliminates the anxiety of wondering what the AI might have done wrong.
Trust Is Earned, Not Assumed
We believe trust between humans and AI systems should be earned incrementally, not assumed from the start. The confirmation pattern creates a learning loop where you develop calibrated trust in Air's capabilities.
When you use Air for the first few times, you might scrutinize every preview carefully. You are learning how Air interprets your requests and how accurately it resolves ambiguous references. Over time, you develop a sense for when Air is likely to get things right and when extra caution is warranted.
This calibrated trust is healthier than blind trust. You learn Air's strengths and limitations through direct experience rather than assuming it is always correct or always wrong.
The Anxiety Factor
Beyond the practical risk of errors, there is a psychological dimension to autonomous AI that we wanted to avoid. When AI acts without your knowledge, you never quite know what it has done.
Did that message send correctly? Was it to the right person? Did the calendar event get created with the right details? Without confirmation, these questions linger. You might find yourself checking Messages to make sure the right text went through, or opening Calendar to verify the meeting details.
This background anxiety is subtle but real. It consumes mental energy that could be spent on other things. It undermines the efficiency that autonomous AI is supposed to provide.
The confirmation pattern eliminates this anxiety entirely. You saw exactly what would happen. You approved it. You know it happened correctly. There is nothing to second-guess.
A Path Toward More Autonomy
We are not opposed to autonomous AI in principle. As AI systems become more reliable and as users develop deeper trust through experience, there is room for actions that do not require explicit confirmation.
The key is that autonomy should be opt-in and graduated. Low-stakes actions with easy undo might become automatic first. High-stakes or irreversible actions would continue to require confirmation. Users would have granular control over which categories of actions require approval.
But we believe the right default is confirmation for everything. You can always reduce friction later as trust develops. You cannot easily undo the damage from an AI that acted incorrectly without your knowledge.
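As a sketch of what that graduated, opt-in model could look like, the example below assigns actions to risk tiers with per-tier settings. The tiers and the API are assumptions for illustration, not a shipped Air feature; note that the default falls back to confirmation, and irreversible actions cannot opt out of it.

```swift
// Illustrative graduated-autonomy policy, not a shipped Air feature.

enum RiskTier {
    case lowStakesUndoable  // e.g. creating a draft, renaming a file
    case highStakes         // e.g. sending a message
    case irreversible       // e.g. emptying the trash
}

struct AutonomyPolicy {
    // Empty by default: every tier requires confirmation until the
    // user explicitly opts a category into automatic execution.
    private var autoExecuteTiers: Set<RiskTier> = []

    func requiresConfirmation(_ tier: RiskTier) -> Bool {
        !autoExecuteTiers.contains(tier)
    }

    mutating func optIntoAutoExecute(_ tier: RiskTier) {
        guard tier != .irreversible else { return }  // never automatic
        autoExecuteTiers.insert(tier)
    }
}

var policy = AutonomyPolicy()
policy.optIntoAutoExecute(.lowStakesUndoable)
policy.optIntoAutoExecute(.irreversible)               // silently refused
print(policy.requiresConfirmation(.highStakes))        // true: still the default
print(policy.requiresConfirmation(.lowStakesUndoable)) // false: explicit opt-in
print(policy.requiresConfirmation(.irreversible))      // true: always
```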
Why One Click Is Worth It
Some users initially find the confirmation step annoying. They want to speak a command and have it execute immediately. We understand this impulse, but we ask you to consider what that extra click buys you.
It buys you certainty that the action is correct. It buys you the ability to catch errors before they happen. It buys you freedom from background anxiety about what the AI might have done. It buys you a healthier relationship with AI that is based on verified trust rather than blind faith.
One click seems like a small price for all of that. We think you will agree once you experience the alternative.