Companies like Apple are building AI agents with limits


Next-generation AI assistants are being developed within the Apple ecosystem and by chipmakers like Qualcomm, but early reports suggest they are being designed with deliberate limitations. According to Tom’s Guide, early versions of these assistants can navigate apps, make bookings, and manage tasks across services. For instance, a private beta of one agentic system completed tasks such as booking services and posting content inside apps. In one test, it navigated through an app workflow and reached a payment screen before asking the user for confirmation.

AI agents are being built with approval checkpoints. Sensitive actions, especially those related to payments or account changes, require user confirmation before they are executed. The “human-in-the-loop” model allows the system to prepare an action but leaves the approval to the user. Research related to Apple’s AI work has explored ways to ensure systems pause before taking actions that users did not explicitly request.
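The approval-checkpoint pattern described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's actual API: the action names, the `SENSITIVE` set, and the `approve` callback are all assumptions made for the example.

```python
# Hypothetical human-in-the-loop checkpoint: the agent prepares every
# action, but sensitive ones pause and hand approval back to the user.
SENSITIVE = {"payment", "account_change"}  # assumed category names

def execute(action, payload, approve):
    """Prepare an action; execute it only if it is non-sensitive
    or the user's approve() callback grants permission."""
    prepared = {"action": action, "payload": payload}
    if action in SENSITIVE and not approve(prepared):
        return {"status": "cancelled", **prepared}
    return {"status": "executed", **prepared}

# A search runs freely; a payment waits for explicit consent.
print(execute("search", {"query": "table for two"}, approve=lambda a: True))
print(execute("payment", {"amount": 42.00}, approve=lambda a: False))
```

The key design point is that the system does all the preparatory work either way; only the final commit of a sensitive action depends on the user's answer.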

Banking apps already require confirmation for transfers; the same concept is now being applied to AI-driven actions across multiple services. A second layer of control comes from restricting what the AI can access. Instead of giving the system full access to apps and data, businesses are establishing limits, such as which apps the AI can interact with and when actions can be triggered.
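The access-scoping idea, limiting which apps the agent may touch and when, amounts to a simple policy check. A minimal sketch, assuming a hypothetical allowlist and time window (neither reflects any shipping system's configuration):

```python
# Assumed policy values for illustration only.
ALLOWED_APPS = {"calendar", "reservations"}
ACTIVE_HOURS = range(8, 22)  # actions may only trigger 08:00-21:59

def may_act(app, hour):
    """Return True only if the agent is permitted to touch this app
    at this hour; everything outside the grant is denied by default."""
    return app in ALLOWED_APPS and hour in ACTIVE_HOURS

print(may_act("calendar", 10))  # in scope, in hours → True
print(may_act("banking", 10))   # app never granted → False
print(may_act("calendar", 23))  # outside allowed hours → False
```

Denying by default means the agent cannot reach a new service simply because it exists on the device; the user has to grant it first.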

In practice, this means the AI may be able to draft a purchase or prepare a booking, but it cannot finalize either without approval. It also means the system cannot move freely across all services unless granted permission. According to Tom’s Guide, this restriction also matters for privacy: keeping data on the device removes the need to send sensitive information to external servers.

In areas like payments, AI systems are expected to work with partners that already have strict rules in place. In one reported example, payment providers’ services are being integrated to provide secure authentication before transactions are completed, although such safeguards are still under development. Existing systems act as an additional layer of oversight, capable of setting transaction limits or requiring extra verification.
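The oversight layer described here, transaction limits plus extra verification for larger amounts, can be sketched as a small review function. The limit and threshold values below are assumptions for illustration, not any payment provider's real rules.

```python
# Assumed policy values, chosen only for the example.
DAILY_LIMIT = 200.00         # hard cap on total spend per day
STEP_UP_THRESHOLD = 50.00    # above this, require extra verification

def review(amount, spent_today):
    """Provider-style oversight applied before a transaction completes."""
    if spent_today + amount > DAILY_LIMIT:
        return "blocked"             # transaction limit exceeded
    if amount >= STEP_UP_THRESHOLD:
        return "needs_verification"  # extra authentication required
    return "approved"

print(review(30.00, 0.00))     # → approved
print(review(75.00, 0.00))     # → needs_verification
print(review(150.00, 100.00))  # → blocked
```

Because this check runs inside the payment partner's infrastructure rather than the agent itself, it holds even if the agent misjudges what the user wanted.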

Much of the discussion around AI governance has focused on enterprise use, including areas like cybersecurity and large-scale automation. The consumer side introduces a different challenge, and companies must design controls that work for everyday users. This means clear approval steps and built-in privacy protections.

As AI gains the ability to carry out actions, the stakes rise: an error can lead to financial loss or data exposure. By placing controls at multiple points, including approval steps and infrastructure-level safeguards, companies are trying to manage those risks. This approach may shape how agentic AI develops in the near term: rather than aiming for full independence, companies seem focused on controlled environments where risks can be managed.
