Governance Challenges of Agentic AI under the EU AI Act in 2026
AI agents promise to move data between systems and make decisions automatically, but in some cases they act without a clear record of what they did, when, and why. This creates a governance problem for which IT leaders are ultimately responsible: if an organization cannot trace an agent's actions and lacks proper control over its authority, leaders cannot prove to regulators that a system is operating safely, or even lawfully. The issue becomes more pressing from August this year, when enforcement of the EU AI Act begins.
According to the text of the Act, governance failures related to AI carry substantial penalties, especially in high-risk areas such as the processing of personally identifiable information or financial operations. To reduce that risk, IT leaders should consider several measures: agent identity, comprehensive logs, policy checks, human oversight, rapid revocation, vendor documentation, and a clear trail of evidence for regulators.
One solution is a Python SDK such as Asqav, which cryptographically signs each agent action and links every record into an immutable hash chain, a technique more commonly associated with blockchain technology. If anyone alters or removes a record, verification of the chain fails. For governance teams, a centralized, possibly encrypted system of record for all agentic AI provides far richer evidence than the scattered text logs produced by individual software platforms.
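The hash-chaining idea can be sketched in a few lines. This is a minimal illustration, not the Asqav SDK (whose API is not reproduced here), and it uses an HMAC with a shared key as a stand-in for a full digital signature; all names are hypothetical.

```python
import hashlib
import hmac
import json

# Assumption for the sketch: a single signing key. In practice each agent
# would have its own key, ideally held in an HSM or key-management service.
SIGNING_KEY = b"demo-key"

def append_record(chain, agent_id, action):
    """Append a signed record whose payload embeds the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(
        {"agent": agent_id, "action": action, "prev": prev_hash}, sort_keys=True
    )
    record = {
        "payload": payload,
        "signature": hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest(),
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
    }
    chain.append(record)
    return record

def verify_chain(chain):
    """Fail if any record was altered, removed, or reordered."""
    prev_hash = "0" * 64
    for record in chain:
        data = json.loads(record["payload"])
        if data["prev"] != prev_hash:
            return False  # a record was removed or reordered
        if record["hash"] != hashlib.sha256(record["payload"].encode()).hexdigest():
            return False  # the payload was altered after signing
        expected_sig = hmac.new(
            SIGNING_KEY, record["payload"].encode(), hashlib.sha256
        ).hexdigest()
        if not hmac.compare_digest(record["signature"], expected_sig):
            return False  # the record was not produced by the key holder
        prev_hash = record["hash"]
    return True
```

Because each payload embeds the hash of its predecessor, editing or deleting any one record invalidates every record after it, which is what makes the log tamper-evident.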
Regardless of the technical details, IT leaders need to see exactly where, when, and how agentic instances act throughout the enterprise. Many organizations fail at this first step of recording automated, AI-driven activity. It is necessary to maintain a registry of every agent in operation, uniquely identified, along with its capabilities and granted permissions. This 'agentic asset list' ties neatly into Article 9 of the EU AI Act, which requires that, for high-risk systems, AI risk management be an ongoing, evidence-based process built into every stage of deployment.
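An agent registry of this kind need not be elaborate to be useful. The sketch below, with hypothetical names and fields, shows the minimum shape: a unique identifier, an owner, and explicit capability and permission sets that can be queried before an agent acts.

```python
from dataclasses import dataclass

@dataclass
class AgentRecord:
    """One entry in the 'agentic asset list': identity plus granted authority."""
    agent_id: str
    owner: str
    capabilities: set
    permissions: set

class AgentRegistry:
    """Registry of every agent in operation, each uniquely identified."""

    def __init__(self):
        self._agents = {}

    def register(self, record):
        # Unique identification: refuse duplicate agent IDs outright.
        if record.agent_id in self._agents:
            raise ValueError(f"duplicate agent id: {record.agent_id}")
        self._agents[record.agent_id] = record

    def is_permitted(self, agent_id, permission):
        # Deny by default: unknown agents and ungranted permissions both fail.
        record = self._agents.get(agent_id)
        return record is not None and permission in record.permissions
```

Routing every agent action through a lookup like `is_permitted` is what turns the asset list from documentation into an enforceable control.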
Any agentic deployment should also offer rapid revocation of an AI's operating role, preferably within seconds. Revocation should include immediate removal of privileges, cessation of API access, and flushing of queued tasks. Human oversight only works if human operators can reject any proposed action, which in turn requires sufficient context, visibility into every agent's authority, and enough time to intervene before a misstep occurs.
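The three revocation steps can be bundled into a single kill-switch operation. This is a toy sketch under assumed names (the runtime object, tokens, and task names are all invented for illustration), but it shows why the steps belong together: leaving any one of them out leaves the agent a way to keep acting.

```python
import time

class AgentRuntime:
    """Toy runtime holding an agent's privileges, API tokens, and task queue."""

    def __init__(self, agent_id):
        self.agent_id = agent_id
        self.permissions = {"read_crm", "send_email"}
        self.api_tokens = {"crm": "tok-123"}
        self.task_queue = ["draft_reply", "update_record"]
        self.revoked_at = None

    def revoke(self):
        """Kill switch: strip privileges, cut API access, flush queued work."""
        self.permissions.clear()     # immediate removal of privileges
        self.api_tokens.clear()      # ceases API access (tokens no longer held)
        flushed = list(self.task_queue)
        self.task_queue.clear()      # flush queued tasks so nothing runs later
        self.revoked_at = time.time()
        return flushed               # surface what was cancelled, for the audit log
```

Returning the flushed tasks matters: a human reviewer can then see exactly which pending actions were cancelled, which is itself evidence for the audit trail.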
Multi-agent processes are particularly complex to track, as failures can occur anywhere along a chain of agents. It is therefore crucial to test security policies during development of any system that uses multiple agents. Finally, regulators may request logs and technical documentation at any time, and will certainly need them after any incident they have been made aware of.
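One way to make such policies testable during development is to check them at every hop of the chain rather than only at the entry point. The sketch below uses invented agent roles and a deny-by-default policy table; the point is that a violation anywhere in the chain fails fast and names the offending step.

```python
# Deny-by-default policy: an agent may only perform explicitly listed actions.
# The roles and actions here are hypothetical, for illustration only.
POLICY = {
    "planner": {"delegate"},
    "retriever": {"read_docs"},
    "writer": {"draft_text"},
}

def policy_allows(agent, action):
    """Check one hop against the policy table; unknown agents get nothing."""
    return action in POLICY.get(agent, set())

def run_chain(steps):
    """Execute a chain of (agent, action) steps, failing fast on a violation."""
    for agent, action in steps:
        if not policy_allows(agent, action):
            raise PermissionError(f"{agent} is not allowed to {action}")
    return "ok"
```

Because `run_chain` raises on the first disallowed hop, a development-time test suite can assert both that legitimate chains complete and that privilege-escalating chains are rejected before any action runs.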
The question for IT leaders considering using AI on sensitive data or in high-risk environments is whether every aspect of the technology can be identified, constrained by policy, audited, interrupted, and explained. If the answer is unclear, governance is not yet in place.