Nobody Is Holding the Rope
Why every organisation using AI needs one person who actually owns it
There is a conversation I keep having.
It happens in boardrooms, on calls, at industry events. Someone leans forward and says: “We have AI tools deployed. We have policies somewhere. We did some training last year. But honestly? Nobody really knows who’s responsible when something goes wrong.”
Every time, I nod. Because this is not a confession. It is the norm.
Organisations have rushed to adopt AI. They have bought tools, integrated systems, written acceptable use policies that live in a shared drive nobody opens. They have checked boxes. And somewhere along the way, they told themselves that was enough.
It is not enough.
The EU AI Act, the UK's principles-based AI framework, and the UAE's AI guidelines all demand human oversight. They all require documentation, risk assessment, and ongoing monitoring. The EU AI Act, for example, requires in Article 26(2) that oversight of high-risk systems be assigned to natural persons with the necessary competence, training and authority. What none of them tells you is what that role looks like, what to call it, or where it sits in your organisation. That part is left entirely to you.
And most organisations have left it to nobody.
The problem is not ignorance. It is diffusion.
When responsibility is spread across everyone, it belongs to no one.
Your IT team knows the technical infrastructure. Your legal team knows the regulatory requirements. Your HR team manages the people side. Your data team handles the pipelines. Your procurement team approved the vendor. Your department heads decide how tools get used day to day.
All of them know something. None of them know everything. And nobody is standing at the intersection, making sure the whole picture holds together.
I see it play out in specific ways. A developer deploys an AI model. They have configured it correctly from a technical standpoint. But has anyone checked whether the outputs are explainable to the people affected by them? Has anyone made sure the employees using it understand its limitations? Has anyone considered what happens when the model drifts six months from now?
The developer thinks legal owns that. Legal thinks IT owns that. IT thinks the department head owns that. The department head thinks it came pre-approved.
Nobody owns it.
This is not a technology problem. It is a governance problem. And governance problems have a known solution: you assign a person.
What we call them matters less than what they do
Let me offer a few names that have appeared in early conversations around this role:
AI System Owner. Precise. Signals operational accountability. Borrowed from the logic of data ownership and system ownership already familiar in IT governance.
Chief AI Officer (CAIO). Elevated, strategic. Works well in larger organisations where AI is core to the business model. Risks becoming a title without teeth if not properly scoped.
AI Governance Lead. Neutral. Useful in organisations where “ownership” language creates internal friction.
AI Compliance Officer. More limited in scope, focused on regulatory adherence. Not wrong, but it undersells the role.
My instinct is that the title matters less than the mandate. Whatever you call this person, they need formal authority, cross-functional access, and genuine accountability. Not an advisory role. Not a committee. One person.
Think of the Chief Financial Officer. Nobody asks whether the CFO or the marketing director “owns” the annual accounts. We know. The CFO owns it. They have training for it, authority for it, and liability attached to it.
Think of the Data Protection Officer under GDPR. Organisations spent years resisting this role, arguing it was unnecessary overhead. Now those same organisations cannot imagine operating without one.
AI needs its equivalent.
What this person actually owns
This is where I want to be specific. Because vague job descriptions are how we end up back at diffusion.
The AI System Owner (whatever you call them) should be responsible for the following areas. These map, not coincidentally, to the Seven Core Ethical Pillars I have developed through my audit and advisory work: Human Oversight, Technical Safety, Data Privacy, Transparency, Fairness, Social Impact, and Accountability.
1. Inventory and classification
They maintain a live register of every AI system in use across the organisation. Not just the systems IT deployed. All of them. The tool the marketing team signed up for on a company card. The AI assistant your customer service team started using last month. The algorithm your recruitment software uses to rank CVs. They classify each system by risk level, in line with applicable regulation.
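To make that concrete: the register does not need to be sophisticated, it needs to exist and to force the classification question to be answered for every system. Here is a minimal sketch in Python of what a single register entry might capture; the field names, the product, and the risk tiers as I have labelled them are illustrative, not a prescribed schema.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class RiskLevel(Enum):
    # Tiers loosely mirroring the EU AI Act's risk-based approach
    MINIMAL = "minimal"
    LIMITED = "limited"        # transparency obligations
    HIGH = "high"              # conformity assessment, oversight, monitoring
    PROHIBITED = "prohibited"

@dataclass
class AISystemRecord:
    """One row in the organisation's AI system register (illustrative fields)."""
    name: str
    vendor: str
    business_owner: str            # a named person, not a team
    departments_using: list[str]
    purpose: str
    risk_level: RiskLevel
    personal_data_processed: bool
    last_reviewed: date

# The tool marketing signed up for on a company card belongs here too
register = [
    AISystemRecord(
        name="CopyDraft Pro",          # hypothetical product
        vendor="Example Vendor Ltd",
        business_owner="Head of Marketing",
        departments_using=["Marketing"],
        purpose="Campaign copy generation",
        risk_level=RiskLevel.LIMITED,
        personal_data_processed=False,
        last_reviewed=date(2025, 1, 15),
    ),
]
```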
2. Policy development and language alignment
They write and keep current the organisation’s AI usage standards. But more than writing policy, they ensure that language is shared across departments.
This one matters more than it sounds.
Ask a developer what “AI safety” means. Then ask your legal counsel. You will get different answers. Both are correct within their own frame. Neither is sufficient on its own. The AI System Owner translates between these frames. They build a common vocabulary so that when something goes wrong (and at some point, something will), the organisation is not tripped up by the fact that two teams were talking past each other the whole time.
3. Training and competency oversight
They ensure that everyone who works with AI, at any level and in any function, receives training appropriate to their role. This is not a one-time induction. AI systems change. Regulations change. Risk profiles change. Training is an ongoing programme, not an event.
4. Risk assessment and incident management
They own the risk register for AI systems. They conduct or commission conformity assessments. They define what constitutes an incident and what the response protocol looks like. They are the person who gets called when something goes wrong.
5. Vendor and procurement oversight
When the organisation buys or integrates an AI product, the AI System Owner is at the table. They review third-party systems before deployment. They ensure contracts include the right data protection and audit clauses. They are not the procurement team, but they are the person procurement needs to consult before a deal closes.
6. Monitoring and drift detection
Deploying an AI system is not the end of the process. Models drift. Datasets shift. Regulatory requirements evolve. The AI System Owner establishes ongoing monitoring protocols and defines what triggers a review.
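What counts as a trigger can be made concrete rather than left to judgement. A minimal sketch, assuming the organisation logs the model's output scores: compare a recent window against the reference distribution from validation using a population stability index, and escalate when it crosses an agreed threshold. The bands below are common conventions, not regulatory values, and the synthetic data is only there to make the example runnable.

```python
import numpy as np

def population_stability_index(reference, current, bins=10):
    """Compare two score distributions; a larger PSI means more drift."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Clip empty bins so the log term stays defined
    ref_pct = np.clip(ref_pct, 1e-6, None)
    cur_pct = np.clip(cur_pct, 1e-6, None)
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

rng = np.random.default_rng(0)
reference = rng.normal(0.60, 0.10, 5_000)   # scores at validation time
current = rng.normal(0.52, 0.13, 5_000)     # scores from the last 30 days

psi = population_stability_index(reference, current)
# Conventional, illustrative bands: < 0.10 stable, 0.10-0.25 watch, > 0.25 review
if psi > 0.25:
    print(f"PSI {psi:.2f}: review triggered, notify the AI System Owner")
elif psi > 0.10:
    print(f"PSI {psi:.2f}: add to the watch list")
else:
    print(f"PSI {psi:.2f}: stable")
```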
7. Accountability and regulatory interface
When a regulator asks how your organisation is managing AI risk, this is the person who answers. They maintain the documentation trail. They interface with external auditors. They keep the board informed.
Case study one: A digital health platform
Consider a healthtech company running a platform that uses AI to support clinical decision-making. Physicians access diagnostic support tools. Patients interact with AI-driven triage interfaces. The company also uses AI internally: for hiring, for customer service automation, and increasingly for predicting churn and patient outcomes.
Under the EU AI Act, several of these systems are high-risk. Diagnostic support is caught because it sits inside a regulated medical device, and patient triage for emergency care appears in Annex III. That means conformity assessments, technical documentation, human oversight mechanisms, post-market monitoring. The requirements are significant.
Now ask: who in this organisation is accountable for all of that?
In most healthtech companies the answer is fractured. The clinical team owns the medical validation. The tech team owns the model. Legal owns the regulatory filings. Nobody owns the intersection.
Here is where the AI System Owner becomes critical. And it is also where the concept of Standard Operating Procedures (SOPs) becomes essential.
Healthcare as an industry runs on SOPs. Clinical staff understand that a procedure is not real until it is written, validated, and trained. The AI System Owner in a healthtech company should apply this same discipline to AI governance.
What does that look like in practice?
It means an SOP for how new AI features are validated before deployment. Not just technically, but clinically, ethically, and legally. It means an SOP for how physicians are informed about the limitations of the diagnostic support tool they are using. What it can flag. What it cannot. What happens when its confidence score is low.
It means an SOP for what happens when the system produces an unexpected output. Who is notified? Within what timeframe? Who makes the call on whether it constitutes a reportable incident?
It means an SOP for the patient-facing triage interface: how patients are informed they are interacting with AI, what escalation paths exist, how the interaction is logged and for how long.
And critically, it means updating the HR team’s existing recruitment SOPs to account for AI screening tools. HR already documents its hiring procedures precisely because those decisions carry legal risk. That same logic needs to extend to the algorithm making the first cut.
The AI System Owner in this scenario is probably someone with a background in clinical governance, information governance, or health informatics. They understand both the regulatory environment and the clinical stakes. They sit between the product team and the medical team. They speak both languages.
Without this person, you have a company that is technically compliant on paper and operationally exposed in practice.
Case study two: A software house deploying AI for clients and using it internally
Now consider a different type of organisation. A mid-sized software house. They build AI-powered products for clients across sectors. They also use AI heavily in their own operations: code generation, project management, documentation, and yes, an AI-assisted CV screening tool in HR.
This company has a dual exposure. They build and integrate AI for external clients, which in regulatory terms makes them a provider. And they deploy AI internally in their own operations. Both carry risk. Both require governance.
On the client side: they are integrating AI systems into sectors with varying regulatory requirements. One client is in financial services. One is in retail. One is in local government. Each brings its own regulatory context. The software house cannot be expert in all of them, but they need someone who can ask the right questions, build the right contractual protections, and ensure their own development processes are sound.
On the internal side: the risks are quieter but no less real.
Code generation tools can produce outputs with embedded vulnerabilities. If no one is auditing this, those vulnerabilities ship. The AI-assisted CV screening tool, used by HR without deep technical understanding, may be amplifying historic biases in hiring. If no one is examining the outputs against fairness metrics, the company is making consequential employment decisions on the basis of a model it does not really understand.
The AI System Owner in this company needs a different profile than in healthcare. They are likely more technically fluent, perhaps someone from a senior engineering background who has developed legal and ethical literacy. Or someone from a legal or compliance background who has gone deep on AI systems.
What they own internally: they establish a usage policy for internal AI tools. Not a prohibition list, but a clear framework for what tools are approved, under what conditions, and with what guardrails. They work with the lead developer to define when AI-generated code requires human review before it goes into production. They review the CV screening tool. They look at what data it was trained on. They examine the pass/fail criteria. They run a fairness audit. They decide whether the tool is fit for purpose and, if it is not, they have the authority to pause it.
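A fairness audit does not have to start with anything exotic. One common first check, sketched below with made-up numbers, is the adverse impact ratio (the four-fifths rule familiar from employment contexts): compare the screening tool's pass rates across candidate groups and flag any group whose ratio to the best-performing group falls below 0.8. It is not the whole audit, but it is the kind of signal that justifies pausing the tool.

```python
def adverse_impact_ratio(pass_rates: dict[str, float]) -> dict[str, float]:
    """Ratio of each group's pass rate to the highest-passing group's rate."""
    best = max(pass_rates.values())
    return {group: rate / best for group, rate in pass_rates.items()}

# Hypothetical screening-stage pass rates pulled from the CV tool's logs
pass_rates = {"group_a": 0.42, "group_b": 0.31, "group_c": 0.40}

for group, ratio in adverse_impact_ratio(pass_rates).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: ratio {ratio:.2f} [{flag}]")

# group_b lands around 0.74 here, below the four-fifths threshold:
# exactly the point at which the owner pauses the tool and asks questions.
```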
What they own on the client side: they build a standard AI delivery framework that the company uses across all client engagements. Before an AI feature ships, this framework requires sign-off from the AI System Owner, or a clear delegation to a named person on the client side who has accepted equivalent responsibility. They define the documentation the company produces for every AI system it delivers: what it does, what its limitations are, how it should be monitored, what constitutes a failure.
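One way to make that documentation requirement stick is to express it as a structured record that the delivery framework refuses to sign off without. The sketch below assumes the framework is enforced in the company's own tooling; the field names are hypothetical, and the point is simply that an empty field blocks shipment.

```python
from dataclasses import dataclass

@dataclass
class AIDeliveryDoc:
    """Minimum documentation shipped with every AI feature (illustrative)."""
    system_name: str
    intended_purpose: str
    known_limitations: list[str]
    monitoring_plan: str        # what is measured, how often, by whom
    failure_definition: str     # what counts as a reportable failure
    signed_off_by: str          # AI System Owner or a named client delegate

def ready_to_ship(doc: AIDeliveryDoc) -> bool:
    # Sign-off is blocked while any required field is missing or empty
    required = [doc.intended_purpose, doc.monitoring_plan,
                doc.failure_definition, doc.signed_off_by]
    return all(text.strip() for text in required) and bool(doc.known_limitations)
```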
This is the moment where the AI System Owner is not just a compliance function. They are a commercial differentiator. Clients in regulated industries are beginning to ask these questions. The software house that can answer them clearly, that can say we have a governance framework and a named person who owns it, has an advantage over the one that shrugs.
This is not overhead. This is infrastructure.
I anticipate the objection. “We are a lean team. We cannot add headcount for a governance role.”
Two responses.
First: this does not have to be a new hire. In smaller organisations, this role can be held by an existing senior person, provided they have the time, the training, and the formal mandate. A half-hearted side project for someone already stretched is not the answer. But a DPO often holds dual responsibilities in smaller organisations. A Head of Legal often covers compliance. The AI System Owner can follow the same model.
In larger organisations, the picture looks different. One person cannot personally execute every decision, review every vendor contract, and run every training programme across a global workforce. That is not the point. The AI System Owner in a larger organisation is not doing all of this alone. They are leading a team, setting the standards, and carrying the accountability. Think of a CFO: they do not personally reconcile every account. They own the function, set the framework, and answer for it. The AI System Owner works the same way. The team executes. The owner is responsible.
Second: the cost of not having this person is not theoretical. It is regulatory exposure, reputational risk, operational incidents waiting to happen, and a workforce that is using AI tools without adequate understanding of their limitations. The cost of the incidents is higher than the cost of the role.
We built this infrastructure for finance. We built it for data protection. We are building it, slowly and unevenly, for information security. AI is next.
What the regulations say and what they leave out
The EU AI Act requires human oversight of high-risk AI systems. It requires technical documentation. It requires post-market monitoring. It requires fundamental rights impact assessments in certain contexts.
It does not say: appoint one person and call them the AI System Owner.
That is not a gap in the regulation. Regulators set the standard. Organisations choose the structure.
But someone has to connect those dots inside your organisation. Someone has to look at Article 9 on risk management systems and translate it into an actual process. Someone has to look at Article 14 on human oversight and define what that means for your specific tool, your specific user base, your specific risk profile.
That translation work is not legal work alone. It is not technical work alone. It sits at the intersection. And intersections, in organisations, need owners.
The question worth asking this week
Not: do we have the right policies?
Ask instead: if something went wrong with our AI systems tomorrow (an unexplainable output, a biased decision, a data breach, a regulatory inquiry), who in this organisation would I call first?
If the answer is a committee, a shared inbox, or a long pause followed by uncertainty, you already know what you need to build.
One person. Clear mandate. Real authority. Full accountability.
That is not a complicated structure. It is just a decision you have not made yet.
Anna August, PhD is an AI ethics auditor and trainer working across UK, EU & Gulf markets. Her Seven Core Ethical Pillars framework supports organisations in building AI governance that is both regulatory-ready and operationally grounded.