DEV Community

Cover image for Getting AI Governance Right
Peter Harrison
Peter Harrison

Posted on

Getting AI Governance Right

Organisations are under real pressure to govern AI responsibly. The risks are genuine: data exposure, unsecured systems, staff submitting confidential information to tools with opaque data handling terms. Most are responding with the tools they know: approved lists, firewall rules, and access controls.

Those tools have their place. The challenge is applying them where they work without undermining the autonomy and ownership of the people whose judgement the organisation actually depends on.


Governance Should Match the Role

The most important thing to understand about AI governance is that a single policy applied to everyone will be wrong for most of them.

A front line customer service representative using a PC to handle customer queries is in a fundamentally different position from a developer building the systems those queries run on. The first person was not hired to exercise technical judgement about data handling or AI tool risk. Expecting them to navigate those questions unaided is unreasonable. Guardrails, firewall rules, blocked domains, restricted tool access, are appropriate here. Not because staff cannot be trusted as people, but because they have not been given the context to make good judgements in this specific domain. The guardrail is a substitute for training that has not happened, and a reasonable one.

The developer, IT professional, or technically literate manager is a different case entirely. Their job involves exercising technical judgement. Applying the same blanket restrictions to them does not reduce risk. It transfers the constraint to the wrong place, signals institutional distrust of the people whose competence the organisation depends on, and erodes the sense of ownership and autonomy that makes skilled professionals effective.

There is a broader pattern here that predates AI. Developers who cannot install tools, who need a ticket to add a library, who operate inside locked-down environments managed by central IT, cannot stay current, cannot adapt, and eventually stop trying. AI governance applied uniformly is heading down the same path. Centralised control optimises for the appearance of security at the direct expense of competence.

The governance framework worth building distinguishes between these populations explicitly. Front line staff get appropriate guardrails. Professionals get training, clear principles, and accountability. The question for each group is different: not "is this tool permitted" but "does this person have the context to make good decisions, and are they accountable for the outcomes?"


The Real Risks Worth Managing

For the population that warrants professional discretion rather than blanket restriction, the risks are real but specific. They are also manageable with practices that do not require a permission register.

Personal and confidential data. Some data should not be submitted to any AI tool, regardless of provider, tier, or training settings. Customer personal information, personnel matters, legal advice, strategic plans, commercially sensitive proposals. The question here is not which tool to use or how to configure it. It is whether submitting this information to an external service is appropriate at all. In most cases it is not, and no amount of enterprise licensing changes that. Privacy law in New Zealand and most other jurisdictions places specific obligations on how personal information is collected, used, and disclosed. Sending a customer record to an AI assistant to help draft a response is a disclosure to a third party, and it requires a lawful basis. That is a legal question before it is a governance one.

Code and intellectual property. Submitting code to an AI tool is a different question. The risk is not primarily legal but commercial: code may contain proprietary logic, unreleased product details, or confidential architecture. Here the tier and training terms matter. Consumer and individual paid tiers for major providers typically default to allowing training on submitted data, with opt-out available but not obvious. Enterprise tiers operate under data processing agreements that explicitly exclude data from training. The risk sits precisely in the middle: professionals on individual subscriptions who believe they are using a professional tool but are on terms that treat their input as consumer data.

The mitigation is straightforward. Understand which tier you are on and what the terms say. Disable training in your account settings if you have not already done so. We trust Google with our email. We can trust AI providers with code, provided we understand and control the terms on which we do so.

Copyright and ownership. There is a legal question underneath AI-assisted development that most organisations have not yet thought through. In the United States, courts have established that works created by AI without meaningful human involvement cannot be copyrighted. The Copyright Office has confirmed that prompts alone are not sufficient to establish human authorship. New Zealand has not yet ruled on the question directly, but the direction of travel in comparable jurisdictions is consistent.

The practical implication is this: code generated by AI and accepted without meaningful human review, judgement, or modification may not be protectable intellectual property. For any organisation whose core asset is its software, that is a material concern. It is also one more reason why the human review practices described later in this article matter beyond quality and security. The developer who steers, evaluates, and makes architectural decisions about AI-generated code is in a different legal position from one who ships whatever the agent produces.

There is a second and separate question about provenance. AI models are trained on large bodies of existing code, but they do not simply reproduce it. Normal development use, asking an agent to implement a feature or solve a problem, produces synthesised output, not verbatim copies of training data. Verbatim reproduction would require explicitly prompting the model to reproduce specific code, which is a different activity entirely. The provenance risk in ordinary AI-assisted development is low, and human review of generated code reduces it further.

Both questions are still being worked out in the courts. The practical response is the same one the rest of this article points toward: keep humans genuinely engaged with the code rather than treating AI output as a black box. That is good practice on quality, security, and legal grounds simultaneously.


Programming With AI: Practices That Matter

For developers working with agentic tools day to day, the risks go beyond data exposure. They include access, integrity, and the quality of what gets built.

Production access. AI coding agents should not have access to live systems, production credentials, or administrative database access. AI systems make mistakes, misunderstand scope, and can cause irreversible damage. That is not a theoretical concern, it is the observed reality of working with these tools. Production systems are not the place to find out what an agent gets wrong.

Version control gates. Despite the ability of agentic tools to commit and push to git repositories, they should not be given permission to do so. The commit is the natural human checkpoint, the moment where generated code enters the shared context of the team and becomes something the organisation is responsible for. Granting an agent the ability to bypass that gate removes the last review point before the code becomes shared reality. The agent writes the code. The developer commits it.

Write tests first, commit them before implementation. An AI agent asked to implement a feature and write tests for it will write tests designed to pass. That is not the same as tests designed to verify. The test is shaped by the implementation rather than the other way around, and an agent optimising to satisfy the immediate request has every incentive to make the tests work rather than make them rigorous. Writing tests first and committing them before any implementation begins creates a structural constraint the agent cannot easily circumvent. Any subsequent modification to the committed test is immediately visible in the diff, a signal worth investigating every time.

Review the code. This sounds obvious. It is less obvious in practice when the code works, the tests pass, and the feature behaves as expected. The failure modes that matter most are not the ones that produce visible errors. They are the ones that look fine and are not. An agent asked to implement an API will implement one that works. It may not implement one that is properly secured. Authentication checks may be superficial, endpoints exposed to the wrong network, error responses leaking information, with no rate limiting in place. None of those failures show up in a functional test. They show up when someone finds the unsecured endpoint, which may be immediately or may be at the worst possible moment.

An agent can also make locally rational decisions that are globally wrong. Hardcoding a value to make a test pass. Implementing a workaround that solves the immediate problem and creates three others. Taking a structural shortcut that is invisible until the system needs to scale. These require a developer who understands what the code is supposed to do and can recognise when it is doing something subtly different. That judgement cannot be delegated to the tool.

This is not vibe coding. There is a current in AI development culture that treats code review as unnecessary overhead. Let the agent write it, ship it if it works, trust the tool. That may be defensible for personal projects where the cost of failure is low. For production code, client work, or anything with security or data integrity implications, it is a form of negligence dressed up as efficiency. Agentic tools make developers more powerful. They do not make developers optional. The human in the loop is not a bottleneck. They are the reason the output is trustworthy.


Training Completes the Picture

Different roles carry different levels of technical context, different exposure to risk, and different capacity to exercise informed judgement. Governance that recognises those differences will be more effective than governance that does not.

Policy sets the expectations. Training and education are what make those expectations stick. A developer who understands why production access is dangerous will follow the policy that prohibits it, and will apply the same judgement in situations the policy did not anticipate. A developer who does not understand it will follow the rule when someone is watching and work around it when they are not.

Governance that actually works does not stop at rules and controls. It extends to the people expected to operate within them, ensuring they understand what they are working with, take ownership of the decisions they make, and are accountable for the outcomes.

The investment that makes the most difference is not a more comprehensive control framework. It is ensuring that the professionals operating with AI discretion have the training to understand the risks, the discipline to apply that understanding consistently, and a genuine sense of responsibility for what they produce. That means understanding what the tools can do, what the data handling terms mean, and where the failure modes live. It also means understanding their personal responsibility to maintain the confidentiality of sensitive information and the security of the systems they work with. Those responsibilities do not change because an AI tool is in the loop. If anything, they become more important when the tool can act autonomously on their behalf.

The guardrails for front line staff matter. Centralised controls have a role. The firewall is not wrong for the population it is designed to protect. But none of those things substitute for competence, discipline, and ownership in the people who need to operate beyond them. Getting AI governance right means knowing the difference.

Top comments (0)