Eight steps towards AI governance

12 November 2025
7 minute read

I delivered our “Managing AI as an Asset” training course the day before the Wisdom conference last week. Thank you to those who attended and provided feedback. It will be available on the LISA platform before Christmas.

The AI market is growing fast and, as with all technology sprawl and innovation, governance is only just catching up. From the course, here are eight first baby steps you should be thinking about in order to manage the risk and opportunity of AI. This is a follow-on from my previous article on AI Governance through an ITAM lens.

(1) Set AI Policy

The first step is an AI policy, either by amending your Acceptable Use policy or by creating a dedicated AI policy, together with ownership at a senior level that recognises this new technology beast. AI is transforming economies and transforming businesses, and should be recognised as a new class of risk. My approach to AI governance is a modern one: not being the department of no, but instead helping the business innovate with the appropriate guardrails.

Blocking AI use because you don't know how to manage it yet might feel safe, but your competitors could be getting a 10% productivity gain per employee because they have figured it out (for example, an NHS MS365 Copilot trial suggested around 43 minutes saved per person per day, roughly 10% of a working day). Pushing back on requests for the latest shiny AI wonder tool is a lot easier if the requesters have had AI risk awareness training, which leads us to…

(2) AI Training – Both risks and opportunities / use case spotting

Follow AI policy closely with company-wide AI awareness training. A lot of the fear around AI ("OMG, the robots are coming for my job") can be alleviated by educating the entire workforce about AI: both the risks and the opportunities.

Some companies will have a specific AI skunkworks or centre of excellence working on AI. While that is good from a moving-quickly point of view, it can be dangerous if it appears exclusive: not including the entire workforce in AI training and awareness might create fear. Also, an AI centre of excellence will not have a monopoly on good ideas; some of the best ideas will come from those at the rock face. You should be educating people not only on the risks of misuse but also on the AI opportunities within their roles; AI is a tool that adds superpowers to existing team members. See: How to Turn AI Fears into Confidence and Capability: The Psychology of AI Integration

(3) AI Discovery

Governance 101 leads us to discovery of the risk itself. How does AI turn up in your enterprise? Via SaaS, via cloud, via your datacentre. Then there is Shadow AI (individuals bringing their own AI, or departments bypassing IT) and Trojan AI (vendors trickling AI capabilities or services into your estate without your knowledge).

As I covered in my previous article, it's worth thinking about how you manage AI within your existing systems: treat it as an asset to be managed (because it's loaded with potential risk and unmanaged cost), but also as a special asset class (because the context or data being used is the key risk, not just the AI technology itself).
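
As a rough illustration of what a first pass at discovery can look like, here is a minimal sketch in Python that flags potential AI services in a SaaS or SSO export. The file name ("saas_apps.csv"), the column names and the keyword list are all assumptions for illustration; real discovery would lean on your SSO, CASB or SaaS management tooling.

    # Minimal sketch: flag potential AI services in a SaaS/SSO export.
    # Assumptions: a CSV export ("saas_apps.csv") with "app_name" and
    # "vendor" columns; the keyword list is illustrative, not exhaustive.
    import csv

    AI_KEYWORDS = {"copilot", "gpt", "gemini", "claude", "openai",
                   "anthropic", "midjourney", "hugging face"}

    def looks_like_ai(app_name: str, vendor: str) -> bool:
        """Crude first-pass filter: does the app or vendor name hint at AI?"""
        text = f"{app_name} {vendor}".lower()
        return any(keyword in text for keyword in AI_KEYWORDS)

    with open("saas_apps.csv", newline="") as f:
        suspects = [row for row in csv.DictReader(f)
                    if looks_like_ai(row["app_name"], row["vendor"])]

    for row in suspects:
        print(f"Review: {row['app_name']} ({row['vendor']})")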

Are you behind the 8 ball with AI risk? – Eight steps towards AI governance

(4) Monitor usage and put controls in place

Once you’ve discovered AI, you need to monitor its usage and put the necessary controls in place, as you would for any asset. Decide what you’ll log, who sees it, and what triggers a stop; treat model changes like any other change. This is a tweak to existing controls, not a rewrite.
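
To make "what triggers a stop" concrete, here is a minimal sketch of a usage-control rule in Python: log each AI call and flag a stop once a simple daily threshold is breached. The field names, the cap and the model name are illustrative assumptions, not any particular product's API.

    # Minimal sketch: log each AI call and stop when a cap is breached.
    from dataclasses import dataclass, field

    @dataclass
    class UsageMonitor:
        daily_token_cap: int = 500_000   # stop threshold (assumption)
        tokens_today: int = 0
        audit_log: list = field(default_factory=list)

        def record(self, user: str, model: str, tokens: int) -> bool:
            """Log the call; return False if usage should be stopped."""
            self.tokens_today += tokens
            self.audit_log.append((user, model, tokens))
            return self.tokens_today <= self.daily_token_cap

    monitor = UsageMonitor()
    if not monitor.record("j.smith", "some-llm", 12_000):
        print("Cap breached: route to the accountable owner for sign-off")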

(5) Maintain inventory

Although AI is transformative, arguably as big a transformation as the internet itself, it is still a technology asset, and we can use a lot of the existing systems we already have. All of the information security and data controls you already have for public cloud, SaaS and internal systems apply; we’re not starting from scratch. In terms of an AI register, a starting point per system is: the AI system itself, its purpose, its owner, the model in use, where it’s located in the world, what data is coming in and out, its impact and risk level, and what controls are in place.
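
As a sketch of what one register entry might look like, here is the list above expressed as a simple Python record; a YAML file or CMDB record would do equally well, and every field value below is purely illustrative.

    # Minimal sketch of one AI register entry, mirroring the fields above.
    from dataclasses import dataclass

    @dataclass
    class AIRegisterEntry:
        system: str          # the AI system
        purpose: str         # what it is for
        owner: str           # accountable owner
        model: str           # model in use
        location: str        # where it's located in the world
        data_in: str         # what data is coming in
        data_out: str        # what data is going out
        risk_level: str      # impact and risk level, e.g. high/medium/low
        controls: list       # what controls are in place

    entry = AIRegisterEntry(
        system="HR screening assistant", purpose="CV triage",
        owner="Head of HR", model="vendor-hosted LLM",
        location="EU", data_in="candidate CVs", data_out="shortlist scores",
        risk_level="high", controls=["human review", "access logging"])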

(6) Classify Risk

The EU AI Act and ISO/IEC 42001 are good reference points for best practice around AI risk management. The EU Act because it’s a) live now and b) the most draconian in the world so far. ISO/IEC 42001 is to AI what ISO 9001 is to quality or ISO/IEC 27001 is to information security. As with all risks and best practices, use what is appropriate for your business.

There are certain AI use cases deemed high risk in certain industries; build your risk management accordingly. Many organisations might realise that no modifications are required at all; for others, embracing AI and processing customer data, an overhaul will be needed.

Certain AI uses have already been outlawed by the EU, with fines (up to 7% of global turnover) applicable. Major alignment is required by August 2026 and, as with things like GDPR, it applies to anyone wanting to do business in the EU, not just those located in the EU.

High risk, and the focus of the vast majority of global regulation around AI, centres on the protection of protected characteristics and civil liberties: we don’t want the robot telling us we can’t have a mortgage because of the colour of our skin, for example. Medium risk is the processing of data that might be construed as personally identifiable, and low risk is your spell-check and other mundane, under-the-radar AI usage.
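
To illustrate the triage, here is a minimal sketch mapping two questions onto those three tiers. It is a deliberate simplification of the EU AI Act's categories for illustration only, not a substitute for legal advice.

    # Minimal sketch: map a use case onto the high/medium/low tiers above.
    def classify_ai_risk(affects_rights: bool, handles_pii: bool) -> str:
        if affects_rights:   # decisions touching civil liberties or
            return "high"    # protected characteristics (e.g. mortgages)
        if handles_pii:      # personally identifiable data in the loop
            return "medium"
        return "low"         # spell-check and other under-the-radar use

    print(classify_ai_risk(affects_rights=True, handles_pii=False))   # high
    print(classify_ai_risk(affects_rights=False, handles_pii=False))  # low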

(7) Ownership and Accountability

Governance 101: you want to assign ownership and accountability for the high-risk systems. Name one accountable owner per high-risk system and a simple escalation rule (who signs off, when, and with what evidence). Again, this is an adaptation of existing controls, not a rewrite.
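
A minimal sketch of one such record, with all names, triggers and evidence purely illustrative, might look like this:

    # Minimal sketch: one accountable owner per high-risk system plus a
    # simple escalation rule (who signs off, when, with what evidence).
    high_risk_owners = {
        "HR screening assistant": {
            "owner": "Head of HR",
            "sign_off": "CIO",                 # who signs off
            "escalate_when": "model change",   # when
            "evidence": "updated impact assessment and test results",
        },
    }

    for system, rule in high_risk_owners.items():
        print(f"{system}: owner {rule['owner']}; {rule['sign_off']} signs "
              f"off on {rule['escalate_when']} with {rule['evidence']}")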

(8) Guardrails on spend

Tag AI spend, set a monthly cap, and circulate usage snapshots to drive behaviour (e.g. top use cases, model versions, exceptions). See also the FinOps Foundation’s FinOps for AI overview: https://www.finops.org/wg/finops-for-ai-overview/
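
As a minimal sketch, assuming a tagged billing export, the cap check and snapshot could start as simply as this (the tags, figures and cap are all illustrative):

    # Minimal sketch: total tagged AI spend against a monthly cap and
    # print a snapshot to circulate.
    from collections import defaultdict

    MONTHLY_CAP = 10_000.00  # illustrative cap

    # (tag, amount) pairs as they might come from a tagged billing export
    spend_lines = [("copilot-pilot", 3200.0), ("llm-api", 4100.0),
                   ("copilot-pilot", 1500.0)]

    totals = defaultdict(float)
    for tag, amount in spend_lines:
        totals[tag] += amount

    total = sum(totals.values())
    print(f"AI spend this month: {total:,.2f} of cap {MONTHLY_CAP:,.2f}")
    for tag, amount in sorted(totals.items(), key=lambda t: -t[1]):
        print(f"  {tag}: {amount:,.2f}")
    if total > MONTHLY_CAP:
        print("Over cap: review exceptions with the accountable owners")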

The case for automation – Time to lean on your ITAM tooling suppliers

To repeat the point above: this is about adapting existing systems you should already have. Most organisations will already be managing data protection, managing high-risk data assets, and operating information security controls; they should also have grey/shadow IT controls. So, for most, AI governance will be a modification of existing governance systems.

Looking forward, nobody needs another spreadsheet of AI systems to maintain, or another list to supply to regulators. Automation is required here, so push these questions back to your ITAM, SAM, procurement and licence management tools and service providers:

  • Q. What are you doing to help us discover, identify and classify the risk of AI in our estate?
  • Q. How will you help us automate this process in a robust and comprehensive way given the potential penalties and stakes involved?
  • Q. For AI entering our enterprises outside of existing channels (asset request, change control), how will you help us find traces of AI use to protect us from AI risk?

I’m not suggesting this is an exhaustive governance model for AI, but it’s a starting point. Let me know how you are getting on with AI Governance and what’s worked for your environment.

About Martin Thompson

Martin is the founder of ITAM Forum, a not-for-profit trade body for the advancement of IT Asset Management.

He is also the author of "Practical ITAM - The essential guide for IT Asset Managers", which describes how to get started and make a difference in the field of IT Asset Management. In addition, Martin developed the PITAM training course and certification.

Connect with Martin on LinkedIn.
