8 Dimensions of Trustworthy ITAM Data

Written by: Martin Thompson

Published on: January 28, 2026

Accurate ITAM data strengthens careers and helps expand teams.

I’ve seen it with my own eyes: people drive up the accuracy of their ITAM data, referred to as “Trustworthy Data” in the ISO/IEC 19770-1 ITAM standard, and their reputation climbs with it. Decisions are sharper, fewer things get overlooked, and more resources are allocated to ITAM teams.

Most organisations don’t suffer from a lack of data. They suffer from a lack of trust in the data they already have. If data isn’t trusted, there can be scepticism over the effectiveness of the ITAM team.

In IT Asset Management, that lack of trust shows up everywhere: licence audits that drag on for months, security teams arguing about how many endpoints really exist, finance questioning asset numbers, and ITAM teams stuck reconciling spreadsheets instead of improving outcomes.

So, we hear “our data isn’t accurate” and “we need to improve data quality”. But how?

How do we improve ITAM data quality?

The first thing to note is that accuracy is not a single thing.

One of the reasons accuracy initiatives fail is that accuracy is treated as a single score, a percentage on a dashboard, or a vague aspiration. In practice, when someone says ITAM data is “inaccurate”, they’re usually reacting to a very specific failure:

  • assets that exist but aren’t recorded
  • records that are out of date
  • software that’s misclassified
  • duplicates inflating counts
  • systems that don’t agree
  • missing owners or cost centres
  • numbers that can’t be explained when challenged

Lumping all of this into one word, “accuracy”, guarantees confusion.

Borrowing from data quality research, notably Wang &amp; Strong’s work in the 1990s, we can say that data quality is multi-dimensional, and whether it is fit for purpose depends on the decision we’re trying to support with it. In other words, data can be “accurate” for one purpose and dangerously misleading for another.

If you strip the academic language away and look at real IT estates, data reliability in ITAM tends to break down in eight distinct ways. These are not abstract concepts; they are practical questions teams ask every day, often without realising they’re different problems.

I’ve listed them in what I consider to be descending order of business impact, so I reckon getting on top of “coverage” has more impact than “validity”. But, as we’ve just mentioned, it really depends on what you are looking to achieve.

8 Dimensions of Trustworthy ITAM data

Below are the 8 dimensions of Trustworthy ITAM data. For each one, I cover the classic signs of failure when data is inaccurate, the technology and processes required to address that area, and how you might go about measuring “accuracy”.


1. Coverage – do we know about everything?

This is the foundation. If an asset exists (a laptop, a VM, a SaaS subscription, a cloud account) but isn’t visible in your data, every decision that follows is already compromised.

Business return of getting this right: Good coverage reduces security blind spots, prevents audit surprises, and stops you paying for assets you didn’t know you had.

Signs of failure

  • Security finds laptops or servers that “aren’t in the asset register”
  • Finance sees spend for devices or cloud accounts IT “doesn’t recognise”
  • Licence audits uncover installs on machines you didn’t know existed
  • Vulnerability scans show more endpoints than ITAM reports

Technology and processes required

  • Endpoint management (e.g. Windows laptops via Intune, Macs via Jamf)
  • Network or vulnerability discovery to spot unmanaged devices
  • Cloud account inventory (all AWS/Azure subscriptions known and owned)
  • Process to onboard every new asset source into ITAM

How to measure it

  • Compare ITAM inventory vs endpoint tool device count
  • Compare ITAM inventory vs vulnerability scanner asset count
  • Count “seen on network but not in ITAM” devices
  • Percentage of cloud accounts/subscriptions represented in ITAM
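
As a rough illustration of the first two checks above, here is a minimal Python sketch that compares serial numbers across two CSV exports. The file names and the “serial” column are assumptions for illustration, not the export format of any particular tool.

    import csv

    def load_serials(path: str, column: str = "serial") -> set[str]:
        """Read one column from a CSV export into a set of normalised serial numbers."""
        with open(path, newline="") as f:
            return {
                row[column].strip().upper()
                for row in csv.DictReader(f)
                if row.get(column, "").strip()
            }

    # Hypothetical exports from the ITAM tool and the endpoint tool (e.g. Intune/Jamf)
    itam = load_serials("itam_inventory.csv")
    endpoint = load_serials("endpoint_tool.csv")

    missing_from_itam = endpoint - itam  # managed by the endpoint tool, unknown to ITAM
    coverage = len(itam &amp; endpoint) / len(endpoint) if endpoint else 0.0

    print(f"Coverage vs endpoint tool: {coverage:.1%}")
    print(f"Devices not in ITAM: {len(missing_from_itam)}")

The same pattern works against a vulnerability scanner export: anything the scanner sees that ITAM doesn’t is a coverage gap.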

2. Correct classification – are we classifying it correctly?

Knowing that something exists is not enough. You also need to understand what it is. A server mis-recorded as a laptop, the wrong Oracle edition, or SaaS plans mixed together will all lead to bad decisions.

Business return of getting this right: Correct classification is where most licence savings come from, and where most audit pain originates when it’s wrong.

Signs of failure

  • Laptops recorded as servers (or vice versa)
  • Oracle or SQL Server editions mis-identified
  • Microsoft 365 E3 and E5 users mixed together
  • Production and non-production servers treated the same in licensing

Technology and processes required

  • ITAM/SAM tools with software recognition and normalisation
  • Clear definitions of asset types (server, laptop, VM, SaaS user, etc.)
  • Agreed rules for environments (prod, test, dev)
  • Review process for high-risk vendors and top spend software

How to measure it

  • Spot-check top vendors for correct product and edition mapping
  • Percentage of assets correctly classified by type
  • Number of manual licence corrections required during audits
  • Mismatch rate between ITAM classification and endpoint/cloud reality
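
To make the last measure concrete, here’s a minimal sketch of a mismatch-rate check, assuming you can join both systems on a stable identifier. The inline data is purely illustrative.

    # Asset type as recorded by ITAM vs what the endpoint tool reports
    itam_types = {"SN001": "laptop", "SN002": "server", "SN003": "laptop"}
    endpoint_types = {"SN001": "laptop", "SN002": "laptop", "SN004": "server"}

    common = itam_types.keys() &amp; endpoint_types.keys()
    mismatches = {sn for sn in common if itam_types[sn] != endpoint_types[sn]}

    print(f"Classification mismatch rate: {len(mismatches) / len(common):.1%}")  # 50.0% (SN002)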

3. Timeliness – is it still true?

Data that was accurate last quarter may be useless today. Devices are rebuilt, users leave, cloud resources are spun up and torn down, SaaS access changes daily.

Business return of getting this right: Current data enables faster security response, effective licence reclaim, and fewer operational failures caused by stale information.

Signs of failure

  • Decommissioned servers still showing as active
  • Leavers still assigned Microsoft 365 or Salesforce licences
  • Devices last seen months ago still counted
  • Cloud resources deleted but still reported for cost

Technology and processes required

  • Endpoint tools reporting “last seen”
  • SaaS management pulling usage and login data
  • Joiner/mover/leaver process tied to ITAM and SaaS tools
  • Regular refresh of cloud inventory

How to measure it

  • Age of “last seen” or “last updated” field
  • Percentage of assets updated in last 7 / 30 days
  • Number of licences assigned to inactive users
  • Time between HR event and ITAM update
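
A minimal sketch of the “last seen” ageing check, assuming your endpoint tool can report a last-seen timestamp per device; the timestamps here are invented.

    from datetime import datetime, timedelta, timezone

    now = datetime.now(timezone.utc)
    last_seen = {  # invented timestamps standing in for an endpoint tool export
        "SN001": now - timedelta(days=2),
        "SN002": now - timedelta(days=45),
        "SN003": now - timedelta(days=200),
    }

    fresh = [sn for sn, seen in last_seen.items() if now - seen &lt;= timedelta(days=30)]
    oldest = min(last_seen, key=last_seen.get)  # stalest record, first candidate for review

    print(f"Updated in last 30 days: {len(fresh) / len(last_seen):.1%}")
    print(f"Stalest asset: {oldest} ({(now - last_seen[oldest]).days} days)")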

4. Uniqueness – are we counting it just once?

Duplicates quietly destroy trust. Reimaged laptops, renamed hosts, and cloned VMs can all inflate numbers without anyone noticing.

Business return of getting this right: Removing duplicates reduces inflated licence counts, support costs, and false risk exposure.

Signs of failure

  • Same laptop appears twice after reimaging
  • Virtual machines cloned and double-counted
  • Hostname changes create new records
  • Licence counts higher than actual users or devices

Technology and processes required

  • CMDB or ITAM deduplication rules
  • Stable identifiers (serial number, UUID, cloud instance ID)
  • Reconciliation logic when multiple tools report the same asset
  • Clear rules for retired vs active assets

How to measure it

  • Number of duplicate records per asset class
  • Difference between raw discovery count and reconciled count
  • Manual clean-up effort required per month
  • Duplicate rate on active assets
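
Here’s a sketch of the duplicate-rate measure: group records by a stable identifier and compare the raw discovery count with the reconciled count. The serial numbers are illustrative.

    from collections import Counter

    # Raw discovery records keyed by serial number (illustrative data)
    records = ["SN001", "SN002", "SN001", "SN003", "SN002", "SN001"]
    counts = Counter(records)
    duplicated = {sn: n for sn, n in counts.items() if n > 1}

    print(f"Raw records: {len(records)}, reconciled assets: {len(counts)}")
    print(f"Duplicate rate: {1 - len(counts) / len(records):.1%}")  # 50.0% here
    print(f"Assets with duplicates: {duplicated}")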

5. Consistency – do our systems agree?

ITAM, security, finance, CMDB, identity: if each sees a different version of the same asset, confidence collapses and reconciliation work explodes.

Business return of getting this right: Consistency reduces time spent arguing about numbers and enables decisions to be made faster, with less friction.

Signs of failure

  • ITAM says 10,000 devices, security says 11,500
  • Finance reports different asset totals than IT
  • CMDB owner doesn’t match endpoint or directory owner
  • Cloud costs can’t be tied back to services or teams

Technology and processes required

  • Defined “system of record” per data field
  • CMDB reconciliation rules
  • Integration between ITAM, security, finance, and identity systems
  • Agreement on which system wins conflicts

How to measure it

  • Attribute match rate between systems (e.g. owner, asset type)
  • Number of unresolved reconciliation conflicts
  • Time spent manually reconciling reports
  • Variance between ITAM and finance/security counts
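
An attribute match rate is straightforward to compute once two systems can be joined on the same identifier. A minimal sketch, with invented owner data:

    # The "owner" attribute as held by ITAM and the CMDB (illustrative data)
    itam_owner = {"SN001": "alice", "SN002": "bob", "SN003": "carol"}
    cmdb_owner = {"SN001": "alice", "SN002": "dave", "SN003": "carol"}

    common = itam_owner.keys() &amp; cmdb_owner.keys()
    matches = sum(itam_owner[sn] == cmdb_owner[sn] for sn in common)
    conflicts = [sn for sn in common if itam_owner[sn] != cmdb_owner[sn]]

    print(f"Owner match rate: {matches / len(common):.1%}")  # 66.7% here
    print(f"Unresolved conflicts: {conflicts}")  # ['SN002']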

6. Completeness – do we have what we need to support a decision?

An asset record without an owner, cost centre, or lifecycle state may exist, but it can’t be governed properly.

Business return of getting this right: Completeness enables chargeback, accountability, lifecycle management, and audit defensibility.

Signs of failure

  • Assets with no owner or cost centre
  • SaaS apps with no business owner
  • Servers with no environment or support group
  • Chargeback or showback not possible

Technology and processes required

  • Mandatory fields enforced in ITAM/CMDB
  • Procurement feeds to populate financial data
  • Ownership assignment during onboarding
  • Periodic data quality reviews

How to measure it

  • Percentage of assets with all mandatory fields populated
  • Missing owner or cost centre counts
  • Completeness by asset class (servers vs laptops vs SaaS)
  • Trend of missing data over time
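
A sketch of the mandatory-field check; the record layout and the list of mandatory fields are assumptions, so substitute your own schema:

    MANDATORY = ("owner", "cost_centre", "lifecycle_state")

    assets = [  # illustrative records
        {"serial": "SN001", "owner": "alice", "cost_centre": "CC10", "lifecycle_state": "active"},
        {"serial": "SN002", "owner": "", "cost_centre": "CC10", "lifecycle_state": "active"},
        {"serial": "SN003", "owner": "bob", "cost_centre": None, "lifecycle_state": "retired"},
    ]

    def gaps(asset: dict) -> list[str]:
        """Return the mandatory fields that are empty or missing for one record."""
        return [f for f in MANDATORY if not asset.get(f)]

    complete = [a for a in assets if not gaps(a)]
    print(f"Complete records: {len(complete)}/{len(assets)} ({len(complete) / len(assets):.0%})")
    for a in assets:
        if gaps(a):
            print(f"{a['serial']}: missing {', '.join(gaps(a))}")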

7. Provenance – do we know where this data came from?

When a number is challenged, can you explain which system it came from, when it was updated, and why it should be trusted?

Business return of getting this right: Clear provenance shortens audits, strengthens credibility with finance and vendors, and reduces rework.

Signs of failure

  • Nobody can explain where a number came from
  • Audit questions require manual investigation
  • Different teams distrust each other’s reports
  • Data changed but no one knows why

Technology and processes required

  • Source tracking in ITAM and CMDB
  • Audit trails for updates and overrides
  • Clear ownership of data sources
  • Documentation of reconciliation rules

How to measure it

  • Percentage of fields with known source and timestamp
  • Number of manual overrides without explanation
  • Audit queries resolved without rework
  • Time to explain a reported number
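
One way to make provenance measurable is to store each attribute with its source system and update timestamp. A minimal sketch, with an invented record structure:

    # Each attribute carries its value, source system, and last-update timestamp
    record = {
        "serial": {"value": "SN001", "source": "Intune", "updated": "2026-01-10"},
        "owner": {"value": "alice", "source": "HR feed", "updated": "2026-01-12"},
        "cost_centre": {"value": "CC10", "source": None, "updated": None},  # untracked manual override
    }

    tracked = [f for f, meta in record.items() if meta["source"] and meta["updated"]]
    print(f"Fields with known source and timestamp: {len(tracked)}/{len(record)}")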

8. Validity – does it look right?

Finally, there’s basic hygiene. Invalid serial numbers, impossible dates, free text where structured values are expected: these don’t usually cause big problems on their own, but they undermine everything else.

Business return of getting this right: Validity checks prevent downstream matching failures and stop bad data spreading across systems.

Signs of failure

  • Serial numbers like “UNKNOWN” or “12345”
  • Dates in the future or clearly wrong
  • Free text in structured fields
  • Assets failing to match across systems due to bad values

Technology and processes required

  • Validation rules in ITAM and CMDB
  • Drop-downs and controlled values instead of free text
  • Automated checks on data ingestion
  • Rejection or flagging of invalid records

How to measure it

  • Validation rule failure rate
  • Number of records rejected or flagged
  • Fields with non-conforming values
  • Reduction in matching errors after validation
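
To close, a sketch of ingestion-time validation: reject placeholder serials and impossible dates. The placeholder list and serial pattern are assumptions; use the formats your vendors actually emit.

    import re
    from datetime import date

    PLACEHOLDERS = {"UNKNOWN", "12345", "N/A", "TBD"}
    SERIAL_RE = re.compile(r"^[A-Z0-9]{6,20}$")  # assumed house format, not a real vendor's

    def validate(serial: str, purchase_date: date) -> list[str]:
        """Return a list of validation failures for one record."""
        errors = []
        s = serial.strip().upper()
        if s in PLACEHOLDERS or not SERIAL_RE.match(s):
            errors.append(f"invalid serial: {serial!r}")
        if purchase_date > date.today():
            errors.append(f"purchase date in the future: {purchase_date}")
        return errors

    print(validate("UNKNOWN", date(2030, 1, 1)))    # both rules fail
    print(validate("ABC123XYZ", date(2024, 6, 1)))  # [] means the record passes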

Summary

By looking at these 8 dimensions we can:

  • diagnose why data isn’t trusted, not just that it isn’t
  • focus effort where the business impact is highest
  • set realistic tolerances instead of chasing perfection
  • align tools and processes to specific failure modes

Trustworthy data is about decisions, not dashboards.

Perfect data does not exist in a living IT estate. Assets move, systems lag, humans make changes, and automation is never complete.

Trustworthy data is not data without flaws.

It is data whose limits are understood, whose accuracy is measurable, and whose quality is sufficient for the decisions it supports.
