The asset naming conventions of the early 2010s were designed for static, on-premises IT infrastructures. Today, organisations operate in highly dynamic environments with widespread remote work, extensive cloud adoption, and increasingly ephemeral workloads driven by containerisation and automation.
This formal guide presents a modern framework for naming IT assets that addresses these hybrid realities.
Modern IT infrastructures must support employees working remotely, in the office, or a combination of both. In addition, organisations increasingly rely on a workforce made up of both permanent employees and contractors. Naming conventions should account for both access mode (remote, office, hybrid) and role type (employee or contractor) to enable targeted policy enforcement, security monitoring, and inventory reporting.
Cloud resources can span multiple regions, accounts, and platforms. A standardised naming convention ensures consistency and clarity for cost management, security auditing, and operational oversight.
Ephemeral workloads—such as containers, serverless functions, and short-lived VMs—require naming practices that support traceability. These names should be designed to remain meaningful even after the asset has been deprovisioned.
A robust naming convention supports policy enforcement and automation across environments. It can be enforced through IaC tools, CI/CD pipelines, and cloud policy engines to ensure consistent compliance.
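In practice, such enforcement reduces to pattern validation at provisioning time or in a CI/CD gate. The following is a minimal Python sketch, assuming the HOSTING-ACCESS-ROLE-TYPE-LOCATION-SUFFIX structure used in the examples later in this guide; the exact field vocabularies are illustrative, not prescriptive:

```python
import re

# Hypothetical pattern for names like CLD-RMT-EMP-WKS-R2-983F:
# hosting - access mode - role type - asset type - location code - suffix
ASSET_NAME_RE = re.compile(
    r"^(CLD|ONP|CNT)"    # hosting: cloud, on-premises, container
    r"-(RMT|OFF|HYB)"    # access mode: remote, office, hybrid
    r"-(EMP|CTR|SYS)"    # role type: employee, contractor, system
    r"-[A-Z]{2,4}"       # asset type, e.g. WKS, APP, DB, VDI
    r"-[A-Z]\d+"         # obfuscated location code, e.g. R2, Z1, X1
    r"-[A-Z0-9]{4}$"     # unique 4-character suffix
)

def is_valid_asset_name(name: str) -> bool:
    """Return True if the name conforms to the convention."""
    return ASSET_NAME_RE.fullmatch(name) is not None
```

A check like this can run as a pre-deployment policy test, rejecting any resource whose name does not match before it is created.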
Asset names should avoid revealing sensitive or identifiable information: never embed usernames, business functions, or cloud provider names. Where a provider (e.g. AWS, Azure) must be indicated, assign it an obfuscated code for public-facing use and maintain the mapping internally.
Readable location codes help operational teams, but they can also expose your infrastructure layout to anyone who sees the names. Obfuscation reduces this risk.
Recommendation: use short codes such as R1 and R2, and maintain a private mapping internally.
| Obfuscated Code | Actual Location | Region/Time Zone | Notes |
| --- | --- | --- | --- |
| R1 | London, UK | Europe/London | Primary EU datacenter |
| R2 | New York City, USA | America/New_York | Core East Coast site |
| R3 | Sydney, Australia | Australia/Sydney | Asia-Pacific users |
| R4 | Frankfurt, Germany | Europe/Berlin | GDPR-compliant infrastructure |
| Z1 | Tokyo, Japan | Asia/Tokyo | Low-latency APAC workloads |
| Z2 | São Paulo, Brazil | America/Sao_Paulo | South American presence |
| X1 | Internal VPN Zone | N/A | Not location-specific |
| X2 | Global CDN / Edge Nodes | Multiple | Used for caching proxies |
| C1 | Azure | As per location code (e.g. USW1) | Workload running in the Azure US West 1 datacenter |
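The private side of this scheme is simply a lookup table. A minimal sketch of how the internal mapping might be kept and queried (entries abbreviated; in practice the map would live in a restricted configuration store or CMDB, never in public tags or source control):

```python
# Excerpt of the internal mapping for the obfuscated codes above.
# Illustrative only: the authoritative copy belongs in a restricted store.
LOCATION_MAP = {
    "R1": {"location": "London, UK", "tz": "Europe/London"},
    "R2": {"location": "New York City, USA", "tz": "America/New_York"},
    "R4": {"location": "Frankfurt, Germany", "tz": "Europe/Berlin"},
    "X1": {"location": "Internal VPN Zone", "tz": None},
    "C1": {"location": "Azure (provider code)", "tz": None},
}

def resolve_location(code: str) -> str:
    """Translate a public obfuscated code to its internal meaning."""
    entry = LOCATION_MAP.get(code)
    return entry["location"] if entry else "unknown"
```

Keeping resolution behind a single function makes it easy to swap the in-memory dictionary for a call to the restricted store later.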
| Asset Name | Description |
| --- | --- |
| CLD-RMT-EMP-WKS-R2-983F | Cloud-managed workstation for remote employee based in R2 (New York) |
| ONP-OFF-CTR-WKS-R1-0023 | On-premises desktop for contractor in R1 (London) |
| CLD-HYB-EMP-APP-R4-44D8 | Cloud app accessed by hybrid-mode employees in R4 (Frankfurt) |
| CNT-HYB-SYS-API-Z1-V1F2 | System-owned container API in Z1 (Tokyo), hybrid accessibility |
| ONP-OFF-SYS-DC-R1-0001 | On-prem domain controller in London |
| CLD-RMT-EMP-VDI-R3-A7E1 | Virtual desktop for remote employee in Sydney |
| CLD-HYB-EMP-DB-R2-5C9B | Cloud database accessed by hybrid employee base in New York |
| CLD-OFF-EMP-WEB-X2-DF19 | Web server on global edge (X2), accessed from office networks |
| CLD-RMT-CTR-WKS-X1-AB43 | Contractor’s remote laptop via VPN zone (X1) |
| CNT-RMT-SYS-JOB-Z2-31F8 | Remote job-running container in São Paulo (Z2), used by backend service |
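Because every example above follows the same six-field pattern, inventory and reporting scripts can decompose a conforming name into its parts. A small sketch, assuming the HOSTING-ACCESS-ROLE-TYPE-LOCATION-SUFFIX field order (the field labels here are our own, chosen for readability):

```python
from typing import NamedTuple

class AssetName(NamedTuple):
    """Fields of a convention-conforming asset name, in order."""
    hosting: str      # CLD, ONP, CNT
    access: str       # RMT, OFF, HYB
    role: str         # EMP, CTR, SYS
    asset_type: str   # WKS, APP, DB, ...
    location: str     # obfuscated code, e.g. R2
    suffix: str       # unique 4-character identifier

def parse_asset_name(name: str) -> AssetName:
    """Split a name like CLD-RMT-EMP-WKS-R2-983F into its six fields."""
    parts = name.split("-")
    if len(parts) != 6:
        raise ValueError(f"expected 6 dash-separated fields, got {len(parts)}")
    return AssetName(*parts)
```

This makes it straightforward to, for example, group a fleet report by `location` or filter contractor-assigned assets by `role == "CTR"`.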