What we know so far is that Travelex have followed police guidance, declined to pay the ransom demanded by the hackers, and are rebuilding their IT estate from scratch. Maersk took the same approach following NotPetya (for more on that see our article here). The difference is that Maersk took around two weeks to complete the restoration of service and were very proactive in keeping stakeholders updated. That proactive communication has been missing from Travelex’s response to date. Could their ITAM team be helping them to restore service more quickly?
The impact has been significant. Other institutions relying on Travelex have been unable to conduct foreign exchange business, and Travelex themselves have resorted to manual methods. Despite this, parent company Finablr see no impact on 2019 or 2020 revenues. The market seems to disagree – whilst the stock was already under pressure prior to the attack, it has halved in value since.
So, what role is their ITAM team playing in all this? Could their ITAM team have enabled them to be better prepared? First things first: it has been widely reported that the ransomware was deployed to their network via unpatched Virtual Private Network (VPN) software. Whilst patching is not an ITAM responsibility, ITAM teams could report the level of potentially vulnerable software deployed on the network. For more on this see this article. Software recognition is a core capability for ITAM toolsets, and some go further by cross-referencing installed software against known vulnerability lists. That could have flagged up the VPN issue before it was exploited by the hackers. Clearly that didn’t happen at Travelex, and now they’re having to restore service from scratch.
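As a sketch of that cross-referencing idea – using hypothetical inventory rows and a made-up vulnerable-version list, not real data – matching discovered installs against known-bad versions can be as simple as a set lookup:

```python
# Illustrative only: product names, versions, and hosts are invented for the example.
# (The Travelex VPN issue was widely reported as an unpatched VPN appliance.)
vulnerable = {
    ("ExampleVPN", "9.0R1"),   # hypothetical vulnerable VPN release
    ("ExampleApp", "2.3"),
}

inventory = [
    {"host": "vpn-gw-01", "product": "ExampleVPN", "version": "9.0R1"},
    {"host": "fileserver", "product": "ExampleApp", "version": "2.4"},
]

def flag_vulnerable(inventory, vulnerable):
    """Return inventory rows whose (product, version) pair is on the vulnerable list."""
    return [row for row in inventory
            if (row["product"], row["version"]) in vulnerable]

for hit in flag_vulnerable(inventory, vulnerable):
    print(f'{hit["host"]}: {hit["product"]} {hit["version"]} needs urgent patching')
```

A real ITAM toolset does the recognition and normalisation for you; the point is that once versions are normalised, flagging known-vulnerable software is a trivial join.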
If you were the Travelex ITAM team how might you be helping restore service? What actions could you take now to prepare for an attack on your organisation?
To restore service, we first need to know what our estate looked like, in detail. Whilst this is a Configuration Management task, clearly ITAM will have discovery and inventory data that can help. It’s very likely that SAM data will be more detailed in terms of versions, etc. – plus all the peripheral software that might not be in your CMDB. Inevitably there will be discrepancies between the CMDB, ITAM tooling, and other records. By working together, it’s possible to get a more detailed picture of your estate prior to the attack. Of course, this assumes that you have access to your ITAM data – which won’t be the case if it only exists on a server on your now-dead corporate network. Certainly something to consider if you’re deciding on an on-premises vs partner vs cloud hosting model for your ITAM tool.
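At its simplest, reconciling those sources is a set comparison. A minimal sketch with invented hostnames:

```python
# Hypothetical exports: hostnames from a CMDB and from ITAM discovery tooling.
cmdb_hosts = {"web-01", "web-02", "db-01"}
discovered_hosts = {"web-01", "db-01", "db-02", "legacy-app-01"}

only_in_cmdb = cmdb_hosts - discovered_hosts        # recorded but never discovered
only_in_discovery = discovered_hosts - cmdb_hosts   # discovered but never recorded
in_both = cmdb_hosts & discovered_hosts             # agreed baseline for restoration

print(f"CMDB only: {sorted(only_in_cmdb)}")
print(f"Discovery only: {sorted(only_in_discovery)}")
```

The "discovery only" bucket is usually where the peripheral software and shadow IT live – exactly the detail a CMDB-driven restoration plan would otherwise miss.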
Is your Definitive Software Library (DSL) offsite? Or is it stored electronically on a network file share? If the latter, then – as in Travelex’s case – it’s probably now encrypted and inaccessible. Ensure that you have an offsite, cold, offline backup of the current deployed versions of all your software and keys. If that’s not possible, ensure that the server containing your DSL & your ITAM data is given a high restore priority in your organisation’s Disaster Recovery/Business Continuity plan. It’s not good enough to be able to download replacement software from vendor portals – that will simply take too long, and SAM will get in the way of restoration activities, which won’t look good.
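One way to gain confidence in that offline copy is a checksum manifest you can re-run against the backup. A minimal sketch using only the Python standard library (the DSL directory layout is assumed, not prescribed):

```python
import hashlib
import pathlib

def checksum_manifest(dsl_root):
    """SHA-256 every file under the DSL root so an offline copy can be verified later."""
    root = pathlib.Path(dsl_root)
    return {
        str(path.relative_to(root)): hashlib.sha256(path.read_bytes()).hexdigest()
        for path in sorted(root.rglob("*"))
        if path.is_file()
    }

# Comparing the manifest of the live DSL with one generated from the offline
# backup reveals missing or corrupted installers *before* you need them in anger.
```

Run it as part of your regular backup verification, not for the first time mid-incident.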
Do you have a secure offsite store of credentials for vendor portals? If your network is as dead as Travelex’s appears to be, you won’t have immediate or easy access to the vendor portals, which will be critical for acquiring media for anything you don’t have in your DSL. You may find that your corporate network, once restored, is isolated from the internet, so you won’t be able to use your business connectivity to reach these portals. And if your business email isn’t yet restored, you may not be able to do password resets for portal accounts whose passwords you don’t know. Therefore, test your ability to access vendor portals in the event of a network failure or non-availability of corporate email.
Travelex, as Maersk did before them, are buying brand-new personal computing kit, and one would imagine server kit as well. This won’t be the same spec as what was previously installed. For the datacentre, check that processor/core counts aren’t increasing, and make sure lower-level technical config such as clustering and hyperthreading matches what was in place prior to the attack. You may be increasing capacity simply because you can’t buy quad-core processors anymore, so make sure your server teams aren’t generating a non-compliance for software licensed by per-core metrics. Engage with your server teams and make them aware of processor and core restrictions. This might not be popular, as it may get in the way of service restoration, but a failure to meet the sub-capacity requirements for licensing certain IBM or Oracle products will be even less popular.
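To illustrate the per-core point with deliberately invented entitlement figures and server specs (real Oracle and IBM metrics also involve core factors and sub-capacity rules, which this sketch ignores):

```python
# Hypothetical entitlement: cores licensed per product. Figures are invented.
licensed_cores = {"ExampleDB-EE": 32}

# Replacement servers as bought during restoration - note the higher core counts.
new_servers = [
    {"host": "db-new-01", "product": "ExampleDB-EE", "sockets": 2, "cores_per_socket": 12},
    {"host": "db-new-02", "product": "ExampleDB-EE", "sockets": 2, "cores_per_socket": 8},
]

def deployed_cores(servers, product):
    """Total physical cores running a given per-core-licensed product."""
    return sum(s["sockets"] * s["cores_per_socket"]
               for s in servers if s["product"] == product)

for product, entitlement in licensed_cores.items():
    used = deployed_cores(new_servers, product)
    if used > entitlement:
        print(f"{product}: {used} cores deployed vs {entitlement} licensed - shortfall")
```

Even this toy check shows how "we couldn’t buy quad-cores" quietly turns a compliant 32-core estate into a 40-core exposure.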
Make note of any software that requires hardware changes to be notified to the publisher. Examples include Autodesk & Quest, if volume license keys aren’t being used. Check your license agreements carefully for reassignment rights and ensure you meet them and notify where appropriate. Check the specific terms around Disaster Recovery and Business Continuity. Publishers may be willing to grant temporary rights to help you through the recovery process. Equally, some may see this disruption as a trigger for audit activity.
Once everything is restored and running smoothly it’s very likely that your ITAM data, particularly if it’s held offsite, will be full of duplicates or data about the now dead and buried former network. Work with your ITAM tool vendor/partner to clean this up – you may need to ask for a temporary uplift in your licensed capacity because it is vital that you get good inventory and discovery data quickly once the network is restored. Multiple stakeholders will need this information as the network moves from restoration to business-as-usual operation.
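Cleaning up that post-restoration data often comes down to keeping the most recent record per host. A toy sketch with invented records:

```python
from datetime import date

# Illustrative only: after a rebuild, the same hostname can appear twice -
# once from pre-attack data and once from the newly discovered machine.
records = [
    {"host": "web-01", "last_seen": date(2019, 12, 30)},  # pre-attack record
    {"host": "web-01", "last_seen": date(2020, 1, 20)},   # rebuilt machine
    {"host": "db-01", "last_seen": date(2019, 12, 28)},   # not yet rediscovered
]

def latest_per_host(records):
    """Keep only the most recently seen record for each hostname."""
    best = {}
    for r in records:
        if r["host"] not in best or r["last_seen"] > best[r["host"]]["last_seen"]:
            best[r["host"]] = r
    return best
```

In practice your tool vendor will have proper dedup logic; the point is to decide explicitly which record wins, and to keep the stale ones somewhere auditable rather than silently deleting them.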
Finally – for next time, or before this happens to you – work with your Business Continuity and/or IT Security teams and ensure that you’re included in the Incident Response Plan. You and your team are a critical resource in the event of your IT estate being ground-zeroed by a cyber-attack. You need to be involved in the table-top exercises and simulations that good IT Security/Business Continuity Planning teams will be running on a quarterly or six-monthly basis to test their response to such incidents.
This article has outlined some practical steps to ensure that you and your team are ready to respond to a cyber-attack or some other event requiring large-scale restoration of service. Whilst ITAM isn’t directly responsible for IT Security & BCP, there are actions you need to take to ensure that ITAM isn’t blocking service restoration. Once that baseline capability is in place, ITAM can step up and play an active role in recovering from disaster. It remains to be seen when Travelex will return to full service, but I don’t recall a longer outage for such a large, global corporation. It should be a wake-up call to us all to prepare for the next large cyber-attack.