Cloud security vendor Wiz, which recently made headlines by discovering a massive vulnerability in Microsoft Azure's Cosmos DB managed-database service, has found another hole in Azure.
The new vulnerability affects Linux virtual machines on Azure, which end up with a little-known service called OMI installed as a by-product of enabling any of several logging, management, and/or reporting options in the Azure UI.
In the worst case, the vulnerability in OMI could be exploited for remote root code execution, although thankfully Azure's own firewall outside the virtual machine will, by default, limit most customers' exposure to their own internal networks.
Opting in to any of several attractive Azure infrastructure services (such as distributed logging) causes a little-known service to be installed automatically inside the Azure virtual machine in question. That service, OMI, short for Open Management Infrastructure, is designed to function much like Microsoft Windows' WMI service, enabling collection of logs and metrics as well as some remote management.
Part of the OMI specification calls for authentication to bind commands and requests to a specific user ID (UID), but unfortunately, a bug caused requests that omit the authentication stanza entirely to be accepted as though given by the root user itself.
When configured for remote management, OMI runs an HTTPS server on port 5986, which can be connected to with a standard HTTPS client such as curl and given reasonably human-readable commands in the XML-derived SOAP protocol. In other configurations, OMI only listens on a local Unix socket at /var/opt/omi/run/omiserver.sock, which limits exploitation to local users only.
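To illustrate the shape of the problem, here is a minimal Python sketch of the kind of SOAP-over-HTTPS request OMI accepts. The envelope below is illustrative rather than the exact proof of concept (the element and class names are assumptions for demonstration); the crucial detail is what's missing: the request carries no Authorization header at all, which the bug treated as a request from root.

```python
# Illustrative sketch only: the SOAP body below is simplified, not the real
# OMIGOD proof of concept. The key point is that no Authorization header
# is sent at all; the bug treated such requests as coming from root.

def build_omi_request(host: str, command: str) -> str:
    """Build an unauthenticated SOAP-over-HTTP request resembling an OMI call."""
    body = (
        '<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope">'
        "<s:Body>"
        "<p:ExecuteShellCommand_INPUT "
        'xmlns:p="http://schemas.microsoft.com/wbem/wscim/1/cim-schema/2/SCX_OperatingSystem">'
        f"<p:command>{command}</p:command>"
        "<p:timeout>0</p:timeout>"
        "</p:ExecuteShellCommand_INPUT>"
        "</s:Body>"
        "</s:Envelope>"
    )
    headers = [
        "POST /wsman HTTP/1.1",
        f"Host: {host}:5986",
        "Content-Type: application/soap+xml;charset=UTF-8",
        f"Content-Length: {len(body)}",
        # Note: no Authorization header -- that omission is the vulnerability.
    ]
    return "\r\n".join(headers) + "\r\n\r\n" + body
```

In a vulnerable deployment, a request shaped like this, sent to port 5986 over HTTPS, would be executed as root despite never identifying a user.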
Nir Ohfeld, a senior security researcher at Wiz, walked me through a demo of the vulnerability, describing it primarily in terms of privilege escalation: an attacker who takes over an affected virtual machine can issue arbitrary commands as root using OMI syntax.
In larger environments where OMI listens on a network port rather than just a local Unix socket, it also makes a great lateral-pivot mechanism: an attacker who gets a shell on one VM in a customer's local Azure network can usually abuse the buggy OMI to gain control of any other virtual machine on the same network segment.
It turns out Azure isn't the only place you'll find OMI. Organizations that adopt Microsoft System Center (advertised on every new installation of Windows Server 2019 and up) and use it to manage Linux hosts, on premises or off, also end up with the buggy version of OMI deployed to those managed hosts.
As Nir and I discussed the extent of the vulnerability, I pointed out the likelihood that some Azure customers would enable logging in the UI and add a "default permit" rule to a Linux virtual machine's Azure firewall; sure, it's bad practice, but it happens. "Oh my God," I exclaimed, and the Wiz team laughed. It turns out that's exactly what they had named the vulnerability: OMIGOD.
A difficult bounty to collect
Despite the obvious severity of OMIGOD, which comprises four separate but related bugs Wiz discovered, the company struggled to get Microsoft to pay a bounty for its responsible discovery and disclosure. In a series of emails reviewed by Ars, Microsoft representatives initially dismissed the vulnerabilities as "out of scope" for Azure. According to Wiz, Microsoft representatives on a phone call further characterized the bugs in OMI as an "open source" problem.
This claim is complicated by the fact that Microsoft authored OMI in the first place, donating it to The Open Group in 2012. Since then, the vast majority of OMI commits have continued to come from Redmond, with Microsoft employees as contributors; open source or not, this is clearly a Microsoft project.
Beyond Microsoft's de facto ownership of the project, Azure's own management system automatically deploys OMI; administrators are never asked to drop to a command line and install the package themselves. Instead, it is deployed automatically inside the virtual machine whenever an OMI-dependent option is clicked in the Azure GUI.
Even when Azure's management deploys OMI, there is little obvious notification to the administrator who enabled it. We found that most Azure admins seem to discover that OMI exists only when their /var partition fills up with its core dumps.
Ultimately, Microsoft relented in its refusal to pay an Azure bug bounty for OMIGOD and awarded Wiz a total of $70,000 for the four component bugs.
A dusty corner of the supply chain
"OMI is like a Linux implementation of Windows Management Instrumentation," Ohfeld told Ars. "Our guess is that when they moved to the cloud and had to support Linux machines, they wanted to bridge the gap and have the same interface available for both Windows and Linux machines."
OMI's inclusion in Azure management, and in Microsoft System Center, which is advertised directly on every new installation of Windows Server, means it gets installed as a low-level component on a staggering number of mission-critical Linux machines, virtual and otherwise. The fact that it listens for commands on an open network port in some configurations, using well-known protocols (SOAP over HTTPS), makes it an extremely attractive target for attackers.
Given the scope of the deployment and the potential for vulnerability, one might reasonably expect many eyes to be on OMI, enough that a vulnerability summarized as "forgot to make sure the user is authenticated" would be quickly discovered. Unfortunately, that is not the case: OMI has an eerily low total of 24 contributors, 90 forks, and 225 "stars" (a measure of relatively casual developer interest) over the nine years it has had a home on GitHub.
By contrast, my own ZFS management project, Sanoid, which does not listen on any ports and has been accurately, albeit loosely, described as "a couple thousand lines of Perl script," has more than twice as many contributors and forks, and nearly 10 times the stars.
By any reasonable standard, a critically important infrastructure component like OMI should receive far more attention, raising questions about how many other dusty corners of the software supply chain are similarly poorly inspected and poorly maintained.
An unclear upgrade path
Microsoft employee Deepak Jain committed the needed fixes to OMI's GitHub repository on August 11, but as Ars directly confirmed, those fixes had not yet been rolled out to Azure as of September 13. Microsoft told Wiz it would announce a CVE on Patch Tuesday, but Wiz researchers expressed uncertainty about how or when those fixes would be universally deployed.
"Microsoft has not shared their mitigation plan with us," Wiz CTO Ami Luttwak told Ars, "but based on our customer telemetry, this could be tricky to patch properly. OMI is built into many different Azure services, and each one may require a different upgrade path."
For arbitrary Linux systems managed remotely via Microsoft System Center, the upgrade path may be more complicated still, because System Center's Linux agents have been deprecated. Customers still using System Center with OMI-enabled Linux hosts will likely need to update the OMI agent manually.
Mitigation for affected users
If you are a Linux system administrator concerned that you may be running OMI, you can detect it easily by looking for services listening on TCP ports 5985 and 5986 (or TCP 1270, for OMI agents deployed by Microsoft System Center rather than Azure), or for a Unix socket located under /var/opt/omi/run/.
If you have the Unix socket but no listening ports, you remain vulnerable until Microsoft deploys a patch, but the scope is limited to local privilege escalation only.
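A quick check along these lines might look like the following Python sketch; the ports and socket path come from this article, and the exact reporting is my own convention, so adjust for your environment:

```python
import os
import socket

# Ports and socket path as described in the article; adjust if your
# deployment differs.
OMI_PORTS = (5985, 5986, 1270)  # 1270 is used by System Center-deployed agents
OMI_SOCKET = "/var/opt/omi/run/omiserver.sock"

def check_omi(host: str = "127.0.0.1") -> list[str]:
    """Return a list of findings suggesting an OMI agent is present."""
    findings = []
    for port in OMI_PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            # connect_ex returns 0 if something accepted the connection
            if s.connect_ex((host, port)) == 0:
                findings.append(f"TCP {port} open: remote exploitation possible")
    if os.path.exists(OMI_SOCKET):
        findings.append("Unix socket present: local privilege escalation risk")
    return findings

if __name__ == "__main__":
    results = check_omi()
    for line in results or ["No OMI listeners or socket detected"]:
        print(line)
```

An empty result is no guarantee of safety (OMI can be bound to non-loopback interfaces only), but any hit warrants immediate firewalling as described below.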
In configurations where OMI listens on TCP ports, it binds to all interfaces, including public ones. We strongly recommend limiting access to those ports with the Linux firewall, whether your OMI instance has been patched or not.
More generally, security-conscious administrators should carefully limit access to this and any other network service to only those network segments that actually need it. Machines running Microsoft System Center obviously need OMI access to their client systems, as does Azure's own infrastructure, but the clients themselves do not generally need OMI access to one another.
The best practice for reducing network attack surface, with this and any other potentially vulnerable service, is a global firewall deny rule, with specific allow rules only for the machines that need to reach a given service.
When that is not practical (for example, in an Azure environment where the administrator is unsure which Microsoft network segments need to reach OMI for Azure management to work properly), simply denying access from other VMs on the same network segment will at least prevent attackers' lateral movement from one machine to another.
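As a sketch of that posture, the following illustrative iptables rules allow a single management host to reach OMI's ports and drop everyone else, including neighboring VMs. The address 192.0.2.10 and interface eth0 are placeholders, not values from the article; substitute your own management host and interface.

```shell
# Illustrative config fragment only: 192.0.2.10 stands in for your
# management host (System Center or Azure infrastructure) and eth0 for
# your internal interface.

# Allow the management host to reach OMI's remote-management ports...
iptables -A INPUT -i eth0 -p tcp -s 192.0.2.10 \
         -m multiport --dports 1270,5985,5986 -j ACCEPT

# ...and drop those ports for everyone else, including neighboring VMs.
iptables -A INPUT -p tcp -m multiport --dports 1270,5985,5986 -j DROP
```

Note that these rules protect only the OMI ports; a true default-deny policy would invert the logic for all inbound traffic rather than targeting specific ports.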