If I’m reading this report correctly, the only viable attack vector against an otherwise uncompromised system is through software which gives an attacker some means to supply a string and cause the software to attempt a DNS lookup on that string via a deprecated glibc function. If and only if such a piece of software exists on the system, the attacker can trigger a buffer overflow and use common techniques to execute arbitrary code, but that code remains constrained by the security measures within which the software operates (both the UNIX user context and the SE Linux context of the vulnerable process still apply, for example). Failing that, the attacker can still cause segmentation faults which crash processes.
The authors of the report provide Exim as a proof-of-concept example because it exposes exactly such a means to the attacker: it listens for SMTP connections, through which a string (intended to represent the connecting system’s hostname) can be supplied and a DNS resolution attempt initiated (as long as Exim is configured to perform DNS lookups of connecting hosts).
Since each vulnerable software package essentially has to be exploited individually, based on how it uses the deprecated gethostbyname() glibc function, exploits for this vulnerability will be more burdensome to design than other, more general exploits (such as CVE-2014-9295, which attacks the ubiquitous ntpd). Any system with a vulnerable version of glibc could theoretically host software which calls the vulnerable (but deprecated) function, of course, but determining which software that is, and how to attack it remotely, is not necessarily trivial.
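Even so, a rough first pass at identifying candidate software on your own systems is possible. Here's a sketch, assuming a Linux system with /proc and the binutils `nm` tool available; the function name and approach here are mine, not from the report:

```shell
# List running processes whose executables import gethostbyname() (or a
# variant) from the dynamic symbol table. Resolving every /proc/<pid>/exe
# link typically requires root. A hit means "worth a closer look at how
# the string reaches the function", not "remotely exploitable".
find_gethostbyname_users() {
    for exe in /proc/[0-9]*/exe; do
        path=$(readlink "$exe" 2>/dev/null) || continue
        # "U" marks an undefined symbol, i.e. one imported from a library
        if nm -D "$path" 2>/dev/null | grep -q ' U gethostbyname'; then
            echo "$path"
        fi
    done | sort -u
}
```

Statically linked binaries and scripts run by interpreters won't show up this way, so treat the output as a starting point rather than an inventory.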
Opportunities for Mitigation through Standard Best Practices
As usual, I’d like to point out the standard best practices which help mitigate the risk of this vulnerability. The general areas of system design which offer potential for mitigation are:
- Controlling access to the server (e.g. authentication/authorization for access to vulnerable processes, solid firewall design which prevents access from untrusted systems)
- Enforcing standard constraints on processes (good UID/GID design)
- Enforcing mandatory access controls on processes (e.g. SE Linux)
- Hardening easily-accessed system resources (e.g. mounting /tmp with the noexec option, thereby preventing the world-writable /tmp directory from being used as a platform for remote code execution)
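As an example of that last point, a noexec /tmp might look like the following. This is an illustrative sketch; the actual device, filesystem type, and existing options vary by system:

```shell
# Illustrative /etc/fstab entry: a tmpfs /tmp mounted noexec (nosuid and
# nodev are sensible companions):
#
#   tmpfs   /tmp   tmpfs   defaults,noexec,nosuid,nodev   0 0

# Apply to a running system without a reboot (requires root, and assumes
# /tmp is already a separate mount):
#
#   mount -o remount,noexec,nosuid,nodev /tmp

# Verify the active mount options afterward:
#
#   mount | grep ' /tmp '
```

Note that noexec is a speed bump, not a wall (an attacker with shell access has other options), but it does break the common pattern of dropping a payload into /tmp and executing it directly.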
The most worrisome systems are the public-facing ones, but even there, good attention to both discretionary access control (the user/group context of the vulnerable processes) and mandatory access control (SE Linux) can go a very long way. Segmentation fault-induced crashes, however, seem unavoidable wherever attackers can subject the system to exploitative input.
The Red Hat security response team has weighed in, according to threatpost.com, remarking in part that “It’s not looking like a huge remote problem, right now,” which is some comforting confirmation of my conclusion regarding the real potential for remote exploitation of this vulnerability.
Some users around the Interwebs appear confused about whether an attacker needs to control DNS infrastructure to exploit this vulnerability. That idea appears to stem from one of the mitigating factors listed in the report linked above:
> Most of the other programs, especially servers reachable remotely, use gethostbyname() to perform forward-confirmed reverse DNS (FCrDNS, also known as full-circle reverse DNS) checks. These programs are generally safe, because the hostname passed to gethostbyname() has normally been pre-validated by DNS software.
This led some to infer that exploiting the vulnerability against such software requires control over the DNS resolver which is authoritative for the domain name used in the exploit (or the ability to spoof DNS resolution traffic to the vulnerable system). The idea is that, without such control, an attacker’s exploit string would not survive scrutiny from DNS software and would therefore never be passed to the vulnerable gethostbyname() function.
This strikes me as misguided, however, given that the authors of the above-linked report conclude that DNS validation makes it impossible to deliver a string in excess of 1 KB, which is the minimum size required to overflow the buffer. In other words, for software performing these FCrDNS checks, control over DNS infrastructure wouldn’t help the attacker anyway.
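To put the size constraint in perspective, here's a trivial sketch of building a string past that 1 KB threshold. (As I read the report, the overflow also requires the string to look like a numeric address attempt, so the illustrative string below is digits only; this is not a working exploit, just a length demonstration.)

```shell
# Build a 1100-character string of digits, comfortably past the ~1 KB
# minimum the overflow requires. printf emits one '0' per argument.
s=$(printf '0%.0s' $(seq 1 1100))
echo "${#s}"   # prints 1100
```

The point is that producing such a string is trivial in isolation; what the report's authors observe is that no legitimate DNS resolution step will ever hand one through to gethostbyname().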
I don’t mean to downplay a security issue, or anything, but I do think it’s important for people to be aware that these issues being given trendy names as of late aren’t always the end of the world. I think it is especially important in our current IT climate that we pay attention to the benefits of intelligent system architecture and its mitigating effects on vulnerabilities like these.
Nonetheless, I’ll definitely be patching up ASAP. The upside to the patching process is that a full server restart isn’t required; the downside is that the processes still running against the old glibc (and therefore potentially still using its vulnerable gethostbyname() function) must be identified and restarted after the patch is applied, which may require some vendor coordination in certain cases. I’ll keep you updated.
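One way to find those processes, assuming a Linux system with /proc and a package manager that replaces (and thereby deletes) the old library file on update — a sketch, with a function name of my own invention:

```shell
# List processes still mapping a deleted libc, i.e. processes started
# before the glibc update that are still running against the old copy.
# Reading every process's maps typically requires root.
list_stale_libc() {
    for maps in /proc/[0-9]*/maps; do
        pid=${maps#/proc/}; pid=${pid%/maps}
        # The kernel appends "(deleted)" to mappings of unlinked files
        if grep -q 'libc.*(deleted)' "$maps" 2>/dev/null; then
            printf '%s\t%s\n' "$pid" "$(cat /proc/$pid/comm 2>/dev/null)"
        fi
    done
}
```

Each PID this prints should be restarted (or the host simply rebooted, if that's simpler than chasing down every long-lived daemon).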