Occasionally, we like to highlight interesting or significant security issues that users of NGINX Open Source and NGINX Plus might encounter. Application stacks are complex and it’s very easy to overlook obscure or unexpected ways that common features can be exploited. NGINX and NGINX Plus are a powerful way to both provide access to these features and restrict access. Careless or unwitting configuration can leave a door open for attackers.
We have previously covered attacks that exploit HTTP headers. In the HTTPoxy attack, the attacker uses the HTTP Proxy header to capture internal HTTP requests generated by an application, and in the Apache Struts vulnerability the attacker performs command injection with a carefully constructed Content-Type header. Both attacks exploit little-known features in the application environment, and are dealt with by intercepting suspect requests.
Most recently, Michael Higashi at RedLock disclosed an attack method that uses a particular configuration to access the internal "instance metadata" APIs exposed by some cloud providers. Once again, a little-known and presumably secure feature is laid bare to ingenious attackers.
The RedLock article uses NGINX configuration as an example, but the issue is not specific to NGINX – anyone operating a reverse proxy service needs to be aware of this attack method and the general implications of trusting user input.
How to Create an Overly Permissive Proxy
The core of the issue lies with an overly permissive configuration like this one for NGINX and NGINX Plus:
# Don't ever use 'proxy_pass' like this!
location / {
    proxy_pass http://$host;    # To repeat: don't do this!
}
This simple configuration processes an HTTP request, forwarding it to the server identified in the Host header of the HTTP request (as captured in the $host variable). It's a very easy way to create an HTTP reverse proxy, but it can also be used as a general-purpose forward proxy.
The potential for exploitation is significant. Remote users can use this configuration to request that NGINX send an HTTP request to an arbitrary server that they name in the Host header, and NGINX returns the result to them. If NGINX is running in a position of privilege, such as a DMZ, remote users might gain access to servers they cannot reach directly.
This forward‑proxy configuration can also be used to conceal the source IP address of a request, enabling a user to access a remote service while making the requests appear to originate from the NGINX server.
The Nature of the Cloud Metadata Attack
Some cloud providers offer a service (in the form of an API) that enables software running in a virtual machine to query "instance metadata", which can include sensitive data such as authentication credentials.
The risks of these services have been discussed elsewhere. Any process running within the cloud virtual machine can access the API, retrieving and storing potentially sensitive data. However, by publishing these metadata services on an internal, unroutable IP address (major providers such as AWS, Google Cloud Platform, and Microsoft Azure all use 169.254.169.254), providers have assumed that the services are secure from outside.
In fact, the overly permissive proxy configuration makes it trivially easy to access the internal IP address, as in this example with the AWS API:
$ curl http://remote-ip-address/latest/meta-data/ -H "Host: 169.254.169.254"
ami-id
ami-launch-index
ami-manifest-path
block-device-mapping/
hostname
iam/
instance-action
instance-id
instance-type
local-hostname
local-ipv4
mac
metrics/
network/
placement/
profile
public-hostname
public-ipv4
public-keys/
reservation-id
security-groups
services/
As RedLock explained, "the team wrote a little script which finds such proxies and sure enough, they found dozens of such vulnerable servers."
Blocking the Cloud Metadata Attack
The simple advice from the NGINX team is: "Don't do this!". Proxying requests to the server named in the Host header (proxy_pass http://$host) is a clear example of trusting user input too far, but it's an easy mistake to make.
NGINX was never designed to be a general‑purpose forward proxy. Securing this sort of configuration is hard.
Our advice is to proxy traffic only to known upstream servers that you have pre-selected. It's safest to identify servers by IP address or DNS name, either directly in the argument to the proxy_pass directive or in the server directives in the named upstream block to which traffic is forwarded. If you use static DNS names, you must ensure that an attacker cannot subvert the DNS resolution process to inject arbitrary IP addresses.
Many NGINX users rely on NGINX Amplify to help them build reliable, secure configuration. We will update Amplify's configuration analyzer shortly to detect and warn about this insecure configuration.
For a "belt-and-braces" solution, you can also drop requests with unusual Host header values, either in the NGINX configuration or using a web application firewall (WAF) such as NGINX ModSecurity WAF, a dynamic module for NGINX Plus powered by ModSecurity.
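In NGINX configuration, one sketch of that check (the hostname is a placeholder) is to make the default server reject any request whose Host header matches no configured server_name:

```nginx
# Requests whose Host header matches no server_name land in this
# default server and are closed without a response (444 is the
# NGINX-specific "close the connection" code).
server {
    listen 80 default_server;
    server_name _;
    return 444;
}

server {
    listen 80;
    server_name www.example.com;
    # ... normal configuration for the real site ...
}
```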
[Editor – NGINX ModSecurity WAF officially went End-of-Sale as of April 1, 2022 and is transitioning to End-of-Life effective March 31, 2024. For more details, see F5 NGINX ModSecurity WAF Is Transitioning to End-of-Life on our blog.]
Conclusion
Trust no one! Estimates of bot traffic vary between 40% and over 50% of HTTP traffic on the Internet. Distil Networks (which relies on NGINX to protect their clients from bad bots) reports that over 20% of Internet traffic originates from 'bad bots', some of which are scanners and reconnaissance tools seeking out vulnerable systems.
NGINX, Inc. is here to help. NGINX Plus is the most heavily tested NGINX release, and NGINX’s Professional Services team has a wealth of experience in securing NGINX Plus configurations for some of the world’s leading websites and brands.
Contact us if we can be of assistance.