The Internet has historically been a slow adopter of security initiatives. Take the Domain Name System (DNS) for example.
The Internet Engineering Task Force (IETF) published its first DNS threat analysis (RFC 3833) in 2004, nearly a decade after Steve Bellovin, then a principal researcher at AT&T Bell Laboratories and the main contributor to RFC 3833, released his research into DNS security in 1995. At the time, Bellovin considered the DNS vulnerabilities so serious that he felt the only choice left to decision makers was to give up entirely on the DNS as the preferred name-based authentication system.
Fast-forward to 2020: DNS attacks are still making headlines, and hackers are continuously crafting new and creative attacks to break the trust that people and devices place in the DNS as a name resolution protocol.
DNS rebinding attacks
One type of DNS attack that has become increasingly popular in recent years is the DNS rebinding attack. This manoeuvre allows the attacker to gain a foothold in the victim’s network, including access to, and control over, any devices connected to it.
RFC 6454 defines the Same-Origin Policy (SOP), which “restricts access of active content to objects that share the same origin. The origin is, hereby, defined by the protocol, the domain and the port used to retrieve the object”. It is relatively easy for attackers to circumvent this policy: by making the browser believe two servers belong to the same origin because they share a hostname, the attacker’s script is permitted to read back the response.
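A minimal sketch of why rebinding slips past the SOP: the browser compares origins as (scheme, host, port) tuples and never consults the IP address the hostname currently resolves to. The hostname `attacker.example` and the IP addresses are illustrative.

```python
# Sketch: the SOP origin check per RFC 6454, as a (scheme, host, port) tuple.
# The IP behind the hostname plays no part in the comparison, which is the
# gap DNS rebinding exploits.
from urllib.parse import urlsplit

def origin(url):
    """Return the (scheme, host, port) origin tuple for a URL."""
    parts = urlsplit(url)
    default_port = {"http": 80, "https": 443}.get(parts.scheme)
    return (parts.scheme, parts.hostname, parts.port or default_port)

# First resolution: attacker.example -> attacker's public IP (e.g. 203.0.113.7).
# After a short TTL expires: attacker.example -> victim's internal IP (192.168.1.1).
# Both requests still carry the SAME origin, so the script may read the response:
assert origin("http://attacker.example/app") == origin("http://attacker.example/api")
```

In other words, the origin tuple stays constant across the rebind, so the browser sees nothing to object to.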
Looking deeper into the taxonomy of this attack, common defence techniques such as DNS pinning (where, once the browser resolves a hostname to an IP address, it caches the result for a fixed duration regardless of the record’s TTL) and Domain Name System Security Extensions (DNSSEC) provide no protection against DNS rebinding attacks.
The attacker can legitimately sign all DNS records served by their own DNS server during the attack. The principal mismatch is between the DNS server, which provides the IP-to-domain mapping, and the web server; the result is that an unauthorized host entry becomes associated with the target in the DNS registry.
In SOP enforcement semantics, the web server is not involved in the security decision making. Instead, all enforcement happens at the browser layer, and the target server simply accepts whatever traffic the DNS server and the SOP allow through.
The most recent (2015) threat model analysis by the IETF makes no mention of DNS rebinding attacks. However, more recent research into ‘Zero Trust’ architecture by John Kindervag at Forrester Research offers hope of a possible solution to the menacing problem of DNS validation.
The ability of attackers to provide valid, authoritative responses for a domain they own goes unchallenged, due to technical gaps in the existing trust model across all the systems, software services, and agents responsible for handling such traffic.
The legitimacy of the DNS host identity is not a one-off affair either. Looking at the tenets of the zero-trust model, for trust to remain valid and true to security policy, it must be continuously assessed and assured for all participants, not just for the lifetime of a DNS exchange but beyond it.
What defensive measures are currently available?
The DNS provides foundational trust for entities wanting to communicate over the Internet; when it is subverted, all dependent layers and protocols in TCP/IP are compromised. As such, there is no barrier to entry for an attacker to use the DNS in an attack, as the only useful security service it provides, even in the case of DNSSEC, is designed to counter spoofing attacks.
However, the real gap is that even a DNSSEC-validated client or service can manipulate the trust boundary. Currently, there is no single source of DNS security policy authority that can manage disparate systems, such as web servers and local browsers, in a coherent way.
The closest thing to a policy store is a DNS registry, which is responsible for maintaining vital system information, or DNS server records, including important details about a domain or hostname. Even this information, however, is insufficient to hold and describe complex policy relationships across many distributed entities. The relatively stale nature and slow update cycle of DNS entries also make the registry a poor candidate for real-time decision making.
Compliance is not a very good indicator of good or safe behaviour. Without a sound security policy based upon an international standard, the Internet will be full of compliant but unsafe DNS traffic.
How should we approach defending the DNS?
Proof-checking the good behaviour of DNS servers should not be left to individual enterprises to enforce. Instead, it should be duly and independently enforced through a mechanism in which enforcement points and decision points are decoupled via a trusted third party, much as an ISP and IANA could supervise the management of DNS security policy stores at the local and regional levels. A similar model already exists in web services (WS-Policy/WS-SecurityPolicy) and has proven very successful in enabling trust in B2B and B2C business communication.
If the attacker’s DNS server were forced to declare or agree to an organization-stated security policy, with security attributes such as authentication proof, risk level, reputation, and message format, plus an ACL of allowed actions, it would severely disrupt the attacker’s ability to launch a successful attack. It would also make detection highly probable.
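To make this concrete, such a declared policy could be represented as a simple attribute set checked before an action is honoured. The attribute names (`risk_level`, `reputation`, `acl`) and values are hypothetical, since no such standard currently exists.

```python
# Sketch of a declared DNS security policy and a check against it.
# All attribute names and thresholds are illustrative assumptions.
declared_policy = {
    "domain": "partner.example",
    "authentication": "dnssec-signed",   # proof of authentication
    "risk_level": "low",
    "reputation": 0.9,                   # 0.0 (bad) .. 1.0 (good)
    "message_format": "dns-wireformat",
    "acl": ["resolve", "update-own-records"],   # allowed actions
}

def action_allowed(policy, action, min_reputation=0.5):
    """Honour an action only if reputation is adequate and the ACL permits it."""
    return policy["reputation"] >= min_reputation and action in policy["acl"]
```

Under a scheme like this, a rebinding attempt fails simply because “rebind to a new address mid-session” is not in the declared ACL, and the attempt itself becomes an auditable policy violation.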
A standardized approach would require the solution to be vendor-neutral and easily adopted through a uniform, policy-enforced policing system, in which various security vendors’ solutions, such as firewalls, help increase overall awareness of DNS threats. A compliant browser would then be able to invoke a firewall, or your local at-home Internet router, to block or deny malicious traffic or hosts whenever an unapproved message is received during DNS name resolution. The attacker may thus be forced through some kind of DNS screening process, a sub-system of a larger policing infrastructure, where the behaviour of suspected DNS hosts is matched against the policy requirements approved by the client or customer.
In the case of DNS rebinding attacks, this means making the web server responsive to the user’s or client’s DNS security policy requirements, so that an unapproved DNS binding is checked and dropped as a policy action.
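One practical way a web server can reject an unapproved binding today is to validate the Host header against the names it is actually willing to serve; a rebound request arrives carrying the attacker’s hostname. A minimal sketch, where the allow-list entries are assumptions:

```python
# Sketch: a web server dropping requests whose Host header is not an
# approved name for this service. A request that reached us via a rebound
# attacker hostname carries that hostname in its Host header.
from http.server import BaseHTTPRequestHandler, HTTPServer

APPROVED_HOSTS = {"intranet.example", "intranet.example:8080", "localhost:8080"}

def binding_approved(host_header):
    """Policy action: is this hostname an approved binding for this server?"""
    return host_header in APPROVED_HOSTS

class PolicyCheckedHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if not binding_approved(self.headers.get("Host", "")):
            # A rebound attacker-controlled name lands here and is dropped.
            self.send_error(421, "Misdirected Request: unapproved DNS binding")
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

# To run locally (illustrative):
# HTTPServer(("127.0.0.1", 8080), PolicyCheckedHandler).serve_forever()
```

This is only a local version of the idea; the article’s proposal is to drive that allow-list from a negotiated, externally managed policy rather than hard-coded server configuration.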
Expecting point solutions like firewalls and gateways to secure DNS traffic is unfair, mainly because they lack the runtime context needed to make an accurate decision; relying on them increases the chance of false positives, a feature no security vendor would want in their product.
A more resilient approach may be full-context analysis, where, for example, a suspiciously low DNS TTL (say, TTL=1) is assessed against an electronic policy statement negotiated earlier between one or more signing parties: the DNS server, the client, or the web server.
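As a small illustration of that kind of contextual check, a negotiated policy might set a TTL floor below which an answer is flagged for screening. The threshold is a hypothetical value, not a proposed standard.

```python
# Sketch: flagging DNS answers whose TTL falls below a policy-negotiated floor.
# A very low TTL (e.g. 1 second) is a classic precondition for rebinding,
# since the attacker needs the second lookup to happen quickly.
MIN_TTL = 30   # seconds; assumed to come from the negotiated policy statement

def ttl_suspicious(record_ttl, min_ttl=MIN_TTL):
    """True if the answer's TTL is below the agreed floor and needs screening."""
    return record_ttl < min_ttl
```

On its own, a low TTL proves nothing; the point of full-context analysis is that the flag is weighed alongside the other policy attributes before any traffic is dropped.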
From a business perspective, there are few good cases, besides general DNS administration, in which a normal server would open a connection with a low TTL value and then later, whether maliciously or accidentally, seek to rebind that connection to your internal network. Just as there are no good reasons to share a tethered or hotspot Internet connection on your enterprise network, such actions should invoke a response from all trust-boundary participants, especially the server (the target resource), to check whether its security attributes relating to DNS rebinding have been violated in any way.
An authorized principal, such as a trusted security vendor, may be permitted to perform a browser SOP bypass, but others, perhaps new clients, should not be able to use this permission without providing undeniable proof, such as an audit record, that their use of the browser configuration has not in any way violated the security of the enterprise. With the necessary accountability mechanism in place, any such entity will act more cautiously and sincerely, since it can now be held legally responsible for its actions.
The envisioned system would allow a business to search a DNS security policy registry for another business’s details, then develop, configure, or purchase software to interface with that business’s policy interfaces and establish a Collaboration Protocol Agreement. Once initialized, all DNS transactions would occur via some form of secure messaging service.
Without a secure DNS, the Internet cannot effectively evolve
The Internet today is drastically different from its origins: messages no longer pass just between clients and servers, but among a host of systems and devices collecting and sending data to operators via the cloud. We’re not just talking about smart fridges and lamps, but safety sensors and temperature gauges within critical infrastructure, such as natural gas processing plants and power stations, all of which can be remotely accessed through edge computing. Curtailing such a complex threat model is not a point solution; it is a whole-system problem and must be addressed from all angles, including design, technology, and relevant standards.
The same can be said of the DNS’s role. It is my honest opinion that without some careful rejuvenation of the DNS’s design, the next phase of the Internet, primarily that of the Internet of Things, can only promise high risk for low reward.
The coalescing of the IT world with the machine world is a rare combination of semi- to fully-autonomous machines exchanging traffic with human-assisted computing. Without any fail-safe mechanism for ‘secure DNS resolution’, a bad situation can escalate drastically; in the right circumstances, a billion rogue devices or sensors could silently combine into a perfect storm, resulting in an Internet blackout.
Asad Ali is the Senior Cyber Security Architect at Dawood Hercules Corporation.
The views expressed by the authors of this blog are their own and do not necessarily reflect the views of APNIC. Please note a Code of Conduct applies to this blog.