Where does zero trust begin and why is it important?

By Kathleen Moriarty on 16 Apr 2021

Category: Tech matters

Zero trust is an important information security architectural shift. It moves us away from the perimeter defence-in-depth models of the past towards layers of control closer to what is valued most: the data.

When initially defined by an analyst at Forrester, zero trust was focused on the network providing application isolation to prevent attacker lateral movement. It has evolved to become granular and pervasive, providing authentication and assurance between components including microservices.

As the benefits of zero trust become increasingly clear, the model has grown more pervasive, relying upon a trusted computing base and data-centric controls as defined in NIST Special Publication 800-207. So, as zero trust becomes more pervasive, what does that mean? How do IT and cybersecurity professionals manage its deployment and maintain assurance of its effectiveness?

Zero trust architecture: never trust, always verify

Zero trust architectures reinforce the point that no layer of the stack trusts the underlying components, whether hardware or software. Security properties are therefore verified for every dependency and interdependency, on first use and then intermittently (the dynamic authentication and verification tenets of zero trust), to ensure they are as expected. Each component is built as if the adjoining or dependent components may be vulnerable. Each component therefore assumes responsibility for assuring the trust level it asserts, and must be able to detect a compromise, or even an attempted compromise.

This can be a slightly confusing paradigm in that zero trust instils the principle of isolation at every layer. This enforces so-called zero trust between components, while verification of security properties and identity is continually performed to provide assurance that expected controls are met. A component may refuse to execute if the expected properties of its dependencies cannot be assured. Zero trust architectures adhere to the maxim 'never trust, always verify'. This enables detection and prevention of lateral movement and privilege escalation for each component, and results in higher assurance for the system and software.
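
To make that refuse-to-execute behaviour concrete, here is a minimal sketch in Python. The dependency path and expected digest are hypothetical placeholders, and a single static table stands in for the signed, dynamically updated policies a real deployment would rely on; the component measures each dependency at startup and will not run if any measurement deviates.

```python
# Minimal sketch: refuse to execute if a dependency's measurement is unexpected.
# The dependency path and expected digest below are hypothetical placeholders.
import hashlib
import hmac
import sys

EXPECTED_DIGESTS = {
    # path -> expected SHA-256 hex digest (placeholder value for illustration)
    "./libexample.so": "0000000000000000000000000000000000000000000000000000000000000000",
}

def measure(path: str) -> str:
    """Hash a dependency on disk so it can be compared to its expected value."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def dependencies_verified() -> bool:
    """'Never trust, always verify': every dependency must match its expected digest."""
    for path, expected in EXPECTED_DIGESTS.items():
        try:
            actual = measure(path)
        except OSError:
            return False  # missing or unreadable dependency: fail closed
        if not hmac.compare_digest(actual, expected):
            return False  # unexpected measurement: assume compromise
    return True

if __name__ == "__main__":
    if not dependencies_verified():
        sys.exit("Dependency verification failed; refusing to execute.")
    print("Dependencies verified; starting component.")
```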

Core tenets

Identity, authentication, authorization, access controls, and encryption are among the core tenets of any zero trust architecture, where deliberate and dynamic decisions are continuously made to verify assurance between components. While zero trust is often discussed at the network layer, owing to its origin as a Forrester concept, the definition has evolved considerably over the last decade into a pervasive concept that spans infrastructure, device firmware, software, and data.

Zero trust is most often discussed as it relates to the network, with applications isolated into network segments and controls such as strong encryption and dynamic authentication enforced. Zero trust can also be applied at the microservices level, providing assurance of controls and measurements via verification between services. This granular application of the model further strengthens prevention and detection of attacker lateral movement.
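
As a rough illustration of verification between services, the sketch below establishes a mutual TLS connection using Python's standard ssl module. The hostname, port, and certificate and CA file names are assumptions for the example; a production deployment would typically manage and rotate these identities automatically and pair them with per-request authorization.

```python
# Sketch: one microservice verifying another with mutual TLS before talking to it.
# Hostname, port, and all file names are illustrative assumptions.
import socket
import ssl

def call_peer(host: str = "inventory.internal", port: int = 8443) -> bytes:
    # Only trust peers whose certificates chain to our internal CA (verify identity).
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile="internal-ca.pem")
    ctx.verify_mode = ssl.CERT_REQUIRED
    ctx.check_hostname = True
    # Present our own certificate so the peer can verify this service in turn.
    ctx.load_cert_chain(certfile="orders-service.crt", keyfile="orders-service.key")

    with socket.create_connection((host, port), timeout=5) as raw_sock:
        with ctx.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
            request = b"GET /healthz HTTP/1.1\r\nHost: " + host.encode() + b"\r\nConnection: close\r\n\r\n"
            tls_sock.sendall(request)
            return tls_sock.recv(4096)
```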

Infrastructure assurance

Zero trust begins with infrastructure assurance, and from there has become pervasive up the stack and across applications. A hardware root of trust (RoT) is immutable, with a cryptographic identity bound to the Trusted Platform Module (TPM). The infrastructure assurance example instils the tenets of a zero trust architecture: upon boot, the system first verifies that the hardware components are as expected.

Next, the boot process verifies the system and each dependency against a set of so-called 'golden policies', which include expected measurements attested to with a digital signature using the cryptographic identity in the TPM. If a policy comparison does not match, the process may be restarted, or the boot process may be halted. While there are several hardware and software-based RoT options, the policies and measurements used generally follow NIST's resiliency guidelines for firmware and BIOS from boot onwards.
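
The comparison against a golden policy can be pictured with the simplified sketch below. The boot-stage contents and the golden value are invented for illustration; a real implementation would read Platform Configuration Register (PCR) values from the TPM and verify the signature over the policy rather than computing it locally.

```python
# Sketch: fold boot-stage measurements into a running digest (PCR-extend style)
# and compare the result against a 'golden' policy value.
# Stage contents and the golden value are illustrative assumptions.
import hashlib

def extend(current: bytes, measurement: bytes) -> bytes:
    """Mimic a PCR extend: new_value = SHA-256(current || SHA-256(measurement))."""
    return hashlib.sha256(current + hashlib.sha256(measurement).digest()).digest()

def boot_measurement(stages: list[bytes]) -> str:
    """Measure each boot stage in order, starting from an all-zero register."""
    pcr = b"\x00" * 32
    for stage in stages:
        pcr = extend(pcr, stage)
    return pcr.hex()

def verify_against_golden(stages: list[bytes], golden_hex: str) -> bool:
    """Halt (or restart) the boot if the accumulated measurement deviates from policy."""
    return boot_measurement(stages) == golden_hex

# Example: compute a golden value once for known-good stages, then verify a boot.
known_good = [b"firmware-v1.2", b"bootloader-v3.0", b"kernel-5.10"]
golden = boot_measurement(known_good)
print(verify_against_golden(known_good, golden))   # True: measurements match policy
print(verify_against_golden([b"firmware-v1.2", b"bootloader-TAMPERED", b"kernel-5.10"], golden))  # False
```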

Attestations are signed by a RoT at each stage of the boot process and are used both to identify the relying components and to provide an assurance of trust, verifying at the most basic level that the system and its components are as required. The dependencies may be chained or may be verified individually. These attestations are also provided at runtime, supporting the zero trust requirement for dynamic authentication and access control, in this case for infrastructure components. Attestations help satisfy the requirement to verify the identity of components, which is essential for providing assurance of those components.
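
A heavily simplified sketch of verifying such an attestation at runtime follows. Real attestations are signed with an asymmetric key protected by the RoT or TPM; the shared HMAC key, field names, and freshness window here are assumptions made only to keep the example self-contained.

```python
# Sketch: verify a (simplified) signed attestation for freshness, integrity, and
# expected measurement. Key, field names, and limits are illustrative assumptions;
# a real RoT would sign with an asymmetric key it never releases.
import hashlib
import hmac
import time

ATTESTATION_KEY = b"demo-only-secret"   # placeholder; not how a TPM-held key works
MAX_AGE_SECONDS = 300                   # reject stale attestations (dynamic verification)

def sign_attestation(component: str, measurement: str, nonce: str) -> dict:
    issued = int(time.time())
    payload = f"{component}|{measurement}|{issued}|{nonce}".encode()
    tag = hmac.new(ATTESTATION_KEY, payload, hashlib.sha256).hexdigest()
    return {"component": component, "measurement": measurement,
            "issued": issued, "nonce": nonce, "tag": tag}

def verify_attestation(att: dict, expected_measurement: str, expected_nonce: str) -> bool:
    if time.time() - att["issued"] > MAX_AGE_SECONDS:
        return False                     # too old: fails the dynamic/fresh requirement
    payload = f"{att['component']}|{att['measurement']}|{att['issued']}|{att['nonce']}".encode()
    expected_tag = hmac.new(ATTESTATION_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected_tag, att["tag"]):
        return False                     # signature check failed: identity not proven
    return att["nonce"] == expected_nonce and att["measurement"] == expected_measurement

# Example: a verifier issues a nonce, the component answers with a signed attestation.
nonce = "7b1f"                           # illustrative nonce
att = sign_attestation("storage-controller", "sha256:abc123", nonce)
print(verify_attestation(att, "sha256:abc123", nonce))   # True when fresh and untampered
```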

Any attacker that has infiltrated the component or software would need to survive this dynamic and periodic verification and authentication to remain a threat. The attacker would also have to figure out how to escalate privileges or move laterally between isolated components that don’t trust each other.

Trusted control sets

The Trusted Computing Group's (TCG) Reference Integrity Manifest, based on NIST's firmware resiliency Special Publication, provides the trusted controls for policy and measurement of firmware. Further up the stack, trusted control sets that provide the verification necessary for zero trust include the CIS Controls and the CIS Benchmarks. Trusted third parties such as NIST, CIS, and TCG provide the necessary external and established vetting process for setting control and benchmark requirements. An example would be attestations used to demonstrate compliance with a CIS operating system or container Benchmark at a specified level of assurance.
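
To give a sense of what verification against such a control set can look like, here is a sketch of a single Benchmark-style configuration check. The specific control (disabling SSH root login) and file path were chosen for illustration and do not reproduce any particular CIS Benchmark recommendation verbatim.

```python
# Sketch: a single Benchmark-style check, the kind of control an attestation could
# assert compliance with. The file path and expected setting are illustrative.
from pathlib import Path

def root_login_disabled(config_path: str = "/etc/ssh/sshd_config") -> bool:
    """Pass only if the SSH daemon explicitly disables root login."""
    try:
        text = Path(config_path).read_text()
    except OSError:
        return False            # unreadable or absent configuration: fail closed
    for line in text.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("#") and stripped.lower().startswith("permitrootlogin"):
            return stripped.split()[-1].lower() == "no"
    return False                # setting not present: fail closed

if __name__ == "__main__":
    print("PermitRootLogin check:", "pass" if root_login_disabled() else "fail")
```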

What evidence supports this shift to zero trust?

Interestingly, at about the same time that zero trust architectures began to take shape, Lockheed Martin developed its Cyber Kill Chain (in 2011). The Cyber Kill Chain was defined to separate the stages of an attack, enabling mitigation and detection defences between stages. The MITRE ATT&CK Framework, which builds on Lockheed Martin's model and addresses gaps identified through use and the evolving threat landscape, is used more widely today. For the purposes of this article, the Cyber Kill Chain is used to simplify the correlation, but the mapping can be abstracted to the MITRE ATT&CK Framework.

The Lockheed Martin Kill Chain was developed in response to the ever-increasing sophistication of advanced persistent threat (APT) attacks, which had shifted to include supply chain attacks. By implementing defences and controls between attack phases, including requirements to prove identity dynamically via authentication, attackers' lateral movement or privilege escalation attempts could be more easily detected. Moving detection and prevention earlier in the Kill Chain is ideal, preventing attacks from succeeding (such as exfiltration of data or disruption within the network).

Applying detection and prevention techniques pervasively in the stack, and across applications and functions, with dynamic access controls that verify and authenticate attested components, supports zero trust architectural tenets and enables detection early in the Kill Chain. The evidence that the tenets of zero trust are working becomes clear when deployment in concert with Kill Chain detection controls is considered alongside attacker dwell time patterns.

Reducing dwell time

Since the Kill Chain was first introduced, attacker dwell time (the time an attacker remains on a network undetected) has been dramatically reduced. This can be seen clearly in both global and regional dwell time changes as different regions adopted the Cyber Kill Chain and zero trust defences. According to FireEye's M-Trends annual reports, the global median dwell time was 229 days in 2013; in the 2020 report it was 56 days, a reduction of roughly 75%. The regional numbers also support the success of this architectural approach, given the known disparity in adoption of the zero trust architectural pattern and the defence frameworks of the Kill Chain and MITRE ATT&CK.

The United States was known to be an early adopter of both. Taking 2017 as an example, the median dwell time in the Americas was 75 days, while in Asia it was 172 days. Smaller organizations, or those with fewer resources, in any region and at any point in time may experience wildly different dwell times from larger, well-resourced organizations. Even so, the dwell time numbers help demonstrate the success of these controls with tangible data.

Zero trust has evolved from a network-only definition, where applications were segregated, to a more granular level that supports detecting unexpected behaviours between all components. The logical connection between zero trust and the Lockheed Kill Chain demonstrates the clear value of both models. It also points to a future in which zero trust is increasingly data-centric, built upon a foundation of isolated components, from boot in the infrastructure, attesting to their verified identity and assurance levels up and across the stack to the microservices level.

NIST SP 800-207 defines zero trust as follows:

“Zero trust (ZT) provides a collection of concepts and ideas designed to minimize uncertainty in enforcing accurate, least privilege per-request access decisions in information systems and services in the face of a network viewed as compromised. Zero trust architecture (ZTA) is an enterprise’s cybersecurity plan that utilizes zero trust concepts and encompasses component relationships, workflow planning, and access policies. Therefore, a zero trust enterprise is the network infrastructure (physical and virtual) and operational policies that are in place for an enterprise as a product of a zero trust architecture plan.”

Tenets of zero trust

The following list of tenets is sourced from the NIST CSRC publication SP 800-207 (a sketch of how several of these tenets might combine in a per-request access decision follows the list):

  1. All data sources and computing services are considered resources
  2. All communication is secured regardless of location
  3. Access to individual enterprise resources is granted on a per-session basis
  4. Access to resources is determined by dynamic policy
  5. All owned and associated devices are in the most secure state possible
  6. All resource authentication and authorization are dynamic and strictly enforced
  7. Collect as much information as possible on the current state of network infrastructure to improve security posture
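
The sketch below shows how several of these tenets might combine into a single per-request access decision. The attribute names, thresholds, and posture signal are illustrative assumptions and are not drawn from SP 800-207.

```python
# Hedged sketch of a per-request access decision combining tenets 3, 4, 5, and 6.
# Attribute names, thresholds, and posture handling are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class RequestContext:
    subject_authenticated: bool     # tenet 6: authentication is re-checked per request
    device_posture_ok: bool         # tenet 5: device in an acceptable security state
    resource_sensitivity: int       # 1 (low) to 3 (high)
    subject_clearance: int          # 1 (low) to 3 (high)

def authorize(ctx: RequestContext) -> bool:
    """Grant access for this session only (tenet 3), based on dynamic policy (tenet 4)."""
    if not ctx.subject_authenticated:
        return False                # never trust a stale or missing authentication
    if not ctx.device_posture_ok:
        return False                # dynamic policy: posture failure revokes access
    return ctx.subject_clearance >= ctx.resource_sensitivity

# Example: an authenticated, well-postured subject accessing a medium-sensitivity resource.
print(authorize(RequestContext(True, True, resource_sensitivity=2, subject_clearance=3)))
```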

An objective of the Lockheed Kill Chain is to proactively detect threats. The tenets of zero trust aid in prevention and detection along the phases of the kill chain.

Lockheed’s Cyber Kill Chain:

  1. Reconnaissance: Harvesting email addresses, conference information, network data
  2. Weaponization: Coupling exploit with backdoor into deliverable payload
  3. Delivery: Delivering weaponized bundle to the victim via email, web, USB, and so on
  4. Exploitation: Exploiting a vulnerability to execute code on victim’s system
  5. Installation: Installing malware on the asset
  6. Command & Control (C2): Command channel for remote manipulation of victim
  7. Actions on Objectives: With hands on keyboard access, intruders accomplish their original goals

Lockheed Kill Chain mapped to the NIST zero trust tenets (numbers refer to the tenets listed above):

  1. Reconnaissance
     Tenet 1 — Inventory and monitoring of all assets
     Tenet 2 — Encryption to limit information gathering
     Tenet 7 — Detection of unusual behaviours with log analysis and advanced AI/ML capabilities
  2. Weaponization
     No tenet mapped
  3. Delivery
     Tenet 5 — Increases the difficulty of a successful delivery, as only authorized code and communications are permitted
  4. Exploitation
     Tenet 3 — Access granted on a per-session basis to limit the scope of an attack
     Tenet 4 — Dynamic policy may be used to remove an attacker's access, such as when a posture assessment fails
     Tenet 6 — Dynamic authentication prevents the attacker from remaining if authentication fails on retry
     Tenet 7 — Detection of the exploit through log analysis
  5. Installation
     Tenet 3 — Access granted on a per-session basis to limit the scope of an attack
     Tenet 4 — Dynamic policy may be used to remove an attacker's access, such as when a posture assessment fails
     Tenet 5 — Prevents unauthorized software or firmware from executing
     Tenet 6 — Dynamic authentication prevents the attacker from remaining if authentication fails on retry
     Tenet 7 — Detection of the installation through log analysis
  6. Command & Control (C2)
     Tenet 5 — Prevents unauthorized communication on systems and the network
     Tenet 7 — Detection of anomalous behaviour on the network
  7. Actions on Objectives
     Tenet 5 — Prevents unauthorized communication on systems and the network
     Tenet 7 — Detection of anomalous behaviours on systems and the network

Kathleen Moriarty is Chief Technology Officer at Center for Internet Security and former IETF Security Area Director. She has more than two decades of experience working on ecosystems, standards, and strategy.

This post was first published on the Center for Internet Security Blog.

The views expressed by the authors of this blog are their own and do not necessarily reflect the views of APNIC. Please note a Code of Conduct applies to this blog.
