Threat Integration: Lessons of indicator and incident exchange

By Kathleen Moriarty on 3 Oct 2022

Category: Tech matters

Within the United States (US), there are 20 industry-centric Information Sharing and Analysis Centers (ISACs), numerous other sharing alliances, and commercial vendors that curate and manage threat feeds. The ISACs are responsible for information sharing with members within their respective sectors.

The Center for Internet Security (CIS) has a combined 18 years of experience running the Multi-State and Elections Infrastructure Information Sharing and Analysis Centers (MS- and EI-ISACs). They are two of the largest industry-focused ISACs, responsible for aggregating, vetting, disseminating, and directly applying threat intelligence to aid their members. One difference is that the MS- and EI-ISACs receive federal funds to provide no-cost core cyber services to members, whereas other ISACs rely primarily on membership fees.

CIS maintains the largest cyber threat database specific to US state, local, tribal, and territorial (SLTT) governments and the election community, enriched by threat intelligence from various sources, including the US Department of Homeland Security (DHS) Intelligence & Analysis (I&A) division and other federal agencies within the broader Intelligence Community (IC). For its part, the MS-ISAC team of experts integrates and normalizes 200+ threat intelligence sources, including our own threat intelligence derived from the monitoring services that aid more than 13,000 members. These services include our nationwide deployment of Albert sensors, endpoint detection and response agents, and our no-cost web security service, Malicious Domain Blocking and Reporting (MDBR). This is an important part of the MS- and EI-ISAC mission: to provide actionable threat intelligence and effective resources to support the cybersecurity posture of the ISAC communities at large.

CIS has learned some lessons along the way that can help organizations make the best use of this threat intelligence. To explore those applications, let’s first look at what we’ve learned about detection and response over the past several decades.

A history of detection and response lessons

It’s interesting to think back to 1995-1997 when I was working at PSINet and coordinating with our Network Operation Center (NOC) team and other service providers’ NOC teams to track down the source of a denial-of-service (DoS) attack. This attack was orchestrated by a single person, a teenager, who took down a website using a SYN flood attack.

Attacks began to increase in size and intent after that, as the use of the Internet for business purposes increased. In these early days, intrusion detection happened through firewall logs before we eventually advanced to dedicated intrusion detection systems. Coordination was through mailing lists and groups like SANS. Around 1999-2000, SANS issued a challenge to assist with detecting and thwarting attacks. I responded to the challenge with a proposal that later evolved to become Real-time Inter-network Defence (RID), the first DDoS indicator exchange and attack coordination protocol.

Overall, much has been learned in the last 20+ years from the work to automate incident information and indicator sharing. This work stretches back to 2001, with the initial standards covering both sharing formats (for example, the Incident Object Description Exchange Format (IODEF)) and incident response and indicator exchange protocols (such as RID). The work on formats and exchange protocols evolved over time to meet specific use cases, including MISP, Structured Threat Information eXpression (STIX), and Trusted Automated eXchange of Intelligence Information (TAXII).
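
To make the formats concrete, the sketch below shows roughly what a STIX 2.1-style indicator for a malicious domain looks like, built here as a plain Python dictionary rather than with any particular STIX library; the domain name and all identifiers are illustrative placeholders.

```python
import json
import uuid
from datetime import datetime, timezone


def make_domain_indicator(domain: str) -> dict:
    """Build a minimal STIX 2.1-style indicator object for a malicious domain.

    The layout (type, id, timestamps, pattern) follows the general STIX 2.1
    structure; the values themselves are placeholders for illustration.
    """
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")
    return {
        "type": "indicator",
        "spec_version": "2.1",
        "id": f"indicator--{uuid.uuid4()}",
        "created": now,
        "modified": now,
        "name": f"Malicious domain: {domain}",
        "pattern": f"[domain-name:value = '{domain}']",
        "pattern_type": "stix",
        "valid_from": now,
    }


if __name__ == "__main__":
    # Serialize an example indicator as it might appear in a shared bundle.
    print(json.dumps(make_domain_indicator("malicious.example"), indent=2))
```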

An important lesson learned is that integrating shared indicators of compromise (IoCs) is very difficult when the responsibility for turning an IoC into a defensive action (or blocking rule) is distributed out to each organization. This reflects both a shortage of the technical expertise required and a mismatch in expectations, each of which impacts adoption.
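
That translation step, turning an abstract indicator into a concrete control for one organization's own stack, is where the distributed model breaks down. A minimal, hedged sketch of what each organization is being asked to do might look like the following; the iptables-style and DNS Response Policy Zone (RPZ)-style rule formats are common conventions, and the feed entries are hypothetical.

```python
from typing import Iterable


def ioc_to_rules(ioc: dict) -> list[str]:
    """Translate one indicator of compromise into candidate blocking rules.

    Only two IoC types are handled here; real feeds carry many more, and real
    deployments would push rules through change control rather than applying
    them blindly.
    """
    if ioc["type"] == "ipv4":
        # iptables-style drop rule for a known-bad source address.
        return [f"iptables -A INPUT -s {ioc['value']} -j DROP"]
    if ioc["type"] == "domain":
        # RPZ-style record that returns NXDOMAIN for a malicious domain.
        return [f"{ioc['value']} CNAME ."]
    return []  # Unknown types are skipped rather than guessed at.


def build_blocklist(feed: Iterable[dict]) -> list[str]:
    """Flatten a feed of IoCs into a de-duplicated list of rules."""
    rules: list[str] = []
    for ioc in feed:
        for rule in ioc_to_rules(ioc):
            if rule not in rules:
                rules.append(rule)
    return rules


if __name__ == "__main__":
    # Hypothetical feed entries for illustration only.
    feed = [
        {"type": "ipv4", "value": "192.0.2.10"},
        {"type": "domain", "value": "malicious.example"},
    ]
    for rule in build_blocklist(feed):
        print(rule)
```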

Significant research has been conducted to examine why protocols are adopted or fail to gain adoption. Evaluations of incident and indicator exchange protocols by researchers and practitioners were captured in the Coordinating Attack Response at Internet Scale 2 Workshop report to better understand the low adoption levels for these technologies. The primary finding matched that of other protocol research: a small, interoperable core of extensible standards is more widely adopted than a protocol developed to meet many use cases, which may then require profiles for interoperability. It is possible that Security Orchestration, Automation, and Response (SOAR) tooling will help to progress the integration and automation of threat information at the organization level, but distributed resources, including funding and expertise, are still necessary. To reach businesses of all sizes and resource levels, it is important to learn from history and focus on the main problem of mitigating threats at scale.

MS-ISAC: A lens for understanding today’s challenges

For the MS-ISAC, the following information offers insight into the challenges facing similar initiatives that are reliant upon industry-focused groups, including those that are member-financed or government-funded:

  • The industry-focused ISAC model was established by the Department of Homeland Security (DHS) in 2003.
  • There are an estimated 90,000 potential member organizations for the MS-ISAC that can opt into CIS Services. These services include a wide range of offerings aimed at improving the overall security posture of US SLTT organizations’ networks.
  • The MS-ISAC has a steadily growing number of member organizations, currently above 13,000, that use at least one of our services or products.
  • About 4,200 of our member organizations receive threat intelligence or IoC information, and about 350 of those automate the exchange using a data format and protocol (see the sketch after this list). Of those, only a handful ingest the threat intelligence, translate it into defensive actions, and apply it directly in their infrastructure in a completely automated way. This level of integration requires skilled resources, which is a hurdle for any organization struggling to find skilled labour.
  • Threat intelligence that integrates directly into CIS Services has significant reach, all while requiring no additional steps from distributed organizations that may or may not have skilled staffing to take action. Examples of fully intelligence-integrated services managed by the MS-ISAC include Albert Network Monitoring and Management, MDBR, and Endpoint Security Services. In each case, CIS supplements the indicators of compromise to add recent known threats and data. This threat intelligence is then continuously updated through vendor threat feeds and capabilities that are part of the service offering.
  • Those with fewer resources are unable to consume and use threat intelligence in their environments unless it is integrated within a product and fully managed with automated actions enabled.
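
For the roughly 350 members noted above who automate the exchange, a common pattern is to poll a TAXII collection and hand the returned STIX objects to downstream tooling. The sketch below assumes the open source taxii2-client Python package and a hypothetical collection URL and credentials; a real integration would differ in endpoints, authentication, and scale.

```python
from taxii2client.v21 import Collection

# Hypothetical TAXII 2.1 collection URL; real feeds use their own endpoints.
COLLECTION_URL = "https://taxii.example.org/api/collections/indicators/"


def fetch_indicators(url: str, user: str, password: str) -> list[dict]:
    """Poll a TAXII 2.1 collection and return only the STIX indicator objects."""
    collection = Collection(url, user=user, password=password)
    envelope = collection.get_objects()  # STIX 2.1 envelope: {"objects": [...]}
    return [obj for obj in envelope.get("objects", [])
            if obj.get("type") == "indicator"]


if __name__ == "__main__":
    # Placeholder credentials; downstream tooling would translate each
    # returned indicator pattern into a defensive action.
    for indicator in fetch_indicators(COLLECTION_URL, "member-id", "secret"):
        print(indicator["id"], indicator.get("pattern"))
```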

What’s necessary to stay protected?

CIS has learned from experience that directly integrating threat intelligence into monitoring and protection services on behalf of our members is beneficial for detecting known threats and compromised systems. In particular, the CIS team curates indicators based on threats targeting SLTT organizations. This integration by CIS eliminates the need for members to independently process, understand, and deploy this information effectively. Threat defence and response products are increasingly becoming the integrator of threat intelligence on behalf of their customers. This is a very positive step toward democratizing security, but it still leaves a gap for organizations that may not be able to afford such tools.

What if applications and operating systems applied threat intelligence more directly, eliminating the need to further disseminate this information? Vendors that build threat intelligence, or its application, directly into their products by design benefit that product's entire user base. In other words, built-in security, in which the vendor applies threat intelligence where appropriate, is most effective because it removes the need for distributed management at a company like CIS or at individual organizations.

As an example, let’s say a vendor uses threat intelligence to remove a vulnerability and updates its software through a DevSecOps Continuous Deployment process for immediate and widespread impact. Under a Continuous Deployment model, a patch would resolve the vulnerability through automated updates to all cloud instances. At that point, the related IoCs could be removed from circulation, as they would no longer apply.
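
As a hedged illustration of that last step, withdrawing an indicator once the underlying vulnerability has been patched can be as simple as marking the object revoked in the next feed publication; the STIX 2.1 `revoked` property supports exactly this, and the indicator values below are placeholders.

```python
from datetime import datetime, timezone


def revoke_indicator(indicator: dict) -> dict:
    """Mark a STIX 2.1-style indicator as revoked once its vulnerability is patched.

    Consumers that honour the 'revoked' flag stop acting on the pattern,
    effectively removing the IoC from circulation.
    """
    updated = dict(indicator)
    updated["revoked"] = True
    updated["modified"] = datetime.now(timezone.utc).strftime(
        "%Y-%m-%dT%H:%M:%S.000Z"
    )
    return updated


if __name__ == "__main__":
    # Placeholder indicator; in practice this comes from the published feed.
    indicator = {
        "type": "indicator",
        "spec_version": "2.1",
        "id": "indicator--00000000-0000-4000-8000-000000000000",
        "pattern": "[domain-name:value = 'malicious.example']",
        "pattern_type": "stix",
        "modified": "2022-10-01T00:00:00.000Z",
    }
    print(revoke_indicator(indicator))
```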

In an environment not using Continuous Delivery or Deployment, responsible disclosure processes would typically make remediation available within 90 days, with timelines extending further under traditional software update practices until all systems were patched. Alternatively, the vendor product associated with a particular IoC could be the one to enable detection, rather than requiring third-party products to perform detection across a wide range of services. The first step toward this broader goal is to ensure there is an association between vulnerability reports and the IoCs that correlate to the exploitation of each vulnerability. This could redirect the effort spent pushing threat feeds out to end customers who lack the capability to consume them toward the places where the intelligence could be more usefully applied.
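
One simple way to picture that association is a record linking a vulnerability identifier to the indicators observed exploiting it, so that remediation status determines whether those indicators still need to circulate. The structure below is purely illustrative; the CVE identifier and indicator values are placeholders.

```python
from dataclasses import dataclass, field


@dataclass
class VulnerabilityIndicators:
    """Associate a vulnerability report with the IoCs tied to its exploitation."""
    cve_id: str
    indicators: list[str] = field(default_factory=list)
    patched_everywhere: bool = False

    def active_indicators(self) -> list[str]:
        """Indicators only need to circulate while unpatched systems remain."""
        return [] if self.patched_everywhere else self.indicators


if __name__ == "__main__":
    # Placeholder CVE identifier and indicator values for illustration only.
    record = VulnerabilityIndicators(
        cve_id="CVE-0000-00000",
        indicators=["malicious.example", "192.0.2.10"],
    )
    print(record.active_indicators())   # still circulating
    record.patched_everywhere = True
    print(record.active_indicators())   # retired once remediation is universal
```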

The ‘shift left’ in threat intelligence

Solutions are beginning to emerge that integrate threat intelligence within products at the vendor, minimizing the need for each organization to have experts on staff who are capable of applying this data effectively on their network. This further ‘shifts left’ the responsibility of preventing and detecting threats to the vendor, improving over time how prevention of known threats can be managed at scale. Endpoint protection products include this capability; it is often implemented with machine learning (ML) or artificial intelligence (AI) by a group of experts at the vendor.

With the push for built-in security to align better with zero trust architectural models, this type of integration is appearing directly in products without the need for add-on security solutions. This allows decisions for a product to be made by the product team’s analysts, with a broad impact on the product’s user base. For example, Microsoft has announced that application allowlisting will be standard in Windows 11 to support remote users and unmanaged systems. Cloud or hosted solutions may also offer integrated intelligence as part of the platform offering. These trusted intelligence sources feed reliable threat metadata into the larger threat ecosystem, alleviating the need for distributed responsibility and increasing impact by providing relevant prevention and detection capabilities based on analysis of available intelligence.
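
Stripped of the ML/AI and policy machinery a vendor actually ships, application allowlisting conceptually reduces to a lookup against a set of approved hashes, as in the hedged sketch below; the allowlist contents and file handling are placeholders, not how Windows or any specific product implements it.

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist of SHA-256 hashes for approved executables.
# The single entry here is the hash of an empty file, used as a placeholder.
ALLOWED_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}


def is_allowed(executable: Path) -> bool:
    """Permit execution only if the file's SHA-256 hash is on the allowlist."""
    digest = hashlib.sha256(executable.read_bytes()).hexdigest()
    return digest in ALLOWED_HASHES


if __name__ == "__main__":
    import sys
    target = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(__file__)
    print(f"{target}: {'allowed' if is_allowed(target) else 'blocked'}")
```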

When products directly integrate threat intelligence, its application has greater reach. An example would be a product vendor using threat intelligence to remediate vulnerabilities, eliminating the need to perform detection, and then updating the threat intelligence to signal that it is no longer needed. If the threat intelligence remains helpful, could it be deployed with the product to enable preventive detection? Built-in security managed at scale is the only way the industry will be able to reduce the attack surface for organizations large and small.

Threat information at scale

Thinking through how we as an industry achieve the desired end result is necessary to ensure architectures are designed to support that result successfully. When we consider information exchange, it must be in the context of the desired end result: reducing threats. To achieve this, the exchanges must lead to effective integration of trusted threat information at scale, reducing or eliminating vulnerabilities as close to the source as possible.

Putting these lessons in context, it is safe to say that the lessons on shifting left could be applied early, while new solutions for supply chain assurance and software allowlisting are still in the design and development phase. We often design with only the scale of the engineering and delivery portion of a solution in mind, but in these cases the consumption end must also adapt to the evolving threat landscape and scale, or the solution will fail.

My next blog in this series will explore supply chain assurance and allowlisting considerations for applying what we’ve learned over the last 20+ years to exchange and integrate threat intelligence effectively.

Kathleen Moriarty is Chief Technology Officer at the Center for Internet Security and the former IETF Security Area Director. She has more than two decades of experience working on ecosystems, standards, and strategy.

This post was originally published on the CIS blog.
