
Threat Capability (TCap) Rubric

  • Writer: FAIR INTEL
  • 6 days ago
  • 5 min read

Overview

Threat Capability (TCap) measures the ability of a threat actor to successfully execute an attack against an organization's assets. Per Open FAIR methodology, TCap is assessed across three factors:

  1. Resources - Tool diversity, operational duration, exploit acquisition, and custom development capability

  2. Skills/Expertise - Technical knowledge, experience, and development capabilities

  3. Access - Ability to reach targets through vulnerabilities, tools, and positioning

Each factor is scored independently using the rubric below, then averaged to produce the overall TCap estimate. Scores are expressed as decimal ranges (e.g., 0.60-0.80) for use in downstream FAIR calculations.


Resources Rubric

Tier | Range | Observable Criteria
--- | --- | ---
Very High | 0.80-1.00 | Zero-day acquisition demonstrated. Multiple custom malware families. Operational duration exceeding 5 years. Extensive CVE exploitation (5+). Sophisticated custom tooling with active development.
High | 0.60-0.80 | Multiple custom malware families or frameworks. Operational duration of 2-5 years. Multiple CVEs exploited (3-5). Custom code with evidence of iteration or versioning.
Moderate | 0.40-0.60 | Mix of custom and public tools. Operational duration of 1-2 years. Limited CVE exploitation (1-2). Some custom code or significant modification of public tools.
Low | 0.20-0.40 | Primarily public tools with minor modifications. Operational duration under 1 year. Single CVE or no exploit use. Minimal custom code.
Very Low | 0.00-0.20 | Off-the-shelf tools only. No operational history. No CVE exploitation. No custom code.

Evidence considerations:

  • Tool diversity indicates development capacity; multiple custom families suggest greater resources than single-tool operations

  • Operational duration reflects sustained capability; longer operations indicate consistent resource availability

  • Zero-day acquisition is expensive and rare; its presence strongly indicates significant resources

  • CVE count reflects breadth of exploit capability; more CVEs suggest dedicated vulnerability research or acquisition

  • Custom code versus public tools indicates investment in development


Skills/Expertise Rubric

Tier | Range | Observable Criteria
--- | --- | ---
Very High | 0.80-1.00 | Zero-day vulnerability discovery and exploit development. Novel techniques not previously documented in threat intelligence. Kernel-level or firmware capabilities. Advanced cryptographic implementations. Ability to defeat state-of-the-art defenses consistently.
High | 0.60-0.80 | Custom malware development with original code. Multiple evasion techniques implemented (e.g., AMSI bypass, ETW disabling, in-memory execution). Multi-environment expertise (e.g., Windows, Linux, cloud, mobile). Advanced obfuscation beyond publicly available tools. Ability to adapt tooling to evade specific defenses.
Moderate | 0.40-0.60 | Modifying and extending existing tools for specific purposes. Known evasion techniques implemented from public research. Single-environment focus with competent execution. Basic obfuscation using available packers or encoders. Can maintain persistent access but limited adaptation to defender response.
Low | 0.20-0.40 | Using public tools with minor configuration changes. Limited evasion capability, easily detected by updated signatures. Script-level capability without deeper system understanding. Relies on targets having weak or outdated defenses.
Very Low | 0.00-0.20 | Copy/paste attacks using tutorials or leaked tools. No tool modification capability. No evasion techniques employed. Attacks succeed only against unprotected or misconfigured targets.

Evidence considerations:

  • Zero-day usage indicates Very High; N-day exploit usage (known vulnerabilities) indicates High at most

  • Multi-environment frameworks suggest High expertise; single-platform tools may be Moderate

  • Evasion technique count and sophistication matter; a single known bypass indicates Moderate, while multiple layered techniques indicate High

  • Obfuscation quality varies; commercial packers alone are Moderate, custom obfuscation layers suggest High


Access Rubric

Tier | Range | Observable Criteria
--- | --- | ---
Very High | 0.80-1.00 | Zero-day vulnerabilities for initial access. Supply chain compromise capability. Insider access or recruitment. Positioning within critical infrastructure or trusted vendors. Ability to compromise certificate authorities or trusted signing processes.
High | 0.60-0.80 | Stolen code-signing certificates from legitimate organizations. Watering-hole attacks on major or sector-specific websites. N-day exploits for recent vulnerabilities. Broad target access through multiple vectors (phishing, web compromise, LOLBins). Demonstrated ability to compromise government or enterprise targets.
Moderate | 0.40-0.60 | Public exploits for known vulnerabilities. Limited website compromise for watering-hole positioning. Purchased or leaked credentials. Targeted phishing with moderate success rate. Single or few access vectors employed.
Low | 0.20-0.40 | Relies on widely known, often-patched vulnerabilities. Single access vector with limited flexibility. Opportunistic targeting based on exposed attack surface. Success depends on target having poor security hygiene.
Very Low | 0.00-0.20 | No special access capabilities. Relies entirely on user error (clicking links, opening attachments). Targets only public-facing services with default configurations. No ability to bypass even basic security controls.

Evidence considerations:

  • Stolen certificates indicate High; the source and method of acquisition may push toward Moderate (opportunistic) or stay High (established channel)

  • Watering-hole positioning on government or high-traffic sites indicates High; compromising low-traffic or personal sites is Moderate

  • Exploit age matters; CVEs from the current year suggest High, while CVEs 3+ years old suggest Moderate at best

  • Multiple access vectors (phishing + watering-hole + LOLBins) indicates High; single vector is typically Moderate


Calculating TCap

Step 1: Score Each Factor

For each of the three factors (Resources, Skills/Expertise, Access):

  1. Review available evidence from threat intelligence

  2. Map evidence to rubric criteria

  3. Select the tier that best matches the evidence

  4. Document the rubric match and supporting evidence
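The per-factor documentation in Step 1 can be captured in a small record. This is a minimal sketch; the class and field names are illustrative assumptions, not structures defined by the Open FAIR standard.

```python
from dataclasses import dataclass, field

# Illustrative record for one factor assessment; field names are
# assumptions, not part of the Open FAIR standard.
@dataclass
class FactorScore:
    factor: str    # "Resources", "Skills/Expertise", or "Access"
    tier: str      # rubric tier selected in step 3
    low: float     # lower bound of the tier range
    high: float    # upper bound of the tier range
    evidence: list[str] = field(default_factory=list)  # supporting evidence (step 4)

# Example assessment for the Resources factor
resources = FactorScore(
    factor="Resources",
    tier="Moderate",
    low=0.40,
    high=0.60,
    evidence=["Mix of custom and public tools", "1-2 CVEs exploited"],
)
```

Keeping the evidence list alongside the tier makes the rubric match auditable when the estimate is revisited.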

Step 2: Calculate Average

TCap uses the full tier range, not point estimates:

TCap Low = (Resources Low + Skills Low + Access Low) / 3
TCap High = (Resources High + Skills High + Access High) / 3

Example:

  • Resources: Moderate (0.40-0.60)

  • Skills/Expertise: High (0.60-0.80)

  • Access: High (0.60-0.80)

TCap Low = (0.40 + 0.60 + 0.60) / 3 = 0.53
TCap High = (0.60 + 0.80 + 0.80) / 3 = 0.73
TCap Estimate: 0.53-0.73
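The averaging in Step 2 can be sketched as a short helper. The tier names and ranges mirror the rubric tables above; the function name is illustrative, not part of the Open FAIR standard.

```python
# Tier ranges as defined in the rubric tables above
TIER_RANGES = {
    "Very Low": (0.00, 0.20),
    "Low": (0.20, 0.40),
    "Moderate": (0.40, 0.60),
    "High": (0.60, 0.80),
    "Very High": (0.80, 1.00),
}

def tcap_range(resources: str, skills: str, access: str) -> tuple[float, float]:
    """Average the low and high bounds of the three factor tiers."""
    tiers = [TIER_RANGES[t] for t in (resources, skills, access)]
    low = sum(lo for lo, _ in tiers) / 3
    high = sum(hi for _, hi in tiers) / 3
    return round(low, 2), round(high, 2)

# The worked example above: Moderate, High, High
print(tcap_range("Moderate", "High", "High"))  # (0.53, 0.73)
```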

Step 3: Interpret the Result

The final range may span two tiers. This is expected and reflects honest uncertainty:

Result Range | Interpretation
--- | ---
Falls within one tier | Strong evidence alignment; high confidence
Spans two adjacent tiers | Mixed evidence; moderate confidence
Spans three+ tiers | Insufficient evidence; consider gathering more intelligence
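The interpretation above can be sketched by counting how many of the five 0.20-wide tiers the averaged range touches. The function names and the epsilon handling of tier boundaries are assumptions for illustration, not part of the Open FAIR standard.

```python
def tiers_spanned(low: float, high: float) -> int:
    """Number of 0.20-wide tiers that the range [low, high] overlaps."""
    eps = 1e-9  # nudge values off tier boundaries (0.20, 0.40, ...) before bucketing
    first = min(int((low + eps) / 0.20), 4)   # tier index of the low bound (0-4)
    last = min(int((high - eps) / 0.20), 4)   # tier index of the high bound (0-4)
    return last - first + 1

def interpret(low: float, high: float) -> str:
    n = tiers_spanned(low, high)
    if n == 1:
        return "Strong evidence alignment; high confidence"
    if n == 2:
        return "Mixed evidence; moderate confidence"
    return "Insufficient evidence; consider gathering more intelligence"

# The worked example (0.53-0.73) spans Moderate and High:
print(interpret(0.53, 0.73))  # Mixed evidence; moderate confidence
```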

Handling Insufficient Evidence

When threat intelligence does not provide sufficient evidence to map to rubric criteria:

Step | Action
--- | ---
1 | Identify which rubric criteria cannot be assessed
2 | Document the specific evidence gap
3 | Default to Moderate tier as baseline
4 | Mark estimate as [LOW CONFIDENCE]
5 | Note impact on analysis reliability
6 | Recommend update when additional intelligence becomes available

Why Default to Moderate?

  • Moderate represents the mathematical center of the scale

  • Avoids overstating (High/Very High) or understating (Low/Very Low) risk

  • Provides a consistent, repeatable baseline across analyses

  • Allows calculations to proceed while flagging uncertainty


When NOT to Default

Do not default to Moderate if:

  • Evidence explicitly indicates a different tier (even if incomplete)

  • Partial evidence strongly suggests High or Low

  • The analysis would be misleading with a Moderate estimate

In these cases, use the best available evidence and document the uncertainty.


Partial Scoring

For TCap specifically, if evidence is available for some factors but not others:

  1. Score the factors with sufficient evidence

  2. Default missing factors to Moderate (0.40-0.60)

  3. Mark the overall TCap as [LOW CONFIDENCE]

  4. Document which factors were defaulted

Example:

  • Resources: High (0.60-0.80) - evidence available

  • Skills/Expertise: Moderate (0.40-0.60) [DEFAULTED] - no tooling analysis in report

  • Access: High (0.60-0.80) - evidence available

TCap Low = (0.60 + 0.40 + 0.60) / 3 = 0.53
TCap High = (0.80 + 0.60 + 0.80) / 3 = 0.73
TCap Estimate: 0.53-0.73 [LOW CONFIDENCE]
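The partial-scoring rule can be sketched as follows: missing factors default to Moderate (0.40-0.60) and the result is flagged low confidence. The function name and return structure are illustrative assumptions, not part of the Open FAIR standard.

```python
MODERATE = (0.40, 0.60)  # default tier range for factors lacking evidence

def tcap_partial(resources=None, skills=None, access=None):
    """Each factor is a (low, high) tuple, or None when evidence is insufficient."""
    scores, defaulted = [], []
    for name, score in [("Resources", resources),
                        ("Skills/Expertise", skills),
                        ("Access", access)]:
        if score is None:
            scores.append(MODERATE)   # default missing factor to Moderate
            defaulted.append(name)    # record which factors were defaulted
        else:
            scores.append(score)
    low = round(sum(lo for lo, _ in scores) / 3, 2)
    high = round(sum(hi for _, hi in scores) / 3, 2)
    return {"range": (low, high),
            "low_confidence": bool(defaulted),
            "defaulted": defaulted}

# Worked example: no tooling analysis in the report, so Skills/Expertise defaults
result = tcap_partial(resources=(0.60, 0.80), access=(0.60, 0.80))
print(result["range"], result["defaulted"])  # (0.53, 0.73) ['Skills/Expertise']
```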

Limitations and Assumptions

  1. Evidence availability - TCap assessment quality depends on threat intelligence depth; limited reporting may underestimate or overestimate capability

  2. Attribution confidence - State-backed attribution should be weighted by confidence level; "low confidence" attribution should not drive High resource scores

  3. Point-in-time assessment - TCap reflects capability at time of analysis; threat actors evolve

  4. Averaging simplification - Equal weighting of three factors may not reflect actual contribution to capability; some attacks depend more heavily on one factor

  5. Range interpretation - The range represents uncertainty, not variability in actor capability; use the full range in downstream calculations


Version History

Version | Date | Changes
--- | --- | ---
2.0 | January 2026 | Revision
