
Loss Magnitude Rubric



Based on Cyentia Institute's Information Risk Insights Study

Cyentia Institute. "Information Risk Insights Study." 2025. https://www.cyentia.com/iris2025/.


Loss Magnitude (LM) answers a fundamental question in risk analysis: "If this threat succeeds, how much will it cost us?" Loss Magnitude combines with Loss Event Frequency to produce annualized risk: a dollar figure that enables direct comparison between different threats and informed decisions about security investments. This rubric documents the structured methodology used to calculate Loss Magnitude in the reframing analyses commonly published on this site.


What is IRIS?

The Information Risk Insights Study (IRIS) is produced by the Cyentia Institute and represents one of the largest empirical analyses of cybersecurity incident costs ever conducted. The 2025 edition analyzes over 150,000 real security incidents spanning 2008-2024, providing statistically valid loss distributions across different incident types.


Why Use Empirical Data?

Traditional loss estimation often relies on analyst intuition or arbitrary assumptions. This approach produces inconsistent results and struggles to withstand scrutiny. IRIS provides:

  • Validated benchmarks based on actual incident costs

  • Incident-type-specific values reflecting real-world cost differences

  • Min/Median/Max ranges capturing inherent uncertainty

  • Defensible figures backed by peer-reviewed research


Baseline Assumptions

All values in this framework assume a typical mid-market organization:

  • Revenue: $10M-$100M annually

  • Security posture: Average maturity (some controls, some gaps)

  • Location: US-based (regulatory environment affects fines)

  • Industry: General commercial (not highly regulated)

If the organization being analyzed differs significantly from this profile—Fortune 500 enterprise, healthcare provider, financial institution, small business—adjust expectations accordingly and note the deviation in your own analysis.


The Five-Step Process

Loss Magnitude calculation follows five sequential steps:

Step | Purpose | Quality Points
1 | Select the appropriate incident type table | 20
2 | Determine which loss categories apply | 15
3 | Apply table values with justifications | 25
4 | Calculate total Loss Magnitude | 20
5 | Calculate annualized Risk | 20
Total | | 100

Each step builds on the previous one. Errors in early steps propagate forward, making careful attention to Steps 1 and 2 particularly important. The quality points are used for the following reasons:

  • To indicate the quality of the reframed information; some artifacts analyzed on this site (e.g., data breach notifications reported through media sources) do not contain concise or detailed information about the event.

  • To allow audits that compare low-quality artifacts against high-quality artifacts.

  • To act as a filter separating low-quality artifacts and analyses from high-quality ones.



Step 1: Select Incident Type Table (20 Quality Points)


Purpose

Different incident types produce dramatically different cost profiles. A ransomware attack has a median cost of $3.2 million, while an accidental data exposure typically costs between $6,900 and $200,000. Using a single "average cyber incident" cost would produce misleading results.

This step matches the threat being analyzed to the appropriate cost table.


The Seven Incident Types


TABLE A: RANSOMWARE/EXTORTION

IRIS Benchmark: Median $3.2M, 90th percentile $27.6M

Use when the threat encrypts data and demands payment, or threatens to leak stolen data unless paid.

Indicators:

  • Ransomware deployment or encryption

  • Ransom demands or extortion notes

  • Double extortion tactics (encrypt and threaten leak)

  • Ransomware-as-a-Service infrastructure

Examples: LockBit deployment, Cl0p campaign, REvil extortion

Do NOT use for: APT espionage (even if data is stolen), intrusions without ransom demands

Primary Losses:

Loss Type | Min | Median | Max
Productivity | $300,000 | $800,000 | $3,000,000
Response | $800,000 | $2,000,000 | $8,000,000
Replacement | $100,000 | $300,000 | $1,200,000
Fines & Judgments (1°) | $50,000 | $100,000 | $500,000

Secondary Losses:

Loss Type | Min | Median | Max
Fines & Judgments (2°) | $50,000 | $150,000 | $600,000
Competitive Advantage | $100,000 | $300,000 | $1,200,000
Reputation | $100,000 | $250,000 | $1,000,000

TABLE B: SYSTEM INTRUSION

IRIS Benchmark: Median $1.3M, 90th percentile $7.4M

Use when the threat involves unauthorized access for espionage, persistent access, or data theft—without ransomware.

Indicators:

  • APT activity or nation-state attribution

  • Non-ransomware malware deployment

  • Credential harvesting or theft

  • Backdoors and command-and-control infrastructure

  • Supply chain compromise

  • Zero-day exploitation

Examples: Chinese APT campaign, SolarWinds-style attack, corporate espionage, credential harvesting operation

Do NOT use for: Ransomware (Table A), accidental exposure (Table C)

Primary Losses:

Loss Type | Min | Median | Max
Productivity | $100,000 | $300,000 | $1,000,000
Response | $300,000 | $800,000 | $3,000,000
Replacement | $50,000 | $150,000 | $600,000
Fines & Judgments (1°) | $25,000 | $50,000 | $200,000

Secondary Losses:

Loss Type | Min | Median | Max
Fines & Judgments (2°) | $25,000 | $75,000 | $300,000
Competitive Advantage | $50,000 | $150,000 | $600,000
Reputation | $50,000 | $125,000 | $500,000

TABLE C: DATA EXPOSURE (Accidental/Misconfiguration)

IRIS Benchmark: Median $6.9K-$200K, 90th percentile $1.6M

Use when data is exposed through human error or misconfiguration—not malicious external attack.

Indicators:

  • Misconfigured cloud storage (open S3 bucket)

  • Accidental email to wrong recipient

  • Lost or stolen device

  • Unprotected database exposed to internet

  • Non-malicious insider error

Examples: Open S3 bucket discovery, employee emails sensitive file to wrong person, laptop left in taxi

Do NOT use for: Malicious insider (Table F), external hacking (Table B)

Primary Losses:

Loss Type | Min | Median | Max
Productivity | $5,000 | $15,000 | $75,000
Response | $30,000 | $100,000 | $400,000
Replacement | $15,000 | $50,000 | $200,000
Fines & Judgments (1°) | $10,000 | $35,000 | $150,000

Secondary Losses:

Loss Type | Min | Median | Max
Fines & Judgments (2°) | $10,000 | $30,000 | $125,000
Competitive Advantage | $5,000 | $20,000 | $80,000
Reputation | $5,000 | $15,000 | $60,000

TABLE D: DENIAL OF SERVICE / SYSTEM FAILURE

IRIS Benchmark: Median $600K-$1M, 90th percentile $5M

Use when the primary impact is disrupting availability rather than stealing data.

Indicators:

  • DDoS attacks

  • Service disruption campaigns

  • Infrastructure targeting

  • Website defacement

  • Availability-focused attacks

Examples: DDoS against financial services, hacktivist defacement, infrastructure sabotage

Do NOT use for: Ransomware (Table A)—even though it causes downtime, use Table A

Primary Losses:

Loss Type | Min | Median | Max
Productivity | $200,000 | $500,000 | $2,000,000
Response | $150,000 | $400,000 | $1,500,000
Replacement | $25,000 | $75,000 | $300,000
Fines & Judgments (1°) | $10,000 | $25,000 | $100,000

Secondary Losses:

Loss Type | Min | Median | Max
Fines & Judgments (2°) | $10,000 | $30,000 | $125,000
Competitive Advantage | $25,000 | $75,000 | $300,000
Reputation | $50,000 | $150,000 | $600,000

TABLE E: FRAUD/SCAM (BEC, Wire Fraud)

IRIS Benchmark: Median $300K-$500K, 90th percentile $3M

Use when social engineering tricks victims into directly transferring money or assets.

Indicators:

  • Business email compromise (BEC)

  • Invoice fraud or payment redirection

  • Wire transfer scams

  • CEO fraud or executive impersonation

  • Payroll diversion

Examples: Fake vendor invoice, spoofed CEO wire transfer request, payroll diversion scam

Do NOT use for: Phishing leading to malware (Table B), ransomware payments (Table A)

Primary Losses:

Loss Type | Min | Median | Max
Productivity | $25,000 | $75,000 | $300,000
Response | $50,000 | $150,000 | $600,000
Replacement | $10,000 | $30,000 | $125,000
Fines & Judgments (1°) | $15,000 | $45,000 | $175,000

Secondary Losses:

Loss Type | Min | Median | Max
Fines & Judgments (2°) | $15,000 | $50,000 | $200,000
Competitive Advantage | $10,000 | $30,000 | $125,000
Reputation | $25,000 | $75,000 | $300,000

TABLE F: INSIDER MISUSE (Malicious)

IRIS Benchmark: Median $280K, 90th percentile $1.9M

Use when a trusted insider intentionally abuses access for unauthorized purposes.

Indicators:

  • Privilege abuse by employee or contractor

  • Data theft before departure

  • Selling access credentials

  • IT administrator sabotage

  • Intentional policy violation for personal gain

Examples: Employee stealing customer database before resigning, contractor selling access, IT admin deleting systems after termination

Do NOT use for: Accidental insider error (Table C), external attacker using stolen credentials (Table B)

Primary Losses:

Loss Type | Min | Median | Max
Productivity | $15,000 | $50,000 | $200,000
Response | $40,000 | $125,000 | $500,000
Replacement | $20,000 | $60,000 | $250,000
Fines & Judgments (1°) | $15,000 | $45,000 | $175,000

Secondary Losses:

Loss Type | Min | Median | Max
Fines & Judgments (2°) | $15,000 | $45,000 | $175,000
Competitive Advantage | $30,000 | $90,000 | $350,000
Reputation | $20,000 | $60,000 | $250,000

TABLE G: GENERAL DEFAULT (Unknown/Hybrid)

IRIS Benchmark: Median $603K, 95th percentile $32M

Use when the incident type is unclear, spans multiple categories, or doesn't fit other tables.

Indicators:

  • Vague "cyber attack" language without specifics

  • Hybrid attacks without dominant type

  • Third-party vendor breach

  • Insufficient detail to categorize

Examples: News report saying "Company X was hacked" with no details, vendor breach affecting organization

Do NOT use for: Incidents that clearly fit another table—G is the fallback, not the default

Primary Losses:

Loss Type | Min | Median | Max
Productivity | $100,000 | $275,000 | $1,100,000
Response | $200,000 | $550,000 | $2,200,000
Replacement | $40,000 | $110,000 | $450,000
Fines & Judgments (1°) | $25,000 | $65,000 | $250,000

Secondary Losses:

Loss Type | Min | Median | Max
Fines & Judgments (2°) | $25,000 | $70,000 | $275,000
Competitive Advantage | $40,000 | $115,000 | $450,000
Reputation | $35,000 | $100,000 | $400,000


Decision Guide for Ambiguous Cases

Occasionally the artifact does not contain enough information to determine which table applies. In those cases, the following guidance is consulted to reach a final ruling.

Situation | Guidance
Multiple attack types possible | Use highest-impact applicable table
Attack chain (phishing leading to ransomware) | Use end-state table (ransomware = Table A)
Vague "cyber attack" with no details | Use Table G
Third-party breach affecting organization | Use Table G
Nation-state, espionage, supply chain | Use Table B
Website defacement | Use Table D
Physical theft of devices | Use Table C
Phishing leading to malware | Use Table B
Phishing leading to wire transfer | Use Table E
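
For automated reframing pipelines, this decision guide can be expressed as a simple lookup. The sketch below is illustrative only; the situation keys and the select_table helper are assumptions made for this example, not part of the IRIS study or the rubric itself, and classifying an artifact into one of these keys still requires analyst judgment.

# Illustrative lookup for the Step 1 decision guide (keys paraphrase the guide above).
DECISION_GUIDE = {
    "ransomware_or_extortion":            "A",  # end-state table for any attack chain
    "apt_espionage_supply_chain":         "B",
    "phishing_leading_to_malware":        "B",
    "accidental_exposure_or_lost_device": "C",
    "ddos_or_defacement":                 "D",
    "phishing_leading_to_wire_transfer":  "E",
    "malicious_insider":                  "F",
    "vague_or_third_party_breach":        "G",  # fallback, not the default
}

def select_table(situation: str) -> str:
    """Map a classified situation to an incident type table (A-G)."""
    return DECISION_GUIDE.get(situation, "G")

print(select_table("phishing_leading_to_malware"))  # -> B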


Step 1 Quality Scoring Criteria

Score | Criteria
20 | Correct table selected; rationale clearly maps threat characteristics to table criteria; decision guide followed when ambiguous
15 | Correct table selected; adequate rationale with minor gaps
10 | Correct table selected; rationale weak, generic, or missing
5 | Incorrect table but rationale shows reasonable (though flawed) logic
0 | Incorrect table with no rationale, or selection contradicts threat evidence

Common Errors

  • Selecting Table A for APT campaigns without ransomware

  • Selecting Table G when a specific table clearly applies

  • Using Table B for accidental data exposure

  • Confusing phishing-as-delivery with fraud (phishing leading to malware = Table B; phishing leading to wire transfer = Table E)



Step 2: Determine Applicable Loss Categories (15 Quality Points)

Purpose

Not every incident activates every loss category. A DDoS attack causes productivity loss but probably not regulatory fines. This step identifies which cost categories apply based on the specific threat's characteristics.


The Four Identification Items

Before applying triggers, identify four characteristics from the threat analysis:

  1. Data types targeted: What data does the threat pursue? (credentials, PII, financial records, trade secrets, healthcare data, etc.)

  2. Exfiltration capability: Can the threat steal data and move it outside the organization? (yes/no)

  3. Ransomware/extortion: Does the threat involve ransom demands or extortion? (yes/no)

  4. Public disclosure likely: Will this incident probably become public knowledge? Consider: regulatory notification requirements, media interest, threat actor behavior, scope of impact. (yes/no/moderate)


Trigger Logic

The framework uses cascading triggers. Every breach activates baseline categories, and additional conditions activate additional categories:

Trigger Condition | Categories Activated
Any breach (always) | Productivity (Primary), Response (Primary)
PII or financial data targeted | + Replacement (Primary)
Regulated data OR ransomware | + Fines & Judgments (Primary)
Data exfiltration + likely disclosure | + Competitive Advantage (Secondary), Reputation (Secondary)
Ransomware/extortion specifically | + Fines & Judgments (Secondary), Reputation (Secondary)
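
The cascading triggers lend themselves to a short piece of code. The sketch below is a minimal illustration, assuming the four identification items from above are captured in a small data structure; the ThreatProfile fields and the string labels for data types are hypothetical, not defined by IRIS.

from dataclasses import dataclass

@dataclass
class ThreatProfile:
    data_types: set[str]      # e.g. {"PII", "credentials"}; labels are illustrative
    can_exfiltrate: bool      # exfiltration capability (yes/no)
    ransomware: bool          # ransom demands or extortion (yes/no)
    disclosure_likely: bool   # public disclosure probable (yes/no)

def activated_categories(t: ThreatProfile) -> dict:
    """Apply the cascading triggers and return the activated loss categories."""
    primary = ["Productivity", "Response"]            # any breach (always)
    secondary = []
    if t.data_types & {"PII", "financial"}:
        primary.append("Replacement")                 # PII or financial data targeted
    if "regulated" in t.data_types or t.ransomware:
        primary.append("Fines & Judgments (1°)")      # regulated data OR ransomware
    if t.can_exfiltrate and t.disclosure_likely:
        secondary += ["Competitive Advantage", "Reputation"]
    if t.ransomware:
        secondary += ["Fines & Judgments (2°)", "Reputation"]
    return {"primary": primary, "secondary": list(dict.fromkeys(secondary))}

# Example: double-extortion ransomware stealing PII
profile = ThreatProfile({"PII"}, can_exfiltrate=True, ransomware=True, disclosure_likely=True)
print(activated_categories(profile))

De-duplicating the secondary list matters because both the exfiltration trigger and the ransomware trigger can add Reputation.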

Understanding the Loss Categories

Primary Losses (direct costs from the incident):

  • Productivity: Revenue and output lost while systems are unavailable or employees cannot work. Includes business interruption, overtime, and delayed projects.

  • Response: Investigation, forensics, legal counsel, crisis management, remediation labor, and contractor support. This is often the largest single cost category.

  • Replacement: Notifying affected individuals, providing credit monitoring, replacing compromised credentials and certificates, and restoring data from backups.

  • Fines & Judgments (Primary): Regulatory fines, contractual penalties, and legal settlements directly resulting from the breach.

Secondary Losses (indirect and downstream costs):

  • Fines & Judgments (Secondary): Class action lawsuits, additional regulatory actions, and long-term legal exposure emerging after initial response.

  • Competitive Advantage: Loss of trade secrets, intellectual property theft impact, and damage to competitive positioning.

  • Reputation: Customer churn, brand damage, increased customer acquisition costs, stock price impact, and executive credibility loss.


Step 2 Quality Scoring Criteria

Score | Criteria
15 | All four identification items completed with evidence; triggers correctly applied; categories activated match trigger logic exactly
12 | Categories correctly activated; minor omissions in identification items
8 | Most categories correct; 1-2 errors in trigger application
4 | Multiple errors in category activation; triggers misapplied
0 | Categories activated without reference to triggers; identification items missing

Common Errors

  • Activating Secondary Fines & Judgments without ransomware/extortion present

  • Missing Replacement when PII or financial data is clearly targeted

  • Activating Reputation and Competitive Advantage without exfiltration plus likely disclosure

  • Skipping the four identification items and jumping directly to categories



Step 3: Apply Table Values (25 Quality Points)

Purpose

This step builds the actual loss calculation by taking the selected table (Step 1) and activated categories (Step 2) and documenting the specific dollar values with justifications tied to the threat.


How to Apply Values

  1. Use ONLY the table selected in Step 1

  2. For each loss category in the table:

    • If activated in Step 2: Mark "Y", use the table's Min/Median/Max values, provide justification

    • If NOT activated in Step 2: Mark "N", enter $0 for all values

  3. Justifications must reference specific evidence from the threat analysis

  4. Sum applicable rows for Primary Total and Secondary Total
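
The procedure above can be sketched in a few lines of code. This is a minimal illustration, assuming the selected table is stored as a dictionary of (Min, Median, Max) tuples; the apply_values helper and its structure are assumptions for this example, not part of the rubric.

# Table B (System Intrusion), primary losses only, copied from the table above.
TABLE_B_PRIMARY = {
    "Productivity":           (100_000, 300_000, 1_000_000),
    "Response":               (300_000, 800_000, 3_000_000),
    "Replacement":            ( 50_000, 150_000,   600_000),
    "Fines & Judgments (1°)": ( 25_000,  50_000,   200_000),
}

def apply_values(table, activated, justifications):
    """Mark each category Y/N, carry table values or $0, and sum the totals."""
    rows, totals = [], [0, 0, 0]
    for category, values in table.items():
        applies = category in activated
        row_values = values if applies else (0, 0, 0)   # $0 for non-applicable categories
        rows.append({
            "category": category,
            "applies": "Y" if applies else "N",
            "min/median/max": row_values,
            "justification": justifications.get(category, "") if applies else "",
        })
        totals = [t + v for t, v in zip(totals, row_values)]
    return rows, tuple(totals)

rows, primary_total = apply_values(
    TABLE_B_PRIMARY,
    activated={"Productivity", "Response", "Replacement"},
    justifications={"Response": "Multi-environment forensics and credential remediation"},
)
print(primary_total)  # (450000, 1250000, 4600000)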


Writing Effective Justifications

Justifications explain WHY this category applies to THIS specific threat. They should reference particular TTPs, capabilities, or characteristics from the threat analysis.

Good justification: "Multi-environment investigation required (browser extensions, MSHTA, WScript, .NET loaders); LOLBins usage complicates forensic analysis and extends remediation timeline"

Poor justification: "Response costs will be incurred"

The poor example is too generic—it applies to any incident and provides no insight into why this specific threat drives these costs.

Poor justification: [blank]

Missing justifications suggest the analyst hasn't connected the loss estimate to the actual threat.


Step 3 Quality Scoring Criteria

Score | Criteria
25 | Correct table values; Y/N markings match Step 2 exactly; threat-specific justifications for each applicable category; $0 for non-applicable categories; arithmetic correct
20 | Correct values and Y/N markings; justifications adequate but somewhat generic
15 | Correct values; minor Y/N errors or some justifications missing
10 | Correct table values but multiple Y/N errors or justifications disconnected from threat
5 | Wrong table values OR significant application errors
0 | Values fabricated, wrong table entirely, or section missing

Common Errors

  • Using values from wrong table

  • Marking "Y" for categories not activated in Step 2 (or vice versa)

  • Generic justifications not tied to specific threat

  • Arithmetic errors in totals

  • Forgetting $0 for non-applicable categories



Step 4: Calculate Total Loss Magnitude (20 Quality Points)

Purpose

This step combines Primary and Secondary totals into final Loss Magnitude figures and validates the result against IRIS benchmarks.


The Calculation

Total LM (Min)    = Primary Total (Min) + Secondary Total (Min)
Total LM (Median) = Primary Total (Median) + Secondary Total (Median)
Total LM (Max)    = Primary Total (Max) + Secondary Total (Max)
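
Continuing the hypothetical Table B example from Step 3, the roll-up is plain addition across the three columns:

# Hypothetical Table B example: primary = Productivity, Response, Replacement;
# secondary = Competitive Advantage and Reputation. Tuples are (Min, Median, Max).
primary_total   = (450_000, 1_250_000, 4_600_000)
secondary_total = (100_000,   275_000, 1_100_000)

total_lm = tuple(p + s for p, s in zip(primary_total, secondary_total))
print(total_lm)  # (550000, 1525000, 5700000)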

Required Summary Elements

After calculating totals, complete five summary elements:

  1. Table Used: Which table (A-G) and its name

  2. Primary losses driven by: Which TTPs or threat characteristics drive primary costs

  3. Secondary losses driven by: Which TTPs or threat characteristics drive secondary costs

  4. Excluded categories: Which categories were NOT activated and why

  5. Baseline validation: Does the calculated total fall within IRIS benchmark range?


Baseline Validation Reference

Table | Expected Median | Expected Maximum
A - Ransomware | ~$3.2M | Should not exceed ~$30M
B - System Intrusion | ~$1.3M | Should not exceed ~$10M
C - Data Exposure | ~$200K or less | Should not exceed ~$2M
D - Denial of Service | ~$800K | Should not exceed ~$6M
E - Fraud/Scam | ~$400K | Should not exceed ~$4M
F - Insider Misuse | ~$280K | Should not exceed ~$2M
G - General Default | ~$600K | Wide range acceptable

If calculated LM falls significantly outside these ranges, explain why (unusual threat characteristics, highly regulated industry, etc.) or revisit calculations.
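
A rough automated check against the reference table is sketched below. The benchmark dictionary mirrors the table above; the tolerance on the median (three times the expected value) is an arbitrary assumption for illustration, not an IRIS threshold.

BENCHMARKS = {  # table: (expected median, soft maximum)
    "A": (3_200_000, 30_000_000),
    "B": (1_300_000, 10_000_000),
    "C": (  200_000,  2_000_000),
    "D": (  800_000,  6_000_000),
    "E": (  400_000,  4_000_000),
    "F": (  280_000,  2_000_000),
}

def validate(table: str, lm_median: float, lm_max: float) -> str:
    """Flag totals that fall well outside the IRIS-derived sanity ranges."""
    if table == "G":
        return "Table G: wide range acceptable"
    expected_median, soft_max = BENCHMARKS[table]
    if lm_max > soft_max or lm_median > 3 * expected_median:
        return "Outside benchmark range: explain the deviation or revisit the calculation"
    return "Within benchmark range"

print(validate("B", lm_median=1_525_000, lm_max=5_700_000))  # Within benchmark range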


Step 4 Quality Scoring Criteria

Score | Criteria
20 | Arithmetic correct for all three columns; all five summary elements present; baseline validation performed
16 | Arithmetic correct; 4 of 5 summary elements present; baseline validation present
12 | Arithmetic correct; 2-3 summary elements; baseline validation missing
8 | Minor arithmetic errors; summary sparse
4 | Significant arithmetic errors; summary missing
0 | Totals incorrect or section missing

Common Errors

  • Arithmetic errors summing Primary + Secondary

  • Missing summary elements

  • Skipping baseline validation

  • Claiming validation passes when numbers clearly exceed benchmarks



Step 5: Calculate Risk (20 Quality Points)

Purpose

This step combines Loss Magnitude with Loss Event Frequency (LEF) from earlier analysis to produce annualized risk—expected dollar loss per year.


Required Inputs

From earlier analysis, you need:

  • LEF (low): Lower bound of expected successful attacks per year

  • LEF (high): Upper bound of expected successful attacks per year

From Step 4, you have:

  • LM (min): Lower bound of loss per incident

  • LM (median): Central estimate of loss per incident

  • LM (max): Upper bound of loss per incident


The Calculations

Three Scenarios:

Low Risk  = LEF (low) × LM (min)
Mid Risk  = LEF (mid) × LM (median)
High Risk = LEF (high) × LM (max)

Where: LEF (mid) = (LEF low + LEF high) / 2

Most Likely Estimate (MLE):

MLE = √(LEF low × LEF high) × LM (median)

The MLE uses geometric mean rather than arithmetic mean because loss distributions are typically lognormal (skewed right with a long tail of extreme outcomes). Geometric mean provides a better central estimate for skewed data.
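
A short worked example, assuming purely illustrative LEF bounds of 0.5 and 3 successful attacks per year and the Loss Magnitude totals from the earlier hypothetical:

from math import sqrt

lef_low, lef_high = 0.5, 3.0                                # assumed, from prior LEF analysis
lm_min, lm_median, lm_max = 550_000, 1_525_000, 5_700_000   # from Step 4

lef_mid = (lef_low + lef_high) / 2

low_risk  = lef_low  * lm_min                      # best case
mid_risk  = lef_mid  * lm_median                   # arithmetic midpoint
high_risk = lef_high * lm_max                      # worst case
mle       = sqrt(lef_low * lef_high) * lm_median   # geometric mean of LEF

print(f"Low:  ${low_risk:,.0f}/year")   # $275,000/year
print(f"Mid:  ${mid_risk:,.0f}/year")   # $2,668,750/year
print(f"High: ${high_risk:,.0f}/year")  # $17,100,000/year
print(f"MLE:  ${mle:,.0f}/year")        # about $1,867,736/year

Because the geometric mean never exceeds the arithmetic mean, the MLE will be at most the Mid scenario when both use the LM median.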


Understanding the Outputs

  • Low scenario: Best case—fewer attacks succeed, each costs less

  • High scenario: Worst case—more attacks succeed, each costs more

  • Mid scenario: Arithmetic midpoint of the range

  • MLE: Statistically most likely outcome given typical loss distributions

For decision-making:

  • Use the range to communicate uncertainty to stakeholders

  • Use MLE for budget planning and control prioritization

  • Use High for worst-case planning and insurance decisions


Step 5 Quality Scoring Criteria

Score | Criteria
20 | LEF values from prior analysis; all scenarios calculated correctly; MLE uses geometric mean formula; units include "/year"
16 | Calculations correct; MLE present but formula not shown
12 | Low/Mid/High correct; MLE missing or incorrect
8 | Arithmetic errors in 1-2 scenarios; MLE missing
4 | Multiple calculation errors
0 | Calculations wrong or section missing

Common Errors

  • Using LEF values that don't match earlier analysis

  • Arithmetic multiplication errors

  • Confusing geometric mean (MLE) with arithmetic mean (Mid)

  • Omitting "/year" units

  • Using LM median for all scenarios instead of min/median/max


Interpretation

After completing calculations, provide a one-sentence interpretation that:

  1. References the target organization profile

  2. States the key dollar figure (typically MLE)

  3. Explains what drives the risk

  4. Notes if the organization differs significantly from baseline assumptions

Example: "For a typical mid-market organization in the manufacturing sector, annualized risk exposure from this threat is approximately $2.3M, driven primarily by expected forensic investigation costs and credential remediation following multiple intrusion attempts per year."



Scoring Summary

Step | Description | Points
1 | Incident Type Table Selection | 20
2 | Loss Category Determination | 15
3 | Table Value Application | 25
4 | Total LM Calculation | 20
5 | Risk Calculation | 20
Total | | 100

Grade Scale

Score | Grade | Meaning
90-100 | Excellent | Accurate, well-justified, follows framework precisely
80-89 | Good | Minor errors or omissions; analysis is reliable
70-79 | Acceptable | Some errors but core logic sound; usable with review
60-69 | Needs Improvement | Multiple errors; requires revision
Below 60 | Unacceptable | Fundamental errors; should be redone
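
For consistency when batch-scoring artifacts, the grade boundaries can be encoded as a tiny helper; this is illustrative only and simply restates the scale above.

def grade(total_score: int) -> str:
    """Map a total quality score (0-100) to the grade scale above."""
    if total_score >= 90:
        return "Excellent"
    if total_score >= 80:
        return "Good"
    if total_score >= 70:
        return "Acceptable"
    if total_score >= 60:
        return "Needs Improvement"
    return "Unacceptable"

print(grade(85))  # Good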


Quick Reference Checklist

Step 1: Table Selection

  •  Correct table selected (A-G)

  •  Rationale provided

  •  Rationale maps threat evidence to table criteria

  •  Decision guide followed if ambiguous

Step 2: Category Determination

  •  Data types targeted identified

  •  Exfiltration capability identified (yes/no)

  •  Ransomware/extortion identified (yes/no)

  •  Public disclosure likelihood identified

  •  Triggers correctly applied

  •  Activated categories listed

Step 3: Value Application

  •  Values from correct table only

  •  Y/N markings match Step 2 activations

  •  $0 for non-applicable categories

  •  Justifications reference specific threat TTPs

  •  Primary total arithmetic correct

  •  Secondary total arithmetic correct

Step 4: Total LM Calculation

  •  Total = Primary + Secondary (all three columns)

  •  Summary: Table Used

  •  Summary: Primary losses driven by

  •  Summary: Secondary losses driven by

  •  Summary: Excluded categories

  •  Summary: Baseline validation

Step 5: Risk Calculation

  •  LEF values from prior analysis

  •  Low: LEF low × LM min

  •  Mid: LEF mid × LM median

  •  High: LEF high × LM max

  •  LEF mid = (LEF low + LEF high) / 2

  •  MLE = √(LEF low × LEF high) × LM median

  •  Units include "/year"

Interpretation

  •  References target profile

  •  States key dollar figure

  •  Explains risk drivers

  •  Notes baseline deviations if applicable


Conclusion

Loss Magnitude calculation transforms abstract threat information into concrete financial terms. By following this structured methodology with IRIS 2025 empirical data, analysts produce consistent, defensible estimates that support informed risk management decisions.

The five-step process ensures:

  • Appropriate categorization through incident type selection

  • Relevant scoping through loss category determination

  • Empirical grounding through standardized table values

  • Transparent reasoning through required justifications

  • Actionable output through annualized risk calculation

When combined with Contact Frequency, Probability of Action, Threat Capability, and Resistance Strength analysis, Loss Magnitude enables comprehensive risk quantification that translates technical threats into business impact.
