What determines price for cyber insurance?
Cyber is the only major commercial line where the exposure itself is technologically determined. Attack surface, threat actor capability, and defensive posture shift in months rather than decades, which breaks the stationarity assumption underpinning standard ratemaking and forces a pricing approach that looks nothing like general liability or property. A single cyber policy bundles heterogeneous coverages, including privacy liability, network security, cyber business interruption, technology errors and omissions, and ransomware, each with a distinct loss-generating mechanism and a distinct natural exposure base. This guide covers the exposure measures, rating factors, binary underwriting gates, and actuarial methods that define cyber pricing in the post-2020 market.
Cyber exposure cannot be captured by a single base. Privacy coverage scales with records, network security with endpoints, business interruption with revenue, and many filed loss costs use a composite measure such as revenue per employee.
Security controls have transitioned from pricing credits to binary insurability gates. MFA, offline backups, and endpoint detection and response (EDR) are now conditions of coverage on most cyber programs, not rating factors.
Ransomware remains the dominant loss driver in the line. The 2024 Verizon DBIR found that ransomware or some form of extortion appeared among the top threats in 92% of industries.
Vulnerability exploitation has surged as an entry vector. The 2024 DBIR recorded a 180% year-over-year increase in exploitation of vulnerabilities as the critical path to breach.
Business email compromise remains a persistent and high-value cyber loss category. The FBI IC3 2024 Annual Report recorded 21,442 BEC complaints.
Exposure measures unique to cyber
No single exposure base adequately captures cyber risk. Frameworks published by the Casualty Actuarial Society identify a vector of potential exposure measures for core and supplementary cyber coverages, organized around the loss-generating mechanism of each peril. Records exposure drives privacy liability rating, endpoint and asset counts drive network security, and revenue scales business interruption.
Filed advisory loss costs in the US market commonly use revenue per employee as a composite, simultaneously capturing scale, human-factor risk density, and implicit data intensity. This composite has no analog in traditional lines. Traditional bases fail outright: payroll has no relationship to data volume or network complexity, square footage has no relationship to digital attack surface, and a 50-person SaaS company may operate thousands of endpoints while a 5,000-employee manufacturer operates far fewer.
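A minimal sketch of how coverage-specific bases and the revenue-per-employee composite might feed a rating step. The account fields and per-unit loss costs here are invented for illustration; filed loss costs vary by carrier, industry, and jurisdiction.

```python
from dataclasses import dataclass

# Hypothetical per-unit loss costs, for illustration only.
PRIVACY_RATE_PER_1K_RECORDS = 0.40
NETSEC_RATE_PER_ENDPOINT = 1.10
BI_RATE_PER_1M_REVENUE = 95.0

@dataclass
class Account:
    revenue: float   # annual revenue, USD
    employees: int
    records: int     # regulated records held
    endpoints: int   # managed devices and servers

def exposure_bases(acct: Account) -> dict:
    """Each coverage uses its own natural exposure base; the
    revenue-per-employee composite proxies data intensity."""
    return {
        "privacy_records_k": acct.records / 1_000,
        "netsec_endpoints": acct.endpoints,
        "bi_revenue_m": acct.revenue / 1_000_000,
        "composite_rev_per_emp": acct.revenue / acct.employees,
    }

def indicated_loss_cost(acct: Account) -> float:
    b = exposure_bases(acct)
    return (b["privacy_records_k"] * PRIVACY_RATE_PER_1K_RECORDS
            + b["netsec_endpoints"] * NETSEC_RATE_PER_ENDPOINT
            + b["bi_revenue_m"] * BI_RATE_PER_1M_REVENUE)

# The 50-person SaaS firm carries far more endpoints and records
# per employee than the much larger manufacturer.
saas = Account(revenue=20e6, employees=50, records=2_000_000, endpoints=3_000)
mfg = Account(revenue=500e6, employees=5_000, records=40_000, endpoints=1_200)
```

Note that neither payroll nor headcount appears as a rate driver: the SaaS firm's composite revenue per employee is four times the manufacturer's, which is exactly the data-intensity signal the traditional bases miss.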
Rating factors that shape cyber premiums
Industry and data type
Industry classification drives both base rate and the slope of severity by record type. Healthcare, financial services, and merchants handling card data receive heightened underwriting scrutiny because of regulatory penalty exposure, the volume of regulated records, and the litigation environment. Record sensitivity modifies severity independently of volume: protected health information attracts statutory penalties under HIPAA, payment card data attracts card-brand fines under PCI-DSS contracts, and personally identifiable information attracts notification costs that vary by state.
Cause of loss and entry vector
This is where the largest quantitative relativities sit. Ransomware events drive significantly higher average severity than human-error events such as misdirected emails or staff mistakes, and entry vector matters even within ransomware: corporate-system compromise via VPN or RDP produces materially higher severity than email-borne compromise, because escalation potential is higher when an attacker lands directly on internal infrastructure. Privileged access management and MFA on remote access are priced on this basis: they are the controls that prevent escalation from the email tier to the corporate-server tier.
Infrastructure configuration
Specific technology choices on the policyholder side drive frequency in ways that have become measurable as claims data has matured. Internet-facing remote access products, certain VPN appliance vendors, and unpatched edge devices are correlated with higher ransomware frequency in published threat-research analyses. The 2024 DBIR documented a 180% year-over-year increase in vulnerability exploitation as the critical path to breach, much of it concentrated in unpatched internet-facing systems.
Security controls, now mostly gates
Multi-factor authentication on remote access, offline immutable backups with tested restoration, endpoint detection and response, privileged access management, an incident response plan, and the absence of unpatched critical vulnerabilities on internet-facing systems have all migrated from continuous pricing credits to binary underwriting gates over the 2020 to 2024 hard-market cycle. Within the remaining continuous factors, MFA quality (FIDO2 hardware keys versus push notifications), patch cadence, segmentation maturity, and security-training frequency still modulate premium.
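The structural difference between a gate and a continuous factor can be sketched as follows. The gate names and relativity values are invented for this example, not any carrier's filed rating plan: the point is that a failed gate returns no price at all, while a continuous factor modifies one.

```python
# Illustrative binary insurability gates: failing any one means
# decline or sublimit, not a higher premium.
REQUIRED_GATES = ("mfa_remote_access", "offline_backups", "edr",
                  "pam", "ir_plan", "no_unpatched_critical_cves")

# Illustrative continuous modifier, applied only after every gate
# passes: MFA quality still modulates premium within the gate.
MFA_QUALITY_FACTOR = {"fido2": 0.90, "authenticator_app": 0.97,
                      "push": 1.00, "sms": 1.08}

def price(base_premium: float, controls: dict):
    """Return a modified premium, or None when a gate fails."""
    if not all(controls.get(g) for g in REQUIRED_GATES):
        return None  # fails an insurability gate: no price exists
    factor = MFA_QUALITY_FACTOR[controls.get("mfa_quality", "push")]
    return base_premium * factor
```

An account with every gate passed and FIDO2 hardware keys prices at a 10% credit to base; the same account with EDR absent gets no quote at any premium.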
Third-party concentration
The 2024 DBIR introduced an expanded concept of third-party-involved breach, encompassing partner infrastructure, supply-chain vulnerabilities, and data custody arrangements. The methodological shift reflects how aggregation has changed: a single vendor or software-supply-chain event can drive simultaneous claims across hundreds of insureds. This is a structural change in how cyber portfolios accumulate, not just a rating factor.
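A simple sketch of why this is a portfolio problem: grouping a (hypothetical) book by shared vendor dependencies surfaces accumulations that no geographic or per-policy view would reveal. The portfolio rows and vendor names below are invented.

```python
from collections import defaultdict

# Hypothetical portfolio rows: (insured, aggregate limit, vendor set).
portfolio = [
    ("insured_a", 5_000_000, {"CloudCo", "MSP-One"}),
    ("insured_b", 2_000_000, {"CloudCo"}),
    ("insured_c", 3_000_000, {"MSP-One", "PayrollSoft"}),
    ("insured_d", 1_000_000, {"CloudCo", "PayrollSoft"}),
]

def accumulation_by_vendor(rows):
    """Sum the limit exposed to each shared dependency: the
    aggregation a geographic rollup would never surface."""
    acc = defaultdict(lambda: {"limit": 0, "insureds": 0})
    for _, limit, vendors in rows:
        for v in vendors:
            acc[v]["limit"] += limit
            acc[v]["insureds"] += 1
    return dict(acc)
```

Here a single CloudCo compromise exposes three of the four insureds at once, even though each policy was priced correctly in isolation.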
Size
Account size functions as two different products. Small and mid-market accounts are underwritten primarily on controls and industry, with frequency dominating the loss model. Large enterprise accounts have lower frequency but materially higher severity tails, and underwriters use bespoke modeling, scenario testing, and individual risk pricing rather than rate-table application. The break point varies by carrier but is typically aligned with revenue thresholds where the controls questionnaire is replaced by direct security review.
How actuaries price with cyber's non-stationarity problem
Standard methods fail in cyber because the underlying risk environment shifts faster than triangles mature. Trend selections anchored in pre-2019 data are unreliable because ransomware frequency rose sharply in the years since, the threat-actor ecosystem reorganized, and notification regimes evolved. Practitioners use methods built for non-stationary, heavy-tailed, correlated data:
Negative binomial frequency with time-varying parameters accommodates overdispersion from non-stationary rate parameters; standard Poisson misattributes structural trend to parameter noise.
Log-skew-normal or generalized Pareto severity, segmented by breach type, avoids the misspecification of pooling ransomware, BEC, and staff-mistake losses into a single lognormal.
Bayesian nowcasting for IBNR outperforms chain-ladder because it integrates temporal correlation in reporting lags, which are themselves non-stationary as regulatory notification regimes evolve.
Bühlmann credibility on quarterly or monthly observation periods captures more observations per insured than the annual accident year and adapts to intra-year structural shifts.
D-vine copulas capture intra-event dependency between first-party forensics, business interruption, regulatory fines, and third-party liability arising from a single breach.
Marked point-process and SIR-style epidemic frameworks model systemic risk with correlation structure that emerges endogenously from network topology rather than being imposed via ad hoc matrices.
Deterministic scenario-based accumulation, supported by vendor cyber catastrophe models such as CyberCube, Cyence, and Kovrr, is treated as a sensitivity tool and conversation enabler rather than a point estimator, given the absence of physical-science grounding.
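Two of the methods above can be sketched with simulated data: a Poisson-gamma mixture reproduces the overdispersion that motivates the negative binomial frequency model, and a peaks-over-threshold fit gives a generalized Pareto tail for segmented severity. All parameter values and the simulated losses are illustrative, not market data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# --- Frequency: negative binomial as a Poisson-gamma mixture.
# A gamma-distributed rate represents quarter-to-quarter drift in
# the threat environment; the resulting counts are overdispersed,
# so variance/mean > 1, unlike a stationary Poisson.
lam = rng.gamma(shape=2.0, scale=1.5, size=20_000)  # non-stationary rates
counts = rng.poisson(lam)
overdispersion = counts.var() / counts.mean()

# --- Severity: generalized Pareto fit to threshold exceedances
# (peaks over threshold), keeping the heavy tail segmented rather
# than pooled into a single lognormal.
losses = rng.lognormal(mean=12.0, sigma=1.8, size=5_000)
threshold = np.quantile(losses, 0.95)
exceedances = losses[losses > threshold] - threshold
shape, loc, scale = stats.genpareto.fit(exceedances, floc=0.0)
```

A Poisson fit to `counts` would force variance equal to the mean and push the structural trend into residual noise; the mixture keeps it in the rate parameter, which is the negative binomial's point.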
What's shaping cyber pricing now
Frequency and severity trends diverge by segment, which makes single-portfolio benchmarks unreliable. Threat actor consolidation, payment refusal among insured victims, and the rise of double-extortion tactics have reshaped the loss profile within ransomware itself. The hard-market binary gates established between 2020 and 2022 have not relaxed, even as rate adequacy has improved enough to support a softer market in some segments. For pricing teams, the implication is that controls posture and technology configuration matter more than premium level: a soft market with strict gates is structurally different from a soft market with loose underwriting.
How hx supports cyber insurance pricing
Configurable pricing logic for complex rating structures
Cyber's multi-coverage structure requires coverage-specific exposure bases, including record count for privacy liability, endpoints for network security, and revenue for business interruption, that legacy raters struggle to express. The hx Decision Engine implements these heterogeneous rating variables in native Python, with full audit trails, and lets actuaries deploy changes with version control as model assumptions evolve.
Submission triage aligned to appetite
Cyber submissions arrive with documentation that determines both insurability and pricing tier. hx Submission Triage extracts this data from unstructured broker submissions and surfaces it alongside appetite checks and indicative pricing, so underwriters can identify gaps before investing time in full analysis. Binary control gates can be applied at ingestion, auto-declining risks that fail required controls before underwriter review.
Portfolio intelligence for aggregation management
Cyber's systemic risk requires portfolio-level visibility that policy-by-policy pricing cannot provide. hx Portfolio Intelligence enables batch rating, what-if analysis, and concentration monitoring across exposures that geographic rollups do not detect: cloud-provider dependency, shared software supply chains, and managed service provider relationships. This supports both internal aggregation management and the regulatory reporting that accompanies it.
Audit trails for evolving regulatory requirements
Pricing teams need documented lineage from model assumptions to individual policy decisions. hx captures every action automatically, creating the governance trail cyber's regulatory environment demands and the change attribution required for quarterly recalibration cycles.
See how hx supports cyber underwriting.
FAQs
Why is cyber insurance pricing so different from other commercial lines?
The exposure itself shifts with technology, threat actors, and defensive posture, often in months rather than decades. That non-stationarity breaks the assumption underlying chain-ladder and other standard methods, which require relatively stable loss processes. Cyber pricing teams use shorter observation periods, segmented severity distributions, and Bayesian methods built for evolving rate parameters.
What is the difference between a rating factor and an underwriting gate?
A rating factor adjusts premium continuously based on a characteristic of the risk. An underwriting gate is a binary condition: if the risk fails it, coverage is declined or sublimited regardless of price. In cyber, controls such as MFA on remote access and offline backups have moved from rating factors to gates over the 2020 to 2024 hard-market cycle.
How do actuaries handle ransomware's heavy tail?
Most teams fit a separate severity distribution to ransomware losses rather than pooling them with other breach types. Common choices are log-skew-normal or generalized Pareto for the tail, sometimes with a thresholded peaks-over-thresholds approach. The point is to avoid the misspecification that pooling produces.
Why is third-party risk treated as a portfolio issue, not just a rating factor?
Because a single third-party event, such as a software supply-chain compromise or a managed service provider breach, can trigger correlated claims across many insureds simultaneously. Pricing the individual policy correctly does not address the portfolio-level accumulation, which is why aggregation tools and scenario testing complement individual rating in cyber.
What role do vendor cyber catastrophe models play?
CyberCube, Kovrr, Cyence, and similar models are widely used for scenario-based accumulation analysis and regulatory reporting, but most pricing teams treat them as sensitivity tools rather than point estimators. The underlying parameter uncertainty and the absence of physical-science grounding mean their outputs are best used to stress-test portfolio assumptions, not to set rates directly.
Explore hx for Cyber insurance →
This guide is part of Hyperexponential's insurance pricing resource library. For more information on how hx supports Cyber pricing, contact us.
Book a demo
Learn about our platform and its capabilities, from pricing model development to portfolio intelligence.

