Disclaimers

BenchmarkHQ is an independent product of Dynakai Industries LLC. We are not affiliated with, endorsed by, or licensed by any of the organizations, firms, or publications referenced in our reports. All trademarks and company names belong to their respective owners.

Benchmark figures are provided for informational purposes only and represent BenchmarkHQ's independent analysis. They are not guaranteed to be accurate and should not be the sole basis for business decisions. Users should verify critical data points against original sources.

On this page

  1. Overview & philosophy
  2. Source categories
  3. Data inclusion criteria
  4. Metric definitions & formulas
  5. Normalization & reconciliation
  6. Update cadence
  7. Limitations & caveats
  8. Source use and attribution

1. Overview & philosophy

BenchmarkHQ produces independent composite benchmark analysis informed by publicly available SaaS research materials. We do not collect primary company data. Our methodology explains the definitions, inclusion rules, normalization logic, and caveats used to standardize member-facing outputs.

BenchmarkHQ does not survey companies directly. Instead, it reviews publicly available industry materials and applies standardized definitions, inclusion rules, and reconciliation methods to produce its own analysis. Public methodology pages describe source categories and process, not a firm-by-firm attribution map for each metric.

Core principle: We never hide methodology behind "proprietary model" language. Every calculation, every inclusion rule, and every reconciliation decision is documented here. If something is unclear, email us.

We target the $1–20M ARR B2B SaaS window because most free public benchmarks (VC-published, analyst reports) target $10M–$100M+ ARR companies. The dynamics at $1–20M ARR are meaningfully different — higher churn rates, longer CAC payback periods, lower NRR — and conflating them with mature-company benchmarks leads to bad target-setting.

ARR segmentation: Public previews show 3 rolled-up ARR bands. Members unlock 5 finer-grained peer groups across the $1–20M range.

2. Source categories

BenchmarkHQ may review a range of publicly available SaaS research materials, such as public survey reports, public benchmark reports, public billing and subscription analyses, and public finance and efficiency research.

Source mix varies by metric and reporting cycle. BenchmarkHQ does not claim affiliation with, endorsement from, licensing by, or access to any nonpublic dataset, feed, or proprietary database of an outside organization.

Public methodology materials describe source categories and methodology rather than maintaining a firm-by-firm public source roster.


3. Data inclusion criteria

Not all data from source reports is included in BenchmarkHQ. We apply the following inclusion rules to ensure data quality and relevance.

Company type

Only B2B SaaS companies are included. Consumer SaaS, marketplace businesses, hardware/software hybrids, and transactional businesses (even if software-based) are excluded. When a source does not clearly segment the relevant cohort, BenchmarkHQ may exclude the source from cohort-specific benchmarking, label the result as directional, or note the cohort limitation rather than forcing a precise cohort match.

ARR band eligibility

Data points are only included in an ARR band if the source explicitly segments by ARR range or provides sufficient disaggregation to infer band-level benchmarks. We do not extrapolate overall benchmarks (e.g., "all ARR ranges") into specific bands.

Sample size minimum

We require a minimum sample size of n ≥ 25 for a data point to be reported. Data points with n < 25 are suppressed and marked as "insufficient sample." We report sample sizes in all exports so you can weight data points appropriately.

Recency

For our quarterly reports, we include data published within the past 18 months. Older data is retained in our historical archive but not included in current benchmark calculations. This prevents stale data from diluting current benchmarks.

| Criterion | Rule | Rationale |
| --- | --- | --- |
| Business model | B2B SaaS only | Consumer and transactional metrics are not comparable |
| ARR band segmentation | Must be explicitly segmented or inferable | No extrapolation from aggregate data |
| Sample size | n ≥ 25 per data point | Suppresses high-variance, unrepresentative data |
| Data age | Published within 18 months | SaaS benchmarks shift meaningfully year-over-year |
| Geographic bias | US-centric; non-US data labeled | Geographic market affects CAC, pricing, and growth norms |

4. Metric definitions & formulas

Different sources define the same metric differently. Our definitions are documented below. When a source uses a different definition, we note how we adjusted its data to conform to our standard.

Representative inputs for the metrics below may include public survey reports, public benchmark reports, public billing and subscription analyses, and public finance or efficiency research. Public methodology materials do not provide a firm-by-firm source list for each metric.
Net Revenue Retention (NRR)
NRR = (MRR at end of period from cohort) / (MRR at start of period from cohort)
Includes expansion, contraction, and churn from an existing customer cohort. Excludes new logo revenue. Measured over a 12-month rolling period. Some sources call this "Net Dollar Retention (NDR)" — these are equivalent. The denominator is the cohort's full starting MRR (the same denominator used for gross ARR retention); churned customers contribute zero to the numerator rather than being removed from the denominator, consistent with most institutional definitions.
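The definition above can be sketched in code. The function and sample figures are illustrative, not drawn from any source report:

```python
def net_revenue_retention(start_mrr, end_mrr):
    """NRR for a cohort over a period.

    start_mrr: customer -> MRR at period start (the full cohort).
    end_mrr: customer -> MRR at period end; churned customers simply
    have no entry, contributing $0 to the numerator. New logos
    (customers absent at the start) are excluded entirely.
    """
    cohort = start_mrr.keys()
    started = sum(start_mrr.values())
    retained = sum(end_mrr.get(c, 0.0) for c in cohort)
    return retained / started

# A expands, B contracts, C churns; D is a new logo and is ignored.
start = {"A": 100.0, "B": 100.0, "C": 100.0}
end = {"A": 150.0, "B": 80.0, "D": 200.0}
nrr = net_revenue_retention(start, end)  # (150 + 80 + 0) / 300 ≈ 0.767
```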
CAC Payback Period
CAC Payback (months) = (Total S&M spend in period) / (New ARR added in period × Gross Margin %) × 12
Expressed in months. Uses gross-margin-adjusted new ARR to account for the cost of serving new revenue. Some sources report "blended" CAC payback (including expansion), others use new-logo-only. BenchmarkHQ standardizes to a new-logo-only CAC payback framework for comparability across inputs where the source provides enough methodological detail to support that comparison. If an input cannot be standardized with confidence, it is treated as directional or excluded from direct comparison.
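A minimal sketch of the new-logo-only calculation, assuming S&M spend and new ARR cover the same period (the dollar figures are illustrative):

```python
def cac_payback_months(sm_spend, new_logo_arr, gross_margin):
    """Gross-margin-adjusted CAC payback, in months.

    sm_spend and new_logo_arr must cover the same period;
    gross_margin is a fraction (e.g. 0.75 for 75%).
    """
    monthly_gross_profit = new_logo_arr * gross_margin / 12.0
    return sm_spend / monthly_gross_profit

# $1.2M of S&M to land $1.5M of new-logo ARR at 75% gross margin:
payback = cac_payback_months(1_200_000, 1_500_000, 0.75)  # ≈ 12.8 months
```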
Gross Margin
Gross Margin % = (Revenue − COGS) / Revenue × 100
COGS includes hosting/infrastructure, customer success headcount (when directly attributed to service delivery), and third-party software costs. It excludes sales, marketing, R&D, and G&A. Capitalized software development costs are excluded from COGS per standard SaaS accounting. "Pure SaaS" gross margins (no professional services, no hardware) are reported separately from blended margins when available.
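In code, with COGS scoped as described above (figures illustrative):

```python
def gross_margin_pct(revenue, cogs):
    """Gross margin %. COGS = hosting/infrastructure, service-delivery
    customer success headcount, and third-party software; excludes
    S&M, R&D, and G&A."""
    return (revenue - cogs) / revenue * 100.0

gm = gross_margin_pct(10_000_000, 2_200_000)  # 78.0
```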
Rule of 40
Rule of 40 = ARR Growth Rate (%) + FCF Margin (%)
Primary (BenchmarkHQ standardized): ARR Growth Rate (%) + FCF Margin (%). FCF margin uses free cash flow (operating cash flow minus capex) divided by revenue. ARR growth rate is calculated year-over-year.

Alternative (source-specific variant): Some sources substitute EBITDA margin for FCF margin — when a source makes this substitution, we note it explicitly. EBITDA-based Rule of 40 typically runs 5–10 points higher than FCF-based for growth-stage companies, so the two variants are not directly comparable without labeling.
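The primary (FCF-based) variant can be sketched as follows; the figures are illustrative:

```python
def fcf_margin_pct(operating_cash_flow, capex, revenue):
    """FCF margin: (OCF - capex) / revenue, as a percentage."""
    return (operating_cash_flow - capex) / revenue * 100.0

def rule_of_40(arr_growth_pct, fcf_margin):
    """Standardized Rule of 40: YoY ARR growth % plus FCF margin %."""
    return arr_growth_pct + fcf_margin

# 35% YoY ARR growth; $1.0M OCF, $0.4M capex, $6.0M revenue:
margin = fcf_margin_pct(1_000_000, 400_000, 6_000_000)  # 10.0
score = rule_of_40(35.0, margin)  # 45.0, clearing the 40 threshold
```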
Logo Churn Rate
Annual Logo Churn = Customers churned in 12 months / Customers at start of period
Customer-count based (logos), not revenue-based. Expressed as an annual rate. Monthly churn rates from sources are annualized using the formula: Annual = 1 − (1 − Monthly)^12. Involuntary churn (failed payments) is included in some sources and excluded in others — we report each source's definition and note when involuntary churn is included.
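The annualization step above in code (the 2% monthly rate is an illustrative input):

```python
def annualize_monthly_churn(monthly_rate):
    """Annual = 1 - (1 - Monthly)^12, compounding monthly retention."""
    return 1.0 - (1.0 - monthly_rate) ** 12

# 2% monthly logo churn compounds to roughly 21.5% annually:
annual = annualize_monthly_churn(0.02)
```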
Magic Number (Sales Efficiency)
Magic Number = (New ARR in quarter) / (S&M spend in prior quarter)
Measures how much new ARR is generated per dollar of sales and marketing spend. Values > 1.0 indicate strong sales efficiency. Annualized new ARR is sometimes used instead of quarterly — when a source uses annualized new ARR, we divide it by 4 to get the quarterly equivalent. S&M spend is the reported sales and marketing operating expense line and excludes capitalized commissions where relevant.
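Both the quarterly form and the annualized-source adjustment can be sketched as (figures illustrative):

```python
def magic_number(new_arr_quarter, sm_spend_prior_quarter):
    """New ARR in a quarter per dollar of prior-quarter S&M spend."""
    return new_arr_quarter / sm_spend_prior_quarter

def magic_number_from_annualized(annualized_new_arr, sm_spend_prior_quarter):
    """When a source reports annualized new ARR, divide by 4 first."""
    return (annualized_new_arr / 4.0) / sm_spend_prior_quarter

mn = magic_number(600_000, 500_000)  # 1.2, above the 1.0 efficiency bar
```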
Burn Multiple
Burn Multiple = Net Cash Burned / Net New ARR
Measures how much cash is burned to generate each dollar of net new ARR. Lower is better. Values < 1.0 are considered efficient for growth-stage companies. This metric has gained prominence post-2022 as a key investor efficiency signal. Note: burn multiple is highly sensitive to growth rate — companies growing faster will have higher burn multiples at the same efficiency level.
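As a sketch (figures illustrative):

```python
def burn_multiple(net_cash_burned, net_new_arr):
    """Cash burned per dollar of net new ARR; lower is better."""
    return net_cash_burned / net_new_arr

# $0.8M burned to add $1.0M of net new ARR:
bm = burn_multiple(800_000, 1_000_000)  # 0.8, efficient for growth stage
```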

Core formulas are shown here; additional metric notes appear in the glossary, report footnotes, and export metadata.


5. Normalization & reconciliation

When multiple sources report the same metric for the same ARR band, they often produce different results. This section explains how we handle conflicts.

Weighted averaging

When sources agree within ±5 percentage points (or ±5 units for non-percentage metrics), we report a weighted average. Weights are assigned based on: (1) sample size (larger sample → higher weight), (2) recency (newer data → higher weight), and (3) methodology similarity (how closely the source's definition matches our standard).
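A sketch of the weighting logic. The page specifies the three weighting inputs but not how they combine, so the multiplicative scheme and the figures below are illustrative assumptions:

```python
def weighted_benchmark(points):
    """Blend agreeing sources into one value.

    points: list of (value, sample_size, recency, methodology), where
    recency and methodology are 0-1 weighting factors. Multiplying the
    three factors is an illustrative choice, not the documented formula.
    """
    weights = [n * r * m for _, n, r, m in points]
    values = [v for v, _, _, _ in points]
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# Two sources inside the +/-5-point agreement window:
blended = weighted_benchmark([
    (104.0, 300, 1.0, 1.0),   # larger sample, recent, matching definition
    (108.0, 100, 0.8, 0.9),   # smaller, older, looser definitional match
])
# blended lands between the two values, pulled toward the heavier source
```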

Conflict resolution

When sources disagree by more than ±5 percentage points, we apply the following resolution hierarchy:

  1. Definition mismatch check — If the conflict stems from definitional differences (e.g., one source includes involuntary churn, another doesn't), we adjust the outlier source to match our standard definition and re-evaluate.
  2. Sample bias check — If one source has a materially different company profile (e.g., heavy enterprise bias vs. SMB-heavy), we apply a correction or flag the source as a separate data point.
  3. Credibility weighting — If unresolvable, we weight toward the source with the larger sample size and more transparent methodology. We document which source was de-weighted and why in the report footnotes.
  4. Disclosure — If sources remain in conflict after the above steps, we report both values with a note explaining the discrepancy rather than presenting a false consensus.
Example reconciliation: When public inputs describe materially different cohort mixes for the same metric, BenchmarkHQ may segment the cohort further, exclude an input from a blended output, or present the result as directional rather than forcing a single number. Public methodology examples are illustrative and omit source names and source-specific figures.

Percentile reporting

We report p25 (bottom quartile), p50 (median), and p75 (top quartile) rather than averages. This matters because SaaS metric distributions are typically right-skewed — the mean is pulled up by outliers and misrepresents the typical company's experience. The median is a more reliable "what does a normal company look like" signal.
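The skew argument in miniature, using the standard library (the sample values are made up):

```python
import statistics

def benchmark_percentiles(values):
    """p25 / p50 / p75 summary, as reported in place of a mean."""
    q = statistics.quantiles(values, n=4, method="inclusive")
    return {"p25": q[0], "p50": q[1], "p75": q[2]}

# A right-skewed NRR-style sample: one outlier drags the mean upward.
sample = [92, 95, 98, 100, 102, 104, 106, 110, 115, 160]
pcts = benchmark_percentiles(sample)   # p50 = 103.0
mean = statistics.mean(sample)         # 108.2, above the median
```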


6. Update cadence

BenchmarkHQ publishes a new benchmark report each quarter. Here's how the update cycle works:

| Timeline | Activity |
| --- | --- |
| Quarter-end (Day 0) | New source data reviewed for inclusion; quarterly sources incorporated as they publish |
| Days 1–14 | New data points normalized, reconciled, and incorporated into the database |
| Days 15–21 | Report drafted, reviewed, and formatted; metric changes from prior quarter flagged |
| By Day 21 | Report published; members notified by email |
| Ongoing | Database remains live and searchable; interim corrections applied with change log |

Target cadence: BenchmarkHQ aims to publish quarterly reports within several weeks of quarter-end. Timing may vary based on source availability, review, and quality assurance. If timing changes materially, members will be notified.

Annual sources (survey-based reports) are typically published between January and April each year. When a major annual source publishes new data, we issue a supplemental update to our database, noting which benchmarks changed and by how much.


7. Limitations & caveats

We believe transparency about limitations is more valuable than projecting false confidence. Read these carefully before using the data to make decisions.

We are not a primary data source

BenchmarkHQ does not collect data directly from companies. We depend on the accuracy and representativeness of our source reports. If a source has a selection bias (e.g., companies that use a particular billing platform tend to have higher NRR), that bias may be present in our data.

Survivorship bias

Most benchmark sources survey or measure companies that are still operating. Companies that churned or shut down between data collection and publication are typically excluded. This means benchmarks likely overstate how well "typical" companies do, especially at early ARR stages where failure rates are higher.

Geographic concentration

The majority of our data sources are US-centric. Non-US SaaS companies face different pricing environments, CAC structures, and growth dynamics. We label data as "US-weighted" where relevant and note when a source has meaningful international representation.

Definition drift

Metric definitions evolve over time. What counts as "CAC" has shifted as attribution models have matured. We document our current definitions but acknowledge that year-over-year comparisons may reflect definitional changes as much as actual performance shifts.

Not financial advice

Benchmark data describes what companies have achieved historically, under varying market conditions. It is not a guarantee of what's achievable or appropriate for your company. Use it as directional context, not as a hard target. Discuss with your investors and advisors before setting formal goals.

Questions or corrections? If you find a methodology error, a definition that seems off, or a data point that doesn't match your experience, email us at support@benchmarkhqdata.com. We take accuracy seriously and will investigate promptly.

8. Source use and attribution

BenchmarkHQ may review publicly available industry materials as inputs or context for its analysis. References on this page describe source categories and methodology only. They do not imply affiliation, endorsement, licensing, formal partnership, access to nonpublic data, or access to an outside organization's underlying database.

Where BenchmarkHQ presents a benchmark, it presents BenchmarkHQ's own standardized analysis. Outside materials may inform that analysis, but BenchmarkHQ does not represent that any outside organization sponsors, approves, or provides the member-facing output.

Any company names or trademarks that appear on this page are used descriptively only and remain the property of their respective owners.