Disclaimers
BenchmarkHQ is an independent product of Dynakai Industries LLC. We are not affiliated with, endorsed by, or licensed by any of the organizations, firms, or publications referenced in our reports. All trademarks and company names belong to their respective owners.
Benchmark figures are provided for informational purposes only and represent BenchmarkHQ's independent analysis. They are not guaranteed to be accurate and should not be the sole basis for business decisions. Users should verify critical data points against original sources.
1. Overview & philosophy
BenchmarkHQ produces independent composite benchmark analysis informed by publicly available SaaS research materials. We do not survey companies or collect primary data; instead, we review public industry materials and apply standardized definitions, inclusion rules, normalization logic, and reconciliation methods to produce our own analysis. This methodology page documents those standards and the caveats behind the member-facing outputs.
We focus on the $1–20M ARR B2B SaaS window because most free public benchmarks (VC-published reports, analyst research) cover $10M–$100M+ ARR companies. The dynamics at $1–20M ARR are meaningfully different — higher churn rates, longer CAC payback periods, lower NRR — and conflating them with mature-company benchmarks leads to bad target-setting.
2. Source categories
BenchmarkHQ may review a range of publicly available SaaS research materials, including:
- public annual survey reports
- public benchmark reports
- public finance and efficiency research
- public billing and subscription analyses
- public industry commentary used for directional context when clearly identified as such
Source mix varies by metric and reporting cycle. BenchmarkHQ does not claim affiliation with, endorsement by, or licensing from any outside organization, nor access to any nonpublic dataset, feed, or proprietary database.
Our public methodology materials describe source categories and process; we do not publish a firm-by-firm source roster or a per-metric attribution map.
3. Data inclusion criteria
Not all data from source reports is included in BenchmarkHQ. We apply the following inclusion rules to ensure data quality and relevance.
Company type
Only B2B SaaS companies are included. Consumer SaaS, marketplace businesses, hardware/software hybrids, and transactional businesses (even if software-based) are excluded. When a source does not clearly segment the relevant cohort, BenchmarkHQ may exclude the source from cohort-specific benchmarking, label the result as directional, or note the cohort limitation rather than forcing a precise cohort match.
ARR band eligibility
Data points are only included in an ARR band if the source explicitly segments by ARR range or provides sufficient disaggregation to infer band-level benchmarks. We do not extrapolate overall benchmarks (e.g., "all ARR ranges") into specific bands.
Sample size minimum
We require a minimum sample size of n ≥ 25 for a data point to be reported. Data points with n < 25 are suppressed and marked as "insufficient sample." We report sample sizes in all exports so you can weight data points appropriately.
Recency
For our quarterly reports, we include data published within the past 18 months. Older data is retained in our historical archive but not included in current benchmark calculations. This prevents stale data from diluting current benchmarks.
| Criterion | Rule | Rationale |
|---|---|---|
| Business model | B2B SaaS only | Consumer and transactional metrics are not comparable |
| ARR band segmentation | Must be explicitly segmented or inferable | No extrapolation from aggregate data |
| Sample size | n ≥ 25 per data point | Suppresses high-variance, unrepresentative data |
| Data age | Published within 18 months | SaaS benchmarks shift meaningfully year-over-year |
| Geographic bias | US-centric; non-US data labeled | Geographic market affects CAC, pricing, and growth norms |
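As a rough illustration of how these rules combine, the sketch below applies them to a candidate data point. The `SourceDataPoint` fields and thresholds expressed in code are invented for the example and are not BenchmarkHQ's internal schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SourceDataPoint:
    # Hypothetical record shape, not BenchmarkHQ's internal schema.
    business_model: str     # e.g. "b2b_saas", "consumer", "marketplace"
    arr_band: str | None    # e.g. "1-5M"; None if the source did not segment by ARR
    sample_size: int
    published: date
    us_weighted: bool       # non-US-heavy sources get a geography label in exports

MIN_SAMPLE = 25      # suppression threshold from the table above
MAX_AGE_DAYS = 548   # roughly 18 months

def include(dp: SourceDataPoint, as_of: date) -> tuple[bool, str]:
    """Return (include?, reason) for one candidate data point."""
    if dp.business_model != "b2b_saas":
        return False, "excluded: not B2B SaaS"
    if dp.arr_band is None:
        return False, "excluded: no explicit ARR-band segmentation"
    if dp.sample_size < MIN_SAMPLE:
        return False, "suppressed: insufficient sample (n < 25)"
    if (as_of - dp.published).days > MAX_AGE_DAYS:
        return False, "archived: published more than 18 months ago"
    return True, "included" if dp.us_weighted else "included (geography label applied)"
```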
4. Metric definitions & formulas
Different sources define the same metric differently. Our definitions are documented below. When a source uses a different definition, we note how we adjusted its data to conform to our standard.
Rule of 40 (source-specific variant): Some sources substitute EBITDA margin for FCF margin in the Rule of 40 calculation; when a source makes this substitution, we note it explicitly. EBITDA-based Rule of 40 typically runs 5–10 points higher than the FCF-based version for growth-stage companies, so the two variants are not directly comparable without labeling.
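For orientation, the conventional Rule of 40 adds year-over-year revenue growth (%) to a profitability margin (%); the FCF-based form uses free cash flow margin. The toy sketch below (illustrative numbers, not BenchmarkHQ data) shows why the EBITDA variant needs explicit labeling.

```python
def rule_of_40(growth_pct: float, margin_pct: float) -> float:
    # Conventional form: YoY revenue growth rate plus a profitability margin.
    return growth_pct + margin_pct

# Hypothetical growth-stage company (illustrative numbers only):
growth = 60.0          # YoY revenue growth, %
fcf_margin = -20.0     # free cash flow margin, %
ebitda_margin = -12.0  # EBITDA margin, %, typically higher than the FCF margin

print(rule_of_40(growth, fcf_margin))     # 40.0 -> labeled as FCF-based
print(rule_of_40(growth, ebitda_margin))  # 48.0 -> labeled as EBITDA-based
# The 8-point gap illustrates why the variants are not comparable without labels.
```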
Core formulas are shown here; additional metric notes appear in the glossary, report footnotes, and export metadata.
5. Normalization & reconciliation
When multiple sources report the same metric for the same ARR band, they often produce different results. This section explains how we handle conflicts.
Weighted averaging
When sources agree within ±5 percentage points (or ±5 units for non-percentage metrics), we report a weighted average. Weights are assigned based on: (1) sample size (larger sample → higher weight), (2) recency (newer data → higher weight), and (3) methodology similarity (how closely the source's definition matches our standard).
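A minimal sketch of that weighting is shown below, under the assumption of a simple multiplicative weight (sample size × recency factor × definition-similarity score); the exact scheme BenchmarkHQ uses is not reproduced here, and the field names are invented for the example.

```python
def weighted_benchmark(points: list[dict]) -> float:
    """
    points: each dict carries 'value', 'n' (sample size), 'age_months',
    and 'similarity' (0-1 score for how closely the source's definition
    matches the standard one). Field names are invented for this sketch.
    """
    def weight(p: dict) -> float:
        recency = max(0.0, 1 - p["age_months"] / 18)   # newer data -> higher weight
        return p["n"] * recency * p["similarity"]       # larger sample -> higher weight

    total = sum(weight(p) for p in points)
    return sum(weight(p) * p["value"] for p in points) / total

# Two sources reporting the same metric for the same ARR band, within ±5 points:
sources = [
    {"value": 14.0, "n": 300, "age_months": 3,  "similarity": 1.0},
    {"value": 16.5, "n": 80,  "age_months": 10, "similarity": 0.8},
]
print(round(weighted_benchmark(sources), 1))  # ~14.3, pulled toward the larger, newer source
```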
Conflict resolution
When sources disagree by more than ±5 percentage points, we apply the following resolution hierarchy (a simplified sketch appears after the list):
- Definition mismatch check — If the conflict stems from definitional differences (e.g., one source includes involuntary churn, another doesn't), we adjust the outlier source to match our standard definition and re-evaluate.
- Sample bias check — If one source has a materially different company profile (e.g., heavy enterprise bias vs. SMB-heavy), we apply a correction or flag the source as a separate data point.
- Credibility weighting — If unresolvable, we weight toward the source with the larger sample size and more transparent methodology. We document which source was de-weighted and why in the report footnotes.
- Disclosure — If sources remain in conflict after the above steps, we report both values with a note explaining the discrepancy rather than presenting a false consensus.
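The sketch below compresses this resolution path into code for illustration only. The boolean flags and the sample-size ratio used to trigger credibility weighting are assumptions made for the example; in practice these checks are analyst judgment calls, not automated tests.

```python
TOLERANCE_PP = 5.0  # sources "agree" if within ±5 percentage points

def reconcile(a: dict, b: dict) -> str:
    """
    a, b: {'value': float, 'n': int, 'standard_definition': bool,
           'comparable_profile': bool}; the flags are stand-ins for
    judgment calls about definitions and sample composition.
    """
    if abs(a["value"] - b["value"]) <= TOLERANCE_PP:
        return "within tolerance: report weighted average"
    if not (a["standard_definition"] and b["standard_definition"]):
        return "definition mismatch: adjust outlier to standard definition, re-evaluate"
    if not (a["comparable_profile"] and b["comparable_profile"]):
        return "sample bias: apply correction or flag as separate data point"
    if max(a["n"], b["n"]) >= 3 * min(a["n"], b["n"]):  # illustrative 3x threshold
        return "credibility weighting: favor larger sample, document de-weighting"
    return "unresolved: report both values with a note on the discrepancy"
```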
Percentile reporting
We report p25 (bottom quartile), p50 (median), and p75 (top quartile) rather than averages. This matters because SaaS metric distributions are typically right-skewed — the mean is pulled up by outliers and misrepresents the typical company's experience. The median is a more reliable "what does a normal company look like" signal.
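To make the skew point concrete, here is a small self-contained example with made-up numbers (not BenchmarkHQ data):

```python
import statistics

# Made-up, right-skewed sample of an NRR-style metric (%):
values = [92, 95, 97, 98, 100, 101, 102, 104, 108, 145, 180]

p25, p50, p75 = statistics.quantiles(values, n=4)   # quartile cut points
mean = statistics.fmean(values)

print(f"p25={p25:.0f}  p50={p50:.0f}  p75={p75:.0f}  mean={mean:.0f}")
# p25=97  p50=101  p75=108  mean=111: two outliers push the mean above p75,
# while the median still describes the typical company.
```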
6. Update cadence
BenchmarkHQ publishes a new benchmark report each quarter. Here's how the update cycle works:
| Timeline | Activity |
|---|---|
| Quarter-end (Day 0) | New source data reviewed for inclusion; quarterly sources incorporated as they publish |
| Days 1–14 | New data points normalized, reconciled, and incorporated into the database |
| Days 15–21 | Report drafted, reviewed, and formatted; metric changes from prior quarter flagged |
| By Day 21 | Report published. Members notified by email. |
| Ongoing | Database remains live and searchable; interim corrections applied with a change log |
Annual sources (survey-based reports) are typically published between January and April each year. When a major annual source publishes new data, we issue a supplemental update to our database, noting which benchmarks changed and by how much.
7. Limitations & caveats
We believe transparency about limitations is more valuable than projecting false confidence. Read these carefully before using the data to make decisions.
We are not a primary data source
BenchmarkHQ does not collect data directly from companies. We depend on the accuracy and representativeness of our source reports. If a source has a selection bias (e.g., companies that use a particular billing platform tend to have higher NRR), that bias may be present in our data.
Survivorship bias
Most benchmark sources survey or measure companies that are still operating. Companies that churned or shut down between data collection and publication are typically excluded. This means benchmarks likely overstate how well "typical" companies do, especially at early ARR stages where failure rates are higher.
Geographic concentration
The majority of our data sources are US-centric. Non-US SaaS companies face different pricing environments, CAC structures, and growth dynamics. We label data as "US-weighted" where relevant and note when a source has meaningful international representation.
Definition drift
Metric definitions evolve over time. What counts as "CAC" has shifted as attribution models have matured. We document our current definitions but acknowledge that year-over-year comparisons may reflect definitional changes as much as actual performance shifts.
Not financial advice
Benchmark data describes what companies have achieved historically, under varying market conditions. It is not a guarantee of what's achievable or appropriate for your company. Use it as directional context, not as a hard target. Discuss with your investors and advisors before setting formal goals.
8. Source use and attribution
BenchmarkHQ may review publicly available industry materials as inputs or context for its analysis. References on this page describe source categories and methodology only. They do not imply affiliation, endorsement, licensing, formal partnership, access to nonpublic data, or access to an outside organization's underlying database.
Where BenchmarkHQ presents a benchmark, it presents BenchmarkHQ's own standardized analysis. Outside materials may inform that analysis, but BenchmarkHQ does not represent that any outside organization sponsors, approves, or provides the member-facing output.
Any company names or trademarks that appear on this page are used descriptively only and remain the property of their respective owners.