How We Rate Products
Every product reviewed on HealthTechCheck receives an Expert Score — a weighted numerical rating that reflects how well that product performs against the criteria that matter most to the people who buy and use it. This page explains how those scores are determined, what they mean, and what their limitations are.
For information about our broader editorial standards, independence, and conflicts of interest, see our Editorial Guidelines.
Scores are category-specific, not universal
Health technology is not a single category. The criteria that determine whether an employee mental health platform is worth recommending are fundamentally different from those relevant to a remote cardiac monitoring device or a clinical decision support system.
We group products into categories based on their primary function and the type of buyer or organization they serve. A category might be something like Employer Telehealth Platforms, Remote Patient Monitoring, or Employee Mental Health Platforms — defined by what the product primarily does and who it is primarily sold to. Some products could reasonably fit more than one category. In those cases, we assign each product to the category that best reflects its primary purpose and the context in which it is most likely to be evaluated. That assignment is made before a review is written and remains consistent.
Each category has its own defined set of scoring dimensions — typically five to seven — that reflect what actually matters to buyers and users in that specific context. These dimensions are established before any review in that category is written and remain consistent across every product we evaluate within it.
Occasionally, products from different categories may appear together in a comparison — for example, in a buyer guide covering adjacent solutions. In those cases, products may technically be scored against slightly different dimensional frameworks. In practice this rarely creates a meaningful problem: products similar enough to be compared are almost always evaluated on nearly identical dimensions. And where products genuinely differ enough to belong in separate categories, it is appropriate that their scores reflect what each product is primarily designed to do.
What each dimension measures and how it is scored
Each scoring dimension is accompanied by a set of specific anchors — concrete standards that define what different score levels mean in practice. These anchors are tied to verifiable evidence, not general impressions.
A dimension measuring clinical outcomes, for example, does not award a high score because a company claims its product works. A high score requires independently published peer-reviewed research demonstrating measurable results. Company-disclosed outcome data with a disclosed methodology earns a moderate score. Marketing claims unsupported by data earn a lower one. The complete absence of published evidence is treated as meaningful information and scored accordingly.
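As a rough illustration, anchors of this kind can be thought of as a lookup from evidence tier to score band. The tier names and numeric bands in this Python sketch are hypothetical assumptions, not our actual rubric:

```python
# Hypothetical sketch of evidence-tier anchors for a clinical-outcomes
# dimension. Tier names and score bands are illustrative, not the
# actual anchors used in any category.

ANCHORS = [
    ("peer_reviewed_independent", (8, 10)),      # independent peer-reviewed research
    ("company_data_with_methodology", (5, 7)),   # disclosed data, disclosed methodology
    ("marketing_claims_only", (2, 4)),           # claims unsupported by data
    ("no_published_evidence", (0, 1)),           # absence of evidence is itself scored
]

def anchor_range(evidence_tier):
    """Return the (min, max) score band a given evidence tier anchors to."""
    for tier, band in ANCHORS:
        if tier == evidence_tier:
            return band
    raise KeyError(f"unknown evidence tier: {evidence_tier}")

print(anchor_range("company_data_with_methodology"))  # (5, 7)
```

The point of the structure is that the band is determined by the kind of evidence available, not by how persuasive the claims sound.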
Some dimensions carry automatic score caps. Where a specific, verifiable condition exists — such as a documented data breach affecting user health data within the past three years — the maximum possible score for the relevant dimension is capped regardless of the product's other strengths. These caps exist to protect scoring integrity and are applied consistently; editorial judgment still shapes the rest of the evaluation, but it does not lift a cap.
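A cap of this kind amounts to a simple post-processing step on dimension scores. The dimension names, 0-10 scale, and cap value below are illustrative assumptions, not our actual values:

```python
# Hypothetical sketch of applying an automatic score cap. The 0-10
# scale, dimension names, and cap threshold are illustrative only.

def apply_caps(dimension_scores, cap_conditions):
    """Cap each dimension's score where its triggering condition is met.

    dimension_scores: dict of dimension name -> raw score (0-10)
    cap_conditions:   dict of dimension name -> (condition_met, max_score)
    """
    capped = dict(dimension_scores)
    for dimension, (condition_met, max_score) in cap_conditions.items():
        if condition_met and dimension in capped:
            capped[dimension] = min(capped[dimension], max_score)
    return capped

raw = {"Privacy & Security": 9.0, "Clinical Outcomes": 7.5}
# e.g. a documented breach of user health data within the past three
# years triggers a cap on the privacy dimension (cap value illustrative).
caps = {"Privacy & Security": (True, 4.0)}
print(apply_caps(raw, caps))
# {'Privacy & Security': 4.0, 'Clinical Outcomes': 7.5}
```

Note that the cap is a ceiling, not a penalty: a dimension already scoring below the cap is unaffected.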
How we research each review
Our evaluations prioritize primary and verifiable sources: published clinical studies, regulatory filings and clearances, technical documentation, independent certifications, and company-disclosed data with a disclosed methodology. These sources are more reliable and more specific than aggregated user opinion, and they allow us to draw conclusions that go beyond what a reader could find quickly on their own.
Aggregated user reviews and platform ratings are considered where relevant but do not serve as the primary basis for scoring. Where our research reveals meaningful gaps in publicly available evidence, we aim to reflect this in the review rather than draw conclusions beyond what the evidence supports.
Our coverage will typically indicate the basis for our assessment, including whether direct product testing was conducted or whether the evaluation is research-based.
The weighted Expert Score
Individual dimension scores are combined into a single Expert Score using a weighted average. The weights reflect editorial judgment about what matters most to buyers making real decisions in that category — not all dimensions are equal, and the weighting is designed to reflect that. Weights are set before any review is written and remain consistent across all products in the same category.
Dimensions where rigorous independent evidence exists and where our research can go deeper than surface-level aggregation are weighted more heavily. This is an intentional editorial choice — we believe the most defensible scores are grounded in the most verifiable evidence.
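Mechanically, the Expert Score reduces to a weighted average over the category's dimensions. The dimensions, weights, and one-decimal rounding in this sketch are hypothetical examples, not the actual weights used in any category:

```python
# Illustrative sketch of combining dimension scores into a weighted
# Expert Score. Dimensions, weights, and the 0-10 scale are assumptions.

def expert_score(scores, weights):
    """Weighted average of dimension scores; weights need not sum to 1."""
    if scores.keys() != weights.keys():
        raise ValueError("every dimension needs both a score and a weight")
    total_weight = sum(weights.values())
    weighted_sum = sum(scores[d] * weights[d] for d in scores)
    return round(weighted_sum / total_weight, 1)

scores  = {"Clinical Outcomes": 8.0, "Privacy & Security": 6.0, "Usability": 9.0}
weights = {"Clinical Outcomes": 0.5, "Privacy & Security": 0.3, "Usability": 0.2}
print(expert_score(scores, weights))  # 7.6
```

Because the heavier-weighted clinical dimension scored 8.0, the result lands above the unweighted mean of roughly 7.7 would suggest for an evenly weighted rubric; shifting weight toward a weak dimension would pull the score down correspondingly.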
Human editorial judgment
Every review is written and edited by a human writer and editor before publication. Scores may be adjusted based on editorial judgment, including knowledge or context that does not appear in public documentation. The writer credited on each review takes editorial responsibility for its conclusions.
For reviews in categories with significant clinical implications, we aim to have content reviewed by a qualified healthcare professional prior to publication. Where such review has been completed, this is indicated in the article.
What our scores do not tell you
No scoring system captures everything. Our Expert Score reflects how a product performs against a defined set of criteria at a point in time. It does not account for every possible use case, organization size, or individual circumstance. Products evolve, and scores may not reflect recent changes until a review is updated.
Our scores are intended to inform purchasing decisions, not replace them. We encourage readers to use our reviews as one input alongside their own due diligence, vendor conversations, and — where relevant — guidance from qualified healthcare or benefits professionals.
Questions about a specific score
If you are reviewing a product and have a question about why it received a particular score on a specific dimension, you are welcome to contact our editorial team. We strive to respond to methodology questions within one business day, though during busy periods responses may take a little longer.

