How We Score

To give context to our reviews, we include a numerical rating for every product. These ratings range from '0' to '10.0', with '0' being the worst and '10.0' being the best. We chose this system for its intuitiveness, though it may appear a bit limiting at first glance.

What, you may ask, do we do when a product comes out that is clearly better than our current '10.0'? The answer lies in our infinite scoring system, a complex but clever approach to data management that keeps our scores relevant and up to date, no matter when you visit Reviewed.com.

But before we get into the idea of infinite scoring, we should first explain how we arrive at the component scores for each individual product.

Breaking It Down: The Sub-Scores

The rating that you see on each product is the end result of many component scores, or sub-scores.

Each of these sub-scores speaks to one aspect of the product in question: in televisions, for instance, it might be a score for the contrast ratio; in dishwashers, it could be the cleaning performance during the heavy-duty wash cycle. Each product category has many sub-scores in order to address as many aspects of the product as possible. In some categories, there are upwards of 20 sub-scores.

Most of the sub-scores are based on objective data. We strive to apply the scientific method and best practices wherever possible, so our preferred way of obtaining data is with sophisticated measuring devices that yield hard numbers. Once we have this data, we compare it to data from previously tested products and normalize it against a statistical model. This prevents any one aspect of a category from dominating the score. Remember when the pixel count on camcorders increased exponentially almost overnight? Our sharpness score would have exploded if it weren't for this safety valve.
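To make the idea concrete, here is a minimal sketch of one way such normalization could work. This is not Reviewed.com's actual statistical model; it simply illustrates the "safety valve" idea by mapping a raw measurement onto the 0-10 scale via a z-score against historical data, clipped at the bounds:

```python
# Illustrative sketch only: normalize a raw measurement against
# previously tested products so one runaway metric cannot dominate.
from statistics import mean, stdev

def normalized_subscore(raw, historical, lo=0.0, hi=10.0):
    """Map a raw measurement onto a 0-10 scale using a z-score
    against historical data, clipped to the scale's bounds."""
    mu = mean(historical)
    sigma = stdev(historical) if len(historical) > 1 else 0.0
    z = (raw - mu) / sigma if sigma else 0.0
    # Center at 5.0, spread one standard deviation over 2 points, then clip.
    score = 5.0 + 2.0 * z
    return max(lo, min(hi, score))

# A camcorder whose pixel count jumps far beyond the historical range
# still caps out at 10.0 instead of "exploding" the sharpness score.
history = [2.1, 2.4, 2.0, 2.6, 2.3]  # megapixels of prior models
print(normalized_subscore(24.0, history))  # clipped to 10.0
```

With this kind of clipping, a sudden industry-wide leap in one specification nudges the sub-score to the top of its range rather than swamping every other aspect of the product.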

Scoring the 'Je Ne Sais Quoi'

While we love the hard data, we also understand that there are less tangible aspects that can very much affect the performance of products, like the touch and feel of handles, buttons, and menus. These are not as easily measured, but that doesn't mean we simply assign a score for the controls of a dishwasher based on how we feel. Instead, we break down each subjective score into smaller, less subjective fragments and base our subjective scores on the sum of those fragments. For example: does the knob feel heavy? Does the dial have good action, or does it make a cheap-sounding click?
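A tiny sketch of that fragment approach, with a hypothetical checklist (the fragment names and values are invented for illustration, not Reviewed.com's actual rubric):

```python
# Hypothetical fragment checklist for a dishwasher's controls.
# Each fragment is scored 0-10 on its own narrow question, then the
# fragments are combined into one "controls" sub-score.
fragments = {
    "knob_feels_solid": 8,
    "dial_action": 7,
    "click_sound_quality": 5,
    "menu_legibility": 9,
}

# Averaging the fragments gives a subjective sub-score that is grounded
# in several small, concrete judgments rather than one gut feeling.
controls_subscore = sum(fragments.values()) / len(fragments)
print(controls_subscore)  # 7.25
```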

Price Is Not Part of the Equation

Please note that price is NOT a part of the scoring process. Prices fluctuate dramatically over the lifetime of a product, which makes price a poor metric for scoring, and most people are already very good at filtering candidates by price on their own. Our site provides filtering tools so you can look at the scores of only the products within your price range and pick the best-scoring one for you.

Putting It Back Together

Once we have obtained all sub-scores, they need to be combined into one single score. But whenever you combine scores, you make a value judgment as to the importance of each sub-score. Is the resolution of a camera as important as the number of picture modes it has? Are the buttons on a TV as important as the image quality?

No, of course not. Some aspects are clearly more important than others; there is a hierarchy among them. To reflect that reality, we devised a set of weights for each category, giving higher weight to the aspects of a product that most strongly impact the user's experience and lower weight where the impact is only marginal. The final overall score is then obtained by multiplying each sub-score by its weight and totaling the results.
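The weighted total described above can be sketched in a few lines. The sub-scores and weights here are invented for illustration; the point is only the arithmetic:

```python
def overall_points(subscores, weights):
    """Weighted total: multiply each sub-score by its category weight
    and sum the results into one raw points figure."""
    return sum(subscores[k] * weights[k] for k in subscores)

# Hypothetical TV sub-scores and weights: image quality dominates,
# buttons barely matter.
tv_subscores = {"contrast": 9.0, "color": 8.0, "buttons": 6.0}
tv_weights   = {"contrast": 400, "color": 300, "buttons": 50}

print(overall_points(tv_subscores, tv_weights))  # 6300.0
```

Note that the result is a raw points total, not yet a 0-10 rating; turning it into a rating is the "grading on a curve" step described next.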

Grading on a Curve

Once all sub-scores are weighted and combined, we are left with an arbitrary number, say 4,721. Once upon a time, we published this number, much to our readers' confusion. We would often be asked: '4,721? Out of what?' We realized that these numbers have almost no meaning without proper context. To provide that context, we normalize the overall scores, giving the best-scoring product in each category a rating of 10.0. It's just like grading on a curve: every other product is assigned a rating based on how close its score comes to that of the top product. A product that earns only half as many overall points, in this case 2,360.5, gets a rating of 5.0.
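The curve is simple proportional scaling. A sketch using the same numbers as the example above:

```python
def rating(points, top_points):
    """Grade on a curve: the top product gets 10.0, and every other
    product is scaled by its share of the top score."""
    return round(10.0 * points / top_points, 1)

print(rating(4721, 4721))    # the top product: 10.0
print(rating(2360.5, 4721))  # half the points: 5.0
```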

10.0 Is Always the Best

Going back to the initial question, what do we do when a new product beats the current 10.0? First we celebrate that product's performance. We love it when manufacturers are able to innovate to the point where their products distinguish themselves and rise above the rest.

Once it's time to take that review live, we assign the new top product the 10.0 and readjust all other existing ratings in accordance with the new top score. The previous 10.0 might drop to a 9.9, or even lower, depending on just how much better the new 10.0 really is. All other products are affected as well, decreasing in rating accordingly.
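Mechanically, this readjustment is just re-running the curve against the new maximum. A sketch with invented product names and points:

```python
# When a new product beats the old top score, every rating is
# recomputed against the new maximum (numbers are illustrative).
def recurve(points_by_product):
    """Return 0-10 ratings curved against the current best points total."""
    top = max(points_by_product.values())
    return {name: round(10.0 * pts / top, 1)
            for name, pts in points_by_product.items()}

scores = {"OldChamp": 4721, "MidRange": 2360.5}
print(recurve(scores))    # OldChamp holds the 10.0, MidRange sits at 5.0

scores["NewChamp"] = 4800  # a better product arrives
print(recurve(scores))     # NewChamp takes the 10.0; OldChamp slips to 9.8
```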

A fortunate byproduct of this scoring methodology is that there is only a single 10.0 in every category. Technically, it's possible for two products to share the 10.0, but realistically that situation is highly unlikely, since the two products would have to have identical sub-scores. We consider this another strength of our system: it allows for only one 'best product' at any given time, rather than dozens of products with seemingly equally perfect scores, as you might see on other review sites. Our system makes it much easier for you, the reader, to pick the single best product at any given moment.

Our Scores Are Truly Infinite

Allowing new products to climb over the current '10.0' is what makes our scoring system infinite: in theory, there is no upper limit on a product's overall score. Any product can come along and earn the top spot on our list, and when one does, every other product's rating drops by a few fractions of a point. This is why repeat visitors might see a rating shift over time; chances are we discovered a new 10.0 since you last viewed that product.

Where We Shine: Static vs. Dynamic Scoring

This system has one fundamental advantage over all others: it is a clear snapshot in time of the overall performance of a product within a given category. In a category where innovation is fast, like laptops, this is especially important. In a static scoring system, where ratings don't change over time, you might be tempted to buy last year's model if it received a perfect score (be it 5 out of 5 stars, or two thumbs up) over the brand-new laptop that earned the same score. In reality, that new model is superior in every way, which means the two scores aren't truly comparable. In our dynamic system, the old product's rating would have steadily declined as newer, faster models were reviewed, reflecting its real-life performance compared to the best that is out there right now. This ensures that you have to do as little extra work as possible. After all, we're the experts, so we should be doing that work for you.

We believe our infinite scoring system represents the best approach for evaluating and comparing products across all categories of consumer electronics, appliances, and everything else we have not yet reviewed. It provides you, the reader, with an extremely powerful tool to make the best purchasing decision possible.