The Crozscore algorithm is a fully automated predictive scoring system that uses artificial intelligence and machine learning techniques. It analyses large amounts of both quantitative and qualitative data to estimate the performance, user satisfaction and maturity of software solutions and compares them to their competitors, all in one simple metric on a scale of 0-100.
The purpose of the Crozscore is to improve the quality of software rankings by taking into account the likelihood that a software solution will be picked by new software buyers.
The Crozscore is category-agnostic and is calculated by comparing a product to its closest competitors and target markets. Crozdesk also has a category-specific adaptation, called the “Relevancy-Score”, which drives a product's ranking in the secondary categories it is listed in. For instance, an accounting solution with great payroll functionality might be a better fit for someone looking for a payroll product than a mediocre payroll software solution.
Ranking software can be incredibly complex. Should the product with the most sophisticated functionality be on top? The product with the happiest customers? Maybe the solution with the largest market share? Should an analyst decide which is best? Should the crowd decide?
Our approach to solving this conundrum has always been to rank products based on their likelihood of being the “best fit” for the software buyers browsing our site. To estimate this “fit”, the Crozscore looks at a wide range of metrics and adapts based on what data is available on each product in a given category or industry vertical. For instance, if we have hundreds of credible user reviews on a product, this factor will be given more weight than if there are only a few, but we always try to look at as many relevant and reliable factors as possible.
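The adaptive weighting described above can be sketched in a few lines. Everything here is illustrative: the function name, the factor names and the cap of 500 data points are assumptions for the sketch, not the actual Crozscore formula, which is proprietary.

```python
def crozscore_sketch(factors):
    """Illustrative only: combine factor scores (0-100) into one 0-100 metric.

    `factors` maps a factor name to a (score, data_volume) pair. Factors
    backed by more data receive proportionally more weight, mirroring the
    adaptive behaviour described in the text.
    """
    # Weight each factor by the amount of data behind it, capped so a
    # single factor cannot dominate entirely (the cap of 500 is invented).
    weights = {name: min(volume, 500) for name, (_, volume) in factors.items()}
    total = sum(weights.values())
    if total == 0:
        return None  # no data at all: nothing to score
    return sum(score * weights[name] for name, (score, _) in factors.items()) / total


# A product with hundreds of credible reviews: review satisfaction
# dominates the blend, as the text describes.
score = crozscore_sketch({
    "review_satisfaction": (85, 400),  # 400 credible reviews
    "buzz": (60, 50),
})
```

With 400 review data points against 50 buzz data points, the review factor pulls the blended score close to 85 rather than sitting at the midpoint of the two factors.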
This way, we can make sure our rankings are not only relevant to our audience but also very hard for vendors or their competitors to manipulate.
As with any fully-automated algorithm, the Crozscore isn’t perfect. For new products or products with limited data availability, the Crozscore extrapolates the “fit” based on what information is available. Rather than not ranking these products at all (like most other platforms), we try to be as predictive as we can. We benchmark available data against other relevant players to predictively fill the gaps.
This means, however, that scores fluctuate over time and need to be taken with a grain of salt. The Crozscore is great for gaining a quick overview of a software market and figuring out which products could be a good fit for your business. A lower score does not necessarily mean a software solution is “worse” than one with a higher score; rather, the score represents an objective view of a product’s overall performance across a spectrum of different factors, as predicted by an unbiased machine.
Since the factors taken into account are adaptive, the best way to improve your score is to ask your users to leave unbiased reviews on Crozdesk. If the Crozscore detects a large number of credible user reviews, the weight of this factor will increase during the calculation of the score; there is, however, a limit to this.
We never interfere with or manipulate the Crozscore for the benefit of any particular vendor. If you have reason to believe that our algorithm might be confusing your product with another with a similar name, or you have several products whose data may get mixed up, please let us know and we will investigate. It can occasionally happen that the wrong data is taken into account during the scoring process. While we can make sure that only the correct data sources are used, we do not interfere with the actual calculation process or the algorithm itself.
User Reviews and Satisfaction
The user reviews and satisfaction data included in the Crozscore is aggregated from a range of sources. It includes Crozdesk reviews and reviews from select external sources. The purpose of this is to gather insight into how software users feel about the solutions they are using. The weighted review average and the number of reviews a solution possesses are the largest and most important factors in the calculation of the Crozscore if sufficient review data is available. Generally, newer reviews are more important than older ones. Additionally, lengthier and more detailed reviews from credible users will also carry more weight than their shorter and “spammier” counterparts.
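As a rough illustration of how recency and detail could shape a weighted review average, here is a hypothetical weighting function. The one-year decay constant, the 200-word detail cap and the 0.5 weight floor are invented for this sketch; the actual formulas used by the Crozscore are not public.

```python
import math
from datetime import date

def review_weight(review_date, word_count, today=date(2024, 1, 1)):
    """Hypothetical weight consistent with the text: newer and more
    detailed reviews count for more. All constants are assumptions."""
    age_days = (today - review_date).days
    recency = math.exp(-age_days / 365)   # influence decays over roughly a year
    detail = min(word_count / 200, 1.0)   # cap the bonus for very long reviews
    return recency * (0.5 + 0.5 * detail)

def weighted_review_average(reviews, today=date(2024, 1, 1)):
    """reviews: list of (rating_0_to_5, review_date, word_count) tuples."""
    weights = [review_weight(d, wc, today) for _, d, wc in reviews]
    if not weights or sum(weights) == 0:
        return None
    return sum(r * w for (r, _, _), w in zip(reviews, weights)) / sum(weights)
```

Under these assumptions, a recent, detailed five-star review outweighs an old, terse one-star review, so the weighted average lands far closer to 5 than a plain mean would.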
Buzz Score

The buzz score serves as an indicator of how popular a particular software solution is. This includes estimated user numbers, market share, overall popularity, press mentions and other similar factors. The purpose of this is to estimate the mass appeal and traction of a solution and, by extension, the likelihood that it leads to a conversion of a new software buyer.
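A minimal sketch of such a popularity composite, assuming the caller has already normalised each signal to a 0-1 range; the signal names and weights are purely illustrative and not the actual buzz score computation.

```python
def buzz_score(estimated_users, market_share, press_mentions):
    """Hypothetical composite of the popularity signals listed in the text.
    Inputs are assumed pre-normalised to 0-1; weights are illustrative."""
    signals = {
        "users": (estimated_users, 0.4),
        "share": (market_share, 0.4),
        "press": (press_mentions, 0.2),
    }
    raw = sum(value * weight for value, weight in signals.values())
    return round(raw * 100, 1)  # express on the same 0-100 scale as the Crozscore
```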
Relative Recent Interest
The relative recent interest component is generated by monitoring recent online traffic, trends and social mentions. The change in this factor is measured over a three-month period to determine whether a solution is gaining or losing overall popularity.
This component is similar to the buzz score, but it measures the relative change to the status quo rather than the absolute value. For instance, a new CRM solution with incredible traction, growing its user base at triple-digit percentages every year, would get a boost through this factor even if it still has very little market share compared with the larger market leaders.
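The distinction between absolute and relative popularity can be shown with a small sketch; the function and its inputs are hypothetical, not the actual factor computation.

```python
def relative_recent_interest(traffic_now, traffic_3_months_ago):
    """Fractional change in attention over a 3-month window (sketch).
    How the real factor scales or smooths this change is not public."""
    if traffic_3_months_ago == 0:
        return 0.0 if traffic_now == 0 else float("inf")
    return (traffic_now - traffic_3_months_ago) / traffic_3_months_ago

# A small product doubling its traffic scores higher on this factor than
# a market leader growing 2%, despite far smaller absolute numbers.
small = relative_recent_interest(2_000, 1_000)       # +100%
leader = relative_recent_interest(510_000, 500_000)  # +2%
```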
Other Factors

A range of additional factors, including functionality, social, user satisfaction and traction metrics are taken into account in the scoring process in some verticals. We continuously test and improve upon our ranking algorithms and data sources to make sure they are as relevant and fair as possible.
What counts as a “good” Crozscore depends on the category or industry vertical. It is generally best to compare Crozscores among competitors in the same category.
Generally speaking, in very competitive categories a “good” Crozscore is in the range of 70 or higher. As such, most Crozdesk Market Radars only feature software solutions that fall into this range. There are, however, a few exceptions across niche markets that do not have a high enough volume to allow for effective predictive comparison.
Software solutions with a score of over 80 can be considered very effective at solving a given pain point in the market, while those scoring over 90 are generally the category “Champions”, combining great user satisfaction with a sizeable market share.
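The rules of thumb above can be summarised as a small lookup; the band labels and boundaries simply restate the article's guidance and are not formal thresholds.

```python
def score_band(crozscore):
    """Rough interpretation bands from the text; boundaries are the
    article's rules of thumb, not formal thresholds."""
    if crozscore >= 90:
        return "Champion"
    if crozscore >= 80:
        return "Very effective"
    if crozscore >= 70:
        return "Good (typical Market Radar range)"
    return "Compare within category before judging"
```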