In this article, you will find instructions on how to:
- Evaluate prediction performance through business and machine learning metrics.
- Use dashboards to measure prediction performance.
Note: You can find instructions on how to prepare an example dashboard here.
- Analyze performance.
Performance evaluation methods
In general, you can analyze and evaluate prediction results from two angles: business and machine learning. Both can be evaluated using Synerise.
Dashboards and other analytical tools
What really matters is whether a certain action brings uplift to your KPIs, such as conversion or CTR rates. It always makes sense to evaluate predictions against the ground truth.
For example, after creating a prediction that forecasts whether a certain group of customers will make a transaction in the next days:
- Wait several days.
- Compare the number of purchases for those customers who were labeled as low with a randomly selected control group.
This way, you can verify whether the model predicts the phenomenon correctly.
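The comparison above can be sketched as a simple uplift calculation. This is a minimal illustration, not part of Synerise; the group sizes and purchase counts are assumptions chosen only to show the arithmetic.

```python
# Hypothetical example: compare the conversion rate of the predicted group
# against a randomly selected control group and compute the relative uplift.
def conversion_rate(purchases: int, group_size: int) -> float:
    """Share of customers in the group who made a purchase."""
    return purchases / group_size

# Assumed numbers, for illustration only
predicted = conversion_rate(purchases=120, group_size=1000)  # 0.12
control = conversion_rate(purchases=90, group_size=1000)     # 0.09
uplift = (predicted - control) / control                     # relative uplift

print(f"predicted: {predicted:.2%}, control: {control:.2%}, uplift: {uplift:.1%}")
```

A positive uplift suggests the prediction-based targeting outperformed the random baseline; a value near zero suggests the model added little over random selection.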
Dashboards in communication
Another way to monitor model performance in terms of business metrics is to use the built-in campaign dashboards.
- Go to Communication.
- Select a channel.
- From the list, choose a communication campaign that was built for an audience based on prediction outputs.
- Switch to the tab with the statistics (for example, for the email channel, it is called Email campaign).
If you use prediction results in Automation, you can check results such as click-through rate, open rate, and more in the automation statistics view.
Machine learning and synthetic measures
In addition to business metrics, you can also evaluate the model against selected machine learning metrics. Some metrics are specific to predictions, for example: F1, Accuracy, Precision, or Mean Average Precision. To help users who are less acquainted with these metrics, there is also a synthetic metric called Model quality, which evaluates the overall effectiveness of the model, taking into account the problem it addresses.
If you selected Regression as the model type, the following metrics are shown: Mean p-value, R2, and Model quality. For all of them, the higher the better, and their maximum value is 100% (1). If you are less acquainted with the definitions of the specific metrics, we strongly recommend looking at the Model quality metric, as it is based on the most relevant metric for the specific use case.
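To make the R2 metric less abstract, here is a minimal sketch of how the coefficient of determination is computed. Synerise calculates the metric values for you; this only illustrates the formula, and the sample data is invented for the example.

```python
# Minimal sketch of R2 (coefficient of determination) for a regression model.
def r_squared(y_true, y_pred):
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))  # residual sum of squares
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)             # total sum of squares
    return 1 - ss_res / ss_tot

# A perfect prediction yields R2 = 1 (100%); values closer to 1 are better.
print(r_squared([1, 2, 3, 4], [1, 2, 3, 4]))                    # 1.0
print(round(r_squared([1, 2, 3, 4], [1.1, 2.1, 2.9, 4.2]), 3))  # 0.986
```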
If you selected Classification as the model type, the following metrics are shown: Accuracy, F1, Precision, AUC, and Model quality. For all of them, the higher the better, and their maximum value is 100% (1). If you are less acquainted with the definitions of the specific metrics, we strongly recommend looking at the Model quality metric, as it is based on the most relevant metric for the specific use case.
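The classification metrics can likewise be sketched with a few lines of code. Again, this is only an illustration of what the numbers mean; the labels below are made up, and Synerise computes the actual values for your model.

```python
# Minimal sketches of Accuracy, Precision, and F1 for a binary classifier.
def accuracy(y_true, y_pred):
    # Share of predictions that match the true labels
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def precision(y_true, y_pred):
    # Of everything predicted positive, how much really was positive
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fp)

def recall(y_true, y_pred):
    # Of everything really positive, how much was predicted positive
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn)

def f1(y_true, y_pred):
    # Harmonic mean of precision and recall
    p, r = precision(y_true, y_pred), recall(y_true, y_pred)
    return 2 * p * r / (p + r)

# Invented labels for illustration: 1 = will buy, 0 = will not buy
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
print(accuracy(y_true, y_pred), precision(y_true, y_pred), f1(y_true, y_pred))
```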
How to check whether your model is good?
There is no single way to verify the overall quality of a prediction model, either in terms of machine learning or business metrics. Nevertheless, there are standard procedures that can help you determine your model's efficiency.
The most important thing is to always find a baseline to compare against. For example, it could be the performance of the most similar campaigns. The best way to do this is by performing an ABx test.
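When comparing a test variant against a baseline, a common way to check whether the observed difference is more than noise is a two-proportion z-test. This is a general statistical sketch, not a Synerise feature, and the conversion counts and group sizes below are assumptions for illustration.

```python
import math

# Sketch of a two-proportion z-test for comparing conversion rates
# between a test variant (A) and its baseline (B).
def two_proportion_z(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Assumed numbers: 120/1000 conversions in the variant, 90/1000 in the baseline
z = two_proportion_z(conv_a=120, n_a=1000, conv_b=90, n_b=1000)
# |z| > 1.96 corresponds to significance at the 5% level (two-sided)
print(f"z = {z:.2f}, significant: {abs(z) > 1.96}")
```

If the difference is not significant, the honest conclusion is that the campaign needs more traffic or a longer run before the uplift can be trusted.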