Performance measurement

In this article, you will find instructions on how to:

  1. Evaluate prediction performance through business and machine learning metrics.
  2. Use dashboards to measure prediction performance.
    Note: You can find instructions on how to prepare an example dashboard here.
  3. Analyze performance.

Performance evaluation methods


Prediction results can be analyzed and evaluated from two angles: business and machine learning. Both of them can be evaluated using Synerise.

Dashboards and other analytical tools

What really matters is whether a certain action brings uplift to your KPIs, such as conversion or CTR rates. It makes sense to always evaluate predictions against the ground truth.

For example, after creating a prediction that forecasts whether a certain group of customers will make a transaction in the next few days:

  1. Wait several days.
  2. Compare the number of purchases made by customers labeled as high and low probability with the purchases of a randomly selected control group.

This way, you can verify whether the model predicts the phenomenon correctly.
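The comparison above can be sketched in a few lines. This is a minimal, hypothetical example; the segment sizes and purchase counts are made-up numbers, not values from any real campaign.

```python
# Illustrative sketch: compare purchase rates of predicted segments
# against a randomly selected control group. All numbers are assumptions.

def purchase_rate(purchases: int, customers: int) -> float:
    """Share of customers in a group who made a purchase."""
    return purchases / customers

high_rate = purchase_rate(purchases=420, customers=1_000)     # labeled "high"
low_rate = purchase_rate(purchases=55, customers=1_000)       # labeled "low"
control_rate = purchase_rate(purchases=180, customers=1_000)  # random control

# Relative uplift of the "high" segment over the control group
uplift = (high_rate - control_rate) / control_rate
print(f"high: {high_rate:.1%}, low: {low_rate:.1%}, control: {control_rate:.1%}")
print(f"uplift vs control: {uplift:.1%}")
```

If the "high" segment purchases at a clearly higher rate than the control group, and the "low" segment at a clearly lower one, the model separates the two behaviors as intended.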

Dashboards in communication

Another way to monitor model performance in terms of business metrics is to use the built-in campaign dashboards.

  1. Go to Communication.
  2. Select a channel.
  3. From the list, choose a communication campaign that was built for the audience based on predictions outputs.
  4. Open the tab with the statistics (for example, for the email channel it is called Email campaign).
    Dashboard in an email communication

Workflow analytics

If you use prediction results in Automation, you can check results such as clickthrough rate, open rate, and more, in the automation statistics view.

Machine learning and synthetic measures

In addition to business metrics, you can also evaluate the model against selected machine learning metrics. Predictions provide specific metrics such as F1, Accuracy, Precision, and Mean Average Precision. For users who are less acquainted with these metrics, there is also a synthetic metric called Model quality, which evaluates the overall effectiveness of the model, taking into account the problem it addresses.
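To make the metrics above concrete, here is a hand-computed sketch of Accuracy, Precision, and F1 on a small set of illustrative labels. The labels are invented for this example; in practice these values come from comparing prediction outputs to observed outcomes.

```python
# Illustrative computation of Accuracy, Precision, and F1 on made-up labels.
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]  # observed outcomes
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]  # model predictions

# Confusion-matrix counts
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)

accuracy = (tp + tn) / len(y_true)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

print(f"accuracy={accuracy:.2f} precision={precision:.2f} f1={f1:.2f}")
```

Precision answers "of the customers the model flagged, how many really converted?", while F1 balances precision against recall in a single number.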

Additionally, you can add your own custom dashboards next to the machine learning and synthetic measures, so you can maximize insights from prediction results.

  1. Next to the Overview tab, click the three-dot icon.
    Adding custom dashboards
  2. From the dropdown list, select Manage dashboards.
  3. On the pop-up, in the text field, enter the name of the dashboard you want to add.
  4. Confirm your choice by clicking Add.
  5. Optionally, define the display order of the dashboards by dragging and dropping them into the desired positions.
  6. Confirm the dashboard settings by clicking Apply.

Regression

If you selected Regression as the model type, the following metrics are shown: Mean p-value, R², and Model quality. For all of them, the higher the better, and their maximum value is 100% (1). If you are less acquainted with the definitions of the specific metrics, we strongly recommend looking at the Model quality metric, as it is based on the most relevant metric for the specific use case.

Metrics for the Regression model type
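As a quick reference for the R² metric, the following sketch computes it by hand on invented values. The actual values and predictions are assumptions for illustration only.

```python
# Illustrative R² computation on made-up regression targets and predictions.
y_true = [3.0, 5.0, 2.5, 7.0, 4.5]  # observed values
y_pred = [2.8, 5.3, 2.9, 6.4, 4.1]  # model predictions

mean_y = sum(y_true) / len(y_true)
ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))  # residual sum of squares
ss_tot = sum((t - mean_y) ** 2 for t in y_true)             # total sum of squares
r2 = 1 - ss_res / ss_tot

print(f"R^2 = {r2:.2f}")
```

An R² of 1 (100%) means the predictions explain all of the variance in the observed values; values near 0 mean the model does no better than predicting the mean.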

Classification

If you selected Classification as the model type, the following metrics are shown: Accuracy, F1, Precision, AUC, and Model quality. For all of them, the higher the better, and their maximum value is 100% (1). If you are less acquainted with the definitions of the specific metrics, we strongly recommend looking at the Model quality metric, as it is based on the most relevant metric for the specific use case.

Metrics for the Classification model type
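Of the classification metrics listed above, AUC is the least intuitive, so here is a hand-computed sketch using its pairwise interpretation: the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative one. The labels and scores are made-up examples.

```python
# Illustrative AUC computation on made-up labels and scores (ties count as 0.5).
y_true = [1, 0, 1, 0, 1, 0]
scores = [0.9, 0.3, 0.8, 0.5, 0.4, 0.2]

pos = [s for s, t in zip(scores, y_true) if t == 1]  # scores of positives
neg = [s for s, t in zip(scores, y_true) if t == 0]  # scores of negatives

# Fraction of positive/negative pairs ranked correctly
pairs = [(p, n) for p in pos for n in neg]
auc = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in pairs) / len(pairs)

print(f"AUC = {auc:.3f}")
```

An AUC of 0.5 corresponds to random ranking, while 1.0 means every positive is scored above every negative.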

Performance analysis


How do you check whether your model is good?

There is no single way to verify the overall quality of a prediction model, in either machine learning or business metrics. Nevertheless, there are standard procedures that can help you determine your model's efficiency.
The most important thing is to always find a baseline to compare against. For example, it could be the performance of the most similar campaigns. The best way to do this is by performing an A/B/x test.
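One common way to decide whether the test variant beats the baseline is a two-proportion z-test on conversion rates. The sketch below is a generic statistical example with invented numbers, not a Synerise feature.

```python
import math

# Illustrative two-proportion z-test: does the prediction-targeted variant
# convert better than the baseline campaign? All counts are assumptions.

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z-score for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Variant A: prediction-based audience; variant B: baseline campaign
z = two_proportion_z(conv_a=240, n_a=2_000, conv_b=180, n_b=2_000)
print(f"z = {z:.2f}")  # |z| > 1.96 roughly corresponds to significance at the 5% level
```

If the z-score clears the significance threshold, the uplift over the baseline is unlikely to be random noise, which is exactly the kind of evidence an A/B/x test is meant to provide.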
