Performance measurement

In this article, you will find instructions on how to:

  1. Evaluate prediction performance through business and machine learning metrics.
  2. Use dashboards to measure prediction performance.
    Note: You can find instructions on how to prepare an example dashboard here.
  3. Analyze performance.

Performance evaluation methods


In general, you can analyze and evaluate prediction results from two angles: business and machine learning. Both can be evaluated using Synerise.

Dashboards and other analytical tools

What really matters is whether a given action brings uplift to your KPIs, such as conversion or CTR rates. It makes sense to always evaluate predictions against the ground truth.

For example, after creating a prediction that forecasts whether a certain group of customers will make a transaction in the next few days:

  1. Wait several days.
  2. Compare the number of purchases made by customers who were labeled as high or low with those of a randomly selected control group.

This way, you can verify whether the model predicts the phenomenon correctly.
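The comparison above can be sketched in a few lines of Python. The groups and outcomes below are hypothetical; in practice you would build them from your exported campaign results:

```python
# Hypothetical outcomes after the waiting period (1 = purchased, 0 = did not).
high_score = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]   # customers the model labeled "high"
low_score  = [0, 0, 1, 0, 0, 0, 1, 0, 0, 0]   # customers the model labeled "low"
control    = [0, 1, 0, 0, 1, 0, 0, 1, 0, 0]   # randomly selected control group

def conversion_rate(outcomes):
    """Share of customers in the group that made a purchase."""
    return sum(outcomes) / len(outcomes)

# If the model works, the "high" group should convert well above the control
# group, and the "low" group should convert at or below it.
uplift = conversion_rate(high_score) / conversion_rate(control)
print(f"high: {conversion_rate(high_score):.0%}, "
      f"low: {conversion_rate(low_score):.0%}, "
      f"control: {conversion_rate(control):.0%}, uplift: {uplift:.2f}x")
# high: 70%, low: 20%, control: 30%, uplift: 2.33x
```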

Dashboards in communication

Another way to monitor model performance in terms of business metrics is to use the built-in campaign dashboards.

  1. Go to Communication.
  2. Select a channel.
  3. From the list, choose a communication campaign that was built for the audience based on predictions outputs.
  4. Switch to the tab with the statistics (for instance, for the email channel, it is called Email campaign).
    Dashboard in an email communication

Workflow analytics

If you use prediction results in Automation, you can check results such as click-through rate, open rate, and more in the automation statistics view.

Machine learning and synthetic measures

A statistics tab for all prediction types shows the estimated prediction performance together with direct call-to-action capabilities. On the predefined statistics dashboards, you can see predicted KPIs and additional metrics. In addition, you can:

  • create segments of customers from the model’s statistics preview, based on the prediction results (for example, a segment with customers who are most likely to buy, churning customers, the closest lookalikes, and so on)
  • add your own custom dashboards to the prediction statistics

You can view statistics only for active predictions.

Prediction preview for Propensity

Propensity preview


  1. To preview Propensity predictions, go to Predictions.
  2. On the left pane, select Propensity.
  3. From the Propensity prediction list, select the active prediction you want to preview.
    Result: You are redirected to the statistics.

Statistics explanation


Model card

In this part of the preview, you can get the basic information about the prediction and model results.

  • Model type - The type of prediction.
  • Model quality - The quality of the prediction. It can take five values: Very low, Low, Medium, High, and Very high. The quality of the prediction is estimated based on the data input. To increase the model quality, you may widen the segmentation of profiles for whom you prepare a prediction or extend the range of items for which the propensity prediction is calculated.
  • Precision - It evaluates the share of positive conversions that were correctly predicted. The result can take values from 0 (lowest precision) to 1. In the case of propensity, it can be interpreted as the degree of confidence that predicted conversions will actually occur.
  • AUC - Area under the ROC Curve is one of the most commonly used metrics for classification problems. It calculates the area underneath the entire ROC curve and ranges between 0 and 1. However, only results higher than 0.5 are considered better than random choice.
  • Recalculation frequency - It describes how often the prediction is recalculated. This was defined in the prediction settings when the prediction was created.
  • Last calculation - It is the time since the last recalculation of the prediction.
  • Total number of generated predictions - This is the number of all snr.propensity.score events generated since the first calculation of the prediction.
    Note: The snr.propensity.score event is available on the activity list of each Profile and includes that Profile’s score. You can learn more about it here.
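For reference, the Precision and AUC figures from the model card can be reproduced from raw labels and scores. The sketch below uses made-up data and plain Python; it illustrates the standard formulas, not Synerise internals:

```python
def precision(y_true, y_pred):
    """Share of predicted positives that are actual positives: TP / (TP + FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == 1 and t == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == 1 and t == 0)
    return tp / (tp + fp)

def auc(y_true, scores):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation:
    the probability that a random positive scores above a random negative."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Made-up ground truth and model scores for eight profiles.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
scores = [0.9, 0.4, 0.8, 0.6, 0.3, 0.7, 0.55, 0.2]
y_pred = [1 if s >= 0.5 else 0 for s in scores]   # classify at a 0.5 threshold

print(precision(y_true, y_pred))   # 0.8 -> 4 of 5 predicted positives are correct
print(auc(y_true, scores))         # 0.875
```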

Audience summary

In this part of the preview, you can see the basic information about the segmentation for which the prediction was made.

  • Audience - The name of the segmentation for which the prediction was made.
  • Audience size - The number of profiles in the segmentation at the moment of the latest recalculation.
    Important: If you open the segmentation in Analytics, it is recalculated every time you preview its results, so its size may differ from the value shown here. Segmentations often change in size as profiles start or stop meeting their conditions.
  • Profiles without generated predictions - The number of profiles for whom the snr.propensity.score event could not be generated (due to the lack of interactions) during the latest recalculation.

Distribution charts

In this part of the preview, you can see the distribution of the profiles according to percentile and score they received.

Score label

The score can be presented using one of the following scales (depending on the option you selected in the prediction settings):

  1. Two-point scale (Low, High)
  2. Five-point scale (Very low, Low, Medium, High, Very high)

On the right side of the chart, you can see the summary of each score label. Each column contains the following rows:

  • Audience size - The number of profiles who received this score label.
  • Audience share - The percentage of the whole segmentation that received this label.
  • Estimated conversion rate - Estimated conversion, based on historical performance data for this segment.
  • Estimated number of transactions - Estimated number of transactions for this segment, based on historical performance data.
  • Gain - Percentage of the target covered in each score label.
  • Lift - The ratio of the gain percentage to the random baseline percentage at a given score label level.

Percentile

On the right side of the chart, you can see the distribution of profiles in terms of percentiles. For each percentile range, you can check the following statistics:

  • Estimated conversion rate - Estimated conversion, based on historical performance data for this segment.
  • Estimated no. of transactions - Estimated number of transactions for this segment, based on historical performance data.
  • Gain - The ratio of the expected number of positive responses (conversions) for a segment to the overall expected number of positive responses (conversions). Example: Let’s assume you want to contact the top 20% of customers with the highest score, and the gain for this segment equals 50%. This means that 50% of all expected positive responses (conversions) can be realized just by contacting the top 20% of customers.
  • Lift - The ratio of the expected positive response rate (conversion rate) for a segment to the expected positive response rate (conversion rate) for randomly picked customers. Example: The lift for a specific score or decile equals 2.5x. This means that the conversion likelihood from contacting these customers is 2.5 times higher than from contacting a randomly picked group.
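The Gain and Lift examples above can be reproduced with a short sketch. The bucket sizes and conversion counts below are illustrative only, chosen so that the top 20% bucket yields exactly the 50% gain and 2.5x lift from the examples:

```python
# Illustrative numbers only: five equal buckets of 1,000 profiles each,
# sorted by score (highest bucket first), with expected conversions per bucket.
bucket_size = 1000
conversions = [50, 30, 10, 6, 4]                  # 100 expected conversions overall
total = sum(conversions)
overall_rate = total / (bucket_size * len(conversions))   # 2% random baseline

results = []                                      # (gain, lift) for the top k buckets
cum = 0
for k, c in enumerate(conversions, start=1):
    cum += c
    gain = cum / total                            # share of all conversions captured
    lift = (cum / (bucket_size * k)) / overall_rate   # vs. random targeting
    results.append((round(gain, 3), round(lift, 3)))

print(results[0])   # top 20%: (0.5, 2.5) -> 50% gain, 2.5x lift
```

Targeting the whole audience always yields a gain of 100% and a lift of 1.0x, which is why these curves are read cumulatively from the highest-scoring bucket down.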

Lookalikes preview

  1. To preview Lookalikes predictions, go to Predictions.
  2. On the left pane, select Lookalikes.
  3. From the Lookalikes prediction list, select the active prediction you want to preview.
    Result: You are redirected to the statistics.

Statistics explanation


Model card

In this part of the preview, you can get the basic information about the prediction and model results.

  • Model type - The type of prediction.
  • Model quality - The quality of the prediction. It can take five values: Very low, Low, Medium, High, and Very high. The quality of the prediction is estimated based on the data input. To increase the model quality, you may increase the number of profiles in the source segmentation, or extend the range of events in Settings > AI Engine Configuration > Predictions.
  • Precision - The share of positive conversions that were correctly predicted. The result can take values from 0 to 1. It can be interpreted as the degree of confidence that predicted conversions will actually occur.
  • AUC - Area under the ROC Curve is one of the most commonly used metrics for classification problems. It calculates the area underneath the entire ROC curve and ranges between 0 and 1. However, only results higher than 0.5 are considered better than random choice.
  • Recalculation frequency - It describes how often the prediction is recalculated. This was defined in the prediction settings when the prediction was created.
  • Last calculation - Time since the last recalculation of the prediction.
  • Total number of generated predictions - This is the number of all snr.lookalike.score events generated since the first calculation of the prediction.
    Note: The snr.lookalike.score event is available on the activity list of each Profile and includes that Profile’s score. You can learn more about it here.

Audience summary

In this part of the preview, you can see the basic information about the segmentation for which the prediction was made.

  • Audience - The name of the segmentation for which the prediction was made.
  • Audience size - The number of profiles in the segmentation at the moment of the latest recalculation.
    Important: If you open the segmentation in Analytics, it is recalculated every time you preview its results, so its size may differ from the value shown here. Segmentations often change in size as profiles start or stop meeting their conditions.
  • Profiles without generated predictions - The number of profiles for whom the snr.lookalike.score event could not be generated (due to the lack of interactions) during the latest recalculation.

Distribution charts

In this part of the preview, you can see the distribution of the profiles according to percentile and score they received.

Score label

The score can be presented using one of the following scales (depending on the option you selected in the prediction settings):

  1. Two-point scale (Low, High)
  2. Five-point scale (Very low, Low, Medium, High, Very high)

On the right side of the chart, you can see the summary of each score label. Each column contains the following rows:

  • Audience size - The number of profiles who received this score label.
  • Audience share - The percentage of the whole segmentation that received this label.
  • Number of transactions (last 30 days) - The number of transactions in the last 30 days for a group of profiles from the target segmentation who received this label.
  • Historical conversion rate (last 30 days) - The unique conversion rate from the 30 days prior to the most recent calculation for a given decile.
  • Average daily page visits per user - Average number of page visits for profiles in the given decile per day.

Percentile

On the right side of the chart, you can see the distribution of profiles in terms of percentiles. For each percentile range, you can check the following statistics:

  • Number of transactions (last 30 days) - The number of transactions in the last 30 days for a group of profiles from the target segmentation who received this label.
  • Historical conversion rate (last 30 days) - The unique conversion rate from the 30 days prior to the most recent calculation for a given decile.
  • Average daily page visits per user - Average number of page visits for profiles in the given decile per day.

Custom preview

  1. To preview Custom predictions, go to Predictions.
  2. On the left pane, select Custom.
  3. From the Custom prediction list, select the active prediction you want to preview.
    Result: You are redirected to the statistics.

Statistics explanation

Model card

In this part of the preview, you can get the basic information about the prediction and model results.

  • Model type - The type of prediction.
  • Model quality - The quality of the prediction. It can take five values: Very low, Low, Medium, High, and Very high. The quality of the prediction is estimated based on the data input. To increase the model quality, you may increase the number of profiles in the source segmentation, or extend the range of events in Settings > AI Engine Configuration > Predictions.
  • Precision - The share of positive conversions that were correctly predicted. The result can take values from 0 to 1. It can be interpreted as the degree of confidence that predicted conversions will actually occur.
  • AUC - Area under the ROC Curve is one of the most commonly used metrics for classification problems. It calculates the area underneath the entire ROC curve and ranges between 0 and 1. However, only results higher than 0.5 are considered better than random choice.
    Note: Precision and AUC are available only for the Classification model.
  • RMSE - A measure of the differences between values predicted by a model and the actual values. Lower RMSE indicates better model performance, with 0 indicating a perfect fit to the data.
  • R2 - Represents the percentage of the dependent variable’s variance explained by the model. A higher R2 indicates better predictive accuracy - the maximum value is 1.
    Note: RMSE and R2 are available only for the Regression model.
  • Recalculation frequency - It describes how often the prediction is recalculated. This was defined in the prediction settings when the prediction was created.
  • Last calculation - Time since the last recalculation of the prediction.
  • Total number of generated predictions - This is the number of all snr.prediction.score events generated since the first calculation of the prediction.
    Note: The snr.prediction.score event is available on the activity list of each profile and includes that profile’s score. You can learn more about it here.
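For reference, RMSE and R2 for the Regression model follow the standard formulas. A minimal sketch with made-up values:

```python
import math

def rmse(actual, predicted):
    """Root mean squared error: square root of the mean squared residual."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def r2(actual, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1 - ss_res / ss_tot

# Made-up actual vs. predicted values for four profiles.
actual    = [3.0, 5.0, 2.5, 7.0]
predicted = [2.5, 5.0, 3.0, 8.0]

print(f"RMSE = {rmse(actual, predicted):.3f}, R2 = {r2(actual, predicted):.3f}")
# RMSE = 0.612, R2 = 0.882
```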

Audience summary

In this part of the preview, you can see the basic information about the segmentation for which the prediction was made.

  • Audience - The name of the segmentation for which the prediction was made.
  • Audience size - The number of profiles in the segmentation at the moment of the latest recalculation.
    Important: If you open the segmentation in Analytics, it is recalculated every time you preview its results, so its size may differ from the value shown here. Segmentations often change in size as profiles start or stop meeting their conditions.
  • Profiles without generated predictions - The number of profiles for whom the snr.prediction.score event could not be generated (due to a lack of interactions) during the latest recalculation.

Distribution charts

In this part of the preview, you can see the distribution of the profiles according to percentile and score they received.

Score label

The score can be presented using one of the following scales (depending on the option you selected in the prediction settings):

  • Two-point scale (Low, High)
  • Five-point scale (Very low, Low, Medium, High, Very high)

On the right side of the chart, you can see the summary of each score label. Each column contains the following rows:

  • Profiles - The number of profiles who received this score label.
  • % of all profiles in the prediction - The percentage of the whole segmentation that received this label.
  • Lift - The ratio of the expected positive response rate (conversion rate) for this segment to the expected positive response rate (conversion rate) for randomly picked customers. Example: When the lift value for a specific score or decile equals 2.5x, it means that the conversion likelihood from contacting these customers is 2.5 times higher compared to a randomly picked group.
  • Gain - Expected number of positive responses (conversions) for this segment to overall expected number of positive responses (conversions). Example: Let’s assume you want to contact the top 20% customers with the highest score, and gain for this segment is 50%. It means that 50% of overall expected positive responses (conversions) are expected to be realized just from contacting these top 20% of customers.

Percentile

On the right side of the chart, you can see the distribution of profiles in terms of percentiles. For each percentile range, you can check the following statistics:

  • Percentile - A score below which a specified percentage of customers from the analyzed group falls. For instance, the 50th percentile means that 50% of the customers have a lower score than this one.
  • Lift - The ratio of the expected positive response rate (conversion rate) for this segment to the expected positive response rate (conversion rate) for randomly picked customers. Example: When the lift value for a specific score or decile equals 2.5x, it means that the conversion likelihood from contacting these customers is 2.5 times higher compared to a randomly picked group.
  • Gain - Expected number of positive responses (conversions) for this segment to overall expected number of positive responses (conversions). Example: Let’s assume you want to contact the top 20% customers with the highest score, and gain for this segment is 50%. It means that 50% of overall expected positive responses (conversions) are expected to be realized just from contacting these top 20% of customers.
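The percentile definition above can be illustrated with a small sketch (the scores are hypothetical):

```python
def percentile_of(score, all_scores):
    """Percentage of scores in the audience strictly below the given score."""
    below = sum(1 for s in all_scores if s < score)
    return 100 * below / len(all_scores)

# Hypothetical prediction scores for a ten-profile audience.
scores = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]

print(percentile_of(0.6, scores))   # 50.0 -> half of the audience scores below 0.6
```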

Adding custom dashboards


In the prediction statistics view, you can add your own dashboards to maximize insights from prediction results.

  1. Next to the Overview tab, click the three-dot icon.
    Adding custom dashboards
  2. From the dropdown list, select Manage dashboards.
  3. On the pop-up, in the text field, enter the name of the dashboard you want to add.
  4. Confirm your choice by clicking Add.
  5. Optionally, define the display order by dragging and dropping the dashboards.
  6. Confirm the dashboard settings by clicking Apply.


Performance analysis


How to check whether your model is good?

There is no single way to verify the overall quality of a prediction model, either with machine learning or with business metrics. Nevertheless, there are standard procedures that can help you determine your model’s efficiency.
The most important thing is to always find a baseline to compare against. For example, it could be the performance of the most similar campaigns. The best way to do this is by performing an ABx test.
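One standard way to judge whether the difference between a test group and its baseline is real rather than noise is a two-proportion z-test. The sketch below uses illustrative counts and plain Python; it is a general statistical procedure, not a Synerise feature:

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is group A's conversion rate significantly
    different from group B's? Returns (z statistic, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)             # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))           # two-sided normal tail
    return z, p_value

# Illustrative numbers: 120/1000 conversions in the campaign group
# vs. 80/1000 in the control group.
z, p = two_proportion_ztest(120, 1000, 80, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these counts the p-value falls below the common 0.05 threshold, so the uplift over the baseline would be considered statistically significant.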
