Overview
Maximize engagement by testing multiple versions of your content against the campaign's target segment.
What to test
You can test any aspect of the content, with up to five email variations supported. However, to produce more conclusive results, we recommend using only two or three versions. Things that you might consider changing in the variants include:
- Subject line: Consider changing the length, tone, or level of personalization.
- Sending profile: Consider sending the email from an employee email address instead of a generic company or department address.
- Offer: Consider leveraging different offer types, such as a whitepaper versus a video.
- Format: Experiment with the content's formatting, such as paragraphs versus bullet points.
- Image: Try using different pictures or graphics to see if they have any effect on your engagement.
Configure an A/B test
To initiate an A/B test for a campaign touchpoint:
- Navigate to Campaigns.
- Select or create a One-Time, Behavioral, or Transactional campaign.
- Select a campaign touchpoint to access the editor.
- Click the + Add Variation tab in the editor.
After adding a variation, the A/B testing panel appears below Sender Profile.
The configurable options for the test are:
Test type
There are currently two testing methods available:
Automatic: During an automated A/B test, Zaius sends the different versions of content to a small sample of the campaign's segment. Following a testing period, a winner is automatically determined based on the preselected criteria, and the winning content is then sent to all remaining customers.
Manual: In a manual test, the content is sent to the specified percentages of the segment, with the total adding up to 100%. The touchpoints send as a single phase with no secondary winner phase; the winner is determined by comparing the versions' performance after this single send. If this option is selected, only the Variation Slider is configurable. All other options apply only to automatic tests.
Test Duration
Only available for automatic tests - This setting determines the length of the testing period. The percentages assigned to each variant determine the share of emails sent at the campaign start time. After the test duration elapses, the winner is determined and the remainder of the segment receives the winning content.
It is essential to allow enough time for your customers to act on the campaign (e.g., open and click emails) so that a clear winner can be determined. A duration of at least 4 hours is recommended.
When testing recurring campaigns, all emails sent from the start time through the duration of the test are A/B tested, and then the winning content is determined and used for subsequent campaign runs.
Win Criteria
Only available for automatic tests - The test is evaluated based on the selected win criteria. The options are:
- Opens: The count of unique recipients who've opened the email, divided by total sends.
- Clicks: The count of unique recipients who've clicked in the email, divided by total sends.
- Click Rate of Opens: The count of unique recipients who've clicked in the email, divided by the count of unique recipients who've opened the email.
Depending on your content changes, one metric might be more relevant than another. For example, opens might be the preferred metric for modifications to the subject line and preheader. In contrast, the click rate of opens might be more appropriate for changes to the email's body or offer.
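To make these three metrics concrete, here is a minimal sketch of the rate calculations in Python. The function name and the example counts are illustrative, not part of the Zaius product:

```python
def win_criteria_rates(total_sends, unique_opens, unique_clicks):
    """Illustrative calculation of the three win criteria rates for one variant."""
    opens_rate = unique_opens / total_sends      # Opens
    clicks_rate = unique_clicks / total_sends    # Clicks
    click_rate_of_opens = (                      # Click Rate of Opens
        unique_clicks / unique_opens if unique_opens else 0.0
    )
    return opens_rate, clicks_rate, click_rate_of_opens

# Example: 1,000 sends, 220 unique opens, 55 unique clicks
print(win_criteria_rates(1000, 220, 55))  # (0.22, 0.055, 0.25)
```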
Default Winner
Only available for automatic tests - A variation can only be the winner if the difference in win criteria values between the variants is statistically significant. If the difference is statistically insignificant, or if the values are equal, the test is deemed inconclusive. When a test is inconclusive, the default winner is sent to the remainder of the segment instead.
For a test between two pieces of content, statistical significance is determined by calculating a Z-score of the two proportions of the test group that match the win criteria. More specifically, Message A is sent to n_A recipients during the testing period, with the fraction p_A matching the win criteria, and Message B is sent to n_B recipients, with the fraction p_B matching the win criteria. The Z-score measures the confidence that the difference between p_A and p_B reflects a true difference in outcomes, within some margin of error, as opposed to a chance result. To be considered a statistically significant win, a Z-value of 1, corresponding to a 68% confidence level, is used.
If a test has more than two variants, the results from each piece of content are compared against each other. The winner is statistically significant if it is pairwise statistically significant against all other content tested.
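For reference, the sketch below shows a standard two-proportion Z-test with a pooled standard error, plus the pairwise check described above. It illustrates the general technique only; the function names, the pooled-variance form, and the example counts are assumptions, not Zaius's documented internal implementation:

```python
from math import sqrt

def z_score(successes_a, n_a, successes_b, n_b):
    """Two-proportion Z-score using a pooled standard error.

    'Successes' are recipients matching the win criteria (e.g. unique opens).
    This is the textbook formula; Zaius's exact internal calculation may differ.
    """
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

def significant_winner(variants, z_threshold=1.0):
    """Return the leading variant's name, or None if the test is inconclusive.

    variants maps a name to (successes, sends). The leader must be pairwise
    significant (|Z| >= threshold; 1.0 corresponds to the 68% confidence
    level above) against every other variant. Ties or insignificant
    differences return None.
    """
    leader = max(variants, key=lambda v: variants[v][0] / variants[v][1])
    for other in variants:
        if other != leader:
            z = z_score(*variants[leader], *variants[other])
            if abs(z) < z_threshold:
                return None  # inconclusive: the default winner is used instead
    return leader

# Example: variant A converts 120/1000, variant B converts 90/1000
print(significant_winner({"A": (120, 1000), "B": (90, 1000)}))  # "A" (Z ~ 2.19)
```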
Variation Slider
For automatic tests - The slider sets the percentage of the campaign segment that receives each variation during the testing period. For a one-time automatic test, 10% is recommended for each variant to increase the likelihood of a statistically significant result. For a small segment (below 100k), consider increasing this percentage to 20-25%. Equal percentages for each piece of content are generally recommended.
For an automatic test on a recurring campaign, 100% of the emails sent during the testing period are part of the test, and you can adjust what percentage of enrolled customers receives each variation.
For manual tests - 100% of your segment is targeted, and you can change the breakdown between variants. Equal percentages for each piece of content are generally recommended.
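As an illustration of how slider percentages map to send counts, here is a small sketch. The helper and the numbers are hypothetical; Zaius performs this allocation internally. For a manual test, the percentages sum to 100%, so no recipients remain for a winner phase:

```python
def split_segment(segment_size, test_percentages):
    """Allocate recipients to variants based on slider percentages.

    test_percentages maps variant name -> share of the segment (in %)
    sent during the testing period. Whatever remains receives the
    winning content after the test duration (automatic tests only).
    """
    test_counts = {
        name: segment_size * pct // 100
        for name, pct in test_percentages.items()
    }
    winner_phase_count = segment_size - sum(test_counts.values())
    return test_counts, winner_phase_count

# One-time automatic test: 200,000 customers, 10% per variant
print(split_segment(200_000, {"A": 10, "B": 10}))
# -> ({'A': 20000, 'B': 20000}, 160000)
```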
Considerations
A/B testing isn't available in every situation:
- A/B testing is not available for API-triggered push campaigns.
- Only manual testing is available for event-triggered campaigns.
The campaign audience is also calculated differently based on the type of A/B test. A manual test sends in a single phase, so the segment is evaluated once, at the campaign start time. For an automatic test, the campaign segment is determined at the campaign start time (when the testing period begins), and again at the start of the winning phase. For example, if customer A is not in the campaign segment at the campaign start time but is in the segment when the winning phase begins, they will be targeted in the winning phase.
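A minimal sketch of that winning-phase audience logic, assuming (per the automatic flow above) that recipients who already received a test variation are excluded from the winning send; the set contents are illustrative:

```python
def winning_phase_audience(segment_at_winner_time, tested_recipients):
    """Everyone in the segment when the winning phase starts, minus
    those who already received a variation during the testing period."""
    return segment_at_winner_time - tested_recipients

tested = {"B", "C"}                            # sampled at campaign start
segment_at_winner_time = {"A", "B", "C", "D"}  # customer A joined the segment late
print(sorted(winning_phase_audience(segment_at_winner_time, tested)))  # ['A', 'D']
```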
Review results
To review the results of an A/B test:
- Navigate to Campaigns.
- Select the A/B tested campaign you'd like to review.
- Select a campaign touchpoint that has been tested to access its performance metrics.
For automatic tests - You can compare the results side by side. A label specifies whether or not a winner was determined during the testing period. If a statistically significant winner was determined, a Winner label is applied to that variant.
If a clear winner couldn't be determined during the testing period, then a Default Winner label will be applied to a variant instead.
For manual tests - You can compare the results side by side. No winner label is applied, as a winner was not selected on your behalf. Instead, use the relative performance of the variants to determine your winner and any takeaways to apply to future campaigns.