Maximize engagement by testing multiple versions of content against the campaign target. With A/B testing, each version is sent to a small sample of the segment, and a winner is automatically determined based on the criterion you choose. The winning content is then sent to the remainder of your segment.
What can be tested
Any aspect of the content can be changed between variations, with up to five variations in total. However, for a sound test with conclusive results, it is recommended to limit the test to two or three variations, with distinct, focused differences among them. For example, change a subject line, a hero image, or an offered deal, but not all three.
Initiating an A/B Content Test
To initiate an A/B content test, click the + Add a Variation tab in the Messages section of the campaign creation screen:
A panel of content testing options will appear after adding a variation:
The configurable options are:
Automatic Testing vs Manual Testing
An automatic test is one in which Zaius executes the campaign in two phases: a test (or experimental) phase and a winner phase. After the experimental phase, the winning piece of content is automatically determined and delivered to all remaining customers during the winner phase.
In a manual test, each variation is sent to the percentage of the segment you specify, with the percentages totaling 100%. The touchpoint is delivered in a single phase with no winner phase.
If "Use recipient's time zone when available" is enabled in the campaign's schedule section, each UTC offset is tested independently of all others. Results per offset are presented in the campaign overview screen.
Test Duration (automatic tests)
This is the length of the experimental phase. For a campaign scheduled to run once, the percentages of the segment specified by the slider at the bottom of the options are targeted at the campaign start time. After the test duration elapses, the winner is determined and the remainder of the segment is targeted.
A duration of at least 4 hours is recommended for an automatic test, to allow enough time for your customers to act on the campaign (e.g., open and click emails) so that a clear winner can be determined. In a recurring campaign, runs occurring between the campaign start time and the end of the test duration are A/B tested; the winning content is then determined and used for all subsequent runs.
Campaign Audience (automatic tests)
The campaign segment is evaluated twice: at campaign start time (when the test phase begins) and again at the start of the winning phase. For example, if user A is not in the campaign segment at campaign start time but is in the segment when the winning phase begins, they will be targeted in the winning phase.
Winning Criteria (automatic tests)
The test is evaluated based on the winning criteria selection. For email campaigns, the options are open rate (count of unique users who opened the email, divided by number of sends), click rate (count of unique users who clicked in the email, divided by number of sends), and click rate of opens (count of unique users who clicked, divided by count of unique users who opened).
Open rate might be the preferred metric for subject line and preheader changes, whereas click rate of opens is more appropriate when copy changes are made in the body of the touchpoint.
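To make the definitions concrete, here is a minimal sketch of the three rates in Python. The function name and counts are hypothetical for illustration; in practice Zaius computes these rates for you from campaign results.

```python
def winning_criteria(sends: int, unique_opens: int, unique_clicks: int) -> dict:
    """Compute the three email winning criteria described above."""
    return {
        "open_rate": unique_opens / sends,                    # unique openers / sends
        "click_rate": unique_clicks / sends,                  # unique clickers / sends
        "click_rate_of_opens": unique_clicks / unique_opens,  # unique clickers / unique openers
    }

# Hypothetical results for one variation's test send
print(winning_criteria(sends=5000, unique_opens=1200, unique_clicks=300))
# {'open_rate': 0.24, 'click_rate': 0.06, 'click_rate_of_opens': 0.25}
```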
Winning Content and Default Winner (automatic tests)
A variation can only be the winner if the difference in winning criteria values for the variations is statistically significant. If the difference is statistically insignificant (or if the winning values are equal), the test is deemed inconclusive. When this occurs, the default winner is used in the winning phase.
For a test between two pieces of content, significance is determined by calculating a Z-score comparing the two proportions of the test groups that match the winning criteria.
More formally, Message A is sent to n_a recipients in the test phase, with the fraction p_a matching the winning criteria, and Message B is sent to n_b recipients, with the fraction p_b matching the winning criteria. The Z-score measures the confidence that the observed difference between p_a and p_b reflects a true difference in outcomes, within some margin of error, rather than a chance outcome. To be considered a statistically significant win, a Z-value of 1, or a 68% confidence level, is used.
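Zaius does not publish its exact computation, but a standard pooled two-proportion Z-score is consistent with the description above. A minimal sketch, with hypothetical counts and function name:

```python
import math

def z_score_two_proportions(x_a: int, n_a: int, x_b: int, n_b: int) -> float:
    """Pooled two-proportion Z-score: how far apart the observed rates are,
    relative to the sampling noise expected if both variations performed equally."""
    p_a, p_b = x_a / n_a, x_b / n_b
    p_pool = (x_a + x_b) / (n_a + n_b)  # pooled proportion under the null hypothesis
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical test phase: 10,000 recipients per variation,
# 2,600 vs 2,400 matching the winning criteria
z = z_score_two_proportions(x_a=2600, n_a=10000, x_b=2400, n_b=10000)
print(round(z, 2))  # ~3.27, well above the Z-value of 1, so A would be a significant win
```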
If more than two pieces of content are tested, the results for each piece of content are compared against every other piece. The winner is statistically significant only if it is pairwise statistically significant against all other content tested.
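Building on the z_score_two_proportions sketch above, a pairwise check might look like the following. This is an illustrative assumption about the logic, not Zaius's actual implementation; the function name and data are hypothetical.

```python
def pairwise_significant_winner(results: dict, z_threshold: float = 1.0):
    """Return the best-performing variation if it beats every other variation
    with |z| >= z_threshold; otherwise return None (inconclusive)."""
    rates = {name: x / n for name, (x, n) in results.items()}
    best = max(rates, key=rates.get)
    for other in results:
        if other == best:
            continue
        x_a, n_a = results[best]
        x_b, n_b = results[other]
        if abs(z_score_two_proportions(x_a, n_a, x_b, n_b)) < z_threshold:
            return None  # best variation is not significantly ahead of this one
    return best

# Hypothetical three-way test: (matches, sends) per variation
results = {"A": (2600, 10000), "B": (2400, 10000), "C": (2350, 10000)}
print(pairwise_significant_winner(results))  # "A" only if it clears the bar against B and C
```

If the check returns None, the test is inconclusive and, as described above, the default winner is used in the winning phase.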
In an automatic test, the slider indicates the percentage of the campaign segment to which each variation is sent during the test phase. For a one-time automatic test, 10% is recommended for each message variation, to increase the likelihood of a statistically significant result. For a small segment (below 100k), consider increasing this percentage.
For an automatic test in a recurring campaign, runs occurring during your test period are tested at 100%, with the ability to shift what percentage receives each variation. Similarly, for a manual test, 100% of your segment is targeted and you can adjust the breakdown within it.
Equal percentages for each piece of content are generally recommended. However, different percentages may be appropriate when trying something that departs from past communications or is otherwise risky. Since the winning variation is determined based on rates (opens, clicks, etc.), winner determination does not require equal percentages.
Content testing is not applicable to API-triggered push campaigns. Manual testing is supported for event-triggered campaigns, but automatic testing is not.