– The goal of testing a popup offer should be to measure the signup-to-conversion rate, as those who convert are more likely to become repeat purchasers.
– A long-running test showed that offering a coupon code for two purchases resulted in the highest rate of repeat purchases.
– To conduct a proper A/B test for popups, control for variables and keep the ad campaign, landing page, and offer consistent.
– Use a percentage-off offer in popups, lower than your largest sale of the year but higher than your bundle discount, to increase the chances of repeat purchases.

Thinking about an A/B test for your popup, there’s a nearly 100% chance you’re doing it wrong. To properly test a popup offer, you need to understand what your goal is. It’s not the sign-up rate. It’s not revenue. It’s the signup-to-conversion rate, because people who convert have a much higher chance of shopping again than people who don’t. Put simply, it’s impossible for people to become repeat purchasers if they never make a first purchase. So everything you do should be focused on getting the first purchase, and then understanding the second purchase.
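To make the metric concrete, here is a minimal sketch of comparing two popup variants by signup-to-conversion rate instead of raw signups. The variant names and counts are hypothetical, not figures from the post:

```python
# Hypothetical counts for two popup variants (illustrative only).
variants = {
    "A: more signups, weaker offer": {"signups": 1200, "converted": 96},
    "B: fewer signups, stronger offer": {"signups": 900, "converted": 117},
}

for name, v in variants.items():
    # The metric that matters: of the people who signed up via the popup,
    # what fraction actually made a first purchase?
    rate = v["converted"] / v["signups"]
    print(f"{name}: signup-to-conversion rate = {rate:.1%}")
```

Note that variant B wins on this metric (13.0% vs 8.0%) even though it collected fewer signups, which is exactly why judging popups by sign-up rate alone misleads.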
Let me share a story, because I’m pretty sure this is the longest-running test anyone has ever done in ecommerce while holding variables consistent. We had a popup that gave someone a unique coupon code that was valid for one whole year and could be used up to six times. We ran this test for an entire year: we ran zero sitewide sales that year and sent fewer than five non-flow emails in total, none of which contained a coupon code other than the one subscribers had signed up with and still had available. You know what the data said? Two purchases is the goal for repeat purchasing. About 30% of people used the coupon code once, then 25% of those used it twice. Beyond that, usage was less than 1%.
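The arithmetic behind that funnel is worth spelling out. Using a hypothetical cohort of 10,000 signups with the redemption rates reported above:

```python
signups = 10_000                 # hypothetical cohort size, not from the post
used_once = signups * 0.30       # ~30% redeemed the code once
used_twice = used_once * 0.25    # ~25% of those redeemed it a second time

print(f"redeemed once:  {used_once:.0f} ({used_once / signups:.0%} of signups)")
print(f"redeemed twice: {used_twice:.0f} ({used_twice / signups:.1%} of signups)")
# Beyond two redemptions the post reports under 1%, so two purchases
# is the realistic ceiling to optimize the offer around.
```

So even with a code valid for six uses, only about 7.5% of signups ever reach a second purchase, which is why the offer targets two orders rather than more.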
Do you know what coupon code offer we use in our popups right now? “Sign up for 20% off your first two orders.” There is a zero percent chance any company would replicate this test while holding these variables consistent for this duration.
Now, let’s get back to why you’ve been testing your popups wrong. There are tons of variables, and the goal is to control for as many as possible. The cleanest possible test would look like this: one ad campaign, one ad, one landing page, and an A/B/C/D test popup with varying offers and corresponding welcome-series first emails to match each offer. That way, the audience is the same, everyone sees the same ad, the same landing page, and the same offer on the page, and each person is shown one variation of the signup offer.
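One way to hold everything else constant while varying only the offer is to bucket visitors deterministically. Here is a minimal sketch; the offer labels and the `visitor_id` input are assumptions for illustration:

```python
import hashlib

# Four hypothetical popup offers for the A/B/C/D test.
OFFERS = [
    "A: 10% off",
    "B: 15% off",
    "C: 20% off your first two orders",
    "D: free shipping",
]

def assign_variant(visitor_id: str) -> str:
    """Deterministically bucket a visitor into one of the four offers.

    Hashing the visitor id means the same person always sees the same
    popup (and gets the matching welcome-series email), while every
    other variable — ad, landing page, audience — stays identical.
    """
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    bucket = int(digest, 16) % len(OFFERS)
    return OFFERS[bucket]

print(assign_variant("visitor-123"))
```

The design point is stability: a random coin flip on each page view would show the same visitor different offers, contaminating the test, whereas the hash pins each visitor to one arm for the duration.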
Now here’s where it gets tricky. To do this right, you have to eliminate any other popups that would show across your entire website for these visitors, so you’d have to exclude them by traffic source or UTM. Wait, you haven’t been running your A/B tests this way for your popups? Don’t worry, no one has, because this takes real logical and statistical rigor, and for the vast majority of stores the resources needed to even set it up properly would be astronomical for limited gains. So let me save you some time: use a percentage off, make it lower than your largest sale of the year but higher than your bundle discount. This will be good enough for 95% of all stores. And stop trading a discount for only an email or phone number: 75% of all purchases happen in the first 12 hours, and you’ll never know why some people purchase and others don’t if that’s all you’re collecting. More emails doesn’t equal more profit; otherwise everyone would just run giveaways all the time. #marketing #ecommerce #data https://www.linkedin.com/in/jivanco