Friday, January 18, 2019

Full Funnel Testing: SEO & CRO Together - Whiteboard Friday

Posted by willcritchlow

Testing for only SEO or only CRO isn't always ideal. Some changes result in higher conversions and reduced site traffic, for instance, while others may rank more highly but convert less well. In today's Whiteboard Friday, we welcome Will Critchlow as he demonstrates a method of testing for both your top-of-funnel SEO changes and your conversion-focused CRO changes at once.


Video Transcription

Hi, everyone. Welcome to another Whiteboard Friday. My name is Will Critchlow, one of the founders at Distilled. If you've been following what I've been writing and talking about around the web recently, today's topic may not surprise you that much. I'm going to be talking about another kind of SEO testing.

Over at Distilled, we've been investing pretty heavily in building out our capability to do SEO tests, and in particular we've built our optimization delivery network, which has let us do a new kind of SEO testing that hasn't previously been available to most of our clients. Recently we've been working on a new enhancement to this, which is full funnel testing, and that's what I want to talk about today.

So full funnel testing is testing all the way through the funnel, from acquisition at the SEO end down to conversion. So it's SEO testing plus CRO testing together. I'll talk a little bit about some of the motivation for this. But, in a nutshell, it boils down to the fact that it is perfectly possible (in fact, we've seen cases in the wild) for a test to win in SEO terms and lose in CRO terms, or vice versa.

In other words, tests where maybe you make a change and it converts better, but you lose organic search traffic. Or the other way around: it ranks better, but it converts less well. If you're only testing one, which is common (most organizations are only testing the conversion rate side of things), it's perfectly possible to have a winning test, roll it out, and do worse.

CRO testing

So let's step back a little bit for a quick primer. Conversion rate optimization testing works in an A/B split kind of way. You can test on a single page, if you want to, or across a site section. The way it works is you split your audience: some of your audience gets one version of the page, and the rest of the audience gets a different version.
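As a rough sketch of how that audience split might work in practice, here's a minimal Python example. (The hashing scheme, the experiment name, and the 50/50 split are illustrative assumptions, not any particular tool's implementation.)

```python
import hashlib

def assign_user_bucket(user_id: str, experiment: str = "cro-test-1") -> str:
    """Deterministically assign a visitor to control or variant.

    Hashing the user ID (rather than picking randomly on every request)
    means the same visitor always sees the same version of the page.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    # Treat the first 8 hex digits as a number in [0, 1] and split 50/50.
    bucket_value = int(digest[:8], 16) / 0xFFFFFFFF
    return "control" if bucket_value <= 0.5 else "variant"

print(assign_user_bucket("visitor-123"))  # same answer every time
```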

Then you can compare the conversion rate between the group who got the control and the group who got the variant. That's very straightforward. Like I say, it can happen on a single page or across an entire site. SEO testing is a little bit newer. Here you can't split the audience, because we care very much about the search engine spiders in this case, and for our purposes there's essentially only one Googlebot. So you couldn't put Google in class A or class B here and expect to get anything meaningful.

SEO testing

So the way that we do an SEO test is we actually split the pages. To do this, you need a substantial site section. So imagine, for example, an e-commerce website with thousands of products. You might have a hypothesis of something that will help those product pages perform better. You take your hypothesis and you only apply it to some of the pages, and you leave some of the pages unchanged as a control.

Then, crucially, search engines and users see the same experience. There's no cloaking going on and no duplication of content. You simply change some pages and leave others unchanged. Then you apply statistical analysis to figure out whether the changed pages get significantly more organic search traffic than we think they would have gotten if we hadn't made the change. So that's how an SEO test works.
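To make the page split concrete, here's a minimal sketch along the same lines as the audience split above, except that we hash the URL, so it's the page, not the visitor, that lands in a bucket. (Again, the details are illustrative assumptions.)

```python
import hashlib

def assign_page_bucket(page_url: str, experiment: str = "seo-test-1") -> str:
    """Deterministically assign a *page*, not a user, to control or variant.

    Every request for a given URL, whether from a user or from Googlebot,
    gets the same treatment, so there's no cloaking and no duplicate content.
    """
    digest = hashlib.sha256(f"{experiment}:{page_url}".encode()).hexdigest()
    return "control" if int(digest[:8], 16) % 2 == 0 else "variant"

# A given product page is consistently in one group:
print(assign_page_bucket("/products/badger-food-deluxe"))
```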

Now, as I said, the problem that we're trying to tackle here is that, despite Google's best intentions to do what's right for users, it's perfectly plausible that you can have a test that ranks better but converts less well, or vice versa. We've seen this with, for example, removing content from a page. Sometimes having a cleaner, simpler page can convert better. But maybe that was where the keywords were, and maybe that was helping the page rank. So we're trying to avoid those kinds of situations.

Full funnel testing

That's where full funnel testing comes in. So I want to just run through how you run a full funnel test. What you do is you first of all set it up in the same way as an SEO test, because we're essentially starting with SEO at the top of the funnel. So it's set up exactly the same way.

Some pages are unchanged. Some pages get the hypothesis applied to them. As far as Google is concerned, that's the end of the story, because on any individual request to these pages, that's what we serve back. But the critically important thing here is my little character: a human browser who performs a search, "What do badgers eat?"

This was one of the silly examples we came up with on one of our demo sites. The user lands on this page here, and we set a cookie. Then, as they navigate around the site, no matter where they go within this site section, they get the same treatment, either the control or the variant, across the entire site section. This is more like the conversion rate test here.

Googlebot = stateless requests

So what I didn't show in this diagram is that if you were running this test across a site section, you would cookie this user and make sure that they always saw the same treatment no matter where they navigated around the site. Because Googlebot makes stateless requests, in other words just independent, one-off requests for each of these pages with no cookie set, Google sees the split.
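Putting the two halves together, the serving logic might look something like this sketch (it reuses the page-level split from above; the function names and details are my assumptions, not the actual implementation):

```python
import hashlib
from typing import Optional, Tuple

def page_bucket(page_url: str) -> str:
    # Same page-level split as in the SEO test sketch above.
    digest = hashlib.sha256(f"seo-test-1:{page_url}".encode()).hexdigest()
    return "control" if int(digest[:8], 16) % 2 == 0 else "variant"

def choose_treatment(page_url: str, cookie_treatment: Optional[str]) -> Tuple[str, bool]:
    """Return (treatment to serve, whether to set the treatment cookie).

    Googlebot's requests are stateless (no cookie), so it always falls
    through to the page-level split and sees the SEO test. A human
    visitor gets pinned to whichever treatment their landing page had,
    so the rest of their session looks like one arm of a CRO test.
    """
    if cookie_treatment in ("control", "variant"):
        return cookie_treatment, False   # cookied user: keep their treatment everywhere
    return page_bucket(page_url), True   # first page view (or a bot): use the page's bucket
```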

Evaluate SEO test on entrances

Users get whatever their first page impression looks like, and they then get that treatment applied across the entire site section. So what we can do is evaluate the performance in search independently, and we evaluate that on entrances: do we get significantly more entrances to the variant pages than we would have expected if we hadn't applied the hypothesis to them?

That tells us the uplift from an SEO perspective. So maybe we say, "Okay, this is plus 11% in organic traffic." Well, great. So in a vacuum, all else being equal, we'd love to roll out this test.
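For illustration, here's what a naive version of that entrance comparison could look like. The real analysis is more sophisticated than this (it forecasts what the variant pages would have done if we hadn't changed them), but as a toy sketch with made-up daily entrance counts:

```python
from scipy import stats

# Hypothetical daily organic entrances to each group of pages.
control_entrances = [1040, 980, 1105, 1010, 990, 1020, 1075]
variant_entrances = [1150, 1090, 1230, 1120, 1100, 1135, 1195]

uplift = sum(variant_entrances) / sum(control_entrances) - 1
t_stat, p_value = stats.ttest_ind(variant_entrances, control_entrances)

print(f"Observed uplift: {uplift:+.1%} (p = {p_value:.3f})")  # roughly +11%
```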

Evaluate conversion rate on users

But before we do that, we can now evaluate the conversion rate, and we do that based on user metrics. So these users are cookied.

We can also set an analytics tag on them and say, "Okay, wherever they navigate around, how many of them end up converting?" Then we can evaluate the conversion rate based on whether they saw treatment A or treatment B. Because we're looking at conversion rate, the audience sizes don't have to be exactly the same; the statistical analysis can take care of that fact, and we can evaluate the conversion rate on a user-centric basis.
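A standard way to compare conversion rates between groups of different sizes is a two-proportion z-test. Here's a minimal sketch (the counts are invented, chosen to land near the -5% figure below):

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical cookied users; the two groups don't need to be the same size.
conversions = [420, 367]        # control, variant
users = [10_000, 9_200]         # control, variant

z_stat, p_value = proportions_ztest(count=conversions, nobs=users)
control_cr = conversions[0] / users[0]
variant_cr = conversions[1] / users[1]

print(f"Control: {control_cr:.2%}  Variant: {variant_cr:.2%}  (p = {p_value:.3f})")
```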

So then maybe we see that it's -5% in conversion rate. We then need to evaluate: is this something we should roll out? If it's a win on both sides, then the answer is probably yes. If the two results point in different directions, then there are a couple of things we can do. Firstly, we can weigh up the relative performance in each direction, taking care to remember that the conversion rate applies across all channels. A relatively small drop in conversion rate can therefore be a really big deal compared to even a decent uplift in organic traffic, because the conversion rate change affects every channel, not just your organic search channel.
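To see why that asymmetry matters, here's a quick back-of-the-envelope calculation using the +11% and -5% figures from this example (all the traffic numbers are invented):

```python
# Hypothetical monthly numbers to show why the comparison is asymmetric.
organic_visits = 50_000     # traffic affected by the +11% SEO uplift
other_visits = 150_000      # traffic from all other channels
conversion_rate = 0.04      # sitewide baseline conversion rate

seo_uplift = 0.11           # applies to organic entrances only
cr_change = -0.05           # applies to conversion rate across ALL channels

new_organic = organic_visits * (1 + seo_uplift)
baseline_conversions = (organic_visits + other_visits) * conversion_rate
new_conversions = (new_organic + other_visits) * conversion_rate * (1 + cr_change)

print(f"Baseline conversions: {baseline_conversions:,.0f}")  # 8,000
print(f"After rollout:        {new_conversions:,.0f}")       # ~7,809
# Even with +11% organic traffic, the -5% conversion rate across
# every channel leaves us worse off in this hypothetical.
```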

But suppose that it's a small net positive or a small net negative. We might get to the point where it's a net positive and roll it out. Either way, we might then say, "What can we take from this? What can we actually learn?" So back to our example of the content. We might say, "You know what? Users like this cleaner version of the page with apparently less content on it. The search engines are clearly relying on that content to understand what this page is about. How do we get the best of both worlds?"

Well, that might be a question of a redesign, moving the layout of the page around a little bit, keeping the content on there, but maybe not putting it front and center for the user as they land right at the beginning. We can test those different things, run sequential tests, try to take the best of the SEO tests and the best of the CRO tests and get them working together, and crucially avoid those situations where you think you've got a win, because your conversion rate is up, but you're actually about to crater your organic search performance.

We think that the more data-driven we get, and the more accountable SEO testing makes us, the more important it's going to be to join these dots and make sure that we're getting true uplifts on a net basis when we combine them. So I hope that's been useful to some of you. Thank you for joining me on this week's Whiteboard Friday. I'm Will Critchlow from Distilled.

Take care.

Video transcription by Speechpad.com


