Wednesday, February 27, 2019
14 SEO Predictions for 2019 & Beyond, as Told by Mozzers
Posted by TheMozTeam
With the new year in full swing and an already busy first quarter, our 2019 predictions for SEO in the new year are hopping onto the scene a little late — but fashionably so, we hope. From an explosion of SERP features to increased monetization to the key drivers of search this year, our SEO experts have consulted their crystal balls (read: access to mountains of data and in-depth analyses) and made their predictions. Read on for an exhaustive list of fourteen things to watch out for in search from our very own Dr. Pete, Britney Muller, Rob Bucci, Russ Jones, and Miriam Ellis!
1. Answers will drive search
People Also Ask boxes exploded in 2018, and featured snippets have expanded into both multifaceted and multi-snippet versions. Google wants to answer questions, it wants to answer them across as many devices as possible, and it will reward sites with succinct, well-structured answers. Focus on answers that naturally leave visitors wanting more and establish your brand and credibility. [Dr. Peter J. Meyers]
Further reading:
- Content for Answers: The Inverted Pyramid - Whiteboard Friday
- We Dipped Our Toes Into Double Featured Snippets
- Desktop, Mobile, or Voice? (D) All of the Above - Whiteboard Friday
2. Voice search will continue to be utterly useless for optimization
Optimizing for voice search will still be no more than optimizing for featured snippets, and conversions from voice will remain a black box. [Russ Jones]
Further reading:
- The Influence of Voice Search on Featured Snippets
- Lessons from 1,000 Voice Searches (on Google Home)
- How to Discover Featured Snippet Opportunities - Whiteboard Friday
- How to Target Featured Snippet Opportunities - Whiteboard Friday
3. Mobile is table stakes
This is barely a prediction. If your 2019 plan is to finally figure out mobile, you're already too late. Almost all Google features are designed with mobile-first in mind, and the mobile-first index has expanded rapidly in the past few months. Get your mobile house (not to be confused with your mobile home) in order as soon as you can. [Dr. Peter J. Meyers]
Further reading:
- How Does Mobile-First Indexing Work, and How Does It Impact SEO?
- How and Why to Do a Mobile/Desktop Parity Audit
- Internal Linking & Mobile First: Large Site Crawl Paths in 2018 & Beyond
- How Mobile-First Indexing Disrupts the Link Graph
4. Further SERP feature intrusions in organic search
Expect Google to find more and more ways to replace organic results with solutions that keep users on Google’s property. This includes interactive SERP features that replace, slowly but surely, many website offerings in the same way that live scores, weather, and flights have. [Russ Jones]
Further reading:
- Zero-Result SERPs: Welcome to the Future We Should've Known Was Coming
- What Do You Do When You Lose Organic Traffic to Google SERP Features?
- Google's Walled Garden: Are We Being Pushed Out of Our Own Digital Backyards?
5. Video will dominate niches
Featured Videos, Video Carousels, and Suggested Clips (where Google targets specific content in a video) are taking over the how-to spaces. As Google tests search appliances with screens, including Home Hub, expect video to dominate instructional and DIY niches. [Dr. Peter J. Meyers]
Further reading:
- YouTube SEO: Top Factors to Invest In - Whiteboard Friday
- A Step-by-Step Guide to Setting Up and Growing Your YouTube Presence
- Beyond YouTube: Video Hosting, Marketing, and Monetization Platforms, Compared
6. SERPs will become more interactive
We’ve seen the start of interactive SERPs with People Also Ask boxes. Depending on which question you expand, two to three new questions that directly pertain to it will appear below. This real-time engagement keeps people on the SERP longer and helps Google better understand what a user is seeking. [Britney Muller]
7. Local SEO: Google will continue getting up in your business — literally
Google will continue asking your customers more and more intimate questions about your business. Does this business have gender-neutral bathrooms? Is this business accessible? What is the atmosphere like? How clean is it? What kind of lighting do they have? And so on. If Google can acquire accurate, real-world information about your business (your percentage of repeat customers via geolocation data, price via transaction history, etc.), it can rely less heavily on website signals and provide more accurate results to searchers. [Britney Muller]
Further reading:
- The Ultimate Cheat Sheet for Taking Full Control of Your Google Knowledge Panels
- How to Optimize Your Google My Business Listing
8. Business proximity-to-searcher will remain a top local ranking factor
In Moz’s recent State of Local SEO report, the majority of respondents agreed that Google’s focus on the proximity of a searcher to local businesses frequently emphasizes distance over quality in the local SERPs. I predict that we’ll continue to see this heavily weighting the results in 2019. On the one hand, hyper-localized results can be positive, as they allow a diversity of businesses to shine for a given search. On the other hand, with the exception of urgent situations, most people would prefer to see the best options rather than just the closest ones. [Miriam Ellis]
Further reading:
- The State of Local SEO Industry Report
- Local Search Ranking Factors 2018: Local Today, Key Takeaways, and the Future - Whiteboard Friday
9. Local SEO: Google is going to increase monetization
Look to see more of the local and maps space monetized uniquely by Google, both through AdWords and potentially through new lead-gen models. This space will become more and more competitive. [Russ Jones]
Further reading:
- New Research: 35% of Competitive Local Keywords Have Local Pack Ads
- What Do SEOs Do When Google Removes Organic Search Traffic? - Whiteboard Friday
10. Monetization tests for voice
Google and Amazon have been moving towards voice-supported displays in hopes of better monetizing voice. It will be interesting to see their efforts to get displays into homes and how they integrate display advertising. Bold prediction: Amazon will provide sleep-mode display ads similar to how the Kindle displays them today. [Britney Muller]
11. Marketers will place a greater focus on the SERPs
I expect we’ll see a greater focus on the analysis of SERPs as Google does more to give people answers without them having to leave the search results. We’re seeing more and more vertical search engines like Google Jobs, Google Flights, Google Hotels, Google Shopping. We’re also seeing more in-depth content make it onto the SERP than ever in the form of featured snippets, People Also Ask boxes, and more. With these new developments, marketers are increasingly going to want to report on their general brand visibility within the SERPs, not just their website ranking. It’s going to be more important than ever for people to be measuring all the elements within a SERP, not just their own ranking. [Rob Bucci]
Further reading:
- Mapping the Overlap of SERP Feature Suggestions
- Make Sense of Your Data with These Essential Keyword Segments
12. Targeting topics will be more productive than targeting queries
2019 is going to be another year in which we see the emphasis on individual search queries start to decline, as people focus more on clusters of queries around topics. People Also Ask queries have made the importance of topics much more obvious to the SEO industry. With PAAs, Google is clearly illustrating that they think about searcher experience in terms of a searcher’s satisfaction across an entire topic, not just a specific search query. With this in mind, we can expect SEOs to increasingly want their search queries clustered into topics so they can measure visibility and the competitive landscape across those clusters. [Rob Bucci]
Further reading:
- Build a Search Intent Dashboard to Unlock Better Opportunities
- It's Time to Stop Doing On-Page SEO Like It's 2012
- Using Related Topics and Semantically Connected Keywords in Your SEO - Whiteboard Friday
- How to Feed a Hummingbird: Improve Your On-Page SEO with Related Topics in Moz Pro
13. Linked unstructured citations will receive increasing focus
I recently conducted a small study in which there was a 75% correlation between organic and local pack rank. Linked unstructured citations (the mention of partial or complete business information + a link on any type of relevant website) are a means of improving organic rankings which underpin local rankings. They can also serve as a non-Google dependent means of driving traffic and leads. Anything you’re not having to pay Google for will become increasingly precious. Structured citations on key local business listing platforms will remain table stakes, but competitive local businesses will need to focus on unstructured data to move the needle. [Miriam Ellis]
Further reading:
- The Guide to Building Linked Unstructured Citations for Local SEO
- Why Local Businesses Will Need Websites More than Ever in 2019
14. Reviews will remain a competitive difference-maker
A Google rep recently stated that about one-third of local searches are made with the intent of reading reviews. This is huge. Local businesses that acquire and maintain a good and interactive reputation on the web will have a critical advantage over brands that ignore reviews as fundamental to customer service. Competitive local businesses will earn, monitor, respond to, and analyze the sentiment of their review corpus. [Miriam Ellis]
Further reading:
- Time to Act: Review Responses Just Evolved from "Extra" to "Expected"
- How to Respond to the 5 Types of Google Reviews
- Location Data + Reviews: The 1–2 Punch of Local SEO
- See more local SEO predictions for 2019 by Miriam in our Q&A!
We’ve heard from Mozzers, and now we want to hear from you. What have you seen so far in 2019 that’s got your SEO Spidey senses tingling? What trends are you capitalizing on and planning for? Let us know in the comments below (and brag to friends and colleagues when your prediction comes true in the next 6–10 months). ;-)
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
Thursday, February 21, 2019
The Influence of Voice Search on Featured Snippets
Posted by TheMozTeam
This post was originally published on the STAT blog.
We all know that featured snippets provide easy-to-read, authoritative answers and that digital assistants love to say them out loud when asked questions.
This means that featured snippets have an impact on voice search — bad snippets, or no snippets at all, and digital assistants struggle. By that logic: Create a lot of awesome snippets and win the voice search race. Right?
Right, but there’s actually a far more interesting angle to examine — one that will help you nab more snippets and optimize for voice search at the same time. In order to explore this, we need to make like Doctor Who and go back in time.
From typing to talking
Back when dinosaurs roamed the earth and queries were typed into search engines via keyboards, people adapted to search engines by adjusting how they performed queries. We pulled out unnecessary words and phrases, like “the,” “of,” and, well, “and,” which created truncated requests — robotic-sounding searches for a robotic search engine.
Of course, as search engines have evolved, so too has their ability to understand natural language patterns and the intent behind queries. Google’s 2013 Hummingbird update helped pave the way for such evolution. This algorithm rejigging allowed Google’s search engine to better understand the whole of a query, moving it away from keyword matching to conversation having.
This is good news if you’re a human person: We have a harder time changing the way we speak than the way we write. It’s even greater news for digital assistants, because voice search only works if search engines can interpret human speech and engage in chitchat.
Digital assistants and machine learning
By looking at how digital assistants do their voice search thing (what we say versus what they search), we can see just how far machine learning has come with natural language processing and how far it still has to go (robots, they’re just like us!). We can also get a sense of the kinds of queries we need to be tracking if voice search is on the SEO agenda.
For example, when we asked our Google Assistant, “What are the best headphones for $100,” it queried [best headphones for $100]. We followed that by asking, “What about wireless,” and it searched [best wireless headphones for $100]. And then we remembered that we’re in Canada, so we followed that with, “I meant $100 Canadian,” and it performed a search for [best wireless headphones for $100 Canadian].
We can learn two things from this successful tête-à-tête: Not only does our Google Assistant manage to construct mostly full-sentence queries out of our mostly full-sentence asks, but it’s able to accurately link together topical queries. Despite us dropping our subject altogether by the end, Google Assistant still knows what we’re talking about.
Of course, we’re not above pointing out the fumbles. In the string of: “How to bake a Bundt cake,” “What kind of pan does it take,” and then “How much do those cost,” the actual query Google Assistant searched for the last question was [how much does bundt cake cost].
Just after we finished praising our Assistant for being able to maintain the same subject all the way through our inquiry, we needed it to be able to switch tracks. And it couldn’t. It associated the “those” with our initial Bundt cake subject instead of the most recent noun mentioned (Bundt cake pans).
In another important line of questioning about Bundt cake-baking, “How long will it take” produced the query [how long does it take to take a Bundt cake], while “How long does that take” produced [how long does a Bundt cake take to bake].
They’re the same ask, but our Google Assistant had a harder time parsing which definition of “take” our first sentence was using, spitting out a rather awkward query. Unless we really did want to know how long it’s going to take us to run off with someone’s freshly baked Bundt cake? (Don’t judge us.)
Since Google is likely paying out the wazoo to up the machine learning ante, we expect there to be fewer awkward failures over time. Which is a good thing, because when we asked about Bundt cake ingredients (“Does it take butter”) we found ourselves looking at a SERP for [how do I bake a butter].
Not that that doesn’t sound delicious.
Snippets are appearing for different kinds of queries
So, what are we to make of all of this? That we’re essentially in the midst of a natural language renaissance. And that voice search is helping spearhead the charge.
As for what this means for snippets specifically? They’re going to have to show up for human speak-type queries. And wouldn’t you know it, Google is already moving forward with this strategy, and not simply creating more snippets for the same types of queries. We’ve even got proof.
Over the last two years, we’ve seen an increase in the number of words in a query that surfaces a featured snippet. Long-tail queries may be a nuisance and a half, but snippet-having queries are getting longer by the minute.
When we bucket and weight the terms found in those long-tail queries by TF-IDF, we get further proof of voice search’s sway over snippets. The term “how” appears more than any other word and is followed closely by “does,” “to,” “much,” “what,” and “is” — all words that typically compose full sentences and are easier to remove from our typed searches than our spoken ones.
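If you want to poke at this kind of term weighting on your own keyword export, here's a minimal sketch of the idea. The file name, the bucket-by-word-count grouping, and the use of scikit-learn are illustrative assumptions on our part, not a description of STAT's internal pipeline:

```python
# Rough sketch: bucket snippet-triggering queries by word count, treat each bucket
# as one "document," then use TF-IDF to surface the terms that characterize them.
from collections import defaultdict
from sklearn.feature_extraction.text import TfidfVectorizer

with open("snippet_queries.txt") as f:  # hypothetical export: one query per line
    queries = [line.strip().lower() for line in f if line.strip()]

buckets = defaultdict(list)
for q in queries:
    buckets[len(q.split())].append(q)
docs = [" ".join(qs) for _, qs in sorted(buckets.items())]

vectorizer = TfidfVectorizer()  # default settings keep "how", "does", "to", etc.
matrix = vectorizer.fit_transform(docs)
weights = matrix.sum(axis=0).A1  # total weight of each term across buckets
terms = vectorizer.get_feature_names_out()

for term, weight in sorted(zip(terms, weights), key=lambda pair: -pair[1])[:15]:
    print(f"{term}\t{weight:.3f}")
```

On a long-tail keyword set, a pass like this tends to float question words and connectors to the top, which is exactly the pattern described above.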
This means that if we want to snag more snippets and help searchers using digital assistants, we need to build out long-tail, natural-sounding keyword lists to track and optimize for.
Format your snippet content to match
When it’s finally time to optimize, one of the best ways to get your content into the ears of a searcher is through the right snippet formatting, which is a lesson we can learn from Google.
Taking our TF-IDF-weighted terms, we found that the words “best” and “how to” brought in the most list snippets of the bunch. We certainly don’t have to think too hard about why Google decided they benefit from list formatting — it provides a quick comparative snapshot or a handy step-by-step.
From this, we may be inclined to format all of our “best” and “how to” keyword content into lists. But, as you can see in the chart above, paragraphs and tables are still appearing here, and we could be leaving snippets on the table by ignoring them. If we have time, we’ll dig into which keywords those formats are a better fit for and why.
Get tracking
You could be the Wonder Woman of meta descriptions, but if you aren’t optimizing for the right kind of snippets, then your content’s going to have a harder time getting heard. Building out a voice search-friendly keyword list to track is the first step to lassoing those snippets.
Want to learn how you can do that in STAT? Say hello and request a tailored demo.
Need more snippets in your life? We dug into Google’s double-snippet SERPs for you — double the snippets, double the fun.
SEO Channel Context: An Analysis of Growth Opportunities
Posted by BrankoK
Too often, SEO analyses and decisions are made without considering the context of the marketing channel mix. Just as often, large budgets are poured into paid ads in ways that seem to forget there's a whole lot to gain from catering to popular search demand.
Both instances can lead to leaky conversion funnels and missed opportunity for long-term traffic flows. This article will show you a case of an SEO context analysis we used to determine the importance and role of SEO.
This analysis was one of our deliverables for a marketing agency client who hired us to inform their SEO decisions. We then turned it into a report template for you to get inspired by and duplicate.
Case description
The included charts show real, live data. You can see the whole SEO channel context analysis in this Data Studio SEO report template.
The traffic analyzed is for a monetizing blog, whose marketing team also happens to be one of the most fun to work for. For the sake of this case study, we're giving them a spectacular undercover name — "The Broze Fellaz."
For context, this blog started off with content for the first two years before they launched their flagship product. Now, they sell a catalogue of products highly relevant to their content and, thanks to one of the most entertaining Shark Tank episodes ever aired, they have acquired investments and a highly engaged niche community.
As you’ll see below, organic search is their biggest channel in many ways. Facebook runs as both organic and paid, and the team spends many an hour inside the platform. Email has elaborate automated flows that strive to leverage subscribers who come from the stellar content on the website. We therefore chose the three — organic search, Facebook, and email — as a combination that would yield a comprehensive analysis with insights we can easily act on.
Ingredients for the SEO analysis
This analysis is a result of a long-term retainer relationship with "The Broze Fellaz" as our ongoing analytics client. A great deal was required in order for data-driven action to happen, but we assure you, it's all doable.
From the analysis best practice drawer, we used:
- 2 cups of relevant channels for context and analysis via comparison.
- 3 cups of different touch points to identify channel roles — bringing in traffic, generating opt-ins, closing sales, etc.
- 5 heads of open-minded lettuce and readiness to change current status quo, for a team that can execute.
- 457 oz of focus on finding what is going on with organic search, why it is going on, and what we can do about it (otherwise, we’d end up with another scorecard export).
- Imperial units used in arbitrary numbers that are hard to imagine and thus feel very large.
- 1 to 2 heads of your analyst brain, baked into the analysis. You're not making an automated report — even a HubSpot intern can do that. You're being a human and you're analyzing. You're making human analysis. This helps avoid having your job stolen by a robot.
- Full tray of Data Studio visualizations that appeal to the eye.
- Sprinkles of benchmarks, for highlighting significance of performance differences.
From the measurement setup and stack toolbox, we used:
- Google Analytics with tailored channel definitions, enhanced e-commerce and Search Console integration.
- Event tracking for opt-ins and adjusted bounce rate via MashMetrics GTM setup framework.
- UTM routine for social and email traffic implemented via Google Sheets & UTM.io.
- Google Data Studio. This is my favorite visualization tool. Despite its flaws and gaps (as it’s still in beta) I say it is better than its paid counterparts, and it keeps getting better. For data sources, we used the native connectors for Google Analytics and Google Sheets, then Facebook community connectors by Supermetrics.
- Keyword Hero. Thanks to semantic algorithms and data aggregation, you are indeed able to see 95 percent of your organic search queries (check out Onpage Hero, too, you'll be amazed).
Inspiration for my approach comes from Lea Pica, Avinash, the Google Data Studio newsletter, and Chris Penn, along with our dear clients and the questions they have us answer for them.
Ready? Let's dive in.
Analysis of the client's SEO in the context of their channel mix
1) Insight: Before the visit
What's going on and why is it happening?
Organic search traffic volume blows the other channels out of the water. This is normal for sites with quality regular content; yet, the difference is stark considering the active effort that goes into Facebook and email campaigns.
The CTR of organic search is up to par with Facebook. That's a lot to say when comparing an organic channel to a channel with a high level of targeting control.
It looks like email flows are the clear winner in terms of CTR to the website, which has a highly engaged community of users who return fairly often and advocate passionately. It also has a product and content that's incredibly relevant to their users, which few other companies appear to be good at.
A high CTR on search engine results pages often indicates that organic search may support funnel stages beyond just the top.
As well, email flows are sent to a very warm audience — interested users who went through a double opt-in. It is to be expected for this CTR to be high.
What's been done already?
There's an active effort and budget allocation being put towards Facebook Ads and email automation. A content plan has been put in place and is being executed diligently.
What we recommend next
- Approach SEO as systematically as you approach Facebook and email flows.
- Optimize meta titles and descriptions via testing tools such as Sanity Check. The organic search CTR may become consistently higher than that of Facebook ads.
- Assuming you've worked on improving CTR for Facebook ads, have the same person work on the meta text and titles. Most likely, there'll be patterns you can replicate from social to SEO.
- Run a technical audit and optimize accordingly. Knowing that you haven’t done that in a long time, and seeing how much traffic you get anyway, there’ll be quick, big wins to enjoy.
Results we expect
You can easily increase the organic CTR by at least 5 percent. You could also clean up the technical state of your site in the eyes of crawlers — you’ll then see faster indexing by search engines when you publish new content and increased impressions for existing content. As a result, you may enjoy a major spike within a month.
2) Insight: Engagement and options during the visit
With over 70 percent of traffic coming to this website from organic search, the metrics in this analysis will be heavily skewed towards organic search. So, comparing the rate for organic search to the site-wide rate is sometimes conclusive, other times not.
Adjusted bounce rate — via GTM events in the measurement framework used, we do not count a visit as a bounce if the visit lasts 45 seconds or longer. We prefer this approach because such an adjusted bounce rate is much more actionable for content sites. Users who find what they were searching for often read the page they land on for several minutes without clicking to another page. However, this is still a memorable visit for the user. Further, staying on the landing page for a while, or keeping the page open in a browser tab, are both good indicators for distinguishing quality, interested traffic, from all traffic.
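To make that definition concrete, here's a minimal sketch of how an adjusted bounce rate like this could be computed from a hypothetical session export. In reality the adjustment happens via a GTM timer event sent to Google Analytics, so the column names and pandas approach below are purely illustrative, not part of the MashMetrics framework:

```python
# Illustrative only: shows the metric's definition on a hypothetical sessions export.
# The real setup fires a GTM timer event so GA stops counting long single-page
# visits as bounces; column names here are assumptions.
import pandas as pd

sessions = pd.read_csv("sessions.csv")  # assumed columns: channel, pageviews, duration_seconds

# Standard bounce: single-pageview session, regardless of time on page
sessions["bounce"] = sessions["pageviews"] == 1

# Adjusted bounce: single-pageview session that also lasted under 45 seconds
sessions["adjusted_bounce"] = (sessions["pageviews"] == 1) & (sessions["duration_seconds"] < 45)

print(sessions.groupby("channel")[["bounce", "adjusted_bounce"]].mean().round(3))
```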
We included all Facebook traffic here, not just paid. We know from the client’s data that the majority is from paid content — they have a solid UTM routine in place. But due to boosted posts, we’ve experienced big inaccuracies when splitting paid and organic Facebook for the purposes of channel attribution.
What's going on and why is it happening?
It looks like organic search has a bounce rate worse than the email flows — that's to be expected and not actionable, considering that the emails are only sent to recent visitors who have gone through a double opt-in. What is meaningful, however, is that organic has a better bounce rate than Facebook. It is safe to say that organic search visitors will be more likely to remember the website than the Facebook visitors.
Opt-in rates for Facebook are right above site average, and those for organic search are right below, while organic is bringing in a majority of email opt-ins despite its lower opt-in rate.
Google's algorithms and the draw of the content on this website are doing better at winning users' attention than the detailed targeting applied on Facebook. The organic traffic will have a higher likelihood of remembering the website and coming back. Across all of our clients, we find that organic search can be a great retargeting channel, particularly if you consider that the site will come up higher in search results for its recent visitors.
What's been done already?
The Facebook ad campaigns of "The Broze Fellaz" have been built and optimized for driving content opt-ins. Site content that ranks in organic search is less intentional than that.
Opt-in placements have been tested on some of the biggest organic traffic magnets.
Thorough, creative and consistent content calendars have been in place as a foundation for all channels.
What we recommend next
- It's great to keep using organic search as a way to introduce new users to the site. Now, you can try to be more intentional about using it for driving opt-ins. It’s already serving both of the stages of the funnel.
- Test and optimize opt-in placements on more traffic magnets.
- Test and optimize opt-in copy for top 10 traffic magnets.
- Once your opt-in rates have improved, focus on growing the channel. Add to the content work with a 3-month sprint of an extensive SEO project.
- Assign Google Analytics goal values to non-e-commerce actions on your site. The current opt-ins have different roles and levels of importance and there’s also a handful of other actions people can take that lead to marketing results down the road. Analyzing goal values will help you create better flows toward pre-purchase actions.
- Facebook campaigns seem to be at a point where you can pour more budget into them and expect proportionate increase in opt-in count.
Results we expect
Growth in your opt-ins from Facebook should be proportionate to increase in budget, with a near-immediate effect. At the same time, it’s fairly realistic to bring the opt-in rate of organic search closer to site average.
3) Insight: Closing the deal
For channel attribution with money involved, you want to make sure that your Google Analytics channel definitions, view filters, and UTMs are in top shape.
What's going on and why is it happening?
Transaction rate, as well as per session value, is higher for organic search than it is for Facebook (paid and organic combined).
Organic search contributes to far more last-click revenue than Facebook and email combined. For their relatively low volume of traffic, email flows are outstanding in the volume of revenue they bring in.
Thanks to the integration of Keyword Hero with Google Analytics for this client, we can see that about 30 percent of organic search visits are from branded keywords, which tends to drive the transaction rate up.
So, why is this happening? Most of the product on the site is highly relevant to the information people search for on Google.
Multi-channel reports in Google Analytics also show that people often discover the site in organic search, then come back by typing in the URL or clicking a bookmark. That makes organic a source of conversions where, very often, no other channels are even needed.
We can conclude that Facebook posts and campaigns of this client are built to drive content opt-ins, not e-commerce transactions. Email flows are built specifically to close sales.
What’s been done already?
There is dedicated staff for Facebook campaigns and posts, as well as a thorough system dedicated to automated email flows.
A consistent content routine is in place, with experienced staff at the helm. A piece has been published every week for the last few years, with the content calendar filled with ready-to-publish content for the next few months. The community is highly engaged, reading times are high, comment count soaring, and usefulness of content outstanding. This, along with partnerships with influencers, helps "The Broze Fellaz" take up half of the first page on the SERP for several lucrative topics. They’ve been achieving this even without a comprehensive SEO project. Content seems to be king indeed.
Google Shopping has been tried. The campaign looked promising but didn't yield incremental sales. There’s much more search demand for informational queries than there is for product.
What we recommend next
- Organic traffic is ready to grow. If there is no budget left, resource allocation should be considered. In paid search, you can often simply increase budgets. Here, with stellar content already performing well, a comprehensive SEO project is begging for your attention. Focus can be put into structure and technical aspects, as well as content that better caters to search demand. Think optimizing the site’s information architecture, interlinking content for a cornerstone structure, log analysis and technical cleanup, meta text testing for CTR gains that would also lead to ranking gains, strategic ranking for long-tail topics, and intentional growth of the backlink profile.
- A three- or six-month intensive sprint of comprehensive SEO work would be appropriate.
Results we expect
Increasing last click revenue from organic search and direct by 25 percent would lead to a gain as high as all of the current revenue from automated email flows. Considering how large the growth has been already, this gain is more than achievable in 3–6 months.
Wrapping it up
The organic search presence of "The Broze Fellaz" should continue to play the number-one role in bringing new people to the site and bringing people back to the site. Doing so supports sales that happen with the contribution of other channels, e.g. email flows. The analysis also points out that organic search is effective at playing the role of the last-click channel for transactions, oftentimes without the help of other channels.
We’ve worked with this client for a few years, and, based on our knowledge of their marketing focus, this analysis points us to a confident conclusion that a dedicated, comprehensive SEO project will lead to high incremental growth.
Your turn
In drawing analytical conclusions and acting on them, there’s always more than one way to shoe a horse. Let us know what conclusions you would’ve drawn instead. Copy the layout of our SEO Channel Context Comparison analysis template and show us what it helped you do for your SEO efforts — create a similar analysis for a paid or owned channel in your mix. Whether it’s comments below, tweeting our way, or sending a smoke signal, we’ll be all ears. And eyes.
Wednesday, February 20, 2019
Make Sense of Your Data with These Essential Keyword Segments
Posted by TheMozTeam
This blog post was originally published on the STAT blog.
The first step to getting the most out of your SERP data is smart keyword segmentation — it surfaces targeted insights that will help you make data-driven decisions.
But knowing what to segment can feel daunting, especially when you’re working with thousands of keywords. That’s why we’re arming you with a handful of must-have tags.
Follow along as we walk through the different kinds of segments in STAT, how to create them, and which tags you’ll want to get started with. You’ll be a fanciful segment connoisseur by the time we’re through!
Segmentation in STAT
In STAT, keyword segments are called “tags” and come as two different types: standard or dynamic.
Standard tags are best used when you want to keep specific keywords grouped together because of shared characteristics — like term (brand, product type, etc.), location, or device. Standard tags are static, so the keywords that populate those segments won’t change unless you manually add or remove them.
Dynamic tags, on the other hand, are a fancier kind of tag based on filter criteria. Just like a smart playlist, dynamic tags automatically populate with all of the keywords that meet said criteria, such as keywords with a search volume over 500 that rank on page one. This means that the keywords in a dynamic tag aren’t forever — they’ll filter in and out depending on the criteria you’ve set.
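If it helps to think of the two tag types in code terms, here's a rough sketch: a standard tag is a fixed list, while a dynamic tag behaves like a saved filter that's re-run against fresh ranking data. The dataframe and column names below are illustrative assumptions, not STAT's API:

```python
# A standard tag is a frozen list; a dynamic tag is a filter re-run on fresh data.
# DataFrame, file, and column names are purely illustrative.
import pandas as pd

keywords = pd.read_csv("keyword_rankings.csv")  # assumed columns: keyword, rank, search_volume

# Standard tag: an explicit, hand-picked list that only changes when you edit it
brand_tag = {"acme blinds", "acme curtains", "acme coupon code"}

# Dynamic tag: criteria evaluated every time rankings update, so membership shifts
def dynamic_tag(df):
    return df[(df["search_volume"] > 500) & (df["rank"] <= 10)]["keyword"]

print(dynamic_tag(keywords).tolist())
```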
How to create a keyword segment
Tags are created in a few easy steps. At the Site level, pop over to the Keywords tab, click the down arrow on any table column header, and then select Filter keywords. From there, you can select the pre-populated options or enter your own metrics for a choose-your-own-filter adventure.
Once your filters are in place, simply click Tag All Filtered Keywords, enter a new tag name, and then pick the tag type best suited to your needs — standard or dynamic — and voila! You’ve created your very own segment.
Segments to get you started
Now that you know how to set up a tag, it’s time to explore some of the different segments you can implement and the filter criteria you’ll need to apply.
Rank and rank movement
Tracking your rank and ranking movements with dynamic tags will give you eyeballs on your keyword performance, making it easy to monitor and report on current and historical trends.
There’s a boatload of rank segments you can set up, but here’s just a sampling to get you started:
- Keywords ranking in position 1–3; this will identify your top performing keywords.
- Keywords ranking in position 11–15; this will suss out the low-hanging, top of page two fruit in need of a little nudge.
- Keywords with a rank change of 10 or more (in either direction); this will show you keywords that are slipping off or shooting up the SERP.
Appearance and ownership of SERP features
Whether they’re images, carousels, or news results, SERP features have significantly altered the search landscape. Sometimes they push you down the page and other times, like when you manage to snag one, they can give you a serious leg up on the competition and drive loads more traffic to your site.
Whatever industry-related SERP features you want to keep apprised of, you can create dynamic tags that show you their prevalence and movement within your keyword set. Segment even further for tags that show which keywords own those features and which have fallen short.
Below are a few segments you can set up for featured snippets and local packs.
Featured snippets
Everyone’s favourite SERP feature isn’t going anywhere anytime soon, so it wouldn’t be a bad idea to outfit yourself with a snippet tracking strategy. You can create as many tags as there are snippet options to choose from:
- Keywords with a featured snippet.
- Keywords with a paragraph, list, table, and/or carousel snippet.
- Keywords with an owned paragraph, list, table, and/or carousel snippet.
- Keywords with an unowned paragraph, list, table, and/or carousel snippet.
The first two will allow you to see over-arching snippet trends, while the last two will chart your ownership progress.
If you want to know the URL that’s won you a snippet, just take a peek at the URL column.
Local packs
If you’re a brick-and-mortar business, we highly advise creating tags for local packs since they provide a huge opportunity for exposure. These two tags will show you which local packs you have a presence in and which you need to work on:
- Keywords with an owned local pack.
- Keywords with an unowned local pack.
Want all the juicy data squeezed into a local pack, like who’s showing up and with what URL? We created the Local pack report just for that.
Landing pages, subdomains, and other important URLs
Whether you’re adding new content or implementing link-building strategies around subdomains and landing pages, dynamic tags allow you to track and measure page performance, see whether your searchers are ending up on the pages you want, and match increases in page traffic with specific keywords.
For example, are your informational intent keywords driving traffic to your product pages instead of your blog? To check, a tag that includes your blog URL will pull in each post that ranks for one of your keywords.
Try these three dynamic tags for starters:
- Keywords ranking for a landing page URL.
- Keywords ranking for a subdomain URL.
- Keywords ranking for a blog URL.
Is a page not indexed yet? That’s okay. You can still create a dynamic tag for its URL and keywords will start appearing in that segment when Google finally gets to it.
Location, location, location
Google cares a lot about location and so should you, which is why keyword segments centred around location are essential. You can tag in two ways: by geo-modifier and by geo-location.
For these, it’s better to go with the standard tag as the search term and location are fixed to the keyword.
Geo-modifier
A geo-modifier is the geographical qualifier that searchers manually include in their query — like in [sushi near me]. We advocate for adding various geo-modifiers to your keywords and then incorporating them into your tagging strategy. For instance, you can segment by:
- Keywords with “in [city]” in them.
- Keywords with “near me” in them.
The former will show you how you fare for city-wide searches, while the latter will let you see if you’re meeting the needs of searchers looking for nearby options.
Geo-location
Geo-location is where the keyword is being tracked. More tracked locations mean more searchers’ SERPs to sample. And the closer you can get to searchers standing on a street corner, the more accurate those SERPs will be. This is why we strongly recommend you track in multiple pin-point locations in every market you serve.
Once you’ve got your tracking strategy in place, get your segmentation on. You can filter and tag by:
- Keywords tracked in specific locations; this will let you keep tabs on geographical trends.
- Keywords tracked in each market; this will allow for market-level research.
Search volume & cost-per-click
Search volume might be a contentious metric thanks to Google’s close variants, but having a decent idea of what it’s up to is better than a complete shot in the dark. We suggest at least two dynamic segments around search volume:
- Keywords with high search volume; this will show which queries are popular in your industry and have the potential to drive the most traffic.
- Keywords with low search volume; this can actually help reveal conversion opportunities — remember, long-tail keywords typically have lower search volumes but higher conversion rates.
Tracking the cost-per-click of your keywords will also bring you and your PPC team tonnes of valuable insights — you’ll know if you’re holding the top organic spot for an outrageously high CPC keyword.
As with search volume, tags for high and low CPC should do you just fine. High CPC keywords will show you where the competition is the fiercest, while low CPC keywords will surface your easiest point of entry into the paid game — queries you can optimize for with less of a fight.
Device type
From screen size to indexing, desktop and smartphones produce substantially different SERPs from one another, making it essential to track them separately. So, filter and tag for:
- Keywords tracked on a desktop.
- Keywords tracked on a smartphone.
Similar to your location segments, it’s best to use the standard tag here.
Go crazy with multiple filters
We’ve shown you some really high-level segments, but you can actually filter down your keywords even further. In other words, you can get extra fancy and add multiple filters to a single tag. Go as far as high search volume, branded keywords triggering paragraph featured snippets that you own for smartphone searchers in the downtown core. Phew!
Want to talk shop about segmentation or see dynamic tags in action? Say hello (don’t be shy) and request a demo.
Monday, February 18, 2019
Build a Search Intent Dashboard to Unlock Better Opportunities
Posted by scott.taft
We've been talking a lot about search intent this week, and if you've been following along, you’re likely already aware of how “search intent” is essential for a robust SEO strategy. If, however, you’ve ever laboured for hours classifying keywords by topic and search intent, only to end up with a ton of data you don’t really know what to do with, then this post is for you.
I’m going to share how to take all that sweet keyword data you’ve categorized, put it into a Power BI dashboard, and start slicing and dicing to uncover a ton of insights — faster than you ever could before.
Building your keyword list
Every great search analysis starts with keyword research and this one is no different. I’m not going to go into excruciating detail about how to build your keyword list. However, I will mention a few of my favorite tools that I’m sure most of you are using already:
- Search Query Report — What better place to look first than the search terms already driving clicks and (hopefully) conversions to your site.
- Answer The Public — Great for pulling a ton of suggested terms, questions and phrases related to a single search term.
- InfiniteSuggest — Like Answer The Public, but faster and allows you to build based on a continuous list of seed keywords.
- MergeWords — Quickly expand your keywords by adding modifiers upon modifiers.
- Grep Words — A suite of keyword tools for expanding, pulling search volume and more.
Please note that these tools are a great way to scale your keyword collecting but each will come with the need to comb through and clean your data to ensure all keywords are at least somewhat relevant to your business and audience.
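If you're merging exports from several of these tools, a quick cleanup pass along these lines can help. The file names, normalization rules, and blocklist below are hypothetical, just to illustrate the kind of combing and cleaning I'm talking about:

```python
# A rough cleanup pass for merging keyword exports from several tools.
# File names, normalization rules, and the blocklist are illustrative assumptions.
import re

def normalize(keyword):
    kw = keyword.strip().lower()
    kw = re.sub(r"\s+", " ", kw)  # collapse runs of whitespace
    return kw

merged = set()
for path in ["search_query_report.txt", "answer_the_public.txt", "mergewords.txt"]:
    with open(path) as f:
        merged.update(normalize(line) for line in f if line.strip())

# Drop obviously irrelevant terms before uploading to a rank tracker
blocklist = {"free", "diy"}  # hypothetical exclusions for this client
keywords = sorted(kw for kw in merged if not any(b in kw.split() for b in blocklist))

print(len(keywords), "keywords ready to upload")
```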
Once I have an initial keyword list built, I’ll upload it to STAT and let it run for a couple days to get an initial data pull. This allows me to pull the ‘People Also Ask’ and ‘Related Searches’ reports in STAT to further build out my keyword list. All in all, I’m aiming to get to at least 5,000 keywords, but the more the merrier.
For the purposes of this blog post I have about 19,000 keywords I collected for a client in the window treatments space.
Categorizing your keywords by topic
Bucketing keywords into categories is an age-old challenge for most digital marketers but it’s a critical step in understanding the distribution of your data. One of the best ways to segment your keywords is by shared words. If you’re short on AI and machine learning capabilities, look no further than a trusty Ngram analyzer. I love to use this Ngram Tool from guidetodatamining.com — it ain’t much to look at, but it’s fast and trustworthy.
After dropping my 19,000 keywords into the tool and analyzing by unigram (or 1-word phrases), I manually select categories that fit with my client’s business and audience. I also make sure the unigram accounts for a decent amount of keywords (e.g. I wouldn’t pick a unigram that has a count of only 2 keywords).
Using this data, I then create a Category Mapping table and map a unigram, or “trigger word”, to a Category like the following:
You’ll notice that for “curtain” and “drapes” I mapped both to the Curtains category. For my client’s business, they treat these as the same product, and doing this allows me to account for variations in keywords but ultimately group them how I want for this analysis.
Using this method, I create a Trigger Word-Category mapping based on my entire dataset. It’s possible that not every keyword will fall into a category and that’s okay — it likely means that keyword is not relevant or significant enough to be accounted for.
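For those who'd rather script this step than use the web tool, here's a minimal sketch of the same unigram-count-then-map approach. The trigger words and categories echo the examples above; everything else (file name, matching rule) is an illustrative assumption:

```python
# Sketch of the unigram-then-map approach described above. Counter stands in for
# the web-based Ngram tool; trigger words and categories follow the examples given.
from collections import Counter

with open("keywords.txt") as f:  # hypothetical export: one keyword per line
    keywords = [line.strip().lower() for line in f if line.strip()]

# Step 1: unigram counts — pick categories from the most frequent terms
unigrams = Counter(word for kw in keywords for word in kw.split())
print(unigrams.most_common(20))

# Step 2: trigger word -> Category mapping ("curtain" and "drapes" both roll up to Curtains)
category_map = {"curtain": "Curtains", "curtains": "Curtains", "drapes": "Curtains",
                "blinds": "Blinds", "shades": "Shades"}

def categorize(keyword):
    for word in keyword.split():
        if word in category_map:
            return category_map[word]
    return None  # not every keyword falls into a category, and that's okay

categorized = {kw: categorize(kw) for kw in keywords}
```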
Creating a keyword intent map
Similar to identifying common topics by which to group your keywords, I’m going to follow a similar process but with the goal of grouping keywords by intent modifier.
Search intent is the end goal of a person using a search engine. Digital marketers can leverage these terms and modifiers to infer what types of results or actions a consumer is aiming for.
For example, if a person searches for “white blinds near me”, it is safe to infer that this person is looking to buy white blinds as they are looking for a physical location that sells them. In this case I would classify “near me” as a “Transactional” modifier. If, however, the person searched “living room blinds ideas” I would infer their intent is to see images or read blog posts on the topic of living room blinds. I might classify this search term as being at the “Inspirational” stage, where a person is still deciding what products they might be interested in and, therefore, isn’t quite ready to buy yet.
There is a lot of research on some generally accepted intent modifiers in search and I don’t intend to reinvent the wheel. This handy guide (originally published in STAT) provides a good review of intent modifiers you can start with.
I followed the same process as building out categories to build out my intent mapping and the result is a table of intent triggers and their corresponding Intent stage.
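As a rough illustration of what that intent mapping looks like in practice, here's a small sketch. The modifiers and stages below are only the examples mentioned in this post, not a complete mapping, and the substring-matching rule is an assumption:

```python
# Same pattern as the category mapping, applied to intent. Modifiers shown here
# echo the examples above; extend the dict from whichever intent guide you use.
intent_map = {
    "near me": "Transactional", "buy": "Transactional", "price": "Transactional",
    "ideas": "Inspirational", "inspiration": "Inspirational",
    "how to": "Informational", "what is": "Informational",
}

def intent_stage(keyword):
    for modifier, stage in intent_map.items():
        if modifier in keyword:
            return stage
    return None  # keywords without a recognized modifier stay unclassified

print(intent_stage("white blinds near me"))      # -> Transactional
print(intent_stage("living room blinds ideas"))  # -> Inspirational
```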
Intro to Power BI
There are tons of resources on how to get started with the free tool Power BI, one of which is our own founder Wil Reynolds’ video series on using Power BI for Digital Marketing. This is a great place to start if you’re new to the tool and its capabilities.
Note: it’s not about the tool necessarily (although Power BI is a super powerful one). It’s more about being able to look at all of this data in one place and pull insights from it at speeds which Excel just won’t give you. If you’re still skeptical of trying a new tool like Power BI at the end of this post, I urge you to get the free download from Microsoft and give it a try.
Setting up your data in Power BI
Power BI’s power comes from linking multiple datasets together based on common “keys." Think back to your Microsoft Access days and this should all start to sound familiar.
Step 1: Upload your data sources
First, open Power BI and you’ll see a button called “Get Data” in the top ribbon. Click that and then select the data format you want to upload. All of my data for this analysis is in CSV format so I will select the Text/CSV option for all of my data sources. You have to follow these steps for each data source. Click “Load” for each data source.
Step 2: Clean your data
In the Power BI ribbon menu, click the button called “Edit Queries." This will open the Query Editor where we will make all of our data transformations.
The main things you’ll want to do in the Query Editor are the following:
- Make sure all data formats make sense (e.g. keywords are formatted as text, numbers are formatted as decimals or whole numbers).
- Rename columns as needed.
- Create a domain column in your Top 20 report based on the URL column.
Close and apply your changes by hitting the "Close & Apply" button in the Query Editor, as seen above.
Step 3: Create relationships between data sources
On the left side of Power BI is a vertical bar with icons for different views. Click the third one to see your relationships view.
In this view, we are going to connect all data sources to our ‘Keywords Bridge’ table by clicking and dragging a line from the field ‘Keyword’ in each table and to ‘Keyword’ in the ‘Keywords Bridge’ table (note that for the PPC Data, I have connected ‘Search Term’ as this is the PPC equivalent of a keyword, as we’re using here).
The last thing we need to do for our relationships is double-click on each line to ensure the following options are selected for each so that our dashboard works properly:
- The cardinality is Many to 1
- The relationship is “active”
- The cross filter direction is set to “both”
We are now ready to start building our Intent Dashboard and analyzing our data.
Building the search intent dashboard
In this section I’ll walk you through each visual in the Search Intent Dashboard (as seen below):
Top domains by count of keywords
Visual type: Stacked Bar Chart visual
Axis: I’ve nested URL under Domain so I can drill down to see this same breakdown by URL for a specific Domain
Value: Distinct count of keywords
Legend: Result Types
Filter: Top 10 filter on Domains by count of distinct keywords
Keyword breakdown by result type
Visual type: Donut chart
Legend: Result Types
Value: Count of distinct keywords, shown as Percent of grand total
Metric Cards
Sum of Distinct MSV
Because the Top 20 report shows each keyword 20 times, we need to create a calculated measure in Power BI to only sum MSV for the unique list of keywords. Use this formula for that calculated measure:
Sum Distinct MSV = SUMX(DISTINCT('Table'[Keywords]), FIRSTNONBLANK('Table'[MSV], 0))
Keywords
This is just a distinct count of keywords
Slicer: PPC Conversions
Visual type: Slicer
Drop your PPC Conversions field into a slicer and set the format to “Between” to get this nifty slider visual.
Tables
Visual type: Table or Matrix (a matrix allows for drilling down similar to a pivot table in Excel)
Values: Here I have Category or Intent Stage and then the distinct count of keywords.
Pulling insights from your search intent dashboard
This dashboard is now a Swiss Army knife of data that allows you to slice and dice to your heart’s content. Below are a couple examples of how I use this dashboard to pull out opportunities and insights for my clients.
Where are competitors winning?
With this data we can quickly see who the top competing domains are, but what’s more valuable is seeing who the competitors are for a particular intent stage and category.
I start by filtering to the “Informational” stage, since it represents the most keywords in our dataset. I also filter to the top category for this intent stage which is “Blinds”. Looking at my Keyword Count card, I can now see that I’m looking at a subset of 641 keywords.
Note: To filter multiple visuals in Power BI, you need to press and hold the “Ctrl” button each time you click a new visual to maintain all the filters you clicked previously.
The top competing subdomain here is videos.blinds.com with visibility in the top 20 for over 250 keywords, most of which are for video results. I hit ctrl+click on the Video results portion of videos.blinds.com to update the keywords table to only keywords where videos.blinds.com is ranking in the top 20 with a video result.
From all this I can now say that videos.blinds.com is ranking in the top 20 positions for about 30 percent of keywords that fall into the “Blinds” category and the “Informational” intent stage. I can also see that most of the keywords here start with “how to”, which tells me that people searching for blinds in an informational stage are most likely looking for how-to instructions, and that video may be a desired content format.
Where should I focus my time?
Whether you’re in-house or at an agency, time is always a hot commodity. You can use this dashboard to quickly identify opportunities that you should be prioritizing first — opportunities that can guarantee you’ll deliver bottom-line results.
To find these bottom-line results, we’re going to filter our data using the PPC conversions slicer so that our data only includes keywords that have converted at least once in our PPC campaigns.
Once I do that, I can see I’m working with a pretty limited set of keywords that have been bucketed into intent stages, but I can continue by drilling into the “Transactional” intent stage because I want to target queries that are linked to a possible purchase.
Note: Not every keyword will fall into an intent stage if it doesn’t meet the criteria we set. These keywords will still appear in the data, but this is the reason why your total keyword count might not always match the total keyword count in the intent stages or category tables.
From there I want to focus on those “Transactional” keywords that are triggering answer boxes to make sure I have good visibility, since they are converting for me on PPC. To do that, I filter to only show keywords triggering answer boxes. Based on these filters I can look at my keyword table and see most (if not all) of the keywords are “installation” keywords and I don’t see my client’s domain in the top list of competitors. This is now an area of focus for me to start driving organic conversions.
Wrap up
I’ve only just scratched the surface — there’s tons that can be done with this data inside a tool like Power BI. Having a solid data set of keywords and visuals that I can revisit repeatedly for a client and continuously pull out opportunities to help fuel our strategy is, for me, invaluable. I can work efficiently without having to go back to keyword tools whenever I need an idea. Hopefully you find this makes building an intent-based strategy more efficient and sound for your business or clients.
Detecting Link Manipulation and Spam with Domain Authority
Posted by rjonesx.
Over 7 years ago, while still an employee at Virante, Inc. (now Hive Digital), I wrote a post on Moz outlining some simple methods for detecting backlink manipulation by comparing one's backlink profile to an ideal model based on Wikipedia. At the time, I was limited in the research I could perform because I was a consumer of the API, lacking access to deeper metrics, measurements, and methodologies to identify anomalies in backlink profiles. We used these techniques in spotting backlink manipulation with tools like Remove'em and Penguin Risk, but they were always handicapped by the limitations of consumer facing APIs. Moreover, they didn't scale. It is one thing to collect all the backlinks for a site, even a large site, and judge every individual link for source type, quality, anchor text, etc. Reports like these can be accessed from dozens of vendors if you are willing to wait a few hours for the report to complete. But how do you do this for 30 trillion links every single day?
Since the launch of Link Explorer and my residency here at Moz, I have had the luxury of far less filtered data, giving me a far deeper, clearer picture of the tools available to backlink index maintainers to identify and counter manipulation. While I in no way intend to say that all manipulation can be detected, I want to outline just some of the myriad surprising methodologies to detect spam.
The general methodology
You don't need to be a data scientist or a math nerd to understand this simple practice for identifying link spam. While there certainly is a great deal of math used in the execution of measuring, testing, and building practical models, the general gist is plainly understandable.
The first step is to get a good random sample of links from the web, which you can read about here. But let's assume you have already finished that step. Then, for any property of those random links (DA, anchor text, etc.), you figure out what is normal or expected. Finally, you look for outliers and see if those correspond with something important - like sites that are manipulating the link graph, or sites that are exceptionally good. Let's start with an easy example, link decay.
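As a rough illustration of that last step, here is what outlier hunting can look like in code. The data is simulated and the z-score threshold is arbitrary; this is a minimal sketch of the general approach, not Moz's actual model.

```python
import numpy as np

def find_outliers(values, z_threshold=3.0):
    """Return indices of values more than z_threshold standard deviations from the mean."""
    values = np.asarray(values, dtype=float)
    z_scores = (values - values.mean()) / values.std()
    return np.where(np.abs(z_scores) > z_threshold)[0]

# Stand-in for a measured property over a random sample of sites,
# e.g. each site's link decay rate (fraction of known links that have disappeared).
rng = np.random.default_rng(42)
decay_rates = rng.beta(2, 5, size=10_000)

suspicious = find_outliers(decay_rates)
print(f"{len(suspicious)} of {len(decay_rates)} sites fall well outside the normal range")
```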
Link decay and link spam
Link decay is the natural occurrence of links either dropping off the web or changing URLs. For example, if you get links after you send out a press release, you would expect some of those links to eventually disappear as the pages are archived or removed for being old. And, if you were to get a link from a blog post, you might expect to have a homepage link on the blog until that post is pushed to the second or third page by new posts.
But what if you bought your links? What if you own a large number of domains and all the sites link to each other? What if you use a PBN? These links tend not to decay. Exercising control over your inbound links often means that you keep them from ever decaying. Thus, we can create a simple hypothesis:
Hypothesis: The link decay rate of sites manipulating the link graph will differ from sites with natural link profiles.
The methodology for testing this hypothesis is just as we discussed before. We first figure out what is natural. What does a random site's link decay rate look like? Well, we simply get a bunch of sites and record how fast links are deleted (we visit a page and see a link is gone) vs. their total number of links. We then can look for anomalies.
In this case of anomaly hunting, I'm going to make it really easy. No statistics, no math, just a quick look at what pops up when we first sort by Lowest Decay Rate and then sort by Highest Domain Authority to see who is at the tail-end of the spectrum.
Success! Every example we see of a good DA score but zero link decay appears to be powered by a link network of some sort. This is the "Aha!" moment of data science that is so fun. What is particularly interesting is that we find spam on both ends of the distribution: sites with 0% decay and sites with near-100% decay rates both tend to be spammy. The first type tends to be part of a link network; the second type tends to spam their backlinks onto sites others are spamming, so their links quickly shuffle off to other pages.
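For the curious, the sort-and-eyeball pass described above is easy to reproduce on your own link data. The sketch below assumes a per-site table with total links, links seen deleted, and DA; the tiny inline dataset and column names are made up purely for illustration.

```python
import pandas as pd

# Tiny made-up per-site table; in practice this would be your full link index export.
sites = pd.DataFrame({
    "domain":           ["a.example", "b.example", "c.example", "d.example"],
    "total_links":      [5400, 120, 9800, 310],
    "deleted_links":    [0, 45, 9700, 90],
    "domain_authority": [52, 18, 47, 23],
})

sites["decay_rate"] = sites["deleted_links"] / sites["total_links"]

# Tail 1: decent DA with essentially zero decay (the link-network signature described above)
no_decay = sites.sort_values(["decay_rate", "domain_authority"],
                             ascending=[True, False]).head(10)

# Tail 2: near-total decay (links spammed onto pages that churn constantly)
all_decay = sites[sites["decay_rate"] > 0.95]

print(no_decay[["domain", "domain_authority", "decay_rate"]])
print(all_decay[["domain", "domain_authority", "decay_rate"]])
```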
Of course, now we do the hard work of building a model that actually takes this into account and accurately reduces Domain Authority relative to the severity of the link spam. But you might be asking...
These sites don't rank in Google — why do they have decent DAs in the first place?
Well, this is a common problem with training sets. DA is trained on sites that rank in Google so that we can figure out who will rank above whom. However, historically, we haven't (and no one in our industry, to my knowledge, has) taken into account random URLs that don't rank at all. This is something we're solving for in the new DA model set to launch in early March, so stay tuned, as this represents a major improvement in the way we calculate DA!
Spam Score distribution and link spam
One of the most exciting new additions to the upcoming Domain Authority 2.0 is the use of our Spam Score. Moz's Spam Score is a link-blind metric (we don't use links at all) that predicts the likelihood a domain will be penalized or banned by Google. The higher the score, the worse the site.
Now, we could just ignore any links from sites with Spam Scores over 70 and call it a day, but it turns out that common link manipulation schemes leave behind fascinating patterns. The same simple methodology applies: use a random sample of URLs to find out what a normal backlink profile looks like, then check whether Spam Score is distributed abnormally among the backlinks to a site. Let me show you just one.
It turns out that acting natural is really hard to do. Even the best attempts often fall short, as did this particularly pernicious link spam network. This network had haunted me for 2 years because it included a directory of the top million sites, so if you were one of those sites, you could see anywhere from 200 to 600 followed links show up in your backlink profile. I called it "The Globe" network. It was easy to look at the network and see what they were doing, but could we spot it automatically so that we could devalue other networks like it in the future? When we looked at the link profile of sites included in the network, the Spam Score distribution lit up like a Christmas tree.
Most sites get the majority of their backlinks from low Spam Score domains, and get fewer and fewer as the Spam Score of the linking domains goes up. But this link network couldn't hide, because Spam Score flagged the sites in their network as having quality issues. If we had relied only on ignoring the individual bad-Spam-Score links, we would never have discovered this issue. Instead, we found a great classifier for finding sites that are likely to be penalized by Google for bad link building practices.
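Here is one way such a distribution check could look in code. The bucket edges and the baseline proportions are invented for the example; in practice the baseline would come from the random-sample step described earlier, and the scoring would be more sophisticated than a simple absolute-deviation sum.

```python
import numpy as np

bucket_edges = np.arange(0, 101, 10)  # Spam Score 0-100 in ten 10-point buckets

# Assumed share of backlinks per bucket for a "normal" profile (illustrative only).
baseline = np.array([0.40, 0.22, 0.13, 0.08, 0.06, 0.04, 0.03, 0.02, 0.01, 0.01])

def spam_score_anomaly(backlink_spam_scores):
    """Total absolute deviation from the baseline distribution (0 = identical)."""
    counts, _ = np.histogram(backlink_spam_scores, bins=bucket_edges)
    observed = counts / counts.sum()
    return float(np.abs(observed - baseline).sum())

# A profile dominated by one mid-range Spam Score bucket, as with "The Globe"
rng = np.random.default_rng(0)
network_profile = np.r_[rng.normal(55, 4, 400), rng.uniform(0, 100, 50)]
print(round(spam_score_anomaly(network_profile), 2))  # a large deviation lights up
```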
DA distribution and link spam
We can find similar patterns among sites with the distribution of inbound Domain Authority. It's common for businesses seeking to increase their rankings to set minimum quality standards on their outreach campaigns, often DA30 and above. An unfortunate outcome of this is that what remains are glaring examples of sites with manipulated link profiles.
Let me take a moment and be clear here. A manipulated link profile is not necessarily against Google's guidelines. If you do targeted PR outreach, it is reasonable to expect that such a distribution might occur without any attempt to manipulate the graph. However, the real question is whether Google wants sites that perform such outreach to perform better. If not, this glaring example of link manipulation is pretty easy for Google to dampen, if not ignore altogether.
A normal link graph for a site that is not targeting high-link-equity domains will have the majority of its links coming from DA0–10 sites, slightly fewer from DA10–20, and so on and so forth until there are almost no links from DA90+ sites. This makes sense, as the web has far more low-DA sites than high. But all the sites above have abnormal link distributions, which makes it easy to detect them and correct link value at scale.
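A crude way to test for that shape programmatically is to bucket a site's linking domains by DA and check whether the counts broadly fall as DA rises. The heuristic below (allowing at most a couple of "rising" buckets) and the toy data are simplifications for illustration, not how Moz actually scores this.

```python
from collections import Counter

def da_distribution_looks_natural(linking_domain_das, bucket_size=10):
    """Crude shape test: counts per DA bucket should mostly fall as DA rises."""
    buckets = Counter(da // bucket_size for da in linking_domain_das)
    counts = [buckets.get(b, 0) for b in range(100 // bucket_size)]
    rising_steps = sum(1 for lo, hi in zip(counts, counts[1:]) if hi > lo)
    return rising_steps <= 2  # allow a little noise before calling it abnormal

natural = [3] * 500 + [15] * 300 + [27] * 150 + [44] * 60 + [61] * 15 + [78] * 3
outreach_heavy = [35] * 40 + [42] * 120 + [55] * 200 + [68] * 90  # almost nothing below DA30

print(da_distribution_looks_natural(natural))         # True
print(da_distribution_looks_natural(outreach_heavy))  # False
```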
Now, I want to be clear: these are not necessarily examples of violating Google's guidelines. However, they are manipulations of the link graph. It's up to you to decide whether you believe Google takes the time to differentiate between the kinds of outreach that produce an abnormal link distribution like this.
What doesn't work
For every link manipulation detection method we discover and keep, we scrap dozens more that don't pan out. Some of these failures are actually quite surprising. Let me write about just one of the many.
The first surprising example was the ratio of nofollow to follow links. It seems pretty straightforward that comment, forum, and other types of spammers would end up accumulating lots of nofollowed links, thereby leaving a pattern that is easy to discern. Well, it turns out this is not true at all.
The ratio of nofollow to follow links turns out to be a poor indicator, as popular sites like facebook.com often have a higher ratio than even pure comment spammers. This is likely due to widgets and beacons, and to the legitimate use of popular sites like facebook.com in comments across the web. Of course, this isn't always the case. There are some sites with 100% nofollowed links and a high number of root linking domains. These anomalies, like "Comment Spammer 1," can be detected quite easily, but as a general measurement the ratio does not serve as a good classifier for spam or ham.
So what's next?
Moz is continually traversing the link graph looking for ways to improve Domain Authority, using everything from basic linear algebra to complex neural networks. The goal is simple: we want to make the best Domain Authority metric ever. We want a metric users can trust in the long run to root out spam just like Google does (and help you determine when you or your competitors are pushing the limits), while at the same time maintaining or improving correlations with rankings. Of course, we have no expectation of rooting out all spam — no one can do that. But we can do a better job. Led by the incomparable Neil Martinsen-Burrell, our metric will stand alone in the industry as the canonical method for measuring the likelihood a site will rank in Google.
We're launching Domain Authority 2.0 on March 5th! Check out our helpful resources here, or sign up for our webinar this Thursday, February 21st, for more info on how to communicate changes like this to clients and stakeholders.
Friday, February 15, 2019
4 Ways to Improve Your Data Hygiene - Whiteboard Friday
Posted by DiTomaso
We base so much of our livelihood on good data, but managing that data properly is a task in and of itself. In this week's Whiteboard Friday, Dana DiTomaso shares why you need to keep your data clean and some of the top things to watch out for.
Video Transcription
Hi. My name is Dana DiTomaso. I am President and partner at Kick Point. We're a digital marketing agency, based in the frozen north of Edmonton, Alberta. So today I'm going to be talking to you about data hygiene.
What I mean by that is the stuff we see every single time we start working with a new client: this stuff is always messed up. Sometimes it's one of these four things. Sometimes it's all four, or sometimes there are extra things. So I'm going to cover this stuff today in the hopes that perhaps the next time we get a profile from someone it is not quite as bad, or, if you look at these things and see how bad it is, that you definitely start sitting down and cleaning this stuff up.
1. Filters
So what we're going to start with first are filters. By filters, I'm talking about analytics here, specifically Google Analytics. When you go into the admin of Google Analytics, there's a section called Filters. There's a section on the left, which is all the filters for everything in that account, and then there's a filters section for each view. Filters help you exclude or include specific traffic based on a set of parameters.
Filter out office, home office, and agency traffic
So usually what we'll find is one Analytics property for your website, and it has one view, which is the All Website Data view that Analytics gives you by default, but then there are no filters, which means that you're not excluding things like office traffic, your internal people visiting the website, or home offices. If you have a bunch of people who work from home, get their IP addresses and exclude them, because you don't necessarily want your internal traffic mucking up things like conversions, especially if you're doing stuff like checking your own forms.
You haven't had a lead in a while and maybe you fill out the form to make sure it's working. You don't want that coming in as a conversion and then screwing up your data, especially if you're a low-volume website. If you have a million hits a day, then maybe this isn't a problem for you. But if you're like the rest of us and don't necessarily have that much traffic, something like this can be a big problem in terms of the volume of traffic you see. Then agency traffic as well.
So agencies, please make sure that you're filtering out your own traffic. Again things like your web developer, some contractor you worked with briefly, really make sure you're filtering out all that stuff because you don't want that polluting your main profile.
Create a test and staging view
The other thing that I recommend is creating what we call a test and staging view. Usually in our Analytics profiles, we'll have three different views. One we call master, and that's the view that has all these filters applied to it.
So you're only seeing the traffic that isn't you. It's the customers, people visiting your website, the real people, not your office people. Then the second view we call test and staging. So this is just your staging server, which is really nice. For example, if you have a different URL for your staging server, which you should, then you can just include that traffic. Then if you're making enhancements to the site or you upgraded your WordPress instance and you want to make sure that your goals are still firing correctly, you can do all that and see that it's working in the test and staging view without polluting your main view.
Test on a second property
That's really helpful. Then the third thing is to make sure to test on a second property. This is easy to do with Google Tag Manager. What we'll have set up in most of our Google Tag Manager accounts is our usual analytics, and most of the stuff goes there. But then if we're testing something new, like say the content consumption metric we started putting out this summer, then we want to make sure we set that up on a second Analytics property, not a view, and send the new stuff that we're trying out over there.
So you have two different Analytics properties. One is your main property. This is where all the regular stuff goes. Then you have a second property, which is where you test things out, and this is really helpful to make sure that you're not going to screw something up accidentally when you're trying out some crazy new thing like content consumption, which can totally happen and has definitely happened as we were testing the product. You don't want to pollute your main data with something different that you're trying out.
So send something to a second property. You do this for websites. You always have a staging and a live. So why wouldn't you do this for your analytics, where you have a staging and a live? So definitely consider setting up a second property.
2. Time zones
The next thing that we have a lot of problems with are time zones. Here's what happens.
Let's say your website is a basic install of WordPress and you didn't change the time zone, so it's set to UTC. That's the default in WordPress unless you change it. So now you've got your data for your website saying it's UTC. Then let's say your marketing team is on the East Coast, so they've got all of their tools set to Eastern time. Then your sales team is on the West Coast, so all of their tools are set to Pacific time.
So you can end up with a situation where, let's say, for example, you've got a website where you're using a form plugin for WordPress. Then when someone submits a form, it's recorded on your website, but that data also gets pushed over to your sales CRM. So now your website is saying that this number of leads came in on this day, because it's in UTC. Well, in UTC the day has already ended, or it hasn't started yet, while your analytics tools are recording the number of leads in Eastern time.
But then the third wrinkle is then you have Salesforce or HubSpot or whatever your CRM is now recording Pacific time. So that means that you've got this huge gap of who knows when this stuff happened, and your data will never line up. This is incredibly frustrating, especially if you're trying to diagnose why, for example, I'm submitting a form, but I'm not seeing the lead, or if you've got other data hygiene issues, you can't match up the data and that's because you have different time zones.
So definitely check the time zones of every product you use: website, CRM, analytics, ads, all of it. If it has a time zone setting, pick one and stick with it. That's your canonical time zone. It will save you so many headaches down the road, trust me.
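As a practical example of what "pick a canonical time zone" looks like when you're stitching exports together, here is a small Python sketch (using the standard zoneinfo module, Python 3.9+) that converts each system's timestamp into one agreed-upon zone before comparing them. The source zones mirror the scenario above; the canonical zone is just whatever you choose.

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

CANONICAL = ZoneInfo("America/Edmonton")  # whatever zone you pick, use it everywhere

def to_canonical(timestamp_str, source_zone):
    """Parse a naive timestamp recorded in source_zone and convert it to the canonical zone."""
    naive = datetime.strptime(timestamp_str, "%Y-%m-%d %H:%M")
    return naive.replace(tzinfo=ZoneInfo(source_zone)).astimezone(CANONICAL)

website_lead  = to_canonical("2019-02-15 03:30", "UTC")                  # WordPress (UTC default)
analytics_hit = to_canonical("2019-02-14 22:30", "America/New_York")     # marketing tools (Eastern)
crm_record    = to_canonical("2019-02-14 19:30", "America/Los_Angeles")  # sales CRM (Pacific)

# All three systems recorded the same moment; normalized, the timestamps now agree.
print(website_lead == analytics_hit == crm_record)  # True
```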
3. Attribution
The next thing is attribution. Attribution is a whole other lecture in and of itself, beyond what I'm talking about here today.
Different tools have different ways of showing attribution
But what I find frustrating about attribution is that every tool has its own little special way of doing it. Analytics is like the last non-direct click. That's great. Ads says, well, maybe we'll attribute it, maybe we won't. If you went to the site a week ago, maybe we'll call it a view-through conversion. Who knows what they're going to call it? Then Facebook has a completely different attribution window.
You can use a tool, such as Supermetrics, to change the attribution window. But if you don't understand what the default attribution window is in the first place, you're just going to make things harder for yourself. Then there's HubSpot, which says the very first touch is what matters, and so, of course, HubSpot will never agree with Analytics and so on. Every tool has its own little special sauce and how they do attribution. So pick a source of truth.
Pick your source of truth
The best thing to do is just say, "You know what? I trust this tool the most." Then that is your source of truth. Do not try to get this source of truth to match up with that source of truth. You will go insane. You do have to make sure that at least things like your time zones are consistent, so that part is all set.
Be honest about limitations
But then after that, really it's just making sure that you're being honest about your limitations.
Know where things are going to fall down, and that's okay, but at least you've got this source of truth that you can trust. That's the most important thing with attribution. Make sure to spend the time and read how each tool handles attribution, so that when someone comes to you and says, "Well, I see that we got 300 visits from this ad campaign, but in Facebook it says we got 6,000. Why is that?" you have an answer. That might be a little bit of an extreme example, but I mean I've seen weirder things with Facebook attribution versus Analytics attribution. I've even talked about stuff like Mixpanel and Kissmetrics. Every tool has its own little special way of recording attribution. It's never the same as anyone else's. We don't have a standard in the industry of how this stuff works, so make sure you understand these pieces.
4. Interactions
Then the last thing is what I call interactions. The biggest thing that I find people do wrong here is in Google Tag Manager: it gives you a lot of rope, which you can hang yourself with if you're not careful.
GTM interactive hits
One of the biggest things is what we call an interactive hit versus a non-interactive hit. So let's say in Google Tag Manager you have a scroll depth trigger.
You want to see how far down the page people scroll. At 25%, 50%, 75%, and 100%, it will send off an event saying this is how far down they scrolled on the page. Well, the thing is that you can also make that hit interactive. So if somebody scrolls down the page 25%, you can say, well, that's an interactive hit, which means that person is no longer counted as bounced, because an interaction has been recorded, which for your setup might be great.
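If you're curious what that setting actually controls under the hood: in Universal Analytics the interactive/non-interactive distinction is carried by the non-interaction flag, which the Measurement Protocol exposes as the ni parameter (the GTM field sets the same flag on the hit, as far as I understand it). Here is a minimal Python sketch that sends the same scroll event both ways; the property ID and client ID are placeholders.

```python
import requests

GA_ENDPOINT = "https://www.google-analytics.com/collect"  # Universal Analytics Measurement Protocol

def send_scroll_event(non_interaction: bool):
    payload = {
        "v": "1",               # Measurement Protocol version
        "tid": "UA-XXXXXXX-1",  # placeholder property ID
        "cid": "555",           # placeholder client ID
        "t": "event",
        "ec": "Scroll Depth",   # event category
        "ea": "25%",            # event action
        "ni": "1" if non_interaction else "0",  # the non-interaction flag
    }
    requests.post(GA_ENDPOINT, data=payload)

send_scroll_event(non_interaction=True)   # measured, but the session can still bounce
send_scroll_event(non_interaction=False)  # counts as an interaction, so the session can't bounce
```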
Gaming bounce rate
But what I've seen are unscrupulous agencies who come in and say if the person scrolls 2% of the way down the page, now that's an interactive hit. Suddenly the client's bounce rate goes down from say 80% to 3%, and they think, "Wow, this agency is amazing." They're not amazing. They're lying. This is where Google Tag Manager can really manipulate your bounce rate. So be careful when you're using interactive hits.
Absolutely, maybe it's totally fair: if someone is reading your content, they might just read that one page and then hit the back button and go back out. It's totally fair to treat something like scroll depth, or a certain piece of the content entering the user's viewport, as interactive. But that doesn't mean that everything should be interactive. So dial back the interactions that you're using, or at least make smart decisions about the interactions you choose to use. Otherwise you're just gaming your bounce rate.
Goal setup
Then goal setup as well, that's a big problem. A lot of people, by default, have destination goals set up in Analytics because they don't know how to set up event-based goals. By destination goal, I mean you filled out the form, you got to a thank-you page, and you're recording views of that thank-you page as goals, which, yes, is one way to do it.
But the problem is that a lot of people, who aren't super great at interneting, will bookmark that page or keep coming back to it, because maybe you put some really useful information on your thank-you page, which is what you should do, except that means people keep visiting it again and again without actually filling out the form. So now your conversion rate is all messed up because you're basing it on the destination, not on the actual action of the form being submitted.
So be careful on how you set up goals, because that can also really game the way you're looking at your data.
Ad blockers
Ad blockers could be anywhere from 2% to 10% of your audience, depending upon how technically sophisticated your visitors are. So you'll end up in situations where you have a form fill but no corresponding visit to match it with.
It just goes into an attribution black hole. But they did fill out the form, so at least you got their data, but you have no idea where they came from. Again, that's going to be okay. So definitely think about the percentage of your visitors, based on you and your audience, who probably have an ad blocker installed and make sure you're comfortable with that level of error in your data. That's just the internet, and ad blockers are getting more and more popular.
Companies like Apple are also changing the way they do tracking. So definitely make sure that you understand these pieces and are really thinking about them when you're looking at your data. Again, these numbers may never match up 100%. That's okay. You can't measure everything. Sorry.
Bonus: Audit!
Then the last thing I really want you to think about — this is the bonus tip — audit regularly.
So at least once a year, go through all the different stuff that I've covered in this video and make sure that nothing has changed or been updated, and that you don't have some secret, exciting new tracking code that somebody added and then forgot about because you were trying out a trial of some product, tossed it on, and it's been running for a year even though the trial expired nine months ago. So definitely make sure that you're running only the stuff that you should be running and doing an audit at least on a yearly basis.
If you're busy and you have a lot of different visitors to your website, it's a pretty high-volume property, maybe monthly or quarterly would be a better interval, but at least once a year go through and make sure that everything that's there is supposed to be there, because that will save you headaches when you look at trying to compare year-over-year and realize that something horrible has been going on for the last nine months and all of your data is trash. We really don't want to have that happen.
So I hope these tips are helpful. Get to know your data a little bit better. It will like you for it. Thanks.
Video transcription by Speechpad.com