Wednesday, February 6, 2019


Exploring Google's New Carousel Featured Snippet

Posted by TheMozTeam

Google let it be known earlier this year that snippets were a-changin’. And true to their word, we’ve seen them make two major updates to the feature — all in an attempt to answer more of your questions.

We first took you on a deep dive of double featured snippets, and now we’re taking you for a ride on the carousel snippet. We’ll explore how it behaves in the wild and which of its snippets you can win.

For your safety, please remain seated and keep your hands, arms, feet, and legs inside the vehicle at all times!

What a carousel snippet is and how it works

This particular snippet holds the answers to many different questions and, as the name suggests, employs carousel-like behaviour in order to surface them all.

When you click one of the “IQ-bubbles” that run along the bottom of the snippet, JavaScript takes over and replaces the initial “parent” snippet with one that answers a brand new query. This query is a combination of your original search term and the text of the IQ-bubble.

So, if you searched [savings account rates] and clicked the “capital one” IQ-bubble, you’d be looking at a snippet for “savings account rates capital one.” That said, 72.06 percent of the time, natural language processing will step in here and produce something more sensible, like “capital one savings account rates.”

On the new snippet, the IQ-bubbles sit at the top, making room for the “Search for” link at the bottom. The link is the bubble snippet’s query and, when clicked, becomes the search query of a whole new SERP — a bit of fun borrowed from the “People also ask” box.

You can blame the ludicrous "IQ-bubble" name on Google: it's the class name given to these elements in the SERP's HTML. We have heard them referred to as "refinement" bubbles or "related search" bubbles, but we don't like either label because we've seen them both refine and relate. IQ-bubble it is.

There are now 6 times the number of snippets on a SERP

Back in April, we sifted through every SERP in STAT to see just how large the initial carousel rollout was. Turns out, it made a decent-sized first impression.

Carousel snippets appeared only in the US: we discovered 40,977 desktop and mobile SERPs with them, which makes up a hair over 9 percent of the US-en market. When we peeked again at the beginning of August, carousel snippets had grown by half but still hadn't reached non-US markets.

Since one IQ-bubble equals one snippet, we deemed it essential to count every single bubble we saw. All told, there were a dizzying 224,508 IQ-bubbles on our SERPs. This means that 41,000 keywords managed to produce over 220,000 extra featured snippets. We’ll give you a minute to pick your jaw up off the floor.

The lowest and most common number of bubbles we saw on a carousel snippet was three, and the highest was 10. The average number of bubbles per carousel snippet was 5.48 — an IQ of five if you round to the nearest whole bubble (they’re not that smart).
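That average is just simple division over the counts reported above; a quick sanity-check sketch in Python:

```python
# Counts reported above: total IQ-bubbles and total carousel SERPs (US-en, April crawl).
total_bubbles = 224_508
carousel_serps = 40_977

print(round(total_bubbles / carousel_serps, 2))  # 5.48 bubbles per carousel snippet
```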

Depending on whether you’re a glass-half-full or a glass-half-empty kind of person, this either makes for a lot of opportunity or a lot of competition, right at the top of the SERP.

Most bubble-snippet URLs are nowhere else on the SERP

When we’ve looked at “normal” snippets in the past, we’ve always been able to find the organic results that they’ve been sourced from. This wasn’t the case with carousel snippets — we could only find 10.76 percent of IQ-bubble URLs on the 100-result SERP. This left 89.24 percent unaccounted for, which is a metric heck-tonne of new results to contend with.

Concerned about the potential competitor implications of this, we decided to take a gander at ownership at the domain level.

Turns out things weren’t so bad. 63.05 percent of bubble snippets had come from sites that were already competing on the SERP — Google was just serving more varied content from them. It does mean, though, that there was a brand new competitor jumping onto the SERP 36.95 percent of the time. Which isn’t great.

Just remember: these new pages or competitors aren’t there to answer the original search query. Sometimes you’ll be able to expand your content in order to tackle those new topics and snag a bubble snippet, and sometimes they’ll be beyond your reach.

So, when IQ-bubble snippets do bother to source from the same SERP, what ranks do they prefer? Here we saw another big departure from what we’re used to.

Normally, 97.88 percent of snippets source from the first page, and 29.90 percent typically pull from rank three alone. With bubble snippets, only 36.58 percent of their URLs came from the top 10 ranks. And while the most popular rank position that bubble snippets pulled from was on the first page (also rank three), just under five percent of them did this.

We could apply the always helpful "just rank higher" rule here, but there appear to be plenty of exceptions to it. A top 10 spot just isn't as essential to landing a bubble snippet as it is for a regular snippet.

We think this is due to relevancy: Because bubble snippet queries only relate to the original search term — they’re not attempting to answer it directly — it makes sense that their organic URLs wouldn’t rank particularly high on the SERP.

Multi-answer ownership is possible

Next we asked ourselves, can you own more than one answer on a carousel snippet? And the answer was a resounding: you most definitely can.

First we discovered that you can own both the parent snippet and a bubble snippet. We saw this occur on 16.71 percent of our carousel snippets.

Then we found that owning multiple bubbles is also a thing that can happen. Just over half (57.37 percent) of our carousel snippets had two or more IQ-bubbles that sourced from the same domain. And as many as 2.62 percent had a domain that owned every bubble present — and most of those were 10-bubble snippets!

Folks, it’s even possible for a single URL to own more than one IQ-bubble snippet, and it’s less rare than we’d have thought — 4.74 percent of bubble snippets in a carousel share a URL with a neighboring bubble.

This raises the same obvious question that finding two snippets on the SERP did: Is your content ready to pull multi-snippet duty?

"Search for" links don't tend to surface the same snippet on the new SERP

Since bubble snippets are technically providing answers to questions different from the original search term, we looked into what shows up when the bubble query is the keyword being searched.

Specifically, we wanted to see if, when we click the “Search for” link in a bubble snippet, the subsequent SERP 1) had a featured snippet and 2) had a featured snippet that matched the bubble snippet from whence it came.

To do this, we re-tracked our 40,977 SERPs and then tracked their 224,508 bubble “Search for” terms to ensure everything was happening at the same time.

The answers to our two pressing questions were thus:

  1. Strange but true: even though the bubble query was snippet-worthy on the first, related SERP, it wasn’t always snippet-worthy on its own SERP. 18.72 percent of “Search for” links didn’t produce a featured snippet on the new SERP.
  2. Stranger still, 78.11 percent of the time, the bubble snippet and its snippet on the subsequent SERP weren’t a match — Google surfaced two different answers for the same question. In fact, the bubble URL only showed up in the top 20 results on the new SERP 31.68 percent of the time.

If we’re being honest, we’re not exactly sure what to make of all this. If you own the bubble snippet but not the snippet on the subsequent SERP, you’re clearly on Google’s radar for that keyword — but does that mean you’re next in line for full snippet status?

And if the roles are reversed, you own the snippet for the keyword outright but not when it’s in a bubble, is your snippet in jeopardy? Let us know what you think!

Paragraph and list formatting reign supreme (still!)

Last, and somewhat least, we took a look at the shape all these snippets were turning up in.

When it comes to the parent snippet, Heavens to Betsy if we weren’t surprised. For the first time ever, we saw an almost even split between paragraph and list formatting. Bubble snippets, on the other hand, went on to match the trend we’re used to seeing in regular ol’ snippets.

We also discovered that bubble snippets aren’t beholden to one type of formatting even in their carousel. 32.21 percent of our carousel snippets did return bubbles with one format, but 59.71 percent had two and 8.09 percent had all three. This tells us that it’s best to pick the most natural format for your content.

Get cracking with carousel snippet tracking

If you can’t wait to get your mittens on carousel snippets, we track them in STAT, so you’ll know every keyword they appear for and have every URL housed within.

If you’d like to learn more about SERP feature tracking and strategizing, say hello and request a demo!


This article was originally published on the STAT blog on September 13, 2018.



Tuesday, February 5, 2019


A New Domain Authority Is Coming Soon: What’s Changing, When, & Why

Posted by rjonesx.

Howdy Moz readers,

I'm Russ Jones, Principal Search Scientist at Moz, and I am excited to announce a fantastic upgrade coming next month to one of the most important metrics Moz offers: Domain Authority.

Domain Authority has become the industry standard for measuring the strength of a domain relative to ranking. We recognize that stability plays an important role in making Domain Authority valuable to our customers, so we wanted to make sure that the new Domain Authority brought meaningful changes to the table.

Learn more about the new DA

What’s changing?

What follows is an account of some of the technical changes behind the new Domain Authority and why they matter.

The training set:

Historically, we’ve relied on training Domain Authority against an unmanipulated, large set of search results. In fact, this has been the standard methodology across our industry. But we have found a way to improve upon it from the ground up, making Domain Authority fundamentally more reliable.

The training algorithm:

Rather than relying on a complex linear model, we’ve made the switch to a neural network. This offers several benefits including a much more nuanced model which can detect link manipulation.

The model factors:

We have greatly improved upon the ranking factors behind Domain Authority. In addition to looking at link counts, we’ve now been able to integrate our proprietary Spam Score and complex distributions of links based on quality and traffic, along with a bevy of other factors.
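Moz hasn't published the model itself, so purely to illustrate the switch described above (a simple linear model versus a small neural network trained on link-style features), here's what that comparison looks like in scikit-learn. The feature matrix and target below are synthetic stand-ins for the kinds of factors listed, not Moz's training data or architecture:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

# Synthetic features standing in for signals like log link counts and spam scores.
rng = np.random.default_rng(0)
X = rng.random((500, 4))
y = 100 * X @ np.array([0.5, 0.2, -0.2, 0.1]) + rng.normal(0, 2, 500)  # toy target

linear = LinearRegression().fit(X, y)
neural = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0).fit(X, y)

print("linear R^2:", round(linear.score(X, y), 3))
print("neural R^2:", round(neural.score(X, y), 3))
```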

The backbone:

At the heart of Domain Authority is the industry's leading link index, our new Moz Link Explorer. With over 35 trillion links, our exceptional data turns the brilliant statistical work by Neil Martinsen-Burrell, Chas Williams, and so many more amazing Mozzers into a true industry standard.

What does this mean?

These fundamental improvements to Domain Authority will deliver a better, more trustworthy metric than ever before. We can remove spam, improve correlations, and, most importantly, update Domain Authority relative to all the changes that Google makes.

It means that you will see some changes to Domain Authority when the launch occurs. We anchored the new model to our existing Domain Authority, which minimizes changes, but with all the improvements there will no doubt be some fluctuation in Domain Authority scores across the board.

What should we do?

Use DA as a relative metric, not an absolute one.

First, make sure that you use Domain Authority as a relative metric. Domain Authority is meaningless when it isn't compared to other sites. What matters isn't whether your site drops or increases — it's whether it drops or increases relative to your competitors. When we roll out the new Domain Authority, make sure you check your competitors' scores as well as your own, as they will likely fluctuate in a similar direction.
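One way to put "relative, not absolute" into practice is to compare your score's movement against your competitors' movement rather than reading your new number in isolation. A minimal sketch with made-up scores:

```python
# Hypothetical DA scores before and after the update (not real data).
before = {"yoursite.com": 42, "rival-a.com": 48, "rival-b.com": 39, "rival-c.com": 55}
after = {"yoursite.com": 38, "rival-a.com": 44, "rival-b.com": 34, "rival-c.com": 51}

deltas = {site: after[site] - before[site] for site in before}
competitor_avg = sum(d for site, d in deltas.items() if site != "yoursite.com") / (len(deltas) - 1)

print("Your change:", deltas["yoursite.com"])                    # -4
print("Competitors' average change:", round(competitor_avg, 1))  # -4.3
# A drop in line with (or smaller than) your competitors' drops is not a relative loss.
```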

Know how to communicate changes with clients, colleagues, and stakeholders

Second, be prepared to communicate with your clients or webmasters about the changes and improvements to Domain Authority. While change is always disruptive, the new Domain Authority is better than ever and will allow them to make smarter decisions about search engine optimization strategies going forward.

Expect DA to keep pace with Google

Finally, expect that we will be continuing to improve Domain Authority. Just like Google makes hundreds of changes to their algorithm every year, we intend to make Domain Authority much more responsive to Google's changes. Even when Google makes fundamental algorithm updates like Penguin or Panda, you can feel confident that Moz's Domain Authority will be as relevant and useful as ever.

When is it happening?

We plan on rolling out the new Domain Authority on March 5th, 2019. We will have several more communications between now and then to help you and your clients best respond to the new Domain Authority, including a webinar on February 21st. We hope you’re as excited as we are and look forward to continuing to bring you the most reliable, cutting-edge metrics our industry has to offer.


Be sure to check out the resources we’ve prepared to help you acclimate to the change, including an educational whitepaper and a presentation you can download to share with your clients, team, and stakeholders:

Explore more resources here



Monday, February 4, 2019

How to Set Up Metrics to Optimize Your Digital PR Team’s Press Coverage

Posted by acarlisle

Over the past six years, our team at Fractl has studied the art of mastering content marketing press coverage. Before moving into Agency Operations, I spent a year as Media Relations Manager, onboarding and training over a dozen new associates for our digital PR team. Scaling a team of that size in such a short period of time required hands-on training and clear communication of the goals and expectations of the role. But what metrics are indicative of success in digital PR?

As a data-driven content marketing agency, we turned to the numbers for something a little different than our usual data-heavy campaigns — we used our own historical data to analyze and optimize our digital PR team’s outreach.

This post aims to provide better insight into defining measurable variables as key performance indicators, or KPIs, for digital PR teams, and into understanding the implications and relationships of those KPIs. We’ll also go into the rationale for establishing baselines for these KPIs, which indicate the quality, efficiency, and efficacy of a team’s outreach efforts.

As a guide for defining success by analyzing your own team's metrics (digital PR or otherwise), we'll provide the framework of the research design that helped us establish a threshold for the single variable we identified as the best measure of our efforts, the one most significantly correlated with the KPIs of a successful digital PR team.

Determining the key performance indicators for digital PR outreach

The influx of data available to marketers and PR professionals for measuring the impact of their work lets us move away from vague metrics like “reach” and the even vaguer goal of “more publicity.” Instead, we can focus on the metrics most indicative of what we’re actually trying to measure: the effect of digital PR efforts.

We all have our theories and educated guesses about which metrics are most important and how each are related, but without researching further, theories remain theories (or expert opinions, at best). Operational research allows businesses to use the scientific method as a way to provide managers and their teams with a quantitative basis for decision making. Operationalization is the process of strictly defining variables to turn nebulous concepts (in this case, the effort and success of your digital PR team) into variables that can be measured, empirically and quantitatively.

We identified one indicator that best measures the effort put into a campaign’s outreach, and it is a precursor to all of the indicators below: the volume of pitch emails sent for each campaign.

Because not all pitches are created equal, the indicators below gauge the factors that best define the success of outreach: the quality of outreach correspondence, the efficiency of the time taken to secure press, the efficacy of the campaign, and the media mentions secured. Each multi-faceted indicator can be described by a variety of measurements, and all are tied to the independent variable of the volume of pitch emails sent for each campaign.

Some indicators are better measured by more than a single metric, so for the purposes of this post, here are three metrics for each of the three KPIs to offer a more holistic picture of your team’s performance (a minimal sketch of computing the rate and timeline metrics follows the list):

Pitch quality and efficacy

  • Placement Rate: The percentage of placements (i.e., media mentions) secured per the number of total pitches sent.
  • Interest Rate: The percentage of interested publisher replies to pitches per the number of total pitches sent.
  • Decline Rate: The percentage of declining publisher replies to pitches per the number of total pitches sent.

Efficiency and capacity

  • Total days of outreach: The number of business days between the first and last pitch sent for a campaign, which is the sum of the two metrics below.
  • Days to first placement: The number of business days between the first pitch sent and first placement to be published for a campaign.
  • Days to syndication: The number of business days between the first placement to be published and the last pitch to be sent for a campaign.

Placement quality and efficacy

  • Total Links: The total number of backlinks from external linking domains of any attribution type (e.g. DoFollow, NoFollow) for a campaign’s landing page.
  • Total DoFollow Links: The total number of DoFollow backlinks from external linking domains for a campaign’s landing page.
  • Total Domain Authority of Links: The total Domain Authority of all backlinks from external linking domains of any attribution type (e.g., DoFollow, NoFollow) for a campaign’s landing page.
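Here's the sketch referenced above: how the rate and timeline metrics could be computed for a single campaign from a basic outreach log. The record structure and the example numbers are our own illustration, not Fractl's actual tracking schema:

```python
from dataclasses import dataclass
from datetime import date

import numpy as np

@dataclass
class Campaign:
    pitches_sent: int
    interested_replies: int
    declines: int
    placements: int
    first_pitch: date
    first_placement: date
    last_pitch: date

def business_days(start: date, end: date) -> int:
    """Business days between two dates (numpy's busday_count excludes the end date)."""
    return int(np.busday_count(start, end))

def campaign_metrics(c: Campaign) -> dict:
    return {
        "placement_rate": c.placements / c.pitches_sent,
        "interest_rate": c.interested_replies / c.pitches_sent,
        "decline_rate": c.declines / c.pitches_sent,
        "days_to_first_placement": business_days(c.first_pitch, c.first_placement),
        "days_to_syndication": business_days(c.first_placement, c.last_pitch),
        "total_days_of_outreach": business_days(c.first_pitch, c.last_pitch),
    }

# Hypothetical campaign record:
campaign = Campaign(62, 9, 14, 6, date(2018, 7, 2), date(2018, 7, 13), date(2018, 8, 1))
print(campaign_metrics(campaign))
```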

Optimizing effort to yield the best KPIs

After identifying the metrics, we need to solve the next challenge: What are the relationships between your efforts and your KPIs? The practical application of these answers can help you establish a threshold or range for the input metric that is correlated with the highest KPIs. We’ll discuss that in a bit.

After identifying metrics to analyze, define the nature of their relationships to one another. Use a hypothesis test to verify an effect; in this case, we’re interested in the relationship between pitch count and each of the metrics we defined above as KPIs of successful outreach. This study hypothesizes that campaigns closed out in 70 pitches or fewer will have better KPIs than campaigns closed out with 71 or more pitches.

Analyzing the relationship and determining significance of the data

Next, determine if the relationship is significant; when a relationship is statistically significant, the observed relationship has a high likelihood of holding in the future. When it comes to claiming statistical significance, some may assume there must be a complex formula that only seasoned statisticians can calculate. In reality, determining statistical significance is done via a t-test, a simple statistical test that compares two samples to help us infer whether the same relationship is likely to appear in future samples.

In this case, campaigns with 70 or fewer pitches are one group and campaigns with 71 or more pitches are the second group. The findings below show the percentage difference between the means of the two groups (i.e., the campaigns from Q2 and Q3) to determine whether lower pitch counts do have the desired effect on each metric; those that are asterisked are statistically significant, meaning there is less than a 5 percent chance that the observed results are due to chance.
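The test itself is only a couple of lines in SciPy. A minimal sketch with fabricated placement rates; in practice you'd feed in the per-campaign values of whichever KPI you're testing:

```python
from scipy import stats

# Fabricated per-campaign placement rates for the two groups (illustrative only).
low_pitch_group = [0.12, 0.09, 0.15, 0.11, 0.14, 0.10, 0.13, 0.16]   # 70 or fewer pitches
high_pitch_group = [0.07, 0.05, 0.09, 0.06, 0.08, 0.04, 0.10, 0.05]  # 71 or more pitches

t_stat, p_value = stats.ttest_ind(low_pitch_group, high_pitch_group, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# p < 0.05 means the difference between the group means is statistically significant.
```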


How our analysis can optimize your digital PR team's efforts

In practice, the relationships between these metrics help you establish a better standard of practice for your team’s outreach, with realistic expectations and goals. Further, the correlation between the specified range of pitch counts and all other KPIs gives you a reliable range of the values you can expect for pitch quality, timelines, and campaign performance when you adhere to that pitch range.

The original theory, that a threshold for pitch counts exists, is confirmed by the data when pitch count is compared against every other performance metric. The sample with lower pitch counts (70 or fewer) fared better on the KPIs we want to decrease (e.g., decline rates, total days of outreach) and on the KPIs we want to increase (e.g., placement rates, link counts). The sample with higher pitch counts (71 or more) saw the inverse, performing worse on both sets of KPIs. Essentially, when campaigns with 70 or fewer pitches sent were isolated, the numbers improved in nearly every metric.

When this analysis is applied to each of the 74 campaigns from Q3, you’ll see nearly consistent results, with the exception again being Total Domain Authority. Campaigns with up to 70 pitches are correlated with better KPIs than campaigns with 71 or more pitches.

Vague or unrealistic expectations and goals will sabotage the success of any team and any project. When it comes to the effort put into each campaign, having objective, optimized procedures allows your team to work smarter, not harder.

So, what does that baseline range look like, and how do you calculate it?

Establishing realistic baseline metrics

A simple question helps answer what the baseline should be in this instance: What was the average of each KPI of the campaigns with fewer than 70 pitches?

We gathered all 70 campaigns closed out of our digital PR team’s pipelines in the second and third quarters of 2018 with pitch counts below 70 and determined the average of each metric. Then, we calculated the standard deviation from the mean, which defines the spread of the data to establish a range for each KPI — and that became our baseline range.

Examining historical data is among the best methods for determining realistic baselines. By gathering a broad, sizeable sample (usually more than 30 is ideal) that represents the full scope of projects your team works on, you can determine the average for each metric and deviation from the average to establish a range.
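In code, each baseline range is simply the mean plus or minus one standard deviation across your sample. A sketch with hypothetical placement rates:

```python
import statistics

# Hypothetical placement rates from a sample of low-pitch-count campaigns.
placement_rates = [0.08, 0.12, 0.10, 0.15, 0.09, 0.11, 0.13, 0.07, 0.14, 0.10]

mean = statistics.mean(placement_rates)
spread = statistics.stdev(placement_rates)  # sample standard deviation

baseline_low, baseline_high = mean - spread, mean + spread
print(f"Baseline range: {baseline_low:.3f} to {baseline_high:.3f} (mean {mean:.3f})")
```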

These reliable ranges allow your digital PR team to understand the baselines they must strive for during active outreach when in compliance with the standard of practice for pitch counts established from our research. Further, these baseline ranges allow you to set more realistic goals for future performance by increasing each range by a realistic percentage.

Deviations from that range act as indicators of potential issues related to the quality, efficiency, or efficacy of the outreach, with each of the metrics implying what specifically may be awry. Below, we offer context on each of the metrics defining our three KPIs in terms of their implications and limitations.

Understanding how each metric can influence the productivity of your team

Pitch quality and efficacy

The purpose of a pitch is to tell a compelling and succinct story of why the campaign you’re pitching is newsworthy and fits the beat of the individual writer you’re pitching. Help your team succeed by enforcing tried-and-true best practices that let them craft each pitch with personalization and a compelling narrative top of mind. The placements act as a conversion rate measuring the efficacy of your team’s outreach, while interests and declines act as a combined response rate measuring the quality of that outreach.

The “spray and pray” mentality of blasting out as many pitches as possible and hoping one yields a media mention ultimately jeopardizes publisher relationships and is an inefficient use of time. To help your team avoid it, focus on the rates at which they secure responses and placements from publishers relative to the total volume of pitches sent. Prioritizing this interpretation of the data, rather than the individual counts alone, adds context to the pitch count.

Campaigns with a high ratio of publisher interest and placements to pitches imply that the quality of the pitch was sufficient, meaning it encompassed one or more of the factors known to be important in securing press coverage. These include, but are not limited to, a compelling and newsworthy narrative, personalized details, and relevancy to the writer. In some cases, campaigns may have a low ratio of interest but a high ratio of placements as a result of nonresponse bias: publishers who don’t respond to a pitch but still cover the campaign in a future article, yielding a placement. These “ghost posts” can skew interest rates, which is why three metrics compose this KPI.

Campaigns with a high ratio of declines to pitches imply that the quality of the pitch may be subpar, which signals to the associate to re-evaluate their outreach strategy. Again, the inverse may not always be true, as campaigns with a low ratio of declines may also be a result of nonresponse bias. If publishers do not respond at all, we can infer either that they did not open the email or that they opened it and were not interested, declining by default.

While confounding variables (such as the quality of the content itself, not just the quality of the pitch) may skew these metrics in either direction and remain the greatest limitation, holistically, these three metrics offer actionable insights during active outreach.

Efficiency and capacity

Similarly, ranges for timeline metrics can give your associates context of when they should be achieving milestones (i.e., the first placement) as well as the total length of outreach. Deviating beyond the standard timeline to secure the first placement often indicates the outreach strategy needs re-evaluating, while extending beyond the range for total days of outreach indicates a campaign should be closed out soon.

Efficiency metrics help beyond advising the outreach strategy; they also inform operations from a capacity standpoint. Managing tens and sometimes hundreds of active campaigns at any given point relies on consistency for capacity: reducing the variance between the volume of campaigns entering production and the volume being closed out of the pipeline by staggering campaigns based on their average duration. This allows for more robust planning and reliable forecasting.

Awareness of the baselines for time to secure press enables you and your team to not just plan strategies and capacities, but also the content of your campaigns. You can ensure timely content by allowing for sufficient time for outreach when ideating your campaigns so the content does not become stale or outdated.

The biggest limitation of these metrics is a looming external variable often beyond our control: the editorial calendars and agendas of the publishers. Publishers have their own deadlines and priorities to fill, so we cannot always plan for delays in publishing dates or, worse yet, coverage being scrapped altogether.

Placement quality and efficacy

Ultimately, your efforts are intended to yield placements to gain brand awareness and voice, as well as build a diverse link portfolio; the latter is arguably easier to quantify. Total external links pointing to the campaign’s landing page or client homepage along with the total Domain Authority of those links allow you to track both the quantity and quality of links.

Higher link counts built from your placements allow you to infer the syndication networks of the placements your outreach secured, while higher total Domain Authority measures the relative value of those linking domains to measure quality. Along with further specifying the types of links (specifically Dofollow links, arguably the most valuable link type), these metrics have the potential to forecast the impact of the campaign on the website’s own overall authority.

Replicating our analysis to optimize your team’s press coverage

Oftentimes, historical research designs such as this one are limited in the cause-and-effect claims they can support. Still, this collection of data offers valuable insight into correlations that help us infer patterns and trends.

Our analysis utilized historical data representative of our entire agency in terms of scope of clients, campaign types, and associates, strengthening internal validity. So while the specific baseline metrics are tailored to our team, the framework we offer for establishing those baselines is transferable to any team.

Apply these methods with your digital PR team to help define KPIs, establish baselines, and test your own theories:

  • Track the ten metrics that compose the KPIs of digital PR outreach for each campaign or initiative to keep a running historical record.
  • Determine the average spread via the mean and standard deviation for each metric from a sizeable, representative sample of campaigns to establish your team’s baseline metrics.
  • Test any theories of trends in your team’s effort (i.e., pitch counts) in relation to KPIs with a simple hypothesis test to optimize your team and resources.

How does your team approach defining the most important metrics and establishing baseline ranges? How do you approach optimizing those efforts to yield the best press coverage? Uncovering these answers will help your team synergize more effectively and establish productive foundations for future outreach efforts.



Friday, February 1, 2019

All About Website Page Speed: Issues, Resources, Metrics, and How to Improve

Posted by BritneyMuller

Page speed is an important consideration for your SEO work, but it's a complex subject that tends to be very technical. What are the most crucial things to understand about your site's page speed, and how can you begin to improve? In this week's edition of Whiteboard Friday, Britney Muller goes over what you need to know to get started.

Click on the whiteboard image above to open a high resolution version in a new tab!

Video Transcription

Hey, Moz fans. Welcome to another edition of Whiteboard Friday. Today we're going over all things page speed and really getting to the bottom of why it's so important for you to be thinking about and working on as you do your work.

At the very fundamental level I'm going to briefly explain just how a web page is loaded. That way we can sort of wrap our heads around why all this matters.

How a webpage is loaded

A user goes to a browser, puts in your website, and there is a DNS request. This points at your domain name provider, so maybe GoDaddy, and this points to your server where your files are located, and this is where it gets interesting. So the DOM starts to load all of your HTML, your CSS, and your JavaScript. But very rarely does this one pull all of the needed scripts or needed code to render or load a web page.

Typically the DOM will need to request additional resources from your server to make everything happen, and this is where things start to really slow down your site. Having that sort of background knowledge I hope will help in us being able to triage some of these issues.

Issues that could be slowing down your site

What are some of the most common culprits?

  1. First and foremost is images. Large images are the biggest culprit of slow loading web pages.
  2. Hosting can cause issues.
  3. Plugins, apps, and widgets, basically any third-party script as well can slow down load time.
  4. Your theme and any large files beyond that can really slow things down as well.
  5. Redirects, the number of hops needed to get to a web page will slow things down.
  6. Then JavaScript, which we'll get into in a second.

But all of these things can be a culprit. So we're going to go over some resources, some of the metrics and what they mean, and then what are some of the ways that you can improve your page speed today.

Page speed tools and resources

The primary resources I have listed here are Google tools and Google suggested insights. I think what's really interesting about these is we get to see what their concerns are as far as page speed goes and really start to see the shift towards the user. We should be thinking about that anyway. But first and foremost, how is this affecting people that come to your site, and then secondly, how can we also get the dual benefit of Google perceiving it as higher quality?

We know that Google suggests a website load in two to three seconds. The faster the better, obviously, but that's roughly where the range is. I also highly suggest you take a competitive view of that: put your competitors into some of these tools and benchmark your speed goals against what's competitive in your industry. I think that's a cool way to go into this.

Chrome User Experience Report

This is Chrome real user metrics. Unfortunately, it's only available for larger, popular websites, but you get some really good data out of it. It's housed on BigQuery, so some basic SQL knowledge is needed.

Lighthouse

Lighthouse, one of my favorites, is available right in Chrome Dev Tools. If you are on a web page and you click Inspect Element and you open up Chrome Dev Tools, to the far right tab where it says Audit, you can run a Lighthouse report right in your browser.

What I love about it is it gives you very specific examples and fixes that you can do. A fun fact to know is it will automatically be on the simulated fast 3G, and notice they're focused on mobile users on 3G. I like to switch that to applied fast 3G, because it has Lighthouse do an actual run of that load. It takes a little bit longer, but it seems to be a little bit more accurate. Good to know.

PageSpeed Insights

PageSpeed Insights is really interesting. They've now incorporated the Chrome User Experience Report. But if you're not one of those large sites, it's not even going to measure your actual page speed. It's going to look at how your site is configured and provide feedback according to that and score it. Just something good to be aware of. It still provides good value.

Test your mobile website speed and performance

I don't know what the title of this is. If you do, please comment down below. But it's located on testmysite.thinkwithgoogle.com. This one is really cool because it tests the mobile speed of your site. If you scroll down, it directly ties it into ROI for your business or your website. We see Google leveraging real-world metrics, tying it back to what's the percentage of people you're losing because your site is this slow. It's a brilliant way to sort of get us all on board and fighting for some of these improvements.

Pingdom and GTmetrix are non-Google products or non-Google tools, but super helpful as well.

Site speed metrics

So what are some of the metrics?

First paint

We're going to go over first paint, which is basically just the first non-blank paint on a screen. It could be just the first pixel change. That initial change is first paint.

First contentful paint

First contentful paint is when the first content appears. This might be part of the nav or the search bar or whatever it might be. That's the first contentful paint.

First meaningful paint

First meaningful paint is when primary content is visible. When you sort of get that reaction of, "Oh, yeah, this is what I came to this page for," that's first meaningful paint.

Time to interactive

Time to interactive is when it's visually usable and engage-able. So we've all gone to a web page and it looks like it's done, but we can't quite use it yet. That's where this metric comes in. So when is it usable for the user? Again, notice how user-centric even these metrics are. Really, really neat.

DOM content loaded

The DOM content loaded, this is when the HTML is completely loaded and parsed. So some really good ones to keep an eye on and just to be aware of in general.

Ways to improve your page speed

HTTP/2

HTTP/2 can definitely speed things up. As to what extent, you have to sort of research that and test.

Preconnect, prefetch, preload

Preconnect, prefetch, and preload are really interesting and important in speeding up a site. We see Google doing this on their SERPs. If you inspect an element, you can see Google prefetching some of the URLs so that it has them ready faster if you click on one of those results. You can similarly do this on your site; it helps to speed up that loading process.

Enable caching & use a content delivery network (CDN)

Caching is so, so important. Definitely do your research and make sure that's set up properly. Same with CDNs, so valuable in speeding up a site, but you want to make sure that your CDN is set up properly.
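Caching behavior ultimately comes down to the response headers your server or CDN sends. As a hedged, minimal illustration (not a production configuration), here's a tiny Python static-file server that marks fingerprinted static assets as long-lived:

```python
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

class CachingHandler(SimpleHTTPRequestHandler):
    """Serve files from the current directory, adding a one-year cache lifetime
    for static asset types (illustrative defaults, not a recommendation for every asset)."""

    def end_headers(self):
        if self.path.endswith((".css", ".js", ".jpg", ".png", ".woff2")):
            self.send_header("Cache-Control", "public, max-age=31536000, immutable")
        else:
            self.send_header("Cache-Control", "no-cache")
        super().end_headers()

if __name__ == "__main__":
    ThreadingHTTPServer(("", 8000), CachingHandler).serve_forever()
```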

Compress images

The easiest and probably quickest way for you to speed up your site today is really just to compress those images. It's such an easy thing to do. There are all sorts of free tools available for you to compress them. Optimizilla is one. You can even use free tools on your computer, Save for Web, and compress properly.
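If you'd rather script the compression than run images through a web tool one at a time, it's a few lines with the Pillow library. A minimal sketch, assuming JPEG output and that a quality setting of around 80 is acceptable for your images:

```python
from pathlib import Path

from PIL import Image  # pip install Pillow

def compress_to_jpeg(src: Path, dest: Path, max_width: int = 1600, quality: int = 80) -> None:
    """Downscale oversized images to max_width and re-encode as an optimized JPEG."""
    with Image.open(src) as img:
        if img.width > max_width:
            new_height = round(img.height * max_width / img.width)
            img = img.resize((max_width, new_height))
        img.convert("RGB").save(dest, "JPEG", quality=quality, optimize=True)

# Convert and compress every PNG in an "images" folder (transparency is flattened).
for path in Path("images").glob("*.png"):
    compress_to_jpeg(path, path.with_suffix(".jpg"))
```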

Minify resources

You can also minify resources. So it's really good to be aware of what minification, bundling, and compression do so you can have some of these more technical conversations with developers or with anyone else working on the site.

So this is sort of a high-level overview of page speed. There's a ton more to cover, but I would love to hear your input and your questions and comments down below in the comment section.

I really appreciate you checking out this edition of Whiteboard Friday, and I will see you all again soon. Thanks so much. See you.

Video transcription by Speechpad.com


