
#33: Using Google Search Console Effectively Part 1 - Technical SEO - Bamboo Chalupa Podcast

Google Search Console (formerly Google Webmaster Tools) is a powerful suite of free tools to help improve your search marketing. It provides a lot of data – but how do you use it effectively?

In this Part 1 of the Google Search Console series, Brett and Nate discuss how to use Search Console’s features to improve your website’s technical SEO.

Issues Discussed

  • Set up tips & tricks
  • Crawl rate report
  • Crawl error report
  • Fetch As Google
  • Robots Tester
  • URL Parameters
  • XML Sitemaps

Links Mentioned

Google Search Console

Nate’s Guide To Google Search Console

Check out this episode!

Transcript

Brett Snyder: Hello there and welcome to episode number 33 of your Bamboo Chalupa Digital Marketing Podcast. My name is Brett Snyder, I am the president of Knucklepuck, and I’m joined by my co-host Nate Shivar, the author of Shivarweb.com.

Nate Shivar: Hello hello.

Brett Snyder: So today we’re going to talk about a tool that is publicly available, that is free, and that most people, especially those who have been around digital marketing for a while, obviously know about, but really don’t use to the full extent possible. We’re talking about what was formerly Google Webmaster Tools, but has recently been re-branded as Google Search Console. To give a little background on Google Webmaster Tools, now Google Search Console: I’ll apologize in advance if we call it Webmaster Tools at any point throughout the episode. Google Search Console kind of created a mutually beneficial arrangement between Google and webmasters. That’s why they called it Webmaster Tools to begin with, because they wanted to encourage people to install this snippet of code on their website under the guise that Google will provide you with information about how that website is performing. Now, to provide the information about how that website is performing, Google obviously needs access to that information. This idea of Webmaster Tools really gave Google an opportunity to get people to install additional tracking code about site performance on their own individual websites. Google gets more information about how people are interacting with and engaging on the website, and the webmaster gets info about how those interactions and engagements actually influence performance in the search engine.

Nate Shivar: That’s a good reminder about the SEO industry in general. I think a lot of people who don’t know about it think of it as, you know, people who are trying to trick Google into better rankings. It’s important to understand that Google wants to have well run, well written, well maintained websites because, like you said, it’s a mutually beneficial relationship. To do that, they also make Google Webmaster Tools, now Search Console, fairly easy to install. To get it installed, you know, not to get too basic, but there are several options. Nowadays you can actually verify it through your analytics code snippet, so you don’t even have to add anything additional to your website code.

Brett Snyder: That is honestly my favorite way to do it. It’s the easiest way. We’ve told people since episode one that you have to have Google Analytics installed, and it’s pretty straightforward. If you have the same login, really all you have to do is say verify through Google Analytics. If you’re logged in to the same Google account your Google Analytics code is verified under, then once you hit verify, it will automatically verify your access to Google Search Console.

Nate Shivar: Even if you don’t want to go that route, or if you’re having trouble, say if you’re a third party and you don’t have the same Google account for whatever reason, you can always add a little snippet of code to the head section. You can even verify it through DNS if that’s what you want to work through.

Brett Snyder: In all my years doing SEO, I have never even heard somebody say they’ve done the DNS option, let alone done it myself. I would say that the Google Analytics shared login would be the first option. Then go to the HTML tag. Again, there are plugins. I know the Yoast plugin lets you drop the access code for your Webmaster Tools or your Search Console right in there. There is also an HTML file upload. I think there used to be an XML file upload, but they realized that was too complicated, so they got marginally less complicated with an HTML file upload. The DNS and HTML file are two options, but I have never, Nate I don’t know about you, but I have never seen or heard anybody use those options.
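For reference, the HTML tag method is a single meta element that Search Console gives you to paste into the head section, and the DNS method is a TXT record on your domain; the tokens below are placeholders for whatever Search Console generates for you:

    HTML tag method (paste inside the <head> of your homepage):
    <meta name="google-site-verification" content="YOUR-VERIFICATION-TOKEN" />

    DNS method (add a TXT record for your domain):
    google-site-verification=YOUR-VERIFICATION-TOKEN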

Nate Shivar: No, I think the DNS option would just be for a very, very specific circumstance, but even if you need that method, it is there. Either way, Google wants you to have Google Webmaster Tools. Before we actually get into how we use it for technical SEO, there are a few other things I want to mention about setting it up. First off, Google looks at protocols and sub-domains as different properties. You need to have your www version and your non-www version and any other sub-domains all verified separately. Then if you’re running your site on the secure protocol, HTTPS, that needs to be verified separately. Your unsecure version, your HTTP, needs to be verified even if you are site-wide HTTPS, just because Search Console is going to surface any issues that do come through, and you need to have access to those.
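As an illustration, for a site at example.com (a placeholder domain) that means adding and verifying four separate properties, plus any sub-domains you run:

    http://example.com/
    http://www.example.com/
    https://example.com/
    https://www.example.com/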

Brett Snyder: There are ways to set your preferred version in there. If you’re redirecting non-www to www, you can identify that to Google. It’s actually a recommended best practice that you do so, because again, you go straight to the source here. This is a direct line to the way Google is getting information about your site performance. You can actually set what your preferred version of the site is, but you have to verify these properties individually in order to do that.
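A minimal sketch of what that non-www to www redirect might look like, assuming an Apache server with mod_rewrite enabled and example.com as a placeholder domain; your own server setup may differ:

    # .htaccess: send non-www requests to the www version with a 301 redirect
    RewriteEngine On
    RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
    RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]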

Nate Shivar: Exactly. Also when you are setting it up, especially if you’re in an agency or company context and you use, say, a group login, a group Google account, make sure that you set up email forwarding so that the right people are getting the right alerts. Brett, like you said, this is your direct line to Google. If there are any crawl issues or any hacking issues, you know, if your site gets hacked and Google finds evidence of that, if there are new owners assigned, or if for whatever reason your site gets a manual penalty, this is how Google is going to communicate those issues to you. If you’re setting it up under a group login, make sure you have those fine details sorted out so that the right people are getting these messages.

Brett Snyder: Right, and I know the manual penalty alerts were much more prominent and prevalent when Penguin came out. I don’t know that there have been many that have come out recently, but again, if something algorithmic is taking place on your site, Webmaster Tools basically dumps those messages into an inbox. For us, what we do is we have one login, and then from that login we go in and set up email forwarding for the particular project managers on each individual account. That way anybody can go in there if they need to dive into a particular client that we have, and we have everything consolidated in one place. It’s not something where, if there’s turnover, or if you have different people or shifting priorities, you’re saying, well, so-and-so actually verified Webmaster Tools, but now whose login was tied to that?

It’s nice to be able to keep things consistent and then just set up email forwarding in that account. That way if you have any change in your team or any turnover you don’t have to re-verify Search Console, but it also keeps a copy of all these messages in whatever the group login is. They’re also available in Webmaster Tools. I’ve got to preemptively apologize, I still haven’t gotten that out of my vernacular. You still have them in the Search Console platform as well.

Nate Shivar: Yeah, you don’t want the keys to your website data resting with one employee.

Brett Snyder: Okay, so let’s talk a little bit about, I know the biggest thing that we use Search Console for, and Nate I know you do as well, is looking at the technical side of SEO. We’ll touch on some of the on-page and the off-page a little later, but the real majority of what we want to talk about today is how we’re going to use the technical data that comes straight from the horse’s mouth to influence the strategies that we want to implement so that Google can more efficiently crawl our website. If there’s one thing that you get out of this, and one thing that you look to Google Search Console for and ask how it can help you: it gives you information on optimizing the crawl rate and the crawl efficiency of your website.

Nate Shivar: Yeah, and crawl rate, I think we’ll start there, is one of the more misunderstood pieces of data that you get, just because it’s hard to know what the standard is, or when you should be concerned. In the crawl rate report, what I really look at is any variances. This is especially important if you’re working with a large site, but even if you have a small site where Google is crawling your whole site every few weeks or every few days, you should still look for a few things. First off, you should keep a running benchmark of what is typical for your website, what’s the typical crawl rate, so that if you see any anomalies, if the crawl rate spikes way up or way down, you can check whether that aligns with a site update. You know, if you add a big section to your website, if you do a website relaunch, you should see Googlebot spike up in the crawl rate, or if your site goes down, your crawl rate should go down. If you see an anomaly that should not be there, that’s where you can start your investigative work.

Brett Snyder: I think it’s important to talk about how relative these numbers are, because there isn’t really a standard for the number of kilobytes downloaded per day. We’ve gotten questions about that as well: what should I be aiming for? It’s almost like asking what kind of domain authority you should aim for. Well, you should aim to have one that is better than the competitors in your space. Being able to look at these crawl rates, as Nate said, looking at them as a trend, looking for spikes, peaks, and valleys against what your standard crawl rate is, that’s going to give you the information that something might be wrong. That doesn’t necessarily tell you exactly what is wrong, but it gives you an understanding that you need to look into something. As we’ve said before, if Google has issues crawling your website, it doesn’t matter how many links you’ve built, it doesn’t matter how strong your content is. If they cannot crawl your website, you are not going to get in the top ranking positions, because that is just a major issue in how they process your information.

Nate Shivar: Oh yeah, there are just so many variables. It also depends on your type of industry and your type of website. If you’re a publisher and you’re putting up multiple posts per day, Googlebot is going to be trained to want to come back and crawl that fresh content. Whereas if you’re a large site with mostly static content that never changes, your crawl rate might look different. The important thing again is to look at variances. Do the download times look reasonable? If you’re having trouble with your user-facing speed and Googlebot is also taking a lot of time to download pages, that might be an issue. If you see Googlebot crawling a large number of pages but not downloading many kilobytes of data, that might mean Google is stuck in a crawl loop, where it’s just crawling lots of parameter pages. You’ll see this on eCommerce sites where Googlebot kind of gets stuck down a rabbit hole, crawling the same thing over and over and over. That might be a sign that you should investigate what Googlebot is looking at and how you can coach Googlebot through to download larger portions of your site and not get stuck like that.

Brett Snyder: This ties back to, remember, page speed is still a small but growing aspect of the ranking algorithm, and page speed also matters from a conversion rate perspective. You know, Nate, we can try to find the specific statistics for the show notes, but it’s something like a one second improvement in page load time can have a 20-30% impact on conversion rate. It’s a very substantial impact. Being able to understand how Google is crawling and downloading at least gives you some insight into page speed as Google measures it. Looking at these crawl rates really gives you an understanding of whether or not you need to devote some resources towards addressing those key issues.

Nate Shivar: To segue into the next report, crawl errors. A lot of times Google will find errors you didn’t even know existed, and you’ll get those reported in Search Console in the crawl errors report. I’ve heard this report called the “go home Google, you’re drunk” report, because you’ll see pages and URLs listed in this report that do not exist anywhere on the web. You’ve never put them up, you’ve never published them, you don’t know where they came from.

Brett Snyder: There are a number of different crawl errors here. This report will give you server errors: where are you having errors with your server? That’s actually something we dealt with recently at Knucklepuck. We found that there were a number of server errors being generated for a client of ours. As we would go in there, we didn’t see any issues, because we could go through the site and the page would load. But because they were driving so much traffic, the server was starting to get overloaded and creating errors. That resulted in a decrease in traffic and a decrease in conversions. Being able to go into Search Console and say, hey, there’s actually something going on with your server, with load balancing or anything else in terms of the amount of information this server is being asked to process, it is starting to throttle. These server errors are extremely valuable. I’m talking about your 503 errors, your 500 errors. These are the ones that will really let you know whether something needs to be done from an infrastructure standpoint to make sure that your site can continue to provide information to users at the scale that they are asking for it.
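As a quick way to spot-check URLs flagged in the crawl errors report, here is a minimal Python sketch (the URLs are placeholders) that prints the HTTP status code each one returns right now, so you can tell 5xx infrastructure problems apart from plain 404s:

    # Spot-check HTTP status codes for URLs flagged in the crawl errors report
    import requests

    urls = [
        "https://www.example.com/",
        "https://www.example.com/some-flagged-page",
    ]

    for url in urls:
        try:
            response = requests.get(url, timeout=10, allow_redirects=False)
            # 5xx codes point to server/infrastructure issues, 404s to missing pages
            print(url, response.status_code)
        except requests.RequestException as error:
            print(url, "request failed:", error)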

Nate Shivar: Yeah, I think that goes back to the theme that we’ve touched on already of digging. Google is giving you this data; it’s up to you to dig, put on the detective hat, and investigate why these are showing up. The crawl errors report, like you said, lists them by error type. It also lists them by priority. Google is going to say, hey, this looks like a priority to us based on the URL and based on the links coming into it. You should basically address these from number one moving downwards.

Brett Snyder: It will also tell you when the error was identified; it gives the date. So when you see an error, there is an option to mark it as fixed. If you fix these things you can actually go in there and say, hey Google, this has been fixed. Again, remember this information comes from when Google crawls the site, so if they haven’t re-crawled the page since you addressed the issue, they may not notice. If it was a page that was linked to from only one page, and you removed that link, is Google going to crawl it again? Probably not. So you can go and mark these things as fixed. Looking at the date an error was identified is extremely valuable for understanding whether or not you’ve implemented a solution since the time it was picked up.

Nate Shivar: The last thing I’ll touch on with crawl errors is that this is a report where you can always dig deeper. I used Webmaster Tools, Search Console, for an embarrassingly long time before I figured out that you can click on the link. I know this sounds like a newbie mistake, but I’ve seen new SEOs who don’t know that you can click on the URL and Google will tell you where it found that link from and what that URL is linked up to. You can click and use it to quickly diagnose where these issues are.

Brett Snyder: To give an example of that: if you go in there and say, hey, this page doesn’t exist, I don’t know where this is, and then you see, oh, it’s being linked to from an old, deprecated link structure. Okay, obviously Google hasn’t re-crawled the original link because it doesn’t exist anymore. There were no external links to it, and there are no internal links because the site’s structure has been updated. That gives you a way to qualify these errors in a way that makes sense for your particular circumstances.

Nate Shivar: I like that. Let’s talk about the Fetch as Google tool. This is basically your shortcut to diagnosing on-page script issues, and it’s your shortcut to getting into the Google index. This is a way you can get your page directly into Google’s index without having to wait for Googlebot to come back and crawl.

Brett Snyder: For people who are newer to SEO, back 10-15 years ago, wow, even 10 years ago, SEO companies would sell services where they would submit your site to the index. That was one of the core services that they would offer. That was basically the precursor to this Fetch as Google tool, because if Google hadn’t found your website, if there were no links to it, these agencies would actually say, we have to submit your site to Google’s index. Now you can use this Fetch as Google tool to say, I want to see how Google is crawling it, and there is still the option to submit to the index. One of my favorite things to do.

I’m very upfront with our clients about this. Listen, if we’ve made changes to the site, Google is going to find them in a couple of days. They’re going to naturally crawl the site. This is not something where you have to submit to Google for them to recognize the changes. But if you’ve made changes to a website, you’ve updated internal link structures, you’ve updated content, you can fetch as Google and submit, and they come back and update the cached version of your site very, very quickly. I’ve seen it happen in as little as an hour, where you get the new cached version into the index. That way you’re not having to wait three or four days for them to re-crawl and pick up your changes. You get to basically go right to the front of the line and say, I need you to crawl and resubmit this page because there are changes to it. I want you to pick up the changes to this page and I want you to pick them up immediately.

Nate Shivar: Yeah, and the other thing that Fetch as Google can do, especially now, I guess it was a couple of years ago that Googlebot was updated so it could process JavaScript and CSS, is render your page as a user would see it. The Fetch as Google tool is where you can see if Googlebot is loading the page as it exists. If you’re blocking any JavaScript, or you’re blocking any CSS, so that Googlebot doesn’t see the page as a user sees it, this is where you will find out about those issues and where you can fix them, because Google will give you a list of scripts and CSS files that it was able to find but was not able to access and use to render the page. It will give each one a priority, whether or not Googlebot really needs it and whether or not this is something you should go and fix now. A lot of times, I know on older WordPress installs this is a big issue, where the robots.txt will be blocking the folder that holds the JavaScript. If you have a slider or you have any text behind JavaScript, Googlebot won’t be able to access it. Not because Googlebot can’t process JavaScript, but because it’s blocked by your robots file. This is where you’re going to find those kinds of errors.
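As a hypothetical example of what Nate describes, an old-style WordPress robots.txt might block the directories that hold scripts and styles; one way to fix it is to stop disallowing them, or to explicitly allow the asset folders Googlebot needs to render the page:

    # Problematic old-style WordPress robots.txt: hides JS/CSS from Googlebot
    User-agent: *
    Disallow: /wp-includes/
    Disallow: /wp-content/themes/

    # One possible fix: let crawlers fetch the assets needed to render pages
    User-agent: *
    Allow: /wp-includes/js/
    Allow: /wp-content/themes/
    Disallow: /wp-admin/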

Brett Snyder: I’ve had issues with that. We’ve had clients who’ve had significant problems. We finally diagnosed the error when we realized that we had implemented a navigation menu that incorporated at least some degree of JavaScript, but we were blocking all of that. Essentially we said, hey, we’re going to implement this much more robust navigation menu, we flipped the switch, and traffic went down. Even though we were seeing the navigation menu, from Google’s perspective it was completely stripped away, because we were blocking their access to those elements. That had a significant impact on the traffic.

The last thing: the default setting for Fetch as Google is to fetch the desktop version. Something that I absolutely recommend every single person do, whether you have 5% mobile traffic or whether you’re looking at 50-60% of your traffic coming in as mobile, is that you can fetch as Google, you can use the same rendering tools that Nate was just talking about, but you can do it with Google’s mobile crawler. This gives you an understanding if you think, oh, our site is responsive, or for anybody last April who saw that Google made a big change to the way they process mobile. The Fetch as Google tool does allow you to crawl the page with Google’s mobile crawler. The default is desktop, but there’s a drop-down to select the crawler that you want to fetch as.

You want to absolutely, 100%, hit pause right now and go check this out: you want to go in there and see how your mobile site is being crawled as well. As most people understand at this point, mobile traffic and mobile engagement is skyrocketing. We’ve got clients that are driving 100,000 visits a month with more than 50% of the traffic coming in on a mobile device. It is essential that you confirm that Google is crawling your mobile site effectively, especially if you’re moving into that space or you’ve just done a re-brand. It lets you understand how things are being rendered. Please, please, please, look at not just the default Fetch as Google, but definitely the mobile smartphone crawler as well.

Nate Shivar: The next tool in this technical SEO section is the robots.txt tester tool. This tool does exactly what it says. It looks at your robots file, which again is the file that all good bots go and check to see what parts of the site they can and can’t crawl. This is where you’re going to find out if you’re blocking pages or directories or resources that you might not want to block, or that might be having additional consequences you might not realize. This is where you go and check that your implementation is correct. This is also where you can troubleshoot before you put up a new implementation. Say you’re running a large eCommerce site and you want to block certain account pages or sale pages, whatever you want to block bots from; you can go and test that out to make sure you’re not going to accidentally block another whole section of the site that you do want Googlebot to be able to access.

Brett Snyder: We say you can test this. If you actually go into the dashboard for the robots.txt tester, you can go in there and change the robots.txt. It won’t change the actual file, but you can edit what Google is using as the robots.txt file, and then drop in sections of your site to see if they are accessible. When you do that, if a URL is restricted, Google will actually highlight the line in your robots.txt file that is causing that page or that resource to be excluded. You can go in there and basically use your robots.txt as a draft. You can update it; it never pushes to your live site, and it doesn’t actually affect how Google crawls it unless you update the file itself. It’s almost like a staging server for manipulating that robots.txt file and confirming, as Nate talked about, the consequences of that change. You can confirm what type of consequences are going to result from making certain changes to the robots.txt.
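For example, on a hypothetical store you could paste a draft like the one below into the tester, then test URLs such as /account/ or /category/shoes/ to confirm which are blocked and which stay crawlable; if a test URL is blocked, the tester highlights the matching Disallow line:

    User-agent: *
    Disallow: /account/
    Disallow: /checkout/
    Disallow: /*?sessionid=
    # Category and product pages stay crawlable because nothing above matches them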

Nate Shivar: The next tool is the URL parameters tool. Outside of professional SEOs and webmasters of large sites, not many people use this tool, mainly because it comes with a warning. If you actually click over to the tool you’ll see a warning that says: use this feature only if you’re sure how parameters work; incorrectly excluding URLs could result in many pages disappearing from search. The reason that shows up is because this is really only useful for certain types of sites. Usually eCommerce sites, but really any large site where you’re having to deal with a crawl budget and crawl rate, and you want to help Google understand what parameters are doing what, whether they are sorting, filtering, etc., to help address duplicate content.

Brett Snyder: Sure, and to give more of an example there, let’s go through the actual specifics. For each parameter, the first thing Google asks you is: no, it does not affect page content, it just tracks usage; or yes, it changes, reorders, or narrows page content. Those are the two options available for you as the first step. Now if you say it doesn’t affect page content and only tracks usage, what Google is going to do is crawl one representative URL. They’re basically going to ignore that parameter and say this is just to track usage. It’s exactly the same page; the parameter is just a way of cookie-ing the user. In that case, they’re only going to crawl one representative URL.

Now, what Nate’s talking about here: if you select changes, reorders, or narrows page content, then it gives you an additional drop-down that provides some other options, some other ways to annotate these parameters. Then you can give Google direction as to what they should do with those parameters. You can go in there and say yes, this sorts, or narrows, specifies, translates, or paginates. There is even the option to say other, for one that doesn’t fit any of that. Once you select that, you are able to tell Google whether you want them to decide (which is the default setting), to crawl every URL, to only crawl URLs with certain criteria that you establish, or to crawl none of them. We just wanted to make sure that’s really clear, because again it is an intimidating aspect of Google Search Console. It comes with that warning saying, hey, pages are going to disappear from search. If you are really conscious of the way that you approach this, and you know exactly what your parameters do, if it’s a filtering or a sorting parameter, you can annotate it as such. It allows Google to prioritize the way they crawl these parameters and understand the impact that these parameters have on how your page content is presented. See the illustration below.
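As a hypothetical illustration for a store at example.com, the parameter annotations might break down like this:

    example.com/shoes?sort=price        -> Yes: sorts (reorders the same content)
    example.com/shoes?color=red         -> Yes: narrows (filters the content shown)
    example.com/shoes?page=2            -> Yes: paginates
    example.com/shoes?sessionid=abc123  -> No: doesn't affect page content (tracks usage)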

Nate Shivar: Yeah, and just to bring it full circle, this is the tool you would go to if, say, you’re looking at the crawl rate report and you’re seeing a lot of pages being crawled without many kilobytes being downloaded. Then if you go and do some searches for your site in Google search and you’re seeing a lot of the same pages being repeated with the same parameters, that could be a duplicate content issue. This is the type of tool you would use to help solve that issue.

Brett Snyder: Lastly, talking through some of these technical things, the last thing that we want to touch on is XML sitemaps. You’ve heard us talk about XML sitemaps all the time. Really this is just a file where you are able to prioritize how Google crawls through your site. As a general recommendation, unless your site is 500,000 pages, for your average site in the 100 to 5,000 page range, put in the pages that are going to be the most important, the ones that you expect to rank. You don’t have to put your privacy policy or any of those kinds of superfluous pages in there. The XML sitemap allows you to tell Google which pages they should crawl, and you can set priorities.

For example, your home page would be priority 1. Your top navigation pages may be priority 0.9. Your old legacy archived news pages, maybe those are lower priorities. Nate, I’m sure you can speak to this as well: the way that people use these XML sitemaps incorrectly is that they put everything at the same priority. To quote one of my favorite movies, “When everyone’s incredible, nobody is.” Using the XML sitemap to establish legitimate priorities lets Google recognize which pages you feel are going to be most important to your users, the ones that you want to be front and center. You can submit this XML sitemap directly to Google Search Console and it will tell you if there are errors in the sitemap. It will tell you if things are being picked up inappropriately.
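A minimal sketch of what that might look like, with placeholder URLs and the kind of priority values Brett describes:

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>https://www.example.com/</loc>
        <priority>1.0</priority>
      </url>
      <url>
        <loc>https://www.example.com/services/</loc>
        <priority>0.9</priority>
      </url>
      <url>
        <loc>https://www.example.com/news/2012/old-announcement/</loc>
        <priority>0.3</priority>
      </url>
    </urlset>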

Nate Shivar: Yeah, I’ve always thought of XML sitemaps as sort of the road map that you give Google when they show up at your site. Google isn’t going to exclusively use the XML sitemap to crawl and index your pages. It’s still going to have to drive the roads, so to speak, or click through the links. The XML sitemap is sort of your guide: these are the pages that I really want you to look at, this is how often I want you to look at them, take this into consideration in your crawl. That’s kind of what Google is going to use it for. I’ve seen a lot of people who will bury some orphan pages way down on their site and never link to them internally, then rely solely on the XML sitemap to get those pages crawled frequently. That’s not an ideal solution. The ideal solution is to still integrate those pages into your site if they are that important, and then use the XML sitemap like it says, as a guide.

Brett Snyder: Right, it’s there to support your existing site structure, not to replace it. The reason that we focus so heavily on these technical items is that the foundation of your website is really integral to how Google, and other search engines as well, are going to be able to process the information that you put on your site. Think about it like the foundation of your house. If you’re going to invest in this beautiful home, you know, a master bedroom with all the fixtures, a man cave with the big flat-screen TV that covers the entire wall, you’re going to create this beautiful, magnificent home, but if it’s on a shoddy foundation, then you’re really just building a house of cards.

When we talk about the technical infrastructure, it’s something that I know is near and dear to Nate and myself. It’s something that we’ve really focused in on in terms of our approach to SEO. Google Search Console provides you an extraordinary amount of information. From there it really enhances how your on-page factors are going to be evaluated. It ensures that when you have links pointing to your site, they are really going to provide a value add to that domain authority and increase the likelihood that you’re going to rank for your targeted phrases. It all starts with that strong technical foundation, and Google Search Console provides you with an immense amount of information to build that effectively.

That said, we are actually going to break this episode up into two separate sections. In the next episode, we’re going to talk a little bit more about the on-page factors: how to use the HTML improvements and the search analytics functions. We’re going to talk about how to look at links to your site to recognize potential penalties or potential negative SEO associated with your domain. We’re going to give you a chance to go back and really dig into some of the technical elements of Search Console. Then in our next episode, once we have that technical foundation in place, we’re going to walk you through how to leverage the tools from an on-page and an off-page perspective to make sure that you are putting yourself in the best possible position to capture those top rankings for your marquee keyword phrases.

Nate Shivar: All right, well you can find previous episodes, links to everything we mentioned, and our contact information at bamboochalupa.com. If you don’t want to miss another episode, including the upcoming part 2 of this series, go subscribe to the podcast in iTunes or your favorite podcast app. While you’re there, please do leave a comment or rating. We learn a lot from your feedback, and your ratings help others discover the show. So for Brett Snyder, I’m Nate Shivar, thank you for listening.



