SEO for Web Migrations: Recovering your Organic Traffic after a Web Migration Gone Wrong – #CrawlingMondays 6th Episode
In the 6th episode of Crawling Mondays, Aleyda shares how to recover your website's organic search rankings and traffic after losing them in a web migration done without SEO support, going through the most important validations to make in order to recover your organic search rankings.
Hello there. I hope that you're having a wonderful start of the week, a great Monday. Today I would like to share with you one of the least ideal potential scenarios, and how to handle it to hopefully turn it into something positive, right? It is about web migrations gone wrong. Because maybe, potentially, very likely, they didn't have the correct SEO support and validation right from the start. And you might tell me, Aleyda, why are you telling me this?
There are so many guides out there sharing the importance of taking SEO into consideration right from the start — right before the start, even, right? It caught my attention again. This topic comes back because I was actually taking a look at the SISTRIX IndexWatch with the top SEO losers of 2018. I saw that there was a whole section there, as you can see, of websites that lost tons of visibility after a web migration. Not only in the UK edition, but also in the Spanish one. One of them is actually a government site. Ouch. So, in today's episode, I want to cover this very particular scenario.
What can we do if a website didn't take SEO into consideration right from the start? Our clients somehow forgot; they should have told us about it. They come afterwards and say, hey, could you help me? How can I recover my lost [inaudible 00:01:49] traffic? Let's take a look.
One of the main causes of why websites lose traffic after a migration, and why the migration goes wrong, is that they haven't actually taken into consideration all of their top, meaningful pages to be migrated to their new locations. Somehow, they have missed those pages that were ranking very well, or bringing lots of traffic, or being linked a lot by many other websites. So, the first step should be to do a little bit of forensic work to identify which were all those old pages driving most of the organic search traffic, rankings and links — those really meaningful pages that the website used to have — and to check if all of those pages have actually been taken into consideration in the migration and have been migrated. We'll use the data to validate all of that.
But first, let's gather all of the data. Ideally, we should start with our own internal data, with Google Analytics and Google Search Console. Go to the analytics tool and identify those pages that were bringing more traffic in general, and organic traffic in particular, before the migration happened. Hopefully we will still have access to that historical data. We can see here, per landing page, which were the particularly important pages bringing more traffic at that point. The same with Google Search Console, right? Additionally, it would be ideal to complement this information with rank tracking and competitive analysis tools. Like, for example, SEMrush.
I love it because — I don't know if you have seen it — it not only allows us to understand the position changes over time for any meaningful keywords for which it can find our website's pages ranking. It actually allows us to change the date and take a look at historical data. We can also select the time here and see which were the top pages bringing in organic traffic at that point. Another tool that is very useful, and visual too — the one that I prefer from a visual perspective to double check how the migration happened and when it happened — is SISTRIX. Because we can visualize it for any type of migration. Not only from one domain to another, but also a protocol change, HTTP to HTTPS, or a specific path change — if it is a migration from a subdirectory to a subdomain, for example. This allows us to visualize the change in a very flexible way.
So, we can check the before and after, and also the URL changes — how these URLs were actually migrated, or moved, at that point. Or at least the information that the tool tracked, right? We can check the rankings before the migration happened in July of 2016, and the ones that the website has right now, to see how the URLs changed and the traffic dropped, and how they were actually moved. What is particularly important here is that we can gather all these important former pages that were bringing lots of rankings and meaningful traffic to the website before the migration happened.
Not only from a rankings perspective. I have to say, it is also always good to verify from an external link perspective: those pages that were attracting more external links, more popularity, more referred users from other sites, right? So, again, we can use link analysis tools. For example, [inaudible 00:06:07]. Or, in this particular case, CognitiveSEO. It also uses data from Majestic, so it's very, very complete. It has these very, very nice visualizations, so I can see which have been the top linked pages of this website in the past, and which have been identified as broken over time — the ones that were returning a 404 HTTP status, in this particular case.
But then again, this is what is most important to me: to bring this URL data, to double check if these important pages that at some point were highly linked are redirecting somewhere — that they have been taken into consideration in the migration.
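As a rough sketch of this forensic step, the traffic export and the link export can be merged into one de-duplicated master list of former URLs to validate. The column names ("url", "sessions", "linking_domains") and the thresholds are assumptions, not from any specific tool's export format — adjust them to whatever your analytics and link tools actually produce.

```python
# Sketch: consolidate "top former URLs" from a traffic export and a link
# export into one prioritized, de-duplicated list to validate after the
# migration. Row shapes and thresholds are illustrative assumptions.

def build_master_list(traffic_rows, link_rows, min_sessions=100, min_domains=5):
    """Merge top-traffic and top-linked URLs into one prioritized list."""
    keep = {}
    for row in traffic_rows:
        if row["sessions"] >= min_sessions:
            keep.setdefault(row["url"], set()).add("traffic")
    for row in link_rows:
        if row["linking_domains"] >= min_domains:
            keep.setdefault(row["url"], set()).add("links")
    # URLs that matter for both reasons come first
    return sorted(keep, key=lambda u: (-len(keep[u]), u))

traffic = [{"url": "/guides/seo", "sessions": 900},
           {"url": "/blog/news", "sessions": 40}]
links = [{"url": "/guides/seo", "linking_domains": 30},
         {"url": "/tools", "linking_domains": 12}]

master = build_master_list(traffic, links)
# "/guides/seo" qualifies on both criteria, so it is prioritized first
```

Pages that were both high-traffic and highly linked float to the top, which matches the prioritization described above.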
After you have gathered together all of those meaningful, important, top-traffic, top-ranking former URLs that should have been taken into consideration, it's time to validate and verify if they have actually been migrated correctly, as they should have been right from the start. You want to double check. You want to validate. You want to verify. Crawl this list of important URLs and make sure that the redirects are correct: that these are permanent redirects, 301 redirects; that there are no redirect chains or redirect loops; that they actually go to the final, relevant destination directly; and that they are going to a relevant, meaningful new page, right?
So, you can do this by taking the list of URLs that you have gathered and put together, and doing a list crawl with whatever crawler. Most crawlers — all crawlers, I think — support list crawls. I am showing Screaming Frog here, but you can also use Sitebulb, DeepCrawl, Botify, or OnCrawl. You can import the URLs and use the List crawl mode to check them. In particular, I love this report that Screaming Frog has, which is particularly useful in these situations: the Redirect & Canonical Chains report. So, for example, I crawled the list of old URLs of GameFAQs, which we saw was one of the sites that lost organic search visibility when they moved to a new domain, right? As you can see, all of them — or most of them — have a redirect going on.
What we are going to do here to double check where they are going — if they are actually going directly to a proper new version of these URLs in the new location — is to select the Redirect & Canonical Chains report. And we will obtain this. Yeah, this is Excel, but we can actually filter to see what type of redirect — for instance, an HTTP redirect — is going on, and whether it is implemented correctly; how many redirects have been chained; and if there are any redirect loops going on. You can see it's false here, thankfully. If there are any temporary redirects in between, for example — this is also very, very useful to identify problems. The initial URL here, of course, is non-indexable. It is redirected. Its HTTP status code is a 301.
Where is it going, right? We can see that redirect number one is this other URL — it goes from here to there. So this matches the explanation before, right? We can see that this is the final URL destination, and that there are no more jumps in the redirect chain. If there were redirect chains — a former URL going to another one that is again redirected to another one that is maybe going to a final URL destination showing a faulty [inaudible 00:10:31] HTTP status — that would be shown here.
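The checks this report performs can be sketched as a small classifier over one URL's redirect chain. The input shape — a list of (status_code, url) hops, old URL first, final destination last — is an assumption about how you'd flatten a crawler export, not Screaming Frog's actual format.

```python
# Sketch: flag the redirect problems described above (temporary 302s,
# multi-hop chains, loops, dead final destinations) for one migrated URL.
# The hop format is an assumed simplification of a crawler export.

def audit_chain(hops):
    """Return a list of issues found in one redirect chain."""
    issues = []
    statuses = [status for status, _ in hops[:-1]]  # every hop except the final target
    if any(status == 302 for status in statuses):
        issues.append("temporary redirect (302) in chain")
    if len(hops) > 2:
        issues.append("redirect chain (%d hops)" % (len(hops) - 1))
    urls = [url for _, url in hops]
    if len(set(urls)) < len(urls):
        issues.append("redirect loop")
    if hops[-1][0] != 200:
        issues.append("final URL returns %d" % hops[-1][0])
    return issues

# A clean migration: one 301 straight to a live page
ok = audit_chain([(301, "http://old.example/page"),
                  (200, "https://new.example/page")])

# A problematic one: a 302, an extra hop, and a dead destination
bad = audit_chain([(302, "http://old.example/page"),
                   (301, "https://new.example/tmp"),
                   (404, "https://new.example/gone")])
```

A clean one-hop 301 to a live page produces no issues; the second example surfaces all three of the problems the report is meant to catch.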
After we have made sure that all the URLs have been correctly migrated to their new locations, the next step is to validate if the new URLs where they have been migrated are correctly optimized: if they are still meaningful, relevant, indexable and crawlable, and actually able to maintain the relevance and popularity towards the same topics and terms for which the old URLs were ranking in the first place, right? So, I would do two things in this situation. I would run a normal crawl, a typical crawl, of the new website in its new location. And I would probably also do another list crawl of the destinations — the URL destinations of those old pages. Why? Because I want to double check, verify, prioritize, and make sure that the destinations of these old pages — these new URL versions — are [inaudible 00:11:45].
So they are actually manually revised and checked, to see if the title tag is there; if it is meaningful and still relevant — similar to the old one, or even better, or worse — to take the appropriate action. That the descriptions, headings and metadata are correct, right? But beyond that, whether there is any blockage of these URLs. And then, of course, do the same for the rest with the normal crawl again, to better understand also how these particular pages are internally linked in the new web organization.
To crawl again and check what I mentioned before: titles, meta descriptions, the canonicalisation — whether they are self-canonicalised or canonicalised to other URLs, and why that's so, right? To double check if they have any noindex going on, and why, right? Check if any of those really [inaudible 00:12:52] pages, high-traffic pages, are somehow redirecting to any of these. Why, again? Check if the titles are the correct ones, the descriptions are the correct ones, the headings are the correct ones. Ideally, we should not only crawl and understand the configuration. Maybe the configuration, the optimization of these new pages is still correct, still relevant, but we should find a way — there should be a way — to compare versus the old ones, right? Because maybe we see that the title is relevant. But is the [inaudible 00:13:24] relevant? The content — the content is still there, it's relevant, it's not duplicated, it's not thin content.
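The on-page signals listed above (title, meta robots noindex, canonical) can be pulled out of a page's raw HTML with nothing but the standard library. In a real audit a crawler fetches the HTML; here it is inlined so the parsing step stands alone.

```python
# Sketch: extract the SEO-critical on-page signals worth re-checking on
# each migrated destination URL, using only Python's stdlib HTML parser.

from html.parser import HTMLParser

class OnPageCheck(HTMLParser):
    """Collects title text, a noindex flag, and the canonical URL."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.noindex = False
        self.canonical = None
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta" and a.get("name", "").lower() == "robots":
            self.noindex = "noindex" in a.get("content", "").lower()
        elif tag == "link" and a.get("rel", "").lower() == "canonical":
            self.canonical = a.get("href")

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

html = """<html><head>
<title>Game Guides</title>
<meta name="robots" content="noindex,follow">
<link rel="canonical" href="https://new.example/guides">
</head><body></body></html>"""

check = OnPageCheck()
check.feed(html)
# check.noindex is True here — on a migrated top page, a clear red flag
```

A noindex or an off-page canonical on the destination of a formerly high-traffic URL is exactly the kind of blockage the crawl should surface.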
So, what's going on? Why are they not ranking as well? Well, maybe those old pages were even better. Even more meaningful, more relevant, more popular. Better linked, not only externally but also internally, right? So, we should ideally be able to compare. Here is when we should ask the client: look, is there any way to enable a testing environment, a development environment — closed, of course — so we can check if the old website was even better than this one, right? To better understand the gap, the maybe still existing gap, between the old version and the new one. Ideally, that should be a possibility, so we can double check versus the older site in a versatile way: we can crawl the older site again and compare the two crawls. I love how Sitebulb allows us to easily compare two crawls. And also with DeepCrawl, we have the possibility to check the gap between two different crawls, right?
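The two-crawl comparison can be sketched as a diff over the old crawl, the new crawl, and the redirect map connecting them. The crawl shape (a dict of url -> {"title", "inlinks"}) is an assumption; real crawler exports would need to be loaded into it, and more fields (descriptions, headings, word counts) would be compared the same way.

```python
# Sketch: surface the gap between the old and the new crawl — pages with
# no migrated version, changed titles, or fewer internal links than before.

def crawl_gap(old_crawl, new_crawl, redirect_map):
    """Flag migrated pages whose new version lost relevance or internal links."""
    findings = []
    for old_url, old_page in old_crawl.items():
        new_url = redirect_map.get(old_url)
        if new_url is None or new_url not in new_crawl:
            findings.append((old_url, "no migrated version found"))
            continue
        new_page = new_crawl[new_url]
        if new_page["title"] != old_page["title"]:
            findings.append((old_url, "title changed"))
        if new_page["inlinks"] < old_page["inlinks"]:
            findings.append((old_url, "fewer internal links than before"))
    return findings

old_crawl = {
    "/guides/seo": {"title": "SEO Guides", "inlinks": 50},
    "/cheats": {"title": "Cheats", "inlinks": 80},
}
new_crawl = {"/en/guides/seo": {"title": "SEO Guides", "inlinks": 12}}
redirect_map = {"/guides/seo": "/en/guides/seo"}

findings = crawl_gap(old_crawl, new_crawl, redirect_map)
```

Here the migrated guide kept its title but lost most of its internal links, and the cheats section was never migrated at all — the two failure modes discussed above.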
If that is not possible — if there is somehow no access to this old version of the website — we can check if it is still shown somewhere, right? Google's cache, for example. Here we can see how, in the case of GameFAQs, it seems that some of the old URLs are still shown in the index. But then, when we double check, we can see that the actual cache is for the new URL version, right? Although it might seem that they are still there, the last snapshot is already showing the new URL version. We can always go and try to work around this with the Internet Archive.
I can go to January here and see how the site looked back then — the old version of the site. So, yeah, this is it, before the migration. It's January, right? It looked like this. So we can compare old pages versus their new versions, and actually see how they used to look in the past and how they look now. Again: tag configuration, content relevance, how meaningful the content is. Also the gap between the old location and the new one, especially if it is a cross-domain type of migration. Because, again, this plays an important role, especially for very competitive queries.
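The Internet Archive lookup described here can also be scripted against the Wayback Machine's public "availability" endpoint, which returns the closest archived snapshot to a given date as JSON. The network call is shown only as a comment; the request-building and response-parsing parts are pure, so they work offline, and the sample response below is a hand-made assumption of the API's documented shape.

```python
# Sketch: find the pre-migration snapshot of a page via the Wayback
# Machine availability API (https://archive.org/wayback/available).

from urllib.parse import urlencode

API = "https://archive.org/wayback/available"

def availability_url(page_url, timestamp):
    """Build the lookup URL; e.g. for January 2016 pass timestamp='20160101'."""
    return API + "?" + urlencode({"url": page_url, "timestamp": timestamp})

def closest_snapshot(response):
    """Pull the closest archived snapshot URL out of the API's JSON response."""
    snap = response.get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap else None

# Live usage (requires network), roughly:
#   import json
#   from urllib.request import urlopen
#   with urlopen(availability_url("gamefaqs.com", "20160101")) as resp:
#       print(closest_snapshot(json.load(resp)))

# Offline example with a hand-made response in the API's documented shape:
sample = {"archived_snapshots": {"closest": {
    "url": "https://web.archive.org/web/20160101000000/http://www.gamefaqs.com/",
    "timestamp": "20160101000000"}}}
snapshot = closest_snapshot(sample)
```

This gives you a programmatic way to fetch "how it looked in January" for every URL on the master list, instead of browsing the archive page by page.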
So, for example, take a look at the amount of linking domains that this old website had — the number of linking domains, the number of backlinks, the influence, the popularity of this old domain. Then look at the new one. Again, it's also pretty good. There's not an important gap. But I'm pretty sure that there should be some important former links still pointing to the old location, especially if it is a very, very old one — a domain with really good popularity that has been there for a long time. We can see how it was already there five years ago.
So, it is important to understand if there's a gap — an important link popularity gap — between the former domain and the new one. Maybe that is even the reason why it is not performing as well for top queries: because the new domain, the new destination, is still not as popular as the old one. And if that's the reason — and of course, there is no way back, it has already happened, right? — the next step is to better understand which are those top links that were pointing to the old domain, the old location, the old URLs, and how you can reclaim them. How you can recover these links so they go to the new location, especially if you identify that they are going to pages that have not been correctly redirected.
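The reclamation step above amounts to a prioritized queue: old URLs that still attract links but whose redirect target is missing or broken, most-linked first. Both input shapes here are assumptions about how you'd summarize a link tool export and a redirect crawl.

```python
# Sketch: prioritize which lost links to reclaim first. backlinks maps an
# old URL to its number of linking domains; redirect_status maps each old
# URL to the HTTP status its redirect target returns (None = no redirect).

def reclaim_queue(backlinks, redirect_status):
    """Old URLs whose link equity is being wasted, most-linked first."""
    wasted = [(domains, url) for url, domains in backlinks.items()
              if redirect_status.get(url) != 200]
    return [url for domains, url in sorted(wasted, reverse=True)]

backlinks = {"/old/guide": 120, "/old/faq": 15, "/old/tools": 60}
redirect_status = {"/old/guide": 200, "/old/faq": None, "/old/tools": 404}

queue = reclaim_queue(backlinks, redirect_status)
# "/old/guide" redirects to a live page, so it drops off the queue entirely
```

Everything that resolves to a live page via its redirect drops off the queue; what remains is the outreach and redirect-fixing worklist, biggest link loss first.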
Of course, the first step should be to fix the redirects if you can, and make sure that the new location is as relevant and [inaudible 00:18:22]. If somehow you cannot do it, or even so you cannot recover as much, the next step should be, of course, to try to reclaim and recover the popularity that was lost. Hopefully, by doing all this, we'll be able to recover those lost rankings, that lost visibility and traffic. I would say that by doing this, we'll be able to do a much better job of recovering and fixing what wasn't done correctly from the beginning, which would have been the ideal scenario, of course. I hope that this was handy for you.
Ideally, after this, I would say that it is the right time to set up SEO alerts, in case they haven't been implemented before, right? To know when pages change, when pages are migrated, when redirects are placed. So nobody can say, oh, I wasn't alerted about it, did you know? There's actually a previous episode of Crawling Mondays — the second one that I did, or even the first one — that you can see here about that specific topic: being alerted with [inaudible 00:19:48] when anything on the website changes, including when a migration happens without you even being told about it, right? [inaudible 00:20:00], a couple of tools that you should definitely check out. Check out the video.
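At its core, the alerting idea is change detection: fingerprint the SEO-critical fields of each monitored page and compare against a stored baseline on every check. Dedicated monitoring tools do this at scale with crawling and notifications; this sketch only illustrates the comparison step, with a plain dict standing in for a fetched page snapshot.

```python
# Sketch: the comparison step behind "SEO alerts" — hash the fields that
# matter (status, title, canonical, robots) and flag any drift from the
# stored baseline. The page-snapshot dict shape is an assumption.

import hashlib

def fingerprint(page):
    """Hash the SEO-critical fields of a page snapshot."""
    key = "|".join(str(page[k]) for k in ("status", "title", "canonical", "robots"))
    return hashlib.sha256(key.encode()).hexdigest()

def changed(baseline, current):
    """True if any monitored field differs from the baseline."""
    return fingerprint(baseline) != fingerprint(current)

baseline = {"status": 200, "title": "Guides", "canonical": "/guides", "robots": "index"}
drifted = dict(baseline, robots="noindex")

# changed(baseline, baseline) -> False; changed(baseline, drifted) -> True
```

A scheduled job that re-fetches each top page, rebuilds the snapshot, and alerts when `changed` fires would have caught the unannounced migration described above on day one.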
Thank you very much for being here and for watching this; I hope that you have enjoyed it. Look for my presentation about this topic — not only about this specific scenario, but from a broader perspective of complex migrations from an SEO perspective, and how to win them: how to take the opportunity to make the most out of them and actually improve your organic search rankings and traffic. Looking forward to sharing with you again in the next Crawling Mondays! Thank you very much. Bye bye.