A website migration is not just about technology, but also about maintaining the visibility that has grown in search engines and increasingly also in LLMs. Even small errors in redirects, indexing or performance can cause ranking losses. My article therefore highlights key SEO hurdles during migrations and how these can be made manageable with clear processes, clean staging, stable hosting and consistent monitoring.
Why is a migration so sensitive from an SEO perspective?
From an SEO perspective, the migration of a website is not a purely technical infrastructure change, but an intervention in a system already interpreted by search engines. Over time, a structure of URLs, internal links, technical signals and external references is created, which together characterise the visibility of a website.
If a website relaunch changes central elements of this structure, search engines and LLMs have to reprocess the assignment of URLs, internal links and relevance signals. Short-term fluctuations in rankings are not uncommon. However, it becomes problematic if this new assignment is not properly supported technically, for example due to missing 301 redirects, changed URL structures, inconsistent internal linking or incorrect indexing specifications.
From the point of view of search engines, a migration is therefore always a combination of URL reassignment, a new crawling phase and re-evaluation of the internal structure.
Grown URL structures and historical signals
In the search engine index, each URL acts as a carrier of cumulative ranking signals. These are generated over time by factors such as:
- incoming backlinks
- internal linking structures
- crawling and indexing history
- embedding of the page in the thematic and structural architecture of the website
These signals are not abstractly linked to “the content”, but to specific URLs. If the address of a page changes in the course of a relaunch, it must be clearly recognisable to search engines which new URL replaces the previous one.
In practice, this assignment takes place via 301 redirects, which signal the permanent relocation of a resource and enable the transfer of existing ranking signals. If redirects are missing, incomplete or lead via several intermediate stations, some of the authority built up can be lost.
It becomes particularly critical when structural changes are made, for example when permalink structures change or entire directory structures are reorganised. Without consistent URL mapping, Google loses some of the previous orientation within the website.
Migration as an intervention in an indexed system
A website migration usually affects several levels simultaneously, which are relevant from an SEO perspective. Typically, they change:
- the URL structure
- the internal navigation and link architecture
- technical signals such as canonical tags or indexing rules
- the underlying server or hosting infrastructure
If several of these changes come together, Google has to re-crawl large parts of the website and re-evaluate the relationships between URLs, internal links and ranking signals.
Extensive relaunch projects in particular therefore look like a partially new website from a search engine perspective. The more URL structures, internal linking and technical signals change, the greater the re-evaluation effort.
Migrations are therefore one of the most sensitive phases in the life cycle of a website. It is crucial that URLs, links and redirects are implemented consistently so that existing ranking signals are correctly transferred to new target pages.
The most common SEO hurdles in website migrations
The biggest SEO problems during a migration are rarely caused by a single error. In practice, several technical changes usually take effect at the same time. If they are not systematically prepared, search engines can no longer clearly establish the connection between existing URLs, content and ranking signals.
A typical example: the URL structure of a blog section changes from /blog/article-name to /insights/article-name. If the old URLs are not redirected via 301 and internal links are not updated, 404 errors occur, search engines lose the connection to the previous pages, and existing rankings can be lost.
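The /blog/ to /insights/ move described above can be prepared as a simple mapping script. A minimal sketch in Python (the paths and slugs are illustrative, not from a real project):

```python
# Sketch: build a 301 redirect mapping for a hypothetical move
# from /blog/<slug> to /insights/<slug>.

def build_redirect_map(old_paths):
    """Map each old blog URL path to its new /insights/ equivalent."""
    mapping = {}
    for old in old_paths:
        if old.startswith("/blog/"):
            mapping[old] = "/insights/" + old[len("/blog/"):]
    return mapping

old_urls = ["/blog/website-migration", "/blog/seo-checkliste"]
print(build_redirect_map(old_urls))
```

A table like this can then be translated into the redirect rules of your server or CMS, and it doubles as documentation for the mapping.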
Incorrect or incomplete 301 redirects
Redirects are the central mechanism for preserving ranking signals during a website move, and they are of course just as important for consistent user guidance. A 301 redirect signals to search engines that a URL has been permanently replaced by a new address and that existing signals should be transferred to the target page.
In practice, problems usually arise due to incomplete or inconsistent redirect logics.
Typical mistakes you should avoid:
- old URLs are forgotten in the mapping and return 404 errors after the move
- redirects lead to generic pages instead of thematically appropriate destinations
- several redirects in succession (redirect chains)
- redirects are only implemented after the go-live
- permanent changes are incorrectly implemented as 302 redirects
While a 301 redirect signals a permanent move, a 302 redirect stands for a temporary one. Search engines assume that the original URL will remain in place and generally do not transfer ranking signals to the new target page.
Another common problem is redirect chains, where a URL is forwarded via several intermediate stations. They often occur when new redirects are simply placed over existing redirects of the “old” website during the migration. Instead, redirect rules should always be consolidated. The goal: each old URL redirects directly to its final “new” target page.
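Consolidating chains can be done programmatically before the redirect rules are deployed. A minimal sketch that flattens a hypothetical rule set so every old URL points directly at its final target:

```python
def flatten_redirects(rules):
    """Collapse redirect chains so every old URL points directly
    at its final target (rules: {old_url: target_url})."""
    flat = {}
    for src in rules:
        target, seen = rules[src], {src}
        # follow the chain until we reach a URL that is not itself redirected
        # (the `seen` set guards against accidental redirect loops)
        while target in rules and target not in seen:
            seen.add(target)
            target = rules[target]
        flat[src] = target
    return flat

chain = {"/a": "/b", "/b": "/c"}
print(flatten_redirects(chain))  # both /a and /b now point directly to /c
```

Running old and new redirect rules through a step like this before go-live ensures each legacy URL answers with exactly one 301 hop.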
Another important note: redirects do not only affect SEO, of course. If the target URLs of landing pages change in the course of a relaunch, it is essential that ongoing paid search campaigns are adjusted at an early stage. Otherwise, your adverts will initially run on 301 redirects or, in the worst case, on 404 pages – with unnecessary losses in terms of user experience, conversion and budget efficiency.
Don’t forget backlinks
Faulty redirects are particularly critical for pages that have built up backlinks over the years. External links always refer to a specific URL. If this is changed or deleted, it must be clearly recognisable to search engines which new page it replaces.
If this assignment is missing, backlinks lead nowhere and part of the authority built up is lost. This not only affects traditional rankings, but increasingly also the source base for LLMs and generative search systems, which rely heavily on stable and consistent URL signals. Initial analyses also show that URLs with redirects are cited significantly less frequently by LLMs than in classic Google results, which makes stable, directly accessible URLs appear even more important in the context of LLM visibility.
Blanket redirects to the homepage are also problematic. Search engines often interpret such redirects as a soft 404, since the target offers no content equivalent to the original page.
Dealing with outdated content
Of course, you don’t have to redirect every URL during a migration. Larger websites in particular often have content without traffic, rankings or backlinks.
There are three possible strategies here:
- 301 redirect if content is still relevant or has backlinks
- consolidation, when several pieces of content are merged thematically
- removal (404 or 410) if content is permanently outdated
A 410 status code can be useful depending on the context, as it explicitly signals that a page has been deliberately removed.
Removing outdated content can be particularly useful for larger relaunch projects. Fewer indexed pages reduce the complexity of the migration. In addition, URLs without any real added value, traffic, rankings or backlinks can often be removed without negative SEO effects.
Unintentional changes to the URL structure
Changes to the URL structure are a frequent side effect of website relaunches. These become particularly critical when entire areas such as blog directories are affected.
These are caused, for example, by:
- new permalink rules
- changed category or directory structures
- new page types or templates
Even seemingly small changes can have an impact, for example when slugs, language paths or trailing slashes change. For search engines, each of these changes means that a previously known URL is suddenly accessible under a new address. Such changes are not fundamentally problematic, but they do increase the complexity of the move. Each new URL must be clearly assigned to an existing page.
The most important recommendations:
- Keep the existing URL structure as stable as possible and avoid unnecessary changes
- Document changes to URL patterns (e.g. directories, slugs or categories) at an early stage
- Systematically synchronise new URLs with existing pages before the move
The more stable the URL structure remains during a relaunch, the lower the re-evaluation effort for search engines.
Loss of internal link structures
In addition to external backlinks, internal linking also plays a central role in the evaluation of a website. Internal links control which pages search engines crawl and how authority is distributed within the domain.
However, this structure often changes during relaunch projects. New navigations, modified templates or merged content can lead to internal links being omitted or continuing to refer to old URLs.
If important pages receive fewer internal links as a result, this can have a direct impact on their visibility. Search engines crawl such pages less frequently and rate them as less relevant within the website structure.
The most important recommendations:
- Identify central internal link sources (navigation, footer, content links) before the move
- Systematically update internal links to new URLs
- Perform a crawl after the go-live to check internal link structures
It is also important that central pages remain accessible via several internal paths.
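Such a post-go-live check does not require a full crawler for a first pass. A minimal sketch using only Python's standard library that extracts the links from a page and flags those still pointing at an old path (the HTML snippet and the /blog/ prefix are illustrative):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href values of all <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

html = '<nav><a href="/insights/">Insights</a> <a href="/blog/old-post">Old</a></nav>'
collector = LinkCollector()
collector.feed(html)

# flag internal links that still point at the old /blog/ structure
stale = [link for link in collector.links if link.startswith("/blog/")]
print(stale)  # ['/blog/old-post']
```

A dedicated crawler such as Screaming Frog does the same at site scale, but a script like this is handy for spot checks on templates and navigation.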
In addition to structural factors such as URLs and internal linking, technical SEO signals also play a decisive role in the relaunch. Settings relating to indexing, canonicals or sitemaps in particular can determine how quickly search engines understand a new website structure.
Other critical technical SEO settings
Many SEO problems when moving websites are caused by technical settings from the development phase. In staging environments, mechanisms are often activated to prevent unfinished content from being indexed.
If you adopt such configurations after the go-live, this can have a direct impact on crawling and indexing. Four technical signals are particularly relevant here: noindex instructions, robots.txt rules, canonical tags and XML sitemaps.
Together they control:
- what content search engines are allowed to crawl
- which pages are indexed at all
- which URL is considered the authoritative version
- which pages are discovered in the first place
Errors in these areas can lead to pages not being crawled, indexed incorrectly or not being found at all.
Noindex: When pages don’t even make it into the index
During the development of a website, search engines are often prevented from indexing content globally. In WordPress, for example, this is done via corresponding settings in the CMS or via SEO plugins. Technically, a noindex tag is usually integrated into the page source code.
If you leave this setting active after the go-live, important pages can be completely excluded from the index. Particularly insidious: the website works flawlessly from a technical point of view, content is accessible, but it never appears in the search results.
A crawl quickly shows whether you have inadvertently noindexed pages that should actually be indexed.
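A first automated check for this is straightforward. A minimal sketch using Python's standard library that detects a robots meta tag containing noindex (the sample HTML is illustrative):

```python
from html.parser import HTMLParser

class RobotsMetaCheck(HTMLParser):
    """Detect a <meta name="robots" content="...noindex..."> tag."""
    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and (a.get("name") or "").lower() == "robots":
            if "noindex" in (a.get("content") or "").lower():
                self.noindex = True

page = '<html><head><meta name="robots" content="noindex, nofollow"></head></html>'
checker = RobotsMetaCheck()
checker.feed(page)
print(checker.noindex)  # True -> this page would be excluded from the index
```

Running a check like this against the key templates of the live site right after go-live catches the classic "staging noindex went live" mistake within minutes.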
robots.txt: When search engines are not allowed to crawl in the first place
While noindex controls indexing, robots.txt regulates which areas may be crawled at all. In development environments, entire directories or the entire website are often blocked. If such rules remain in place after the migration, search engines and LLMs will not even be able to retrieve central areas of your website, which can be particularly critical for resources such as CSS or JavaScript that are required for rendering.
Before going live, you should therefore make sure that the robots.txt only blocks the areas that you actually want to exclude.
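Python's standard library can simulate how crawlers interpret your rules. A minimal sketch comparing a leftover staging robots.txt with a production version (the rules and URLs are illustrative):

```python
from urllib.robotparser import RobotFileParser

# A leftover staging rule like this blocks the entire site:
staging_rules = ["User-agent: *", "Disallow: /"]
rp = RobotFileParser()
rp.parse(staging_rules)
print(rp.can_fetch("Googlebot", "https://example.com/insights/"))  # False

# The production file should only block what you actually want to exclude:
live_rules = ["User-agent: *", "Disallow: /internal/"]
rp2 = RobotFileParser()
rp2.parse(live_rules)
print(rp2.can_fetch("Googlebot", "https://example.com/insights/"))       # True
print(rp2.can_fetch("Googlebot", "https://example.com/internal/tools"))  # False
```

The same check can be run against the live robots.txt right after DNS switchover to confirm the staging rules did not carry over.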
Canonical tags: When search engines rate the wrong URL
Canonical tags are among the most inconspicuous yet most influential SEO signals. They tell search engines which URL should be considered the authoritative version of a page. Errors often occur here during migrations. Typical causes include:
- canonical tags that continue to refer to old URLs
- templates that output the wrong canonicals across the board
- several competing canonical signals
Such constellations can lead to search engines indexing the wrong pages or not correctly assigning ranking signals. This becomes particularly critical if your canonicals point to URLs that have already been redirected or no longer exist.
A crawl after the migration quickly shows whether canonical tags consistently refer to the current URL of the page.
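A minimal sketch of such a canonical check, using only Python's standard library (the page URL and HTML are illustrative):

```python
from html.parser import HTMLParser

class CanonicalCheck(HTMLParser):
    """Extract the canonical URL from <link rel="canonical" href="...">."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and (a.get("rel") or "").lower() == "canonical":
            self.canonical = a.get("href")

page_url = "https://example.com/insights/website-migration/"
html = '<head><link rel="canonical" href="https://example.com/blog/website-migration/"></head>'
check = CanonicalCheck()
check.feed(html)
print(check.canonical == page_url)  # False -> canonical still points at the old URL
```

Crawling tools report this as a mismatch between "URL" and "canonical URL"; the logic behind that report is exactly this comparison.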
XML sitemaps: When search engines can’t find the new URLs
After the migration, search engines must first re-record the new website structure. XML sitemaps help with this by providing a structured overview of all indexable URLs.
Problems often arise when sitemaps are not updated after a relaunch: they then continue to contain old URLs while new pages are missing. Once the updated sitemap has been generated and lists all the pages that should be indexed, submit it to Google Search Console right away.
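Most CMS and SEO plugins generate sitemaps automatically, but the expected structure is simple. A minimal sketch that builds a sitemap from a list of URLs (the URLs are illustrative):

```python
import xml.etree.ElementTree as ET

def build_sitemap(urls):
    """Build a minimal XML sitemap for the given list of absolute URLs."""
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for url in urls:
        entry = ET.SubElement(urlset, "url")
        ET.SubElement(entry, "loc").text = url
    return ET.tostring(urlset, encoding="unicode")

new_urls = [
    "https://example.com/",
    "https://example.com/insights/website-migration/",
]
print(build_sitemap(new_urls))
```

Even if you never generate a sitemap by hand, knowing this structure makes it easy to verify that the generated file lists the new URLs and none of the old ones.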
Migrating tracking and analytics
Even if tracking is perhaps not directly part of technical SEO or GEO optimisation, it is of course central to evaluating the business impact of a migration and should therefore always be considered and moved. Changes to templates, themes or script integrations can quickly lead to your analytics or tracking codes no longer being triggered correctly after the relaunch.
If tracking is missing, important data (not only for evaluating the migration) is lost, such as the development of traffic, engagement or conversion rates.
You should therefore check before and after the go-live:
- whether analytics tools are correctly integrated
- whether tracking tags or Google Tag Manager containers are loaded
- whether events and conversion tracking continue to work
Especially in the first few days after a migration, functioning tracking is crucial in order to be able to quickly classify changes in traffic or user behaviour. It is therefore advisable to document analytics properties, tag manager configurations and important events before the migration.
Loss of structured data
Structured data is often overlooked in relaunch projects, as it is usually integrated directly into themes, templates or plugins. Existing schema markup can be partially or completely lost, especially when changing themes or page builders.
Particularly affected are often:
- Article and blog markup
- Breadcrumb structures
- Organisational or product data
If this markup is missing after the relaunch, search engines lose structured information about content, navigation structures or organisational entries. Although this rarely has a direct effect on rankings, it can influence the display of search results, for example through missing rich results or breadcrumb displays.
Especially for websites with many content pages or structured content formats, this can lead to a lower click-through rate in the search results. It is therefore essential to document the existing schema markup before the relaunch and to check the markup with free tools such as Google’s Rich Results Test or the Schema Validator.
Interim conclusion: No relaunch without mapping
Many of the risks described above can be traced back to one central cause: a lack of overview of existing URLs and their new target structure. A properly prepared URL mapping creates precisely this overview. It forces you to record every existing page, define its future role and systematically check technical signals such as redirects, indexing or internal linking.
For each existing URL, the mapping defines how it will be handled in future, for example which new target page it will receive, whether a redirect will be set, whether it will remain indexable or whether it will be deliberately removed (404/410). This makes it clear to search engines how existing content and signals will be transferred to the new structure.
Tools such as Screaming Frog can first be used to create a complete crawl of the existing website in order to capture all current URLs and compare them with the new target structure. At the same time, important technical signals can be checked and prepared, such as the indexing status, canonical tags, sitemap entries or the integration of tracking and analytics scripts.
In practice, it has been shown time and again that clean URL mapping is your most important lever for transferring stable ranking signals to new pages and implementing migrations without major losses in visibility and UX.
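The mapping itself can be as simple as a spreadsheet or CSV with one row per existing URL. A minimal sketch of such a structure (the URLs and actions are illustrative):

```python
import csv
import io

# Hypothetical mapping: for every existing URL, record its future handling.
mapping = [
    {"old_url": "/blog/website-migration", "action": "301",
     "target": "/insights/website-migration"},
    {"old_url": "/blog/old-news-2014", "action": "410", "target": ""},
    {"old_url": "/kontakt", "action": "keep", "target": "/kontakt"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["old_url", "action", "target"])
writer.writeheader()
writer.writerows(mapping)
print(buf.getvalue())
```

A file in this shape serves as the single source of truth for implementing redirects, for the post-launch redirect check and for documenting which URLs were deliberately removed.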
The hosting setup as a risk buffer
In addition to structural SEO factors, the technical infrastructure of your website also plays a key role in migrations. A stable hosting setup massively reduces the risks during the relaunch and makes it easier to control the go-live process.
Three factors in particular are crucial: a functioning staging environment, stable server performance and a controlled go-live strategy.
Staging as a control instrument
A staging environment is an isolated copy of the website in which changes can be tested before the actual go-live. It is a central component of quality assurance, especially for migrations.
In a staging environment, you can check, among other things:
- complete crawls of the new website
- function and destination of all redirects
- canonical tags and indexing rules
- internal linking structures
- XML sitemap and crawlability
- implemented structured data
This allows many typical migration errors to be recognised before they have an impact on the live environment. Good managed WordPress hosters provide such environments, for example via integrated staging functions such as those offered by Raidboxes.
It is important that the staging environment itself is not indexed by search engines. In practice, this is usually done via noindex instructions or ideally via password protection. In this way, all technical settings can already be implemented for live conditions without the website being crawlable by search engines.
That’s why you need a staging environment
Find out why a staging environment is essential for your migration in our comprehensive guide to WordPress staging.
Performance and server response times
After a migration, the technical infrastructure of the website often changes, be it due to a new server, a different hosting architecture or changed caching mechanisms. The performance of the server environment has a direct influence on the crawling behaviour of search engines. If server response times increase or timeouts occur more frequently, search engines generally reduce the number of pages that are retrieved per crawl.
This can be particularly problematic after a migration, as search engines have to re-crawl changes to URLs, internal links and page structures. If the server infrastructure is unstable or too slow during this phase:
- new content is indexed more slowly
- changes to existing pages are recognised later
- rankings stabilise more slowly
A stable and high-performance server environment is therefore crucial. This includes:
- short server response times
- functioning server-side caching
- an infrastructure without frequent timeouts or disconnections
Especially in the first few days after the relaunch, it is worth keeping an eye on server logs or monitoring data in order to recognise unusual crawling peaks or performance problems at an early stage. Stable server performance helps to ensure that search engines can crawl the new website structure efficiently and process changes quickly.
Controlled go-live and backup strategy
The actual go-live is one of the most critical phases of a website migration. Errors in this phase can have a direct impact on crawling, indexing and user experience. That’s why you should definitely carry out a final technical check of the staging version before the go-live. A complete crawl shows whether redirects, indexing rules, canonicals and sitemaps have been implemented correctly. A structured go-live strategy therefore comprises several elements:
- complete backups of files and databases
- a controlled DNS migration to the new infrastructure
- avoidance of parallel versions of the website
- the possibility of a quick rollback
Ideally, the go-live should take place at a time when traffic is low so that potential problems can be recognised and rectified quickly.
Monitoring after the migration: is everything working?
Of course, from an SEO perspective, a website migration does not end with the go-live. Only in the weeks that follow will it become clear whether redirects are working correctly, new URLs are being indexed and rankings remain stable.
To recognise potential problems at an early stage, four areas are particularly important: crawl analysis, redirect control, indexing monitoring and performance monitoring.
Crawl analysis before and after migration
A crawl of the existing website provides a complete list of all accessible URLs as well as information on internal links, status codes and metadata. This data serves as a reference point for the new website.
A complete crawl should therefore be carried out before the migration in order to record, among other things:
- all indexable URLs
- internal linking structures
- canonical tags
- meta titles and descriptions
- status codes of the pages
After the relaunch, a second crawl can be used to check whether these structures are still implemented consistently.
Typical SEO crawlers for this analysis are:
- Screaming Frog
- Sitebulb
- Lumar
The comparison between the before and after crawl makes many migration problems immediately visible.
Check redirect mapping
Even if redirects were set up before the go-live, they should definitely be checked again after the migration. It is particularly important to check for:
- 404 errors
- redirect chains
- wrong target pages
A simple method is to crawl the list of old URLs again and check whether each one returns a 301 status code and redirects directly to the correct target page.
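If the crawl results are exported, the comparison against the redirect mapping can be automated. A minimal sketch that flags 404s, wrong status codes and wrong target pages (the URLs and status codes are illustrative):

```python
def audit_redirects(results, mapping):
    """Flag problems in redirect check results.
    results: {old_url: (status_code, redirect_target)} from a crawl
    mapping: {old_url: expected_target} from the URL mapping"""
    problems = []
    for old, target in mapping.items():
        status, location = results.get(old, (None, None))
        if status == 404:
            problems.append((old, "404 instead of redirect"))
        elif status != 301:
            problems.append((old, f"status {status} instead of 301"))
        elif location != target:
            problems.append((old, f"redirects to {location}, expected {target}"))
    return problems

mapping = {"/blog/a": "/insights/a", "/blog/b": "/insights/b"}
results = {"/blog/a": (301, "/insights/a"), "/blog/b": (302, "/insights/b")}
print(audit_redirects(results, mapping))  # flags /blog/b for using a 302
```

An empty result list means every old URL answers with a direct 301 to the expected target; anything else goes straight onto the fix list.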
Monitoring in the Google Search Console
The Google Search Console remains one of your most important monitoring tools after a migration. It provides direct information on how Google crawls and indexes the new website.
Particularly relevant after a relaunch:
- indexing status of new pages
- crawl errors
- new 404 pages
- changes in impressions and clicks
In addition, the updated XML sitemap should be submitted promptly after the migration in order to alert search engines to new URLs more quickly.
Important: When changing domains, the “Change of address” tool in the Google Search Console should also be used to officially notify Google of the website’s move and to support the transfer of existing signals.
Performance and ranking monitoring
Ranking fluctuations are not uncommon in the weeks following a migration. It is crucial to continuously monitor the development of rankings and traffic in order to identify fundamental shifts that may indicate errors during the migration.
Important key figures are:
- organic traffic
- ranking positions of central keywords
- crawling activity
- loading times and Core Web Vitals
If these key figures stabilise after the relaunch, this is a good sign that search engines have successfully processed the new website structure.
Keep an eye on WordPress
If your WordPress website suddenly slows down or even goes down, you’ll notice it straight away (e.g. through lost visitors, revenue and visibility). With reliable WordPress monitoring, you can keep an eye on loading times, uptime, server load and potential sources of error at all times.
Your migration checklist
A structured checklist helps to avoid typical mistakes during a relaunch. For larger projects in particular, it ensures that key visibility signals are taken into account.
Before the migration
- Create a complete crawl of the existing website
- Export list of all indexable URLs
- Identify the most important pages based on traffic, rankings and backlinks
- Create redirect mapping between old and new URLs
- Document metadata and structured data
- Secure tracking and analytics setups
- Set up staging environment with identical server configuration
During the migration
- Implement and test redirects
- Update internal links to new URLs
- Check canonical tags and indexing rules
- Generate XML sitemap for the new website
- Perform a technical SEO check of the staging version
After the go-live
- Submit a new XML sitemap in the Google Search Console
- Use the address change tool in the Search Console when changing domains
- Monitor crawling errors and 404 pages
- Test redirects again
- Monitor rankings and organic traffic
- Check loading times and server response times
- Check tracking and analytics function
Especially for more complex relaunch projects, a structured checklist helps to systematically work through key SEO factors and avoid typical mistakes.
Migrations can be planned: with structure and a stable infrastructure
Website migrations rarely fail due to individual technical errors, but mostly due to a lack of structure in the process. Clean URL mapping, clear technical checks and consistent monitoring ensure that search engines can quickly categorise new structures and that existing signals are retained – both in traditional search results and in the growing information systems of LLMs.
The technical basis on which a migration takes place is just as crucial. A stable hosting environment, reliable server performance and a well thought-out backup strategy ensure that changes can be implemented in a controlled manner and that quick rollbacks are possible in an emergency.
Migrations remain complex, but with clear processes, thorough preparation and a stable infrastructure, they become a step in your website's development that can be planned with confidence.

