Our client faced an urgent challenge: migrating their entire infrastructure from Memset Cloud Storage to IOmart (built on Amazon S3), as Memset had been acquired by IOmart and was shutting down its services. The deadline was tight, with no room for failure.
The migration had to be completed within a strict timeframe with minimal downtime, ensuring seamless continuity for their applications and users. Working closely with the client, we devised a structured plan that would:
- Synchronise 4TB of critical files under network bandwidth limitations.
- Migrate 100+ databases while keeping them in sync.
- Ensure the integrity of all data before switching to the new infrastructure.

Phase 1: Overcoming Network Limitations for File Synchronisation
With 4TB of data stored on the server and limited network bandwidth, transferring everything at once was not feasible. A full migration in one go would take too long and overload the system. Instead, we opted for an incremental synchronisation approach to gradually move the data while keeping the system operational.
To achieve this, we used Robocopy, which allowed us to copy only new or changed files instead of transferring everything from scratch. To further optimise the process, we automated file transfers using Windows Task Scheduler, running them overnight when network usage was lower. This ensured minimal disruption to other processes.
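As an illustration, the nightly sync step can be sketched like this (paths, share names, and thread count are hypothetical; the flags are standard Robocopy options for incremental, restartable copies):

```python
# Sketch of the nightly incremental sync command (source and destination
# paths are hypothetical). Robocopy already skips unchanged files by
# default; /XO additionally excludes files older than the destination copy.

def build_robocopy_cmd(src: str, dst: str, log_path: str) -> list[str]:
    """Build a robocopy command for an incremental, restartable transfer."""
    return [
        "robocopy", src, dst,
        "/E",        # include subdirectories (even empty ones)
        "/XO",       # skip files older than the destination copy
        "/Z",        # restartable mode, survives network drops
        "/MT:8",     # multithreaded copy to use limited bandwidth efficiently
        "/R:3",      # retry a failed file 3 times
        "/W:30",     # wait 30 seconds between retries
        "/NP",       # no per-file progress, keeps the log readable
        f"/LOG+:{log_path}",  # append to a rolling log file
    ]

if __name__ == "__main__":
    cmd = build_robocopy_cmd(r"D:\data", r"\\newstore\data", r"D:\logs\sync.log")
    print(" ".join(cmd))
    # A scheduled task would run this via subprocess.run(cmd);
    # robocopy exit codes below 8 indicate success.
```

In practice the command itself would live in a PowerShell script registered with Task Scheduler to fire overnight, as described above.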
To monitor the migration, we set up Slack webhooks that sent automatic notifications about the status of each completed transfer. If an issue arose, we could address it immediately without manual checks. By breaking the data into segments and transferring files in phases, we reduced the overall migration time and maintained system stability.
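A minimal sketch of the notification step, assuming Slack's standard incoming-webhook API (the webhook URL and message format here are illustrative, not the ones used in the project):

```python
# Minimal sketch of a Slack incoming-webhook notifier. The URL and the
# message wording are hypothetical examples.
import json
import urllib.request

def build_payload(task: str, ok: bool, detail: str = "") -> dict:
    """Format a transfer-status message for Slack's incoming-webhook API."""
    status = "completed" if ok else "FAILED"
    return {"text": f"File sync `{task}` {status}. {detail}".strip()}

def notify(webhook_url: str, payload: dict) -> None:
    """POST the JSON payload to the webhook; Slack replies 'ok' on success."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

if __name__ == "__main__":
    payload = build_payload("nightly-batch-3", ok=True, detail="412 files copied")
    print(payload)
    # notify("https://hooks.slack.com/services/...", payload) would send it.
```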
Key Tools:
- Robocopy
- Windows Task Scheduler
- PowerShell scripts
- Slack webhook (for notifications)
Phase 2: Migrating 100+ Databases Without Data Loss
During the initial analysis, we discovered that the infrastructure included around 60 databases on the test server and 40 on production. Since the data was actively being updated, we needed a migration strategy that would ensure the databases remained fully synchronised throughout the transition. A simple backup and restore wouldn’t work, as the risk of data inconsistency was too high.
To address this, we opted for Transaction Log Shipping, a method that continuously replicates database changes from the source to the new environment. However, while setting up log shipping on the test server, we encountered frequent failures: processing a high volume of transactions for all databases simultaneously put too much strain on the server, causing synchronisation to break down.
To stabilise the process, we adjusted the log shipping configuration by staggering database synchronisation over different time slots instead of running all updates in parallel. Each database was assigned a specific time window, allowing the system to handle the workload more efficiently. The migration workflow involved creating an initial backup, restoring it on the new server, configuring the transaction log backup, copy, and restore jobs, and verifying that all transaction logs were successfully applied.
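The per-database workflow can be sketched as generated T-SQL (database names and file paths are hypothetical); restoring `WITH NORECOVERY` is what keeps the target database ready to accept further transaction logs:

```python
# Sketch of the per-database log shipping steps as T-SQL statements,
# generated from Python. Database names and paths are hypothetical.

def initial_backup(db: str, backup_dir: str) -> str:
    """Full backup of the source database."""
    return f"BACKUP DATABASE [{db}] TO DISK = N'{backup_dir}\\{db}.bak' WITH INIT;"

def restore_standby(db: str, backup_dir: str) -> str:
    """Restore on the target; NORECOVERY leaves it ready to accept logs."""
    return (f"RESTORE DATABASE [{db}] FROM DISK = N'{backup_dir}\\{db}.bak' "
            f"WITH NORECOVERY;")

def apply_log(db: str, log_file: str) -> str:
    """Apply one shipped transaction log, keeping the database restoring."""
    return f"RESTORE LOG [{db}] FROM DISK = N'{log_file}' WITH NORECOVERY;"

if __name__ == "__main__":
    print(initial_backup("SalesDB", r"E:\backups"))
    print(restore_standby("SalesDB", r"E:\backups"))
    print(apply_log("SalesDB", r"E:\logs\SalesDB_001.trn"))
```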
Once the synchronisation process was stable and database integrity was verified, we coordinated with the client to schedule the final switchover. The moment the web applications were disconnected from the old databases, we performed a final sync and switched the new databases to Read/Write mode. From there, the client gradually transitioned their applications, with our team monitoring and verifying data consistency at each step.
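The switchover itself can be sketched the same way (names hypothetical): a tail-log backup taken `WITH NORECOVERY` blocks further writes on the old server, and the final restore `WITH RECOVERY` brings the new database online in Read/Write mode:

```python
# Sketch of the final switchover statements. Database names and paths
# are hypothetical.

def final_log_backup(db: str, log_dir: str) -> str:
    """Tail-log backup on the source; NORECOVERY leaves it in a restoring
    state so no further writes can land on the old server."""
    return (f"BACKUP LOG [{db}] TO DISK = N'{log_dir}\\{db}_tail.trn' "
            f"WITH NORECOVERY;")

def bring_online(db: str, log_dir: str) -> str:
    """Apply the tail log and recover the target database read/write."""
    return (f"RESTORE LOG [{db}] FROM DISK = N'{log_dir}\\{db}_tail.trn' "
            f"WITH RECOVERY;")

if __name__ == "__main__":
    print(final_log_backup("SalesDB", r"E:\logs"))
    print(bring_online("SalesDB", r"E:\logs"))
```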
Key Tools:
- MSSQL Transaction Log Shipping
- Native MSSQL backup and restore
- Built-in MSSQL monitoring tools
Problems and Solutions
To summarise, during the migration we encountered the following challenges that required adjustments to our approach:
- Slow data transfer speeds — Resolved by using incremental copying to reduce the volume of transferred data.
- Log shipping failures — The high volume of transactions overwhelmed the system, so we staggered synchronisation by assigning different time slots to each database, which stabilised the process.
- Switchover complexities — Before switching to the new databases, we conducted a final integrity check to ensure data accuracy. The transition was then carried out in coordinated stages with the client to avoid disruptions.
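The staggering fix can be illustrated with a small scheduling sketch (the slot length, batch size, and database names are invented for the example):

```python
# Sketch of staggered scheduling: spread databases across time slots so
# only a few synchronise at once. Slot length and batch size are
# illustrative values, not the ones used in the project.
from datetime import datetime, timedelta

def stagger(databases: list[str], start: datetime,
            slot_minutes: int, per_slot: int) -> dict[str, datetime]:
    """Assign each database a start time, per_slot databases per window."""
    schedule = {}
    for i, db in enumerate(databases):
        slot = i // per_slot  # which time window this database falls into
        schedule[db] = start + timedelta(minutes=slot * slot_minutes)
    return schedule

if __name__ == "__main__":
    dbs = [f"db{n:02d}" for n in range(6)]
    for db, when in stagger(dbs, datetime(2023, 1, 1, 22, 0), 30, 2).items():
        print(db, when.strftime("%H:%M"))
```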
Communication with the Client
Throughout the migration, we worked closely with the client, who handled certain aspects of the transition on their end:
- Switching web applications to the new databases after verification.
- Testing the final environment post-migration to ensure everything was running as expected.
This collaborative approach allowed us to execute the migration smoothly while minimising risks and downtime.
Results
The migration was successfully completed with the following key outcomes:
✅ Stable data synchronisation – Transaction log shipping was configured for each database individually, ensuring data consistency and reliability.
✅ Minimal downtime – Thanks to incremental file copying and a staged database transition, downtime was reduced to just two hours.
✅ Successful infrastructure migration – A total of 60 test databases and 40 production databases were transferred. All files and databases were verified for accuracy and integrity.
The project demonstrated how careful planning, automation, and close collaboration with the client can ensure a smooth and efficient migration, even under tight deadlines.