2.5TB of Archives Moved to O365 in a Day? How we Optimise Archive Migration Throughput
The question on everyone’s lips in every archive migration project is “How long will it take?” The answer, of course, is another question: “How long is a piece of string?” A lot of factors come into play when calculating how long an archive migration will take. With over 250 successful archive migration projects behind us, we can provide a good estimate at the start of a project; however, until we start migrating data, we cannot give real-world figures for your environment. Our statement is “We will migrate as fast as the environment allows us to”. Bear in mind, we deliver fixed-cost archive migration projects, so we are equally incentivised to complete the project as quickly as possible!
I remember Tim ‘The Tool Man’ Taylor from Home Improvement when I was growing up, trying to supercharge every piece of equipment he could get his hands on. I think of the Insentra Migrations team as Tim – though we are definitely not accident prone like he was! We are more like an F1 team trying to gain those extra hundredths of a second out of the car, because every millisecond helps, especially when looking at migrations of hundreds of millions of objects.
What do we consider when calculating migration speed? Glad you asked, let’s start with the source archive environment.
In most cases your archive environment was designed to store ‘archive’ data for a long time with infrequent access from end users. When these products were written, no consideration was given to a future need to get all the data out fast and move it to a different platform. Archiving software vendors also don’t typically make it easy to extract all your data quickly.
We have migrated data from many different environments, most of which are still active with daily archiving runs; however, some have been sitting idle for years and are only used for eDiscovery and compliance purposes. Some environments no longer function at the application layer, but the backend data is still intact. This is why a critical part of our migration methodology is to ‘health check’ the source environment and provide guidance on what can be changed to enhance migration throughput.
Major factors which influence speed out of the source environment are a combination of the following questions:
1.) Is the environment actively archiving data?
2.) What load is currently on the environment? Migrating out of an environment with consistently high CPU (>80%) is challenging. We need to take into consideration the normal day-to-day use of the environment. We do have strategies to combat highly utilised servers, allowing us to migrate out while the environment continues its normal day-to-day running.
3.) What kind of storage are those many TBs of data stored on? Archive solutions often use low-tier storage optimised for write speed. It is also quite common to see archive data moved to secondary storage (web-based, backup-software-based), which can make it difficult to migrate data out fast – again though, in many cases we have strategies to get around this limitation.
4.) The database server(s) can be a bottleneck.
a.) Is it maintained on a regular basis?
b.) Does the server have sufficient resources to add data migration requests to it?
c.) Are the databases on a shared instance with other critical databases?
5.) How many archiving servers are there? Of course, multiple source servers will see better migration throughput than a single server – this depends on the source archiving environment, as many only require one server to meet the customer’s archiving needs.
6.) Where are the archiving servers located physically and what is their network connectivity? I will discuss the migration server environment requirements shortly; however, high bandwidth and low latency (<1ms) are required to achieve high migration throughput.
7.) What OS versions are on the archive servers? We do see speed impacted when the source servers are running Windows Server 2003 – mainly due to the limitations of SMB1. An interesting blog about SMB1 can be found here – https://blogs.technet.microsoft.com/filecab/2016/09/16/stop-using-smb1/ – it’s not only slow, it isn’t safe.
Let’s move onto the migration environment setup and the recommendations we advise.
Insentra use different migration vendors to migrate archive data depending on the specific customer requirements – it is not a case of one product fits all. The software products we use all work in similar ways; however, each has its strengths depending on your source archive solution and your target environment. Don’t worry though… we view archive migrations like moving to a new house – the client doesn’t typically concern themselves with the brand of truck doing the move; what matters is the capability of the removalists to select the best tool for the job.
We design all migration environments based on the source environment and what sort of speed we are likely to achieve. The number of migration servers required is a major point – we can generally achieve very high throughput with multiple migration servers, especially when the source archive environment has multiple servers. Note: the migration servers are not overly spec’d – generally a virtual 8 CPU/16GB server will suffice in 99% of migration projects.
Every migration tool we use depends on a Microsoft SQL Server backend to hold information about the migration – what we need to migrate, what we have migrated, and where we have migrated it to are all stored in those SQL databases. These databases are also the first port of call for proof that we have migrated the data successfully from A to B.
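Conceptually, the tracking database boils down to a per-item state table. Here is a minimal sketch of the idea (a hypothetical schema for illustration only – the real products’ designs are far richer and vendor-specific):

```python
import sqlite3

# Hypothetical per-item migration state table, illustrating the kind of
# record a migration tool keeps in SQL for every archived item.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE migration_items (
        item_id     TEXT PRIMARY KEY,   -- source archive item identifier
        source_loc  TEXT NOT NULL,      -- where the item lives in the archive
        target_loc  TEXT,               -- where it landed in Office 365
        status      TEXT NOT NULL       -- e.g. pending / migrated / failed
    )
""")
conn.execute("INSERT INTO migration_items VALUES (?, ?, ?, ?)",
             ("msg-001", "vault01/partition3", None, "pending"))

# Marking an item migrated is the recorded proof of A-to-B movement.
conn.execute("UPDATE migration_items SET status = 'migrated', "
             "target_loc = 'mailbox/archive' WHERE item_id = 'msg-001'")

(status,) = conn.execute(
    "SELECT status FROM migration_items WHERE item_id = 'msg-001'").fetchone()
print(status)  # migrated
```

It is exactly this per-item bookkeeping that makes SQL disk speed so important at scale – billions of messages means billions of rows being inserted and updated.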
For very large environments (billions of messages), a decently sized MS SQL server instance(s) is required. We will advise sizing based on our previous migration experience with the SQL disk being the most important factor in sizing the server. We always ask for the fastest tier of storage available because if SQL cannot keep up, the migration throughput will suffer.
I mentioned networking earlier – high bandwidth/low latency (<1ms) is required between the source environment and the migration environment. Migration throughput will be impacted with anything less than this.
Other factors? Depending on the software used for the migration, some storage may be required as a staging disk – we generally ask for about 200GB. This disk needs to be as fast as possible and local to the migration servers. Anti-virus software can also cause performance issues – generally we ask for AV scanning to be disabled on certain disks and processes.
So, by taking this approach, we have a highly tuned and well-oiled source and migration environment to get data out of those archives fast. Let’s talk about the target environment and how it affects migration throughput. In this blog, I will concentrate on Office 365 as this is currently the target in highest demand for our projects. If you want to know about other targets (PST, EV etc) drop me a message.
If all is well with the source and the migration environment itself, the target will always be the bottleneck – this is what we try to ensure with every migration project.
I’m going to start with the obvious one – bandwidth; it is almost always the bottleneck. The key question is ‘how much bandwidth is the archive migration allowed to utilise to get that data up to Office 365?’ 100Mb/s should be a minimum – this ‘should’ get us somewhere between 25-30GB transferred an hour. The more bandwidth available, the more we can potentially transfer, so long as the source environment can supply data at that rate. We also need to take into consideration other data consuming the same pipe to O365 – live mail migrations? SharePoint migrations? OneDrive? These all impact the speed of moving archive data.
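The arithmetic behind that 25-30GB/hr figure is worth seeing. A 100Mb/s link is about 45GB/hr in theory; real-world protocol overhead, throttling and source-side delays eat into that, which we can fold into an efficiency factor (the ~60% used below is an illustrative assumption, not a guarantee):

```python
# Rough migration-throughput estimator for planning purposes.
# The efficiency factor is an assumption covering protocol overhead,
# throttling and source-side stalls; tune it to what you observe.

def gb_per_hour(bandwidth_mbps: float, efficiency: float = 0.6) -> float:
    """Estimate GB transferred per hour over a link of the given megabits/s."""
    bytes_per_sec = bandwidth_mbps * 1_000_000 / 8          # megabits -> bytes
    return bytes_per_sec * 3600 / 1_000_000_000 * efficiency  # bytes/hr -> GB/hr

# A 100Mb/s link at ~60% effective utilisation lands right in the
# 25-30GB/hr range quoted above.
print(round(gb_per_hour(100), 1))  # 27.0
```

Running the same sum at higher bandwidths quickly shows why a bigger pipe matters – provided the source can feed it.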
Office 365 throttling – Office 365 is a shared environment, so there are limits put in place by Microsoft to ensure a single tenant doesn’t hog resources from other tenants. We ask our clients to log a case with Microsoft to have the throttling limits tuned to enable faster migration of data. Whilst the migration software does have some smarts to help with throttling, in all cases throttling changes to Office 365 are recommended.
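Those “smarts” generally amount to backing off and retrying when the service pushes back. A minimal sketch of the pattern (the exception type and timings here are hypothetical – real migration products handle this internally):

```python
import random
import time

class ThrottledError(Exception):
    """Hypothetical: raised when the service signals throttling (e.g. HTTP 429)."""

def with_backoff(operation, max_retries=5, base_delay=0.1):
    """Run an ingestion call, retrying with exponential backoff plus jitter
    whenever the service signals throttling."""
    for attempt in range(max_retries):
        try:
            return operation()
        except ThrottledError:
            if attempt == max_retries - 1:
                raise  # give up after the final retry
            # Wait base_delay, 2x, 4x... plus jitter, so parallel migration
            # threads do not all retry in lockstep.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))
```

Backoff keeps a throttled migration moving, but every pause is lost throughput – which is why tuning the tenant’s limits up front is still worthwhile.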
Let’s move onto the last bit, other factors to consider.
Organisational change – how many users can the service desk handle a day? It may only be able to handle up to 200 migrated users a day. This page gives a lot more detail – https://collab365.community/organizational-change-management-adopting-office-365/ – far better than I could ever explain it, and we face the same limitations articulated in that article when migrating archives.
Does speed matter?
Put simply, yes. In many cases, the legacy archive system is using TBs of storage, is on its last legs and needs to be decommissioned sooner rather than later. Imagine moving the mailboxes to Office 365 and, two years later, the archives are still migrating! However, whilst many organisations are focused on speed, they often forget they can only handle so much change at any given time. If you are a 20,000-seat business, can your helpdesk handle the migration/cutover of more than 500 users a night? Many can’t, and it’s organisational factors like these that often slow projects down.
So how fast have we gone? Everyone likes speed, right? I find the migration statistics mesmerising especially when we get very high throughput rates consistently.
With a source environment of multiple archive servers, a multi-server migration environment and a 1Gb/s ExpressRoute to O365 which was all ours to use – we managed to sustain close to 250GB/hr transfer for a few hours, which translated to about 2.5 million messages an hour. Unfortunately, we could not sustain that throughput; however, we were regularly peaking between 2 and 2.5TB transferred to Office 365 a day, with most of that data transferred in the early mornings with no one on the system. We did see consistent 100+GB/hr transfer rates into Office 365.
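As a sanity check on those numbers, 250GB/hr at 2.5 million messages an hour implies roughly 100KB per message – and the same arithmetic lets you project a duration for your own archive:

```python
# Back-of-the-envelope checks on migration statistics.

def avg_message_kb(gb_per_hour: float, messages_per_hour: float) -> float:
    """Average message size (KB) implied by a throughput rate."""
    return gb_per_hour * 1_000_000 / messages_per_hour  # GB -> KB

def days_to_migrate(total_tb: float, tb_per_day: float) -> float:
    """Simple projection of migration duration at a sustained daily rate."""
    return total_tb / tb_per_day

# 250GB/hr at 2.5 million messages/hr implies ~100KB per message.
print(round(avg_message_kb(250, 2_500_000)))  # 100
# A 20TB archive at 2.5TB/day is roughly an 8-day job.
print(days_to_migrate(20, 2.5))  # 8.0
```

Of course, real projects rarely sustain peak rates around the clock, so treat projections like these as best-case floors, not commitments.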
STOP THE PRESS! Speed records have been blown out of the water at Insentra in the last few days whilst I was writing this blog.
Just over 6TB migrated into Office 365 over a 24-hour period – a multi-server source and multi-server migration environment coupled with a big pipe made this possible. We are seeing this consistently at this customer.
Another – a single source archive server, two migration servers and a 500Mb/s pipe to the internet gave us a consistent 2.5TB a day to Office 365, sustained over many days – just over 20TB migrated in a little more than 8 days.
That’s it from me this time. I hope the above has helped you understand the many complexities in getting a migration completed faster. You can see from our latest statistics that we can go very fast if the environment is right!