It may be worth having a rule of thumb for when it's faster to use Snowball than to upload/download the data over the network. According to the docs, the Snowball appliance is delivered via UPS 2-day delivery (in the US; I'm not sure about the rest of the world), so using Snowball it takes at least 4 days to migrate the data.
Is there a rule of thumb for how long it takes to upload, e.g., 1 TB over a 100 Mbps line? Please confirm / improve my estimate.
100 Mbps is approx 100/8 MB/s = 12.5 MB/s
1 TB / 12.5 MB/s = 80,000 s ≈ 22 h
Using a network transfer time calculator (https://downloadtimecalculator.com/Data-Transfer-Calculator.html), it says the time is 24.5 hours, which looks approximately correct. Did I forget or misunderstand anything?
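The back-of-envelope calculation above can be sketched as a small helper; the function name and the assumption of a fully utilizable link with no protocol overhead are mine:

```python
def transfer_hours(size_tb: float, link_mbps: float) -> float:
    """Hours to move size_tb terabytes over a link_mbps link,
    assuming full utilization and no protocol overhead."""
    bytes_total = size_tb * 1e12          # decimal TB -> bytes
    bytes_per_s = link_mbps * 1e6 / 8     # Mbps -> bytes/s
    return bytes_total / bytes_per_s / 3600

print(round(transfer_hours(1, 100), 1))   # 1 TB over 100 Mbps -> 22.2 h
```

The ~2-hour gap to the calculator's 24.5 h is plausibly its allowance for overhead.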
Your math is pretty much right. Snowballs aren't always the answer, but they can be a brilliant option.
It's generally safe to deduct a bit from the maximum MB/s as well, so you could look at something like 10 MB/s. Plus, you would also only be able to use part of the network, since the rest of the business still has to be able to operate, at least during business hours. So maybe drop that down by 50% to 5 MB/s. That takes us up to a pretty conservative 55-ish hours per terabyte.
Snowballs are all about economies of scale. Suppose you wanted to transfer something much larger, on the order of 250 TB (or 0.25 PB).
Using the internet with a fully utilizable 100 Mbps link (12.5 MB/s), we could do that in about 231 days… Eek. Using a single Snowball cycled back and forth at roughly 14 TB/day (72 TB usable space per ~5-day round trip), we could do it in maybe three weeks.
But… what if we used multiple Snowballs? Let's go with 4 Snowballs (the 80 TB model, with 72 TB usable space each).
Let's say you had a nice 10 Gbps (1.25 GB/s) fibre link from your SAN into your Snowballs; the total transfer to the devices would take about 55 hours. Add the four days for transit, and maybe another one for processing by AWS… maybe a bit over a week?
When you compare seven-plus months, to about three weeks, to a bit over a week, the numbers speak for themselves.
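Putting the three scenarios side by side, using the figures assumed in this thread (12.5 MB/s internet, 72 TB usable per Snowball, ~5-day round trip, 1.25 GB/s SAN link for loading, ~4 days transit plus ~1 day AWS processing):

```python
import math

DATA_TB = 250

# Option 1: fully utilized 100 Mbps (12.5 MB/s) internet link
internet_days = DATA_TB * 1e6 / 12.5 / 86400     # MB / (MB/s) / s-per-day

# Option 2: one Snowball cycled back and forth, ~5 days per round trip
trips = math.ceil(DATA_TB / 72)                   # 4 round trips
one_snowball_days = trips * 5

# Option 3: four Snowballs in parallel, loaded over a 10 Gbps SAN link,
# then ~4 days transit + ~1 day processing by AWS
load_days = DATA_TB * 1e3 / 1.25 / 86400          # GB / (GB/s), ~2.3 days
four_snowball_days = load_days + 4 + 1

print(round(internet_days))        # -> 231 days
print(one_snowball_days)           # -> 20 days (~3 weeks)
print(round(four_snowball_days))   # -> 7 days
```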
The general rule of thumb I've gathered is that if you're in a hurry to move data to AWS and can't move it in under a week via WAN, Snowball is the faster bet, as AWS approximates the total time to load the data and ship it back to them to be within a week per Snowball. As Stephen mentioned, this can be scaled out with multiple Snowballs.
Here are some other things we've had to consider when choosing between the options (Snowball vs. DX/VPN vs. internet).

Do you have free bandwidth to chew up sending all this data to AWS? Maybe your DX/VPN is nearing capacity and you don't want to impact other traffic using that pipe. The same idea applies if you're sending it over your internet pipe.

Also, what is this data? Is it super sensitive? Is the data owner more comfortable with DX/VPN vs. HTTPS over the internet vs. Snowball? You might need to have a discussion with them about how their data is protected via encryption with each solution.

What is your company policy on hooking up an AWS Snowball in your data center? If you haven't done it before, security people are not thrilled about connecting a hard-drive appliance (the Snowball) to an internal storage switch. It's not always what makes sense from an engineering perspective; sometimes it's what makes sense for the data owner and the company.
I would only recommend a Snowball for remote locations with very limited/unreliable bandwidth. The Snowball's CLI/documentation is not very user friendly, and even though newer versions have 1 GbE and 10 GbE options, you will be disappointed by the upload/download speeds (you'll need to perform multiple copies in parallel to saturate the link).