Sometimes I have to move large files between servers. Most often they’re database backups, and since we have single database tables that can extend to many hundreds of gigabytes, whole database dumps can easily produce files of a terabyte or more.

Even more challenging was a recent need to reconstruct historical data from backup drives on a development workstation and then upload the results.

But here’s the wrinkle. Once file sizes move beyond 3-4 GB, transfers become a smidgen unreliable, whatever protocol I use.

The answer: a very useful *nix utility called split.
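The basic workflow with split is to cut the big file into fixed-size chunks, move them individually, and concatenate them back together on the far side. Here is a minimal sketch using a small demo file (the filenames and the 4 MB chunk size are illustrative; GNU coreutils assumed):

```shell
# Stand-in for a real dump file (hypothetical name; in practice
# this would be the multi-gigabyte backup)
dd if=/dev/urandom of=dump.bin bs=1M count=10 status=none

# Record a checksum before splitting so we can verify later
sha256sum dump.bin > dump.bin.sha256

# Split into 4 MB chunks; -d gives numeric suffixes (part-00, part-01, ...)
# so the pieces sort and reassemble in a stable order
split -b 4M -d dump.bin dump.bin.part-

# Transfer the parts individually, then reassemble on the target host:
cat dump.bin.part-* > dump_reassembled.bin

# Confirm the reassembled file is byte-identical to the original
cmp dump.bin dump_reassembled.bin && echo "reassembly OK"
```

For a real terabyte-scale dump you would pick a much larger chunk size (for example -b 2G), and the pre-split checksum is what makes a flaky transfer safe: if verification fails, only the affected chunks need to be re-sent, not the whole file.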
