Some interesting findings about Veeam Backup v5

By admin, October 29, 2010 11:56 am

The following are my own findings about Veeam Backup v5:

  • If you have more than 30–50 VMs (averaging 20–30 GB each) to back up, it’s better to set up a second Veeam Backup server to load-balance the jobs. Eventually you may have a number of load-balanced Veeam Backup servers to spread the load evenly, which will greatly reduce queue time and shorten the total backup window. Don’t worry: all Veeam Backup servers are managed centrally by Veeam Enterprise Manager, and the best part is that additional Veeam Backup servers DO NOT count towards your Veeam licenses (how thoughtful and caring Veeam is, thank you!). Veeam recommends creating 3–4 jobs per Veeam Backup server due to the high CPU usage for de-duplication and compression during the backup window. However, this doesn’t apply to Replication, which doesn’t use de-duplication and compression. Note: there is another way to fully utilize your one and only Veeam Backup server by adding more than 4 concurrent jobs (see the quote below).
  • The “Estimated required Space” within the job description is incorrect; it’s a known bug that the VMware API (or Veeam) doesn’t yet know how to interpret thin-provisioned volumes, so estimate it yourself. Most of the time it will over-estimate the total required backup space by 3–10x, so don’t worry!
  • After each job completes, you will see a processing rate (for example, 251 MB/s), which is the total size of the VMs divided by the total time it took to process them. This is affected by the time it takes to talk to vCenter, freeze the guest OS, create and remove the snapshot, back up small VM files (configuration), and back up the actual disks.
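
As a back-of-envelope illustration of this formula (the numbers below are hypothetical, not from a real job log), the reported rate can be reproduced like this:

```python
# Hypothetical illustration of how the "Processing rate" is derived:
# total size of the VMs in a job divided by total wall-clock job time,
# which includes snapshot and vCenter overhead, not just data transfer.

def processing_rate_mb_s(total_vm_size_gb: float, job_minutes: float) -> float:
    """Total VM size divided by total job time, in MB/s."""
    total_mb = total_vm_size_gb * 1024
    return total_mb / (job_minutes * 60)

# Example: a 442 GB job that takes 30 minutes end to end
rate = processing_rate_mb_s(442, 30)
print(f"{rate:.0f} MB/s")  # roughly 251 MB/s
```

This is why a job with heavy per-VM overhead can report a low MB/s even when the actual disk transfer is fast.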

I am still not sure whether the target storage requires a very fast I/O or high-IOPS device (like 4–12 10K/15K SAS disks in RAID 10/RAID 50). Some say the backup process with de-duplication and compression is random; some say it’s sequential. If it’s random, then we need fast spindles with lots of disks; if it’s sequential, then we only need cheap 7200 RPM SATA disks, and a 4-disk RAID 5 of 2 TB drives would be the most cost-effective solution for storing the vbk/vib/vrb images.

Some interesting comments quoted from Veeam’s forum relating to my findings:

That’s why we run multiple jobs. Not only that, but when doing incremental backup, a high percentage of the time is spent simply prepping the guest OS, taking the snapshot, removing the snapshot, etc, etc. With Veeam V5 you get some more “job overhead” if you use the indexing feature, since the system has to build the index file (can take quite some time on systems with large numbers of files) and then backup the zipped index via the VM tools interface. This time is all calculated in the final “MB/sec” for the job. That means that if you only have a single job running there will be lots of “down time” where no transfer is really occurring, especially with incremental backups, because there’s relatively little data transferred for most VMs compared to the amount of time spent taking and removing the snapshot. Multiple jobs help with this because, while one job may be “between VMs” handling its housekeeping, the other job is likely to be transferring data.
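
The effect described in the quote can be sketched with a simplified model (the per-VM minute figures below are purely illustrative assumptions, not measured values): when fixed per-VM overhead dominates, splitting VMs across concurrent jobs shrinks the wall-clock window almost linearly.

```python
# Illustrative model (my own assumptions): each VM costs a fixed overhead
# (snapshot create/remove, guest prep, indexing) plus transfer time, and
# concurrent jobs let one job transfer while another does housekeeping.
import math

def backup_window_minutes(num_vms: int, overhead_min_per_vm: float,
                          transfer_min_per_vm: float, jobs: int) -> float:
    """Wall-clock time if VMs are split evenly across concurrent jobs."""
    per_vm = overhead_min_per_vm + transfer_min_per_vm
    vms_per_job = math.ceil(num_vms / jobs)
    return vms_per_job * per_vm

# 40 VMs, 4 min overhead + 1 min transfer each (typical incremental shape):
print(backup_window_minutes(40, 4, 1, 1))  # 200 minutes with one job
print(backup_window_minutes(40, 4, 1, 4))  # 50 minutes with four jobs
```

The model ignores CPU and storage contention between jobs, which is exactly why Veeam caps the recommendation at 3–4 jobs per server.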

There are also other things to consider as well. If you’re a 24×7 operation, you might not really want to saturate your production storage just to get backups done. This is admittedly less of an issue with CBT-based incrementals, but it used to be a big deal with ESX 3.5 and earlier, and full backups can still impact your production storage. If I’m pushing 160 MB/sec from one of my older SATA Equallogic arrays, its I/O latency will shoot to 15–20 ms or more, which severely impacts server performance on that system. It might not be an issue if you’re not a 24×7 shop and you have a backup window where you can hammer your storage as much as you want, but it is certainly an issue for us. Obviously we have times that are more quiet than others, and our backup windows coincide with our “quiet” time, but we’re a global manufacturer, so systems have to keep running and performance is important even during backups.


Finally, one thing often overlooked is the backup target. If you’re pulling data at 60 MB/sec, can you write the data that fast? Since Veeam is compressing and deduping on the fly, it can have a somewhat random write pattern even when it’s running fulls, but reverse incrementals are especially hard on the target storage since they require a random read, random write, and sequential write for every block that’s backed up during an incremental. I see a lot of issues with people attempting to write to older NAS devices or 3–4 drive RAID arrays which might have decent throughput, but poor random access. This is not as much of an issue with fulls and the new forward incrementals in Veeam 5, but it still has some impact.
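
To make the quoted point concrete, here is a rough model (my own simplification, with an assumed block size; Veeam does not publish these internals in this form) of the I/O load a reverse incremental puts on the target: three I/Os per changed block, of which two are random.

```python
# Rough model (my own simplification): a reverse incremental performs a
# random read, a random write, and a sequential write on the target for
# every changed block, so random-access capability matters on the target.

BLOCK_SIZE_KB = 1024  # assumed backup block size of 1 MB (illustrative)

def target_io_rate(changed_gb: float, window_minutes: float) -> float:
    """Approximate sustained I/O operations per second the target absorbs."""
    blocks = changed_gb * 1024 * 1024 / BLOCK_SIZE_KB
    ios = blocks * 3  # random read + random write + sequential write
    return ios / (window_minutes * 60)

# Example: 50 GB of changed data processed in a 60-minute window
print(f"{target_io_rate(50, 60):.0f} IOPS")
```

Even modest-looking IOPS figures hurt on a 3–4 drive array because two of the three operations per block are random, which is where a small spindle count falls over.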


No doubt Veeam creates files a lot differently than most vendors. Veeam does not just create a sequential, compressed dump of the VMDK files.

Veeam’s file format is effectively a custom database designed to store compressed blocks and their hashes for reasonably quick access. The hashes allow for dedupe (blocks with matching hashes are the same), and there’s some added overhead to provide additional transactional safety so that your VBK file is generally recoverable after a crash. That means Veeam files have a storage I/O pattern more like a busy database than a traditional backup file dump.
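
The idea in that quote can be sketched as a toy block store (my own illustration, sharing only the concept with Veeam’s actual on-disk format): each block is hashed, and a block whose hash has already been seen is stored only once.

```python
# Toy illustration of hash-based dedupe with compression, as described
# above. Veeam's real VBK format is a custom database; this sketch only
# demonstrates the "matching hashes mean identical blocks" idea.
import hashlib
import zlib

class ToyBlockStore:
    def __init__(self) -> None:
        self.blocks = {}   # hash -> compressed block, stored once (dedupe)
        self.layout = []   # ordered hashes reconstructing the original data

    def write(self, block: bytes) -> None:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in self.blocks:          # duplicate blocks skipped
            self.blocks[digest] = zlib.compress(block)
        self.layout.append(digest)

    def read_all(self) -> bytes:
        return b"".join(zlib.decompress(self.blocks[h]) for h in self.layout)

store = ToyBlockStore()
store.write(b"\x00" * 4096)   # zeroed block
store.write(b"data" * 1024)
store.write(b"\x00" * 4096)   # duplicate: hash matches, stored only once
print(len(store.blocks))      # 2 unique blocks for 3 writes
```

Looking blocks up by hash rather than appending them sequentially is exactly what gives the file a database-like, partly random I/O pattern.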
