Veeam Backup Space Required: Formula and Calculation

By admin, October 31, 2010 12:03

Found this really useful information in Veeam’s forum (contributed by Anton), a great place to learn about the product; their staff are all very caring and helpful.

The formulas we use for disk space estimation are the following:

Backup size = C * (F*Data + R*D*Data)
Replica size = Data + C*R*D*Data

Data = total size of the processed VMs (actually used space, not provisioned)

C = average compression/dedupe ratio (depends on too many factors; compression and dedupe can be very high, but we use 50% as the worst case)

F = number of full backups in retention policy (1, unless periodic fulls are enabled)

R = number of rollbacks according to retention policy (14 by default)

D = average amount of VM disk change between cycles, in percent (we use 10% right now, but will change it to 5% in v5 based on feedback… reportedly for most VMs it is just 1-2%, but active Exchange and SQL servers can be up to 10-20% due to transaction log activity, so 5% seems to be a good average)
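As a quick sanity check, here is a minimal sketch of the same estimate in Python; the sample figure of 2 TB of used VM data, together with the default 14 rollbacks and the 50%/5% ratios quoted above, are just illustrative assumptions, not recommendations.

```python
def backup_size(data_gb, c=0.5, f=1, r=14, d=0.05):
    """Estimated backup repository size: C * (F*Data + R*D*Data)."""
    return c * (f * data_gb + r * d * data_gb)

def replica_size(data_gb, c=0.5, r=14, d=0.05):
    """Estimated replica size: Data + C*R*D*Data."""
    return data_gb + c * r * d * data_gb

if __name__ == "__main__":
    data = 2000  # total used (not provisioned) size of processed VMs, in GB -- assumed example
    print(f"Backup size : {backup_size(data):,.0f} GB")   # 0.5 * (1*2000 + 14*0.05*2000) = 1700 GB
    print(f"Replica size: {replica_size(data):,.0f} GB")  # 2000 + 0.5*14*0.05*2000 = 2700 GB
```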

The rumoured fully-opening BBR resin Ferrari 458

By admin, October 30, 2010 11:13




Let me share what I know. The original poster observed very carefully: that white (grey) prototype is indeed not from Hot Wheels. A diecast model has a large number of parts, and its grey prototype would not look like this. This is actually a test model BBR built while exploring an opening-door version of its resin hand-built 458. Unfortunately the model was never approved, for two reasons. First, the cost would be prohibitive: it requires extra tooling, only highly skilled craftsmen can assemble it, and the reject rate is high, so very few customers could afford the resulting price. Second, making the doors survive enough open/close cycles without the panel gaps deforming involves technical hurdles that have not been solved. So BBR has shelved the plan to produce an opening-door 458 hand-built model for now. Whether an opening-door hand-built model will ever be made is unknown. As for whether BBR’s diecast 458 will get an opening version, they have not said they will, but they have not ruled it out either. The chances are slim, so there is no point dwelling on it.


Some interesting findings about Veeam Backup v5

By admin, October 29, 2010 11:56

The following are my own findings about Veeam Backup v5:

  • If you have more than 30-50 VMs (averaging 20-30 GB each) to back up, it’s better to set up a second Veeam Backup server to load-balance the jobs. Eventually you may end up with a number of load-balanced Veeam Backup servers spreading the load evenly, which greatly reduces queue time and shortens the total backup window.
  • Don’t worry, all Veeam Backup servers are managed centrally by Veeam Enterprise Manager (and the best part is that additional Veeam Backup servers DO NOT count towards your Veeam licenses; how thoughtful and caring of Veeam, thank you!).
  • Veeam recommends creating 3-4 jobs per Veeam Backup server because of the high CPU usage for deduplication and compression during the backup window. This does not apply to replication, which uses neither deduplication nor compression. Note: there is another way to fully utilize your one and only Veeam Backup server by running more than 4 concurrent jobs (see the quote below).
  • The “Estimated required Space” shown in the job description is incorrect; it’s a known bug that the VMware API (or Veeam) cannot yet interpret thin-provisioned volumes correctly, so make your own estimate. Most of the time it over-estimates the total required backup space by 3-10x, so don’t worry!
  • After each job completes, you will see a processing rate (for example 251 MB/s), which is the total size of the VMs divided by the total time it took to process them. That time includes talking to vCenter, freezing the guest OS, creating and removing the snapshot, backing up the small VM files (configuration), and backing up the actual disks (see the rough sketch after this list).
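To make that definition concrete, here is a rough, hypothetical Python sketch of how the reported number falls out of total VM size divided by total job time; the per-VM overhead and raw throughput figures are made-up assumptions, not measurements. It also shows why an incremental run, where most of the elapsed time is snapshot housekeeping rather than data transfer, can still report a high MB/s figure, which is the point of the first forum quote below.

```python
def effective_rate_mb_s(vm_size_gb, changed_fraction, raw_rate_mb_s, overhead_s):
    """Effective processing rate as reported per job:
    total VM size divided by total processing time (per-VM overhead + transfer)."""
    transfer_s = (vm_size_gb * 1024 * changed_fraction) / raw_rate_mb_s
    total_s = overhead_s + transfer_s
    return (vm_size_gb * 1024) / total_s

# Assumed example: 40 GB VM, incremental run with 5% changed blocks,
# 100 MB/s raw transfer, 120 s spent on vCenter calls, snapshot create/remove, etc.
print(f"{effective_rate_mb_s(40, 0.05, 100, 120):.0f} MB/s reported")  # ~292 MB/s
# Same VM as a full backup (all blocks transferred): overhead matters far less.
print(f"{effective_rate_mb_s(40, 1.0, 100, 120):.0f} MB/s reported")   # ~77 MB/s
```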

I am still not sure whether the target storage needs to be a very fast, high-IOPS device such as 4-12 10K/15K SAS disks in RAID 10/RAID 50. Some say the backup process with deduplication and compression produces random I/O, others say it is sequential. If it is random, we need fast spindles and lots of disks; if it is sequential, cheap 7200 RPM SATA disks are enough, and a 4-disk RAID 5 of 2 TB drives would be the most cost-effective solution for storing the vbk/vib/vrb images.

Some interesting comments quoted from Veeam’s forum relating to my findings:

That’s why we run multiple jobs. Not only that, but when doing incremental backups, a high percentage of the time is spent simply prepping the guest OS, taking the snapshot, removing the snapshot, etc. With Veeam v5 you get some more “job overhead” if you use the indexing feature, since the system has to build the index file (which can take quite some time on systems with large numbers of files) and then back up the zipped index via the VM tools interface. This time is all counted in the final “MB/sec” for the job. That means that if you only have a single job running there will be lots of “down time” where no transfer is really occurring, especially with incremental backups, because relatively little data is transferred for most VMs compared to the amount of time spent taking and removing the snapshot. Multiple jobs help with this because, while one job is “between VMs” handling its housekeeping, the other job is likely to be transferring data.

There are also other things to consider. If you’re a 24×7 operation, you might not want to saturate your production storage just to get backups done. This is admittedly less of an issue with CBT-based incrementals, but it used to be a big deal with ESX 3.5 and earlier, and full backups can still impact your production storage. If I’m pulling 160 MB/sec from one of my older SATA EqualLogic arrays, its I/O latency shoots to 15-20 ms or more, which severely impacts server performance on that system. That might not be an issue if you’re not a 24×7 shop and you have a backup window where you can hammer your storage as much as you want, but it is certainly an issue for us. Obviously we have times that are quieter than others, and our backup windows coincide with our “quiet” time, but we’re a global manufacturer, so systems have to keep running and performance is important even during backups.


Finally, one thing often overlooked is the backup target. If you’re pulling data at 60 MB/sec, can you write the data that fast? Since Veeam is compressing and deduping on the fly, it can have a somewhat random write pattern even when it’s running fulls, but reverse incrementals are especially hard on the target storage, since they require a random read, a random write, and a sequential write for every block that’s backed up during an incremental. I see a lot of issues with people attempting to write to older NAS devices or 3-4 drive RAID arrays, which might have decent throughput but poor random access. This is not as much of an issue with fulls and the new forward incrementals in Veeam 5, but it still has some impact.


No doubt Veeam creates files a lot differently than most vendors. Veeam does not just create a sequential, compressed dump of the VMDK files.

Veeam’s file format is effectively a custom database designed to store compressed blocks and their hashes for reasonably quick access. The hashes allow for dedupe (blocks with matching hashes are the same), and there’s some added overhead to provide additional transactional safety, so that your VBK file is generally recoverable after a crash. That means Veeam files have a storage I/O pattern more like a busy database than a traditional backup file dump.
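As a purely conceptual sketch (not Veeam’s actual on-disk format), block-level dedupe of this kind boils down to keying compressed blocks by their hash and storing each unique block only once:

```python
import hashlib
import zlib

class DedupeStore:
    """Toy block-level dedupe store: each unique block is compressed and kept once,
    keyed by its hash; duplicate blocks only add a reference."""
    def __init__(self):
        self.blocks = {}   # hash -> compressed block data
        self.order = []    # hashes in write order (the logical "backup stream")

    def write_block(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self.blocks:            # new unique block: compress and store it
            self.blocks[digest] = zlib.compress(data)
        self.order.append(digest)                # duplicates cost only a reference
        return digest

    def read_block(self, digest: str) -> bytes:
        return zlib.decompress(self.blocks[digest])

store = DedupeStore()
store.write_block(b"\x00" * 4096)  # zeroed block
store.write_block(b"\x00" * 4096)  # identical block is deduped away
print(len(store.order), "blocks referenced,", len(store.blocks), "stored")  # 2 referenced, 1 stored
```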

If drug addiction ruins one lifetime, then photography as a hobby will bankrupt three generations (reposted)

By admin, October 28, 2010 10:00



12 days ago: I found my focal length wasn’t long enough, then discovered that the Nikon AF-S DX VR 18-200mm F3.5-5.6G IF-ED [5,600 RMB] could do it all with one lens. Perfect, so I bought it. The day it arrived I went bird shooting, and happened to run into some photographers shooting white cranes by the river. Heavens, they were using the Nikon AF-S 300mm F4D IF-ED; a quick ask put the price at [8,000 RMB]. One look at their photos and I knew that without one I could never take pictures that good, so I bled money again. There is also the Nikon AF-S 300mm F2.8 IF-ED II, but at over 30,000 I couldn’t afford it.



9 days ago: My company asked me to help photograph the whole production floor. I pulled out every lens I own and didn’t have a single ultra-wide. To show the bosses how good my photography was, I immediately bought a 12-24mm F4G IF-ED [8,200 RMB].




5 days ago: The enthusiasts’ photos are almost all shot with primes (zoom with your feet), with first-rate image quality. I had indeed spent a lot lately and money was tight, and supposedly primes don’t get used that often anyway, so I bought two third-party lenses: a Sigma 30mm F1.4 EX DC HSM [3,300 RMB] and a Tamron SP AF 180mm F3.5 Di LD-IF [8,300 RMB]. There really wasn’t a prime worth picking from the Tokina line-up.

4 days ago: I found the Sigma’s colours shift green and its sharpness and saturation are frustrating, while the Tamron renders everything in a flat, washed-out grey. Only then did I understand why the enthusiasts all use original-brand primes; they are worth every penny. So what now? Only original Nikon lenses from here on. I splashed out again on the Nikon AF DX Fisheye 10.5mm F2.8G ED [5,500 RMB], the Nikon Ai AF 18mm F2. [8,100 RMB] and the Nikon AF-S Micro NIKKOR 60mm F2.8G ED [5,500 RMB]. The one I truly did not dare buy was the Nikon PC-E NIKKOR 24mm f/3.5D ED, a tilt-shift lens that is wonderful for architecture, at over 20,000.

3 days ago: Still missing a long telephoto prime, I bought the Nikon Ai AF DC 135mm F2D [7,200 RMB]; together with the 300/F4 from a few days ago that should be enough. Something like the Nikon AF-S 600mm F4G ED VR, at over 90,000 on the street, I truly dare not buy.



Today: My girlfriend came to my place for the holidays. I opened her backpack and found that, besides the Canon kit I gave her, it now also held a 17-40/4.0 [4,800 RMB], a 50/1.8 [700 RMB], a 70-200/2.8L IS USM [13,000 RMB], plus assorted accessories and bags [2,800 RMB]. I was stunned on the spot; she had burned through all her scholarship money from the past few years. I phoned my little sister to come home and keep her future sister-in-law company, and she said she was at the camera mall looking at full-frame bodies. My ears rang and my head spun.







VMware KB1010184: Setting the number of cores per CPU in a virtual machine

By admin, October 26, 2010 23:05

Some operating system SKUs are hard-limited to run on a fixed number of CPUs. For example, Windows Server 2003 Standard Edition is limited to run on up to 4 CPUs. If you install this operating system on an 8-socket physical box, it runs on only 4 of the CPUs. The operating system takes advantage of multi-core CPUs so if your CPUs are dual core, Windows Server 2003 SE runs on up to 8 cores, and if you have quad-core CPUs, it runs on up to 16 cores, and so on.

Virtual CPUs (vCPU) in VMware virtual machines appear to the operating system as single core CPUs. So, just like in the example above, if you create a virtual machine with 8 vCPUs (which you can do with vSphere) the operating system sees 8 single core CPUs. If the operating system is Windows 2003 SE (limited to 4 CPUs) it only runs on 4 vCPUs.

Note: Remember that 1 vCPU maps onto a physical core not a physical CPU, so the virtual machine is actually getting to run on 4 cores.

This is an oversimplification, since vCPUs are scheduled on logical CPUs, which are hardware execution contexts. A logical CPU can be a whole CPU in the case of a single-core CPU, a single core in the case of CPUs with one thread per core, or just a thread in the case of a CPU with hyperthreading.

Consider this scenario:

In the physical world you can run Windows 2003 SE on up to 8 cores (using a 2-socket quad-core box), but in a virtual machine it can only run on 4 cores, because VMware tells the operating system that each CPU has only 1 core per socket.

VMware now has a setting which provides you control over the number of cores per CPU in a virtual machine.

This new setting, which you can add to the virtual machine configuration (.vmx) file, lets you set the number of cores per virtual socket in the virtual machine.

To implement this feature:

  1. Power off the virtual machine.
  2. Right-click on the virtual machine and click Edit Settings.
  3. Click Hardware and select CPUs.
  4. Choose the number of virtual processors.
  5. Click the Options tab.
  6. Click General, in the Advanced options section.
  7. Click Configuration Parameters.
  8. Include cpuid.coresPerSocket in the Name column.
  9. Enter a value (try 2, 4, or 8) in the Value column. Note: Ensure that the number of vCPUs in the virtual machine is divisible by cpuid.coresPerSocket; that is, dividing the number of vCPUs by cpuid.coresPerSocket must give an integer. For example, if your virtual machine is created with 8 vCPUs, coresPerSocket can only be 1, 2, 4, or 8. The virtual machine now appears to the operating system as having multi-core CPUs, with the number of cores per CPU given by the value you provided in this step.
  10. Click OK.
  11. Power on the virtual machine.

For example:

Create an 8 vCPU virtual machine and set cpuid.coresPerSocket = 2. Windows Server 2003 SE running in this virtual machine now uses all 8 vCPUs. Under the covers, Windows sees 4 dual-core CPUs. The virtual machine is actually running on 8 physical cores.
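To illustrate the divisibility rule from step 9 and the layout the guest ends up seeing, here is a small hypothetical Python helper; the function name and sample values are mine, not part of the KB.

```python
def cores_layout(num_vcpus, cores_per_socket):
    """Return the (sockets, cores per socket) layout the guest OS will see,
    or raise if the combination is invalid (vCPUs must divide evenly)."""
    if num_vcpus % cores_per_socket != 0:
        raise ValueError(
            f"{num_vcpus} vCPUs is not divisible by cpuid.coresPerSocket={cores_per_socket}")
    return num_vcpus // cores_per_socket, cores_per_socket

print(cores_layout(8, 2))   # (4, 2): 8 vCPUs appear as 4 dual-core sockets
print(cores_layout(8, 4))   # (2, 4): 8 vCPUs appear as 2 quad-core sockets
try:
    cores_layout(8, 3)      # invalid combination
except ValueError as e:
    print(e)                # 8 vCPUs is not divisible by cpuid.coresPerSocket=3
```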


  • Only values of 1, 2, 4, 8 for the cpuid.coresPerSocket are supported for the multi-core vCPU feature in ESX 4.0.
  • In ESX 4.0, if multi-core vCPU is used, hot-plug vCPU is not permitted, even if it is available in the UI.
  • Only hardware version 7 (HV 7) virtual machines support the multi-core vCPU feature.

Important: When using cpuid.coresPerSocket, you should always ensure that you are in compliance with the requirements of your operating system EULA (regarding the number of physical CPUs on which the operating system is actually running).

Update Apr 19

One good example is Windows Server 2003 Web Edition, which is limited to 2 CPU sockets, so if you assign 8 vCPUs it will only see 2. By setting cpuid.coresPerSocket = 4 and assigning 8 vCPUs, your server gets 2 CPU sockets with 4 cores each. This manually overrides the default and lets Windows Server 2003 Web Edition use all 8 CPUs (technically speaking, 8 cores), which was previously impossible before ESX 4.1. :)



The peerlessly beautiful Mercedes-Benz 500K Special Roadster

By admin, October 23, 2010 23:14






1/18 Maisto Mercedes-Benz 500K Type Special Roadster (1936)


A glory of racing history: Ford GT 40 MKII

By admin, October 23, 2010 16:40

The Ford GT 40 MK series once held an extremely important place in the history of world supercar racing. I especially like the MK II of the series: with its flat body almost hugging the ground and its two big round eyes, the MK II carries itself more like a South American cobra, waiting for its chance to KO an opponent at any moment.

This delPrado Ford GT MKII offers very good value for money. Given its age, the body inevitably has a few paint bubbles, but as a model to add to the collection it is still well worth it.


1/43 delPrado Ford GT 40 MKII

delPrado IMG_2952a

delPrado IMG_2952b

Feature Request to EqualLogic: please add READ ONLY access for a specific iSCSI initiator

By admin, October 22, 2010 22:55

From EqualLogic Support:

I believe our (Dell EqualLogic) intent was to restrict access at the volume level, not based on the ACL passed by the initiator. However, if you can express in writing exactly what you would like to be added as a feature, why you would like this feature, and how it would be beneficial to you within your environment, we will create an enhancement request on your behalf.

Just be aware that submitting an enhancement request does not guarantee your request will be honored. New features are added at the sole discretion of the engineering team and will be added based upon the needs of all customers and whether the underlying code can/will support the request.



My Reply:

It’s because many people use Veeam Backup to back up their VMs, and in order for Veeam to use the SAN offload feature (i.e., back up VMs directly from the SAN to the backup server without going through an ESX host), we need to present the VMFS volume directly from the EQL array to a Windows Server host. You may ask whether this will corrupt the VMFS volume that is concurrently mounted by the ESX hosts. No, it won’t, because before presenting the volume to the Windows host we issue automount=disable to make sure Windows Server won’t automatically initialize or format the volume by accident. (In fact, I found that the presented EqualLogic volume cannot be initialized under Disk Manager; everything is grayed out and it doesn’t show up in IOMeter either, but you can still Delete Volume under Windows Server 2008 Disk Manager. Strange!)
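For reference, here is a minimal sketch of how that precaution could be scripted on the Windows backup host before presenting the VMFS volume; it assumes the automount=disable step refers to diskpart’s automount command and that the script is run from an elevated prompt.

```python
import subprocess
import tempfile

# Disable Windows automatic volume mounting so a newly presented VMFS LUN
# is never auto-mounted (and never offered for initialization/format).
DISKPART_SCRIPT = "automount disable\nexit\n"

def disable_automount():
    """Run 'automount disable' via a diskpart script (Windows only, elevated prompt)."""
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write(DISKPART_SCRIPT)
        script_path = f.name
    subprocess.run(["diskpart", "/s", script_path], check=True)

if __name__ == "__main__":
    disable_automount()
```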

It would serve as a double-insurance feature if EQL could implement such a READ-ONLY restriction for a specific iSCSI initiator; it would greatly improve the protection of the attached volume when used for Veeam SAN offload backup.

I am sure many Veeam and EqualLogic users would love to see this feature added.

Please kindly consider adding this feature in a coming firmware release.

Thank you very much in advance!

The Peninsula Hotel’s Rolls Royce Phantom

By admin, October 22, 2010 13:12

When I got up this morning, things were unusually lively downstairs. It turned out to be my neighbour’s wedding day, complete with one of The Peninsula hotel’s huge Rolls Royces in attendance. A Phantom in authentic British green really has presence!




