Category: Others (其它)

Veeam B&R 5: Block Size Optimization and Reversed Incremental or Synthetic Full

By admin, December 20, 2010 12:31 pm

Saw these two pieces of useful information on Veeam’s forum today, mostly contributed by Tom:

Block Size Optimization

In the job properties, on the “Backup Destination” page, if you hit “Advanced” and then select the “Storage” tab, with V5 you can now select to optimize for Local disk, LAN target, or WAN target. What this is really doing is setting the “block size” of the VBK. Previous versions of Veeam always used 1MB blocks, which is now the equivalent of the “Local disk” option. LAN target uses 512KB blocks, and WAN target uses 256KB blocks. The smaller block sizes typically mean that incrementals are smaller and thus less data is transferred over the LAN or WAN, at the cost of some CPU. Because we push our backups across sites we always use the WAN target setting.
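To see why smaller blocks shrink incrementals, here is a rough sketch (my own illustration of the idea, not Veeam’s actual algorithm): any change dirties its whole containing block, and the incremental must carry every dirty block in full.

```python
import random

def incremental_bytes(changed_offsets, block_size):
    """Bytes in the incremental = number of distinct dirty blocks * block size."""
    dirty = {off // block_size for off in changed_offsets}
    return len(dirty) * block_size

# 1000 scattered small writes across a 100 GB disk (hypothetical workload)
random.seed(1)
writes = [random.randrange(100 * 2**30) for _ in range(1000)]

for label, bs in [("Local (1MB)", 2**20), ("LAN (512KB)", 2**19), ("WAN (256KB)", 2**18)]:
    print(f"{label}: {incremental_bytes(writes, bs) / 2**20:.0f} MB")
```

With writes scattered this sparsely, almost every write lands in its own block, so halving the block size roughly halves the incremental size; the trade-off is more blocks to hash and compress, hence the extra CPU.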

 

Reversed Incremental or Synthetic Full

As a general rule, reverse incrementals will use the least amount of space on disk but the most on tape, while the reverse is true of forward incrementals: they will use more disk space but significantly less tape space. The only exception might be if you have a fairly small retention period.

Assume you have a 100GB full backup with 10GB of changes a day, here’s how the space would break down assuming 4 weeks retention:

Reverse Incremental:
Disk — 100GB Full + 280GB (10GB/day * 28 days) of reverse incrementals = 380GB
Tape — 100GB VBK copied to tape every day * 28 days = 2.8TB

Forward Incremental w/Synthetic Full:
Disk — 400GB (100GB Full * 1 per week) + 240GB (10GB/day incrementals * 24 days) = 640GB
Tape — Same as disk, since you simply copy the full or incremental to tape every day = 640GB

So, the Forward Incremental/Synthetic option in this scenario would use ~70% more disk space, but less than 25% of the tape space. If you’re planning to keep only a short on-disk retention period and use tape for long-term storage, then forward incrementals will save space, but that’s about the only scenario where they will. For the best space savings with on-disk retention, reverse incrementals are the way to go, but at the cost of a large amount of tape space.
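The arithmetic above is easy to reproduce; a quick sketch using the same assumed numbers (100GB full, 10GB/day of change, 28 days of retention):

```python
full_gb, change_gb, days, weeks = 100, 10, 28, 4

# Reverse incremental: one full on disk plus one rollback per day;
# the full VBK gets copied to tape every day.
ri_disk = full_gb + change_gb * days      # 100 + 280 = 380 GB
ri_tape = full_gb * days                  # 100 * 28 = 2800 GB

# Forward incremental with weekly synthetic fulls: 4 fulls plus
# 24 daily incrementals; tape simply receives a copy of each file.
fi_disk = full_gb * weeks + change_gb * (days - weeks)  # 400 + 240 = 640 GB
fi_tape = fi_disk

print(ri_disk, ri_tape, fi_disk, fi_tape)  # 380 2800 640 640
print(f"extra disk: {fi_disk / ri_disk - 1:.0%}, tape ratio: {fi_tape / ri_tape:.0%}")
```

The last line confirms the claim in the text: roughly 68% more disk, and only about 23% of the tape.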

 

Personally I prefer reversed incrementals, because my main strategy is to get a full working backup onto tape every night, so that in an emergency I can get my files back from tape without needing to merge increments.

YY (Yonex)’s ISO Metric Truly Lives Up to Its Reputation

By admin, December 14, 2010 9:29 pm

Thanks to 六叔 (Uncle Six) and Will; we hit a lot of big shots today. My serve, normally my weakest stroke, finally came together and won me quite a few points. The takeaway is that you absolutely cannot be lazy: from the ball toss, to bending at the waist, to the weight transfer, to contact, every part demands effort. Also, last time a senior clubmate reminded me to toss the ball a little to the left, which makes it possible to hit wider angles with more spin; and on Sunday Wolf demonstrated how to use the forearm to wrap around the ball for extra spin and forward drive. All of this made today’s serves more stable and more powerful.

But my forehand groundstroke, normally my most reliable shot, kept breaking down once the real match started. It was fine during rallies, so it wasn’t the rhythm or the ball speed. Why, then? My take-back and follow-through were all there, yet the shot had no power and kept finding the net. I still can’t figure it out; maybe I simply wasn’t moving my feet fast enough. Tennis really is played with the legs.

Finally I tried Will’s new YY Ezone racquet. YY’s ISO Metric really does live up to its reputation: like the whole series of YY racquets I’ve tried before, it produces lots of spin and is easy to control, especially at the net and on approach shots, and the feel is superb!

Next time I get the chance I’ll definitely ask the senior clubmates who use YY for more of their opinions on it. I’m liking YY more and more, since I’ve been looking for a replacement for my POG all along. (The POG is a grandpa now, nearly 25 years old, but I just love its feel too much.)

 


讓子彈飛 (Let the Bullets Fly)

By admin, December 10, 2010 10:59 pm

This is quite possibly one of the most anticipated Chinese-language films of the year; the lead cast alone is star-studded!

Jiang Wen, Chow Yun-fat and Ge You are all heavyweight actors. Ge You in particular: who would have thought that in this single New Year season he would star in three blockbusters!


Regaining Confidence in My Serve

By admin, November 16, 2010 10:16 pm

Today’s exciting doubles matches ended on a happy note. I was surprised to find my serve working so effortlessly; most likely my ball toss was finally correct, which fundamentally changed both the power and the spin. More importantly, I adjusted mentally: I dropped the baggage and served as boldly as I could, and the result was night-and-day compared with yesterday. Add in more aggressive volleying at the net, and I’m genuinely pleased with my performance. The other three players on court can vouch for me.

Like a Lifetime Ago: Back from Hell!

By admin, November 14, 2010 11:19 pm

Three months have flown by without my noticing, and I have finally made it through this extremely tough stretch.

Starting in August the company needed to carry out a major system upgrade on a tight schedule, so during this period even sleep was precious: I was working nearly 13-15 hours a day, which definitely works out below the $28 minimum hourly wage. (Friends interested in IT/Network/Server topics can read the details on my blog.)

Sometimes I would run into tennis buddies on the street and could only give a wry smile. It was frustrating, but there was no choice: making a living comes first, tennis second.

Now I am back from Hell! Time to start a fresh tennis season!

I sincerely hope we can all enjoy Happy Tennis together again!

Singles or doubles, it doesn’t matter; the main thing is having fun! And don’t worry, I haven’t regressed much: I played my first proper match in three months last week and felt great.

Besides, these three months gave my mind an intensive “stress test,” so I should be even calmer under pressure on court now.

Life always has a few big ups and downs. As the government slogan says, “there are always more solutions than difficulties.” Face things calmly, add some hard work, and most obstacles can be overcome; and even if you fall short, there is no need for regret.

This time I also learned to let go of some long-held hang-ups, and it turns out you gain much more afterwards.

Just like in a tennis game, sometimes, you need to learn when to let go for a shot or even a game and prepare for the next one that may eventually lead to a victory in a match.

As the saying goes, “only by being driven to the brink do you find new life” (置之死地而后生). I understand this completely now. Of course, the process was painful, especially mentally; I think friends who have been through similar difficulties will know what I mean.

Inception

By admin, November 6, 2010 3:22 pm

From the boyish Jack in Titanic 13 years ago, to the explosive Billy in The Departed, to two similar recent roles, Teddy in Shutter Island and now Cobb in Inception, Leonardo DiCaprio just keeps getting more compelling. In barely a decade his acting has reached full maturity; talent like this is rare even in Hollywood.

As for the film itself: it is arguably the most thought-provoking movie since The Matrix, and it may well become the classic reference for this kind of subject matter.


Veeam Backup Space Required: Formula and Calculation

By admin, October 31, 2010 12:03 pm

Found this really useful information on Veeam’s forum (contributed by Anton), a great place to learn about the product; their staff are all very caring and warm.

The formulas we use for disk space estimation are the following:

Backup size = C * (F*Data + R*D*Data)
Replica size = Data + C*R*D*Data

Data = sum of processed VMs size (actually used, not provisioned)

C = average compression/dedupe ratio (depends on too many factors, compression and dedupe can be very high, but we use 50% – worst case)

F = number of full backups in retention policy (1, unless periodic fulls are enabled)

R = number of rollbacks according to retention policy (14 by default)

D = average amount of VM disk changes between cycles in percent (we use 10% right now, but will change it to 5% in v5 based on feedback… reportedly for most VMs it is just 1-2%, but active Exchange and SQL can be up to 10-20% due to transaction logs activity – so 5% seems to be good average)
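As a sketch, the formulas above translate directly into a few lines of Python (the 1 TB input below is a hypothetical example, and the defaults mirror the values quoted in the post):

```python
def backup_size(data_gb, c=0.5, f=1, r=14, d=0.10):
    """Estimated backup size on disk: C * (F*Data + R*D*Data)."""
    return c * (f * data_gb + r * d * data_gb)

def replica_size(data_gb, c=0.5, r=14, d=0.10):
    """Estimated replica size: Data + C*R*D*Data."""
    return data_gb + c * r * d * data_gb

# 1 TB of actually-used VM data, all defaults:
# 50% compression/dedupe, 1 full, 14 rollbacks, 10% daily change
print(backup_size(1000))   # 1200.0 GB
print(replica_size(1000))  # 1700.0 GB
```

Dropping D to the 5% that v5 is said to adopt brings the backup estimate down to 850 GB for the same data set, which shows how sensitive the sizing is to the change-rate assumption.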

Some interesting findings about Veeam Backup v5

By admin, October 29, 2010 11:56 am

The following are my own findings about Veeam Backup v5:

  • If you have more than 30-50 VMs (averaging 20-30GB each) to back up, it’s better to set up a 2nd Veeam Backup server to load-balance the jobs. Eventually you may have a number of load-balanced Veeam Backup servers to spread the load out evenly, which greatly reduces queue time and shortens the total backup window. Don’t worry: all Veeam Backup servers are managed centrally by Veeam Enterprise Manager (and the best part is that additional Veeam Backup servers DO NOT count towards your Veeam licenses; how thoughtful and caring Veeam is, thank you!)
  • Veeam recommends creating 3-4 jobs per Veeam Backup server because of the high CPU usage during the backup window for de-duplication and compression. However, this doesn’t apply to Replication, which uses neither de-duplication nor compression. Note: there is another way to fully utilize your one and only Veeam Backup server by running more than 4 concurrent jobs. (See the quote below.)
  • The “Estimated required Space” within the job description is incorrect. It’s a known bug that the VMware API (or Veeam) doesn’t yet know how to interpret thin-provisioned volumes, so estimate it yourself. Most of the time it will over-estimate the total required backup space by 3-10x, so don’t worry!
  • After each job completes, you will see a processing rate (for example, 251 MB/s), which is the total size of the VMs divided by the total time it took to process them. This is affected by the time it takes to talk to vCenter, freeze the guest OS, create and remove the snapshot, back up small VM files (configuration), and back up the actual disks.

I am still not sure whether the target storage requires a very fast I/O or high-IOPS device, such as 4-12 10K/15K SAS disks in RAID10/RAID50. Some say the backup process with de-duplication and compression is random; others say it’s sequential. If it’s random, we need fast spindles and lots of disks; if it’s sequential, then cheap 7200RPM SATA disks suffice, and a 4-disk RAID 5 of 2TB drives would be the most cost-effective solution for storing the vbk/vib/vrb images.

Some interesting comments quoted from Veeam’s forum relating to my findings:

That’s why we run multiple jobs. Not only that, but when doing incremental backups, a high percentage of the time is spent simply prepping the guest OS, taking the snapshot, removing the snapshot, etc. With Veeam V5 you get some more “job overhead” if you use the indexing feature, since the system has to build the index file (which can take quite some time on systems with large numbers of files) and then back up the zipped index via the VM tools interface. This time is all included in the final “MB/sec” figure for the job. That means that if you only have a single job running, there will be lots of “down time” where no transfer is really occurring, especially with incremental backups, because relatively little data is transferred for most VMs compared to the time spent taking and removing the snapshot. Multiple jobs help with this because, while one job may be “between VMs” handling its housekeeping, the other job is likely to be transferring data.

There are also other things to consider as well. If you’re a 24×7 operation, you might not really want to saturate your production storage just to get backups done. This is admittedly less of an issue with CBT-based incrementals, but it used to be a big deal with ESX 3.5 and earlier, and full backups can still impact your production storage. If I’m pushing 160MB/sec from one of my older SATA Equallogic arrays, its I/O latency will shoot to 15-20ms or more, which severely impacts server performance on that system. It might not be an issue if you’re not a 24×7 shop and you have a backup window where you can hammer your storage as much as you want, but it is certainly an issue for us. Obviously we have times that are quieter than others, and our backup windows coincide with our “quiet” time, but we’re a global manufacturer, so systems have to keep running and performance is important even during backups.

 

Finally, one thing often overlooked is the backup target. If you’re pulling data at 60MB/sec, can you write the data that fast? Since Veeam is compressing and deduping on the fly, it can have a somewhat random write pattern even when it’s running fulls, but reverse incrementals are especially hard on the target storage, since they require a random read, a random write, and a sequential write for every block that’s backed up during an incremental. I see a lot of issues with people attempting to write to older NAS devices or 3-4 drive RAID arrays, which might have decent throughput but poor random access. This is not as much of an issue with fulls and the new forward incrementals in Veeam 5, but it still has some impact.

 

No doubt Veeam creates files a lot differently than most vendors. Veeam does not just create a sequential, compressed dump of the VMDK files.

Veeam’s file format is effectively a custom database designed to store compressed blocks and their hashes for reasonably quick access. The hashes allow for dedupe (blocks with matching hashes are the same), and there’s some added overhead to provide additional transactional safety, so that your VBK file is generally recoverable after a crash. That means Veeam files have a storage I/O pattern more like a busy database than a traditional backup file dump.

If Drugs Ruin One Life, Then Photography Bankrupts Three Generations (Repost)

By admin, October 28, 2010 10:00 am

14 days ago: Tomorrow is the birthday of my girlfriend, who is still studying out of town, so I bought a Canon 50D kit [7,800 RMB] to take some commemorative photos. New camera in hand, I went straight to the park for a test shoot. An old man photographing lotus flowers there told me Canon’s images are too soft and Nikon is sharper, and then proved it on the spot. Instant regret.

13 days ago: I simply gave the Canon to my girlfriend as her present, and on the way home bought myself a Nikon D300s kit with the 16-85mm [14,800 RMB], plus a laptop [5,400 RMB]; otherwise there would be nowhere to view the photos.

12 days ago: The focal length wasn’t long enough, so I discovered the Nikon AF-S DX VR 18-200mm F3.5-5.6G IF-ED [5,600 RMB], one lens for everything. Bought it, felt great, and went birding the same day. By the river I happened upon some photographers shooting cranes, and good heavens, they were using the Nikon AF-S 300mm F4D IF-ED. I asked the price [8,000 RMB], then looked at their photos; how could I ever take shots that good without buying one? More money gone. There is also a Nikon AF-S 300mm F2.8 IF-ED II, but at over 30,000 I can’t afford it.

11 days ago: A few photo friends I met the other day were shooting models today and said the 85/1.4D [7,000 RMB] is excellent and would reward me well if I brought one, so I bought it. At the shoot I found it rather difficult to fit a full-length shot of the model in the frame. Very annoyed.

10 days ago: My little sister asked me to take some photos for her album. I thought a 50/1.4 [4,000 RMB] would be perfect, so I rushed out and bought one. Sure enough, it covered more than yesterday’s lens, but the images still felt a bit soft.

9 days ago: Work asked me to help photograph the entire production floor. I pulled out all my lenses and didn’t have a single ultra-wide. To show the bosses how capable my photography is, I immediately bought a 12-24mm/F4G IF-ED [8,200 RMB].

8 days ago: Today I joined the photography association and some photography groups, and browsed the photography sites [攝影無忌], [太平洋攝影], [大眾攝影], [車壇影協], [新攝影], [蜂鳥网], [中國攝影家], [路客驢舍], [橡樹攝影], and so on. I discovered one thing: full-frame cameras have a big image-quality advantage, with yesterday’s factory-floor shoot as the prime example. A Nikon D700 [13,800 RMB] with the 12-24mm/F4G IF-ED would do away with the 1.5x crop factor, and with that super-high ISO everything would be solved. After studying online, I concluded that adding a full-frame D700 could not be wrong. So I gave the D300s kit to my sister.

7 days ago: Mixing with the photography association folks, I heard that the D700 with the 12-24, 24-70 and 70-200 is the world’s most perfect combination. Now that I’m on a full-frame SLR, how could I not have the pro glass? They’re so expensive that I settled for grey-market copies: a 24-70/2.8 [12,000 RMB] and a 70-200/2.8 [14,500 RMB].

6 days ago: Bought quite a few photography books and subscribed to a number of magazines [1,200 RMB]. I read them before and after work, studied photo-editing software, and got to know some photography enthusiasts.

5 days ago: The enthusiasts’ photos are almost all shot with primes, zooming with their feet, and the image quality is absolutely first-rate. I really have spent a lot these days and money is tight, and primes supposedly get less use anyway, so I bought two third-party ones: a Sigma 30mm F1.4 EX DC HSM [3,300 RMB] and a Tamron SP AF 180mm F3.5 Di LD-IF [8,300 RMB]. In the Tokina lineup I truly could not pick out a prime.

4 days ago: Discovered that the Sigma’s colors lean green, with troublesome sharpness and saturation, and the Tamron’s shots come out flat and washed-out grey. Now I finally understand why the enthusiasts all use original-brand primes: truly worth the price. So what now? Only first-party lenses again. Bought a Nikon AF DX Fisheye 10.5mm F2.8G ED [5,500 RMB], a Nikon Ai AF 18mm F2. [8,100 RMB], and a Nikon AF-S Micro NIKKOR 60mm F2.8G ED [5,500 RMB]. The one I truly dared not buy is the Nikon PC-E NIKKOR 24mm f/3.5D ED, a tilt-shift lens that is wonderful for architecture, at over 20,000.

3 days ago: Still missing a long telephoto prime, so I bought the Nikon Ai AF DC 135mm F2D [7,200 RMB]. Together with the 300/F4 from a few days ago, that should be enough; something like the Nikon AF-S 600mm F4G ED VR I truly dare not buy, at over 90,000 on the market.

2 days ago: Heard that lenses grow fungus easily if not stored dry, so I bought an electronic dry cabinet, a full month’s salary [4,000 RMB]. At noon I heard that 路客驢舍 was organizing an outdoor hiking, camping and photography outing, so I packed up my camera gear and camping equipment and headed out.

Yesterday: After the previous day’s outing, the camera gear was so heavy that I strained my neck and fell ill. The hospital visit plus therapeutic massage cost [250 RMB]. On the way home from the hospital I dropped by the camera mall and described my condition to the shopkeeper, who promptly talked me into a Canon G11 compact [4,350 RMB]. There were also a Leica M8 at over 20,000 and a Leica M9 at over 60,000, but my wallet just didn’t hold that much.

Today: My girlfriend came to my place for the holidays. I opened her backpack, and besides the Canon kit I had given her, there were a 17-40/4.0 [4,800 RMB], a 50/1.8 [700 RMB], a 70-200/2.8L IS USM [13,000 RMB], plus assorted accessories and bags [2,800 RMB]. I was stunned on the spot: she had spent all her scholarship money from these past years. I phoned my sister to come home and keep her future sister-in-law company; she said she was at the camera mall looking at full-frame cameras. My ears rang and my head spun.

 

PS: Hard work pays off. This afternoon an editor friend from a publishing house, whom I hadn’t heard from in over a year, came by to copy some of my photos to print in a book, which genuinely delighted me. As he was leaving I asked what book it was. He said: “Wrong Exposure and Composition: An Illustrated Casebook.” I wanted to run him over with a tripod.

Tomorrow: At my current repayment capacity of 2,000 RMB a month, I will need N years to pay off the loans from my friends and the bank. Of course I’m hoping for a raise soon; I still haven’t bought the Nikon D3X [45,000 RMB], and someone just introduced me to the Mamiya DM28 medium-format digital camera [around 110,000].

The future: I too will go around preaching the joys of shooting with an SLR.

 

PS: A year later, I married my girlfriend and we moved away for work. The family home was demolished, and my 70-year-old grandmother sold every camera and lens in the house to the scrap-metal collector at 0.5 RMB per jin. The only thing left was the Canon 50/1.8; the scrap dealer said it was all plastic and refused to take it!

VMware KB1010184: Setting the number of cores per CPU in a virtual machine

By admin, October 26, 2010 11:05 pm

Some operating system SKUs are hard-limited to run on a fixed number of CPUs. For example, Windows Server 2003 Standard Edition is limited to run on up to 4 CPUs. If you install this operating system on an 8-socket physical box, it runs on only 4 of the CPUs. The operating system takes advantage of multi-core CPUs, so if your CPUs are dual-core, Windows Server 2003 SE runs on up to 8 cores; if you have quad-core CPUs, it runs on up to 16 cores, and so on.

Virtual CPUs (vCPU) in VMware virtual machines appear to the operating system as single core CPUs. So, just like in the example above, if you create a virtual machine with 8 vCPUs (which you can do with vSphere) the operating system sees 8 single core CPUs. If the operating system is Windows 2003 SE (limited to 4 CPUs) it only runs on 4 vCPUs.

Note: Remember that 1 vCPU maps onto a physical core not a physical CPU, so the virtual machine is actually getting to run on 4 cores.

This is an oversimplification, since vCPUs are scheduled on logical CPUs, which are hardware execution contexts. A logical CPU can be a whole CPU in the case of a single-core CPU, a single core in the case of CPUs that have only 1 thread per core, or just a thread in the case of a CPU with hyperthreading.

Consider this scenario:

In the physical world you can run Windows 2003 SE on up to 8 cores (using a 2-socket quad-core box), but in a virtual machine it can only run on 4 cores, because VMware tells the operating system that each CPU has only 1 core per socket.

VMware now has a setting which provides you control over the number of cores per CPU in a virtual machine.

This new setting, which you can add to the virtual machine configuration (.vmx) file, lets you set the number of cores per virtual socket in the virtual machine.

To implement this feature:

  1. Power off the virtual machine.
  2. Right-click on the virtual machine and click Edit Settings.
  3. Click Hardware and select CPUs.
  4. Choose the number of virtual processors.
  5. Click the Options tab.
  6. Click General, in the Advanced options section.
  7. Click Configuration Parameters.
  8. Include cpuid.coresPerSocket in the Name column.
  9. Enter a value (try 2, 4, or 8) in the Value column. Note: Ensure that the number of vCPUs in the virtual machine is divisible by cpuid.coresPerSocket; that is, when you divide the number of vCPUs by cpuid.coresPerSocket, it must return an integer value. For example, if your virtual machine is created with 8 vCPUs, coresPerSocket can only be 1, 2, 4, or 8. The virtual machine then appears to the operating system as having multi-core CPUs, with the number of cores per CPU given by the value that you provided in this step.
  10. Click OK.
  11. Power on the virtual machine.

For example:

Create an 8 vCPU virtual machine and set cpuid.coresPerSocket = 2. Windows Server 2003 SE running in this virtual machine now uses all 8 vCPUs. Under the covers, Windows sees 4 dual-core CPUs. The virtual machine is actually running on 8 physical cores.
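For reference, the relevant entries in the virtual machine’s .vmx file for this example would look something like the following (a sketch only; the rest of the file varies per VM, and the Configuration Parameters dialog writes the second line for you):

```
numvcpus = "8"
cpuid.coresPerSocket = "2"
```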

Note:

  • Only values of 1, 2, 4, 8 for the cpuid.coresPerSocket are supported for the multi-core vCPU feature in ESX 4.0.
  • In ESX 4.0, if multi-core vCPU is used, hot-plug vCPU is not permitted, even if it is available in the UI.
  • Only hardware version 7 (HV 7) virtual machines support the multi-core vCPU feature.

Important: When using cpuid.coresPerSocket, you should always ensure that you are in compliance with the requirements of your operating system EULA (regarding the number of physical CPUs on which the operating system is actually running).

Update Apr 19

One good example is Windows Server 2003 Web Edition, which is limited to 2 CPU sockets, so if you assign 8 vCPUs it will only see 2. By setting cpuid.coresPerSocket = 4 and assigning 8 vCPUs, your server will have 2 CPU sockets with 4 cores each. This manually overrides the default and lets Windows Server 2003 Web Edition use 8 CPUs (technically speaking, 8 cores), which was impossible before ESX 4.1. :)
