Noticeable Improvement in My Serve and Baseline Game

By admin, April 22, 2011 11:40 am

Recently I've played singles with quite a few friends, and they all agree my game has become steadier. The most obvious change is my serve: fewer than 3 double faults per set, and my second serve is noticeably more consistent than before, probably because I switched to a semi-backhand grip and added more spin. I'm now even trying to hit more wide-angle kick serves.

On the baseline, I'm much calmer now against heavy topspin and flat hitting, and I try to do the full footwork so I have enough time and fewer late hits; rallies of a dozen or more shots are now comfortable. But against attacks loaded with heavy under- and sidespin on both wings I'm still helpless. The only downside is the growing pressure on my knees, which have started to ache. Must be the age, haha...

To sum up, the recent improvement mostly comes down to a change in mindset plus more complete, fluid strokes: stay concentrated, don't waste energy or hit wildly, and have a plan. Above all, focus is absolutely crucial. When I manage to really focus, even the ball seems to slow down. Truly magical, and such fun!

What’s New in vSphere 5.0

By admin, April 22, 2011 11:14 am

Content has been removed by the request of VMware on Apr. 28, 2011.

How to get ESX MPIO working on StarWind iSCSI SAN

By admin, April 21, 2011 10:23 pm

Anton (CTO of Starwind) gave me a great gift last night (StarWind v5.6 Ent), thanks!  I couldn’t wait to set it up and do some tests on this latest toy!

The setup is very easy and took me less than 5 minutes, probably because I had installed the previous v4.x back in 2009. Configuring it to my own taste is a bit tricky, though, as you need to tweak starwind.cfg and understand the first few parameters, especially under the <Connection> section.

It took me about an hour to get everything working (ie, ESX MPIO + StarWind), since I wanted to limit the NICs to the iSCSI subnet only and change the default iSCSI port to 3268. Yes, you can use a non-default port such as 3268; my 3260 was already occupied by Microsoft iSCSI Target 3.3. I also found that the default installation opens the management and iSCSI ports (3261/3260) to the public in the firewall; you definitely want to close that off and restrict NIC access in the StarWind management console as well as in the .cfg file.

So I configured two Gbit NICs on the StarWind box:

10.0.8.2:3268
10.0.8.3:3268

Each ESX host has 4 Gbit NICs on the iSCSI subnet. I added one of the target IPs, 10.0.8.2:3268, and found that ONLY 4 MPIO paths were discovered instead of 8, all of them using the 10.0.8.2 path. This meant the redundant path via 10.0.8.3:3268 was not being used at all, so MPIO was not working, technically speaking. By contrast, Microsoft iSCSI Target advertises the other portal (10.0.8.3:3268) automatically, so it correctly shows 8 paths.

After searching the StarWind forum with Google (yes, using the site: operator, so powerful), I quickly located the problem in starwind.cfg:

You can do normal ESX multipathing with StarWind without the HA cluster feature of StarWind 5; just follow the instructions for configuring StarWind to work with Xen and uncomment the <iScsiDiscoveryListInterfaces value="1"/> line in the starwind.cfg file. This allows ESX to see all the possible paths to the iSCSI target on the server.
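For reference, the relevant section of starwind.cfg looks roughly like this. This is a sketch from memory: only the iScsiDiscoveryListInterfaces line is confirmed above; the surrounding parameters vary by version, so check your own file.

```xml
<Connection>
    <!-- ...port/interface parameters for your iSCSI subnet go here
         (in my case, bound to 10.0.8.2/10.0.8.3 on port 3268)... -->

    <!-- Ships commented out; uncomment so initiators (ESX included)
         discover targets on ALL listed interfaces, not just the one
         they logged in through -->
    <iScsiDiscoveryListInterfaces value="1"/>
</Connection>
```

Restart the StarWind service after editing for the change to take effect.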

After enabling it and restarting the StarWind service, bingo! Everything worked as expected: 8 MPIO paths showing Active (I/O). This tweak works for ESX as well, not just Xen, and in fact it is a MUST to enable it in order to see all paths.

So within the last 3 days, I was able to add two software iSCSI SANs to my VMware environment alongside the Equallogic, so I now have three SANs to play with. I will try Storage vMotion between all three and run some interesting benchmarks on StarWind as well as Microsoft iSCSI Target.

Later, I will try to configure StarWind HA mode inside a VM (which is hosted on the Equallogic), so it will be an iSCSI SAN within another iSCSI SAN. :)

Finally Pulled the Trigger

By admin, April 21, 2011 7:10 pm

The day before yesterday I finally called the model-car shop and ordered that Autoart 1967 Ford Mustang GT390 in gold. It's entirely because my neighbor's 1:1 maroon real thing is simply too captivating!

Passing by the Lancel handbag shop today, I suddenly saw this. What a coincidence! It turns out I really do have a soft spot for supercars with good-looking rear ends, and now even a classic American muscle car has won me over!

[photo: the car]

Official picture from Lancel web site:

[photo]

Equallogic PS Series Firmware Version 5.0.5 Released

By admin, April 21, 2011 4:02 pm

As usual, I will wait at least a month before taking the firmware update, and probably won't update at all since none of the following issues affect me.

Issues corrected in this version (v5.0.5) are described below:

• In rare cases, a failing drive in an array may not be correctly marked as failed. When this occurs, the system is unable to complete other I/O operations on group volumes until the drive is removed. This error affects PS3000, PS4000, PS5000X, PS5000XV, PS5500, PS6000, PS6010, PS6500, and PS6510 arrays running Version 5.0 of the PS Series Firmware.

I thought this had been fixed in v5.0.4, whose fix list says "Drives may be incorrectly marked as failed." So basically, a drive that should be failed gets marked as healthy, while a healthy drive gets marked as failed. Wow, interesting! :)

• A resource used by an internal process during the creation of new volumes may be exhausted, causing the process to restart.

• If an array at the primary site in a volume replication relationship is restarted while the replication of the volume is paused, resuming replication could cause an internal process to restart at the secondary site.

• A resource used by the network management process could be exhausted, causing slow GUI response.

• Volume login requests in clustered host environments could time out, resulting in the inability of some hosts to connect to the shared volume.

• A management process could restart while attempting to delete a local replica snapshot of a volume, resulting in slow array response at the primary site for that volume.

• When a management process is restarted, or a member array is restarted, a volume that is administratively offline could be brought online.

• If a member of the secondary site restarts while a volume replication is active, the group at the primary site could continue to report that the secondary site group is offline after the secondary site member is back online.

How to Extend VM partition under Linux (CentOS)

By admin, April 21, 2011 8:12 am

I often extend partitions live (without downtime) for Windows VMs using either diskpart or extpart from Dell, but extending a partition under Linux is a totally different thing; it's a bit complex if you come from the Windows world.

  1. Increase the disk from the vCenter GUI and reboot the server. (Take a snapshot first.)
  2. ls -al /dev/sda* to find the last created partition; sda2 is the one in my case.
  3. fdisk /dev/sda, type n for a new partition, then p and 3 for the partition number (ie, sda3), accept the default first and last cylinders, and finally w to complete the partition creation; finish with a reboot.
  4. pvcreate /dev/sda3 to create a new Physical Volume.
  5. vgextend VolGroup00 /dev/sda3 to add this new volume to the default Volume Group VolGroup00.
    (Note: vgextend cl /dev/sda3 in CentOS 7)
  6. vgdisplay to show the Free PE (size of the free disk space), lvdisplay to show the volume name.
  7. Extend the volume with lvextend -L +XXG /dev/VolGroup00/LogVol00; you can find the exact path of the default Logical Volume with lvdisplay. (ie, lvextend -L +20G…)
    (Note: lvextend -L +XXG /dev/cl/root in CentOS 7)
  8. Resize the file system with resize2fs /dev/VolGroup00/LogVol00 to complete the whole process. (If everything works, remove the snapshot.)

Update: May 15, 2017
For CentOS 7, use xfs_growfs /dev/cl/root instead, as it uses the XFS file system rather than the traditional ext3/4 file systems; the Volume Group and Logical Volume names have also changed to cl (was VolGroup00) and root (was LogVol00).
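The steps above can be sketched as one command sequence (a sketch for CentOS 5/6 with the default VolGroup00/LogVol00 names; the +20G figure is an example, and on CentOS 7 substitute cl/root and xfs_growfs as noted above). These commands modify disks, so run them only on a snapshotted test VM:

```shell
# After growing the virtual disk in the vCenter GUI and rebooting:
ls -al /dev/sda*                            # find the last existing partition (sda2 here)
fdisk /dev/sda                              # n, p, 3, accept default cylinders, then w
reboot                                      # re-read the new partition table

pvcreate /dev/sda3                          # new Physical Volume on the new partition
vgextend VolGroup00 /dev/sda3               # add it to the default Volume Group
vgdisplay                                   # check "Free PE / Size" for available space
lvextend -L +20G /dev/VolGroup00/LogVol00   # grow the Logical Volume (example size)
resize2fs /dev/VolGroup00/LogVol00          # grow the ext3/4 file system online
```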

Reset and Update a Dead DRAC III on a PowerEdge 2650 in a CentOS Environment

By admin, April 20, 2011 5:08 pm

RMC Webserver 2.0: error 304 occured

The above is the error message you get when trying to connect to the DRAC III Web UI on a PowerEdge 2650. The old DRAC isn't very stable; it just crashes without any reason from time to time.

To reset it, the method is quite simple: install Dell OpenManage 5.5 on CentOS, then issue the following command and wait 30 seconds before logging in again.

> racadm racreset

Btw, you can view the DRAC's information with

> racadm getsysinfo

RAC Information:
RAC Date/Time         = Wed, 20 Apr 2011 16:54:27 GMT+08:00
Firmware Version      = 3.37 (Build 08.13)
Firmware Updated      =
Hardware Version      = A04
Current IP Address    = 10.0.0.22
Current IP Gateway    = 10.0.0.2
Current IP Netmask    = 255.255.255.0
DHCP enabled          = FALSE
Current DNS Server 1  =
Current DNS Server 2  =
DNS Servers from DHCP = FALSE
PCMCIA Card Info      = N/A

System Information:
System ID    = 0121h
System Model = PowerEdge 2650
BIOS Version = A21
Asset Tag    =
Service Tag  = XXXXXXXX
Hostname     =
OS name      = Linux 2.6.18-92.el5
ESM Version  = 3.37

Watchdog Information:
Recovery Action         = No Action
Present countdown value = 0
Initial countdown value = 6553

RAC Firmware Status Flags:
Global Reset Pending Flag = 0

Since the DRAC III Firmware Version 3.37 (Build 08.13) is quite old, I wanted to update it to the latest 3.38, A00 (the release notes say it fixes a remote console bug, so it's worth the update). All you need to do is download the hard-disk version, extract firmimg.bm1 to your TFTP root, then log in to the DRAC again, select the Update tab, upload the firmware, and wait a few minutes for the whole update to complete.

Contact

By admin, April 20, 2011 2:15 pm

Ellie Arroway

“I’ll tell you one thing about the universe, though. The universe is a pretty big place. It’s bigger than anything anyone has ever dreamed of before. So if it’s just us… seems like an awful waste of space. Right?”


“The operation is not supported on the object” encountered when deploying a VM from a Template

By admin, April 19, 2011 2:55 pm

Today, when deploying a CentOS VM from a template, I encountered an error:

Reconfigure virtual machine Status showing “The operation is not supported on the object”

I googled around and found nothing, then realized it was probably something to do with the hardware configuration. I checked the VMDK descriptor file and found ddb.adapterType = “lsilogic”; after removing that line, everything went back to normal. Of course, I updated my template as well. It happened because the CentOS template VM's disk controller had been changed while the old setting was somehow still kept in the descriptor.
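If you hit the same thing, the stale line can be stripped from the descriptor with sed. This is a minimal sketch against a mock descriptor in /tmp (the file name and contents are illustrative; the real descriptor lives in the template's datastore folder, so work on a copy and keep a backup):

```shell
# Mock up a minimal VMDK descriptor to demonstrate the edit
cat > /tmp/template.vmdk <<'EOF'
# Disk DescriptorFile
version=1
createType="vmfs"
ddb.adapterType = "lsilogic"
ddb.geometry.cylinders = "2610"
EOF

# Delete the stale adapterType entry; -i.bak keeps the original as .bak
sed -i.bak '/ddb.adapterType/d' /tmp/template.vmdk

# Show the cleaned descriptor
cat /tmp/template.vmdk
```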

I also discovered that deploying a Linux VM somehow adds a new NIC; the solution is to remove the nic.bak and reconfigure the IP on the new eth0.

Update Jun-21-2011

I encountered the same problem today when deploying from a w2k8r2 template, and the annoying alert simply wouldn't go away. Luckily, I found the solution by trial and error: simply converting the Template to a VM, then back to a Template, solved the problem completely. I suspect this is a bug in ESX 4.1; the original template was cloned from a running VM, maybe that's why!


It’s REAL, Microsoft iSCSI Software Target 3.3 is FREE now!

By admin, April 18, 2011 2:06 pm

This is probably one of the most exciting pieces of news from Microsoft for the virtualization community in recent years! It's bad news for the many others offering similar products at high cost, like StarWind, SANmelody, etc., and it's even sadder news for those who purchased W2K8R2 storage servers from OEMs like Dell, HP and IBM a year ago.

On Apr 8, Microsoft made its iSCSI Target software (originally WinTarget) FREE! So go grab it and use it with your VMware ESX. :) Sure, it doesn't have many fancy features compared to Equallogic, but it's FREE, it supports ESX and Hyper-V as well as Xen, and it comes with scheduled snapshots, so there is really nothing to complain about.

Setup is very simple. Just make sure you DE-SELECT the non-iSCSI NIC interfaces and leave only the iSCSI ones, or you risk opening your iSCSI SAN to the world. Also double-check that the firewall settings have Public access disabled; somehow, after the default installation, Public access to the iSCSI Target is enabled. Huh? Ouch!

I got ESX 4.1 to connect to the MS iSCSI Target without any problem, and even went as far as changing the multipathing policy to EQL_PSP_EQL_ROUTED, haha... guess what, it apparently worked, as all paths showed Active (I/O), but obviously it didn't really work: I later found there was no disk to mount under Storage, which was expected. That leaves Round Robin (VMware) as the best choice for the MPIO setting. Finally I loaded IOmeter, and MPIO did shoot up to 60-80MB/s across my two active links. Not bad, considering the underlying RAID-5 has only 4 disks on a PERC H700, and the good news is the CPU load of WinTarget.exe is very low, almost 0%.
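For reference, switching a device to Round Robin from the ESX 4.1 service console looks roughly like this (the naa.* device ID is a placeholder for your own LUN; note that the esxcli namespace changed in later ESXi releases):

```shell
# List devices with their current path selection policy (PSP)
esxcli nmp device list

# Set Round Robin (VMware) on a specific device
# (replace naa.xxxxxxxxxxxxxxxx with the device ID from the listing above)
esxcli nmp device setpolicy --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR
```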

FAQ
Q: The Microsoft iSCSI Software Target is now free. Is it supported in a production environment?
A: Yes. The Microsoft iSCSI Software Target is supported in a production environment. The Hyper-V team regularly tests with the MS iSCSI Software Target and it works great with Hyper-V.

Q: On what operating systems is the Microsoft iSCSI Software Target supported?
A: The Microsoft iSCSI Software Target is supported on Windows Server 2008 R2 Standard, Enterprise and Datacenter Editions, with or without SP1 (in fact, SP1 is what is recommended), and it can only be installed on a Windows Server 2008 R2 Full install, not a Core install.

The Microsoft iSCSI Software Target 3.3 is provided only in an x64 (64-bit) version.

Q: Can the free Microsoft Hyper-V Server 2008 R2 use the free Microsoft iSCSI Software Target?
A: Yes and No. Yes, Microsoft Hyper-V Server 2008 R2 can act as a client to access virtual machines via iSCSI. The way to do that is to type iscsicpl.exe at the command prompt to bring up the Microsoft iSCSI Initiator (client) and configure it to access an iSCSI Target (server). However, you can’t install the Microsoft iSCSI Software Target on a Microsoft Hyper-V Server. The Microsoft iSCSI Software Target requires Windows Server 2008 R2.

Q: Can I use the Microsoft iSCSI Software Target 3.3 as shared storage for a Windows Server Failover Cluster?
A: Yes. That is one of its most common uses.

To download the Microsoft iSCSI Software Target 3.3 for Windows Server 2008 R2, go to http://www.microsoft.com/downloads/en/details.aspx?FamilyID=45105d7f-8c6c-4666-a305-c8189062a0d0 and download the single file called “iSCSITargetDLC.EXE”.

Finally, make sure you read and understand the Scalability Limits!
