Raj2796's Blog

September 29, 2009

HP Storage roadmap

Filed under: eva,san — raj2796 @ 2:53 pm


Just saw an article on http://www.theregister.co.uk by Chris Mellor, posted in Storage, that I found interesting:

HP’s EVA arrays will get thin provisioning and automated LUN migration, while LeftHand’s SAN software will be ported to the XEN and Hyper-V platforms, according to attendees at an HP Tech Day event.

This HP StorageWorks event took place in Colorado Springs yesterday with an audience of invited storage professionals. Several of them tweeted from the event, revealing what HP was saying.

HP commented that it thought its EVA mid-range array was an excellent product but hadn’t got the sales it deserved, with only around 70,000 shipped.

Currently EVA LUNs can be increased or decreased in size without stopping storage operations. This is suited to VMware operations, with the vSphere API triggering LUN size changes.

LUN migration, possibly sub-LUN migration, with EVAs is set to become non-disruptive in the future, according to attendees Devang Panchigar, who blogs as StorageNerve, and Steven Foskett. Foskett said the EVA "supports 6 to 8 SSDs in a single disk group, [and is] adding automated LUN migration between tiers."

The EVA will be given thin provisioning functionality, the ability to create LUNs that applications think are fully populated with storage but which actually only get allocated enough disk space for data writes plus a buffer, with more disk space allocated as needed. Older 6000 and 8000 class EVA products won’t get thin provisioning, however, only the newer products.

In a lab session, attendees were shown that it was easier to provision a LUN and set up a snapshot on EVA than on competing EMC or NetApp products.

A common storage architecture

HP people present indicated HP was going to move to common hardware, including Intel-powered controllers, for its storage arrays. El Reg was given news of this back in June.

Since HP OEMs the XP from Hitachi Data Systems, basing it on HDS's USP-V, this might encourage HDS to move to an Intel-based controller in that array.

Moving to a common hardware architecture for classic dual controller-based modular arrays is obviously practical, and many suppliers have done this. However, high-end enterprise-class arrays often have proprietary hardware to make them handle internal data flows faster. BlueArc has its FPGA-accelerated NAS arrays and 3PAR uses a proprietary ASIC for internal communications and other functions. Replacing these with Intel CPUs would not be easy at all.

Gestalt IT has speculated about EMC moving to a common storage hardware architecture based on Intel controllers. Its Symmetrix V-Max uses Intel architecture, and the Celerra filer and Clariion block storage arrays look like common hardware with different, software-driven, storage personalities.

There are hardware sourcing advantages here, with the potential for simplified engineering development. It could be that HP is moving to the same basic model of a common storage hardware set with individualised software stacks to differentiate filers, block arrays, virtual tape libraries and so forth. For HP there might also be the opportunity to use its own x86 servers as the hardware base for storage controllers.

It expects unified (block and file), commodity-based storage to exceed proprietary enterprise storage shipments in 2012.
SAN virtualisation software

HP OEMs heterogeneous SAN management software, called the SAN Virtualisation Services Platform (SVSP), from LSI, which obtained the technology when it bought StoreAge in 2006. This software lets HP virtualise EVA and MSA arrays, plus certain EMC, IBM and Sun arrays, into a common block storage pool. The XP can't be part of that pool, though. Also, the XP, courtesy of HDS, virtualises third-party arrays connected to it as well. HP indicated that it is using the SVSP to compete with IBM's SAN Volume Controller. Since the SVC is a combined hardware and software platform with well over 10,000 installations, HP has a mountain to climb.

Also the SVSP is a box sitting in front of the arrays with no performance-enhancing caching functionality. It could be that HP has a hardware platform refresh coming for the SVSP.
LeftHand Networks SAN software

HP also discussed its LeftHand Storage product, which is software running on Intel servers that virtualises a server's storage into a SAN. This scales linearly up to 30 nodes. The software can run as a VMware virtual machine in VSA (Virtual SAN Appliance) form. Scaling with LeftHand means adding more nodes (scale-out), whereas traditional storage scales up, adding more performance inside the box.

HP also has its Ibrix scale-out NAS product which is called the Fusion Filesystem.

The LeftHand software supports thin storage provisioning and this is said to work well with VMware thin provisioning. We might expect it to be ported to the Hyper-V and XEN hypervisors in the next few months.

HP sees Fibre Channel over Ethernet (FCoE) becoming more and more influential. It will introduce its own branded FCoE CNA (Converged Network Adapter) within months.

HP also confirmed that an ESSN organisation will appear next year with its own leader. This was taken as a reference to Enterprise Servers, Storage and Networks, with David Donatelli, the recent recruit from EMC, running it.

Nothing was said about deduplication product futures or about the ExDS9100 extreme file storage product. Neither was anything said about the HP/Fusion-io flash cache-equipped ProLiant server demo recording slightly more than a million IOPS from its 2.5TB of Fusion-io flash storage. ®

link to theregister article


September 25, 2009

EVA4400 OS unit ID in VDisk presentation properties error

Filed under: eva,san — raj2796 @ 11:43 am


The Vdisk OS Unit ID is cleared from the graphical user interface (GUI) and set to 0 after upgrading to XCS 09522000. Seems this is yet another known bug. From HP:


Document ID: c01849072

Version: 1
ADVISORY: HP StorageWorks Command View EVA software refresh clears virtual disk operating system unit ID in user interface
NOTICE: The information in this document, including products and software versions, is current as of the Release Date. This document is subject to change without notice.

Release Date: 2009-08-19

Last Updated: 2009-08-19

After an HP StorageWorks Command View EVA software refresh, the Vdisk OS Unit ID is cleared from the graphical user interface (GUI). This only affects the setting within the GUI and not in the actual virtual disk setting.

For customers using OpenVMS hosts, not having the OS Unit ID displayed as a visible reminder could lead to duplicate unit IDs entered and presented to a host, which can result in data loss.

Also, when saving any changes under the Vdisk Presentation tab, ensure that the OS Unit ID is set to the desired value.

This issue affects the GUI setting in HP StorageWorks Command View EVA software versions 9.00.00, 9.00.01, 9.01.00, and 9.01.01. The issue has not been seen to affect the actual virtual disk setting.

This issue will be addressed in next full release of HP StorageWorks Command View EVA software, tentatively scheduled for release in early 2010.

To track OS Unit IDs, use one of the following options:


In the HP Command View EVA software GUI, add OS Unit ID and presentation host ID information to the comment field of the Virtual Disk.

Using HP SSSU, write a script to collect OS Unit ID from the array for each virtual disk presented to the OpenVMS host.

Use the following command to collect the WWID and the OS Unit ID numbers on the OpenVMS hosts:


In the output, the DGA device number is the OS Unit ID. The WWID listed in the output can be used to match to the WWID either from the HP Command View EVA display or from an SSSU data collection script.
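The matching step above is easy to script. Below is a minimal sketch of my own (not from the HP advisory) for pairing each Vdisk's WWID with its OS Unit ID out of SSSU output; the field names and dotted layout are assumptions about the SSSU listing format, so adjust the patterns to whatever your SSSU version actually prints.

```python
import re

def parse_unit_ids(sssu_output):
    """Build a {WWID: OS Unit ID} map from (hypothetical) SSSU output.

    Assumes each Vdisk block contains lines such as:
        wwlunid ........: 6005-08b4-0001
        osunitid .......: 101
    Real SSSU field names and layout may differ -- adjust the patterns.
    """
    mapping = {}
    wwid = None
    for line in sssu_output.splitlines():
        m = re.match(r"\s*wwlunid\s*\.*\s*:\s*(\S+)", line)
        if m:
            wwid = m.group(1)  # remember the WWID until its unit ID appears
            continue
        m = re.match(r"\s*osunitid\s*\.*\s*:\s*(\d+)", line)
        if m and wwid:
            mapping[wwid] = int(m.group(1))
            wwid = None
    return mapping
```

The resulting map can then be compared against the unit numbers gathered on the OpenVMS host, so a duplicate ID is caught before a new Vdisk is presented.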

September 16, 2009

Netware user container moves using JRB utils and relevant changes for user subcontainers

Filed under: edir,Netware — raj2796 @ 10:03 am

Years ago we inherited a few thousand users, a few aging server rooms and a couple of schools located on another of the university's campuses. Although I took over the 3Com/Cisco network and upgraded it to the latest equipment (at that time 2950s) and our advanced configs, the server team never took over the home-drive servers at the site, which today are still managed by another department. To cut a long story short, the other department wants us off their servers, so I've built half a dozen new virtualised 6.5 SP8 servers and we're migrating the data over. While I was doing work on the users I decided to split them up into smaller subcontainers, dividing them by the last digit of their usernames. A couple of things to watch out for with the subcontainers:

1 – Login scripts – the subcontainers need login scripts – go to Properties, then Login Scripts, and add an INCLUDE of the parent container – this way each subcontainer inherits its default login script from the parent container, meaning there is only one script to update, avoiding the mistakes that come with maintaining multiple copies of the same code. If you need subcontainer-specific login script changes, add them after the INCLUDE statement.

2 – Inheritance levels for applications – check you are inheriting all relevant applications at the new container depth. Open ConsoleOne and select Tools – ZENworks Utilities – Application Launcher Tools – Show Inherited Applications.
If you are missing applications that are available at the parent container, select the subcontainer and view Properties – ZENworks – Launcher Configuration. Change the mode to "View object's effective settings" and note down the "Set application inheritance level (user)" value. Then change the mode to "View/Edit object's custom configuration" and enter the new value for "Set application inheritance level (user)" – the previous value plus one for each level of subcontainer added.

3 – Moving users – easy to script – just use the getrest command and have its output used by move_obj, with delays between moves. For example, to display just site2 staff who are not logged in and log the results to a file on the C drive:

getrest .*.faculty.staff.site2.org na eq "none" /j /u /yc /l=c:\site2staff.log <- use the file for move_obj

I moved a few thousand users into the relevant subcontainers overnight without errors 🙂

To move the actual data, just use JRB utils!
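To illustrate point 1 above, a subcontainer login script might look something like this (the container names and drive mapping are made-up examples – check the exact INCLUDE syntax against your own tree before using it):

```
REM Inherit the parent container's login script
INCLUDE .OU=staff.OU=site2.O=org
REM Subcontainer-specific changes go below the INCLUDE
MAP ROOT H:=SRV1/USERS:%LOGIN_NAME
```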

JRB - saviour of the netware sysadmins

September 11, 2009

Netware 6.5 sp8 and Vmware esx 3.5u4 compatibility problem

Filed under: Netware,vmware — raj2796 @ 11:13 am

It seems NetWare's newest release, NetWare 6.5 SP8 (allegedly the last release of NetWare, and this time they mean it, though they really meant it when they said NetWare 6.5 SP6 was the last release), has problems with VMware, in this case VMware ESX 3.5 U4.

We identified an easily repeatable error where the inclusion of a virtual floppy drive causes abends on NetWare 6.5 SP8 VMware guests after a "restart server" or a "reboot server" command is issued! As of this time neither VMware nor Novell has released TIDs on the problem or seems aware it exists, though this does raise the question of what kind of idiot would still be adding floppy drives to VMware guests?

Novell Netware install guide

Vmware HA

Filed under: ha — raj2796 @ 10:43 am

While looking for an explanation of why HA seems to fail at random on my VMware servers after reboots, I found this nice explanation! What I like most is that it confirms my theory of active primary HA servers, thus reinforcing my long-held suspicion that I'm a genius and everyone else is stupid 🙂

Vmware HA

Most useful part of the article:

das.ignoreRedundantNetWarning – Remove the error icon/message from your vCenter when you don’t have a redundant Service Console connection. Default value is “false”, setting it to “true” will disable the warning.


EVA 4400 XCS 09522000 upgrade

Filed under: eva — raj2796 @ 10:21 am

Hmm, so HP advised me to upgrade to 09521000 and sent me the "DRAFT XCS 09521000_release note.pdf", only for me to find they pulled the 09521000 XCS before I could even download it and replaced it with 09522000! What's worse is there are no updated release notes or upgrade documentation… it seems we should use the same documentation as for 09521000 – the version that was pulled – to upgrade to 09522000…

Anyway, I decided to test the firmware on a new EVA at another site, since it wasn't in production and was still running 09006000. Very simple to do – shame about the downtime needed to upgrade the controller firmware, though. It went: Java upgrade / CV upgrade / disk upgrade / EVA upgrade / HP calling about EVA problems post-upgrade / HP having no idea about a solution / me googling the solution and fixing it / everything fine.

The only problem with the upgrade was that controller event logs were not being generated post-upgrade. After stopping the CV service, I had to ignore what HP says in the 09521000 upgrade documentation and rename the entire C:\Program Files (x86)\Hewlett-Packard\Sanworks\Element Manager for StorageWorks HSV\cache\[wwnn-name] directory to "old" – not just the single file – to get it working again.

EVA 4400 unboxed and setup
From my notes :


1 – CV upgrade to v 9

2 – update java

3 – check

CV 9  Upgrade

Two parts – CV server and CV Eva management module ( OCP )


1 – check the following groups exist and are populated – create them if not already there (they should be) – use the EXACT syntax – case sensitive

• HP Storage Admins

• HP Storage Users

2 – extract the contents of the CD to a folder on the server – the iso to extract from is:


3 – stop desta service during upgrade

(recommend you disable McAfee at this stage if you can, since it will speed things up)

4 – Double-click HP StorageWorks Command View EVA Software Suite.exe to start the installation.

even though we don't need the SMI-S CIMOM, it is required by the installation, so leave it selected to proceed

5 – Next until it uninstalls the existing CV and reboots

6 – Post reboot it installs CV 9

7 – Once finished, check the CV / CV Perf Mon / CV SSSU icons are available – launch CV and verify v9.1 is loaded and the license is valid

8 – Reboot the server and check DESTA is running


1 – Command View is available from the server on port 2372 – make sure you lock it down in Windows Firewall

2 – the HP SMI-S CIMOM opened some ports without telling you – sneaky thing – lock down the firewall if required
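As a sketch of the lockdown in point 1 (the rule name and management subnet are made up, and the exact command depends on your Windows version – this form is for Windows Server 2008's advanced firewall):

```
netsh advfirewall firewall add rule name="CV EVA admin only" dir=in action=allow protocol=TCP localport=2372 remoteip=10.0.0.0/24
```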


1 – Launch the OCP via IE

2 – If unsure of the OCP address, connect to CV -> System Options -> Launch OCP – DO NOT LAUNCH – first log out of CV

3 – the CV address shown for the OCP is wrong – you don't want the OCP's CV address, you want the OCP management address

the OCP management interface is on port 2373, not the 2372 used to launch CV on the OCP

4 – select update firmware

5 – point to .pkg file in the cv iso

6 – Next to all – NOTE – this can take 30 minutes – 8+ mins to load the code, then 15 mins to reload – don't be hasty and reboot or you'll screw things up and have to restart

7 – IMPORTANT – once finished, log in to the OCP CV – this is port 2372, NOT 2373 – you want the management-module version of CV now

8 – use System Options and set the time – I used an NTP server and Europe/London as the zone – this is important, or the OCP and server CVs will have different times and the time can get reset to 1970 on the EVA – no idea why 1970



After removing a version of HP Command View EVA earlier than 8.x.x, the following error message may be displayed when installing HP Command View EVA 9.1:

A previous Master Installer version has been detected on the system. Uninstall the previous version and retry the installation.

If this message is displayed, do the following:

1. Go to the directory <ProgramFiles>\Common Files\InstallShield\Universal.

2. Locate any directories named HP_MasterInstallerXX and delete them. There may be a directory for each previous version of HP Command View EVA that was installed on the system.

3. Continue with the HP Command View EVA 9.1 installation.


1 – just update


Upgrading firmware on all disks

1. NOTE – MAKE SURE FATA DISKS HAVE NO RAID0 GROUPS – if you have any, copy the data to different LUNs first – RAID0 on FATA disks stops the firmware upgrade.

2. download the HP StorageWorks EVA Hard Disk Drive Bundle zip file from the following web site:


3. Store the file in a known local directory – no need to unzip – CV9 uses the zip files

4. Open and log in to HP Command View EVA – code load – disks etc.
