Raj2796's Blog

September 29, 2009

HP Storage roadmap

Filed under: eva,san — raj2796 @ 2:53 pm


Just saw an article on http://www.theregister.co.uk by Chris Mellor, posted in storage, that I found interesting:

HP’s EVA arrays will get thin provisioning and automated LUN migration, while LeftHand’s SAN software will be ported to the Xen and Hyper-V platforms, according to attendees at an HP Tech Day event.

This HP StorageWorks event took place in Colorado Springs yesterday with an audience of invited storage professionals. Several of them tweeted from the event, revealing what HP was saying.
EVA

HP commented that it thought its EVA mid-range array was an excellent product but hadn’t got the sales it deserved, with only around 70,000 shipped.

Currently EVA LUNs can be increased or decreased in size without stopping storage operations. This is suited to VMware operations, with the vSphere API triggering LUN size changes.

LUN migration, possibly sub-LUN migration, on EVAs is set to become non-disruptive in the future, according to attendees Devang Panchigar, who blogs as StorageNerve, and Stephen Foskett. Foskett said the EVA: “supports 6 to 8 SSDs in a single disk group, [and is] adding automated LUN migration between tiers.”

The EVA will be given thin provisioning functionality, the ability to create LUNs that applications think are fully populated with storage but which actually only get allocated enough disk space for data writes plus a buffer, with more disk space allocated as needed. Older 6000 and 8000 class EVA products won’t get thin provisioning, however, only the newer products.
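The allocate-on-write behaviour described above can be illustrated with a toy allocator (a hypothetical sketch in Python; class names, the extent size and the buffer policy are invented for illustration, not HP's implementation):

```python
class ThinLUN:
    """Toy thin-provisioned LUN: reports its full virtual size to the host,
    but only claims physical extents when blocks are actually written."""

    EXTENT = 1024 * 1024  # 1 MiB allocation unit (illustrative value)

    def __init__(self, virtual_size):
        self.virtual_size = virtual_size   # what the application sees
        self.extents = {}                  # extent index -> backing storage

    def write(self, offset, data):
        # Lazily allocate backing extents, only for the touched range.
        first = offset // self.EXTENT
        last = (offset + len(data) - 1) // self.EXTENT
        for i in range(first, last + 1):
            self.extents.setdefault(i, bytearray(self.EXTENT))
        # (the actual byte copy into the extents is omitted for brevity)

    @property
    def allocated(self):
        """Physical space actually consumed so far."""
        return len(self.extents) * self.EXTENT


lun = ThinLUN(virtual_size=100 * 1024**3)  # host believes it has 100 GiB
lun.write(0, b"x" * 4096)                  # one small write lands
print(lun.virtual_size, lun.allocated)     # full virtual size vs ~1 MiB used
```

The host-visible size never changes; only `allocated` grows as writes arrive, which is the gap thin provisioning exploits.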

In a lab session, attendees were shown that it was easier to provision a LUN and set up a snapshot on EVA than on competing EMC or NetApp products.

A common storage architecture

HP people present indicated HP was going to move to common hardware, including Intel-powered controllers, for its storage arrays. El Reg was given news of this back in June.

Since HP OEMs the XP from Hitachi Data Systems, basing it on HDS’s USP-V, this might encourage HDS to move to an Intel-based controller in that array.

Moving to a common hardware architecture for classic dual controller-based modular arrays is obviously practical, and many suppliers have done this. However high-end enterprise class arrays often have proprietary hardware to make them handle internal data flows faster. BlueArc has its FPGA-accelerated NAS arrays and 3PAR uses a proprietary ASIC for internal communications and other functions. Replacing these with Intel CPUs would not be easy at all.

Gestalt IT has speculated about EMC moving to a common storage hardware architecture based on Intel controllers. Its Symmetrix V-Max uses Intel architecture, and the Celerra filer and Clariion block storage arrays look like common hardware with different, software-driven, storage personalities.

There are hardware sourcing advantages here, with the potential for simplified engineering development. It could be that HP is moving to the same basic model of a common storage hardware set with individualised software stacks to differentiate filers, block arrays, virtual tape libraries and so forth. For HP there might also be the opportunity to use its own x86 servers as the hardware base for storage controllers.

HP expects unified (block and file), commodity-based storage to exceed proprietary enterprise storage shipments in 2012.
SAN virtualisation software

HP OEMs heterogeneous SAN management software, called the SAN Virtualisation Services Platform (SVSP), from LSI, which obtained the technology when it bought StoreAge in 2006. This software lets HP virtualise EVA and MSA arrays, plus certain EMC, IBM and Sun arrays, into a common block storage pool. The XP can’t be part of that pool, though. The XP, courtesy of HDS, also virtualises third-party arrays connected to it. HP indicated that it is using the SVSP to compete with IBM’s SAN Volume Controller. Since the SVC is a combined hardware and software platform with well over 10,000 installations, HP has a mountain to climb.
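The pooling idea is simple to sketch: a virtualisation layer aggregates the capacity of dissimilar arrays and carves LUNs out of the combined pool, so the host never knows which physical box backs its storage. A toy model (all class and method names are invented; this is not the SVSP API):

```python
class BackendArray:
    """One physical array behind the virtualisation layer
    (e.g. an EVA, an MSA, or a supported EMC/IBM/Sun box)."""
    def __init__(self, name, capacity_gb):
        self.name = name
        self.capacity_gb = capacity_gb


class BlockPool:
    """Toy virtualisation layer: presents many arrays as one block pool
    and tracks how much of the combined capacity is carved into LUNs."""
    def __init__(self, arrays):
        self.arrays = arrays
        self.allocated_gb = 0

    @property
    def capacity_gb(self):
        # The pool's capacity is simply the sum of its members'.
        return sum(a.capacity_gb for a in self.arrays)

    def create_lun(self, size_gb):
        if self.allocated_gb + size_gb > self.capacity_gb:
            raise ValueError("pool exhausted")
        self.allocated_gb += size_gb
        # The host sees only a LUN, never the backing array.
        return {"size_gb": size_gb}


pool = BlockPool([BackendArray("EVA-1", 2000),
                  BackendArray("MSA-1", 500)])
virtual_lun = pool.create_lun(100)
print(pool.capacity_gb)  # 2500
```

Arrays that cannot join the pool (like the XP here) simply never appear in the `arrays` list, which is the practical meaning of the restriction above.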

The SVSP is also a box sitting in front of the arrays, with no performance-enhancing caching functionality. It could be that HP has a hardware platform refresh coming for the SVSP.
LeftHand Networks SAN software

HP also discussed its LeftHand Storage product: software running on Intel servers that virtualises a server’s storage into a SAN, scaling linearly up to 30 nodes. The software can also run as a VMware virtual machine in VSA (Virtual SAN Appliance) form. LeftHand scales by adding more nodes and scaling out, whereas traditional storage scales up, adding more performance inside the box.

HP also has its Ibrix scale-out NAS product which is called the Fusion Filesystem.

The LeftHand software supports thin storage provisioning, and this is said to work well with VMware thin provisioning. We might expect it to be ported to the Hyper-V and Xen hypervisors in the next few months.

HP sees Fibre Channel over Ethernet (FCoE) becoming more and more influential. It will introduce its own branded FCoE CNA (Converged Network Adapter) within months.

HP also confirmed that an ESSN organisation will appear next year with its own leader. This was taken as a reference to Enterprise Servers, Storage and Networks, with David Donatelli, the recent recruit from EMC, running it.

Nothing was said about deduplication product futures or about the ExDS9100 extreme file storage product. Neither was anything said about the HP/Fusion-io flash cache-equipped ProLiant server demo recording slightly more than a million IOPS from its 2.5TB of Fusion-io flash storage. ®

link to theregister article


1 Comment »

  1. SVSP uses a fastpath ASIC to complete the virtualisation at wire speed, so why do you need cache in the way? Cache should be closest to where it is needed most: at the server and in the array. The more you scale a virtualisation technology, the less likely you are to get a cache hit. If you put caching into the network you have to synchronise all of the copies; this adds work for the virtualisation engines and puts additional, unnecessary load onto the fabric. If you lose one of the nodes and you rely on cache, you have to destage all the cache, so the bigger the cache the more you have to destage. If big cache in the middle were a proven scalable technology, why do we not do this for internet content? Why have technologies such as Akamai proven to be the most scalable and manageable? SVSP gives you centralised control and scale-out through the DPMs; you can only scale up so far before you must scale out.

    Comment by Steve — October 12, 2009 @ 2:12 pm | Reply

