Storage Field Day 3 Next Month

Coming up next month is Storage Field Day 3, which will be held April 24th through the 26th and is part of the Gestalt IT – Tech Field Day events. I will be participating as a delegate (as I have before) and will be meeting with a number of storage companies in the Denver, Colorado area to learn about their product offerings from a technical perspective and how they fit into the virtualization realm.

The companies presenting at this event are: Avere Systems, Cleversafe, Marvell, NetApp, NEXGEN Storage, PernixData, SanDisk, Starboard, and two other companies that have not yet been named.

I am very familiar with NetApp, NEXGEN, and Starboard, since I have been working on NetApp systems for quite a few years and met NEXGEN and Starboard at a prior Tech Field Day event. Still, I am interested to hear about their new products and to get into more technical detail on the material each of them presents.

As always, I do my homework before meeting these companies so that I can start to pry open the inner workings of their products. Being a huge fan of file-based storage, I fully intend to have some technical conversations with Avere Systems about their FXT Edge filers and to learn more about how they tier and scale their platform from a capacity vs. performance standpoint.

After reading about Cleversafe, I am very interested in the products they have listed in their portfolio. The product looks to have three components: the Manager, the Accesser, and the Slicestor. Without going too far into what each of these components does, I will refrain from further discussion in this post and wait to have a face-to-face with them about it.

Marvell’s DragonFly product looks interesting, with NVRAM for SAN and NAS plus NVCACHE and NVDRIVE offerings that look to be designed for cloud solutions. I will make a note to talk to them more about NVDRIVE, since their website states it targets public and private cloud environments with a PCIe SSD accelerator that does write-back caching on their tiered non-volatile memory.
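For readers new to the term, the write-back idea itself is simple, and a generic sketch makes it concrete. This is only an illustration of the general technique, not Marvell's implementation: writes are acknowledged once they land in the fast tier and are destaged to the slow tier later, on flush.

```python
# Generic write-back cache sketch (illustrative only; this is NOT
# Marvell's implementation). Writes land in the fast tier and are
# marked dirty; the slow backing tier is updated later, on flush.

class WriteBackCache:
    def __init__(self, backing):
        self.backing = backing   # slow tier (a dict standing in for disk)
        self.cache = {}          # fast tier (standing in for NVM/SSD)
        self.dirty = set()       # keys written but not yet destaged

    def write(self, key, value):
        self.cache[key] = value  # acknowledged once it is in the cache
        self.dirty.add(key)

    def read(self, key):
        if key not in self.cache:        # miss: fill from the backing tier
            self.cache[key] = self.backing[key]
        return self.cache[key]

    def flush(self):
        for key in self.dirty:           # destage dirty entries
            self.backing[key] = self.cache[key]
        self.dirty.clear()
```

The trade-off this buys is low write latency at the cost of dirty data living only in the cache until it is flushed, which is why products in this space pair write-back caching with non-volatile memory.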

I am very curious about PernixData’s “Flash Virtualization Platform,” which, upon preliminary review, looks to have a play in the virtualized datacenter space with an application data tiering method targeted at application acceleration.

Look for follow-up posts about these and the other companies we will be meeting at this event. With the lineup of delegates planned, I expect some very technical discussions.


Tech Field Day – Storage Field Day 2 Starts this Week!

Very excited that Storage Field Day 2 is upon us; this week is shaping up to be a good one!

I will be heading to beautiful San Jose / Silicon Valley this week for Storage Field Day 2 on November 8th and 9th. I will be there with 10 other delegates, and you can read more about it on the Tech Field Day page. I am really looking forward to visiting and getting into some deep technical discussions with the following companies on Thursday and Friday:

  • Asigra
  • Nexgen Storage
  • Nimble Storage
  • Nimbus Data
  • Nutanix
  • Riverbed
  • Tintri
  • Virsto
  • Zerto

One of the great things about Tech Field Day is that these companies have engineers deliver the content, so we really get to understand the product from a technical perspective rather than just being presented with high-level information.

Look for updated posts throughout the week and tune in at the link above for live video streaming from the meetings!


Adding SAN LUNs from Different Arrays in VMware

One interesting thing that I ran across recently, and am surprised I hadn't before, is adding Fibre Channel LUNs to one cluster from different arrays. As a storage guy, I did some preliminary research on the deployment, since there is a very real possibility that I will wind up with multiple LUN IDs that are the same on the cluster nodes, as depicted here:

Everything looks fine, but upon further inspection the redundant paths are down!

Why? Well, you can only have one LUN 0 and one LUN 1 on a single controller (disk controller 101). Remember: the Runtime Name follows the vmhbaA:C:T:L form (Adapter:Channel:Target:LUN). They are all on the same controller!
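To make the collision concrete, here is a small Python sketch that parses runtime names in the vmhbaA:C:T:L form and flags LUN numbers that show up more than once behind the same adapter and channel, following the reasoning above. The runtime names in the example are hypothetical, not taken from the cluster pictured:

```python
# Sketch: spot duplicate LUN numbers behind one adapter/channel.
# Runtime names below are hypothetical examples in VMware's
# vmhbaA:C:T:L form (Adapter:Channel:Target:LUN).

import re
from collections import defaultdict

RUNTIME_NAME = re.compile(r"vmhba(\d+):C(\d+):T(\d+):L(\d+)$")

def parse(name):
    """Split a runtime name into (adapter, channel, target, lun) ints."""
    m = RUNTIME_NAME.match(name)
    if not m:
        raise ValueError("not a runtime name: " + name)
    return tuple(int(g) for g in m.groups())

def duplicate_luns(names):
    """Map (adapter, channel, lun) -> runtime names, keeping only LUN
    numbers presented more than once on the same adapter/channel."""
    seen = defaultdict(list)
    for name in names:
        adapter, channel, _target, lun = parse(name)
        seen[(adapter, channel, lun)].append(name)
    return {key: paths for key, paths in seen.items() if len(paths) > 1}

paths = [
    "vmhba1:C0:T0:L0",  # array A presenting LUN 0
    "vmhba1:C0:T1:L0",  # array B also presenting LUN 0 -- collision
    "vmhba1:C0:T0:L1",  # array A, LUN 1 -- fine
]
print(duplicate_luns(paths))
```

Running the sketch reports the two LUN 0 paths as a conflict on vmhba1, which is exactly the situation the two fixes below are meant to avoid.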

To solve this, you can do one of two things.

1) The preferred method is to install or present (in the blade world) another set of HBAs (host bus adapters) to the ESX host if you want to keep the same LUN IDs and maintain path and card redundancy. This, of course, is only possible if you have the expansion slots or have not exceeded the maximum number of virtual HBAs per host. What this does is keep the I/O on its own path out of the host on a separate circuit.

2) Another way is to present the new LUNs at different IDs, which will keep both paths active. But keep in mind that if you are only running two HBAs on rack-mount systems, you are contending for I/O on those paths. In the blade world it's a little different, since the HBAs are virtual and share a common uplink out of the chassis to the fabric.
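The renumbering in option 2 is just bookkeeping: give the new array LUN IDs that the controller isn't already presenting. A minimal sketch of that bookkeeping (a hypothetical helper, not a VMware or array-vendor tool):

```python
# Sketch: pick LUN IDs for a new array that don't collide with the
# IDs already presented on this controller. Hypothetical helper only.

def next_free_lun_ids(in_use, count):
    """Return `count` LUN IDs not in `in_use`, lowest numbers first."""
    taken = set(in_use)
    free, candidate = [], 0
    while len(free) < count:
        if candidate not in taken:
            free.append(candidate)
        candidate += 1
    return free

# The first array already presents LUN 0 and LUN 1, so present the
# second array's two LUNs at the next free IDs instead.
print(next_free_lun_ids([0, 1], 2))   # [2, 3]
```

In practice you would pull the in-use IDs from the host's storage view rather than typing them in, but the principle is the same: no two arrays present the same LUN number down the same path.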