Platform as a Service will change the IT Landscape

There are more and more signs that “as-a-service” offerings will be the
“norm” in the next few years. A fundamental shift is taking place on the software development front that embraces this delivery model, and it is picking up some major weight behind it.

There are two major players in the Platform as a Service arena to date:

1. Cloud Foundry, the project that was spun out of VMware into a new company (under the EMC umbrella) called Pivotal, headed by none other than Paul Maritz, the former CEO of VMware. I have been following this product very closely over the last few years, and with the recent backing of GE and now IBM, this platform is poised to be a major player.

2. Red Hat has OpenShift, which has a very well-defined product suite. They have broken it down into three possible ways to consume platform as a service: the first is their own hosted solution called “Online”, the second is the “Enterprise” version for on-premises PaaS deployments, and the third is the open source “Origin” model.

It’s hard to say that this is something new. A few years ago I remember reading an interesting article (I can’t recall where) stating that thousands of developers attended a PaaS seminar in Asia. Keep in mind that this was at least two years ago! The thought of removing the infrastructure layer from the application development lifecycle is a huge change and something that will not happen overnight, but nonetheless – it will happen.

If you look at this from a developer’s standpoint, they will adopt this much more rapidly and with very few roadblocks. When they push code to the PaaS from their IDE, there is no need to compile that code; the platform does the work for them. Think about it. The delivery model is real-time, which makes this software delivery system far more efficient and easier to maintain.
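As a rough sketch of that workflow, here is what a push looks like with the Cloud Foundry CLI. The API endpoint and application name below are illustrative placeholders, not values from any particular environment:

```shell
# Target the Cloud Foundry API endpoint and authenticate
# (endpoint is a placeholder for your platform's URL)
cf api https://api.my-paas.example.com
cf login

# Push the application source from the current directory.
# The platform stages (compiles/packages) the code itself --
# no local build step is required before the push.
cf push my-sample-app

# Verify the platform staged and started the app
cf apps
```

The point is that staging happens on the platform side, so the developer's loop is just edit and push.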

This is not to say that infrastructure isn’t needed – quite the opposite. Automation is the key component of PaaS, and it is therefore closely tied to a virtualized, dynamic environment that can scale at a moment’s notice. With all the gears, nodes, cloud controllers, routers, stagers, etc., the IaaS layer will need to be extremely elastic to meet the heavy resource demands of these components.


Next-Generation Storage Symposium Recap

I got the chance to attend the Next-Generation Storage Symposium in San Jose yesterday, organized by Tech Field Day and hosted by Stephen Foskett. A number of vendors spoke at the event, and if I could put a theme on it, it would be centered on flash-based storage. I am going to summarize what each vendor brought to the table in order to give you an overview of what was discussed.

Nexsan – A hybrid storage platform delivered through SAN, NAS, and unified storage systems, including the NS model they referenced in their presentation.

Nimbus Data – The presentation revolved around their memory-based protocol storage systems; they discussed how simple the platform is and emphasized a 10-year warranty on their platform.

Permabit – This company focuses on enterprise flash arrays and cache solutions through their Albireo deduplication methods for SSDs.

Pure Storage – This is a pure flash-based storage platform that reads and writes to flash disks in a fundamentally different way to maximize performance and sustain longevity.

SolidFire – They are focused on cloud-scale computing and the performance requirements behind it. Their point is that traditional storage platforms are not designed for this level of scale. SolidFire’s presentation focused on taking advantage of the unique properties of solid state drives.

Starboard – This company fell in line with the other presenters in that they have a hybrid storage platform focused on delivery at a certain price point. They also noted cost savings from consolidating and mixing workloads with dynamic storage pooling.

Tegile – This presentation focused on their hybrid storage system, which utilizes spinning disk as well as flash-based storage. They also touched on the management capabilities this technology brings to VDI.


Towards the end of the day, some of the Tech Field Day delegates hosted a number of sessions in which panelists discussed topics pertaining to the architecture of these next-generation storage platforms.

Flash storage overview – Comments from the moderator and panelists centered on the delivery mechanisms of each company that treats solid state as the primary tier at all levels. A strong argument was made by one of the attendees that some of these problems have already been addressed at a higher layer, and questioned whether these technologies keep those same issues in mind when products are delivered.

Scaling storage for the future – Comments from the panelists revolved around how large-scale environments can effectively be moved to the next level while maintaining the service levels the original deployment was designed to deliver.

Storage for the Virtual Infrastructure – Questions and statements in this panel addressed how these new storage platforms will affect how data is accessed and stored. One of the main highlights of the discussion centered on how the storage layer becomes aware of the type of I/O being passed to it. VMware’s VAAI was called out specifically, as well as other APIs coming to market.

FreeNAS Storage Setup for the Home VMware Lab

There are a few options out there for the home lab, but over the years I’ve tried many and default to FreeNAS due to its low resource overhead and fast setup. This walk-through is based on build 8.2.0 and is a step-by-step process to get your NAS array up and running in any lab. My setup consists of VMware Workstation 9 with FreeNAS 8.2.0, ESXi 5.1, and Windows 2008 R2 64-bit.

1) First, start out by obtaining the bits for FreeNAS from the project’s download page – the downloads are located on the right-hand side of the page.

2) Create your virtual machine in VMware Workstation with the following parameters (you don’t need a 20GB primary disk, but I used one for other functions). I have set up two private, guest-only virtual networks for this configuration: VMnet2 is the management network and VMnet9 is the storage network. Also, notice the low RAM overhead.

3) The next step is to attach the ISO to the VM and boot it. Go through the setup, restart, and you will be prompted with this menu on boot:

4) Select Configure Network Interface #1 and set up em0. As you probably saw in my screenshot of the VM configuration, I have two virtual NICs set up, because I keep the management and storage traffic separate. So, when you select #1 from this menu, you will see em0 and em1.

5) After you configure em0 for management traffic, move on to configuring em1 for the actual storage path. I used a separate subnet for my “storage” network. The next few steps outline the configuration from the GUI.

6) Log into the appliance from a machine on your network.

7) One of the first things I do from here is change the admin password from this location.

8) Since time is a critical part of any service, I added my local DC to the list of NTP servers (I’ve also noted the Add NTP Server dialog box on the right). My internal time service is listed on the left. This is, after all, a “fenced off” environment (to borrow a term from vCloud).

9) I didn’t use any VLANs or static routes, but this depicts how the interfaces are used in my FreeNAS environment. I’ve labeled the interfaces to make clear what each one is used for.
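If you want to double-check the interface assignments from the FreeNAS console shell (FreeNAS is FreeBSD-based, so the standard tools apply), something like the following works. The address in the ping example is a placeholder for a host on your own management network:

```shell
# Show each interface and its assigned address
ifconfig em0   # management network (VMnet2 in my lab)
ifconfig em1   # storage network (VMnet9 in my lab)

# Quick reachability check to a host on the management
# network (replace the address with one from your lab)
ping -c 3 192.168.2.1
```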

10) With that second disk added, we can now bring it into the volume manager and create a new NFS mount point on it by creating the path indicated in the following screenshot. Also note the permissions assigned to the volume.

11) Now that we have the volume and path set up, we need to share it out over the “storage” path, em1. Note that you select the allowed network (or, as I have listed here, a range of IPs) permitted to reach the share. Remember, em1 is on VMnet9, which is a separate virtual network segment that isolates storage traffic in my lab.

12) Double-check that the NFS service is running on your device.
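Beyond the services page, you can sanity-check the export from any NFS-capable client on the storage network. The hostname and volume path below are placeholders for your own FreeNAS address and volume name:

```shell
# List the exports the FreeNAS box is advertising over NFS
showmount -e freenas-storage.lab.local

# You should see your volume path (e.g. /mnt/<your-volume>) in
# the output, restricted to the network you allowed on the share
```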


You are now ready to start connecting your vSphere ESXi nodes to the storage and set up HA, DRS, FT, and more!
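If you prefer the command line over the vSphere Client for that last step, mounting the new NFS export as a datastore on an ESXi 5.x host can be sketched like this. The host address, export path, and datastore name are all placeholders:

```shell
# From the ESXi shell (or via SSH): mount the FreeNAS NFS export
# as a datastore named "freenas-nfs01" (name is illustrative)
esxcli storage nfs add --host freenas-storage.lab.local \
    --share /mnt/<your-volume> --volume-name freenas-nfs01

# Confirm the datastore is mounted
esxcli storage nfs list
```

Repeat this on each ESXi node so the shared datastore is visible cluster-wide, which is what HA, DRS, and FT all require.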