I attended the conference as a regular attendee, but I was also invited to sit in on NetApp’s presentation at Tech Field Day Extra at VMworld US 2017 on August 29th. I’ve had this post in draft mode for a while and thought now would be a good time to publish it, with NetApp Insight kicking off this week in Las Vegas. Beyond that, I have a genuine interest in what NetApp has cooking in the HCI world: I’m a NetApp NCDA and have managed NetApp systems since the late 1990s. So naturally, I wanted to see what their play in the hyper-converged world looks like at this point.
Gabriel Chapman kicked off the session with an analogy that enterprises are at a turning point. NetApp sees this as an opportunity to enter the market with an enterprise-level solution for the next generation data center.
The three main points they wanted to drive home were:
- Guaranteed performance
- Flexibility & scale
- Simplicity of implementation
Beyond those pillars, a few other points stood out:
- Integration with the portfolio of NetApp products currently on the market
- NetApp Deployment Engine – drive all of this from vCenter (SnapMirror, etc.)
- Storage performance can be adjusted on a per-volume basis through QoS (see the sketch after this list)
- Granular control of your storage at the VM level
- ESXi on the compute nodes
- Active IQ for telemetry
- Target workloads such as web infrastructure
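To make that QoS point concrete, here’s a minimal sketch of what adjusting a volume’s performance floor and ceiling could look like against the SolidFire/Element JSON-RPC API that underpins NetApp HCI storage. The endpoint, API version, credentials, and IOPS numbers are placeholders of mine, assuming the standard ModifyVolume call with min/max/burst QoS settings:

```python
# Hedged sketch: per-volume QoS via the SolidFire/Element JSON-RPC API.
# MVIP, credentials, and API version below are placeholders.
import requests

MVIP = "https://192.0.2.10"   # cluster management VIP (placeholder)
AUTH = ("admin", "password")  # cluster admin credentials (placeholder)

def set_volume_qos(volume_id, min_iops, max_iops, burst_iops):
    """Pin a volume's performance floor and ceiling via ModifyVolume."""
    payload = {
        "method": "ModifyVolume",
        "params": {
            "volumeID": volume_id,
            "qos": {
                "minIOPS": min_iops,     # guaranteed floor
                "maxIOPS": max_iops,     # sustained ceiling
                "burstIOPS": burst_iops, # short-term burst ceiling
            },
        },
        "id": 1,
    }
    r = requests.post(f"{MVIP}/json-rpc/9.0", json=payload,
                      auth=AUTH, verify=False)
    r.raise_for_status()
    return r.json()

# e.g. guarantee a database volume 2,000 IOPS and cap it at 10,000:
# set_volume_qos(volume_id=37, min_iops=2000, max_iops=10000, burst_iops=15000)
```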
The minimum install to get up and running is two chassis with two compute nodes and four flash storage nodes, leaving two open bays for expansion. There are also three node sizes to choose from:
- Small (16 Cores, 256GB RAM, 6x 480GB SSD = 5.5-11TB with 8GB NVRAM)
- Medium (24 Cores, 512GB RAM, 6x 960GB SSD = 11-22TB with 8GB NVRAM)
- Large (36 Cores, 768GB RAM, 6x 1.92TB SSD = 22-44TB with 8GB NVRAM)
*Each compute node has 4x 25/10GbE & 2x 1GbE adapters.
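For a rough sense of scale, here’s a back-of-the-envelope calculation (my own drive math, not NetApp’s sizing guidance) of the raw flash in a minimum four-storage-node install at each node size. The TB ranges quoted in the list above presumably already factor in Element’s replication and efficiency features:

```python
# Raw flash in the minimum install: 4 storage nodes x 6 drives each.
# Simple drive math only; the usable TB ranges listed above presumably
# reflect replication plus dedupe/compression efficiencies.
DRIVES_PER_NODE = 6
DRIVE_GB = {"small": 480, "medium": 960, "large": 1920}

def raw_tb(size: str, nodes: int = 4) -> float:
    """Raw capacity in TB for `nodes` storage nodes of a given size."""
    return nodes * DRIVES_PER_NODE * DRIVE_GB[size] / 1000

for size in DRIVE_GB:
    print(f"{size}: {raw_tb(size):.2f} TB raw")
# small: 11.52 TB raw / medium: 23.04 TB raw / large: 46.08 TB raw
```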
As I stated earlier in this post, I am a big fan of what NetApp has done for the storage industry. With today’s demand for performance across such a wide variety of applications (whether traditional packaged apps, containers, SaaS, PaaS, etc.), trying to pack all of them onto a general-purpose filesystem is not ideal. Storage policies can only do so much, and some apps will suffer as others get priority. Sure, you can combat that by scaling those units out to meet demand, but placing workloads on datastores tailored to their IOPS demands is key. Combine that with the deployment engine they have integrated into the system and you have a purpose-built converged solution that can meet the requirements of the various applications in the environment.
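As a sketch of what that tailoring could look like in practice, the snippet below carves out volumes with workload-specific QoS via the Element API’s CreateVolume call. The QoS profiles, account ID, sizes, and endpoint are made-up illustrations of mine, not recommendations:

```python
# Sketch: workload-tailored volumes instead of one general-purpose datastore.
# Endpoint, credentials, account ID, and QoS numbers are illustrative only.
import requests

MVIP = "https://192.0.2.10"   # cluster management VIP (placeholder)
AUTH = ("admin", "password")  # placeholder credentials

# Hypothetical per-workload QoS profiles.
WORKLOAD_QOS = {
    "web":      {"minIOPS": 500,  "maxIOPS": 4000,  "burstIOPS": 6000},
    "database": {"minIOPS": 3000, "maxIOPS": 15000, "burstIOPS": 20000},
}

def create_tailored_volume(name, workload, size_gb, account_id=1):
    """Create a volume whose QoS floor/ceiling matches the workload profile."""
    payload = {
        "method": "CreateVolume",
        "params": {
            "name": name,
            "accountID": account_id,
            "totalSize": size_gb * 10**9,  # Element sizes volumes in bytes
            "enable512e": True,            # 512-byte sector emulation for ESXi
            "qos": WORKLOAD_QOS[workload],
        },
        "id": 1,
    }
    r = requests.post(f"{MVIP}/json-rpc/9.0", json=payload,
                      auth=AUTH, verify=False)
    r.raise_for_status()
    return r.json()

# e.g. a web-tier volume and a database volume, each with its own floor:
# create_tailored_volume("web-ds-01", "web", 500)
# create_tailored_volume("sql-ds-01", "database", 1000)
```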
I’ve seen this type of deployment with other arrays, but NetApp’s approach to HCI, attacking it from a performance perspective, is where many traditional applications will benefit.