Switching Gears

The IT industry is unique and you simply can’t compare it to anything else; that’s one of the reasons I love it.

I’ve been in it for over two decades and remember a time when technology changed only moderately over several years, but the rate of change over the last five years has been dramatic.

Software development, application delivery, storage, networking, compute, and IT consumption in general are all changing rapidly. No doubt, these are exciting times.

With that, I’ve decided that it is time to make a change and have accepted a position at Cisco as a Technical Solutions Architect for Data Center and Cloud.

The transformation around the data center has accelerated from converged infrastructure to hyperconverged to hybrid cloud models.

It’s a very exciting time to join Cisco with the product lines that they offer today!

-Rick

SSH Permissions Issue for an AWS EC2 Key Pair

I ran into an issue where I created a new AWS instance and configured a new key pair so I could SSH into my test machine, but received the following error message when connecting (names and addresses are generic, of course):

mycomputer$ ssh user@10.10.10.10 -i myprivatekey.pem
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0644 for 'myprivatekey.pem' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
Load key "myprivatekey.pem": bad permissions
user@10.10.10.10: Permission denied (publickey).

What this is telling you is that the Unix permissions on the .pem file are too loose and need to be locked down. The file you pulled down from AWS when you created the key pair has permissions of 644 by default, which is [-rw-r--r--], and they need to be changed to 400, or [-r--------]. So here is the command you need to run on your .pem file:

mycomputer$ chmod 400 myprivatekey.pem
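
You can verify the change with ls -l (the owner, group, size, and date below are just from my example):

mycomputer$ ls -l myprivatekey.pem
-r--------  1 user  staff  1692 Aug 29 10:15 myprivatekey.pem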

Then re-issue the ssh command from above and you should be able to connect.
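
As a side note, and purely a convenience beyond the fix itself, you can add an entry to ~/.ssh/config so SSH picks up the key automatically and you can skip the -i flag (the host alias below is made up for illustration):

mycomputer$ cat ~/.ssh/config
Host mytestbox
    HostName 10.10.10.10
    User user
    IdentityFile ~/myprivatekey.pem

mycomputer$ ssh mytestbox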

*Of course, as a security best practice, I recommend that you lock down which IP(s) can connect to your instances over SSH.
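
If you manage AWS from the CLI, a security group rule along these lines limits SSH to a single source address (a sketch; the group ID and source CIDR are placeholders you would swap for your own):

mycomputer$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 203.0.113.25/32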

Rick


VMworld 2017 NetApp Tech Field Day Presentation

While attending VMworld US 2017, I was also invited to sit in on NetApp’s presentation at Tech Field Day Extra on August 29th. I’ve had this post in draft mode for a while now and thought it would be a good time to publish it with NetApp Insight kicking off this week in Las Vegas. In addition, I have a genuine interest in hearing what NetApp has cooking in the HCI world, since I am a NetApp NCDA and have managed NetApp systems since the late 1990s. So naturally, I wanted to see what their play in the hyperconverged world is at this point.

NetApp HCI

Gabriel Chapman kicked off the session with the premise that enterprises are at a turning point. NetApp sees this as an opportunity to enter the market with an enterprise-level solution for the next generation data center.

The three main points they wanted to drive home were:

  1. Guaranteed Performance
  2. Flexibility & Scale
  3. Automation

Simplicity of Implementation

  • Integration with the portfolio of products currently on the market.
  • NetApp Deployment Engine: drive all of this from vCenter (SnapMirror, etc.).
  • Storage performance can be adjusted on a per-volume basis through QoS (see the sketch after this list).
  • Granular control of your storage at the VM level.
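
On the QoS point: the storage layer here is built on SolidFire, whose Element API exposes per-volume minimum, maximum, and burst IOPS. A rough sketch of what setting that looks like (the management IP, API version, credentials, volume ID, and IOPS values are placeholders, not figures from the presentation):

mycomputer$ curl -k -u admin:password https://10.10.10.20/json-rpc/9.0 \
    -d '{"method": "ModifyVolume", "params": {"volumeID": 1, "qos": {"minIOPS": 1000, "maxIOPS": 5000, "burstIOPS": 8000}}, "id": 1}'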

Compute Side

  • ESXi on the compute nodes
  • Active IQ for telemetry
  • Target workloads:
      • Cloud
      • Web infrastructure
      • Databases

Base Installation

The minimal install to get up and running is two chassis with two compute nodes and four flash storage nodes, leaving two open bays for expansion. There are also options for node sizes:

  • Small (16 Cores, 256GB RAM, 6x 480GB SSD = 5.5-11TB with 8GB NVRAM)
  • Medium (24 Cores, 512GB RAM, 6x 960GB SSD = 11-22TB with 8GB NVRAM)
  • Large (36 Cores, 768GB RAM, 6x 1.92TB SSD = 22-44TB with 8GB NVRAM)

*Each compute node has 4x 25/10GbE & 2x 1GbE adapters.

My Thoughts

As I stated earlier in this post, I am a big fan of what NetApp has done for the storage industry. With today’s demand for performance across such a wide variety of applications (whether traditional packaged apps, containers, SaaS, PaaS, etc.), trying to pack all of them onto a general-purpose filesystem is not ideal. Storage policies can only do so much, and some apps will suffer as others get priority. Sure, you can combat that by scaling those units out to meet demand, but placing workloads on datastores that are tailored to meet their IOPS demands is key. Combine that with the deployment engine they have integrated into the system and you have a purpose-built converged solution that can meet the requirements of the various applications in the environment.

I’ve seen this type of deployment in other arrays, but NetApp’s approach to HCI (attacking it from a performance perspective) is where many traditional applications will benefit.

Post Disclaimer: I was invited to attend VMworld 2017 Tech Field Day Extra as an independent participant. I was not compensated for my time spent attending this presentation. My post is not influenced in any way by Gestalt IT or NetApp, and I am under no obligation to write this article. The companies mentioned above did not review or edit this content, and it is written purely from an independent perspective.