I was doing some preliminary research on this company, which we will see this week during Storage Field Day 4, and have a few items to cover beforehand. They have a scale-out strategy with their SnapScale NAS solution, which runs its own operating system called “RAINcloud OS”. The interesting thing about this product is that it puts all storage under one namespace, which prevents your storage from becoming siloed and allows for simplified management.
According to the data sheet, the minimum configuration is a 3-node (2U per node) setup with at least 12 SAS drives per unit for the X4 model (scaling out to 36 total) and 4 drives in the X2 model (scaling out to 12).
What’s interesting about this scaling method is the “limitless” scaling model that they’ve implemented in the array. The documentation shows no scaling limitation, even though their specifications list a 74 to 512PB limit (probably depending on the model type). This is something I am planning to clarify during the presentation: are those just tested limits, or will they support deployments beyond these figures?
Another area of interest is the way they connect clients to it. Looking through the technical documentation, it appears that each node accepts client communication through that common namespace, which distributes the I/O across nodes. This makes me wonder whether, if an entry point were to fail completely, client communication would need to be re-established on another node. I am curious to find out the potential impact on that client/server connection, how Overland Storage handles that situation, and how this affects the 99.999% uptime claim, or whether failover is seamless. The protocols they support suggest that routed iSCSI and NFS are a necessity for this type of redundancy, but I will get clarification. Again, these are my assumptions, and this is a preliminary post about this product/company.
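To make that failover question concrete, here is a rough sketch of how a client might mount a single-namespace NAS like this over NFS. This is purely illustrative and based on standard Linux NFS client behavior, not anything from Overland’s documentation; the hostname and export path are placeholders I made up. The timeout/retry options are the knobs that would govern how long a client hangs on a dead entry point before I/O recovers (or fails over, if the name resolves to another node).

```shell
# Hypothetical example: mounting a SnapScale-style single-namespace export.
# "snapscale.example.com" and "/shared" are placeholders, not real names.
# If the namespace is fronted by a DNS name spanning multiple nodes,
# these standard NFS client options control how long I/O stalls when
# the node a client happens to be talking to goes away:
#   hard      - retry indefinitely rather than erroring out
#   timeo=150 - wait 15 seconds (in tenths of a second) before retransmitting
#   retrans=3 - retransmit 3 times before logging "server not responding"
mount -t nfs -o hard,timeo=150,retrans=3 snapscale.example.com:/shared /mnt/snapscale
```

Whether the array transparently moves that session to a surviving node, or the client simply retries until something answers, is exactly what I want clarified.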
Of course, my main line of questioning will revolve around the connectivity options to virtualized environments (especially VMware). I see that each chassis has the ability to connect 4x 10GbE, and I have some questions about the switch configuration, particularly around the aggregation. As always, you can bet that I will bring up cloud-based deployments and how they will handle linked clones.
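For context on the aggregation question: on the host/switch side, bundling multiple 10GbE links usually means something like an LACP (802.3ad) bond. The sketch below shows how that looks with plain iproute2 on a Linux host; it is my own illustration of the concept, the interface names are placeholders, and it says nothing about how Overland actually expects the 4x 10GbE ports per chassis to be cabled or configured. That is the question I want answered.

```shell
# Illustrative only: an LACP (802.3ad) aggregate of two 10GbE ports
# on a Linux host using iproute2. Requires a switch configured with a
# matching LACP port channel. "ens1f0"/"ens1f1" are placeholder NICs.
ip link add bond0 type bond mode 802.3ad miimon 100
ip link set ens1f0 down
ip link set ens1f0 master bond0
ip link set ens1f1 down
ip link set ens1f1 master bond0
ip link set bond0 up
ip addr add 192.0.2.10/24 dev bond0   # example address from TEST-NET-1
```

The interesting part is whether the array side does anything like this per node, or whether each 10GbE port is an independent path that the namespace balances across.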
I am also curious to find out about the SnapEDR technology that they use for replication and what such an implementation requires.
Also, per the data sheet, I see that they allow CLI access for management! Two thumbs up here.
Of course, I will be the one asking “When will we see SMB 3.0 and NFSv4 support?” if it isn’t addressed in the presentation.
Look for a follow up post on Overland Storage shortly after the presentation at Storage Field Day 4!