Hyperconvergence is great: it frees us from dependence on the centralized storage array (SAN) we've all grown accustomed to. Adding nodes to
scale out a vSAN environment is simple (depending on your switch infrastructure) and can be done on the fly.
Removing those nodes is a fairly straightforward process, but there are some cleanup steps that need to be done in order to repurpose those nodes for other environments.
You can validate that your node is no longer part of Virtual SAN Clustering by issuing the following command:
[root@machinename:~] esxcli vsan cluster get
You should see “Virtual SAN Clustering is not enabled on this host” if the host has been properly removed from Virtual SAN.
Ok? Great, you can now proceed to the fun part.
Warning: Make sure that your vSAN node is out of the cluster and that you've removed the vSAN vmkernel interface from the host!
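The exit sequence behind that warning can be sketched from the host's shell. This is a sketch, not a definitive procedure: vmk1 is an assumption (substitute whichever vmkernel interface carries your vSAN traffic), and if you need full data migration off the host, drive the evacuation from vSphere before the host leaves the cluster.

```shell
# Sketch of pulling a host out of vSAN, run from the host's console/SSH.
# vmk1 is a placeholder: use your own vSAN-tagged vmkernel interface.

# Enter maintenance mode first (handle any data evacuation from vSphere
# beforehand if you need the objects migrated rather than just accessible).
esxcli system maintenanceMode set --enable true

# Leave the vSAN cluster.
esxcli vsan cluster leave

# Untag the vSAN vmkernel interface.
esxcli vsan network remove --interface-name=vmk1

# Confirm: should report that Virtual SAN Clustering is not enabled.
esxcli vsan cluster get
```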
Once a system has been removed from the vSAN cluster, its storage is no longer
considered part of the array. But before you can reuse the node
for something else, you need to clean up the remnants vSAN has left
behind on the local disks.
If you attempt to put that node into a new or different environment, you'll
quickly notice that none of the disks are usable, since they still carry the old
vSAN partition information. You can confirm this by issuing the following
command on the node via the console or an SSH session:
[root@machinename:~] esxcli vsan storage list
This produces a list of all the claimed disks, each one identified in the
"naa.xxxxxxxxx" format. You'll also notice that each disk is identified as
being SSD or HDD through the "Is SSD:" line, with a value of true or false.
Cleaning these drives is done by collecting the identifier of a disk and
issuing the following removal command:
[root@machinename:~] esxcli vsan storage remove --disk=naa.xxxxxxx
(where xxxxxxx is the identifier of your disk)
This is for spinning disks only. If you want to remove an SSD, you need to
specify it with the --ssd switch instead. Again, you can derive this information
from the storage list mentioned above.
In a hybrid vSAN environment, each disk group requires at least one SSD. When
the disk groups are formed, the spinning drives are "tied" to that SSD and
remain dependent on it even while the node is standalone.
This can be used to our advantage when cleaning disks, since we don't have to
remove each HDD individually and can simply call out the SSD in the vsan storage
remove command! Collect all the "Is SSD: true" disks from your storage list
output and issue the esxcli vsan storage remove --ssd=naa.xxxxxxx command for
each one; removing the SSD tears down the whole disk group, freeing its HDDs
along with it.
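That SSD sweep can be scripted as a sketch. The stanza layout assumed below (an unindented naa.* header line followed by an indented "Is SSD: true" attribute) is taken from the description of the storage list output above; verify it against your own host's output before pointing the remove command at it.

```shell
#!/bin/sh
# list_ssds: read `esxcli vsan storage list` output on stdin and print
# the naa.* identifier of every device whose stanza reports "Is SSD: true".
# Assumed layout: unindented device header line, indented attribute lines.
list_ssds() {
    awk '/^naa\./ { dev = $1 }           # stanza header: remember the device ID
         /Is SSD: *true/ { print dev }'  # flag line: emit the remembered ID
}

# On the host, the whole sweep would then be:
#   esxcli vsan storage list | list_ssds | while read -r ssd; do
#       esxcli vsan storage remove --ssd="$ssd"
#   done
```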
Bam! Now all the disks are free of the old partitioning and ready to use in your new environment.
P.S. – You'll probably notice that the old datastore name sticks around on the hosts you
just removed (regardless of the fact that the disks have been cleaned). This seems
to be cosmetic, since creating a new vSAN cluster replaces it with the new
datastore name.