The CLI path is about managing the migration from the command line of NSX Manager. First of all, you need to log in to your NSX Manager – since you are in production and the managers are in a cluster, any one of them should do the job. If you are not certain, log in to the cluster IP or to the first node of your Management cluster.
The whole procedure is well documented, but there are some considerations to be made.
The CLI procedure goes like this (after logging in):
1. You run the vds-migrate precheck command and get the output:
Obtaining the Precheck Id can also be interpreted as a successful pass of the requirements for migrating the N-VDS to vDS. The migration scenario is also generated as part of this step – how many vDS switches will be configured, which uplinks will be switched, etc.
2. Next comes the vds-migrate show-topology command, which shows you the topology outcome of the migration from a networking perspective:
Here we have plenty of information and one intended mistake.
The Precheck Id should correspond to the Precheck Id obtained in the first step (in my example it does not, as I ran the command again after taking the screenshot, which resulted in a new Id).
vds_name is the name that will be assigned to your new vDS (to which you will migrate from N-VDS).
vmknic lists the vmknics that will be reattached from the N-VDS to this new vDS.
transport_node_id is the list of Transport Nodes (ESXi hosts) that will be attached to the vDS they are listed under.
The mistake here is that I have intentionally changed the configuration of the N-VDS on one of the hosts – the first host has the LLDP Profile set to “LLDP [Send Packet Enabled]”, while the second one has it Disabled. This will result in the creation of two vDS switches, each with one host attached. The catch is that the N-VDS configuration has to be identical on every host, or separate vDS switches with separate configurations will be created. This is now part of the documentation and is stated at the beginning of the article, but it hasn’t always been that way, and that’s why I’m mentioning it in the first place. If you detect and correct any unintended misconfiguration, you will have to start again from step 1.
After correcting the configuration and running the precheck again, I get a new topology:
Now we have 1 vDS with both nodes attached to it.
3. If everything is according to your expectations, you can apply the configuration by issuing the vds-migrate apply-topology command.
In this step the vDS gets created in vCenter:
Notice the generated name – this is something you are not able to specify along the way. If you do not like such a nice and easy-to-remember name, you will have to change it after the migration.
4. The migration itself is triggered by the vds-migrate esxi-cluster-name <name_of_your_ESXi_cluster> command. This kicks off the whole automated process: each host is put into maintenance mode, the switch migration magic happens, and the host is taken out of maintenance mode again:
After the migration there will be a “Network connectivity lost” alarm – that is expected, especially if the N-VDS was your only connected virtual switch. Here is the full list of events if your switch migration is successful:
!!! There is one big catch if you blindly follow my guide or the official one, and it is still NOT documented !!! When you initiate the migration with the
vds-migrate esxi-cluster-name <name_of_your_ESXI_cluster> command, a default timeout for putting the host into Maintenance Mode is used. The default timeout is 300 seconds. This means that when a host is not able to enter MM within 5 minutes, the migration moves on to the next host. It does not wait, nor is it written to wait, leaving your host in MM. However, you can specify the maintenance mode timeout by appending the sub-command
maintenance-timeout <number_in_seconds>, so the whole command would look something like:
vds-migrate esxi-cluster-name <name_of_your_ESXI_cluster> maintenance-timeout <number_in_seconds>
Do not forget the sub-command or you will regret it. Trust me, been there.
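To make that concrete, here is a sketch of the full migration command with the timeout made explicit – the cluster name “Compute-Cluster” and the 1800-second value are example values of mine, adjust them to your environment:

```
vds-migrate esxi-cluster-name Compute-Cluster maintenance-timeout 1800
```

With a generous timeout the migration waits for each host to actually enter maintenance mode instead of silently moving on after the default 5 minutes.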
My opinion, after all the trouble I had with this method, is that I wouldn’t use it. The control you have over the whole process is pretty much minimal, and the impact of a misconfigured or forgotten maintenance mode timeout can be pretty much devastating.
One last piece of advice – get familiar with all the sub-commands of the vds-migrate command:
apply-topology       VDS apply topology
delete-topology      VDS delete topology
disable-migrate      Disable NVDS to VDS migration
esxi-cluster-id      ESXi Cluster Id
esxi-cluster-name    ESXi Cluster Name
precheck             VDS migration precheck
show-topology        VDS show topology
tn-list              VDS migration tn list
The maintenance-timeout sub-command is also available for tn-list, as these commands trigger the migration.
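Putting it all together, the whole CLI flow from the NSX Manager is just the four commands covered above (the cluster name and timeout below are example values, not something the procedure prescribes):

```
vds-migrate precheck
vds-migrate show-topology
vds-migrate apply-topology
vds-migrate esxi-cluster-name Compute-Cluster maintenance-timeout 1800
```

Remember to re-run the precheck and re-check the topology after fixing any configuration mismatch, since each precheck run generates a new Precheck Id.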