Wednesday, June 19, 2019

Connecting my lab to Azure with NSX

How to set up a site-to-site VPN between NSX in my lab and Azure. This setup uses the most basic option to connect NSX to Azure: the Basic SKU for the Virtual Network Gateway.
The trigger was that I wanted to know how to configure the VPN on the Azure side and to experiment with both on-premises VMs and VMs in Azure. My network has a consumer-grade router that did not have the right tools to set up a VPN to Azure (or I just did not get it to work).
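For reference, the Azure side of such a setup can be built with a handful of Azure CLI commands. This is only a minimal sketch, not my exact configuration: the resource group, names, address prefixes and pre-shared key below are placeholders.

# Public IP and Virtual Network Gateway with the Basic SKU (no BGP support)
az network public-ip create --resource-group lab-rg --name vpngw-pip \
    --sku Basic --allocation-method Dynamic
az network vnet-gateway create --resource-group lab-rg --name lab-vpngw \
    --vnet lab-vnet --public-ip-address vpngw-pip \
    --gateway-type Vpn --vpn-type RouteBased --sku Basic

# Local network gateway: my public IP and the NSX network(s) behind the Edge
az network local-gateway create --resource-group lab-rg --name homelab-gw \
    --gateway-ip-address <my-public-ip> --local-address-prefixes 192.168.10.0/24

# The IPsec connection, with the same pre-shared key as configured on the NSX Edge
az network vpn-connection create --resource-group lab-rg --name homelab-to-azure \
    --vnet-gateway1 lab-vpngw --local-gateway2 homelab-gw --shared-key <pre-shared-key>

Creating the Virtual Network Gateway itself can easily take half an hour, which is a good moment to configure the NSX side.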
On the NSX side, the blog post from Chris Colotti helped a lot; it matches the capabilities of the Basic gateway in Azure perfectly. As the Basic gateway does not support BGP, I set up static routes on my home router pointing to the NSX Edge for the VNet in Azure (Azure does this automagically on its side).
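The static route itself is nothing special. On anything Linux-based it would look something like the line below; the Azure VNet prefix and the NSX Edge uplink address are placeholders, your router's syntax may differ.

# route the Azure VNet prefix to the uplink interface of the NSX Edge
ip route add 10.1.0.0/16 via 192.168.1.254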
A Linux VM in Azure:

A traceroute from my desktop to the Azure VM:
The NSX settings:


So, this is working well. But when using NSX and Azure, your networks are probably very dynamic, so the next step will be to replace this site-to-site VPN setup with one based on BGP instead of static routes.

vCenter appliance 6.5 to 6.7U1 upgrade

Yesterday I performed an upgrade of a vCenter Server Appliance to 6.7U1. All went well, but one thing in the upgrade process surprised me. I got the following message:
The default partition '/' has only 4.1 GB of available space.
So the root partition did not have enough space to hold the export of both the database and the historical data. You can provide an alternative path with enough free space to proceed.
When searching I found this article: "Export path provided does not have enough disk space" error upgrading to vCenter Server Appliance 6.0 (2113947), which has you add a new disk. Instead, I opted to look on the VCSA itself (with df -h) to see if there was an alternative:

Filesystem                                Size  Used Avail Use% Mounted on
devtmpfs                                   16G     0   16G   0% /dev
tmpfs                                      16G   32K   16G   1% /dev/shm
tmpfs                                      16G  684K   16G   1% /run
tmpfs                                      16G     0   16G   0% /sys/fs/cgroup
/dev/sda3                                  11G  6.0G  4.1G  60% /
tmpfs                                      16G  1.4M   16G   1% /tmp
/dev/sda1                                 120M   35M   80M  30% /boot
/dev/mapper/log_vg-log                     25G  3.6G   20G  16% /storage/log
/dev/mapper/seat_vg-seat                   50G  8.5G   39G  19% /storage/seat
/dev/mapper/autodeploy_vg-autodeploy       25G   57M   24G   1% /storage/autodeploy
/dev/mapper/imagebuilder_vg-imagebuilder   25G   45M   24G   1% /storage/imagebuilder
/dev/mapper/dblog_vg-dblog                 25G  3.5G   20G  15% /storage/dblog
/dev/mapper/db_vg-db                       25G  907M   23G   4% /storage/db
/dev/mapper/core_vg-core                   50G   52M   47G   1% /storage/core
/dev/mapper/netdump_vg-netdump            9.8G   23M  9.2G   1% /storage/netdump

/dev/mapper/updatemgr_vg-updatemgr         99G  3.3G   91G   4% /storage/updatemgr

So, plenty of space on the update manager disk, and that is what I used: /storage/updatemgr was filled in as the export path. This worked, and while the export ran I monitored the disk usage.
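Nothing fancy is needed for that; from an SSH session on the appliance (assuming the bash shell is enabled) something like this will do:

# show the usage of the export filesystem, refreshed every 10 seconds
watch -n 10 df -h /storage/updatemgr

When the export finished, the status was this: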

Filesystem                                Size  Used Avail Use% Mounted on
devtmpfs                                   16G     0   16G   0% /dev
tmpfs                                      16G   32K   16G   1% /dev/shm
tmpfs                                      16G  684K   16G   1% /run
tmpfs                                      16G     0   16G   0% /sys/fs/cgroup
/dev/sda3                                  11G  6.0G  4.1G  60% /
tmpfs                                      16G  139M   16G   1% /tmp
/dev/sda1                                 120M   35M   80M  30% /boot
/dev/mapper/log_vg-log                     25G  3.6G   20G  16% /storage/log
/dev/mapper/seat_vg-seat                   50G  8.5G   39G  19% /storage/seat
/dev/mapper/autodeploy_vg-autodeploy       25G   58M   24G   1% /storage/autodeploy
/dev/mapper/imagebuilder_vg-imagebuilder   25G   45M   24G   1% /storage/imagebuilder
/dev/mapper/dblog_vg-dblog                 25G  3.5G   20G  15% /storage/dblog
/dev/mapper/db_vg-db                       25G  907M   23G   4% /storage/db
/dev/mapper/core_vg-core                   50G   52M   47G   1% /storage/core
/dev/mapper/netdump_vg-netdump            9.8G   23M  9.2G   1% /storage/netdump

/dev/mapper/updatemgr_vg-updatemgr         99G  5.9G   88G   7% /storage/updatemgr

So, YMMV, but this looks like a very simple alternative to adding a whole new disk.
If you have a very large vSphere environment, it may be better to add a disk before starting the upgrade procedure.
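If you do go that route, it roughly comes down to the steps below. This is a generic sketch rather than the exact steps from the KB article; the device name and mount point are just examples, so first check which device the new virtual disk shows up as (for instance with lsblk).

# format and mount the newly added virtual disk, then use the mount point as the export path
mkfs.ext4 /dev/sdb
mkdir /export
mount /dev/sdb /export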

(this still works with vCenter 7.0 U2 upgrades)