ESXi 5.x/6.x deployment with Foreman

Unattended rollouts of ESXi 5.x and 6.x servers with Foreman are straightforward, but there are a few pitfalls that can be time-consuming to figure out. This howto aims to help you avoid them.

This howto covers, step by step, the actions necessary to end up with a basic ESXi installation. Customising the ESXi installation is beyond the scope of this document. Prerequisites for following this howto are a functional Foreman installation, a VMware ESXi/VMvisor 5.x or 6.x installation medium, sufficient access rights to perform the installation actions and some time.

For this howto Foreman 1.11.0RC2, running on Debian 8.3 'jessie', is used. Other installations might differ in details, like paths or the versions of the software used, but should be easy to adapt.

The actions we need to perform break down into four major categories:

  1. Preparation of the Installation Source
  2. Preparation of the Network Boot Loader
  3. Preparation of the Foreman Templates and Objects
  4. Testing

1 Preparation of the Installation Source

The first thing we need to do is to copy the content of the VMware installation medium to the TFTP server. We assume that we already copied the ISO image into the /tmp directory of our TFTP server and that the TFTP root directory is /srv/tftp. The ESXi installation files will be copied into the directory /srv/tftp/esx60:

mount -n -o loop /tmp/VMware-VMvisor-Installer-6.0.0.update01-3029758.x86_64.iso /mnt
cp -a /mnt /srv/tftp/esx60
umount /mnt

Next we patch the boot loader configuration file. We change all absolute paths to relative paths and set a path prefix, so that the loader can find the files on the TFTP server. A backup copy of the file is stored as boot.cfg.org:

cd /srv/tftp/esx60/
sed -i.org 's&/&&g' boot.cfg
echo 'prefix=../esx60/' >> boot.cfg
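
For illustration, the patched boot.cfg should then look roughly like this (the module list is abbreviated and the exact contents depend on the ESXi build):

bootstate=0
title=Loading ESXi installer
kernel=tboot.b00
kernelopt=runweasel
modules=b.b00 --- jumpstrt.gz --- useropts.gz --- k.b00 --- ...
build=
updated=0
prefix=../esx60/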

That is all we need to do with the VMware installation medium.

2 Preparation of the Network Boot Loader

One nuisance of the ESXi installation process is that only syslinux 3.86 is supported as a boot loader. Newer versions, like the syslinux 6.03 that ships with Debian 8.3, fail to load VMware's customised mboot.c32 module:

Syslinux with VMware mboot.c32

On the other hand, the mboot.c32 that comes with the newer syslinux fails to load and start the ESXi installation:

syslnx603-mboot_c.png

Even incorporating the contents of boot.cfg into Foreman's PXELinux template (i.e. the syslinux configuration file) doesn't help.

One way to work around this (until Foreman allows switching the boot file per operating system, host group or single host) is to get rid of the default syslinux and use version 3.86 for all tasks.

But what if you want to install other operating systems or software products that require a newer version of syslinux? Replacing the default loader is therefore not a good solution. The approach proposed in this howto is chain loading: syslinux 3.86 is used only for the ESXi deployment, while the default syslinux handles all other tasks. Of course, with this approach you can use several different network boot loaders for other tasks, if you need to.

The first step is to download the required syslinux 3.86 and install it on the TFTP server:

cd /tmp/
wget -q https://www.kernel.org/pub/linux/utils/boot/syslinux/3.xx/syslinux-3.86.tar.bz2
tar xjf syslinux-3.86.tar.bz2
mkdir /srv/tftp/syslinux386
cp syslinux-3.86/core/pxelinux.0 /srv/tftp/syslinux386/
find syslinux-3.86/com32/ -name \*.c32 -exec cp {} /srv/tftp/syslinux386 \;

The network boot loader, just like the default one, needs access to the configuration files provided by Foreman, so we create a symbolic link to the configuration file directory inside its installation directory:

ln -s ../pxelinux.cfg /srv/tftp/syslinux386/

On this Debian 8.3 based machine the default Foreman installation lacks the syslinux chain loader module, but it is part of the Debian syslinux package that is already installed. So we copy the module and its dependency libcom32.c32 into the TFTP root directory:

cp /usr/lib/syslinux/modules/bios/pxechn.c32 /srv/tftp/
cp /usr/lib/syslinux/modules/bios/libcom32.c32 /srv/tftp/

Since we are already logged in to our TFTP server, we create two additional files that we will need in the Foreman PXE template to keep track of the state our boot loader is in:

echo 'DEFAULT chainloadsyslnx386' > /srv/tftp/goto.cfg
echo 'DEFAULT installesx60' > /srv/tftp/syslinux386/goto.cfg
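
At this point the relevant part of the TFTP root should look like this (the default pxelinux.0 and the pxelinux.cfg directory were already deployed by Foreman; the exact layout may differ on other distributions):

/srv/tftp/
    pxelinux.0               (default syslinux 6.03 NBP, deployed by Foreman)
    pxelinux.cfg/            (configuration files written by Foreman)
    pxechn.c32
    libcom32.c32
    goto.cfg                 (DEFAULT chainloadsyslnx386)
    esx60/                   (ESXi installation files, incl. mboot.c32 and boot.cfg)
    syslinux386/
        pxelinux.0           (syslinux 3.86 NBP)
        *.c32                (syslinux 3.86 modules)
        pxelinux.cfg -> ../pxelinux.cfg
        goto.cfg             (DEFAULT installesx60)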

With this last step the preparation of the network boot loader is completed and we can proceed with creating the required Foreman templates.
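
Optionally, you can verify that all chain loading files are reachable via TFTP before moving on. A quick check from the TFTP server itself, assuming the tftp-hpa client is installed:

cd /tmp/
tftp localhost -c get goto.cfg
tftp localhost -c get syslinux386/goto.cfg
tftp localhost -c get syslinux386/pxelinux.0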

3 Preparation of the Foreman Templates and Objects

3.1 Installation Media

First we create a dummy installation medium. Technically this isn't needed, but Foreman doesn't let us proceed without providing one, and it tries to retrieve a kernel and an initial ramdisk from that source. The disadvantage of this dummy medium is that the HTTP requests to the server usually result in 404 errors and two empty files on the TFTP server.

There is a Foreman feature request that addresses this problem. For the moment we simply ignore it.

Since we don't have a special ESXi operating system family object, we set the operating system family to Red Hat. The name and path of the installation media are free to choose:

esx_installmedia.png
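
For example, one could use the purely illustrative values Name: ESXi 6.0 Dummy and Path: http://tftp.example.com/esx60.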

Now we can click Submit and go on with the next step.

3.2 Operating System Object

Next we create the operating system object. On the first panel we again choose Red Hat as our operating system family. Also make sure that you choose a supported hash function for password encryption, otherwise the installation will bail out. SHA512 should work:

esx_os1.png

On the installation media panel we select the freshly created dummy media:

esx_os2.png

For now we can click Submit and finish this dialog.

3.3 PXELinux Template

Next we need to create a template of type PXELinux. The name can be chosen freely; the type is, as said before, PXELinux. We associate this template with the operating system object that we created in a previous step. The next three screenshots illustrate these steps:

pxetemplate1.png

pxetemplate2.png

pxetemplate3.png

The most interesting part is, of course, the template code itself:

<%#
kind: PXELinux
name: ESXi 6.0 (PXELinux)
oses:
- ESXi 6.0
%>
#
# This file was deployed via '<%= @template_name %>' template
#


INCLUDE goto.cfg

LABEL chainloadsyslnx386
  kernel pxechn.c32
  append /syslinux386/pxelinux.0 -p /syslinux386/

LABEL installesx60
  kernel ../esx60/mboot.c32
  append -c boot.cfg ks=<%= foreman_url('provision') %>

First you will notice the INCLUDE statement. We created two files named goto.cfg in a previous step. The file in the TFTP root directory contains the statement DEFAULT chainloadsyslnx386; the other file, in the directory /syslinux386, contains the statement DEFAULT installesx60.

Depending on which variant of goto.cfg is included, syslinux executes either the actions below the label chainloadsyslnx386 or the actions below the label installesx60.

But how do we choose which goto.cfg is loaded? By default the include file is loaded from the TFTP root directory. This means that after booting, at first the actions below the label chainloadsyslnx386 are executed. They advise syslinux to load the chain loader module pxechn.c32. This module then loads another network boot programme (NBP), named /syslinux386/pxelinux.0, which is the NBP that comes with syslinux 3.86. With the -p option we advise this new NBP to find its configuration in the directory /syslinux386.

Now pxelinux.0 version 3.86 is running and it evaluates the above code again. Since it looks in /syslinux386 for configuration files, the second goto.cfg is found and the default label changes to installesx60. This advises syslinux to load the mboot.c32 module from the relative directory path ../esx60. This is VMware's customised module, which loads fine in syslinux 3.86. It is parametrised with its configuration file boot.cfg and a URL pointing to a kickstart provision file.

This little indirection allows us to boot ESXi with its vendor-supported syslinux 3.86, while every other deployment task still uses the default syslinux of the Foreman installation.
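
To recap, the complete boot flow looks like this:

  1. The client PXE boots the default pxelinux.0 (syslinux 6.03), which reads the Foreman-generated configuration and includes the goto.cfg from the TFTP root, selecting the label chainloadsyslnx386.
  2. pxechn.c32 chain loads /syslinux386/pxelinux.0 (syslinux 3.86) and points it at /syslinux386/ for its configuration.
  3. The new NBP reads the same Foreman-generated configuration through the pxelinux.cfg symbolic link, but now includes the goto.cfg from /syslinux386, selecting the label installesx60.
  4. VMware's mboot.c32 is loaded from ../esx60, reads boot.cfg and starts the ESXi installer with the kickstart URL.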

3.4 Partition Table

Next we need a partition table. This can be as complicated as you want or as simple as removing all existing partitions:

partitiontable.png

The template code:

<%#
kind: ptable
name: ESXi 6.0 (Partitioning)
oses:
- ESXi 6.0
%>

# Clear all partitions.
clearpart --firstdisk --overwritevmfs
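
If you want to be more selective, the ESXi scripted installation also supports limiting the operation to specific disks, for example (the device name is purely illustrative):

clearpart --drives=mpx.vmhba1:C0:T0:L0 --overwritevmfs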

3.5 Provision Template

As the last step we create a provision template. Analogous to the PXELinux template, the name can be chosen freely and we associate the template with the ESXi operating system object:

provisiontemplate1.png

provisiontemplate2.png

provisiontemplate3.png

The template code is basically a standard ESXi kickstart file. It is by no means a recipe for configuring an ESXi server; there is a lot you will want to change for ESXi servers in production.

<%#
kind: provision
name: ESXi 6.0 (Provision)
oses:
- ESXi 6.0
%>

# Accept the VMware End User License Agreement.
vmaccepteula


# Set the root password for the DCUI and Tech Support Mode.
# Make sure to use a supported hash function, like SHA512.
rootpw --iscrypted <%= root_pass %>


# Partitioning.
# Default: Clear all partitions.
<% if @dynamic -%>
%include /tmp/diskpart.cfg
<% else -%>
<%= @host.diskLayout %>
<% end -%>


# Install on first disk.
install --firstdisk --overwritevmfs


# Set the network to DHCP on the first network adapter.
network --bootproto=dhcp --device=vmnic0


<% if @host.params['PerformReboot'] == '1' -%>
# Reboot after installation.
reboot
<% end -%>


# First boot script.
%firstboot --interpreter=busybox

# Do stuff to further customise your installation.

# Inform the build system that we are done.
# This is the ideal way to do it, but it doesn't work with Foreman.
#echo "Informing Foreman that we are built"
#Older Foreman versions: wget -q -O /dev/null <%= foreman_url %>
#wget -q -O /dev/null <%= foreman_url('built') %>



%post --interpreter=busybox

# Fix DNS.
echo "nameserver <%= @host.subnet.dns_primary %>" > /etc/resolv.conf

# Inform the build system that we are done.
# This should be done in the firstboot script. But
# that isn't supported by Foreman at the moment.

echo "Informing Foreman that we are built"
#Older Foreman versions: wget -q -O /dev/null <%= foreman_url %>
wget -q -O /dev/null <%= foreman_url('built') %>

exit 0
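
If you prefer static network settings over DHCP, the network line can be fed from Foreman's host attributes instead. A sketch, assuming IP address, subnet mask, gateway and DNS are maintained in Foreman:

network --bootproto=static --device=vmnic0 --ip=<%= @host.ip %> --netmask=<%= @host.subnet.mask %> --gateway=<%= @host.subnet.gateway %> --nameserver=<%= @host.subnet.dns_primary %> --hostname=<%= @host.name %>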

The successful finish of the installation should really be reported from a first boot script, after the machine has booted into the installed system for the first time. This would also ensure that further customisations are covered by this 'guard'.

The problem is that Foreman does not support local reboots during the installation phase. There is a Foreman feature request to address this.

After finishing the provision template we go back to the operating system object we created earlier and assign the freshly created partition table, the PXELinux template and the provision template:

esx_os3.png

esx_os4.png

Now all preparation steps are done and we can provision a machine and test our setup.

4 Testing

For testing our setup we create a new host in Foreman (or re-use an existing one) and configure it. Besides the other essential settings, we select the ESXi operating system and the associated installation medium and partition table:

clt.png

After we save the configuration changes and boot the client, it should perform a PXE boot and start loading the installation files:

esx1.png

After a few minutes the installation should have finished:

esx5.png

The build status, shown in the property panel of the host, should be Installed now:

host_status.png

As a last step you should check the log files on the ESXi host.
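
For example, the installer log can be checked for errors (a sketch; the log file name may vary between ESXi releases):

grep -i error /var/log/esxi_install.log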

5 Feedback

Feel free to contact me (pkgs@c0t0d0s0.de) and give me feedback about this howto. Do you think it is useful? Or is it the most superfluous thing you have seen in ages? Perhaps you have some feature requests?
