# Rocky and Palette eXtended Kubernetes

This guide teaches you how to use the CAPI Image Builder tool in an airgapped environment to create a custom Rocky Linux image with Palette eXtended Kubernetes (PXK) for VMware vSphere, and to use that image in a cluster profile. You can create the image from either a Rocky Linux ISO or an existing Rocky Linux VM.
## Prerequisites

### Rocky ISO
- Access to the VMware vSphere environment, including credentials and permission to create virtual machines.

- An airgapped instance of Palette or VerteX deployed in VMware vSphere.

- SSH access to the VMware vSphere airgap support VM used to deploy the airgapped instance of Palette or VerteX.

- The following artifacts must be available in the root home directory of the airgap support VM. You can download the files on a system with internet access and then transfer them to your airgap environment.

  - CAPI Image Builder compressed archive file. Contact your Palette support representative to obtain the latest version of the tool. This guide uses version 4.6.24 as an example.

  - Rocky Linux ISO version 8 or 9. Ensure you download the `x86_64-dvd.iso` file and not the `x86_64-boot.iso` file, and make sure you have its SHA256 checksum available. This guide uses Rocky 8 as an example. Refer to the Configuration Reference page for details on supported operating systems.

  - Airgap Kubernetes pack binary of the version for which the image will be generated. This guide uses version `1.30.4` as an example. Refer to the Additional Packs page for instructions on how to download the binary. Additionally, check the supported Kubernetes versions in the Compatibility Matrix.

  - (Optional) Any custom Bash scripts (`.sh` files) that you want to execute when creating your Rocky image. Custom scripts are supported beginning with CAPI Image Builder version 4.6.23.
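  If you downloaded the ISO on an internet-connected system, you can confirm that the transfer to the airgap support VM did not corrupt it by comparing SHA256 checksums. A minimal sketch, assuming the example file name used in this guide:

  ```shell
  # Compute the SHA256 checksum of the transferred ISO. The file name below is
  # the example used in this guide; adjust it to match your download.
  ISO=rocky-8-latest-x86_64-dvd.iso
  if [ -f "$ISO" ]; then
    sha256sum "$ISO"
  else
    echo "ISO not found: $ISO"
  fi
  ```

  Compare the printed hash against the checksum published alongside the ISO. You will also need this value for the `iso_checksum` parameter later in this guide.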
### Rocky VM
- Access to the VMware vSphere environment, including credentials and permission to create virtual machines.

- An airgapped instance of Palette or VerteX deployed in VMware vSphere.

- SSH access to the VMware vSphere airgap support VM used to deploy the airgapped instance of Palette or VerteX.

- The following artifacts must be available in the root home directory of the airgap support VM. You can download the files on a system with internet access and then transfer them to your airgap environment.

  - CAPI Image Builder compressed archive file. Contact your Palette support representative to obtain the latest version of the tool. This guide uses version 4.6.24 as an example.

  - Airgap Kubernetes pack binary of the version for which the image will be generated. This guide uses version `1.30.4` as an example. Refer to the Additional Packs page for instructions on how to download the binary and upload it to your registry. Additionally, check the supported Kubernetes versions in the Compatibility Matrix.

  - (Optional) Any custom Bash scripts (`.sh` files) that you want to execute when creating your Rocky image. Custom scripts are supported beginning with CAPI Image Builder version 4.6.23.
- An existing VM with an OS of Rocky Linux 8 or 9 installed. This VM will be used as the base of your image and must meet the following requirements:

  - The following tools installed:

    - conntrack-tools
    - cloud-init
    - cloud-utils-growpart
    - iptables
    - python2-pip
    - python3
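    On Rocky Linux, these tools can typically be installed with `dnf`. A minimal sketch, assuming the default repositories are reachable from your VM (depending on your setup, `python2-pip` may first require enabling an additional repository):

    ```shell
    # Install the tools required for the image build.
    # Package names are taken from the list above.
    sudo dnf install --assumeyes \
      conntrack-tools cloud-init cloud-utils-growpart iptables python2-pip python3
    ```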
  - A `builder` user with a password of `builder`. This is required by the `vsphere-clone` builder. The `builder` user must be granted passwordless sudo privileges.

    **`builder` user and password privileges**

    1. On your Rocky Linux VM, add a `builder` user.

       ```shell
       sudo useradd builder
       ```

    2. Set the password for the `builder` user to `builder`.

       ```shell
       echo 'builder:builder' | sudo chpasswd
       ```

    3. Assign passwordless sudo privileges to the `builder` user and set the appropriate permissions on the sudoers file.

       ```shell
       echo 'builder ALL=(ALL) NOPASSWD: ALL' | sudo tee /etc/sudoers.d/builder
       sudo chmod 0440 /etc/sudoers.d/builder
       ```
  - SSH password authentication enabled in `/etc/ssh/sshd_config` by setting `PasswordAuthentication` to `yes`. You must either restart `sshd` or reboot your system for the change to take effect.
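    A minimal sketch of enabling this non-interactively. The `sed` expression assumes the file contains a single commented or uncommented `PasswordAuthentication` line; review the file afterward to confirm the result.

    ```shell
    # Enable password authentication and restart sshd so the change takes effect.
    sudo sed -i 's/^#\?PasswordAuthentication .*/PasswordAuthentication yes/' /etc/ssh/sshd_config
    sudo systemctl restart sshd
    ```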
  - SSH password authentication enabled for `cloud-init` by setting `ssh_pwauth` to `true`. This is required to prevent `cloud-init` from overwriting `PasswordAuthentication yes` in `/etc/ssh/sshd_config` when booting the cloned VM. We recommend creating a separate file that explicitly sets `ssh_pwauth: true`.

    ```shell
    sudo tee /etc/cloud/cloud.cfg.d/99-enable-ssh-pwauth.cfg << EOF
    ssh_pwauth: true
    EOF
    ```

  - IPv4 packet forwarding enabled.
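    The guide does not prescribe a specific method; one common approach is a persistent `sysctl` drop-in file. The file name below is an arbitrary example.

    ```shell
    # Persist the setting and apply it immediately.
    echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-ip-forward.conf
    sudo sysctl --system
    ```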
  - firewalld disabled.

    ```shell
    sudo systemctl disable --now firewalld
    ```
  - `/tmp` mounted with the ability to execute binaries and scripts.

    **Check `/tmp` status**

    1. Check the mount status of `/tmp`. Look for a status of `noexec`.

       ```shell
       mount | grep '/tmp'
       ```

       Example output:

       ```text
       tmpfs on /tmp type tmpfs (rw,nosuid,nodev,noexec,relatime,size=2G)
       ```

       :::tip
       If `/tmp` is not displayed in the mount output, it is likely a regular directory on the filesystem and not a separate mount. Issue the following command to confirm the mount point of `/tmp`. If the `Mounted on` location is `/`, no action is required.

       ```shell
       df --human-readable /tmp
       ```

       Example output:

       ```text
       Filesystem           Size  Used Avail Use% Mounted on
       /dev/mapper/rl-root   70G  3.7G   67G   6% /
       ```
       :::

    2. If `/tmp` has a status of `noexec`, use your preferred text editor to edit the file `/etc/fstab` and set `/tmp` to `exec`.

       ```shell
       vi /etc/fstab
       ```

       Example output:

       ```text
       /dev/mapper/rl-root /         xfs   defaults 0 0
       UUID=3b068723-b40a-4c10-ac6d-00271cd4d3a4 /boot xfs defaults 0 0
       UUID=F867-A7CE      /boot/efi vfat  umask=0077,shortname=winnt 0 2
       /dev/mapper/rl-home /home     xfs   defaults 0 0
       /dev/mapper/rl-swap none      swap  defaults 0 0
       tmpfs               /tmp      tmpfs defaults,nosuid,nodev,exec,size=2G 0 0
       ```

    3. Remount all filesystems in `/etc/fstab`.

       ```shell
       sudo mount --all
       ```

    4. Confirm the mount status of `/tmp` is set to `exec`.

       ```shell
       mount | grep '/tmp'
       ```

       Example output:

       ```text
       tmpfs on /tmp type tmpfs (rw,nosuid,nodev,exec,relatime,size=2G)
       ```
  - If your system has been hardened using a Security Technical Implementation Guide (STIG) policy, you may need to remediate the following:

    - SELinux may prevent binaries from executing, including `cloud-init` scripts. We recommend setting the SELinux status to `permissive` or `disabled` until the image building process is complete.

      **Check SELinux status**

      1. Check the status of SELinux.

         ```shell
         getenforce
         ```

         Example output:

         ```text
         Enforcing
         ```

      2. If the status is `Enforcing`, use your preferred text editor to open the SELinux config file and set `SELINUX` to either `permissive` or `disabled`.

         ```shell
         vi /etc/selinux/config
         ```

         Example output:

         ```text
         # This file controls the state of SELinux on the system.
         # SELINUX= can take one of these three values:
         #     enforcing - SELinux security policy is enforced.
         #     permissive - SELinux prints warnings instead of enforcing.
         #     disabled - No SELinux policy is loaded.
         SELINUX=permissive
         # SELINUXTYPE= can take one of these three values:
         #     targeted - Targeted processes are protected,
         #     minimum - Modification of targeted policy. Only selected processes are protected.
         #     mls - Multi Level Security protection.
         SELINUXTYPE=targeted
         ```
    - fapolicyd may prevent certain applications from executing, such as `containerd-shim-runc-v2`. We recommend disabling `fapolicyd` until the image building process is complete.

      ```shell
      sudo systemctl disable --now fapolicyd
      ```
- A snapshot of your VM created once all other prerequisites are met. This is required by the `vsphere-clone` builder.
## Build Custom Image

### Rocky ISO
1. Open a terminal window and SSH into the airgap support VM using the command below. Replace `<path-to-private-key>` with the path to the private SSH key, `<vm-username>` with your airgap support VM username, and `<airgap-vm-hostname>` with the IP address or Fully Qualified Domain Name (FQDN) of the airgap support VM (for example, `example-vm.palette.dev`).

    :::info
    Whether you use the IP address or FQDN depends on the hostname used when setting up your airgap support VM. If you used an existing RHEL VM to set up your VM, this is always the FQDN; if you used an OVA, it depends on the hostname used when invoking the command `/bin/airgap-setup.sh <airgap-vm-hostname>`.
    :::

    ```shell
    ssh -i <path-to-private-key> <vm-username>@<airgap-vm-hostname>
    ```
2. Switch to the `root` user account to complete the remaining steps.

    ```shell
    sudo --login
    ```
3. Ensure all artifacts listed in the Prerequisites section are available in the `root` home directory of the airgap support VM.

    ```shell
    ls -l
    ```

    Example output:

    ```text
    -rw-r--r-- 1 root root  183310952 Nov 17 23:59 airgap-pack-kubernetes-1.30.4.bin
    drwx------ 2 root root       4096 Jun 30 14:37 bin
    -rw-r--r-- 1 root root 3973587887 Oct 21 12:02 capi-image-builder-v4.6.24.tgz
    drwxr-xr-x 2 root root       4096 Apr 12  2024 prep
    -rw-r--r-- 1 root root 1086324736 Nov 17 22:03 rocky-8-latest-x86_64-dvd.iso
    drwx------ 3 root root       4096 Apr  1  2024 snap
    ```
4. Upload the airgap Kubernetes pack to the airgap registry. Replace `<version>` with your Kubernetes version.

    ```shell
    chmod +x airgap-pack-kubernetes-<version>.bin && \
    ./airgap-pack-kubernetes-<version>.bin
    ```
5. Set your CAPI Image Builder version tag as a variable. The version must match the `capi-image-builder` compressed TAR file.

    ```shell
    CAPI_IMAGE_BUILDER_VERSION=<capi-image-builder-version-tag>
    echo CAPI Image Builder version: $CAPI_IMAGE_BUILDER_VERSION
    ```

    Example output:

    ```text
    CAPI Image Builder version: v4.6.24
    ```
6. Extract the CAPI Image Builder file.

    ```shell
    tar --extract --gzip --file=capi-image-builder-$CAPI_IMAGE_BUILDER_VERSION.tgz
    ```

    The `root` home directory of your airgap support VM should now contain the following artifacts.

    ```shell
    ls -l
    ```

    Example output:

    ```text
    -rw-rw-r-- 1 ubuntu ubuntu        928 Apr  8  2025 README
    -rwxr-xr-x 1 root   root    183310952 Nov 17 23:59 airgap-pack-kubernetes-1.30.4.bin
    drwx------ 2 root   root         4096 Jun 30 14:37 bin
    -rw-rw-r-- 1 ubuntu ubuntu 2471340032 May 16  2025 capi-builder-v4.6.24.tar
    -rw-r--r-- 1 root   root   3973587887 Oct 21 12:02 capi-image-builder-v4.6.24.tgz
    drwxrwxr-x 2 ubuntu ubuntu       4096 Aug 13  2024 kickstart
    drwxrwxr-x 3 ubuntu ubuntu       4096 Apr  8  2025 output
    drwxr-xr-x 2 root   root         4096 Apr 12  2024 prep
    -rw-r--r-- 1 root   root   1086324736 Nov 17 22:03 rocky-8-latest-x86_64-dvd.iso
    drwxrwxr-x 3 ubuntu ubuntu      12288 Oct 21 11:51 rpmrepo
    drwx------ 3 root   root         4096 Apr  1  2024 snap
    -rw-rw-r-- 1 ubuntu ubuntu  602989568 Apr  1  2025 yum-repo-v1.0.0.tar
    ```
7. Update the permissions of the `output` folder to allow the CAPI Image Builder tool to create directories and files within it.

    ```shell
    chmod a+rwx output
    ```
8. Move the Rocky Linux ISO file to the `output` folder.

    ```shell
    mv rocky-8-latest-x86_64-dvd.iso output/
    ```
9. Copy the `ks.cfg.rocky8` or `ks.cfg.rocky9` file from the `kickstart` folder to the `output` folder as `ks.cfg`. Replace `<version>` with `8` or `9`, depending on the OS version of your Rocky image.

    ```shell
    cp kickstart/ks.cfg.rocky<version> output/ks.cfg
    ```
10. Copy the `server.crt` file from the `/opt/spectro/ssl/` directory to the `rpmrepo` folder.

    ```shell
    cp /opt/spectro/ssl/server.crt rpmrepo/
    ```
11. Open the `imageconfig` template file in an editor of your choice and fill in the required parameters. The `imageconfig` file is used to personalize the base CAPI image for your cluster, and you can alter it to fit your needs. This includes specifying the OS type, Kubernetes version, whether the image should be FIPS compliant, and more.

    The following example configures a Rocky 8 CAPI image in an airgapped environment. Replace all VMware-related placeholders in the `Define Vmware infra details` section with values from your VMware vSphere environment. Replace `<iso-checksum>` with the SHA256 checksum of your Rocky Linux ISO. Additionally, replace `<airgap-vm-hostname>` with the hostname or IP address of your airgap support VM.

    For a complete list of parameters, refer to the Configuration Reference page. Additionally, refer to the Compatibility Matrix for a list of supported Kubernetes versions and their corresponding dependencies.

    :::warning
    If you used the airgap support VM hostname during the execution of the `airgap-setup.sh` script, ensure you enter the VM hostname in the `airgap_ip` parameter. The same applies if you used the VM IP address.
    :::

    ```shell
    vi ./output/imageconfig
    ```

    Example `imageconfig` file:

    ```text
    # Define the OS type and version here
    # os_version=rhel-8 | rhel-9 | rockylinux-8 | rockylinux-9
    # image_type=standard | fips
    os_version=rockylinux-8
    image_type=standard

    # Define the image name
    # image_name=<Final Image Name to create>
    image_name=rocky-8

    # Define the Cloud type
    # cloud_type=vmware
    cloud_type=vmware

    # Define the Component Versions
    #
    # containerd, crictl, and cni version updates should be done
    # only if the images are available in the upstream repositories
    k8s_version=1.30.4
    cni_version=1.3.0
    containerd_version=1.7.13
    crictl_version=1.28.0

    # Define RHEL subscription credentials (if os_version is rhel)
    # used while image creation to use package manager
    # rhel_subscription_user=
    # rhel_subscription_pass=

    # Define ISO url (if image is rhel or rockylinux)
    iso_name=rocky-8-latest-x86_64-dvd.iso
    iso_checksum=<iso-checksum>

    # Define AWS infra details
    aws_access_key=
    aws_secret_key=

    # Define Vmware infra details
    vcenter_server=<vcenter-server>
    vcenter_user=<vcenter-user>
    vcenter_password=<vcenter-password>
    vcenter_datacenter=<vcenter-datacenter>
    vcenter_datastore=<vcenter-datastore>
    vcenter_network=<vcenter-network>
    vcenter_folder=<vcenter-folder>
    vcenter_cluster=<vcenter-cluster>
    vcenter_resource_pool=<vcenter-resource-pool>
    # Optional: for OVA based builds
    vcenter_template=

    # Define Azure infra details
    azure_client_id=
    azure_client_secret=
    azure_subscription_id=
    azure_location=
    azure_storage_account=
    azure_resource_group=

    # Define GCE infra details
    google_app_creds=
    gcp_project_id=

    # Airgap Configuration
    airgap=true
    airgap_ip=<airgap-vm-hostname>
    ```

    :::tip
    To build a FIPS-compliant image, set `image_type` to `fips`.
    :::

    Once you are finished making changes, save and exit the file.
12. (Optional) Add any custom Bash scripts (`.sh` files) that you want to run before or after the build process. This feature is available beginning with CAPI Image Builder version 4.6.23. If any scripts are found in the relevant directories, they are copied to an Ansible playbook.

    Move any scripts that you want executed before the build process to the `output/custom_scripts/pre` directory, and any scripts that you want executed after the build process to the `output/custom_scripts/post` directory. Ensure the scripts are executable.

    Below is an example of moving a pre-install script to the `pre` directory and making it executable.

    ```shell
    mv sample-script.sh output/custom_scripts/pre/sample-script.sh
    chmod +x output/custom_scripts/pre/sample-script.sh
    ```
13. Load the CAPI Image Builder container image with the command below.

    **Docker**

    ```shell
    docker load < capi-builder-$CAPI_IMAGE_BUILDER_VERSION.tar
    ```

    **Podman**

    ```shell
    podman load < capi-builder-$CAPI_IMAGE_BUILDER_VERSION.tar
    ```
14. Load the Yum container image with the command below. The Yum container is used to serve the packages required by the CAPI Image Builder.

    **Docker**

    ```shell
    docker load < yum-repo-v1.0.0.tar
    ```

    **Podman**

    ```shell
    podman load < yum-repo-v1.0.0.tar
    ```
15. Tag the CAPI Image Builder and Yum container images.

    **Docker**

    ```shell
    docker tag localhost/v1.0.0:latest localhost/yum-repo:v1.0.0
    docker tag localhost/$CAPI_IMAGE_BUILDER_VERSION:latest localhost/capi-builder:$CAPI_IMAGE_BUILDER_VERSION
    ```

    **Podman**

    ```shell
    podman tag localhost/v1.0.0:latest localhost/yum-repo:v1.0.0
    podman tag localhost/$CAPI_IMAGE_BUILDER_VERSION:latest localhost/capi-builder:$CAPI_IMAGE_BUILDER_VERSION
    ```
16. Confirm that both container images were loaded and tagged correctly.

    **Docker**

    ```shell
    docker images
    ```

    **Podman**

    ```shell
    podman images
    ```

    Example output:

    ```text
    REPOSITORY               TAG       IMAGE ID       CREATED       SIZE
    localhost/capi-builder   v4.6.24   2adff15eee2d   7 days ago    2.09GB
    localhost/yum-repo       v1.0.0    b03879039936   6 weeks ago   603 MB
    ```
17. Start the Yum container and assign its ID to the `BUILD_ID_YUM` variable. The following command mounts the `/root/rpmrepo` directory on your airgap support VM to the `/var/www/html/rpmrepo` directory of the Yum container, runs the container on port 9000 of your VM, and detaches the container's output from the terminal.

    **Docker**

    ```shell
    BUILD_ID_YUM=$(docker run --volume /root/rpmrepo:/var/www/html/rpmrepo --publish 9000:80 --detach localhost/yum-repo:v1.0.0)
    ```

    **Podman**

    ```shell
    BUILD_ID_YUM=$(podman run --volume /root/rpmrepo:/var/www/html/rpmrepo --publish 9000:80 --detach localhost/yum-repo:v1.0.0)
    ```
18. View the Yum container logs. Monitor the output until a `Pool finished` message appears, indicating that the process completed successfully.

    **Docker**

    ```shell
    docker logs --follow $BUILD_ID_YUM
    ```

    **Podman**

    ```shell
    podman logs --follow $BUILD_ID_YUM
    ```

    Example output:

    ```text
    # Output condensed for readability
    Directory walk started
    Directory walk done - 53 packages
    Temporary output repo path: /var/www/html/rpmrepo/.repodata/
    Preparing sqlite DBs
    Pool started (with 5 workers)
    Pool finished
    ```
19. Issue the command below to start the CAPI Image Builder container and assign the container ID to the `BUILD_ID_CAPI` variable. This command starts the container on the same network as your airgap support VM, mounts the `/root/output` directory of your VM to the `/home/imagebuilder/output` directory of the CAPI Image Builder container, and detaches the container's output from the terminal.

    The tool will create and configure a VM with Dynamic Host Configuration Protocol (DHCP) in your VMware vSphere environment using the `image_name` defined in `imageconfig`. The tool will then generate a Rocky image from the VM and save it to the `output` directory.

    **Docker**

    ```shell
    BUILD_ID_CAPI=$(docker run --net=host --volume /root/output:/home/imagebuilder/output --detach localhost/capi-builder:$CAPI_IMAGE_BUILDER_VERSION)
    ```

    **Podman**

    ```shell
    BUILD_ID_CAPI=$(podman run --net=host --volume /root/output:/home/imagebuilder/output --detach localhost/capi-builder:$CAPI_IMAGE_BUILDER_VERSION)
    ```

    If you need the VM to use static IP placement instead of DHCP, follow the steps described below.

    **CAPI Image Builder with Static IP Placement**

    1. Open the `ks.cfg` file located in the `output` folder. Find the network line `network --bootproto=dhcp --onboot=on --ipv6=auto --activate --hostname=capv.vm` and replace it with the configuration below.

       ```text
       network --bootproto=static --ip=<vcenter-static-ip-address> --netmask=<vcenter-netmask> --gateway=<vcenter-gateway> --nameserver=<vcenter-nameserver>
       ```

       Replace `<vcenter-static-ip-address>` with a valid IP address from your VMware vSphere environment, and `<vcenter-netmask>`, `<vcenter-gateway>`, and `<vcenter-nameserver>` with the correct values from your VMware vSphere environment. The `<vcenter-netmask>` parameter must be specified in dotted decimal notation, for example, `--netmask=255.255.255.0`.

       Once you are finished making changes, save and exit the file.

    2. Issue the command below to start the CAPI Image Builder container and assign the container ID to the `BUILD_ID_CAPI` variable. The tool will use the `imageconfig` file to create and configure a VM with static IP placement in your VMware vSphere environment.

       **Docker**

       ```shell
       BUILD_ID_CAPI=$(docker run --net=host --volume /root/output:/home/imagebuilder/output --detach localhost/capi-builder:$CAPI_IMAGE_BUILDER_VERSION)
       ```

       **Podman**

       ```shell
       BUILD_ID_CAPI=$(podman run --net=host --volume /root/output:/home/imagebuilder/output --detach localhost/capi-builder:$CAPI_IMAGE_BUILDER_VERSION)
       ```
20. View the CAPI Image Builder container logs and monitor the build progress. If you added any custom scripts in the earlier optional step, their output is displayed in the build log.

    **Docker**

    ```shell
    docker logs --follow $BUILD_ID_CAPI
    ```

    **Podman**

    ```shell
    podman logs --follow $BUILD_ID_CAPI
    ```

    :::info
    It may take a few minutes for the logs to start being displayed, and the build takes several minutes to complete.
    :::
21. Once the build is complete, the Rocky CAPI image is saved to the `output` directory under the `image_name` specified in the `imageconfig` file. Issue the following command to confirm that the build files are present in the `output` directory.

    ```shell
    ls -l output/<image_name>
    ```

    Example output:

    ```text
    -rw-r--r-- 1 ubuntu ubuntu       1203 Nov 18 02:48 packer-manifest.json
    -rw-r--r-- 1 ubuntu ubuntu 3571576320 Nov 18 02:48 rocky-8-disk-0.vmdk
    -rw-r--r-- 1 ubuntu ubuntu       9507 Nov 18 02:48 rocky-8.ovf
    -rw-r--r-- 1 ubuntu ubuntu        212 Nov 18 02:48 rockylinux-8-kube-v1.30.4.mf
    -rw-r--r-- 1 ubuntu ubuntu 3571630080 Nov 18 02:49 rockylinux-8-kube-v1.30.4.ova
    -rw-r--r-- 1 ubuntu ubuntu         64 Nov 18 02:49 rockylinux-8-kube-v1.30.4.ova.sha256
    -rw-r--r-- 1 ubuntu ubuntu      41044 Nov 18 02:48 rockylinux-8-kube-v1.30.4.ovf
    ```
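    If you plan to transfer the OVA elsewhere, you can confirm its integrity against the generated `.sha256` file. A minimal sketch, assuming the example file names above and that the `.sha256` file contains only the bare hash:

    ```shell
    # Print the computed checksum of the OVA and the recorded checksum; they should match.
    cd output/rocky-8
    sha256sum rockylinux-8-kube-v1.30.4.ova
    cat rockylinux-8-kube-v1.30.4.ova.sha256
    ```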
22. Locate the new Rocky image VM in your VMware vSphere environment. Right-click the VM and select **Clone > Clone to Template**.

    :::tip
    Once the image is built, you can connect to the image VM via SSH. The following steps are based on guidance from the Image Builder Book.

    **Connect to image VM with SSH**

    1. On a machine with govc installed and configured with your VMware vSphere credentials, clone the Kubernetes Image Builder repository.

    2. Navigate to the `capi` directory of the Kubernetes Image Builder repository.

       ```shell
       cd ./image-builder/images/capi/
       ```

    3. Run the Kubernetes Image Builder `image-govc-cloudinit.sh` script and pass in the `image_name` of your Rocky image VM as specified in the `imageconfig` file. This creates a snapshot of the image and updates it with the data located in the `cloudinit` directory. Ensure the VM is off before running the command.

       ```shell
       ./hack/image-govc-cloudinit.sh <image_name>
       ```

       Example output:

       ```text
       image-govc-cloudinit: creating snapshot 'new'
       image-govc-cloudinit: initializing cloud-init data
       image-govc-cloudinit: creating snapshot 'cloudinit'
       ```

    4. Set read-write permissions for the `id_rsa.capi` file.

       ```shell
       chmod 600 cloudinit/id_rsa.capi
       ```

    5. Power on the Rocky image VM.

    6. Connect to the VM via SSH using your `id_rsa.capi` key. Replace `<vm-ip>` with the IP of your Rocky image VM.

       ```shell
       ssh -i cloudinit/id_rsa.capi capv@<vm-ip>
       ```
    :::
23. Enter a VM template name, choose a location for the template, and select **Next**.

    :::info
    The name and location do not have to match those defined in the `imageconfig` file. The same applies to the remaining locations and resources specified in the following steps.
    :::

24. Choose a compute resource and select **Next**.

25. Choose a storage location and select **Next**.

26. Review your template configuration and select **Finish** to convert the VM into a Rocky image template that you can reference when creating your cluster profile.
### Rocky VM

1. Open a terminal window and SSH into the airgap support VM using the command below. Replace `<path-to-private-key>` with the path to the private SSH key, `<vm-username>` with your airgap support VM username, and `<airgap-vm-hostname>` with the IP address or Fully Qualified Domain Name (FQDN) of the airgap support VM (for example, `example-vm.palette.dev`).

    :::info
    Whether you use the IP address or FQDN depends on the hostname used when setting up your airgap support VM. If you used an existing RHEL VM to set up your VM, this is always the FQDN; if you used an OVA, it depends on the hostname used when invoking the command `/bin/airgap-setup.sh <airgap-vm-hostname>`.
    :::

    ```shell
    ssh -i <path-to-private-key> <vm-username>@<airgap-vm-hostname>
    ```
2. Switch to the `root` user account to complete the remaining steps.

    ```shell
    sudo --login
    ```
3. Ensure all artifacts listed in the Prerequisites section are available in the `root` home directory of the airgap support VM.

    ```shell
    ls -l
    ```

    Example output:

    ```text
    -rw-r--r-- 1 root root  183310952 Nov 17 23:59 airgap-pack-kubernetes-1.30.4.bin
    drwx------ 2 root root       4096 Jun 30 14:37 bin
    -rw-r--r-- 1 root root 3973587887 Oct 21 12:02 capi-image-builder-v4.6.24.tgz
    drwxr-xr-x 2 root root       4096 Apr 12  2024 prep
    drwx------ 3 root root       4096 Apr  1  2024 snap
    ```
4. Upload the airgap Kubernetes pack to the airgap registry. Replace `<version>` with your Kubernetes version.

    ```shell
    chmod +x airgap-pack-kubernetes-<version>.bin && \
    ./airgap-pack-kubernetes-<version>.bin
    ```
5. Set your CAPI Image Builder version tag as a variable. The version must match the `capi-image-builder` compressed TAR file.

    ```shell
    CAPI_IMAGE_BUILDER_VERSION=<capi-image-builder-version-tag>
    echo CAPI Image Builder version: $CAPI_IMAGE_BUILDER_VERSION
    ```

    Example output:

    ```text
    CAPI Image Builder version: v4.6.24
    ```
6. Extract the CAPI Image Builder file.

    ```shell
    tar --extract --gzip --file=capi-image-builder-$CAPI_IMAGE_BUILDER_VERSION.tgz
    ```

    The `root` home directory of your airgap support VM should now contain the following artifacts.

    ```shell
    ls -l
    ```

    Example output:

    ```text
    -rw-rw-r-- 1 ubuntu ubuntu        928 Apr  8  2025 README
    -rwxr-xr-x 1 root   root    183310952 Nov 17 23:59 airgap-pack-kubernetes-1.30.4.bin
    drwx------ 2 root   root         4096 Jun 30 14:37 bin
    -rw-rw-r-- 1 ubuntu ubuntu 2471340032 May 16  2025 capi-builder-v4.6.24.tar
    -rw-r--r-- 1 root   root   3973587887 Oct 21 12:02 capi-image-builder-v4.6.24.tgz
    drwxrwxr-x 2 ubuntu ubuntu       4096 Aug 13  2024 kickstart
    drwxrwxr-x 3 ubuntu ubuntu       4096 Apr  8  2025 output
    drwxr-xr-x 2 root   root         4096 Apr 12  2024 prep
    drwxrwxr-x 3 ubuntu ubuntu      12288 Oct 21 11:51 rpmrepo
    drwx------ 3 root   root         4096 Apr  1  2024 snap
    -rw-rw-r-- 1 ubuntu ubuntu  602989568 Apr  1  2025 yum-repo-v1.0.0.tar
    ```
7. Update the permissions of the `output` folder to allow the CAPI Image Builder tool to create directories and files within it.

    ```shell
    chmod a+rwx output
    ```
8. Copy the `ks.cfg.rocky8` or `ks.cfg.rocky9` file from the `kickstart` folder to the `output` folder as `ks.cfg`. Replace `<version>` with `8` or `9`, depending on the OS version of your Rocky VM.

    ```shell
    cp kickstart/ks.cfg.rocky<version> output/ks.cfg
    ```
9. Copy the `server.crt` file from the `/opt/spectro/ssl/` directory to the `rpmrepo` folder.

    ```shell
    cp /opt/spectro/ssl/server.crt rpmrepo/
    ```
10. Open the `imageconfig` template file in an editor of your choice and fill in the required parameters. The `imageconfig` file is used to personalize the base CAPI image for your cluster, and you can alter it to fit your needs. This includes specifying the OS type, Kubernetes version, whether the image should be FIPS compliant, and more.

    The following example configures a Rocky 8 CAPI image from an existing Rocky 8 VM in VMware vSphere. Replace all VMware-related placeholders in the `Define Vmware infra details` section with values from your VMware vSphere environment. For `vcenter_template`, enter the full datacenter path to the Rocky VM that you want to use as a base for your CAPI image. Additionally, replace `<airgap-vm-hostname>` with the hostname or IP address of your airgap support VM.

    For a complete list of parameters, refer to the Configuration Reference page. Additionally, refer to the Compatibility Matrix for a list of supported Kubernetes versions and their corresponding dependencies.

    :::warning
    If you used the airgap support VM hostname during the execution of the `airgap-setup.sh` script, ensure you enter the VM hostname in the `airgap_ip` parameter. The same applies if you used the VM IP address.
    :::

    ```shell
    vi ./output/imageconfig
    ```

    Example `imageconfig` file:

    ```text
    # Define the OS type and version here
    # os_version=rhel-8 | rhel-9 | rockylinux-8 | rockylinux-9
    # image_type=standard | fips
    os_version=rockylinux-8
    image_type=standard

    # Define the image name
    # image_name=<Final Image Name to create>
    image_name=rocky-8

    # Define the Cloud type
    # cloud_type=vmware
    cloud_type=vmware

    # Define the Component Versions
    #
    # containerd, crictl, and cni version updates should be done
    # only if the images are available in the upstream repositories
    k8s_version=1.30.4
    cni_version=1.3.0
    containerd_version=1.7.13
    crictl_version=1.28.0

    # Define RHEL subscription credentials (if os_version is rhel)
    # used while image creation to use package manager
    # rhel_subscription_user=
    # rhel_subscription_pass=

    # Define ISO url (if image is rhel or rockylinux)
    iso_name=
    iso_checksum=

    # Define AWS infra details
    aws_access_key=
    aws_secret_key=

    # Define Vmware infra details
    vcenter_server=<vcenter-server>
    vcenter_user=<vcenter-user>
    vcenter_password=<vcenter-password>
    vcenter_datacenter=<vcenter-datacenter>
    vcenter_datastore=<vcenter-datastore>
    vcenter_network=<vcenter-network>
    vcenter_folder=<vcenter-folder>
    vcenter_cluster=<vcenter-cluster>
    vcenter_resource_pool=<vcenter-resource-pool>
    # Optional: for OVA based builds
    vcenter_template=<vcenter-datacenter-path-to-VM>

    # Define Azure infra details
    azure_client_id=
    azure_client_secret=
    azure_subscription_id=
    azure_location=
    azure_storage_account=
    azure_resource_group=

    # Define GCE infra details
    google_app_creds=
    gcp_project_id=

    # Airgap Configuration
    airgap=true
    airgap_ip=<airgap-vm-hostname>
    ```

    :::tip
    To build a FIPS-compliant image, set `image_type` to `fips`.
    :::

    Once you are finished making changes, save and exit the file.
11. (Optional) Add any custom Bash scripts (`.sh` files) that you want to run before or after the build process. This feature is available beginning with CAPI Image Builder version 4.6.23. If any scripts are found in the relevant directories, they are copied to an Ansible playbook.

    Move any scripts that you want executed before the build process to the `output/custom_scripts/pre` directory, and any scripts that you want executed after the build process to the `output/custom_scripts/post` directory. Ensure the scripts are executable.

    Below is an example of moving a pre-install script to the `pre` directory and making it executable.

    ```shell
    mv sample-script.sh output/custom_scripts/pre/sample-script.sh
    chmod +x output/custom_scripts/pre/sample-script.sh
    ```
12. Load the CAPI Image Builder container image with the command below.

    **Docker**

    ```shell
    docker load < capi-builder-$CAPI_IMAGE_BUILDER_VERSION.tar
    ```

    **Podman**

    ```shell
    podman load < capi-builder-$CAPI_IMAGE_BUILDER_VERSION.tar
    ```
13. Load the Yum container image with the command below. The Yum container is used to serve the packages required by the CAPI Image Builder.

    **Docker**

    ```shell
    docker load < yum-repo-v1.0.0.tar
    ```

    **Podman**

    ```shell
    podman load < yum-repo-v1.0.0.tar
    ```
14. Tag the CAPI Image Builder and Yum container images.

    **Docker**

    ```shell
    docker tag localhost/v1.0.0:latest localhost/yum-repo:v1.0.0
    docker tag localhost/$CAPI_IMAGE_BUILDER_VERSION:latest localhost/capi-builder:$CAPI_IMAGE_BUILDER_VERSION
    ```

    **Podman**

    ```shell
    podman tag localhost/v1.0.0:latest localhost/yum-repo:v1.0.0
    podman tag localhost/$CAPI_IMAGE_BUILDER_VERSION:latest localhost/capi-builder:$CAPI_IMAGE_BUILDER_VERSION
    ```
15. Confirm that both container images were loaded and tagged correctly.

    **Docker**

    ```shell
    docker images
    ```

    **Podman**

    ```shell
    podman images
    ```

    Example output:

    ```text
    REPOSITORY               TAG       IMAGE ID       CREATED       SIZE
    localhost/capi-builder   v4.6.24   2adff15eee2d   7 days ago    2.09GB
    localhost/yum-repo       v1.0.0    b03879039936   6 weeks ago   603 MB
    ```
16. Start the Yum container and assign its ID to the `BUILD_ID_YUM` variable. The following command mounts the `/root/rpmrepo` directory on your airgap support VM to the `/var/www/html/rpmrepo` directory of the Yum container, runs the container on port 9000 of your VM, and detaches the container's output from the terminal.

    **Docker**

    ```shell
    BUILD_ID_YUM=$(docker run --volume /root/rpmrepo:/var/www/html/rpmrepo --publish 9000:80 --detach localhost/yum-repo:v1.0.0)
    ```

    **Podman**

    ```shell
    BUILD_ID_YUM=$(podman run --volume /root/rpmrepo:/var/www/html/rpmrepo --publish 9000:80 --detach localhost/yum-repo:v1.0.0)
    ```
17. View the Yum container logs. Monitor the output until a `Pool finished` message appears, indicating that the process completed successfully.

    **Docker**

    ```shell
    docker logs --follow $BUILD_ID_YUM
    ```

    **Podman**

    ```shell
    podman logs --follow $BUILD_ID_YUM
    ```

    Example output:

    ```text
    # Output condensed for readability
    Directory walk started
    Directory walk done - 53 packages
    Temporary output repo path: /var/www/html/rpmrepo/.repodata/
    Preparing sqlite DBs
    Pool started (with 5 workers)
    Pool finished
    ```
- Issue the command below to start the CAPI Image Builder container and assign the container ID to the `BUILD_ID_CAPI` variable. This command starts the container on the same network as your airgap support VM, mounts the `/root/output` directory of your VM to the `/home/imagebuilder/output` directory of the CAPI Image Builder container, and detaches the container's output from the terminal.

  The tool will create and configure a VM with Dynamic Host Configuration Protocol (DHCP) in your VMware vSphere environment using the `image_name` defined in `imageconfig`. The tool will then generate a Rocky image from the VM and save it to the `output` directory.

  **Docker**

  ```shell
  BUILD_ID_CAPI=$(docker run --net=host --volume /root/output:/home/imagebuilder/output --detach localhost/capi-builder:$CAPI_IMAGE_BUILDER_VERSION)
  ```

  **Podman**

  ```shell
  BUILD_ID_CAPI=$(podman run --net=host --volume /root/output:/home/imagebuilder/output --detach localhost/capi-builder:$CAPI_IMAGE_BUILDER_VERSION)
  ```

  If you need the VM to use static IP placement instead of DHCP, follow the steps described below.
  **CAPI Image Builder with Static IP Placement**

  - Open the `ks.cfg` file located in the `output` folder. Find the following network line:

    ```
    network --bootproto=dhcp --onboot=on --ipv6=auto --activate --hostname=capv.vm
    ```

    Replace it with the configuration below.

    ```
    network --bootproto=static --ip=<vcenter-static-ip-address> --netmask=<vcenter-netmask> --gateway=<vcenter-gateway> --nameserver=<vcenter-nameserver>
    ```

    Replace `<vcenter-static-ip-address>` with a valid IP address from your VMware vSphere environment, and replace `<vcenter-netmask>`, `<vcenter-gateway>`, and `<vcenter-nameserver>` with the correct values from your environment. The `<vcenter-netmask>` parameter must be specified in dotted decimal notation, for example, `--netmask=255.255.255.0`.

    Once you are finished making changes, save and exit the file.
  - Issue the command below to start the CAPI Image Builder container and assign the container ID to the `BUILD_ID_CAPI` variable. This command starts the container on the same network as your airgap support VM, mounts the `/root/output` directory of your VM to the `/home/imagebuilder/output` directory of the CAPI Image Builder container, and detaches the container's output from the terminal.

    The tool will use the `imageconfig` file to create and configure a VM with static IP placement in your VMware vSphere environment.

    **Docker**

    ```shell
    BUILD_ID_CAPI=$(docker run --net=host --volume /root/output:/home/imagebuilder/output --detach localhost/capi-builder:$CAPI_IMAGE_BUILDER_VERSION)
    ```

    **Podman**

    ```shell
    BUILD_ID_CAPI=$(podman run --net=host --volume /root/output:/home/imagebuilder/output --detach localhost/capi-builder:$CAPI_IMAGE_BUILDER_VERSION)
    ```
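The `ks.cfg` edit described above can also be scripted rather than done by hand. The following is a minimal sketch using `sed`; it is demonstrated on a sample file in a throwaway directory so it is self-contained, and the static IP values shown are placeholders for illustration only. In practice, point the `sed` command at `output/ks.cfg` and substitute your environment's addresses.

```shell
# Demo in a throwaway directory; in practice, run sed against output/ks.cfg.
cd "$(mktemp -d)"
cat > ks.cfg <<'EOF'
network --bootproto=dhcp --onboot=on --ipv6=auto --activate --hostname=capv.vm
EOF

# Swap the DHCP network line for a static one.
# The addresses below are placeholders -- use values from your environment.
sed -i 's|^network --bootproto=dhcp.*|network --bootproto=static --ip=10.10.0.50 --netmask=255.255.255.0 --gateway=10.10.0.1 --nameserver=10.10.0.2|' ks.cfg

cat ks.cfg
```

Scripting the change makes it repeatable across builds and avoids typos in the replacement line.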
- Execute the following command to view the CAPI Image Builder container logs and monitor the build progress. If you added any custom scripts in step 11, their output will be displayed in the build log.

  **Docker**

  ```shell
  docker logs --follow $BUILD_ID_CAPI
  ```

  **Podman**

  ```shell
  podman logs --follow $BUILD_ID_CAPI
  ```

  **Info:** It may take a few minutes for the logs to start being displayed, and the build takes several minutes to complete.
- Once the build is complete, the Rocky CAPI image is downloaded to the `output` directory under the `image_name` specified in the `imageconfig` file. Issue the following command to confirm that the build files are present in the `output` directory.

  ```shell
  ls -l output/<image_name>
  ```

  Example output:

  ```
  -rw-r--r-- 1 ubuntu ubuntu       1203 Nov 18 02:48 packer-manifest.json
  -rw-r--r-- 1 ubuntu ubuntu 3571576320 Nov 18 02:48 rocky-8-disk-0.vmdk
  -rw-r--r-- 1 ubuntu ubuntu       9507 Nov 18 02:48 rocky-8.ovf
  -rw-r--r-- 1 ubuntu ubuntu        212 Nov 18 02:48 rockylinux-8-kube-v1.30.4.mf
  -rw-r--r-- 1 ubuntu ubuntu 3571630080 Nov 18 02:49 rockylinux-8-kube-v1.30.4.ova
  -rw-r--r-- 1 ubuntu ubuntu         64 Nov 18 02:49 rockylinux-8-kube-v1.30.4.ova.sha256
  -rw-r--r-- 1 ubuntu ubuntu      41044 Nov 18 02:48 rockylinux-8-kube-v1.30.4.ovf
  ```
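Before transferring or importing the OVA, you may want to confirm that it matches the generated checksum file. The following is a minimal sketch that assumes the `.ova.sha256` file contains the bare hex digest (as its 64-byte size suggests); it is demonstrated on a stand-in file so the commands are self-contained. Substitute the real paths under `output/<image_name>` when verifying your build.

```shell
# Stand-in files so this sketch is self-contained; use the real OVA in practice.
cd "$(mktemp -d)"
printf 'example ova payload' > rockylinux-8-kube-v1.30.4.ova
sha256sum rockylinux-8-kube-v1.30.4.ova | awk '{print $1}' > rockylinux-8-kube-v1.30.4.ova.sha256

# Compare the recorded digest (assumed to be the bare hash) with a fresh one.
expected=$(tr -d '[:space:]' < rockylinux-8-kube-v1.30.4.ova.sha256)
actual=$(sha256sum rockylinux-8-kube-v1.30.4.ova | awk '{print $1}')
if [ "$expected" = "$actual" ]; then echo "checksum OK"; else echo "checksum MISMATCH"; fi
```

Verifying the digest catches truncated or corrupted transfers before the image is uploaded to vSphere.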
- Locate the new Rocky image VM in your VMware vSphere environment. Right-click the VM and select **Clone > Clone to Template**.

  **Tip:** Once the image is built, you can connect to the image VM via SSH. The following steps are based on guidance from the Image Builder Book.

  **Connect to the image VM with SSH**

  - On a machine with govc installed and configured with your VMware vSphere credentials, clone the Kubernetes Image Builder repository.

  - Navigate to the `capi` directory of the Kubernetes Image Builder repository.

    ```shell
    cd ./image-builder/images/capi/
    ```

  - Run the Kubernetes Image Builder `image-govc-cloudinit.sh` script and pass in the `image_name` of your Rocky image VM as specified in the `imageconfig` file. This creates a snapshot of the image and updates it with the data located in the `cloudinit` directory. Ensure the VM is powered off before running the command.

    ```shell
    ./hack/image-govc-cloudinit.sh <image_name>
    ```

    Example output:

    ```
    image-govc-cloudinit: creating snapshot 'new'
    image-govc-cloudinit: initializing cloud-init data
    image-govc-cloudinit: creating snapshot 'cloudinit'
    ```

  - Set owner read-write permissions for the `id_rsa.capi` file.

    ```shell
    chmod 600 cloudinit/id_rsa.capi
    ```

  - Power on the Rocky image VM.

  - Connect to the VM via SSH using your `id_rsa.capi` key. Replace `<vm-ip>` with the IP address of your Rocky image VM.

    ```shell
    ssh -i cloudinit/id_rsa.capi capv@<vm-ip>
    ```
- Enter a VM template name, choose a location for the template, and select **Next**.

  **Info:** The name and location do not have to match those defined in the `imageconfig` file. The same applies to the remaining locations and resources specified in the following steps.

- Choose a compute resource and select **Next**.

- Choose a storage location and select **Next**.

- Review your template configuration and select **Finish** to convert the VM into a Rocky image template that you can reference when creating your cluster profile.
## Create Cluster Profile

The Rocky image is now built and available in the VMware vSphere environment. You can use it to create a cluster profile and deploy a VMware host cluster.
- Log in to Palette.

- From the left main menu, select **Profiles > Add Cluster Profile**.

- In the **Basic Information** section, assign the cluster profile a **Name**, a brief **Description**, and **Tags**. Choose **Full** or **Infrastructure** for the profile **Type**, and select **Next**.

- In the **Cloud Type** section, choose **VMware vSphere**, and select **Next**.

- Select the **Bring Your Own OS (BYOOS)** pack and provide the following values in the YAML configuration editor. Proceed to the **Next layer** when finished.

  | Field | Description | Example |
  | --- | --- | --- |
  | `osImageOverride` | The path to your Rocky Linux image template in your VMware vSphere environment. | `/Datacenter/vm/sp-docs/rockylinux-8-kube-v1.30.4` |
  | `osName` | The type of operating system used in your image. | `rockylinux` |
  | `osVersion` | The version of your operating system. Enter `8` or `9` depending on the Rocky Linux `os_version` referenced in the `imageconfig` file. | `8` |

  Example YAML configuration:

  ```yaml
  pack:
    osImageOverride: "/Datacenter/vm/sp-docs/rockylinux-8-kube-v1.30.4"
    osName: "rockylinux"
    osVersion: "8"
  ```
- Select the **Palette eXtended Kubernetes (PXK)** pack. Ensure the **Pack Version** matches the `k8s_version` specified in the `imageconfig` file. Proceed to the **Next layer**.

- Complete the remaining profile layers, making any changes necessary. When finished, select **Finish Configuration** to create your cluster profile. For additional information on creating cluster profiles, refer to our Create an Infrastructure Profile and Create a Full Profile guides.
## Next Steps

After you have created an OS image with CAPI Image Builder and referenced it in a cluster profile, you can deploy a VMware host cluster using that cluster profile. Refer to the Create and Manage VMware Clusters guide for instructions on deploying a VMware host cluster.