Terraform - KVM
This page includes a lot of snippets and notes from my tests with Terraform and KVM.
Terraform Install
This is a quick snippet of commands that I used to install Terraform directly on the KVM hypervisor host.
wget https://releases.hashicorp.com/terraform/1.6.6/terraform_1.6.6_linux_amd64.zip
sudo apt install unzip
unzip terraform_1.6.6_linux_amd64.zip
rm terraform_1.6.6_linux_amd64.zip
sudo mv terraform /usr/bin
terraform
KVM Terraform - Hello World
This is a basic hello-world style test. I use the simplest possible configuration that will result in actually creating a VM. It doesn't have an image with an OS installed and doesn't really have anything configured; it doesn't really do anything except show that the tools are working. After this I moved right on to more elaborate and more usable examples.
main.tf

terraform {
  required_providers {
    libvirt = {
      source = "dmacvicar/libvirt"
    }
  }
}

provider "libvirt" {
}

resource "libvirt_domain" "test1" {
  name = "test1"
}
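For reference, the usual commands to exercise a config like this, run from the directory containing main.tf:

terraform init    # downloads the dmacvicar/libvirt provider
terraform plan    # preview: one libvirt_domain to create
terraform apply   # actually creates the test1 domain
virsh list --all  # confirm the domain exists on the hypervisor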
KVM - Simple Terraformed VM with CentOS
This was just a quick test. The image used is apparently meant to have a user/password set up through some additional configuration. I didn't look into what was needed for this at all; I just used it to show that I could get a VM up and running with an OS on it. I quickly moved on to the next round of tests, which used Ubuntu instead. A guess at the missing cloud-init setup is sketched after the config.
main.tf

terraform {
  required_providers {
    libvirt = {
      source = "dmacvicar/libvirt"
    }
  }
}

provider "libvirt" {
}

resource "libvirt_volume" "host4-qcow2" {
  name   = "host4.qcow2"
  pool   = "default"
  source = "https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2"
  format = "qcow2"
}

resource "libvirt_domain" "host4" {
  name   = "host4"
  memory = "2048"
  vcpu   = 2

  network_interface {
    network_name = "host-bridge"
  }

  disk {
    volume_id = libvirt_volume.host4-qcow2.id
  }

  console {
    type        = "pty"
    target_type = "serial"
    target_port = "0"
  }

  graphics {
    type           = "vnc"
    listen_type    = "address"
    listen_address = "0.0.0.0"
    autoport       = true
  }
}
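For the user/password setup the cloud image expects, the provider has a libvirt_cloudinit_disk resource that can be attached to the domain via its cloudinit argument. I haven't tested this; the cloud-config below is a minimal guess, and the user/password values are placeholders:

resource "libvirt_cloudinit_disk" "host4_init" {
  name = "host4-init.iso"
  pool = "default"
  # minimal cloud-config; the user name and password are placeholders
  user_data = <<-EOF
    #cloud-config
    ssh_pwauth: true
    users:
      - name: centos
        plain_text_passwd: changeme
        lock_passwd: false
        sudo: ALL=(ALL) NOPASSWD:ALL
  EOF
}

# then inside the libvirt_domain "host4" block:
#   cloudinit = libvirt_cloudinit_disk.host4_init.id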
Apparmor Errors
Error snippets from AppArmor that occurred when trying to use Terraform with KVM. These didn't occur when managing KVM directly with the libvirt CLI tools.
Error: error creating libvirt domain: internal error: process exited while connecting to monitor: 2024-01-08T19:00:39.640226Z qemu-system-x86_64: -blockdev {"driver":"file","filename":"/var/lib/libvirt/images/host4.qcow2","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}: Could not open '/var/lib/libvirt/images/host4.qcow2': Permission denied
sudo cat /var/log/kern.log | grep -C 10 apparmor | grep -C 10 qemu
Jan 8 19:00:39 neon-slug1 kernel: [90055.751003] audit: type=1400 audit(1704740439.638:141): apparmor="DENIED" operation="open" profile="libvirt-a786930e-63cb-4fc9-9957-b5a580f2fe6a" name="/var/lib/libvirt/images/host4.qcow2" pid=10290 comm="qemu-system-x86" requested_mask="r" denied_mask="r" fsuid=64055 ouid=64055
This can be used to disable AppArmor (don't do this; you should learn to configure it instead):
sudo systemctl stop apparmor
sudo systemctl disable apparmor
sudo vi /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="apparmor=0"
sudo update-grub
sudo reboot
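A narrower alternative to disabling AppArmor outright is to either turn off libvirt's security driver for QEMU guests only, or to allow the images path in the libvirt-qemu abstraction. Untested here, and the exact override file path may vary by release:

# option 1: in /etc/libvirt/qemu.conf, disable sVirt confinement for guests
security_driver = "none"

# option 2: keep confinement and append a rule to the libvirt-qemu
# abstraction (e.g. /etc/apparmor.d/local/abstractions/libvirt-qemu)
/var/lib/libvirt/images/** rwk,

# reload profiles and restart libvirtd afterwards
sudo systemctl restart apparmor
sudo systemctl restart libvirtd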
TF - Remote KVM Notes
Some notes that might be helpful when attempting to terraform KVM resources from a remote host. So far I've been terraforming them directly from the hypervisor host. These snippets haven't been tested.
export LIBVIRT_DEFAULT_URI="qemu+ssh://root@192.168.1.100/system"
terraform plan
provider "libvirt" {
uri = "qemu+ssh://root@192.168.100.10/system"
}
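For a non-root remote user, the libvirt URI also accepts query parameters for the SSH key and host-key checking. The parameter names come from libvirt's remote URI docs; user1 is just a placeholder, and the user would need to be in the remote host's libvirt group:

provider "libvirt" {
  uri = "qemu+ssh://user1@192.168.100.10/system?keyfile=/home/user1/.ssh/id_rsa&no_verify=1"
}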
KVM Network Interface Note
These are snippets that can be used to define a network interface for a KVM / libvirt / QEMU VM. These are not Terraform config snippets; they are libvirt domain XML. Both seem to work but will result in different interface names inside the VM. This means that you will need to log in to the VM and edit your network configs (netplan in this case) for networking to work. Rough Terraform equivalents are sketched after the XML.
<interface type='network'>
<mac address='52:54:00:ea:6f:62'/>
<source network='host-bridge'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<interface type='bridge'>
<mac address='52:54:00:67:66:6a'/>
<source bridge='br0'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</interface>
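For reference, the rough Terraform equivalents of the two XML snippets with the dmacvicar/libvirt provider; attribute names are from the provider docs and the MACs are just the examples above:

# equivalent of the <interface type='network'> snippet
network_interface {
  network_name = "host-bridge"
  mac          = "52:54:00:ea:6f:62"
}

# equivalent of the <interface type='bridge'> snippet
network_interface {
  bridge = "br0"
  mac    = "52:54:00:67:66:6a"
}

As far as I can tell the provider doesn't expose the PCI address element, so the interface name inside the VM is still determined by the generated XML.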
Single Host Ubuntu Bionic
This is what I used to build a single Ubuntu Server Bionic host.
main.tf

terraform {
  required_providers {
    libvirt = {
      source = "dmacvicar/libvirt"
    }
  }
}

provider "libvirt" {
}

resource "libvirt_volume" "host4-qcow2" {
  name   = "host4.qcow2"
  pool   = "default"
  source = "/var/lib/libvirt/images/ubuntu_server_bionic_console_enabled.qcow2"
  format = "qcow2"
}

resource "libvirt_domain" "host4" {
  name   = "host4"
  memory = "2048"
  vcpu   = 2

  network_interface {
    network_name = "host-bridge"
  }

  disk {
    volume_id = libvirt_volume.host4-qcow2.id
  }

  console {
    type        = "pty"
    target_type = "serial"
    target_port = "0"
  }

  graphics {
    type           = "vnc"
    listen_type    = "address"
    listen_address = "0.0.0.0"
    autoport       = true
  }
}
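Since the image name suggests the serial console is enabled in the guest and the domain defines a serial console, attaching from the hypervisor should work with:

virsh console host4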
Multihost Bionic
This is what I used to build multiple hosts with an Ubuntu Server Bionic image.
main.tf

terraform {
  required_providers {
    libvirt = {
      source = "dmacvicar/libvirt"
    }
  }
}

provider "libvirt" {
}

locals {
  host_list = toset(["host1", "host2", "host3"])
}

resource "libvirt_volume" "volumes" {
  for_each = local.host_list
  name     = "${each.key}.qcow2"
  pool     = "default"
  source   = "/var/lib/libvirt/images/ubuntu_server_bionic_console_enabled.qcow2"
  format   = "qcow2"
}

resource "libvirt_domain" "hosts" {
  for_each = local.host_list
  name     = each.key
  memory   = "2048"
  vcpu     = 2

  network_interface {
    network_name = "host-bridge"
  }

  disk {
    volume_id = libvirt_volume.volumes[each.key].id
  }

  console {
    type        = "pty"
    target_type = "serial"
    target_port = "0"
  }

  graphics {
    type           = "vnc"
    listen_type    = "address"
    listen_address = "0.0.0.0"
    autoport       = true
  }
}
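Because for_each keys everything by hostname, an output block is an easy optional addition for confirming what was built:

output "domains" {
  value = [for d in libvirt_domain.hosts : d.name]
}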
Terraforming Minikube Host
This is what I used to terraform my minikube host. I increased the disk size afterwards (a way to size the volume at create time instead is sketched after the config).
main.tf

terraform {
  required_providers {
    libvirt = {
      source = "dmacvicar/libvirt"
    }
  }
}

provider "libvirt" {
}

locals {
  host_list = toset(["kube_test1"])
}

resource "libvirt_volume" "volumes" {
  for_each = local.host_list
  name     = "${each.key}.qcow2"
  pool     = "default"
  source   = "/var/lib/libvirt/images/ubuntu_server_jammy_template1.qcow"
  format   = "qcow2"
}

resource "libvirt_domain" "hosts" {
  for_each = local.host_list
  name     = each.key
  memory   = "2048"
  vcpu     = 2

  network_interface {
    network_name = "host-bridge"
  }

  disk {
    volume_id = libvirt_volume.volumes[each.key].id
  }

  console {
    type        = "pty"
    target_type = "serial"
    target_port = "0"
  }

  graphics {
    type           = "vnc"
    listen_type    = "address"
    listen_address = "0.0.0.0"
    autoport       = true
  }
}
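Rather than growing the disk after the fact, the provider can size the volume at create time by layering it on a base volume. Untested here; it assumes the jammy template already exists as a volume in the default pool, and the 40 GiB figure is arbitrary:

resource "libvirt_volume" "volumes" {
  for_each         = local.host_list
  name             = "${each.key}.qcow2"
  pool             = "default"
  base_volume_name = "ubuntu_server_jammy_template1.qcow"
  base_volume_pool = "default"
  format           = "qcow2"
  size             = 42949672960 # 40 GiB in bytes; must be >= the base volume size
}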