Installing a Proxmox 8.1.3 Lab server

Proxmox VE is an open-source platform for running virtual machines, containers and more. It is ideal, and almost a necessity, for lab environments, yet powerful and robust enough for demanding production use.

This quick post describes the primary steps to start a bare metal install.

The hardware used is a Dell OptiPlex 7050 Micro PC with 32GB of memory, one 2TB NVMe disk and one 2TB SSD. These machines cost around $100.00.

Start by downloading the latest Proxmox VE ISO. To write the ISO to a bootable USB drive you can use Balena Etcher.

Once done, ensure your server is connected to a monitor and has a keyboard attached. Use the text-based installer rather than the graphical one, as there is no real difference between them. Use ext4 for the disks for a simple life. If you have a monster machine with multiple disks and plenty of memory then consider ZFS, but for a lab server use ext4.

Step 1. Set the repositories, then update and restart.

  1. Disable both Enterprise repositories.
  2. Add the No-Subscription repositories (a CLI equivalent for the main PVE repo is sketched after this list).
  3. Go to Updates and press Refresh.
  4. When the task window says OK, close the window.
  5. Click on Upgrade.
  6. Inside the new window enter y.
  7. When complete, close the window.
  8. Reboot.
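
If you prefer the shell to the GUI, the repository change in this step can also be done with a few commands. This is a minimal sketch, assuming a stock Proxmox VE 8 install on Debian 12 "bookworm" with the enterprise repo lists in their default locations; adjust if yours differ.

```
# Comment out the enterprise repositories
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/ceph.list

# Add the pve-no-subscription repository
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
  > /etc/apt/sources.list.d/pve-no-subscription.list

# Refresh and upgrade (same as Refresh / Upgrade in the GUI), then reboot
apt update && apt dist-upgrade
reboot
```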

Step 2. Remove Local-LVM and increase space on Local.

  1. Configure and start a PuTTY session so you can copy and paste commands.
  2. Go to Datacenter > Storage and remove local-lvm.
  3. Run the commands below:
  4. Reboot and confirm.
# Delete the data volume that backed local-lvm
lvremove /dev/pve/data
# Grow the root logical volume into the freed space
lvresize -l +100%FREE /dev/pve/root
# Grow the ext4 filesystem to match the larger volume
resize2fs /dev/mapper/pve-root

Step 3. Edit logind.conf

  1. nano /etc/systemd/logind.conf
  2. Set ignore for any setting that may cause unexpected shutdowns (an example is shown after this list).
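
As an example of what that means in practice, the keys below (under the [Login] section of logind.conf) cover the usual culprits on a small box like this. Treat it as a sketch; only change the settings that actually apply to your hardware.

```
HandlePowerKey=ignore
HandleSuspendKey=ignore
HandleHibernateKey=ignore
HandleLidSwitch=ignore
HandleLidSwitchDocked=ignore
```

Restart logind afterwards with systemctl restart systemd-logind, or just reboot.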

Step 4. GPU passthru

  1. nano /etc/default/grub
  2. GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction nofb nomodeset video=vesafb:off,efifb:off"
  3. update-grub
  4. echo "vfio" >> /etc/modules
  5. echo "vfio_iommu_type1" >> /etc/modules
  6. echo "vfio_pci" >> /etc/modules
  7. echo "vfio_virqfd" >> /etc/modules
  8. echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > /etc/modprobe.d/iommu_unsafe_interrupts.conf
  9. echo "options kvm ignore_msrs=1" > /etc/modprobe.d/kvm.conf
  10. echo "blacklist radeon" >> /etc/modprobe.d/blacklist.conf
  11. echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf
  12. echo "blacklist nvidia" >> /etc/modprobe.d/blacklist.conf
  13. echo "blacklist snd_hda_intel" >> /etc/modprobe.d/blacklist.conf
  14. echo "blacklist snd_hda_codec_hdmi" >> /etc/modprobe.d/blacklist.conf
  15. echo "blacklist i915" >> /etc/modprobe.d/blacklist.conf
  16. Find the vfio device IDs (how to find them is explained below; 8086:1912 and 8086:a2f0 are the IDs on this box).
  17. echo "options vfio-pci ids=8086:1912,8086:a2f0 disable_vga=1" > /etc/modprobe.d/vfio.conf
  18. update-initramfs -u

Tip: Use this command for a before-and-after check to ensure IOMMU is active:
for d in /sys/kernel/iommu_groups/*/devices/*; do n=${d#*/iommu_groups/*}; n=${n%%/*}; printf 'IOMMU group %s ' "$n"; lspci -nns "${d##*/}"; done

To find the vfio device IDs:

lspci -v

Look for the PCI address at the start of the line for each device you want to pass through:
00:02.0 VGA compatible controller: Intel Corporation HD Graphics 530 (rev 06) (prog-if 00 [VGA controller])
00:1f.3 Audio device: Intel Corporation 200 Series PCH HD Audio

Use that address and run:
lspci -n -s 00:02.0

00:02.0 0300: 8086:1912 (rev 06)

Run through all the PCI devices you want to pass through and add their vendor:device IDs to this line:

echo "options vfio-pci ids=8086:1912,8086:a2f0 disable_vga=1"> /etc/modprobe.d/vfio.conf
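
Once update-initramfs -u has run and the box has rebooted, it is worth confirming that vfio-pci has actually claimed the device. A quick check, using the 00:02.0 address from the example above:

```
lspci -nnk -s 00:02.0
# Expect "Kernel driver in use: vfio-pci" in the output.
# If the original driver (e.g. i915) is still in use, recheck the blacklist and vfio.conf entries.
```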

Step 5. Add the second disk, NFS, and an NFS share

  1. Run lsblk.
  2. Wipe the second disk in the GUI.
  3. Initialize the disk with GPT.
  4. fdisk /dev/sda
  5. n (new partition)
  6. Default, default, +1600GB (1.5 TiB).
  7. 2, default, default (372.9 GiB).
  8. w (write the partition table)
  9. lsblk
  10. mkfs -t ext4 /dev/sda1
  11. mkfs -t ext4 /dev/sda2
  12. mkdir /mnt/pve/lrssd15tib
  13. mkdir /mnt/pve/lrssdnfs
  14. mount -t auto /dev/sda1 /mnt/pve/lrssd15tib
  15. mount -t auto /dev/sda2 /mnt/pve/lrssdnfs (see the fstab note after this list to make both mounts permanent)
  16. apt install nfs-kernel-server
  17. chmod -R 777 /mnt/pve/lrssd15tib
  18. chmod -R 777 /mnt/pve/lrssdnfs
  19. Go to the GUI, Datacenter and add a directory
  20. lrssd15tib – /mnt/pve/lrssd15tib – Disk Image, ISO Image, Container template, VZDump, Snippets – not shared
  21. lrssdnfs – /mnt/pve/lrssdnfs – Disk Image, ISO Image, Container template, VZDump, Snippets – shared
  22. The new disk is now in the GUI.
  23. Next export the NFS volume for other systems to find.
  24. nano /etc/exports
  25. Add – /mnt/pve/lrssdnfs *(rw,async)
  26. systemctl restart nfs-server
  27. Connect to a remote NFS drive
  28. tp16nfs – 172.22.*.* (should be in drop down) – select all
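
Note that the mount commands in steps 14 and 15 only last until the next reboot. To mount both partitions automatically at boot, one option (a sketch, assuming the /dev/sda1 and /dev/sda2 partitions created above) is to add them to /etc/fstab:

```
# /etc/fstab additions for the two SSD partitions
/dev/sda1  /mnt/pve/lrssd15tib  ext4  defaults  0  2
/dev/sda2  /mnt/pve/lrssdnfs    ext4  defaults  0  2
```

Using UUID= entries from blkid instead of the /dev/sdaX names is slightly more robust if device ordering ever changes.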

Step 6. Add Netdata

bash <(curl -Ss https://my-netdata.io/kickstart.sh)
http://172.22.*.*:19999/

Step 7. Add system Notes

For example:

To reduce memory usage when using ZFS (from 98% to 46% in this case), cap the ZFS ARC at 3 GiB:
```
echo "$((3 * 1024*1024*1024))" > /sys/module/zfs/parameters/zfs_arc_max
cat /sys/module/zfs/parameters/zfs_arc_max
3221225472
```
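
This echo only sets the limit for the running kernel, so it is lost at the reboot below. To make the cap persistent, one common approach (a sketch using the same 3 GiB value) is to set it as a ZFS module option and rebuild the initramfs:

```
echo "options zfs zfs_arc_max=3221225472" > /etc/modprobe.d/zfs.conf
update-initramfs -u
```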

Reboot.

Step 8. Migrate VMs

  1. Shut down the VM using shutdown -h now.
  2. Go to Backup – Backup now – Choose the NFS share on the target – Mode = Stop.
  3. Restore on the new host using the same VM ID (a CLI equivalent is sketched below).
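
The backup-and-restore migration can also be driven from the shell. A rough sketch, assuming the VM has ID 100, the lrssdnfs share is visible to both hosts, and the dump filename (which includes a timestamp) will differ on your system:

```
# On the old host: stop-mode backup to the shared storage
vzdump 100 --mode stop --storage lrssdnfs --compress zstd

# On the new host: restore from the dump file, keeping the same VM ID
qmrestore /mnt/pve/lrssdnfs/dump/vzdump-qemu-100-<timestamp>.vma.zst 100
```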

Done!
