I have a Linux machine at home that I used to work on directly, but later I could no longer use it directly for work, so it saw little use. I eventually stopped using it as a desktop and repurposed it as a server that I connect to remotely.

Still, the machine is perfectly capable for personal use, so I want to spin up some VMs on it for a few separate tasks. Normally I would just use virsh directly (I don't want to install a GUI or use a WebUI), but since these VMs may eventually move to a public cloud, I chose another familiar tool: Vagrant.

Introduction to Vagrant

Vagrant is a tool developed by HashiCorp for managing VM infrastructure. Conceptually it is a bit like Docker: you manage your development and runtime environments through a configuration file that can be versioned. Docker manages container environments through Compose (or Kubernetes for large-scale orchestration), while Vagrant manages VM environments through a Vagrantfile.

Vagrant supports different virtualization platforms through plugins, and these are not limited to local hypervisors (KVM, VMware, etc.) but also include public cloud platforms. The Vagrantfile smooths out the differences between platforms, allowing developers to migrate smoothly.
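As a sketch of this idea, a minimal Vagrantfile might look like the following (the box name and resource values are illustrative, not from any specific setup):

```ruby
# -*- mode: ruby -*-
# Minimal Vagrantfile sketch: one versioned file describes the VM,
# and providers (libvirt locally, digital_ocean in the cloud) can be
# swapped without touching the rest of the definition.
Vagrant.configure('2') do |config|
  config.vm.box = 'fedora/32-cloud-base'  # example box, as used later in this post

  # Provider-specific tuning lives in its own block (values are illustrative).
  config.vm.provider :libvirt do |libvirt|
    libvirt.memory = 1024
    libvirt.cpus = 1
  end
end
```

The same definition can then be brought up with `vagrant up --provider=libvirt` or any other installed provider.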

Installation Configuration

Installing Vagrant

On Arch-based Linux distributions it can be installed directly through the package manager:

  $ yay -Sy vagrant

Just wait for the installation to finish. For other systems, follow the instructions in the official documentation: Installing Vagrant.

Installing the DigitalOcean plugin

  $ vagrant plugin install vagrant-digitalocean
  $ cat Vagrantfile
  # -*- mode: ruby -*-
  # vi: set ft=ruby :
  require 'yaml'

  current_dir = File.dirname(File.expand_path(__FILE__))
  configs = YAML.load_file("#{current_dir}/.vagrant.yaml")
  vagrant_config = configs['configs']
  do_token = vagrant_config['digital_ocean_token']

  Vagrant.configure('2') do |config|
    config.vm.define "droplet1" do |config|
      config.vm.provider :digital_ocean do |provider, override|
        override.ssh.private_key_path = '~/.ssh/id_rsa'
        override.vm.box = 'digital_ocean'
        override.vm.box_url = "https://github.com/devopsgroup-io/vagrant-digitalocean/raw/master/box/digital_ocean.box"
        override.nfs.functional = false
        override.vm.allowed_synced_folder_types = :rsync
        provider.token = do_token
        provider.image = 'ubuntu-18-04-x64'
        provider.region = 'nyc1'
        provider.size = 's-1vcpu-1gb'
      end
    end
  end
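The Vagrantfile above reads the API token from a `.vagrant.yaml` file sitting next to it, which keeps the secret out of version control. Below is a sketch of the file layout it expects and the same loading pattern, using a made-up token value (the real token comes from the DigitalOcean control panel):

```ruby
require 'yaml'
require 'tempfile'

# Hypothetical .vagrant.yaml matching what the Vagrantfile expects:
# a top-level `configs` key holding the DigitalOcean API token.
sample = <<~YAML
  configs:
    digital_ocean_token: "dop_v1_example_token"
YAML

# Write the sample to a temporary file so this sketch is self-contained.
file = Tempfile.new(['vagrant', '.yaml'])
file.write(sample)
file.close

# Same loading pattern the Vagrantfile uses.
configs = YAML.load_file(file.path)
vagrant_config = configs['configs']
do_token = vagrant_config['digital_ocean_token']
puts do_token
```

In a real setup, add `.vagrant.yaml` to `.gitignore` so the token never lands in the repository.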

Installing KVM Plugins

  $ sudo pacman --sync --sysupgrade --refresh
  $ sudo pacman --query --search 'iptables' | grep "local" | grep "iptables " && \
      sudo pacman --remove --nodeps --nodeps --noconfirm iptables
  $ sudo pacman --sync --needed --noprogressbar --noconfirm \
      iptables-nft libvirt qemu openbsd-netcat bridge-utils dnsmasq vagrant \
      pkg-config gcc make ruby
  $ vagrant plugin install vagrant-libvirt

Vagrant testing

  $ vagrant init fedora/32-cloud-base
  $ vagrant up --provider=digital_ocean  # try DigitalOcean
  $ vagrant up --provider=libvirt        # try libvirt
  $ vagrant ssh
  $ vagrant destroy

Specifying the provider via an environment variable

  $ export VAGRANT_DEFAULT_PROVIDER=libvirt
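With this variable set, `vagrant up` no longer needs the `--provider` flag. Note that `export` only affects the current shell session; a small sketch of setting and checking it (assuming a Bash-like shell):

```shell
# Make libvirt the default provider for this shell session, so
# `vagrant up` can be run without --provider=libvirt.
export VAGRANT_DEFAULT_PROVIDER=libvirt

# Confirm the variable is set.
echo "$VAGRANT_DEFAULT_PROVIDER"
```

To make it persistent across sessions, append the export line to `~/.bashrc` (or your shell's equivalent).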


IPv6 is enabled by default

When booting via vagrant up, I found that IPv6 is used by default, and then the boot fails.

  $ vagrant up
  Bringing machine 'default' up with 'libvirt' provider...
  ==> default: Checking if box 'centos/7' version '2004.01' is up to date...
  ... ...

I never found the root cause. I suspected that some configuration had not taken effect because I had not rebooted after the installation, so I tried a reboot, and everything worked fine afterwards.