Concept Introduction
Qemu
Qemu is an emulator that presents simulated CPU and other hardware to the Guest OS. The guest thinks it is talking directly to hardware, but it is actually talking to hardware emulated by Qemu, which translates the guest's instructions for the real hardware.
Since every instruction has to pass through Qemu, performance is poor.
Figure 1: Qemu Architecture (from: KVM-Qemu-Libvirt三者之间的关系, "The relationship between KVM, Qemu and Libvirt")
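To make this concrete, a guest can be booted in pure-emulation mode using Qemu's built-in TCG accelerator, in which every guest instruction is translated in software. This is only a minimal sketch: the image path ./guest.qcow2 and the memory/CPU sizes are hypothetical.

[liuliqiang@liqiang.io]# # pure software emulation (TCG), no KVM acceleration
[liuliqiang@liqiang.io]# qemu-system-x86_64 -accel tcg -m 2048 -smp 2 \
    -drive file=./guest.qcow2,format=qcow2

Booting the same image this way is noticeably slower than with KVM acceleration, which is shown in the qemu-kvm section below.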
KVM
KVM is a Linux kernel module and requires CPU support. Using hardware-assisted virtualization technologies such as Intel VT and AMD-V, plus memory virtualization extensions such as Intel's EPT and AMD's RVI, guest OS CPU instructions no longer have to be translated by Qemu and run directly on the host CPU, which greatly improves speed. KVM exposes its interface through /dev/kvm, and user-space programs access it with the ioctl system call. See the following pseudo-code:
open("/dev/kvm")
ioctl(KVM_CREATE_VM)
ioctl(KVM_CREATE_VCPU)
for (;;) {
ioctl(KVM_RUN)
switch (exit_reason) {
case KVM_EXIT_IO:
case KVM_EXIT_HLT:
}
}
The KVM kernel module by itself only provides CPU and memory virtualization, so it must be combined with QEMU to form a complete virtualization solution, which is called qemu-kvm.
qemu-kvm
Qemu integrates KVM: it calls the /dev/kvm interface via ioctl and hands the CPU instructions over to the kernel module. KVM is responsible for CPU and memory virtualization, but it cannot emulate any other devices, so Qemu emulates the I/O devices (NICs, disks, and so on). Together they enable server virtualization in the true sense, and the combination is called qemu-kvm because it uses both.
Because Qemu emulates the other hardware (network, disk) in software, the performance of those devices still suffers, so the paravirtualized virtio devices virtio_blk and virtio_net were created to improve their performance.
Figure 2: Qemu-KVM Architecture (from: UCSB CS290B)
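For comparison with the pure-emulation example above, the same guest can be started with KVM acceleration and paravirtualized virtio devices. Again a minimal sketch with a hypothetical image path and sizes:

[liuliqiang@liqiang.io]# # KVM handles CPU/memory, Qemu emulates devices, virtio speeds up disk and network
[liuliqiang@liqiang.io]# qemu-system-x86_64 -enable-kvm -m 4096 -smp 4 \
    -drive file=./guest.qcow2,format=qcow2,if=virtio \
    -netdev user,id=net0 -device virtio-net-pci,netdev=net0

Here -enable-kvm lets guest instructions run directly on the CPU, while if=virtio and virtio-net-pci give the guest paravirtualized disk and network devices instead of fully emulated ones.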
Libvirt
Why Libvirt?
- Hypervisors such as qemu-kvm come with their own command-line management tools, whose many parameters make them difficult to use.
- There are many different kinds of Hypervisors and no unified programming interface to manage them all, which is a problem for cloud environments.
- There is no unified way to easily define the various manageable objects associated with a VM.
What does Libvirt provide?
- It provides a unified, stable, open source application programming interface (API), a daemon (libvirtd), and a default command line management tool (virsh).
- It provides management of the virtualized client and its virtualized devices, network and storage.
- It provides a stable set of application programming interfaces in C, and bindings are available in several other popular programming languages: libvirt libraries already exist for Python, Perl, Java, Ruby, PHP, OCaml, and other high-level languages.
- Its support for many different Hypervisors is implemented through a driver-based architecture. libvirt provides different drivers for different Hypervisors, including a driver for Xen, a QEMU driver for QEMU/KVM, a VMware driver, and so on. Driver source code files like qemu_driver.c, xen_driver.c, xenapi_driver.c, vmware_driver.c, vbox_driver.c can be easily found in the libvirt source code.
- It acts as an intermediate adaptation layer, allowing the underlying Hypervisor to be completely transparent to upper-level user space management tools, because libvirt shields the details of the underlying Hypervisor and provides a unified, more stable interface (API) for upper-level management tools.
- It uses XML to define various virtual machine-related managed objects.
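For example, the VM created in the Operations section below could be described with a domain XML roughly like the following. This is an abbreviated sketch, not a complete definition: libvirt fills in many defaults, and the qcow2 disk driver type is an assumption.

<domain type='kvm'>
  <name>testvm-00</name>
  <memory unit='MiB'>4096</memory>
  <vcpu>4</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/home/liuliqiang/data/kvm/images/testvm00.img'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>

The XML of an existing VM can be inspected with virsh dumpxml and modified with virsh edit.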
Currently, libvirt has become the most widely used tool and API for managing virtual machines; common virtual machine management tools (e.g. virsh, virt-install, virt-manager) and cloud computing platforms (e.g. OpenStack, OpenNebula, Eucalyptus) all use libvirt's APIs underneath.
Figure 3: Relation between libvirt and KVM (from: Libvirt Wiki)
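As a quick illustration of the C API, the following minimal sketch connects to the local hypervisor and prints basic information about one domain. The URI qemu:///system is the usual local system connection, and the domain name testvm-00 refers to the VM created in the Operations section below.

/* build with: gcc info.c -o info $(pkg-config --cflags --libs libvirt) */
#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");   /* connect to the local libvirtd */
    if (conn == NULL) {
        fprintf(stderr, "failed to connect to qemu:///system\n");
        return 1;
    }

    virDomainPtr dom = virDomainLookupByName(conn, "testvm-00");
    if (dom != NULL) {
        virDomainInfo info;
        if (virDomainGetInfo(dom, &info) == 0)               /* state, memory, vCPUs */
            printf("state=%d vcpus=%d mem=%lu KiB\n",
                   info.state, info.nrVirtCpu, info.memory);
        virDomainFree(dom);
    }

    virConnectClose(conn);
    return 0;
}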
Operations
Install and configure on Arch Linux
[liuliqiang@liqiang.io]# yay -Sy archlinux-keyring
[liuliqiang@liqiang.io]# yay -Sy qemu virt-manager virt-viewer dnsmasq vde2 bridge-utils openbsd-netcat
[liuliqiang@liqiang.io]# yay -Sy ebtables iptables
[liuliqiang@liqiang.io]# yay -Sy libguestfs
[liuliqiang@liqiang.io]# sudo systemctl enable libvirtd.service
[liuliqiang@liqiang.io]# sudo systemctl start libvirtd.service
This installs all the software needed; the next step is to configure it:
[liuliqiang@liqiang.io]# cat /etc/libvirt/libvirtd.conf
... ...
unix_sock_group = "libvirt"
unix_sock_rw_perms = "0770"
[liuliqiang@liqiang.io]# sudo usermod -a -G libvirt $(whoami)
[liuliqiang@liqiang.io]# sudo systemctl restart libvirtd.service
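After logging out and back in (so the new libvirt group membership takes effect), the setup can be verified by talking to libvirtd without sudo, for example:

[liuliqiang@liqiang.io]# virsh -c qemu:///system version    # should print libvirt/QEMU versions without permission errors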
virsh Operations
Configure Network
[liuliqiang@liqiang.io]# sudo virsh net-define /etc/libvirt/qemu/networks/default.xml
[liuliqiang@liqiang.io]# sudo virsh net-start default
[liuliqiang@liqiang.io]# sudo virsh net-autostart default # run at system start
[liuliqiang@liqiang.io]#
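The result can be checked, for example, with:

[liuliqiang@liqiang.io]# sudo virsh net-list --all          # "default" should be active and autostarted
[liuliqiang@liqiang.io]# sudo virsh net-dumpxml default     # shows the virbr0 bridge and its NAT/DHCP settings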
Configure console connection
[liuliqiang@liqiang.io]# sudo systemctl enable serial-getty@ttyS0.service
[liuliqiang@liqiang.io]# sudo systemctl start serial-getty@ttyS0.service
[liuliqiang@liqiang.io]#
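Note that virsh console also needs a serial console inside the guest; on a CentOS 7 guest this typically means enabling the guest's own serial-getty@ttyS0.service and adding console=ttyS0 to its kernel command line. A sketch of the guest-side steps:

# inside the guest, not on the host
sudo systemctl enable --now serial-getty@ttyS0.service
# append console=ttyS0 to GRUB_CMDLINE_LINUX in /etc/default/grub, then regenerate grub.cfg:
sudo grub2-mkconfig -o /boot/grub2/grub.cfg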
Create VM
[liuliqiang@liqiang.io]# sudo virt-install --name=testvm-00 \
--os-type=Linux \
--os-variant=centos7.0 \
--vcpus=4 \
--ram=4096 \
--disk path=/home/liuliqiang/data/kvm/images/testvm00.img,size=30 \
--graphics spice \
--location=/home/liuliqiang/data/kvm/isos/CentOS-7-x86_64-DVD-2009.iso \
--network bridge:virbr0
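If virt-install rejects the --os-variant value, the identifiers known on the system can be listed with osinfo-query from libosinfo, for example:

[liuliqiang@liqiang.io]# osinfo-query os | grep -i centos   # look up the exact --os-variant identifier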
Enter VM
[liuliqiang@liqiang.io]# virsh console testvm-00
Shutdown VM
[liuliqiang@liqiang.io]# virsh shutdown VM_NAME
[liuliqiang@liqiang.io]# virsh shutdown --domain VM_NAME
[liuliqiang@liqiang.io]# virsh destroy VM_NAME # force stop
[liuliqiang@liqiang.io]# virsh destroy --domain VM_NAME # force stop
[liuliqiang@liqiang.io]# virsh undefine --domain VM_NAME # remove vm
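Note that shutdown asks the guest to power off via ACPI and needs the guest's cooperation, while destroy is an immediate hard power-off. undefine only removes the VM definition and leaves the disk image on disk; if the storage should be deleted as well, virsh undefine accepts a flag for that:

[liuliqiang@liqiang.io]# virsh undefine --domain VM_NAME --remove-all-storage # remove vm and its storage volumes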
View VM info
[liuliqiang@liqiang.io]# virsh list --all
 Id   Name        State
-----------------------------
 1    200         running
 2    envoy180    running
 ... ...
 -    base-f-vm   shut off
[liuliqiang@liqiang.io]#
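Details for a single VM (state, vCPUs, memory, autostart, and so on) can be shown with virsh dominfo, for example:

[liuliqiang@liqiang.io]# virsh dominfo testvm-00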