Introduction to Fedora virtualization products
This chapter introduces the various virtualization products available in Fedora.
KVM and virtualization in Fedora
What is KVM?
KVM (Kernel-based Virtual Machine) is a full virtualization solution for Linux on AMD64 and Intel 64 hardware that is built into the standard Fedora kernel. It can run multiple, unmodified Windows and Linux guest operating systems. The KVM hypervisor in Fedora is managed with the libvirt API and tools built for libvirt (such as virt-manager and virsh). Virtual machines run as multi-threaded Linux processes controlled by these tools.
Overcommitting
The KVM hypervisor supports overcommitting of system resources. Overcommitting means allocating more virtualized CPUs or memory than the available resources on the system. Memory overcommitting allows hosts to use memory and virtual memory to increase guest densities. Overcommitting involves possible risks to system stability.
Thin provisioning
Thin provisioning allows the allocation of flexible storage and optimizes the available space for every guest. It gives the appearance that there is more physical storage on the guest than is actually available. This is not the same as overcommitting, because it applies only to storage, not to CPU or memory allocations. However, like overcommitting, the same warning applies: thin provisioning involves possible risks to system stability.
KSM
Kernel SamePage Merging (KSM), used by the KVM hypervisor, allows KVM guests to share identical memory pages. These shared pages are usually common libraries or other identical, high-use data. KSM allows for greater guest density of identical or similar guest operating systems by avoiding memory duplication.
QEMU Guest Agent
The QEMU Guest Agent runs on the guest operating system and allows the host machine to issue commands to the guest operating system.
KVM guest virtual machine compatibility
To verify whether your processor supports the virtualization extensions, and for information on enabling the virtualization extensions if they are disabled, refer to the Fedora Virtualization Deployment and Administration Guide.
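As an informal illustration of that compatibility check (the guide above remains the authoritative procedure), the following Python sketch scans /proc/cpuinfo for the Intel (vmx) or AMD (svm) virtualization flags and reports whether the /dev/kvm device node exists. Both paths are standard Linux locations; the script itself is only an example.

    # Informal sketch: check for hardware virtualization support and the KVM device node.
    import os

    def cpu_virt_flags(path="/proc/cpuinfo"):
        """Return the virtualization flags (vmx for Intel, svm for AMD) found in cpuinfo."""
        flags = set()
        with open(path) as cpuinfo:
            for line in cpuinfo:
                if line.startswith("flags"):
                    flags.update(line.split(":", 1)[1].split())
        return flags & {"vmx", "svm"}

    if __name__ == "__main__":
        found = cpu_virt_flags()
        print("virtualization extensions:", ", ".join(sorted(found)) or "none detected")
        # /dev/kvm is present when the kvm module (with kvm_intel or kvm_amd) is loaded.
        print("/dev/kvm present:", os.path.exists("/dev/kvm"))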
libvirt and libvirt tools
The libvirt package is a hypervisor-independent virtualization API that is able to interact with the virtualization capabilities of a range of operating systems. The libvirt package provides:
A common, generic, and stable layer to securely manage virtual machines on a host.
A common interface for managing local systems and networked hosts.
All of the APIs required to provision, create, modify, monitor, control, migrate, and stop virtual machines, but only if the hypervisor supports these operations. Although multiple hosts may be accessed with libvirt simultaneously, the APIs are limited to single-node operations.
The libvirt package is designed as a building block for higher-level management tools and applications, for example, virt-manager and the virsh command-line management tool. With the exception of migration capabilities, libvirt focuses on managing single hosts and provides APIs to enumerate, monitor, and use the resources available on the managed node, including CPUs, memory, storage, networking, and Non-Uniform Memory Access (NUMA) partitions. The management tools can be located on separate physical machines from the host, using secure protocols.
Fedora supports libvirt and its included libvirt-based tools as its default method for virtualization management. The libvirt package is available as free software under the GNU Lesser General Public License. The libvirt project aims to provide a long-term stable C API to virtualization management tools, running on top of varying hypervisor technologies.
virsh
The virsh command-line tool is built on the libvirt management API and operates as an alternative to the graphical virt-manager application. The virsh command can be used in read-only mode by unprivileged users or, with root access, with full administration functionality. The virsh command is ideal for scripting virtualization administration.
virt-manager
virt-manager is a graphical desktop tool for managing virtual machines. It allows access to graphical guest consoles and can be used to perform virtualization administration, virtual machine creation, migration, and configuration tasks. The ability to view virtual machines, host statistics, device information, and performance graphs is also provided. The local hypervisor and remote hypervisors can be managed through a single interface.
For more information on virsh and virt-manager, refer to the Fedora Virtualization Deployment and Administration Guide.
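To show how tools such as virsh and virt-manager build on the libvirt API, the following minimal sketch uses the libvirt Python bindings (the libvirt-python package, assumed to be installed) to open a read-only connection to the local KVM hypervisor and list its guests. The URI qemu:///system is the usual local KVM connection URI.

    # Minimal sketch: enumerate guests through the libvirt API (assumes libvirt-python).
    import libvirt

    # A read-only connection, comparable to running virsh in read-only mode.
    conn = libvirt.openReadOnly("qemu:///system")
    try:
        for dom in conn.listAllDomains():
            state, max_mem, mem, vcpus, cpu_time = dom.info()
            running = "running" if state == libvirt.VIR_DOMAIN_RUNNING else "not running"
            print(f"{dom.name()}: {vcpus} vCPU(s), {mem // 1024} MiB, {running}")
    finally:
        conn.close()

The same information is available interactively from virsh (for example, virsh list --all) or from the virt-manager interface.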
Virtualized hardware devices
Virtualization on Fedora presents three distinct types of system devices to virtual machines:
Virtualized and emulated devices
Para-virtualized devices
Physically shared devices
These hardware devices all appear to be physically attached to the virtual machine, but the device drivers work in different ways.
Virtualized and emulated devices
KVM implements many core devices for virtual machines in software. These emulated hardware devices are crucial for virtualizing operating systems. Emulated devices are virtual devices which exist entirely in software. Emulated drivers may use either a physical device or a virtual software device. Emulated drivers are a translation layer between the virtual machine and the Linux kernel (which manages the source device). The device-level instructions are completely translated by the KVM hypervisor. Any device of the same type (storage, network, keyboard, or mouse) that is recognized by the Linux kernel may be used as the backing source device for the emulated drivers.
Virtual CPUs (vCPUs)
A host system can have up to 160 virtual CPUs (vCPUs) that can be presented to guests for their use, regardless of the number of host CPUs.
Emulated graphics devices
Two emulated graphics devices are provided. These devices can be connected to with the SPICE (Simple Protocol for Independent Computing Environments) protocol or with VNC:
A Cirrus CLGD 5446 PCI VGA card (using the cirrus device)
A standard VGA graphics card with Bochs VESA extensions (hardware level, including all non-standard modes)
Emulated system components
The following core system components are emulated to provide basic system functions:
Intel i440FX host PCI bridge
PIIX3 PCI to ISA bridge
PS/2 mouse and keyboard
EvTouch USB Graphics Tablet
PCI UHCI USB controller and a virtualized USB hub
Emulated serial ports
EHCI controller, virtualized USB storage and a USB mouse
USB 3.0 xHCI host controller
Emulated sound devices
Fedora provides an emulated (Intel) HDA sound device, intel-hda. The following two emulated sound devices are also available, but are not recommended due to compatibility issues with certain guest operating systems:
ac97, an emulated Intel 82801AA AC97 Audio compatible sound card
es1370, an emulated ENSONIQ AudioPCI ES1370 sound card
Emulated watchdog devices
Fedora provides two emulated watchdog devices. A watchdog can be used to automatically reboot a virtual machine when it becomes overloaded or unresponsive. The watchdog package must be installed on the guest. The two devices available are:
i6300esb, an emulated Intel 6300 ESB PCI watchdog device
ib700, an emulated iBase 700 ISA watchdog device
Emulated network devices
There are two emulated network devices available:
The e1000 device emulates an Intel E1000 network adapter (Intel 82540EM, 82573L, 82544GC).
The rtl8139 device emulates a Realtek 8139 network adapter.
Emulated storage drivers
Storage devices and storage pools can use these emulated devices to attach storage devices to virtual machines. The guest uses an emulated storage driver to access the storage pool. Note that, like all virtual devices, the storage drivers are not storage devices. The drivers are used to attach a backing storage device, file, or storage pool volume to a virtual machine. The backing storage device can be any supported type of storage device, file, or storage pool volume.
The emulated IDE driver
KVM provides two emulated PCI IDE interfaces. An emulated IDE driver can be used to attach any combination of up to four virtualized IDE hard disks or virtualized IDE CD-ROM drives to each virtual machine. The emulated IDE driver is also used for virtualized CD-ROM and DVD-ROM drives.
The emulated floppy disk drive driver
The emulated floppy disk drive driver is used for creating virtualized floppy drives.
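As a small illustration of how an emulated device is handed to a guest, the sketch below attaches an emulated e1000 network adapter through the libvirt Python bindings. The guest name guest1 is hypothetical, and the network name default is simply libvirt's usual default network; the example is a sketch, not a required procedure.

    # Sketch: attach an emulated Intel E1000 network adapter to a guest via libvirt.
    import libvirt

    NIC_XML = """
    <interface type='network'>
      <source network='default'/>   <!-- libvirt's usual default network -->
      <model type='e1000'/>         <!-- selects the emulated E1000 adapter -->
    </interface>
    """

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("guest1")  # hypothetical guest name
    # Apply to the running guest and persist the change in its configuration.
    dom.attachDeviceFlags(
        NIC_XML,
        libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG)
    conn.close()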
Para-virtualized devices
Para-virtualized devices are drivers for virtual devices that increase the I/O performance of virtual machines. Para-virtualized devices decrease I/O latency and increase I/O throughput to near bare-metal levels. It is recommended to use the para-virtualized drivers for virtual machines running I/O-intensive applications. The para-virtualized devices must be installed on the guest operating system. The para-virtualized drivers must be manually installed on Windows guests.
The para-virtualized network driver (virtio-net)
The para-virtualized network driver can be used as the driver for existing network devices or new network devices for virtual machines.
The para-virtualized block driver (virtio-blk)
The para-virtualized block driver is a driver for all storage devices supported by the hypervisor that are attached to the virtual machine (except for floppy disk drives, which must be emulated).
The para-virtualized controller device (virtio-scsi)
The para-virtualized SCSI controller device provides a more flexible and scalable alternative to virtio-blk. A virtio-scsi guest is capable of inheriting the feature set of the target device, and can handle hundreds of devices, compared to virtio-blk, which can only handle 28 devices.
The para-virtualized clock
Guests using the Time Stamp Counter (TSC) as a clock source may suffer timing issues. KVM works around hosts that do not have a constant Time Stamp Counter by providing guests with a para-virtualized clock.
The para-virtualized serial driver (virtio-serial)
The para-virtualized serial driver is a bytestream-oriented, character stream driver that provides a simple communication interface between the host's user space and the guest's user space.
The balloon driver (virtio-balloon)
The balloon driver can designate part of a virtual machine's RAM as not being used (a process known as balloon inflation), so that the memory can be freed for the host (or for other virtual machines on that host) to use. When the virtual machine needs the memory again, the balloon can be deflated and the host can distribute the RAM back to the virtual machine.
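To illustrate that choosing a para-virtualized device is largely a matter of the device model requested from libvirt, the following sketch attaches a disk through the virtio-blk driver and then lowers the guest's memory balloon target. The guest name guest1 and the image path are hypothetical, and the libvirt Python bindings are assumed.

    # Sketch: use para-virtualized devices from the libvirt API (assumes libvirt-python).
    import libvirt

    DISK_XML = """
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/guest1-data.qcow2'/>  <!-- hypothetical path -->
      <target dev='vdb' bus='virtio'/>  <!-- bus='virtio' selects the virtio-blk driver -->
    </disk>
    """

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("guest1")  # hypothetical guest name
    dom.attachDeviceFlags(
        DISK_XML,
        libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG)
    # The balloon driver lets the host reclaim guest RAM: lower the live memory target to 1 GiB.
    dom.setMemoryFlags(1024 * 1024, libvirt.VIR_DOMAIN_AFFECT_LIVE)  # value is in KiB
    conn.close()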
Physical host devices
Certain hardware platforms allow virtual machines to directly access various hardware devices and components. This process in virtualization is known as device assignment. Device assignment is also known as passthrough.
PCI device assignment
The KVM hypervisor supports attaching PCI devices on the host system to virtual machines. PCI device assignment allows guests to have exclusive access to PCI devices for a range of tasks. It allows PCI devices to appear and behave as if they were physically attached to the guest virtual machine. Device assignment is supported on PCI Express devices, with the exception of graphics cards. Parallel PCI devices may be supported as assigned devices, but they have severe limitations due to security and system configuration conflicts.
USB passthrough
The KVM hypervisor supports attaching USB devices on the host system to virtual machines. USB device assignment allows guests to have exclusive access to USB devices for a range of tasks. It allows USB devices to appear and behave as if they were physically attached to the virtual machine.
SR-IOV
SR-IOV (Single Root I/O Virtualization) is a PCI Express standard that extends a single physical PCI function to share its PCI resources as separate, virtual functions (VFs). Each function is capable of being used by a different virtual machine via PCI device assignment. An SR-IOV capable PCI-e device provides a Single Root Function (for example, a single Ethernet port) and presents multiple, separate virtual devices as unique PCI device functions. Each virtual device may have its own unique PCI configuration space, memory-mapped registers, and individual MSI-based interrupts.
NPIV
N_Port ID Virtualization (NPIV) is a functionality available with some Fibre Channel devices. NPIV shares a single physical N_Port as multiple N_Port IDs. NPIV provides the same functionality for Fibre Channel Host Bus Adapters (HBAs) that SR-IOV provides for PCIe interfaces. With NPIV, virtual machines can be provided with a virtual Fibre Channel initiator to Storage Area Networks (SANs). NPIV can provide high-density virtualized environments with enterprise-level storage solutions.
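The following sketch shows one way PCI device assignment can be expressed through libvirt. The PCI address 0000:03:00.0 is only a placeholder for a real host device, the guest name guest1 is hypothetical, and the libvirt Python bindings are assumed.

    # Sketch: assign a host PCI device to a guest (device assignment / passthrough).
    import libvirt

    HOSTDEV_XML = """
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <!-- placeholder address; substitute a real host device (see lspci) -->
        <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
      </source>
    </hostdev>
    """

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("guest1")  # hypothetical guest name
    # Persist the assignment in the guest configuration; it takes effect on the next boot.
    dom.attachDeviceFlags(HOSTDEV_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
    conn.close()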
Guest CPU models
Historically, CPU model definitions were hard-coded in qemu. This method of defining CPU models was inflexible, and made it difficult to create virtual CPUs with feature sets that matched existing physical CPUs. Typically, users modified a basic CPU model definition with feature flags in order to provide the CPU characteristics required by a virtual machine. Unless these feature sets were carefully controlled, safe migration, which requires feature sets between current and prospective hosts to match, was difficult to support. qemu-kvm has now replaced most hard-wired definitions with configuration-file-based CPU model definitions. Definitions for a number of current processor models are now included by default, allowing users to specify features more accurately and migrate more safely.
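As an informal way to see which named CPU model definitions the hypervisor driver exposes, the sketch below asks the libvirt API for the CPU models known for the x86_64 architecture. It assumes the libvirt Python bindings and a local KVM connection, and is illustrative only.

    # Sketch: list the named guest CPU model definitions known to the hypervisor driver.
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    try:
        models = conn.getCPUModelNames("x86_64", 0)
        print(f"{len(models)} CPU model definitions available, for example:")
        for name in sorted(models)[:10]:
            print("  " + name)
    finally:
        conn.close()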
Storage
Storage for virtual machines is abstracted from the physical storage used by the virtual machine. It is attached to the virtual machine using the para-virtualized or emulated block device drivers.
Storage pools
A storage pool is a file, directory, or storage device managed by libvirt for the purpose of providing storage to virtual machines. Storage pools are divided into storage volumes that store virtual machine images or are attached to virtual machines as additional storage. Multiple guests can share the same storage pool, allowing for better allocation of storage resources.
Local storage pools
Local storage pools are directly attached to the host server. They include local directories, directly attached disks, physical partitions, and LVM volume groups on local devices. Local storage pools are useful for development, testing, and small deployments that do not require migration or large numbers of virtual machines. Local storage pools may not be suitable for many production environments, as they do not support live migration.
Networked (shared) storage pools
Networked storage pools include storage devices shared over a network using standard protocols. Networked storage is required when migrating virtual machines between hosts with virt-manager, but is optional when migrating with virsh. Networked storage pools are managed by libvirt.
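To make the storage pool concept concrete, the following sketch defines, builds, and starts a simple directory-based local storage pool through the libvirt Python bindings. The pool name guest_images and its target path are hypothetical examples.

    # Sketch: define and activate a directory-based storage pool (assumes libvirt-python).
    import libvirt

    POOL_XML = """
    <pool type='dir'>
      <name>guest_images</name>   <!-- hypothetical pool name -->
      <target>
        <path>/var/lib/libvirt/images/guest_images</path>   <!-- hypothetical path -->
      </target>
    </pool>
    """

    conn = libvirt.open("qemu:///system")
    pool = conn.storagePoolDefineXML(POOL_XML, 0)  # persistent pool definition
    pool.build(0)          # create the target directory if it does not exist
    pool.create(0)         # start (activate) the pool
    pool.setAutostart(1)   # start the pool automatically with libvirtd
    conn.close()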
Storage volumes
Storage pools are further divided into storage volumes. Storage volumes are an abstraction of physical partitions, LVM logical volumes, file-based disk images, and other storage types handled by libvirt. Storage volumes are presented to virtual machines as local storage devices regardless of the underlying hardware.
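Continuing the same hypothetical example, this sketch creates a 10 GiB qcow2 volume in the guest_images pool and prints the path under which it could then be attached to a guest. All names are illustrative, and the libvirt Python bindings are assumed.

    # Sketch: create a volume in an existing storage pool and report its path.
    import libvirt

    VOL_XML = """
    <volume>
      <name>guest1-data.qcow2</name>   <!-- hypothetical volume name -->
      <capacity unit='G'>10</capacity>
      <target>
        <format type='qcow2'/>
      </target>
    </volume>
    """

    conn = libvirt.open("qemu:///system")
    pool = conn.storagePoolLookupByName("guest_images")  # hypothetical pool name
    vol = pool.createXML(VOL_XML, 0)
    print("volume created at:", vol.path())
    conn.close()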