Introduction to Fedora virtualization products
This chapter introduces the various virtualization products available in Fedora.
KVM and virtualization in Fedora
What is KVM?
KVM (Kernel-based Virtual Machine) is a full virtualization solution for Linux on AMD64 and Intel 64 hardware that is built into the standard Fedora kernel. It can run multiple, unmodified Windows and Linux guest operating systems. The KVM hypervisor in Fedora is managed with the libvirt API and tools built for libvirt (such as virt-manager and virsh). Virtual machines are executed as multi-threaded Linux processes controlled by these tools.
Overcommitting
The KVM hypervisor supports overcommitting of system resources. Overcommitting means allocating more virtualized CPUs or memory than the physical resources available on the system. Memory overcommitting allows hosts to use memory and virtual memory to increase guest densities.
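For example, memory overcommitting can be expressed directly in a guest's libvirt domain XML. The following illustrative snippet (the sizes are arbitrary) gives the guest a maximum allocation of 8 GiB while booting it with 2 GiB; the sum of such maximum allocations across all guests may exceed the physical RAM on the host:
<!-- illustrative sizes: 8 GiB maximum, 2 GiB at boot -->
<memory unit='KiB'>8388608</memory>
<currentMemory unit='KiB'>2097152</currentMemory>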
Overcommitting involves possible risks to system stability. For more information on overcommitting with KVM, and the precautions that should be taken, refer to the Fedora Virtualization Administration Guide.
Thin provisioning
Thin provisioning allows flexible storage allocation and optimizes the available space for every guest. It gives the appearance that there is more physical storage available to the guest than is actually present. This is not the same as overcommitting, as it pertains only to storage and not to CPU or memory allocations. However, the same warning as for overcommitting applies.
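For example, a qcow2 disk image created with qemu-img is thinly provisioned by default: the guest sees the full virtual size, but the image file on the host only grows as data is written. The path and size below are illustrative:
# qemu-img create -f qcow2 /var/lib/libvirt/images/guest-disk.qcow2 100G
# qemu-img info /var/lib/libvirt/images/guest-disk.qcow2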
Thin provisioning involves possible risks to system stability. For more information on thin provisioning with KVM, and the precautions that should be taken, refer to the Fedora Virtualization Administration Guide.
KSM
Kernel SamePage Merging (KSM), used by the KVM hypervisor, allows KVM guests to share identical memory pages. These shared pages are usually common libraries or other identical, high-use data. KSM allows for greater guest density of identical or similar guest operating systems by avoiding memory duplication.
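As a quick check, KSM activity can be inspected through sysfs on the host. Assuming the ksm and ksmtuned services shipped with Fedora are installed, the following commands start them and show how many pages are currently being shared:
# systemctl start ksm.service ksmtuned.service
# cat /sys/kernel/mm/ksm/pages_sharing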
For more information on KSM, refer to the Fedora Virtualization Administration Guide.
KVM Guest VM Compatibility
To verify whether your processor supports the virtualization extensions and for information on enabling the virtualization extensions if they are disabled, refer to the Fedora Virtualization Administration Guide.
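As a minimal check, the virtualization extensions and the KVM kernel modules can be verified from a shell on the host; vmx indicates Intel VT and svm indicates AMD-V:
$ grep -E 'svm|vmx' /proc/cpuinfo
$ lsmod | grep kvm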
Virtualized hardware devices
Virtualization on Fedora presents three distinct types of system devices to virtual machines. The three types are:
Virtualized and emulated software devices
Para-virtualized devices
Physically shared devices
These hardware devices all appear as being physically attached to the virtual machine but the device drivers work in different ways.
Virtualized and emulated devices
KVM implements many core devices for virtual machines in software. These emulated hardware devices are crucial for virtualizing operating systems.
Emulated devices are virtual devices which exist entirely in software.
Emulated drivers may use either a physical device or a virtual software device. Emulated drivers are a translation layer between the virtual machine and the Linux kernel (which manages the source device). The device-level instructions are completely translated by the KVM hypervisor. Any device of the same type (storage, network, keyboard, or mouse) that is recognized by the Linux kernel may be used as the backing source device for the emulated drivers.
Virtual CPUs (vCPUs)
A host system can have up to 160 virtual CPUs (vCPUs) that can be presented to guests for their use, regardless of the number of host CPUs.
Emulated graphics devices
Two emulated graphics devices are provided. These devices can be connected to with the SPICE (Simple Protocol for Independent Computing Environments) protocol or with VNC:
A Cirrus CLGD 5446 PCI VGA card (using the cirrus device)
A standard VGA graphics card with Bochs VESA extensions (hardware level, including all non-standard modes)
Emulated system components
The following core system components are emulated to provide basic system functions:
Intel i440FX host PCI bridge
PIIX3 PCI to ISA bridge
PS/2 mouse and keyboard
EvTouch USB Graphics Tablet
PCI UHCI USB controller and a virtualized USB hub
Emulated serial ports
EHCI controller, virtualized USB storage and a USB mouse
Emulated network devices
There are two emulated network devices available:
The e1000 device emulates an Intel E1000 network adapter (Intel 82540EM, 82573L, 82544GC).
The rtl8139 device emulates a Realtek 8139 network adapter.
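For illustration, the emulated adapter is selected with the model element in a guest's network interface definition. The snippet below assumes libvirt's standard default NAT network and requests the e1000 device; replacing e1000 with rtl8139 selects the other emulated adapter:
<!-- 'default' is libvirt's standard NAT network; model type selects the emulated adapter -->
<interface type='network'>
  <source network='default'/>
  <model type='e1000'/>
</interface>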
Emulated storage drivers
Storage devices and storage pools can use these emulated devices to attach storage devices to virtual machines. The guest uses an emulated storage driver to access the storage pool.
Note that like all virtual devices, the storage drivers are not storage devices. The drivers are used to attach a backing storage device, file or storage pool volume to a virtual machine. The backing storage device can be any supported type of storage device, file, or storage pool volume.
The emulated IDE driver
KVM provides two emulated PCI IDE interfaces. An emulated IDE driver can be used to attach any combination of up to four virtualized IDE hard disks or virtualized IDE CD-ROM drives to each virtual machine. The emulated IDE driver is also used for virtualized CD-ROM and DVD-ROM drives.
The emulated floppy disk drive driver
The emulated floppy disk drive driver is used for creating virtualized floppy drives.
Para-virtualized devices
Para-virtualized devices are drivers for virtual devices that increase the I/O performance of virtual machines.
Para-virtualized devices decrease I/O latency and increase I/O throughput to near bare-metal levels. It is recommended to use the para-virtualized drivers for virtual machines running I/O intensive applications.
The para-virtualized device drivers must be installed on the guest operating system. On Windows guests, the para-virtualized drivers must be installed manually.
For more information on using the para-virtualized drivers, refer to the Fedora Virtualization Deployment Guide.
Para-virtualized network driver (virtio-net)
The para-virtualized network driver can be used as the driver for existing network devices or new network devices for virtual machines.
Para-virtualized block driver (virtio-blk)
The para-virtualized block driver is a driver for all storage devices supported by the hypervisor and attached to the virtual machine (except for floppy disk drives, which must be emulated).
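For illustration, a disk is attached with the para-virtualized block driver by setting bus='virtio' on the target element in the guest's domain XML. The image path below is an arbitrary example:
<!-- the source file path is an arbitrary example -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/guest-disk.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>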
The para-virtualized clock
Guests using the Time Stamp Counter (TSC) as a clock source may suffer timing issues. KVM works around hosts that do not have a constant Time Stamp Counter by providing guests with a para-virtualized clock.
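From inside a Linux guest, the clock source in use can be checked through sysfs; kvm-clock in the output indicates that the para-virtualized clock is active:
$ cat /sys/devices/system/clocksource/clocksource0/current_clocksource
kvm-clock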
The para-virtualized serial driver (virtio-serial)
The para-virtualized serial driver is a bytestream-oriented, character stream driver, and provides a simple communication interface between the host's user space and the guest's user space.
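For illustration, a virtio-serial channel is defined in the guest's domain XML as a channel device. The socket path and channel name below are arbitrary examples:
<!-- socket path and channel name are arbitrary examples -->
<channel type='unix'>
  <source mode='bind' path='/var/lib/libvirt/qemu/guest-example.sock'/>
  <target type='virtio' name='org.example.channel.0'/>
</channel>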
The balloon driver (virtio-balloon)
The balloon driver can designate part of a virtual machine's RAM as not being used (a process known as balloon inflation), so that the memory can be freed for the host (or for other virtual machines on that host) to use. When the virtual machine needs the memory again, the balloon can be deflated and the host can distribute the RAM back to the virtual machine.
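For example, the current allocation of a running guest can be lowered with virsh, which uses the balloon driver to reclaim the difference; the guest name and size here are illustrative:
# virsh setmem rhel-guest 1048576 --live
# virsh dommemstat rhel-guest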
Physical host devices
Certain hardware platforms allow virtual machines to directly access various hardware devices and components. In virtualization, this process is known as device assignment, also called passthrough.
PCI device assignment
The KVM hypervisor supports attaching PCI devices on the host system to virtual machines. PCI device assignment allows guests to have exclusive access to PCI devices for a range of tasks. It allows PCI devices to appear and behave as if they were physically attached to the guest operating system.
Device assignment is supported on PCI Express devices, with the exception of graphics cards. Parallel PCI devices may be supported as assigned devices, but they have severe limitations due to security and system configuration conflicts.
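As an illustrative sketch, a host PCI device can be listed with virsh and then assigned to a guest with a hostdev element; with managed='yes', libvirt detaches the device from the host driver and reattaches it automatically. The PCI address below is an arbitrary example:
# virsh nodedev-list --cap pci
<!-- PCI address 0000:01:00.0 is an arbitrary example -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>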
For more information on device assignment, refer to the Fedora Virtualization Deployment Guide.
USB passthrough
The KVM hypervisor supports attaching USB devices on the host system to virtual machines. USB device assignment allows guests to have exclusive access to USB devices for a range of tasks. It allows USB devices to appear and behave as if they were physically attached to the virtual machine.
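For illustration, a host USB device is assigned to a guest with a hostdev element that identifies it by vendor and product ID (as reported by lsusb); the IDs below are arbitrary examples:
<!-- vendor and product IDs (from lsusb) are arbitrary examples -->
<hostdev mode='subsystem' type='usb' managed='yes'>
  <source>
    <vendor id='0x0951'/>
    <product id='0x1625'/>
  </source>
</hostdev>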
For more information on USB passthrough, refer to the Fedora Virtualization Administration Guide.
SR-IOV
SR-IOV (Single Root I/O Virtualization) is a PCI Express standard that extends a single physical PCI function to share its PCI resources as separate, virtual functions (VFs). Each function is capable of being used by a different virtual machine via PCI device assignment.
An SR-IOV capable PCI-e device provides a Single Root Function (for example, a single Ethernet port) and presents multiple, separate virtual devices as unique PCI device functions. Each virtual device may have its own unique PCI configuration space, memory-mapped registers, and individual MSI-based interrupts.
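As an illustrative sketch, on drivers that expose the sriov_numvfs sysfs interface, virtual functions can be created on the host and then assigned to guests like any other PCI device. The interface name below is an arbitrary example:
# echo 4 > /sys/class/net/eth0/device/sriov_numvfs
# lspci | grep "Virtual Function"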
For more information on SR-IOV, refer to the Fedora Virtualization Deployment Guide.
NPIV
N_Port ID Virtualization (NPIV) is a functionality available with some Fibre Channel devices. NPIV shares a single physical N_Port as multiple N_Port IDs. NPIV provides similar functionality for Fibre Channel Host Bus Adapters (HBAs) that SR-IOV provides for PCIe interfaces. With NPIV, virtual machines can be provided with a virtual Fibre Channel initiator to Storage Area Networks (SANs).
NPIV can provide high density virtualized environments with enterprise-level storage solutions.
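As an illustrative sketch, libvirt can create a virtual HBA (vHBA) on an NPIV-capable adapter with virsh nodedev-create; the parent HBA name and the vhba.xml file name below are arbitrary examples, and libvirt generates the WWNN/WWPN if they are not specified:
<!-- scsi_host5 is an arbitrary example parent HBA -->
<device>
  <parent>scsi_host5</parent>
  <capability type='scsi_host'>
    <capability type='fc_host'/>
  </capability>
</device>
# virsh nodedev-create vhba.xml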
For more information on NPIV, refer to the Fedora Virtualization Administration Guide.
Guest CPU models
Historically, CPU model definitions were hard-coded in qemu. This method of defining CPU models was inflexible, and made it difficult to create virtual CPUs with feature sets that matched existing physical CPUs. Typically, users modified a basic CPU model definition with feature flags in order to provide the CPU characteristics required by a virtual machine. Unless these feature sets were carefully controlled, safe migration, which requires feature sets between current and prospective hosts to match, was difficult to support.
qemu-kvm has now replaced most hard-wired definitions with configuration file based CPU model definitions. Definitions for a number of current processor models are now included by default, allowing users to specify features more accurately and migrate more safely.
A list of supported CPU models can be viewed with the /usr/libexec/qemu-kvm -cpu ?model command. This command outputs the name used to select the CPU model at the command line, and a model identifier that corresponds to a commercial instance of that processor class.
Configuration details for all of these CPU models can be output with the /usr/libexec/qemu-kvm -cpu ?dump command, but they are also stored in the /usr/share/qemu-kvm/cpu-model/cpu-x86_64.conf file by default. Each CPU model definition begins with [cpudef], as shown:
[cpudef]
name = "Nehalem"
level = "2"
vendor = "GenuineIntel"
family = "6"
model = "26"
stepping = "3"
feature_edx = "sse2 sse fxsr mmx clflush pse36 pat cmov mca \
pge mtrr sep apic cx8 mce pae msr tsc pse de fpu"
feature_ecx = "popcnt x2apic sse4.2 sse4.1 cx16 ssse3 sse3"
extfeature_edx = "i64 syscall xd"
extfeature_ecx = "lahf_lm"
xlevel = "0x8000000A"
model_id = "Intel Core i7 9xx (Nehalem Class Core i7)"
The four CPUID fields, feature_edx, feature_ecx, extfeature_edx and extfeature_ecx, accept named flag values from the corresponding feature sets listed by the /usr/libexec/qemu-kvm -cpu ?cpuid command, as shown:
# /usr/libexec/qemu-kvm -cpu ?cpuid
Recognized CPUID flags:
f_edx: pbe ia64 tm ht ss sse2 sse fxsr mmx acpi ds clflush pn \
pse36 pat cmov mca pge mtrr sep apic cx8 mce pae msr tsc \
pse de vme fpu
f_ecx: hypervisor avx osxsave xsave aes popcnt movbe x2apic \
sse4.2|sse4_2 sse4.1|sse4_1 dca pdcm xtpr cx16 fma cid \
ssse3 tm2 est smx vmx ds_cpl monitor dtes64 pclmuldq \
pni|sse3
extf_edx: 3dnow 3dnowext lm rdtscp pdpe1gb fxsr_opt fxsr mmx \
mmxext nx pse36 pat cmov mca pge mtrr syscall apic cx8 \
mce pae msr tsc pse de vme fpu
extf_ecx: nodeid_msr cvt16 fma4 wdt skinit xop ibs osvw \
3dnowprefetch misalignsse sse4a abm cr8legacy extapic svm \
cmp_legacy lahf_lm
These feature sets are described in greater detail in the appropriate Intel and AMD specifications.
It is important to use the check flag to verify that all configured features are available:
# /usr/libexec/qemu-kvm -cpu Nehalem,check
warning: host cpuid 0000_0001 lacks requested flag 'sse4.2|sse4_2' [0x00100000]
warning: host cpuid 0000_0001 lacks requested flag 'popcnt' [0x00800000]
If a defined feature is not available, it will fail silently by default.
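When guests are managed through libvirt rather than by running qemu-kvm directly, the same CPU models can be requested from the domain XML. The following minimal snippet requests an exact match for the Nehalem model shown above:
<!-- requests the Nehalem model defined above -->
<cpu match='exact'>
  <model>Nehalem</model>
</cpu>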
Storage
Storage for virtual machines is abstracted from the physical storage used by the virtual machine. It is attached to the virtual machine using the para-virtualized or emulated block device drivers.
Storage pools
A storage pool is a file, directory, or storage device managed by libvirt for the purpose of providing storage to virtual machines. Storage pools are divided into storage volumes that store virtual machine images or are attached to virtual machines as additional storage. Multiple guests can share the same storage pool, allowing for better allocation of storage resources. Refer to the Fedora Virtualization Administration Guide for more information.
Local storage pools
Local storage pools are directly attached to the host server. They include local directories, directly attached disks, physical partitions, and LVM volume groups on local devices. Local storage pools are useful for development, testing and small deployments that do not require migration or large numbers of virtual machines. Local storage pools may not be suitable for many production environments as they do not support live migration.
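For example, a directory-based local storage pool can be defined and started with virsh; the pool name and target path here are arbitrary examples:
# virsh pool-define-as guest_images dir --target /var/lib/libvirt/images/guest_images
# virsh pool-build guest_images
# virsh pool-start guest_images
# virsh pool-autostart guest_images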
Networked (shared) storage pools
Networked storage pools include storage devices shared over a network using standard protocols. Networked storage is required for migrating virtual machines between hosts. Networked storage pools are managed by libvirt.
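For example, an NFS-backed (netfs) pool can be defined in a similar way; the server name, export path, and mount target here are arbitrary examples:
# virsh pool-define-as nfs_images netfs --source-host nfs.example.com --source-path /exports/images --target /var/lib/libvirt/images/nfs_images
# virsh pool-start nfs_images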
Storage Volumes
Storage pools are further divided into storage volumes. Storage volumes are an abstraction of physical partitions, LVM logical volumes, file-based disk images and other storage types handled by libvirt. Storage volumes are presented to virtualized guests as local storage devices regardless of the underlying hardware.
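For example, a volume can be created in an existing pool and attached to a running guest with virsh; the pool, volume, guest, and path names here are arbitrary examples:
# virsh vol-create-as guest_images volume1 8G --format qcow2
# virsh attach-disk rhel-guest /var/lib/libvirt/images/guest_images/volume1 vdb --persistent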
For more information on storage and virtualization, refer to the Fedora Virtualization Administration Guide.