<?xml version='1.0' encoding='utf-8' ?>
<!DOCTYPE section PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN" "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd" [
]>
<chapter id="chap-Virtualization_Getting_Started-Products">
	<title>Introduction to Red Hat virtualization products</title>
	<para>This chapter introduces the various virtualization products available in Red Hat Enterprise Linux.</para>
<section id="sec-kvm_and_virt">
	<title>KVM and virtualization in Red Hat Enterprise Linux</title>
	<formalpara>
		<title>What is KVM?</title>
		<para>
			KVM (Kernel-based Virtual Machine) is a full virtualization solution for Linux on AMD64 and Intel&nbsp;64 hardware that is built into the standard Red Hat Enterprise Linux&nbsp;6 kernel. It can run multiple, unmodified Windows and Linux guest operating systems. The KVM hypervisor in Red Hat Enterprise Linux is managed with the <application>libvirt</application> API and tools built for <application>libvirt</application> (such as <command>virt-manager</command> and <command>virsh</command>). Virtual machines run as multi-threaded Linux processes controlled by these tools.
		</para>
	</formalpara>
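	<para>
		For example, whether the KVM modules are loaded on a host can be checked as follows (the processor-specific module differs between Intel and AMD systems):
	</para>
	<screen># lsmod | grep kvm</screen>
	<para>
		On Intel systems this lists <systemitem>kvm</systemitem> and <systemitem>kvm_intel</systemitem>; on AMD systems, <systemitem>kvm</systemitem> and <systemitem>kvm_amd</systemitem>.
	</para>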
	<formalpara>
		<title>Overcommitting</title>
		<para>
		The KVM hypervisor supports <firstterm>overcommitting</firstterm> of system resources. Overcommitting means allocating more virtualized CPUs or memory than the physical resources available on the system. Memory overcommitting allows hosts to utilize memory and virtual memory to increase guest densities.
		</para>
	</formalpara>
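	<para>
		As a minimal sketch, vCPUs can be overcommitted by assigning a guest more virtual processors than the host has physical cores. The guest name <systemitem>guest1</systemitem> below is hypothetical:
	</para>
	<screen># virsh setvcpus guest1 8 --config    # takes effect the next time the guest boots</screen>
	<para>
		The same <command>virsh</command> commands succeed even when the totals across all guests exceed the host's physical resources, which is why the precautions in the guide cited below matter.
	</para>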
	<important>
		<para>Overcommitting involves possible risks to system stability. For more information on overcommitting with KVM, and the precautions that should be taken, refer to the <citetitle>Red Hat Enterprise Linux&nbsp;6 Virtualization Administration Guide</citetitle>.</para>
	</important>
	<formalpara>
		<title>Thin provisioning</title>
		<para>
		Thin provisioning allows flexible storage allocation and optimizes the available space for every guest. It gives the appearance that the guest has more physical storage than is actually available. Thin provisioning differs from overcommitting in that it pertains only to storage, not to CPU or memory allocations. However, as with overcommitting, the same caution applies.
		</para>
	</formalpara>
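	<para>
		Thinly provisioned guest storage is typically created as a qcow2 image or sparse file whose space is allocated on demand. A minimal sketch using <command>qemu-img</command> (the image path is hypothetical):
	</para>
	<screen># qemu-img create -f qcow2 /var/lib/libvirt/images/guest1.qcow2 100G
# qemu-img info /var/lib/libvirt/images/guest1.qcow2</screen>
	<para>
		The guest sees a 100&nbsp;GB disk, while <command>qemu-img info</command> reports a much smaller <literal>disk size</literal> until data is actually written to the image.
	</para>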
		<important>
			<para>Thin provisioning involves possible risks to system stability. For more information on thin provisioning with KVM, and the precautions that should be taken, refer to the <citetitle>Red Hat Enterprise Linux&nbsp;6 Virtualization Administration Guide</citetitle>.</para>
		</important>
	<formalpara>
		<title>KSM</title>
		<para>
			<firstterm>Kernel SamePage Merging (KSM)</firstterm>, used by the KVM hypervisor, allows KVM guests to share identical memory pages. These shared pages are usually common libraries or other identical, high-use data. KSM allows for greater guest density of identical or similar guest operating systems by avoiding memory duplication.
		</para>
	</formalpara>
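	<para>
		On Red Hat Enterprise Linux&nbsp;6 hosts, KSM is controlled by the <systemitem>ksm</systemitem> and <systemitem>ksmtuned</systemitem> services, and its activity can be inspected through <filename>sysfs</filename>:
	</para>
	<screen># service ksm status
# cat /sys/kernel/mm/ksm/pages_sharing</screen>
	<para>
		A non-zero <literal>pages_sharing</literal> value indicates that identical guest memory pages are currently being merged.
	</para>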
	<para>
		For more information on KSM, refer to the <citetitle>Red Hat Enterprise Linux&nbsp;6 Virtualization Administration Guide</citetitle>.
	</para>
	<formalpara>
		<title>KVM Guest VM Compatibility</title>
		<para>To verify whether your processor supports the virtualization extensions, and for information on enabling the virtualization extensions if they are disabled, refer to the <citetitle>Red Hat Enterprise Linux&nbsp;6 Virtualization Administration Guide</citetitle>.</para>
	</formalpara>
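	<para>
		As a quick check, the virtualization extensions appear as CPU flags in <filename>/proc/cpuinfo</filename>: <literal>vmx</literal> for Intel VT and <literal>svm</literal> for AMD-V:
	</para>
	<screen># grep -E 'vmx|svm' /proc/cpuinfo</screen>
	<para>
		If the command produces no output, the extensions are either absent or disabled in the system firmware.
	</para>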
	<para>
	Red Hat Enterprise Linux&nbsp;6 servers have certain support limits.
	</para>
	<para>
	The following URLs describe the processor and memory limits for Red Hat Enterprise Linux:
	</para>
	<itemizedlist>
		<listitem>
			<para>For host systems: <ulink url="http://www.redhat.com/products/enterprise-linux/server/compare.html">http://www.redhat.com/products/enterprise-linux/server/compare.html</ulink></para>
		</listitem>
		<listitem>
			<para>For hypervisors: <ulink url="http://www.redhat.com/resourcelibrary/articles/virtualization-limits-rhel-hypervisors">http://www.redhat.com/resourcelibrary/articles/virtualization-limits-rhel-hypervisors</ulink></para>
		</listitem>
	</itemizedlist>
	<para>
		For a complete chart of supported operating systems and host and guest combinations, refer to <ulink url="http://www.redhat.com/resourcelibrary/articles/enterprise-linux-virtualization-support">http://www.redhat.com/resourcelibrary/articles/enterprise-linux-virtualization-support</ulink>.
	</para>
</section>
<section id="sec_libvirt-libvirt-tools">
	<title>libvirt and libvirt tools</title>
	<para>The <package>libvirt</package> package is a hypervisor-independent virtualization API that is able to interact with the virtualization capabilities of a range of operating systems.</para>
		
	<para>The <package>libvirt</package> package provides:</para>
	<itemizedlist>
		<listitem>
			<para>A common, generic, and stable layer to securely manage virtual machines on a host.
			</para>
		</listitem>
		<listitem>
			<para>A common interface for managing local systems and networked hosts.
			</para>
		</listitem>
		<listitem>
		  <para>All of the APIs required to provision, create, modify, monitor, control, migrate, and stop virtual machines, but only if the hypervisor supports these operations. Although multiple hosts may be accessed with <application>libvirt</application> simultaneously, the APIs are limited to single node operations.
		  </para>
		</listitem>
	</itemizedlist>
		
	<para>The <package>libvirt</package> package is designed as a building block for higher-level management tools and applications, for example, <command>virt-manager</command> and the <command>virsh</command> command-line management tool. With the exception of migration capabilities, <application>libvirt</application> focuses on managing single hosts and provides APIs to enumerate, monitor, and use the resources available on the managed node, including CPUs, memory, storage, networking, and Non-Uniform Memory Access (NUMA) partitions. The management tools can be located on physical machines separate from the host, and communicate with it using secure protocols.</para>
	
		<para>Red Hat Enterprise Linux&nbsp;6 supports <application>libvirt</application> and includes <application>libvirt</application>-based tools as its default method for virtualization management (as does Red Hat Enterprise Virtualization Management).</para>
		<para>The <package>libvirt</package> package is available as free software under the GNU Lesser General Public License. The <package>libvirt</package> project aims to provide a long term stable C API to virtualization management tools, running on top of varying hypervisor technologies. The <package>libvirt</package> package supports Xen on Red Hat Enterprise Linux&nbsp;5, and it supports KVM on both Red Hat Enterprise Linux&nbsp;5 and Red Hat Enterprise Linux&nbsp;6.</para>
		
		<formalpara>
			<title>virsh</title>
			<para>
				The <command>virsh</command> command-line tool is built on the <application>libvirt</application> management API and operates as an alternative to the graphical <command>virt-manager</command> application. Unprivileged users can use <command>virsh</command> in read-only mode, while users with root access have its full administrative functionality. The <command>virsh</command> command is ideal for scripting virtualization administration.
			</para>
		</formalpara>
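		<para>
			A brief sketch of typical <command>virsh</command> usage (the guest name <systemitem>guest1</systemitem> is hypothetical):
		</para>
		<screen># virsh list --all           # list all defined guests and their states
# virsh start guest1         # boot a defined guest
# virsh shutdown guest1      # request a clean guest shutdown
$ virsh --readonly list      # unprivileged, read-only connection</screen>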
		<formalpara><title>virt-manager</title>
			<para>
				<command>virt-manager</command> is a graphical desktop tool for managing virtual machines. It allows access to graphical guest consoles and can be used to perform virtualization administration, virtual machine creation, migration, and configuration tasks. The ability to view virtual machines, host statistics, device information, and performance graphs is also provided. Both local and remote hypervisors can be managed through a single interface.</para>
		</formalpara>
		<para>
			For more information on <command>virt-manager</command>, refer to the <citetitle>Red Hat Enterprise Linux 6 Virtualization Administration Guide</citetitle>.
		</para>
	</section>
	<section id="sec-virtualized-hardware-devices">
		<title>Virtualized hardware devices</title>
		<para>Virtualization on Red Hat Enterprise Linux&nbsp;6 presents three distinct types of system devices to virtual machines. The three types are:</para>
		<itemizedlist>
			<listitem>
				<para>Emulated software devices</para>
			</listitem>
			<listitem>
				<para>Para-virtualized devices</para>
			</listitem>
			<listitem>
				<para>Physically shared devices</para>
			</listitem>
		</itemizedlist>
		<para>These hardware devices all appear to be physically attached to the virtual machine, but the device drivers work in different ways.</para>
		<section id="sec-virt-emulated-devices">
			<title>Virtualized and emulated devices</title>
			
			<para>KVM implements many core devices for virtual machines in software. These emulated hardware devices are crucial for virtualizing operating systems.</para>
			<para>Emulated devices are virtual devices which exist entirely in software.</para>
			<para>Emulated drivers may use either a physical device or a virtual software device. Emulated drivers are a translation layer between the virtual machine and the Linux kernel (which manages the source device). The device-level instructions are completely translated by the KVM hypervisor. Any device of the same type (storage, network, keyboard, or mouse) that is recognized by the Linux kernel may be used as the backing source device for the emulated drivers.</para>
			<formalpara>
				<title>Virtual CPUs (vCPUs)</title>
				<para>
				A host system can have up to 160 virtual CPUs (vCPUs) that can be presented to guests for their use, regardless of the number of host CPUs.
				</para>
			</formalpara>
			<formalpara>
				<title>Emulated graphics devices</title>
				<para>Two emulated graphics devices are provided. These devices can be connected to with the SPICE (Simple Protocol for Independent Computing Environments) protocol or with VNC:</para>
			</formalpara>
			<itemizedlist>
				<listitem>
					<para>A Cirrus CLGD 5446 PCI VGA card (using the <emphasis>cirrus</emphasis> device)</para>
				</listitem>
				<listitem>
					<para>A standard VGA graphics card with Bochs VESA extensions (hardware level, including all non-standard modes)</para>
				</listitem>
			</itemizedlist>
				
			<formalpara>
				<title>Emulated system components</title>
				<para>The following core system components are emulated to provide basic system functions:</para>
				
			</formalpara>
				
			<itemizedlist>
				<listitem>
					<para>Intel i440FX host PCI bridge</para>
				</listitem>
				<listitem>
					<para>PIIX3 PCI to ISA bridge</para>
				</listitem>
				<listitem>
					<para>PS/2 mouse and keyboard</para>
				</listitem>
				<listitem>
					<para>EvTouch USB Graphics Tablet</para>
				</listitem>
				<listitem>
					<para>PCI UHCI USB controller and a virtualized USB hub</para>
				</listitem>
				<listitem>
					<para>Emulated serial ports</para>
				</listitem>
				<listitem>
				  <para>EHCI controller, virtualized USB storage and a USB mouse</para>
				</listitem>
			</itemizedlist>
			<formalpara>
				<title>Emulated sound devices</title>
				<para>
               Red Hat Enterprise Linux&nbsp;6.1 and above provides an emulated (Intel) HDA sound device, <systemitem>intel-hda</systemitem>. This device is supported on the following guest operating systems:
            			</para>
			</formalpara>
         <itemizedlist>
            <listitem>
               <para>
                  Red Hat Enterprise Linux 6, for i386 and x86_64 architectures
               </para>
            </listitem>
            <listitem>
               <para>
                  Red Hat Enterprise Linux 5, for i386 and x86_64 architectures
               </para>
            </listitem>
            <listitem>
               <para>
                  Red Hat Enterprise Linux 4, for i386 and x86_64 architectures
               </para>
            </listitem>
            <listitem>
               <para>
                  Windows 7, for i386 and x86_64 architectures
               </para>
            </listitem>
	    <listitem>
               <para>
                  Windows 2008 R2, for the x86_64 architecture
               </para>
            </listitem>
         </itemizedlist>
            <para>
               The following two emulated sound devices are also available, but are not recommended due to compatibility issues with certain guest operating systems:
            </para>
			<itemizedlist>
				<listitem>
					<para><systemitem>ac97</systemitem>, an emulated Intel 82801AA AC97 Audio compatible sound card</para>
				</listitem>
				<listitem>
					<para><systemitem>es1370</systemitem>, an emulated ENSONIQ AudioPCI ES1370 sound card</para>
				</listitem>
			</itemizedlist>
			<formalpara>
			  <title>Emulated watchdog devices</title>
			  <para>Red Hat Enterprise Linux&nbsp;6.0 and above provides two emulated watchdog devices. A watchdog can be used to automatically reboot a virtual machine when it becomes overloaded or unresponsive.
			  </para>
			</formalpara>
			<para>
			  The <package>watchdog</package> package must be installed on the guest. 
			</para>
			<para>
			  The two devices available are:
			</para>
			<itemizedlist>
			  <listitem>
			    <para>
			      <systemitem>i6300esb</systemitem>, an emulated Intel 6300 ESB PCI watchdog device. It is supported in guests running Red Hat Enterprise Linux&nbsp;6.0 and above, and is the recommended device to use.
			    </para>
			  </listitem>
			  <listitem>
			    <para>
			      <systemitem>ib700</systemitem>, an emulated iBase 700 ISA watchdog device. The <systemitem>ib700</systemitem> watchdog device is only supported in guests using Red Hat Enterprise Linux&nbsp;6.2 and above.
			    </para>
			  </listitem>
			</itemizedlist>
			    <para>
			      Both watchdog devices are supported on the i386 and x86_64 architectures for guests running Red Hat Enterprise Linux&nbsp;6.2 and above.
			    </para>
			<formalpara>
				<title>Emulated network devices</title>
				<para>There are two emulated network devices available:</para>
			</formalpara>
			<itemizedlist>
				<listitem>
					<para>The <systemitem>e1000</systemitem> device emulates an Intel E1000 network adapter (Intel 82540EM, 82573L, 82544GC).</para>
				</listitem>
				<listitem>
					<para>The <systemitem>rtl8139</systemitem> device emulates a Realtek 8139 network adapter. </para>
				</listitem>
			</itemizedlist>
				
			<formalpara id="emustore">
				<title>Emulated storage drivers</title>
				<para>Emulated storage drivers can be used to attach storage devices and storage pools to virtual machines. The guest uses the emulated storage driver to access the storage pool.</para>
			</formalpara>
			    <para>
			    Note that like all virtual devices, the storage drivers are not storage devices. The drivers are used to attach a backing storage device, file or storage pool volume to a virtual machine. The backing storage device can be any supported type of storage device, file, or storage pool volume.
			    </para>
			<variablelist>
				<varlistentry>
					<term>The emulated IDE driver</term>
					<listitem>
						<para>
					KVM provides two emulated PCI IDE interfaces. An emulated IDE driver can be used to attach any combination of up to four virtualized IDE hard disks or virtualized IDE CD-ROM drives to each virtual machine. The emulated IDE driver is also used for virtualized CD-ROM and DVD-ROM drives.</para>
					</listitem>
				</varlistentry>
				<varlistentry>
					<term>The emulated floppy disk drive driver</term>
					<listitem>
						<para>The emulated floppy disk drive driver is used for creating virtualized floppy drives.</para>
					</listitem>
				</varlistentry>
			</variablelist>
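			<para>
				As a sketch, an additional disk image can be attached to a running guest on the emulated IDE bus with <command>virsh</command> (the guest name and image path are hypothetical):
			</para>
			<screen># virsh attach-disk guest1 /var/lib/libvirt/images/data.img hdb --type disk</screen>
			<para>
				The target name <literal>hdb</literal> selects the IDE bus; recall that at most four IDE devices can be attached to each guest.
			</para>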
		</section>
		<section id="para-virtdevices">
			<title>Para-virtualized devices</title>
				
			<para>
				Para-virtualized devices are drivers for virtual devices that increase the I/O performance of virtual machines. 
			</para>
			<para>
			      Para-virtualized devices decrease I/O latency and increase I/O throughput to near bare-metal levels. It is recommended to use the para-virtualized drivers for virtual machines running I/O-intensive applications.
			</para>
			<para>The para-virtualized device drivers must be installed on the guest operating system. By default, the para-virtualized drivers are included in Red Hat Enterprise Linux 4.7 and newer, Red Hat Enterprise Linux 5.4 and newer, and Red Hat Enterprise Linux 6.0 and newer. The para-virtualized drivers must be manually installed on Windows guests.</para>
<para>For more information on using the para-virtualized drivers, refer to the <citetitle>Red Hat Enterprise Linux 6 Virtualization Host Configuration and Guest Installation Guide</citetitle>.</para>
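			<para>
				In a guest's <application>libvirt</application> XML configuration, a para-virtualized device is selected with <literal>bus='virtio'</literal> for a disk or <literal>model type='virtio'</literal> for a network interface. The following fragment is a simplified sketch; the image path and network name are hypothetical:
			</para>
			<screen>&lt;disk type='file' device='disk'&gt;
  &lt;driver name='qemu' type='raw'/&gt;
  &lt;source file='/var/lib/libvirt/images/guest1.img'/&gt;
  &lt;target dev='vda' bus='virtio'/&gt;
&lt;/disk&gt;
&lt;interface type='network'&gt;
  &lt;source network='default'/&gt;
  &lt;model type='virtio'/&gt;
&lt;/interface&gt;</screen>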

			<formalpara>
				<title>Para-virtualized network driver (virtio-net)</title>
				<para>The para-virtualized network driver is a Red Hat branded virtual network device. It can be used as the driver for existing network devices or new network devices for virtual machines.</para>
			</formalpara>
			<formalpara>
				<title>Para-virtualized block driver (virtio-blk)</title>
				<para>The para-virtualized block driver is a driver for all storage devices supported by the hypervisor that are attached to the virtual machine (except for floppy disk drives, which must be emulated).</para>
			</formalpara>
			<formalpara>
				<title>The para-virtualized clock</title>
				<para>
		Guests using the Time Stamp Counter (TSC) as a clock source may suffer timing issues. KVM works around hosts that do not have a constant Time Stamp Counter by providing guests with a para-virtualized clock.</para>
			</formalpara>
			<formalpara>
				<title>The para-virtualized serial driver (virtio-serial)</title>
				<para>The para-virtualized serial driver is a bytestream-oriented character device driver that provides a simple communication interface between the host's user space and the guest's user space.</para>
			</formalpara>
			
			<formalpara>
				<title>The balloon driver (virtio-balloon)</title>
				<para>The balloon driver can designate part of a virtual machine's RAM as not being used (a process known as balloon <emphasis>inflation</emphasis>), so that the memory can be freed for the host (or for other virtual machines on that host) to use. When the virtual machine needs the memory again, the balloon can be <emphasis>deflated</emphasis> and the host can distribute the RAM back to the virtual machine.
				</para>
			</formalpara>
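			<para>
				Balloon operations are typically driven with <command>virsh</command>. A minimal sketch, assuming a hypothetical running guest named <systemitem>guest1</systemitem> with a 2&nbsp;GB maximum memory allocation (<command>virsh setmem</command> takes a size in kibibytes):
			</para>
			<screen># virsh setmem guest1 1048576     # inflate the balloon: shrink guest RAM to 1 GB
# virsh setmem guest1 2097152     # deflate the balloon: return to 2 GB</screen>
			<para>
				The requested size cannot exceed the guest's configured maximum memory.
			</para>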
		</section>
	<section>
			<title>Physical host devices</title>
			<para>Certain hardware platforms allow virtual machines to directly access various hardware devices and components. This process in virtualization is known as <firstterm>device assignment</firstterm>. Device assignment is also known as <firstterm>passthrough</firstterm>.
			</para>
			<formalpara>
				<title>PCI device assignment</title>
				<para>The KVM hypervisor supports attaching PCI devices on the host system to virtual machines. PCI device assignment allows guests to have exclusive access to PCI devices for a range of tasks. It allows PCI devices to appear and behave as if they were physically attached to the guest operating system.</para>
			</formalpara>
			
			
			<para>Device assignment is supported on PCI Express devices, with the exception of graphics cards. Parallel PCI devices may be supported as assigned devices, but they have severe limitations due to security and system configuration conflicts.</para>
			
			<note><para> For more information on device assignment, refer to the <citetitle>Red Hat Enterprise Linux&nbsp;6 Virtualization Host Configuration and Guest Installation Guide</citetitle>.
			</para></note> 
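			<para>
				A sketch of the host-side workflow: identify the device, detach it from the host, and attach it to a guest. The guest name, device address, and file name below are hypothetical:
			</para>
			<screen># virsh nodedev-list --tree
# virsh nodedev-dettach pci_0000_01_00_0
# virsh attach-device guest1 pci-device.xml</screen>
			<para>
				Here <filename>pci-device.xml</filename> is a hypothetical file containing the guest's <literal>&lt;hostdev&gt;</literal> definition for the device.
			</para>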
			
			<formalpara>
			  <title>USB passthrough</title>
			  <para>
			    The KVM hypervisor supports attaching USB devices on the host system to virtual machines. USB device assignment allows guests to have exclusive access to USB devices for a range of tasks. It allows USB devices to appear and behave as if they were physically attached to the virtual machine.
			  </para>
			</formalpara>
			  
			<note>
			  <para>For more information on USB passthrough, refer to the <citetitle>Red Hat Enterprise Linux&nbsp;6 Virtualization Administration Guide</citetitle>.
			</para>
		      </note>
				
			<formalpara>
				<title>SR-IOV</title>
				<para>
					SR-IOV (Single Root I/O Virtualization) is a PCI Express standard that extends a single physical PCI function to share its PCI resources as separate, virtual functions (VFs). Each function is capable of being used by a different virtual machine via PCI device assignment. 
	</para>
			</formalpara>
			<para>
		An SR-IOV-capable PCI-e device provides a Single Root Function (for example, a single Ethernet port) and presents multiple, separate virtual devices as unique PCI device functions. Each virtual device may have its own unique PCI configuration space, memory-mapped registers, and individual MSI-based interrupts.</para> 
			<note><para>For more information on SR-IOV, refer to the <citetitle>Red Hat Enterprise Linux&nbsp;6 Virtualization Host Configuration and Guest Installation Guide</citetitle>.</para></note>
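			<para>
				As an illustrative sketch with an SR-IOV-capable Intel 82576 adapter, Virtual Functions are created by loading the Physical Function driver with a <parameter>max_vfs</parameter> module parameter, after which they appear as ordinary PCI devices (the adapter model and the value <literal>7</literal> are examples):
			</para>
			<screen># modprobe igb max_vfs=7
# lspci | grep 82576</screen>
			<para>
				Each listed Virtual Function can then be assigned to a guest with PCI device assignment.
			</para>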
<formalpara>
				<title>NPIV</title>
			<para>N_Port ID Virtualization (NPIV) is a functionality available with some Fibre Channel devices. NPIV shares a single physical N_Port as multiple N_Port IDs. For Fibre Channel Host Bus Adapters (HBAs), NPIV provides functionality similar to what SR-IOV provides for PCIe interfaces. With NPIV, virtual machines can be provided with a virtual Fibre Channel initiator to Storage Area Networks (SANs).</para></formalpara><para>NPIV can provide high-density virtualized environments with enterprise-level storage solutions.</para>
			<note><para>For more information on NPIV, refer to the <citetitle>Red Hat Enterprise Linux&nbsp;6 Virtualization Administration Guide</citetitle>.</para></note>
</section>

<section id="para-CPU_Models">
	<title>Guest CPU models</title>

   <para>
      Historically, CPU model definitions were hard-coded in 
      <application>qemu</application>. This method of defining CPU models was 
      inflexible, and made it difficult to create virtual CPUs with feature sets 
      that matched existing physical CPUs. Typically, users modified a basic CPU 
      model definition with feature flags in order to provide the CPU 
      characteristics required by a virtual machine. Unless these feature sets were carefully 
      controlled, safe migration &mdash; which requires feature sets between current and 
      prospective hosts to match &mdash; was difficult to support.
   </para>
   <para>
      <application>qemu-kvm</application> has now replaced most hard-wired definitions 
      with configuration file based CPU model definitions. Definitions for a number 
      of current processor models are now included by default, allowing users to specify 
      features more accurately and migrate more safely.
   </para>

   <para>
      A list of supported CPU models can be viewed with the
      <command>/usr/libexec/qemu-kvm -cpu ?model</command> command. This command outputs
      the <parameter>name</parameter> used to select the CPU model at the command line,
      and a model identifier that corresponds to a commercial instance of that processor
	class. The CPU models that Red Hat Enterprise Linux supports can be found in the <citetitle>qemu-kvm Whitelist</citetitle> chapter in the <citetitle>Virtualization Administration Guide</citetitle>.
      </para>

   <para>
      Configuration details for all of these CPU models can be output with the 
      <command>/usr/libexec/qemu-kvm -cpu ?dump</command> command, but they are also stored in the
      <filename>/usr/share/qemu-kvm/cpu-model/cpu-x86_64.conf</filename> file
      by default. Each CPU model definition begins with <literal>[cpudef]</literal>, as shown:
   </para>
   <screen>[cpudef]
   name = "Nehalem"
   level = "2"
   vendor = "GenuineIntel"
   family = "6"
   model = "26"
   stepping = "3"
   feature_edx = "sse2 sse fxsr mmx clflush pse36 pat cmov mca \
                  pge mtrr sep apic cx8 mce pae msr tsc pse de fpu"
   feature_ecx = "popcnt x2apic sse4.2 sse4.1 cx16 ssse3 sse3"
   extfeature_edx = "i64 syscall xd"
   extfeature_ecx = "lahf_lm"
   xlevel = "0x8000000A"
   model_id = "Intel Core i7 9xx (Nehalem Class Core i7)"</screen>

   <para>
      The four CPUID fields, <literal>feature_edx</literal>, <literal>feature_ecx</literal>,
      <literal>extfeature_edx</literal> and <literal>extfeature_ecx</literal>, accept
      named flag values from the corresponding feature sets listed by the
      <command>/usr/libexec/qemu-kvm -cpu ?cpuid</command> command, as shown:      
   </para>
   <screen># /usr/libexec/qemu-kvm -cpu ?cpuid
Recognized CPUID flags:
  f_edx: pbe ia64 tm ht ss sse2 sse fxsr mmx acpi ds clflush pn    \
         pse36 pat cmov mca pge mtrr sep apic cx8 mce pae msr tsc  \
         pse de vme fpu
  f_ecx: hypervisor avx osxsave xsave aes popcnt movbe x2apic      \
         sse4.2|sse4_2 sse4.1|sse4_1 dca pdcm xtpr cx16 fma cid    \
         ssse3 tm2 est smx vmx ds_cpl monitor dtes64 pclmuldq      \
         pni|sse3
  extf_edx: 3dnow 3dnowext lm rdtscp pdpe1gb fxsr_opt fxsr mmx     \
         mmxext nx pse36 pat cmov mca pge mtrr syscall apic cx8    \
         mce pae msr tsc pse de vme fpu
  extf_ecx: nodeid_msr cvt16 fma4 wdt skinit xop ibs osvw          \
         3dnowprefetch misalignsse sse4a abm cr8legacy extapic svm \
         cmp_legacy lahf_lm</screen>
   <para>
      These feature sets are described in greater detail in the appropriate Intel
      and AMD specifications.
   </para>
   <para>
      It is important to use the <code>check</code> flag to verify that all
      configured features are available.
   </para>
   <screen># /usr/libexec/qemu-kvm -cpu Nehalem,check
warning: host cpuid 0000_0001 lacks requested flag 'sse4.2|sse4_2' [0x00100000]
warning: host cpuid 0000_0001 lacks requested flag 'popcnt' [0x00800000]</screen>
   <para>
      If a defined feature is not available on the host, that feature fails silently
      by default.
    </para>

</section>
	</section>	
	<section>
		<title>Storage</title>
			<para>Storage for virtual machines is abstracted from the physical storage used by the virtual machine. It is attached to the virtual machine using the para-virtualized or emulated block device drivers.</para>
		<section>
			<title>Storage pools</title>
			<para>
			  A <firstterm>storage pool</firstterm> is a file, directory, or storage device managed by <application>libvirt</application> for the purpose of providing storage to virtual machines. Storage pools are divided into storage <firstterm>volumes</firstterm> that store virtual machine images or are attached to virtual machines as additional storage. Multiple guests can share the same storage pool, allowing for better allocation of storage resources. Refer to the <citetitle>Red Hat Enterprise Linux&nbsp;6 Virtualization Administration Guide</citetitle> for more information.
			</para>
			<variablelist>
				<varlistentry>
				<term>Local storage pools</term>
			<listitem>
			<para>Local storage pools are directly attached to the host server. They include local directories, directly attached disks, physical partitions, and LVM volume groups on local devices. Local storage pools are useful for development, testing, and small deployments that do not require migration or large numbers of virtual machines. Local storage pools may not be suitable for many production environments, as they do not support live migration.</para>
			</listitem>
			</varlistentry>
			<varlistentry>
				<term>Networked (shared) storage pools</term>
			<listitem>
			  <para>Networked storage pools include storage devices shared over a network using standard protocols. Networked storage is required for migrating virtual machines between hosts. Networked storage pools are managed by <application>libvirt</application>.</para>
			</listitem>
			</varlistentry>
			</variablelist>
		<formalpara><title>Storage Volumes</title>
		<para>Storage pools are further divided into storage volumes. Storage volumes are an abstraction of physical partitions, LVM logical volumes, file-based disk images and other storage types handled by <application>libvirt</application>. Storage volumes are presented to virtualized guests as local storage devices regardless of the underlying hardware.</para></formalpara>

      <note><para>For more information on storage and virtualization refer to the <citetitle>Red Hat Enterprise Linux&nbsp;6 Virtualization Administration Guide</citetitle>.</para>
      </note>
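		<para>
			A minimal sketch of creating and starting a directory-backed local storage pool with <command>virsh</command> (the pool name and target path are hypothetical):
		</para>
		<screen># virsh pool-define-as guest_images dir --target /var/lib/libvirt/images/guest_images
# virsh pool-build guest_images
# virsh pool-start guest_images
# virsh pool-autostart guest_images
# virsh pool-list --all</screen>
		<para>
			Volumes can then be created in the pool and attached to guests as local storage devices.
		</para>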
	</section>
	</section>
</chapter>