Users' Manual
Xen v3.0
Chinese translation: 北南南北 etony, From LinuxSir.Org
DISCLAIMER: This documentation is always under active development and as such there may be mistakes and omissions -- watch out for these and please report any you find to the developers' mailing list, xen-devel@lists.xensource.com. The latest version is always available on-line. Contributions of material, suggestions and corrections are welcome.
Xen is Copyright ©2002-2005, University of Cambridge, UK, XenSource Inc., IBM Corp., Hewlett-Packard Co., Intel Corp., AMD Inc., and others. All rights reserved.
Xen is an open-source project. Most portions of Xen are licensed for copying under the terms of the GNU General Public License, version 2. Other portions are licensed under the terms of the GNU Lesser General Public License, the Zope Public License 2.0, or under ``BSD-style'' licenses. Please refer to the COPYING file for details.
Xen includes software by Christopher Clark. This software is covered by the following licence:
Copyright (c) 2002, Christopher Clark. All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
- Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
- Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
- Neither the name of the original author; nor the names of any contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Contents
- 1 Installation
- 2 Configuration and Management
  - 4. Domain Management Tools
  - 5. Domain Configuration
  - 6. Storage and File System Management
  - 7. CPU Management
  - 8. Migrating Domains
  - 9. Securing Xen
- 3 Reference
1. Introduction
Xen is an open-source para-virtualizing virtual machine monitor (VMM), or ``hypervisor'', for the x86 processor architecture. Xen can securely execute multiple virtual machines on a single physical system with close-to-native performance. Xen facilitates enterprise-grade functionality, including:
- Virtual machines with performance close to native hardware.
- Live migration of running virtual machines between physical hosts.
- Up to 32 virtual CPUs per guest virtual machine, with VCPU hotplug.
- x86/32, x86/32 with PAE, and x86/64 platform support.
- Intel Virtualization Technology (VT-x) for unmodified guest operating systems (including Microsoft Windows).
- Excellent hardware support (supports almost all Linux device drivers).
1.1 Usage Scenarios
Usage scenarios for Xen include:
- Server Consolidation.
- Move multiple servers onto a single physical host with performance and fault isolation provided at the virtual machine boundaries.
- Hardware Independence.
- Allow legacy applications and operating systems to exploit new hardware.
- Multiple OS configurations.
- Run multiple operating systems simultaneously, for development or testing purposes.
- Kernel Development.
- Test and debug kernel modifications in a sand-boxed virtual machine -- no need for a separate test machine.
- Cluster Computing.
- Management at VM granularity provides more flexibility than separately managing each physical host, but better control and isolation than single-system image solutions, particularly by using live migration for load balancing.
- Hardware support for custom OSes.
- Allow development of new OSes while benefiting from the wide-ranging hardware support of existing OSes such as Linux.
1.2 Operating System Support
Para-virtualization permits very high performance virtualization, even on architectures like x86 that are traditionally very hard to virtualize.
This approach requires operating systems to be ported to run on Xen. Porting an OS to run on Xen is similar to supporting a new hardware platform, however the process is simplified because the para-virtual machine architecture is very similar to the underlying native hardware. Even though operating system kernels must explicitly support Xen, a key feature is that user space applications and libraries do not require modification.
With hardware CPU virtualization as provided by Intel VT and AMD SVM technology, the ability to run an unmodified guest OS kernel is available. No porting of the OS is required, although some additional driver support is necessary within Xen itself. Unlike traditional full virtualization hypervisors, which suffer a tremendous performance overhead, the combination of Xen and VT or Xen and Pacifica technology complement one another to offer superb performance for para-virtualized guest operating systems and full support for unmodified guests running natively on the processor. Full support for VT and Pacifica chipsets will appear in early 2006.
Paravirtualized Xen support is available for increasingly many operating systems: currently, mature Linux support is available and included in the standard distribution. Other OS ports--including NetBSD, FreeBSD and Solaris x86 v10--are nearing completion.
1.3 Hardware Support
Xen currently runs on the x86 architecture, requiring a ``P6'' or newer processor (e.g. Pentium Pro, Celeron, Pentium II, Pentium III, Pentium IV, Xeon, AMD Athlon, AMD Duron). Multiprocessor machines are supported, and there is support for HyperThreading (SMT). In addition, ports to IA64 and Power architectures are in progress.
The default 32-bit Xen supports up to 4GB of memory. However Xen 3.0 adds support for Intel's Physical Addressing Extensions (PAE), which enable x86/32 machines to address up to 64 GB of physical memory. Xen 3.0 also supports x86/64 platforms such as Intel EM64T and AMD Opteron which can currently address up to 1TB of physical memory.
Xen offloads most of the hardware support issues to the guest OS running in the Domain 0 management virtual machine. Xen itself contains only the code required to detect and start secondary processors, set up interrupt routing, and perform PCI bus enumeration. Device drivers run within a privileged guest OS rather than within Xen itself. This approach provides compatibility with the majority of device hardware supported by Linux. The default XenLinux build contains support for most server-class network and disk hardware, but you can add support for other hardware by configuring your XenLinux kernel in the normal way.
1.4 Structure of a Xen-Based System
A Xen system has multiple layers, the lowest and most privileged of which is Xen itself.
Xen may host multiple guest operating systems, each of which is executed within a secure virtual machine (in Xen terminology, a domain). Domains are scheduled by Xen to make effective use of the available physical CPUs. Each guest OS manages its own applications. This management includes the responsibility of scheduling each application within the time allotted to the VM by Xen.
The first domain, domain 0, is created automatically when the system boots and has special management privileges. Domain 0 builds other domains and manages their virtual devices. It also performs administrative tasks such as suspending, resuming and migrating other virtual machines.
Within domain 0, a process called xend runs to manage the system. Xend is responsible for managing virtual machines and providing access to their consoles. Commands are issued to xend over an HTTP interface, via a command-line tool.
1.5 History
Xen was originally developed by the Systems Research Group at the University of Cambridge Computer Laboratory as part of the XenoServers project, funded by the UK-EPSRC.
XenoServers aim to provide a ``public infrastructure for global distributed computing''. Xen plays a key part in that, allowing one to efficiently partition a single machine to enable multiple independent clients to run their operating systems and applications in an environment. This environment provides protection, resource isolation and accounting. The project web page contains further information along with pointers to papers and technical reports: http://www.cl.cam.ac.uk/xeno
Xen has grown into a fully-fledged project in its own right, enabling us to investigate interesting research issues regarding the best techniques for virtualizing resources such as the CPU, memory, disk and network. Project contributors now include XenSource, Intel, IBM, HP, AMD, Novell, RedHat.
Xen was first described in a paper presented at SOSP in 2003, and the first public release (1.0) was made that October. Since then, Xen has significantly matured and is now used in production scenarios on many sites.
1.6 What's New
Xen 3.0.0 offers:
- Support for up to 32-way SMP guest operating systems
- Intel Physical Addressing Extensions (PAE) to support 32-bit servers with more than 4GB physical memory
- x86/64 support (Intel EM64T, AMD Opteron)
- Intel VT-x support to enable the running of unmodified guest operating systems (Windows XP/2003, Legacy Linux)
- Enhanced control tools
- Improved ACPI support
- AGP/DRM graphics support
Xen 3.0 features greatly enhanced hardware support, configuration flexibility, usability and a larger complement of supported operating systems. This latest release takes Xen a step closer to being the definitive open source solution for virtualization.
1 Installation
2. Basic Installation
The Xen distribution includes three main components: Xen itself, ports of Linux and NetBSD to run on Xen, and the userspace tools required to manage a Xen-based system. This chapter describes how to install the Xen 3.0 distribution from source. Alternatively, there may be pre-built packages available as part of your operating system distribution.
2.1 Prerequisites
The following is a full list of prerequisites. Items marked `†' are required by the xend control tools, and hence required if you want to run more than one virtual machine; items marked `*' are only required if you wish to build from source.
- † A working Linux distribution using the GRUB bootloader and running on a P6-class or newer CPU.
- † The iproute2 package.
- † The Linux bridge-utils (e.g., /sbin/brctl).
- † The Linux hotplug system (e.g., /sbin/hotplug and related scripts). On newer distributions, this is included alongside the Linux udev system.
- * Build tools (gcc v3.2.x or v3.3.x, binutils, GNU make).
- * Development installation of zlib (e.g., zlib-dev).
- * Development installation of Python v2.2 or later (e.g., python-dev).
- * LaTeX and transfig are required to build the documentation.
Once you have satisfied these prerequisites, you can now install either a binary or source distribution of Xen.
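If you are unsure whether these are present, a quick spot-check from a shell can help; the commands below are merely one way to do this, and the package names that provide them vary between distributions:
# brctl show       # bridge-utils
# ip link show     # iproute2
# gcc --version    # build tools (source builds only)
# python -V        # Python 2.2 or later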
2.2 Installing from Binary Tarball
Pre-built tarballs are available for download from the XenSource downloads page:
http://www.xensource.com/downloads/
Once you've downloaded the tarball, simply unpack and install:
# tar zxvf xen-3.0-install.tgz
# cd xen-3.0-install
# sh ./install.sh
Once you've installed the binaries you need to configure your system as described in Section 2.5.
2.3 Installing from RPMs
Pre-built RPMs are available for download from the XenSource downloads page:
http://www.xensource.com/downloads/
Once you've downloaded the RPMs, you typically install them via the RPM commands:
# rpm -iv rpmname
See the instructions and the Release Notes for each RPM set referenced at:
http://www.xensource.com/downloads/.
2.4 Installing from Source
This section describes how to obtain, build and install Xen from source.
2.4.1 Obtaining the Source
The Xen source tree is available as either a compressed source tarball or as a clone of our master Mercurial repository.
- Obtaining the Source Tarball
Stable versions and daily snapshots of the Xen source tree are available from the Xen download page:
http://www.xensource.com/downloads/
- Obtaining the source via Mercurial
The source tree may also be obtained via the public Mercurial repository at:
http://xenbits.xensource.com
See the instructions and the Getting Started Guide referenced at: http://www.xensource.com/downloads/
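As a rough sketch of the Mercurial route (the repository name below is an assumption; check the download page or xenbits for the tree you actually want):
# hg clone http://xenbits.xensource.com/xen-3.0-testing.hg
# cd xen-3.0-testing.hg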
2.4.2 Building from Source
The top-level Xen Makefile includes a target ``world'' that will do the following:
- Build Xen
- Build the control tools, including xend
- Download (if necessary) and unpack the Linux 2.6 source code, and patch it for use with Xen.
- Build a Linux kernel to use in domain 0 and a smaller unprivileged kernel, which can be used for unprivileged virtual machines.
After the build has completed you should have a top-level directory called dist/ in which all resulting targets will be placed. Of particular interest are the two XenLinux kernel images, one with a ``-xen0'' extension which contains hardware device drivers and drivers for Xen's virtual devices, and one with a ``-xenU'' extension that just contains the virtual ones. These are found in dist/install/boot/ along with the image for Xen itself and the configuration files used during the build.
To customize the set of kernels built you need to edit the top-level Makefile. Look for the line:
KERNELS ?= linux-2.6-xen0 linux-2.6-xenU
You can edit this line to include any set of operating system kernels which have configurations in the top-level buildconfigs/ directory.
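For example, to build only the domain 0 kernel, the line could be reduced to:
KERNELS ?= linux-2.6-xen0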
2.4.3 Custom Kernels
If you wish to build a customized XenLinux kernel (e.g. to support additional devices or enable distribution-required features), you can use the standard Linux configuration mechanisms, specifying that the architecture being built for is xen, e.g.:
# cd linux-2.6.12-xen0
# make ARCH=xen xconfig
# cd ..
# make
You can also copy an existing Linux configuration (.config) into e.g. linux-2.6.12-xen0 and execute:
# make ARCH=xen oldconfig
You may be prompted with some Xen-specific options. We advise accepting the defaults for these options.
Note that the only difference between the two types of Linux kernels that are built is the configuration file used for each. The ``U'' suffixed (unprivileged) versions don't contain any of the physical hardware device drivers, leading to a 30% reduction in size; hence you may prefer these for your non-privileged domains. The ``0'' suffixed privileged versions can be used to boot the system, as well as in driver domains and unprivileged domains.
2.4.4 Installing Generated Binaries
The files produced by the build process are stored under the dist/install/ directory. To install them in their default locations, do:
# make install
Alternatively, users with special installation requirements may wish to install them manually by copying the files to their appropriate destinations.
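As a sketch of such a manual installation (the exact file names and paths depend on the Xen and Linux versions you built, so treat these as placeholders), the main pieces could be copied along these lines:
# cp dist/install/boot/xen-3.0.0.gz /boot/
# cp dist/install/boot/vmlinuz-2.6.12.6-xen0 /boot/
# cp -a dist/install/lib/modules/* /lib/modules/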
The dist/install/boot directory will also contain the config files used for building the XenLinux kernels, and also versions of the Xen and XenLinux kernels that contain debug symbols (xen-syms-3.0.0 and vmlinux-syms-2.6.12.6-xen0), which are essential for interpreting crash dumps. Retain these files, as the developers may wish to see them if you post on the mailing list.
2.5 Configuration
Once you have built and installed the Xen distribution, it is simple to prepare the machine for booting and running Xen.
2.5.1 GRUB Configuration
An entry should be added to grub.conf (often found under /boot/ or /boot/grub/) to allow Xen / XenLinux to boot. This file is sometimes called menu.lst, depending on your distribution. The entry should look something like the following:
title Xen 3.0 / XenLinux 2.6
kernel /boot/xen-3.0.gz dom0_mem=262144
module /boot/vmlinuz-2.6-xen0 root=/dev/sda4 ro console=tty0
The kernel line tells GRUB where to find Xen itself and what boot parameters should be passed to it (in this case, setting the domain 0 memory allocation in kilobytes and the settings for the serial port). For more details on the various Xen boot parameters see Section 10.3.
The module line of the configuration describes the location of the XenLinux kernel that Xen should start and the parameters that should be passed to it. These are standard Linux parameters, identifying the root device and specifying it be initially mounted read only and instructing that console output be sent to the screen. Some distributions such as SuSE do not require the ro parameter.
To use an initrd, add another module line to the configuration, like:
module /boot/my_initrd.gz
When installing a new kernel, it is recommended that you do not delete existing menu options from menu.lst, as you may wish to boot your old Linux kernel in future, particularly if you have problems.
2.5.2 Serial Console (optional)
Serial console access allows you to manage, monitor, and interact with your system over a serial console. This can allow access from another nearby system via a null-modem (``LapLink'') cable or remotely via a serial concentrator.
Your system's BIOS, bootloader (GRUB), Xen, Linux, and login access must each be individually configured for serial console access. It is not strictly necessary to have each component fully functional, but it can be quite useful.
For general information on serial console configuration under Linux, refer to the ``Remote Serial Console HOWTO'' at The Linux Documentation Project: http://www.tldp.org
2.5.2.1 Serial Console BIOS configuration
Enabling system serial console output neither enables nor disables serial capabilities in GRUB, Xen, or Linux, but may make remote management of your system more convenient by displaying POST and other boot messages over serial port and allowing remote BIOS configuration.
Refer to your hardware vendor's documentation for capabilities and procedures to enable BIOS serial redirection.
2.5.2.2 Serial Console GRUB configuration
Enabling GRUB serial console output neither enables nor disables Xen or Linux serial capabilities, but may make remote management of your system more convenient by displaying GRUB prompts, menus, and actions over serial port and allowing remote GRUB management.
Adding the following two lines to your GRUB configuration file, typically either /boot/grub/menu.lst or /boot/grub/grub.conf depending on your distro, will enable GRUB serial output.
serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1
terminal --timeout=10 serial console
Note that when both the serial port and the local monitor and keyboard are enabled, the text ``Press any key to continue'' will appear at both. Pressing a key on one device will cause GRUB to display to that device. The other device will see no output. If no key is pressed before the timeout period expires, the system will boot to the default GRUB boot entry.
Please refer to the GRUB documentation for further information.
2.5.2.3 Serial Console Xen configuration
Enabling Xen serial console output neither enables nor disables Linux kernel output or logging in to Linux over serial port. It does however allow you to monitor and log the Xen boot process via serial console and can be very useful in debugging.
In order to configure Xen serial console output, it is necessary to add a boot option to your GRUB config; e.g. replace the previous example kernel line with:
kernel /boot/xen.gz dom0_mem=131072 com1=115200,8n1
This configures Xen to output on COM1 at 115,200 baud, 8 data bits, no parity and 1 stop bit. Modify these parameters for your environment. See Section 10.3 for an explanation of all boot parameters.
One can also configure XenLinux to share the serial console; to achieve this append ``console=ttyS0'' to your module line.
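Building on the earlier GRUB example, the module line would then look something like:
module /boot/vmlinuz-2.6-xen0 root=/dev/sda4 ro console=tty0 console=ttyS0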
2.5.2.4 Serial Console Linux configuration
Enabling Linux serial console output at boot neither enables nor disables logging in to Linux over serial port. It does however allow you to monitor and log the Linux boot process via serial console and can be very useful in debugging.
To enable Linux output at boot time, add the parameter console=ttyS0 (or ttyS1, ttyS2, etc.) to your kernel GRUB line. Under Xen, this might be:
module /vmlinuz-2.6-xen0 ro root=/dev/VolGroup00/LogVol00 \
console=ttyS0, 115200
to enable output over ttyS0 at 115200 baud.
2.5.2.5 Serial Console Login configuration
Logging in to Linux via serial console, under Xen or otherwise, requires specifying a login prompt be started on the serial port. To permit root logins over serial console, the serial port must be added to /etc/securetty.
To automatically start a login prompt over the serial port, add the line:
c:2345:respawn:/sbin/mingetty ttyS0
to /etc/inittab. Run init q to force a reload of your inittab and start getty.
To enable root logins, add ttyS0 to /etc/securetty if not already present.
Your distribution may use an alternate getty; options include getty, mgetty and agetty. Consult your distribution's documentation for further information.
2.5.3 TLS Libraries
Users of the XenLinux 2.6 kernel should disable Thread Local Storage (TLS) (e.g. by doing a mv /lib/tls /lib/tls.disabled) before attempting to boot a XenLinux kernel. You can always reenable TLS by restoring the directory to its original location (i.e. mv /lib/tls.disabled /lib/tls).
The reason for this is that the current TLS implementation uses segmentation in a way that is not permissible under Xen. If TLS is not disabled, an emulation mode is used within Xen which reduces performance substantially. To ensure full performance you should install a `Xen-friendly' (nosegneg) version of the library.
2.6 Booting Xen
It should now be possible to restart the system and use Xen. Reboot and choose the new Xen option when the Grub screen appears.
What follows should look much like a conventional Linux boot. The first portion of the output comes from Xen itself, supplying low level information about itself and the underlying hardware. The last portion of the output comes from XenLinux.
You may see some error messages during the XenLinux boot. These are not necessarily anything to worry about--they may result from kernel configuration differences between your XenLinux kernel and the one you usually use.
When the boot completes, you should be able to log into your system as usual. If you are unable to log in, you should still be able to reboot with your normal Linux kernel by selecting it at the GRUB prompt.
3. Booting a Xen System
Booting the system into Xen will bring you up into the privileged management domain, Domain0. At that point you are ready to create guest domains and ``boot'' them using the xm create command.
[Translator's note: each virtualized guest operating system corresponds to a domain. When the system boots with a Xen-enabled kernel, the privileged Domain0 is created automatically; it manages the hardware and provides the virtual hardware environment for the other domains. -- 北南, LinuxSir.Org]
3.1 Booting Domain0
After installation and configuration is complete, reboot the system and choose the new Xen option when the Grub screen appears.
What follows should look much like a conventional Linux boot. The first portion of the output comes from Xen itself, supplying low level information about itself and the underlying hardware. The last portion of the output comes from XenLinux.
When the boot completes, you should be able to log into your system as usual. If you are unable to log in, you should still be able to reboot with your normal Linux kernel by selecting it at the GRUB prompt.
The first step in creating a new domain is to prepare a root filesystem for it to boot. Typically, this might be stored in a normal partition, an LVM or other volume manager partition, a disk file or on an NFS server. A simple way to do this is simply to boot from your standard OS install CD and install the distribution into another partition on your hard drive.
To start the xend control daemon, type
# xend start
If you wish the daemon to start automatically, see the instructions in Section 4.1. Once the daemon is running, you can use the xm tool to monitor and maintain the domains running on your system. This chapter provides only a brief tutorial. We provide full details of the xm tool in the next chapter.
3.2 Booting Guest Domains
3.2.1 Creating a Domain Configuration File
Before you can start an additional domain, you must create a configuration file. We provide two example files which you can use as a starting point:
- /etc/xen/xmexample1 is a simple template configuration file for describing a single VM.
- /etc/xen/xmexample2 file is a template description that is intended to be reused for multiple virtual machines. Setting the value of the vmid variable on the xm command line fills in parts of this template.
There are also a number of other examples which you may find useful. Copy one of these files and edit it as appropriate. Typical values you may wish to edit include:
- kernel
- Set this to the path of the kernel you compiled for use with Xen (e.g. kernel = ``/boot/vmlinuz-2.6-xenU'')
- memory
- Set this to the size of the domain's memory in megabytes (e.g. memory = 64)
- disk
- Set the first entry in this list to calculate the offset of the domain's root partition, based on the domain ID. Set the second to the location of /usr if you are sharing it between domains (e.g. disk = ['phy:your_hard_drive%d,sda1,w' % (base_partition_number + vmid), 'phy:your_usr_partition,sda6,r' ])
- dhcp
- Uncomment the dhcp variable, so that the domain will receive its IP address from a DHCP server (e.g. dhcp=``dhcp'')
You may also want to edit the vif variable in order to choose the MAC address of the virtual ethernet interface yourself. For example:
vif = ['mac=00:16:3E:F6:BB:B3']
If you do not set this variable, xend will automatically generate a random MAC address from the range 00:16:3E:xx:xx:xx, assigned by IEEE to XenSource as an OUI (organizationally unique identifier). XenSource Inc. gives permission for anyone to use addresses randomly allocated from this range for use by their Xen domains.
For a list of IEEE OUI assignments, see http://standards.ieee.org/regauth/oui/oui.txt
3.2.2 Booting the Guest Domain
The xm tool provides a variety of commands for managing domains. Use the create command to start new domains. Assuming you've created a configuration file myvmconf based around /etc/xen/xmexample2, to start a domain with virtual machine ID 1 you should type:
# xm create -c myvmconf vmid=1
The -c switch causes xm to turn into the domain's console after creation. The vmid=1 sets the vmid variable used in the myvmconf file.
[Translator's note: for example, if your virtualized Slackware system uses vmid=1 and your Gentoo system uses vmid=2, then xm create -c myvmconf vmid=1 boots Slackware, while xm create -c myvmconf vmid=2 boots Gentoo.]
You should see the console boot messages from the new domain appearing in the terminal in which you typed the command, culminating in a login prompt.
3.3 Starting / Stopping Domains Automatically
It is possible to have certain domains start automatically at boot time and to have dom0 wait for all running domains to shutdown before it shuts down the system.
To specify that a domain should start at boot-time, place its configuration file (or a link to it) under /etc/xen/auto/.
A Sys-V style init script for Red Hat and LSB-compliant systems is provided and will be automatically copied to /etc/init.d/ during install. You can then enable it in the appropriate way for your distribution.
For instance, on Red Hat:
# chkconfig --add xendomains
By default, this will start the boot-time domains in runlevels 3, 4 and 5.
You can also use the service command to run this script manually, e.g:
# service xendomains start
Starts all the domains with config files under /etc/xen/auto/.
# service xendomains stop
Shuts down all running Xen domains.
2 Configuration and Management
4. Domain Management Tools
This chapter summarizes the management software and tools available.
4.1 Xend
The Xend node control daemon performs system management functions related to virtual machines. It forms a central point of control of virtualized resources, and must be running in order to start and manage virtual machines. Xend must be run as root because it needs access to privileged system management functions.
An initialization script named /etc/init.d/xend is provided to start Xend at boot time. Use the tool appropriate (i.e. chkconfig) for your Linux distribution to specify the runlevels at which this script should be executed, or manually create symbolic links in the correct runlevel directories.
Xend can be started on the command line as well, and supports the following set of parameters:
# xend start | start xend, if not already running
# xend stop | stop xend if already running
# xend restart | restart xend if running, otherwise start it
# xend status | indicates xend status by its return code
A SysV init script called xend is provided to start xend at boot time. make install installs this script in /etc/init.d. To enable it, you have to make symbolic links in the appropriate runlevel directories or use the chkconfig tool, where available. Once xend is running, administration can be done using the xm tool.
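For example, on a chkconfig-based distribution the runlevel links can be created with (the default runlevels are whatever your distribution chooses):
# chkconfig --add xend
# chkconfig xend on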
4.1.1 Logging
As xend runs, events will be logged to /var/log/xend.log and (less frequently) to /var/log/xend-debug.log. These, along with the standard syslog files, are useful when troubleshooting problems.
4.1.2 Configuring Xend
Xend is written in Python. At startup, it reads its configuration information from the file /etc/xen/xend-config.sxp. The Xen installation places an example xend-config.sxp file in the /etc/xen subdirectory which should work for most installations.
See the example configuration file xend-debug.sxp and the section 5 man page xend-config.sxp for a full list of parameters and more detailed information. Some of the most important parameters are discussed below.
An HTTP interface and a Unix domain socket API are available to communicate with Xend. This allows remote users to pass commands to the daemon. By default, Xend does not start an HTTP server. It does start a Unix domain socket management server, as the low level utility xm requires it. For support of cross-machine migration, Xend can start a relocation server. This support is not enabled by default for security reasons.
Note: the example xend configuration file modifies the defaults and starts up Xend as an HTTP server as well as a relocation server.
From the file:
#(xend-http-server no)
(xend-http-server yes)
#(xend-unix-server yes)
#(xend-relocation-server no)
(xend-relocation-server yes)
Comment or uncomment lines in that file to disable or enable features that you require.
Connections from remote hosts are disabled by default:
# Address xend should listen on for HTTP connections, if xend-http-server is
# set.
# Specifying 'localhost' prevents remote connections.
# Specifying the empty string '' (the default) allows all connections.
#(xend-address '')
(xend-address localhost)
It is recommended that if migration support is not needed, the xend-relocation-server parameter value be changed to ``no'' or commented out.
4.2 Xm
The xm tool is the primary tool for managing Xen from the console. The general format of an xm command line is:
# xm command [switches] [arguments] [variables]
The available switches and arguments are dependent on the command chosen. The variables may be set using declarations of the form variable=value and command line declarations override any of the values in the configuration file being used, including the standard variables described above and any custom variables (for instance, the xmdefconfig file uses a vmid variable).
For online help for the commands available, type:
# xm help
This will list the most commonly used commands. The full list can be obtained using xm help --long. You can also type xm help <command> for more information on a given command.
4.2.1 Basic Management Commands
One useful command is # xm list, which lists all domains running in rows of the following format:
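The output looks roughly like the following; this is only an illustration, and the exact column headings differ slightly between Xen releases:
Name              ID  Mem(MiB)  VCPUs  State  Time(s)
Domain-0           0       251      1  r----    172.3
myVM               1        63      1  -b---      3.0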
The meaning of each field is as follows:
- name
- The descriptive name of the virtual machine.
- domid
- The number of the domain ID this virtual machine is running in.
- memory
- Memory size in megabytes.
- vcpus
- The number of virtual CPUs this domain has.
- state
- Domain state consists of 5 fields:
- r
- running
- b
- blocked
- p
- paused
- s
- shutdown
- c
- crashed
- cputime
- How much CPU time (in seconds) the domain has used so far.
The xm list command also supports a long output format when the -l switch is used. This outputs the full details of the running domains in xend's SXP configuration format.
You can get access to the console of a particular domain using the # xm console command (e.g. # xm console myVM).
5. Domain Configuration
The following contains the syntax of the domain configuration files and description of how to further specify networking, driver domain and general scheduling behavior.
5.1 Configuration Files
Xen configuration files contain the following standard variables. Unless otherwise stated, configuration items should be enclosed in quotes: see the configuration scripts in /etc/xen/ for concrete examples.
- kernel
- Path to the kernel image.
- ramdisk
- Path to a ramdisk image (optional).
- memory
- Memory size in megabytes.
- vcpus
- The number of virtual CPUs.
- console
- Port to export the domain console on (default 9600 + domain ID).
- vif
- Network interface configuration. This may simply contain an empty string for each desired interface, or may override various settings, e.g.
vif = [ 'mac=00:16:3E:00:00:11, bridge=xen-br0',
'bridge=xen-br1' ]
- to assign a MAC address and bridge to the first interface and assign a different bridge to the second interface, leaving xend to choose the MAC address. The settings that may be overridden in this way are type, mac, bridge, ip, script, backend, and vifname.
- disk
- List of block devices to export to the domain e.g.
disk = [ 'phy:hda1,sda1,r' ]
exports physical device /dev/hda1 to the domain as /dev/sda1 with read-only access. Exporting a disk read-write which is currently mounted is dangerous - if you are certain you wish to do this, you can specify w! as the mode.
- dhcp
- Set to `dhcp' if you want to use DHCP to configure networking.
- netmask
- Manually configured IP netmask.
- gateway
- Manually configured IP gateway.
- hostname
- Set the hostname for the virtual machine.
- root
- Specify the root device parameter on the kernel command line.
- nfs_server
- IP address for the NFS server (if any).
- nfs_root
- Path of the root filesystem on the NFS server (if any).
- extra
- Extra string to append to the kernel command line (if any).
Additional fields are documented in the example configuration files (e.g. to configure virtual TPM functionality).
For additional flexibility, it is also possible to include Python scripting commands in configuration files. An example of this is the xmexample2 file, which uses Python code to handle the vmid variable.
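As a minimal sketch in the same spirit (the variable names and values below are illustrative, not the exact contents of xmexample2), a configuration file can compute per-domain settings from a vmid passed on the xm command line:
# vmid is supplied on the command line, e.g. "xm create myvmconf vmid=3"
vmid = int(vmid)
name = "VM%d" % vmid
memory = 64
disk = ['phy:sda%d,sda1,w' % (6 + vmid)]      # hypothetical partition layout
vif = ['mac=00:16:3E:00:00:%02x' % vmid]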
5.2 Network Configuration
For many users, the default installation should work ``out of the box''. More complicated network setups, for instance with multiple Ethernet interfaces and/or existing bridging setups will require some special configuration.
The purpose of this section is to describe the mechanisms provided by xend to allow a flexible configuration for Xen's virtual networking.
5.2.1 Xen virtual network topology
Each domain network interface is connected to a virtual network interface in dom0 by a point to point link (effectively a ``virtual crossover cable''). These devices are named vif<domid>.<vifid> (e.g. vif1.0 for the first interface in domain 1, vif3.1 for the second interface in domain 3).
Traffic on these virtual interfaces is handled in domain 0 using standard Linux mechanisms for bridging, routing, rate limiting, etc. Xend calls on two shell scripts to perform initial configuration of the network and configuration of new virtual interfaces. By default, these scripts configure a single bridge for all the virtual interfaces. Arbitrary routing / bridging configurations can be configured by customizing the scripts, as described in the following section.
5.2.2 Xen networking scripts
Xen's virtual networking is configured by two shell scripts (by default network-bridge and vif-bridge). These are called automatically by xend when certain events occur, with arguments to the scripts providing further contextual information. These scripts are found by default in /etc/xen/scripts. The names and locations of the scripts can be configured in /etc/xen/xend-config.sxp.
- network-bridge:
- This script is called whenever xend is started or stopped to respectively initialize or tear down the Xen virtual network. In the default configuration initialization creates the bridge `xen-br0' and moves eth0 onto that bridge, modifying the routing accordingly. When xend exits, it deletes the Xen bridge and removes eth0, restoring the normal IP and routing configuration.
- vif-bridge:
- This script is called for every domain virtual interface and can configure firewalling rules and add the vif to the appropriate bridge. By default, this adds and removes VIFs on the default Xen bridge.
Other example scripts are available (network-route and vif-route, network-nat and vif-nat). For more complex network setups (e.g. where routing is required or integrate with existing bridges) these scripts may be replaced with customized variants for your site's preferred configuration.
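For example, to switch xend to the routed scripts, the script names can be changed in /etc/xen/xend-config.sxp (assuming these example scripts are shipped with your version) along these lines:
(network-script network-route)
(vif-script vif-route)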
5.3 Driver Domain Configuration
5.3.1 PCI
Individual PCI devices can be assigned to a given domain to allow that domain direct access to the PCI hardware. To use this functionality, ensure that the PCI Backend is compiled in to a privileged domain (e.g. domain 0) and that the domains which will be assigned PCI devices have the PCI Frontend compiled in. In XenLinux, the PCI Backend is available under the Xen configuration section while the PCI Frontend is under the architecture-specific "Bus Options" section. You may compile both the backend and the frontend into the same kernel; they will not affect each other.
The PCI devices you wish to assign to unprivileged domains must be "hidden" from your backend domain (usually domain 0) so that it does not load a driver for them. Use the pciback.hide kernel parameter which is specified on the kernel command-line and is configurable through GRUB (see Section 2.5). Note that devices are not really hidden from the backend domain. The PCI Backend ensures that no other device driver loads for those devices. PCI devices are identified by hexadecimal slot/function numbers (on Linux, use lspci to determine slot/function numbers of your devices) and can be specified with or without the PCI domain (i.e. as bus:slot.func or domain:bus:slot.func).
An example kernel command-line which hides two PCI devices might be:
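The device numbers below are purely illustrative (one given without and one with the PCI domain prefix); the parameter belongs on the XenLinux module line in GRUB:
module /boot/vmlinuz-2.6-xen0 root=/dev/sda4 ro pciback.hide=(02:03.0)(0000:04:05.0)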
To configure a domU to receive a PCI device:
- Command-line:
- Use the pci command-line flag. For multiple devices, use the option multiple times.
xm create netcard-dd pci=01:00.0 pci=02:03.0
- Flat Format configuration file:
- Specify all of your PCI devices in a python list named pci.
pci=['01:00.0','02:03.0']
- SXP Format configuration file:
- Use a single PCI device section for all of your devices (specify the numbers in hexadecimal with the preceding '0x'). Note that domain here refers to the PCI domain, not a virtual machine within Xen.
(device (pci
(dev (domain 0x0)(bus 0x3)(slot 0x1a)(func 0x1))
(dev (domain 0x0)(bus 0x1)(slot 0x5)(func 0x0))
))
There are a number of security concerns associated with PCI Driver Domains that you can read about in Section 9.2.
6. Storage and File System Management
Storage can be made available to virtual machines in a number of different ways. This chapter covers some possible configurations.
The most straightforward method is to export a physical block device (a hard drive or partition) from dom0 directly to the guest domain as a virtual block device (VBD).
Storage may also be exported from a filesystem image or a partitioned filesystem image as a file-backed VBD.
Finally, standard network storage protocols such as NBD, iSCSI, NFS, etc., can be used to provide storage to virtual machines.
6.1 Exporting Physical Devices as VBDs
One of the simplest configurations is to directly export individual partitions from domain 0 to other domains. To achieve this use the phy: specifier in your domain configuration file. For example a line like
disk = ['phy:hda3,sda1,w']
specifies that the partition /dev/hda3 in domain 0 should be exported read-write to the new domain as /dev/sda1; one could equally well export it as /dev/hda or /dev/sdb5 should one wish.
[Translator's note: the phy: entry names the physical partition first, then the device name the guest will see (your choice, e.g. sda1), then the access mode (w for read-write, r for read-only). Do not export a partition that the hosting system is itself using, for example domain 0's own root partition.]
In addition to local disks and partitions, it is possible to export any device that Linux considers to be ``a disk'' in the same manner. For example, if you have iSCSI disks or GNBD volumes imported into domain 0 you can export these to other domains using the phy: disk syntax. E.g.:
disk = ['phy:vg/lvm1,sda2,w']
Block devices should typically only be shared between domains in a read-only fashion otherwise the Linux kernel's file systems will get very confused as the file system structure may change underneath them (having the same ext3 partition mounted rw twice is a sure fire way to cause irreparable damage)! Xend will attempt to prevent you from doing this by checking that the device is not mounted read-write in domain 0, and hasn't already been exported read-write to another domain. If you want read-write sharing, export the directory to other domains via NFS from domain 0 (or use a cluster file system such as GFS or ocfs2).
6.2 Using File-backed VBDs
It is also possible to use a file in Domain 0 as the primary storage for a virtual machine. As well as being convenient, this also has the advantage that the virtual block device will be sparse -- space will only really be allocated as parts of the file are used. So if a virtual machine uses only half of its disk space then the file really takes up half of the size allocated.
For example, to create a 2GB sparse file-backed virtual block device (actually only consumes 1KB of disk):
# dd if=/dev/zero of=vm1disk bs=1k seek=2048k count=1
Make a file system in the disk file:
# mkfs -t ext3 vm1disk
(when the tool asks for confirmation, answer `y')
Populate the file system e.g. by copying from the current root:
# mount -o loop vm1disk /mnt
# cp -ax /{root,dev,var,etc,usr,bin,sbin,lib} /mnt
# mkdir /mnt/{proc,sys,home,tmp}
Tailor the file system by editing /etc/fstab, /etc/hostname, etc. Don't forget to edit the files in the mounted file system, instead of your domain 0 filesystem, e.g. you would edit /mnt/etc/fstab instead of /etc/fstab. For this example put /dev/sda1 to root in fstab.
Now unmount (this is important!):
# umount /mnt
In the configuration file set:
disk = ['file:/full/path/to/vm1disk,sda1,w']
As the virtual machine writes to its `disk', the sparse file will be filled in and consume more space up to the original 2GB.
Note that file-backed VBDs may not be appropriate for backing I/O-intensive domains. File-backed VBDs are known to experience substantial slowdowns under heavy I/O workloads, due to the I/O handling by the loopback block device used to support file-backed VBDs in dom0. Better I/O performance can be achieved by using either LVM-backed VBDs (Section 6.3) or physical devices as VBDs (Section 6.1).
Linux supports a maximum of eight file-backed VBDs across all domains by default. This limit can be statically increased by using the max_loop module parameter if CONFIG_BLK_DEV_LOOP is compiled as a module in the dom0 kernel, or by using the max_loop=n boot option if CONFIG_BLK_DEV_LOOP is compiled directly into the dom0 kernel.
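For instance, with the loop driver built as a module the limit could be raised like this (64 is an arbitrary value):
# modprobe loop max_loop=64
With the driver built into the kernel, append max_loop=64 to the XenLinux module line in GRUB instead.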
6.3 Using LVM-backed VBDs
A particularly appealing solution is to use LVM volumes as backing for domain file-systems since this allows dynamic growing/shrinking of volumes as well as snapshot and other features.
To initialize a partition to support LVM volumes:
# pvcreate /dev/sda10
Create a volume group named `vg' on the physical partition:
# vgcreate vg /dev/sda10
Create a logical volume of size 4GB named `myvmdisk1':
# lvcreate -L4096M -n myvmdisk1 vg
You should now see that you have a /dev/vg/myvmdisk1. Make a filesystem, mount it and populate it, e.g.:
# mkfs -t ext3 /dev/vg/myvmdisk1
# mount /dev/vg/myvmdisk1 /mnt
# cp -ax / /mnt
# umount /mnt
Now configure your VM with the following disk configuration:
disk = [ 'phy:vg/myvmdisk1,sda1,w' ]
LVM enables you to grow the size of logical volumes, but you'll need to resize the corresponding file system to make use of the new space. Some file systems (e.g. ext3) now support online resize. See the LVM manuals for more details.
You can also use LVM for creating copy-on-write (CoW) clones of LVM volumes (known as writable persistent snapshots in LVM terminology). This facility is new in Linux 2.6.8, so isn't as stable as one might hope. In particular, using lots of CoW LVM disks consumes a lot of dom0 memory, and error conditions such as running out of disk space are not handled well. Hopefully this will improve in future.
To create two copy-on-write clones of the above file system you would use the following commands:
在一个文件系统上创建两个copy-on-write 克隆,您应该输入如下的指令:
# lvcreate -s -L1024M -n myclonedisk1 /dev/vg/myvmdisk1
# lvcreate -s -L1024M -n myclonedisk2 /dev/vg/myvmdisk1
Each of these can grow to have 1GB of differences from the master volume. You can grow the amount of space for storing the differences using the lvextend command, e.g.:
# lvextend -L+100M /dev/vg/myclonedisk1
Don't let the `differences volume' ever fill up, otherwise LVM gets rather confused. It may be possible to automate the growing process by using dmsetup wait to spot the volume getting full and then issuing an lvextend.
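One rough way to approximate this without dmsetup wait (a sketch, not from the manual; the volume name is taken from the example above, and the 80% threshold and 60-second interval are arbitrary) is to poll the snapshot fill level reported by lvs and extend the snapshot before it fills:
# grow the snapshot whenever it is more than 80% full
while sleep 60; do
    pct=$(lvs --noheadings -o snap_percent vg/myclonedisk1 | tr -d ' ')
    [ -n "$pct" ] && [ "${pct%%.*}" -ge 80 ] && lvextend -L+100M /dev/vg/myclonedisk1
done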
In principle, it is possible to continue writing to the volume that has been cloned (the changes will not be visible to the clones), but we wouldn't recommend this: have the cloned volume as a `pristine' file system install that isn't mounted directly by any of the virtual machines.
6.4 Using NFS Root (用NFS Root)
First, populate a root filesystem in a directory on the server machine. This can be on a distinct physical machine, or simply run within a virtual machine on the same node.
用NFS服务器提供的文件系统做为虚拟系统的文件系统。
Now configure the NFS server to export this filesystem over the network by adding a line to /etc/exports, for instance:
首先我们要配配NFS服务器,通过修改/etc/exports文件, 比如加如下的一行:
/export/vm1root 1.2.3.4/24(rw,sync,no_root_squash)
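After editing /etc/exports, have the NFS server re-read it; on most distributions this is simply:
# exportfs -ra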
Finally, configure the domain to use NFS root. In addition to the normal variables, you should make sure to set the following values in the domain's configuration file:
最后,我们来配置虚拟机所用的NFS root。当然要指定NFS服务器的IP地址,应该确保有如下的参数,在虚拟系统引导的配置文件中;
root = '/dev/nfs'
nfs_server = '2.3.4.5' # substitute IP address of server
nfs_root = '/path/to/root' # path to root FS on the server
The domain will need network access at boot time, so either statically configure an IP address using the config variables ip, netmask, gateway, hostname; or enable DHCP (dhcp='dhcp').
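For example, the NFS-related part of the domain configuration might look like the following (the addresses are placeholders; alternatively just set dhcp='dhcp'):
root       = '/dev/nfs'
nfs_server = '2.3.4.5'
nfs_root   = '/export/vm1root'
ip         = '2.3.4.10'
netmask    = '255.255.255.0'
gateway    = '2.3.4.1'
hostname   = 'vm1'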
Note that the Linux NFS root implementation is known to have stability problems under high load (this is not a Xen-specific problem), so this configuration may not be appropriate for critical servers.
7. CPU Management (CPU管理)
Xen allows a domain's virtual CPU(s) to be associated with one or more host CPUs. This can be used to allocate real resources among one or more guests, or to make optimal use of processor resources when utilizing dual-core, hyperthreading, or other advanced CPU technologies.
Xen enumerates physical CPUs in a `depth first' fashion. For a system with both hyperthreading and multiple cores, this would be all the hyperthreads on a given core, then all the cores on a given socket, and then all sockets. I.e. if you had a two socket, dual core, hyperthreaded Xeon the CPU order would be:
| socket0               | socket1               |
| core0     | core1     | core0     | core1     |
| ht0 | ht1 | ht0 | ht1 | ht0 | ht1 | ht0 | ht1 |
| #0  | #1  | #2  | #3  | #4  | #5  | #6  | #7  |
Having multiple vcpus belonging to the same domain mapped to the same physical CPU is very likely to lead to poor performance. It's better to use `vcpus-set' to hot-unplug one of the vcpus and ensure the others are pinned on different CPUs.
If you are running IO intensive tasks, it's typically better to dedicate either a hyperthread or whole core to running domain 0, and hence pin other domains so that they can't use CPU 0. If your workload is mostly compute intensive, you may want to pin vcpus such that all physical CPU threads are available for guest domains.
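As an illustration (a sketch; sub-command names vary slightly between Xen releases), you could pin domain 0's VCPU onto physical CPU 0 and pin a guest's first VCPU elsewhere with:
# xm vcpu-pin Domain-0 0 0
# xm vcpu-pin VM1 0 1
A guest can also be kept off CPU 0 from its configuration file with a line such as cpus = "1-7".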
8. Migrating Domains (迁移Domain)
8.1 Domain Save and Restore (Domain的存储和恢复)
The administrator of a Xen system may suspend a virtual machine's current state into a disk file in domain 0, allowing it to be resumed at a later time.
Xen系统管理员可以将虚拟机的当前状态挂起到domain 0 的一个磁盘文件中, 也可以稍后恢复。
For example you can suspend a domain called ``VM1'' to disk using the command:
例如可以使用下面的命令将名为``VM1''的domain挂起:
# xm save VM1 VM1.chk
This will stop the domain named ``VM1'' and save its current state into a file called VM1.chk.
这将停止名为``VM1''的domain, 并保存其状态到一个名为 VM1.chk 的文件.
To resume execution of this domain, use the xm restore command:
使用xm restore命令 恢复这个domain的运行:
# xm restore VM1.chk
This will restore the state of the domain and resume its execution. The domain will carry on as before and the console may be reconnected using the xm console command, as described earlier.
这将恢复这个domain的状态并继续运行. 这个domain会象以前一样运行, 可以使用xm console命令同它的控制台连 接, 如前面描述的那样.
8.2 Migration and Live Migration(迁 移和场景迁移)
Migration is used to transfer a domain between physical hosts. There are two varieties: regular and live migration. The former moves a virtual machine from one host to another by pausing it, copying its memory contents, and then resuming it on the destination. The latter performs the same logical functionality but without needing to pause the domain for the duration. In general when performing live migration the domain continues its usual activities and--from the user's perspective--the migration should be imperceptible.
To perform a live migration, both hosts must be running Xen / xend and the destination host must have sufficient resources (e.g. memory capacity) to accommodate the domain after the move. Furthermore we currently require both source and destination machines to be on the same L2 subnet.
Currently, there is no support for providing automatic remote access to filesystems stored on local disk when a domain is migrated. Administrators should choose an appropriate storage solution (i.e. SAN, NAS, etc.) to ensure that domain filesystems are also available on their destination node. GNBD is a good method for exporting a volume from one machine to another. iSCSI can do a similar job, but is more complex to set up.
When a domain migrates, its MAC and IP address move with it, thus it is only possible to migrate VMs within the same layer-2 network and IP subnet. If the destination node is on a different subnet, the administrator would need to manually configure a suitable etherip or IP tunnel in the domain 0 of the remote node.
A domain may be migrated using the xm migrate command. To live migrate a domain to another machine, we would use the command:
# xm migrate --live mydomain destination.ournetwork.com
Without the --live flag, xend simply stops the domain and copies the memory image over to the new node and restarts it. Since domains can have large allocations this can be quite time consuming, even on a Gigabit network. With the --live flag xend attempts to keep the domain running while the migration is in progress, resulting in typical down times of just 60-300ms.
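The corresponding non-live invocation simply omits the flag:
# xm migrate mydomain destination.ournetwork.com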
For now it will be necessary to reconnect to the domain's console on the new machine using the xm console command. If a migrated domain has any open network connections then they will be preserved, so SSH connections do not have this limitation.
9. Securing Xen (安全应用Xen)
This chapter describes how to secure a Xen system. It describes a number of scenarios and provides a corresponding set of best practices. It begins with a section devoted to understanding the security implications of a Xen system.
9.1 Xen Security Considerations (Xen的安全事项)
When deploying a Xen system, one must be sure to secure the management domain (Domain-0) as much as possible. If the management domain is compromised, all other domains are also vulnerable. The following are a set of best practices for Domain-0:
- Run the smallest number of necessary services. The fewer things that are present in a management partition, the better. Remember, a service running as root in the management domain has full access to all other domains on the system.
- Use a firewall to restrict the traffic to the management domain. A firewall with default-reject rules will help prevent attacks on the management domain; a sketch of such a rule set follows this list.
- Do not allow users to access Domain-0. The Linux kernel has been known to have local-user root exploits. If you allow normal users to access Domain-0 (even as unprivileged users) you run the risk of a kernel exploit making all of your domains vulnerable.
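A minimal default-reject sketch for Domain-0 might look like the following (the admin network 10.0.0.0/24 and the permitted services are assumptions to adapt to your site; take care not to lock yourself out of the console):
# default-deny all incoming traffic, then allow loopback,
# established connections and management SSH only
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -s 10.0.0.0/24 -j ACCEPT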
9.2 Driver Domain Security Considerations (驱动Domain安全事项)
Driver domains address a range of security problems that exist regarding the use of device drivers and hardware. On many operating systems in common use today, device drivers run within the kernel with the same privileges as the kernel. Few or no mechanisms exist to protect the integrity of the kernel from a misbehaving (read "buggy") or malicious device driver. Driver domains exist to aid in isolating a device driver within its own virtual machine where it cannot affect the stability and integrity of other domains. If a driver crashes, the driver domain can be restarted rather than have the entire machine crash (and restart) with it. Drivers written by unknown or untrusted third-parties can be confined to an isolated space. Driver domains thus address a number of security and stability issues with device drivers.
However, due to limitations in current hardware, a number of security concerns remain that need to be considered when setting up driver domains (it should be noted that the following list is not intended to be exhaustive).
- Without an IOMMU, a hardware device can DMA to memory regions outside of its controlling domain. Architectures which do not have an IOMMU (e.g. most x86-based platforms) to restrict DMA usage by hardware are vulnerable. A hardware device which can perform arbitrary memory reads and writes can read/write outside of the memory of its controlling domain. A malicious or misbehaving domain could use a hardware device it controls to send data overwriting memory in another domain or to read arbitrary regions of memory in another domain.
- Shared buses are vulnerable to sniffing. Devices that share a data bus can sniff (and possibly spoof) each other's data. Device A that is assigned to Domain A could eavesdrop on data being transmitted by Domain B to Device B and then relay that data back to Domain A.
- Devices which share interrupt lines can either prevent the reception of that interrupt by the driver domain or can trigger the interrupt service routine of that guest needlessly. A device which shares a level-triggered interrupt (e.g. PCI devices) with another device can raise an interrupt and never clear it. This effectively blocks other devices which share that interrupt line from notifying their controlling driver domains that they need to be serviced. A device which shares any type of interrupt line can trigger its interrupt continually, which forces execution time to be spent (in multiple guests) in the interrupt service routine (potentially denying time to other processes within that guest). System architectures which allow each device to have its own interrupt line (e.g. PCI's Message Signaled Interrupts) are less vulnerable to this denial-of-service problem.
- Devices may share the use of I/O memory address space. Xen can only restrict access to a device's physical I/O resources at a certain granularity. For interrupt lines and I/O port address space, that granularity is very fine (per interrupt line and per I/O port). However, Xen can only restrict access to I/O memory address space on a page size basis. If more than one device shares use of a page in I/O memory address space, the domains to which those devices are assigned will be able to access the I/O memory address space of each other's devices.
9.3 Security Scenarios (安全情况)
9.3.1 The Isolated Management Network (隔绝管理网络)
In this scenario, each node in the cluster has two network cards. One network card is connected to the outside world and the other is connected to a physically isolated management network used specifically by Xen instances.
在这种情况中,在集群中每个节点都有两块网卡。一个网卡要与外部相连, 另一个网卡是被Xen接口而用的物理隔绝管理的网络。
As long as all of the management partitions are trusted equally, this is the most secure scenario. No additional configuration is needed other than forcing Xend to bind to the management interface for relocation.
9.3.2 A Subnet Behind a Firewall (子网的防火墙)
In this scenario, each node has only one network card but the entire cluster sits behind a firewall. This firewall should do at least the following:
- Prevent IP spoofing from outside of the subnet. (阻止来自外网的IP欺骗)
- Prevent access to the relocation port of any of the nodes in the cluster except from within the cluster.
The following iptables rules can be used on each node to prevent migrations to that node from outside the subnet, assuming the main firewall does not do this for you:
# this command disables all access to the Xen relocation
# port:
#这个命令能阻止所有的Xen变换布署的端口:
iptables -A INPUT -p tcp --destination-port 8002 -j REJECT
# this command enables Xen relocations only from the specific
# subnet:
#这个命令让Xen重新布署端口,仅通过指定的网络:
iptables -I INPUT -p tcp --source 192.168.1.1/8 \
--destination-port 8002 -j ACCEPT
9.3.3 Nodes on an Untrusted Subnet (不信任子网注意事项)
Migration on an untrusted subnet is not safe in current versions of Xen. It may be possible to perform migrations through a secure tunnel via a VPN or SSH. The only safe option in the absence of a secure tunnel is to disable migration completely. The easiest way to do this is with iptables:
# this command disables all access to the Xen relocation port
iptables -A INPUT -p tcp --destination-port 8002 -j REJECT
3 Reference (参考)
10. Build and Boot Options (编译和引导选项)
This chapter describes the build- and boot-time options which may be used to tailor your Xen system.
10.1 Top-level Configuration Options (高级配置选项)
Top-level configuration is achieved by editing one of two files: Config.mk and Makefile.
The former allows the overall build target architecture to be specified. You will typically not need to modify this unless you are cross-compiling or if you wish to build a PAE-enabled Xen system. Additional configuration options are documented in the Config.mk file.
The top-level Makefile is chiefly used to customize the set of kernels built. Look for the line:
KERNELS ?= linux-2.6-xen0 linux-2.6-xenU
Allowable options here are any kernels which have a corresponding build configuration file in the buildconfigs/ directory.
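Because the assignment uses ?=, the kernel list can also be overridden on make's command line; for example, to build only the domain 0 kernel you might run (assuming the usual top-level world target):
# make KERNELS="linux-2.6-xen0" world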
10.2 Xen Build Options (Xen 编译 选项)
Xen provides a number of build-time options which should be set as environment variables or passed on make's command-line.
- verbose=y
- Enable debugging messages when Xen detects an unexpected condition. Also enables console output from all domains.
- debug=y
- Enable debug assertions. Implies verbose=y. (Primarily useful for tracing bugs in Xen).
- debugger=y
- Enable the in-Xen debugger. This can be used to debug Xen, guest OSes, and applications.
- perfc=y
- Enable performance counters for significant events within Xen. The counts can be reset or displayed on Xen's console via console control keys.
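For example, a debug build could be requested by passing the option on make's command line (a sketch; exporting the variable in the environment has the same effect):
# make debug=y world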
10.3 Xen Boot Options (Xen 引导选项)
These options are used to configure Xen's behaviour at runtime. They should be appended to Xen's command line, either manually or by editing grub.conf.
- noreboot
- Don't reboot the machine automatically on errors. This is useful to catch debug output if you aren't catching console messages via the serial line.
- nosmp
- Disable SMP support. This option is implied by `ignorebiostables'.
- watchdog
- Enable NMI watchdog which can report certain failures.
- noirqbalance
- Disable software IRQ balancing and affinity. This can be used on systems such as Dell 1850/2850 that have workarounds in hardware for IRQ-routing issues.
- badpage=<page number>,<page number>,...
- Specify a list of pages not to be allocated for use because they contain bad bytes. For example, if your memory tester says that byte 0x12345678 is bad, you would place `badpage=0x12345' on Xen's command line.
- com1=<baud>,DPS,<io_base>,<irq> com2=<baud>,DPS,<io_base>,<irq>
- Xen supports up to two 16550-compatible serial ports. For example: `com1=9600, 8n1, 0x408, 5' maps COM1 to a 9600-baud port, 8 data bits, no parity, 1 stop bit, I/O port base 0x408, IRQ 5. If some configuration options are standard (e.g., I/O base and IRQ), then only a prefix of the full configuration string need be specified. If the baud rate is pre-configured (e.g., by the bootloader) then you can specify `auto' in place of a numeric baud rate.
- console=<specifier list>
- Specify the destination for Xen console I/O. This is a comma-separated list of, for example:
- vga
- Use VGA console and allow keyboard input.
- com1
- Use serial port com1.
- com2H
- Use serial port com2. Transmitted chars will have the MSB set. Received chars must have MSB set.
- com2L
- Use serial port com2. Transmitted chars will have the MSB cleared. Received chars must have MSB cleared.
- sync_console
- Force synchronous console output. This is useful if your system fails unexpectedly before it has sent all available output to the console. In most cases Xen will automatically enter synchronous mode when an exceptional event occurs, but this option provides a manual fallback.
- conswitch=<switch-char><auto-switch-char>
- Specify how to switch serial-console input between Xen and DOM0. The required sequence is CTRL-<switch-char> pressed three times. Specifying the backtick character disables switching. The <auto-switch-char> specifies whether Xen should auto-switch input to DOM0 when it boots -- if it is `x' then auto-switching is disabled. Any other value, or omitting the character, enables auto-switching. [NB. Default switch-char is `a'.]
- nmi=xxx
- Specify what to do with an NMI parity or I/O error.
`nmi=fatal': Xen prints a diagnostic and then hangs.
`nmi=dom0': Inform DOM0 of the NMI.
`nmi=ignore': Ignore the NMI. - mem=xxx
- Set the physical RAM address limit. Any RAM appearing beyond this physical address in the memory map will be ignored. This parameter may be specified with a B, K, M or G suffix, representing bytes, kilobytes, megabytes and gigabytes respectively. The default unit, if no suffix is specified, is kilobytes.
- dom0_mem=xxx
- Set the amount of memory to be allocated to domain0. In Xen 3.x the parameter may be specified with a B, K, M or G suffix, representing bytes, kilobytes, megabytes and gigabytes respectively; if no suffix is specified, the parameter defaults to kilobytes. In previous versions of Xen, suffixes were not supported and the value is always interpreted as kilobytes.
- tbuf_size=xxx
- Set the size of the per-cpu trace buffers, in pages (default 1). Note that the trace buffers are only enabled in debug builds. Most users can ignore this feature completely.
- sched=xxx
- Select the CPU scheduler Xen should use. The current possibilities are `sedf' (default) and `bvt'.
- apic_verbosity=debug,verbose
- Print more detailed information about local APIC and IOAPIC configuration.
- lapic
- Force use of local APIC even when left disabled by uniprocessor BIOS.
- nolapic
- Ignore local APIC in a uniprocessor system, even if enabled by the BIOS.
- apic=bigsmp,default,es7000,summit
- Specify NUMA platform. This can usually be probed automatically.
In addition, the following options may be specified on the Xen command line. Since domain 0 shares responsibility for booting the platform, Xen will automatically propagate these options to its command line. These options are taken from Linux's command-line syntax with unchanged semantics.
- acpi=off,force,strict,ht,noirq,...
- Modify how Xen (and domain 0) parses the BIOS ACPI tables.
- acpi_skip_timer_override
- Instruct Xen (and domain 0) to ignore timer-interrupt override instructions specified by the BIOS ACPI tables.
- noapic
- Instruct Xen (and domain 0) to ignore any IOAPICs that are present in the system, and instead continue to use the legacy PIC.
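Putting several of these together, a grub.conf entry might look like the following (the kernel and initrd file names, memory size and serial parameters are illustrative only):
title Xen 3.0
    kernel /boot/xen.gz dom0_mem=262144 com1=115200,8n1 console=com1,vga
    module /boot/vmlinuz-2.6-xen0 root=/dev/sda1 ro console=tty0
    module /boot/initrd-2.6-xen0.img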
10.4 XenLinux Boot Options (XenLinux引导选项)
In addition to the standard Linux kernel boot options, we support:
- xencons=xxx
- Specify the device node to which the Xen virtual console driver is attached. The following options are supported:
`xencons=off': disable virtual console
`xencons=tty': attach console to /dev/tty1 (tty0 at boot-time)
`xencons=ttyS': attach console to /dev/ttyS0
11. Further Support (未来支持)
If you have questions that are not answered by this manual, the sources of information listed below may be of interest to you. Note that bug reports, suggestions and contributions related to the software (or the documentation) should be sent to the Xen developers' mailing list (address below).
11.1 Other Documentation (其它文档)
For developers interested in porting operating systems to Xen, the Xen Interface Manual is distributed in the docs/ directory of the Xen source distribution.
11.2 Online References (在线参考)
The official Xen web site can be found at:
http://www.xensource.com
This contains links to the latest versions of all online documentation, including the latest version of the FAQ.
Information regarding Xen is also available at the Xen Wiki at
http://wiki.xensource.com/xenwiki/
The Xen project uses Bugzilla as its bug tracking system. You'll find the Xen Bugzilla at http://bugzilla.xensource.com/bugzilla/.
11.3 Mailing Lists (邮件列表)
There are several mailing lists that are used to discuss Xen related topics. The most widely relevant are listed below. An official page of mailing lists and subscription information can be found at
http://lists.xensource.com/
- xen-devel@lists.xensource.com
- Used for development discussions and bug reports. Subscribe at:
http://lists.xensource.com/xen-devel
- xen-users@lists.xensource.com
- Used for installation and usage discussions and requests for help. Subscribe at:
http://lists.xensource.com/xen-users
- xen-announce@lists.xensource.com
- Used for announcements only. Subscribe at:
http://lists.xensource.com/xen-announce
- xen-changelog@lists.xensource.com
- Changelog feed from the unstable and 2.0 trees - developer oriented. Subscribe at:
http://lists.xensource.com/xen-changelog
A. Unmodified (VMX) guest domains in Xen with Intel® Virtualization Technology (VT)
Xen supports guest domains running unmodified Guest operating systems using Virtualization Technology (VT) available on recent Intel Processors. More information about the Intel Virtualization Technology implementing Virtual Machine Extensions (VMX) in the processor is available on the Intel website at
http://www.intel.com/technology/computing/vptech
A.1 Building Xen with VT support
The following packages need to be installed in order to build Xen with VT support. Some Linux distributions do not provide these packages by default.
Package | Description |
dev86 | The dev86 package provides an assembler and linker for real mode 80x86 instructions. You need to have this package installed in order to build the BIOS code which runs in (virtual) real mode. If the dev86 package is not available on the x86_64 distribution, you can install the i386 version of it. The dev86 rpm package for various distributions can be found at http://www.rpmfind.net/linux/rpm2html/search.php?query=dev86&submit=Search |
LibVNCServer | The unmodified guest's VGA display, keyboard, and mouse are virtualized using the vncserver library provided by this package. You can get the sources of libvncserver from http://sourceforge.net/projects/libvncserver. Build and install the sources on the build system to get the libvncserver library. The 0.8pre version of libvncserver is currently working well with Xen. |
SDL-devel, SDL | Simple DirectMedia Layer (SDL) is another way of virtualizing the unmodified guest console. It provides an X window for the guest console. If the SDL and SDL-devel packages are not installed by default on the build system, they can be obtained from http://www.rpmfind.net/linux/rpm2html/search.php?query=SDL&submit=Search , http://www.rpmfind.net/linux/rpm2html/search.php?query=SDL-devel&submit=Search |
A.2 Configuration file for unmodified VMX guests
The Xen installation includes a sample configuration file, /etc/xen/xmexample.vmx. There are comments describing all the options. In addition to the common options that are the same as those for paravirtualized guest configurations, VMX guest configurations have the following settings:
Parameter | Description |
kernel | The VMX firmware loader, /usr/lib/xen/boot/vmxloader |
builder | The domain build function. The VMX domain uses the vmx builder. |
acpi | Enable VMX guest ACPI, default=0 (disabled) |
apic | Enable VMX guest APIC, default=0 (disabled) |
vif | Optionally defines MAC address and/or bridge for the network interfaces. Random MACs are assigned if not given. type=ioemu means ioemu is used to virtualize the VMX NIC. If no type is specified, vbd is used, as with paravirtualized guests. |
disk | Defines the disk devices you want the domain to have access to, and what you want them accessible as. If using a physical device as the VMX guest's disk, each disk entry is of the form phy:UNAME,ioemu:DEV,MODE, where UNAME is the device, DEV is the device name the domain will see, and MODE is r for read-only, w for read-write. ioemu means the disk will use ioemu to virtualize the VMX disk. If not adding ioemu, it uses vbd like paravirtualized guests. If using disk image file, its form should be like file:FILEPATH,ioemu:DEV,MODE If using more than one disk, there should be a comma between each disk entry. For example: disk = ['file:/var/images/image1.img,ioemu:hda,w', 'file:/var/images/image2.img,ioemu:hdb,w'] |
cdrom | Disk image for CD-ROM. The default is /dev/cdrom for Domain0. Inside the VMX domain, the CD-ROM will be available as device /dev/hdc. The entry can also point to an ISO file.
boot | Boot from floppy (a), hard disk (c) or CD-ROM (d). For example, to boot from CD-ROM, the entry should be: boot='d' |
device_model | The device emulation tool for VMX guests. This parameter should not be changed. |
sdl | Enable SDL library for graphics, default = 0 (disabled) |
vnc | Enable VNC library for graphics, default = 1 (enabled) |
vncviewer | Enable spawning of the vncviewer (only valid when vnc=1), default = 1 (enabled). If vnc=1 and vncviewer=0, the user can connect to the VMX guest manually from a remote machine using vncviewer, for example: vncviewer domain0_IP_address:VMX_domain_id
ne2000 | Enable ne2000, default = 0 (disabled; use pcnet) |
serial | Enable redirection of VMX serial output to pty device |
localtime | Set the real time clock to local time [default=0, that is, set to UTC]. |
enable-audio | Enable audio support. This is under development. |
full-screen | Start in full screen. This is under development. |
nographic | Another way to redirect serial output. If enabled, neither 'sdl' nor 'vnc' will work. Not recommended.
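Putting the main parameters together, a trimmed configuration along the lines of /etc/xen/xmexample.vmx might look like this (the memory size, name, image path and bridge are illustrative; in practice start from the shipped example file, which also sets device_model and other required entries):
kernel  = '/usr/lib/xen/boot/vmxloader'
builder = 'vmx'
memory  = 256
name    = 'ExampleVMXDomain'
vif     = [ 'type=ioemu, bridge=xenbr0' ]
disk    = [ 'file:/var/images/guest.img,ioemu:hda,w' ]
cdrom   = '/dev/cdrom'
boot    = 'c'
sdl     = 0
vnc     = 1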
A.3 Creating virtual disks from scratch
A.3.1 Using physical disks
If you are using a physical disk or physical disk partition, you need to install a Linux OS on the disk first. Then the boot loader should be installed in the correct place. For example, /dev/sda for booting from the whole disk, or /dev/sda1 for booting from partition 1.
A.3.2 Using disk image files
You need to create a large empty disk image file first; then, you need to install a Linux OS onto it. There are two methods you can choose. One is directly installing it using a VMX guest while booting from the OS installation CD-ROM. The other is copying an installed OS into it. The boot loader will also need to be installed.
To create the image file:
The image size should be big enough to accommodate the entire OS. This example assumes the size is 1G (which is probably too small for most OSes).
# dd if=/dev/zero of=hd.img bs=1M count=1 seek=1023
To directly install Linux OS into an image file using a VMX guest:
Install Xen, then create a VMX guest that uses the original image file and boots from the CD-ROM. Installation then proceeds just like a normal Linux OS installation. The VMX configuration file should have these two entries before creating:
cdrom='/dev/cdrom' boot='d'
If this method does not succeed, you can choose the following method of copying an installed Linux OS into an image file.
To copy an installed OS into an image file:
Directly installing is an easier way to make partitions and install an OS in a disk image file. But if you want to create a specific OS in your disk image, then you will most likely want to use this method.
- Install a normal Linux OS on the host machine
You can choose any way to install Linux, such as using yum to install Red Hat Linux or YaST to install Novell SuSE Linux. The rest of this example assumes the Linux OS is installed in /var/guestos/.
- Make the partition table
The image file will be treated as a hard disk, so you should make the partition table in the image file. For example:
# losetup /dev/loop0 hd.img
# fdisk -b 512 -C 4096 -H 16 -S 32 /dev/loop0
press 'n' to add new partition
press 'p' to choose primary partition
press '1' to set partition number
press "Enter" keys to choose default value of "First Cylinder" parameter.
press "Enter" keys to choose default value of "Last Cylinder" parameter.
press 'w' to write partition table and exit
# losetup -d /dev/loop0
- Make the file system and install grub
# ln -s /dev/loop0 /dev/loop
# losetup /dev/loop0 hd.img
# losetup -o 16384 /dev/loop1 hd.img
# mkfs.ext3 /dev/loop1
# mount /dev/loop1 /mnt
# mkdir -p /mnt/boot/grub
# cp /boot/grub/stage* /boot/grub/e2fs_stage1_5 /mnt/boot/grub
# umount /mnt
# grub
grub> device (hd0) /dev/loop
grub> root (hd0,0)
grub> setup (hd0)
grub> quit
# rm /dev/loop
# losetup -d /dev/loop0
# losetup -d /dev/loop1
The losetup option -o 16384 skips the partition table in the image file: it is the number of sectors (32, from the -S option above) times 512 bytes. We need /dev/loop because grub is expecting a disk device name, where name represents the entire disk and name1 represents the first partition.
- Copy the OS files to the image
If you have Xen installed, you can easily use lomount instead of losetup and mount when copying files to some partitions. lomount just needs the partition information.
# lomount -t ext3 -diskimage hd.img -partition 1 /mnt/guest
# cp -ax /var/guestos/{root,dev,var,etc,usr,bin,sbin,lib} /mnt/guest
# mkdir /mnt/guest/{proc,sys,home,tmp}
- Edit the /etc/fstab of the guest image
The fstab should look like this:
# vim /mnt/guest/etc/fstab
/dev/hda1 / ext3 defaults 1 1
none /dev/pts devpts gid=5,mode=620 0 0
none /dev/shm tmpfs defaults 0 0
none /proc proc defaults 0 0
none /sys sysfs defaults 0 0
- umount the image file
# umount /mnt/guest
Now, the guest OS image hd.img is ready. You can also reference http://free.oszoo.org for quickstart images. But make sure to install the boot loader.
A.3.3 Install Windows into an Image File using a VMX guest
In order to install a Windows OS, you should keep acpi=0 in your VMX configuration file.
A.4 VMX Guests
A.4.1 Editing the Xen VMX config file
Make a copy of the example VMX configuration file /etc/xen/xmexample.vmx and edit the line that reads
disk = [ 'file:/var/images/guest.img,ioemu:hda,w' ]
replacing guest.img with the name of the guest OS image file you just made.
A.4.2 Creating VMX guests
Simply follow the usual method of creating the guest, using the -f parameter and providing the filename of your VMX configuration file:
# xend start
# xm create /etc/xen/vmxguest.vmx
In the default configuration, VNC is on and SDL is off. Therefore VNC windows will open when VMX guests are created. If you want to use SDL to create VMX guests, set sdl=1 in your VMX configuration file. You can also turn off VNC by setting vnc=0.
A.4.3 Destroy VMX guests
VMX guests can be destroyed in the same way as paravirtualized guests. We recommend that you type the command
poweroff
in the VMX guest's console first to prevent data loss. Then execute the command
xm destroy vmx_guest_id
at the Domain0 console.
A.4.4 VMX window (X or VNC) Hot Key
If you are running in the X environment after creating a VMX guest, an X window is created. There are several hot keys for control of the VMX guest that can be used in the window.
Ctrl+Alt+2 switches from the guest VGA window to the control window. Typing help lists the available control commands. For example, 'q' is the command to destroy the VMX guest.
Ctrl+Alt+1 switches back to VMX guest's VGA.
Ctrl+Alt+3 switches to serial port output. It captures serial output from the VMX guest. It works only if the VMX guest was configured to use the serial port.
A.4.5 Save/Restore and Migration
VMX guests currently cannot be saved and restored, nor migrated. These features are currently under active development.
B. Vnets - Domain Virtual Networking
Xen optionally supports virtual networking for domains using vnets. These emulate private LANs that domains can use. Domains on the same vnet can be hosted on the same machine or on separate machines, and the vnets remain connected if domains are migrated. Ethernet traffic on a vnet is tunneled inside IP packets on the physical network. A vnet is a virtual network and addressing within it need have no relation to addressing on the underlying physical network. Separate vnets, or vnets and the physical network, can be connected using domains with more than one network interface and enabling IP forwarding or bridging in the usual way.
Vnet support is included in xm and xend:
# xm vnet-create <config>
creates a vnet using the configuration in the file <config>. When a vnet is created its configuration is stored by xend and the vnet persists until it is deleted using
# xm vnet-delete <vnetid>
The vnets xend knows about are listed by
# xm vnet-list
More vnet management commands are available using the vn tool included in the vnet distribution.
The format of a vnet configuration file is
(vnet (id       <vnetid>)
      (bridge   <bridge>)
      (vnetif   <vnet interface>)
      (security <level>))
White space is not significant. The parameters are:
<vnetid>: vnet id, the 128-bit vnet identifier. This can be given as 8 4-digit hex numbers separated by colons, or in short form as a single 4-digit hex number. The short form is the same as the long form with the first 7 fields zero. Vnet ids must be non-zero and id 1 is reserved.
<bridge>: the name of a bridge interface to create for the vnet. Domains are connected to the vnet by connecting their virtual interfaces to the bridge. Bridge names are limited to 14 characters by the kernel.
<vnetif>: the name of the virtual interface onto the vnet (optional). The interface encapsulates and decapsulates vnet traffic for the network and is attached to the vnet bridge. Interface names are limited to 14 characters by the kernel.
<level>: security level for the vnet (optional). The level may be one of
none: no security (default). Vnet traffic is in clear on the network.
auth: authentication. Vnet traffic is authenticated using IPSEC ESP with hmac96.
conf: confidentiality. Vnet traffic is authenticated and encrypted using IPSEC ESP with hmac96 and AES-128.
B.1 Example
If the file vnet97.sxp contains
(vnet (id 97) (bridge vnet97) (vnetif vnif97)
      (security none))
then xm vnet-create vnet97.sxp will define a vnet with id 97 and no security. The bridge for the vnet is called vnet97 and the virtual interface for it is vnif97. To add an interface on a domain to this vnet set its bridge to vnet97 in its configuration. In Python:
vif="bridge=vnet97"
In sxp:
(dev (vif (mac aa:00:00:01:02:03) (bridge vnet97)))
Once the domain is started you should see its interface in the output of brctl show under the ports for vnet97.
To get best performance it is a good idea to reduce the MTU of a domain's interface onto a vnet to 1400. For example using ifconfig eth0 mtu 1400 or putting MTU=1400 in ifcfg-eth0. You may also have to change or remove cached config files for eth0 under /etc/sysconfig/networking. Vnets work anyway, but performance can be reduced by IP fragmentation caused by the vnet encapsulation exceeding the hardware MTU.
B.2 Installing vnet support
Vnets are implemented using a kernel module, which needs to be loaded before they can be used. You can either do this manually before starting xend, using the command vn insmod, or configure xend to use the network-vnet script in the xend configuration file /etc/xen/xend-config.sxp:
(network-script network-vnet)
This script insmods the module and calls the network-bridge script.
The vnet code is not compiled and installed by default. To compile the code and install on the current system use make install in the root of the vnet source tree, tools/vnet. It is also possible to install to an installation directory using make dist. See the Makefile in the source for details.
The vnet module creates vnet interfaces vnif0002, vnif0003 and vnif0004 by default. You can test that vnets are working by configuring IP addresses on these interfaces and trying to ping them across the network. For example, using machines hostA and hostB:
hostA# ifconfig vnif0004 10.0.0.100 up
hostB# ifconfig vnif0004 10.0.0.101 up
hostB# ping 10.0.0.100
The vnet implementation uses IP multicast to discover vnet interfaces, so all machines hosting vnets must be reachable by multicast. Network switches are often configured not to forward multicast packets, so this often means that all machines using a vnet must be on the same LAN segment, unless you configure vnet forwarding.
You can test multicast coverage by pinging the vnet multicast address:
# ping -b 224.10.0.1
You should see replies from all machines with the vnet module running. You can see if vnet packets are being sent or received by dumping traffic on the vnet UDP port:
# tcpdump udp port 1798
If multicast is not being forwarded between machines you can configure multicast forwarding using vn. Suppose we have machines hostA on 10.10.0.100 and hostB on 10.11.0.100 and that multicast is not forwarded between them. We use vn to configure each machine to forward to the other:
hostA# vn peer-add hostB
hostB# vn peer-add hostA
Multicast forwarding needs to be used carefully - you must avoid creating forwarding loops. Typically only one machine on a subnet needs to be configured to forward, as it will forward multicasts received from other machines on the subnet.
C. Glossary of Terms
- BVT
- The BVT scheduler is used to give proportional fair shares of the CPU to domains.
- Domain
- A domain is the execution context that contains a running virtual machine. The relationship between virtual machines and domains on Xen is similar to that between programs and processes in an operating system: a virtual machine is a persistent entity that resides on disk (somewhat like a program). When it is loaded for execution, it runs in a domain. Each domain has a domain ID.
- Domain 0
- The first domain to be started on a Xen machine. Domain 0 is responsible for managing the system.
- Domain ID
- A unique identifier for a domain, analogous to a process ID in an operating system.
- Full virtualization
- An approach to virtualization which requires no modifications to the hosted operating system, providing the illusion of a complete system of real hardware devices.
- Hypervisor
- An alternative term for VMM, used because it means `beyond supervisor', since it is responsible for managing multiple `supervisor' kernels.
- Live migration
- A technique for moving a running virtual machine to another physical host, without stopping it or the services running on it.
- Paravirtualization
- An approach to virtualization which requires modifications to the operating system in order to run in a virtual machine. Xen uses paravirtualization but preserves binary compatibility for user space applications.
- Shadow pagetables
- A technique for hiding the layout of machine memory from a virtual machine's operating system. Used in some VMMs to provide the illusion of contiguous physical memory; in Xen this is used during live migration.
- Virtual Block Device
- Persistent storage available to a virtual machine, providing the abstraction of an actual block storage device. VBDs may be actual block devices, filesystem images, or remote/network storage.
- Virtual Machine
- The environment in which a hosted operating system runs, providing the abstraction of a dedicated machine. A virtual machine may be identical to the underlying hardware (as in full virtualization), or it may differ (as in paravirtualization).
- VMM
- Virtual Machine Monitor - the software that allows multiple virtual machines to be multiplexed on a single physical machine.
- Xen
- Xen is a paravirtualizing virtual machine monitor, developed primarily by the Systems Research Group at the University of Cambridge Computer Laboratory.
- XenLinux
- A name for the port of the Linux kernel that runs on Xen.
Footnotes
- ... 20031.1
- http://www.cl.cam.ac.uk/netos/papers/2003-xensosp.pdf
- ... bridge-utils2.1
- Available from http://bridge.sourceforge.net
- ... system2.2
- Available from http://linux-hotplug.sourceforge.net/
- ... system2.3
- See http://www.kernel.org/pub/linux/utils/kernel/hotplug/udev.html/
- ... kernel2.4
- If you boot without first disabling TLS, you will get a warning message during the boot process. In this case, simply perform the rename after the machine is up and then run /sbin/ldconfig to make it take effect.