A Virtual Machine Migration System Based on a CPU Emulator

Koichi Onoue

University of Tokyo

koichi@yl.is.s.u-tokyo.ac.jp

Yoshihiro Oyama

The University of Electro-Communications

oyama@cs.uec.ac.jp

Akinori Yonezawa

University of Tokyo

yonezawa@yl.is.s.u-tokyo.ac.jp

Abstract

Migration of virtual computing environments is a useful mechanism for advanced management of servers and utilization of a uniform computing environment on different machines. There have been a number of studies on migration of virtual computing environments based on virtual machine monitors (e.g., VMware) or language-level virtual machines (e.g., Java). However, migration systems based on a CPU emulator have not received much attention, and their viability in a practical setting is not clear. In this paper, we describe Quasar, a virtual machine (VM) migration system implemented on top of the QEMU CPU emulator. Quasar can migrate a whole operating system between physical machines whose architectures are different (e.g., between an x86 machine and a PowerPC machine). Quasar provides a virtual networking facility, which allows migrating VMs to continue communication without disconnecting sockets during migration. Quasar also provides a staged migration function to reduce the downtime of migrating VMs. We have examined the viability of Quasar through experiments in which Quasar was compared with Xen, SBUML, and UML. The experiments assessed the performance of virtual server hosting, the sizes of the files that represent VMs, and the amount of downtime for VM migration.

1. Introduction

The technology of virtual machines has developed significantly, and the performance and functionality of virtual machines have greatly improved. There are various kinds of virtual machines. One kind is the language VM, which provides an original instruction set (e.g., the Java VM and the .NET CLR). Another kind is the virtual machine monitor, which provides a virtualized and multiplexed form of the underlying physical hardware (e.g., VMware, Xen [4], and Denali [19]). A third kind is the CPU emulator, which emulates real hardware in software (e.g., Bochs and QEMU).

CPU emulators bring strong platform independence and the ability to execute legacy binaries. They have the potential to be good building blocks of advanced software infrastructures. However, their viability for realistic purposes has not yet been fully studied, and there are few case studies on applying CPU emulators to real-world problems. For example, as far as we know, no case studies exist on the performance of a server hosted on a CPU emulator.

In this paper, we propose Quasar, a virtual machine migration system implemented on top of a CPU emulator. It enables users to migrate a virtual computing environment to another physical machine while keeping its execution state intact. Since a CPU emulator strongly decouples software from hardware, its users can enjoy a uniform computing environment on a wide range of platforms. They can do their jobs anywhere in their virtual computing environment, which may be migrated between machines with different architectures, such as x86 and PowerPC. Users do not have to modify the code of their applications or operating systems to execute them in a virtual computing environment.

Building virtual machine migration systems on top of CPU emulators provides several benefits. First, programs can run on various physical machines because CPU emulators can virtualize many CPU architectures and various peripherals. Most existing virtual machine monitors, e.g., VMware, require the CPU of the underlying physical machine to have the x86 architecture. Quasar, on the other hand, enables programs developed for one architecture to run on a machine with another architecture. Second, native-code applications that usually run directly on physical machines can be executed in a virtual computing environment without modifying their code. Unlike language VMs, Quasar can execute native-code applications such as Apache and Firefox, and even commodity operating systems such as Windows and Linux. In contrast, it is not straightforward to execute an operating system on language VMs. As a result, language VMs provide only application-level "run-anywhere". They are not suitable for migrating a complete execution environment, which contains a wide range of resources such as file systems. Quasar can migrate a complete computing environment, and hence it provides an OS-level "run-anywhere" capability.

The contributions of this work are as follows:

• This work presents the design and implementation of a virtual machine migration system based on a CPU emulator. There have been many studies on virtual machine systems based on language VMs and virtual machine monitors. However, as far as we know, there have not been many studies on a methodology for developing a virtual machine migration system using a CPU emulator as a building block. Our system enables us to migrate a virtual computing environment between an x86/Linux machine and a PowerPC/Linux machine.

• This work shows evaluation results on the viability of a virtual machine migration system based on a CPU emulator. Up to now, a number of CPU emulators have been proposed. However, their viability in realistic situations has not yet been fully studied. We expect that the experimental results discussed in this paper will be useful for researchers, developers, and machine administrators when they consider the potential of CPU emulators.

The rest of this paper is organized as follows. Section 2 presents an outline of how our system works. Section 3 describes the implementation of the functions that our system provides. Section 4 evaluates the viability of our system. Section 5 discusses related work. We conclude the paper in Section 6.

2. Overview of Quasar

2.1. Architecture

A VM migration environment provided by Quasar is illustrated in Fig. 1. Quasar is composed of the Quasar VM and the forwarding router. Quasar assumes there is one forwarding router in each local area network. In addition, the forwarding router needs to run on a physical machine that is accessible from the external network. We assume that users have prepared a virtual disk on all the physical machines on which they intend to use a uniform virtual computing environment. For example, to use the virtual computing environment created by using VD3 on PC C deployed in LAN III, a user needs to copy VD3 from PC A deployed in LAN I or PC B deployed in LAN II (in Fig. 1).

[Figure 1. VM migration environment in our system. QVM: Quasar VM; VD: Virtual disk.]

[Figure 2. A physical machine running Quasar VMs. App: Application.]

Quasar provides migration not only between physical machines on a local area network, as in Fig. 1(a), but also between machines on a wide area network, as in Fig. 1(b).

The Quasar VM provides a virtual computing environment. Fig. 2 shows a physical machine on which two Quasar VM instances and one application run. The forwarding router handles migration and network routing for Quasar VM instances. The forwarding router maintains a list of all the Quasar VM instances running in the local area network. The forwarding router enables users to migrate their virtual computing environment to or from a physical machine that is not accessible from the external network.

2.2. Provided Functions

Automation of migration procedures When Quasar receives a migration request from a user, it automatically saves, transmits, and restores the user's virtual computing environment.

Virtual networking facility Quasar appropriately handles network routing before and after migration. A user can use the same IP address before and after migration. Moreover, communications are maintained during and after migration. As a result, programs running on the Quasar VM can continue to communicate with their communication peers without making the peers aware of the migration. For example, Quasar allows users to keep an SSH session alive when the server and/or client program of SSH is migrated.

Staged migration To reduce the service downtime during migration, we take the pre-copy approach presented in work on the V-System [16]. Quasar divides the migration operation into two phases. In the first phase, the saving and transmission of the execution state overlap the provision of services in the virtual computing environment. In the second phase, execution stops and the execution state that was modified during the first phase is transmitted. This staged migration scheme has also been used in VMotion [17] and Xen [5].

Without reliable and secure data transmission, the data transferred by Quasar might be lost or stolen, especially across wide area networks. Thus, Quasar uses a virtual private network to transfer data.

3. Implementation

3.1. Automatic Migration Mechanism

To migrate a virtual computing environment between physical machines, it is necessary to provide functions for saving and restoring the virtual machine states and the virtual disk.

The virtual machine states include the CPU, memory, programmable interrupt controller (PIC), and so on. QEMU already provides functions for saving and restoring virtual machine states. We modified these functions not to save and restore the clock-tick and real-time-clock states, because emulation problems related to them arise after migration.

To reduce the transfer size of the virtual disk, Quasar runs the Quasar VM in snapshot mode, a function provided by QEMU. When the Quasar VM runs in this mode, all writes to the virtual disk after the Quasar VM starts are redirected to a temporary file. Only the data written to the temporary file are transmitted to the destination machine.
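The snapshot-mode behavior amounts to a copy-on-write overlay over a read-only base image. The following is a minimal sketch of that idea in Python; the class and method names are our own illustration, not QEMU's actual interfaces.

```python
# Sketch of snapshot-mode copy-on-write (illustrative, not QEMU's real code).
# Writes land in an overlay keyed by sector number; reads fall back to the
# read-only base image; only the overlay sectors need to be transmitted.

SECTOR_SIZE = 512

class SnapshotDisk:
    def __init__(self, base_image: bytes):
        self.base = base_image          # read-only virtual disk contents
        self.overlay = {}               # sector number -> written data

    def write(self, sector: int, data: bytes):
        assert len(data) == SECTOR_SIZE
        self.overlay[sector] = data     # write lands in the temporary overlay

    def read(self, sector: int) -> bytes:
        if sector in self.overlay:
            return self.overlay[sector]
        off = sector * SECTOR_SIZE
        return self.base[off:off + SECTOR_SIZE]

    def dirty_sectors(self):
        # Only this data must be transferred to the destination machine.
        return sorted(self.overlay.items())
```

For example, after writing one sector of a pristine disk, only that single sector (plus its number) must cross the network, not the whole image.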

3.2. Virtual Networking Facility

We implemented network functions that include a virtual networking facility supporting transparent connection mobility.

Quasar handles the network data received at the data link layer; we call such data raw packets. We implemented bridged networking, which is supported in existing VM systems such as VMware and Xen. The guest OS running on a Quasar VM is identified by the MAC address of its virtual network interface card (NIC). The networking facility provides two benefits. One is that Quasar enables the guest OS running on the Quasar VM to be assigned dynamic global IP addresses from the DHCP server in the local area network. The other is that the user does not need to add Quasar-specific network configuration (e.g., NAT) to the physical machine.
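Demultiplexing raw packets by MAC address can be sketched as follows. The frame layout (destination MAC in the first six bytes of an Ethernet frame) is standard; the dispatch structure itself is our own illustration, not Quasar's actual code.

```python
# Sketch of demultiplexing raw Ethernet frames to guest VMs by destination
# MAC address. The destination MAC occupies the first 6 bytes of a frame.

def dest_mac(frame: bytes) -> str:
    return ':'.join(f'{b:02x}' for b in frame[:6])

def dispatch(frame: bytes, vms: dict) -> bool:
    """vms maps a MAC string to a per-VM receive queue (a list here)."""
    mac = dest_mac(frame)
    if mac in vms:
        vms[mac].append(frame)   # deliver to the guest owning that virtual NIC
        return True
    return False                 # not addressed to any local Quasar VM
```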

To enable users to use the same IP addresses before and after migration, Quasar forwards the raw packets destined for the Quasar VM. The virtual networking facility does not require modifying the host OS. The basic mechanism of our virtual networking is the same as that of Mobile IP [10] and VNET [14, 15]. Mobile IP works at the transport and network layers, while VNET supports network-transparent mobility at the data link layer. Unlike Mobile IP, Quasar enables users to use IP addresses obtained from the DHCP server on the local area network when the guest OS starts, without adding a specific mechanism to support DHCP.

After receiving a migration request from the Quasar VM, the forwarding router identifies the migration route. If the network addresses of the destination Quasar VM and the source forwarding router are the same, the destination Quasar VM handles the raw packets directly; in this case the raw packets are not forwarded by the forwarding router. If the network addresses differ, the destination Quasar VM receives the raw packets via the forwarding router.
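The routing decision reduces to comparing network addresses. A minimal sketch, assuming a /24 netmask purely for illustration:

```python
# Sketch of the forwarding decision: if the destination Quasar VM lies in the
# same network as the source forwarding router, raw packets are delivered
# directly; otherwise they are relayed through the forwarding router.
import ipaddress

def needs_forwarding(dest_vm_ip: str, router_ip: str, prefix: int = 24) -> bool:
    dest_net = ipaddress.ip_network(f'{dest_vm_ip}/{prefix}', strict=False)
    router_net = ipaddress.ip_network(f'{router_ip}/{prefix}', strict=False)
    return dest_net != router_net   # different network -> relay via the router
```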

3.3. Reducing Service Downtime

Fig. 3 shows the two migration mechanisms: bundle migration and staged migration. At the start of migration, the Quasar VM on the destination machine sends a migration request to the forwarding router.

[Figure 3. Migration transition. (a) Bundle migration: B_src1: start of saving; B_dst1: start of migration; B_dst2: transmission of the migration request; B_dst3: completion of migration. (b) Staged migration: S_src1: start of the 1st saving; S_src2: start of the 2nd saving; S_dst1: start of migration; S_dst2: transmission of the 2nd migration request; S_dst3: completion of migration. For each mechanism the figure marks the transmission of the virtual computing environment, the downtime, and the migration time.]

In the bundle migration, we save and restore the virtual computing environment after stopping the Quasar VM on the source machine. This approach is taken in [7, 12]. The service downtime of the bundle migration is proportional to the workloads on the guest and host OSes during migration and to the time and amount of data required for saving and restoring the virtual computing environment. Thus, it can be significantly long.

To reduce the service downtime, Quasar provides the staged migration, which overlaps service provision with the migration operations. The staged migration has two phases.

In the first phase, Quasar saves, transfers, and restores the memory and virtual disk states. At this point, the Quasar VM on the source machine is still running. We do not ensure the consistency of the memory and virtual disk states transmitted in the first phase, because the transmitted data do not necessarily correspond to the states at a single moment. Instead, consistency is ensured by the data transmitted in the second phase.

In the second phase, Quasar stops the guest OS on the source machine. Then Quasar saves, transfers, and restores the virtual machine states, including the CPU, PIC, and NIC states and the remaining raw packets. Quasar also sends all the data written to the memory and the virtual disk from the start of the first phase until the start of the second phase. The start points of the first and second phases are S_src1 and S_src2 in Fig. 3(b). Finally, Quasar restarts the guest OS on the destination machine. The staged migration reduces the service downtime because the time and the amount of data transmitted in the second phase are typically smaller than those in the first phase.

To recognize the writes to the memory and the virtual disk during the first phase (between S_src1 and S_src2), Quasar prepares bitmaps for recording dirty pages and dirty sectors at the start of the first phase. Currently, we use the software MMU supported by QEMU. We modified the write operations of the software MMU and the virtual disk. For every write to a page, Quasar sets the bit representing that page in the memory bitmap; for every write to a sector, Quasar sets the bit representing that sector in the virtual disk bitmap. In the second phase, Quasar transmits only the pages and sectors whose bits have been set.
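The bitmap bookkeeping can be sketched as follows. The real implementation hooks the write path of QEMU's software MMU; this standalone sketch only models the bookkeeping itself.

```python
# Sketch of first-phase dirty tracking: every write sets a bit in a bitmap,
# and the second phase transmits only pages whose bit is set. The same
# structure works for disk sectors; only the unit size differs.

PAGE_SIZE = 4096

class DirtyTracker:
    def __init__(self, num_pages: int):
        self.bitmap = bytearray((num_pages + 7) // 8)   # one bit per page

    def mark_write(self, page: int):
        self.bitmap[page // 8] |= 1 << (page % 8)       # hooked write path

    def is_dirty(self, page: int) -> bool:
        return bool(self.bitmap[page // 8] & (1 << (page % 8)))

    def dirty_pages(self):
        # The second phase sends exactly these pages.
        return [p for p in range(len(self.bitmap) * 8) if self.is_dirty(p)]
```

The bitmap costs one bit per page (8 KB of bitmap per 256 MB of 4 KB pages), which is why it is cheap enough to keep for the whole first phase.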

4. Experiments

We conducted three experiments to evaluate the viability of Quasar. For all the experiments, we used the Quasar VM that emulates the x86 architecture.

We used two physical machines in these experiments. One was an HP xw4100 workstation, which had an Intel Pentium 4 3.0 GHz with HyperThreading enabled, 1 GB of memory, and a Linux 2.6.13 kernel (PM_hp). The other was a ThinkPad R51, which had an Intel Pentium M 1.5 GHz, 512 MB of memory, and a Linux 2.6.14 kernel (PM_tp). In the last two experiments, the two physical machines were connected via a gigabit network in a local area network. Both machines had a gigabit network controller, and there were five gigabit switches between them.

4.1. Size of Data for Saving and Restoring

To migrate a uniform computing environment in Quasar, it is necessary to save and restore the virtual machine states, such as the CPU and VM memory, and the virtual disk. To save and restore the virtual disk state, Quasar ran in snapshot mode and transferred only the dirty sectors.

To compare with existing systems that provide functions for saving and restoring a virtual computing environment, we also conducted the same experiment on Xen 2.0-testing and ScrapBook for User-Mode Linux (SBUML) [13]. Xen does not save or restore the virtual disk state. SBUML uses the copy-on-write file function provided by User-Mode Linux (UML) as the virtual disk. We configured all the VMs to have a 1.25 GB virtual disk and installed Debian GNU/Linux 3.1 in the virtual computing environment.

We saved and restored the following four states (two CUI and two GUI states) for 128 MB and 256 MB VM memories.

login We started the guest OS and got the CUI login prompt. This state has few write accesses to the VM memory and the virtual disk.

x-window After login, we downloaded, installed, and configured the x-window-system package. This command automatically fetches the packages on which the x-window-system package depends; 63 new packages are required. This state has many write accesses to the VM memory and the virtual disk.

xterm We started the X Window System and ran the window manager twm and one xterm.

firefox We started the X Window System and ran twm, one xterm, and the web browser Firefox. We created five new tabs in Firefox and browsed the web page www.google.co.jp in the tabs.

The results in Table 1 indicate the sizes required for saving and restoring the virtual computing environment. The saving and restoring size in our system is small; almost all of it consists of the VM memory state. The size of the other virtual machine states, such as the CPU and PIC, is about 52 KB. QEMU provides a simple compression for saving and restoring the VM memory state. The effect of the compression can be seen in the cases of login and xterm (for the 128 MB and 256 MB VM memories) and firefox (for the 256 MB VM memory) in Table 1.
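The paper does not spell out QEMU's simple compression; a common scheme of this kind stores a page consisting of a single repeated byte (most often zero) in one byte. The following sketch illustrates that idea and is not claimed to be QEMU's exact on-wire format.

```python
# Hedged sketch of "uniform page" compression: a page filled with one repeated
# byte is stored as a flag plus that byte; any other page is stored verbatim.
# Freshly booted guests have many all-zero pages, hence the large savings.

PAGE = 4096

def compress_page(page: bytes):
    if page == bytes([page[0]]) * PAGE:
        return ('uniform', page[0])   # ~1 byte instead of 4096
    return ('raw', page)

def decompress_page(entry) -> bytes:
    kind, val = entry
    if kind == 'uniform':
        return bytes([val]) * PAGE
    return val
```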

4.2. Networking and Service Hosting Throughput

To evaluate Quasar's networking and the cost of hosting services on Quasar, we used Netperf 2.4.1 and ApacheBench 2.0.41.

The server and client machines were PM_hp and PM_tp, respectively. We configured the Quasar VM on the server machine to have 256 MB of VM memory. We also ran these benchmarks on Xen and UML for comparison.

First, we measured the request-and-response throughput over TCP and UDP network connections by using Netperf.

We ran the netserver process on the server machine and the netperf process on the client machine. The transaction in this experiment was a single communication with a data size of 1 byte. The results in Table 2 indicate that the performance of our system is about 4 and 4.5 times worse than that of Xen and about 2.5 and 3 times worse than that of UML for TCP and UDP, respectively.

      Quasar   UML    Xen
TCP   1023     2668   4002
UDP   1294     3969   5395

Table 2. Request and response throughput (transactions/second)

To examine hosting web services on Quasar, we ran the web server Apache 2.0.54 on PM_hp. We configured the Quasar VM to have 256 MB of VM memory. Using ApacheBench on PM_tp, we sent 1024 requests for static content to Apache on the Quasar VM. We used 1 KB and 100 KB files as the static content, and we varied the request concurrency from 1 to 128 exponentially. We also ran these benchmarks on Xen and UML for comparison.

Table 3 indicates that the number of processed requests per second was about 120 for the 1 KB file and 24 for the 100 KB file. The results indicate that the overhead incurred by hosting web services on our current system is large. In Quasar, the hosting throughput is virtually constant regardless of the increase in concurrency. In contrast, the throughput of Xen and UML increased with concurrency. Thus, we will have to analyze the factors behind this overhead in detail and enhance our system in the future.

4.3. Migration between Physical Machines

Finally, we measured the migration time and the service downtime of the bundle migration and the staged migration from PM_hp to PM_tp.

We configured the Quasar VM to have 256 MB of VM memory and a 1.25 GB virtual disk. The forwarding router ran on the source machine PM_hp, and we sent a migration request from the destination machine PM_tp.

To examine the overhead in migration time and service downtime incurred by using SSL, we measured both with SSL enabled and with it disabled. Fig. 3 indicates the measuring points of the migration time and the service downtime for each migration. We defined the pseudo-service downtime of the bundle migration and the staged migration as the time between B_dst2 and B_dst3 and the time between S_dst2 and S_dst3, respectively. We regarded the pseudo-service downtime as the actual service downtime because B_src1 (S_src2) and B_dst3 (S_dst3) were measured on different physical machines.
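Taking both endpoint timestamps on the destination machine avoids comparing clocks across machines. A minimal sketch of such single-machine interval measurement, with marker names matching Fig. 3:

```python
# Sketch of pseudo-downtime measurement: both marks (e.g., S_dst2 and S_dst3)
# are recorded on the destination machine with a monotonic clock, so no
# cross-machine clock synchronization is needed.
import time

class MigrationClock:
    def __init__(self):
        self.marks = {}

    def mark(self, name: str):
        self.marks[name] = time.monotonic()   # immune to wall-clock jumps

    def interval(self, start: str, end: str) -> float:
        return self.marks[end] - self.marks[start]
```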

We migrated the following active execution states of the Quasar VM.

VM memory: 128 MB
                 CUI              GUI
                 login  x-window  xterm  firefox
Quasar (MS/DS)   54/3   126/21    85/4   120/5
Xen              129    129       129    129
SBUML            15     258       34     69

VM memory: 256 MB
                 CUI              GUI
                 login  x-window  xterm  firefox
Quasar (MS/DS)   56/3   254/21    87/4   124/5
Xen              257    257       257    257
SBUML            17     353       37     71

MS: Machine states   DS: Dirty sectors

Table 1. Saving and restoring size (MB)

(a) 1 KB file
Concurrency   1     2     4     8     16    32    64    128
Quasar        115   113   115   122   121   118   117   116
UML           444   470   474   492   513   560   579   598
Xen           950   1042  1118  1112  …     1274  1305  1381

(b) 100 KB file
Concurrency   1     2     4     8     16    32    64    128
Quasar        24    23    23    23    22    22    22    21
UML           111   116   117   118   121   120   119   120
Xen           42    60    100   220   491   557   762   …

Table 3. Web service throughput (requests/second)

top After CUI login, we started running the top command. In this case, the write accesses to the VM memory and the virtual disk were small, and the CPU workload was low during migration.

x-top After CUI login, we started the X Window System. Then we ran twm, one xterm, and top.

kernel After CUI login, we started creating the bzImage of the Linux kernel 2.4.23. In this case, the write accesses to the VM memory were large and the CPU workload was high.

em-kernel After login, we downloaded, installed, and configured the emacs21 package, which requires fetching six new packages. After the configuration was completed, we started creating the bzImage of kernel 2.4.23. In this case, the amount of write accesses to the VM memory and the virtual disk was large.

The migration request was sent one minute after the last command (such as top or creating the bzImage) was executed on the source machine PM_hp. The last command was in progress before and after migration.

The results in Tables 4 and 5 indicate the migration time and the service downtime for each execution state. In all cases, although the migration time of the staged migration is larger than that of the bundle migration, the service downtime of the staged migration is smaller than that of the bundle migration. Compared with the SSL-disabled case, the SSL-enabled migration took longer, but the difference in service downtime under the staged migration was small.

For the staged migration, we also measured the amount of VM memory and virtual disk data transmitted in the first and second phases. The results are shown in Table 6. In the second phase, the virtual execution states were also transmitted; their size was about 52 KB. The table indicates that more data was transferred in the first phase than in the second phase. This is one of the factors that led to the shorter downtime of the staged migration.

5. Related Work

(a) Migration time
         top    x-top   kernel   em-kernel
Bundle   3.5    3.6     3.9      13.4
Staged   3.6    3.6     4.5      16.7

(b) Service downtime
         top    x-top   kernel   em-kernel
Bundle   2.86   3.43    3.70     12.80
Staged   0.03   0.05    0.42     0.59

Table 4. Migration time and service downtime with SSL enabled (seconds)

(a) Migration time
         top    x-top   kernel   em-kernel
Bundle   2.0    2.0     2.0      5.0
Staged   2.0    2.0     2.1      5.5

(b) Service downtime
         top    x-top   kernel   em-kernel
Bundle   1.87   1.88    1.       4.31
Staged   0.01   0.02    0.06     0.60

Table 5. Migration time and service downtime with SSL disabled (seconds)

(a) SSL-enabled
Phase    Item                top     x-top   kernel   em-kernel
First    VM memory (MB)      39.17   57.9    7.70     216.41
         Virtual disk (KB)   2573    1724    3919     95571
Second   VM memory (MB)      0.76    1.47    13.60    18.20
         Virtual disk (KB)   3       3       0        461

(b) SSL-disabled
Phase    Item                top     x-top   kernel   em-kernel
First    VM memory (MB)      34.65   57.2    6.71     216.42
         Virtual disk (KB)   2565    1736    3865     95531
Second   VM memory (MB)      0.63    1.87    10.19    12.22
         Virtual disk (KB)   0       3       0        100

Table 6. Transfer size for staged migration

There has been a lot of work on applying virtual computing environments to migration and server hosting [2, 3, 5, 8, 9, 12, 17, 20].

Many migration systems based on virtual machine monitors have been proposed. VMware and Xen have mechanisms to reduce the service downtime [5, 17]. They assume disk storage shared among different virtual machines, which is typically provided by a storage area network (SAN), network-attached storage (NAS), or a distributed storage system such as Parallax [18]. Collective [12] has proposed some optimizations, e.g., compressing the virtual disks and reducing the amount of transmitted data by sending only differences between virtual disks. Internet Suspend/Resume [7] enables migration by combining virtual machine technology (VMware) and distributed file systems. However, neither of these two systems has a mechanism to reduce the service downtime due to migration. All of the above migration systems restrict the available host CPU because they are built on top of the x86 architecture.

There are many hosting and migration systems based on virtualization of system-call execution and resource views. Examples are Zap [9] and SoftwarePot [6]. Many systems following this approach have an advantage over our system in that their virtualization overhead is smaller than that of a CPU emulator. However, they are more dependent on the underlying real computing environment.

One.world [1] is a Java-based framework that supports mobility of applications in ubiquitous environments. Since it is implemented in Java, it cannot execute native-code applications such as Apache.

MobiDesk [3], VNC [11], and Remote Desktop in Windows enable a user to obtain a uniform computing environment on various physical machines. These systems make applications run on servers and exchange user input and application output between server and client machines. These systems cannot be used in disconnected environments. Furthermore, they depend on the network performance and/or the distance between server and client machines because of the interaction between servers and clients. Unlike these thin-client approaches, in our system all process executions and computations run on the local physical machine.

6. Conclusion

Quasar is a virtual machine migration system built on top of the QEMU CPU emulator. By integrating migration functions with a CPU emulator, it enables a user to use a uniform computing environment. It provides three functions: an automatic migration mechanism, a virtual networking facility, and a mechanism to reduce the service downtime. We implemented a prototype on Linux and evaluated its viability. The prototype required a reasonable size for saving and resuming a virtual computing environment and incurred reasonable service downtime due to migration. In the web server hosting experiments, however, the current prototype processed only about 120 and 24 requests per second for 1 KB and 100 KB files, respectively, so the hosting overhead remains large.

We plan to analyze and enhance our system so that it can host services with smaller overhead. Moreover, we are going to use multi-staged migration to further reduce the downtime during migration. We are also interested in applying our system to security systems.

References

[1] L. Arnstein, R. Grimm, C. Hung, J. H. Kang, A. LaMarca, G. Look, S. B. Sigurdsson, J. Su, and G. Borriello. Systems Support for Ubiquitous Computing: A Case Study of Two Implementations of Labscape. In Proceedings of the First International Conference on Pervasive Computing, pages 30-44, Zurich, August 2002.

[2] A. A. Awadallah and M. Rosenblum. The vMatrix: Server Switching. In Proceedings of the 10th IEEE International Workshop on Future Trends in Distributed Computing Systems (FTDCS '04), Suzhou, May 2004.

[3] R. A. Baratto, S. Potter, G. Su, and J. Nieh. MobiDesk: Mobile Virtual Desktop Computing. In Proceedings of the 10th Annual International Conference on Mobile Computing and Networking (MOBICOM 2004), pages 1-15, Philadelphia, September 2004.

[4] P. Barham, B. Dragovic, K. Fraser, S. Hand, T. Harris, A. Ho, R. Neugebauer, I. Pratt, and A. Warfield. Xen and the Art of Virtualization. In Proceedings of the 19th ACM Symposium on Operating Systems Principles (SOSP '03), pages 164-177, New York, October 2003.

[5] C. Clark, K. Fraser, S. Hand, J. G. Hansen, E. Jul, C. Limpach, I. Pratt, and A. Warfield. Live Migration of Virtual Machines. In Proceedings of the 2nd Symposium on Networked Systems Design and Implementation (NSDI '05), Boston, May 2005.

[6] K. Kato and Y. Oyama. SoftwarePot: An Encapsulated Transferable File System for Secure Software Circulation. Technical Report ISE-TR-02-185, Institute of Information Sciences and Electronics, University of Tsukuba, January 2002.

[7] M. Kozuch and M. Satyanarayanan. Internet Suspend/Resume. In Proceedings of the 4th IEEE Workshop on Mobile Computing Systems and Applications, pages 40-46, June 2002.

[8] I. Krsul, A. Ganguly, J. Zhang, J. A. B. Fortes, and R. J. Figueiredo. VMPlants: Providing and Managing Virtual Machine Execution Environments for Grid Computing. In Proceedings of the 2004 ACM/IEEE Conference on Supercomputing (SC '04), Pittsburgh, November 2004.

[9] S. Osman, D. Subhraveti, G. Su, and J. Nieh. The Design and Implementation of Zap: A System for Migrating Computing Environments. In Proceedings of the 5th Symposium on Operating Systems Design and Implementation (OSDI '02), pages 361-376, Boston, December 2002.

[10] C. E. Perkins and A. Myles. Mobile IP. In Proceedings of the International Telecommunications Symposium, pages 415-419, 1997.

[11] T. Richardson, Q. Stafford-Fraser, K. R. Wood, and A. Hopper. Virtual Network Computing. IEEE Internet Computing, 2(1):33-38, January 1998.

[12] C. P. Sapuntzakis, R. Chandra, B. Pfaff, J. Chow, M. S. Lam, and M. Rosenblum. Optimizing the Migration of Virtual Computers. In Proceedings of the 5th Symposium on Operating Systems Design and Implementation (OSDI '02), pages 377-390, Boston, December 2002.

[13] O. Sato, R. Potter, M. Yamamoto, and M. Hagiya. UML Scrapbook and Realization of Snapshot Programming Environment. In Proceedings of the Second Mext-NSF-JSPS International Symposium on Software Security (ISSS 2003), volume 3233, pages 281-295, Tokyo, 2003. Springer.

[14] A. Sundararaj, A. Gupta, and P. Dinda. Increasing Application Performance in Virtual Environments through Run-time Inference and Adaptation. In Proceedings of the 14th IEEE International Symposium on High Performance Distributed Computing (HPDC-14), Research Triangle Park, July 2005.

[15] A. I. Sundararaj and P. A. Dinda. Towards Virtual Networks for Virtual Machine Grid Computing. In Proceedings of the 3rd Virtual Machine Research and Technology Symposium (VM '04), pages 177-190, San Jose, 2004.

[16] M. Theimer, K. A. Lantz, and D. R. Cheriton. Preemptable Remote Execution Facilities for the V-System. In Proceedings of the 10th ACM Symposium on Operating System Principles, pages 2-12, Orcas Island, Washington, December 1985.

[17] VMotion. http://www.vmware.com/products/vc/vmotion.html.

[18] A. Warfield, R. Ross, K. Fraser, C. Limpach, and S. Hand. Parallax: Managing Storage for a Million Machines. In Proceedings of the 10th Workshop on Hot Topics in Operating Systems (HotOS X), Santa Fe, NM, June 2005.

[19] A. Whitaker, M. Shaw, and S. D. Gribble. Scale and Performance in the Denali Isolation Kernel. In Proceedings of the 5th Symposium on Operating Systems Design and Implementation (OSDI '02), pages 195-209, Boston, December 2002.

[20] M. Zhao, J. Zhang, and R. Figueiredo. Distributed File System Support for Virtual Machines in Grid Computing. In Proceedings of the 13th IEEE International Symposium on High Performance Distributed Computing (HPDC '04), pages 202-211, Honolulu, June 2004.
