Comment 2 for bug 461528

Dmitry Ljautov (dljautov) wrote:

As for hostnames, it works perfectly on Jaunty, but not on Karmic.

In Jaunty live migration worked with:
listen_tls = 0
listen_tcp = 1
auth_tcp = "none"
in /etc/libvirt/libvirtd.conf
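If I remember right, libvirtd has to be restarted after changing libvirtd.conf, for example:
# /etc/init.d/libvirt-bin restart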

Worked in Jaunty:
# virsh --connect=qemu+tcp://node1/system migrate --live vm1 qemu+tcp://node2/system

The hostnames node1 and node2 should be resolvable, either via /etc/hosts or set up some other way.
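For example, something like this in /etc/hosts on both nodes (the addresses here are just placeholders, use your own):
  192.168.1.101  node1
  192.168.1.102  node2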
The vm1 guest is in the running state.

Here are a few changes I made to the AppArmor profiles to test migration in Karmic.
To let libvirtd bind to the network (libvirtd_opts in /etc/default/libvirt-bin needs the -l key; see the example after this list), add the following to /etc/apparmor.d/usr.sbin.libvirtd:
  network inet dgram,
  network inet6 stream,
  network inet6 dgram,
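For /etc/default/libvirt-bin I mean a line roughly like this (the exact set of default options may differ on your system):
  libvirtd_opts="-d -l"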

I also added my NFS share, which I use to save and restore domains outside of $HOME, to /etc/apparmor.d/abstractions/libvirt-qemu:
  /mnt/nfs/save/** rw,
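The edited profiles then need to be reloaded; I think reloading the main profile (which also pulls in the abstraction) or simply restarting libvirt-bin is enough:
# apparmor_parser -r /etc/apparmor.d/usr.sbin.libvirtd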

Worked in Karmic:
# virsh --connect=qemu+tcp://node1/system save vm1 /mnt/nfs/save/vm1
# virsh --connect=qemu+tcp://node1/system restore /mnt/nfs/save/vm1
I assume these permissions should be enough for migration, or are they not?

I also tried migration under Karmic:
# virsh --connect=qemu+tcp://node1/system migrate --live vm1 qemu+tcp://node2/system
I tested with the vm1 guest in the running state (a suspended guest gives the same result).
The migration seems to complete, but the guest hangs afterwards (although virsh list correctly shows it running on the destination host after migration, even when vm1 was suspended before the migration). However, if I suspend and resume the guest, it starts working again, just as if it had been paused before the migration:

# virsh --connect=qemu+tcp://node2/system suspend vm1
# virsh --connect=qemu+tcp://node2/system resume vm1

But such a migration is not a _live_ migration as it should be for a running vm1 guest: there is non-zero downtime between the suspend and the resume. :(

I think the problem is not only in the AppArmor profiles (I tried turning them off completely).
Any ideas?

PS. Sorry for terrible English.