public inbox for pve-devel@lists.proxmox.com
* [pve-devel] [PATCH-SERIES v3] migration tests
From: Fabian Ebner @ 2020-12-01 12:06 UTC
  To: pve-devel

Refactor some code and create a test environment for migration. See the last
patch for a description of the latter.

The first two patches depend on libpve-guest-common-perl >= 3.1-3.

Changes from v2:
    * drop already applied patch introducing move_config_to_node helper
    * rebase on current master and verify tests still behave the same
      way (they did, but it's not surprising as QemuMigrate.pm didn't
      change much since v2 was sent).
    * add a patch to sort volumes migrated with storage_migrate and adapt
      affected test


container:

Fabian Ebner (1):
  use new move_config_to_node method

 src/PVE/LXC/Migrate.pm | 12 ++----------
 1 file changed, 2 insertions(+), 10 deletions(-)


qemu-server:

Fabian Ebner (4):
  use new move_config_to_node method
  migration: factor out starting remote tunnel
  migration: sort volumes migrated with storage_migrate
  create test environment for migration

 PVE/QemuMigrate.pm                    |  130 +-
 test/Makefile                         |    5 +-
 test/MigrationTest/QemuMigrateMock.pm |  319 +++++
 test/MigrationTest/QmMock.pm          |  142 +++
 test/MigrationTest/Shared.pm          |  170 +++
 test/run_qemu_migrate_tests.pl        | 1594 +++++++++++++++++++++++++
 6 files changed, 2295 insertions(+), 65 deletions(-)
 create mode 100644 test/MigrationTest/QemuMigrateMock.pm
 create mode 100644 test/MigrationTest/QmMock.pm
 create mode 100644 test/MigrationTest/Shared.pm
 create mode 100755 test/run_qemu_migrate_tests.pl

-- 
2.20.1

* [pve-devel] [PATCH v3 container 1/5] use new move_config_to_node method
From: Fabian Ebner @ 2020-12-01 12:06 UTC
  To: pve-devel

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---

Requires libpve-guest-common-perl >= 3.1-3.

 src/PVE/LXC/Migrate.pm | 12 ++----------
 1 file changed, 2 insertions(+), 10 deletions(-)

diff --git a/src/PVE/LXC/Migrate.pm b/src/PVE/LXC/Migrate.pm
index 7c3536f..cb1ea7a 100644
--- a/src/PVE/LXC/Migrate.pm
+++ b/src/PVE/LXC/Migrate.pm
@@ -301,9 +301,6 @@ sub phase1 {
 	}
     }
 
-    my $conffile = PVE::LXC::Config->config_file($vmid);
-    my $newconffile = PVE::LXC::Config->config_file($vmid, $self->{node});
-
     if ($self->{running}) {
 	die "implement me";
     }
@@ -318,15 +315,10 @@ sub phase1 {
     my $vollist = PVE::LXC::Config->get_vm_volumes($conf);
     PVE::Storage::deactivate_volumes($self->{storecfg}, $vollist);
 
-   # transfer replication state before move config
+    # transfer replication state before moving config
     $self->transfer_replication_state() if $rep_volumes;
-
-    # move config
-    die "Failed to move config to node '$self->{node}' - rename failed: $!\n"
-	if !rename($conffile, $newconffile);
-
+    PVE::LXC::Config->move_config_to_node($vmid, $self->{node});
     $self->{conf_migrated} = 1;
-
     $self->switch_replication_job_target() if $rep_volumes;
 }
 
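The new move_config_to_node helper lives in libpve-guest-common-perl and is
therefore not part of this diff. As a rough sketch, assuming it essentially
wraps the rename logic removed above (the actual implementation may differ):

    sub move_config_to_node {
        my ($class, $vmid, $node) = @_;

        my $conffile = $class->config_file($vmid);
        my $newconffile = $class->config_file($vmid, $node);

        rename($conffile, $newconffile)
            or die "failed to move config to node '$node' - rename failed: $!\n";
    }
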
-- 
2.20.1

* [pve-devel] [PATCH v3 qemu-server 2/5] use new move_config_to_node method
From: Fabian Ebner @ 2020-12-01 12:06 UTC
  To: pve-devel

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---

Requires libpve-guest-common-perl >= 3.1-3.

Changes from v2:
    * rebase

 PVE/QemuMigrate.pm | 9 +--------
 1 file changed, 1 insertion(+), 8 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 83ba5e2..a8f6644 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -1194,14 +1194,7 @@ sub phase3_cleanup {
 
     # transfer replication state before move config
     $self->transfer_replication_state() if $self->{is_replicated};
-
-    # move config to remote node
-    my $conffile = PVE::QemuConfig->config_file($vmid);
-    my $newconffile = PVE::QemuConfig->config_file($vmid, $self->{node});
-
-    die "Failed to move config to node '$self->{node}' - rename failed: $!\n"
-        if !rename($conffile, $newconffile);
-
+    PVE::QemuConfig->move_config_to_node($vmid, $self->{node});
     $self->switch_replication_job_target() if $self->{is_replicated};
 
     if ($self->{livemigration}) {
-- 
2.20.1

* [pve-devel] [PATCH v3 qemu-server 3/5] migration: factor out starting remote tunnel
From: Fabian Ebner @ 2020-12-01 12:07 UTC
  To: pve-devel

so it can be mocked when testing.
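
For example, the migration test environment added in the last patch of this
series can then replace the whole tunnel setup with a stub along these lines
(taken from the QemuMigrateMock module introduced there):

    use Test::MockModule;

    my $qemu_migrate_module = Test::MockModule->new("PVE::QemuMigrate");
    $qemu_migrate_module->mock(
        start_remote_tunnel => sub {
            my ($self, $raddr, $rport, $ruri, $unix_socket_info) = @_;
            # pretend the tunnel is up without forking an ssh process
            $self->{tunnel} = {
                writer => "mocked",
                reader => "mocked",
                pid => 123456,
                version => 1,
            };
        },
    );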

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---
 PVE/QemuMigrate.pm | 119 ++++++++++++++++++++++++---------------------
 1 file changed, 64 insertions(+), 55 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index a8f6644..97af397 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -204,6 +204,69 @@ sub finish_tunnel {
     die $err if $err;
 }
 
+sub start_remote_tunnel {
+    my ($self, $raddr, $rport, $ruri, $unix_socket_info) = @_;
+
+    my $nodename = PVE::INotify::nodename();
+    my $migration_type = $self->{opts}->{migration_type};
+
+    if ($migration_type eq 'secure') {
+
+	if ($ruri =~ /^unix:/) {
+	    my $ssh_forward_info = ["$raddr:$raddr"];
+	    $unix_socket_info->{$raddr} = 1;
+
+	    my $unix_sockets = [ keys %$unix_socket_info ];
+	    for my $sock (@$unix_sockets) {
+		push @$ssh_forward_info, "$sock:$sock";
+		unlink $sock;
+	    }
+
+	    $self->{tunnel} = $self->fork_tunnel($ssh_forward_info);
+
+	    my $unix_socket_try = 0; # wait for the socket to become ready
+	    while ($unix_socket_try <= 100) {
+		$unix_socket_try++;
+		my $available = 0;
+		foreach my $sock (@$unix_sockets) {
+		    if (-S $sock) {
+			$available++;
+		    }
+		}
+
+		if ($available == @$unix_sockets) {
+		    last;
+		}
+
+		usleep(50000);
+	    }
+	    if ($unix_socket_try > 100) {
+		$self->{errors} = 1;
+		$self->finish_tunnel($self->{tunnel});
+		die "Timeout, migration socket $ruri did not get ready";
+	    }
+	    $self->{tunnel}->{unix_sockets} = $unix_sockets if (@$unix_sockets);
+
+	} elsif ($ruri =~ /^tcp:/) {
+	    my $ssh_forward_info = [];
+	    if ($raddr eq "localhost") {
+		# for backwards compatibility with older qemu-server versions
+		my $pfamily = PVE::Tools::get_host_address_family($nodename);
+		my $lport = PVE::Tools::next_migrate_port($pfamily);
+		push @$ssh_forward_info, "$lport:localhost:$rport";
+	    }
+
+	    $self->{tunnel} = $self->fork_tunnel($ssh_forward_info);
+
+	} else {
+	    die "unsupported protocol in migration URI: $ruri\n";
+	}
+    } else {
+	#fork tunnel for insecure migration, to send faster commands like resume
+	$self->{tunnel} = $self->fork_tunnel();
+    }
+}
+
 sub lock_vm {
     my ($self, $vmid, $code, @param) = @_;
 
@@ -808,62 +871,8 @@ sub phase2 {
     }
 
     $self->log('info', "start remote tunnel");
+    $self->start_remote_tunnel($raddr, $rport, $ruri, $unix_socket_info);
 
-    if ($migration_type eq 'secure') {
-
-	if ($ruri =~ /^unix:/) {
-	    my $ssh_forward_info = ["$raddr:$raddr"];
-	    $unix_socket_info->{$raddr} = 1;
-
-	    my $unix_sockets = [ keys %$unix_socket_info ];
-	    for my $sock (@$unix_sockets) {
-		push @$ssh_forward_info, "$sock:$sock";
-		unlink $sock;
-	    }
-
-	    $self->{tunnel} = $self->fork_tunnel($ssh_forward_info);
-
-	    my $unix_socket_try = 0; # wait for the socket to become ready
-	    while ($unix_socket_try <= 100) {
-		$unix_socket_try++;
-		my $available = 0;
-		foreach my $sock (@$unix_sockets) {
-		    if (-S $sock) {
-			$available++;
-		    }
-		}
-
-		if ($available == @$unix_sockets) {
-		    last;
-		}
-
-		usleep(50000);
-	    }
-	    if ($unix_socket_try > 100) {
-		$self->{errors} = 1;
-		$self->finish_tunnel($self->{tunnel});
-		die "Timeout, migration socket $ruri did not get ready";
-	    }
-	    $self->{tunnel}->{unix_sockets} = $unix_sockets if (@$unix_sockets);
-
-	} elsif ($ruri =~ /^tcp:/) {
-	    my $ssh_forward_info = [];
-	    if ($raddr eq "localhost") {
-		# for backwards compatibility with older qemu-server versions
-		my $pfamily = PVE::Tools::get_host_address_family($nodename);
-		my $lport = PVE::Tools::next_migrate_port($pfamily);
-		push @$ssh_forward_info, "$lport:localhost:$rport";
-	    }
-
-	    $self->{tunnel} = $self->fork_tunnel($ssh_forward_info);
-
-	} else {
-	    die "unsupported protocol in migration URI: $ruri\n";
-	}
-    } else {
-	#fork tunnel for insecure migration, to send faster commands like resume
-	$self->{tunnel} = $self->fork_tunnel();
-    }
     my $start = time();
 
     my $opt_bwlimit = $self->{opts}->{bwlimit};
-- 
2.20.1

* [pve-devel] [PATCH v3 qemu-server 4/5] migration: sort volumes migrated with storage_migrate
From: Fabian Ebner @ 2020-12-01 12:07 UTC
  To: pve-devel

Having a deterministic order here is useful for testing.
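
A quick illustration: Perl randomizes hash key order between runs, so
without the sort, the volumes could be processed in a different order on
every run of the test suite:

    my $local_volumes = {
        'local-lvm:vm-149-disk-0' => {},
        'local-dir:149/vm-149-disk-0.qcow2' => {},
    };

    # order can differ between perl invocations (hash randomization)
    print "$_\n" for keys %$local_volumes;

    # deterministic order, reproducible in tests
    print "$_\n" for sort keys %$local_volumes;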

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---

New in v3.

 PVE/QemuMigrate.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 97af397..5c019fc 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -610,7 +610,7 @@ sub sync_disks {
 
 	$self->log('info', "copying local disk images") if scalar(%$local_volumes);
 
-	foreach my $volid (keys %$local_volumes) {
+	foreach my $volid (sort keys %$local_volumes) {
 	    my ($sid, $volname) = PVE::Storage::parse_volume_id($volid);
 	    my $targetsid = PVE::QemuServer::map_storage($self->{opts}->{storagemap}, $sid);
 	    my $ref = $local_volumes->{$volid}->{ref};
-- 
2.20.1

* [pve-devel] [PATCH v3 qemu-server 5/5] create test environment for migration
From: Fabian Ebner @ 2020-12-01 12:07 UTC
  To: pve-devel

and the associated parts for 'qm start'.

Each test will first populate the MigrationTest/run directory
with the relevant configuration files and with files tracking the state of
everything necessary. Second, the mock-script for migration is executed,
which in turn executes the 'qm start' mock-script (if it's an online test
that gets far enough). The scripts simulate a migration and update the
relevant files in the MigrationTest/run directory. Finally, the main test
script evaluates the resulting state.
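
Roughly, each test drives the migration mock-script like this (a sketch
only; it mirrors how QemuMigrateMock.pm itself spawns the QmMock.pm script,
using the paths defined in run_qemu_migrate_tests.pl):

    use PVE::Tools;

    $ENV{QM_LIB_PATH}  = '..';                   # where PVE/ lives
    $ENV{RUN_DIR_PATH} = './MigrationTest/run/'; # state files for one test

    PVE::Tools::run_command([
        '/usr/bin/perl', '-I..', '-I../test',
        './MigrationTest/QemuMigrateMock.pm',    # calls PVE::QemuMigrate->migrate()
    ]);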

The main checks are the volume IDs on the source and target and the VM
configuration itself. Additional checks are the vm_status and expected_calls,
which keep track of whether certain calls have been made.
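
The expected_calls mechanism is simple: a test pre-populates a hash with one
key per required call, each mock deletes its key when the call happens, and
anything left over afterwards is a call that never happened. A sketch (the
assertion shown here is hypothetical; the real check lives in the main test
script):

    use Test::More;

    my $expected_calls = { move_config_to_node => 1, vm_stop => 1 };

    # a mock deletes its key when invoked, e.g.:
    # delete $expected_calls->{move_config_to_node};

    # afterwards, any remaining key is a missing call
    fail("expected call '$_' was not made") for keys %$expected_calls;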

The rationale behind creating two mock-scripts is two-fold:
1. It removes the need to hard-code responses for the tunnel
   and to re-implement the logic for determining and allocating migration
   volumes. Some of that logic already happens in the API part, so it was
   necessary to mock the whole CLI-Handler.
2. It allows testing the code relevant for migration in 'qm start' as well,
   and it should even be possible to test different versions of the
   mock-scripts against each other. With a bit of extra work and things
   like 'git worktree', it might even be possible to automate this.

The helper get_patched_config is useful for changing pre-defined
configuration files on the fly, avoiding the need to explicitly define whole
configurations just to test for something in many cases.
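
For example, the '341_running_efidisk_targetstorage_dir' test below
describes its expected configuration as:

    # the pre-defined config for VMID 341, with only efidisk0 patched to
    # point to the newly allocated volume on the target storage
    my $expected_config = get_patched_config(341, {
        efidisk0 => 'local-dir:341/vm-341-disk-10.raw,format=raw,size=128K',
    });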

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---

Changes from v2:
    * adapt previously non-deterministic test to new behavior introduced by
      the previous patch

 test/Makefile                         |    5 +-
 test/MigrationTest/QemuMigrateMock.pm |  319 +++++
 test/MigrationTest/QmMock.pm          |  142 +++
 test/MigrationTest/Shared.pm          |  170 +++
 test/run_qemu_migrate_tests.pl        | 1594 +++++++++++++++++++++++++
 5 files changed, 2229 insertions(+), 1 deletion(-)
 create mode 100644 test/MigrationTest/QemuMigrateMock.pm
 create mode 100644 test/MigrationTest/QmMock.pm
 create mode 100644 test/MigrationTest/Shared.pm
 create mode 100755 test/run_qemu_migrate_tests.pl

diff --git a/test/Makefile b/test/Makefile
index d88cbd2..a356057 100644
--- a/test/Makefile
+++ b/test/Makefile
@@ -1,6 +1,6 @@
 all: test
 
-test: test_snapshot test_ovf test_cfg_to_cmd test_pci_addr_conflicts test_qemu_img_convert
+test: test_snapshot test_ovf test_cfg_to_cmd test_pci_addr_conflicts test_qemu_img_convert test_migration
 
 test_snapshot: run_snapshot_tests.pl
 	./run_snapshot_tests.pl
@@ -17,3 +17,6 @@ test_qemu_img_convert: run_qemu_img_convert_tests.pl
 
 test_pci_addr_conflicts: run_pci_addr_checks.pl
 	./run_pci_addr_checks.pl
+
+test_migration: run_qemu_migrate_tests.pl MigrationTest/*.pm
+	./run_qemu_migrate_tests.pl
diff --git a/test/MigrationTest/QemuMigrateMock.pm b/test/MigrationTest/QemuMigrateMock.pm
new file mode 100644
index 0000000..efd6130
--- /dev/null
+++ b/test/MigrationTest/QemuMigrateMock.pm
@@ -0,0 +1,319 @@
+package MigrationTest::QemuMigrateMock;
+
+use strict;
+use warnings;
+
+use JSON;
+use Test::MockModule;
+
+use MigrationTest::Shared;
+
+use PVE::API2::Qemu;
+use PVE::Storage;
+use PVE::Tools qw(file_set_contents file_get_contents);
+
+use PVE::CLIHandler;
+use base qw(PVE::CLIHandler);
+
+my $RUN_DIR_PATH = $ENV{RUN_DIR_PATH} or die "no RUN_DIR_PATH set\n";
+my $QM_LIB_PATH = $ENV{QM_LIB_PATH} or die "no QM_LIB_PATH set\n";
+
+my $source_volids = decode_json(file_get_contents("${RUN_DIR_PATH}/source_volids"));
+my $source_vdisks = decode_json(file_get_contents("${RUN_DIR_PATH}/source_vdisks"));
+my $vm_status = decode_json(file_get_contents("${RUN_DIR_PATH}/vm_status"));
+my $expected_calls = decode_json(file_get_contents("${RUN_DIR_PATH}/expected_calls"));
+my $fail_config = decode_json(file_get_contents("${RUN_DIR_PATH}/fail_config"));
+my $storage_migrate_map = decode_json(file_get_contents("${RUN_DIR_PATH}/storage_migrate_map"));
+my $migrate_params = decode_json(file_get_contents("${RUN_DIR_PATH}/migrate_params"));
+
+my $test_vmid = $migrate_params->{vmid};
+my $test_target = $migrate_params->{target};
+my $test_opts = $migrate_params->{opts};
+my $current_log = '';
+
+my $vm_stop_executed = 0;
+
+# mocked modules
+
+my $inotify_module = Test::MockModule->new("PVE::INotify");
+$inotify_module->mock(
+    nodename => sub {
+       return 'pve0';
+    },
+);
+
+$MigrationTest::Shared::qemu_config_module->mock(
+    move_config_to_node => sub {
+	my ($self, $vmid, $target) = @_;
+	die "moving wrong config: '$vmid'\n" if $vmid ne $test_vmid;
+	die "moving config to wrong node: '$target'\n" if $target ne $test_target;
+	delete $expected_calls->{move_config_to_node};
+    },
+);
+
+my $qemu_migrate_module = Test::MockModule->new("PVE::QemuMigrate");
+$qemu_migrate_module->mock(
+    finish_tunnel => sub {
+	delete $expected_calls->{'finish_tunnel'};
+	return;
+    },
+    fork_tunnel => sub {
+	die "fork_tunnel (mocked) - implement me\n"; # currently no call should lead here
+    },
+    read_tunnel => sub {
+	die "read_tunnel (mocked) - implement me\n"; # currently no call should lead here
+    },
+    start_remote_tunnel => sub {
+	my ($self, $raddr, $rport, $ruri, $unix_socket_info) = @_;
+	$expected_calls->{'finish_tunnel'} = 1;
+	$self->{tunnel} =  {
+	    writer => "mocked",
+	    reader => "mocked",
+	    pid => 123456,
+	    version => 1,
+	};
+    },
+    write_tunnel => sub {
+	my ($self, $tunnel, $timeout, $command) = @_;
+
+	if ($command =~ m/^resume (\d+)$/) {
+	    my $vmid = $1;
+	    die "resuming wrong VM '$vmid'\n" if $vmid ne $test_vmid;
+	    return;
+	}
+	die "write_tunnel (mocked) - implement me: $command\n";
+    },
+    log => sub {
+	my ($self, $level, $message) = @_;
+	$current_log .= "$level: $message\n";
+    },
+    mon_cmd => sub {
+	my ($vmid, $command, %params) = @_;
+
+	if ($command eq 'nbd-server-start') {
+	    return;
+	} elsif ($command eq 'block-dirty-bitmap-add') {
+	    my $drive = $params{node};
+	    delete $expected_calls->{"block-dirty-bitmap-add-${drive}"};
+	    return;
+	} elsif ($command eq 'block-dirty-bitmap-remove') {
+	    return;
+	} elsif ($command eq 'query-migrate') {
+	    return { status => 'failed' } if $fail_config->{'query-migrate'};
+	    return { status => 'completed' };
+	} elsif ($command eq 'migrate') {
+	    return;
+	} elsif ($command eq 'migrate-set-parameters') {
+	    return;
+	} elsif ($command eq 'migrate_cancel') {
+	    return;
+	}
+	die "mon_cmd (mocked) - implement me: $command";
+    },
+    transfer_replication_state => sub {
+	delete $expected_calls->{transfer_replication_state};
+    },
+    switch_replication_job_target => sub {
+	delete $expected_calls->{switch_replication_job_target};
+    },
+);
+
+$MigrationTest::Shared::qemu_server_module->mock(
+    kvm_user_version => sub {
+	return "5.0.0";
+    },
+    qemu_blockjobs_cancel => sub {
+	return;
+    },
+    qemu_drive_mirror => sub {
+	my ($vmid, $drive, $dst_volid, $vmiddst, $is_zero_initialized, $jobs, $completion, $qga, $bwlimit, $src_bitmap) = @_;
+
+	die "drive_mirror with wrong vmid: '$vmid'\n" if $vmid ne $test_vmid;
+	die "qemu_drive_mirror '$drive' error\n" if $fail_config->{qemu_drive_mirror}
+						 && $fail_config->{qemu_drive_mirror} eq $drive;
+
+	my $nbd_info = decode_json(file_get_contents("${RUN_DIR_PATH}/nbd_info"));
+	die "target does not expect drive mirror for '$drive'\n"
+	    if !defined($nbd_info->{$drive});
+	delete $nbd_info->{$drive};
+	file_set_contents("${RUN_DIR_PATH}/nbd_info", to_json($nbd_info));
+    },
+    qemu_drive_mirror_monitor => sub {
+	return;
+    },
+    set_migration_caps => sub {
+	return;
+    },
+    vm_stop => sub {
+	$vm_stop_executed = 1;
+	delete $expected_calls->{'vm_stop'};
+    },
+);
+
+my $qemu_server_cpuconfig_module = Test::MockModule->new("PVE::QemuServer::CPUConfig");
+$qemu_server_cpuconfig_module->mock(
+    get_cpu_from_running_vm => sub {
+	die "invalid test: if you specify a custom CPU model you need to " .
+	    "specify runningcpu as well\n" if !defined($vm_status->{runningcpu});
+	return $vm_status->{runningcpu};
+    }
+);
+
+my $qemu_server_helpers_module = Test::MockModule->new("PVE::QemuServer::Helpers");
+$qemu_server_helpers_module->mock(
+    vm_running_locally => sub {
+	return $vm_status->{running} && !$vm_stop_executed;
+    },
+);
+
+my $qemu_server_machine_module = Test::MockModule->new("PVE::QemuServer::Machine");
+$qemu_server_machine_module->mock(
+    qemu_machine_pxe => sub {
+	die "invalid test: no runningmachine specified\n"
+	    if !defined($vm_status->{runningmachine});
+	return $vm_status->{runningmachine};
+    },
+);
+
+my $ssh_info_module = Test::MockModule->new("PVE::SSHInfo");
+$ssh_info_module->mock(
+    get_ssh_info => sub {
+	my ($node, $network_cidr) = @_;
+	return {
+	    ip => '1.2.3.4',
+	    name => $node,
+	    network => $network_cidr,
+	};
+    },
+);
+
+$MigrationTest::Shared::storage_module->mock(
+    storage_migrate => sub {
+	my ($cfg, $volid, $target_sshinfo, $target_storeid, $opts, $logfunc) = @_;
+
+	die "storage_migrate '$volid' error\n" if $fail_config->{storage_migrate}
+					       && $fail_config->{storage_migrate} eq $volid;
+
+	my ($storeid, $volname) = PVE::Storage::parse_volume_id($volid);
+
+	die "invalid test: need to add entry for '$volid' to storage_migrate_map\n"
+	    if $storeid ne $target_storeid && !defined($storage_migrate_map->{$volid});
+
+	my $target_volname = $storage_migrate_map->{$volid} // $opts->{target_volname} // $volname;
+	my $target_volid = "${target_storeid}:${target_volname}";
+	MigrationTest::Shared::add_target_volid($target_volid);
+
+	return $target_volid;
+    },
+    vdisk_list => sub { # expects vmid to be set
+	my ($cfg, $storeid, $vmid, $vollist) = @_;
+
+	my @storeids = defined($storeid) ? ($storeid) : keys %{$source_vdisks};
+
+	my $res = {};
+	foreach my $storeid (@storeids) {
+	    my $list_for_storeid = $source_vdisks->{$storeid};
+	    my @list_for_vm = grep { $_->{vmid} eq $vmid } @{$list_for_storeid};
+	    $res->{$storeid} = \@list_for_vm;
+	}
+	return $res;
+    },
+    vdisk_free => sub {
+	my ($scfg, $volid) = @_;
+
+	die "vdisk_free '$volid' error\n" if defined($fail_config->{vdisk_free})
+					  && $fail_config->{vdisk_free} eq $volid;
+
+	delete $source_volids->{$volid};
+    },
+);
+
+$MigrationTest::Shared::tools_module->mock(
+    get_host_address_family => sub {
+	die "get_host_address_family (mocked) - implement me\n"; # currently no call should lead here
+    },
+    next_migrate_port => sub {
+	die "next_migrate_port (mocked) - implement me\n"; # currently no call should lead here
+    },
+    run_command => sub {
+	my ($cmd_tail, %param) = @_;
+
+	my $cmd_msg = to_json($cmd_tail);
+
+	my $cmd = shift @{$cmd_tail};
+
+	if ($cmd eq '/usr/bin/ssh') {
+	    while (scalar(@{$cmd_tail})) {
+		$cmd = shift @{$cmd_tail};
+		if ($cmd eq '/bin/true') {
+		    return 0;
+		} elsif ($cmd eq 'qm') {
+		    $cmd = shift @{$cmd_tail};
+		    if ($cmd eq 'start') {
+			delete $expected_calls->{ssh_qm_start};
+
+			delete $vm_status->{runningmachine};
+			delete $vm_status->{runningcpu};
+
+			my @options = ( @{$cmd_tail} );
+			while (scalar(@options)) {
+			    my $opt = shift @options;
+			    if ($opt eq '--machine') {
+				$vm_status->{runningmachine} = shift @options;
+			    } elsif ($opt eq '--force-cpu') {
+				$vm_status->{runningcpu} = shift @options;
+			    }
+			}
+
+			return $MigrationTest::Shared::tools_module->original('run_command')->([
+			    '/usr/bin/perl',
+			    "-I${QM_LIB_PATH}",
+			    "-I${QM_LIB_PATH}/test",
+			    "${QM_LIB_PATH}/test/MigrationTest/QmMock.pm",
+			    'start',
+			    @{$cmd_tail},
+			    ], %param);
+
+		    } elsif ($cmd eq 'nbdstop') {
+			delete $expected_calls->{ssh_nbdstop};
+			return 0;
+		    } elsif ($cmd eq 'resume') {
+			return 0;
+		    } elsif ($cmd eq 'unlock') {
+			my $vmid = shift @{$cmd_tail};
+			die "unlocking wrong vmid: $vmid\n" if $vmid ne $test_vmid;
+			PVE::QemuConfig->remove_lock($vmid);
+			return 0;
+		    } elsif ($cmd eq 'stop') {
+			return 0;
+		    }
+		    die "run_command (mocked) ssh qm command - implement me: ${cmd_msg}";
+		} elsif ($cmd eq 'pvesm') {
+		    $cmd = shift @{$cmd_tail};
+		    if ($cmd eq 'free') {
+			my $volid = shift @{$cmd_tail};
+			return 1 if $fail_config->{ssh_pvesm_free}
+				 && $fail_config->{ssh_pvesm_free} eq $volid;
+			MigrationTest::Shared::remove_target_volid($volid);
+			return 0;
+		    }
+		    die "run_command (mocked) ssh pvesm command - implement me: ${cmd_msg}";
+		}
+	    }
+	    die "run_command (mocked) ssh command - implement me: ${cmd_msg}";
+	}
+	die "run_command (mocked) - implement me: ${cmd_msg}";
+    },
+);
+
+eval { PVE::QemuMigrate->migrate($test_target, undef, $test_vmid, $test_opts) };
+my $error = $@;
+
+file_set_contents("${RUN_DIR_PATH}/source_volids", to_json($source_volids));
+file_set_contents("${RUN_DIR_PATH}/vm_status", to_json($vm_status));
+file_set_contents("${RUN_DIR_PATH}/expected_calls", to_json($expected_calls));
+file_set_contents("${RUN_DIR_PATH}/log", $current_log);
+
+die $error if $error;
+
+1;
diff --git a/test/MigrationTest/QmMock.pm b/test/MigrationTest/QmMock.pm
new file mode 100644
index 0000000..2f1fffc
--- /dev/null
+++ b/test/MigrationTest/QmMock.pm
@@ -0,0 +1,142 @@
+package MigrationTest::QmMock;
+
+use strict;
+use warnings;
+
+use JSON;
+use Test::MockModule;
+
+use MigrationTest::Shared;
+
+use PVE::API2::Qemu;
+use PVE::Storage;
+use PVE::Tools qw(file_set_contents file_get_contents);
+
+use PVE::CLIHandler;
+use base qw(PVE::CLIHandler);
+
+my $RUN_DIR_PATH = $ENV{RUN_DIR_PATH} or die "no RUN_DIR_PATH set\n";
+
+my $target_volids = decode_json(file_get_contents("${RUN_DIR_PATH}/target_volids"));
+my $fail_config = decode_json(file_get_contents("${RUN_DIR_PATH}/fail_config"));
+my $migrate_params = decode_json(file_get_contents("${RUN_DIR_PATH}/migrate_params"));
+my $nodename = $migrate_params->{target};
+
+my $kvm_executed = 0;
+
+sub setup_environment {
+    my $rpcenv = PVE::RPCEnvironment::init('MigrationTest::QmMock', 'cli');
+}
+
+# mock RPCEnvironment directly
+
+sub get_user {
+    return 'root@pam';
+}
+
+sub fork_worker {
+    my ($self, $dtype, $id, $user, $function, $background) = @_;
+    $function->(123456);
+    return '123456';
+}
+
+# mocked modules
+
+my $inotify_module = Test::MockModule->new("PVE::INotify");
+$inotify_module->mock(
+    nodename => sub {
+       return $nodename;
+    },
+);
+
+$MigrationTest::Shared::qemu_server_module->mock(
+    nodename => sub {
+	return $nodename;
+    },
+    config_to_command => sub {
+	return [ 'mocked_kvm_command' ];
+    },
+);
+
+my $qemu_server_helpers_module = Test::MockModule->new("PVE::QemuServer::Helpers");
+$qemu_server_helpers_module->mock(
+    vm_running_locally => sub {
+	return $kvm_executed;
+    },
+);
+
+# to make sure we get valid and predictable names
+my $disk_counter = 10;
+
+$MigrationTest::Shared::storage_module->mock(
+    vdisk_alloc => sub {
+	my ($cfg, $storeid, $vmid, $fmt, $name, $size) = @_;
+
+	die "vdisk_alloc (mocked) - name is not expected to be set - implement me\n"
+	    if defined($name);
+
+	my $name_without_extension = "vm-${vmid}-disk-${disk_counter}";
+	$disk_counter++;
+
+	my $volid;
+	my $scfg = PVE::Storage::storage_config($cfg, $storeid);
+	if ($scfg->{path}) {
+	    $volid = "${storeid}:${vmid}/${name_without_extension}.${fmt}";
+	} else {
+	    $volid = "${storeid}:${name_without_extension}";
+	}
+
+	die "vdisk_alloc '$volid' error\n" if $fail_config->{vdisk_alloc}
+					   && $fail_config->{vdisk_alloc} eq $volid;
+
+	MigrationTest::Shared::add_target_volid($volid);
+
+	return $volid;
+    },
+);
+
+$MigrationTest::Shared::qemu_server_module->mock(
+    mon_cmd => sub {
+	my ($vmid, $command, %params) = @_;
+
+	if ($command eq 'nbd-server-start') {
+	    return;
+	} elsif ($command eq 'nbd-server-add') {
+	    return;
+	} elsif ($command eq 'qom-set') {
+	    return;
+	}
+	die "mon_cmd (mocked) - implement me: $command";
+    },
+    run_command => sub {
+	my ($cmd_full, %param) = @_;
+
+	my $cmd_msg = to_json($cmd_full);
+
+	my $cmd = shift @{$cmd_full};
+
+	if ($cmd eq '/bin/systemctl') {
+	    return;
+	} elsif ($cmd eq 'mocked_kvm_command') {
+	    $kvm_executed = 1;
+	    return 0;
+	}
+	die "run_command (mocked) - implement me: ${cmd_msg}";
+    },
+    set_migration_caps => sub {
+	return;
+    },
+    vm_migrate_alloc_nbd_disks => sub {
+	my $nbd = $MigrationTest::Shared::qemu_server_module->original('vm_migrate_alloc_nbd_disks')->(@_);
+	file_set_contents("${RUN_DIR_PATH}/nbd_info", to_json($nbd));
+	return $nbd;
+    },
+);
+
+our $cmddef = {
+    start => [ "PVE::API2::Qemu", 'vm_start', ['vmid'], { node => $nodename } ],
+};
+
+MigrationTest::QmMock->run_cli_handler();
+
+1;
diff --git a/test/MigrationTest/Shared.pm b/test/MigrationTest/Shared.pm
new file mode 100644
index 0000000..c09562c
--- /dev/null
+++ b/test/MigrationTest/Shared.pm
@@ -0,0 +1,170 @@
+package MigrationTest::Shared;
+
+use strict;
+use warnings;
+
+use JSON;
+use Test::MockModule;
+use Socket qw(AF_INET);
+
+use PVE::QemuConfig;
+use PVE::Tools qw(file_set_contents file_get_contents lock_file_full);
+
+my $RUN_DIR_PATH = $ENV{RUN_DIR_PATH} or die "no RUN_DIR_PATH set\n";
+
+my $storage_config = decode_json(file_get_contents("${RUN_DIR_PATH}/storage_config"));
+my $replication_config = decode_json(file_get_contents("${RUN_DIR_PATH}/replication_config"));
+my $fail_config = decode_json(file_get_contents("${RUN_DIR_PATH}/fail_config"));
+my $migrate_params = decode_json(file_get_contents("${RUN_DIR_PATH}/migrate_params"));
+my $test_vmid = $migrate_params->{vmid};
+
+# helpers
+
+sub add_target_volid {
+    my ($volid) = @_;
+
+    lock_file_full("${RUN_DIR_PATH}/target_volids.lock", undef, 0, sub {
+	my $target_volids = decode_json(file_get_contents("${RUN_DIR_PATH}/target_volids"));
+	die "target volid already present " if defined($target_volids->{$volid});
+	$target_volids->{$volid} = 1;
+	file_set_contents("${RUN_DIR_PATH}/target_volids", to_json($target_volids));
+    });
+    die $@ if $@;
+}
+
+sub remove_target_volid {
+    my ($volid) = @_;
+
+    lock_file_full("${RUN_DIR_PATH}/target_volids.lock", undef, 0, sub {
+	my $target_volids = decode_json(file_get_contents("${RUN_DIR_PATH}/target_volids"));
+	die "target volid does not exist " if !defined($target_volids->{$volid});
+	delete $target_volids->{$volid};
+	file_set_contents("${RUN_DIR_PATH}/target_volids", to_json($target_volids));
+    });
+    die $@ if $@;
+}
+
+my $mocked_cfs_read_file = sub {
+    my ($file) = @_;
+
+    if ($file eq 'datacenter.cfg') {
+	return {};
+    } elsif ($file eq 'replication.cfg') {
+	return $replication_config;
+    }
+    die "cfs_read_file (mocked) - implement me: $file\n";
+};
+
+# mocked modules
+
+our $cluster_module = Test::MockModule->new("PVE::Cluster");
+$cluster_module->mock(
+    cfs_read_file => $mocked_cfs_read_file,
+    check_cfs_quorum => sub {
+	return 1;
+    },
+);
+
+our $ha_config_module = Test::MockModule->new("PVE::HA::Config");
+$ha_config_module->mock(
+    vm_is_ha_managed => sub {
+	return 0;
+    },
+);
+
+our $qemu_config_module = Test::MockModule->new("PVE::QemuConfig");
+$qemu_config_module->mock(
+    assert_config_exists_on_node => sub {
+	return;
+    },
+    load_config => sub {
+	my ($class, $vmid, $node) = @_;
+	die "trying to load wrong config: '$vmid'\n" if $vmid ne $test_vmid;
+	return decode_json(file_get_contents("${RUN_DIR_PATH}/vm_config"));
+    },
+    lock_config => sub { # no need to lock here, the lock is local to the node
+	my ($self, $vmid, $code, @param) = @_;
+	return $code->(@param);
+    },
+    write_config => sub {
+	my ($class, $vmid, $conf) = @_;
+	die "trying to write wrong config: '$vmid'\n" if $vmid ne $test_vmid;
+	file_set_contents("${RUN_DIR_PATH}/vm_config", to_json($conf));
+    },
+);
+
+our $qemu_server_cloudinit_module = Test::MockModule->new("PVE::QemuServer::Cloudinit");
+$qemu_server_cloudinit_module->mock(
+    generate_cloudinitconfig => sub {
+	return;
+    },
+);
+
+our $qemu_server_module = Test::MockModule->new("PVE::QemuServer");
+$qemu_server_module->mock(
+    clear_reboot_request => sub {
+	return 1;
+    },
+    get_efivars_size => sub {
+	 return 128 * 1024;
+    },
+);
+
+our $replication_module = Test::MockModule->new("PVE::Replication");
+$replication_module->mock(
+    run_replication => sub {
+	die "run_replication error" if $fail_config->{run_replication};
+
+	my $vm_config = PVE::QemuConfig->load_config($test_vmid);
+	return PVE::QemuConfig->get_replicatable_volumes(
+	    $storage_config,
+	    $test_vmid,
+	    $vm_config,
+	);
+    },
+);
+
+our $replication_config_module = Test::MockModule->new("PVE::ReplicationConfig");
+$replication_config_module->mock(
+    cfs_read_file => $mocked_cfs_read_file,
+);
+
+our $storage_module = Test::MockModule->new("PVE::Storage");
+$storage_module->mock(
+    activate_volumes => sub {
+	return 1;
+    },
+    deactivate_volumes => sub {
+	return 1;
+    },
+    config => sub {
+	return $storage_config;
+    },
+    get_bandwitdth_limit => sub {
+	return 123456;
+    },
+);
+
+our $systemd_module = Test::MockModule->new("PVE::Systemd");
+$systemd_module->mock(
+    wait_for_unit_removed => sub {
+	return;
+    },
+    enter_systemd_scope => sub {
+	return;
+    },
+);
+
+my $migrate_port_counter = 60000;
+
+our $tools_module = Test::MockModule->new("PVE::Tools");
+$tools_module->mock(
+    get_host_address_family => sub {
+	return AF_INET;
+    },
+    next_migrate_port => sub {
+	return $migrate_port_counter++;
+    },
+);
+
+1;
diff --git a/test/run_qemu_migrate_tests.pl b/test/run_qemu_migrate_tests.pl
new file mode 100755
index 0000000..1e3f135
--- /dev/null
+++ b/test/run_qemu_migrate_tests.pl
@@ -0,0 +1,1594 @@
+#!/usr/bin/perl
+
+use strict;
+use warnings;
+
+use JSON;
+use Test::More;
+use Test::MockModule;
+
+use PVE::JSONSchema;
+use PVE::Tools qw(file_set_contents file_get_contents run_command);
+
+my $QM_LIB_PATH = '..';
+my $MIGRATE_LIB_PATH = '..';
+my $RUN_DIR_PATH = './MigrationTest/run/';
+
+# test configuration shared by all tests
+
+my $replication_config = {
+    'ids' => {
+	'105-0' => {
+	    'guest' => '105',
+	    'id' => '105-0',
+	    'jobnum' => '0',
+	    'source' => 'pve0',
+	    'target' => 'pve2',
+	    'type' => 'local'
+	},
+    },
+    'order' => {
+	'105-0' => 1,
+    }
+};
+
+my $storage_config = {
+    ids => {
+	local => {
+	    content => {
+		images => 1,
+	    },
+	    path => "/var/lib/vz",
+	    type => "dir",
+	    shared => 0,
+	},
+	"local-lvm" => {
+	    content => {
+		images => 1,
+	    },
+	    nodes => {
+		pve0 => 1,
+		pve1 => 1,
+	    },
+	    type => "lvmthin",
+	    thinpool => "data",
+	    vgname => "pve",
+	},
+	"local-zfs" => {
+	    content => {
+		images => 1,
+		rootdir => 1,
+	    },
+	    pool => "rpool/data",
+	    sparse => 1,
+	    type => "zfspool",
+	},
+	"rbd-store" => {
+	    monhost => "127.0.0.42,127.0.0.21,::1",
+	    content => {
+		images => 1,
+	    },
+	    type => "rbd",
+	    pool => "cpool",
+	    username => "admin",
+	    shared => 1,
+	},
+	"local-dir" => {
+	    content => {
+		images => 1,
+	    },
+	    path => "/some/dir/",
+	    type => "dir",
+	},
+	"other-dir" => {
+	    content => {
+		images => 1,
+	    },
+	    path => "/some/other/dir/",
+	    type => "dir",
+	},
+    },
+};
+
+my $vm_configs = {
+     105 => {
+	'bootdisk' => 'scsi0',
+	'cores' => 1,
+	'ide0' => 'local-zfs:vm-105-disk-1,size=103M',
+	'ide2' => 'none,media=cdrom',
+	'memory' => 512,
+	'name' => 'Copy-of-VM-newapache',
+	'net0' => 'virtio=4A:A3:E4:4C:CF:F0,bridge=vmbr0,firewall=1',
+	'numa' => 0,
+	'ostype' => 'l26',
+	'parent' => 'ohsnap',
+	'pending' => {},
+	'scsi0' => 'local-zfs:vm-105-disk-0,size=4G',
+	'scsihw' => 'virtio-scsi-pci',
+	'smbios1' => 'uuid=1ddfe18b-77e0-47f6-a4bd-f1761bf6d763',
+	'snapshots' => {
+	    'ohsnap' => {
+		'bootdisk' => 'scsi0',
+		'cores' => 1,
+		'ide2' => 'none,media=cdrom',
+		'memory' => 512,
+		'name' => 'Copy-of-VM-newapache',
+		'net0' => 'virtio=4A:A3:E4:4C:CF:F0,bridge=vmbr0,firewall=1',
+		'numa' => 0,
+		'ostype' => 'l26',
+		'scsi0' => 'local-zfs:vm-105-disk-0,size=4G',
+		'scsihw' => 'virtio-scsi-pci',
+		'smbios1' => 'uuid=1ddfe18b-77e0-47f6-a4bd-f1761bf6d763',
+		'snaptime' => 1580976924,
+		'sockets' => 1,
+		'startup' => 'order=2',
+		'vmgenid' => '4eb1d535-9381-4ddc-a8aa-af50c4d9177b'
+	    },
+	},
+	'sockets' => 1,
+	'startup' => 'order=2',
+	'vmgenid' => '4eb1d535-9381-4ddc-a8aa-af50c4d9177b',
+    },
+    149 => {
+	'agent' => '0',
+	'bootdisk' => 'scsi0',
+	'cores' => 1,
+	'hotplug' => 'disk,network,usb,memory,cpu',
+	'ide2' => 'none,media=cdrom',
+	'memory' => 4096,
+	'name' => 'asdf',
+	'net0' => 'virtio=52:5D:7E:62:85:97,bridge=vmbr1',
+	'numa' => 1,
+	'ostype' => 'l26',
+	'scsi0' => 'local-lvm:vm-149-disk-0,format=raw,size=4G',
+	'scsi1' => 'local-dir:149/vm-149-disk-0.qcow2,format=qcow2,size=1G',
+	'scsihw' => 'virtio-scsi-pci',
+	'snapshots' => {},
+	'smbios1' => 'uuid=e980bd43-a405-42e2-b5f4-31efe6517460',
+	'sockets' => 1,
+	'startup' => 'order=2',
+	'vmgenid' => '36c6c50c-6ef5-4adc-9b6f-6ba9c8071db0',
+    },
+    341 => {
+	'arch' => 'aarch64',
+	'bootdisk' => 'scsi0',
+	'cores' => 1,
+	'efidisk0' => 'local-lvm:vm-341-disk-0',
+	'ide2' => 'none,media=cdrom',
+	'ipconfig0' => 'ip=103.214.69.10/25,gw=103.214.69.1',
+	'memory' => 4096,
+	'name' => 'VM1033',
+	'net0' => 'virtio=4E:F1:82:6D:D7:4B,bridge=vmbr0,firewall=1,rate=10',
+	'numa' => 0,
+	'ostype' => 'l26',
+	'scsi0' => 'rbd-store:vm-341-disk-0,size=1G',
+	'scsihw' => 'virtio-scsi-pci',
+	'snapshots' => {},
+	'smbios1' => 'uuid=e01e4c73-46f1-47c8-af79-288fdf6b7462',
+	'sockets' => 2,
+	'vmgenid' => 'af47c000-eb0c-48e8-8991-ca4593cd6916',
+    },
+    1033 => {
+	'bootdisk' => 'scsi0',
+	'cores' => 1,
+	'ide0' => 'rbd-store:vm-1033-cloudinit,media=cdrom,size=4M',
+	'ide2' => 'none,media=cdrom',
+	'ipconfig0' => 'ip=103.214.69.10/25,gw=103.214.69.1',
+	'memory' => 4096,
+	'name' => 'VM1033',
+	'net0' => 'virtio=4E:F1:82:6D:D7:4B,bridge=vmbr0,firewall=1,rate=10',
+	'numa' => 0,
+	'ostype' => 'l26',
+	'scsi0' => 'rbd-store:vm-1033-disk-1,size=1G',
+	'scsihw' => 'virtio-scsi-pci',
+	'snapshots' => {},
+	'smbios1' => 'uuid=e01e4c73-46f1-47c8-af79-288fdf6b7462',
+	'sockets' => 2,
+	'vmgenid' => 'af47c000-eb0c-48e8-8991-ca4593cd6916',
+    },
+    4567 => {
+	'bootdisk' => 'scsi0',
+	'cores' => 1,
+	'ide2' => 'none,media=cdrom',
+	'memory' => 512,
+	'name' => 'snapme',
+	'net0' => 'virtio=A6:D1:F1:EB:7B:C2,bridge=vmbr0,firewall=1',
+	'numa' => 0,
+	'ostype' => 'l26',
+	'parent' => 'snap1',
+	'pending' => {},
+	'scsi0' => 'local-dir:4567/vm-4567-disk-0.qcow2,size=4G',
+	'scsihw' => 'virtio-scsi-pci',
+	'smbios1' => 'uuid=2925fdec-a066-4228-b46b-eef8662f5e74',
+	'snapshots' => {
+	    'snap1' => {
+		'bootdisk' => 'scsi0',
+		'cores' => 1,
+		'ide2' => 'none,media=cdrom',
+		'memory' => 512,
+		'name' => 'snapme',
+		'net0' => 'virtio=A6:D1:F1:EB:7B:C2,bridge=vmbr0,firewall=1',
+		'numa' => 0,
+		'ostype' => 'l26',
+		'runningcpu' => 'kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep',
+		'runningmachine' => 'pc-i440fx-5.0+pve0',
+		'scsi0' => 'local-dir:4567/vm-4567-disk-0.qcow2,size=4G',
+		'scsihw' => 'virtio-scsi-pci',
+		'smbios1' => 'uuid=2925fdec-a066-4228-b46b-eef8662f5e74',
+		'snaptime' => 1595928799,
+		'sockets' => 1,
+		'startup' => 'order=2',
+		'vmgenid' => '932b227a-8a39-4ede-955a-dbd4bc4385ed',
+		'vmstate' => 'local-dir:4567/vm-4567-state-snap1.raw',
+	    },
+	    'snap2' => {
+		'bootdisk' => 'scsi0',
+		'cores' => 1,
+		'ide2' => 'none,media=cdrom',
+		'memory' => 512,
+		'name' => 'snapme',
+		'net0' => 'virtio=A6:D1:F1:EB:7B:C2,bridge=vmbr0,firewall=1',
+		'numa' => 0,
+		'ostype' => 'l26',
+		'parent' => 'snap1',
+		'runningcpu' => 'kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep',
+		'runningmachine' => 'pc-i440fx-5.0+pve0',
+		'scsi0' => 'local-dir:4567/vm-4567-disk-0.qcow2,size=4G',
+		'scsi1' => 'local-zfs:vm-4567-disk-0,size=1G',
+		'scsihw' => 'virtio-scsi-pci',
+		'smbios1' => 'uuid=2925fdec-a066-4228-b46b-eef8662f5e74',
+		'snaptime' => 1595928871,
+		'sockets' => 1,
+		'startup' => 'order=2',
+		'vmgenid' => '932b227a-8a39-4ede-955a-dbd4bc4385ed',
+		'vmstate' => 'local-dir:4567/vm-4567-state-snap2.raw',
+	    },
+	},
+	'sockets' => 1,
+	'startup' => 'order=2',
+	'unused0' => 'local-zfs:vm-4567-disk-0',
+	'vmgenid' => 'e698e60c-9278-4dd9-941f-416075383f2a',
+	},
+};
+
+my $source_vdisks = {
+    'local-dir' => [
+	{
+	    'ctime' => 1589439681,
+	    'format' => 'qcow2',
+	    'parent' => undef,
+	    'size' => 1073741824,
+	    'used' => 335872,
+	    'vmid' => '149',
+	    'volid' => 'local-dir:149/vm-149-disk-0.qcow2',
+	},
+	{
+	    'ctime' => 1595928898,
+	    'format' => 'qcow2',
+	    'parent' => undef,
+	    'size' => 4294967296,
+	    'used' => 1811664896,
+	    'vmid' => '4567',
+	    'volid' => 'local-dir:4567/vm-4567-disk-0.qcow2',
+	},
+	{
+	    'ctime' => 1595928800,
+	    'format' => 'raw',
+	    'parent' => undef,
+	    'size' => 274666496,
+	    'used' => 274669568,
+	    'vmid' => '4567',
+	    'volid' => 'local-dir:4567/vm-4567-state-snap1.raw',
+	},
+	{
+	    'ctime' => 1595928872,
+	    'format' => 'raw',
+	    'parent' => undef,
+	    'size' => 273258496,
+	    'used' => 273260544,
+	    'vmid' => '4567',
+	    'volid' => 'local-dir:4567/vm-4567-state-snap2.raw',
+	},
+    ],
+    'local-lvm' => [
+	{
+	    'ctime' => '1589277334',
+	    'format' => 'raw',
+	    'size' => 4294967296,
+	    'vmid' => '149',
+	    'volid' => 'local-lvm:vm-149-disk-0',
+	},
+	{
+	    'ctime' => '1589277334',
+	    'format' => 'raw',
+	    'size' => 4194304,
+	    'vmid' => '341',
+	    'volid' => 'local-lvm:vm-341-disk-0',
+	},
+    ],
+    'local-zfs' => [
+	{
+	    'ctime' => '1589277334',
+	    'format' => 'raw',
+	    'size' => 4294967296,
+	    'vmid' => '105',
+	    'volid' => 'local-zfs:vm-105-disk-0',
+	},
+	{
+	    'ctime' => '1589277334',
+	    'format' => 'raw',
+	    'size' => 108003328,
+	    'vmid' => '105',
+	    'volid' => 'local-zfs:vm-105-disk-1',
+	},
+	{
+	    'format' => 'raw',
+	    'name' => 'vm-4567-disk-0',
+	    'parent' => undef,
+	    'size' => 1073741824,
+	    'vmid' => '4567',
+	    'volid' => 'local-zfs:vm-4567-disk-0',
+	},
+    ],
+    'rbd-store' => [
+	{
+	    'ctime' => '1589277334',
+	    'format' => 'raw',
+	    'size' => 1073741824,
+	    'vmid' => '1033',
+	    'volid' => 'rbd-store:vm-1033-disk-1',
+	},
+	{
+	    'ctime' => '1589277334',
+	    'format' => 'raw',
+	    'size' => 1073741824,
+	    'vmid' => '1033',
+	    'volid' => 'rbd-store:vm-1033-cloudinit',
+	},
+    ],
+};
+
+my $default_expected_calls_online = {
+    move_config_to_node => 1,
+    ssh_qm_start => 1,
+    vm_stop => 1,
+};
+
+my $default_expected_calls_offline = {
+    move_config_to_node => 1,
+};
+
+my $replicated_expected_calls_online = {
+    %{$default_expected_calls_online},
+    transfer_replication_state => 1,
+    switch_replication_job_target => 1,
+};
+
+my $replicated_expected_calls_offline = {
+    %{$default_expected_calls_offline},
+    transfer_replication_state => 1,
+    switch_replication_job_target => 1,
+};
+
+# helpers
+
+sub get_patched_config {
+    my ($vmid, $patch) = @_;
+
+    my $new_config = { %{$vm_configs->{$vmid}} };
+    patch_config($new_config, $patch) if defined($patch);
+
+    return $new_config;
+}
+
+sub patch_config {
+    my ($config, $patch) = @_;
+
+    foreach my $key (keys %{$patch}) {
+	if ($key eq 'snapshots' && defined($patch->{$key})) {
+	    my $new_snapshot_configs = {};
+	    foreach my $snap (keys %{$patch->{snapshots}}) {
+		my $new_snapshot_config = { %{$config->{snapshots}->{$snap}} };
+		patch_config($new_snapshot_config, $patch->{snapshots}->{$snap});
+		$new_snapshot_configs->{$snap} = $new_snapshot_config;
+	    }
+	    $config->{snapshots} = $new_snapshot_configs;
+	} elsif (defined($patch->{$key})) {
+	    $config->{$key} = $patch->{$key};
+	} else { # use undef value for deletion
+	    delete $config->{$key};
+	}
+    }
+}
+
+sub local_volids_for_vm {
+    my ($vmid) = @_;
+
+    my $res = {};
+    foreach my $storeid (keys %{$source_vdisks}) {
+	next if $storage_config->{ids}->{$storeid}->{shared};
+	$res = {
+	    %{$res},
+	    map { $_->{vmid} eq $vmid ? ($_->{volid} => 1) : () } @{$source_vdisks->{$storeid}}
+	};
+    }
+    return $res;
+}
+
+my $tests = [
+# each test consists of the following:
+# name           - unique name for the test
+# target         - hostname of target node
+# vmid           - ID of the VM to migrate
+# opts           - options for the migrate() call
+# target_volids  - hash of volids on the target at the beginning
+# vm_status      - hash with running, runningmachine and optionally runningcpu
+# expected_calls - hash whose keys are calls which are required
+#                  to be made if the migration gets far enough
+# expect_die     - expect the migration call to fail, and an error message
+#                  matching the specified text in the log
+# expected       - hash consisting of:
+#                  source_volids    - hash of volids expected on the source
+#                  target_volids    - hash of volids expected on the target
+#                  vm_config        - vm configuration hash
+#                  vm_status        - hash with running, runningmachine and optionally runningcpu
+    {
+	# NOTE get_efivars_size is mocked and returns 128K
+	name => '341_running_efidisk_targetstorage_dir',
+	target => 'pve1',
+	vmid => 341,
+	vm_status => {
+	    running => 1,
+	    runningmachine => 'pc-q35-5.0+pve0',
+	},
+	opts => {
+	    online => 1,
+	    'with-local-disks' => 1,
+	    targetstorage => 'local-dir',
+	},
+	expected_calls => $default_expected_calls_online,
+	expected => {
+	    source_volids => {},
+	    target_volids => {
+		'local-dir:341/vm-341-disk-10.raw' => 1,
+	    },
+	    vm_config => get_patched_config(341, {
+		efidisk0 => 'local-dir:341/vm-341-disk-10.raw,format=raw,size=128K',
+	    }),
+	    vm_status => {
+		running => 1,
+		runningmachine => 'pc-q35-5.0+pve0',
+	    },
+	},
+    },
+    {
+	# NOTE get_efivars_size is mocked and returns 128K
+	name => '341_running_efidisk',
+	target => 'pve1',
+	vmid => 341,
+	vm_status => {
+	    running => 1,
+	    runningmachine => 'pc-q35-5.0+pve0',
+	},
+	opts => {
+	    online => 1,
+	    'with-local-disks' => 1,
+	},
+	expected_calls => $default_expected_calls_online,
+	expected => {
+	    source_volids => {},
+	    target_volids => {
+		'local-lvm:vm-341-disk-10' => 1,
+	    },
+	    vm_config => get_patched_config(341, {
+		efidisk0 => 'local-lvm:vm-341-disk-10,format=raw,size=128K',
+	    }),
+	    vm_status => {
+		running => 1,
+		runningmachine => 'pc-q35-5.0+pve0',
+	    },
+	},
+    },
+    {
+	name => '149_running_vdisk_alloc_and_pvesm_free_fail',
+	target => 'pve1',
+	vmid => 149,
+	vm_status => {
+	    running => 1,
+	    runningmachine => 'pc-q35-5.0+pve0',
+	},
+	opts => {
+	    online => 1,
+	    'with-local-disks' => 1,
+	},
+	fail_config => {
+	    vdisk_alloc => 'local-dir:149/vm-149-disk-11.qcow2',
+	    pvesm_free => 'local-lvm:vm-149-disk-10',
+	},
+	expected_calls => {},
+	expect_die => "remote command failed with exit code",
+	expected => {
+	    source_volids => local_volids_for_vm(149),
+	    target_volids => {
+		'local-lvm:vm-149-disk-10' => 1,
+	    },
+	    vm_config => $vm_configs->{149},
+	    vm_status => {
+		running => 1,
+		runningmachine => 'pc-q35-5.0+pve0',
+	    },
+	},
+    },
+    {
+	name => '149_running_vdisk_alloc_fail',
+	target => 'pve1',
+	vmid => 149,
+	vm_status => {
+	    running => 1,
+	    runningmachine => 'pc-q35-5.0+pve0',
+	},
+	opts => {
+	    online => 1,
+	    'with-local-disks' => 1,
+	},
+	fail_config => {
+	    vdisk_alloc => 'local-lvm:vm-149-disk-10',
+	},
+	expected_calls => {},
+	expect_die => "remote command failed with exit code",
+	expected => {
+	    source_volids => local_volids_for_vm(149),
+	    target_volids => {},
+	    vm_config => $vm_configs->{149},
+	    vm_status => {
+		running => 1,
+		runningmachine => 'pc-q35-5.0+pve0',
+	    },
+	},
+    },
+    {
+	name => '149_vdisk_free_fail',
+	target => 'pve1',
+	vmid => 149,
+	vm_status => {
+	    running => 0,
+	},
+	opts => {
+	    'with-local-disks' => 1,
+	},
+	fail_config => {
+	    'vdisk_free' => 'local-lvm:vm-149-disk-0',
+	},
+	expected_calls => $default_expected_calls_offline,
+	expect_die => "vdisk_free 'local-lvm:vm-149-disk-0' error",
+	expected => {
+	    source_volids => {
+		'local-lvm:vm-149-disk-0' => 1,
+	    },
+	    target_volids => local_volids_for_vm(149),
+	    vm_config => $vm_configs->{149},
+	    vm_status => {
+		running => 0,
+	    },
+	},
+    },
+    {
+	name => '105_replicated_run_replication_fail',
+	target => 'pve2',
+	vmid => 105,
+	vm_status => {
+	    running => 0,
+	},
+	target_volids => local_volids_for_vm(105),
+	fail_config => {
+	    run_replication => 1,
+	},
+	expected_calls => {},
+	expect_die => 'run_replication error',
+	expected => {
+	    source_volids => local_volids_for_vm(105),
+	    target_volids => local_volids_for_vm(105),
+	    vm_config => $vm_configs->{105},
+	    vm_status => {
+		running => 0,
+	    },
+	},
+    },
+    {
+	name => '1033_running_query_migrate_fail',
+	target => 'pve2',
+	vmid => 1033,
+	vm_status => {
+	    running => 1,
+	    runningmachine => 'pc-q35-5.0+pve0',
+	},
+	opts => {
+	    online => 1,
+	},
+	fail_config => {
+	    'query-migrate' => 1,
+	},
+	expected_calls => {},
+	expect_die => 'online migrate failure - aborting',
+	expected => {
+	    source_volids => {},
+	    target_volids => {},
+	    vm_config => $vm_configs->{1033},
+	    vm_status => {
+		running => 1,
+		runningmachine => 'pc-q35-5.0+pve0',
+	    },
+	},
+    },
+    {
+	name => '4567_targetstorage_dirotherdir',
+	target => 'pve1',
+	vmid => 4567,
+	vm_status => {
+	    running => 0,
+	},
+	opts => {
+	    targetstorage => 'local-dir:other-dir,local-zfs:local-zfs',
+	},
+	storage_migrate_map => {
+	    'local-dir:4567/vm-4567-disk-0.qcow2' => '4567/vm-4567-disk-0.qcow2',
+	    'local-dir:4567/vm-4567-state-snap1.raw' => '4567/vm-4567-state-snap1.raw',
+	    'local-dir:4567/vm-4567-state-snap2.raw' => '4567/vm-4567-state-snap2.raw',
+	},
+	expected_calls => $default_expected_calls_offline,
+	expected => {
+	    source_volids => {},
+	    target_volids => {
+		'other-dir:4567/vm-4567-disk-0.qcow2' => 1,
+		'other-dir:4567/vm-4567-state-snap1.raw' => 1,
+		'other-dir:4567/vm-4567-state-snap2.raw' => 1,
+		'local-zfs:vm-4567-disk-0' => 1,
+	    },
+	    vm_config => get_patched_config(4567, {
+		'scsi0' => 'other-dir:4567/vm-4567-disk-0.qcow2,size=4G',
+		snapshots => {
+		    snap1 => {
+			'scsi0' => 'other-dir:4567/vm-4567-disk-0.qcow2,size=4G',
+			'vmstate' => 'other-dir:4567/vm-4567-state-snap1.raw',
+		    },
+		    snap2 => {
+			'scsi0' => 'other-dir:4567/vm-4567-disk-0.qcow2,size=4G',
+			'scsi1' => 'local-zfs:vm-4567-disk-0,size=1G',
+			'vmstate' => 'other-dir:4567/vm-4567-state-snap2.raw',
+		    },
+		},
+	    }),
+	    vm_status => {
+		running => 0,
+	    },
+	},
+    },
+    {
+	name => '4567_running',
+	target => 'pve1',
+	vmid => 4567,
+	vm_status => {
+	    running => 1,
+	    runningmachine => 'pc-i440fx-5.0+pve0',
+	},
+	opts => {
+	    online => 1,
+	    'with-local-disks' => 1,
+	},
+	expected_calls => {},
+	expect_die => 'online storage migration not possible if snapshot exists',
+	expected => {
+	    source_volids => local_volids_for_vm(4567),
+	    target_volids => {},
+	    vm_config => $vm_configs->{4567},
+	    vm_status => {
+		running => 1,
+		runningmachine => 'pc-i440fx-5.0+pve0',
+	    },
+	},
+    },
+    {
+	name => '4567_offline',
+	target => 'pve1',
+	vmid => 4567,
+	vm_status => {
+	    running => 0,
+	},
+	expected_calls => $default_expected_calls_offline,
+	expected => {
+	    source_volids => {},
+	    target_volids => local_volids_for_vm(4567),
+	    vm_config => $vm_configs->{4567},
+	    vm_status => {
+		running => 0,
+	    },
+	},
+    },
+    {
+	# FIXME: Maybe add orphaned drives as unused?
+	name => '149_running_orphaned_disk_targetstorage_zfs',
+	target => 'pve1',
+	vmid => 149,
+	vm_status => {
+	    running => 1,
+	    runningmachine => 'pc-q35-5.0+pve0',
+	},
+	opts => {
+	    online => 1,
+	    'with-local-disks' => 1,
+	    targetstorage => 'local-zfs',
+	},
+	config_patch => {
+	    scsi1 => undef,
+	},
+	storage_migrate_map => {
+	    'local-dir:149/vm-149-disk-0.qcow2' => 'vm-149-disk-0',
+	},
+	expected_calls => $default_expected_calls_online,
+	expected => {
+	    source_volids => {},
+	    target_volids => {
+		'local-zfs:vm-149-disk-10' => 1,
+		'local-zfs:vm-149-disk-0' => 1,
+	    },
+	    vm_config => get_patched_config(149, {
+		scsi0 => 'local-zfs:vm-149-disk-10,format=raw,size=4G',
+		scsi1 => undef,
+	    }),
+	    vm_status => {
+		running => 1,
+		runningmachine => 'pc-q35-5.0+pve0',
+	    },
+	},
+    },
+    {
+	# FIXME: Maybe add orphaned drives as unused?
+	name => '149_running_orphaned_disk',
+	target => 'pve1',
+	vmid => 149,
+	vm_status => {
+	    running => 1,
+	    runningmachine => 'pc-q35-5.0+pve0',
+	},
+	opts => {
+	    online => 1,
+	    'with-local-disks' => 1,
+	},
+	config_patch => {
+	    scsi1 => undef,
+	},
+	storage_migrate_map => {
+	    'local-dir:149/vm-149-disk-0.qcow2' => '149/vm-149-disk-0.qcow2',
+	},
+	expected_calls => $default_expected_calls_online,
+	expected => {
+	    source_volids => {},
+	    target_volids => {
+		'local-lvm:vm-149-disk-10' => 1,
+		'local-dir:149/vm-149-disk-0.qcow2' => 1,
+	    },
+	    vm_config => get_patched_config(149, {
+		scsi0 => 'local-lvm:vm-149-disk-10,format=raw,size=4G',
+		scsi1 => undef,
+	    }),
+	    vm_status => {
+		running => 1,
+		runningmachine => 'pc-q35-5.0+pve0',
+	    },
+	},
+    },
+    {
+	# FIXME: This test is not (yet) a realistic situation, because
+	# storage_migrate currently never changes the format (AFAICT)
+	# But if such migrations become possible, we need to either update
+	# the 'format' property or simply remove it for drives migrated
+	# with storage_migrate (the property is optional, so it shouldn't be a problem)
+	name => '149_targetstorage_map_lvmzfs_defaultlvm',
+	target => 'pve1',
+	vmid => 149,
+	vm_status => {
+	    running => 0,
+	},
+	opts => {
+	    targetstorage => 'local-lvm:local-zfs,local-lvm',
+	},
+	storage_migrate_map => {
+	    'local-lvm:vm-149-disk-0' => 'vm-149-disk-0',
+	    'local-dir:149/vm-149-disk-0.qcow2' => 'vm-149-disk-0',
+	},
+	expected_calls => $default_expected_calls_offline,
+	expected => {
+	    source_volids => {},
+	    target_volids => {
+		'local-zfs:vm-149-disk-0' => 1,
+		'local-lvm:vm-149-disk-0' => 1,
+	    },
+	    vm_config => get_patched_config(149, {
+		scsi0 => 'local-zfs:vm-149-disk-0,format=raw,size=4G',
+		scsi1 => 'local-lvm:vm-149-disk-0,format=qcow2,size=1G',
+	    }),
+	    vm_status => {
+		running => 0,
+	    },
+	},
+    },
+    {
+	# FIXME same as for the previous test
+	name => '149_targetstorage_map_dirzfs_lvmdir',
+	target => 'pve1',
+	vmid => 149,
+	vm_status => {
+	    running => 0,
+	},
+	opts => {
+	    online => 1,
+	    'with-local-disks' => 1,
+	    targetstorage => 'local-dir:local-zfs,local-lvm:local-dir',
+	},
+	storage_migrate_map => {
+	    'local-lvm:vm-149-disk-0' => '149/vm-149-disk-0.raw',
+	    'local-dir:149/vm-149-disk-0.qcow2' => 'vm-149-disk-0',
+	},
+	expected_calls => $default_expected_calls_offline,
+	expected => {
+	    source_volids => {},
+	    target_volids => {
+		'local-dir:149/vm-149-disk-0.raw' => 1,
+		'local-zfs:vm-149-disk-0' => 1,
+	    },
+	    vm_config => get_patched_config(149, {
+		scsi0 => 'local-dir:149/vm-149-disk-0.raw,format=raw,size=4G',
+		scsi1 => 'local-zfs:vm-149-disk-0,format=qcow2,size=1G',
+	    }),
+	    vm_status => {
+		running => 0,
+	    },
+	},
+    },
+    {
+	name => '149_running_targetstorage_map_lvmzfs_defaultlvm',
+	target => 'pve1',
+	vmid => 149,
+	vm_status => {
+	    running => 1,
+	    runningmachine => 'pc-q35-5.0+pve0',
+	},
+	opts => {
+	    online => 1,
+	    'with-local-disks' => 1,
+	    targetstorage => 'local-lvm:local-zfs,local-lvm',
+	},
+	expected_calls => $default_expected_calls_online,
+	expected => {
+	    source_volids => {},
+	    target_volids => {
+		'local-zfs:vm-149-disk-10' => 1,
+		'local-lvm:vm-149-disk-11' => 1,
+	    },
+	    vm_config => get_patched_config(149, {
+		scsi0 => 'local-zfs:vm-149-disk-10,format=raw,size=4G',
+		scsi1 => 'local-lvm:vm-149-disk-11,format=raw,size=1G',
+	    }),
+	    vm_status => {
+		running => 1,
+		runningmachine => 'pc-q35-5.0+pve0',
+	    },
+	},
+    },
+    {
+	name => '149_running_targetstorage_map_lvmzfs_dirdir',
+	target => 'pve1',
+	vmid => 149,
+	vm_status => {
+	    running => 1,
+	    runningmachine => 'pc-q35-5.0+pve0',
+	},
+	opts => {
+	    online => 1,
+	    'with-local-disks' => 1,
+	    targetstorage => 'local-lvm:local-zfs,local-dir:local-dir',
+	},
+	expected_calls => $default_expected_calls_online,
+	expected => {
+	    source_volids => {},
+	    target_volids => {
+		'local-zfs:vm-149-disk-10' => 1,
+		'local-dir:149/vm-149-disk-11.qcow2' => 1,
+	    },
+	    vm_config => get_patched_config(149, {
+		scsi0 => 'local-zfs:vm-149-disk-10,format=raw,size=4G',
+		scsi1 => 'local-dir:149/vm-149-disk-11.qcow2,format=qcow2,size=1G',
+	    }),
+	    vm_status => {
+		running => 1,
+		runningmachine => 'pc-q35-5.0+pve0',
+	    },
+	},
+    },
+    {
+	name => '149_running_targetstorage_zfs',
+	target => 'pve1',
+	vmid => 149,
+	vm_status => {
+	    running => 1,
+	    runningmachine => 'pc-q35-5.0+pve0',
+	},
+	opts => {
+	    online => 1,
+	    'with-local-disks' => 1,
+	    targetstorage => 'local-zfs',
+	},
+	expected_calls => $default_expected_calls_online,
+	expected => {
+	    source_volids => {},
+	    target_volids => {
+		'local-zfs:vm-149-disk-10' => 1,
+		'local-zfs:vm-149-disk-11' => 1,
+	    },
+	    vm_config => get_patched_config(149, {
+		scsi0 => 'local-zfs:vm-149-disk-10,format=raw,size=4G',
+		scsi1 => 'local-zfs:vm-149-disk-11,format=raw,size=1G',
+	    }),
+	    vm_status => {
+		running => 1,
+		runningmachine => 'pc-q35-5.0+pve0',
+	    },
+	},
+    },
+    {
+	name => '149_running_wrong_size',
+	target => 'pve1',
+	vmid => 149,
+	vm_status => {
+	    running => 1,
+	    runningmachine => 'pc-q35-5.0+pve0',
+	},
+	opts => {
+	    online => 1,
+	    'with-local-disks' => 1,
+	},
+	config_patch => {
+	    scsi0 => 'local-lvm:vm-149-disk-0,size=123T',
+	},
+	expected_calls => $default_expected_calls_online,
+	expected => {
+	    source_volids => {},
+	    target_volids => {
+		'local-lvm:vm-149-disk-10' => 1,
+		'local-dir:149/vm-149-disk-11.qcow2' => 1,
+	    },
+	    vm_config => get_patched_config(149, {
+		scsi0 => 'local-lvm:vm-149-disk-10,format=raw,size=4G',
+		scsi1 => 'local-dir:149/vm-149-disk-11.qcow2,format=qcow2,size=1G',
+	    }),
+	    vm_status => {
+		running => 1,
+		runningmachine => 'pc-q35-5.0+pve0',
+	    },
+	},
+    },
+    {
+	name => '149_running_missing_size',
+	target => 'pve1',
+	vmid => 149,
+	vm_status => {
+	    running => 1,
+	    runningmachine => 'pc-q35-5.0+pve0',
+	},
+	opts => {
+	    online => 1,
+	    'with-local-disks' => 1,
+	},
+	config_patch => {
+	    scsi0 => 'local-lvm:vm-149-disk-0',
+	},
+	expected_calls => $default_expected_calls_online,
+	expected => {
+	    source_volids => {},
+	    target_volids => {
+		'local-lvm:vm-149-disk-10' => 1,
+		'local-dir:149/vm-149-disk-11.qcow2' => 1,
+	    },
+	    vm_config => get_patched_config(149, {
+		scsi0 => 'local-lvm:vm-149-disk-10,format=raw,size=4G',
+		scsi1 => 'local-dir:149/vm-149-disk-11.qcow2,format=qcow2,size=1G',
+	    }),
+	    vm_status => {
+		running => 1,
+		runningmachine => 'pc-q35-5.0+pve0',
+	    },
+	},
+    },
+    {
+	name => '105_local_device_shared',
+	target => 'pve1',
+	vmid => 105,
+	vm_status => {
+	    running => 0,
+	},
+	config_patch => {
+	    ide2 => '/dev/sde,shared=1',
+	},
+	expected_calls => $default_expected_calls_offline,
+	expected => {
+	    source_volids => {},
+	    target_volids => local_volids_for_vm(105),
+	    vm_config => get_patched_config(105, {
+		ide2 => '/dev/sde,shared=1',
+	    }),
+	    vm_status => {
+		running => 0,
+	    },
+	},
+    },
+    {
+	name => '105_local_device_in_snapshot',
+	target => 'pve1',
+	vmid => 105,
+	vm_status => {
+	    running => 0,
+	},
+	config_patch => {
+	    snapshots => {
+		ohsnap => {
+		    ide2 => '/dev/sde',
+		},
+	    },
+	},
+	expected_calls => {},
+	expect_die => "can't migrate local disk '/dev/sde': local file/device",
+	expected => {
+	    source_volids => local_volids_for_vm(105),
+	    target_volids => {},
+	    vm_config => get_patched_config(105, {
+		snapshots => {
+		    ohsnap => {
+			ide2 => '/dev/sde',
+		    },
+		},
+	    }),
+	    vm_status => {
+		running => 0,
+	    },
+	},
+    },
+    {
+	name => '105_local_device',
+	target => 'pve1',
+	vmid => 105,
+	vm_status => {
+	    running => 0,
+	},
+	config_patch => {
+	    ide2 => '/dev/sde',
+	},
+	expected_calls => {},
+	expect_die => "can't migrate local disk '/dev/sde': local file/device",
+	expected => {
+	    source_volids => local_volids_for_vm(105),
+	    target_volids => {},
+	    vm_config => get_patched_config(105, {
+		ide2 => '/dev/sde',
+	    }),
+	    vm_status => {
+		running => 0,
+	    },
+	},
+    },
+    {
+	name => '105_cdrom_in_snapshot',
+	target => 'pve1',
+	vmid => 105,
+	vm_status => {
+	    running => 0,
+	},
+	config_patch => {
+	    snapshots => {
+		ohsnap => {
+		    ide2 => 'cdrom,media=cdrom',
+		},
+	    },
+	},
+	expected_calls => {},
+	expect_die => "can't migrate local cdrom drive (referenced in snapshot - ohsnap",
+	expected => {
+	    source_volids => local_volids_for_vm(105),
+	    target_volids => {},
+	    vm_config => get_patched_config(105, {
+		snapshots => {
+		    ohsnap => {
+			ide2 => 'cdrom,media=cdrom',
+		    },
+		},
+	    }),
+	    vm_status => {
+		running => 0,
+	    },
+	},
+    },
+    {
+	name => '105_cdrom',
+	target => 'pve1',
+	vmid => 105,
+	vm_status => {
+	    running => 0,
+	},
+	config_patch => {
+	    ide2 => 'cdrom,media=cdrom',
+	},
+	expected_calls => {},
+	expect_die => "can't migrate local cdrom drive",
+	expected => {
+	    source_volids => local_volids_for_vm(105),
+	    target_volids => {},
+	    vm_config => get_patched_config(105, {
+		ide2 => 'cdrom,media=cdrom',
+	    }),
+	    vm_status => {
+		running => 0,
+	    },
+	},
+    },
+    {
+	name => '149_running_missing_option_withlocaldisks',
+	target => 'pve1',
+	vmid => 149,
+	vm_status => {
+	    running => 1,
+	    runningmachine => 'pc-q35-5.0+pve0',
+	},
+	opts => {
+	    online => 1,
+	},
+	expected_calls => {},
+	expect_die => "can't live migrate attached local disks without with-local-disks option",
+	expected => {
+	    source_volids => local_volids_for_vm(149),
+	    target_volids => {},
+	    vm_config => $vm_configs->{149},
+	    vm_status => {
+		running => 1,
+		runningmachine => 'pc-q35-5.0+pve0',
+	    },
+	},
+    },
+    {
+	name => '149_running_missing_option_online',
+	target => 'pve1',
+	vmid => 149,
+	vm_status => {
+	    running => 1,
+	    runningmachine => 'pc-q35-5.0+pve0',
+	},
+	opts => {
+	    'with-local-disks' => 1,
+	},
+	expected_calls => {},
+	expect_die => "can't migrate running VM without --online",
+	expected => {
+	    source_volids => local_volids_for_vm(149),
+	    target_volids => {},
+	    vm_config => $vm_configs->{149},
+	    vm_status => {
+		running => 1,
+		runningmachine => 'pc-q35-5.0+pve0',
+	    },
+	},
+    },
+    {
+	name => '1033_running_customcpu',
+	target => 'pve1',
+	vmid => 1033,
+	vm_status => {
+	    running => 1,
+	    runningmachine => 'pc-q35-5.0+pve0',
+	    runningcpu => 'host,+kvm_pv_eoi,+kvm_pv_unhalt',
+	},
+	opts => {
+	    online => 1,
+	},
+	config_patch => {
+	    cpu => 'custom-mycpu',
+	},
+	expected_calls => $default_expected_calls_online,
+	expected => {
+	    source_volids => {},
+	    target_volids => {},
+	    vm_config => get_patched_config(1033, {
+		cpu => 'custom-mycpu',
+	    }),
+	    vm_status => {
+		running => 1,
+		runningmachine => 'pc-q35-5.0+pve0',
+		runningcpu => 'host,+kvm_pv_eoi,+kvm_pv_unhalt',
+	    },
+	},
+    },
+    {
+	name => '105_replicated_to_non_replication_target',
+	target => 'pve1',
+	vmid => 105,
+	vm_status => {
+	    running => 0,
+	},
+	target_volids => {},
+	expected_calls => $replicated_expected_calls_offline,
+	expected => {
+	    source_volids => {},
+	    target_volids => local_volids_for_vm(105),
+	    vm_config => $vm_configs->{105},
+	    vm_status => {
+		running => 0,
+	    },
+	},
+    },
+    {
+	name => '105_running_replicated',
+	target => 'pve2',
+	vmid => 105,
+	vm_status => {
+	    running => 1,
+	    runningmachine => 'pc-i440fx-5.0+pve0',
+	},
+	opts => {
+	    online => 1,
+	    'with-local-disks' => 1,
+	},
+	target_volids => local_volids_for_vm(105),
+	expected_calls => {},
+	expect_die => "online storage migration not possible if snapshot exists",
+	expected => {
+	    source_volids => local_volids_for_vm(105),
+	    target_volids => local_volids_for_vm(105),
+	    vm_config => $vm_configs->{105},
+	    vm_status => {
+		running => 1,
+		runningmachine => 'pc-i440fx-5.0+pve0',
+	    },
+	},
+    },
+    {
+	name => '105_replicated',
+	target => 'pve2',
+	vmid => 105,
+	vm_status => {
+	    running => 0,
+	},
+	target_volids => local_volids_for_vm(105),
+	expected_calls => $replicated_expected_calls_offline,
+	expected => {
+	    source_volids => local_volids_for_vm(105),
+	    target_volids => local_volids_for_vm(105),
+	    vm_config => $vm_configs->{105},
+	    vm_status => {
+		running => 0,
+	    },
+	},
+    },
+    {
+	name => '105_running_replicated_without_snapshot',
+	target => 'pve2',
+	vmid => 105,
+	vm_status => {
+	    running => 1,
+	    runningmachine => 'pc-i440fx-5.0+pve0',
+	},
+	config_patch => {
+	    snapshots => undef,
+	},
+	opts => {
+	    online => 1,
+	    'with-local-disks' => 1,
+	},
+	target_volids => local_volids_for_vm(105),
+	expected_calls => {
+	    %{$replicated_expected_calls_online},
+	    'block-dirty-bitmap-add-drive-scsi0' => 1,
+	    'block-dirty-bitmap-add-drive-ide0' => 1,
+	},
+	expected => {
+	    source_volids => local_volids_for_vm(105),
+	    target_volids => local_volids_for_vm(105),
+	    vm_config => get_patched_config(105, {
+		snapshots => {},
+	    }),
+	    vm_status => {
+		running => 1,
+		runningmachine => 'pc-i440fx-5.0+pve0',
+	    },
+	},
+    },
+    {
+	name => '105_replicated_without_snapshot',
+	target => 'pve2',
+	vmid => 105,
+	vm_status => {
+	    running => 0,
+	},
+	config_patch => {
+	    snapshots => undef,
+	},
+	opts => {
+	    online => 1,
+	},
+	target_volids => local_volids_for_vm(105),
+	expected_calls => $replicated_expected_calls_offline,
+	expected => {
+	    source_volids => local_volids_for_vm(105),
+	    target_volids => local_volids_for_vm(105),
+	    vm_config => get_patched_config(105, {
+		snapshots => {},
+	    }),
+	    vm_status => {
+		running => 0,
+	    },
+	},
+    },
+    {
+	name => '1033_running',
+	target => 'pve2',
+	vmid => 1033,
+	vm_status => {
+	    running => 1,
+	    runningmachine => 'pc-q35-5.0+pve0',
+	},
+	opts => {
+	    online => 1,
+	},
+	expected_calls => $default_expected_calls_online,
+	expected => {
+	    source_volids => {},
+	    target_volids => {},
+	    vm_config => $vm_configs->{1033},
+	    vm_status => {
+		running => 1,
+		runningmachine => 'pc-q35-5.0+pve0',
+	    },
+	},
+    },
+    {
+	name => '149_locked',
+	target => 'pve2',
+	vmid => 149,
+	vm_status => {
+	    running => 0,
+	},
+	config_patch => {
+	    lock => 'locked',
+	},
+	expected_calls => {},
+	expect_die => "VM is locked",
+	expected => {
+	    source_volids => local_volids_for_vm(149),
+	    target_volids => {},
+	    vm_config => get_patched_config(149, {
+		lock => 'locked',
+	    }),
+	    vm_status => {
+		running => 0,
+	    },
+	},
+    },
+    {
+	name => '149_storage_not_available',
+	target => 'pve2',
+	vmid => 149,
+	vm_status => {
+	    running => 0,
+	},
+	expected_calls => {},
+	expect_die => "storage 'local-lvm' is not available on node 'pve2'",
+	expected => {
+	    source_volids => local_volids_for_vm(149),
+	    target_volids => {},
+	    vm_config => $vm_configs->{149},
+	    vm_status => {
+		running => 0,
+	    },
+	},
+    },
+    {
+	name => '149_running',
+	target => 'pve1',
+	vmid => 149,
+	vm_status => {
+	    running => 1,
+	    runningmachine => 'pc-q35-5.0+pve0',
+	},
+	opts => {
+	    online => 1,
+	    'with-local-disks' => 1,
+	},
+	expected_calls => $default_expected_calls_online,
+	expected => {
+	    source_volids => {},
+	    target_volids => {
+		'local-lvm:vm-149-disk-10' => 1,
+		'local-dir:149/vm-149-disk-11.qcow2' => 1,
+	    },
+	    vm_config => get_patched_config(149, {
+		scsi0 => 'local-lvm:vm-149-disk-10,format=raw,size=4G',
+		scsi1 => 'local-dir:149/vm-149-disk-11.qcow2,format=qcow2,size=1G',
+	    }),
+	    vm_status => {
+		running => 1,
+		runningmachine => 'pc-q35-5.0+pve0',
+	    },
+	},
+    },
+    {
+	name => '149_running_drive_mirror_fail',
+	target => 'pve1',
+	vmid => 149,
+	vm_status => {
+	    running => 1,
+	    runningmachine => 'pc-q35-5.0+pve0',
+	},
+	opts => {
+	    online => 1,
+	    'with-local-disks' => 1,
+	},
+	expected_calls => {},
+	expect_die => "qemu_drive_mirror 'scsi1' error",
+	fail_config => {
+	    'qemu_drive_mirror' => 'scsi1',
+	},
+	expected => {
+	    source_volids => local_volids_for_vm(149),
+	    target_volids => {},
+	    vm_config => $vm_configs->{149},
+	    vm_status => {
+		running => 1,
+		runningmachine => 'pc-q35-5.0+pve0',
+	    },
+	},
+    },
+    {
+	name => '149_offline',
+	target => 'pve1',
+	vmid => 149,
+	vm_status => {
+	    running => 0,
+	},
+	opts => {
+	    'with-local-disks' => 1,
+	},
+	expected_calls => $default_expected_calls_offline,
+	expected => {
+	    source_volids => {},
+	    target_volids => local_volids_for_vm(149),
+	    vm_config => $vm_configs->{149},
+	    vm_status => {
+		running => 0,
+	    },
+	},
+    },
+    {
+	# FIXME: also clean up remote disks when failing this early
+	name => '149_storage_migrate_fail',
+	target => 'pve1',
+	vmid => 149,
+	vm_status => {
+	    running => 0,
+	},
+	opts => {
+	    'with-local-disks' => 1,
+	},
+	fail_config => {
+	    'storage_migrate' => 'local-lvm:vm-149-disk-0',
+	},
+	expected_calls => {},
+	expect_die => "storage_migrate 'local-lvm:vm-149-disk-0' error",
+	expected => {
+	    source_volids => local_volids_for_vm(149),
+	    target_volids => {
+		'local-dir:149/vm-149-disk-0.qcow2' => 1,
+	    },
+	    vm_config => $vm_configs->{149},
+	    vm_status => {
+		running => 0,
+	    },
+	},
+    },
+];
+
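+# set up the run directory and write the static mock state (replication,
+# storage layout, source disks) shared by all test cases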
+mkdir $RUN_DIR_PATH;
+
+file_set_contents("${RUN_DIR_PATH}/replication_config", to_json($replication_config));
+file_set_contents("${RUN_DIR_PATH}/storage_config", to_json($storage_config));
+file_set_contents("${RUN_DIR_PATH}/source_vdisks", to_json($source_vdisks));
+
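+# an optional command line argument restricts the run to the test with that name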
+my $single_test_name = shift;
+
+foreach my $test (@{$tests}) {
+    my $name = $test->{name};
+    next if defined($single_test_name) && $name ne $single_test_name;
+    my $expect_die = $test->{expect_die};
+    my $expected = $test->{expected};
+
+    my $source_volids = local_volids_for_vm($test->{vmid});
+    my $target_volids = $test->{target_volids} // {};
+
+    my $config_patch = $test->{config_patch};
+    my $vm_config = get_patched_config($test->{vmid}, $config_patch);
+
+    my $fail_config = $test->{fail_config} // {};
+    my $storage_migrate_map = $test->{storage_migrate_map} // {};
+
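+    # expand the targetstorage option into a storage map; entries are
+    # 'source:target' pairs, a bare storage id acts as the default target
+    # (e.g. 'local-lvm:local-zfs,local-lvm')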
+    if (my $targetstorage = $test->{opts}->{targetstorage}) {
+	$test->{opts}->{storagemap} = PVE::JSONSchema::parse_idmap($targetstorage, 'pve-storage-id');
+    }
+
+    my $migrate_params = {
+	target => $test->{target},
+	vmid => $test->{vmid},
+	opts => $test->{opts},
+    };
+
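+    # per-test state is passed to (and read back from) the mocked process
+    # via JSON files in the shared run directory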
+    file_set_contents("${RUN_DIR_PATH}/nbd_info", to_json({}));
+    file_set_contents("${RUN_DIR_PATH}/source_volids", to_json($source_volids));
+    file_set_contents("${RUN_DIR_PATH}/target_volids", to_json($target_volids));
+    file_set_contents("${RUN_DIR_PATH}/vm_config", to_json($vm_config));
+    file_set_contents("${RUN_DIR_PATH}/vm_status", to_json($test->{vm_status}));
+    file_set_contents("${RUN_DIR_PATH}/expected_calls", to_json($test->{expected_calls}));
+    file_set_contents("${RUN_DIR_PATH}/fail_config", to_json($fail_config));
+    file_set_contents("${RUN_DIR_PATH}/storage_migrate_map", to_json($storage_migrate_map));
+    file_set_contents("${RUN_DIR_PATH}/migrate_params", to_json($migrate_params));
+
+    $ENV{QM_LIB_PATH} = $QM_LIB_PATH;
+    $ENV{RUN_DIR_PATH} = $RUN_DIR_PATH;
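+    # the mocked migration runs in a separate perl process; state is
+    # exchanged via the JSON files written above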
+    my $exitcode = run_command([
+	'/usr/bin/perl',
+	"-I${MIGRATE_LIB_PATH}",
+	"-I${MIGRATE_LIB_PATH}/test",
+	"${MIGRATE_LIB_PATH}/test/MigrationTest/QemuMigrateMock.pm",
+    ], noerr => 1, errfunc => sub { print "#$name - $_[0]\n" });
+
+    if (defined($expect_die) && $exitcode) {
+	my $log = file_get_contents("${RUN_DIR_PATH}/log");
+	my @lines = split /\n/, $log;
+
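+	# the mocked process writes 'err:' and 'warn:' prefixed lines to the
+	# log; accept the expected message from either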
+	my $matched = 0;
+	foreach my $line (@lines) {
+	    $matched = 1 if $line =~ m/^err:.*\Q${expect_die}\E/;
+	    $matched = 1 if $line =~ m/^warn:.*\Q${expect_die}\E/;
+	}
+	if (!$matched) {
+	    fail($name);
+	    note("expected error message is not present in log");
+	}
+    } elsif (defined($expect_die) && !$exitcode) {
+	fail($name);
+	note("mocked migrate call didn't fail, but it was expected to - check log");
+    } elsif (!defined($expect_die) && $exitcode) {
+	fail($name);
+	note("mocked migrate call failed, but it was not expected - check log");
+    }
+
+    my $expected_calls = decode_json(file_get_contents("${RUN_DIR_PATH}/expected_calls"));
+    foreach my $call (keys %{$expected_calls}) {
+	fail($name);
+	note("expected call '$call' was not made");
+    }
+
+    if (!defined($expect_die)) {
+	my $nbd_info = decode_json(file_get_contents("${RUN_DIR_PATH}/nbd_info"));
+	foreach my $drive (keys %{$nbd_info}) {
+	    fail($name);
+	    note("drive '$drive' was not mirrored");
+	}
+    }
+
+    my $actual = {
+	source_volids => decode_json(file_get_contents("${RUN_DIR_PATH}/source_volids")),
+	target_volids => decode_json(file_get_contents("${RUN_DIR_PATH}/target_volids")),
+	vm_config => decode_json(file_get_contents("${RUN_DIR_PATH}/vm_config")),
+	vm_status => decode_json(file_get_contents("${RUN_DIR_PATH}/vm_status")),
+    };
+
+    is_deeply($actual, $expected, $name);
+
+    rename("${RUN_DIR_PATH}/log", "${RUN_DIR_PATH}/log-$name") or die "rename log failed\n";
+}
+
+done_testing();
-- 
2.20.1





^ permalink raw reply	[flat|nested] 7+ messages in thread

* [pve-devel] applied-series:  [PATCH-SERIES v3] migration tests
  2020-12-01 12:06 [pve-devel] [PATCH-SERIES v3] migration tests Fabian Ebner
                   ` (4 preceding siblings ...)
  2020-12-01 12:07 ` [pve-devel] [PATCH v3 qemu-server 5/5] create test environment for migration Fabian Ebner
@ 2020-12-15 15:20 ` Thomas Lamprecht
  5 siblings, 0 replies; 7+ messages in thread
From: Thomas Lamprecht @ 2020-12-15 15:20 UTC (permalink / raw)
  To: Proxmox VE development discussion, Fabian Ebner

On 01.12.20 13:06, Fabian Ebner wrote:
> Refactor some code and create a test enviroment for migration. See the last
> patch for a description of the latter.
> 
> The first two patches depend on libpve-guest-common-perl >=3.1-3
> 
> Changes from v2:
>     * drop already applied patch introducing move_config_to_node helper
>     * rebase on current master and verify tests still behave the same
>       way (they did, but it's not surprising as QemuMigrate.pm didn't
>       change much since v2 was sent).
>     * add a patch to sort volumes migrated with storage_migrate and adapt
>       affected test
> 
> 
> container:
> 
> Fabian Ebner (1):
>   use new move_config_to_node method
> 
>  src/PVE/LXC/Migrate.pm | 12 ++----------
>  1 file changed, 2 insertions(+), 10 deletions(-)
> 
> 
> qemu-server:
> 
> Fabian Ebner (4):
>   use new move_config_to_node method
>   migration: factor out starting remote tunnel
>   migration: sort volumes migrated with storage_migrate
>   create test environment for migration
> 
>  PVE/QemuMigrate.pm                    |  130 +-
>  test/Makefile                         |    5 +-
>  test/MigrationTest/QemuMigrateMock.pm |  319 +++++
>  test/MigrationTest/QmMock.pm          |  142 +++
>  test/MigrationTest/Shared.pm          |  170 +++
>  test/run_qemu_migrate_tests.pl        | 1594 +++++++++++++++++++++++++
>  6 files changed, 2295 insertions(+), 65 deletions(-)
>  create mode 100644 test/MigrationTest/QemuMigrateMock.pm
>  create mode 100644 test/MigrationTest/QmMock.pm
>  create mode 100644 test/MigrationTest/Shared.pm
>  create mode 100755 test/run_qemu_migrate_tests.pl
> 



applied series, thanks!

The qemu tests add ~36s of build time, which is rather a lot for that package.
Can you look into this? Maybe we can move the tests into individual .json file
snippets and run them in parallel through the make build system.
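
For reference, a rough and untested sketch of the make side (hypothetical:
it assumes each test definition is split out into its own testcases/*.json
file, that the runner keeps accepting a single test name as its argument,
and that every invocation gets a private run directory so parallel tests
don't clobber each other's state files):

MIGRATE_TESTS := $(wildcard testcases/*.json)

# one phony target per test case, so 'make -jN check-migrate' runs them
# concurrently
.PHONY: check-migrate $(MIGRATE_TESTS)
check-migrate: $(MIGRATE_TESTS)

$(MIGRATE_TESTS):
	./run_qemu_migrate_tests.pl $(notdir $(basename $@))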





^ permalink raw reply	[flat|nested] 7+ messages in thread

end of thread, other threads:[~2020-12-15 15:20 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-12-01 12:06 [pve-devel] [PATCH-SERIES v3] migration tests Fabian Ebner
2020-12-01 12:06 ` [pve-devel] [PATCH v3 container 1/5] use new move_config_to_node method Fabian Ebner
2020-12-01 12:06 ` [pve-devel] [PATCH v3 qemu-server 2/5] " Fabian Ebner
2020-12-01 12:07 ` [pve-devel] [PATCH v3 qemu-server 3/5] migration: factor out starting remote tunnel Fabian Ebner
2020-12-01 12:07 ` [pve-devel] [PATCH v3 qemu-server 4/5] migration: sort volumes migrated with storage_migrate Fabian Ebner
2020-12-01 12:07 ` [pve-devel] [PATCH v3 qemu-server 5/5] create test environment for migration Fabian Ebner
2020-12-15 15:20 ` [pve-devel] applied-series: [PATCH-SERIES v3] migration tests Thomas Lamprecht
