* [pve-devel] [PATCH qemu-server 1/2] migration: factor out starting remote tunnel
From: Fabian Ebner @ 2020-07-16 12:06 UTC
To: pve-devel
so the '-S' check can be avoided with mocking.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---
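For illustration, here is a minimal sketch of how the factored-out
method can then be replaceded in a test. It mirrors the mock used in
patch 2/2 of this series; the tunnel hash values are mere placeholders:

    use Test::MockModule;

    my $qemu_migrate_module = Test::MockModule->new("PVE::QemuMigrate");
    $qemu_migrate_module->mock(
        start_remote_tunnel => sub {
            my ($self, $raddr, $rport, $ruri, $unix_socket_info) = @_;
            # no SSH tunnel is forked, so the '-S' socket check never runs
            $self->{tunnel} = {
                writer => "mocked",
                reader => "mocked",
                pid => 123456,
                version => 1,
            };
        },
    );
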
PVE/QemuMigrate.pm | 119 ++++++++++++++++++++++++---------------------
1 file changed, 64 insertions(+), 55 deletions(-)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index cd4a005..0d8dc82 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -204,6 +204,69 @@ sub finish_tunnel {
die $err if $err;
}
+sub start_remote_tunnel {
+ my ($self, $raddr, $rport, $ruri, $unix_socket_info) = @_;
+
+ my $nodename = PVE::INotify::nodename();
+ my $migration_type = $self->{opts}->{migration_type};
+
+ if ($migration_type eq 'secure') {
+
+ if ($ruri =~ /^unix:/) {
+ my $ssh_forward_info = ["$raddr:$raddr"];
+ $unix_socket_info->{$raddr} = 1;
+
+ my $unix_sockets = [ keys %$unix_socket_info ];
+ for my $sock (@$unix_sockets) {
+ push @$ssh_forward_info, "$sock:$sock";
+ unlink $sock;
+ }
+
+ $self->{tunnel} = $self->fork_tunnel($ssh_forward_info);
+
+ my $unix_socket_try = 0; # wait for the socket to become ready
+ while ($unix_socket_try <= 100) {
+ $unix_socket_try++;
+ my $available = 0;
+ foreach my $sock (@$unix_sockets) {
+ if (-S $sock) {
+ $available++;
+ }
+ }
+
+ if ($available == @$unix_sockets) {
+ last;
+ }
+
+ usleep(50000);
+ }
+ if ($unix_socket_try > 100) {
+ $self->{errors} = 1;
+ $self->finish_tunnel($self->{tunnel});
+ die "Timeout, migration socket $ruri did not get ready";
+ }
+ $self->{tunnel}->{unix_sockets} = $unix_sockets if (@$unix_sockets);
+
+ } elsif ($ruri =~ /^tcp:/) {
+ my $ssh_forward_info = [];
+ if ($raddr eq "localhost") {
+ # for backwards compatibility with older qemu-server versions
+ my $pfamily = PVE::Tools::get_host_address_family($nodename);
+ my $lport = PVE::Tools::next_migrate_port($pfamily);
+ push @$ssh_forward_info, "$lport:localhost:$rport";
+ }
+
+ $self->{tunnel} = $self->fork_tunnel($ssh_forward_info);
+
+ } else {
+ die "unsupported protocol in migration URI: $ruri\n";
+ }
+ } else {
+ #fork tunnel for insecure migration, to send faster commands like resume
+ $self->{tunnel} = $self->fork_tunnel();
+ }
+}
+
sub lock_vm {
my ($self, $vmid, $code, @param) = @_;
@@ -795,62 +858,8 @@ sub phase2 {
}
$self->log('info', "start remote tunnel");
+ $self->start_remote_tunnel($raddr, $rport, $ruri, $unix_socket_info);
- if ($migration_type eq 'secure') {
-
- if ($ruri =~ /^unix:/) {
- my $ssh_forward_info = ["$raddr:$raddr"];
- $unix_socket_info->{$raddr} = 1;
-
- my $unix_sockets = [ keys %$unix_socket_info ];
- for my $sock (@$unix_sockets) {
- push @$ssh_forward_info, "$sock:$sock";
- unlink $sock;
- }
-
- $self->{tunnel} = $self->fork_tunnel($ssh_forward_info);
-
- my $unix_socket_try = 0; # wait for the socket to become ready
- while ($unix_socket_try <= 100) {
- $unix_socket_try++;
- my $available = 0;
- foreach my $sock (@$unix_sockets) {
- if (-S $sock) {
- $available++;
- }
- }
-
- if ($available == @$unix_sockets) {
- last;
- }
-
- usleep(50000);
- }
- if ($unix_socket_try > 100) {
- $self->{errors} = 1;
- $self->finish_tunnel($self->{tunnel});
- die "Timeout, migration socket $ruri did not get ready";
- }
- $self->{tunnel}->{unix_sockets} = $unix_sockets if (@$unix_sockets);
-
- } elsif ($ruri =~ /^tcp:/) {
- my $ssh_forward_info = [];
- if ($raddr eq "localhost") {
- # for backwards compatibility with older qemu-server versions
- my $pfamily = PVE::Tools::get_host_address_family($nodename);
- my $lport = PVE::Tools::next_migrate_port($pfamily);
- push @$ssh_forward_info, "$lport:localhost:$rport";
- }
-
- $self->{tunnel} = $self->fork_tunnel($ssh_forward_info);
-
- } else {
- die "unsupported protocol in migration URI: $ruri\n";
- }
- } else {
- #fork tunnel for insecure migration, to send faster commands like resume
- $self->{tunnel} = $self->fork_tunnel();
- }
my $start = time();
my $opt_bwlimit = $self->{opts}->{bwlimit};
--
2.20.1
* [pve-devel] [RFC qemu-server 2/2] create test environment for QemuMigrate.pm
From: Fabian Ebner @ 2020-07-16 12:07 UTC
To: pve-devel
and the associated parts for 'qm start'.

Each test will first populate the MigrationTest/run directory with the
relevant configuration files and with files keeping track of the state
of everything necessary. Second, the mock-script for migration is
executed, which in turn executes the 'qm start' mock-script (if it is
an online test that gets far enough). The scripts simulate a migration
and update the relevant files in the MigrationTest/run directory.
Finally, the main test script evaluates the resulting state.

The main checks are the volume IDs on the source and target and the VM
configuration itself. Additionally, vm_status and expected_calls are
checked; the latter keeps track of whether certain calls have been made.
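
As a rough sketch of the handoff between the test script and the
mock-scripts (the file name and values below are taken from the tests
further down; everything is passed around as JSON):

    use JSON;
    use PVE::Tools qw(file_set_contents file_get_contents);

    my $run_dir = './MigrationTest/run';

    # the main test script writes the parameters for the current test...
    file_set_contents("$run_dir/migrate_params",
        to_json({ vmid => 149, target => 'pve1', opts => { online => 1 } }));

    # ...and the mock-scripts (and later the test script again) read
    # them back to drive and evaluate the simulated migration
    my $migrate_params = decode_json(file_get_contents("$run_dir/migrate_params"));
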
The rationale behind creating two mock-scripts is twofold:
1. It removes the need to hard-code responses for the tunnel and to
re-implement the logic for determining and allocating migration
volumes. Some of that logic already happens in the API part, so it was
necessary to mock the whole CLI-Handler.
2. It allows testing the code relevant for migration in 'qm start' as
well, and it should even be possible to test different versions of the
mock-scripts against each other. With a bit of extra work and things
like 'git worktree', it might even be possible to automate this.

A helper get_patched_config is introduced to be able to make small
modifications to the config file for individual tests without wasting
much space.
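
For example, a hypothetical call using that helper (the patch values
here are made up; an undef value deletes the key, see patch_config in
the test script):

    # start from the base config of VM 149, drop 'ide2', override 'memory'
    my $conf = get_patched_config(149, {
        ide2 => undef, # undef value => key gets deleted
        memory => 8192, # overrides the base value
    });
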
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---
Depends on [0].
Sadly, running against older versions is not possible without
backporting the two patches that made mocking in QemuMigrate.pm
possible in the first place, namely the ones introducing
move_config_to_node and start_remote_tunnel. Additionally, either the
scripts themselves would need to be placed in the correct place in the
old code tree, or a {QM,MIGRATE}_SCRIPT_PATH variable would need to be
introduced alongside {QM,MIGRATE}_LIB_PATH.
Things still missing:
* currently, only errors leading to 'migration aborted' can be
expected; errors leading to 'migration problems' are not yet matched
* more failure modes
* more expected_calls
* many tests, including:
- tests with replication
- tests with custom cpu
- tests with snapshots
- tests with failure modes
* something I can't think of right now
[0]: https://lists.proxmox.com/pipermail/pve-devel/2020-July/044179.html
test/Makefile | 6 +-
test/MigrationTest/QemuMigrateMock.pm | 305 ++++++++++++++++++++
test/MigrationTest/QmMock.pm | 137 +++++++++
test/MigrationTest/Shared.pm | 150 ++++++++++
test/run_qemu_migrate_tests.pl | 382 ++++++++++++++++++++++++++
5 files changed, 979 insertions(+), 1 deletion(-)
create mode 100644 test/MigrationTest/QemuMigrateMock.pm
create mode 100644 test/MigrationTest/QmMock.pm
create mode 100644 test/MigrationTest/Shared.pm
create mode 100755 test/run_qemu_migrate_tests.pl
diff --git a/test/Makefile b/test/Makefile
index d88cbd2..80c6bc8 100644
--- a/test/Makefile
+++ b/test/Makefile
@@ -1,6 +1,6 @@
all: test
-test: test_snapshot test_ovf test_cfg_to_cmd test_pci_addr_conflicts test_qemu_img_convert
+test: test_snapshot test_ovf test_cfg_to_cmd test_pci_addr_conflicts test_qemu_img_convert test_migration
test_snapshot: run_snapshot_tests.pl
./run_snapshot_tests.pl
@@ -17,3 +17,7 @@ test_qemu_img_convert: run_qemu_img_convert_tests.pl
test_pci_addr_conflicts: run_pci_addr_checks.pl
./run_pci_addr_checks.pl
+
+test_migration: run_qemu_migrate_tests.pl
+ perl -I../../pve-guest-common ./run_qemu_migrate_tests.pl
+#TODO remove pve-guest-common path once the move_config_to_node patch has landed
diff --git a/test/MigrationTest/QemuMigrateMock.pm b/test/MigrationTest/QemuMigrateMock.pm
new file mode 100644
index 0000000..c8ed903
--- /dev/null
+++ b/test/MigrationTest/QemuMigrateMock.pm
@@ -0,0 +1,305 @@
+package MigrationTest::QemuMigrateMock;
+
+use strict;
+use warnings;
+
+use JSON;
+use Test::MockModule;
+
+use MigrationTest::Shared;
+
+use PVE::API2::Qemu;
+use PVE::Storage;
+use PVE::Tools qw(file_set_contents file_get_contents);
+
+use PVE::CLIHandler;
+use base qw(PVE::CLIHandler);
+
+my $RUN_DIR_PATH = './MigrationTest/run/';
+
+my $QM_LIB_PATH = $ENV{QM_LIB_PATH};
+die "no QM_LIB_PATH set\n" if !$QM_LIB_PATH;
+
+my $source_volids = decode_json(file_get_contents("${RUN_DIR_PATH}/source_volids"));
+my $source_vdisks = decode_json(file_get_contents("${RUN_DIR_PATH}/source_vdisks"));
+my $vm_status = decode_json(file_get_contents("${RUN_DIR_PATH}/vm_status"));
+my $expected_calls = decode_json(file_get_contents("${RUN_DIR_PATH}/expected_calls"));
+my $fail_config = decode_json(file_get_contents("${RUN_DIR_PATH}/fail_config"));
+my $storage_migrate_map = decode_json(file_get_contents("${RUN_DIR_PATH}/storage_migrate_map"));
+my $migrate_params = decode_json(file_get_contents("${RUN_DIR_PATH}/migrate_params"));
+
+my $test_vmid = $migrate_params->{vmid};
+my $test_target = $migrate_params->{target};
+my $test_opts = $migrate_params->{opts};
+my $current_log = '';
+my $die_message = '';
+
+my $vm_stop_executed = 0;
+
+# mocked modules
+
+my $inotify_module = Test::MockModule->new("PVE::INotify");
+$inotify_module->mock(
+ nodename => sub {
+ return 'pve0';
+ },
+);
+
+$MigrationTest::Shared::qemu_config_module->mock(
+ move_config_to_node => sub {
+ my ($self, $vmid, $target) = @_;
+
+ die "moving wrong config: '$vmid'\n" if $vmid ne $test_vmid;
+ die "moving config to wrong node: '$target'\n" if $target ne $test_target;
+
+ delete $expected_calls->{move_config_to_node};
+ },
+);
+
+my $qemu_migrate_module = Test::MockModule->new("PVE::QemuMigrate");
+$qemu_migrate_module->mock(
+ finish_tunnel => sub {
+ delete $expected_calls->{'finish_tunnel'};
+ return;
+ },
+ fork_tunnel => sub {
+ die "fork_tunnel (mocked) - implement me\n"; # currently no call should lead here
+ },
+ read_tunnel => sub {
+ die "read_tunnel (mocked) - implement me\n"; # currently no call should lead here
+ },
+ start_remote_tunnel => sub {
+ my ($self, $raddr, $rport, $ruri, $unix_socket_info) = @_;
+ $expected_calls->{'finish_tunnel'} = 1;
+ $self->{tunnel} = {
+ writer => "mocked",
+ reader => "mocked",
+ pid => 123456,
+ version => 1,
+ };
+ },
+ write_tunnel => sub {
+ my ($self, $tunnel, $timeout, $command) = @_;
+
+ if ($command =~ m/^resume (\d+)$/) {
+ my $vmid = $1;
+ die "resuming wrong VM '$vmid'\n" if $vmid ne $test_vmid;
+ return;
+ }
+ die "write_tunnel (mocked) - implement me: $command\n";
+ },
+ log => sub {
+ my ($self, $level, $message) = @_;
+
+ $current_log .= "$level: $message\n";
+
+ $die_message = $message if $message =~ /^migration aborted \(duration /;
+ },
+ mon_cmd => sub {
+ my ($vmid, $command, %params) = @_;
+
+ if ($command eq 'nbd-server-start') {
+ return;
+ } elsif ($command eq 'nbd-server-add') {
+ return;
+ } elsif ($command eq 'qom-set') {
+ return;
+ } elsif ($command eq 'query-migrate') {
+ return { status => 'completed' };
+ } elsif ($command eq 'migrate') {
+ return;
+ } elsif ($command eq 'migrate-set-parameters') {
+ return;
+ } elsif ($command eq 'migrate-cancel') {
+ return;
+ }
+ die "mon_cmd (mocked) - implement me: $command";
+ },
+ transfer_replication_state => sub {
+ delete $expected_calls->{transfer_replication_state};
+ },
+ switch_replication_job_target => sub {
+ delete $expected_calls->{switch_replication_job_target};
+ },
+);
+
+$MigrationTest::Shared::qemu_server_module->mock(
+ kvm_user_version => sub {
+ return "5.0.0";
+ },
+ qemu_blockjobs_cancel => sub {
+ return;
+ },
+ qemu_drive_mirror => sub {
+ my ($vmid, $drive, $dst_volid, $vmiddst, $is_zero_initialized, $jobs, $completion, $qga, $bwlimit, $src_bitmap) = @_;
+#TODO cross-check with info from $RUN_DIR_PATH/nbd_info
+
+ die "qemu_drive_mirror '$drive' error\n" if $fail_config->{qemu_drive_mirror}
+ && $fail_config->{qemu_drive_mirror} eq $drive;
+
+ die "drive_mirror with wrong vmid: '$vmid'\n" if $vmid ne $test_vmid;
+
+ return;
+ },
+ qemu_drive_mirror_monitor => sub {
+#TODO add failure mode
+ die "qemu_drive_mirror_monitor error\n" if $fail_config->{drive_mirror_monitor};
+ return;
+ },
+ set_migration_caps => sub {
+ return;
+ },
+ vm_stop => sub {
+ $vm_stop_executed = 1;
+ delete $expected_calls->{'vm_stop'};
+ },
+);
+
+my $qemu_server_cpuconfig_module = Test::MockModule->new("PVE::QemuServer::CPUConfig");
+$qemu_server_cpuconfig_module->mock(
+ get_cpu_from_running_vm => sub {
+ die "implement me";
+# TODO return cpu from vm_status
+ }
+);
+
+my $qemu_server_helpers_module = Test::MockModule->new("PVE::QemuServer::Helpers");
+$qemu_server_helpers_module->mock(
+ vm_running_locally => sub {
+ return $vm_status->{running} && !$vm_stop_executed;
+ },
+);
+
+my $qemu_server_machine_module = Test::MockModule->new("PVE::QemuServer::Machine");
+$qemu_server_machine_module->mock(
+ qemu_machine_pxe => sub {
+ return $vm_status->{runningmachine};
+ },
+);
+
+my $ssh_info_module = Test::MockModule->new("PVE::SSHInfo");
+$ssh_info_module->mock(
+ get_ssh_info => sub {
+ my ($node, $network_cidr) = @_;
+ return {
+ ip => '1.2.3.4',
+ name => $node,
+ network => $network_cidr,
+ };
+ },
+);
+
+$MigrationTest::Shared::storage_module->mock(
+ storage_migrate => sub {
+ my ($cfg, $volid, $target_sshinfo, $target_storeid, $opts, $logfunc) = @_;
+
+ die "storage_migrate '$volid' error\n" if $fail_config->{storage_migrate}
+ && $fail_config->{storage_migrate} eq $volid;
+
+ my ($storeid, $volname) = PVE::Storage::parse_volume_id($volid);
+ my $target_volname = $storage_migrate_map->{$volid} // $opts->{target_volname} // $volname;
+
+ my $target_volid = "${target_storeid}:${target_volname}";
+
+ MigrationTest::Shared::add_target_volid($target_volid);
+
+ return $target_volid;
+ },
+ vdisk_list => sub { # expects vmid to be set
+ my ($cfg, $storeid, $vmid, $vollist) = @_;
+
+ my @storeids = defined($storeid) ? ($storeid) : keys %{$source_vdisks};
+
+ my $res = {};
+ foreach my $storeid (@storeids) {
+ my $list_for_storeid = $source_vdisks->{$storeid};
+ my @list_for_vm = grep { $_->{vmid} eq $vmid } @{$list_for_storeid};
+ $res->{$storeid} = \@list_for_vm;
+ }
+ return $res;
+ },
+ vdisk_free => sub {
+ my ($scfg, $volid) = @_;
+
+ die "vdisk_free error\n" if $fail_config->{$volid} && $fail_config->{$volid} eq 'vdisk_free';
+
+ delete $source_volids->{$volid};
+ },
+);
+
+$MigrationTest::Shared::tools_module->mock(
+ get_host_address_family => sub {
+ die "get_host_address_family (mocked) - implement me\n"; # currently no call should lead here
+ },
+ next_migrate_port => sub {
+ die "next_migrate_port (mocked) - implement me\n"; # currently no call should lead here
+ },
+ run_command => sub {
+ my ($cmd_full, %param) = @_;
+
+ my $cmd_msg = to_json($cmd_full);
+
+ my $cmd = shift @{$cmd_full};
+
+ if ($cmd eq '/usr/bin/ssh') {
+ while (defined($cmd)) {
+ $cmd = shift @{$cmd_full};
+ if ($cmd eq '/bin/true') {
+ return 0;
+ } elsif ($cmd eq 'qm') {
+ $cmd = shift @{$cmd_full};
+ if ($cmd eq 'start') {
+ delete $expected_calls->{ssh_qm_start};
+ # TODO check for forcemachine+forcecpu in parameters
+ return $MigrationTest::Shared::tools_module->original('run_command')->([
+ 'perl',
+ "-I${QM_LIB_PATH}",
+ "-I${QM_LIB_PATH}/test",
+ "${QM_LIB_PATH}/test/MigrationTest/QmMock.pm",
+ 'start',
+ @{$cmd_full},
+ ], %param);
+ } elsif ($cmd eq 'nbdstop') {
+ delete $expected_calls->{ssh_nbdstop};
+ return 0;
+ } elsif ($cmd eq 'resume') {
+ return 0;
+ } elsif ($cmd eq 'unlock') {
+ my $vmid = shift @{$cmd_full};
+
+ die "unlocking wrong vmid: $vmid\n" if $vmid ne $test_vmid;
+
+ PVE::QemuConfig->remove_lock($vmid);
+
+ return 0;
+ } elsif ($cmd eq 'stop') {
+ return 0;
+ }
+ die "run_command (mocked) ssh qm command - implement me: ${cmd_msg}";
+ } elsif ($cmd eq 'pvesm') {
+ $cmd = shift @{$cmd_full};
+ if ($cmd eq 'free') {
+ my $volid = shift @{$cmd_full};
+ MigrationTest::Shared::remove_target_volid($volid);
+ return 0;
+ }
+ die "run_command (mocked) ssh pvesm command - implement me: ${cmd_msg}";
+ }
+ }
+ die "run_command (mocked) ssh command - implement me: ${cmd_msg}";
+ }
+ die "run_command (mocked) - implement me: ${cmd_msg}";
+ },
+);
+
+eval { PVE::QemuMigrate->migrate($test_target, undef, $test_vmid, $test_opts) };
+if (my $error = $@) {
+ file_set_contents("${RUN_DIR_PATH}/die_message", $die_message);
+}
+
+file_set_contents("${RUN_DIR_PATH}/source_volids", to_json($source_volids));
+file_set_contents("${RUN_DIR_PATH}/expected_calls", to_json($expected_calls));
+file_set_contents("${RUN_DIR_PATH}/log", $current_log);
+
+1;
diff --git a/test/MigrationTest/QmMock.pm b/test/MigrationTest/QmMock.pm
new file mode 100644
index 0000000..6d72bec
--- /dev/null
+++ b/test/MigrationTest/QmMock.pm
@@ -0,0 +1,137 @@
+package MigrationTest::QmMock;
+
+use strict;
+use warnings;
+
+use JSON;
+use Test::MockModule;
+
+use MigrationTest::Shared;
+
+use PVE::API2::Qemu;
+use PVE::Storage;
+use PVE::Tools qw(file_set_contents file_get_contents);
+
+use PVE::CLIHandler;
+use base qw(PVE::CLIHandler);
+
+my $RUN_DIR_PATH = './MigrationTest/run/';
+
+my $target_volids = decode_json(file_get_contents("${RUN_DIR_PATH}/target_volids"));
+my $migrate_params = decode_json(file_get_contents("${RUN_DIR_PATH}/migrate_params"));
+my $nodename = $migrate_params->{target};
+
+my $kvm_executed = 0;
+
+sub setup_environment {
+ my $rpcenv = PVE::RPCEnvironment::init('MigrationTest::QmMock', 'cli');
+}
+
+# mock RPCEnvironment directly
+
+sub get_user {
+ return 'root@pam';
+}
+
+sub fork_worker {
+ my ($self, $dtype, $id, $user, $function, $background) = @_;
+ $function->(123456);
+ return '123456';
+}
+
+# mocked modules
+
+my $inotify_module = Test::MockModule->new("PVE::INotify");
+$inotify_module->mock(
+ nodename => sub {
+ return $nodename;
+ },
+);
+
+$MigrationTest::Shared::qemu_server_module->mock(
+ nodename => sub {
+ return $nodename;
+ },
+ config_to_command => sub {
+ return [ 'mocked_kvm_command' ];
+ },
+);
+
+my $qemu_server_helpers_module = Test::MockModule->new("PVE::QemuServer::Helpers");
+$qemu_server_helpers_module->mock(
+ vm_running_locally => sub {
+ return $kvm_executed;
+ },
+);
+
+my $disk_counter = 10;
+
+$MigrationTest::Shared::storage_module->mock(
+ vdisk_alloc => sub {
+ my ($cfg, $storeid, $vmid, $fmt, $name, $size) = @_;
+
+ die "vdisk_alloc (mocked) - name is not expected to be set - implement me\n" if defined($name);
+
+ my $name_without_extension = "vm-${vmid}-disk-${disk_counter}";
+
+ $disk_counter++;
+
+ my $volid;
+ my $scfg = PVE::Storage::storage_config($cfg, $storeid);
+ if ($scfg->{path}) {
+ $volid = "${storeid}:${vmid}/${name_without_extension}.${fmt}";
+ } else {
+ $volid = "${storeid}:${name_without_extension}";
+ }
+
+ MigrationTest::Shared::add_target_volid($volid);
+
+ return $volid;
+ },
+);
+
+$MigrationTest::Shared::qemu_server_module->mock(
+ mon_cmd => sub { # it's imported so mock it here
+ my ($vmid, $command, %params) = @_;
+
+ if ($command eq 'nbd-server-start') {
+ return;
+ } elsif ($command eq 'nbd-server-add') {
+ return;
+ } elsif ($command eq 'qom-set') {
+ return;
+ }
+ die "mon_cmd (mocked) - implement me: $command";
+ },
+ run_command => sub {
+ my ($cmd_full, %param) = @_;
+
+ my $cmd_msg = to_json($cmd_full);
+
+ my $cmd = shift @{$cmd_full};
+
+ if ($cmd eq '/bin/systemctl') {
+ return;
+ } elsif ($cmd eq 'mocked_kvm_command') {
+ $kvm_executed = 1;
+ return 0;
+ }
+ die "run_command (mocked) - implement me: ${cmd_msg}";
+ },
+ set_migration_caps => sub {
+ return;
+ },
+ vm_migrate_alloc_nbd_disks => sub {
+ my $nbd = $MigrationTest::Shared::qemu_server_module->original('vm_migrate_alloc_nbd_disks')->(@_);
+ file_set_contents("${RUN_DIR_PATH}/nbd_info", to_json($nbd));
+ return $nbd;
+ },
+);
+
+our $cmddef = {
+ start => [ "PVE::API2::Qemu", 'vm_start', ['vmid'], { node => $nodename } ],
+};
+
+MigrationTest::QmMock->run_cli_handler();
+
+1;
diff --git a/test/MigrationTest/Shared.pm b/test/MigrationTest/Shared.pm
new file mode 100644
index 0000000..3c98b71
--- /dev/null
+++ b/test/MigrationTest/Shared.pm
@@ -0,0 +1,150 @@
+package MigrationTest::Shared;
+
+use strict;
+use warnings;
+
+use JSON;
+use Test::MockModule;
+use Socket qw(AF_INET);
+
+use PVE::QemuConfig;
+use PVE::Tools qw(file_set_contents file_get_contents lock_file_full);
+
+my $RUN_DIR_PATH = './MigrationTest/run/';
+
+my $storage_config = decode_json(file_get_contents("${RUN_DIR_PATH}/storage_config"));
+my $migrate_params = decode_json(file_get_contents("${RUN_DIR_PATH}/migrate_params"));
+my $test_vmid = $migrate_params->{vmid};
+
+# helpers
+
+sub add_target_volid {
+ my ($volid) = @_;
+
+ lock_file_full("${RUN_DIR_PATH}/target_volids.lock", undef, 0, sub {
+ my $target_volids = decode_json(file_get_contents("${RUN_DIR_PATH}/target_volids"));
+ die "target volid already present " if defined($target_volids->{$volid});
+ $target_volids->{$volid} = 1;
+ file_set_contents("${RUN_DIR_PATH}/target_volids", to_json($target_volids));
+ });
+ die $@ if $@;
+}
+
+sub remove_target_volid {
+ my ($volid) = @_;
+
+ lock_file_full("${RUN_DIR_PATH}/target_volids.lock", undef, 0, sub {
+ my $target_volids = decode_json(file_get_contents("${RUN_DIR_PATH}/target_volids"));
+ die "target volid does not exist " if !defined($target_volids->{$volid});
+ delete $target_volids->{$volid};
+ file_set_contents("${RUN_DIR_PATH}/target_volids", to_json($target_volids));
+ });
+ die $@ if $@;
+}
+
+# mocked modules
+
+our $cluster_module = Test::MockModule->new("PVE::Cluster");
+$cluster_module->mock(
+ cfs_read_file => sub {
+ my ($file) = @_;
+ if ($file eq 'datacenter.cfg') {
+ return {};
+ } elsif ($file eq 'replication.cfg') {
+ return {}; # TODO create mocked replication
+ }
+ die "cfs_read_file (mocked) - implement me: $file\n";
+ },
+ check_cfs_quorum => sub {
+ return 1;
+ },
+);
+
+our $ha_config_module = Test::MockModule->new("PVE::HA::Config");
+$ha_config_module->mock(
+ vm_is_ha_managed => sub {
+ return 0;
+ },
+);
+
+our $qemu_config_module = Test::MockModule->new("PVE::QemuConfig");
+$qemu_config_module->mock(
+ assert_config_exists_on_node => sub {
+ return;
+ },
+ load_config => sub {
+ my ($class, $vmid, $node) = @_;
+ die "trying to load wrong config: '$vmid'\n" if $vmid ne $test_vmid;
+ return decode_json(file_get_contents("${RUN_DIR_PATH}/vm_config"));
+ },
+ lock_config => sub { # no use locking here because lock needs to be local to node
+ my ($self, $vmid, $code, @param) = @_;
+ return $code->(@param);
+ },
+ write_config => sub {
+ my ($class, $vmid, $conf) = @_;
+ die "trying to write wrong config: '$vmid'\n" if $vmid ne $test_vmid;
+ file_set_contents("${RUN_DIR_PATH}/vm_config", to_json($conf));
+ },
+);
+
+our $qemu_server_module = Test::MockModule->new("PVE::QemuServer");
+$qemu_server_module->mock(
+ clear_reboot_request => sub {
+ return 1;
+ },
+);
+
+our $replication_module = Test::MockModule->new("PVE::Replication");
+$replication_module->mock(
+ run_replication => sub { #TODO add failure mode
+ my $vm_config = PVE::QemuConfig->load_config($test_vmid);
+ return PVE::QemuConfig->get_replicatable_volumes(
+ $storage_config,
+ $test_vmid,
+ $vm_config,
+ );
+ die "implement me\n";
+ },
+);
+
+our $storage_module = Test::MockModule->new("PVE::Storage");
+$storage_module->mock(
+ activate_volumes => sub {
+ return 1;
+ },
+ deactivate_volumes => sub {
+ return 1;
+ },
+ config => sub {
+ return $storage_config;
+ },
+ get_bandwidth_limit => sub {
+ return 123456;
+ },
+);
+
+our $systemd_module = Test::MockModule->new("PVE::Systemd");
+$systemd_module->mock(
+ wait_for_unit_removed => sub {
+ return;
+ },
+ enter_systemd_scope => sub {
+ return;
+ },
+);
+
+my $migrate_port_counter = 60000;
+
+our $tools_module = Test::MockModule->new("PVE::Tools");
+$tools_module->mock(
+ get_host_address_family => sub {
+ return AF_INET;
+ },
+ next_migrate_port => sub {
+ return $migrate_port_counter++;
+ },
+);
+
+1;
diff --git a/test/run_qemu_migrate_tests.pl b/test/run_qemu_migrate_tests.pl
new file mode 100755
index 0000000..4b9e391
--- /dev/null
+++ b/test/run_qemu_migrate_tests.pl
@@ -0,0 +1,382 @@
+#!/usr/bin/perl
+
+use strict;
+use warnings;
+
+use JSON;
+use Test::More;
+use Test::MockModule;
+
+use PVE::Tools qw(file_set_contents file_get_contents run_command);
+
+my $QM_LIB_PATH = '..';
+my $MIGRATE_LIB_PATH = '..';
+my $RUN_DIR_PATH = './MigrationTest/run/';
+
+# test configuration shared by all tests
+
+my $storage_config = {
+ ids => {
+ local => {
+ content => {
+ images => 1,
+ },
+ path => "/var/lib/vz",
+ type => "dir",
+ shared => 0,
+ },
+ "local-lvm" => {
+ content => {
+ images => 1,
+ },
+ nodes => {
+ pve0 => 1,
+ pve1 => 1,
+ },
+ type => "lvmthin",
+ thinpool => "data",
+ vgname => "pve",
+ },
+ "local-zfs" => {
+ content => {
+ images => 1,
+ rootdir => 1,
+ },
+ pool => "rpool/data",
+ sparse => 1,
+ type => "zfspool",
+ },
+ "rbd-store" => {
+ monhost => "127.0.0.42,127.0.0.21,::1",
+ content => {
+ images => 1,
+ },
+ type => "rbd",
+ pool => "cpool",
+ username => "admin",
+ shared => 1,
+ },
+ "local-dir" => {
+ content => {
+ images => 1,
+ },
+ path => "/some/dir/",
+ type => "dir",
+ },
+ },
+};
+
+my $vm_configs = {
+ 1033 => {
+ 'digest' => '589436937d69b9ec07658cfb9f8ad37e435e2d7e',
+ 'sockets' => 2,
+ 'scsihw' => 'virtio-scsi-pci',
+ 'memory' => 4096,
+ 'bootdisk' => 'scsi0',
+ 'numa' => 0,
+ 'smbios1' => 'uuid=e01e4c73-46f1-47c8-af79-288fdf6b7462',
+ 'ipconfig0' => 'ip=103.214.69.10/25,gw=103.214.69.1',
+ 'ostype' => 'l26',
+ 'vmgenid' => 'af47c000-eb0c-48e8-8991-ca4593cd6916',
+ 'ide0' => 'rbd-store:vm-1033-cloudinit,media=cdrom,size=4M',
+ 'ide2' => 'none,media=cdrom',
+ 'scsi0' => 'rbd-store:vm-1033-disk-1,size=1G',
+ 'cores' => 1,
+ 'name' => 'VM1033',
+ 'net0' => 'virtio=4E:F1:82:6D:D7:4B,bridge=vmbr0,firewall=1,rate=10',
+ },
+ 149 => {
+ 'agent' => '0',
+ 'bootdisk' => 'scsi0',
+ 'cores' => 1,
+ 'digest' => 'b32096b0dd49742385dd437036deed232946a631',
+ 'hotplug' => 'disk,network,usb,memory,cpu',
+ 'ide2' => 'none,media=cdrom',
+ 'memory' => 4096,
+ 'name' => 'asdf',
+ 'net0' => 'virtio=52:5D:7E:62:85:97,bridge=vmbr1',
+ 'numa' => 1,
+ 'ostype' => 'l26',
+ 'scsi0' => 'local-lvm:vm-149-disk-0,format=raw,size=4G',
+ 'scsi1' => 'local-dir:149/vm-149-disk-0.qcow2,format=qcow2,size=1G',
+ 'scsihw' => 'virtio-scsi-pci',
+ 'smbios1' => 'uuid=e980bd43-a405-42e2-b5f4-31efe6517460',
+ 'sockets' => 1,
+ 'startup' => 'order=2',
+ 'vmgenid' => '36c6c50c-6ef5-4adc-9b6f-6ba9c8071db0',
+ },
+};
+
+my $source_vdisks = {
+ 'local-dir' => [
+ {
+ 'ctime' => 1589439681,
+ 'format' => 'qcow2',
+ 'parent' => undef,
+ 'size' => 1073741824,
+ 'used' => 335872,
+ 'vmid' => '149',
+ 'volid' => 'local-dir:149/vm-149-disk-0.qcow2',
+ },
+ ],
+ 'local-lvm' => [
+ {
+ 'ctime' => '1589277334',
+ 'format' => 'raw',
+ 'size' => 4294967296,
+ 'vmid' => '149',
+ 'volid' => 'local-lvm:vm-149-disk-0',
+ },
+ ],
+};
+
+#TODO add default expected_calls for successful migration (different depending on online/offline)
+
+# helpers
+
+sub get_patched_config {
+ my ($vmid, $patch) = @_;
+
+ my $config = { %{$vm_configs->{$vmid}} };
+ patch_config($config, $patch) if defined($patch);
+
+ return $config;
+}
+
+sub patch_config {
+ my ($config, $patch) = @_;
+
+ foreach my $key (keys %{$patch}) {
+ if (defined($patch->{$key})) {
+ $config->{$key} = $patch->{$key};
+ } else { # use undef value for deletion
+ delete $config->{$key};
+ }
+ }
+}
+
+sub volids_for_vm {
+ my ($vmid) = @_;
+
+ my $res = {};
+ foreach my $storeid (keys %{$source_vdisks}) {
+ $res = {
+ %{$res},
+ map { $_->{vmid} eq $vmid ? ($_->{volid} => 1) : () } @{$source_vdisks->{$storeid}}
+ };
+ }
+ return $res;
+}
+
+my $tests = [
+# NOTE that QmMock.pm takes its nodename from the target in migrate_params; the tests below use 'pve1'
+# each test consists of the following:
+# name - unique name for the test
+# target - hostname of target node
+# vmid - ID of the VM to migrate
+# opts - options for the migrate() call
+# target_volids - hash of volids on the target at the beginning
+# vm_status - hash with running, runningmachine, runningcpu
+# expected_calls - hash whose keys are calls which are required
+# to be made if the migration gets far enough
+# expect_die - expect the migration call to fail with an error message containing this
+# expected - hash consisting of:
+# source_volids - hash of volids expected on the source
+# target_volids - hash of volids expected on the target
+# vm_config - vm configuration hash
+# vm_status - hash with running, runningmachine, runningcpu
+#
+ {
+ name => '149_storage_not_available',
+ target => 'pve2',
+ vmid => 149,
+ vm_status => {
+ running => 0,
+ },
+ expected_calls => {},
+ expect_die => "storage 'local-lvm' is not available on node 'pve2'",
+ expected => {
+ source_volids => volids_for_vm(149),
+ target_volids => {},
+ vm_config => $vm_configs->{149},
+ vm_status => {
+ running => 0,
+ },
+ },
+ },
+ {
+ name => '149_running',
+ target => 'pve1',
+ vmid => 149,
+ vm_status => {
+ running => 1,
+ runningmachine => 'pc-q35-5.0+pve0',
+ runningcpu => 'host,+kvm_pv_eoi,+kvm_pv_unhalt',
+ },
+ opts => {
+ online => 1,
+ 'with-local-disks' => 1,
+ },
+ expected_calls => {
+ move_config_to_node => 1,
+ ssh_qm_start => 1,
+ vm_stop => 1,
+ },
+ expected => {
+ source_volids => {},
+ target_volids => {
+ 'local-lvm:vm-149-disk-10' => 1,
+ 'local-dir:149/vm-149-disk-11.qcow2' => 1,
+ },
+ vm_config => get_patched_config(149, {
+ scsi0 => 'local-lvm:vm-149-disk-10,format=raw,size=4G',
+ scsi1 => 'local-dir:149/vm-149-disk-11.qcow2,format=qcow2,size=1G',
+ snapshots => {},
+ }),
+ vm_status => {
+ running => 1,
+ runningmachine => 'pc-q35-5.0+pve0',
+ runningcpu => 'host,+kvm_pv_eoi,+kvm_pv_unhalt',
+ },
+ },
+ },
+ {
+ name => '149_running_drive_mirror_fail',
+ target => 'pve1',
+ vmid => 149,
+ vm_status => {
+ running => 1,
+ runningmachine => 'pc-q35-5.0+pve0',
+ runningcpu => 'host,+kvm_pv_eoi,+kvm_pv_unhalt',
+ },
+ opts => {
+ online => 1,
+ 'with-local-disks' => 1,
+ },
+ expected_calls => {},
+ fail_config => {
+ 'qemu_drive_mirror' => 'scsi1',
+ },
+ expected => {
+ source_volids => volids_for_vm(149),
+ target_volids => {},
+ vm_config => get_patched_config(149, {
+ snapshots => {},
+ }),
+ vm_status => {
+ running => 1,
+ runningmachine => 'pc-q35-5.0+pve0',
+ runningcpu => 'host,+kvm_pv_eoi,+kvm_pv_unhalt',
+ },
+ },
+ },
+ {
+ # FIXME test is not deterministic, because sometimes the second volume is migrated
+ # first and there is no cleanup of remotedisks yet, when failing this early
+ name => '149_storage_migrate_fail',
+ target => 'pve1',
+ vmid => 149,
+ vm_status => {
+ running => 0,
+ },
+ opts => {
+ online => 1,
+ 'with-local-disks' => 1,
+ },
+ fail_config => {
+ 'storage_migrate' => 'local-lvm:vm-149-disk-0',
+ },
+ expected_calls => {},
+ expect_die => "storage_migrate 'local-lvm:vm-149-disk-0' error",
+ expected => {
+ source_volids => volids_for_vm(149),
+ target_volids => {},
+ vm_config => get_patched_config(149, {
+ snapshots => {},
+ }),
+ vm_status => {
+ running => 0,
+ },
+ },
+ },
+];
+
+mkdir $RUN_DIR_PATH;
+
+file_set_contents("${RUN_DIR_PATH}/storage_config", to_json($storage_config));
+file_set_contents("${RUN_DIR_PATH}/source_vdisks", to_json($source_vdisks));
+
+foreach my $test (@$tests) {
+ my $name = $test->{name};
+ my $expect_die = $test->{expect_die};
+ my $expected = $test->{expected};
+
+ my $source_volids = volids_for_vm($test->{vmid});
+ my $target_volids = $test->{target_volids} // {};
+
+ my $config_patch = $test->{config_patch};
+ my $vm_config = get_patched_config($test->{vmid}, $config_patch);
+
+ my $fail_config = $test->{fail_config} // {};
+ my $storage_migrate_map = $test->{storage_migrate_map} // {};
+
+ my $migrate_params = {
+ target => $test->{target},
+ vmid => $test->{vmid},
+ opts => $test->{opts},
+ };
+
+ file_set_contents("${RUN_DIR_PATH}/source_volids", to_json($source_volids));
+ file_set_contents("${RUN_DIR_PATH}/target_volids", to_json($target_volids));
+ file_set_contents("${RUN_DIR_PATH}/vm_config", to_json($vm_config));
+ file_set_contents("${RUN_DIR_PATH}/vm_status", to_json($test->{vm_status}));
+ file_set_contents("${RUN_DIR_PATH}/expected_calls", to_json($test->{expected_calls}));
+ file_set_contents("${RUN_DIR_PATH}/fail_config", to_json($fail_config));
+ file_set_contents("${RUN_DIR_PATH}/storage_migrate_map", to_json($storage_migrate_map));
+ file_set_contents("${RUN_DIR_PATH}/migrate_params", to_json($migrate_params));
+
+ $ENV{QM_LIB_PATH} = $QM_LIB_PATH;
+ # TODO remove pve-guest-common path once the move_config_to_node patch has landed
+ my $exitcode = run_command([
+ 'perl',
+ '-I../../pve-guest-common',
+ "-I${MIGRATE_LIB_PATH}",
+ "-I${MIGRATE_LIB_PATH}/test",
+ "${MIGRATE_LIB_PATH}/test/MigrationTest/QemuMigrateMock.pm",
+ ], noerr => 1); # noerr, so a failing mock-script does not abort the whole test run
+
+ my $die_message = eval { file_get_contents("${RUN_DIR_PATH}/die_message") };
+
+ if (defined($expect_die)) {
+ like($die_message, qr/\Q${expect_die}\E/, $name);
+ } elsif (!defined($expect_die) && $exitcode) {
+ fail($name);
+ note("mocked migrate call failed, but it was not expected - check log");
+ }
+
+ my $expected_calls = decode_json(file_get_contents("${RUN_DIR_PATH}/expected_calls"));
+ foreach my $call (keys %{$expected_calls}) {
+ fail($name);
+ note("expected call '$call' was not made");
+ }
+
+ my $actual = {
+ source_volids => decode_json(file_get_contents("${RUN_DIR_PATH}/source_volids")),
+ target_volids => decode_json(file_get_contents("${RUN_DIR_PATH}/target_volids")),
+ vm_config => decode_json(file_get_contents("${RUN_DIR_PATH}/vm_config")),
+ vm_status => decode_json(file_get_contents("${RUN_DIR_PATH}/vm_status")),
+ };
+
+# use Data::Dumper;
+# $Data::Dumper::Sortkeys = 1;
+# print Dumper($actual);
+# print Dumper($expected);
+
+ is_deeply($actual, $expected, $name);
+
+ rename("${RUN_DIR_PATH}/log", "${RUN_DIR_PATH}/log-$name") or die "rename log failed\n";
+}
+
+done_testing();
--
2.20.1