public inbox for pve-devel@lists.proxmox.com
* [pve-devel] [PATCH qemu-server 1/2] migration: avoid crash with heavy IO on local VM disk
@ 2024-05-03  8:34 Fiona Ebner
  2024-05-03  8:34 ` [pve-devel] [PATCH qemu-server 2/2] migration: add missing use statements Fiona Ebner
  2024-05-23  9:08 ` [pve-devel] [PATCH qemu-server 1/2] migration: avoid crash with heavy IO on local VM disk Fabian Grünbichler
  0 siblings, 2 replies; 7+ messages in thread
From: Fiona Ebner @ 2024-05-03  8:34 UTC (permalink / raw)
  To: pve-devel

There is a possibility that the drive-mirror job is not yet done when
the migration wants to inactivate the source's block drives:

> bdrv_co_write_req_prepare: Assertion `!(bs->open_flags & BDRV_O_INACTIVE)' failed.

This can be prevented by using the 'write-blocking' copy mode (also
called active mode) for the mirror. However, with active mode, the
guest write speed is limited by the synchronous writes to the mirror
target. For this reason, a way to start out in the faster 'background'
mode and later switch to active mode was introduced in QEMU 8.2.

The switch is done once the mirror job for all drives is ready to be
completed to reduce the time spent where guest IO is limited.
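
For reference, the QMP interaction behind this boils down to one
'block-job-change' call per mirror job, followed by polling
'query-block-jobs' until the job reports 'actively-synced'. A minimal
sketch (the job id 'drive-scsi0' is made up for illustration; the real
helper is in the diff below):

    # switch a running mirror job to write-blocking (active) mode
    mon_cmd($vmid, 'block-job-change',
	id => 'drive-scsi0', type => 'mirror', 'copy-mode' => 'write-blocking');

    # each entry returned by query-block-jobs exposes an 'actively-synced'
    # boolean once the switch has actually taken effect
    my $jobs = mon_cmd($vmid, 'query-block-jobs');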

Reported rarely, but steadily over the years:
https://forum.proxmox.com/threads/78954/post-353651
https://forum.proxmox.com/threads/78954/post-380015
https://forum.proxmox.com/threads/100020/post-431660
https://forum.proxmox.com/threads/111831/post-482425
https://forum.proxmox.com/threads/111831/post-499807
https://forum.proxmox.com/threads/137849/

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 PVE/QemuMigrate.pm                    |  9 ++++++
 PVE/QemuServer.pm                     | 41 +++++++++++++++++++++++++++
 test/MigrationTest/QemuMigrateMock.pm |  3 ++
 3 files changed, 53 insertions(+)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 8d9b35ae..649cfec4 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -1142,6 +1142,15 @@ sub phase2 {
 	    $self->log('info', "$drive: start migration to $nbd_uri");
 	    PVE::QemuServer::qemu_drive_mirror($vmid, $drive, $nbd_uri, $vmid, undef, $self->{storage_migration_jobs}, 'skip', undef, $bwlimit, $bitmap);
 	}
+
+	my $kvm_version = PVE::QemuServer::kvm_user_version();
+	if (min_version($kvm_version, 8, 2)) {
+	    $self->log('info', "switching mirror jobs to actively synced mode");
+	    PVE::QemuServer::qemu_drive_mirror_switch_to_active_mode(
+		$vmid,
+		$self->{storage_migration_jobs},
+	    );
+	}
     }
 
     $self->log('info', "starting online/live migration on $migrate_uri");
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 82e7d6a6..5bc313e2 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -8118,6 +8118,47 @@ sub qemu_blockjobs_cancel {
     }
 }
 
+# Callers should version guard this (only available with a binary >= QEMU 8.2)
+sub qemu_drive_mirror_switch_to_active_mode {
+    my ($vmid, $jobs) = @_;
+
+    my $switching = {};
+
+    for my $job (sort keys $jobs->%*) {
+	print "$job: switching to actively synced mode\n";
+
+	eval {
+	    mon_cmd(
+		$vmid,
+		"block-job-change",
+		id => $job,
+		type => 'mirror',
+		'copy-mode' => 'write-blocking',
+	    );
+	    $switching->{$job} = 1;
+	};
+	die "could not switch mirror job $job to active mode - $@\n" if $@;
+    }
+
+    while (1) {
+	my $stats = mon_cmd($vmid, "query-block-jobs");
+
+	my $running_jobs = {};
+	$running_jobs->{$_->{device}} = $_ for $stats->@*;
+
+	for my $job (sort keys $switching->%*) {
+	    if ($running_jobs->{$job}->{'actively-synced'}) {
+		print "$job: successfully switched to actively synced mode\n";
+		delete $switching->{$job};
+	    }
+	}
+
+	last if scalar(keys $switching->%*) == 0;
+
+	sleep 1;
+    }
+}
+
 # Check for bug #4525: drive-mirror will open the target drive with the same aio setting as the
 # source, but some storages have problems with io_uring, sometimes even leading to crashes.
 my sub clone_disk_check_io_uring {
diff --git a/test/MigrationTest/QemuMigrateMock.pm b/test/MigrationTest/QemuMigrateMock.pm
index 1efabe24..2b9e91b3 100644
--- a/test/MigrationTest/QemuMigrateMock.pm
+++ b/test/MigrationTest/QemuMigrateMock.pm
@@ -152,6 +152,9 @@ $MigrationTest::Shared::qemu_server_module->mock(
 	}
 	return;
     },
+    qemu_drive_mirror_switch_to_active_mode => sub {
+	return;
+    },
     set_migration_caps => sub {
 	return;
     },
-- 
2.39.2




* [pve-devel] [PATCH qemu-server 2/2] migration: add missing use statements
  2024-05-03  8:34 [pve-devel] [PATCH qemu-server 1/2] migration: avoid crash with heavy IO on local VM disk Fiona Ebner
@ 2024-05-03  8:34 ` Fiona Ebner
  2024-05-23  9:03   ` [pve-devel] applied: " Fabian Grünbichler
  2024-05-23  9:08 ` [pve-devel] [PATCH qemu-server 1/2] migration: avoid crash with heavy IO on local VM disk Fabian Grünbichler
  1 sibling, 1 reply; 7+ messages in thread
From: Fiona Ebner @ 2024-05-03  8:34 UTC (permalink / raw)
  To: pve-devel

There are functions from all of these modules being used, but without
importing them first.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 PVE/QemuMigrate.pm | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 649cfec4..312ddaf8 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -7,14 +7,17 @@ use IO::File;
 use IPC::Open2;
 use Time::HiRes qw( usleep );
 
+use PVE::AccessControl;
 use PVE::Cluster;
 use PVE::Format qw(render_bytes);
 use PVE::GuestHelpers qw(safe_boolean_ne safe_string_ne);
 use PVE::INotify;
+use PVE::JSONSchema;
 use PVE::RPCEnvironment;
 use PVE::Replication;
 use PVE::ReplicationConfig;
 use PVE::ReplicationState;
+use PVE::Storage::Plugin;
 use PVE::Storage;
 use PVE::StorageTunnel;
 use PVE::Tools;
-- 
2.39.2




* [pve-devel] applied: [PATCH qemu-server 2/2] migration: add missing use statements
  2024-05-03  8:34 ` [pve-devel] [PATCH qemu-server 2/2] migration: add missing use statements Fiona Ebner
@ 2024-05-23  9:03   ` Fabian Grünbichler
  2024-05-28  7:50     ` Fiona Ebner
  0 siblings, 1 reply; 7+ messages in thread
From: Fabian Grünbichler @ 2024-05-23  9:03 UTC (permalink / raw)
  To: Proxmox VE development discussion

but a slight follow-up question below

On May 3, 2024 10:34 am, Fiona Ebner wrote:
> There are functions from all of these modules being used, but without
> importing them first.
> 
> Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
> ---
>  PVE/QemuMigrate.pm | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
> index 649cfec4..312ddaf8 100644
> --- a/PVE/QemuMigrate.pm
> +++ b/PVE/QemuMigrate.pm
> @@ -7,14 +7,17 @@ use IO::File;
>  use IPC::Open2;
>  use Time::HiRes qw( usleep );
>  
> +use PVE::AccessControl;
>  use PVE::Cluster;
>  use PVE::Format qw(render_bytes);
>  use PVE::GuestHelpers qw(safe_boolean_ne safe_string_ne);
>  use PVE::INotify;
> +use PVE::JSONSchema;
>  use PVE::RPCEnvironment;
>  use PVE::Replication;
>  use PVE::ReplicationConfig;
>  use PVE::ReplicationState;
> +use PVE::Storage::Plugin;

this one is only used for a single check_connection call in `prepare`:

	if ($scfg->{shared}) {
	    # PVE::Storage::activate_storage checks this for non-shared storages
	    my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
	    warn "Used shared storage '$sid' is not online on source node!\n"
		if !$plugin->check_connection($sid, $scfg);
	}

can't we get rid of this? either we are live-migrating, then the storage
is already active and in use. or we are migrating offline, then shared
storages are irrelevant (unless it's a remote migration, but then we
will activate the storage anyhow)?

or, couldn't we replace it with an eval-ed call to activate_storage for
the same effect (to get a warning if the storage is somehow broken, even
though it is not a pre-requisite for migration)?
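
A minimal sketch of what that replacement could look like (assuming the
usual $self->{storecfg} from QemuMigrate.pm is in scope at that point in
`prepare`; purely illustrative):

    # warn, but do not fail the migration, if the shared storage looks broken
    eval { PVE::Storage::activate_storage($self->{storecfg}, $sid); };
    warn "Used shared storage '$sid' is not online on source node - $@\n" if $@;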

I'm currently trying to eliminate direct calls to plugin code (i.e., not
via PVE::Storage itself as entry point), and this seems like low-hanging
fruit ;) the same also exists in pve-container for migration..

>  use PVE::Storage;
>  use PVE::StorageTunnel;
>  use PVE::Tools;
> -- 
> 2.39.2
> 
> 
> 



* Re: [pve-devel] [PATCH qemu-server 1/2] migration: avoid crash with heavy IO on local VM disk
  2024-05-03  8:34 [pve-devel] [PATCH qemu-server 1/2] migration: avoid crash with heavy IO on local VM disk Fiona Ebner
  2024-05-03  8:34 ` [pve-devel] [PATCH qemu-server 2/2] migration: add missing use statements Fiona Ebner
@ 2024-05-23  9:08 ` Fabian Grünbichler
  2024-05-23  9:29   ` Thomas Lamprecht
  2024-05-28  8:01   ` Fiona Ebner
  1 sibling, 2 replies; 7+ messages in thread
From: Fabian Grünbichler @ 2024-05-23  9:08 UTC (permalink / raw)
  To: Proxmox VE development discussion

one nit below, otherwise very nice to have this!

On May 3, 2024 10:34 am, Fiona Ebner wrote:
> There is a possibility that the drive-mirror job is not yet done when
> the migration wants to inactivate the source's blockdrives:
> 
>> bdrv_co_write_req_prepare: Assertion `!(bs->open_flags & BDRV_O_INACTIVE)' failed.
> 
> This can be prevented by using the 'write-blocking' copy mode (also
> called active mode) for the mirror. However, with active mode, the
> guest write speed is limited by the synchronous writes to the mirror
> target. For this reason, a way to start out in the faster 'background'
> mode and later switch to active mode was introduced in QEMU 8.2.
> 
> The switch is done once the mirror job for all drives is ready to be
> completed to reduce the time spent where guest IO is limited.
> 
> Reported rarely, but steadily over the years:
> https://forum.proxmox.com/threads/78954/post-353651
> https://forum.proxmox.com/threads/78954/post-380015
> https://forum.proxmox.com/threads/100020/post-431660
> https://forum.proxmox.com/threads/111831/post-482425
> https://forum.proxmox.com/threads/111831/post-499807
> https://forum.proxmox.com/threads/137849/
> 
> Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
> ---
>  PVE/QemuMigrate.pm                    |  9 ++++++
>  PVE/QemuServer.pm                     | 41 +++++++++++++++++++++++++++
>  test/MigrationTest/QemuMigrateMock.pm |  3 ++
>  3 files changed, 53 insertions(+)
> 
> diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
> index 8d9b35ae..649cfec4 100644
> --- a/PVE/QemuMigrate.pm
> +++ b/PVE/QemuMigrate.pm
> @@ -1142,6 +1142,15 @@ sub phase2 {
>  	    $self->log('info', "$drive: start migration to $nbd_uri");
>  	    PVE::QemuServer::qemu_drive_mirror($vmid, $drive, $nbd_uri, $vmid, undef, $self->{storage_migration_jobs}, 'skip', undef, $bwlimit, $bitmap);
>  	}
> +
> +	my $kvm_version = PVE::QemuServer::kvm_user_version();

wouldn't this need to check the *running* qemu binary for the migrated
VM, not the *installed* qemu binary on the system?

> +	if (min_version($kvm_version, 8, 2)) {
> +	    $self->log('info', "switching mirror jobs to actively synced mode");
> +	    PVE::QemuServer::qemu_drive_mirror_switch_to_active_mode(
> +		$vmid,
> +		$self->{storage_migration_jobs},
> +	    );
> +	}
>      }
>  
>      $self->log('info', "starting online/live migration on $migrate_uri");
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index 82e7d6a6..5bc313e2 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -8118,6 +8118,47 @@ sub qemu_blockjobs_cancel {
>      }
>  }
>  
> +# Callers should version guard this (only available with a binary >= QEMU 8.2)
> +sub qemu_drive_mirror_switch_to_active_mode {
> +    my ($vmid, $jobs) = @_;
> +
> +    my $switching = {};
> +
> +    for my $job (sort keys $jobs->%*) {
> +	print "$job: switching to actively synced mode\n";
> +
> +	eval {
> +	    mon_cmd(
> +		$vmid,
> +		"block-job-change",
> +		id => $job,
> +		type => 'mirror',
> +		'copy-mode' => 'write-blocking',
> +	    );
> +	    $switching->{$job} = 1;
> +	};
> +	die "could not switch mirror job $job to active mode - $@\n" if $@;
> +    }
> +
> +    while (1) {
> +	my $stats = mon_cmd($vmid, "query-block-jobs");
> +
> +	my $running_jobs = {};
> +	$running_jobs->{$_->{device}} = $_ for $stats->@*;
> +
> +	for my $job (sort keys $switching->%*) {
> +	    if ($running_jobs->{$job}->{'actively-synced'}) {
> +		print "$job: successfully switched to actively synced mode\n";
> +		delete $switching->{$job};
> +	    }
> +	}
> +
> +	last if scalar(keys $switching->%*) == 0;
> +
> +	sleep 1;
> +    }
> +}
> +
>  # Check for bug #4525: drive-mirror will open the target drive with the same aio setting as the
>  # source, but some storages have problems with io_uring, sometimes even leading to crashes.
>  my sub clone_disk_check_io_uring {
> diff --git a/test/MigrationTest/QemuMigrateMock.pm b/test/MigrationTest/QemuMigrateMock.pm
> index 1efabe24..2b9e91b3 100644
> --- a/test/MigrationTest/QemuMigrateMock.pm
> +++ b/test/MigrationTest/QemuMigrateMock.pm
> @@ -152,6 +152,9 @@ $MigrationTest::Shared::qemu_server_module->mock(
>  	}
>  	return;
>      },
> +    qemu_drive_mirror_switch_to_active_mode => sub {
> +	return;
> +    },
>      set_migration_caps => sub {
>  	return;
>      },
> -- 
> 2.39.2
> 
> 
> 



* Re: [pve-devel] [PATCH qemu-server 1/2] migration: avoid crash with heavy IO on local VM disk
  2024-05-23  9:08 ` [pve-devel] [PATCH qemu-server 1/2] migration: avoid crash with heavy IO on local VM disk Fabian Grünbichler
@ 2024-05-23  9:29   ` Thomas Lamprecht
  2024-05-28  8:01   ` Fiona Ebner
  1 sibling, 0 replies; 7+ messages in thread
From: Thomas Lamprecht @ 2024-05-23  9:29 UTC (permalink / raw)
  To: Proxmox VE development discussion, Fabian Grünbichler

On 23/05/2024 at 11:08, Fabian Grünbichler wrote:
>> +	my $kvm_version = PVE::QemuServer::kvm_user_version();
> wouldn't this need to check the *running* qemu binary for the migrated
> VM, not the *installed* qemu binary on the system?

Yes, this would need to check the running QEMU version via QMP,
as otherwise the new command will also get issued to VMs started
before the update, which will then fail.
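
A sketch of how the running binary could be queried instead, via the
'query-version' QMP command (purely illustrative, not necessarily how a
v2 would do it):

    my $res = mon_cmd($vmid, 'query-version');
    my ($major, $minor) = $res->{qemu}->@{qw(major minor)};
    if ($major > 8 || ($major == 8 && $minor >= 2)) {
	# only now is it safe to issue block-job-change to this instance
    }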



* Re: [pve-devel] applied: [PATCH qemu-server 2/2] migration: add missing use statements
  2024-05-23  9:03   ` [pve-devel] applied: " Fabian Grünbichler
@ 2024-05-28  7:50     ` Fiona Ebner
  0 siblings, 0 replies; 7+ messages in thread
From: Fiona Ebner @ 2024-05-28  7:50 UTC (permalink / raw)
  To: Proxmox VE development discussion, Fabian Grünbichler

On 23.05.24 at 11:03, Fabian Grünbichler wrote:
> but slight follow-up question below
> 
> On May 3, 2024 10:34 am, Fiona Ebner wrote:
>> +use PVE::Storage::Plugin;
> 
> this one is only used for a single check_connection call in `prepare`:
> 
> 	if ($scfg->{shared}) {
> 	    # PVE::Storage::activate_storage checks this for non-shared storages
> 	    my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
> 	    warn "Used shared storage '$sid' is not online on source node!\n"
> 		if !$plugin->check_connection($sid, $scfg);
> 	}
> 
> can't we get rid of this? either we are live-migrating, then the storage
> is already active and in use. or we are migrating offline, then shared
> storages are irrelevant (unless it's a remote migration, but then we
> will activate the storage anyhow)?
> 

This warning check was introduced by

qemu-server: 73f5ee92 ("fix #971: don't activate shared storage in
offline migration")

pve-container: 20ab40f ("fix #971: don't activate shared storage in
offline migration")

I experimented a bit with what happens when you live-migrate while QEMU
cannot talk to an NFS volume. Migration will fail with error messages on the
target side:

> May 28 09:30:27 pve8a1 QEMU[46691]: kvm: Failed to load virtio_pci/modern_queue_state:used
> May 28 09:30:27 pve8a1 QEMU[46691]: kvm: Failed to load virtio_pci/modern_state:vqs
> May 28 09:30:27 pve8a1 QEMU[46691]: kvm: Failed to load virtio/extra_state:extra_state
> May 28 09:30:27 pve8a1 QEMU[46691]: kvm: Failed to load virtio-blk:virtio
> May 28 09:30:27 pve8a1 QEMU[46691]: kvm: error while loading state for instance 0x0 of device '0000:00:0a.0/virtio-blk'
> May 28 09:30:27 pve8a1 QEMU[46691]: kvm: load of migration failed: Input/output error

I'd guess that's because the source side never gets to write the state
for the disk.

However, I can imagine there are cases where our storage layer's
check_connection/activate would fail on the source side, but work on the
target side and QEMU still can talk to the volume. Then the warning
might even be more confusing than helpful.

> or, couldn't we replace it with an eval-ed call to activate_storage for
> the same effect (to get a warning if the storage is somehow broken, even
> though it is not a pre-requisite for migration)?
> 

Both variants are fine by me. The warning is good to have in case of
hard-to-understand errors like mentioned above, but even then not super
useful IMHO, so I'd slightly prefer removal.

> I'm currently trying to eliminate direct calls to plugin code (i.e., not
> via PVE::Storage itself as entry point), and this seems like low-hanging
> fruit ;) the same also exists in pve-container for migration..
> 

For containers, the call is even guarded by !$remote additionally ;) And
there is no live-migration, so I think the warning is even less useful.



* Re: [pve-devel] [PATCH qemu-server 1/2] migration: avoid crash with heavy IO on local VM disk
  2024-05-23  9:08 ` [pve-devel] [PATCH qemu-server 1/2] migration: avoid crash with heavy IO on local VM disk Fabian Grünbichler
  2024-05-23  9:29   ` Thomas Lamprecht
@ 2024-05-28  8:01   ` Fiona Ebner
  1 sibling, 0 replies; 7+ messages in thread
From: Fiona Ebner @ 2024-05-28  8:01 UTC (permalink / raw)
  To: Proxmox VE development discussion, Fabian Grünbichler

On 23.05.24 at 11:08, Fabian Grünbichler wrote:
> one nit below, otherwise very nice to have this!
> 
> On May 3, 2024 10:34 am, Fiona Ebner wrote:
>> diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
>> index 8d9b35ae..649cfec4 100644
>> --- a/PVE/QemuMigrate.pm
>> +++ b/PVE/QemuMigrate.pm
>> @@ -1142,6 +1142,15 @@ sub phase2 {
>>  	    $self->log('info', "$drive: start migration to $nbd_uri");
>>  	    PVE::QemuServer::qemu_drive_mirror($vmid, $drive, $nbd_uri, $vmid, undef, $self->{storage_migration_jobs}, 'skip', undef, $bwlimit, $bitmap);
>>  	}
>> +
>> +	my $kvm_version = PVE::QemuServer::kvm_user_version();
> 
> wouldn't this need to check the *running* qemu binary for the migrated
> VM, not the *installed* qemu binary on the system?
> 

Good catch! Yes, I copied/adapted this from the check for replication
which also does it wrong. I'll send a v2 fixing it (and remove the check
for replication for version 4.2 which is not needed nowadays anymore).


