From: "Fabian Grünbichler" <f.gruenbichler@proxmox.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [POC storage v3 14/34] add backup provider example
Date: Wed, 13 Nov 2024 11:52:57 +0100
Message-ID: <1731491839.jxatj6iypi.astroid@yuna.none>
In-Reply-To: <20241107165146.125935-15-f.ebner@proxmox.com>

didn't give this too close a look since it's an example only, but the
hard-coded NBD indices make me wonder whether we want to have some sort
of mechanism to "reserve" NBD slots while using them, at least for *our*
usage?
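
something roughly like the following could be shared between qemu-server
and backup providers - just a sketch, none of this exists as a PVE helper
yet and the names are made up. The kernel only creates /sys/block/nbdX/pid
while a device is connected, and a flock'ed file under /run would guard
against two of *our* callers grabbing the same free slot at the same time:

    use Fcntl qw(:flock O_CREAT O_RDWR);

    # sketch: find a free /dev/nbdX and reserve it for the caller
    sub reserve_free_nbd_device {
        my ($max_devices) = @_;
        $max_devices //= 16;

        for (my $i = 0; $i < $max_devices; $i++) {
            next if -e "/sys/block/nbd$i/pid"; # already connected

            my $lockfile = "/run/lock/pve-nbd$i.lock";
            sysopen(my $lock_fh, $lockfile, O_CREAT | O_RDWR)
                or die "unable to open $lockfile - $!\n";
            if (!flock($lock_fh, LOCK_EX | LOCK_NB)) {
                close($lock_fh);
                next; # reserved by another process
            }
            # caller keeps $lock_fh open for as long as the device is in use
            return ("/dev/nbd$i", $lock_fh);
        }

        die "unable to find a free NBD device\n";
    }
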
On November 7, 2024 5:51 pm, Fiona Ebner wrote:
> The example uses a simple directory structure to save the backups,
> grouped by guest ID. VM backups are saved as configuration files and
> qcow2 images, with backing files when doing incremental backups.
> Container backups are saved as configuration files and a tar file or
> squashfs image (added to test the 'directory' restore mechanism).
>
> Whether to use incremental VM backups and which backup mechanisms to
> use can be configured in the storage configuration.
>
> The 'nbdinfo' binary from the 'libnbd-bin' package is required for
> backup mechanism 'nbd' for VM backups, the 'mksquashfs' binary from the
> 'squashfs-tools' package is required for backup mechanism 'squashfs' for
> containers.
>
> Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
> ---
>
> Changes in v3:
> * adapt to API changes
> * use NBD export when restoring VM image, to make incremental backups
> using qcow2 chains work again
>
> .../BackupProvider/Plugin/DirectoryExample.pm | 697 ++++++++++++++++++
> src/PVE/BackupProvider/Plugin/Makefile | 2 +-
> .../Custom/BackupProviderDirExamplePlugin.pm | 307 ++++++++
> src/PVE/Storage/Custom/Makefile | 5 +
> src/PVE/Storage/Makefile | 1 +
> 5 files changed, 1011 insertions(+), 1 deletion(-)
> create mode 100644 src/PVE/BackupProvider/Plugin/DirectoryExample.pm
> create mode 100644 src/PVE/Storage/Custom/BackupProviderDirExamplePlugin.pm
> create mode 100644 src/PVE/Storage/Custom/Makefile
>
> diff --git a/src/PVE/BackupProvider/Plugin/DirectoryExample.pm b/src/PVE/BackupProvider/Plugin/DirectoryExample.pm
> new file mode 100644
> index 0000000..99825ef
> --- /dev/null
> +++ b/src/PVE/BackupProvider/Plugin/DirectoryExample.pm
> @@ -0,0 +1,697 @@
> +package PVE::BackupProvider::Plugin::DirectoryExample;
> +
> +use strict;
> +use warnings;
> +
> +use Fcntl qw(SEEK_SET);
> +use File::Path qw(make_path remove_tree);
> +use IO::File;
> +use IPC::Open3;
> +
> +use PVE::Storage::Plugin;
> +use PVE::Tools qw(file_get_contents file_read_firstline file_set_contents run_command);
> +
> +use base qw(PVE::BackupProvider::Plugin::Base);
> +
> +use constant {
> + BLKDISCARD => 0x1277, # see linux/fs.h
> +};
> +
> +# Private helpers
> +
> +my sub log_info {
> + my ($self, $message) = @_;
> +
> + $self->{'log-function'}->('info', $message);
> +}
> +
> +my sub log_warning {
> + my ($self, $message) = @_;
> +
> + $self->{'log-function'}->('warn', $message);
> +}
> +
> +my sub log_error {
> + my ($self, $message) = @_;
> +
> + $self->{'log-function'}->('err', $message);
> +}
> +
> +# Try to use the same bitmap ID as last time for incremental backup if the storage is configured for
> +# incremental VM backup. Need to start fresh if there is no previous ID or the associated backup
> +# doesn't exist.
> +my sub get_bitmap_id {
> + my ($self, $vmid, $vmtype) = @_;
> +
> + return if $self->{'storage-plugin'}->get_vm_backup_mode($self->{scfg}) ne 'incremental';
> +
> + my $previous_info_dir = "$self->{scfg}->{path}/$vmid/";
> +
> + my $previous_info_file = "$previous_info_dir/previous-info";
> + my $info = file_read_firstline($previous_info_file) // '';
> + $self->{$vmid}->{'old-previous-info'} = $info;
> + my ($bitmap_id, $previous_backup_id) = $info =~ m/^(\d+)\s+(\d+)$/;
> + my $previous_backup_dir =
> + $previous_backup_id ? "$self->{scfg}->{path}/$vmid/$vmtype-$previous_backup_id" : undef;
so the backup ID is an epoch timestamp - wouldn't it be nicer to use the
formatted timestamp as the subdir name, rather than the raw epoch?
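
to illustrate, something like

    use POSIX qw(strftime);

    # e.g. 1731456000 -> "2024-11-13T00:00:00Z"
    my $dirname = strftime("%Y-%m-%dT%H:%M:%SZ", gmtime($backup_time));

would give a PBS-style, human-readable directory name instead.
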
> +
> + if ($bitmap_id && -d $previous_backup_dir) {
> + $self->{$vmid}->{'previous-backup-dir'} = $previous_backup_dir;
> + } else {
> + # need to start fresh if there is no previous ID or the associated backup doesn't exist
> + $bitmap_id = $self->{$vmid}->{'backup-time'};
> + }
> +
> + $self->{$vmid}->{'bitmap-id'} = $bitmap_id;
> + make_path($previous_info_dir);
> + die "unable to create directory $previous_info_dir\n" if !-d $previous_info_dir;
> + file_set_contents($previous_info_file, "$bitmap_id $self->{$vmid}->{'backup-time'}");
> +
> + return $bitmap_id;
> +}
> +
> +# Backup Provider API
> +
> +sub new {
> + my ($class, $storage_plugin, $scfg, $storeid, $log_function) = @_;
> +
> + my $self = bless {
> + scfg => $scfg,
> + storeid => $storeid,
> + 'storage-plugin' => $storage_plugin,
> + 'log-function' => $log_function,
> + }, $class;
> +
> + return $self;
> +}
> +
> +sub provider_name {
> + my ($self) = @_;
> +
> + return 'dir provider example';
> +}
> +
> +# Hooks
> +
> +my sub job_start {
> + my ($self, $start_time) = @_;
> +
> + log_info($self, "job start hook called");
> +
> + run_command(["modprobe", "nbd"]);
this duplicates the modprobe in qemu-server, but without the parameter..
> +
> + log_info($self, "backup provider initialized successfully for new job $start_time");
> +}
> +
> +sub job_hook {
> + my ($self, $phase, $info) = @_;
> +
> + if ($phase eq 'start') {
> + job_start($self, $info->{'start-time'});
> + } elsif ($phase eq 'end') {
> + log_info($self, "job end hook called");
> + } elsif ($phase eq 'abort') {
> + log_info($self, "job abort hook called with error - $info->{error}");
> + }
> +
> + # ignore unknown phase
> +
> + return;
> +}
> +
> +my sub backup_start {
> + my ($self, $vmid, $vmtype, $backup_time) = @_;
> +
> + log_info($self, "backup start hook called");
> +
> + my $backup_dir = $self->{scfg}->{path} . "/" . $self->{$vmid}->{archive};
> +
> + make_path($backup_dir);
> + die "unable to create directory $backup_dir\n" if !-d $backup_dir;
> +
> + $self->{$vmid}->{'backup-time'} = $backup_time;
> + $self->{$vmid}->{'backup-dir'} = $backup_dir;
> + $self->{$vmid}->{'task-size'} = 0;
> +}
> +
> +my sub backup_abort {
> + my ($self, $vmid, $error) = @_;
> +
> + log_info($self, "backup abort hook called");
> +
> + $self->{$vmid}->{failed} = 1;
> +
> +
> + if (my $dir = $self->{$vmid}->{'backup-dir'}) {
> + eval { remove_tree($dir) };
> + $self->{'log-warning'}->("unable to clean up $dir - $@") if $@;
> + }
> +
> + # Restore old previous-info so next attempt can re-use bitmap again
> + if (my $info = $self->{$vmid}->{'old-previous-info'}) {
> + my $previous_info_dir = "$self->{scfg}->{path}/$vmid/";
> + my $previous_info_file = "$previous_info_dir/previous-info";
> + file_set_contents($previous_info_file, $info);
> + }
> +}
> +
> +sub backup_hook {
> + my ($self, $phase, $vmid, $vmtype, $info) = @_;
> +
> + if ($phase eq 'start') {
> + backup_start($self, $vmid, $vmtype, $info->{'start-time'});
> + } elsif ($phase eq 'end') {
> + log_info($self, "backup end hook called");
> + } elsif ($phase eq 'abort') {
> + backup_abort($self, $vmid, $info->{error});
> + } elsif ($phase eq 'prepare') {
> + my $dir = $self->{$vmid}->{'backup-dir'};
> + chown($info->{'backup-user-id'}, -1, $dir)
> + or die "unable to change owner for $dir\n";
> + }
> +
> + # ignore unknown phase
> +
> + return;
> +}
> +
> +sub backup_get_mechanism {
> + my ($self, $vmid, $vmtype) = @_;
> +
> + return ('directory', undef) if $vmtype eq 'lxc';
> +
> + if ($vmtype eq 'qemu') {
> + my $backup_mechanism = $self->{'storage-plugin'}->get_vm_backup_mechanism($self->{scfg});
> + return ($backup_mechanism, get_bitmap_id($self, $vmid, $vmtype));
> + }
> +
> + die "unsupported guest type '$vmtype'\n";
> +}
> +
> +sub backup_get_archive_name {
> + my ($self, $vmid, $vmtype, $backup_time) = @_;
> +
> + return $self->{$vmid}->{archive} = "${vmid}/${vmtype}-${backup_time}";
same question here w.r.t. epoch vs RFC3339
> +}
> +
> +sub backup_get_task_size {
> + my ($self, $vmid) = @_;
> +
> + return $self->{$vmid}->{'task-size'};
> +}
> +
> +sub backup_handle_log_file {
> + my ($self, $vmid, $filename) = @_;
> +
> + my $log_dir = $self->{$vmid}->{'backup-dir'};
> + if ($self->{$vmid}->{failed}) {
> + $log_dir .= ".failed";
> + }
> + make_path($log_dir);
> + die "unable to create directory $log_dir\n" if !-d $log_dir;
> +
> + my $data = file_get_contents($filename);
> + my $target = "${log_dir}/backup.log";
> + file_set_contents($target, $data);
> +}
> +
> +my sub backup_block_device {
> + my ($self, $vmid, $devicename, $size, $path, $bitmap_mode, $next_dirty_region, $bandwidth_limit) = @_;
> +
> + # TODO honor bandwidth_limit
> +
> + my $previous_backup_dir = $self->{$vmid}->{'previous-backup-dir'};
> + my $incremental = $previous_backup_dir && $bitmap_mode eq 'reuse';
> + my $target = "$self->{$vmid}->{'backup-dir'}/${devicename}.qcow2";
> + my $target_base = $incremental ? "${previous_backup_dir}/${devicename}.qcow2" : undef;
> + my $create_cmd = ["qemu-img", "create", "-f", "qcow2", $target, $size];
> + push $create_cmd->@*, "-b", $target_base, "-F", "qcow2" if $target_base;
> + run_command($create_cmd);
> +
> + eval {
> + # allows to easily write to qcow2 target
> + run_command(["qemu-nbd", "-c", "/dev/nbd15", $target, "--format=qcow2"]);
doesn't this (potentially) clash with other NBD usage?
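
with a reservation helper like the one sketched at the top of this mail
(hypothetical, not an existing API), this could become

    my ($nbd_dev, $nbd_lock_fh) = reserve_free_nbd_device();
    run_command(["qemu-nbd", "-c", $nbd_dev, $target, "--format=qcow2"]);

instead of hard-coding /dev/nbd15.
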
> +
> + my $block_size = 4 * 1024 * 1024; # 4 MiB
> +
> + my $in_fh = IO::File->new($path, "r+")
> + or die "unable to open NBD backup source - $!\n";
> + my $out_fh = IO::File->new("/dev/nbd15", "r+")
> + or die "unable to open NBD backup target - $!\n";
> +
> + my $buffer = '';
> +
> + while (scalar((my $region_offset, my $region_length) = $next_dirty_region->())) {
> + sysseek($in_fh, $region_offset, SEEK_SET)
> + // die "unable to seek '$region_offset' in NBD backup source - $!";
> + sysseek($out_fh, $region_offset, SEEK_SET)
> + // die "unable to seek '$region_offset' in NBD backup target - $!";
> +
> + my $local_offset = 0; # within the region
> + while ($local_offset < $region_length) {
> + my $remaining = $region_length - $local_offset;
> + my $request_size = $remaining < $block_size ? $remaining : $block_size;
> + my $offset = $region_offset + $local_offset;
> +
> + my $read = sysread($in_fh, $buffer, $request_size);
> +
> + die "failed to read from backup source - $!\n" if !defined($read);
> + die "premature EOF while reading backup source\n" if $read == 0;
> +
> + my $written = 0;
> + while ($written < $read) {
> + my $res = syswrite($out_fh, $buffer, $request_size - $written, $written);
> + die "failed to write to backup target - $!\n" if !defined($res);
> + die "unable to progress writing to backup target\n" if $res == 0;
> + $written += $res;
> + }
> +
> + ioctl($in_fh, BLKDISCARD, pack('QQ', int($offset), int($request_size)));
> +
> + $local_offset += $request_size;
> + }
> + }
> + };
> + my $err = $@;
> +
> + eval { run_command(["qemu-nbd", "-d", "/dev/nbd15" ]); };
> + $self->{'log-warning'}->("unable to disconnect NBD backup target - $@") if $@;
> +
> + die $err if $err;
> +}
> +
> +my sub backup_nbd {
> + my ($self, $vmid, $devicename, $size, $nbd_path, $bitmap_mode, $bitmap_name, $bandwidth_limit) = @_;
> +
> + # TODO honor bandwidth_limit
> +
> + die "need 'nbdinfo' binary from package libnbd-bin\n" if !-e "/usr/bin/nbdinfo";
> +
> + my $nbd_info_uri = "nbd+unix:///${devicename}?socket=${nbd_path}";
> + my $qemu_nbd_uri = "nbd:unix:${nbd_path}:exportname=${devicename}";
> +
> + my $cpid;
> + my $error_fh;
> + my $next_dirty_region;
> +
> + # If there is no dirty bitmap, it can be treated as if there's a full dirty one. The output of
> + # nbdinfo is a list of tuples with offset, length, type, description. The first bit of 'type' is
> + # set when the bitmap is dirty, see QEMU's docs/interop/nbd.txt
> + my $dirty_bitmap = [];
> + if ($bitmap_mode ne 'none') {
> + my $input = IO::File->new();
> + my $info = IO::File->new();
> + $error_fh = IO::File->new();
> + my $nbdinfo_cmd = ["nbdinfo", $nbd_info_uri, "--map=qemu:dirty-bitmap:${bitmap_name}"];
> + $cpid = open3($input, $info, $error_fh, $nbdinfo_cmd->@*)
> + or die "failed to spawn nbdinfo child - $!\n";
> +
> + $next_dirty_region = sub {
> + my ($offset, $length, $type);
> + do {
> + my $line = <$info>;
> + return if !$line;
> + die "unexpected output from nbdinfo - $line\n"
> + if $line !~ m/^\s*(\d+)\s*(\d+)\s*(\d+)/; # also untaints
> + ($offset, $length, $type) = ($1, $2, $3);
> + } while (($type & 0x1) == 0); # not dirty
> + return ($offset, $length);
> + };
> + } else {
> + my $done = 0;
> + $next_dirty_region = sub {
> + return if $done;
> + $done = 1;
> + return (0, $size);
> + };
> + }
> +
> + eval {
> + run_command(["qemu-nbd", "-c", "/dev/nbd0", $qemu_nbd_uri, "--format=raw", "--discard=on"]);
same question here (but with a different hard-coded index ;))
> +
> + backup_block_device(
> + $self,
> + $vmid,
> + $devicename,
> + $size,
> + '/dev/nbd0',
> + $bitmap_mode,
> + $next_dirty_region,
> + $bandwidth_limit,
> + );
> + };
> + my $err = $@;
> +
> + eval { run_command(["qemu-nbd", "-d", "/dev/nbd0" ]); };
> + $self->{'log-warning'}->("unable to disconnect NBD backup source - $@") if $@;
> +
> + if ($cpid) {
> + my $waited;
> + my $wait_limit = 5;
> + for ($waited = 0; $waited < $wait_limit && waitpid($cpid, POSIX::WNOHANG) == 0; $waited++) {
> + kill 15, $cpid if $waited == 0;
> + sleep 1;
> + }
> + if ($waited == $wait_limit) {
> + kill 9, $cpid;
> + sleep 1;
> + $self->{'log-warning'}->("unable to collect nbdinfo child process")
> + if waitpid($cpid, POSIX::WNOHANG) == 0;
> + }
> + }
> +
> + die $err if $err;
> +}
> +
> +my sub backup_vm_volume {
> + my ($self, $vmid, $devicename, $info, $bandwidth_limit) = @_;
> +
> + my $backup_mechanism = $self->{'storage-plugin'}->get_vm_backup_mechanism($self->{scfg});
> +
> + if ($backup_mechanism eq 'nbd') {
> + backup_nbd(
> + $self,
> + $vmid,
> + $devicename,
> + $info->{size},
> + $info->{'nbd-path'},
> + $info->{'bitmap-mode'},
> + $info->{'bitmap-name'},
> + $bandwidth_limit,
> + );
> + } elsif ($backup_mechanism eq 'block-device') {
> + backup_block_device(
> + $self,
> + $vmid,
> + $devicename,
> + $info->{size},
> + $info->{path},
> + $info->{'bitmap-mode'},
> + $info->{'next-dirty-region'},
> + $bandwidth_limit,
> + );
> + } else {
> + die "internal error - unknown VM backup mechansim '$backup_mechanism'\n";
> + }
> +}
> +
> +sub backup_vm {
> + my ($self, $vmid, $guest_config, $volumes, $info) = @_;
> +
> + my $target = "$self->{$vmid}->{'backup-dir'}/guest.conf";
> + file_set_contents($target, $guest_config);
> +
> + $self->{$vmid}->{'task-size'} += -s $target;
> +
> + if (my $firewall_config = $info->{'firewall-config'}) {
> + $target = "$self->{$vmid}->{'backup-dir'}/firewall.conf";
> + file_set_contents($target, $firewall_config);
> +
> + $self->{$vmid}->{'task-size'} += -s $target;
> + }
> +
> + for my $devicename (sort keys $volumes->%*) {
> + backup_vm_volume(
> + $self, $vmid, $devicename, $volumes->{$devicename}, $info->{'bandwidth-limit'});
> + }
> +}
> +
> +my sub backup_directory_tar {
> + my ($self, $vmid, $directory, $exclude_patterns, $sources, $bandwidth_limit) = @_;
> +
> + # essentially copied from PVE/VZDump/LXC.pm' archive()
> +
> + # copied from PVE::Storage::Plugin::COMMON_TAR_FLAGS
> + my @tar_flags = qw(
> + --one-file-system
> + -p --sparse --numeric-owner --acls
> + --xattrs --xattrs-include=user.* --xattrs-include=security.capability
> + --warning=no-file-ignored --warning=no-xattr-write
> + );
> +
> + my $tar = ['tar', 'cpf', '-', '--totals', @tar_flags];
> +
> + push @$tar, "--directory=$directory";
> +
> + my @exclude_no_anchored = ();
> + my @exclude_anchored = ();
> + for my $pattern ($exclude_patterns->@*) {
> + if ($pattern !~ m|^/|) {
> + push @exclude_no_anchored, $pattern;
> + } else {
> + push @exclude_anchored, $pattern;
> + }
> + }
> +
> + push @$tar, '--no-anchored';
> + push @$tar, '--exclude=lost+found';
> + push @$tar, map { "--exclude=$_" } @exclude_no_anchored;
> +
> + push @$tar, '--anchored';
> + push @$tar, map { "--exclude=.$_" } @exclude_anchored;
> +
> + push @$tar, $sources->@*;
> +
> + my $cmd = [ $tar ];
> +
> + push @$cmd, [ 'cstream', '-t', $bandwidth_limit * 1024 ] if $bandwidth_limit;
> +
> + my $target = "$self->{$vmid}->{'backup-dir'}/archive.tar";
> + push @{$cmd->[-1]}, \(">" . PVE::Tools::shellquote($target));
> +
> + my $logfunc = sub {
> + my $line = shift;
> + log_info($self, "tar: $line");
> + };
> +
> + PVE::Tools::run_command($cmd, logfunc => $logfunc);
> +
> + return;
> +};
> +
> +# NOTE This only serves as an example to illustrate the 'directory' restore mechanism. It is not
> +# fleshed out properly, e.g. I didn't check if exclusion is compatible with
> +# proxmox-backup-client/rsync or xattrs/ACL/etc. work as expected!
> +my sub backup_directory_squashfs {
> + my ($self, $vmid, $directory, $exclude_patterns, $bandwidth_limit) = @_;
> +
> + my $target = "$self->{$vmid}->{'backup-dir'}/archive.sqfs";
> +
> + my $mksquashfs = ['mksquashfs', $directory, $target, '-quiet', '-no-progress'];
> +
> + push $mksquashfs->@*, '-wildcards';
> +
> + for my $pattern ($exclude_patterns->@*) {
> + if ($pattern !~ m|^/|) { # non-anchored
> + push $mksquashfs->@*, '-e', "... $pattern";
> + } else { # anchored
> + push $mksquashfs->@*, '-e', substr($pattern, 1); # need to strip leading slash
> + }
> + }
> +
> + my $cmd = [ $mksquashfs ];
> +
> + push @$cmd, [ 'cstream', '-t', $bandwidth_limit * 1024 ] if $bandwidth_limit;
> +
> + my $logfunc = sub {
> + my $line = shift;
> + log_info($self, "mksquashfs: $line");
> + };
> +
> + PVE::Tools::run_command($cmd, logfunc => $logfunc);
> +
> + return;
> +};
> +
> +sub backup_container {
> + my ($self, $vmid, $guest_config, $exclude_patterns, $info) = @_;
> +
> + my $target = "$self->{$vmid}->{'backup-dir'}/guest.conf";
> + file_set_contents($target, $guest_config);
> +
> + $self->{$vmid}->{'task-size'} += -s $target;
> +
> + if (my $firewall_config = $info->{'firewall-config'}) {
> + $target = "$self->{$vmid}->{'backup-dir'}/firewall.conf";
> + file_set_contents($target, $firewall_config);
> +
> + $self->{$vmid}->{'task-size'} += -s $target;
> + }
> +
> + my $backup_mode = $self->{'storage-plugin'}->get_lxc_backup_mode($self->{scfg});
> + if ($backup_mode eq 'tar') {
> + backup_directory_tar(
> + $self,
> + $vmid,
> + $info->{directory},
> + $exclude_patterns,
> + $info->{sources},
> + $info->{'bandwidth-limit'},
> + );
> + } elsif ($backup_mode eq 'squashfs') {
> + backup_directory_squashfs(
> + $self,
> + $vmid,
> + $info->{directory},
> + $exclude_patterns,
> + $info->{'bandwidth-limit'},
> + );
> + } else {
> + die "got unexpected backup mode '$backup_mode' from storage plugin\n";
> + }
> +}
> +
> +# Restore API
> +
> +sub restore_get_mechanism {
> + my ($self, $volname, $storeid) = @_;
> +
> + my (undef, $relative_backup_dir) = $self->{'storage-plugin'}->parse_volname($volname);
> + my ($vmtype) = $relative_backup_dir =~ m!^\d+/([a-z]+)-!;
> +
> + return ('qemu-img', $vmtype) if $vmtype eq 'qemu';
> +
> + if ($vmtype eq 'lxc') {
> + my (undef, $relative_backup_dir) = $self->{'storage-plugin'}->parse_volname($volname);
> +
> + if (-e "$self->{scfg}->{path}/${relative_backup_dir}/archive.tar") {
> + $self->{'restore-mechanisms'}->{$volname} = 'tar';
> + return ('tar', $vmtype);
> + }
> +
> + if (-e "$self->{scfg}->{path}/${relative_backup_dir}/archive.sqfs") {
> + $self->{'restore-mechanisms'}->{$volname} = 'directory';
> + return ('directory', $vmtype)
> + }
> +
> + die "unable to find archive '$volname'\n";
> + }
> +
> + die "cannot restore unexpected guest type '$vmtype'\n";
> +}
> +
> +sub restore_get_guest_config {
> + my ($self, $volname, $storeid) = @_;
> +
> + my (undef, $relative_backup_dir) = $self->{'storage-plugin'}->parse_volname($volname);
> + my $filename = "$self->{scfg}->{path}/${relative_backup_dir}/guest.conf";
> +
> + return file_get_contents($filename);
> +}
> +
> +sub restore_get_firewall_config {
> + my ($self, $volname, $storeid) = @_;
> +
> + my (undef, $relative_backup_dir) = $self->{'storage-plugin'}->parse_volname($volname);
> + my $filename = "$self->{scfg}->{path}/${relative_backup_dir}/firewall.conf";
> +
> + return if !-e $filename;
> +
> + return file_get_contents($filename);
> +}
> +
> +sub restore_vm_init {
> + my ($self, $volname, $storeid) = @_;
> +
> + my $res = {};
> +
> + my (undef, $relative_backup_dir) = $self->{'storage-plugin'}->parse_volname($volname);
> + my $backup_dir = "$self->{scfg}->{path}/${relative_backup_dir}";
> +
> + my @backup_files = glob("$backup_dir/*");
> + for my $backup_file (@backup_files) {
> + next if $backup_file !~ m!^(.*/(.*)\.qcow2)$!;
> + $backup_file = $1; # untaint
> + $res->{$2}->{size} = PVE::Storage::Plugin::file_size_info($backup_file);
> + }
> +
> + return $res;
> +}
> +
> +sub restore_vm_cleanup {
> + my ($self, $volname, $storeid) = @_;
> +
> + return; # nothing to do
> +}
> +
> +sub restore_vm_volume_init {
> + my ($self, $volname, $storeid, $devicename, $info) = @_;
> +
> + my (undef, $relative_backup_dir) = $self->{'storage-plugin'}->parse_volname($volname);
> + my $image = "$self->{scfg}->{path}/${relative_backup_dir}/${devicename}.qcow2";
> + # NOTE Backing files are not allowed by Proxmox VE when restoring. The reason is that an
> + # untrusted qcow2 image can specify an arbitrary backing file and thus leak data from the host.
> + # For the sake of the directory example plugin, an NBD export is created, but this side-steps
> + # the check and would allow the attack again. An actual implementation should check that the
> + # backing file (or rather, the whole backing chain) is safe first!
> + PVE::Tools::run_command(['qemu-nbd', '-c', '/dev/nbd7', $image]);
and another hard-coded index here - I really think we need some sort of
solution for this..
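
one more thing such a solution would need to cover: restore_vm_volume_cleanup()
below also hard-codes /dev/nbd7, so with a dynamically picked device the plugin
would have to remember which one it used (and keep the lock handle alive), e.g.
something along these lines (sketch only, helper and hash keys are made up):

    my ($nbd_dev, $nbd_lock_fh) = reserve_free_nbd_device(); # hypothetical helper
    # remember the reservation so restore_vm_volume_cleanup() can disconnect
    # and release the same device again later
    $self->{'nbd-reservation'}->{$volname}->{$devicename} = {
        device => $nbd_dev,
        'lock-fh' => $nbd_lock_fh,
    };
    PVE::Tools::run_command(['qemu-nbd', '-c', $nbd_dev, $image]);
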
> + return {
> + 'qemu-img-path' => '/dev/nbd7',
> + };
> +}
> +
> +sub restore_vm_volume_cleanup {
> + my ($self, $volname, $storeid, $devicename, $info) = @_;
> +
> + PVE::Tools::run_command(['qemu-nbd', '-d', '/dev/nbd7']);
> +
> + return;
> +}
> +
> +my sub restore_tar_init {
> + my ($self, $volname, $storeid) = @_;
> +
> + my (undef, $relative_backup_dir) = $self->{'storage-plugin'}->parse_volname($volname);
> + return { 'tar-path' => "$self->{scfg}->{path}/${relative_backup_dir}/archive.tar" };
> +}
> +
> +my sub restore_directory_init {
> + my ($self, $volname, $storeid) = @_;
> +
> + my (undef, $relative_backup_dir, $vmid) = $self->{'storage-plugin'}->parse_volname($volname);
> + my $archive = "$self->{scfg}->{path}/${relative_backup_dir}/archive.sqfs";
> +
> + my $mount_point = "/run/backup-provider-example/${vmid}.mount";
> + make_path($mount_point);
> + die "unable to create directory $mount_point\n" if !-d $mount_point;
> +
> + run_command(['mount', '-o', 'ro', $archive, $mount_point]);
> +
> + return { 'archive-directory' => $mount_point };
> +}
> +
> +my sub restore_directory_cleanup {
> + my ($self, $volname, $storeid) = @_;
> +
> + my (undef, undef, $vmid) = $self->{'storage-plugin'}->parse_volname($volname);
> + my $mount_point = "/run/backup-provider-example/${vmid}.mount";
> +
> + run_command(['umount', $mount_point]);
> +
> + return;
> +}
> +
> +sub restore_container_init {
> + my ($self, $volname, $storeid, $info) = @_;
> +
> + if ($self->{'restore-mechanisms'}->{$volname} eq 'tar') {
> + return restore_tar_init($self, $volname, $storeid);
> + } elsif ($self->{'restore-mechanisms'}->{$volname} eq 'directory') {
> + return restore_directory_init($self, $volname, $storeid);
> + } else {
> + die "no restore mechanism set for '$volname'\n";
> + }
> +}
> +
> +sub restore_container_cleanup {
> + my ($self, $volname, $storeid, $info) = @_;
> +
> + if ($self->{'restore-mechanisms'}->{$volname} eq 'tar') {
> + return; # nothing to do
> + } elsif ($self->{'restore-mechanisms'}->{$volname} eq 'directory') {
> + return restore_directory_cleanup($self, $volname, $storeid);
> + } else {
> + die "no restore mechanism set for '$volname'\n";
> + }
> +}
> +
> +1;
> diff --git a/src/PVE/BackupProvider/Plugin/Makefile b/src/PVE/BackupProvider/Plugin/Makefile
> index bbd7431..bedc26e 100644
> --- a/src/PVE/BackupProvider/Plugin/Makefile
> +++ b/src/PVE/BackupProvider/Plugin/Makefile
> @@ -1,4 +1,4 @@
> -SOURCES = Base.pm
> +SOURCES = Base.pm DirectoryExample.pm
>
> .PHONY: install
> install:
> diff --git a/src/PVE/Storage/Custom/BackupProviderDirExamplePlugin.pm b/src/PVE/Storage/Custom/BackupProviderDirExamplePlugin.pm
> new file mode 100644
> index 0000000..5152923
> --- /dev/null
> +++ b/src/PVE/Storage/Custom/BackupProviderDirExamplePlugin.pm
> @@ -0,0 +1,307 @@
> +package PVE::Storage::Custom::BackupProviderDirExamplePlugin;
> +
> +use strict;
> +use warnings;
> +
> +use File::Basename qw(basename);
> +
> +use PVE::BackupProvider::Plugin::DirectoryExample;
> +use PVE::Tools;
> +
> +use base qw(PVE::Storage::Plugin);
> +
> +# Helpers
> +
> +sub get_vm_backup_mechanism {
> + my ($class, $scfg) = @_;
> +
> + return $scfg->{'vm-backup-mechanism'} // properties()->{'vm-backup-mechanism'}->{'default'};
> +}
> +
> +sub get_vm_backup_mode {
> + my ($class, $scfg) = @_;
> +
> + return $scfg->{'vm-backup-mode'} // properties()->{'vm-backup-mode'}->{'default'};
> +}
> +
> +sub get_lxc_backup_mode {
> + my ($class, $scfg) = @_;
> +
> + return $scfg->{'lxc-backup-mode'} // properties()->{'lxc-backup-mode'}->{'default'};
> +}
> +
> +# Configuration
> +
> +sub api {
> + return 11;
> +}
> +
> +sub type {
> + return 'backup-provider-dir-example';
> +}
> +
> +sub plugindata {
> + return {
> + content => [ { backup => 1, none => 1 }, { backup => 1 } ],
> + features => { 'backup-provider' => 1 },
> + };
> +}
> +
> +sub properties {
> + return {
> + 'lxc-backup-mode' => {
> + description => "How to create LXC backups. tar - create a tar archive."
> + ." squashfs - create a squashfs image. Requires squashfs-tools to be installed.",
> + type => 'string',
> + enum => [qw(tar squashfs)],
> + default => 'tar',
> + },
> + 'vm-backup-mechanism' => {
> + description => "Which mechanism to use for creating VM backups. nbd - access data via "
> + ." NBD export. block-device - access data via regular block device.",
> + type => 'string',
> + enum => [qw(nbd block-device)],
> + default => 'block-device',
> + },
> + 'vm-backup-mode' => {
> + description => "How to create VM backups. full - always create full backups."
> + ." incremental - create incremental backups when possible, fallback to full when"
> + ." necessary, e.g. VM disk's bitmap is invalid.",
> + type => 'string',
> + enum => [qw(full incremental)],
> + default => 'full',
> + },
> + };
> +}
> +
> +sub options {
> + return {
> + path => { fixed => 1 },
> + 'lxc-backup-mode' => { optional => 1 },
> + 'vm-backup-mechanism' => { optional => 1 },
> + 'vm-backup-mode' => { optional => 1 },
> + disable => { optional => 1 },
> + nodes => { optional => 1 },
> + 'prune-backups' => { optional => 1 },
> + 'max-protected-backups' => { optional => 1 },
> + };
> +}
> +
> +# Storage implementation
> +
> +# NOTE a proper backup storage should implement this
> +sub prune_backups {
> + my ($class, $scfg, $storeid, $keep, $vmid, $type, $dryrun, $logfunc) = @_;
> +
> + die "not implemented";
> +}
> +
> +sub parse_volname {
> + my ($class, $volname) = @_;
> +
> + if ($volname =~ m!^backup/((\d+)/[a-z]+-\d+)$!) {
> + my ($filename, $vmid) = ($1, $2);
> + return ('backup', $filename, $vmid);
> + }
> +
> + die "unable to parse volume name '$volname'\n";
> +}
> +
> +sub path {
> + my ($class, $scfg, $volname, $storeid, $snapname) = @_;
> +
> + die "volume snapshot is not possible on backup-provider-dir-example volume" if $snapname;
> +
> + my ($type, $filename, $vmid) = $class->parse_volname($volname);
> +
> + return ("$scfg->{path}/${filename}", $vmid, $type);
> +}
> +
> +sub create_base {
> + my ($class, $storeid, $scfg, $volname) = @_;
> +
> + die "cannot create base image in backup-provider-dir-example storage\n";
> +}
> +
> +sub clone_image {
> + my ($class, $scfg, $storeid, $volname, $vmid, $snap) = @_;
> +
> + die "can't clone images in backup-provider-dir-example storage\n";
> +}
> +
> +sub alloc_image {
> + my ($class, $storeid, $scfg, $vmid, $fmt, $name, $size) = @_;
> +
> + die "can't allocate space in backup-provider-dir-example storage\n";
> +}
> +
> +# NOTE a proper backup storage should implement this
> +sub free_image {
> + my ($class, $storeid, $scfg, $volname, $isBase) = @_;
> +
> + # if it's a backing file, it would need to be merged into the upper image first.
> +
> + die "not implemented";
> +}
> +
> +sub list_images {
> + my ($class, $storeid, $scfg, $vmid, $vollist, $cache) = @_;
> +
> + my $res = [];
> +
> + return $res;
> +}
> +
> +sub list_volumes {
> + my ($class, $storeid, $scfg, $vmid, $content_types) = @_;
> +
> + my $path = $scfg->{path};
> +
> + my $res = [];
> + for my $type ($content_types->@*) {
> + next if $type ne 'backup';
> +
> + my @guest_dirs = glob("$path/*");
> + for my $guest_dir (@guest_dirs) {
> + next if !-d $guest_dir || $guest_dir !~ m!/(\d+)$!;
> +
> + my $backup_vmid = basename($guest_dir);
> +
> + next if defined($vmid) && $backup_vmid != $vmid;
> +
> + my @backup_dirs = glob("$guest_dir/*");
> + for my $backup_dir (@backup_dirs) {
> + next if !-d $backup_dir || $backup_dir !~ m!/(lxc|qemu)-(\d+)$!;
> + my ($subtype, $backup_id) = ($1, $2);
> +
> + my $size = 0;
> + my @backup_files = glob("$backup_dir/*");
> + $size += -s $_ for @backup_files;
> +
> + push $res->@*, {
> + volid => "$storeid:backup/${backup_vmid}/${subtype}-${backup_id}",
> + vmid => $backup_vmid,
> + format => "directory",
> + ctime => $backup_id,
> + size => $size,
> + subtype => $subtype,
> + content => $type,
> + # TODO parent for incremental
> + };
> + }
> + }
> + }
> +
> + return $res;
> +}
> +
> +sub activate_storage {
> + my ($class, $storeid, $scfg, $cache) = @_;
> +
> + my $path = $scfg->{path};
> +
> + my $timeout = 2;
> + if (!PVE::Tools::run_fork_with_timeout($timeout, sub {-d $path})) {
> + die "unable to activate storage '$storeid' - directory '$path' does not exist or is"
> + ." unreachable\n";
> + }
> +
> + return 1;
> +}
> +
> +sub deactivate_storage {
> + my ($class, $storeid, $scfg, $cache) = @_;
> +
> + return 1;
> +}
> +
> +sub activate_volume {
> + my ($class, $storeid, $scfg, $volname, $snapname, $cache) = @_;
> +
> + die "volume snapshot is not possible on backup-provider-dir-example volume" if $snapname;
> +
> + return 1;
> +}
> +
> +sub deactivate_volume {
> + my ($class, $storeid, $scfg, $volname, $snapname, $cache) = @_;
> +
> + die "volume snapshot is not possible on backup-provider-dir-example volume" if $snapname;
> +
> + return 1;
> +}
> +
> +sub get_volume_attribute {
> + my ($class, $scfg, $storeid, $volname, $attribute) = @_;
> +
> + return;
> +}
> +
> +# NOTE a proper backup storage should implement this to support backup notes and
> +# setting protected status.
> +sub update_volume_attribute {
> + my ($class, $scfg, $storeid, $volname, $attribute, $value) = @_;
> +
> + die "attribute '$attribute' is not supported on backup-provider-dir-example volume";
> +}
> +
> +sub volume_size_info {
> + my ($class, $scfg, $storeid, $volname, $timeout) = @_;
> +
> + my (undef, $relative_backup_dir) = $class->parse_volname($volname);
> + my ($ctime) = $relative_backup_dir =~ m/-(\d+)$/;
> + my $backup_dir = "$scfg->{path}/${relative_backup_dir}";
> +
> + my $size = 0;
> + my @backup_files = glob("$backup_dir/*");
> + for my $backup_file (@backup_files) {
> + if ($backup_file =~ m!\.qcow2$!) {
> + $size += $class->file_size_info($backup_file);
> + } else {
> + $size += -s $backup_file;
> + }
> + }
> +
> + my $parent; # TODO for incremental
> +
> + return wantarray ? ($size, 'directory', $size, $parent, $ctime) : $size;
> +}
> +
> +sub volume_resize {
> + my ($class, $scfg, $storeid, $volname, $size, $running) = @_;
> +
> + die "volume resize is not possible on backup-provider-dir-example volume";
> +}
> +
> +sub volume_snapshot {
> + my ($class, $scfg, $storeid, $volname, $snap) = @_;
> +
> + die "volume snapshot is not possible on backup-provider-dir-example volume";
> +}
> +
> +sub volume_snapshot_rollback {
> + my ($class, $scfg, $storeid, $volname, $snap) = @_;
> +
> + die "volume snapshot rollback is not possible on backup-provider-dir-example volume";
> +}
> +
> +sub volume_snapshot_delete {
> + my ($class, $scfg, $storeid, $volname, $snap) = @_;
> +
> + die "volume snapshot delete is not possible on backup-provider-dir-example volume";
> +}
> +
> +sub volume_has_feature {
> + my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running) = @_;
> +
> + return 0;
> +}
> +
> +sub new_backup_provider {
> + my ($class, $scfg, $storeid, $bandwidth_limit, $log_function) = @_;
> +
> + return PVE::BackupProvider::Plugin::DirectoryExample->new(
> + $class, $scfg, $storeid, $bandwidth_limit, $log_function);
> +}
> +
> +1;
> diff --git a/src/PVE/Storage/Custom/Makefile b/src/PVE/Storage/Custom/Makefile
> new file mode 100644
> index 0000000..c1e3eca
> --- /dev/null
> +++ b/src/PVE/Storage/Custom/Makefile
> @@ -0,0 +1,5 @@
> +SOURCES = BackupProviderDirExamplePlugin.pm
> +
> +.PHONY: install
> +install:
> + for i in ${SOURCES}; do install -D -m 0644 $$i ${DESTDIR}${PERLDIR}/PVE/Storage/Custom/$$i; done
> diff --git a/src/PVE/Storage/Makefile b/src/PVE/Storage/Makefile
> index d5cc942..acd37f4 100644
> --- a/src/PVE/Storage/Makefile
> +++ b/src/PVE/Storage/Makefile
> @@ -19,4 +19,5 @@ SOURCES= \
> .PHONY: install
> install:
> for i in ${SOURCES}; do install -D -m 0644 $$i ${DESTDIR}${PERLDIR}/PVE/Storage/$$i; done
> + make -C Custom install
> make -C LunCmd install
> --
> 2.39.5
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel