public inbox for pve-devel@lists.proxmox.com
From: Stoiko Ivanov <s.ivanov@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH container/storage v2] add fsfreeze/thaw for rbd snapshots
Date: Fri,  6 Nov 2020 12:24:21 +0100	[thread overview]
Message-ID: <20201106112425.8282-1-s.ivanov@proxmox.com> (raw)

this patchset addresses #2991 and #2528.

v1->v2:
mostly incorporated Thomas' feedback (huge thanks!!):
* moved fsfreeze from pve-common to pve-container (it's only used there,
  and it avoids one versioned dependency bump) - a rough sketch of such a
  helper follows below this list
* for this, the O_CLOEXEC flag (only defined in PVE::Tools) had to be
  dropped from the sysopen call (fsfreeze(8) does not use it either...)
* increased APIVER and APIAGE in PVE::Storage
* don't use sync_container_namespace for unfreezing (no need to parse
  /proc/mounts and pass it to the child process) - moved that part to a new
  sub
* found a tiny typo in a comment (patch 1/2 for pve-storage)
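
For illustration, a minimal sketch of what such a helper could look like
(sub name and error handling are made up, not the actual patch; the ioctl
request numbers are the standard FIFREEZE/FITHAW values from linux/fs.h):

use strict;
use warnings;
use Fcntl qw(O_RDONLY);

# standard ioctl request numbers from linux/fs.h
use constant {
    FIFREEZE => 0xc0045877, # _IOWR('X', 119, int)
    FITHAW   => 0xc0045878, # _IOWR('X', 120, int)
};

# freeze (or thaw) the filesystem mounted at $path, like fsfreeze(8) does
sub fsfreeze_mountpoint {
    my ($path, $thaw) = @_;

    my $op = $thaw ? FITHAW : FIFREEZE;

    sysopen(my $fd, $path, O_RDONLY)
        or die "failed to open $path - $!\n";
    my $ret = ioctl($fd, $op, 0);
    close($fd);

    die "fsfreeze '$path' failed - $!\n" if !defined($ret) || !$ret;
}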

original cover-letter for v1:

As discussed in #2991 (and off-list with Wolfgang B. and Dominik), this
patchset does not address the fundamental problem: the snapshot is created
outside of the open krbd block device, by an independent 'rbd' call (which
is most likely the reason for the inconsistency).

However, according to the reporter in #2991, it does help in their case to
actually get backups of their containers.

I put the ioctl call inside sync_container_namespace since it:
* should happen shortly after the syncfs call
* needs to happen inside the container's mount namespace (else we'd need to
  mount the filesystem in order to freeze/thaw it - see the proposed patch
  in #2528)

and since I wanted to avoid forking and nsentering twice for each volume
(in __snapshot_create_vol_snapshs_hook) - a rough sketch of the mechanism
follows below.
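
A simplified illustration of the fork + setns + syncfs + FIFREEZE sequence
(not the actual sync_container_namespace code; syscall numbers are the
x86_64 ones, and the sub name is made up):

use strict;
use warnings;
use Fcntl qw(O_RDONLY);
use POSIX ();

use constant {
    SYS_syncfs  => 306,        # x86_64
    SYS_setns   => 308,        # x86_64
    CLONE_NEWNS => 0x00020000, # from sched.h
    FIFREEZE    => 0xc0045877, # from linux/fs.h
};

# fork a child, enter the container's mount namespace, sync and then
# freeze the filesystem mounted at $mountpoint; the freeze persists
# after the child exits, until FITHAW is issued
sub freeze_in_container_ns {
    my ($ct_pid, $mountpoint) = @_;

    my $child = fork() // die "fork failed - $!\n";
    if (!$child) {
        sysopen(my $nsfd, "/proc/$ct_pid/ns/mnt", O_RDONLY)
            or POSIX::_exit(1);
        syscall(SYS_setns, fileno($nsfd), CLONE_NEWNS) == 0
            or POSIX::_exit(1);

        sysopen(my $fd, $mountpoint, O_RDONLY) or POSIX::_exit(1);
        syscall(SYS_syncfs, fileno($fd));  # flush dirty data first
        my $ok = ioctl($fd, FIFREEZE, 0);  # then freeze the filesystem
        POSIX::_exit($ok ? 0 : 1);
    }
    waitpid($child, 0);
    return $? == 0;
}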

I would be grateful for feedback on whether this approach (reading the
container config + storage config in __snapshot_freeze) is ok, or if some
other way would be nicer.

Tested on my test setup with a ceph-backed container and 2 additional
mountpoints (one on ceph, one on LVM-thin).


pve-storage:
Stoiko Ivanov (2):
  fix typo in comment
  add check for fsfreeze before snapshot

 PVE/Storage.pm           | 18 +++++++++++++++---
 PVE/Storage/Plugin.pm    |  4 ++++
 PVE/Storage/RBDPlugin.pm |  5 +++++
 3 files changed, 24 insertions(+), 3 deletions(-)
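
The check on the storage side boils down to a per-plugin predicate the
container code can query for each volume before snapshotting; roughly
like this (the method name is meant as illustration only, not necessarily
the exact API added by the patches):

package PVE::Storage::Plugin;

# base plugin: by default no freeze is required before a snapshot
sub volume_snapshot_needs_fsfreeze {
    return 0;
}

package PVE::Storage::RBDPlugin;

# RBD: the snapshot is taken by an external 'rbd' call, outside the
# mapped krbd block device, so the filesystem should be frozen first
sub volume_snapshot_needs_fsfreeze {
    return 1;
}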

pve-container:
Stoiko Ivanov (2):
  add fsfreeze helper:
  snapshot creation: fsfreeze mountpoints, if needed

 src/PVE/LXC.pm            | 45 ++++++++++++++++++++++++++++++++++++---
 src/PVE/LXC/Config.pm     | 19 +++++++++++++++--
 src/test/snapshot-test.pm | 12 ++++++++++-
 3 files changed, 70 insertions(+), 6 deletions(-)

-- 
2.20.1




