* Re: [PVE-User] PVE Firewall IPset+Alias broken in v8
2023-07-09 19:11 [PVE-User] PVE Firewall IPset+Alias broken in v8 Patrick Velder
@ 2023-07-10 15:58 ` Patrick Velder
0 siblings, 0 replies; 2+ messages in thread
From: Patrick Velder @ 2023-07-10 15:58 UTC (permalink / raw)
To: pve-user
Update:
Upon further investigation, I discovered that the error message "value
does not look like a valid IP address or CIDR network" also occurs on
working PVE 7.x systems, so these messages appear to be unrelated to the
current issue. However, they can cause confusion when troubleshooting
firewall-related problems and should also be addressed.
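For reference, these messages show up when running the usual firewall
checks on a node:

# pve-firewall status
# pve-firewall compile

"compile" prints the generated ruleset, which makes it easier to see
which entry triggers a given message.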
The actual problem is that when a global IP set defined at the
datacenter level includes aliases with the "dc/" or "guest/" prefix,
the rules referencing that IP set stop working, also resulting in the
following error messages:
> no such alias 'xxx'
> no such alias 'yyy'
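To illustrate, here is a minimal sketch of the kind of setup that
breaks, in the cluster-level /etc/pve/firewall/cluster.fw (the alias
and IP set names below are made up, and whether the "dc/" prefix is
written out literally in the file may differ on your setup):

[ALIASES]
# datacenter-level alias
mgmt_host 192.168.1.10

[IPSET trusted]
# entry referencing the alias above; this is where the "no such alias" errors point
dc/mgmt_host

[RULES]
IN ACCEPT -source +trusted -p tcp -dport 8006

On PVE 7.x this works; on PVE 8 the rules referencing +trusted stop
matching and produce the errors above.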
Best regards
Patrick
On 7/9/23 21:11, Patrick Velder wrote:
> Hello,
>
> Since the upgrade to PVE 8, there appears to be a problem with the
> combination of ipset and alias. When checking the firewall status
> using the command "pve-firewall status," I receive the error message
> "value does not look like a valid IP address or CIDR network" repeated
> multiple times. Downgrading to pve-firewall_4.3-2_amd64.deb did not
> resolve the issue.
>
> To further investigate and find a potential solution, I recommend
> checking the following forum threads:
>
> * https://forum.proxmox.com/threads/pve-8-pve-firewall-status-no-such-alias.130202/
> * https://forum.proxmox.com/threads/ipset-not-working-for-accepting-cluster-traffic.129599/
>
> Is this a known issue, and is there perhaps a workaround? Many rules
> have stopped working.
>
> Thanks and best regards
> Patrick
>
* [PVE-User] PVE 7.4 Windows VM hang
@ 2023-07-12  6:31 Eneko Lacunza
From: Eneko Lacunza @ 2023-07-12  6:31 UTC (permalink / raw)
To: pve-user
Hi all,
We have been experiencing Windows VM hangs over the last few weeks, on a
previously stable cluster and previously stable VMs.
So far we have seen a Windows 7 (yes, I know!) guest and a Windows 2016
Std guest crash with 100% CPU use on all cores. A hard stop and start
leaves the guest working again.
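For reference, a hard stop and start corresponds roughly to the
following CLI commands (VMID 104 assumed here, taken from the Windows 7
config further down):

# qm stop 104     # hard-stop the hung guest (no ACPI shutdown)
# qm start 104    # boot it again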
- The Windows 7 guest has crashed several times:
* June 18th
* June 25th
* July 4th
- The Windows 2016 guest has crashed once:
* June 23rd
There are more guests in the cluster (5 Linux and 1 Windows 2016); none
of them have crashed (all were running on other nodes).
Both guests were running on the same physical node, a Dell T350 server
with an E-2356G CPU and 32GB RAM. We have moved both to another node
(with an E3-1220 v6 CPU) just to see whether the issue is with the node;
no crashes since July 5th, but it's too soon to be sure.
Storage is Ceph v16.2.13, backed by HDDs with SSD partitions for
journal/DB (there is a mixture of Filestore and BlueStore OSDs).
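In case it helps, assuming the standard Ceph CLI on one of the nodes,
the Filestore/BlueStore mix can be confirmed with:

# ceph osd count-metadata osd_objectstore

which prints a count of OSDs per objectstore type.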
Originally the node had been running since April 20th with:
* pve-kernel 6.2.9-1-pve
* qemu-server 7.4-3
* pve-qemu-kvm 7.2.0-8
On June 26th we tried upgrading to:
* pve-kernel 6.2.11-2-pve
* qemu-server 7.4-4
VM conf:
boot: order=ide0;ide2
cores: 4
ide0: rbd:vm-104-disk-0,cache=writeback,size=30G
ide2: nas-backups2:iso/virtio-win-0.1.126.iso,media=cdrom,size=152204K
machine: pc-i440fx-6.2
memory: 12288
meta: creation-qemu=6.2.0,ctime=1651156825
name: Windows7
net0: virtio=7E:0E:12:3D:0A:89,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: win10
parent: Snapshot_02_05_2022
scsi1: rbd:vm-104-disk-2,cache=writeback,size=40G
scsihw: virtio-scsi-pci
smbios1: uuid=1006a846-b056-4ce3-bd6f-7d16cfb5f573
sockets: 1
vmgenid: 27764cc2-dcab-405f-be80-bd6630ba75ba
---
boot: cdn
bootdisk: scsi0
cores: 4
ide2: none,media=cdrom
ide3: none,media=cdrom
memory: 8192
name: Windows2016
net0: virtio=3A:F9:3C:9F:70:FE,bridge=vmbr0
numa: 0
onboot: 1
ostype: win10
scsi0: rbd:vm-110-disk-1,cache=writeback,discard=on,size=110G
scsi1: rbd:vm-110-disk-2,cache=writeback,discard=on,size=30G
scsihw: virtio-scsi-pci
smbios1: uuid=3e82d96a-464a-483d-b178-2359d24333db
sockets: 1
Currently:
# pveversion -v
proxmox-ve: 7.4-1 (running kernel: 6.2.11-2-pve)
pve-manager: 7.4-15 (running version: 7.4-15/a5d2a31e)
pve-kernel-5.15: 7.4-4
pve-kernel-6.2: 7.4-3
pve-kernel-5.13: 7.1-9
pve-kernel-6.2.11-2-pve: 6.2.11-2
pve-kernel-6.2.9-1-pve: 6.2.9-1
pve-kernel-5.15.108-1-pve: 5.15.108-1
pve-kernel-5.15.104-1-pve: 5.15.104-2
pve-kernel-5.15.30-2-pve: 5.15.30-3
pve-kernel-5.13.19-6-pve: 5.13.19-15
ceph: 16.2.13-pve1
ceph-fuse: 16.2.13-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx4
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4.1
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.4-2
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-3
libpve-rs-perl: 0.7.7
libpve-storage-perl: 7.4-3
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.4.2-1
proxmox-backup-file-restore: 2.4.2-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.2
proxmox-widget-toolkit: 3.7.3
pve-cluster: 7.3-3
pve-container: 4.4-6
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-4~bpo11+1
pve-firewall: 4.3-4
pve-firmware: 3.6-5
pve-ha-manager: 3.6.1
pve-i18n: 2.12-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-2
qemu-server: 7.4-4
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.11-pve1
Any ideas?
Thanks
Eneko Lacunza
Zuzendari teknikoa | Director técnico
Binovo IT Human Project
Tel. +34 943 569 206 | https://www.binovo.es
Astigarragako Bidea, 2 - 2º izda. Oficina 10-11, 20180 Oiartzun
https://www.youtube.com/user/CANALBINOVO
https://www.linkedin.com/company/37269706/
^ permalink raw reply [flat|nested] 2+ messages in thread