* [pve-devel] [PATCH pve-kernel 0/2] cherry-picks and config-options for downfall
@ 2023-08-11 16:02 Stoiko Ivanov
  2023-08-11 16:02 ` [pve-devel] [PATCH pve-kernel 1/2] add fixes " Stoiko Ivanov
                   ` (2 more replies)
  0 siblings, 3 replies; 4+ messages in thread
From: Stoiko Ivanov @ 2023-08-11 16:02 UTC (permalink / raw)
  To: pve-devel

Changes taken from Ubuntu's repository (on Launchpad); sending them as
individual cherry-picks, as we're currently based on our own tag.

Split into 2 patches, as applying the patches happens after we copy the
source (and remove the Debian/Ubuntu-specific folders).

In all cases, the resulting build should also be tested on an affected machine!
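On such an affected machine, one quick sanity check is the sysfs entry that the GDS mitigation patches (0031-0033 in this series) introduce; a hedged sketch, assuming the upstream `gather_data_sampling` name — the file is simply absent on a kernel without these patches:

```shell
#!/bin/sh
# Report the Gather Data Sampling (downfall) mitigation status.
# The sysfs entry only exists on kernels carrying the GDS patches.
f=/sys/devices/system/cpu/vulnerabilities/gather_data_sampling
if [ -r "$f" ]; then
    status=$(cat "$f")
else
    status='entry not present (kernel without the GDS patches?)'
fi
echo "gather_data_sampling: $status"
```

On a patched kernel with current microcode on affected Intel hardware this typically reports "Mitigation: Microcode"; on unaffected CPUs, "Not affected".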

Stoiko Ivanov (2):
  add fixes for downfall
  d/rules: enable mitigation config-options

 debian/rules                                  |   4 +-
 ...-init-Provide-arch_cpu_finalize_init.patch |  85 +++
 ...cpu-Switch-to-arch_cpu_finalize_init.patch | 235 +++++++
 ...cpu-Switch-to-arch_cpu_finalize_init.patch |  82 +++
 ...cpu-Switch-to-arch_cpu_finalize_init.patch |  80 +++
 ...cpu-Switch-to-arch_cpu_finalize_init.patch |  89 +++
 ...cpu-Switch-to-arch_cpu_finalize_init.patch | 108 ++++
 ...cpu-Switch-to-arch_cpu_finalize_init.patch | 217 +++++++
 ...cpu-Switch-to-arch_cpu_finalize_init.patch |  80 +++
 ...cpu-Switch-to-arch_cpu_finalize_init.patch |  75 +++
 ...022-init-Remove-check_bugs-leftovers.patch | 172 +++++
 ...nvoke-arch_cpu_finalize_init-earlier.patch |  64 ++
 ...m_encrypt_init-into-arch_cpu_finaliz.patch | 121 ++++
 ...it-Initialize-signal-frame-size-late.patch |  81 +++
 ...cpuinfo-argument-from-init-functions.patch |  76 +++
 ...7-x86-fpu-Mark-init-functions-__init.patch |  44 ++
 ...-initialization-into-arch_cpu_finali.patch |  80 +++
 ...-Unbreak-the-AMD_MEM_ENCRYPT-n-build.patch |  69 ++
 ...ondary-processors-FPU-initialization.patch |  42 ++
 ...-Add-Gather-Data-Sampling-mitigation.patch | 595 ++++++++++++++++++
 ...n-Add-force-option-to-GDS-mitigation.patch | 172 +++++
 ...eculation-Add-Kconfig-option-for-GDS.patch |  75 +++
 .../0034-KVM-Add-GDS_NO-support-to-KVM.patch  |  85 +++
 ...6-Fix-backwards-on-off-logic-about-Y.patch |  38 ++
 24 files changed, 2768 insertions(+), 1 deletion(-)
 create mode 100644 patches/kernel/0013-init-Provide-arch_cpu_finalize_init.patch
 create mode 100644 patches/kernel/0014-x86-cpu-Switch-to-arch_cpu_finalize_init.patch
 create mode 100644 patches/kernel/0015-ARM-cpu-Switch-to-arch_cpu_finalize_init.patch
 create mode 100644 patches/kernel/0016-ia64-cpu-Switch-to-arch_cpu_finalize_init.patch
 create mode 100644 patches/kernel/0017-m68k-cpu-Switch-to-arch_cpu_finalize_init.patch
 create mode 100644 patches/kernel/0018-mips-cpu-Switch-to-arch_cpu_finalize_init.patch
 create mode 100644 patches/kernel/0019-sh-cpu-Switch-to-arch_cpu_finalize_init.patch
 create mode 100644 patches/kernel/0020-sparc-cpu-Switch-to-arch_cpu_finalize_init.patch
 create mode 100644 patches/kernel/0021-um-cpu-Switch-to-arch_cpu_finalize_init.patch
 create mode 100644 patches/kernel/0022-init-Remove-check_bugs-leftovers.patch
 create mode 100644 patches/kernel/0023-init-Invoke-arch_cpu_finalize_init-earlier.patch
 create mode 100644 patches/kernel/0024-init-x86-Move-mem_encrypt_init-into-arch_cpu_finaliz.patch
 create mode 100644 patches/kernel/0025-x86-init-Initialize-signal-frame-size-late.patch
 create mode 100644 patches/kernel/0026-x86-fpu-Remove-cpuinfo-argument-from-init-functions.patch
 create mode 100644 patches/kernel/0027-x86-fpu-Mark-init-functions-__init.patch
 create mode 100644 patches/kernel/0028-x86-fpu-Move-FPU-initialization-into-arch_cpu_finali.patch
 create mode 100644 patches/kernel/0029-x86-mem_encrypt-Unbreak-the-AMD_MEM_ENCRYPT-n-build.patch
 create mode 100644 patches/kernel/0030-x86-xen-Fix-secondary-processors-FPU-initialization.patch
 create mode 100644 patches/kernel/0031-x86-speculation-Add-Gather-Data-Sampling-mitigation.patch
 create mode 100644 patches/kernel/0032-x86-speculation-Add-force-option-to-GDS-mitigation.patch
 create mode 100644 patches/kernel/0033-x86-speculation-Add-Kconfig-option-for-GDS.patch
 create mode 100644 patches/kernel/0034-KVM-Add-GDS_NO-support-to-KVM.patch
 create mode 100644 patches/kernel/0035-Documentation-x86-Fix-backwards-on-off-logic-about-Y.patch

-- 
2.39.2






* [pve-devel] [PATCH pve-kernel 1/2] add fixes for downfall
  2023-08-11 16:02 [pve-devel] [PATCH pve-kernel 0/2] cherry-picks and config-options for downfall Stoiko Ivanov
@ 2023-08-11 16:02 ` Stoiko Ivanov
  2023-08-11 16:02 ` [pve-devel] [PATCH pve-kernel 2/2] d/rules: enable mitigation config-options Stoiko Ivanov
  2023-08-17 11:49 ` [pve-devel] applied: [PATCH pve-kernel 0/2] cherry-picks and config-options for downfall Wolfgang Bumiller
  2 siblings, 0 replies; 4+ messages in thread
From: Stoiko Ivanov @ 2023-08-11 16:02 UTC (permalink / raw)
  To: pve-devel

by cherry-picking the relevant commits from launchpad/lunar [0]
(those commits are in turn based on the kernel.org stable commits for
this issue).

minimally tested by booting my (Ryzen) machine with this kernel and
skimming through dmesg after boot.

[0] git://git.launchpad.net/~ubuntu-kernel/ubuntu/+source/linux/+git/lunar
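The dmesg skim can be narrowed to the relevant boot message; a rough sketch (the exact log format is an assumption and may differ between kernel versions):

```shell
#!/bin/sh
# Grep the boot log for the GDS mitigation line, e.g.
# "GDS: Mitigation: Microcode" on affected, patched machines.
# dmesg may require privileges, so fall back to a note on failure.
line=$(dmesg 2>/dev/null | grep -i 'gather data sampling\|GDS:' | head -n1)
out="${line:-no GDS line found (unprivileged, unpatched, or unaffected?)}"
echo "$out"
```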

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
---
 ...-init-Provide-arch_cpu_finalize_init.patch |  85 +++
 ...cpu-Switch-to-arch_cpu_finalize_init.patch | 235 +++++++
 ...cpu-Switch-to-arch_cpu_finalize_init.patch |  82 +++
 ...cpu-Switch-to-arch_cpu_finalize_init.patch |  80 +++
 ...cpu-Switch-to-arch_cpu_finalize_init.patch |  89 +++
 ...cpu-Switch-to-arch_cpu_finalize_init.patch | 108 ++++
 ...cpu-Switch-to-arch_cpu_finalize_init.patch | 217 +++++++
 ...cpu-Switch-to-arch_cpu_finalize_init.patch |  80 +++
 ...cpu-Switch-to-arch_cpu_finalize_init.patch |  75 +++
 ...022-init-Remove-check_bugs-leftovers.patch | 172 +++++
 ...nvoke-arch_cpu_finalize_init-earlier.patch |  64 ++
 ...m_encrypt_init-into-arch_cpu_finaliz.patch | 121 ++++
 ...it-Initialize-signal-frame-size-late.patch |  81 +++
 ...cpuinfo-argument-from-init-functions.patch |  76 +++
 ...7-x86-fpu-Mark-init-functions-__init.patch |  44 ++
 ...-initialization-into-arch_cpu_finali.patch |  80 +++
 ...-Unbreak-the-AMD_MEM_ENCRYPT-n-build.patch |  69 ++
 ...ondary-processors-FPU-initialization.patch |  42 ++
 ...-Add-Gather-Data-Sampling-mitigation.patch | 595 ++++++++++++++++++
 ...n-Add-force-option-to-GDS-mitigation.patch | 172 +++++
 ...eculation-Add-Kconfig-option-for-GDS.patch |  75 +++
 .../0034-KVM-Add-GDS_NO-support-to-KVM.patch  |  85 +++
 ...6-Fix-backwards-on-off-logic-about-Y.patch |  38 ++
 23 files changed, 2765 insertions(+)
 create mode 100644 patches/kernel/0013-init-Provide-arch_cpu_finalize_init.patch
 create mode 100644 patches/kernel/0014-x86-cpu-Switch-to-arch_cpu_finalize_init.patch
 create mode 100644 patches/kernel/0015-ARM-cpu-Switch-to-arch_cpu_finalize_init.patch
 create mode 100644 patches/kernel/0016-ia64-cpu-Switch-to-arch_cpu_finalize_init.patch
 create mode 100644 patches/kernel/0017-m68k-cpu-Switch-to-arch_cpu_finalize_init.patch
 create mode 100644 patches/kernel/0018-mips-cpu-Switch-to-arch_cpu_finalize_init.patch
 create mode 100644 patches/kernel/0019-sh-cpu-Switch-to-arch_cpu_finalize_init.patch
 create mode 100644 patches/kernel/0020-sparc-cpu-Switch-to-arch_cpu_finalize_init.patch
 create mode 100644 patches/kernel/0021-um-cpu-Switch-to-arch_cpu_finalize_init.patch
 create mode 100644 patches/kernel/0022-init-Remove-check_bugs-leftovers.patch
 create mode 100644 patches/kernel/0023-init-Invoke-arch_cpu_finalize_init-earlier.patch
 create mode 100644 patches/kernel/0024-init-x86-Move-mem_encrypt_init-into-arch_cpu_finaliz.patch
 create mode 100644 patches/kernel/0025-x86-init-Initialize-signal-frame-size-late.patch
 create mode 100644 patches/kernel/0026-x86-fpu-Remove-cpuinfo-argument-from-init-functions.patch
 create mode 100644 patches/kernel/0027-x86-fpu-Mark-init-functions-__init.patch
 create mode 100644 patches/kernel/0028-x86-fpu-Move-FPU-initialization-into-arch_cpu_finali.patch
 create mode 100644 patches/kernel/0029-x86-mem_encrypt-Unbreak-the-AMD_MEM_ENCRYPT-n-build.patch
 create mode 100644 patches/kernel/0030-x86-xen-Fix-secondary-processors-FPU-initialization.patch
 create mode 100644 patches/kernel/0031-x86-speculation-Add-Gather-Data-Sampling-mitigation.patch
 create mode 100644 patches/kernel/0032-x86-speculation-Add-force-option-to-GDS-mitigation.patch
 create mode 100644 patches/kernel/0033-x86-speculation-Add-Kconfig-option-for-GDS.patch
 create mode 100644 patches/kernel/0034-KVM-Add-GDS_NO-support-to-KVM.patch
 create mode 100644 patches/kernel/0035-Documentation-x86-Fix-backwards-on-off-logic-about-Y.patch

diff --git a/patches/kernel/0013-init-Provide-arch_cpu_finalize_init.patch b/patches/kernel/0013-init-Provide-arch_cpu_finalize_init.patch
new file mode 100644
index 000000000000..440a7a039576
--- /dev/null
+++ b/patches/kernel/0013-init-Provide-arch_cpu_finalize_init.patch
@@ -0,0 +1,85 @@
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx@linutronix.de>
+Date: Wed, 14 Jun 2023 01:39:22 +0200
+Subject: [PATCH] init: Provide arch_cpu_finalize_init()
+
+check_bugs() has become a dumping ground for all sorts of activities to
+finalize the CPU initialization before running the rest of the init code.
+
+Most are empty, a few do actual bug checks, some do alternative patching
+and some cobble a CPU advertisement string together....
+
+Aside of that the current implementation requires duplicated function
+declaration and mostly empty header files for them.
+
+Provide a new function arch_cpu_finalize_init(). Provide a generic
+declaration if CONFIG_ARCH_HAS_CPU_FINALIZE_INIT is selected and a stub
+inline otherwise.
+
+This requires a temporary #ifdef in start_kernel() which will be removed
+along with check_bugs() once the architectures are converted over.
+
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Link: https://lore.kernel.org/r/20230613224544.957805717@linutronix.de
+
+(cherry picked from commit 7725acaa4f0c04fbefb0e0d342635b967bb7d414)
+CVE-2022-40982
+Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
+Acked-by: Roxana Nicolescu <roxana.nicolescu@canonical.com>
+Acked-by: Stefan Bader <stefan.bader@canonical.com>
+Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
+(cherry picked from commit c765faa80041002c513c6b356826e11cb78308b3)
+Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
+---
+ arch/Kconfig        | 3 +++
+ include/linux/cpu.h | 6 ++++++
+ init/main.c         | 4 ++++
+ 3 files changed, 13 insertions(+)
+
+diff --git a/arch/Kconfig b/arch/Kconfig
+index 12e3ddabac9d..9a75f8457283 100644
+--- a/arch/Kconfig
++++ b/arch/Kconfig
+@@ -285,6 +285,9 @@ config ARCH_HAS_DMA_SET_UNCACHED
+ config ARCH_HAS_DMA_CLEAR_UNCACHED
+ 	bool
+ 
++config ARCH_HAS_CPU_FINALIZE_INIT
++	bool
++
+ # Select if arch init_task must go in the __init_task_data section
+ config ARCH_TASK_STRUCT_ON_STACK
+ 	bool
+diff --git a/include/linux/cpu.h b/include/linux/cpu.h
+index 314802f98b9d..43b0b7950e33 100644
+--- a/include/linux/cpu.h
++++ b/include/linux/cpu.h
+@@ -187,6 +187,12 @@ void arch_cpu_idle_enter(void);
+ void arch_cpu_idle_exit(void);
+ void arch_cpu_idle_dead(void);
+ 
++#ifdef CONFIG_ARCH_HAS_CPU_FINALIZE_INIT
++void arch_cpu_finalize_init(void);
++#else
++static inline void arch_cpu_finalize_init(void) { }
++#endif
++
+ int cpu_report_state(int cpu);
+ int cpu_check_up_prepare(int cpu);
+ void cpu_set_state_online(int cpu);
+diff --git a/init/main.c b/init/main.c
+index e1c3911d7c70..e39055c8698f 100644
+--- a/init/main.c
++++ b/init/main.c
+@@ -1138,7 +1138,11 @@ asmlinkage __visible void __init __no_sanitize_address start_kernel(void)
+ 	taskstats_init_early();
+ 	delayacct_init();
+ 
++	arch_cpu_finalize_init();
++	/* Temporary conditional until everything has been converted */
++#ifndef CONFIG_ARCH_HAS_CPU_FINALIZE_INIT
+ 	check_bugs();
++#endif
+ 
+ 	acpi_subsystem_init();
+ 	arch_post_acpi_subsys_init();
diff --git a/patches/kernel/0014-x86-cpu-Switch-to-arch_cpu_finalize_init.patch b/patches/kernel/0014-x86-cpu-Switch-to-arch_cpu_finalize_init.patch
new file mode 100644
index 000000000000..73b297ed794a
--- /dev/null
+++ b/patches/kernel/0014-x86-cpu-Switch-to-arch_cpu_finalize_init.patch
@@ -0,0 +1,235 @@
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx@linutronix.de>
+Date: Wed, 14 Jun 2023 01:39:24 +0200
+Subject: [PATCH] x86/cpu: Switch to arch_cpu_finalize_init()
+
+check_bugs() is a dumping ground for finalizing the CPU bringup. Only parts of
+it has to do with actual CPU bugs.
+
+Split it apart into arch_cpu_finalize_init() and cpu_select_mitigations().
+
+Fixup the bogus 32bit comments while at it.
+
+No functional change.
+
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Reviewed-by: Borislav Petkov (AMD) <bp@alien8.de>
+Link: https://lore.kernel.org/r/20230613224545.019583869@linutronix.de
+
+(cherry picked from commit 7c7077a72674402654f3291354720cd73cdf649e)
+CVE-2022-40982
+Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
+Acked-by: Roxana Nicolescu <roxana.nicolescu@canonical.com>
+Acked-by: Stefan Bader <stefan.bader@canonical.com>
+Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
+(cherry picked from commit d839524be6ba339640b7729353ff14156fad42a7)
+Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
+---
+ arch/x86/Kconfig             |  1 +
+ arch/x86/include/asm/bugs.h  |  2 --
+ arch/x86/kernel/cpu/bugs.c   | 51 +---------------------------------
+ arch/x86/kernel/cpu/common.c | 53 ++++++++++++++++++++++++++++++++++++
+ arch/x86/kernel/cpu/cpu.h    |  1 +
+ 5 files changed, 56 insertions(+), 52 deletions(-)
+
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index df9e15bcf6d1..598a303819da 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -70,6 +70,7 @@ config X86
+ 	select ARCH_HAS_ACPI_TABLE_UPGRADE	if ACPI
+ 	select ARCH_HAS_CACHE_LINE_SIZE
+ 	select ARCH_HAS_CPU_CACHE_INVALIDATE_MEMREGION
++	select ARCH_HAS_CPU_FINALIZE_INIT
+ 	select ARCH_HAS_CURRENT_STACK_POINTER
+ 	select ARCH_HAS_DEBUG_VIRTUAL
+ 	select ARCH_HAS_DEBUG_VM_PGTABLE	if !X86_PAE
+diff --git a/arch/x86/include/asm/bugs.h b/arch/x86/include/asm/bugs.h
+index 92ae28389940..f25ca2d709d4 100644
+--- a/arch/x86/include/asm/bugs.h
++++ b/arch/x86/include/asm/bugs.h
+@@ -4,8 +4,6 @@
+ 
+ #include <asm/processor.h>
+ 
+-extern void check_bugs(void);
+-
+ #if defined(CONFIG_CPU_SUP_INTEL) && defined(CONFIG_X86_32)
+ int ppro_with_ram_bug(void);
+ #else
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index daad10e7665b..edb670b77294 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -9,7 +9,6 @@
+  *	- Andrew D. Balsa (code cleanup).
+  */
+ #include <linux/init.h>
+-#include <linux/utsname.h>
+ #include <linux/cpu.h>
+ #include <linux/module.h>
+ #include <linux/nospec.h>
+@@ -27,8 +26,6 @@
+ #include <asm/msr.h>
+ #include <asm/vmx.h>
+ #include <asm/paravirt.h>
+-#include <asm/alternative.h>
+-#include <asm/set_memory.h>
+ #include <asm/intel-family.h>
+ #include <asm/e820/api.h>
+ #include <asm/hypervisor.h>
+@@ -124,21 +121,8 @@ DEFINE_STATIC_KEY_FALSE(switch_mm_cond_l1d_flush);
+ DEFINE_STATIC_KEY_FALSE(mmio_stale_data_clear);
+ EXPORT_SYMBOL_GPL(mmio_stale_data_clear);
+ 
+-void __init check_bugs(void)
++void __init cpu_select_mitigations(void)
+ {
+-	identify_boot_cpu();
+-
+-	/*
+-	 * identify_boot_cpu() initialized SMT support information, let the
+-	 * core code know.
+-	 */
+-	cpu_smt_check_topology();
+-
+-	if (!IS_ENABLED(CONFIG_SMP)) {
+-		pr_info("CPU: ");
+-		print_cpu_info(&boot_cpu_data);
+-	}
+-
+ 	/*
+ 	 * Read the SPEC_CTRL MSR to account for reserved bits which may
+ 	 * have unknown values. AMD64_LS_CFG MSR is cached in the early AMD
+@@ -175,39 +159,6 @@ void __init check_bugs(void)
+ 	md_clear_select_mitigation();
+ 	srbds_select_mitigation();
+ 	l1d_flush_select_mitigation();
+-
+-	arch_smt_update();
+-
+-#ifdef CONFIG_X86_32
+-	/*
+-	 * Check whether we are able to run this kernel safely on SMP.
+-	 *
+-	 * - i386 is no longer supported.
+-	 * - In order to run on anything without a TSC, we need to be
+-	 *   compiled for a i486.
+-	 */
+-	if (boot_cpu_data.x86 < 4)
+-		panic("Kernel requires i486+ for 'invlpg' and other features");
+-
+-	init_utsname()->machine[1] =
+-		'0' + (boot_cpu_data.x86 > 6 ? 6 : boot_cpu_data.x86);
+-	alternative_instructions();
+-
+-	fpu__init_check_bugs();
+-#else /* CONFIG_X86_64 */
+-	alternative_instructions();
+-
+-	/*
+-	 * Make sure the first 2MB area is not mapped by huge pages
+-	 * There are typically fixed size MTRRs in there and overlapping
+-	 * MTRRs into large pages causes slow downs.
+-	 *
+-	 * Right now we don't do that with gbpages because there seems
+-	 * very little benefit for that case.
+-	 */
+-	if (!direct_gbpages)
+-		set_memory_4k((unsigned long)__va(0), 1);
+-#endif
+ }
+ 
+ /*
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index 2ac8ceae0ed1..0f32ecfbdeb1 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -19,11 +19,14 @@
+ #include <linux/kprobes.h>
+ #include <linux/kgdb.h>
+ #include <linux/smp.h>
++#include <linux/cpu.h>
+ #include <linux/io.h>
+ #include <linux/syscore_ops.h>
+ #include <linux/pgtable.h>
+ #include <linux/stackprotector.h>
++#include <linux/utsname.h>
+ 
++#include <asm/alternative.h>
+ #include <asm/cmdline.h>
+ #include <asm/perf_event.h>
+ #include <asm/mmu_context.h>
+@@ -59,6 +62,7 @@
+ #include <asm/intel-family.h>
+ #include <asm/cpu_device_id.h>
+ #include <asm/uv/uv.h>
++#include <asm/set_memory.h>
+ #include <asm/sigframe.h>
+ #include <asm/traps.h>
+ #include <asm/sev.h>
+@@ -2360,3 +2364,52 @@ void arch_smt_update(void)
+ 	/* Check whether IPI broadcasting can be enabled */
+ 	apic_smt_update();
+ }
++
++void __init arch_cpu_finalize_init(void)
++{
++	identify_boot_cpu();
++
++	/*
++	 * identify_boot_cpu() initialized SMT support information, let the
++	 * core code know.
++	 */
++	cpu_smt_check_topology();
++
++	if (!IS_ENABLED(CONFIG_SMP)) {
++		pr_info("CPU: ");
++		print_cpu_info(&boot_cpu_data);
++	}
++
++	cpu_select_mitigations();
++
++	arch_smt_update();
++
++	if (IS_ENABLED(CONFIG_X86_32)) {
++		/*
++		 * Check whether this is a real i386 which is not longer
++		 * supported and fixup the utsname.
++		 */
++		if (boot_cpu_data.x86 < 4)
++			panic("Kernel requires i486+ for 'invlpg' and other features");
++
++		init_utsname()->machine[1] =
++			'0' + (boot_cpu_data.x86 > 6 ? 6 : boot_cpu_data.x86);
++	}
++
++	alternative_instructions();
++
++	if (IS_ENABLED(CONFIG_X86_64)) {
++		/*
++		 * Make sure the first 2MB area is not mapped by huge pages
++		 * There are typically fixed size MTRRs in there and overlapping
++		 * MTRRs into large pages causes slow downs.
++		 *
++		 * Right now we don't do that with gbpages because there seems
++		 * very little benefit for that case.
++		 */
++		if (!direct_gbpages)
++			set_memory_4k((unsigned long)__va(0), 1);
++	} else {
++		fpu__init_check_bugs();
++	}
++}
+diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
+index 7c9b5893c30a..61dbb9b216e6 100644
+--- a/arch/x86/kernel/cpu/cpu.h
++++ b/arch/x86/kernel/cpu/cpu.h
+@@ -79,6 +79,7 @@ extern void detect_ht(struct cpuinfo_x86 *c);
+ extern void check_null_seg_clears_base(struct cpuinfo_x86 *c);
+ 
+ unsigned int aperfmperf_get_khz(int cpu);
++void cpu_select_mitigations(void);
+ 
+ extern void x86_spec_ctrl_setup_ap(void);
+ extern void update_srbds_msr(void);
diff --git a/patches/kernel/0015-ARM-cpu-Switch-to-arch_cpu_finalize_init.patch b/patches/kernel/0015-ARM-cpu-Switch-to-arch_cpu_finalize_init.patch
new file mode 100644
index 000000000000..af8936213f49
--- /dev/null
+++ b/patches/kernel/0015-ARM-cpu-Switch-to-arch_cpu_finalize_init.patch
@@ -0,0 +1,82 @@
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx@linutronix.de>
+Date: Wed, 14 Jun 2023 01:39:25 +0200
+Subject: [PATCH] ARM: cpu: Switch to arch_cpu_finalize_init()
+
+check_bugs() is about to be phased out. Switch over to the new
+arch_cpu_finalize_init() implementation.
+
+No functional change.
+
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Link: https://lore.kernel.org/r/20230613224545.078124882@linutronix.de
+
+(cherry picked from commit ee31bb0524a2e7c99b03f50249a411cc1eaa411f)
+CVE-2022-40982
+Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
+Acked-by: Roxana Nicolescu <roxana.nicolescu@canonical.com>
+Acked-by: Stefan Bader <stefan.bader@canonical.com>
+Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
+(cherry picked from commit 57b198863efe8ec2e2c898f8f3d501734c18afb7)
+Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
+---
+ arch/arm/Kconfig            | 1 +
+ arch/arm/include/asm/bugs.h | 4 ----
+ arch/arm/kernel/bugs.c      | 3 ++-
+ 3 files changed, 3 insertions(+), 5 deletions(-)
+
+diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
+index 1938a2a957bc..eac5314702b0 100644
+--- a/arch/arm/Kconfig
++++ b/arch/arm/Kconfig
+@@ -5,6 +5,7 @@ config ARM
+ 	select ARCH_32BIT_OFF_T
+ 	select ARCH_CORRECT_STACKTRACE_ON_KRETPROBE if HAVE_KRETPROBES && FRAME_POINTER && !ARM_UNWIND
+ 	select ARCH_HAS_BINFMT_FLAT
++	select ARCH_HAS_CPU_FINALIZE_INIT if MMU
+ 	select ARCH_HAS_CURRENT_STACK_POINTER
+ 	select ARCH_HAS_DEBUG_VIRTUAL if MMU
+ 	select ARCH_HAS_DMA_WRITE_COMBINE if !ARM_DMA_MEM_BUFFERABLE
+diff --git a/arch/arm/include/asm/bugs.h b/arch/arm/include/asm/bugs.h
+index 97a312ba0840..fe385551edec 100644
+--- a/arch/arm/include/asm/bugs.h
++++ b/arch/arm/include/asm/bugs.h
+@@ -1,7 +1,5 @@
+ /* SPDX-License-Identifier: GPL-2.0-only */
+ /*
+- *  arch/arm/include/asm/bugs.h
+- *
+  *  Copyright (C) 1995-2003 Russell King
+  */
+ #ifndef __ASM_BUGS_H
+@@ -10,10 +8,8 @@
+ extern void check_writebuffer_bugs(void);
+ 
+ #ifdef CONFIG_MMU
+-extern void check_bugs(void);
+ extern void check_other_bugs(void);
+ #else
+-#define check_bugs() do { } while (0)
+ #define check_other_bugs() do { } while (0)
+ #endif
+ 
+diff --git a/arch/arm/kernel/bugs.c b/arch/arm/kernel/bugs.c
+index 14c8dbbb7d2d..087bce6ec8e9 100644
+--- a/arch/arm/kernel/bugs.c
++++ b/arch/arm/kernel/bugs.c
+@@ -1,5 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include <linux/init.h>
++#include <linux/cpu.h>
+ #include <asm/bugs.h>
+ #include <asm/proc-fns.h>
+ 
+@@ -11,7 +12,7 @@ void check_other_bugs(void)
+ #endif
+ }
+ 
+-void __init check_bugs(void)
++void __init arch_cpu_finalize_init(void)
+ {
+ 	check_writebuffer_bugs();
+ 	check_other_bugs();
diff --git a/patches/kernel/0016-ia64-cpu-Switch-to-arch_cpu_finalize_init.patch b/patches/kernel/0016-ia64-cpu-Switch-to-arch_cpu_finalize_init.patch
new file mode 100644
index 000000000000..d99392fc0210
--- /dev/null
+++ b/patches/kernel/0016-ia64-cpu-Switch-to-arch_cpu_finalize_init.patch
@@ -0,0 +1,80 @@
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx@linutronix.de>
+Date: Wed, 14 Jun 2023 01:39:27 +0200
+Subject: [PATCH] ia64/cpu: Switch to arch_cpu_finalize_init()
+
+check_bugs() is about to be phased out. Switch over to the new
+arch_cpu_finalize_init() implementation.
+
+No functional change.
+
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Link: https://lore.kernel.org/r/20230613224545.137045745@linutronix.de
+
+(cherry picked from commit 6c38e3005621800263f117fb00d6787a76e16de7)
+CVE-2022-40982
+Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
+Acked-by: Roxana Nicolescu <roxana.nicolescu@canonical.com>
+Acked-by: Stefan Bader <stefan.bader@canonical.com>
+Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
+(cherry picked from commit 7b593af98529e22ee2b54dda992a205bd8935a97)
+Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
+---
+ arch/ia64/Kconfig            |  1 +
+ arch/ia64/include/asm/bugs.h | 20 --------------------
+ arch/ia64/kernel/setup.c     |  3 +--
+ 3 files changed, 2 insertions(+), 22 deletions(-)
+ delete mode 100644 arch/ia64/include/asm/bugs.h
+
+diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig
+index d7e4a24e8644..25ebc90b3ec3 100644
+--- a/arch/ia64/Kconfig
++++ b/arch/ia64/Kconfig
+@@ -9,6 +9,7 @@ menu "Processor type and features"
+ config IA64
+ 	bool
+ 	select ARCH_BINFMT_ELF_EXTRA_PHDRS
++	select ARCH_HAS_CPU_FINALIZE_INIT
+ 	select ARCH_HAS_DMA_MARK_CLEAN
+ 	select ARCH_HAS_STRNCPY_FROM_USER
+ 	select ARCH_HAS_STRNLEN_USER
+diff --git a/arch/ia64/include/asm/bugs.h b/arch/ia64/include/asm/bugs.h
+deleted file mode 100644
+index 0d6b9bded56c..000000000000
+--- a/arch/ia64/include/asm/bugs.h
++++ /dev/null
+@@ -1,20 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0 */
+-/*
+- * This is included by init/main.c to check for architecture-dependent bugs.
+- *
+- * Needs:
+- *	void check_bugs(void);
+- *
+- * Based on <asm-alpha/bugs.h>.
+- *
+- * Modified 1998, 1999, 2003
+- *	David Mosberger-Tang <davidm@hpl.hp.com>,  Hewlett-Packard Co.
+- */
+-#ifndef _ASM_IA64_BUGS_H
+-#define _ASM_IA64_BUGS_H
+-
+-#include <asm/processor.h>
+-
+-extern void check_bugs (void);
+-
+-#endif /* _ASM_IA64_BUGS_H */
+diff --git a/arch/ia64/kernel/setup.c b/arch/ia64/kernel/setup.c
+index c05728044272..9009f1871e3b 100644
+--- a/arch/ia64/kernel/setup.c
++++ b/arch/ia64/kernel/setup.c
+@@ -1067,8 +1067,7 @@ cpu_init (void)
+ 	}
+ }
+ 
+-void __init
+-check_bugs (void)
++void __init arch_cpu_finalize_init(void)
+ {
+ 	ia64_patch_mckinley_e9((unsigned long) __start___mckinley_e9_bundles,
+ 			       (unsigned long) __end___mckinley_e9_bundles);
diff --git a/patches/kernel/0017-m68k-cpu-Switch-to-arch_cpu_finalize_init.patch b/patches/kernel/0017-m68k-cpu-Switch-to-arch_cpu_finalize_init.patch
new file mode 100644
index 000000000000..f1a6e88db0e3
--- /dev/null
+++ b/patches/kernel/0017-m68k-cpu-Switch-to-arch_cpu_finalize_init.patch
@@ -0,0 +1,89 @@
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx@linutronix.de>
+Date: Wed, 14 Jun 2023 01:39:30 +0200
+Subject: [PATCH] m68k/cpu: Switch to arch_cpu_finalize_init()
+
+check_bugs() is about to be phased out. Switch over to the new
+arch_cpu_finalize_init() implementation.
+
+No functional change.
+
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
+Link: https://lore.kernel.org/r/20230613224545.254342916@linutronix.de
+
+(cherry picked from commit 9ceecc2589b9d7cef6b321339ed8de484eac4b20)
+CVE-2022-40982
+Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
+Acked-by: Roxana Nicolescu <roxana.nicolescu@canonical.com>
+Acked-by: Stefan Bader <stefan.bader@canonical.com>
+Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
+(cherry picked from commit 51d4827f4d3adf26415b6447d88611a35738e062)
+Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
+---
+ arch/m68k/Kconfig            |  1 +
+ arch/m68k/include/asm/bugs.h | 21 ---------------------
+ arch/m68k/kernel/setup_mm.c  |  3 ++-
+ 3 files changed, 3 insertions(+), 22 deletions(-)
+ delete mode 100644 arch/m68k/include/asm/bugs.h
+
+diff --git a/arch/m68k/Kconfig b/arch/m68k/Kconfig
+index 7bff88118507..1fe5b2018745 100644
+--- a/arch/m68k/Kconfig
++++ b/arch/m68k/Kconfig
+@@ -4,6 +4,7 @@ config M68K
+ 	default y
+ 	select ARCH_32BIT_OFF_T
+ 	select ARCH_HAS_BINFMT_FLAT
++	select ARCH_HAS_CPU_FINALIZE_INIT if MMU
+ 	select ARCH_HAS_CURRENT_STACK_POINTER
+ 	select ARCH_HAS_DMA_PREP_COHERENT if HAS_DMA && MMU && !COLDFIRE
+ 	select ARCH_HAS_SYNC_DMA_FOR_DEVICE if HAS_DMA
+diff --git a/arch/m68k/include/asm/bugs.h b/arch/m68k/include/asm/bugs.h
+deleted file mode 100644
+index 745530651e0b..000000000000
+--- a/arch/m68k/include/asm/bugs.h
++++ /dev/null
+@@ -1,21 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0 */
+-/*
+- *  include/asm-m68k/bugs.h
+- *
+- *  Copyright (C) 1994  Linus Torvalds
+- */
+-
+-/*
+- * This is included by init/main.c to check for architecture-dependent bugs.
+- *
+- * Needs:
+- *	void check_bugs(void);
+- */
+-
+-#ifdef CONFIG_MMU
+-extern void check_bugs(void);	/* in arch/m68k/kernel/setup.c */
+-#else
+-static void check_bugs(void)
+-{
+-}
+-#endif
+diff --git a/arch/m68k/kernel/setup_mm.c b/arch/m68k/kernel/setup_mm.c
+index fbff1cea62ca..6f1ae01f322c 100644
+--- a/arch/m68k/kernel/setup_mm.c
++++ b/arch/m68k/kernel/setup_mm.c
+@@ -10,6 +10,7 @@
+  */
+ 
+ #include <linux/kernel.h>
++#include <linux/cpu.h>
+ #include <linux/mm.h>
+ #include <linux/sched.h>
+ #include <linux/delay.h>
+@@ -504,7 +505,7 @@ static int __init proc_hardware_init(void)
+ module_init(proc_hardware_init);
+ #endif
+ 
+-void check_bugs(void)
++void __init arch_cpu_finalize_init(void)
+ {
+ #if defined(CONFIG_FPU) && !defined(CONFIG_M68KFPU_EMU)
+ 	if (m68k_fputype == 0) {
diff --git a/patches/kernel/0018-mips-cpu-Switch-to-arch_cpu_finalize_init.patch b/patches/kernel/0018-mips-cpu-Switch-to-arch_cpu_finalize_init.patch
new file mode 100644
index 000000000000..f57d433c4792
--- /dev/null
+++ b/patches/kernel/0018-mips-cpu-Switch-to-arch_cpu_finalize_init.patch
@@ -0,0 +1,108 @@
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx@linutronix.de>
+Date: Wed, 14 Jun 2023 01:39:32 +0200
+Subject: [PATCH] mips/cpu: Switch to arch_cpu_finalize_init()
+
+check_bugs() is about to be phased out. Switch over to the new
+arch_cpu_finalize_init() implementation.
+
+No functional change.
+
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Link: https://lore.kernel.org/r/20230613224545.312438573@linutronix.de
+
+(backported from commit 7f066a22fe353a827a402ee2835e81f045b1574d)
+[cascardo: only removed check_bugs from arch/mips/include/asm/bugs.h]
+CVE-2022-40982
+Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
+Acked-by: Roxana Nicolescu <roxana.nicolescu@canonical.com>
+Acked-by: Stefan Bader <stefan.bader@canonical.com>
+Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
+(cherry picked from commit 7753934cdd362695ffbc0f1db941ff6d4c72fa96)
+Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
+---
+ arch/mips/Kconfig            |  1 +
+ arch/mips/include/asm/bugs.h | 17 -----------------
+ arch/mips/kernel/setup.c     | 13 +++++++++++++
+ 3 files changed, 14 insertions(+), 17 deletions(-)
+
+diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
+index f11dda15aa54..fcf59a375c5b 100644
+--- a/arch/mips/Kconfig
++++ b/arch/mips/Kconfig
+@@ -4,6 +4,7 @@ config MIPS
+ 	default y
+ 	select ARCH_32BIT_OFF_T if !64BIT
+ 	select ARCH_BINFMT_ELF_STATE if MIPS_FP_SUPPORT
++	select ARCH_HAS_CPU_FINALIZE_INIT
+ 	select ARCH_HAS_CURRENT_STACK_POINTER if !CC_IS_CLANG || CLANG_VERSION >= 140000
+ 	select ARCH_HAS_DEBUG_VIRTUAL if !64BIT
+ 	select ARCH_HAS_FORTIFY_SOURCE
+diff --git a/arch/mips/include/asm/bugs.h b/arch/mips/include/asm/bugs.h
+index d72dc6e1cf3c..8d4cf29861b8 100644
+--- a/arch/mips/include/asm/bugs.h
++++ b/arch/mips/include/asm/bugs.h
+@@ -1,17 +1,11 @@
+ /* SPDX-License-Identifier: GPL-2.0 */
+ /*
+- * This is included by init/main.c to check for architecture-dependent bugs.
+- *
+  * Copyright (C) 2007  Maciej W. Rozycki
+- *
+- * Needs:
+- *	void check_bugs(void);
+  */
+ #ifndef _ASM_BUGS_H
+ #define _ASM_BUGS_H
+ 
+ #include <linux/bug.h>
+-#include <linux/delay.h>
+ #include <linux/smp.h>
+ 
+ #include <asm/cpu.h>
+@@ -30,17 +24,6 @@ static inline void check_bugs_early(void)
+ 		check_bugs64_early();
+ }
+ 
+-static inline void check_bugs(void)
+-{
+-	unsigned int cpu = smp_processor_id();
+-
+-	cpu_data[cpu].udelay_val = loops_per_jiffy;
+-	check_bugs32();
+-
+-	if (IS_ENABLED(CONFIG_CPU_R4X00_BUGS64))
+-		check_bugs64();
+-}
+-
+ static inline int r4k_daddiu_bug(void)
+ {
+ 	if (!IS_ENABLED(CONFIG_CPU_R4X00_BUGS64))
+diff --git a/arch/mips/kernel/setup.c b/arch/mips/kernel/setup.c
+index f1c88f8a1dc5..4d950f666ef6 100644
+--- a/arch/mips/kernel/setup.c
++++ b/arch/mips/kernel/setup.c
+@@ -11,6 +11,8 @@
+  * Copyright (C) 2000, 2001, 2002, 2007	 Maciej W. Rozycki
+  */
+ #include <linux/init.h>
++#include <linux/cpu.h>
++#include <linux/delay.h>
+ #include <linux/ioport.h>
+ #include <linux/export.h>
+ #include <linux/screen_info.h>
+@@ -839,3 +841,14 @@ static int __init setnocoherentio(char *str)
+ }
+ early_param("nocoherentio", setnocoherentio);
+ #endif
++
++void __init arch_cpu_finalize_init(void)
++{
++	unsigned int cpu = smp_processor_id();
++
++	cpu_data[cpu].udelay_val = loops_per_jiffy;
++	check_bugs32();
++
++	if (IS_ENABLED(CONFIG_CPU_R4X00_BUGS64))
++		check_bugs64();
++}
diff --git a/patches/kernel/0019-sh-cpu-Switch-to-arch_cpu_finalize_init.patch b/patches/kernel/0019-sh-cpu-Switch-to-arch_cpu_finalize_init.patch
new file mode 100644
index 000000000000..6329a3962aa1
--- /dev/null
+++ b/patches/kernel/0019-sh-cpu-Switch-to-arch_cpu_finalize_init.patch
@@ -0,0 +1,217 @@
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx@linutronix.de>
+Date: Wed, 14 Jun 2023 01:39:33 +0200
+Subject: [PATCH] sh/cpu: Switch to arch_cpu_finalize_init()
+
+check_bugs() is about to be phased out. Switch over to the new
+arch_cpu_finalize_init() implementation.
+
+No functional change.
+
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Link: https://lore.kernel.org/r/20230613224545.371697797@linutronix.de
+
+(cherry picked from commit 01eb454e9bfe593f320ecbc9aaec60bf87cd453d)
+CVE-2022-40982
+Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
+Acked-by: Roxana Nicolescu <roxana.nicolescu@canonical.com>
+Acked-by: Stefan Bader <stefan.bader@canonical.com>
+Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
+(cherry picked from commit 5228732d7ec3b9d13ee33b613dd3ed9c7f6a4695)
+Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
+---
+ arch/sh/Kconfig                 |  1 +
+ arch/sh/include/asm/bugs.h      | 74 ---------------------------------
+ arch/sh/include/asm/processor.h |  2 +
+ arch/sh/kernel/idle.c           |  1 +
+ arch/sh/kernel/setup.c          | 55 ++++++++++++++++++++++++
+ 5 files changed, 59 insertions(+), 74 deletions(-)
+ delete mode 100644 arch/sh/include/asm/bugs.h
+
+diff --git a/arch/sh/Kconfig b/arch/sh/Kconfig
+index 101a0d094a66..b0284730e761 100644
+--- a/arch/sh/Kconfig
++++ b/arch/sh/Kconfig
+@@ -7,6 +7,7 @@ config SUPERH
+ 	select ARCH_HAVE_CUSTOM_GPIO_H
+ 	select ARCH_HAVE_NMI_SAFE_CMPXCHG if (GUSA_RB || CPU_SH4A)
+ 	select ARCH_HAS_BINFMT_FLAT if !MMU
++	select ARCH_HAS_CPU_FINALIZE_INIT
+ 	select ARCH_HAS_CURRENT_STACK_POINTER
+ 	select ARCH_HAS_GIGANTIC_PAGE
+ 	select ARCH_HAS_GCOV_PROFILE_ALL
+diff --git a/arch/sh/include/asm/bugs.h b/arch/sh/include/asm/bugs.h
+deleted file mode 100644
+index fe52abb69cea..000000000000
+--- a/arch/sh/include/asm/bugs.h
++++ /dev/null
+@@ -1,74 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0 */
+-#ifndef __ASM_SH_BUGS_H
+-#define __ASM_SH_BUGS_H
+-
+-/*
+- * This is included by init/main.c to check for architecture-dependent bugs.
+- *
+- * Needs:
+- *	void check_bugs(void);
+- */
+-
+-/*
+- * I don't know of any Super-H bugs yet.
+- */
+-
+-#include <asm/processor.h>
+-
+-extern void select_idle_routine(void);
+-
+-static void __init check_bugs(void)
+-{
+-	extern unsigned long loops_per_jiffy;
+-	char *p = &init_utsname()->machine[2]; /* "sh" */
+-
+-	select_idle_routine();
+-
+-	current_cpu_data.loops_per_jiffy = loops_per_jiffy;
+-
+-	switch (current_cpu_data.family) {
+-	case CPU_FAMILY_SH2:
+-		*p++ = '2';
+-		break;
+-	case CPU_FAMILY_SH2A:
+-		*p++ = '2';
+-		*p++ = 'a';
+-		break;
+-	case CPU_FAMILY_SH3:
+-		*p++ = '3';
+-		break;
+-	case CPU_FAMILY_SH4:
+-		*p++ = '4';
+-		break;
+-	case CPU_FAMILY_SH4A:
+-		*p++ = '4';
+-		*p++ = 'a';
+-		break;
+-	case CPU_FAMILY_SH4AL_DSP:
+-		*p++ = '4';
+-		*p++ = 'a';
+-		*p++ = 'l';
+-		*p++ = '-';
+-		*p++ = 'd';
+-		*p++ = 's';
+-		*p++ = 'p';
+-		break;
+-	case CPU_FAMILY_UNKNOWN:
+-		/*
+-		 * Specifically use CPU_FAMILY_UNKNOWN rather than
+-		 * default:, so we're able to have the compiler whine
+-		 * about unhandled enumerations.
+-		 */
+-		break;
+-	}
+-
+-	printk("CPU: %s\n", get_cpu_subtype(&current_cpu_data));
+-
+-#ifndef __LITTLE_ENDIAN__
+-	/* 'eb' means 'Endian Big' */
+-	*p++ = 'e';
+-	*p++ = 'b';
+-#endif
+-	*p = '\0';
+-}
+-#endif /* __ASM_SH_BUGS_H */
+diff --git a/arch/sh/include/asm/processor.h b/arch/sh/include/asm/processor.h
+index 85a6c1c3c16e..73fba7c922f9 100644
+--- a/arch/sh/include/asm/processor.h
++++ b/arch/sh/include/asm/processor.h
+@@ -166,6 +166,8 @@ extern unsigned int instruction_size(unsigned int insn);
+ #define instruction_size(insn)	(2)
+ #endif
+ 
++void select_idle_routine(void);
++
+ #endif /* __ASSEMBLY__ */
+ 
+ #include <asm/processor_32.h>
+diff --git a/arch/sh/kernel/idle.c b/arch/sh/kernel/idle.c
+index f59814983bd5..a80b2a5b25c7 100644
+--- a/arch/sh/kernel/idle.c
++++ b/arch/sh/kernel/idle.c
+@@ -14,6 +14,7 @@
+ #include <linux/irqflags.h>
+ #include <linux/smp.h>
+ #include <linux/atomic.h>
++#include <asm/processor.h>
+ #include <asm/smp.h>
+ #include <asm/bl_bit.h>
+ 
+diff --git a/arch/sh/kernel/setup.c b/arch/sh/kernel/setup.c
+index af977ec4ca5e..cf7c0f72f293 100644
+--- a/arch/sh/kernel/setup.c
++++ b/arch/sh/kernel/setup.c
+@@ -43,6 +43,7 @@
+ #include <asm/smp.h>
+ #include <asm/mmu_context.h>
+ #include <asm/mmzone.h>
++#include <asm/processor.h>
+ #include <asm/sparsemem.h>
+ #include <asm/platform_early.h>
+ 
+@@ -354,3 +355,57 @@ int test_mode_pin(int pin)
+ {
+ 	return sh_mv.mv_mode_pins() & pin;
+ }
++
++void __init arch_cpu_finalize_init(void)
++{
++	char *p = &init_utsname()->machine[2]; /* "sh" */
++
++	select_idle_routine();
++
++	current_cpu_data.loops_per_jiffy = loops_per_jiffy;
++
++	switch (current_cpu_data.family) {
++	case CPU_FAMILY_SH2:
++		*p++ = '2';
++		break;
++	case CPU_FAMILY_SH2A:
++		*p++ = '2';
++		*p++ = 'a';
++		break;
++	case CPU_FAMILY_SH3:
++		*p++ = '3';
++		break;
++	case CPU_FAMILY_SH4:
++		*p++ = '4';
++		break;
++	case CPU_FAMILY_SH4A:
++		*p++ = '4';
++		*p++ = 'a';
++		break;
++	case CPU_FAMILY_SH4AL_DSP:
++		*p++ = '4';
++		*p++ = 'a';
++		*p++ = 'l';
++		*p++ = '-';
++		*p++ = 'd';
++		*p++ = 's';
++		*p++ = 'p';
++		break;
++	case CPU_FAMILY_UNKNOWN:
++		/*
++		 * Specifically use CPU_FAMILY_UNKNOWN rather than
++		 * default:, so we're able to have the compiler whine
++		 * about unhandled enumerations.
++		 */
++		break;
++	}
++
++	pr_info("CPU: %s\n", get_cpu_subtype(&current_cpu_data));
++
++#ifndef __LITTLE_ENDIAN__
++	/* 'eb' means 'Endian Big' */
++	*p++ = 'e';
++	*p++ = 'b';
++#endif
++	*p = '\0';
++}
diff --git a/patches/kernel/0020-sparc-cpu-Switch-to-arch_cpu_finalize_init.patch b/patches/kernel/0020-sparc-cpu-Switch-to-arch_cpu_finalize_init.patch
new file mode 100644
index 000000000000..032c7db551f5
--- /dev/null
+++ b/patches/kernel/0020-sparc-cpu-Switch-to-arch_cpu_finalize_init.patch
@@ -0,0 +1,80 @@
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx@linutronix.de>
+Date: Wed, 14 Jun 2023 01:39:35 +0200
+Subject: [PATCH] sparc/cpu: Switch to arch_cpu_finalize_init()
+
+check_bugs() is about to be phased out. Switch over to the new
+arch_cpu_finalize_init() implementation.
+
+No functional change.
+
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Reviewed-by: Sam Ravnborg <sam@ravnborg.org>
+Link: https://lore.kernel.org/r/20230613224545.431995857@linutronix.de
+
+(cherry picked from commit 44ade508e3bfac45ae97864587de29eb1a881ec0)
+CVE-2022-40982
+Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
+Acked-by: Roxana Nicolescu <roxana.nicolescu@canonical.com>
+Acked-by: Stefan Bader <stefan.bader@canonical.com>
+Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
+(cherry picked from commit 5f02f99c6d6fd4f2c7b77f6d01bac14cc6fae2f6)
+Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
+---
+ arch/sparc/Kconfig            |  1 +
+ arch/sparc/include/asm/bugs.h | 18 ------------------
+ arch/sparc/kernel/setup_32.c  |  7 +++++++
+ 3 files changed, 8 insertions(+), 18 deletions(-)
+ delete mode 100644 arch/sparc/include/asm/bugs.h
+
+diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
+index dbb1760cbe8c..b67d96e3392e 100644
+--- a/arch/sparc/Kconfig
++++ b/arch/sparc/Kconfig
+@@ -51,6 +51,7 @@ config SPARC
+ config SPARC32
+ 	def_bool !64BIT
+ 	select ARCH_32BIT_OFF_T
++	select ARCH_HAS_CPU_FINALIZE_INIT if !SMP
+ 	select ARCH_HAS_SYNC_DMA_FOR_CPU
+ 	select CLZ_TAB
+ 	select DMA_DIRECT_REMAP
+diff --git a/arch/sparc/include/asm/bugs.h b/arch/sparc/include/asm/bugs.h
+deleted file mode 100644
+index 02fa369b9c21..000000000000
+--- a/arch/sparc/include/asm/bugs.h
++++ /dev/null
+@@ -1,18 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0 */
+-/* include/asm/bugs.h:  Sparc probes for various bugs.
+- *
+- * Copyright (C) 1996, 2007 David S. Miller (davem@davemloft.net)
+- */
+-
+-#ifdef CONFIG_SPARC32
+-#include <asm/cpudata.h>
+-#endif
+-
+-extern unsigned long loops_per_jiffy;
+-
+-static void __init check_bugs(void)
+-{
+-#if defined(CONFIG_SPARC32) && !defined(CONFIG_SMP)
+-	cpu_data(0).udelay_val = loops_per_jiffy;
+-#endif
+-}
+diff --git a/arch/sparc/kernel/setup_32.c b/arch/sparc/kernel/setup_32.c
+index c8e0dd99f370..c9d1ba4f311b 100644
+--- a/arch/sparc/kernel/setup_32.c
++++ b/arch/sparc/kernel/setup_32.c
+@@ -412,3 +412,10 @@ static int __init topology_init(void)
+ }
+ 
+ subsys_initcall(topology_init);
++
++#if defined(CONFIG_SPARC32) && !defined(CONFIG_SMP)
++void __init arch_cpu_finalize_init(void)
++{
++	cpu_data(0).udelay_val = loops_per_jiffy;
++}
++#endif
diff --git a/patches/kernel/0021-um-cpu-Switch-to-arch_cpu_finalize_init.patch b/patches/kernel/0021-um-cpu-Switch-to-arch_cpu_finalize_init.patch
new file mode 100644
index 000000000000..e530cd122a94
--- /dev/null
+++ b/patches/kernel/0021-um-cpu-Switch-to-arch_cpu_finalize_init.patch
@@ -0,0 +1,75 @@
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx@linutronix.de>
+Date: Wed, 14 Jun 2023 01:39:36 +0200
+Subject: [PATCH] um/cpu: Switch to arch_cpu_finalize_init()
+
+check_bugs() is about to be phased out. Switch over to the new
+arch_cpu_finalize_init() implementation.
+
+No functional change.
+
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Acked-by: Richard Weinberger <richard@nod.at>
+Link: https://lore.kernel.org/r/20230613224545.493148694@linutronix.de
+
+(cherry picked from commit 9349b5cd0908f8afe95529fc7a8cbb1417df9b0c)
+CVE-2022-40982
+Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
+Acked-by: Roxana Nicolescu <roxana.nicolescu@canonical.com>
+Acked-by: Stefan Bader <stefan.bader@canonical.com>
+Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
+(cherry picked from commit 37d44a1fca2e73fabeaf042a5bcdff3bd8e03224)
+Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
+---
+ arch/um/Kconfig            | 1 +
+ arch/um/include/asm/bugs.h | 7 -------
+ arch/um/kernel/um_arch.c   | 3 ++-
+ 3 files changed, 3 insertions(+), 8 deletions(-)
+ delete mode 100644 arch/um/include/asm/bugs.h
+
+diff --git a/arch/um/Kconfig b/arch/um/Kconfig
+index ad4ff3b0e91e..82709bc36df7 100644
+--- a/arch/um/Kconfig
++++ b/arch/um/Kconfig
+@@ -6,6 +6,7 @@ config UML
+ 	bool
+ 	default y
+ 	select ARCH_EPHEMERAL_INODES
++	select ARCH_HAS_CPU_FINALIZE_INIT
+ 	select ARCH_HAS_FORTIFY_SOURCE
+ 	select ARCH_HAS_GCOV_PROFILE_ALL
+ 	select ARCH_HAS_KCOV
+diff --git a/arch/um/include/asm/bugs.h b/arch/um/include/asm/bugs.h
+deleted file mode 100644
+index 4473942a0839..000000000000
+--- a/arch/um/include/asm/bugs.h
++++ /dev/null
+@@ -1,7 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0 */
+-#ifndef __UM_BUGS_H
+-#define __UM_BUGS_H
+-
+-void check_bugs(void);
+-
+-#endif
+diff --git a/arch/um/kernel/um_arch.c b/arch/um/kernel/um_arch.c
+index 786b44dc20c9..664f477fe084 100644
+--- a/arch/um/kernel/um_arch.c
++++ b/arch/um/kernel/um_arch.c
+@@ -3,6 +3,7 @@
+  * Copyright (C) 2000 - 2007 Jeff Dike (jdike@{addtoit,linux.intel}.com)
+  */
+ 
++#include <linux/cpu.h>
+ #include <linux/delay.h>
+ #include <linux/init.h>
+ #include <linux/mm.h>
+@@ -426,7 +427,7 @@ void __init setup_arch(char **cmdline_p)
+ 	}
+ }
+ 
+-void __init check_bugs(void)
++void __init arch_cpu_finalize_init(void)
+ {
+ 	arch_check_bugs();
+ 	os_check_bugs();
diff --git a/patches/kernel/0022-init-Remove-check_bugs-leftovers.patch b/patches/kernel/0022-init-Remove-check_bugs-leftovers.patch
new file mode 100644
index 000000000000..3d3ddb113612
--- /dev/null
+++ b/patches/kernel/0022-init-Remove-check_bugs-leftovers.patch
@@ -0,0 +1,172 @@
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx@linutronix.de>
+Date: Wed, 14 Jun 2023 01:39:38 +0200
+Subject: [PATCH] init: Remove check_bugs() leftovers
+
+Everything is converted over to arch_cpu_finalize_init(). Remove the
+check_bugs() leftovers including the empty stubs in asm-generic, alpha,
+parisc, powerpc and xtensa.
+
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
+Link: https://lore.kernel.org/r/20230613224545.553215951@linutronix.de
+
+(cherry picked from commit 61235b24b9cb37c13fcad5b9596d59a1afdcec30)
+CVE-2022-40982
+Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
+Acked-by: Roxana Nicolescu <roxana.nicolescu@canonical.com>
+Acked-by: Stefan Bader <stefan.bader@canonical.com>
+Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
+(cherry picked from commit f6914d2bea4df361881adc56f02dde9bddfa1b0a)
+Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
+---
+ arch/alpha/include/asm/bugs.h   | 20 --------------------
+ arch/parisc/include/asm/bugs.h  | 20 --------------------
+ arch/powerpc/include/asm/bugs.h | 15 ---------------
+ arch/xtensa/include/asm/bugs.h  | 18 ------------------
+ include/asm-generic/bugs.h      | 11 -----------
+ init/main.c                     |  5 -----
+ 6 files changed, 89 deletions(-)
+ delete mode 100644 arch/alpha/include/asm/bugs.h
+ delete mode 100644 arch/parisc/include/asm/bugs.h
+ delete mode 100644 arch/powerpc/include/asm/bugs.h
+ delete mode 100644 arch/xtensa/include/asm/bugs.h
+ delete mode 100644 include/asm-generic/bugs.h
+
+diff --git a/arch/alpha/include/asm/bugs.h b/arch/alpha/include/asm/bugs.h
+deleted file mode 100644
+index 78030d1c7e7e..000000000000
+--- a/arch/alpha/include/asm/bugs.h
++++ /dev/null
+@@ -1,20 +0,0 @@
+-/*
+- *  include/asm-alpha/bugs.h
+- *
+- *  Copyright (C) 1994  Linus Torvalds
+- */
+-
+-/*
+- * This is included by init/main.c to check for architecture-dependent bugs.
+- *
+- * Needs:
+- *	void check_bugs(void);
+- */
+-
+-/*
+- * I don't know of any alpha bugs yet.. Nice chip
+- */
+-
+-static void check_bugs(void)
+-{
+-}
+diff --git a/arch/parisc/include/asm/bugs.h b/arch/parisc/include/asm/bugs.h
+deleted file mode 100644
+index 0a7f9db6bd1c..000000000000
+--- a/arch/parisc/include/asm/bugs.h
++++ /dev/null
+@@ -1,20 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0 */
+-/*
+- *  include/asm-parisc/bugs.h
+- *
+- *  Copyright (C) 1999	Mike Shaver
+- */
+-
+-/*
+- * This is included by init/main.c to check for architecture-dependent bugs.
+- *
+- * Needs:
+- *	void check_bugs(void);
+- */
+-
+-#include <asm/processor.h>
+-
+-static inline void check_bugs(void)
+-{
+-//	identify_cpu(&boot_cpu_data);
+-}
+diff --git a/arch/powerpc/include/asm/bugs.h b/arch/powerpc/include/asm/bugs.h
+deleted file mode 100644
+index 01b8f6ca4dbb..000000000000
+--- a/arch/powerpc/include/asm/bugs.h
++++ /dev/null
+@@ -1,15 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0-or-later */
+-#ifndef _ASM_POWERPC_BUGS_H
+-#define _ASM_POWERPC_BUGS_H
+-
+-/*
+- */
+-
+-/*
+- * This file is included by 'init/main.c' to check for
+- * architecture-dependent bugs.
+- */
+-
+-static inline void check_bugs(void) { }
+-
+-#endif	/* _ASM_POWERPC_BUGS_H */
+diff --git a/arch/xtensa/include/asm/bugs.h b/arch/xtensa/include/asm/bugs.h
+deleted file mode 100644
+index 69b29d198249..000000000000
+--- a/arch/xtensa/include/asm/bugs.h
++++ /dev/null
+@@ -1,18 +0,0 @@
+-/*
+- * include/asm-xtensa/bugs.h
+- *
+- * This is included by init/main.c to check for architecture-dependent bugs.
+- *
+- * Xtensa processors don't have any bugs.  :)
+- *
+- * This file is subject to the terms and conditions of the GNU General
+- * Public License.  See the file "COPYING" in the main directory of
+- * this archive for more details.
+- */
+-
+-#ifndef _XTENSA_BUGS_H
+-#define _XTENSA_BUGS_H
+-
+-static void check_bugs(void) { }
+-
+-#endif /* _XTENSA_BUGS_H */
+diff --git a/include/asm-generic/bugs.h b/include/asm-generic/bugs.h
+deleted file mode 100644
+index 69021830f078..000000000000
+--- a/include/asm-generic/bugs.h
++++ /dev/null
+@@ -1,11 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0 */
+-#ifndef __ASM_GENERIC_BUGS_H
+-#define __ASM_GENERIC_BUGS_H
+-/*
+- * This file is included by 'init/main.c' to check for
+- * architecture-dependent bugs.
+- */
+-
+-static inline void check_bugs(void) { }
+-
+-#endif	/* __ASM_GENERIC_BUGS_H */
+diff --git a/init/main.c b/init/main.c
+index e39055c8698f..0370df27746f 100644
+--- a/init/main.c
++++ b/init/main.c
+@@ -104,7 +104,6 @@
+ #include <net/net_namespace.h>
+ 
+ #include <asm/io.h>
+-#include <asm/bugs.h>
+ #include <asm/setup.h>
+ #include <asm/sections.h>
+ #include <asm/cacheflush.h>
+@@ -1139,10 +1138,6 @@ asmlinkage __visible void __init __no_sanitize_address start_kernel(void)
+ 	delayacct_init();
+ 
+ 	arch_cpu_finalize_init();
+-	/* Temporary conditional until everything has been converted */
+-#ifndef CONFIG_ARCH_HAS_CPU_FINALIZE_INIT
+-	check_bugs();
+-#endif
+ 
+ 	acpi_subsystem_init();
+ 	arch_post_acpi_subsys_init();
diff --git a/patches/kernel/0023-init-Invoke-arch_cpu_finalize_init-earlier.patch b/patches/kernel/0023-init-Invoke-arch_cpu_finalize_init-earlier.patch
new file mode 100644
index 000000000000..14c08bb84d0d
--- /dev/null
+++ b/patches/kernel/0023-init-Invoke-arch_cpu_finalize_init-earlier.patch
@@ -0,0 +1,64 @@
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx@linutronix.de>
+Date: Wed, 14 Jun 2023 01:39:39 +0200
+Subject: [PATCH] init: Invoke arch_cpu_finalize_init() earlier
+
+X86 is reworking the boot process so that initializations which are not
+required during early boot can be moved into the late boot process and out
+of the fragile and restricted initial boot phase.
+
+arch_cpu_finalize_init() is the obvious place to do such initializations,
+but arch_cpu_finalize_init() is invoked too late in start_kernel() e.g. for
+initializing the FPU completely. fork_init() requires that the FPU is
+initialized as the size of task_struct on X86 depends on the size of the
+required FPU register buffer.
+
+Fortunately none of the init calls between calibrate_delay() and
+arch_cpu_finalize_init() is relevant for the functionality of
+arch_cpu_finalize_init().
+
+Invoke it right after calibrate_delay() where everything which is relevant
+for arch_cpu_finalize_init() has been set up already.
+
+No functional change intended.
+
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Reviewed-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
+Link: https://lore.kernel.org/r/20230613224545.612182854@linutronix.de
+
+(backported from commit 9df9d2f0471b4c4702670380b8d8a45b40b23a7d)
+[cascardo: fixed conflict due to call to mem_encrypt_init]
+CVE-2022-40982
+Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
+Acked-by: Roxana Nicolescu <roxana.nicolescu@canonical.com>
+Acked-by: Stefan Bader <stefan.bader@canonical.com>
+Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
+(cherry picked from commit 919915fc47211940789c8bde231b2f15d1b8d427)
+Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
+---
+ init/main.c | 5 +++--
+ 1 file changed, 3 insertions(+), 2 deletions(-)
+
+diff --git a/init/main.c b/init/main.c
+index 0370df27746f..967584e8c3af 100644
+--- a/init/main.c
++++ b/init/main.c
+@@ -1111,6 +1111,9 @@ asmlinkage __visible void __init __no_sanitize_address start_kernel(void)
+ 		late_time_init();
+ 	sched_clock_init();
+ 	calibrate_delay();
++
++	arch_cpu_finalize_init();
++
+ 	pid_idr_init();
+ 	anon_vma_init();
+ #ifdef CONFIG_X86
+@@ -1137,8 +1140,6 @@ asmlinkage __visible void __init __no_sanitize_address start_kernel(void)
+ 	taskstats_init_early();
+ 	delayacct_init();
+ 
+-	arch_cpu_finalize_init();
+-
+ 	acpi_subsystem_init();
+ 	arch_post_acpi_subsys_init();
+ 	kcsan_init();
diff --git a/patches/kernel/0024-init-x86-Move-mem_encrypt_init-into-arch_cpu_finaliz.patch b/patches/kernel/0024-init-x86-Move-mem_encrypt_init-into-arch_cpu_finaliz.patch
new file mode 100644
index 000000000000..da1720faa29d
--- /dev/null
+++ b/patches/kernel/0024-init-x86-Move-mem_encrypt_init-into-arch_cpu_finaliz.patch
@@ -0,0 +1,121 @@
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx@linutronix.de>
+Date: Wed, 14 Jun 2023 01:39:41 +0200
+Subject: [PATCH] init, x86: Move mem_encrypt_init() into
+ arch_cpu_finalize_init()
+
+Invoke the X86ism mem_encrypt_init() from X86 arch_cpu_finalize_init() and
+remove the weak fallback from the core code.
+
+No functional change.
+
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Link: https://lore.kernel.org/r/20230613224545.670360645@linutronix.de
+
+(backported from commit 439e17576eb47f26b78c5bbc72e344d4206d2327)
+[cascardo: really remove mem_encrypt_init from init/main.c]
+CVE-2022-40982
+Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
+Acked-by: Roxana Nicolescu <roxana.nicolescu@canonical.com>
+Acked-by: Stefan Bader <stefan.bader@canonical.com>
+Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
+(cherry picked from commit 439b49f26bc9ee74a3ac4b356c12d41f68c49cbd)
+Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
+---
+ arch/x86/include/asm/mem_encrypt.h |  7 ++++---
+ arch/x86/kernel/cpu/common.c       | 11 +++++++++++
+ init/main.c                        | 11 -----------
+ 3 files changed, 15 insertions(+), 14 deletions(-)
+
+diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
+index 72ca90552b6a..a95914f479b8 100644
+--- a/arch/x86/include/asm/mem_encrypt.h
++++ b/arch/x86/include/asm/mem_encrypt.h
+@@ -51,6 +51,8 @@ void __init mem_encrypt_free_decrypted_mem(void);
+ 
+ void __init sev_es_init_vc_handling(void);
+ 
++void __init mem_encrypt_init(void);
++
+ #define __bss_decrypted __section(".bss..decrypted")
+ 
+ #else	/* !CONFIG_AMD_MEM_ENCRYPT */
+@@ -82,13 +84,12 @@ early_set_mem_enc_dec_hypercall(unsigned long vaddr, int npages, bool enc) {}
+ 
+ static inline void mem_encrypt_free_decrypted_mem(void) { }
+ 
++static inline void mem_encrypt_init(void) { }
++
+ #define __bss_decrypted
+ 
+ #endif	/* CONFIG_AMD_MEM_ENCRYPT */
+ 
+-/* Architecture __weak replacement functions */
+-void __init mem_encrypt_init(void);
+-
+ void add_encrypt_protection_map(void);
+ 
+ /*
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index 0f32ecfbdeb1..637817d0d819 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -18,6 +18,7 @@
+ #include <linux/init.h>
+ #include <linux/kprobes.h>
+ #include <linux/kgdb.h>
++#include <linux/mem_encrypt.h>
+ #include <linux/smp.h>
+ #include <linux/cpu.h>
+ #include <linux/io.h>
+@@ -2412,4 +2413,14 @@ void __init arch_cpu_finalize_init(void)
+ 	} else {
+ 		fpu__init_check_bugs();
+ 	}
++
++	/*
++	 * This needs to be called before any devices perform DMA
++	 * operations that might use the SWIOTLB bounce buffers. It will
++	 * mark the bounce buffers as decrypted so that their usage will
++	 * not cause "plain-text" data to be decrypted when accessed. It
++	 * must be called after late_time_init() so that Hyper-V x86/x64
++	 * hypercalls work when the SWIOTLB bounce buffers are decrypted.
++	 */
++	mem_encrypt_init();
+ }
+diff --git a/init/main.c b/init/main.c
+index 967584e8c3af..7533b4da4fb2 100644
+--- a/init/main.c
++++ b/init/main.c
+@@ -96,7 +96,6 @@
+ #include <linux/cache.h>
+ #include <linux/rodata_test.h>
+ #include <linux/jump_label.h>
+-#include <linux/mem_encrypt.h>
+ #include <linux/kcsan.h>
+ #include <linux/init_syscalls.h>
+ #include <linux/stackdepot.h>
+@@ -783,8 +782,6 @@ void __init __weak thread_stack_cache_init(void)
+ }
+ #endif
+ 
+-void __init __weak mem_encrypt_init(void) { }
+-
+ void __init __weak poking_init(void) { }
+ 
+ void __init __weak pgtable_cache_init(void) { }
+@@ -1087,14 +1084,6 @@ asmlinkage __visible void __init __no_sanitize_address start_kernel(void)
+ 	 */
+ 	locking_selftest();
+ 
+-	/*
+-	 * This needs to be called before any devices perform DMA
+-	 * operations that might use the SWIOTLB bounce buffers. It will
+-	 * mark the bounce buffers as decrypted so that their usage will
+-	 * not cause "plain-text" data to be decrypted when accessed.
+-	 */
+-	mem_encrypt_init();
+-
+ #ifdef CONFIG_BLK_DEV_INITRD
+ 	if (initrd_start && !initrd_below_start_ok &&
+ 	    page_to_pfn(virt_to_page((void *)initrd_start)) < min_low_pfn) {
diff --git a/patches/kernel/0025-x86-init-Initialize-signal-frame-size-late.patch b/patches/kernel/0025-x86-init-Initialize-signal-frame-size-late.patch
new file mode 100644
index 000000000000..44958b2e75de
--- /dev/null
+++ b/patches/kernel/0025-x86-init-Initialize-signal-frame-size-late.patch
@@ -0,0 +1,81 @@
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx@linutronix.de>
+Date: Wed, 14 Jun 2023 01:39:42 +0200
+Subject: [PATCH] x86/init: Initialize signal frame size late
+
+No point in doing this during really early boot. Move it to an early
+initcall so that it is set up before possible user mode helpers are started
+during device initialization.
+
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Link: https://lore.kernel.org/r/20230613224545.727330699@linutronix.de
+
+(cherry picked from commit 54d9a91a3d6713d1332e93be13b4eaf0fa54349d)
+CVE-2022-40982
+Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
+Acked-by: Roxana Nicolescu <roxana.nicolescu@canonical.com>
+Acked-by: Stefan Bader <stefan.bader@canonical.com>
+Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
+(cherry picked from commit cae51198acf57beecfe60bd11710d15b0f0a2856)
+Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
+---
+ arch/x86/include/asm/sigframe.h | 2 --
+ arch/x86/kernel/cpu/common.c    | 3 ---
+ arch/x86/kernel/signal.c        | 4 +++-
+ 3 files changed, 3 insertions(+), 6 deletions(-)
+
+diff --git a/arch/x86/include/asm/sigframe.h b/arch/x86/include/asm/sigframe.h
+index 5b1ed650b124..84eab2724875 100644
+--- a/arch/x86/include/asm/sigframe.h
++++ b/arch/x86/include/asm/sigframe.h
+@@ -85,6 +85,4 @@ struct rt_sigframe_x32 {
+ 
+ #endif /* CONFIG_X86_64 */
+ 
+-void __init init_sigframe_size(void);
+-
+ #endif /* _ASM_X86_SIGFRAME_H */
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index 637817d0d819..256083661fb2 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -64,7 +64,6 @@
+ #include <asm/cpu_device_id.h>
+ #include <asm/uv/uv.h>
+ #include <asm/set_memory.h>
+-#include <asm/sigframe.h>
+ #include <asm/traps.h>
+ #include <asm/sev.h>
+ 
+@@ -1599,8 +1598,6 @@ static void __init early_identify_cpu(struct cpuinfo_x86 *c)
+ 
+ 	fpu__init_system(c);
+ 
+-	init_sigframe_size();
+-
+ #ifdef CONFIG_X86_32
+ 	/*
+ 	 * Regardless of whether PCID is enumerated, the SDM says
+diff --git a/arch/x86/kernel/signal.c b/arch/x86/kernel/signal.c
+index 004cb30b7419..cfeec3ee877e 100644
+--- a/arch/x86/kernel/signal.c
++++ b/arch/x86/kernel/signal.c
+@@ -182,7 +182,7 @@ get_sigframe(struct ksignal *ksig, struct pt_regs *regs, size_t frame_size,
+ static unsigned long __ro_after_init max_frame_size;
+ static unsigned int __ro_after_init fpu_default_state_size;
+ 
+-void __init init_sigframe_size(void)
++static int __init init_sigframe_size(void)
+ {
+ 	fpu_default_state_size = fpu__get_fpstate_size();
+ 
+@@ -194,7 +194,9 @@ void __init init_sigframe_size(void)
+ 	max_frame_size = round_up(max_frame_size, FRAME_ALIGNMENT);
+ 
+ 	pr_info("max sigframe size: %lu\n", max_frame_size);
++	return 0;
+ }
++early_initcall(init_sigframe_size);
+ 
+ unsigned long get_sigframe_size(void)
+ {
diff --git a/patches/kernel/0026-x86-fpu-Remove-cpuinfo-argument-from-init-functions.patch b/patches/kernel/0026-x86-fpu-Remove-cpuinfo-argument-from-init-functions.patch
new file mode 100644
index 000000000000..b73ba2a3e878
--- /dev/null
+++ b/patches/kernel/0026-x86-fpu-Remove-cpuinfo-argument-from-init-functions.patch
@@ -0,0 +1,76 @@
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx@linutronix.de>
+Date: Wed, 14 Jun 2023 01:39:43 +0200
+Subject: [PATCH] x86/fpu: Remove cpuinfo argument from init functions
+
+Nothing in the call chain requires it
+
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Link: https://lore.kernel.org/r/20230613224545.783704297@linutronix.de
+
+(cherry picked from commit 1f34bb2a24643e0087652d81078e4f616562738d)
+CVE-2022-40982
+Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
+Acked-by: Roxana Nicolescu <roxana.nicolescu@canonical.com>
+Acked-by: Stefan Bader <stefan.bader@canonical.com>
+Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
+(cherry picked from commit df2f3fc430e187551eb4aaa14aa21640d7ef44ca)
+Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
+---
+ arch/x86/include/asm/fpu/api.h | 2 +-
+ arch/x86/kernel/cpu/common.c   | 2 +-
+ arch/x86/kernel/fpu/init.c     | 6 +++---
+ 3 files changed, 5 insertions(+), 5 deletions(-)
+
+diff --git a/arch/x86/include/asm/fpu/api.h b/arch/x86/include/asm/fpu/api.h
+index 503a577814b2..b475d9a582b8 100644
+--- a/arch/x86/include/asm/fpu/api.h
++++ b/arch/x86/include/asm/fpu/api.h
+@@ -109,7 +109,7 @@ extern void fpu_reset_from_exception_fixup(void);
+ 
+ /* Boot, hotplug and resume */
+ extern void fpu__init_cpu(void);
+-extern void fpu__init_system(struct cpuinfo_x86 *c);
++extern void fpu__init_system(void);
+ extern void fpu__init_check_bugs(void);
+ extern void fpu__resume_cpu(void);
+ 
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index 256083661fb2..794eb851cb0d 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -1596,7 +1596,7 @@ static void __init early_identify_cpu(struct cpuinfo_x86 *c)
+ 
+ 	sld_setup(c);
+ 
+-	fpu__init_system(c);
++	fpu__init_system();
+ 
+ #ifdef CONFIG_X86_32
+ 	/*
+diff --git a/arch/x86/kernel/fpu/init.c b/arch/x86/kernel/fpu/init.c
+index 851eb13edc01..5001df943828 100644
+--- a/arch/x86/kernel/fpu/init.c
++++ b/arch/x86/kernel/fpu/init.c
+@@ -71,7 +71,7 @@ static bool fpu__probe_without_cpuid(void)
+ 	return fsw == 0 && (fcw & 0x103f) == 0x003f;
+ }
+ 
+-static void fpu__init_system_early_generic(struct cpuinfo_x86 *c)
++static void fpu__init_system_early_generic(void)
+ {
+ 	if (!boot_cpu_has(X86_FEATURE_CPUID) &&
+ 	    !test_bit(X86_FEATURE_FPU, (unsigned long *)cpu_caps_cleared)) {
+@@ -211,10 +211,10 @@ static void __init fpu__init_system_xstate_size_legacy(void)
+  * Called on the boot CPU once per system bootup, to set up the initial
+  * FPU state that is later cloned into all processes:
+  */
+-void __init fpu__init_system(struct cpuinfo_x86 *c)
++void __init fpu__init_system(void)
+ {
+ 	fpstate_reset(&current->thread.fpu);
+-	fpu__init_system_early_generic(c);
++	fpu__init_system_early_generic();
+ 
+ 	/*
+ 	 * The FPU has to be operational for some of the
diff --git a/patches/kernel/0027-x86-fpu-Mark-init-functions-__init.patch b/patches/kernel/0027-x86-fpu-Mark-init-functions-__init.patch
new file mode 100644
index 000000000000..3c079636e99e
--- /dev/null
+++ b/patches/kernel/0027-x86-fpu-Mark-init-functions-__init.patch
@@ -0,0 +1,44 @@
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx@linutronix.de>
+Date: Wed, 14 Jun 2023 01:39:45 +0200
+Subject: [PATCH] x86/fpu: Mark init functions __init
+
+No point in keeping them around.
+
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Link: https://lore.kernel.org/r/20230613224545.841685728@linutronix.de
+
+(cherry picked from commit 1703db2b90c91b2eb2d699519fc505fe431dde0e)
+CVE-2022-40982
+Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
+Acked-by: Roxana Nicolescu <roxana.nicolescu@canonical.com>
+Acked-by: Stefan Bader <stefan.bader@canonical.com>
+Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
+(cherry picked from commit 368569c00f730c2f530d3d5431fd3fe8ca81cba3)
+Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
+---
+ arch/x86/kernel/fpu/init.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/arch/x86/kernel/fpu/init.c b/arch/x86/kernel/fpu/init.c
+index 5001df943828..998a08f17e33 100644
+--- a/arch/x86/kernel/fpu/init.c
++++ b/arch/x86/kernel/fpu/init.c
+@@ -53,7 +53,7 @@ void fpu__init_cpu(void)
+ 	fpu__init_cpu_xstate();
+ }
+ 
+-static bool fpu__probe_without_cpuid(void)
++static bool __init fpu__probe_without_cpuid(void)
+ {
+ 	unsigned long cr0;
+ 	u16 fsw, fcw;
+@@ -71,7 +71,7 @@ static bool fpu__probe_without_cpuid(void)
+ 	return fsw == 0 && (fcw & 0x103f) == 0x003f;
+ }
+ 
+-static void fpu__init_system_early_generic(void)
++static void __init fpu__init_system_early_generic(void)
+ {
+ 	if (!boot_cpu_has(X86_FEATURE_CPUID) &&
+ 	    !test_bit(X86_FEATURE_FPU, (unsigned long *)cpu_caps_cleared)) {
diff --git a/patches/kernel/0028-x86-fpu-Move-FPU-initialization-into-arch_cpu_finali.patch b/patches/kernel/0028-x86-fpu-Move-FPU-initialization-into-arch_cpu_finali.patch
new file mode 100644
index 000000000000..a753d943730e
--- /dev/null
+++ b/patches/kernel/0028-x86-fpu-Move-FPU-initialization-into-arch_cpu_finali.patch
@@ -0,0 +1,80 @@
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx@linutronix.de>
+Date: Wed, 14 Jun 2023 01:39:46 +0200
+Subject: [PATCH] x86/fpu: Move FPU initialization into
+ arch_cpu_finalize_init()
+
+Initializing the FPU during the early boot process is a pointless
+exercise. Early boot is convoluted and fragile enough.
+
+Nothing requires that the FPU is set up early. It has to be initialized
+before fork_init() because the task_struct size depends on the FPU register
+buffer size.
+
+Move the initialization to arch_cpu_finalize_init() which is the perfect
+place to do so.
+
+No functional change.
+
+This allows to remove quite some of the custom early command line parsing,
+but that's subject to the next installment.
+
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Link: https://lore.kernel.org/r/20230613224545.902376621@linutronix.de
+
+(cherry picked from commit b81fac906a8f9e682e513ddd95697ec7a20878d4)
+CVE-2022-40982
+Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
+Acked-by: Roxana Nicolescu <roxana.nicolescu@canonical.com>
+Acked-by: Stefan Bader <stefan.bader@canonical.com>
+Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
+(cherry picked from commit 010f3814ec351195c9d0a9a408798f9c66fdb906)
+Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
+---
+ arch/x86/kernel/cpu/common.c | 12 ++++++++----
+ 1 file changed, 8 insertions(+), 4 deletions(-)
+
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index 794eb851cb0d..9b53d1cb424d 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -1596,8 +1596,6 @@ static void __init early_identify_cpu(struct cpuinfo_x86 *c)
+ 
+ 	sld_setup(c);
+ 
+-	fpu__init_system();
+-
+ #ifdef CONFIG_X86_32
+ 	/*
+ 	 * Regardless of whether PCID is enumerated, the SDM says
+@@ -2283,8 +2281,6 @@ void cpu_init(void)
+ 
+ 	doublefault_init_cpu_tss();
+ 
+-	fpu__init_cpu();
+-
+ 	if (is_uv_system())
+ 		uv_cpu_init();
+ 
+@@ -2300,6 +2296,7 @@ void cpu_init_secondary(void)
+ 	 */
+ 	cpu_init_exception_handling();
+ 	cpu_init();
++	fpu__init_cpu();
+ }
+ #endif
+ 
+@@ -2394,6 +2391,13 @@ void __init arch_cpu_finalize_init(void)
+ 			'0' + (boot_cpu_data.x86 > 6 ? 6 : boot_cpu_data.x86);
+ 	}
+ 
++	/*
++	 * Must be before alternatives because it might set or clear
++	 * feature bits.
++	 */
++	fpu__init_system();
++	fpu__init_cpu();
++
+ 	alternative_instructions();
+ 
+ 	if (IS_ENABLED(CONFIG_X86_64)) {
diff --git a/patches/kernel/0029-x86-mem_encrypt-Unbreak-the-AMD_MEM_ENCRYPT-n-build.patch b/patches/kernel/0029-x86-mem_encrypt-Unbreak-the-AMD_MEM_ENCRYPT-n-build.patch
new file mode 100644
index 000000000000..0b6207bebc71
--- /dev/null
+++ b/patches/kernel/0029-x86-mem_encrypt-Unbreak-the-AMD_MEM_ENCRYPT-n-build.patch
@@ -0,0 +1,69 @@
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx@linutronix.de>
+Date: Fri, 16 Jun 2023 22:15:31 +0200
+Subject: [PATCH] x86/mem_encrypt: Unbreak the AMD_MEM_ENCRYPT=n build
+
+Moving mem_encrypt_init() broke the AMD_MEM_ENCRYPT=n because the
+declaration of that function was under #ifdef CONFIG_AMD_MEM_ENCRYPT and
+the obvious placement for the inline stub was the #else path.
+
+This is a leftover of commit 20f07a044a76 ("x86/sev: Move common memory
+encryption code to mem_encrypt.c") which made mem_encrypt_init() depend on
+X86_MEM_ENCRYPT without moving the prototype. That did not fail back then
+because there was no stub inline as the core init code had a weak function.
+
+Move both the declaration and the stub out of the CONFIG_AMD_MEM_ENCRYPT
+section and guard it with CONFIG_X86_MEM_ENCRYPT.
+
+Fixes: 439e17576eb4 ("init, x86: Move mem_encrypt_init() into arch_cpu_finalize_init()")
+Reported-by: kernel test robot <lkp@intel.com>
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Closes: https://lore.kernel.org/oe-kbuild-all/202306170247.eQtCJPE8-lkp@intel.com/
+
+(cherry picked from commit 0a9567ac5e6a40cdd9c8cd15b19a62a15250f450)
+CVE-2022-40982
+Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
+Acked-by: Roxana Nicolescu <roxana.nicolescu@canonical.com>
+Acked-by: Stefan Bader <stefan.bader@canonical.com>
+Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
+(cherry picked from commit 305ba9053fdf1503a6717e3a96a7d9e0cd48ef15)
+Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
+---
+ arch/x86/include/asm/mem_encrypt.h | 10 ++++++----
+ 1 file changed, 6 insertions(+), 4 deletions(-)
+
+diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
+index a95914f479b8..8f513372cd8d 100644
+--- a/arch/x86/include/asm/mem_encrypt.h
++++ b/arch/x86/include/asm/mem_encrypt.h
+@@ -17,6 +17,12 @@
+ 
+ #include <asm/bootparam.h>
+ 
++#ifdef CONFIG_X86_MEM_ENCRYPT
++void __init mem_encrypt_init(void);
++#else
++static inline void mem_encrypt_init(void) { }
++#endif
++
+ #ifdef CONFIG_AMD_MEM_ENCRYPT
+ 
+ extern u64 sme_me_mask;
+@@ -51,8 +57,6 @@ void __init mem_encrypt_free_decrypted_mem(void);
+ 
+ void __init sev_es_init_vc_handling(void);
+ 
+-void __init mem_encrypt_init(void);
+-
+ #define __bss_decrypted __section(".bss..decrypted")
+ 
+ #else	/* !CONFIG_AMD_MEM_ENCRYPT */
+@@ -84,8 +88,6 @@ early_set_mem_enc_dec_hypercall(unsigned long vaddr, int npages, bool enc) {}
+ 
+ static inline void mem_encrypt_free_decrypted_mem(void) { }
+ 
+-static inline void mem_encrypt_init(void) { }
+-
+ #define __bss_decrypted
+ 
+ #endif	/* CONFIG_AMD_MEM_ENCRYPT */
diff --git a/patches/kernel/0030-x86-xen-Fix-secondary-processors-FPU-initialization.patch b/patches/kernel/0030-x86-xen-Fix-secondary-processors-FPU-initialization.patch
new file mode 100644
index 000000000000..14105f839b63
--- /dev/null
+++ b/patches/kernel/0030-x86-xen-Fix-secondary-processors-FPU-initialization.patch
@@ -0,0 +1,42 @@
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
+From: Juergen Gross <jgross@suse.com>
+Date: Mon, 3 Jul 2023 15:00:32 +0200
+Subject: [PATCH] x86/xen: Fix secondary processors' FPU initialization
+
+Moving the call of fpu__init_cpu() from cpu_init() to start_secondary()
+broke Xen PV guests, as those don't call start_secondary() for APs.
+
+Call fpu__init_cpu() in Xen's cpu_bringup(), which is the Xen PV
+replacement of start_secondary().
+
+Fixes: b81fac906a8f ("x86/fpu: Move FPU initialization into arch_cpu_finalize_init()")
+Signed-off-by: Juergen Gross <jgross@suse.com>
+Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
+Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
+Acked-by: Thomas Gleixner <tglx@linutronix.de>
+Link: https://lore.kernel.org/r/20230703130032.22916-1-jgross@suse.com
+
+(cherry picked from commit fe3e0a13e597c1c8617814bf9b42ab732db5c26e)
+CVE-2022-40982
+Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
+Acked-by: Roxana Nicolescu <roxana.nicolescu@canonical.com>
+Acked-by: Stefan Bader <stefan.bader@canonical.com>
+Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
+(cherry picked from commit 96617ee9a5943f6c58fa503257e18b191e84d117)
+Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
+---
+ arch/x86/xen/smp_pv.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/arch/x86/xen/smp_pv.c b/arch/x86/xen/smp_pv.c
+index 6175f2c5c822..e97bab7b0010 100644
+--- a/arch/x86/xen/smp_pv.c
++++ b/arch/x86/xen/smp_pv.c
+@@ -63,6 +63,7 @@ static void cpu_bringup(void)
+ 
+ 	cr4_init();
+ 	cpu_init();
++	fpu__init_cpu();
+ 	touch_softlockup_watchdog();
+ 
+ 	/* PVH runs in ring 0 and allows us to do native syscalls. Yay! */
diff --git a/patches/kernel/0031-x86-speculation-Add-Gather-Data-Sampling-mitigation.patch b/patches/kernel/0031-x86-speculation-Add-Gather-Data-Sampling-mitigation.patch
new file mode 100644
index 000000000000..9575840e95ad
--- /dev/null
+++ b/patches/kernel/0031-x86-speculation-Add-Gather-Data-Sampling-mitigation.patch
@@ -0,0 +1,595 @@
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
+From: Daniel Sneddon <daniel.sneddon@linux.intel.com>
+Date: Wed, 12 Jul 2023 19:43:11 -0700
+Subject: [PATCH] x86/speculation: Add Gather Data Sampling mitigation
+
+Gather Data Sampling (GDS) is a hardware vulnerability which allows
+unprivileged speculative access to data which was previously stored in
+vector registers.
+
+Intel processors that support AVX2 and AVX512 have gather instructions
+that fetch non-contiguous data elements from memory. On vulnerable
+hardware, when a gather instruction is transiently executed and
+encounters a fault, stale data from architectural or internal vector
+registers may get transiently stored to the destination vector
+register allowing an attacker to infer the stale data using typical
+side channel techniques like cache timing attacks.
+
+This mitigation is different from many earlier ones for two reasons.
+First, it is enabled by default and a bit must be set to *DISABLE* it.
+This is the opposite of normal mitigation polarity. This means GDS can
+be mitigated simply by updating microcode and leaving the new control
+bit alone.
+
+Second, GDS has a "lock" bit. This lock bit is there because the
+mitigation affects the hardware security features KeyLocker and SGX.
+It needs to be enabled and *STAY* enabled for these features to be
+mitigated against GDS.
+
+The mitigation is enabled in the microcode by default. Disable it by
+setting gather_data_sampling=off or by disabling all mitigations with
+mitigations=off. The mitigation status can be checked by reading:
+
+    /sys/devices/system/cpu/vulnerabilities/gather_data_sampling
+
+Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com>
+Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
+Acked-by: Josh Poimboeuf <jpoimboe@kernel.org>
+
+(cherry picked from commit 8974eb588283b7d44a7c91fa09fcbaf380339f3a)
+CVE-2022-40982
+Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
+Acked-by: Roxana Nicolescu <roxana.nicolescu@canonical.com>
+Acked-by: Stefan Bader <stefan.bader@canonical.com>
+Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
+(cherry picked from commit a82fd9ff16b574fc42677c7b5f9e05b2f965d709)
+Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
+---
+ .../ABI/testing/sysfs-devices-system-cpu      |  13 +-
+ .../hw-vuln/gather_data_sampling.rst          |  99 ++++++++++++++
+ Documentation/admin-guide/hw-vuln/index.rst   |   1 +
+ .../admin-guide/kernel-parameters.txt         |  41 ++++--
+ arch/x86/include/asm/cpufeatures.h            |   1 +
+ arch/x86/include/asm/msr-index.h              |  11 ++
+ arch/x86/kernel/cpu/bugs.c                    | 129 ++++++++++++++++++
+ arch/x86/kernel/cpu/common.c                  |  34 +++--
+ arch/x86/kernel/cpu/cpu.h                     |   1 +
+ drivers/base/cpu.c                            |   8 ++
+ 10 files changed, 310 insertions(+), 28 deletions(-)
+ create mode 100644 Documentation/admin-guide/hw-vuln/gather_data_sampling.rst
+
+diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
+index f54867cadb0f..13c01b641dc7 100644
+--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
++++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
+@@ -513,17 +513,18 @@ Description:	information about CPUs heterogeneity.
+ 		cpu_capacity: capacity of cpuX.
+ 
+ What:		/sys/devices/system/cpu/vulnerabilities
++		/sys/devices/system/cpu/vulnerabilities/gather_data_sampling
++		/sys/devices/system/cpu/vulnerabilities/itlb_multihit
++		/sys/devices/system/cpu/vulnerabilities/l1tf
++		/sys/devices/system/cpu/vulnerabilities/mds
+ 		/sys/devices/system/cpu/vulnerabilities/meltdown
++		/sys/devices/system/cpu/vulnerabilities/mmio_stale_data
++		/sys/devices/system/cpu/vulnerabilities/retbleed
++		/sys/devices/system/cpu/vulnerabilities/spec_store_bypass
+ 		/sys/devices/system/cpu/vulnerabilities/spectre_v1
+ 		/sys/devices/system/cpu/vulnerabilities/spectre_v2
+-		/sys/devices/system/cpu/vulnerabilities/spec_store_bypass
+-		/sys/devices/system/cpu/vulnerabilities/l1tf
+-		/sys/devices/system/cpu/vulnerabilities/mds
+ 		/sys/devices/system/cpu/vulnerabilities/srbds
+ 		/sys/devices/system/cpu/vulnerabilities/tsx_async_abort
+-		/sys/devices/system/cpu/vulnerabilities/itlb_multihit
+-		/sys/devices/system/cpu/vulnerabilities/mmio_stale_data
+-		/sys/devices/system/cpu/vulnerabilities/retbleed
+ Date:		January 2018
+ Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
+ Description:	Information about CPU vulnerabilities
+diff --git a/Documentation/admin-guide/hw-vuln/gather_data_sampling.rst b/Documentation/admin-guide/hw-vuln/gather_data_sampling.rst
+new file mode 100644
+index 000000000000..74dab6af7fe1
+--- /dev/null
++++ b/Documentation/admin-guide/hw-vuln/gather_data_sampling.rst
+@@ -0,0 +1,99 @@
++.. SPDX-License-Identifier: GPL-2.0
++
++GDS - Gather Data Sampling
++==========================
++
++Gather Data Sampling is a hardware vulnerability which allows unprivileged
++speculative access to data which was previously stored in vector registers.
++
++Problem
++-------
++When a gather instruction performs loads from memory, different data elements
++are merged into the destination vector register. However, when a gather
++instruction that is transiently executed encounters a fault, stale data from
++architectural or internal vector registers may get transiently forwarded to the
++destination vector register instead. This will allow a malicious attacker to
++infer stale data using typical side channel techniques like cache timing
++attacks. GDS is a purely sampling-based attack.
++
++The attacker uses gather instructions to infer the stale vector register data.
++The victim does not need to do anything special other than use the vector
++registers. The victim does not need to use gather instructions to be
++vulnerable.
++
++Because the buffers are shared between Hyper-Threads cross Hyper-Thread attacks
++are possible.
++
++Attack scenarios
++----------------
++Without mitigation, GDS can infer stale data across virtually all
++permission boundaries:
++
++	Non-enclaves can infer SGX enclave data
++	Userspace can infer kernel data
++	Guests can infer data from hosts
++	Guest can infer guest from other guests
++	Users can infer data from other users
++
++Because of this, it is important to ensure that the mitigation stays enabled in
++lower-privilege contexts like guests and when running outside SGX enclaves.
++
++The hardware enforces the mitigation for SGX. Likewise, VMMs should  ensure
++that guests are not allowed to disable the GDS mitigation. If a host erred and
++allowed this, a guest could theoretically disable GDS mitigation, mount an
++attack, and re-enable it.
++
++Mitigation mechanism
++--------------------
++This issue is mitigated in microcode. The microcode defines the following new
++bits:
++
++ ================================   ===   ============================
++ IA32_ARCH_CAPABILITIES[GDS_CTRL]   R/O   Enumerates GDS vulnerability
++                                          and mitigation support.
++ IA32_ARCH_CAPABILITIES[GDS_NO]     R/O   Processor is not vulnerable.
++ IA32_MCU_OPT_CTRL[GDS_MITG_DIS]    R/W   Disables the mitigation
++                                          0 by default.
++ IA32_MCU_OPT_CTRL[GDS_MITG_LOCK]   R/W   Locks GDS_MITG_DIS=0. Writes
++                                          to GDS_MITG_DIS are ignored
++                                          Can't be cleared once set.
++ ================================   ===   ============================
++
++GDS can also be mitigated on systems that don't have updated microcode by
++disabling AVX. This can be done by setting "clearcpuid=avx" on the kernel
++command-line.
++
++Mitigation control on the kernel command line
++---------------------------------------------
++The mitigation can be disabled by setting "gather_data_sampling=off" or
++"mitigations=off" on the kernel command line. Not specifying either will
++default to the mitigation being enabled.
++
++GDS System Information
++------------------------
++The kernel provides vulnerability status information through sysfs. For
++GDS this can be accessed by the following sysfs file:
++
++/sys/devices/system/cpu/vulnerabilities/gather_data_sampling
++
++The possible values contained in this file are:
++
++ ============================== =============================================
++ Not affected                   Processor not vulnerable.
++ Vulnerable                     Processor vulnerable and mitigation disabled.
++ Vulnerable: No microcode       Processor vulnerable and microcode is missing
++                                mitigation.
++ Mitigation: Microcode          Processor is vulnerable and mitigation is in
++                                effect.
++ Mitigation: Microcode (locked) Processor is vulnerable and mitigation is in
++                                effect and cannot be disabled.
++ Unknown: Dependent on
++ hypervisor status              Running on a virtual guest processor that is
++                                affected but with no way to know if host
++                                processor is mitigated or vulnerable.
++ ============================== =============================================
++
++GDS Default mitigation
++----------------------
++The updated microcode will enable the mitigation by default. The kernel's
++default action is to leave the mitigation enabled.
+diff --git a/Documentation/admin-guide/hw-vuln/index.rst b/Documentation/admin-guide/hw-vuln/index.rst
+index e0614760a99e..436fac0bd9c3 100644
+--- a/Documentation/admin-guide/hw-vuln/index.rst
++++ b/Documentation/admin-guide/hw-vuln/index.rst
+@@ -19,3 +19,4 @@ are configurable at compile, boot or run time.
+    l1d_flush.rst
+    processor_mmio_stale_data.rst
+    cross-thread-rsb.rst
++   gather_data_sampling.rst
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index c0d8867359bc..380e1e46ffa1 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -1610,6 +1610,20 @@
+ 			Format: off | on
+ 			default: on
+ 
++	gather_data_sampling=
++			[X86,INTEL] Control the Gather Data Sampling (GDS)
++			mitigation.
++
++			Gather Data Sampling is a hardware vulnerability which
++			allows unprivileged speculative access to data which was
++			previously stored in vector registers.
++
++			This issue is mitigated by default in updated microcode.
++			The mitigation may have a performance impact but can be
++			disabled.
++
++			off:	Disable GDS mitigation.
++
+ 	gcov_persist=	[GCOV] When non-zero (default), profiling data for
+ 			kernel modules is saved and remains accessible via
+ 			debugfs, even when the module is unloaded/reloaded.
+@@ -3245,24 +3259,25 @@
+ 				Disable all optional CPU mitigations.  This
+ 				improves system performance, but it may also
+ 				expose users to several CPU vulnerabilities.
+-				Equivalent to: nopti [X86,PPC]
+-					       if nokaslr then kpti=0 [ARM64]
+-					       nospectre_v1 [X86,PPC]
+-					       nobp=0 [S390]
+-					       nospectre_v2 [X86,PPC,S390,ARM64]
+-					       spectre_v2_user=off [X86]
+-					       spec_store_bypass_disable=off [X86,PPC]
+-					       ssbd=force-off [ARM64]
+-					       nospectre_bhb [ARM64]
++				Equivalent to: if nokaslr then kpti=0 [ARM64]
++					       gather_data_sampling=off [X86]
++					       kvm.nx_huge_pages=off [X86]
+ 					       l1tf=off [X86]
+ 					       mds=off [X86]
+-					       tsx_async_abort=off [X86]
+-					       kvm.nx_huge_pages=off [X86]
+-					       srbds=off [X86,INTEL]
++					       mmio_stale_data=off [X86]
+ 					       no_entry_flush [PPC]
+ 					       no_uaccess_flush [PPC]
+-					       mmio_stale_data=off [X86]
++					       nobp=0 [S390]
++					       nopti [X86,PPC]
++					       nospectre_bhb [ARM64]
++					       nospectre_v1 [X86,PPC]
++					       nospectre_v2 [X86,PPC,S390,ARM64]
+ 					       retbleed=off [X86]
++					       spec_store_bypass_disable=off [X86,PPC]
++					       spectre_v2_user=off [X86]
++					       srbds=off [X86,INTEL]
++					       ssbd=force-off [ARM64]
++					       tsx_async_abort=off [X86]
+ 
+ 				Exceptions:
+ 					       This does not have any effect on
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index 8f39c46197b8..93f232eb9786 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -467,5 +467,6 @@
+ #define X86_BUG_RETBLEED		X86_BUG(27) /* CPU is affected by RETBleed */
+ #define X86_BUG_EIBRS_PBRSB		X86_BUG(28) /* EIBRS is vulnerable to Post Barrier RSB Predictions */
+ #define X86_BUG_SMT_RSB			X86_BUG(29) /* CPU is vulnerable to Cross-Thread Return Address Predictions */
++#define X86_BUG_GDS			X86_BUG(30) /* CPU is affected by Gather Data Sampling */
+ 
+ #endif /* _ASM_X86_CPUFEATURES_H */
+diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
+index 52a09dbc2c26..b030a03ca8d6 100644
+--- a/arch/x86/include/asm/msr-index.h
++++ b/arch/x86/include/asm/msr-index.h
+@@ -153,6 +153,15 @@
+ 						 * Not susceptible to Post-Barrier
+ 						 * Return Stack Buffer Predictions.
+ 						 */
++#define ARCH_CAP_GDS_CTRL		BIT(25)	/*
++						 * CPU is vulnerable to Gather
++						 * Data Sampling (GDS) and
++						 * has controls for mitigation.
++						 */
++#define ARCH_CAP_GDS_NO			BIT(26)	/*
++						 * CPU is not vulnerable to Gather
++						 * Data Sampling (GDS).
++						 */
+ 
+ #define ARCH_CAP_XAPIC_DISABLE		BIT(21)	/*
+ 						 * IA32_XAPIC_DISABLE_STATUS MSR
+@@ -176,6 +185,8 @@
+ #define RNGDS_MITG_DIS			BIT(0)	/* SRBDS support */
+ #define RTM_ALLOW			BIT(1)	/* TSX development mode */
+ #define FB_CLEAR_DIS			BIT(3)	/* CPU Fill buffer clear disable */
++#define GDS_MITG_DIS			BIT(4)	/* Disable GDS mitigation */
++#define GDS_MITG_LOCKED			BIT(5)	/* GDS mitigation locked */
+ 
+ #define MSR_IA32_SYSENTER_CS		0x00000174
+ #define MSR_IA32_SYSENTER_ESP		0x00000175
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index edb670b77294..a1c1c8e4995c 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -46,6 +46,7 @@ static void __init taa_select_mitigation(void);
+ static void __init mmio_select_mitigation(void);
+ static void __init srbds_select_mitigation(void);
+ static void __init l1d_flush_select_mitigation(void);
++static void __init gds_select_mitigation(void);
+ 
+ /* The base value of the SPEC_CTRL MSR without task-specific bits set */
+ u64 x86_spec_ctrl_base;
+@@ -159,6 +160,7 @@ void __init cpu_select_mitigations(void)
+ 	md_clear_select_mitigation();
+ 	srbds_select_mitigation();
+ 	l1d_flush_select_mitigation();
++	gds_select_mitigation();
+ }
+ 
+ /*
+@@ -644,6 +646,120 @@ static int __init l1d_flush_parse_cmdline(char *str)
+ }
+ early_param("l1d_flush", l1d_flush_parse_cmdline);
+ 
++#undef pr_fmt
++#define pr_fmt(fmt)	"GDS: " fmt
++
++enum gds_mitigations {
++	GDS_MITIGATION_OFF,
++	GDS_MITIGATION_UCODE_NEEDED,
++	GDS_MITIGATION_FULL,
++	GDS_MITIGATION_FULL_LOCKED,
++	GDS_MITIGATION_HYPERVISOR,
++};
++
++static enum gds_mitigations gds_mitigation __ro_after_init = GDS_MITIGATION_FULL;
++
++static const char * const gds_strings[] = {
++	[GDS_MITIGATION_OFF]		= "Vulnerable",
++	[GDS_MITIGATION_UCODE_NEEDED]	= "Vulnerable: No microcode",
++	[GDS_MITIGATION_FULL]		= "Mitigation: Microcode",
++	[GDS_MITIGATION_FULL_LOCKED]	= "Mitigation: Microcode (locked)",
++	[GDS_MITIGATION_HYPERVISOR]	= "Unknown: Dependent on hypervisor status",
++};
++
++void update_gds_msr(void)
++{
++	u64 mcu_ctrl_after;
++	u64 mcu_ctrl;
++
++	switch (gds_mitigation) {
++	case GDS_MITIGATION_OFF:
++		rdmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl);
++		mcu_ctrl |= GDS_MITG_DIS;
++		break;
++	case GDS_MITIGATION_FULL_LOCKED:
++		/*
++		 * The LOCKED state comes from the boot CPU. APs might not have
++		 * the same state. Make sure the mitigation is enabled on all
++		 * CPUs.
++		 */
++	case GDS_MITIGATION_FULL:
++		rdmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl);
++		mcu_ctrl &= ~GDS_MITG_DIS;
++		break;
++	case GDS_MITIGATION_UCODE_NEEDED:
++	case GDS_MITIGATION_HYPERVISOR:
++		return;
++	};
++
++	wrmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl);
++
++	/*
++	 * Check to make sure that the WRMSR value was not ignored. Writes to
++	 * GDS_MITG_DIS will be ignored if this processor is locked but the boot
++	 * processor was not.
++	 */
++	rdmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl_after);
++	WARN_ON_ONCE(mcu_ctrl != mcu_ctrl_after);
++}
++
++static void __init gds_select_mitigation(void)
++{
++	u64 mcu_ctrl;
++
++	if (!boot_cpu_has_bug(X86_BUG_GDS))
++		return;
++
++	if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) {
++		gds_mitigation = GDS_MITIGATION_HYPERVISOR;
++		goto out;
++	}
++
++	if (cpu_mitigations_off())
++		gds_mitigation = GDS_MITIGATION_OFF;
++	/* Will verify below that mitigation _can_ be disabled */
++
++	/* No microcode */
++	if (!(x86_read_arch_cap_msr() & ARCH_CAP_GDS_CTRL)) {
++		gds_mitigation = GDS_MITIGATION_UCODE_NEEDED;
++		goto out;
++	}
++
++	rdmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl);
++	if (mcu_ctrl & GDS_MITG_LOCKED) {
++		if (gds_mitigation == GDS_MITIGATION_OFF)
++			pr_warn("Mitigation locked. Disable failed.\n");
++
++		/*
++		 * The mitigation is selected from the boot CPU. All other CPUs
++		 * _should_ have the same state. If the boot CPU isn't locked
++		 * but others are then update_gds_msr() will WARN() of the state
++		 * mismatch. If the boot CPU is locked update_gds_msr() will
++		 * ensure the other CPUs have the mitigation enabled.
++		 */
++		gds_mitigation = GDS_MITIGATION_FULL_LOCKED;
++	}
++
++	update_gds_msr();
++out:
++	pr_info("%s\n", gds_strings[gds_mitigation]);
++}
++
++static int __init gds_parse_cmdline(char *str)
++{
++	if (!str)
++		return -EINVAL;
++
++	if (!boot_cpu_has_bug(X86_BUG_GDS))
++		return 0;
++
++	if (!strcmp(str, "off"))
++		gds_mitigation = GDS_MITIGATION_OFF;
++
++	return 0;
++}
++early_param("gather_data_sampling", gds_parse_cmdline);
++
+ #undef pr_fmt
+ #define pr_fmt(fmt)     "Spectre V1 : " fmt
+ 
+@@ -2385,6 +2501,11 @@ static ssize_t retbleed_show_state(char *buf)
+ 	return sysfs_emit(buf, "%s\n", retbleed_strings[retbleed_mitigation]);
+ }
+ 
++static ssize_t gds_show_state(char *buf)
++{
++	return sysfs_emit(buf, "%s\n", gds_strings[gds_mitigation]);
++}
++
+ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
+ 			       char *buf, unsigned int bug)
+ {
+@@ -2434,6 +2555,9 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
+ 	case X86_BUG_RETBLEED:
+ 		return retbleed_show_state(buf);
+ 
++	case X86_BUG_GDS:
++		return gds_show_state(buf);
++
+ 	default:
+ 		break;
+ 	}
+@@ -2498,4 +2622,9 @@ ssize_t cpu_show_retbleed(struct device *dev, struct device_attribute *attr, cha
+ {
+ 	return cpu_show_common(dev, attr, buf, X86_BUG_RETBLEED);
+ }
++
++ssize_t cpu_show_gds(struct device *dev, struct device_attribute *attr, char *buf)
++{
++	return cpu_show_common(dev, attr, buf, X86_BUG_GDS);
++}
+ #endif
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index 9b53d1cb424d..d950fb5ac0b4 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -1262,6 +1262,8 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
+ #define RETBLEED	BIT(3)
+ /* CPU is affected by SMT (cross-thread) return predictions */
+ #define SMT_RSB		BIT(4)
++/* CPU is affected by GDS */
++#define GDS		BIT(5)
+ 
+ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
+ 	VULNBL_INTEL_STEPPINGS(IVYBRIDGE,	X86_STEPPING_ANY,		SRBDS),
+@@ -1274,19 +1276,21 @@ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
+ 	VULNBL_INTEL_STEPPINGS(BROADWELL_X,	X86_STEPPING_ANY,		MMIO),
+ 	VULNBL_INTEL_STEPPINGS(BROADWELL,	X86_STEPPING_ANY,		SRBDS),
+ 	VULNBL_INTEL_STEPPINGS(SKYLAKE_L,	X86_STEPPING_ANY,		SRBDS | MMIO | RETBLEED),
+-	VULNBL_INTEL_STEPPINGS(SKYLAKE_X,	X86_STEPPING_ANY,		MMIO | RETBLEED),
++	VULNBL_INTEL_STEPPINGS(SKYLAKE_X,	X86_STEPPING_ANY,		MMIO | RETBLEED | GDS),
+ 	VULNBL_INTEL_STEPPINGS(SKYLAKE,		X86_STEPPING_ANY,		SRBDS | MMIO | RETBLEED),
+-	VULNBL_INTEL_STEPPINGS(KABYLAKE_L,	X86_STEPPING_ANY,		SRBDS | MMIO | RETBLEED),
+-	VULNBL_INTEL_STEPPINGS(KABYLAKE,	X86_STEPPING_ANY,		SRBDS | MMIO | RETBLEED),
++	VULNBL_INTEL_STEPPINGS(KABYLAKE_L,	X86_STEPPING_ANY,		SRBDS | MMIO | RETBLEED | GDS),
++	VULNBL_INTEL_STEPPINGS(KABYLAKE,	X86_STEPPING_ANY,		SRBDS | MMIO | RETBLEED | GDS),
+ 	VULNBL_INTEL_STEPPINGS(CANNONLAKE_L,	X86_STEPPING_ANY,		RETBLEED),
+-	VULNBL_INTEL_STEPPINGS(ICELAKE_L,	X86_STEPPING_ANY,		MMIO | MMIO_SBDS | RETBLEED),
+-	VULNBL_INTEL_STEPPINGS(ICELAKE_D,	X86_STEPPING_ANY,		MMIO),
+-	VULNBL_INTEL_STEPPINGS(ICELAKE_X,	X86_STEPPING_ANY,		MMIO),
+-	VULNBL_INTEL_STEPPINGS(COMETLAKE,	X86_STEPPING_ANY,		MMIO | MMIO_SBDS | RETBLEED),
++	VULNBL_INTEL_STEPPINGS(ICELAKE_L,	X86_STEPPING_ANY,		MMIO | MMIO_SBDS | RETBLEED | GDS),
++	VULNBL_INTEL_STEPPINGS(ICELAKE_D,	X86_STEPPING_ANY,		MMIO | GDS),
++	VULNBL_INTEL_STEPPINGS(ICELAKE_X,	X86_STEPPING_ANY,		MMIO | GDS),
++	VULNBL_INTEL_STEPPINGS(COMETLAKE,	X86_STEPPING_ANY,		MMIO | MMIO_SBDS | RETBLEED | GDS),
+ 	VULNBL_INTEL_STEPPINGS(COMETLAKE_L,	X86_STEPPINGS(0x0, 0x0),	MMIO | RETBLEED),
+-	VULNBL_INTEL_STEPPINGS(COMETLAKE_L,	X86_STEPPING_ANY,		MMIO | MMIO_SBDS | RETBLEED),
++	VULNBL_INTEL_STEPPINGS(COMETLAKE_L,	X86_STEPPING_ANY,		MMIO | MMIO_SBDS | RETBLEED | GDS),
++	VULNBL_INTEL_STEPPINGS(TIGERLAKE_L,	X86_STEPPING_ANY,		GDS),
++	VULNBL_INTEL_STEPPINGS(TIGERLAKE,	X86_STEPPING_ANY,		GDS),
+ 	VULNBL_INTEL_STEPPINGS(LAKEFIELD,	X86_STEPPING_ANY,		MMIO | MMIO_SBDS | RETBLEED),
+-	VULNBL_INTEL_STEPPINGS(ROCKETLAKE,	X86_STEPPING_ANY,		MMIO | RETBLEED),
++	VULNBL_INTEL_STEPPINGS(ROCKETLAKE,	X86_STEPPING_ANY,		MMIO | RETBLEED | GDS),
+ 	VULNBL_INTEL_STEPPINGS(ATOM_TREMONT,	X86_STEPPING_ANY,		MMIO | MMIO_SBDS),
+ 	VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_D,	X86_STEPPING_ANY,		MMIO),
+ 	VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_L,	X86_STEPPING_ANY,		MMIO | MMIO_SBDS),
+@@ -1415,6 +1419,16 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ 	if (cpu_matches(cpu_vuln_blacklist, SMT_RSB))
+ 		setup_force_cpu_bug(X86_BUG_SMT_RSB);
+ 
++	/*
++	 * Check if CPU is vulnerable to GDS. If running in a virtual machine on
++	 * an affected processor, the VMM may have disabled the use of GATHER by
++	 * disabling AVX2. The only way to do this in HW is to clear XCR0[2],
++	 * which means that AVX will be disabled.
++	 */
++	if (cpu_matches(cpu_vuln_blacklist, GDS) && !(ia32_cap & ARCH_CAP_GDS_NO) &&
++	    boot_cpu_has(X86_FEATURE_AVX))
++		setup_force_cpu_bug(X86_BUG_GDS);
++
+ 	if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
+ 		return;
+ 
+@@ -1977,6 +1991,8 @@ void identify_secondary_cpu(struct cpuinfo_x86 *c)
+ 	validate_apic_and_package_id(c);
+ 	x86_spec_ctrl_setup_ap();
+ 	update_srbds_msr();
++	if (boot_cpu_has_bug(X86_BUG_GDS))
++		update_gds_msr();
+ 
+ 	tsx_ap_init();
+ }
+diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
+index 61dbb9b216e6..d9aeb335002d 100644
+--- a/arch/x86/kernel/cpu/cpu.h
++++ b/arch/x86/kernel/cpu/cpu.h
+@@ -83,6 +83,7 @@ void cpu_select_mitigations(void);
+ 
+ extern void x86_spec_ctrl_setup_ap(void);
+ extern void update_srbds_msr(void);
++extern void update_gds_msr(void);
+ 
+ extern u64 x86_read_arch_cap_msr(void);
+ 
+diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
+index 7af8e33735a3..cc6cf06ce88e 100644
+--- a/drivers/base/cpu.c
++++ b/drivers/base/cpu.c
+@@ -577,6 +577,12 @@ ssize_t __weak cpu_show_retbleed(struct device *dev,
+ 	return sysfs_emit(buf, "Not affected\n");
+ }
+ 
++ssize_t __weak cpu_show_gds(struct device *dev,
++			    struct device_attribute *attr, char *buf)
++{
++	return sysfs_emit(buf, "Not affected\n");
++}
++
+ static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
+ static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL);
+ static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL);
+@@ -588,6 +594,7 @@ static DEVICE_ATTR(itlb_multihit, 0444, cpu_show_itlb_multihit, NULL);
+ static DEVICE_ATTR(srbds, 0444, cpu_show_srbds, NULL);
+ static DEVICE_ATTR(mmio_stale_data, 0444, cpu_show_mmio_stale_data, NULL);
+ static DEVICE_ATTR(retbleed, 0444, cpu_show_retbleed, NULL);
++static DEVICE_ATTR(gather_data_sampling, 0444, cpu_show_gds, NULL);
+ 
+ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
+ 	&dev_attr_meltdown.attr,
+@@ -601,6 +608,7 @@ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
+ 	&dev_attr_srbds.attr,
+ 	&dev_attr_mmio_stale_data.attr,
+ 	&dev_attr_retbleed.attr,
++	&dev_attr_gather_data_sampling.attr,
+ 	NULL
+ };
+ 
diff --git a/patches/kernel/0032-x86-speculation-Add-force-option-to-GDS-mitigation.patch b/patches/kernel/0032-x86-speculation-Add-force-option-to-GDS-mitigation.patch
new file mode 100644
index 000000000000..093144b6b18b
--- /dev/null
+++ b/patches/kernel/0032-x86-speculation-Add-force-option-to-GDS-mitigation.patch
@@ -0,0 +1,172 @@
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
+From: Daniel Sneddon <daniel.sneddon@linux.intel.com>
+Date: Wed, 12 Jul 2023 19:43:12 -0700
+Subject: [PATCH] x86/speculation: Add force option to GDS mitigation
+
+The Gather Data Sampling (GDS) vulnerability allows malicious software
+to infer stale data previously stored in vector registers. This may
+include sensitive data such as cryptographic keys. GDS is mitigated in
+microcode, and systems with up-to-date microcode are protected by
+default. However, any affected system that is running with older
+microcode will still be vulnerable to GDS attacks.
+
+Since the gather instructions used by the attacker are part of the
+AVX2 and AVX512 extensions, disabling these extensions prevents gather
+instructions from being executed, thereby mitigating the system from
+GDS. Disabling AVX2 is sufficient, but we don't have the granularity
+to do this. The XCR0[2] disables AVX, with no option to just disable
+AVX2.
+
+Add a kernel parameter gather_data_sampling=force that will enable the
+microcode mitigation if available, otherwise it will disable AVX on
+affected systems.
+
+This option will be ignored if cmdline mitigations=off.
+
+This is a *big* hammer.  It is known to break buggy userspace that
+uses incomplete, buggy AVX enumeration.  Unfortunately, such userspace
+does exist in the wild:
+
+	https://www.mail-archive.com/bug-coreutils@gnu.org/msg33046.html
+
+[ dhansen: add some more ominous warnings about disabling AVX ]
+
+Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com>
+Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
+Acked-by: Josh Poimboeuf <jpoimboe@kernel.org>
+
+(cherry picked from commit 553a5c03e90a6087e88f8ff878335ef0621536fb)
+CVE-2022-40982
+Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
+Acked-by: Roxana Nicolescu <roxana.nicolescu@canonical.com>
+Acked-by: Stefan Bader <stefan.bader@canonical.com>
+Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
+(cherry picked from commit b73421edcd9b8f1b1db51168e4568667d74422db)
+Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
+---
+ .../hw-vuln/gather_data_sampling.rst          | 18 +++++++++++++----
+ .../admin-guide/kernel-parameters.txt         |  8 +++++++-
+ arch/x86/kernel/cpu/bugs.c                    | 20 ++++++++++++++++++-
+ 3 files changed, 40 insertions(+), 6 deletions(-)
+
+diff --git a/Documentation/admin-guide/hw-vuln/gather_data_sampling.rst b/Documentation/admin-guide/hw-vuln/gather_data_sampling.rst
+index 74dab6af7fe1..40b7a6260010 100644
+--- a/Documentation/admin-guide/hw-vuln/gather_data_sampling.rst
++++ b/Documentation/admin-guide/hw-vuln/gather_data_sampling.rst
+@@ -60,14 +60,21 @@ bits:
+  ================================   ===   ============================
+ 
+ GDS can also be mitigated on systems that don't have updated microcode by
+-disabling AVX. This can be done by setting "clearcpuid=avx" on the kernel
+-command-line.
++disabling AVX. This can be done by setting gather_data_sampling="force" or
++"clearcpuid=avx" on the kernel command-line.
++
++If used, these options will disable AVX use by turning on XSAVE YMM support.
++However, the processor will still enumerate AVX support.  Userspace that
++does not follow proper AVX enumeration to check both AVX *and* XSAVE YMM
++support will break.
+ 
+ Mitigation control on the kernel command line
+ ---------------------------------------------
+ The mitigation can be disabled by setting "gather_data_sampling=off" or
+-"mitigations=off" on the kernel command line. Not specifying either will
+-default to the mitigation being enabled.
++"mitigations=off" on the kernel command line. Not specifying either will default
++to the mitigation being enabled. Specifying "gather_data_sampling=force" will
++use the microcode mitigation when available or disable AVX on affected systems
++where the microcode hasn't been updated to include the mitigation.
+ 
+ GDS System Information
+ ------------------------
+@@ -83,6 +90,9 @@ The possible values contained in this file are:
+  Vulnerable                     Processor vulnerable and mitigation disabled.
+  Vulnerable: No microcode       Processor vulnerable and microcode is missing
+                                 mitigation.
++ Mitigation: AVX disabled,
++ no microcode                   Processor is vulnerable and microcode is missing
++                                mitigation. AVX disabled as mitigation.
+  Mitigation: Microcode          Processor is vulnerable and mitigation is in
+                                 effect.
+  Mitigation: Microcode (locked) Processor is vulnerable and mitigation is in
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index 380e1e46ffa1..5fef2f65f634 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -1620,7 +1620,13 @@
+ 
+ 			This issue is mitigated by default in updated microcode.
+ 			The mitigation may have a performance impact but can be
+-			disabled.
++			disabled. On systems without the microcode mitigation
++			disabling AVX serves as a mitigation.
++
++			force:	Disable AVX to mitigate systems without
++				microcode mitigation. No effect if the microcode
++				mitigation is present. Known to cause crashes in
++				userspace with buggy AVX enumeration.
+ 
+ 			off:	Disable GDS mitigation.
+ 
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index a1c1c8e4995c..0cc3c4f09dd7 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -652,6 +652,7 @@ early_param("l1d_flush", l1d_flush_parse_cmdline);
+ enum gds_mitigations {
+ 	GDS_MITIGATION_OFF,
+ 	GDS_MITIGATION_UCODE_NEEDED,
++	GDS_MITIGATION_FORCE,
+ 	GDS_MITIGATION_FULL,
+ 	GDS_MITIGATION_FULL_LOCKED,
+ 	GDS_MITIGATION_HYPERVISOR,
+@@ -662,6 +663,7 @@ static enum gds_mitigations gds_mitigation __ro_after_init = GDS_MITIGATION_FULL
+ static const char * const gds_strings[] = {
+ 	[GDS_MITIGATION_OFF]		= "Vulnerable",
+ 	[GDS_MITIGATION_UCODE_NEEDED]	= "Vulnerable: No microcode",
++	[GDS_MITIGATION_FORCE]		= "Mitigation: AVX disabled, no microcode",
+ 	[GDS_MITIGATION_FULL]		= "Mitigation: Microcode",
+ 	[GDS_MITIGATION_FULL_LOCKED]	= "Mitigation: Microcode (locked)",
+ 	[GDS_MITIGATION_HYPERVISOR]	= "Unknown: Dependent on hypervisor status",
+@@ -687,6 +689,7 @@ void update_gds_msr(void)
+ 		rdmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl);
+ 		mcu_ctrl &= ~GDS_MITG_DIS;
+ 		break;
++	case GDS_MITIGATION_FORCE:
+ 	case GDS_MITIGATION_UCODE_NEEDED:
+ 	case GDS_MITIGATION_HYPERVISOR:
+ 		return;
+@@ -721,10 +724,23 @@ static void __init gds_select_mitigation(void)
+ 
+ 	/* No microcode */
+ 	if (!(x86_read_arch_cap_msr() & ARCH_CAP_GDS_CTRL)) {
+-		gds_mitigation = GDS_MITIGATION_UCODE_NEEDED;
++		if (gds_mitigation == GDS_MITIGATION_FORCE) {
++			/*
++			 * This only needs to be done on the boot CPU so do it
++			 * here rather than in update_gds_msr()
++			 */
++			setup_clear_cpu_cap(X86_FEATURE_AVX);
++			pr_warn("Microcode update needed! Disabling AVX as mitigation.\n");
++		} else {
++			gds_mitigation = GDS_MITIGATION_UCODE_NEEDED;
++		}
+ 		goto out;
+ 	}
+ 
++	/* Microcode has mitigation, use it */
++	if (gds_mitigation == GDS_MITIGATION_FORCE)
++		gds_mitigation = GDS_MITIGATION_FULL;
++
+ 	rdmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl);
+ 	if (mcu_ctrl & GDS_MITG_LOCKED) {
+ 		if (gds_mitigation == GDS_MITIGATION_OFF)
+@@ -755,6 +771,8 @@ static int __init gds_parse_cmdline(char *str)
+ 
+ 	if (!strcmp(str, "off"))
+ 		gds_mitigation = GDS_MITIGATION_OFF;
++	else if (!strcmp(str, "force"))
++		gds_mitigation = GDS_MITIGATION_FORCE;
+ 
+ 	return 0;
+ }
diff --git a/patches/kernel/0033-x86-speculation-Add-Kconfig-option-for-GDS.patch b/patches/kernel/0033-x86-speculation-Add-Kconfig-option-for-GDS.patch
new file mode 100644
index 000000000000..63a75b4632ca
--- /dev/null
+++ b/patches/kernel/0033-x86-speculation-Add-Kconfig-option-for-GDS.patch
@@ -0,0 +1,75 @@
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
+From: Daniel Sneddon <daniel.sneddon@linux.intel.com>
+Date: Wed, 12 Jul 2023 19:43:13 -0700
+Subject: [PATCH] x86/speculation: Add Kconfig option for GDS
+
+Gather Data Sampling (GDS) is mitigated in microcode. However, on
+systems that haven't received the updated microcode, disabling AVX
+can act as a mitigation. Add a Kconfig option that uses the microcode
+mitigation if available and disables AVX otherwise. Setting this
+option has no effect on systems not affected by GDS. This is the
+equivalent of setting gather_data_sampling=force.
+
+Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com>
+Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
+Acked-by: Josh Poimboeuf <jpoimboe@kernel.org>
+
+(cherry picked from commit 53cf5797f114ba2bd86d23a862302119848eff19)
+CVE-2022-40982
+Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
+Acked-by: Roxana Nicolescu <roxana.nicolescu@canonical.com>
+Acked-by: Stefan Bader <stefan.bader@canonical.com>
+Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
+(cherry picked from commit 92bd969bbe475c5bca376d007ed6558085b237ba)
+Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
+---
+ arch/x86/Kconfig           | 19 +++++++++++++++++++
+ arch/x86/kernel/cpu/bugs.c |  4 ++++
+ 2 files changed, 23 insertions(+)
+
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 598a303819da..8451e0f36c66 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -2640,6 +2640,25 @@ config SLS
+ 	  against straight line speculation. The kernel image might be slightly
+ 	  larger.
+ 
++config GDS_FORCE_MITIGATION
++	bool "Force GDS Mitigation"
++	depends on CPU_SUP_INTEL
++	default n
++	help
++	  Gather Data Sampling (GDS) is a hardware vulnerability which allows
++	  unprivileged speculative access to data which was previously stored in
++	  vector registers.
++
++	  This option is equivalent to setting gather_data_sampling=force on the
++	  command line. The microcode mitigation is used if present, otherwise
++	  AVX is disabled as a mitigation. On affected systems that are missing
++	  the microcode any userspace code that unconditionally uses AVX will
++	  break with this option set.
++
++	  Setting this option on systems not vulnerable to GDS has no effect.
++
++	  If in doubt, say N.
++
+ endif
+ 
+ config ARCH_HAS_ADD_PAGES
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index 0cc3c4f09dd7..819a8aa0c706 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -658,7 +658,11 @@ enum gds_mitigations {
+ 	GDS_MITIGATION_HYPERVISOR,
+ };
+ 
++#if IS_ENABLED(CONFIG_GDS_FORCE_MITIGATION)
++static enum gds_mitigations gds_mitigation __ro_after_init = GDS_MITIGATION_FORCE;
++#else
+ static enum gds_mitigations gds_mitigation __ro_after_init = GDS_MITIGATION_FULL;
++#endif
+ 
+ static const char * const gds_strings[] = {
+ 	[GDS_MITIGATION_OFF]		= "Vulnerable",
diff --git a/patches/kernel/0034-KVM-Add-GDS_NO-support-to-KVM.patch b/patches/kernel/0034-KVM-Add-GDS_NO-support-to-KVM.patch
new file mode 100644
index 000000000000..0d9aa6d7d366
--- /dev/null
+++ b/patches/kernel/0034-KVM-Add-GDS_NO-support-to-KVM.patch
@@ -0,0 +1,85 @@
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
+From: Daniel Sneddon <daniel.sneddon@linux.intel.com>
+Date: Wed, 12 Jul 2023 19:43:14 -0700
+Subject: [PATCH] KVM: Add GDS_NO support to KVM
+
+Gather Data Sampling (GDS) is a transient execution attack using
+gather instructions from the AVX2 and AVX512 extensions. This attack
+allows malicious code to infer data that was previously stored in
+vector registers. Systems that are not vulnerable to GDS will set the
+GDS_NO bit of the IA32_ARCH_CAPABILITIES MSR. This is useful for VM
+guests that may think they are on vulnerable systems that are, in
+fact, not affected. Guests that are running on affected hosts where
+the mitigation is enabled are protected as if they were running
+on an unaffected system.
+
+On all hosts that are not affected or that are mitigated, set the
+GDS_NO bit.
+
+Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com>
+Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
+Acked-by: Josh Poimboeuf <jpoimboe@kernel.org>
+
+(cherry picked from commit 81ac7e5d741742d650b4ed6186c4826c1a0631a7)
+CVE-2022-40982
+Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
+Acked-by: Roxana Nicolescu <roxana.nicolescu@canonical.com>
+Acked-by: Stefan Bader <stefan.bader@canonical.com>
+Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
+(cherry picked from commit cd25885269804c59063c52ef587bde0d8fe17131)
+Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
+---
+ arch/x86/kernel/cpu/bugs.c | 7 +++++++
+ arch/x86/kvm/x86.c         | 7 ++++++-
+ 2 files changed, 13 insertions(+), 1 deletion(-)
+
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index 819a8aa0c706..63ec50ef7d7c 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -673,6 +673,13 @@ static const char * const gds_strings[] = {
+ 	[GDS_MITIGATION_HYPERVISOR]	= "Unknown: Dependent on hypervisor status",
+ };
+ 
++bool gds_ucode_mitigated(void)
++{
++	return (gds_mitigation == GDS_MITIGATION_FULL ||
++		gds_mitigation == GDS_MITIGATION_FULL_LOCKED);
++}
++EXPORT_SYMBOL_GPL(gds_ucode_mitigated);
++
+ void update_gds_msr(void)
+ {
+ 	u64 mcu_ctrl_after;
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 1c5775d51495..7d8b14f8807e 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -310,6 +310,8 @@ u64 __read_mostly host_xcr0;
+ 
+ static struct kmem_cache *x86_emulator_cache;
+ 
++extern bool gds_ucode_mitigated(void);
++
+ /*
+  * When called, it means the previous get/set msr reached an invalid msr.
+  * Return true if we want to ignore/silent this failed msr access.
+@@ -1598,7 +1600,7 @@ static unsigned int num_msr_based_features;
+ 	 ARCH_CAP_SKIP_VMENTRY_L1DFLUSH | ARCH_CAP_SSB_NO | ARCH_CAP_MDS_NO | \
+ 	 ARCH_CAP_PSCHANGE_MC_NO | ARCH_CAP_TSX_CTRL_MSR | ARCH_CAP_TAA_NO | \
+ 	 ARCH_CAP_SBDR_SSDP_NO | ARCH_CAP_FBSDP_NO | ARCH_CAP_PSDP_NO | \
+-	 ARCH_CAP_FB_CLEAR | ARCH_CAP_RRSBA | ARCH_CAP_PBRSB_NO)
++	 ARCH_CAP_FB_CLEAR | ARCH_CAP_RRSBA | ARCH_CAP_PBRSB_NO | ARCH_CAP_GDS_NO)
+ 
+ static u64 kvm_get_arch_capabilities(void)
+ {
+@@ -1655,6 +1657,9 @@ static u64 kvm_get_arch_capabilities(void)
+ 		 */
+ 	}
+ 
++	if (!boot_cpu_has_bug(X86_BUG_GDS) || gds_ucode_mitigated())
++		data |= ARCH_CAP_GDS_NO;
++
+ 	return data;
+ }
+ 
diff --git a/patches/kernel/0035-Documentation-x86-Fix-backwards-on-off-logic-about-Y.patch b/patches/kernel/0035-Documentation-x86-Fix-backwards-on-off-logic-about-Y.patch
new file mode 100644
index 000000000000..16d769cf8818
--- /dev/null
+++ b/patches/kernel/0035-Documentation-x86-Fix-backwards-on-off-logic-about-Y.patch
@@ -0,0 +1,38 @@
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
+From: Dave Hansen <dave.hansen@linux.intel.com>
+Date: Tue, 1 Aug 2023 07:31:07 -0700
+Subject: [PATCH] Documentation/x86: Fix backwards on/off logic about YMM
+ support
+
+These options clearly turn *off* XSAVE YMM support.  Correct the
+typo.
+
+Reported-by: Ben Hutchings <ben@decadent.org.uk>
+Fixes: 553a5c03e90a ("x86/speculation: Add force option to GDS mitigation")
+Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
+
+(cherry picked from commit 1b0fc0345f2852ffe54fb9ae0e12e2ee69ad6a20)
+CVE-2022-40982
+Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
+Acked-by: Roxana Nicolescu <roxana.nicolescu@canonical.com>
+Acked-by: Stefan Bader <stefan.bader@canonical.com>
+Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
+(cherry picked from commit f88fa53e3623291b52b8a6656c1ea9a5d6f6f284)
+Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
+---
+ Documentation/admin-guide/hw-vuln/gather_data_sampling.rst | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/Documentation/admin-guide/hw-vuln/gather_data_sampling.rst b/Documentation/admin-guide/hw-vuln/gather_data_sampling.rst
+index 40b7a6260010..264bfa937f7d 100644
+--- a/Documentation/admin-guide/hw-vuln/gather_data_sampling.rst
++++ b/Documentation/admin-guide/hw-vuln/gather_data_sampling.rst
+@@ -63,7 +63,7 @@ GDS can also be mitigated on systems that don't have updated microcode by
+ disabling AVX. This can be done by setting gather_data_sampling="force" or
+ "clearcpuid=avx" on the kernel command-line.
+ 
+-If used, these options will disable AVX use by turning on XSAVE YMM support.
++If used, these options will disable AVX use by turning off XSAVE YMM support.
+ However, the processor will still enumerate AVX support.  Userspace that
+ does not follow proper AVX enumeration to check both AVX *and* XSAVE YMM
+ support will break.
-- 
2.39.2

^ permalink raw reply	[flat|nested] 4+ messages in thread

* [pve-devel] [PATCH pve-kernel 2/2] d/rules: enable mitigation config-options
  2023-08-11 16:02 [pve-devel] [PATCH pve-kernel 0/2] cherry-picks and config-options for downfall Stoiko Ivanov
  2023-08-11 16:02 ` [pve-devel] [PATCH pve-kernel 1/2] add fixes " Stoiko Ivanov
@ 2023-08-11 16:02 ` Stoiko Ivanov
  2023-08-17 11:49 ` [pve-devel] applied: [PATCH pve-kernel 0/2] cherry-picks and config-options for downfall Wolfgang Bumiller
  2 siblings, 0 replies; 4+ messages in thread
From: Stoiko Ivanov @ 2023-08-11 16:02 UTC (permalink / raw)
  To: pve-devel

Enabling CONFIG_ARCH_HAS_CPU_FINALIZE_INIT and CONFIG_GDS_FORCE_MITIGATION
follows commit 3edbe24ed004516bd910f6e97fbd4b62cf589239
in ubuntu-upstream/master-next.

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
---
 debian/rules | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/debian/rules b/debian/rules
index b4bfb5c14e20..9a26a0bf4317 100755
--- a/debian/rules
+++ b/debian/rules
@@ -96,7 +96,9 @@ PMX_CONFIG_OPTS= \
 -e CONFIG_SECURITY_LOCKDOWN_LSM \
 -e CONFIG_SECURITY_LOCKDOWN_LSM_EARLY \
 --set-str CONFIG_LSM lockdown,yama,integrity,apparmor \
--e CONFIG_PAGE_TABLE_ISOLATION
+-e CONFIG_PAGE_TABLE_ISOLATION \
+-e CONFIG_ARCH_HAS_CPU_FINALIZE_INIT \
+-e CONFIG_GDS_FORCE_MITIGATION
 
 debian/control: $(wildcard debian/*.in)
 	sed -e 's/@@KVNAME@@/$(KVNAME)/g' < debian/proxmox-kernel.prerm.in > debian/$(PMX_KERNEL_PKG).prerm
-- 
2.39.2
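Once a kernel built with these options is booted on the target machine, the effect can be sanity-checked through the sysfs attribute added in patch 1/2 — a quick illustrative check, not part of the series:

```shell
# Report the GDS mitigation state; the attribute only exists on
# kernels that carry the gather_data_sampling patches.
f=/sys/devices/system/cpu/vulnerabilities/gather_data_sampling
if [ -r "$f" ]; then
    # e.g. "Mitigation: Microcode" with current microcode, or
    # "Mitigation: AVX disabled, no microcode" on a stale-microcode host
    cat "$f"
else
    echo "attribute not present (kernel lacks the GDS patches)"
fi
```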


* [pve-devel] applied: [PATCH pve-kernel 0/2] cherry-picks and config-options for downfall
  2023-08-11 16:02 [pve-devel] [PATCH pve-kernel 0/2] cherry-picks and config-options for downfall Stoiko Ivanov
  2023-08-11 16:02 ` [pve-devel] [PATCH pve-kernel 1/2] add fixes " Stoiko Ivanov
  2023-08-11 16:02 ` [pve-devel] [PATCH pve-kernel 2/2] d/rules: enable mitigation config-options Stoiko Ivanov
@ 2023-08-17 11:49 ` Wolfgang Bumiller
  2 siblings, 0 replies; 4+ messages in thread
From: Wolfgang Bumiller @ 2023-08-17 11:49 UTC (permalink / raw)
  To: Stoiko Ivanov; +Cc: pve-devel

applied, thanks

