Why Processor Choice Matters for Privacy Hosting
Most discussions about server processors focus on benchmark scores: how fast does a database query return? How many parallel compile jobs can the system handle? These are legitimate questions, but they miss the privacy dimension entirely.
For a privacy hosting environment, the processor is the root of the trust chain. It determines:
- Whether memory can be transparently encrypted so that even a privileged host operator cannot read guest VM memory in plaintext
- Whether individual virtual machines can be cryptographically isolated from the hypervisor and from each other
- Whether the hardware itself can attest to the integrity of the software stack running on it
- How many isolated workloads can run simultaneously without sharing CPU resources in ways that open side-channel attacks
None of these properties appear in the standard benchmark reports. But they are exactly the properties that separate a truly private hosting environment from one that merely calls itself private. AMD EPYC and Intel Xeon have both developed hardware security technologies to address these concerns — but with very different levels of maturity, breadth of deployment, and real-world effectiveness.
AMD EPYC: Architecture and Generations
AMD relaunched its server processor line under the EPYC brand in 2017 with the first-generation Naples architecture (EPYC 7001 series). Each subsequent generation has brought substantial advances in core count, memory bandwidth, and — critically — hardware security features.
The four main EPYC generations to know are:
- Naples (EPYC 7001, 2017): The foundation. Up to 32 cores, 8-channel DDR4, and the introduction of Secure Memory Encryption (SME) alongside the first version of Secure Encrypted Virtualization (SEV). Proved the architecture was competitive but was limited in per-core performance.
- Rome (EPYC 7002, 2019): A major leap. Up to 64 cores on a single socket using the 7nm Zen 2 architecture. Added SEV-ES (Encrypted State) on top of the SEV virtual-machine encryption introduced with Naples, further strengthening VM isolation. This was the generation that established EPYC as a genuine data centre contender.
- Milan (EPYC 7003, 2021): Up to 64 cores on Zen 3, with the introduction of SEV-SNP (Secure Nested Paging) — the most significant hardware privacy feature in modern server processors. SEV-SNP adds memory integrity protection and remote attestation to the encryption already present in earlier generations.
- Genoa (EPYC 9004, 2022–present): The current flagship. Up to 96 cores per socket on the Zen 4 architecture, with optional 3D V-Cache variants delivering up to 1.1 GB of stacked L3 cache per processor. Full SEV-SNP support, PCIe 5.0, and DDR5 across 12 memory channels. The 96-core EPYC 9654, together with its 128-core Zen 4c sibling, the EPYC 9754 (Bergamo), represents some of the highest-throughput server silicon ever shipped for general commercial use.
The EPYC 9004 series is the architecture powering providers like Evolushost, a European infrastructure provider that has built its VPS platform specifically around AMD EPYC for both performance and privacy reasons. Evolushost's EPYC VPS plans expose these hardware security features directly to customers, rather than abstracting them away behind a generic cloud interface.
Intel Xeon: Architecture and Generations
Intel's Xeon line has dominated data centres for decades and remains the default choice at many hyperscale providers. The most relevant recent generations for a security comparison are:
- Ice Lake (Xeon 3rd Gen Scalable, 2021): Up to 40 cores, introduced Intel SGX (Software Guard Extensions) at a meaningful scale, and brought improvements to Intel Total Memory Encryption (TME). SGX allows applications to run in isolated "enclaves" with cryptographic protection, a useful feature for specific workloads.
- Sapphire Rapids (Xeon 4th Gen Scalable, 2023): Up to 60 cores, with the introduction of Intel TDX (Trust Domain Extensions) — Intel's answer to AMD SEV-SNP. TDX allows entire virtual machines to run in encrypted Trust Domains, isolated from the hypervisor. Also includes AMX (Advanced Matrix Extensions) for AI workloads. DDR5 support and PCIe 5.0 bring it up to parity with EPYC Genoa in interconnect standards.
Xeon processors are capable and well-supported, but Intel's hardware security technologies — particularly TDX — have seen slower adoption and more limited real-world deployment than AMD's equivalent features. Intel SGX, in particular, has had a difficult history with side-channel vulnerabilities (SGAxe, LVI, Plundervolt) that have required multiple mitigation rounds and eroded confidence in the enclave model for high-assurance use cases.
Key Technical Comparison
The table below compares the current flagship offerings from both platforms across the dimensions most relevant to privacy hosting.
| Feature | AMD EPYC 9004 (Genoa) | Intel Xeon Scalable 4th Gen (Sapphire Rapids) | Privacy Advantage |
|---|---|---|---|
| Max cores per socket | 96 (EPYC 9654) | 60 (Xeon 8490H) | AMD — more isolated vCPU allocation |
| Memory encryption | AMD SME | Intel TME | Comparable baseline; AMD adds per-VM keys via SEV |
| VM memory isolation | SEV, SEV-ES, SEV-SNP | Intel TDX (newer, limited deployment) | AMD — mature, widely deployed since 2019 |
| Hypervisor bypass encryption | Yes (SEV-SNP) | Yes (TDX) — limited ecosystem | AMD — broader hypervisor support (KVM, Hyper-V) |
| Remote attestation | SEV-SNP attestation | TDX attestation | AMD — more production deployments |
| Application enclave | Limited (SEV scope) | Intel SGX (troubled history) | Neither — SGX has known vulnerabilities |
| L3 cache (top tier) | Up to 1.1 GB (3D V-Cache) | Up to 112.5 MB | AMD — large cache reduces memory round-trips |
| Memory channels | 12-channel DDR5 (Genoa) | 8-channel DDR5 | AMD — higher memory bandwidth |
| Memory bandwidth (peak) | ~460 GB/s (Genoa) | ~307 GB/s (Sapphire Rapids) | AMD — critical for encryption throughput |
| Power efficiency | Better perf/watt (Zen 4) | Higher TDP for comparable perf | AMD — lower operating cost |
| PCIe generation | PCIe 5.0 | PCIe 5.0 | Parity |
| Known microarchitectural vulnerabilities (historical) | Fewer disclosed | Spectre, MDS, SGAxe, LVI, Plundervolt | AMD — smaller historical attack surface |
EPYC's Memory Encryption Advantage Explained
The most important technical differentiator for privacy hosting is AMD's tiered memory encryption stack: SME, SEV, SEV-ES, and SEV-SNP. Understanding what each layer does — and what it protects against — is essential to evaluating any VPS or server provider's privacy claims.
Secure Memory Encryption (SME)
AMD SME encrypts the contents of system RAM using a key generated by the processor's dedicated security processor (AMD-SP). The encryption is transparent to the operating system and applications — data is encrypted as it flows to DRAM and decrypted as it flows back to the CPU. This protects against physical memory attacks: cold-boot attacks, DRAM bus sniffing, and similar hardware-level attempts to extract data from RAM modules directly.
Intel's Total Memory Encryption (TME) provides a comparable function on Xeon, encrypting all of DRAM with a single key managed by the processor. Both platforms offer this baseline protection. The divergence begins at the virtualisation layer.
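A quick way to see whether a Linux host actually exposes these features is to inspect the CPU flags the kernel reports in `/proc/cpuinfo`. The sketch below parses cpuinfo-style text for the flag names recent kernels use (`sme`, `sev`, `sev_es`, `sev_snp` on AMD; `tme` on Intel); the sample flags line is illustrative, and on a real host you would read `/proc/cpuinfo` directly.

```python
# Flag names for each platform's memory-encryption stack, as exposed by
# recent Linux kernels in the /proc/cpuinfo "flags" line
ENCRYPTION_FLAGS = {"sme", "sev", "sev_es", "sev_snp", "tme"}

def encryption_features(cpuinfo_text: str) -> set[str]:
    """Return the memory-encryption feature flags found in cpuinfo text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            advertised = set(line.split(":", 1)[1].split())
            return ENCRYPTION_FLAGS & advertised
    return set()

# Illustrative flags line resembling an EPYC Genoa host; on a live system,
# pass open("/proc/cpuinfo").read() instead
sample = "flags\t\t: fpu vme sse2 aes sme sev sev_es sev_snp"
print(sorted(encryption_features(sample)))  # prints ['sev', 'sev_es', 'sev_snp', 'sme']
```

Note that a flag in cpuinfo only means the CPU advertises the feature; SME still needs to be enabled via the kernel (`mem_encrypt=on`), and SEV needs hypervisor support.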
Secure Encrypted Virtualization (SEV)
AMD SEV extends memory encryption to individual virtual machines. Each guest VM receives its own unique encryption key, generated by the AMD Secure Processor and never exposed to the host hypervisor, the host operating system, or other VMs on the same physical host. When the hypervisor reads the memory of a guest VM, it sees ciphertext — not the plaintext data inside the guest.
This is a fundamental shift in the trust model for VPS hosting. Without SEV, a compromised or malicious hypervisor can read the memory of any guest VM running on the host. With SEV enabled, that access yields only encrypted data that the hypervisor cannot decrypt, because it never holds the key. The key never leaves the AMD Secure Processor; guest data appears in plaintext only inside the CPU while the guest's own code is executing.
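The per-VM-key trust model can be illustrated with a toy simulation: a "secure processor" object generates and holds one key per guest, so anything sitting in simulated DRAM is ciphertext to the hypervisor. This is a conceptual sketch only; the SHA-256 keystream below is a stand-in for the AES engine in EPYC's memory controller, not how SEV actually encrypts pages.

```python
import hashlib
import secrets

class SecureProcessor:
    """Toy stand-in for the AMD Secure Processor: creates and holds
    per-VM keys that are never exposed to callers."""
    def __init__(self):
        self._keys = {}  # vm_id -> key; private to the "secure processor"

    def create_vm(self, vm_id: str):
        self._keys[vm_id] = secrets.token_bytes(32)

    def _keystream(self, vm_id: str, n: int) -> bytes:
        # Toy keystream derived from the VM's key (illustrative only)
        out, counter = b"", 0
        while len(out) < n:
            block = self._keys[vm_id] + counter.to_bytes(8, "big")
            out += hashlib.sha256(block).digest()
            counter += 1
        return out[:n]

    def encrypt(self, vm_id: str, data: bytes) -> bytes:
        ks = self._keystream(vm_id, len(data))
        return bytes(a ^ b for a, b in zip(data, ks))

    decrypt = encrypt  # XOR stream cipher: same operation both directions

sp = SecureProcessor()
sp.create_vm("guest-a")
ram = sp.encrypt("guest-a", b"customer mailbox data")  # what sits in DRAM
# A hypervisor reading DRAM sees only ciphertext; recovering the plaintext
# requires going through the secure processor with the right VM identity
print(ram != b"customer mailbox data")
print(sp.decrypt("guest-a", ram))
```

The point of the model is the key's residency: nothing outside the `SecureProcessor` object ever touches `_keys`, just as the hypervisor never touches a SEV guest's key.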
SEV Encrypted State (SEV-ES)
SEV-ES adds protection for CPU register state. Without this extension, a hypervisor could inspect or modify CPU registers when a VM exits to the hypervisor — for example, during a context switch or interrupt handling. SEV-ES encrypts the entire CPU state on VM exit, preventing the hypervisor from reading or tampering with register contents, stack pointers, and other execution state that could leak sensitive data or enable manipulation of guest execution.
SEV Secure Nested Paging (SEV-SNP)
SEV-SNP, introduced with Milan and fully supported in Genoa, adds two critical properties that earlier SEV versions lacked: memory integrity protection and remote attestation.
Memory integrity protection prevents a malicious hypervisor from remapping guest memory pages, a class of attack known as replay or memory-aliasing, in which the hypervisor swaps in stale or modified memory pages to corrupt the guest's execution. SEV-SNP uses a hardware-managed Reverse Map Table (RMP) to ensure that each physical memory page can be owned by only one entity at a time, and that page contents cannot be silently substituted.
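The RMP's core invariant, one owner per physical page with no silent substitution, can be sketched as a simple ownership table. This is a conceptual model of the rule the hardware enforces, not the actual RMP data structure or its fields.

```python
class ReverseMapTable:
    """Toy model of SEV-SNP's RMP invariant: each physical page has a
    single owner, and writes by anyone else are rejected."""
    def __init__(self):
        self.owner = {}  # physical page number -> "hypervisor" or a guest id

    def assign(self, page: int, new_owner: str):
        # Unowned or hypervisor-owned pages may be assigned to a guest;
        # a guest-owned page cannot be silently remapped to anyone else
        if self.owner.get(page) not in (None, "hypervisor"):
            raise PermissionError(f"page {hex(page)} owned by {self.owner[page]}")
        self.owner[page] = new_owner

    def check_write(self, page: int, writer: str):
        # Only the recorded owner may modify the page's contents
        if self.owner.get(page, "hypervisor") != writer:
            raise PermissionError(f"{writer} may not write page {hex(page)}")

rmp = ReverseMapTable()
rmp.assign(0x1000, "guest-a")
try:
    rmp.assign(0x1000, "guest-b")  # remap/aliasing attempt: blocked
except PermissionError as err:
    print("blocked:", err)
```

In real hardware the check happens on every page-table walk and RMP update, so a hypervisor attempting the equivalent of the second `assign` triggers a fault rather than a silent substitution.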
Remote attestation allows a guest VM to cryptographically prove to a remote party that it is running on genuine AMD SEV-SNP hardware, that the firmware and software stack are unmodified, and that its memory is encrypted. This enables a use case that was previously impossible: a third party can verify, without trusting the host operator, that a VM is running in a genuine confidential computing environment.
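The attestation flow can be sketched end to end: the remote verifier sends a fresh nonce, the guest asks the hardware for a signed report binding that nonce to its launch measurement, and the verifier checks both values plus the signature. HMAC with a shared key stands in here for the AMD-rooted ECDSA certificate chain; this is a conceptual sketch of the protocol shape, not the real SEV-SNP report format.

```python
import hashlib
import hmac
import secrets

# Stand-in for the AMD-held device key that roots the attestation chain
DEVICE_KEY = secrets.token_bytes(32)

def hardware_sign_report(measurement: bytes, nonce: bytes) -> dict:
    """Toy stand-in for SEV-SNP firmware producing a signed report."""
    body = measurement + nonce
    return {"measurement": measurement, "nonce": nonce,
            "signature": hmac.new(DEVICE_KEY, body, hashlib.sha256).digest()}

def verify_report(report: dict, expected_measurement: bytes,
                  expected_nonce: bytes) -> bool:
    """Verifier side: check freshness, expected software stack, signature."""
    if report["nonce"] != expected_nonce:
        return False  # stale or replayed report
    if report["measurement"] != expected_measurement:
        return False  # unexpected firmware/kernel stack
    body = report["measurement"] + report["nonce"]
    expected_sig = hmac.new(DEVICE_KEY, body, hashlib.sha256).digest()
    return hmac.compare_digest(report["signature"], expected_sig)

measurement = hashlib.sha256(b"firmware+kernel+initrd").digest()
nonce = secrets.token_bytes(16)  # verifier-chosen freshness value
report = hardware_sign_report(measurement, nonce)
print(verify_report(report, measurement, nonce))  # prints True
```

The property that matters survives the simplification: the verifier trusts the signing key (in reality, AMD's certificate chain), not the host operator, so a report that verifies proves the guest's environment without any statement from the provider.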
Together, SME + SEV + SEV-ES + SEV-SNP form a complete confidential computing stack. A VM running on an EPYC processor with SEV-SNP enabled is cryptographically isolated from the hypervisor, the host OS, other VMs, and physical memory attacks. This is the strongest hardware privacy guarantee commercially available in the server market today.
Real-World Benchmark Context
Raw benchmark comparisons between EPYC and Xeon have consistently favoured AMD across the Genoa generation for most server workloads. The differences are not marginal — they reflect genuine architectural advantages. Here is what the data shows for the workloads most relevant to privacy hosting:
Database workloads (PostgreSQL, MySQL)
EPYC's combination of high core counts, large L3 cache, and wide memory bandwidth makes it exceptionally well-suited for database servers. In published benchmarks from independent testing organisations, EPYC 9654 (96 cores) consistently outperforms the top Xeon Sapphire Rapids parts on transactions per second by 30–50% in multi-threaded OLTP workloads, while maintaining better latency percentiles at high concurrency. For an email server hosting thousands of users, the database layer is often the primary bottleneck — EPYC's advantage here translates directly to responsiveness under load.
Compilation and build workloads
Development environments and CI/CD pipelines benefit enormously from high core counts. With up to 96 cores on a single socket, EPYC Genoa can sustain parallel compile jobs that would saturate a 60-core Xeon. In practice, this means faster container builds, shorter test cycles, and more concurrent development environments per physical host — directly relevant to anyone using a privacy VPS for software development.
Encryption throughput
AES-NI acceleration is present on both platforms, but EPYC's wider memory bandwidth (up to 460 GB/s vs. Xeon's 307 GB/s) means that encryption-heavy workloads — full-disk encryption, TLS termination, encrypted backups — can sustain higher throughput without hitting memory bandwidth limits. When running an email server where every message is encrypted at rest, encryption throughput is not an academic concern.
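One crude way to feel the "bytes per second over a large buffer" bottleneck described above is a userspace throughput microbenchmark. Python's standard library does not expose AES-NI directly, so SHA-256 over a large buffer serves here as a stand-in for any compute-over-memory workload; absolute numbers are illustrative only and say nothing about a specific host's encryption throughput.

```python
import hashlib
import time

def throughput_mb_s(buf_mb: int = 64, rounds: int = 4) -> float:
    """Process a large buffer repeatedly and report MB/s sustained."""
    buf = bytes(buf_mb * 1024 * 1024)  # zero-filled test buffer
    start = time.perf_counter()
    for _ in range(rounds):
        hashlib.sha256(buf).digest()
    elapsed = time.perf_counter() - start
    return (buf_mb * rounds) / elapsed

print(f"{throughput_mb_s():.0f} MB/s")
```

Run against increasing buffer sizes, throughput eventually flattens as the working set outgrows L3 cache and the workload becomes bound by memory bandwidth, which is exactly where Genoa's 12-channel DDR5 and large V-Cache pay off.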
Which to Choose for Specific Workloads
Email servers with encryption at rest
EPYC is the clear choice. The combination of SEV-SNP for VM-level isolation, high memory bandwidth for encryption throughput, and large core counts for handling concurrent connections makes it the optimal platform for a privacy-focused mail server. The hardware memory encryption guarantees that even if another tenant on the same physical host were somehow able to access raw DRAM (a scenario that should be impossible with a properly configured hypervisor, but which SME protects against regardless), they would see only ciphertext.
This is precisely why enemail runs on EPYC infrastructure — the hardware-level encryption guarantees align with the service's zero-knowledge architecture. The processor is the first line of defence, before any software security measure is considered.
Encrypted storage servers and NAS workloads
Encrypted storage workloads are heavily I/O-bound and encryption-compute-bound. EPYC's memory bandwidth advantage and strong AES-NI performance make it more suitable than Xeon for hosts running encrypted ZFS pools, LUKS-encrypted volumes, or distributed encrypted storage systems. The 3D V-Cache variant (EPYC 9684X) is particularly effective here, as the massive L3 cache can absorb random I/O patterns that would otherwise thrash DRAM.
Developer workloads (containers, CI, sandboxed environments)
For a developer who wants a private VPS with strong isolation guarantees, EPYC with SEV-SNP provides something Xeon cannot match: hardware-enforced separation between the developer's environment and anything the host operator can observe. On an EPYC-powered VPS with SEV-SNP enabled, the host hypervisor operates outside the trust boundary — by design and by hardware enforcement. For developers working with sensitive codebases, cryptographic keys, or client data, this is a meaningful security property.
High-traffic web applications
This is the one category where Xeon can close the gap. Intel's mature ecosystem, broad software optimisations, and long enterprise history mean that some specific workloads — particularly those using Intel-specific acceleration libraries or running older software stacks optimised for Intel architectures — may perform comparably or better on Xeon. For new workloads without Intel-specific optimisations, EPYC's core count and memory bandwidth advantages reassert themselves quickly.
Conclusion: EPYC Wins for Privacy Hosting
The comparison is not particularly close when privacy is the primary criterion. AMD EPYC has built a mature, production-proven hardware security stack — SME, SEV, SEV-ES, SEV-SNP — that provides genuine cryptographic isolation between virtual machines and the hypervisor layer above them. Intel's TDX is a real technology, but it arrived later, has seen less deployment, and lacks the ecosystem of tooling and hypervisor support that AMD's confidential computing stack has accumulated over five years of production use.
Beyond the security architecture, EPYC simply wins on the performance metrics that matter most for privacy-relevant workloads: core count (up to 96 vs. 60), memory bandwidth (up to 460 GB/s vs. 307 GB/s), cache capacity (up to 1.1 GB with 3D V-Cache vs. 112.5 MB), and power efficiency. These advantages translate into more isolated VM environments per host, faster encryption throughput, and lower operating costs — all of which benefit the end user.
If you are choosing a VPS or dedicated server with privacy as a genuine requirement rather than a marketing claim, the processor generation matters. Look for providers who explicitly run AMD EPYC hardware and who enable SEV-SNP for guest VMs. Evolushost's EPYC VPS plans are built on the Genoa generation with these features enabled — a rare combination of the right hardware and the right European jurisdiction (no US parent company, no CLOUD Act exposure) that makes the privacy promise technically coherent rather than aspirational.
Hardware security is not a substitute for good software architecture, zero-knowledge encryption, or a trustworthy provider. But it is the foundation. The processor is where trust either begins or is compromised. For privacy hosting, that means AMD EPYC.
Privacy hosting built on AMD EPYC infrastructure.
enemail runs on EPYC-powered servers in the EU — zero-knowledge encryption, hardware-level VM isolation, and no US jurisdiction. Start with a free account.
Create your free account