What Is AMD EPYC?
AMD EPYC is a family of server-class processors designed specifically for data centre workloads. First introduced in 2017 under the codename Naples (EPYC 7001 series), the platform has gone through four major generations, each dramatically expanding memory capacity, core counts, and security capabilities.
The four generations at a glance
- 1st Gen — Naples (EPYC 7001, 2017): The opening move. Up to 32 cores per socket, 8-channel DDR4, and PCIe 3.0. Naples established EPYC's foundational architecture: multiple CPU dies (chiplets) connected via AMD's Infinity Fabric, allowing more cores per socket than competing monolithic designs.
- 2nd Gen — Rome (EPYC 7002, 2019): A generational leap. Rome moved to TSMC's 7 nm process and doubled core counts to 64 per socket. It extended the Secure Memory Encryption (SME) and Secure Encrypted Virtualisation (SEV) features introduced with Naples by adding SEV-ES (Encrypted State), which protects a VM's register contents as well as its memory — the hardware line that makes EPYC so significant for privacy hosting. PCIe 4.0 support arrived here as well.
- 3rd Gen — Milan (EPYC 7003, 2021): Refined and matured. Milan introduced SEV-SNP (Secure Nested Paging), a critical upgrade that adds integrity protections on top of encrypted VM memory. Up to 64 cores, 8-channel DDR4-3200, and a unified 32 MB L3 cache per CCD, doubling the cache directly accessible to each core compared with Rome's split 16 MB complexes.
- 4th Gen — Genoa (EPYC 9004, 2022–2023): The current flagship. Genoa moves to the 5 nm process and scales to 96 cores per socket. It introduces 12-channel DDR5 memory, PCIe 5.0, and CXL 1.1 support. The memory bandwidth numbers are extraordinary: a dual-socket Genoa system can push well over 900 GB/s of memory throughput. SEV-SNP is fully supported and production-ready.
The chiplet architecture is worth understanding in detail because it underpins so many of EPYC's advantages. Rather than building one large monolithic die, AMD connects multiple smaller compute dies (CCDs, or Core Complex Dies) via the high-bandwidth Infinity Fabric. This approach has a manufacturing yield advantage — smaller dies are easier to produce without defects — but it also allows AMD to scale core counts in ways that monolithic designs cannot match economically. A single Genoa socket can contain up to 12 CCDs, each contributing 8 cores, for 96 physical cores in total.
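The chiplet arithmetic is easy to check. A minimal sketch, assuming the per-CCD figures for Genoa (8 Zen 4 cores and 32 MB of L3 per CCD); the helper name is illustrative:

```python
# Per-socket totals from chiplet counts: a back-of-the-envelope sketch.
# Assumed per-CCD figures for Genoa: 8 cores and 32 MB of L3.
def socket_totals(ccds: int, cores_per_ccd: int = 8, l3_mb_per_ccd: int = 32) -> dict:
    return {
        "cores": ccds * cores_per_ccd,   # physical cores per socket
        "l3_mb": ccds * l3_mb_per_ccd,   # total L3 cache per socket
    }

# A fully populated Genoa socket carries 12 CCDs.
print(socket_totals(12))
```

The 12-CCD case reproduces both headline figures for the flagship part: 96 cores and 384 MB of L3 per socket.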
Why EPYC Dominates Privacy-First Hosting
For most enterprise workloads, the competition between EPYC and Intel Xeon comes down to price and performance benchmarks. For privacy-sensitive hosting, two EPYC-specific features change the calculation entirely: Secure Memory Encryption (SME) and Secure Encrypted Virtualisation (SEV).
AMD SME: Encrypting RAM at the hardware level
Secure Memory Encryption is a hardware feature built into the EPYC memory controller. When SME is enabled, the processor uses a 128-bit AES key — generated at boot time and stored inside the CPU itself, never exposed to software — to encrypt data as it is written to DRAM and decrypt it as it is read back. The entire process is transparent to the operating system and applications.
The practical implication for hosting is significant. A cold-boot attack — where an attacker with physical access to a server freezes the RAM modules and reads them in another machine — yields only ciphertext when SME is active. The encryption key never leaves the processor package. An adversary who somehow obtains your server's physical RAM walks away with nothing readable.
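Whether a given Linux host advertises these features can be checked from the CPU flags the kernel exposes. A minimal sketch: the flag names (`sme`, `sev`, `sev_snp`) are the ones recent Linux kernels publish in `/proc/cpuinfo` on EPYC hardware, and the helper names here are illustrative:

```python
# Report which AMD memory-encryption features the kernel advertises.
# Flag names follow Linux's /proc/cpuinfo; helper names are illustrative.
def read_cpu_flags(path: str = "/proc/cpuinfo") -> set:
    try:
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    # "flags : fpu vme ... sme sev ..." -> set of flag names
                    return set(line.split(":", 1)[1].split())
    except OSError:
        pass  # not a Linux host, or /proc unavailable
    return set()

def encryption_features(flags: set) -> dict:
    return {name: name in flags for name in ("sme", "sev", "sev_snp")}

print(encryption_features(read_cpu_flags()))
```

On a non-EPYC machine all three entries simply come back `False`; a flag being present shows hardware and kernel support, not that the feature is actually enabled.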
AMD SEV and SEV-SNP: Encrypting individual virtual machines
Secure Encrypted Virtualisation goes a step further by encrypting the memory of individual virtual machines with separate keys. In a standard VPS environment, the hypervisor has read access to the memory of every VM it hosts. SEV changes this: each VM's memory is encrypted with its own key, generated and held by the on-chip AMD Secure Processor, which the hypervisor has no interface to read.
SEV-SNP (Secure Nested Paging), introduced with Milan and production-hardened with Genoa, adds memory integrity protection on top of SEV encryption. This prevents a malicious hypervisor from remapping VM memory pages to leak data, a class of attack that pure encryption does not address. With SEV-SNP active, a VM can verify the integrity of its own memory pages and detect any tampering from the host layer.
For privacy VPS hosting, this matters enormously. It means that even the hosting provider — who runs the hypervisor — cannot read the memory contents of a tenant's VM. This is a hardware-enforced privacy guarantee that no amount of software policy or contractual commitment can match.
Core density and cache size
Privacy workloads are often compute-intensive. End-to-end encrypted email, for instance, involves constant cryptographic operations: key lookups, message signing, encryption and decryption, TLS termination for every connection. These operations benefit from large L3 caches, which keep frequently accessed cryptographic data close to the processor without expensive trips to main memory.
EPYC Genoa ships with up to 384 MB of L3 cache in its 96-core configuration — an extraordinary number that dwarfs comparable Intel Xeon offerings. For email and privacy application servers, larger cache directly translates to lower latency and higher throughput per core.
The high core count also enables better VM consolidation ratios. A hosting provider running EPYC hardware can offer more VMs per physical host, which reduces the cost per VPS without compromising the isolation guarantees that SEV provides.
NUMA architecture and memory bandwidth
EPYC uses a Non-Uniform Memory Access (NUMA) architecture, where each CCD has preferred memory channels that it can access at lower latency. For privacy workloads, this means that with proper NUMA-aware configuration, a VPS running on EPYC hardware can sustain extremely high cryptographic throughput without memory access becoming a bottleneck.
The 12-channel DDR5 memory subsystem on Genoa provides roughly 460 GB/s of memory bandwidth per socket — about 50% more than Intel's competing Sapphire Rapids architecture delivers. For workloads that are memory-bandwidth bound, such as large-scale key derivation or bulk file encryption, this is a decisive advantage.
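The headline bandwidth figures fall out of simple arithmetic: channels × transfer rate × 8 bytes per 64-bit transfer. A sketch, assuming DDR5-4800 on both platforms (the actual speed grade depends on the specific SKU and DIMM population):

```python
# Theoretical peak memory bandwidth per socket, in GB/s.
# Assumes DDR5-4800 (4800 MT/s) and an 8-byte (64-bit) channel width;
# sustained throughput in practice is lower than this peak.
def peak_bandwidth_gbs(channels: int, mt_per_s: int = 4800, bytes_per_transfer: int = 8) -> float:
    return channels * mt_per_s * bytes_per_transfer / 1000  # MB/s -> GB/s

print(peak_bandwidth_gbs(12))  # Genoa: 12 channels
print(peak_bandwidth_gbs(8))   # Sapphire Rapids: 8 channels
```

The 12-channel case works out to 460.8 GB/s and the 8-channel case to 307.2 GB/s, matching the approximate figures in the comparison table.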
EPYC vs. Intel Xeon for VPS Hosting
The hosting market has historically been dominated by Intel Xeon. Understanding what has changed — and what the concrete differences mean for privacy — requires looking at the comparison across several dimensions.
| Feature | AMD EPYC Genoa | Intel Xeon Sapphire Rapids |
|---|---|---|
| Max cores per socket | 96 | 60 |
| Hardware memory encryption | SME (AES-128, all RAM) | TME (single key, all RAM) |
| VM memory encryption | SEV-SNP (per-VM keys, integrity) | TDX (limited availability) |
| Max L3 cache per socket | 384 MB | 112.5 MB |
| Memory channels | 12 × DDR5 | 8 × DDR5 |
| Memory bandwidth (per socket) | ~460 GB/s | ~307 GB/s |
| Price/core ratio | Lower | Higher |
Intel's answer to SEV is its Trust Domain Extensions (TDX) technology, which provides VM isolation roughly analogous to SEV-SNP. However, TDX is available only on select 4th-generation Xeon Scalable (Sapphire Rapids) parts with compatible hypervisors, and production deployments remain less common than AMD's more mature SEV ecosystem. For privacy-oriented VPS hosting today, EPYC with SEV-SNP is the more widely deployed and battle-tested solution.
The price/performance gap is also concrete. Because EPYC packs more cores per socket, a hosting provider can provision more VPS instances per physical server. These economics flow downstream to customers: EPYC VPS plans from EU providers typically offer significantly more vCPUs and RAM per euro than equivalent Xeon-based offerings, without any compromise on isolation or security guarantees.
What to Look for in an EPYC VPS Provider
Not all EPYC VPS offerings are equal. The processor is necessary but not sufficient. When evaluating providers, the following criteria separate serious privacy infrastructure from marketing claims.
EU jurisdiction and data residency
For European users and GDPR-regulated workloads, the physical location of your server and the legal jurisdiction of your provider matter as much as the hardware. A US-headquartered provider running servers in Frankfurt is still subject to US legal process — including CLOUD Act orders that do not require notification to the server's physical host country.
Choose a provider incorporated in the EU, with servers in the EU, subject only to EU and member-state law. Germany, Austria, and the Netherlands are particularly strong choices given their national privacy traditions and GDPR enforcement records; Iceland, an EEA member rather than an EU state, offers comparable protections under its GDPR-aligned law. An EPYC server in a US-jurisdiction data centre offers the hardware security features but not the legal protection.
Actual SEV/SEV-SNP support
EPYC hardware supports SEV, but the feature must be enabled at the hypervisor level by the provider. Ask explicitly whether your provider enables AMD SEV or SEV-SNP on their EPYC hosts. Providers who deploy KVM with SEV-SNP enabled can offer a measurably higher level of VM isolation than those who run EPYC hardware without activating its security features.
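From inside a guest, a few quick signals are worth checking before taking a provider's claims at face value. A hedged sketch: the `sev`/`sev_snp` CPU flags and the `/dev/sev-guest` device (created by the Linux `sev-guest` driver on SNP guests) are real Linux interfaces, but exact behaviour varies by kernel version and provider configuration, and the helper name is illustrative:

```python
import os

# Quick guest-side signals that SEV / SEV-SNP may be active.
# Behaviour varies by kernel and provider; treat this as a first check,
# not a proof. Real verification goes through the SNP attestation report.
def sev_signals() -> dict:
    flags = set()
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    flags = set(line.split(":", 1)[1].split())
                    break
    except OSError:
        pass  # not a Linux guest, or /proc unavailable
    return {
        "cpu_flag_sev": "sev" in flags,
        "cpu_flag_sev_snp": "sev_snp" in flags,
        "dev_sev_guest": os.path.exists("/dev/sev-guest"),
    }

print(sev_signals())
```

On an ordinary VPS all three signals come back `False`; on an SNP-enabled guest the device node in particular is what cryptographic attestation tooling talks to.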
Dedicated vs. shared physical hosts
Even with SEV active, some privacy use cases benefit from dedicated physical hardware. If you are processing data subject to strict regulatory requirements, or running an application where even the metadata of co-tenancy is a concern, look for providers that offer dedicated EPYC hosts — where the physical machine is assigned exclusively to your organisation.
SLA and uptime commitments
Privacy-critical infrastructure — email servers, encrypted storage, VPN endpoints — typically requires high availability. Look for providers offering at least 99.9% uptime SLA with clear compensation terms, redundant power and network connectivity, and documented incident response procedures. A provider who cannot articulate their failure recovery process is not ready to host infrastructure you depend on.
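It helps to translate SLA percentages into concrete downtime budgets when comparing providers. A small sketch; the 30-day month and 365-day year are simplifying assumptions, and the function name is illustrative:

```python
# Maximum downtime an uptime SLA permits over a given period, in minutes.
# Assumes a flat period length (e.g. a 30-day month, a 365-day year).
def allowed_downtime_minutes(uptime_pct: float, period_hours: float) -> float:
    return period_hours * 60 * (1 - uptime_pct / 100)

# A 99.9% SLA over a 30-day month, then over a 365-day year (in hours).
print(round(allowed_downtime_minutes(99.9, 30 * 24), 1))        # minutes per month
print(round(allowed_downtime_minutes(99.9, 365 * 24) / 60, 2))  # hours per year
```

A 99.9% SLA allows roughly 43 minutes of downtime per month; each additional nine cuts the budget by a factor of ten, which is why 99.99% commitments are priced so differently.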
Transparent network infrastructure
Look for providers who publish their autonomous system number (ASN), are transparent about their upstream network providers, and operate their own hardware rather than reselling capacity from a larger cloud provider. First-party infrastructure means the provider has direct control over their security posture — they are not dependent on a wholesale provider's policies and access controls.
Top Use Cases for EPYC VPS Hosting
The technical advantages of EPYC translate into concrete benefits for specific workload categories. Here is where EPYC VPS infrastructure makes the most difference.
Private email servers
Self-hosted email is one of the most demanding privacy use cases. A well-configured mail server must handle TLS connections, perform DKIM signing for every outbound message, run spam filtering (which involves pattern-matching against large databases), manage encrypted storage for mailboxes, and maintain high availability around the clock. EPYC's large L3 cache reduces the latency of cryptographic operations, its high core count allows simultaneous processing of many connections, and SEV encryption protects mailbox data at the memory level even from the hosting provider's administrative access.
Services like enemail run their privacy-focused email infrastructure on EPYC-based servers specifically because the hardware provides defence in depth that goes beyond what software encryption alone can offer. When the processor itself encrypts VM memory with keys that the hypervisor cannot access, zero-knowledge architecture gets a hardware-level foundation to stand on.
Encrypted storage and file sync
Client-side encrypted storage services — Nextcloud or Cryptomator-style deployments, for example — perform constant AES encryption and decryption as files are uploaded, downloaded, and synced. EPYC processors include hardware AES acceleration (AES-NI) across all cores, and the high memory bandwidth of Genoa means that bulk file operations do not hit memory bottlenecks even under heavy concurrent load. Combined with SME protecting data in RAM, EPYC provides encryption coverage at the storage, memory, and hardware levels simultaneously.
Developer workloads and CI/CD pipelines
Development teams working on privacy-sensitive applications — fintech, healthtech, legal tech — often need isolated build environments where source code, test data, and artefacts are protected from infrastructure-level access. EPYC's high core count and SEV isolation make it possible to run many isolated build containers on a single physical host while maintaining strong guarantees that co-tenant builds cannot interfere with or observe each other. The large L3 cache also speeds up compilation workloads significantly compared to lower-cache architectures.
Privacy-oriented applications and APIs
VPN endpoints, anonymisation proxies, secure messaging backends, and privacy-preserving analytics services all benefit from EPYC's characteristics. These applications are typically network-intensive and cryptographically heavy — exactly the profile that EPYC's memory bandwidth, cache size, and hardware AES acceleration address. Running such services on EPYC with SEV-SNP active also means that the application's session keys and user data are protected from hypervisor-level access, a meaningful security improvement over commodity cloud infrastructure.
Database servers with sensitive data
Encrypted database deployments — PostgreSQL with encrypted tablespaces, or dedicated encrypted-at-rest systems — benefit from EPYC's architecture in two ways. First, the large cache keeps frequently accessed index pages in processor cache, improving query performance without the privacy cost of expanding RAM exposure. Second, SME ensures that data written to DRAM during query processing is encrypted at the hardware level, protecting against memory-scraping attacks that target database servers specifically.
Conclusion: EPYC Is the Right Foundation for Privacy Hosting in 2025
The case for AMD EPYC in privacy-sensitive hosting is not merely about raw performance benchmarks. It rests on a specific set of architectural decisions — hardware memory encryption, per-VM encryption with integrity protection, chiplet-based core density, and massive cache capacity — that align precisely with what privacy workloads require.
Intel's Xeon remains a capable server processor, but its lack of mature, production-deployed VM memory encryption is a meaningful gap for any provider that takes hypervisor-level isolation seriously. EPYC with SEV-SNP active provides a hardware guarantee that no competing x86 server processor can currently match: the host system literally cannot read the memory contents of its guests.
For European users and organisations subject to GDPR, combining EPYC's hardware security with an EU-jurisdictioned provider creates infrastructure that is private at multiple layers simultaneously — legally, architecturally, and cryptographically. That layered approach to privacy is what distinguishes serious infrastructure from ordinary commodity hosting.
If you are evaluating EPYC-based VPS options, Evolushost is one of the European providers that has built its infrastructure around 4th-generation EPYC hardware with SEV support, EU data residency, and transparent first-party infrastructure. Their EPYC VPS plans are a practical starting point for privacy workloads that need the combination of hardware security and EU legal protection that this article has outlined.
The processor inside your server is not a neutral detail. It determines whether your provider can read your VM's memory, whether a cold-boot attack yields readable data, and whether the cryptographic operations your application depends on are fast enough to serve users without compromise. In 2025, AMD EPYC is the processor that gets all of those answers right for privacy-first hosting.