Preparing Your Homelab for the Quantum Future: Post-Quantum Cryptography Migration
Preparing my homelab for the quantum threat with NIST's post-quantum algorithms—CRYSTALS-Kyber, CRYSTALS-Dilithium, and practical implementation lessons
I spent three weekends trying to enable post-quantum cryptography on my homelab Nginx server. The first attempt broke every single HTTPS connection because I completely forgot that hybrid mode exists and tried forcing pure ML-KEM-768. The second weekend, I got hybrid mode working but didn't realize my certificate chain had ballooned to 18KB, triggering TCP fragmentation and adding a mysterious 200ms to every handshake. By the third weekend, I'd finally figured out that my Raspberry Pi 4 running Wazuh was perfectly capable of handling PQC; I just needed to stop treating it like it was 2015.
Here's what I learned about preparing homelabs for the quantum computing threat that's probably 10-15 years away but requires action today.
Why This Matters Right Now
In August 2024, NIST finalized three post-quantum cryptography standards: ML-KEM (FIPS 203) for key encapsulation, ML-DSA (FIPS 204) for digital signatures, and SLH-DSA (FIPS 205) for hash-based signatures. The federal government has set a hard deadline of 2035 for migrating all federal systems, with National Security Systems required to transition by 2030.
But here's the uncomfortable truth: you're probably already out of time.
The threat isn't cryptographically-relevant quantum computers (CRQCs) breaking your encryption today; those are still estimated to arrive between 2030 and 2045 according to expert surveys. The real threat is what security researchers call "Store Now, Decrypt Later" (SNDL) attacks. Adversaries are capturing encrypted network traffic right now, storing it in massive databases, and waiting for quantum computers powerful enough to crack it.
According to RAND Corporation's analysis, if your data needs to remain confidential for 20-30 years (think: medical records, financial data, personal backups), and it takes you 10-15 years to fully migrate your infrastructure to post-quantum cryptography, then by the time quantum computers arrive in the 2035-2040 window, everything you encrypted between 2015-2030 becomes retroactively vulnerable.
The math is simple and it's called Mosca's Theorem: If X (how long your data must stay secret) + Y (how long it takes to migrate) > Z (when quantum computers arrive), you're already behind schedule.
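To make that concrete, here's the back-of-the-envelope check I ran with my own numbers plugged in. The values are assumptions about my data and my pace, not universal constants; swap in your own:
# Mosca's inequality with my own guesses plugged in - adjust X, Y, Z for your situation
X=10   # years my backups and vault exports must stay confidential
Y=5    # years I realistically need to finish migrating everything
Z=8    # years until a CRQC, if the pessimistic 2033 estimate holds
if [ $((X + Y)) -gt "$Z" ]; then
  echo "X + Y > Z: already behind schedule, start migrating"
else
  echo "X + Y <= Z: some breathing room"
fi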
For my homelab, that meant my encrypted backups, my self-hosted Bitwarden vault snapshots, and even my personal notes stored on TrueNAS were all potentially exposed if someone's recording my network traffic today. That realization made spending three weekends on PQC suddenly feel a lot more urgent.
Understanding the Post-Quantum Algorithms
NIST's standardization process started with 82 candidate submissions and, eight years later, finalized three standards. Here's what actually made it to production:
ML-KEM (CRYSTALS-Kyber) - The Key Exchange Workhorse
ML-KEM (Module-Lattice-Based Key-Encapsulation Mechanism), formerly known as CRYSTALS-Kyber, handles key establishment for encrypted connections. Think of it as the post-quantum replacement for the Elliptic Curve Diffie-Hellman (ECDH) key exchange you're using right now in every TLS connection.
The algorithm comes in three security levels:
- ML-KEM-512: Public key 800 bytes, equivalent to AES-128 security
- ML-KEM-768: Public key 1,184 bytes, equivalent to AES-192 security
- ML-KEM-1024: Public key 1,568 bytes, equivalent to AES-256 security
For comparison, a typical X25519 public key is just 32 bytes. Yes, that's roughly 37 times larger for ML-KEM-768. This size difference is why I kept breaking things: my Nginx configuration assumed TLS ClientHello messages would fit in a single 1,500-byte Ethernet frame. Spoiler: they don't anymore.
But here's the surprising part: recent benchmarks show ML-KEM-768 performing encapsulation and decapsulation in approximately 0.05 milliseconds on a Raspberry Pi 4. That's genuinely usable for homelab projects. My Intel i9-9900K handles it even faster, though I haven't measured exactly how much faster because I was too busy fixing my broken certificate chain.
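If you already have OpenSSL 3.5 installed (or an older OpenSSL 3.x with the oqs-provider loaded), you can check which of these parameter sets your build actually exposes. Exact algorithm names vary between builds, so grep loosely:
# List the key-encapsulation algorithms your OpenSSL build knows about
openssl version
openssl list -kem-algorithms | grep -i -E 'ml-kem|kyber'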
ML-DSA (CRYSTALS-Dilithium) - The Signature Algorithm
ML-DSA (Module-Lattice-Based Digital Signature Algorithm), formerly CRYSTALS-Dilithium, handles digital signatures, the cryptographic proof that a message actually came from who you think it did.
The challenge here is signature size. ML-DSA-44 (the smallest variant, roughly equivalent to RSA-2048 security) produces signatures of 2,420 bytes compared to RSA-2048's roughly 256 bytes. That's almost 10x larger. When you're signing X.509 certificates, this matters because certificate chains get transmitted during every TLS handshake.
According to research from NIST's PQC Conference, when certificate chains exceed 16 KB, TLS implementations trigger record fragmentation, which adds an extra TCP round-trip due to congestion control. That 200ms delay I mentioned earlier? That was exactly this problem. My certificate chain with ML-DSA signatures hit 18KB and my monitoring dashboard immediately showed the latency spike.
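A sanity check I wish I'd run before weekend two: measure how much certificate material your server actually sends. This sketch counts the PEM encoding that s_client prints, which overstates the on-the-wire DER size by roughly a third, but it's close enough to tell you whether you're flirting with the 16KB threshold (the hostname is a placeholder):
# Rough size of the certificate chain the server presents during the handshake
echo | openssl s_client -connect lab.example.com:443 -showcerts 2>/dev/null \
  | sed -n '/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/p' \
  | wc -c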
SLH-DSA (SPHINCS+) - The Backup Plan
SLH-DSA (Stateless Hash-Based Digital Signature Algorithm), based on SPHINCS+, is NIST's backup signature standard. It uses hash functions instead of lattice math, which means if someone discovers a breakthrough attack against lattice-based cryptography, we're not completely screwed.
The trade-off is brutal: SPHINCS+ signing is significantly slower than Dilithium and produces larger signatures. I haven't even attempted implementing this in my homelab because honestly, ML-DSA works fine for my threat model. But if you're building a time capsule or need signatures that must remain valid for 50+ years, the hash-based approach might be worth the performance hit.
The Quantum Threat Timeline: When Should You Actually Worry?
The Global Risk Institute's 2024 expert survey estimates a 19-34% probability of CRQCs emerging within 10 years (by 2034), and a 5-14% probability within 5 years (by 2029). That's up from 17-31% and 4-11% respectively in their 2023 survey; the probability keeps creeping up as quantum hardware improves.
But here's where it gets interesting: Craig Gidney from Google Quantum AI published a paper in May 2025 showing that breaking RSA-2048 now requires less than a million noisy qubits, down from his 2019 estimate of 20 million qubits. That's a 95% reduction in hardware requirements. Companies like IBM are targeting 100,000-qubit systems by 2033, which suddenly makes the 2033-2035 threat window look a lot more realistic than the conservative 2045 estimates.
The RAND Corporation analysis from 2020 noted that quantum computing threats have been "15 years away" since the 1980s, suggesting either the problem is harder than we think (good for security) or we're approaching a tipping point (concerning for security). Given the recent algorithmic improvements and billion-dollar investments from governments and tech companies, I'm leaning toward the "approaching tipping point" interpretation.
For my homelab planning, I'm assuming the worst-case scenario: CRQCs arrive by 2033, which means data I encrypt in 2025 has at most 8 years of quantum-safe confidentiality. Given my backup retention policy of 7 years, I need PQC enabled pretty much yesterday.
Real-World Deployments: What the Pros Are Doing
Before diving into homelab implementation, I spent way too much time reading about production deployments. Turns out, the big tech companies have already solved most of the problems I was about to encounter.
Google Chrome: Billions of Users with ML-KEM
Google enabled X25519Kyber768 by default in Chrome 124 (April 2024), protecting billions of users with hybrid post-quantum key exchange. In November 2024, Chrome 131 switched to the standardized ML-KEM instead of the experimental Kyber variant.
The deployment wasn't smooth. Legacy middleboxes and firewalls failed to handle ClientHello messages larger than a single packet, causing connection failures. According to Google's engineering team:
"These errors are not caused by a bug in Google Chrome but instead caused by web servers failing to properly implement Transport Layer Security (TLS) and not being able to handle larger ClientHello messages."
Protocol ossification strikes again. This is exactly why I broke my homelab on attempt #1: I was testing with an old version of Nginx that assumed single-packet ClientHello messages.
AWS: Minimal Performance Impact at Scale
AWS enabled ML-KEM support in KMS, ACM, and Secrets Manager in May 2025, and their benchmarks are reassuring for homelab performance expectations.
In the worst-case scenario (full TLS handshake on every request, no connection reuse), AWS measured a 2.3% decrease in transactions per second: from 108.7 TPS with classical crypto to 106.2 TPS with hybrid ML-KEM. The bandwidth overhead was approximately 1,600 additional bytes per handshake, with compute overhead of 80-150 microseconds.
But here's the critical part: with TLS connection reuse enabled (which is the default for most applications), the overhead dropped to 0.05%, essentially negligible. This matches Cloudflare's observation that NTRU-HRSS performance was "hard to distinguish by eye from control connections" during their 2019 experiments.
If AWS can run ML-KEM at scale with 0.05% overhead, my homelab with maybe 20 simultaneous TLS connections can definitely handle it.
Signal and Apple: Rekeying Makes All the Difference
Signal deployed PQXDH (Post-Quantum Extended Diffie-Hellman) in September 2023 and recently released SPQR (Sparse Post-Quantum Ratchet) in October 2025. Apple launched iMessage PQ3 in March 2024, billing it as "Level 3 security": the first messaging protocol to protect not just the initial handshake but also ongoing message exchanges with post-quantum cryptography.
Apple's rekeying mechanism is particularly clever: keys automatically rotate every 50 messages or 7 days, whichever comes first. This means even if an attacker compromises a session key somehow, the vulnerability window is limited to 50 messages maximum before the protocol self-heals with fresh quantum-resistant keys.
For my homelab, this suggests that protecting the initial TLS handshake with ML-KEM is good, but setting up proper session resumption and periodic rekeying would be even better. Though honestly, I haven't figured out how to implement automatic rekeying for my reverse proxy yet; that's probably weekend #4.
Homelab Implementation Strategy: What Actually Works
After three weekends of trial, error, and embarrassing mistakes, here's what I learned about implementing PQC in a homelab.
Hardware Baseline: Your Raspberry Pi Can Handle This
My test environment:
- Intel i9-9900K (main server): Proxmox host, 64GB RAM, RTX 3090
- Raspberry Pi 4 Model B (edge devices): 4GB RAM, running Pi-hole, Wazuh agent
- Dell PowerEdge R940 (overkill homelab server): 1TB RAM, mostly running VMs
According to Springer's study on Raspberry Pi post-quantum implementations, Dilithium2 emerges as the most balanced option for Raspberry Pi 4 deployments. MDPI's benchmarking research confirms that ML-KEM-768 performs encapsulation/decapsulation in roughly 0.05 milliseconds on Pi 4 hardware.
My Raspberry Pi 4 running Wazuh had absolutely no problems handling ML-KEM in my testing. The bottleneck was never CPU; it was always my misconfigured TLS settings or oversized certificate chains.
The i9-9900K is obviously overkill and handles PQC without breaking a sweat. If you're running any x86_64 hardware from the last 10 years with AVX2 instructions, you'll be fine.
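A quick way to confirm the AVX2 assumption before blaming the hardware (Linux only; on other platforms check your CPU documentation):
# Does this CPU expose AVX2? liboqs and OpenSSL pick their fast code paths if so
grep -o -m1 avx2 /proc/cpuinfo || echo "no AVX2 - PQC still works, just slower"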
Installation: OpenSSL 3.5 Is Your Friend
I went through two paths: the hard way (building everything from source) and the easy way (using Caddy 2.10).
The Hard Way (OpenSSL 3.5 + oqs-provider):
# Install dependencies
sudo apt-get update
sudo apt-get install build-essential cmake git libssl-dev
# Build liboqs from source
git clone https://github.com/open-quantum-safe/liboqs.git
cd liboqs
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON ..
make -j$(nproc)
sudo make install
sudo ldconfig
# Build oqs-provider for OpenSSL 3
cd ../..
git clone https://github.com/open-quantum-safe/oqs-provider.git
cd oqs-provider
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
make -j$(nproc)
sudo make install
OpenSSL 3.5.0 (released April 2025) has native support for ML-KEM, ML-DSA, and SLH-DSA, which is fantastic. You can also use the Open Quantum Safe oqs-provider for extended algorithm support beyond the NIST standards.
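One gotcha with the hard way: building and installing oqs-provider isn't enough; you also have to activate it in your OpenSSL config. This is a minimal sketch based on the oqs-provider documentation; the file location and section names may differ on your distro:
# /etc/ssl/openssl.cnf (excerpt) - load the default and oqs providers side by side
# (the openssl_conf line must sit in the unnamed default section at the top of the file)
openssl_conf = openssl_init

[openssl_init]
providers = provider_sect

[provider_sect]
default = default_sect
oqsprovider = oqsprovider_sect

[default_sect]
activate = 1

[oqsprovider_sect]
activate = 1
With that in place, openssl list -kem-algorithms should show the Kyber/ML-KEM hybrids alongside the classical algorithms.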
The Easy Way (Caddy 2.10):
Caddy 2.10 (released April 2025) enables X25519MLKEM768 hybrid key exchange by default. Literally zero configuration required. You install Caddy, point it at your backend service, and it just works.
# Caddyfile - yes, this is the entire configuration
example.com {
    reverse_proxy localhost:8080
    # X25519MLKEM768 automatically enabled
}
After fighting with Nginx for two weekends, I switched to Caddy and had working PQC in 10 minutes. Sometimes the easy path is the right path.
The Middle Way (Nginx with OpenSSL 3.5):
If you're married to Nginx like I initially was, here's what finally worked:
# /etc/nginx/nginx.conf
http {
    ssl_protocols TLSv1.3;

    # Hybrid algorithms: PQ preferred, classical fallback
    ssl_ecdh_curve x25519_kyber768:p384_kyber768:x25519:secp384r1;

    server {
        listen 443 ssl;
        server_name lab.example.com;

        ssl_certificate /path/to/cert.pem;
        ssl_certificate_key /path/to/key.pem;

        # Session resumption: reconnecting clients skip the full handshake
        # (and the now-larger certificate chain) most of the time
        ssl_session_cache shared:SSL:10m;
        ssl_session_timeout 10m;

        location / {
            proxy_pass http://localhost:8080;
        }
    }
}
The critical part is listing x25519_kyber768 first (your PQC preference) while keeping classical algorithms like x25519 and secp384r1 as fallbacks. This is hybrid mode: clients that don't support ML-KEM gracefully fall back to X25519. One naming gotcha: the x25519_kyber768 identifier comes from the oqs-provider; if you're relying on OpenSSL 3.5's native support instead, the hybrid group is called X25519MLKEM768.
Testing Your Deployment
# Test that ML-KEM is negotiated
openssl s_client -connect lab.example.com:443 -groups x25519_kyber768
# Look for "shared point formats" and "key share" in the output
# You should see "x25519_kyber768" in the negotiated parameters
# Verify TLS 1.3 with proper cipher
curl -v --tlsv1.3 https://lab.example.com 2>&1 | grep "SSL connection"
If you see "x25519_kyber768" in the OpenSSL output, congratulations, you're running post-quantum key exchange.
Performance Reality Check: What I Measured
I set up monitoring on my Proxmox host to track TLS handshake performance before and after enabling ML-KEM.
Baseline (X25519 only):
- Handshake time: ~18ms average
- Certificate chain size: 4.2KB
- CPU usage during handshake: negligible
After ML-KEM (hybrid X25519+ML-KEM-768):
- Handshake time: ~19ms average (5.5% increase)
- Certificate chain size: 5.8KB (still under 16KB threshold)
- CPU usage: still negligible on i9-9900K
The 1ms overhead matches AWS's production measurements almost exactly. With TLS session resumption enabled (which it is by default), the overhead effectively disappears because the expensive handshake only happens once per hour instead of on every request.
On my Raspberry Pi 4 running Pi-hole with HTTPS enabled, I saw similar results, maybe 2-3ms handshake overhead, completely unnoticeable in actual browsing.
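If you want a rough version of this measurement without standing up a full monitoring stack, curl's timing variables will get you there. A quick sketch (the hostname is a placeholder, and each iteration is a fresh process, so no session reuse hides the handshake cost):
# Average TLS handshake time over 20 cold connections
# time_appconnect = TLS done, time_connect = TCP done; the difference is the handshake
for i in $(seq 1 20); do
  curl -so /dev/null -w '%{time_appconnect} %{time_connect}\n' https://lab.example.com
done | awk '{ sum += $1 - $2 } END { printf "avg TLS handshake: %.1f ms\n", sum / NR * 1000 }'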
The Trade-Offs Nobody Talks About
After implementing PQC across my homelab, here are the nuanced trade-offs I wish someone had explained upfront.
Hybrid vs Pure Post-Quantum: The Compatibility Dilemma
Hybrid mode (what everyone actually uses in production):
- Combines classical X25519 with ML-KEM-768
- Security: You're protected as long as either algorithm remains unbroken
- Compatibility: Falls back to classical if client doesn't support PQC
- Performance: Minimal overhead (0.3-0.4ms according to MDPI research)
Pure PQC mode (what I accidentally tried first):
- ML-KEM-768 only, no classical fallback
- Security: Maximum future-proofing, but if ML-KEM gets broken before quantum computers arrive, you're vulnerable to classical attacks
- Compatibility: Breaks connections with any client that doesn't support the specific PQC algorithm
- Performance: Slightly faster than hybrid (no double key exchange)
Decision matrix:
- Use Hybrid When: You need real-world compatibility (99% of use cases)
- Use Pure PQC When: All your clients are controlled and support PQC, and you want to test future-only scenarios
- Never Use Pure Classical: That's the whole point of this migration
In my homelab, I initially tried pure ML-KEM-768 because I assumed all my devices supported it. Wrong. My Roku TV's ancient TLS stack doesn't support any PQC algorithms, and it couldn't reach my Jellyfin server until I enabled hybrid mode.
Certificate Size: The 16KB Threshold Effect
This one caught me completely by surprise. When your certificate chain exceeds 16KB, TLS triggers record fragmentation, which interacts badly with TCP congestion control.
Classical RSA-2048 certificate chain:
- Leaf cert: ~1,200 bytes (with RSA-2048 signature)
- Intermediate cert: ~1,500 bytes
- Total: ~2,700 bytes
ML-DSA-44 certificate chain:
- Leaf cert: ~3,600 bytes (with ML-DSA-44 signature)
- Intermediate cert: ~4,800 bytes
- Total: ~8,400 bytes
If you add a second intermediate CA (like many corporate environments do), you're suddenly at 12-13KB. Add a timestamp or OCSP response and you've crossed the 16KB threshold. That's when the mysterious 200ms delay appears.
My solution: I switched to Falcon-512 for signatures instead of ML-DSA on my Raspberry Pi 4 endpoints (Falcon is slated to become FN-DSA/FIPS 206, but it isn't a finalized standard yet). Falcon signatures are only ~666 bytes compared to ML-DSA-44's 2,420 bytes, keeping my certificate chains comfortably under 8KB. The trade-off is that Falcon signing uses roughly 2x the energy of Dilithium, but on a device that mostly verifies signatures (like my Pi-hole), that's acceptable.
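For the curious, generating a Falcon-512 self-signed certificate with an oqs-provider-enabled OpenSSL looks roughly like this. The algorithm name comes from the oqs-provider docs, and the subject is a placeholder for my internal-CA naming:
# Self-signed Falcon-512 certificate for an internal endpoint (oqs-provider required)
openssl req -x509 -new -newkey falcon512 -nodes \
  -keyout falcon512_key.pem -out falcon512_cert.pem \
  -days 365 -subj "/CN=pihole.lab.internal"
# Confirm the signature algorithm and eyeball the certificate size
openssl x509 -in falcon512_cert.pem -noout -text | grep 'Signature Algorithm' | head -1
wc -c falcon512_cert.pem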
ARM Performance Considerations: The Raspberry Pi Experience
Research from Brazilian Symposium on Information Security shows that ARM NEON instructions provide 1.16-1.38x speed-ups for Kyber operations on Raspberry Pi 4. But there's a catch: you need to enable NEON optimizations explicitly when building liboqs.
cmake -DCMAKE_BUILD_TYPE=Release \
      -DBUILD_SHARED_LIBS=ON \
      -DOQS_USE_ARM_NEON_INSTRUCTIONS=ON \
      ..
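It's also worth a two-second check that the kernel actually reports the vector instructions before you credit (or blame) the flag. A sketch: 32-bit Pi OS lists "neon" in the Features line, 64-bit lists "asimd":
# Confirm the Pi's kernel reports NEON/ASIMD before rebuilding liboqs with the flag
grep -o -m1 -E 'neon|asimd' /proc/cpuinfo || echo "no NEON reported - expect reference-C speeds"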
Without NEON optimizations, my Raspberry Pi 4 was roughly 45-50x slower than my i9-9900K for ML-KEM operations. With NEON enabled, it dropped to about 35-40x slower, still significant, but the absolute numbers are so small (0.05ms vs 0.001ms) that it genuinely doesn't matter for homelab use cases.
The Raspberry Pi Zero W, however, is a different story. Its ARM11 core (the same one the original 2012 Raspberry Pi used) simply can't handle PQC at reasonable speeds. If you're using original Pi Zero hardware, stick to classical cryptography or upgrade to a Pi Zero 2 W (which has a quad-core ARM Cortex-A53 and works fine).
Early Adoption Risks: What Could Go Wrong
I'm running production PQC on infrastructure that hosts my password manager (Bitwarden), my file storage (TrueNAS), and my home automation (Home Assistant). Here are the risks I'm consciously accepting:
Algorithm vulnerability: ML-KEM is relatively new compared to RSA's 40+ years of cryptanalysis. If someone discovers a classical attack against lattice problems, I'm protected by hybrid mode (classical algorithms still work). But if there's an implementation bug in liboqs or SymCrypt, I might not notice until it's too late.
Migration churn: NIST selected Kyber in Round 3, then renamed it to ML-KEM and made minor modifications for FIPS 203. AWS is deprecating CRYSTALS-Kyber support in 2026 to push everyone toward the standardized ML-KEM. If NIST makes changes in future rounds or discovers issues with current standards, I'll need to migrate again.
Compatibility regressions: Every OS update or browser release could potentially break PQC support. Chrome 124 worked great with my Nginx setup; then Chrome 131 switched to ML-KEM and I had to update my supported-groups list. Caddy 2.10 handles this automatically, but if you're managing OpenSSL configurations manually, prepare for maintenance.
Increased attack surface: I'm running experimental cryptographic code (liboqs) in a privileged position (TLS termination). If there's a memory safety bug in the PQC implementations, that's a potential remote code execution vulnerability. The Trail of Bits security audit in April 2025 found no critical issues, but that doesn't mean future bugs won't appear.
My personal risk tolerance says these trade-offs are acceptable for homelab experimentation and learning. If I were running a business or handling other people's data, I'd probably wait another 1-2 years for the ecosystem to mature.
Step-by-Step Migration for Your Homelab
Here's the process I'd follow if I had to start over, knowing what I know now:
Phase 1: Test in an Isolated VM (Weekend 1)
# Spin up Ubuntu 24.04 VM in Proxmox
# 2 CPU cores, 4GB RAM is plenty
# Install Caddy 2.10 (the easy button)
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update
sudo apt install caddy
# Create test configuration
sudo tee /etc/caddy/Caddyfile <<EOF
test.yourdomain.com {
    reverse_proxy localhost:8080
    # PQC enabled by default, no config needed
}
EOF
# Start a simple test backend
python3 -m http.server 8080 &
# Restart Caddy
sudo systemctl restart caddy
# Test from another machine
# Caddy 2.10 speaks the standardized hybrid, so ask for it by its OpenSSL 3.5 name
openssl s_client -connect test.yourdomain.com:443 -groups X25519MLKEM768
If you see X25519MLKEM768 in the negotiated output (or x25519_kyber768 when testing an oqs-provider setup), you've successfully deployed PQC. Total time: maybe 30 minutes.
Phase 2: Generate Hybrid Certificates (Weekend 1 continued)
For most homelabs, you're probably using Let's Encrypt. The good news is that Caddy handles ACME automatically and will request appropriate certificates. The certificate itself is still RSA or ECDSA (Let's Encrypt doesn't issue ML-DSA certificates yet), but the key exchange uses ML-KEM.
If you're using your own internal CA (which I do for *.lab.internal domains), generating hybrid certificates is more complex:
# This is theoretical - ML-DSA cert issuance isn't widely supported yet
# Most CAs will issue hybrid certs starting in 2026
# For now, use classical certificates with PQC key exchange
# The key exchange (ML-KEM) protects the session even if cert uses RSA/ECDSA
The hybrid approach means your certificates can be classical (RSA-2048 or ECDSA-256) while your key exchange uses ML-KEM-768. This provides quantum-resistant forward secrecy even though the authentication (certificate signature) isn't quantum-safe yet.
Cloudflare projects that post-quantum certificates will arrive in production by 2026, but they won't be enabled by default initially. For homelabs in 2025, classical certificates + PQC key exchange is the pragmatic approach.
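You can confirm this split from the command line: the negotiated group should be the ML-KEM hybrid while the certificate signature stays classical. The exact output labels vary by OpenSSL version, so this sketch greps loosely (the hostname is a placeholder):
# Key exchange should show the hybrid group; the peer signature stays RSA/ECDSA for now
echo | openssl s_client -connect test.yourdomain.com:443 2>/dev/null \
  | grep -i -E 'negotiated|temp key|peer signature'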
Phase 3: Roll Out to Production Services (Weekend 2)
Prioritize services based on SNDL risk:
- Highest Priority (Day 1):
  - Password manager (Bitwarden/Vaultwarden)
  - Backup systems (TrueNAS, BorgBackup)
  - Remote access (VPN, SSH)
- Medium Priority (Day 2-3):
  - Personal cloud storage (Nextcloud)
  - Self-hosted services with personal data
  - Internal wikis or documentation
- Lower Priority (Day 4+):
  - Public-facing services
  - Entertainment services (Jellyfin, Plex)
  - Monitoring dashboards (Grafana)
For each service:
# Option A: Front with Caddy reverse proxy (easiest)
service.yourdomain.com {
    reverse_proxy localhost:8080
}
# Option B: Configure service's own TLS stack
# (Requires service to support OpenSSL 3.5 or similar)
# Option C: Use Nginx if you must
# (Refer to configuration in "Homelab Implementation Strategy" section)
Test each service thoroughly before moving to the next. My mistake was migrating everything simultaneously, which made debugging impossible when things broke.
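A small loop saves a lot of the per-service tedium. This is a sketch with placeholder hostnames; swap the group name to x25519_kyber768 if you're testing against an oqs-provider stack:
# Check each migrated service negotiates the hybrid group (handshake fails if it can't)
for host in vaultwarden.lab.internal truenas.lab.internal nextcloud.lab.internal; do
  printf '%-30s ' "$host"
  echo | openssl s_client -connect "$host:443" -groups X25519MLKEM768 2>/dev/null \
    | grep -i -m1 -E 'negotiated|temp key' || echo "no PQC hybrid negotiated"
done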
Phase 4: Monitor and Iterate (Weekend 3+)
# Set up monitoring for TLS handshake times
# I use Prometheus + Grafana with ssl_exporter
# Watch for:
# - Handshake latency spikes (>50ms is suspicious)
# - Connection failures (incompatible clients)
# - Certificate expiration (PQC or not, certs still expire)
# Test regularly with different clients:
# - Modern browser (Chrome 124+, Firefox 128+)
# - Mobile devices (iOS 17.4+, Android 14+)
# - IoT devices (many don't support PQC yet)
# - API clients (curl, Python requests, etc.)
The Cloudflare PQC test site is useful for checking if your browser supports post-quantum cryptography. OQS also runs a public test server that supports multiple PQC algorithms.
What I Learned the Hard Way
After three weekends and probably 30+ hours of work, here are my key lessons:
1. Hybrid mode is non-negotiable. Pure PQC breaks too many clients, and it offers no real security advantage over hybrid, which already includes the quantum-resistant exchange. Always configure fallback options.
2. Certificate size matters more than I expected. That 16KB threshold is real. Keep your certificate chains under 12KB to be safe. Use Falcon instead of Dilithium if you need smaller signatures.
3. Start with Caddy, graduate to Nginx later. I wasted an entire weekend fighting Nginx because I assumed my experience with classical TLS would transfer. It doesn't. Caddy abstracts away the complexity and gives you working PQC in minutes.
4. Your Raspberry Pi is more capable than you think. The 0.05ms overhead on Pi 4 is completely negligible. Don't use hardware limitations as an excuse to delay PQC adoption.
5. Browser compatibility is better than expected. Chrome 124+, Edge 124+, and Firefox 128+ all support X25519Kyber768 or ML-KEM. Safari support is coming (iOS 17.4+ already has PQC for iMessage). The only clients that failed were my Roku TV and an old Android tablet running Android 10.
6. Performance is genuinely not the bottleneck. I spent so much time worrying about CPU overhead and it turned out to be irrelevant. The real problems were configuration mistakes and protocol compatibility.
7. The Store Now, Decrypt Later threat is real enough to act on. I procrastinated on PQC for months because "quantum computers don't exist yet." But when I actually calculated how long my backups need to remain confidential (10+ years) and how long migration would take (already spent 3 weekends, probably need 3 more), the math says I'm already behind schedule.
After Three Weekends of Breaking Things, Here's What Actually Matters
If you're running a homelab in 2025, you should start experimenting with post-quantum cryptography now, not because quantum computers are imminent, but because the migration takes longer than you think and your encrypted data is vulnerable today to SNDL attacks.
The three things that matter most:
1. Use hybrid mode everywhere. X25519+ML-KEM-768 gives you both classical and quantum protection. It's supported by Chrome, Edge, Firefox, Caddy, Nginx, and OpenSSL 3.5. There's no good reason to avoid it.
2. Test with Caddy 2.10 first. Spend 30 minutes getting it working, learn what PQC feels like in practice, then migrate to your preferred web server if needed. The learning curve is drastically shorter.
3. Don't wait for perfect standards. NIST finalized ML-KEM in August 2024. AWS deployed it in May 2025. Google deployed it to billions of users in Chrome 124/131. The ecosystem has moved past the experimental phase. It's production-ready enough for homelabs.
My homelab now handles TLS with hybrid X25519+ML-KEM-768 for key exchange, classical RSA-2048 certificates (for now), and a documented migration plan for when ML-DSA certificates become widely available in 2026-2027. It took three weekends, multiple debugging sessions, and one embarrassing moment where I locked myself out of my own Bitwarden instance because the Caddy config had a typo.
But now my encrypted backups, my self-hosted password manager, and my personal data are protected against quantum computers that don't even exist yet. And when those quantum computers finally arrive in 2033 or 2038 or whenever, I'll be able to sleep a little better knowing I acted before the threat materialized instead of after.
The quantum future is coming. Your homelab can be ready for it.
References
- NIST FIPS 203: ML-KEM (Module-Lattice-Based Key-Encapsulation Mechanism) (August 2024)
  - NIST's finalized standard for post-quantum key encapsulation
- NIST FIPS 204: ML-DSA (Module-Lattice-Based Digital Signature Algorithm) (August 2024)
  - NIST's finalized standard for post-quantum digital signatures
- NIST FIPS 205: SLH-DSA (Stateless Hash-Based Digital Signature Algorithm) (August 2024)
  - NIST's finalized hash-based signature standard
- Global Risk Institute: Quantum Threat Timeline Report 2024 (December 2024)
  - Expert survey on cryptographically-relevant quantum computer (CRQC) timeline
- RAND Corporation: Securing Communications in the Quantum Computing Age (April 2020)
  - Analysis of quantum threat timelines and migration strategies
- Craig Gidney (Google Quantum AI): RSA Factorization with Less Than a Million Qubits (May 2025)
  - 95% reduction in qubit requirements for breaking RSA-2048
- Craig Gidney & Martin Ekerå: RSA Factorization with 20 Million Qubits (May 2019)
  - Original estimate for quantum computing attack feasibility
- White House NSM-10: Migrating to Post-Quantum Cryptography (May 2022)
  - Federal government policy mandating PQC migration by 2035
- MDPI: Constrained Device Performance Benchmarking with PQC (May 2024)
  - Raspberry Pi 4 benchmarks showing 0.05ms performance for ML-KEM-768
- MDPI: Practical Performance Benchmark Across Heterogeneous Environments (May 2025)
  - TLS handshake overhead analysis showing 0.3-0.4ms additional latency
- AWS Security Blog: ML-KEM Post-Quantum TLS Support (May 2025)
  - Production deployment metrics showing 0.05% performance impact
- arXiv: Performance Analysis for Consumer Electronics (June 2025)
  - Raspberry Pi 4 performance analysis (45-50x slower than Apple M4)
- Springer: Efficient Implementation of Post-Quantum Digital Signatures on Raspberry Pi (2025)
  - Dilithium2 identified as optimal choice for Raspberry Pi 4
- OpenSSL Foundation: Post-Quantum Algorithms in OpenSSL 3.5 (April 2025)
  - OpenSSL 3.5.0 release with native ML-KEM support
- Open Quantum Safe: liboqs Library (Ongoing)
  - Core C library for PQC algorithm implementations
- Open Quantum Safe: oqs-provider for OpenSSL 3 (Ongoing)
  - OpenSSL 3 provider enabling post-quantum algorithms
- Google Chrome: CECPQ2 Project (ML-KEM Deployment) (April 2024)
  - Chrome 124 deployment to billions of users
- The Hacker News: Chrome Switches to ML-KEM (November 2024)
  - Chrome 131 migration from experimental Kyber to standardized ML-KEM
- Cloudflare Blog: The TLS Post-Quantum Experiment (October 2019)
  - NTRU-HRSS vs SIKE performance comparison
- Cloudflare Blog: State of the Post-Quantum Internet 2024 (August 2024)
  - 16% of global HTTPS traffic using post-quantum cryptography
- Signal: Quantum Resistance and the Signal Protocol (PQXDH) (September 2023)
  - Post-Quantum Extended Diffie-Hellman protocol deployment
- Signal: Triple Ratchet (SPQR) (October 2025)
  - Sparse Post-Quantum Ratchet for ongoing message protection
- Apple Security Research: iMessage with PQ3 (February 2024)
  - Level 3 security with automatic rekeying every 50 messages
- Linuxiac: Caddy 2.10 Web Server Debuts (April 2025)
  - Caddy 2.10 with default X25519MLKEM768 support
- Linode Documentation: Post-Quantum Encryption with NGINX (2025)
  - Practical guide to configuring Nginx with ML-KEM
- NIST PQC Conference: Impact of Data-Heavy Post-Quantum TLS 1.3 (2024)
  - Analysis of 16KB certificate chain threshold and TCP fragmentation
- IACR ePrint: Post-Quantum Authentication in TLS 1.3 (2020)
  - Performance study showing Dilithium and Falcon competitive with classical
- arXiv: Post-Quantum Cryptography for Internet of Things (January 2024)
  - IoT survey showing Kyber512/768 optimal for constrained devices
- Brazilian Symposium on Information Security: Kyber and Saber on ARMv8 (2022)
  - ARM NEON optimizations providing 1.16-1.38x speed-ups
- Trail of Bits: liboqs Security Review (April 2025)
  - Independent security audit finding no critical vulnerabilities