This is Part 3 of the PQC Migration Playbook. Part 1 covered cryptographic discovery. Part 2 covered risk assessment. This final part covers remediation -- the actual work of deploying post-quantum algorithms -- and the architectural principle of crypto-agility that ensures you never face a migration this difficult again.
You have your CBOM. You have your risk scores. You have your Board Number and leadership buy-in. Now it is time to migrate.
The Remediation Strategy
Remediation is not a single action. It is a phased program that replaces quantum-vulnerable cryptography across your entire infrastructure while maintaining operational continuity. The key principle is: deploy hybrid first, validate thoroughly, then expand coverage.
Phase 1: Hybrid Key Exchange on TLS (Months 1-6)
This is where you get the most risk reduction with the least operational disruption. Deploying hybrid ML-KEM key exchange on TLS endpoints is a server-side configuration change that provides immediate HNDL protection.
What hybrid mode means: Every TLS 1.3 handshake negotiates both a classical key exchange (X25519) and a post-quantum key exchange (ML-KEM-768). The session key is derived from both shared secrets. An attacker must break both X25519 and ML-KEM to compromise the session. This means:
- If a quantum computer breaks X25519 but ML-KEM holds, the session is secure.
- If an unexpected attack breaks ML-KEM but X25519 holds, the session is secure.
- Only if both algorithms fail simultaneously is the session compromised.
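The both-secrets property above can be sketched in a few lines. This is a minimal illustration, not the actual TLS 1.3 key schedule: it simply shows that the derived key depends on both shared secrets, so an attacker who recovers only one learns nothing. Function and label names are illustrative assumptions.

```typescript
import { hkdfSync } from "crypto";

// Hedged sketch: derive a session key from BOTH shared secrets, as hybrid
// key exchange does. If either input stays secret, the output stays secret.
// This is NOT the TLS 1.3 KDF -- just the combining principle.
function deriveHybridSessionKey(
  classicalSecret: Buffer, // e.g. X25519 shared secret
  pqSecret: Buffer         // e.g. ML-KEM-768 shared secret
): Buffer {
  // Concatenate the secrets, then run HKDF so the result depends on both.
  const ikm = Buffer.concat([classicalSecret, pqSecret]);
  return Buffer.from(
    hkdfSync("sha256", ikm, Buffer.alloc(0), "hybrid-session-key", 32)
  );
}
```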
Where to deploy first, in priority order:
- Internet-facing web servers and API gateways (highest interception risk)
- VPN concentrators and remote access gateways
- Email servers (SMTP, IMAP with STARTTLS)
- Internal load balancers and reverse proxies
- Service-to-service communication (east-west traffic)
After deployment, validate:
- Confirm that hybrid key exchange is being negotiated by checking the TLS session details. In Chrome DevTools, the Security tab shows the key exchange algorithm for each connection.
- Monitor for TLS handshake failures. Some middleboxes, firewalls, and legacy clients may not handle the larger ClientHello message that hybrid key exchange produces (approximately 1.2 KB of additional key share data).
- Measure latency impact. ML-KEM key exchange is computationally faster than ECDH, but the larger key shares add bytes to the handshake. In practice, the net latency impact is negligible on modern networks. On bandwidth-constrained links, you may see a few milliseconds of additional latency.
# Nginx configuration with hybrid + classical fallback
ssl_conf_command Groups X25519MLKEM768:X25519:P-256;
The priority order ensures that clients supporting hybrid will negotiate hybrid, while legacy clients seamlessly fall back to classical key exchange.
Phase 2: Certificate and Signature Migration (Months 6-24)
Signature migration is more complex than key exchange migration because it requires coordination across the entire trust chain. You cannot issue an ML-DSA server certificate unless your CA, intermediate CA, and all relying parties (browsers, clients, certificate validators) support ML-DSA verification.
Internal PKI migration:
For services where you control both the server and all clients, you can migrate signatures immediately:
- Update your internal CA to issue ML-DSA-65 certificates alongside existing ECDSA certificates.
- Configure servers to present both certificates (dual-stack mode).
- Update internal clients to prefer ML-DSA certificates.
- Once all clients support ML-DSA, deprecate ECDSA certificates.
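Dual-stack mode can mirror the multi-certificate configuration servers already use today for RSA + ECDSA. The fragment below is hypothetical: it assumes a TLS stack and OpenSSL build that understand ML-DSA certificate chains, which is not yet standard; the file paths are placeholders.

```nginx
# Hypothetical dual-stack server config: present both an ML-DSA-65 and an
# ECDSA chain, letting the handshake pick whichever the client can verify.
# Requires ML-DSA support in the underlying TLS library (an assumption).
ssl_certificate     /etc/ssl/certs/server-mldsa65.pem;
ssl_certificate_key /etc/ssl/private/server-mldsa65.key;
ssl_certificate     /etc/ssl/certs/server-ecdsa.pem;
ssl_certificate_key /etc/ssl/private/server-ecdsa.key;
```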
For public-facing services where you do not control the client population:
- Monitor browser and client support for ML-DSA certificate verification. As of early 2026, experimental support exists in Chrome and Firefox but is not enabled by default.
- Issue dual certificates from CAs that support ML-DSA (several major CAs have begun offering this).
- Implement certificate selection logic that serves ML-DSA certificates to clients that support them and ECDSA certificates to clients that do not.
- Track the percentage of connections using ML-DSA certificates over time.
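The certificate selection logic in the list above can be sketched as follows. The `"mldsa65"` identifier and the data shapes are illustrative assumptions; a real server would read the client's advertised algorithms from the ClientHello signature_algorithms extension.

```typescript
// Hedged sketch: serve the ML-DSA chain to clients that advertise support
// for it, and fall back to ECDSA otherwise. Identifiers are placeholders.
type CertChain = { algorithm: string; pem: string };

function selectCertificate(
  clientSigAlgs: string[],
  mlDsaChain: CertChain,
  ecdsaChain: CertChain
): CertChain {
  return clientSigAlgs.includes("mldsa65") ? mlDsaChain : ecdsaChain;
}
```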
Code signing has different constraints than TLS:
- Code signatures persist for the lifetime of the signed artifact. Firmware signed today may be verified 10-15 years from now.
- Verification happens offline, so the full certificate chain must be available at verification time.
- Many code signing frameworks support dual signatures, allowing you to sign with both ECDSA and ML-DSA.
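A dual-signature verifier can be sketched like this. Types and names are illustrative, and the "accept if any trusted signature verifies" policy is one option; a stricter deployment can require every present signature to verify.

```typescript
// Sketch: an artifact carries both an ECDSA and an ML-DSA signature; accept
// it when at least one signature from a still-trusted algorithm verifies.
type SignatureEntry = { algorithm: string; signature: string };

function verifyDualSigned(
  entries: SignatureEntry[],
  trustedAlgorithms: Set<string>,
  verifyOne: (entry: SignatureEntry) => boolean
): boolean {
  return entries.some(
    (e) => trustedAlgorithms.has(e.algorithm) && verifyOne(e)
  );
}
```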
Phase 3: Application-Layer Cryptography (Months 12-36)
Application-layer cryptographic operations are the most diverse and require the most case-by-case analysis:
JWT tokens: If your JWTs use RS256 (RSASSA-PKCS1-v1_5 with SHA-256) or ES256 (ECDSA with P-256 and SHA-256), plan the migration:
- For internal APIs: Switch to a PQC-compatible JWT signing algorithm when library support is available. In the interim, keep token lifetimes short (minutes, not days) to minimize the integrity shelf life.
- For public APIs: Coordinate with API consumers. Publish a deprecation timeline for classical-only JWT verification.
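The transition policy above amounts to an algorithm allowlist. In this sketch, `"ML-DSA-65"` as a JOSE `alg` value is an assumption -- final JOSE identifiers for PQC signatures were still being registered at the time of writing.

```typescript
// Transition-period JWT policy sketch: sign new tokens with one preferred
// algorithm, but accept a short allowlist while consumers migrate.
const SIGNING_ALG = "ML-DSA-65"; // assumed future JOSE identifier
const ACCEPTED_ALGS = new Set<string>([SIGNING_ALG, "ES256"]);

function isAcceptedJwtAlg(headerAlg: string): boolean {
  // Enforce an allowlist -- never derive the key type from the token's own
  // "alg" header without checking it first (classic JWT confusion pitfall).
  return ACCEPTED_ALGS.has(headerAlg);
}
```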
S/MIME email: Messages encrypted with RSA key transport are directly vulnerable to HNDL attacks. Options:
- Migrate to ML-KEM-based S/MIME when client support is available.
- For highly sensitive communications, consider switching to end-to-end encrypted platforms that already support PQC (Signal deployed hybrid post-quantum key agreement based on Kyber, since standardized as ML-KEM, in 2023).
- Evaluate whether the confidentiality shelf life of your email content justifies the urgency of migration.
Data-at-rest encryption typically uses symmetric algorithms (AES-256-GCM), which are quantum-resistant with sufficient key sizes. However, the key management system may use RSA or ECDH for key wrapping and key transport. Ensure your KMS supports PQC key wrapping.
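The division of labor described above -- quantum-resistant symmetric bulk encryption, with only the key-wrapping step depending on the KMS -- can be sketched with envelope encryption. The `wrapDek`/`unwrapDek` callbacks stand in for KMS calls and are assumptions, not a real KMS API.

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "crypto";

// Envelope-encryption sketch: bulk data uses a random symmetric DEK
// (AES-256-GCM, quantum-resistant at this key size). Only the DEK wrap step
// touches the KMS -- that is the piece that must become PQC-capable.
function encryptEnvelope(plaintext: Buffer, wrapDek: (dek: Buffer) => Buffer) {
  const dek = randomBytes(32);
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", dek, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  return { ciphertext, iv, tag: cipher.getAuthTag(), wrappedDek: wrapDek(dek) };
}

function decryptEnvelope(
  env: { ciphertext: Buffer; iv: Buffer; tag: Buffer; wrappedDek: Buffer },
  unwrapDek: (wrapped: Buffer) => Buffer
): Buffer {
  const dek = unwrapDek(env.wrappedDek);
  const decipher = createDecipheriv("aes-256-gcm", dek, env.iv);
  decipher.setAuthTag(env.tag);
  return Buffer.concat([decipher.update(env.ciphertext), decipher.final()]);
}
```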
Custom protocols:
Any application that implements its own cryptographic protocol (key exchange, encryption, signing) needs a dedicated migration plan. This is where the CBOM's dependency mapping is essential -- it tells you exactly which applications have custom cryptographic implementations.
Phase 4: Embedded Systems and Legacy Infrastructure (Months 24-60+)
This is the long tail of the migration. Embedded systems, hardware security modules, and legacy protocols present unique challenges:
Firmware-upgradeable devices: Push firmware updates that include PQC-compatible cryptographic libraries. Prioritize devices that handle sensitive data or are deployed in high-risk environments.
Non-upgradeable devices: Devices with hardcoded cryptographic implementations that cannot be updated via firmware present the hardest migration challenge. Options:
- Replace the hardware. This is expensive but may be the only option for devices with long remaining operational lifetimes.
- Network isolation. Place the device behind a PQC-capable gateway that terminates quantum-vulnerable connections and re-encrypts with PQC before reaching the wider network.
- Accept the risk with documentation. For devices with short remaining operational lifetimes or low data sensitivity, formal risk acceptance may be appropriate.
SCADA and OT protocols: Industrial control systems often use legacy protocols with fixed cryptographic parameters. Migration may require coordination with equipment vendors, system integrators, and regulatory bodies. Start vendor engagement early and plan for multi-year timelines.
Building Crypto-Agility
The post-quantum migration should be the last time your organization faces a multi-year cryptographic migration program. The principle that makes this possible is crypto-agility: the ability to change cryptographic algorithms without changing application code.
What Crypto-Agility Looks Like
Configuration-driven algorithm selection: Cryptographic algorithms are specified in configuration, not in code. Switching from ML-KEM-768 to ML-KEM-1024 (or a future algorithm) requires a configuration change, not a code change.
# crypto-config.yaml
tls:
  key_exchange:
    primary: X25519MLKEM768
    fallback: X25519
  signature:
    primary: ML-DSA-65
    fallback: ECDSA-P256
jwt:
  signing_algorithm: ML-DSA-65
  verification_algorithms:
    - ML-DSA-65
    - ES256  # backward compatibility
Abstract cryptographic interfaces: Application code does not call specific algorithm implementations. It calls an abstraction layer that delegates to the configured algorithm.
// Crypto-agile interface
interface KeyExchange {
generateKeyPair(): Promise<KeyPair>;
encapsulate(publicKey: Uint8Array): Promise<EncapsulationResult>;
decapsulate(
secretKey: Uint8Array,
ciphertext: Uint8Array
): Promise<Uint8Array>;
}
// Algorithm selection is configuration-driven
const kex = getCryptoProvider().getKeyExchange(); // Returns ML-KEM, ECDH, or hybrid
const { ciphertext, sharedSecret } = await kex.encapsulate(peerPublicKey);
Algorithm negotiation in protocols: Use protocol mechanisms that allow peers to negotiate algorithms dynamically, rather than hardcoding algorithm choices. TLS cipher suite negotiation is the model for this pattern.
Versioned cryptographic artifacts: When you sign or encrypt data, include metadata that identifies the algorithm used. This allows verifiers and decryptors to select the correct algorithm without hardcoded assumptions.
{
"algorithm": "ML-DSA-65",
"algorithmVersion": "FIPS-204-final",
"signature": "base64...",
"signedAt": "2026-03-17T00:00:00Z"
}
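A verifier can use that algorithm metadata to dispatch to the right implementation from a registry, rather than hardcoding one algorithm. The sketch below assumes illustrative type names; the registry entries would be filled in per supported algorithm.

```typescript
// Sketch: pick a verifier from a registry keyed by the artifact's own
// algorithm metadata. Unknown algorithms fail closed.
type SignedArtifact = { algorithm: string; signature: string; payload: string };
type Verifier = (artifact: SignedArtifact) => boolean;

const verifierRegistry = new Map<string, Verifier>();

function verifyArtifact(artifact: SignedArtifact): boolean {
  const verify = verifierRegistry.get(artifact.algorithm);
  if (!verify) {
    throw new Error(`No verifier registered for ${artifact.algorithm}`);
  }
  return verify(artifact);
}
```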
Why Crypto-Agility Matters Beyond PQC
The need for crypto-agility does not end with the PQC migration:
- Algorithm deprecation: As cryptanalysis advances, algorithms that are considered safe today may be weakened or broken. AES-256 is not going to be broken by a quantum computer, but hash function weaknesses (like the SHA-1 collision in 2017) demonstrate that cryptographic assumptions can change.
- New standards: NIST's additional signature scheme evaluation is ongoing. New algorithms will be standardized in coming years. Organizations with crypto-agile architecture can adopt them smoothly.
- Regulatory changes: Compliance requirements for cryptographic algorithms evolve. CNSA 2.0 mandates specific algorithms and timelines. PCI-DSS, HIPAA, and other frameworks will update their cryptographic requirements.
- Performance optimization: As hardware and library implementations mature, you may want to switch between algorithm variants (e.g., from SLH-DSA to FN-DSA for smaller signatures) without a major migration effort.
Tracking Migration Progress
Establish metrics and reporting from day one:
Migration coverage: Percentage of cryptographic assets in each state:
- Quantum-vulnerable (classical only)
- Hybrid (classical + PQC)
- Fully PQC
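Rolling up the three states above from a CBOM export is straightforward. The `pqcState` field and its values are illustrative assumptions; map them from whatever your CBOM schema actually records.

```typescript
// Sketch: compute migration coverage percentages from a CBOM asset list.
type CbomAsset = { id: string; pqcState: "classical" | "hybrid" | "pqc" };

function migrationCoverage(assets: CbomAsset[]) {
  const total = assets.length || 1; // avoid division by zero on empty input
  const pct = (state: CbomAsset["pqcState"]) =>
    (100 * assets.filter((a) => a.pqcState === state).length) / total;
  return { classical: pct("classical"), hybrid: pct("hybrid"), pqc: pct("pqc") };
}
```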
CBOM drift: Number of new quantum-vulnerable assets introduced since the last scan. This catches regressions and new deployments that do not follow the PQC standard.
Risk reduction velocity: Number of high-risk assets remediated per month. This tells you whether your migration pace is sufficient to meet your Mosca Inequality deadline.
Vendor PQC readiness: Percentage of third-party vendors and services that support PQC. Track this as a leading indicator -- vendor delays will affect your migration timeline.
The End State
The post-quantum migration is complete when:
- 100% of cryptographic assets in your CBOM use PQC or hybrid algorithms.
- Your HNDL Score is below 25 (Low risk tier).
- Your Mosca Inequality is satisfied with margin (X + Y is well below Z).
- Your architecture is crypto-agile, enabling future algorithm changes via configuration.
- Continuous monitoring is in place to detect and remediate cryptographic drift.
- All vendor dependencies have confirmed PQC support timelines.
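The Mosca condition in the checklist above reduces to simple arithmetic, which makes it easy to track as a metric:

```typescript
// Mosca Inequality as code: X = migration time (years), Y = confidentiality
// shelf life of the data, Z = estimated years until a cryptographically
// relevant quantum computer. Positive margin means X + Y is below Z;
// negative means you are already too late for that data.
function moscaMarginYears(x: number, y: number, z: number): number {
  return z - (x + y);
}
```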
Conclusion
The PQC migration is the largest coordinated change in cryptographic infrastructure since the introduction of public-key cryptography itself. It touches every protocol, every certificate, every key, and every application that uses asymmetric cryptography. The scope is daunting.
But the path is clear. Discover what you have (Part 1). Assess the risk (Part 2). Remediate with hybrid deployment and build crypto-agility for the future (this article). The standards are finalized. The implementations are mature. The tools exist.
Use Q by Wentzel's HNDL calculator to quantify your starting point. Consult the algorithm reference to understand your migration targets. And start today -- because the harvest is already underway, and every day of delay is a day of data that may eventually be read.