Part 1 of 3

PQC Migration Playbook, Part 1: Cryptographic Discovery

Ryan Wentzel · Updated March 29, 2026 · 8 min read
pqc · migration · cbom · discovery · series


This is Part 1 of the PQC Migration Playbook, a three-part series covering the complete lifecycle of a post-quantum cryptography migration. Part 1 covers discovery and inventory. Part 2 covers risk assessment. Part 3 covers remediation and crypto-agility.

Every post-quantum migration begins with the same question: what cryptography are we actually running? The answer is almost never what organizations expect. Most enterprises have a general sense that they use TLS, maybe some RSA certificates, and "whatever the cloud provider does." But the reality is orders of magnitude more complex. A typical mid-size organization has thousands of cryptographic assets spread across applications, infrastructure, third-party services, embedded systems, and shadow IT. Until you can see all of them, you cannot plan a migration.

This is the discovery phase, and it is the foundation everything else builds on.

Why Discovery Comes First

The temptation is to skip ahead. Security teams read about ML-KEM, see that their web server supports it, upgrade the TLS configuration, and declare the migration underway. But upgrading one TLS endpoint while ignoring the other 500 services, 3,000 certificates, and an unknown number of application-layer cryptographic operations creates a false sense of progress.

Discovery is the discipline of seeing the full picture before acting. It answers three fundamental questions:

  1. What cryptographic algorithms, protocols, keys, and certificates exist in our infrastructure?
  2. Which of those assets are quantum-vulnerable?
  3. What applications and data flows depend on each asset?

Without answers to all three questions, your migration plan has blind spots. Blind spots become vulnerabilities. Vulnerabilities become breaches.

The Discovery Process

Phase 1: Network-Level Discovery

Start with what is visible on the wire. Network-level discovery scans your infrastructure to identify every endpoint that speaks TLS, SSH, IPsec, or other cryptographic protocols, then records the algorithms and parameters in use.

What to capture for each TLS endpoint:

  • IP address and hostname
  • TLS protocol version (TLS 1.2, TLS 1.3)
  • Supported cipher suites (ordered by preference)
  • Negotiated key exchange algorithm (ECDHE-P256, X25519, RSA, X25519MLKEM768)
  • Server certificate: subject, issuer, signature algorithm, key type, key size, expiration
  • Certificate chain: intermediate and root CA certificates with their signature algorithms
  • OCSP stapling configuration
  • Client certificate requirements (if mutual TLS)

What to capture for SSH endpoints:

  • Host key algorithms (ssh-rsa, ecdsa-sha2-nistp256, ssh-ed25519)
  • Key exchange algorithms (diffie-hellman-group14-sha256, curve25519-sha256)
  • Supported MACs and ciphers

What to capture for VPN/IPsec endpoints:

  • IKE version and phase 1 parameters
  • Key exchange group (DH Group 14, ECDH-P256)
  • Authentication method (RSA signatures, ECDSA, PSK)
  • ESP/AH cipher and integrity algorithms
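
Whatever scanner produces these records, it helps to validate them against the checklist as they land. Below is a minimal sketch in Python; the record shape, field names, and `check_record` helper are illustrative assumptions, not the output format of any particular scanner.

```python
# Sketch: validate that a scanned TLS endpoint record captured the checklist
# fields above, and flag whether the key exchange is a PQC hybrid.
# The field names and record shape are illustrative, not a scanner standard.
REQUIRED_TLS_FIELDS = {
    "host", "tls_version", "cipher_suites", "key_exchange",
    "cert_signature_alg", "cert_key_type", "cert_not_after",
}

# Hybrid TLS 1.3 groups that include ML-KEM.
PQC_HYBRID_GROUPS = {"X25519MLKEM768", "SecP256r1MLKEM768", "SecP384r1MLKEM1024"}

def check_record(record: dict) -> dict:
    """Report missing checklist fields and PQC status for one endpoint."""
    missing = sorted(REQUIRED_TLS_FIELDS - record.keys())
    return {
        "host": record.get("host", "?"),
        "missing_fields": missing,
        "pqc_key_exchange": record.get("key_exchange") in PQC_HYBRID_GROUPS,
    }
```

Running every scan result through a gate like this catches incomplete captures early, before gaps harden into blind spots in the inventory.
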

Network scanning provides the broadest coverage with the least effort. A single scan can identify thousands of endpoints and their cryptographic configurations in hours. However, it only captures cryptography that is visible on the network -- application-layer encryption, data-at-rest encryption, and embedded cryptographic operations are invisible to network scanning.

Phase 2: Application-Layer Discovery

Application-layer discovery goes deeper by analyzing source code, binaries, and runtime behavior to identify cryptographic operations that happen inside applications.

Source code analysis scans your codebases for calls to cryptographic libraries and APIs:

  • OpenSSL, BoringSSL, and LibreSSL function calls (e.g., EVP_EncryptInit_ex, SSL_CTX_set_cipher_list)
  • Language-specific crypto modules: Python's cryptography and pycryptodome, Java's javax.crypto, Node's crypto, Go's crypto/tls and crypto/rsa
  • JWT library configurations: jsonwebtoken, jose, nimbus-jose-jwt (check for RS256, ES256 algorithm selections)
  • XML signature libraries (XMLDSig with RSA or ECDSA)
  • S/MIME and PGP libraries

For each cryptographic API call, record:

  • The algorithm being used (RSA-2048, ECDSA-P256, AES-256-GCM, etc.)
  • The purpose (key exchange, signing, encryption, hashing)
  • The data flow (what data does this protect? where does it come from and go?)
  • Whether the algorithm is configurable or hardcoded
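
A first pass at this kind of scan can be as simple as pattern-matching known crypto API names in source files. The sketch below shows the idea; the pattern list is illustrative and far from exhaustive, and a production scanner would resolve imports and work at the AST level rather than on raw lines.

```python
# Sketch: a crude first-pass source scan for calls into common crypto APIs.
# The pattern list is illustrative, not exhaustive.
import re
from pathlib import Path

CRYPTO_PATTERNS = {
    "openssl": re.compile(r"EVP_\w+|SSL_CTX_\w+"),
    "java_jce": re.compile(r"javax\.crypto|Cipher\.getInstance"),
    "python_crypto": re.compile(r"from cryptography|import Crypto\b"),
    "jwt_alg": re.compile(r"[\"'](RS|ES|PS)(256|384|512)[\"']"),
}

def scan_file(path: Path) -> list[dict]:
    """Return one hit per (line, pattern family) found in the file."""
    hits = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        for family, pattern in CRYPTO_PATTERNS.items():
            if pattern.search(line):
                hits.append({"file": str(path), "line": lineno,
                             "family": family, "code": line.strip()})
    return hits
```

Even a rough scan like this gives you a worklist of call sites to annotate with the algorithm, purpose, data flow, and configurability fields above.
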

Dependency analysis examines your software supply chain for cryptographic libraries:

  • Use SBOM tools (e.g., Syft) to enumerate all dependencies in SPDX or CycloneDX format
  • Flag dependencies that include cryptographic code: openssl, libsodium, bouncycastle, ring, aws-lc-rs
  • Check the version of each cryptographic library against known PQC support
  • Identify transitive dependencies that pull in cryptographic code without your direct knowledge
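
The flagging step can be sketched against a component list like the one an SBOM tool emits. The input below mimics, in heavily simplified form, the "components" array of a CycloneDX document; the library name list is an illustrative assumption, not a complete registry of crypto packages.

```python
# Sketch: flag cryptographic libraries in an SBOM component list. The input
# shape and the library list are simplified illustrations.
CRYPTO_LIBS = {"openssl", "libsodium", "bouncycastle", "ring", "aws-lc-rs",
               "cryptography", "pycryptodome"}

def crypto_components(sbom: dict) -> list[dict]:
    return [c for c in sbom.get("components", [])
            if c.get("name", "").lower() in CRYPTO_LIBS]

sbom = {"components": [
    {"name": "requests", "version": "2.32.0"},
    {"name": "cryptography", "version": "42.0.5"},
]}
flagged = crypto_components(sbom)  # only the cryptography entry survives
```

The same pass works on transitive dependencies, which is where crypto code most often enters a codebase unnoticed.
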

Runtime analysis captures actual cryptographic operations in production:

  • Instrument cryptographic libraries to log algorithm usage, key sizes, and operation counts
  • Use eBPF or OpenSSL tracing to capture TLS handshake parameters from running processes
  • Monitor certificate validation events to identify which CAs and key types are in active use
Runtime analysis is the most accurate but also the most operationally complex. It captures what is actually happening in production, including cryptographic operations from third-party code and dynamically loaded libraries that static analysis might miss.

Phase 3: Certificate and Key Inventory

Certificates and keys deserve special attention because they are both the most numerous cryptographic assets and the ones most likely to be missed.

Certificate sources to inventory:

  • Internal PKI / Certificate Authority (every certificate ever issued, not just active ones)
  • Cloud provider certificate managers: AWS ACM, Azure Key Vault, GCP Certificate Manager
  • CDN and edge certificates: Cloudflare, Akamai, Fastly, AWS CloudFront
  • Let's Encrypt and other ACME-provisioned certificates
  • Code signing certificates stored in CI/CD systems
  • S/MIME certificates in email systems
  • Client certificates for mutual TLS authentication
  • Certificates embedded in Docker images, VM images, and firmware

For each certificate, record:

  • Subject and Subject Alternative Names (SANs)
  • Issuer and full chain to root
  • Signature algorithm (RSA-SHA256, ECDSA-SHA256, etc.)
  • Key type and size (RSA-2048, ECDSA-P256, Ed25519)
  • Validity period (not before, not after)
  • Key usage extensions
  • Applications and services that depend on this certificate

Key material inventory:

  • SSH host keys and user keys (check authorized_keys files across all servers)
  • PGP/GPG keys used for code signing, email encryption, or secret management
  • KMS keys in cloud providers (AWS KMS, Azure Key Vault, GCP Cloud KMS)
  • HSM-stored keys (what algorithms are supported by your HSM firmware?)
  • API signing keys for third-party service integrations
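
Some of this inventory is simple enough to script directly. A minimal sketch for pulling key algorithms out of authorized_keys content, so legacy ssh-rsa user keys can be counted alongside host keys (the parsing here is deliberately naive):

```python
# Sketch: extract key algorithms from authorized_keys content.
KEY_TYPE_PREFIXES = ("ssh-", "ecdsa-", "sk-")

def key_algorithms(text: str) -> list[str]:
    algos = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # An entry may begin with options; the algorithm name is the first
        # field that looks like an SSH key type.
        for field in line.split():
            if field.startswith(KEY_TYPE_PREFIXES):
                algos.append(field)
                break
    return algos

sample = "ssh-ed25519 AAAAC3Nz... alice\nssh-rsa AAAAB3Nz... bob\n"
# key_algorithms(sample) → ["ssh-ed25519", "ssh-rsa"]
```

Run across a fleet, a script like this quickly shows how much ssh-rsa is still in circulation.
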

Phase 4: Third-Party and Shadow IT Discovery

Your cryptographic surface extends beyond systems you directly control:

  • SaaS integrations: What encryption does your CRM, email provider, collaboration platform, and ERP use? Can you verify their TLS configuration? Do they support PQC key exchange?
  • Partner and vendor connections: B2B integrations often use dedicated VPN tunnels, mutual TLS, or API keys with specific signing algorithms.
  • Shadow IT: Departments may have deployed services without IT oversight. Cloud account scanning can identify unauthorized resources with cryptographic configurations.
  • IoT and OT devices: Building management systems, SCADA controllers, medical devices, and industrial equipment often have embedded cryptographic implementations that cannot be easily discovered through network scanning.

Building Your CBOM

The output of the discovery phase is a Cryptographic Bill of Materials (CBOM) -- a structured, machine-readable inventory of every cryptographic asset in your infrastructure. See our detailed CBOM guide for the CycloneDX format specification and integration details.

Your CBOM should be organized to support the next phase (risk assessment) by including:

  • Quantum safety classification for each asset: safe, vulnerable, or unknown
  • Data sensitivity mapping: what data does each cryptographic asset protect, and what is its confidentiality shelf life?
  • Dependency graph: which applications and services depend on each cryptographic asset?
  • Migration target: for each quantum-vulnerable asset, what is the recommended PQC replacement?
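
The classification and dependency-graph fields above can be sketched as a small transform over discovery records. The field names below follow the spirit of a CBOM entry but are simplified illustrations, not the actual CycloneDX schema.

```python
# Sketch: assemble minimal CBOM entries with a quantum safety label.
# Classical public-key algorithms fall to Shor's algorithm regardless of
# key size; symmetric primitives at these sizes remain safe.
QUANTUM_VULNERABLE = {"RSA", "ECDSA", "ECDH", "DH", "DSA", "Ed25519"}
QUANTUM_SAFE = {"ML-KEM", "ML-DSA", "SLH-DSA", "AES-256", "SHA-384"}

def classify(family: str) -> str:
    if family in QUANTUM_SAFE:
        return "safe"
    if family in QUANTUM_VULNERABLE:
        return "vulnerable"
    return "unknown"  # unrecognized algorithms need manual review

def cbom_entry(asset: dict) -> dict:
    return {
        "name": asset["name"],
        "algorithm": asset["algorithm"],
        "quantum_safety": classify(asset["family"]),
        "dependents": asset.get("dependents", []),  # feeds the dependency graph
    }
```

Note the deliberate "unknown" bucket: assets that cannot be classified automatically are exactly the ones the risk assessment in Part 2 needs to surface, not hide.
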

Common Discovery Challenges

Scale

Large enterprises can have tens of thousands of cryptographic assets. Manual inventory is not feasible. Invest in automated scanning tools and integrate discovery into your existing asset management and CMDB workflows.

Visibility

You cannot scan what you cannot see. Air-gapped networks, embedded systems, and third-party managed services may not be accessible to your scanning tools. For these, rely on configuration review, vendor questionnaires, and contractual requirements.

Accuracy

Network scanning captures what is configured, not necessarily what is negotiated. A server may support both RSA and ECDH key exchange, but network scanning alone does not tell you what percentage of clients actually negotiate RSA. Runtime analysis fills this gap.

Organizational Resistance

Discovery sometimes reveals uncomfortable truths. Teams may be running outdated configurations, using deprecated algorithms, or operating services that IT did not know about. Frame discovery as a baseline, not an audit. The goal is to understand the current state, not to assign blame.

What Comes Next

With your CBOM complete, you have the map. In Part 2: Risk Assessment, we will use that map to score every cryptographic asset by quantum risk, apply the Mosca Inequality to determine your migration deadline, and prioritize the remediation effort based on business impact.

Discovery is not glamorous work. It does not produce a demo-ready dashboard or a headline-grabbing PQC deployment announcement. But it is the difference between a migration that succeeds and one that leaves critical assets unprotected because nobody knew they existed. Do the work. Build the inventory. Everything else depends on it.

Ryan Wentzel

Founder of Q by Wentzel. Building tools to help organizations assess and manage their post-quantum cryptography risk. Focused on making PQC migration measurable, actionable, and accessible.
