Dynex provides a qubit-agnostic compute environment that integrates neuromorphic p-qubit hardware, large-scale algorithmic-qubit emulators, and third-party quantum processing units into a single, unified execution layer. The platform abstracts device-specific characteristics and presents a consistent programming and operational model for optimization, simulation, and quantum-inspired workloads.
The system is designed to support heterogeneous quantum and classical devices with differing qubit modalities, topologies, noise characteristics, and control interfaces, while maintaining a coherent computational workflow for the end user.

1. Architecture Overview
1.1 Unified Compute Layer
All supported devices—Dynex Apollo, Dynex GPU/CPU qNodes, and external QPUs (IBM, IonQ, Rigetti, D-Wave, QuEra, IQM, etc.)—are represented as Dynex Quantum Nodes.
Each node operates under the coordination of the Dynex Engine, which manages:
- workload routing
- resource scheduling
- cross-node synchronization
- result aggregation
This ensures consistent execution semantics irrespective of the backend’s underlying qubit technology or computational paradigm.
1.2 Hybrid Execution Model
The platform supports mixed-modality computation. Workflows may combine:
- continuous-time annealing (Apollo)
- large-scale algorithmic-qubit simulation (GPU/CPU)
- gate- or annealing-based quantum operations (third-party QPUs)
The execution planner determines backend selection based on:
- problem graph density
- latency tolerance
- required connectivity
- device-specific constraints
- operational availability
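The planner's selection logic can be pictured as a simple routing heuristic. The sketch below is illustrative only; the `Problem` fields, thresholds, and backend labels are assumptions for the example, not the actual Dynex planner:

```python
from dataclasses import dataclass

@dataclass
class Problem:
    num_variables: int
    num_couplings: int        # non-zero off-diagonal QUBO terms
    latency_sensitive: bool

def graph_density(p: Problem) -> float:
    """Fraction of possible variable pairs that are coupled."""
    pairs = p.num_variables * (p.num_variables - 1) / 2
    return p.num_couplings / pairs if pairs else 0.0

def select_backend(p: Problem) -> str:
    # Very large instances exceed practical physical qubit counts.
    if p.num_variables > 10_000:
        return "gpu_qnode"
    # Dense graphs embed directly on a high-degree topology such as Δ256.
    if graph_density(p) > 0.05 and not p.latency_sensitive:
        return "apollo"
    # Otherwise fall back to an available third-party QPU.
    return "external_qpu"
```

A real planner would also weigh queue depth and device calibration state; this sketch only captures the size/density/latency trade-off named in the list above.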
2. Apollo Integration
The Apollo p-qubit chip is a primary compute node in the Dynex architecture. It implements a 10,000-element stochastic computing fabric based on quantum-driven probabilistic bits (p-qubits) operating in continuous time. The chip is fabricated in 16 nm mixed-signal CMOS and operates entirely at room temperature.

> Watch the Apollo video
> Download Apollo Datasheet (PDF)
> Scientific Publications
2.1 Fundamental Characteristics
- p-Qubit Fabric: 10,000 parallel stochastic units
- Connectivity: Δ256 Hyperion topology (up to 256 couplings per node)
- State Dynamics: continuous-time stochastic switching governed by local fields
- Entropy: one Integrated Quantum Entropy Unit (IQEU) per p-qubit
- Performance:
  - ~10⁸ flips/s per p-qubit
  - ≤10 fJ energy per flip
  - ~0.5 W total power consumption
- Operating Environment: 0–85 °C, no cryogenics or laser control required
- Control Interface: tightly coupled Dynex Control Unit (DCU) for bias/coupling scheduling
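The quoted figures can be cross-checked with a quick back-of-the-envelope calculation: at the per-unit rates above, the whole fabric performs about 10¹² flips per second, and the implied switching power is only a small fraction of the ~0.5 W total (the remainder presumably covers control logic and I/O):

```python
# Aggregate throughput and switching energy implied by the datasheet figures.
p_qubits = 10_000
flips_per_s = 1e8          # per p-qubit
energy_per_flip = 10e-15   # 10 fJ upper bound, in joules

total_flips = p_qubits * flips_per_s             # fabric-wide flips/s
switching_power = total_flips * energy_per_flip  # watts spent on flips alone

print(f"{total_flips:.0e} flips/s, {switching_power * 1e3:.0f} mW switching")
# prints: 1e+12 flips/s, 10 mW switching
```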

2.2 Technical Role in the Platform
Apollo provides native support for:
- Ising/QUBO minimization via thermodynamic relaxation
- Boltzmann sampling
- generative probabilistic modeling
- analog vector–matrix multiplication for inference tasks
Its Δ256 topology significantly reduces embedding overhead compared to low-degree architectures (e.g., Pegasus Δ=15, Zephyr Δ=20). This enables direct execution of dense industrial problem graphs without decomposition into multiple virtual qubits.
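For very small instances, the QUBO objective that such a backend minimizes can be checked by brute force, which is useful for validating a problem formulation before submission. A minimal sketch using only NumPy (the matrix `Q` is an arbitrary toy example, not a Dynex artifact):

```python
import itertools
import numpy as np

# Toy 3-variable QUBO in upper-triangular convention: minimize x^T Q x, x in {0,1}^3.
Q = np.array([[-1.0,  2.0,  0.0],
              [ 0.0, -1.0,  2.0],
              [ 0.0,  0.0, -1.0]])

def qubo_energy(x, Q):
    """Evaluate the QUBO objective for a single binary assignment."""
    x = np.asarray(x, dtype=float)
    return float(x @ Q @ x)

# Enumerate all 2^3 assignments and keep the lowest-energy one.
best = min(itertools.product([0, 1], repeat=3), key=lambda x: qubo_energy(x, Q))
print(best, qubo_energy(best, Q))   # prints: (1, 0, 1) -2.0
```

Brute force is only viable up to a few dozen variables, but it gives an exact ground truth against which hardware samples can be compared.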

3. Third-Party QPU Integration
Dynex integrates a broad range of quantum devices to provide coverage across several computing models. Examples include:
| Provider | Device | Technology |
|---|---|---|
| IBM | Eagle, others | Superconducting transmon (gate model) |
| IonQ | Aria, Forte | Trapped-ion (gate model) |
| Rigetti | Ankaa series | Superconducting (gate model) |
| D-Wave | Advantage / Advantage2 | Quantum annealing |
| QuEra | Aquila | Neutral-atom / Rydberg analog simulation |
| IQM | Garnet, Emerald | Superconducting |
The Dynex Engine abstracts away device-specific APIs, differing coherence properties, and topology constraints. This allows external QPUs to serve as complementary resources for circuit-based workflows, verification tasks, or algorithmic benchmarking.
4. Dynex GPU/CPU qNodes (Algorithmic Qubits)
The Dynex platform includes high-performance algorithmic-qubit simulators implemented on GPU and CPU clusters. These nodes provide:
- up to 1 million algorithmic qubits
- deterministic reproducibility
- flexible embedding
- compatibility with the same SDK used for Apollo and QPUs
GPU qNodes are particularly suited for:
- large-scale sweeps
- embedding validation
- debugging of Hamiltonian structures
- scenarios where massive problem sizes exceed practical physical qubit counts
5. SDK and Programming Model
The Dynex SDK offers a unified API that supports:
- QUBO matrices
- Ising Hamiltonians
- quantum circuit definitions (compiled to effective Ising Hamiltonians)
- graph-based embeddings
- probabilistic inference models
- analog VMM structures

5.1 Backend Abstraction
The SDK handles:
- conversion of circuits to Hamiltonians (Feynman–Kitaev style reductions)
- embedding onto target topologies (including Δ256)
- device-specific instruction formatting
- secure transport via bi-directional gRPC
- real-time streaming of partial or full results
This decouples algorithm development from hardware-specific considerations.

6. Runtime Environment
6.1 Serverless Execution
The platform uses a fully serverless scheduling layer:
- Nodes are provisioned on demand
- Failures trigger automatic rerouting
- Results are streamed incrementally where applicable
- Long-running annealing or sampling tasks maintain state consistency
6.2 Decentralized Architecture
Quantum nodes can operate:
- in Dynex-managed data centers,
- in partner facilities,
- or across federated deployments.
State synchronization between nodes is handled through encrypted channels and time-aligned execution windows, enabling multi-node hybrid workflows.

7. Application Domains
The qubit-agnostic platform supports tasks spanning several categories:
7.1 Optimization
- QUBO/Ising minimization
- Scheduling
- Routing
- Portfolio and risk optimization
- Constraint satisfaction and graph problems (SAT, MaxCut)
7.2 Simulation and Sampling
- Boltzmann sampling
- Bayesian inference
- Statistical mechanical models
- High-dimensional stochastic systems
7.3 Quantum-Circuit-to-Hamiltonian Execution
The platform supports circuit workloads via Hamiltonian reduction:
1. Parse the circuit
2. Construct propagation and penalty constraints
3. Generate a sparse Ising/QUBO representation
4. Embed onto the chosen backend
5. Execute via annealing or sampling
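The final execution step can be illustrated with a classical single-flip Metropolis annealer. This is a software stand-in for illustration only, not Apollo's continuous-time dynamics or any specific Dynex backend:

```python
import math
import random

def anneal_qubo(Q, sweeps=2000, t_start=2.0, t_end=0.01, seed=7):
    """Minimize E(x) = x^T Q x over x in {0,1}^n with single-flip
    Metropolis moves and a geometric cooling schedule."""
    rng = random.Random(seed)
    n = len(Q)

    def energy(x):
        return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

    x = [rng.randint(0, 1) for _ in range(n)]
    e = energy(x)
    for step in range(sweeps):
        t = t_start * (t_end / t_start) ** (step / (sweeps - 1))
        i = rng.randrange(n)
        x[i] ^= 1                      # propose a single bit flip
        e_new = energy(x)
        if e_new <= e or rng.random() < math.exp((e - e_new) / t):
            e = e_new                  # accept the move
        else:
            x[i] ^= 1                  # reject: undo the flip
    return x, e

# Toy QUBO whose global minimum is E = -2.0 at x = [1, 0, 1].
Q = [[-1.0,  2.0,  0.0],
     [ 0.0, -1.0,  2.0],
     [ 0.0,  0.0, -1.0]]
x_best, e_best = anneal_qubo(Q)
print(x_best, e_best)
```

Hardware annealers replace the Metropolis loop with physical relaxation, but the interface (QUBO in, low-energy samples out) is the same.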
7.4 Analog VMM and ML Acceleration
Apollo’s analog VMM units enable:
- energy-efficient inference
- synaptic weight accumulation
- analog vector–matrix operations
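The effect of finite weight precision and readout noise in an analog VMM can be sketched in software. The quantization scheme and noise model below are illustrative assumptions, not Apollo's actual circuit behavior:

```python
import numpy as np

def analog_vmm(weights, x, bits=8, noise_sigma=0.01, seed=0):
    """Simulate an analog vector-matrix multiply: weights are quantized
    to a fixed number of levels and the readout adds Gaussian noise
    (a software stand-in for on-chip analog VMM units)."""
    rng = np.random.default_rng(seed)
    levels = 2 ** bits - 1
    w_max = np.abs(weights).max()
    w_q = np.round(weights / w_max * levels) / levels * w_max  # quantized weights
    y = w_q @ x
    return y + rng.normal(0.0, noise_sigma * np.abs(y).max(), size=y.shape)

W = np.array([[0.2, -0.5],
              [0.7,  0.1]])
x = np.array([1.0, 0.5])
print(analog_vmm(W, x))   # close to the exact product W @ x
```

Modeling these imperfections up front helps judge whether an inference workload tolerates analog execution or needs digital precision.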
> Industries
> Examples
> Use-Cases
8. Apollo Technical Datasheet
Technical reference material, specifications, and system architecture diagrams are available in the full datasheet.
