Documentary: Dynex, Neuromorphic Quantum Computing for solving Real-World Problems, At Scale.
The world’s first documentary on neuromorphic quantum computing was showcased at the Guanajuato International Film Festival, presented by IDEA GTO in León, Mexico, on July 20, 2024.
Dynex Reference Guide Book
Neuromorphic Computing for Computer Scientists: A complete guide to Neuromorphic Computing on the Dynex Neuromorphic Cloud Computing Platform, Dynex Developers, 2024, 249 pages, available as eBook, paperback and hardcover.
> Amazon.com
> Amazon.co.uk
> Amazon.de
Today, even while running its quantum circuits on GPU-based emulation, Dynex is already a market leader in the realm of quantum computing, thanks to the scalability of its unique circuit design approach. Unlike traditional quantum computing methods, which face exponential growth in resource demands as qubits increase, Dynex's neuromorphic-inspired architecture requires only linearly scaling resources. This enables the efficient computation of highly complex quantum circuits and algorithms on classical hardware, surpassing the capabilities of many competitors. By allowing simulations with a greater number of qubits than previously possible, Dynex provides a highly scalable, practical solution for industries seeking the power of quantum computation—solidifying its leadership in the rapidly evolving quantum technology landscape.
Dynex is rapidly building a strong ecosystem of customers and partners across a variety of industries, all leveraging its innovative quantum computing solutions. The platform has attracted attention from large-scale enterprises, particularly in industries such as motorsports, where Dynex’s quantum optimization has been utilized for real-time decision-making, as demonstrated in the European Le Mans Series. Additionally, academic institutions like Sona College have begun incorporating Dynex’s technology into their curriculum, reflecting the growing recognition of Dynex’s contributions to quantum education. On the partnership front, Dynex collaborates with GPU providers, cloud service companies, and software developers to enhance scalability and accessibility for its decentralized quantum computing platform. These strategic partnerships not only enable the integration of Dynex technology into existing infrastructures but also create new opportunities for industries to unlock the potential of quantum computing, further cementing Dynex’s role as a leader in this transformative field.
Dynex's neuromorphic quantum computing approach, for which a patent is currently pending, represents a significant leap forward in quantum technology. Launched in 2022, this approach began with the efficient emulation of quantum circuits on Graphics Processing Units (GPUs), drawing from concepts in neuromorphic computing. One of the core principles behind Dynex's methodology is the use of ion drift effects—phenomena typically observed in memristors—to construct quantum circuits. This marks a departure from traditional quantum computing methods, which are generally dependent on Schrödinger's equations and require exponentially increasing computational resources as the number of qubits grows. In contrast, Dynex's approach requires only linearly increasing resources, making it much more efficient and scalable. This breakthrough enables the computation of quantum circuits and the execution of quantum algorithms that involve far more qubits than was previously feasible with conventional methods.
The advantages of this approach are twofold. Firstly, the ability to emulate quantum circuits on classical hardware such as GPUs offers an unprecedented level of flexibility and accessibility in quantum computation. Because the emulation process only requires linear scaling of resources, Dynex can simulate highly complex quantum systems without facing the exponential bottlenecks that have historically limited quantum computing efforts. This means that even with existing hardware, Dynex is able to push the boundaries of what is computationally possible, achieving simulations with far more qubits than competing quantum architectures. This innovation alone opens up new horizons for quantum applications across multiple industries, from pharmaceuticals and material science to finance and cryptography.
The second major advantage lies in the physical realization of Dynex's quantum circuits. Unlike many other quantum technologies that require extreme conditions, such as ultra-cold temperatures to function, Dynex's circuits can be fabricated as silicon-based quantum chips. These chips are designed to operate at room temperature, which not only reduces the complexity and cost of maintaining quantum systems but also makes it far easier to scale the technology for widespread commercial use. This compatibility with silicon manufacturing processes means that Dynex can leverage existing semiconductor infrastructure to produce quantum chips at scale—something that remains a significant challenge for many quantum computing approaches today.
In 2024, Dynex took another step forward by introducing the next iteration of its decentralized quantum computing platform. This iteration includes the introduction of dedicated quantum nodes, which further enhance the efficiency of the Dynex platform by optimizing how GPU resources are allocated and managed. These quantum nodes allow the platform to process increasingly complex quantum computations more effectively, and they represent a significant milestone in Dynex's vision of creating a decentralized, highly efficient quantum computing ecosystem.
Looking ahead, Dynex has ambitious plans for the future of its quantum computing technology. In 2025, the company is set to unveil "Apollo," its first silicon-based quantum chip featuring 1,000 qubits. Apollo will be able to operate in real-time and at room temperature, representing a major breakthrough in both the performance and practicality of quantum processors. This will mark the beginning of Dynex's transition from emulated quantum computing on classical hardware to fully operational quantum chips. The following year, in 2026, Dynex plans to release the first commercially available quantum processing unit (QPU), named "Athene." Athene will support up to 10,000 qubits, offering an unprecedented level of computational power for a broad range of quantum applications.
Dynex's roadmap extends far beyond these initial milestones. The company has laid out a clear and detailed plan for scaling its QPU technology, with a goal of achieving 1 million qubits by 2034. While this may seem like an ambitious target, Dynex's confidence is well-founded. The global research community is increasingly turning its attention to silicon-based quantum chips, and Dynex is positioned at the forefront of this rapidly advancing field. With a patent-pending approach that combines neuromorphic computing principles, memristor technology, and efficient emulation on traditional hardware, Dynex is uniquely positioned to lead the charge in quantum innovation.
Furthermore, the company's decentralized quantum platform, combined with its silicon-based hardware solutions, represents a future-proof approach to quantum computing. By leveraging both the scalability of its emulation techniques and the efficiency of its quantum nodes, Dynex is paving the way for a new era of accessible and powerful quantum computing. As the company continues to build on these foundational technologies, the possibilities for innovation in fields ranging from artificial intelligence to molecular modeling will only continue to expand. With its eyes set firmly on the future, Dynex is poised to be a key player in the next quantum revolution.
Ion Drift Based Quantum Hardware & Software
Dynex’s patent-pending neuromorphic quantum computing is a type of quantum computing that utilizes ion drifting of electrons. It works differently from superconducting-based quantum computing by using memristive elements that can quickly react to changes, helping the system swiftly find the best solutions.
Quantum computing employs quantum phenomena to process information. It has been hailed as the future of computing, but it faces serious hurdles on the path to practical realisation. Neuromorphic quantum computing is a paradigm that overcomes the exponentially increasing complexity of Schrödinger's equations. It employs physical dynamical systems and exploits time non-locality (memory) to compute using ion drift of electrons; it can therefore be efficiently emulated as a Digital Twin in software, and its path towards hardware is more straightforward, with only linearly increasing complexity. In essence, n.quantum computing leverages memory and physics to compute efficiently, a concept studied intensively by a number of researchers and institutions, including a $2M CORDIS-funded project of the European Union, amongst others.
Harnessing quantum advantage today,
a decade ahead of projections
Market researchers are forecasting that full quantum advantage, characterized by effective error correction and scalability, will be achieved by 2040. However, Dynex's n.quantum computing technology enables customers to leverage this quantum advantage today, effectively providing a technological leap of more than a decade. By integrating advanced neuromorphic quantum computing capabilities, Dynex empowers users with unprecedented computational power and efficiency, positioning itself at the forefront of technological innovation well ahead of the projected timeline.
Neuromorphic quantum computing:
A milestone in quantum history
Neuromorphic quantum computing, made publicly available in 2023, represents one of the most significant milestones in the history of quantum computing. As depicted in the timeline of key algorithm development, various breakthroughs have been made since the 1980s, including the Deutsch algorithm, Grover's algorithm, and quantum machine learning algorithms. The introduction of neuromorphic quantum computing builds upon these advancements by utilizing neuromorphic circuits that emulate the brain's architecture to perform quantum computations. This breakthrough technology enhances the efficiency and scalability of quantum computing, addressing limitations of traditional quantum hardware. By leveraging ion drifting electrons, we are now able to tackle complex computational problems with unprecedented speed and accuracy, marking a revolutionary leap forward in the field of quantum computing.
The Dynex quantum advantage today
Dynex achieves quantum advantage by efficiently computing a Digital Twin of a physical system through the following steps:
An n.quantum computing problem is submitted to the Dynex cloud via the Dynex SDK;
The problem is then converted into a circuit layout consisting of ion-drift-capable memristor-based logic gates. This circuit seeks the optimal energy ground state based on voltages and currents;
Next, the circuit layout is transformed into a system of ordinary differential equations (ODEs) by applying their equations of motion, effectively creating a "Digital Twin" of the physical system. We have published the specific equations of motion used in this process;
This system of ODEs is solved on our distributed network of GPUs, similar to how the trajectory of the moon landing was simulated by considering the equations of motion for the Earth and the moon.
Once the desired number of integration steps (simulated time) is reached, the voltages on the circuit are read and passed back as the result of the computation.
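As an illustration of the submission step above, the following minimal sketch builds a small QUBO with the open-source dimod library and hands it to a Dynex sampler. The class and argument names used here (dynex.BQM, dynex.DynexSampler, mainnet, num_reads, annealing_time) are assumptions based on typical Dynex SDK examples and may differ between SDK versions; consult the current SDK documentation for the authoritative API.

```python
# Minimal sketch: submitting a small QUBO to the Dynex cloud via the Dynex SDK.
# NOTE: the Dynex class/argument names below are assumptions based on published
# SDK examples and may differ between versions.
import dimod
import dynex

# Toy 2-variable QUBO: minimise -x0 - x1 + 2*x0*x1 (optimum: exactly one variable = 1)
Q = {(0, 0): -1.0, (1, 1): -1.0, (0, 1): 2.0}
bqm = dimod.BinaryQuadraticModel.from_qubo(Q)

# Wrap the problem and sample it on the Dynex neuromorphic network
model = dynex.BQM(bqm)
sampler = dynex.DynexSampler(model, mainnet=False, description='toy QUBO')
sampleset = sampler.sample(num_reads=1000, annealing_time=200)

print(sampleset.first.sample, sampleset.first.energy)
```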
Utilising Digital Twins of physical systems for computation has been demonstrated to exhibit similar characteristics to quantum superposition, quantum entanglement and quantum tunnelling effects. This is evidenced in works such as "Topological Field Theory and Computing with Instantons"[11] or "Superconducting Quantum Many-Body Circuits for Quantum Simulation and Computing"[10] amongst others.
These inherent physical mechanisms enable both n.quantum computing circuits and quantum computers to navigate towards the best possible solution out of a vast array of potential configurations by effectively mapping the solution into their system.
The NSF San Diego Supercomputer Center performed a stress test to measure the capability of a similar memristor-based simulated system at finding approximate solutions to hard combinatorial optimization problems. These fall into a class which is known to require exponentially growing resources in the worst case, even to generate approximations.
They showed that in a region where state-of-the-art algorithms demonstrate this exponential growth, simulations of a memristor-based system require only time and memory resources that scale linearly. Problems with up to 64 × 10^6 variables (corresponding to about 1 billion literals), the largest case that could fit on a single node with 128 GB of DRAM, were simulated, supporting the theory behind Dynex's n.quantum computing.
They also measured the memory required to perform the simulations. Usage scaled linearly with increasing problem size, whereas traditional methods require exponentially growing memory. This stress test shows the considerable advantage of non-combinatorial, physics-inspired approaches over standard combinatorial ones.
> Dynex Publications
> Dynex Benchmarks
Practical n.quantum computing with Dynex
Dynex's platform supports a variety of tools for creating and working with both n.quantum gate circuits and n.quantum annealing models. Quantum programs are mapped onto a Dynex n.quantum circuit and then computed by the contributing workers. This ensures that both traditional quantum algorithms and quantum gate circuits can be computed without modifications on the Dynex platform using the Python-based Dynex SDK. It also provides libraries compatible with Google TensorFlow, IBM Qiskit, PyTorch, Scikit-Learn, and others. All source codes are publicly available.
Dynex: Native Support for Quantum Gate Circuits
Dynex's platform natively supports quantum gate circuits, which are integral to many well-known quantum algorithms. Programmers familiar with quantum gate circuit languages such as Qiskit, Cirq, Pennylane, and OpenQASM will find it straightforward to run their computations on the Dynex neuromorphic computing platform. These tools allow for the creation of quantum circuits, enabling the execution of famous algorithms like Shor's algorithm (for integer factorisation), Grover's search algorithm (for unstructured search), Simon's algorithm (for finding hidden periods), and the Deutsch-Jozsa algorithm (for determining whether a function is constant or balanced).
Quantum gate circuits are a fundamental aspect of quantum computing, employing quantum bits (qubits) to perform computations. Unlike classical bits, qubits can exist in multiple states simultaneously due to quantum superposition, and can be entangled, allowing for the representation and manipulation of complex data structures. Quantum gate circuits manipulate these qubits using a series of quantum gates, analogous to classical logic gates, to perform specific operations.
The versatility of quantum gate circuits allows them to implement a wide range of quantum algorithms. Shor's algorithm, for instance, leverages quantum parallelism to factor integers efficiently, a core problem in number theory. Grover's algorithm offers a quadratic speedup for unstructured search problems, showcasing the potential of quantum computing in database searches and optimization tasks. Simon's algorithm and the Deutsch-Jozsa algorithm further demonstrate the power of quantum computing in solving problems that are infeasible for classical systems, highlighting the unique advantages of quantum superposition and entanglement.
The support for these quantum gate circuits on the Dynex platform means that researchers and developers can seamlessly transition their existing quantum algorithms and applications to leverage Dynex's neuromorphic quantum computing capabilities. This integration facilitates the exploration of new computational paradigms and the development of advanced quantum applications, pushing the boundaries of what is possible with quantum computing.
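As a small, concrete example, the snippet below builds a two-qubit Bell-state circuit in plain Qiskit and inspects its entangled state with a statevector; whether such a circuit is then run on a local simulator or submitted through Dynex's Qiskit integration is a deployment choice and is not shown here.

```python
# A two-qubit Bell-state circuit in standard Qiskit: Hadamard + CNOT prepares the
# entangled state (|00> + |11>)/sqrt(2). The same circuit description can be handed
# to any backend that accepts Qiskit circuits.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(2)
qc.h(0)      # put qubit 0 into superposition
qc.cx(0, 1)  # entangle qubit 1 with qubit 0

state = Statevector.from_instruction(qc)
print(state.probabilities_dict())  # expected: {'00': 0.5, '11': 0.5}
```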
Dynex: Native Support for Quantum Annealing
The Dynex platform also excels in computing Ising and QUBO problems, which play a pivotal role in the field of quantum computing, establishing themselves as the de-facto standard for mapping complex optimization and machine learning problems onto quantum systems. These frameworks are instrumental in leveraging the unique capabilities of quantum computers to solve problems that are intractable for classical computers.
The Ising model, originally introduced in statistical mechanics, describes a system of spins that can be in one of two states. This model has been adapted to represent optimization problems, where the goal is to minimize an energy function describing the interactions between spins. Similarly, the QUBO framework represents optimization problems with binary variables, where the objective is to minimize a quadratic polynomial. Both models are equivalent and can be transformed into one another, allowing a broad range of problems to be addressed using either formulation.
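As a concrete illustration of that equivalence, the open-source dimod library can transform a problem between the two formulations exactly; pairing it with Dynex here is only an assumption reflecting common QUBO tooling.

```python
# Sketch: one 2-variable problem expressed both as a QUBO and as an Ising model.
# dimod.BinaryQuadraticModel performs the exact transformation between the two.
import dimod

# QUBO: minimise x0 + x1 - 2*x0*x1 over x in {0, 1}  (minima at x0 == x1)
Q = {(0, 0): 1.0, (1, 1): 1.0, (0, 1): -2.0}
bqm = dimod.BinaryQuadraticModel.from_qubo(Q)

# Equivalent Ising form over spins s in {-1, +1}: fields h, couplings J, constant offset
h, J, offset = bqm.to_ising()
print(h, J, offset)

# Brute-force check that both formulations share the same optimum
print(dimod.ExactSolver().sample(bqm).first)
```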
The significance of Ising and QUBO problems in quantum computing lies in their natural fit with quantum annealing and gate-based quantum algorithms. Quantum annealers, for instance, directly implement the Ising model to find the ground state of a system, which corresponds to the optimal solution of the problem. This method exploits quantum tunnelling and entanglement to escape local minima, offering a potential advantage over classical optimization techniques. Gate-based quantum computers, on the other hand, use quantum algorithms like the Quantum Approximate Optimization Algorithm (QAOA) to solve QUBO problems. These algorithms use quantum superposition and interference to explore the solution space more efficiently than classical algorithms, potentially leading to faster solution times for certain problems.
The adoption of Ising and QUBO as standards in quantum computing is due to their versatility and the direct mapping of various optimization and machine learning tasks onto quantum hardware. From logistics and finance to drug discovery and artificial intelligence, the ability to frame problems within the Ising or QUBO model opens up new avenues for solving complex challenges with quantum computing. This standardization also facilitates the development of quantum algorithms and the benchmarking of quantum hardware, accelerating progress in the quantum computing field.
Challenges of traditional quantum gates and annealing architecture
Quantum computing is an emerging technology with enormous potential to solve complex problems because it exploits properties of quantum mechanics such as superposition and entanglement. However, like any technology, some of its architectures come with disadvantages:
Error Correction
Just like in classical computing’s early days, error correction is a major pain point for quantum computing today. Quantum computers are sensitive to noise and difficult to calibrate. Whereas a traditional computer only has to deal with a bit flipping from 0 to 1 or vice versa, quantum errors are more difficult to correct because qubits can take an infinite number of states.
Hardware & Temperature
Because quantum computers need to slow atoms down to near stillness, their processors must be kept at or around absolute zero (−273 °C). Even the tiniest of fluctuations can cause unwanted movement, so it is just as important to keep them in a vacuum and insulated from the Earth’s magnetic field.
Scalability
While quantum computers have shown impressive performance for some tasks, they are still relatively small compared to classical computers. Scaling up quantum computers to hundreds or thousands of qubits while maintaining high levels of coherence and low error rates remains a major challenge.
Given the constraints of most other quantum hardware architectures, there is a growing interest in simulating quantum computers on classical hardware. One of the most promising methods involves reformulating quantum algorithms into their corresponding partial differential equations (PDEs), governed by Schrödinger's equations. Schrödinger's equations, which describe the quantum state of a system, provide a theoretical framework for simulating quantum dynamics on classical systems. However, this approach presents its own set of challenges: the computational resources required to solve these PDEs scale exponentially with the size of the quantum system being simulated.
This exponential growth in resource requirements stems from the inherent complexity of quantum systems, where the state space increases exponentially with the number of qubits. Consequently, while the theoretical formulation offers a pathway to simulate quantum algorithms, the practical limitations of classical computational resources pose significant barriers. Overcoming these barriers is crucial for advancing the field of quantum computing and for enabling more accurate and scalable simulations that can bridge the gap between current quantum capabilities and their potential applications.
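The scale of this bottleneck is easy to make concrete: a full state vector of n qubits stores 2^n complex amplitudes, so memory alone grows exponentially with qubit count. The short calculation below assumes 16 bytes per amplitude (double-precision complex numbers).

```python
# Memory required to store a full quantum state vector of n qubits
# (2**n complex amplitudes, 16 bytes each for complex128).
for n in (10, 20, 30, 40, 50):
    amplitudes = 2 ** n
    gib = amplitudes * 16 / 2 ** 30
    print(f"{n:>2} qubits: {amplitudes:>20,d} amplitudes ≈ {gib:,.1f} GiB")
# 30 qubits already need ~16 GiB; 50 qubits would need ~16 million GiB (16 PiB).
```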
Manifestations of Neuromorphic Quantum Computing
Neuromorphic quantum computing represents a significant evolution in the field of quantum computing by merging quantum principles with neuromorphic engineering. This hybrid approach aims to overcome some of the limitations of traditional quantum systems, such as error correction and scalability, by mimicking the brain's processing capabilities. As a result, neuromorphic quantum computing holds the potential to accelerate the development of quantum technologies and expand their practical applications across various industries.
Neuromorphic Quantum Computing (abbreviated as ‘n.quantum computing’) [1,2] is an unconventional type of computing that uses neuromorphic computing's ion drift to perform quantum operations [3,4]. It has been suggested that quantum algorithms, i.e. algorithms that run on a realistic model of quantum computation, can be computed equally efficiently with neuromorphic quantum computing [5,6,7,8,9].
Both traditional quantum computing and neuromorphic quantum computing [1] are physics-based, unconventional approaches to computation and don’t follow the von Neumann architecture. They both construct a system (a circuit) that represents the physical problem at hand and then leverage the physical properties of that system to seek the “minimum”. Neuromorphic quantum computing [1] and quantum computing share similar physical properties during computation [9,10].
Neuromorphic quantum architecture:
Ion Drift Quantum Circuits
Quantum Computing: Utilizes quantum circuits with qubits implemented using superconductors, trapped ions, or other quantum technologies. These systems often require extremely low temperatures and sophisticated error correction methods to maintain coherence and reduce error rates.
Neuromorphic Quantum Computing: In nature, physical systems tend to evolve toward their lowest energy state: objects slide down hills, hot things cool down, and so on. This behaviour also applies to neuromorphic systems. To imagine this, think of a traveller looking for the best solution by finding the lowest valley in the energy landscape that represents the problem. Classical algorithms seek the lowest valley by placing the traveller at some point in the landscape and allowing that traveller to move based on local variations. While it is generally most efficient to move downhill and avoid climbing hills that are too high, such classical algorithms are prone to leading the traveller into nearby valleys that may not be the global minimum. Numerous trials are typically required, with many travellers beginning their journeys from different points.
In contrast, neuromorphic quantum computing begins with the traveller simultaneously occupying many coordinates thanks to the phenomenon of inherent parallelism. The probability of being at any given coordinate smoothly evolves as annealing progresses, with the probability increasing around the coordinates of deep valleys. Instantonic jumps allow the traveller to pass through hills rather than be forced to climb them, reducing the chance of becoming trapped in valleys that are not the global minimum. Long-range order further improves the outcome by allowing the traveller to discover correlations between the coordinates that lead to deep valleys.
Entanglement in a Dynex ion drift quantum circuit: measurement of 10 entangled qubits evolving to their optimal solution state. Note the parallel behaviour caused by their long-range correlation.
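To make the classical "traveller" baseline described above concrete, the sketch below performs greedy downhill moves from many random starting points on a purely illustrative one-dimensional energy landscape; most runs end up stuck in local valleys, which is exactly the behaviour the neuromorphic approach is contrasted with.

```python
# Classical "traveller" baseline: greedy downhill moves from random starting points
# on a toy 1-D energy landscape with several local minima (illustrative only).
import math
import random

def energy(x):
    return 0.1 * x * x + math.sin(3 * x)  # rugged landscape, global minimum near x ≈ -0.5

def descend(x, step=0.05, iters=500):
    for _ in range(iters):
        candidate = x + random.choice((-step, step))
        if energy(candidate) < energy(x):  # only ever move downhill
            x = candidate
    return x

random.seed(0)
finishes = [descend(random.uniform(-10, 10)) for _ in range(20)]  # 20 travellers
best = min(finishes, key=energy)
print(f"best valley found: x = {best:.2f}, energy = {energy(best):.3f}")
```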
Dynex's patent-pending neuromorphic quantum computing taps directly into an unimaginably vast fabric of reality: the strange and counterintuitive world of physics- and biology-inspired computing. Rather than store information using bits represented by 0s or 1s as conventional computers do, neuromorphic quantum computers use ion drifting of electrons. The dynamic long-range behaviour, along with its trend towards optimal energy and instantonic effects, enables neuromorphic quantum computers to consider and manipulate many combinations of bits simultaneously.
> Whitepaper: Dynex Neuromorphic Quantum Circuits’ Equations of Motion
> Whitepaper: Dynex Data Flow
n.Quantum Entanglement Effects
Quantum computers utilise certain principles of quantum mechanics, such as entanglement, to enable their components to be interconnected over vast distances. This interconnection means that a change in one component can instantly affect others, no matter how far apart they are. This characteristic facilitates the computer's ability to swiftly determine the most energy-efficient state.
Similarly, n.quantum computing exhibits a related phenomenon through long-distance connections stemming from a state known as criticality [11]. In this setup, each gate within the circuit is designed to be responsive to distant gates, creating a network where each gate can influence and be influenced by others across the system.
As the circuit evolves, the connections between different gates reach a critical state, making the system prone to sudden, large-scale changes or "avalanches" triggered by minor disturbances anywhere in the circuit. These avalanches, driven by the circuit's long-range correlations, enable the system to quickly find the lowest energy state, significantly speeding up computation compared to traditional methods [9,11].
n.Quantum Tunnelling Effects
Quantum computers also harness the principle of quantum tunnelling to find the most efficient solution. This process enables them to bypass energy barriers and access lower energy states that represent improved solutions. In contrast, n.quantum computing circuits [12] utilise the behaviour of electrical currents and voltages to expedite the transition to more favourable states. The concept of an instanton [9,11], borrowed from the study of dynamical systems, explains how these circuits navigate through energy barriers. This happens as the system encounters "saddle" points within the electrical landscape, which possess characteristics that attract the system initially but then repel it as it gets closer. This interaction results in the system being propelled away from these points at high speed, effectively allowing it to jump from one state to another across the landscape.
This phenomenon [11] mirrors quantum tunnelling but occurs in the realm of voltages and currents. In neuromorphic quantum computing circuits, this behaviour emerges from the collective action of memristor based gates, which induce widespread fluctuations in voltages, thereby swiftly steering the circuit towards superior configurations.
These inherent physical mechanisms enable both n.quantum computing circuits and quantum computers to navigate towards the best possible solution out of a vast array of potential configurations by effectively mapping the solution into their system [9].
Error Handling
Quantum Computing: There are currently a number of significant engineering obstacles to constructing useful quantum computers capable of solving real-world problems at scale. The major challenge in quantum computing is maintaining the coherence of entangled qubits’ quantum states; they suffer from quantum decoherence and loss of state fidelity caused by outside noise (vibrations, fluctuations in temperature, electromagnetic waves). To overcome noise interference, quantum computers are often isolated in large refrigerators cooled to near absolute zero (colder than outer space) to shield them and, in turn, reduce errors in calculation. Although error-correcting techniques are being deployed, there are currently no existing quantum computers capable of maintaining the full coherence required to solve industrial-sized problems at scale today. Therefore, they are mostly limited to solving toy-sized problems.
Neuromorphic Quantum Computing: In contrast to quantum computing, n.quantum computing possesses the capability to be efficiently emulated on contemporary computers through software, as well as to be physically constructed using conventional electrical components. Quantum algorithms can be computed efficiently with n.quantum computing at scale, potentially solving real-world problems. It inherently manages errors and noise, reducing the need for extensive error correction protocols.
Scalability
Quantum Computing: Scalability is limited by the physical requirements of maintaining qubits and their coherence. Increasing the number of qubits while managing errors is a major challenge.
Neuromorphic Quantum Computing: Promises improved scalability by leveraging neuromorphic principles to create large-scale, efficient quantum systems that can process information more like a human brain, offering more practical scalability.
Real-World Applications
In contrast to traditional quantum computing, which (as outlined under Error Handling above) cannot yet maintain the coherence required for industrial-sized problems and remains largely limited to toy-sized problems, n.quantum computing possesses the capability to be efficiently emulated on contemporary computers through software, as well as to be physically constructed using conventional electrical components. Quantum algorithms can be computed efficiently with n.quantum computing at scale, potentially solving real-world problems [9,11].
EU Horizon 2020 Project “Neuromorphic Quantum Computing”
The European Union-funded project "Neuromorphic Quantum Computing" [2] resulted in the publication of eighteen peer-reviewed articles. Its objective was to introduce hardware inspired by the human brain with quantum functionalities. The project aimed to construct superconducting quantum neural networks to facilitate the creation of dedicated neuromorphic quantum machine learning hardware, which could potentially outperform traditional von Neumann architectures in future generations. This achievement represented the merger of two forefront developments in information processing: machine learning and quantum computing, into a novel technology. Contrary to conventional machine learning techniques that simulate neural functions in software on von Neumann hardware, neuromorphic quantum hardware was projected to provide a significant advantage by enabling parallel training on various batches of real-world data. This feature was anticipated to confer a quantum benefit. Neuromorphic hardware architectures were seen as critically important for both classical and quantum computing, especially for distributed and embedded computing tasks where the extensive scaling of current architectures could not offer a sustainable solution.
Turing complete n.quantum circuits
The Dynex machine represents a class of general-purpose computational systems based on memory architectures, where data processing and storage occur in the same physical location. In our analysis of the memory characteristics of the Dynex machine, we demonstrate its universal computational capabilities, confirming that it is Turing-complete. Additionally, the machine exhibits intrinsic parallelism, functional polymorphism, and significant information overhead, allowing for exponential data compression directly within its collective memory states. Furthermore, we establish that the Dynex machine can solve NP-complete problems in polynomial time, akin to the functionality of a non-deterministic Turing machine. However, unlike traditional computational models, the Dynex machine requires only a polynomial quantity of memory cells due to its inherent information overhead. While these findings do not constitute a proof that NP=P within the framework of Turing computation, they signal a paradigm shift from the von Neumann architecture, advancing towards a model of brain-like neural computation.
Efficient n.quantum circuit simulation
Dynex' patent pending technology enables the reformulation of quantum algorithms into a system of ordinary differential equations (ODEs), which require computational resources that scale linearly with the size of the quantum system being simulated. This approach contrasts with the current state-of-the-art, where quantum algorithms are typically reformulated into partial differential equations (PDEs) governed by Schrödinger's equations. In such cases, the computational resources necessary to solve these PDEs scale exponentially with the system's size, presenting significant challenges for large-scale simulations.
It is important to realise that computing is fundamentally a physical process. The statement may seem obvious when considering the physical processes harnessed by the electronic components of computers (for example, transistors); however, virtually any physical process can be harnessed for some form of computation. Note that we are speaking of Alan Turing’s model of computation, that is, a mapping (transition function) between two sets of finite symbols (input and output) in discrete time.
It is important to distinguish between continuous and discrete time: Dynex n.quantum circuits operate in continuous time, though, their simulations on modern computers require the discretisation of time. Continuous time is physical time: a fundamental physical quantity. Discrete time is not a physical quantity, and might be best understood as counting time: counting something (function calls, integration steps, etc.) to give an indication (perhaps approximation) of the physical time. In the literature of Physics and other Physical Sciences, physical time has an assigned SI unit of seconds, whereas in Computer Science and related disciplines, counting time is dimensionless.
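A minimal sketch of what "discretising time" means in practice: a continuous-time system dV/dt = f(V) is advanced in fixed integration steps (forward Euler below); the dynamics used here are a toy relaxation, not the Dynex equations of motion.

```python
# Forward-Euler discretisation of a continuous-time system dV/dt = f(V).
# The computer counts integration steps (discrete time); physical time is steps * dt.
# The dynamics below are a toy relaxation toward V = 1, not the Dynex circuit equations.
def f(v):
    return 1.0 - v          # exponential relaxation toward V = 1

v, dt, steps = 0.0, 0.01, 1000
for _ in range(steps):
    v = v + dt * f(v)       # one discrete integration step

print(f"after {steps} steps (simulated time {steps * dt:.1f}): V = {v:.5f}")
# analytic solution: V(t) = 1 - exp(-t); at t = 10, V ≈ 0.99995
```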
Ion drift & the missing fourth circuit element
Modern computers rely on the implementation of uni-directional logic gates that represent Boolean functions. Circuits built to simulate Boolean functions are desirable because they are deterministic: A unique input has a unique, reproducible output.
Modern computers relegate the task of logic to central processing units (CPUs). However, the resources required for the task might exhaust the resources present within the CPU, specifically, cache memory. For typical processes on modern computers, random-access memory (RAM) is the memory used for data and machine code, and is external to the CPU. The physical separation of CPU and RAM results in what is known as the von Neumann bottleneck, a slow-down in computation caused by the transfer of information between physical locations.
Ion drift refers to the movement of ions under the influence of an electric field, for example in a memristor. This phenomenon can be used to build quantum hardware (as an alternative to superconducting materials, ion traps or photons). The plot shows the measurement of three qubits in a Dynex ion drift quantum circuit, computing a simple quantum algorithm, instantly evolving to the solution.
To overcome the von Neumann bottleneck, it was proposed to perform computing with and in memory, utilising ideal memristors. Distinct from in-memory computation, this is an efficient computing paradigm that uses memory to process and store information in the same physical location.
Memristors, a class of non-linear circuit elements, exploit ion drift phenomena to achieve their unique memory and resistance-switching properties. Unlike traditional resistors, memristors can retain a memory of past electrical states, making them ideal for non-volatile memory applications. The underlying mechanism involves the movement of ions, typically oxygen vacancies or metal cations, within the memristor's material structure. When an external voltage is applied, these ions drift through the memristive medium, modulating its local conductivity. The resulting change in resistance is dependent on the history of ion displacement and remains stable even after the external stimulus is removed. This ion drift process is governed by the non-linear dynamics of charge transport and electrochemical migration, allowing memristors to mimic synaptic behaviour in neuromorphic systems. By harnessing ion drift phenomena, memristors provide an efficient means of data storage and computing that is both scalable and energy-efficient. As a result, they present an attractive avenue for advancing computational technologies beyond conventional quantum frameworks.
A memristor is an electrical component that limits or regulates the flow of electrical current in a circuit and remembers the amount of charge that has previously flowed through it. Memristors are important because they are non-volatile, meaning that they retain memory without power.
The original concept for memristors, as conceived in 1971 by Professor Leon Chua at the University of California, Berkeley, was a nonlinear, passive two-terminal electrical component that linked electric charge and magnetic flux (“The missing circuit element“). Since then, the definition of memristor has been broadened to include any form of non-volatile memory that is based on resistance switching, which increases the flow of current in one direction and decreases the flow of current in the opposite direction.
Scientists at HP Labs built the first working memristor in 2008 and since that time, researchers in many large IT companies have explored how memristors can be used to create smaller, faster, low-power computers that do not require data to be transferred between volatile and non-volatile memory.
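To illustrate how ion drift produces a history-dependent resistance, the following sketch integrates the textbook linear ion-drift memristor model (in the spirit of the HP device just mentioned) under a sinusoidal drive. This is a standard pedagogical model with arbitrary parameter values, not Dynex's circuit.

```python
# Linear ion-drift memristor model (HP 2008 style), integrated with forward Euler.
# The normalised state x = w/D in [0, 1] tracks how far dopant ions have drifted;
# the memristance is a weighted mix of R_ON and R_OFF, so it depends on charge history.
import math

R_ON, R_OFF = 100.0, 16_000.0  # fully doped / undoped resistances (ohms)
D = 10e-9                      # device thickness (m)
MU_V = 1e-14                   # dopant mobility (m^2 s^-1 V^-1)

x = 0.1                        # initial normalised state
dt = 1e-4
for step in range(20_000):     # two periods of a 1 Hz drive
    t = step * dt
    v = math.sin(2 * math.pi * t)            # 1 V amplitude, 1 Hz sinusoidal drive
    m = R_ON * x + R_OFF * (1.0 - x)         # memristance M(x)
    i = v / m                                # current through the device
    x += dt * MU_V * R_ON / D**2 * i         # ion drift: dx/dt proportional to current
    x = min(max(x, 0.0), 1.0)                # ions cannot leave the device

print(f"final memristance ≈ {R_ON * x + R_OFF * (1.0 - x):,.0f} ohms (state x = {x:.3f})")
```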
Dynex's patent-pending digital circuit is realised as a memristor-based bi-directional logic circuit to leverage ion drifting of electrons for quantum computing. These circuits differ from traditional logic circuits in that input and output terminals are no longer distinct. In a traditional logic circuit, some input is given and the output is the result of computation performed on the input, via uni-directional logic gates. In contrast, a memristor-based bi-directional logic circuit can be operated by assigning the output terminals, then reading the input terminals.
Superposition in a Dynex ion drift circuit: measurement of a 3 qubit quantum gate in superposition: When applying +1.0V on the output terminal v3 of an OR gate, the input terminals v1 and v2 automatically converge to satisfy the logical OR in reverse.
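A purely conceptual way to see what this "reverse" operation means: pin a truth value at one terminal of an OR gate and enumerate the assignments of the remaining terminals that keep the proposition satisfied. The real circuit reaches such assignments through continuous dynamics, not enumeration; the snippet below is only a logical illustration.

```python
# Conceptual sketch of bi-directional logic for a 2-input OR gate.
# Pin a value at any one terminal (input a, input b, or output y) and list every
# assignment of the remaining terminals consistent with y == a OR b.
from itertools import product

def consistent_assignments(pinned_terminal, pinned_value):
    solutions = []
    for a, b, y in product((0, 1), repeat=3):
        if y != (a | b):
            continue  # not a valid OR relation
        if {"a": a, "b": b, "y": y}[pinned_terminal] == pinned_value:
            solutions.append((a, b, y))
    return solutions

# Pin the OUTPUT to 1 (cf. the +1.0V example above) and read back the admissible inputs:
print(consistent_assignments("y", 1))  # [(0, 1, 1), (1, 0, 1), (1, 1, 1)]
```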
Self-organizing logic represents a newly proposed framework that facilitates the solution of Boolean truth tables in a "reverse" manner. This approach enables the satisfaction of logical propositions at the gates regardless of which terminal(s) the truth value is assigned, a property referred to as "terminal-agnostic logic." The realization of such logic requires the presence of time non-locality, or memory. A practical implementation of self-organizing logic gates can be achieved through the combination of circuit elements both with and without memory.
By employing such an implementation, numerical simulations demonstrate that self-organizing logic gates utilize elementary instantons to converge toward equilibrium states. Instantons, which are classical trajectories of the non-linear equations governing the behavior of self-organizing logic gates, serve to connect topologically distinct critical points in the system's phase space. A linear analysis at these critical points reveals that instantons link the initial dynamic state—characterized by at least one unstable direction—directly to the final fixed point. Moreover, it can be shown that the memory content of these gates influences only the relaxation time required to achieve a logically consistent solution.
When solving the corresponding stochastic differential equations, it is observed that although noise and perturbations may alter the instanton’s trajectory within the phase space, the initial and final critical points remain unaffected. As a result, even under conditions of high noise levels, the gates self-organize to arrive at the correct solution.
It is important to clarify that the self-organizing logic discussed here bears no relation to the invertible universal Toffoli gate, commonly used in quantum computation. Toffoli gates are genuinely one-to-one invertible, operating with three-bit inputs and three-bit outputs. In contrast, self-organizing logic gates are only required to satisfy the correct logical proposition, without establishing a one-to-one correspondence between a specific number of input and output terminals. It is worth noting, however, the existence of another form of bi-directional logic recently explored using stochastic units, known as p-bits, which fluctuate across all possible consistent inputs. Unlike that stochastic approach, the self-organizing logic examined here is deterministic.
Given the central role of time, a dynamical systems perspective is the most natural framework for describing these gates. Specifically, non-linear electronic (non-quantum) circuit elements, both with and without memory, have been proposed as practical building blocks for the realization of self-organizing logic gates.
By assembling self-organizing logic gates into appropriately designed architectures, circuits capable of efficiently solving complex problems can be constructed. These circuits achieve this by mapping the equilibrium (fixed) points of the system to the solution of the given problem. Furthermore, it has been proven that if such systems are engineered to be point-dissipative, they will not exhibit chaotic behavior or periodic orbits, provided equilibrium points exist.
Subsequent work has shown, through the application of topological field theory (TFT) to dynamical systems, that these circuits can be described by a Witten-type TFT. These systems support long-range order mediated by instantons, which represent classical trajectories arising from the non-linear equations of motion that govern these circuits.
References
[1] Pehle, Christian; Wetterich, Christof (2021-03-30), Neuromorphic quantum computing, arXiv:2005.01533
[2] "Neuromrophic Quantum Computing | Quromorphic Project | Fact Sheet | H2020". CORDIS | European Commission. doi:10.3030/828826. Retrieved 2024-03-18
[3] Wetterich, C. (2019-11-01). "Quantum computing with classical bits". Nuclear Physics B. 948: 114776. arXiv:1806.05960. Bibcode:2019NuPhB.94814776W. doi:10.1016/j.nuclphysb.2019.114776. ISSN 0550-3213
[4] Pehle, Christian; Meier, Karlheinz; Oberthaler, Markus; Wetterich, Christof (2018-10-24), Emulating quantum computation with artificial neural networks, arXiv:1810.10335
[5] Carleo, Giuseppe; Troyer, Matthias (2017-02-10). "Solving the quantum many-body problem with artificial neural networks". Science. 355 (6325): 602–606. arXiv:1606.02318. Bibcode:2017Sci...355..602C. doi:10.1126/science.aag2302. ISSN 0036-8075. PMID 28183973
[6] Torlai, Giacomo; Mazzola, Guglielmo; Carrasquilla, Juan; Troyer, Matthias; Melko, Roger; Carleo, Giuseppe (2018-02-26). "Neural-network quantum state tomography". Nature Physics. 14 (5): 447–450. arXiv:1703.05334. Bibcode:2018NatPh..14..447T. doi:10.1038/s41567-018-0048-5. ISSN 1745-2481
[7] Sharir, Or; Levine, Yoav; Wies, Noam; Carleo, Giuseppe; Shashua, Amnon (2020-01-16). "Deep Autoregressive Models for the Efficient Variational Simulation of Many-Body Quantum Systems". Physical Review Letters. 124 (2): 020503. arXiv:1902.04057. Bibcode:2020PhRvL.124b0503S. doi:10.1103/PhysRevLett.124.020503. PMID 32004039
[8] Broughton, Michael; Verdon, Guillaume; McCourt, Trevor; Martinez, Antonio J.; Yoo, Jae Hyeon; Isakov, Sergei V.; Massey, Philip; Halavati, Ramin; Niu, Murphy Yuezhen (2021-08-26), TensorFlow Quantum: A Software Framework for Quantum Machine Learning, arXiv:2003.02989
[9] Broughton, Michael; Verdon, Guillaume; McCourt, Trevor; Martinez, Antonio J.; Yoo, Jae Hyeon; Isakov, Sergei V.; Massey, Philip; Halavati, Ramin; Niu, Murphy Yuezhen (2021-08-26), TensorFlow Quantum: A Software Framework for Quantum Machine Learning, arXiv:2003.02989
[10] Wilkinson, Samuel A.; Hartmann, Michael J. (2020-06-08). "Superconducting quantum many-body circuits for quantum simulation and computing". Applied Physics Letters. 116 (23). arXiv:2003.08838. Bibcode:2020ApPhL.116w0501W. doi:10.1063/5.0008202. ISSN 0003-6951
[11] Di Ventra, Massimiliano; Traversa, Fabio L.; Ovchinnikov, Igor V. (2017-08-07). "Topological Field Theory and Computing with Instantons". Annalen der Physik. 529 (12). arXiv:1609.03230. Bibcode:2017AnP...52900123D. doi:10.1002/andp.201700123. ISSN 0003-3804
[12] Gonzalez-Raya, Tasio; Lukens, Joseph M.; Céleri, Lucas C.; Sanz, Mikel (2020-02-14). "Quantum Memristors in Frequency-Entangled Optical Fields". Materials. 13 (4): 864. arXiv:1912.10019. Bibcode:2020Mate...13..864G. doi:10.3390/ma13040864. ISSN 1996-1944. PMC 7079656. PMID 32074986