# n.quantum computing

##### Documentary: Dynex, Neuromorphic Quantum Computing for solving Real-World Problems, At Scale.

The world’s first documentary on neuromorphic quantum computing was showcased at the Guanajuato International Film Festival, presented by IDEA GTO in León, Mexico, on July 20, 2024.

##### Dynex Reference Guide

Neuromorphic Computing for Computer Scientists: A complete guide to Neuromorphic Computing on the Dynex Neuromorphic Cloud Computing Platform, Dynex Developers, 2024, 249 pages, available as eBook, paperback and hardcover.

> Amazon.com

> Amazon.co.uk

> Amazon.de

##### Neuromorphic quantum computing in simple terms

Neuromorphic quantum computing is a special type of computing that combines ideas from brain-like computing with quantum technology to solve problems. It works differently from regular quantum computing by using a network of connected components that can quickly react to changes, helping the system to swiftly find the best solutions.

Dynex applies a Digital Twin of a physical neuromorphic quantum computing machine, operated on thousands of GPUs in parallel, delivering unparalleled quantum computing performance at scale for real-world applications.

Quantum computing employs quantum phenomena to process information. It has been hailed as the future of computing, but it is plagued by serious hurdles when it comes to its practical realization. Neuromorphic quantum computing is a paradigm that instead employs non-quantum dynamical systems and exploits time non-locality (memory) to compute. It can be efficiently emulated as a Digital Twin in software, and its path towards hardware is more straightforward. In essence, n.quantum computing leverages memory and physics to compute efficiently, a concept studied intensively by a number of researchers and institutions, including a $2M CORDIS-funded project by the European Union, amongst others.

This technology is exciting because it leads to computer systems that are faster and capable of handling complex tasks more efficiently than traditional computers.

##### Harnessing quantum advantage today, a decade ahead of projections

Market researchers are forecasting that full quantum advantage, characterized by effective error correction and scalability, will be achieved by 2040. However, Dynex's n.quantum computing technology enables customers to leverage this quantum advantage today, effectively providing a technological leap of more than a decade. By integrating advanced neuromorphic quantum computing capabilities, Dynex empowers users with unprecedented computational power and efficiency, positioning itself at the forefront of technological innovation well ahead of the projected timeline.

##### Neuromorphic quantum computing: A milestone in quantum history

Neuromorphic quantum computing, made publicly available in 2023, represents one of the most significant milestones in the history of quantum computing. As depicted in the timeline of key algorithm development, various breakthroughs have been made since the 1980s, including the Deutsch algorithm, Grover's algorithm, and quantum machine learning algorithms. The introduction of neuromorphic quantum computing builds upon these advancements by utilizing neuromorphic circuits that emulate the brain's architecture to perform quantum computations. This breakthrough technology enhances the efficiency and scalability of quantum computing, addressing limitations of traditional quantum hardware. By leveraging neuromorphic systems, we are now able to tackle complex computational problems with unprecedented speed and accuracy, marking a revolutionary leap forward in the field of quantum computing.

##### The Dynex quantum advantage

Dynex achieves quantum advantage by efficiently computing a Digital Twin of a physical system through the following steps:

1. An n.quantum computing problem is submitted to the Dynex cloud with the Dynex SDK;
2. The problem is converted into a circuit layout consisting of memristor-based logic gates. This circuit seeks the optimal energy ground state based on voltages and currents;
3. The circuit layout is transformed into a system of ordinary differential equations (ODEs) by applying its equations of motion, effectively creating a "Digital Twin" of the physical system. We have published the specific equations of motion used in this process;
4. This system of ODEs is solved on our distributed network of GPUs, similar to how the trajectory of the moon landing was simulated by considering the equations of motion for the Earth and the Moon;
5. Once the desired number of integration steps (simulated time) is reached, the voltages on the circuit are read and passed back as the result of the computation.

Utilising Digital Twins of physical systems for computation has been demonstrated to exhibit characteristics similar to quantum superposition, quantum entanglement and quantum tunnelling effects. This is evidenced in works such as "Topological Field Theory and Computing with Instantons" [11] or "Superconducting Quantum Many-Body Circuits for Quantum Simulation and Computing" [10], amongst others.

These inherent physical mechanisms enable both n.quantum computing circuits and quantum computers to navigate towards the best possible solution out of a vast array of potential configurations, by effectively mapping the solution into their system.

The NSF San Diego Supercomputer Center performed a stress test to measure the capability of a similar memristor-based simulated system for finding approximate solutions to hard combinatorial optimization problems. These fall into a class known to require exponentially growing resources in the worst case, even to generate approximations.

They showed that in a region where state-of-the-art algorithms demonstrate this exponential growth, simulations of a memristor-based system require time and memory resources that scale only linearly. Up to 64 × 10^6 variables (corresponding to about 1 billion literals) were simulated, namely the largest case that could fit on a single node with 128 GB of DRAM, supporting the theory behind Dynex's n.quantum computing.

They also measured the memory required to perform the simulations. Usage scaled linearly with increasing problem size, whereas traditional methods require exponentially growing memory. This stress test shows the considerable advantage of non-combinatorial, physics-inspired approaches over standard combinatorial ones.

> Dynex Publications

> Dynex Benchmarks

##### Practical n.quantum computing with Dynex

Dynex's platform supports a variety of tools for creating and working with both n.quantum gate circuits and n.quantum annealing models. Quantum programs are mapped onto a Dynex n.quantum circuit and then computed by the contributing workers. This ensures that both traditional quantum algorithms and quantum gate circuits can be computed without modifications on the Dynex platform using the Python-based Dynex SDK. It also provides libraries compatible with Google TensorFlow, IBM Qiskit, PyTorch, Scikit-Learn, and others. All source codes are publicly available.

###### Dynex: Native Support for Quantum Gate Circuits

Dynex's platform natively supports quantum gate circuits, which are integral to many well-known quantum algorithms. Programmers familiar with quantum gate circuit languages such as Qiskit, Cirq, Pennylane, and OpenQASM will find it straightforward to run their computations on the Dynex neuromorphic computing platform. These tools allow for the creation of quantum circuits, enabling the execution of famous algorithms like Shor's algorithm (for efficient problem-solving in number theory), Grover's search algorithm (for unstructured search), Simon's algorithm (for finding hidden periods), and the Deutsch-Jozsa algorithm (for determining the parity of a function).

Quantum gate circuits are a fundamental aspect of quantum computing, employing quantum bits (qubits) to perform computations. Unlike classical bits, qubits can exist in multiple states simultaneously due to quantum superposition, and can be entangled, allowing for the representation and manipulation of complex data structures. Quantum gate circuits manipulate these qubits using a series of quantum gates, analogous to classical logic gates, to perform specific operations.
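Superposition and entanglement can be illustrated with a few lines of linear algebra. The sketch below prepares a Bell state by applying a Hadamard and a CNOT to |00⟩ using plain NumPy; it is a generic textbook illustration, not Dynex-specific code.

```python
import numpy as np

# Single-qubit Hadamard gate and the 2x2 identity.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)

# CNOT on two qubits (control = qubit 0, target = qubit 1);
# it swaps the amplitudes of |10> and |11>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Start in |00>, apply H on qubit 0, then CNOT.
state = np.array([1, 0, 0, 0], dtype=complex)
state = np.kron(H, I) @ state      # superposition: (|00> + |10>)/sqrt(2)
state = CNOT @ state               # entanglement:  (|00> + |11>)/sqrt(2)

probs = np.abs(state) ** 2
print(probs)  # -> [0.5, 0, 0, 0.5]: measurement yields 00 or 11, never 01/10
```

The same circuit could be written in Qiskit, Cirq or OpenQASM and submitted to a platform such as Dynex; the linear-algebra view shown here is what any gate-based program ultimately describes.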

The versatility of quantum gate circuits allows them to implement a wide range of quantum algorithms. Shor's algorithm, for instance, leverages quantum parallelism for efficient problem-solving in number theory. Grover's algorithm offers a quadratic speedup for unstructured search problems, showcasing the potential of quantum computing in database searches and optimization tasks. Simon's algorithm and the Deutsch-Jozsa algorithm further demonstrate the power of quantum computing in solving problems that are infeasible for classical systems, highlighting the unique advantages of quantum superposition and entanglement.

The support for these quantum gate circuits on the Dynex platform means that researchers and developers can seamlessly transition their existing quantum algorithms and applications to leverage Dynex's neuromorphic quantum computing capabilities. This integration facilitates the exploration of new computational paradigms and the development of advanced quantum applications, pushing the boundaries of what is possible with quantum computing.

###### Dynex: Native Support for Quantum Annealing

The Dynex platform also excels in computing Ising and QUBO problems, which play a pivotal role in the field of quantum computing and have established themselves as the de-facto standard for mapping complex optimization and machine learning problems onto quantum systems. These frameworks are instrumental in leveraging the unique capabilities of quantum computers to solve problems that are intractable for classical computers.

The Ising model, originally introduced in statistical mechanics, describes a system of spins that can be in one of two states. This model has been adapted to represent optimization problems, where the goal is to minimize an energy function describing the interactions between spins. Similarly, the QUBO framework represents optimization problems with binary variables, where the objective is to minimize a quadratic polynomial. Both models are equivalent and can be transformed into one another, allowing a broad range of problems to be addressed using either formulation.
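Because the two formulations are equivalent, a QUBO instance can be mapped onto an Ising model with the change of variables s = 2x − 1. A minimal sketch, using a hypothetical 2-variable instance and verifying the energies agree on every configuration:

```python
import itertools
import numpy as np

def qubo_to_ising(Q):
    """Map E(x) = x^T Q x, x in {0,1}^n, onto an Ising model
    E(s) = h.s + s^T J s + offset, s in {-1,+1}^n, via x = (s + 1) / 2."""
    Q = (Q + Q.T) / 2            # symmetrise without changing E(x)
    J = Q / 4
    np.fill_diagonal(J, 0)       # diagonal terms become linear/constant
    h = Q.sum(axis=1) / 2
    offset = (Q.sum() + np.trace(Q)) / 4
    return h, J, offset

# Hypothetical instance; check the two energies agree for all 2^n inputs.
Q = np.array([[-1.0, 2.0], [0.0, -1.0]])
h, J, offset = qubo_to_ising(Q)
for bits in itertools.product([0, 1], repeat=2):
    x = np.array(bits, dtype=float)
    s = 2 * x - 1
    assert np.isclose(x @ Q @ x, h @ s + s @ J @ s + offset)
```

The reverse mapping follows by substituting s = 2x − 1 back, which is why either formulation can be submitted to annealing-style solvers interchangeably.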

The significance of Ising and QUBO problems in quantum computing lies in their natural fit with quantum annealing and gate-based quantum algorithms. Quantum annealers, for instance, directly implement the Ising model to find the ground state of a system, which corresponds to the optimal solution of the problem. This method exploits quantum tunnelling and entanglement to escape local minima, offering a potential advantage over classical optimization techniques. Gate-based quantum computers, on the other hand, use quantum algorithms like the Quantum Approximate Optimization Algorithm (QAOA) to solve QUBO problems. These algorithms use quantum superposition and interference to explore the solution space more efficiently than classical algorithms, potentially leading to faster solution times for certain problems.

The adoption of Ising and QUBO as standards in quantum computing is due to their versatility and the direct mapping of various optimization and machine learning tasks onto quantum hardware. From logistics and finance to drug discovery and artificial intelligence, the ability to frame problems within the Ising or QUBO model opens up new avenues for solving complex challenges with quantum computing. This standardization also facilitates the development of quantum algorithms and the benchmarking of quantum hardware, accelerating progress in the quantum computing field.

##### Challenges of traditional quantum gates and annealing architecture

Quantum computing is an emerging technology with enormous potential to solve complex problems, because it effectively applies the properties of quantum mechanics, such as superposition and entanglement. However, like any technology, there are disadvantages in some of the architectures:

**Error Correction**

Just like in classical computing’s early days, error correction is a major pain point for quantum computing today. Quantum computers are sensitive to noise and difficult to calibrate. Unlike traditional computers, which would experience a bit flip from 0 to 1 or vice versa, quantum errors are more difficult to correct because qubits can take an infinite number of states.

**Hardware & Temperature**

Because quantum computers need to slow down atoms to near stillness, their processors must be kept at or around absolute zero (−273°C). Even the tiniest of fluctuations can cause unwanted movement, so it’s just as important to make sure that they’re under no atmospheric pressure and that they are insulated from the Earth’s magnetic field.

**Scalability**

While quantum computers have shown impressive performance for some tasks, they are still relatively small compared to classical computers. Scaling up quantum computers to hundreds or thousands of qubits while maintaining high levels of coherence and low error rates remains a major challenge.

#### Manifestations of Neuromorphic Quantum Computing

Neuromorphic quantum computing represents a significant evolution in the field of quantum computing by merging quantum principles with neuromorphic engineering. This hybrid approach aims to overcome some of the limitations of traditional quantum systems, such as error correction and scalability, by mimicking the brain's processing capabilities. As a result, neuromorphic quantum computing holds the potential to accelerate the development of quantum technologies and expand their practical applications across various industries.

Neuromorphic Quantum Computing (abbreviated as ‘n.quantum computing’) [1,2] is an unconventional type of computing that uses neuromorphic computing to perform quantum operations [3,4]. It was suggested that quantum algorithms, which are algorithms that run on a realistic model of quantum computation, can be computed equally efficiently with neuromorphic quantum computing [5,6,7,8,9].

Both traditional quantum computing and neuromorphic quantum computing [1] are physics-based unconventional approaches to computation and don’t follow the von Neumann architecture. Both construct a system (a circuit) that represents the physical problem at hand, and then leverage the physical properties of that system to seek the “minimum”. Neuromorphic quantum computing [1] and quantum computing share similar physical properties during computation [9,10].

##### Neuromorphic quantum architecture

**Quantum Computing:** Utilizes quantum circuits with qubits implemented using superconductors, trapped ions, or other quantum technologies. These systems often require extremely low temperatures and sophisticated error correction methods to maintain coherence and reduce error rates.

**Neuromorphic Quantum Computing:** In nature, physical systems tend to evolve toward their lowest energy state: objects slide down hills, hot things cool down, and so on. This behaviour also applies to neuromorphic systems. To imagine this, think of a traveller looking for the best solution by finding the lowest valley in the energy landscape that represents the problem. Classical algorithms seek the lowest valley by placing the traveller at some point in the landscape and allowing that traveller to move based on local variations. While it is generally most efficient to move downhill and avoid climbing hills that are too high, such classical algorithms are prone to leading the traveller into nearby valleys that may not be the global minimum. Numerous trials are typically required, with many travellers beginning their journeys from different points.

In contrast, neuromorphic annealing begins with the traveller simultaneously occupying many coordinates thanks to the phenomenon of inherent parallelism. The probability of being at any given coordinate smoothly evolves as annealing progresses, with the probability increasing around the coordinates of deep valleys. Instantonic jumps allow the traveller to pass through hills, rather than be forced to climb them, reducing the chance of becoming trapped in valleys that are not the global minimum. Long-range order further improves the outcome by allowing the traveller to discover correlations between the coordinates that lead to deep valleys.
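The classical baseline in this analogy, a traveller moving downhill from many starting points, can be sketched in a few lines. The double-well landscape below is a hypothetical example, not a Dynex energy function; it shows why a single descent can get trapped and restarts are needed.

```python
import numpy as np

# A 1-D "landscape" with a shallow local valley near x ~ -0.64
# and a deeper global valley near x ~ 0.77 (hypothetical example).
def E(x):
    return x**4 - x**2 - 0.3 * x

def dE(x):
    return 4 * x**3 - 2 * x - 0.3

def descend(x, lr=0.01, steps=2000):
    """One traveller: plain gradient descent from a single start."""
    for _ in range(steps):
        x -= lr * dE(x)
    return x

# A traveller starting on the left gets trapped in the local valley;
# many travellers from different starts are needed to find the global one.
starts = np.linspace(-2, 2, 9)
finals = [descend(x0) for x0 in starts]
best = min(finals, key=E)
print(best, E(best))  # best is near x ~ 0.77, the global minimum
```

Annealing-style approaches avoid this restart loop by letting the state effectively occupy, and tunnel between, many coordinates at once.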

To speed computation, our neuromorphic platform taps directly into an unimaginably vast fabric of reality—the strange and counterintuitive world of physics- and biology-inspired computing. Rather than store information using bits represented by 0s or 1s as conventional computers do, neuromorphic computers use voltages and currents. The dynamic long-range behaviour, along with its trend towards optimal energy and instantonic effects, enables neuromorphic computers to consider and manipulate many combinations of bits simultaneously.

> Whitepaper: Dynex Neuromorphic Quantum Circuits’ Equations of Motion

> Whitepaper: Dynex Data Flow

###### n.Quantum Entanglement Effects

Quantum computing utilises certain principles of quantum mechanics, such as entanglement, to enable its components to be interconnected over vast distances. This interconnection means that a change in one component can instantly affect others, no matter how far apart they are. This characteristic facilitates the computer's ability to swiftly determine the most energy-efficient state. Similarly, n.quantum computing exhibits a related phenomenon through long-distance connections stemming from a state known as criticality [11]. In this setup, each gate within the circuit is designed to be responsive to distant gates, creating a network where each gate can influence and be influenced by others across the system. As the circuit evolves, the connections between different gates reach a critical state, making the system prone to sudden, large-scale changes or "avalanches" triggered by minor disturbances anywhere in the circuit. These avalanches, driven by the circuit's long-range correlations, enable the system to quickly find the lowest energy state, significantly speeding up computation compared to traditional methods [9,11].

###### n.Quantum Tunnelling Effects

Quantum computing also harnesses the principle of quantum tunnelling to find the most efficient solution. This process enables it to bypass energy barriers and access lower energy states that represent improved solutions. In contrast, n.quantum computing circuits [12] utilise the behaviour of electrical currents and voltages to expedite the transition to more favourable states. The concept of an instanton [9,11], borrowed from the study of dynamical systems, explains how these circuits navigate through energy barriers. This happens as the system encounters "saddle" points within the electrical landscape, which possess characteristics that attract the system initially but then repel it as it gets closer. This interaction results in the system being propelled away from these points at high speed, effectively allowing it to jump from one state to another across the landscape.

This phenomenon [11] mirrors quantum tunnelling but occurs in the realm of voltages and currents. In neuromorphic quantum computing circuits, this behaviour emerges from the collective action of memristor-based gates, which induce widespread fluctuations in voltages, thereby swiftly steering the circuit towards superior configurations.

These inherent physical mechanisms enable both n.quantum computing circuits and quantum computers to navigate towards the best possible solution out of a vast array of potential configurations, by effectively mapping the solution into their system [9].

##### Error Handling

**Quantum Computing**: There are currently a number of significant engineering obstacles to constructing useful quantum computers capable of solving real-world problems at scale. The major challenge in quantum computing is maintaining the coherence of entangled qubits’ quantum states; they suffer from quantum decoherence and loss of state fidelity due to outside noise (vibrations, fluctuations in temperature, electromagnetic waves). To overcome noise interference, quantum computers are often isolated in large refrigerators cooled to near absolute zero (colder than outer space) to shield them and, in turn, reduce errors in calculation. Although error-correcting techniques are being deployed, no existing quantum computer can maintain the full coherence required to solve industrial-sized problems at scale today. Therefore, they are mostly limited to solving toy-sized problems.

**Neuromorphic Quantum Computing**: In contrast to quantum computing, n.quantum computing possesses the capability to be efficiently emulated on contemporary computers through software, as well as to be physically constructed using conventional electrical components. Quantum algorithms can be computed efficiently with n.quantum computing at scale, potentially solving real-world problems. It inherently manages errors and noise, reducing the need for extensive error correction protocols.

##### Scalability

**Quantum Computing**: Scalability is limited by the physical requirements of maintaining qubits and their coherence. Increasing the number of qubits while managing errors is a major challenge.

**Neuromorphic Quantum Computing**: Promises improved scalability by leveraging neuromorphic principles to create large-scale, efficient quantum systems that can process information more like a human brain, offering more practical scalability.

##### Real-World Applications

As outlined above, decoherence and the engineering overhead of error correction currently limit traditional quantum computers to toy-sized problems.

In contrast to traditional quantum computing, n.quantum computing possesses the capability to be efficiently emulated on contemporary computers through software, as well as to be physically constructed using conventional electrical components. Quantum algorithms can be computed efficiently with n.quantum computing at scale, potentially solving real-world problems [9,11].

##### EU Horizon 2020 Project “Neuromorphic Quantum Computing”

The European Union-funded project "Neuromorphic Quantum Computing" [2] resulted in the publication of eighteen peer-reviewed articles. Its objective was to introduce hardware inspired by the human brain with quantum functionalities. The project aimed to construct superconducting quantum neural networks to facilitate the creation of dedicated neuromorphic quantum machine learning hardware, which could potentially outperform traditional von Neumann architectures in future generations. This achievement represented the merger of two forefront developments in information processing: machine learning and quantum computing, into a novel technology. Contrary to conventional machine learning techniques that simulate neural functions in software on von Neumann hardware, neuromorphic quantum hardware was projected to provide a significant advantage by enabling parallel training on various batches of real-world data. This feature was anticipated to confer a quantum benefit. Neuromorphic hardware architectures were seen as critically important for both classical and quantum computing, especially for distributed and embedded computing tasks where the extensive scaling of current architectures could not offer a sustainable solution.

##### Turing complete n.quantum circuits

A Dynex machine is a class of general-purpose computing machines based on memory systems, where information is processed and stored at the same physical location. We analysed the memory properties of the Dynex machine to demonstrate that it possesses universal computing power (it is Turing-complete), intrinsic parallelism, functional polymorphism, and information overhead, namely that its collective states can support exponential data compression directly in memory. Moreover, we show that the Dynex machine is capable of solving NP-complete problems in polynomial time, just like a non-deterministic Turing machine. The Dynex machine, however, requires only a polynomial number of memory cells due to its information overhead. It is important to note that even though these results do not prove NP=P within the Turing paradigm, the concept of Dynex machines represents a paradigm shift from the current von Neumann architecture, bringing us closer to the concept of brain-like neural computation.

##### Efficient n.quantum circuit simulation

It is important to realise that computing is fundamentally a physical process. The statement may seem obvious when considering the physical processes harnessed by the electronic components of computers (for example, transistors); however, virtually any physical process can be harnessed for some form of computation. Note that we are speaking of Alan Turing’s model of computation, that is, a mapping (transition function) between two sets of finite symbols (input and output) in discrete time.

It is important to distinguish between continuous and discrete time: Dynex n.quantum circuits operate in continuous time, though their simulation on modern computers requires the discretisation of time. Continuous time is physical time: a fundamental physical quantity. Discrete time is not a physical quantity, and might be best understood as counting time: counting something (function calls, integration steps, etc.) to give an indication (perhaps an approximation) of the physical time. In the literature of physics and other physical sciences, physical time has an assigned SI unit of seconds, whereas in computer science and related disciplines, counting time is dimensionless.
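The distinction can be made concrete with forward-Euler integration: the solver counts dimensionless steps, while physical time is only reconstructed as steps × dt. A minimal sketch:

```python
import math

# Integrate dv/dt = -v, v(0) = 1, whose exact solution is v(t) = exp(-t).
# Continuous (physical) time t is measured in seconds; the simulation only
# counts discrete integration steps and approximates t as steps * dt.
dt = 0.001          # seconds of physical time per integration step
steps = 2000        # counting time: a dimensionless step count
v = 1.0
for _ in range(steps):
    v += dt * (-v)  # one forward-Euler step

t = steps * dt      # reconstructed physical time: ~2.0 seconds
print(v, math.exp(-t))  # the discrete result approximates the continuous one
```

Shrinking dt (while increasing the step count) drives the counted, discretised trajectory towards the continuous-time solution.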

##### The fourth missing circuit element

Modern computers rely on the implementation of uni-directional logic gates that represent Boolean functions. Circuits built to simulate Boolean functions are desirable because they are deterministic: A unique input has a unique, reproducible output.

Modern computers relegate the task of logic to central processing units (CPUs). However, the resources required for the task might exhaust the resources present within the CPU, specifically, cache memory. For typical processes on modern computers, random-access memory (RAM) is the memory used for data and machine code, and is external to the CPU. The physical separation of CPU and RAM results in what is known as the von Neumann bottleneck, a slow down in computation caused by the transfer of information between physical locations.

To overcome the von Neumann bottleneck, it was proposed to perform computing with and in memory, utilising ideal memristors. Distinct from mere in-memory computation, this is an efficient computing paradigm that uses memory to process and store information in the same physical location.

A memristor is an electrical component that limits or regulates the flow of electrical current in a circuit and remembers the amount of charge that has previously flowed through it. Memristors are important because they are non-volatile, meaning that they retain memory without power.

The original concept for memristors, as conceived in 1971 by Professor Leon Chua at the University of California, Berkeley, was a nonlinear, passive two-terminal electrical component that linked electric charge and magnetic flux (“The missing circuit element“). Since then, the definition of memristor has been broadened to include any form of non-volatile memory that is based on resistance switching, which increases the flow of current in one direction and decreases the flow of current in the opposite direction.

Memristors, which are considered to be a sub-category of resistive RAM, are one of several storage technologies that have been predicted to replace flash memory. Scientists at HP Labs built the first working memristor in 2008 and since that time, researchers in many large IT companies have explored how memristors can be used to create smaller, faster, low-power computers that do not require data to be transferred between volatile and non-volatile memory.
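The charge-memory behaviour described above can be sketched with the linear-drift model associated with the 2008 HP Labs device; all parameter values below are illustrative, not taken from any datasheet.

```python
import numpy as np

# Linear-drift memristor model (after the 2008 HP Labs device).
R_on, R_off = 100.0, 16e3      # ohms: fully doped / fully undoped resistance
D = 10e-9                      # device thickness (m)
mu = 1e-14                     # dopant mobility (m^2 s^-1 V^-1)

w = 0.5 * D                    # internal state: width of the doped region
dt = 1e-5                      # integration step (s)
currents = []
for n in range(2000):
    t = n * dt
    v = np.sin(2 * np.pi * 50 * t)            # 50 Hz sinusoidal drive
    M = R_on * (w / D) + R_off * (1 - w / D)  # memristance depends on state
    i = v / M
    w += dt * mu * (R_on / D) * i             # state "remembers" past charge
    w = min(max(w, 0.0), D)                   # keep the state physical
    currents.append(i)

# Non-volatility: w (hence the resistance M) persists when the drive stops.
print(M)
```

Plotting `currents` against the drive voltage would show the pinched hysteresis loop that is the memristor's signature.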

A digital Dynex circuit is realised as a memristor based bi-directional logic circuit. These circuits differ from traditional logic circuits in that input and output terminals are no longer distinct. In a traditional logic circuit, some input is given and the output is the result of computation performed on the input, via uni-directional logic gates. In contrast, a memristor based bi-directional logic circuit can be operated by assigning the output terminals, then reading the input terminals.

Self-organising logic is a recently suggested framework that allows the solution of Boolean truth tables “in reverse,” i.e., it is able to satisfy the logical proposition of gates regardless of which terminal(s) the truth value is assigned to (“terminal-agnostic logic”). It can be realised if time non-locality (memory) is present. A practical realisation of self-organising logic gates can be achieved by combining circuit elements with and without memory.

By employing one such realisation, it can be shown numerically that self-organising logic gates exploit elementary instantons to reach equilibrium points. Instantons are classical trajectories of the non-linear equations of motion describing self-organising logic gates, and connect topologically distinct critical points in the phase space. Linear analysis at those points shows that these instantons connect the initial critical point of the dynamics, which has at least one unstable direction, directly to the final fixed point. It can also be shown that the memory content of these gates only affects the relaxation time needed to reach the logically consistent solution.

Solving the corresponding stochastic differential equations shows that, since instantons connect critical points, noise and perturbations may change the instanton trajectory in the phase space, but not the initial and final critical points. Therefore, even for extremely large noise levels, the gates self-organise to the correct solution.
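Terminal-agnostic logic can be illustrated without simulating any dynamics: the sketch below simply enumerates the truth-table rows of an AND gate consistent with whichever terminals are pinned, which is the logical proposition a self-organising gate relaxes to. The function names here are hypothetical, purely for illustration.

```python
from itertools import product

# Truth table of a two-input AND gate, as the set of consistent terminal
# assignments (a, b, out). A self-organising gate settles into one of these
# rows no matter which terminal is pinned; here we just enumerate them.
AND_ROWS = {(a, b, a & b) for a, b in product([0, 1], repeat=2)}

def consistent(pinned):
    """Return all truth-table rows that agree with the pinned terminals,
    e.g. pinned = {'out': 1} or {'a': 0, 'out': 0}."""
    names = ('a', 'b', 'out')
    return sorted(row for row in AND_ROWS
                  if all(row[names.index(k)] == v for k, v in pinned.items()))

print(consistent({'out': 1}))   # AND solved "in reverse": only (1, 1, 1) fits
print(consistent({'a': 0}))     # both rows with a = 0 remain consistent
```

A physical self-organising gate reaches one of these consistent rows via its equations of motion rather than by enumeration, which is what lets assembled circuits scale.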

Note that the self-organising logic we consider here has no relation to the invertible universal Toffoli gate that is employed, e.g., in quantum computation. Toffoli gates are truly one-to-one invertible, having 3-bit inputs and 3-bit outputs. On the other hand, self-organising logic gates need only satisfy the correct logical proposition, without a one-to-one relation between any number of input and output terminals. It is worth mentioning another type of bi-directional logic that has recently been discussed, using stochastic units (called p-bits). These units fluctuate among all possible consistent inputs. In contrast to that work, however, the invertible logic we consider here is deterministic.

With time being a fundamental ingredient, a dynamical systems approach is most natural to describe such gates. In particular, non-linear electronic (non-quantum) circuit elements with and without memory have been suggested as building blocks to realise self-organising logic gates in practice.

By assembling self-organising logic gates with the appropriate architecture, one then obtains circuits that can solve complex problems efficiently by mapping the equilibrium (fixed) points of such circuits to the solution of the problem at hand. Moreover, it has been proved that, if those systems are engineered to be point dissipative, then, if equilibrium points are present, they do not show chaotic behaviour or periodic orbits.

It was subsequently demonstrated, using topological field theory (TFT) applied to dynamical systems, that these circuits are described by a Witten-type TFT, and they support long-range order, mediated by instantons. Instantons are classical trajectories of the non-linear equations of motion describing these circuits.

###### References

[1] Pehle, Christian; Wetterich, Christof (2021-03-30), *Neuromorphic quantum computing*, arXiv:2005.01533

[2] "Neuromorphic Quantum Computing | Quromorphic Project | Fact Sheet | H2020". *CORDIS | European Commission*. doi:10.3030/828826. Retrieved 2024-03-18

[3] Wetterich, C. (2019-11-01). "Quantum computing with classical bits". *Nuclear Physics B*. **948**: 114776. arXiv:1806.05960. Bibcode:2019NuPhB.94814776W. doi:10.1016/j.nuclphysb.2019.114776. ISSN 0550-3213

[4] Pehle, Christian; Meier, Karlheinz; Oberthaler, Markus; Wetterich, Christof (2018-10-24), *Emulating quantum computation with artificial neural networks*, arXiv:1810.10335

[5] Carleo, Giuseppe; Troyer, Matthias (2017-02-10). "Solving the quantum many-body problem with artificial neural networks". *Science*. **355** (6325): 602–606. arXiv:1606.02318. Bibcode:2017Sci...355..602C. doi:10.1126/science.aag2302. ISSN 0036-8075. PMID 28183973

[6] Torlai, Giacomo; Mazzola, Guglielmo; Carrasquilla, Juan; Troyer, Matthias; Melko, Roger; Carleo, Giuseppe (2018-02-26). "Neural-network quantum state tomography". *Nature Physics*. **14** (5): 447–450. arXiv:1703.05334. Bibcode:2018NatPh..14..447T. doi:10.1038/s41567-018-0048-5. ISSN 1745-2481

[7] Sharir, Or; Levine, Yoav; Wies, Noam; Carleo, Giuseppe; Shashua, Amnon (2020-01-16). "Deep Autoregressive Models for the Efficient Variational Simulation of Many-Body Quantum Systems". *Physical Review Letters*. **124** (2): 020503. arXiv:1902.04057. Bibcode:2020PhRvL.124b0503S. doi:10.1103/PhysRevLett.124.020503. PMID 32004039

[8] Broughton, Michael; Verdon, Guillaume; McCourt, Trevor; Martinez, Antonio J.; Yoo, Jae Hyeon; Isakov, Sergei V.; Massey, Philip; Halavati, Ramin; Niu, Murphy Yuezhen (2021-08-26), *TensorFlow Quantum: A Software Framework for Quantum Machine Learning*, arXiv:2003.02989

[9] Broughton, Michael; Verdon, Guillaume; McCourt, Trevor; Martinez, Antonio J.; Yoo, Jae Hyeon; Isakov, Sergei V.; Massey, Philip; Halavati, Ramin; Niu, Murphy Yuezhen (2021-08-26), *TensorFlow Quantum: A Software Framework for Quantum Machine Learning*, arXiv:2003.02989

[10] Wilkinson, Samuel A.; Hartmann, Michael J. (2020-06-08). "Superconducting quantum many-body circuits for quantum simulation and computing". *Applied Physics Letters*. **116** (23). arXiv:2003.08838. Bibcode:2020ApPhL.116w0501W. doi:10.1063/5.0008202. ISSN 0003-6951

[11] Di Ventra, Massimiliano; Traversa, Fabio L.; Ovchinnikov, Igor V. (2017-08-07). "Topological Field Theory and Computing with Instantons". *Annalen der Physik*. **529** (12). arXiv:1609.03230. Bibcode:2017AnP...52900123D. doi:10.1002/andp.201700123. ISSN 0003-3804

[12] Gonzalez-Raya, Tasio; Lukens, Joseph M.; Céleri, Lucas C.; Sanz, Mikel (2020-02-14). "Quantum Memristors in Frequency-Entangled Optical Fields". *Materials*. **13** (4): 864. arXiv:1912.10019. Bibcode:2020Mate...13..864G. doi:10.3390/ma13040864. ISSN 1996-1944. PMC 7079656. PMID 32074986


##### Neuromorphic quantum computing in simple terms


This technology is exciting because it leads to computer systems that are faster and capable of handling complex tasks more efficiently than traditional computers.

##### Harnessing quantum advantage today, a decade ahead of projections

Market researchers are forecasting that full quantum advantage, characterized by effective error correction and scalability, will be achieved by 2040. However, Dynex's n.quantum computing technology enables customers to leverage this quantum advantage today, effectively providing a technological leap of more than a decade. By integrating advanced neuromorphic quantum computing capabilities, Dynex empowers users with unprecedented computational power and efficiency, positioning itself at the forefront of technological innovation well ahead of the projected timeline.

##### Neuromorphic quantum computing: A milestone in quantum history

Neuromorphic quantum computing, made publicly available in 2023, represents one of the most significant milestones in the history of quantum computing. As depicted in the timeline of key algorithm development, various breakthroughs have been made since the 1980s, including the Deutsch algorithm, Grover's algorithm, and quantum machine learning algorithms. The introduction of neuromorphic quantum computing builds upon these advancements by utilizing neuromorphic circuits that emulate the brain's architecture to perform quantum computations. This breakthrough technology enhances the efficiency and scalability of quantum computing, addressing limitations of traditional quantum hardware. By leveraging neuromorphic systems, we are now able to tackle complex computational problems with unprecedented speed and accuracy, marking a revolutionary leap forward in the field of quantum computing.

##### The Dynex quantum advantage

Dynex achieves quantum advantage by efficiently computing a Digital Twin of a physical system through the following steps:

1. An n.quantum computing problem is submitted to the Dynex cloud with the Dynex SDK.
2. The problem is converted into a circuit layout of memristor-based logic gates; this circuit seeks its optimal energy ground state based on voltages and currents.
3. The circuit layout is transformed into a system of ordinary differential equations (ODEs) by applying its equations of motion, effectively creating a "Digital Twin" of the physical system. The specific equations of motion used in this process have been published.
4. The system of ODEs is solved on the distributed network of GPUs, much as the trajectory of the moon landing was simulated by integrating the equations of motion of the Earth and the Moon.
5. Once the desired number of integration steps (simulated time) is reached, the voltages on the circuit are read out and returned as the result of the computation.
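As an illustration of steps 3 to 5, the toy sketch below (not Dynex's published equations of motion; the QUBO matrix, dynamics, and step size are all hypothetical) relaxes a two-variable QUBO to continuous "voltages", integrates a simple gradient ODE with forward-Euler steps standing in for simulated time, and reads the thresholded voltages back as the result:

```python
import numpy as np

# Hypothetical 2-variable QUBO; its minimum is at x = (1, 0) with energy -2.
Q = np.array([[-2.0, 3.0],
              [ 0.0, -1.0]])

def qubo_energy(x):
    return float(x @ Q @ x)

def grad(v):
    # E(v) = v^T Q v  =>  dE/dv = (Q + Q^T) v
    return (Q + Q.T) @ v

v = np.array([0.5, 0.5])            # initial "voltages"
dt = 0.01
for _ in range(5000):               # integration steps = simulated time
    v = np.clip(v - dt * grad(v), 0.0, 1.0)

bits = (v > 0.5).astype(int)        # read out the final circuit voltages
print(bits, qubo_energy(bits))      # -> [1 0] -2.0
```

The gradient flow is only a stand-in for the memristor circuit dynamics, but it shows the shape of the pipeline: continuous dynamics, a fixed number of integration steps, then a threshold read-out.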

Utilising Digital Twins of physical systems for computation has been demonstrated to exhibit similar characteristics to quantum superposition, quantum entanglement and quantum tunnelling effects. This is evidenced in works such as "Topological Field Theory and Computing with Instantons"[11] or "Superconducting Quantum Many-Body Circuits for Quantum Simulation and Computing"[10] amongst others.

These inherent physical mechanisms enable both n.quantum computing circuits and quantum computers to navigate towards the best possible solution out of a vast array of potential configurations, by effectively mapping the solution into their system.

The NSF San Diego Supercomputer Center performed a stress test to measure the capability of a similar simulated memristor-based system to find approximate solutions to hard combinatorial optimization problems. These problems fall into a class known to require exponentially growing resources in the worst case, even to generate approximations.

They showed that in a regime where state-of-the-art algorithms exhibit this exponential growth, simulations of a memristor-based system require only time and memory resources that scale linearly. Problems with up to 64 × 10^6 variables (corresponding to about 1 billion literals), the largest cases that could fit on a single node with 128 GB of DRAM, were simulated, supporting the theory behind Dynex's n.quantum computing.

They also measured the memory required to perform the simulations. Usage scaled linearly with increasing problem size, whereas traditional methods require exponentially growing memory. This stress test shows the considerable advantage of non-combinatorial, physics-inspired approaches over standard combinatorial ones.

> Dynex Publications

> Dynex Benchmarks

##### Practical n.quantum computing with Dynex

Dynex's platform supports a variety of tools for creating and working with both n.quantum gate circuits and n.quantum annealing models. Quantum programs are mapped onto a Dynex n.quantum circuit and then computed by the contributing workers. This ensures that both traditional quantum algorithms and quantum gate circuits can be computed without modification on the Dynex platform using the Python-based Dynex SDK. It also provides libraries compatible with Google TensorFlow, IBM Qiskit, PyTorch, Scikit-Learn, and others. All source code is publicly available.

###### Dynex: Native Support for Quantum Gate Circuits

Dynex's platform natively supports quantum gate circuits, which are integral to many well-known quantum algorithms. Programmers familiar with quantum gate circuit languages such as Qiskit, Cirq, Pennylane, and OpenQASM will find it straightforward to run their computations on the Dynex neuromorphic computing platform. These tools allow for the creation of quantum circuits, enabling the execution of famous algorithms like Shor's algorithm (for integer factorisation and related number-theoretic problems), Grover's search algorithm (for unstructured search), Simon's algorithm (for finding a hidden bit-string), and the Deutsch-Jozsa algorithm (for deciding whether a function is constant or balanced).

Quantum gate circuits are a fundamental aspect of quantum computing, employing quantum bits (qubits) to perform computations. Unlike classical bits, qubits can exist in multiple states simultaneously due to quantum superposition, and can be entangled, allowing for the representation and manipulation of complex data structures. Quantum gate circuits manipulate these qubits using a series of quantum gates, analogous to classical logic gates, to perform specific operations.
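The behaviour described above can be sketched with plain NumPy state vectors, using only textbook gate matrices and no particular SDK: a Hadamard puts one qubit into superposition, and a CNOT entangles it with a second, producing the Bell state (|00⟩ + |11⟩)/√2:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],                 # flips target when the
                 [0, 1, 0, 0],                 # first (control) qubit is 1
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4)
state[0] = 1.0                                 # start in |00>
state = np.kron(H, I) @ state                  # H on qubit 0: superposition
state = CNOT @ state                           # entangle the two qubits

probs = np.abs(state) ** 2
print(probs)                                   # probability on |00> and |11> only
```

Measuring this state yields 00 or 11 with equal probability and never 01 or 10, which is exactly the correlation that entanglement provides and that no product of independent bits can reproduce.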

The versatility of quantum gate circuits allows them to implement a wide range of quantum algorithms. Shor's algorithm, for instance, leverages quantum period-finding to factor integers far more efficiently than any known classical method. Grover's algorithm offers a quadratic speedup for unstructured search problems, showcasing the potential of quantum computing in database searches and optimization tasks. Simon's algorithm and the Deutsch-Jozsa algorithm further demonstrate the power of quantum computing on oracle problems where quantum algorithms provably outperform classical ones, highlighting the unique advantages of quantum superposition and entanglement.

The support for these quantum gate circuits on the Dynex platform means that researchers and developers can seamlessly transition their existing quantum algorithms and applications to leverage Dynex's neuromorphic quantum computing capabilities. This integration facilitates the exploration of new computational paradigms and the development of advanced quantum applications, pushing the boundaries of what is possible with quantum computing.

###### Dynex: Native Support for Quantum Annealing

The Dynex platform also excels at computing Ising and QUBO problems, which play a pivotal role in the field of quantum computing and have established themselves as the de facto standard for mapping complex optimization and machine learning problems onto quantum systems. These frameworks are instrumental in leveraging the unique capabilities of quantum computers to solve problems that are intractable for classical computers.

The Ising model, originally introduced in statistical mechanics, describes a system of spins that can be in one of two states. This model has been adapted to represent optimization problems, where the goal is to minimize an energy function describing the interactions between spins. Similarly, the QUBO framework represents optimization problems with binary variables, where the objective is to minimize a quadratic polynomial. Both models are equivalent and can be transformed into one another, allowing a broad range of problems to be addressed using either formulation.
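The equivalence can be made concrete with the standard substitution x = (1 + s)/2 between binary variables x ∈ {0, 1} and spins s ∈ {−1, +1}. The sketch below (the Q matrix is an arbitrary illustrative example) converts a QUBO into Ising coefficients and checks that the two energies agree, up to a constant offset, on every configuration:

```python
import itertools
import numpy as np

def qubo_to_ising(Q):
    """Convert E(x) = x^T Q x into h, J, offset with E(s) = s^T J s + h.s + offset."""
    Q = np.triu(Q + np.tril(Q, -1).T)      # fold into upper-triangular form
    J = Q / 4.0                            # couplings: J_ij = Q_ij / 4 (i < j)
    np.fill_diagonal(J, 0.0)
    d = Q.diagonal()
    h = d / 2.0 + (Q.sum(axis=0) + Q.sum(axis=1) - 2.0 * d) / 4.0
    offset = d.sum() / 2.0 + (Q.sum() - d.sum()) / 4.0
    return h, J, offset

Q = np.array([[-1.0, 2.0, 0.5],
              [ 0.0, -2.0, 1.0],
              [ 0.0, 0.0,  3.0]])          # illustrative QUBO matrix

h, J, offset = qubo_to_ising(Q)

# exhaustive check: both formulations give the same energy everywhere
for x in itertools.product([0, 1], repeat=3):
    x = np.array(x, dtype=float)
    s = 2.0 * x - 1.0                      # x = (1 + s) / 2
    assert np.isclose(x @ Q @ x, s @ J @ s + h @ s + offset)
print("all 8 configurations agree")
```

Because the transformation is exact and invertible, any solver that handles one formulation handles both.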

The significance of Ising and QUBO problems in quantum computing lies in their natural fit with quantum annealing and gate-based quantum algorithms. Quantum annealers, for instance, directly implement the Ising model to find the ground state of a system, which corresponds to the optimal solution of the problem. This method exploits quantum tunnelling and entanglement to escape local minima, offering a potential advantage over classical optimization techniques. Gate-based quantum computers, on the other hand, use quantum algorithms like the Quantum Approximate Optimization Algorithm (QAOA) to solve QUBO problems. These algorithms use quantum superposition and interference to explore the solution space more efficiently than classical algorithms, potentially leading to faster solution times for certain problems.

The adoption of Ising and QUBO as standards in quantum computing is due to their versatility and the direct mapping of various optimization and machine learning tasks onto quantum hardware. From logistics and finance to drug discovery and artificial intelligence, the ability to frame problems within the Ising or QUBO model opens up new avenues for solving complex challenges with quantum computing. This standardization also facilitates the development of quantum algorithms and the benchmarking of quantum hardware, accelerating progress in the quantum computing field.

##### Challenges of traditional quantum gates and annealing architecture

Quantum computing is an emerging technology with enormous potential to solve complex problems, because it exploits properties of quantum mechanics such as superposition and entanglement. However, like any technology, some of its architectures have disadvantages:

**Error Correction**

Just like in classical computing’s early days, error correction is a major pain point for quantum computing today. Quantum computers are sensitive to noise and difficult to calibrate. Unlike a traditional computer, which would experience a simple bit flip from 0 to 1 or vice versa, quantum errors are more difficult to correct because a qubit’s state lies on a continuum rather than taking just two values.

**Hardware & Temperature**

Because quantum computers need to slow atoms to near stillness, their processors must be kept at or around absolute zero (-273°C). Even the tiniest fluctuation can cause unwanted movement, so it is just as important to keep them under vacuum and insulated from the Earth’s magnetic field.

**Scalability**

While quantum computers have shown impressive performance for some tasks, they are still relatively small compared to classical computers. Scaling up quantum computers to hundreds or thousands of qubits while maintaining high levels of coherence and low error rates remains a major challenge.

#### Manifestations of Neuromorphic Quantum Computing

Neuromorphic quantum computing represents a significant evolution in the field of quantum computing by merging quantum principles with neuromorphic engineering. This hybrid approach aims to overcome some of the limitations of traditional quantum systems, such as error correction and scalability, by mimicking the brain's processing capabilities. As a result, neuromorphic quantum computing holds the potential to accelerate the development of quantum technologies and expand their practical applications across various industries.

Neuromorphic Quantum Computing (abbreviated as ‘n.quantum computing’) [1,2] is an unconventional type of computing that uses neuromorphic computing to perform quantum operations [3,4]. It has been suggested that quantum algorithms, i.e., algorithms that run on a realistic model of quantum computation, can be computed equally efficiently with neuromorphic quantum computing [5,6,7,8,9].

Both traditional quantum computing and neuromorphic quantum computing [1] are physics-based, unconventional approaches to computation that do not follow the von Neumann architecture. Both construct a system (a circuit) that represents the physical problem at hand and then leverage the physical properties of that system to seek its “minimum”. Neuromorphic quantum computing [1] and quantum computing share similar physical properties during computation [9,10].

##### Neuromorphic quantum architecture

**Quantum Computing:** Utilizes quantum circuits with qubits implemented using superconductors, trapped ions, or other quantum technologies. These systems often require extremely low temperatures and sophisticated error correction methods to maintain coherence and reduce error rates.

**Neuromorphic Quantum Computing:** In nature, physical systems tend to evolve toward their lowest energy state: objects slide down hills, hot things cool down, and so on. This behaviour also applies to neuromorphic systems. To picture this, think of a traveller looking for the best solution by finding the lowest valley in the energy landscape that represents the problem. Classical algorithms seek the lowest valley by placing the traveller at some point in the landscape and allowing that traveller to move based on local variations. While it is generally most efficient to move downhill and avoid climbing hills that are too high, such classical algorithms are prone to leading the traveller into nearby valleys that may not contain the global minimum. Numerous trials are typically required, with many travellers beginning their journeys from different points.

In contrast, neuromorphic annealing begins with the traveller simultaneously occupying many coordinates thanks to the phenomenon of inherent parallelism. The probability of being at any given coordinate smoothly evolves as annealing progresses, with the probability increasing around the coordinates of deep valleys. Instantonic jumps allow the traveller to pass through hills, rather than be forced to climb them, reducing the chance of becoming trapped in valleys that are not the global minimum. Long-range order further improves the outcome by allowing the traveller to discover correlations between the coordinates that lead to deep valleys.
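The classical "many travellers" strategy described above can be pictured with a toy sketch (the rugged one-dimensional landscape and all parameters are hypothetical): each traveller simply walks downhill from its starting point, so individual travellers get trapped in whichever valley they start near, and many restarts are needed to locate the global minimum:

```python
import numpy as np

def landscape(x):
    # rugged landscape: a parabola with several local valleys carved in
    return x**2 + 2.0 * np.sin(5.0 * x)

def descend(x, lr=0.01, steps=2000):
    # plain downhill steps using the analytic gradient of the landscape
    for _ in range(steps):
        x -= lr * (2.0 * x + 10.0 * np.cos(5.0 * x))
    return x

starts = np.linspace(-3.0, 3.0, 13)        # 13 travellers, evenly spaced
finals = np.array([descend(x0) for x0 in starts])
best = float(finals[np.argmin(landscape(finals))])
print(best, float(landscape(best)))        # best traveller ends near x = -0.30
```

Most travellers settle in shallow local valleys; only the ones that happen to start in the right basin reach the deepest valley, which is the restart overhead the annealing picture avoids.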

To speed computation, our neuromorphic platform taps directly into the strange and counterintuitive world of physics- and biology-inspired computing. Rather than store information using bits represented by 0s or 1s as conventional computers do, neuromorphic computers use voltages and currents. The dynamic long-range behaviour, along with the trend towards optimal energy and instantonic effects, enables neuromorphic computers to consider and manipulate many combinations of bits simultaneously.

> Whitepaper: Dynex Neuromorphic Quantum Circuits’ Equations of Motion

> Whitepaper: Dynex Data Flow

###### n.Quantum Entanglement Effects

Quantum computers utilise certain principles of quantum mechanics, such as entanglement, to enable their components to be interconnected over vast distances. This interconnection means that a change in one component can instantly affect others, no matter how far apart they are. This characteristic facilitates the computer's ability to swiftly determine the most energy-efficient state. Similarly, n.quantum computing exhibits a related phenomenon through long-distance connections stemming from a state known as criticality [11]. In this setup, each gate within the circuit is designed to be responsive to distant gates, creating a network where each gate can influence and be influenced by others across the system. As the circuit evolves, the connections between different gates reach a critical state, making the system prone to sudden, large-scale changes or "avalanches" triggered by minor disturbances anywhere in the circuit. These avalanches, driven by the circuit's long-range correlations, enable the system to quickly find the lowest energy state, significantly speeding up computation compared to traditional methods [9,11].

###### n.Quantum Tunnelling Effects

Quantum computers also harness the principle of quantum tunnelling to find the most efficient solution. This process enables them to bypass energy barriers and access lower energy states that represent improved solutions. In contrast, n.quantum computing circuits [12] utilise the behaviour of electrical currents and voltages to expedite the transition to more favourable states. The concept of an instanton [9,11], borrowed from the study of dynamical systems, explains how these circuits navigate through energy barriers. This happens as the system encounters "saddle" points within the electrical landscape, which possess characteristics that attract the system initially but then repel it as it gets closer. This interaction results in the system being propelled away from these points at high speed, effectively allowing it to jump from one state to another across the landscape.

This phenomenon [11] mirrors quantum tunnelling but occurs in the realm of voltages and currents. In neuromorphic quantum computing circuits, this behaviour emerges from the collective action of memristor based gates, which induce widespread fluctuations in voltages, thereby swiftly steering the circuit towards superior configurations.
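The saddle-point mechanism can be illustrated with a minimal two-dimensional toy system (not the actual circuit equations; the dynamics and starting point are hypothetical): the origin attracts along one direction and repels along the other, so a trajectory is first drawn in towards the saddle and then ejected at speed towards another region of phase space:

```python
import numpy as np

def step(p, dt=0.01):
    x, y = p
    return np.array([x - dt * x,      # stable direction: attracted to saddle
                     y + dt * y])     # unstable direction: repelled from it

p = np.array([2.0, 1e-3])             # start far out, almost on the stable axis
dist = [float(np.linalg.norm(p))]
for _ in range(1200):
    p = step(p)
    dist.append(float(np.linalg.norm(p)))
dist = np.array(dist)

# distance to the saddle first shrinks, then grows as the unstable
# direction takes over: drawn in, then propelled away
print(dist[0], dist.min(), dist[-1])
```

This pull-in-then-eject shape is the one-jump caricature of an instanton connecting two critical points.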


##### Error Handling

**Quantum Computing**: There are currently a number of significant engineering obstacles to constructing useful quantum computers capable of solving real-world problems at scale. The major challenge in quantum computing is maintaining the coherence of entangled qubits’ quantum states; qubits suffer from decoherence and loss of state fidelity due to outside noise (vibrations, temperature fluctuations, electromagnetic waves). To overcome noise interference, quantum computers are often isolated in large refrigerators cooled to near absolute zero (colder than outer space) to shield them and, in turn, reduce errors in calculation. Although error-correcting techniques are being deployed, no existing quantum computer can maintain the full coherence required to solve industrial-sized problems at scale today; they are therefore mostly limited to solving toy-sized problems.

**Neuromorphic Quantum Computing**: In contrast to quantum computing, n.quantum computing possesses the capability to be efficiently emulated on contemporary computers through software, as well as to be physically constructed using conventional electrical components. Quantum algorithms can be computed efficiently with n.quantum computing at scale, potentially solving real-world problems. It inherently manages errors and noise, reducing the need for extensive error correction protocols.

##### Scalability

**Quantum Computing**: Scalability is limited by the physical requirements of maintaining qubits and their coherence. Increasing the number of qubits while managing errors is a major challenge.

**Neuromorphic Quantum Computing**: Promises improved scalability by leveraging neuromorphic principles to create large-scale, efficient quantum systems that can process information more like a human brain, offering more practical scalability.

##### Real-World Applications

As outlined under Error Handling above, decoherence, noise sensitivity, and the need for near-absolute-zero isolation currently confine quantum computers largely to toy-sized problems.

In contrast to traditional quantum computing, n.quantum computing possesses the capability to be efficiently emulated on contemporary computers through software, as well as to be physically constructed using conventional electrical components. Quantum algorithms can be computed efficiently with n.quantum computing at scale, potentially solving real-world problems [9,11].

##### EU Horizon 2020 Project “Neuromorphic Quantum Computing”

The European Union-funded project "Neuromorphic Quantum Computing" [2] resulted in the publication of eighteen peer-reviewed articles. Its objective was to introduce hardware inspired by the human brain with quantum functionalities. The project aimed to construct superconducting quantum neural networks to facilitate the creation of dedicated neuromorphic quantum machine learning hardware, which could potentially outperform traditional von Neumann architectures in future generations. This achievement represented the merger of two forefront developments in information processing: machine learning and quantum computing, into a novel technology. Contrary to conventional machine learning techniques that simulate neural functions in software on von Neumann hardware, neuromorphic quantum hardware was projected to provide a significant advantage by enabling parallel training on various batches of real-world data. This feature was anticipated to confer a quantum benefit. Neuromorphic hardware architectures were seen as critically important for both classical and quantum computing, especially for distributed and embedded computing tasks where the extensive scaling of current architectures could not offer a sustainable solution.

##### Turing complete n.quantum circuits

A Dynex machine is a class of general-purpose computing machines based on memory systems, in which information is processed and stored at the same physical location. We analysed the memory properties of the Dynex machine to demonstrate that it possesses universal computing power (it is Turing-complete), intrinsic parallelism, functional polymorphism, and information overhead, namely the ability of its collective states to support exponential data compression directly in memory. Moreover, we show that the Dynex machine is capable of solving NP-complete problems in polynomial time, just like a non-deterministic Turing machine; thanks to its information overhead, however, it requires only a polynomial number of memory cells. It is important to note that even though these results do not prove NP=P within the Turing paradigm, the concept of Dynex machines represents a paradigm shift from the current von Neumann architecture, bringing us closer to brain-like neural computation.

##### Efficient n.quantum circuit simulation

It is important to realise that computing is fundamentally a physical process. The statement may seem obvious when considering the physical processes harnessed by the electronic components of computers (for example, transistors); however, virtually any physical process can be harnessed for some form of computation. Note that we are speaking of Alan Turing’s model of computation, that is, a mapping (transition function) between two sets of finite symbols (input and output) in discrete time.

It is important to distinguish between continuous and discrete time: Dynex n.quantum circuits operate in continuous time, though their simulation on modern computers requires the discretisation of time. Continuous time is physical time: a fundamental physical quantity. Discrete time is not a physical quantity, and is best understood as counting time: counting something (function calls, integration steps, etc.) to give an indication, or approximation, of the physical time. In the literature of physics and the other physical sciences, physical time has an assigned SI unit of seconds, whereas in computer science and related disciplines, counting time is dimensionless.
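The distinction can be made concrete with a minimal sketch: the continuous-time dynamics dv/dt = -v (physical time, in seconds) is emulated by counting dimensionless forward-Euler steps, and smaller step sizes track the physical trajectory more closely:

```python
import math

def euler(v0, t_end, dt):
    n = round(t_end / dt)       # "counting time": dimensionless step count
    v = v0
    for _ in range(n):
        v += dt * (-v)          # one discrete step of dv/dt = -v
    return v, n

exact = math.exp(-1.0)          # v at physical time t = 1 s, for v0 = 1
for dt in (0.1, 0.01, 0.001):
    v, n = euler(1.0, 1.0, dt)
    print(n, abs(v - exact))    # error shrinks as the step size shrinks
```

The same one second of physical time costs 10, 100, or 1000 units of counting time, which is exactly why step counts alone say nothing about physical duration.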

##### The fourth missing circuit element

Modern computers rely on the implementation of uni-directional logic gates that represent Boolean functions. Circuits built to simulate Boolean functions are desirable because they are deterministic: A unique input has a unique, reproducible output.

Modern computers relegate the task of logic to central processing units (CPUs). However, the resources required for the task might exhaust the resources present within the CPU, specifically, cache memory. For typical processes on modern computers, random-access memory (RAM) is the memory used for data and machine code, and is external to the CPU. The physical separation of CPU and RAM results in what is known as the von Neumann bottleneck, a slow down in computation caused by the transfer of information between physical locations.

To overcome the von Neumann bottleneck, it was proposed to perform computing with and in memory, utilising ideal memristors. This paradigm, distinct from conventional in-memory computation, uses memory to process and store information in the same physical location.

A memristor is an electrical component that limits or regulates the flow of electrical current in a circuit and remembers the amount of charge that has previously flowed through it. Memristors are important because they are non-volatile, meaning that they retain memory without power.

The original concept for memristors, as conceived in 1971 by Professor Leon Chua at the University of California, Berkeley, was a nonlinear, passive two-terminal electrical component that linked electric charge and magnetic flux (“The missing circuit element“). Since then, the definition of memristor has been broadened to include any form of non-volatile memory that is based on resistance switching, which increases the flow of current in one direction and decreases the flow of current in the opposite direction.
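A minimal simulation of a linear-drift memristor model in the spirit of the HP device (all parameter values below are hypothetical) shows the defining behaviour: the internal state tracks the charge that has flowed through the device and is retained once the voltage is removed:

```python
# Linear-drift memristor sketch: resistance depends on an internal state
# w in [0, 1], and w drifts with the current, so the device "remembers"
# how much charge has passed even after the power is cut.

R_ON, R_OFF = 100.0, 16_000.0          # doped / undoped resistances (ohms)
MU = 1e4                               # drift coefficient (illustrative)

def simulate(voltage, dt, w0=0.1):
    w = w0
    for v in voltage:
        r = R_ON * w + R_OFF * (1.0 - w)         # state-dependent resistance
        i = v / r                                # Ohm's law at this instant
        w = min(max(w + MU * i * dt, 0.0), 1.0)  # state tracks charge flow
    return w

dt = 1e-3
pulse = [1.0] * 500 + [0.0] * 500      # 0.5 s at 1 V, then power removed
w_end = simulate(pulse, dt)
print(w_end)   # state has moved away from 0.1 and is retained at zero voltage
```

During the zero-voltage tail no current flows, so the state, and hence the programmed resistance, stays put: that is the non-volatility the text describes.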

Memristors, which are considered to be a sub-category of resistive RAM, are one of several storage technologies that have been predicted to replace flash memory. Scientists at HP Labs built the first working memristor in 2008 and since that time, researchers in many large IT companies have explored how memristors can be used to create smaller, faster, low-power computers that do not require data to be transferred between volatile and non-volatile memory.

A digital Dynex circuit is realised as a memristor-based bi-directional logic circuit. These circuits differ from traditional logic circuits in that input and output terminals are no longer distinct. In a traditional logic circuit, some input is given and the output is the result of computation performed on the input via uni-directional logic gates. In contrast, a memristor-based bi-directional logic circuit can be operated by assigning the output terminals and then reading the input terminals.

Self-organising logic is a recently suggested framework that allows the solution of Boolean truth tables “in reverse”: it satisfies the logical proposition of a gate regardless of which terminal(s) the truth values are assigned to (“terminal-agnostic logic”). It can be realised if time non-locality (memory) is present. A practical realisation of self-organising logic gates combines circuit elements with and without memory. For one such realisation, it can be shown numerically that self-organising logic gates exploit elementary instantons to reach equilibrium points. Instantons are classical trajectories of the non-linear equations of motion describing self-organising logic gates; they connect topologically distinct critical points in the phase space. Linear analysis at those points shows that these instantons connect the initial critical point of the dynamics, with at least one unstable direction, directly to the final fixed point. It can also be shown that the memory content of these gates affects only the relaxation time needed to reach the logically consistent solution. Solving the corresponding stochastic differential equations shows that, because instantons connect critical points, noise and perturbations may change the instanton trajectory in the phase space, but not the initial and final critical points. Therefore, even for extremely large noise levels, the gates self-organise to the correct solution.
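Terminal-agnostic operation can be illustrated with a brute-force sketch in plain Python. This is not the continuous-time dynamics the gates actually use, only the logical idea: pin a truth value to any terminal of an AND gate and enumerate the assignments that satisfy its proposition.

```python
from itertools import product

def consistent_assignments(gate, pinned):
    """Enumerate terminal assignments satisfying a two-input gate's
    truth table, given values pinned at any subset of its terminals."""
    solutions = []
    for a, b in product((0, 1), repeat=2):
        candidate = {"a": a, "b": b, "out": gate(a, b)}
        if all(candidate[t] == v for t, v in pinned.items()):
            solutions.append(candidate)
    return solutions

def AND(a, b):
    return a & b

# Conventional direction: pin the inputs, read the output.
forward = consistent_assignments(AND, {"a": 1, "b": 1})
# "In reverse": pin only the output, read back the consistent inputs.
reverse = consistent_assignments(AND, {"out": 1})
```

Pinning the output to 1 uniquely determines both inputs of the AND gate, while pinning it to 0 leaves three consistent input pairs; a self-organising gate relaxes to one of the consistent assignments rather than enumerating them.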

Note that the self-organising logic considered here has no relation to the invertible universal Toffoli gate employed, e.g., in quantum computation. Toffoli gates are truly one-to-one invertible, having 3-bit inputs and 3-bit outputs. Self-organising logic gates, on the other hand, need only satisfy the correct logical proposition, without a one-to-one relation between any number of input and output terminals. It is worth mentioning another type of bi-directional logic that has recently been discussed, which uses stochastic units (called p-bits). These units fluctuate among all possible consistent inputs. In contrast to that work, however, the invertible logic considered here is deterministic.

With time being a fundamental ingredient, a dynamical systems approach is most natural to describe such gates. In particular, non-linear electronic (non-quantum) circuit elements with and without memory have been suggested as building blocks to realise self-organising logic gates in practice.

By assembling self-organising logic gates with the appropriate architecture, one then obtains circuits that can solve complex problems efficiently by mapping the equilibrium (fixed) points of such circuits to the solution of the problem at hand. Moreover, it has been proved that, if those systems are engineered to be point dissipative, then, if equilibrium points are present, they do not show chaotic behaviour or periodic orbits.

It was subsequently demonstrated, using topological field theory (TFT) applied to dynamical systems, that these circuits are described by a Witten-type TFT, and they support long-range order, mediated by instantons. Instantons are classical trajectories of the non-linear equations of motion describing these circuits.

###### References

[1] Pehle, Christian; Wetterich, Christof (2021-03-30), *Neuromorphic quantum computing*, arXiv:2005.01533

[2] "Neuromorphic Quantum Computing | Quromorphic Project | Fact Sheet | H2020". *CORDIS | European Commission*. doi:10.3030/828826. Retrieved 2024-03-18

[3] Wetterich, C. (2019-11-01). "Quantum computing with classical bits". *Nuclear Physics B*. **948**: 114776. arXiv:1806.05960. Bibcode:2019NuPhB.94814776W. doi:10.1016/j.nuclphysb.2019.114776. ISSN 0550-3213

[4] Pehle, Christian; Meier, Karlheinz; Oberthaler, Markus; Wetterich, Christof (2018-10-24), *Emulating quantum computation with artificial neural networks*, arXiv:1810.10335

[5] Carleo, Giuseppe; Troyer, Matthias (2017-02-10). "Solving the quantum many-body problem with artificial neural networks". *Science*. **355** (6325): 602–606. arXiv:1606.02318. Bibcode:2017Sci...355..602C. doi:10.1126/science.aag2302. ISSN 0036-8075. PMID 28183973

[6] Torlai, Giacomo; Mazzola, Guglielmo; Carrasquilla, Juan; Troyer, Matthias; Melko, Roger; Carleo, Giuseppe (2018-02-26). "Neural-network quantum state tomography". *Nature Physics*. **14** (5): 447–450. arXiv:1703.05334. Bibcode:2018NatPh..14..447T. doi:10.1038/s41567-018-0048-5. ISSN 1745-2481

[7] Sharir, Or; Levine, Yoav; Wies, Noam; Carleo, Giuseppe; Shashua, Amnon (2020-01-16). "Deep Autoregressive Models for the Efficient Variational Simulation of Many-Body Quantum Systems". *Physical Review Letters*. **124** (2): 020503. arXiv:1902.04057. Bibcode:2020PhRvL.124b0503S. doi:10.1103/PhysRevLett.124.020503. PMID 32004039

[8] Broughton, Michael; Verdon, Guillaume; McCourt, Trevor; Martinez, Antonio J.; Yoo, Jae Hyeon; Isakov, Sergei V.; Massey, Philip; Halavati, Ramin; Niu, Murphy Yuezhen (2021-08-26), *TensorFlow Quantum: A Software Framework for Quantum Machine Learning*, arXiv:2003.02989

[9] Broughton, Michael; Verdon, Guillaume; McCourt, Trevor; Martinez, Antonio J.; Yoo, Jae Hyeon; Isakov, Sergei V.; Massey, Philip; Halavati, Ramin; Niu, Murphy Yuezhen (2021-08-26), *TensorFlow Quantum: A Software Framework for Quantum Machine Learning*, arXiv:2003.02989

[10] Wilkinson, Samuel A.; Hartmann, Michael J. (2020-06-08). "Superconducting quantum many-body circuits for quantum simulation and computing". *Applied Physics Letters*. **116** (23). arXiv:2003.08838. Bibcode:2020ApPhL.116w0501W. doi:10.1063/5.0008202. ISSN 0003-6951

[11] Di Ventra, Massimiliano; Traversa, Fabio L.; Ovchinnikov, Igor V. (2017-08-07). "Topological Field Theory and Computing with Instantons". *Annalen der Physik*. **529** (12). arXiv:1609.03230. Bibcode:2017AnP...52900123D. doi:10.1002/andp.201700123. ISSN 0003-3804

[12] Gonzalez-Raya, Tasio; Lukens, Joseph M.; Céleri, Lucas C.; Sanz, Mikel (2020-02-14). "Quantum Memristors in Frequency-Entangled Optical Fields". *Materials*. **13** (4): 864. arXiv:1912.10019. Bibcode:2020Mate...13..864G. doi:10.3390/ma13040864. ISSN 1996-1944. PMC 7079656. PMID 32074986


##### Neuromorphic quantum computing in simple terms

**Neuromorphic quantum computing** is a special type of computing that combines ideas from brain-like computing with quantum technology to solve problems. It works differently from regular quantum computing by using a network of connected components that can quickly react to changes, helping the system swiftly find the best solutions.

Dynex applies a Digital Twin of a physical neuromorphic quantum computing machine, operated on thousands of GPUs in parallel, delivering unparalleled quantum computing performance at scale for real-world applications.

Quantum computing employs quantum phenomena to process information. It has been hailed as the future of computing, but it is plagued by serious hurdles when it comes to its practical realization. Neuromorphic quantum computing is a paradigm that instead employs non-quantum dynamical systems and exploits time non-locality (memory) to compute. It can be efficiently emulated as a Digital Twin in software, and its path towards hardware is more straightforward. In essence, n.quantum computing leverages memory and physics to compute efficiently, a concept studied intensively by a number of researchers and institutions, including a $2M CORDIS-funded project of the European Union, amongst others.

This technology is exciting because it leads to computer systems that are faster and capable of handling complex tasks more efficiently than traditional computers.

##### Harnessing quantum advantage today, a decade ahead of projections

Market researchers are forecasting that full quantum advantage, characterized by effective error correction and scalability, will be achieved by 2040. However, Dynex's n.quantum computing technology enables customers to leverage this quantum advantage today, effectively providing a technological leap of more than a decade. By integrating advanced neuromorphic quantum computing capabilities, Dynex empowers users with unprecedented computational power and efficiency, positioning itself at the forefront of technological innovation well ahead of the projected timeline.

##### Neuromorphic quantum computing: A milestone in quantum history

Neuromorphic quantum computing, made publicly available in 2023, represents one of the most significant milestones in the history of quantum computing. As depicted in the timeline of key algorithm development, various breakthroughs have been made since the 1980s, including the Deutsch algorithm, Grover's algorithm, and quantum machine learning algorithms. The introduction of neuromorphic quantum computing builds upon these advancements by utilizing neuromorphic circuits that emulate the brain's architecture to perform quantum computations. This breakthrough technology enhances the efficiency and scalability of quantum computing, addressing limitations of traditional quantum hardware. By leveraging neuromorphic systems, we are now able to tackle complex computational problems with unprecedented speed and accuracy, marking a revolutionary leap forward in the field of quantum computing.

##### The Dynex quantum advantage

Dynex achieves quantum advantage by efficiently computing a Digital Twin of a physical system through the following steps:

1. An n.quantum computing problem is submitted to the Dynex cloud via the Dynex SDK;

2. The problem is converted into a circuit layout consisting of memristor-based logic gates; this circuit seeks its optimal energy ground state based on voltages and currents;

3. The circuit layout is transformed into a system of ordinary differential equations (ODEs) by applying the gates' equations of motion, effectively creating a "Digital Twin" of the physical system (we have published the specific equations of motion used in this process);

4. This system of ODEs is solved on our distributed network of GPUs, similar to how the trajectory of the moon landing was simulated by integrating the equations of motion of the Earth and the Moon;

5. Once the desired number of integration steps (simulated time) is reached, the voltages on the circuit are read out and returned as the result of the computation.
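The pattern behind these steps can be sketched end-to-end in miniature. The dynamics below are a hypothetical stand-in (plain gradient flow on a two-variable QUBO relaxation), not the published Dynex equations of motion; they only illustrate integrating a system of ODEs for a fixed number of steps and then reading the final "voltages" as the answer.

```python
# Toy "circuit": two continuous voltages v0, v1 in [0, 1] relaxing the
# QUBO energy E(v) = v0 + v1 - 2*v0*v1, which is minimised when v0 = v1.

def integrate(v, dt=0.01, steps=2000):
    """Forward-Euler integration of stand-in equations of motion
    dv/dt = -grad E(v), clipped to the unit box."""
    for _ in range(steps):
        g0 = 1.0 - 2.0 * v[1]          # dE/dv0
        g1 = 1.0 - 2.0 * v[0]          # dE/dv1
        v = [min(1.0, max(0.0, v[0] - dt * g0)),
             min(1.0, max(0.0, v[1] - dt * g1))]
    return v

voltages = integrate([0.9, 0.2])         # solve the ODE system (step 4)
solution = [round(v) for v in voltages]  # read out the voltages (step 5)
```

From the start point [0.9, 0.2] the flow settles in the ground state v0 = v1 = 1, one of the two degenerate minima of this toy energy.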

Utilising Digital Twins of physical systems for computation has been demonstrated to exhibit characteristics similar to quantum superposition, quantum entanglement and quantum tunnelling effects. This is evidenced in works such as "Topological Field Theory and Computing with Instantons" [11] or "Superconducting Quantum Many-Body Circuits for Quantum Simulation and Computing" [10], amongst others.

These inherent physical mechanisms enable both n.quantum computing circuits and quantum computers to navigate towards the best possible solution out of a vast array of potential configurations, by effectively mapping the solution into their system.

The NSF San Diego Supercomputer Center performed a stress-test to measure the capability of a similar memristor based simulated system on finding approximate solutions to hard combinatorial optimization problems. These fall into a class which is known to require exponentially growing resources in the worst cases, even to generate approximations.

They showed that in a region where state-of-the-art algorithms demonstrate this exponential growth, simulations of a memristor-based system require only time and memory resources that scale linearly. Cases of up to 64 × 10^6 variables (corresponding to about 1 billion literals), the largest that could fit on a single node with 128 GB of DRAM, were simulated, supporting the theory behind Dynex's n.quantum computing.

They also measured the memory requirement to perform the simulations. Usage scaled linearly with increasing problem size, whereas traditional methods require exponentially growing memory. This stress test shows the considerable advantage of non-combinatorial, physics-inspired approaches over standard combinatorial ones.

> Dynex Publications

> Dynex Benchmarks

##### Practical n.quantum computing with Dynex

Dynex's platform supports a variety of tools for creating and working with both n.quantum gate circuits and n.quantum annealing models. Quantum programs are mapped onto a Dynex n.quantum circuit and then computed by the contributing workers. This ensures that both traditional quantum algorithms and quantum gate circuits can be computed without modification on the Dynex platform using the Python-based Dynex SDK. It also provides libraries compatible with Google TensorFlow, IBM Qiskit, PyTorch, Scikit-Learn, and others. All source code is publicly available.

###### Dynex: Native Support for Quantum Gate Circuits

Dynex's platform natively supports quantum gate circuits, which are integral to many well-known quantum algorithms. Programmers familiar with quantum gate circuit languages such as Qiskit, Cirq, Pennylane, and OpenQASM will find it straightforward to run their computations on the Dynex neuromorphic computing platform. These tools allow for the creation of quantum circuits, enabling the execution of famous algorithms like Shor's algorithm (for integer factorization), Grover's search algorithm (for unstructured search), Simon's algorithm (for finding hidden periods), and the Deutsch-Jozsa algorithm (for determining whether a function is constant or balanced).

Quantum gate circuits are a fundamental aspect of quantum computing, employing quantum bits (qubits) to perform computations. Unlike classical bits, qubits can exist in multiple states simultaneously due to quantum superposition, and can be entangled, allowing for the representation and manipulation of complex data structures. Quantum gate circuits manipulate these qubits using a series of quantum gates, analogous to classical logic gates, to perform specific operations.
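A minimal pure-Python sketch makes superposition and entanglement concrete: track the four amplitudes of a two-qubit state, apply a Hadamard and then a CNOT, and obtain a Bell state. The basis ordering |00>, |01>, |10>, |11> is an assumption of this sketch, not a Dynex convention.

```python
import math

# State of two qubits as amplitudes over the basis |00>, |01>, |10>, |11>.
state = [1.0, 0.0, 0.0, 0.0]                 # start in |00>

def apply_h_on_first(state):
    """Hadamard on the first qubit: mixes |0b> and |1b> for b in {0, 1}."""
    s = math.sqrt(0.5)
    out = [0.0] * 4
    for b in (0, 1):
        a0, a1 = state[b], state[2 + b]      # amplitudes of |0b>, |1b>
        out[b] = s * (a0 + a1)
        out[2 + b] = s * (a0 - a1)
    return out

def apply_cnot(state):
    """CNOT with the first qubit as control: swaps |10> and |11>."""
    out = state[:]
    out[2], out[3] = state[3], state[2]
    return out

state = apply_cnot(apply_h_on_first(state))  # Bell state (|00> + |11>)/sqrt(2)
probs = [round(a * a, 6) for a in state]
```

Measuring this state yields |00> or |11> with equal probability and never |01> or |10>: the two qubits are entangled, which no product of single-qubit states can reproduce.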

The versatility of quantum gate circuits allows them to implement a wide range of quantum algorithms. Shor's algorithm, for instance, leverages quantum parallelism to factor integers efficiently. Grover's algorithm offers a quadratic speedup for unstructured search problems, showcasing the potential of quantum computing in database searches and optimization tasks. Simon's algorithm and the Deutsch-Jozsa algorithm further demonstrate the power of quantum computing in solving problems that are infeasible for classical systems, highlighting the unique advantages of quantum superposition and entanglement.

The support for these quantum gate circuits on the Dynex platform means that researchers and developers can seamlessly transition their existing quantum algorithms and applications to leverage Dynex's neuromorphic quantum computing capabilities. This integration facilitates the exploration of new computational paradigms and the development of advanced quantum applications, pushing the boundaries of what is possible with quantum computing.

###### Dynex: Native Support for Quantum Annealing

The Dynex platform also excels in computing Ising and QUBO problems, which play a pivotal role in the field of quantum computing, establishing themselves as the de-facto standard for mapping complex optimization and machine learning problems onto quantum systems. These frameworks are instrumental in leveraging the unique capabilities of quantum computers to solve problems that are intractable for classical computers.

The Ising model, originally introduced in statistical mechanics, describes a system of spins that can be in one of two states. This model has been adapted to represent optimization problems, where the goal is to minimize an energy function describing the interactions between spins. Similarly, the QUBO framework represents optimization problems with binary variables, where the objective is to minimize a quadratic polynomial. Both models are equivalent and can be transformed into one another, allowing a broad range of problems to be addressed using either formulation.
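The equivalence of the two formulations can be checked by hand on a toy instance. Under the standard substitution x_i = (1 + s_i)/2, the two-variable QUBO below maps to an Ising model with a single coupling and a constant offset; the instance is a hypothetical example, and the Ising coefficients were worked out by direct substitution.

```python
from itertools import product

# Toy QUBO over x in {0,1}^2: E(x) = -x0 - x1 + 2*x0*x1
def qubo_energy(x):
    return -x[0] - x[1] + 2 * x[0] * x[1]

# Equivalent Ising form over s in {-1,+1}^2, via x_i = (1 + s_i)/2:
# E(s) = (1/2)*s0*s1 - 1/2   (fields h_i = 0, coupling J_01 = 1/2, offset -1/2)
def ising_energy(s):
    return 0.5 * s[0] * s[1] - 0.5

# Brute-force check: the energies agree on all four configurations.
matches = all(
    qubo_energy(((1 + s0) // 2, (1 + s1) // 2)) == ising_energy((s0, s1))
    for s0, s1 in product((-1, 1), repeat=2)
)
```

The ground states here are the two configurations with exactly one variable set, each with energy -1, and the mapping preserves them: anti-aligned spins minimise the Ising form.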

The significance of Ising and QUBO problems in quantum computing lies in their natural fit with quantum annealing and gate-based quantum algorithms. Quantum annealers, for instance, directly implement the Ising model to find the ground state of a system, which corresponds to the optimal solution of the problem. This method exploits quantum tunnelling and entanglement to escape local minima, offering a potential advantage over classical optimization techniques. Gate-based quantum computers, on the other hand, use quantum algorithms like the Quantum Approximate Optimization Algorithm (QAOA) to solve QUBO problems. These algorithms use quantum superposition and interference to explore the solution space more efficiently than classical algorithms, potentially leading to faster solution times for certain problems.

The adoption of Ising and QUBO as standards in quantum computing is due to their versatility and the direct mapping of various optimization and machine learning tasks onto quantum hardware. From logistics and finance to drug discovery and artificial intelligence, the ability to frame problems within the Ising or QUBO model opens up new avenues for solving complex challenges with quantum computing. This standardization also facilitates the development of quantum algorithms and the benchmarking of quantum hardware, accelerating progress in the quantum computing field.

##### Challenges of traditional quantum gates and annealing architecture

Quantum computing is an emerging technology with enormous potential to solve complex problems, because it effectively applies the properties of quantum mechanics, such as superposition and entanglement. However, like any technology, there are disadvantages in some of the architectures:

**Error Correction**

Just like in classical computing’s early days, error correction is a major pain point for quantum computing today. Quantum computers are sensitive to noise and difficult to calibrate. Unlike traditional computers, which would experience a bit flip from 0 to 1 or vice versa, quantum errors are more difficult to correct because qubits can take an infinite number of states.

**Hardware & Temperature**

Because quantum computers need to slow atoms to near stillness, their processors must be kept at or around absolute zero (−273.15°C). Even the tiniest of fluctuations can cause unwanted movement, so it is just as important to keep them under vacuum and insulated from the Earth’s magnetic field.

**Scalability**

While quantum computers have shown impressive performance for some tasks, they are still relatively small compared to classical computers. Scaling up quantum computers to hundreds or thousands of qubits while maintaining high levels of coherence and low error rates remains a major challenge.

#### Manifestations of Neuromorphic Quantum Computing

Neuromorphic quantum computing represents a significant evolution in the field of quantum computing by merging quantum principles with neuromorphic engineering. This hybrid approach aims to overcome some of the limitations of traditional quantum systems, such as error correction and scalability, by mimicking the brain's processing capabilities. As a result, neuromorphic quantum computing holds the potential to accelerate the development of quantum technologies and expand their practical applications across various industries.

Neuromorphic Quantum Computing (abbreviated as ‘n.quantum computing’) [1,2] is an unconventional type of computing that uses neuromorphic computing to perform quantum operations [3,4]. It has been suggested that quantum algorithms, which are algorithms that run on a realistic model of quantum computation, can be computed equally efficiently with neuromorphic quantum computing [5,6,7,8,9].

Both traditional quantum computing and neuromorphic quantum computing [1] are physics-based, unconventional approaches to computation and do not follow the von Neumann architecture. Both construct a system (a circuit) that represents the physical problem at hand and then leverage the physical properties of that system to seek the “minimum”. Neuromorphic quantum computing [1] and quantum computing share similar physical properties during computation [9,10].

##### Neuromorphic quantum architecture

**Quantum Computing:** Utilizes quantum circuits with qubits implemented using superconductors, trapped ions, or other quantum technologies. These systems often require extremely low temperatures and sophisticated error correction methods to maintain coherence and reduce error rates.

**Neuromorphic Quantum Computing:** In nature, physical systems tend to evolve toward their lowest energy state: objects slide down hills, hot things cool down, and so on. This behaviour also applies to neuromorphic systems. To imagine this, think of a traveler looking for the best solution by finding the lowest valley in the energy landscape that represents the problem. Classical algorithms seek the lowest valley by placing the traveler at some point in the landscape and allowing that traveler to move based on local variations. While it is generally most efficient to move downhill and avoid climbing hills that are too high, such classical algorithms are prone to leading the traveler into nearby valleys that may not be the global minimum. Numerous trials are typically required, with many travellers beginning their journeys from different points.

In contrast, neuromorphic annealing begins with the traveller simultaneously occupying many coordinates thanks to the phenomenon of inherent parallelism. The probability of being at any given coordinate smoothly evolves as annealing progresses, with the probability increasing around the coordinates of deep valleys. Instantonic jumps allow the traveller to pass through hills, rather than being forced to climb them, reducing the chance of becoming trapped in valleys that are not the global minimum. Long-range order further improves the outcome by allowing the traveller to discover correlations between the coordinates that lead to deep valleys.
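The classical multi-start strategy described above, where many travellers descend locally from random starting points, can be sketched on a one-dimensional toy landscape with two valleys. The function and parameters are illustrative only, not a Dynex energy model.

```python
import random

# Illustrative landscape: minima near x = 1 (global) and x = 4 (local),
# separated by a hill around x = 2.6.
def energy(x):
    return (x - 1) ** 2 * (x - 4) ** 2 + x

def descend(x, step=0.01, iters=1000):
    """A 'traveller' moving only downhill in small local steps."""
    for _ in range(iters):
        for dx in (step, -step):
            if energy(x + dx) < energy(x):
                x += dx
                break
    return x

random.seed(0)
finishes = [round(descend(random.uniform(0.0, 5.0))) for _ in range(20)]
# Every traveller ends in one of the two valleys; those starting on the
# wrong side of the hill never reach the global minimum near x = 1.
```

Each traveller halts in whichever valley contains its starting point, which is why purely local classical strategies need many restarts, whereas the annealing picture above lets probability mass cross or tunnel through the hill.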

To speed computation, our neuromorphic platform taps directly into an unimaginably vast fabric of reality: the strange and counterintuitive world of physics- and biology-inspired computing. Rather than store information using bits represented by 0s or 1s as conventional computers do, neuromorphic computers use voltages and currents. The dynamic long-range behaviour, along with the trend towards optimal energy and instantonic effects, enables neuromorphic computers to consider and manipulate many combinations of bits simultaneously.

> Whitepaper: Dynex Neuromorphic Quantum Circuits’ Equations of Motion

> Whitepaper: Dynex Data Flow

###### n.Quantum Entanglement Effects

Quantum computers utilise certain principles of quantum mechanics, such as entanglement, to enable their components to be interconnected over vast distances. This interconnection means that a change in one component can instantly affect others, no matter how far apart they are. This characteristic facilitates the computer's ability to swiftly determine the most energy-efficient state. Similarly, n.quantum computing exhibits a related phenomenon through long-distance connections stemming from a state known as criticality [11]. In this setup, each gate within the circuit is designed to be responsive to distant gates, creating a network where each gate can influence and be influenced by others across the system. As the circuit evolves, the connections between different gates reach a critical state, making the system prone to sudden, large-scale changes or "avalanches" triggered by minor disturbances anywhere in the circuit. These avalanches, driven by the circuit's long-range correlations, enable the system to quickly find the lowest energy state, significantly speeding up computation compared to traditional methods [9,11].

###### n.Quantum Tunnelling Effects

Quantum computers also harness the principle of quantum tunnelling to find the most efficient solution. This process enables them to bypass energy barriers and access lower energy states that represent improved solutions. In contrast, n.quantum computing circuits [12] utilise the behaviour of electrical currents and voltages to expedite the transition to more favourable states. The concept of an instanton [9,11], borrowed from the study of dynamical systems, explains how these circuits navigate through energy barriers. This happens as the system encounters "saddle" points within the electrical landscape, which possess characteristics that attract the system initially but then repel it as it gets closer. This interaction results in the system being propelled away from these points at high speed, effectively allowing it to jump from one state to another across the landscape.

This phenomenon [11] mirrors quantum tunnelling but occurs in the realm of voltages and currents. In neuromorphic quantum computing circuits, this behaviour emerges from the collective action of memristor-based gates, which induce widespread fluctuations in voltages, thereby swiftly steering the circuit towards superior configurations.

These inherent physical mechanisms enable both n.quantum computing circuits and quantum computers to navigate towards the best possible solution out of a vast array of potential configurations, by effectively mapping the solution into their system [9].

##### Error Handling

**Quantum Computing**: There are currently a number of significant engineering obstacles to constructing useful quantum computers capable of solving real-world problems at scale. The major challenge in quantum computing is maintaining the coherence of entangled qubits’ quantum states; they suffer from quantum decoherence and loss of state fidelity due to outside noise (vibrations, fluctuations in temperature, electromagnetic waves). To overcome noise interference, quantum computers are often isolated in large refrigerators cooled to near absolute zero (colder than outer space) to shield them and, in turn, reduce errors in calculation. Although error-correcting techniques are being deployed, there are currently no existing quantum computers capable of maintaining the full coherence required to solve industrial-sized problems at scale today. Therefore, they are mostly limited to solving toy-sized problems.

**Neuromorphic Quantum Computing**: In contrast to quantum computing, n.quantum computing possesses the capability to be efficiently emulated on contemporary computers through software, as well as to be physically constructed using conventional electrical components. Quantum algorithms can be computed efficiently with n.quantum computing at scale, potentially solving real-world problems. It inherently manages errors and noise, reducing the need for extensive error correction protocols.

##### Scalability

**Quantum Computing**: Scalability is limited by the physical requirements of maintaining qubits and their coherence. Increasing the number of qubits while managing errors is a major challenge.

**Neuromorphic Quantum Computing**: Promises improved scalability by leveraging neuromorphic principles to create large-scale, efficient quantum systems that can process information more like a human brain, offering more practical scalability.

##### Real-World Applications

As outlined under Error Handling, no existing quantum computer can maintain the full coherence required to solve industrial-sized problems at scale, so traditional quantum hardware remains largely limited to toy-sized problems.

In contrast, n.quantum computing can be efficiently emulated in software on contemporary computers and physically constructed from conventional electrical components, allowing quantum algorithms to be computed efficiently at scale and applied to real-world problems [9,11].

##### EU Horizon 2020 Project “Neuromorphic Quantum Computing”

The European Union-funded project "Neuromorphic Quantum Computing" [2] resulted in the publication of eighteen peer-reviewed articles. Its objective was to introduce hardware inspired by the human brain with quantum functionalities. The project aimed to construct superconducting quantum neural networks to facilitate the creation of dedicated neuromorphic quantum machine learning hardware, which could potentially outperform traditional von Neumann architectures in future generations. This achievement represented the merger of two forefront developments in information processing: machine learning and quantum computing, into a novel technology. Contrary to conventional machine learning techniques that simulate neural functions in software on von Neumann hardware, neuromorphic quantum hardware was projected to provide a significant advantage by enabling parallel training on various batches of real-world data. This feature was anticipated to confer a quantum benefit. Neuromorphic hardware architectures were seen as critically important for both classical and quantum computing, especially for distributed and embedded computing tasks where the extensive scaling of current architectures could not offer a sustainable solution.

##### Turing complete n.quantum circuits

A Dynex machine is a class of general-purpose computing machines based on memory systems, where information is processed and stored at the same physical location. We analysed the memory properties of the Dynex machine to demonstrate that it possesses universal computing power (it is Turing-complete), intrinsic parallelism, functional polymorphism, and information overhead, namely that its collective states can support exponential data compression directly in memory. Moreover, we show that the Dynex machine is capable of solving NP-complete problems in polynomial time, just like a non-deterministic Turing machine, while requiring only a polynomial number of memory cells due to its information overhead. It is important to note that even though these results do not prove NP=P within the Turing paradigm, the concept of Dynex machines represents a paradigm shift from the current von Neumann architecture, bringing us closer to brain-like neural computation.

##### Efficient n.quantum circuit simulation

It is important to realise that computing is fundamentally a physical process. The statement may seem obvious when considering the physical processes harnessed by the electronic components of computers (for example, transistors); however, virtually any physical process can be harnessed for some form of computation. Note that we are speaking of Alan Turing’s model of computation: a mapping (transition function) between two sets of finite symbols (input and output) in discrete time.

It is important to distinguish between continuous and discrete time: Dynex n.quantum circuits operate in continuous time, although their simulation on modern computers requires the discretisation of time. Continuous time is physical time: a fundamental physical quantity. Discrete time is not a physical quantity, and might be best understood as counting time: counting something (function calls, integration steps, etc.) to give an indication, perhaps an approximation, of the physical time. In the literature of Physics and other Physical Sciences, physical time has an assigned SI unit of seconds, whereas in Computer Science and related disciplines, counting time is dimensionless.
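The distinction can be made concrete with a minimal sketch (not the Dynex circuit equations, just a generic forward-Euler integrator): the continuous-time dynamics evolve in physical seconds, while the simulation advances a dimensionless step counter.

```python
import math

def euler_integrate(f, x0, t_end, dt):
    """Integrate dx/dt = f(x) from t = 0 to t_end with forward-Euler steps.

    `t` tracks physical time (seconds); `steps` is the dimensionless
    "counting time" used by the digital simulation.
    """
    x, t, steps = x0, 0.0, 0
    while t < t_end:
        x += dt * f(x)   # one discrete update approximating the continuous flow
        t += dt          # physical time elapsed
        steps += 1       # counting time
    return x, steps

# Example: exponential decay dx/dt = -x, whose exact solution is x(t) = e^(-t).
x_num, n_steps = euler_integrate(lambda x: -x, x0=1.0, t_end=1.0, dt=1e-4)
print(x_num, math.exp(-1.0), n_steps)
```

Shrinking `dt` makes the discrete approximation track the continuous trajectory more closely, at the cost of more counting-time steps per second of physical time.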

##### The fourth missing circuit element

Modern computers rely on the implementation of uni-directional logic gates that represent Boolean functions. Circuits built to simulate Boolean functions are desirable because they are deterministic: A unique input has a unique, reproducible output.

Modern computers relegate the task of logic to central processing units (CPUs). However, a task may exhaust the resources available within the CPU, specifically its cache memory. For typical processes on modern computers, random-access memory (RAM) holds data and machine code and is external to the CPU. The physical separation of CPU and RAM results in what is known as the von Neumann bottleneck: a slowdown in computation caused by the transfer of information between the two physical locations.

To overcome the von Neumann bottleneck, it was proposed to perform computing with and in memory, utilising ideal memristors. Distinct from conventional in-memory computation, this paradigm uses memory to process and store information at the same physical location.

A memristor is an electrical component that limits or regulates the flow of electrical current in a circuit and remembers the amount of charge that has previously flowed through it. Memristors are important because they are non-volatile, meaning that they retain memory without power.

The original concept for the memristor, conceived in 1971 by Professor Leon Chua at the University of California, Berkeley, was a nonlinear, passive two-terminal electrical component linking electric charge and magnetic flux (“the missing circuit element”). Since then, the definition of the memristor has been broadened to include any form of non-volatile memory based on resistance switching, which increases the flow of current in one direction and decreases the flow of current in the opposite direction.

Memristors, which are considered to be a sub-category of resistive RAM, are one of several storage technologies that have been predicted to replace flash memory. Scientists at HP Labs built the first working memristor in 2008 and since that time, researchers in many large IT companies have explored how memristors can be used to create smaller, faster, low-power computers that do not require data to be transferred between volatile and non-volatile memory.
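The two defining properties mentioned above, history-dependent resistance and non-volatility, can be illustrated with a toy simulation of the linear ion-drift model reported by the HP Labs team. The parameter values below are illustrative only, not those of any real device.

```python
# Toy simulation of the linear ion-drift memristor model (after the 2008
# HP Labs device). All parameter values are illustrative, not measured.
R_ON, R_OFF = 100.0, 16000.0   # fully-doped / undoped resistance (ohms)
D = 10e-9                      # device thickness (m)
MU = 1e-14                     # ion mobility (m^2 s^-1 V^-1)
DT = 1e-4                      # integration step (s), chosen for illustration

def step(w, v):
    """Advance the internal state w (doped fraction, 0..1) under voltage v."""
    m = R_ON * w + R_OFF * (1.0 - w)      # memristance for the current state
    i = v / m                             # Ohm's law at this instant
    w += DT * MU * R_ON / D**2 * i        # linear drift of the doped region
    return min(max(w, 0.0), 1.0), m

w = 0.1
for _ in range(2000):                     # positive bias: resistance decreases
    w, m = step(w, 1.0)
m_after_positive = R_ON * w + R_OFF * (1.0 - w)

for _ in range(2000):                     # zero bias: the state (memory) persists
    w, m = step(w, 0.0)
m_after_rest = R_ON * w + R_OFF * (1.0 - w)
print(m_after_positive, m_after_rest)
```

The resistance after the rest period equals the resistance after biasing: with no current flowing, the internal state does not drift, which is the non-volatility that makes memristors attractive as memory elements.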

A digital Dynex circuit is realised as a memristor-based bi-directional logic circuit. These circuits differ from traditional logic circuits in that input and output terminals are no longer distinct. In a traditional logic circuit, some input is given and the output is the result of a computation performed on that input via uni-directional logic gates. In contrast, a memristor-based bi-directional logic circuit can be operated by assigning the output terminals and then reading the input terminals.
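The difference can be illustrated with a tiny constraint-style sketch (a hypothetical enumeration, not the Dynex circuit dynamics): a gate is treated as a relation over all of its terminals, so any subset of terminals may be pinned and the remaining ones read off.

```python
from itertools import product

# AND gate viewed terminal-agnostically: a relation over (a, b, out).
AND_RELATION = {(a, b, a & b) for a, b in product((0, 1), repeat=2)}

def satisfy(pinned):
    """Return all terminal assignments consistent with the pinned terminals.

    `pinned` maps a terminal index (0=a, 1=b, 2=out) to a fixed bit.
    """
    return [t for t in AND_RELATION
            if all(t[i] == v for i, v in pinned.items())]

print(satisfy({0: 1, 1: 1}))   # forward operation: inputs pinned
print(satisfy({2: 1}))         # reverse operation: output pinned to 1
print(satisfy({2: 0}))         # reverse operation: three consistent input pairs
```

Pinning the output to 1 forces both inputs to 1, exactly the “assign the output, read the inputs” mode of operation; pinning it to 0 leaves three consistent input assignments, so a bi-directional circuit settles on any one of them.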

Self-organising logic is a recently suggested framework that allows the solution of Boolean truth tables “in reverse”, i.e., it satisfies the logical proposition of a gate regardless of which terminal(s) the truth value is assigned to (“terminal-agnostic logic”). It can be realised if time non-locality (memory) is present. A practical realisation of self-organising logic gates combines circuit elements with and without memory.

For one such realisation, it can be shown numerically that self-organising logic gates exploit elementary instantons to reach their equilibrium points. Instantons are classical trajectories of the non-linear equations of motion describing self-organising logic gates, and they connect topologically distinct critical points in the phase space. Linear analysis at those points shows that these instantons connect the initial critical point of the dynamics, which has at least one unstable direction, directly to the final fixed point. It can also be shown that the memory content of these gates affects only the relaxation time needed to reach the logically consistent solution.

Solving the corresponding stochastic differential equations shows that, because instantons connect critical points, noise and perturbations may change the instanton trajectory in phase space but not the initial and final critical points. Therefore, even for extremely large noise levels, the gates self-organise to the correct solution.

Note that the self-organising logic considered here has no relation to the invertible universal Toffoli gate employed, e.g., in quantum computation. Toffoli gates are truly one-to-one invertible, with 3-bit inputs and 3-bit outputs. Self-organising logic gates, on the other hand, need only satisfy the correct logical proposition, without a one-to-one relation between any number of input and output terminals. It is worth mentioning another type of bi-directional logic that has recently been discussed, which uses stochastic units (called p-bits) that fluctuate among all possible consistent inputs. In contrast to that work, however, the invertible logic we consider here is deterministic.

With time being a fundamental ingredient, a dynamical systems approach is most natural to describe such gates. In particular, non-linear electronic (non-quantum) circuit elements with and without memory have been suggested as building blocks to realise self-organising logic gates in practice.

By assembling self-organising logic gates into an appropriate architecture, one obtains circuits that can solve complex problems efficiently by mapping the equilibrium (fixed) points of the circuit to the solution of the problem at hand. Moreover, it has been proved that if such systems are engineered to be point dissipative and equilibrium points are present, they exhibit neither chaotic behaviour nor periodic orbits.

It was subsequently demonstrated, using topological field theory (TFT) applied to dynamical systems, that these circuits are described by a Witten-type TFT, and they support long-range order, mediated by instantons. Instantons are classical trajectories of the non-linear equations of motion describing these circuits.

###### References

[1] Pehle, Christian; Wetterich, Christof (2021-03-30), *Neuromorphic quantum computing*, arXiv:2005.01533

[2] "Neuromrophic Quantum Computing | Quromorphic Project | Fact Sheet | H2020". *CORDIS | European Commission*. doi:10.3030/828826. Retrieved 2024-03-18

[3] Wetterich, C. (2019-11-01). "Quantum computing with classical bits". *Nuclear Physics B*. **948**: 114776. arXiv:1806.05960. Bibcode:2019NuPhB.94814776W. doi:10.1016/j.nuclphysb.2019.114776. ISSN 0550-3213

[4] Pehle, Christian; Meier, Karlheinz; Oberthaler, Markus; Wetterich, Christof (2018-10-24), *Emulating quantum computation with artificial neural networks*, arXiv:1810.10335

[5] Carleo, Giuseppe; Troyer, Matthias (2017-02-10). "Solving the quantum many-body problem with artificial neural networks". *Science*. **355** (6325): 602–606. arXiv:1606.02318. Bibcode:2017Sci...355..602C. doi:10.1126/science.aag2302. ISSN 0036-8075. PMID 28183973

[6] Torlai, Giacomo; Mazzola, Guglielmo; Carrasquilla, Juan; Troyer, Matthias; Melko, Roger; Carleo, Giuseppe (2018-02-26). "Neural-network quantum state tomography". *Nature Physics*. **14** (5): 447–450. arXiv:1703.05334. Bibcode:2018NatPh..14..447T. doi:10.1038/s41567-018-0048-5. ISSN 1745-2481

[7] Sharir, Or; Levine, Yoav; Wies, Noam; Carleo, Giuseppe; Shashua, Amnon (2020-01-16). "Deep Autoregressive Models for the Efficient Variational Simulation of Many-Body Quantum Systems". *Physical Review Letters*. **124** (2): 020503. arXiv:1902.04057. Bibcode:2020PhRvL.124b0503S. doi:10.1103/PhysRevLett.124.020503. PMID 32004039

[8] Broughton, Michael; Verdon, Guillaume; McCourt, Trevor; Martinez, Antonio J.; Yoo, Jae Hyeon; Isakov, Sergei V.; Massey, Philip; Halavati, Ramin; Niu, Murphy Yuezhen (2021-08-26), *TensorFlow Quantum: A Software Framework for Quantum Machine Learning*, arXiv:2003.02989

[9] Wilkinson, Samuel A.; Hartmann, Michael J. (2020-06-08). "Superconducting quantum many-body circuits for quantum simulation and computing". *Applied Physics Letters*. **116** (23). arXiv:2003.08838. Bibcode:2020ApPhL.116w0501W. doi:10.1063/5.0008202. ISSN 0003-6951

[10] Di Ventra, Massimiliano; Traversa, Fabio L.; Ovchinnikov, Igor V. (2017-08-07). "Topological Field Theory and Computing with Instantons". *Annalen der Physik*. **529** (12). arXiv:1609.03230. Bibcode:2017AnP...52900123D. doi:10.1002/andp.201700123. ISSN 0003-3804

[11] Gonzalez-Raya, Tasio; Lukens, Joseph M.; Céleri, Lucas C.; Sanz, Mikel (2020-02-14). "Quantum Memristors in Frequency-Entangled Optical Fields". *Materials*. **13** (4): 864. arXiv:1912.10019. Bibcode:2020Mate...13..864G. doi:10.3390/ma13040864. ISSN 1996-1944. PMC 7079656. PMID 32074986