A network-theoretical perspective on oscillator-based Ising machines

The standard von Neumann computer excels at many things. However, it can be very inefficient in solving optimization problems with a large solution space. For that reason, a novel analog approach, the oscillator-based Ising machine, has been proposed as a better alternative for dealing with such problems. In this work, we review the concept of the oscillator-based Ising machine and address how optimization problems can be mapped onto such machines when the quadratic unconstrained binary optimization (QUBO) formulation is given. Furthermore, we provide an ideal circuit that can be used in combination with the wave digital concept for real-time simulated annealing. The functionality of this circuit is explained on the basis of a Lyapunov stability analysis. The latter also answers the decision-making question: when has the Ising machine solved a mapped problem? Finally, we provide emulation results demonstrating the correlation between functionality and stability. Our results are based on an eigenvalue analysis of the linearized system and demonstrate that all eigenvalues converge to the left complex half plane once an optimization problem has been optimally solved. This implies that the system enters an attractor region and is hence forced to eventually converge to the optimal solution.



KEYWORDS
Ising machine, Kuramoto model, Lyapunov stability, unconventional computing

1 | INTRODUCTION
As the progress in von-Neumann-based technology begins to stagnate, scientific research is slowly gravitating towards finding alternative technologies that can supplement the deficiencies of standard computers. 1,2 In this context, oscillator-based Ising machines are a promising approach for solving combinatorial optimization problems with a large solution space. 3 Their hardware implementation is commonly based on a network of diffusively coupled (nonlinear) oscillators, [4][5][6][7][8] although in recent years we have also seen a number of optical implementations, so-called coherent Ising machines. [9][10][11][12] In principle, all Ising machines are built such that they have the natural tendency to minimize a scalar function, the so-called Ising Hamiltonian,

H(s) = −sᵀJ s − hᵀs,   (1)

which relates to the Ising model 13 and describes the energy of a lattice consisting of magnetically coupled spins. Here, e_μ denotes a unit vector with a one in the μ-th entry. The vector of spin variables s has the entries s_ν describing the orientation of the ν-th spin in the lattice. In general, these spins can either point upwards (s_ν = 1) or downwards (s_ν = −1). The mutual coupling is described by the coupling coefficient matrix J, whose μν-th element, J_μν, describes the magnetic coupling between the μ-th and ν-th spin in the lattice. Moreover, the vector h is the external coupling vector, whose ν-th entry describes an external coupling between the ν-th spin and an external magnetic field. In the following, we refer to the corresponding term as the Zeeman term.
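To make the roles of J, h, and s concrete, the Hamiltonian can be evaluated and minimized by brute force for small instances. This is an illustrative sketch; the normalization of the quadratic term in (1) varies across the literature, so the convention below (no factor 1/2 on sᵀJs) is an assumption:

```python
import numpy as np

def ising_energy(J, h, s):
    """Evaluate the Ising Hamiltonian for a spin configuration s in {-1,+1}^n.
    Convention assumed here: H(s) = -s^T J s - h^T s, with symmetric J and
    zero diagonal; other works put a factor 1/2 on the quadratic term."""
    s = np.asarray(s, float)
    return float(-s @ J @ s - h @ s)

def brute_force_ground_state(J, h):
    """Exhaustive search over all 2^n spin configurations; only feasible for
    small n, but useful for validating a mapped problem."""
    n = J.shape[0]
    best_s, best_e = None, np.inf
    for bits in range(2 ** n):
        s = np.array([1 if (bits >> k) & 1 else -1 for k in range(n)], float)
        e = ising_energy(J, h, s)
        if e < best_e:
            best_s, best_e = s, e
    return best_s, best_e
```

For a ferromagnetic pair with a small external field, the exhaustive search recovers the aligned ground state, which is the behavior an Ising machine is expected to reproduce physically.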
To solve a combinatorial optimization problem, one must map the considered problem onto the Ising Hamiltonian. In the past decade, many optimization problems (including Karp's 21 NP-complete problems) have been successfully mapped onto the Ising Hamiltonian. [14][15][16][17][18][19][20] Here, mapping refers to the procedure of formulating a Hamiltonian, similar to (1), whose minimum encodes the optimal solution of the given problem; see Lucas. 14 This means that the minimum can only be achieved when an optimal spin configuration (ground state) is found, where the latter corresponds to the solution of the problem. However, even when an adequate Hamiltonian description of a problem is found, mapping the problem onto the coupling coefficients within J and h is not always straightforward. To the best of our knowledge, reports on technical Ising machine implementations are either specific to certain optimization tasks or provide mapping procedures that are specific to the used hardware. For example, many reports on practical Ising machine implementations only discuss the mapping of max-cut problems. 4,5,21 Other reports only take the polarity of the coupling coefficients J_μν into account 8,22 but not the weights that may appear in certain optimization tasks. In this work, we elaborate on the mapping procedure and partially generalize it by looking at the Ising machine from a network-theoretical perspective, that is, in a hardware-independent setting. This way, we can provide a general scheme that maps an arbitrary quadratic unconstrained binary optimization (QUBO) problem onto the coupling coefficients and that is neither problem- nor hardware-specific. Our research result is a mathematical framework, which directly yields the coupling coefficients of graph-related optimization problems in case the underlying QUBO formulation is given.
In previous work, 23 we have proposed a real-time capable algorithm for emulating an oscillator-based Ising machine that can solve optimization problems with no Zeeman term. The algorithm is based on the wave digital concept, which is a powerful tool for emulating the behavior of electrical circuits. In particular, this concept is known for its numerical robustness and massive parallelism when it comes to emulating large electrical networks. 24 In this work, we present a generalization of the circuit on which the algorithm is based, which also considers the Zeeman term. To explain the functionality and robustness of the proposed algorithm, we recapitulate the stability analysis presented in Wang and Roychowdhury, 4 which is based on Lyapunov's stability theory. Here, we propose a modified Lyapunov function that now fulfills the properties of a strict Lyapunov function, as opposed to the one found in the literature. 4 Furthermore, we provide a novel concept for determining when an oscillator-based Ising machine has solved a mapped problem, beyond merely observing convergence. This concept serves as both an explanation and a numerical proof of the Ising machine's functionality.
The remainder of this paper is structured as follows: Section 2 gives some preliminary remarks on the nomenclature used throughout this work. In Section 3, we introduce a framework for mapping optimization problems onto the Ising machine. Section 4 deals with the functionality, stability, and modeling of a phase oscillator-based Ising machine. In Section 5, we show emulation results to validate the discussions of the previous section. Finally, Section 6 summarizes our contributions and gives a brief outlook on future research in the context of this work.

2 | PRELIMINARIES
A graph G = (V, E) consists of |V| = n vertices and |E| = m edges. Two vertices μ and ν are said to be adjacent if the edge set E contains a corresponding edge, denoted by (μ, ν) ∈ E. In most cases, the edges are undirected, that is, (μ, ν) ∈ E implies (ν, μ) ∈ E, unless the problem is explicitly declared to have directed edges. The matrix A denotes the adjacency matrix of the graph, which is always symmetric, A = Aᵀ, and contains zeros on its main diagonal, as we do not consider any form of self-loops. The matrix Ā denotes the co-adjacency matrix of the graph and is given by the following:

Ā_μν = 1 − A_μν for μ ≠ ν, and Ā_μμ = 0.   (2)

This matrix is always symmetric, Ā = Āᵀ, and contains zeros on its main diagonal. Lastly, a complete graph is a graph in which every pair of distinct vertices is connected by an edge. A set of vertices (or oscillators) is said to be completely coupled/connected if its graph representation constitutes a complete graph.

3 | PROBLEM MAPPING
Mapping an optimization problem to an oscillator-based Ising machine first requires formulating the problem as a QUBO problem:

min_{x ∈ {0,1}ⁿ} xᵀQ x.   (3)

Here, we refer to x as a vector of bit variables and Q ∈ ℝ^{n×n} as the coefficient matrix, which encodes the optimization problem. The problem of minimizing the Ising Hamiltonian is essentially a QUBO, since it can always be stated as

H(s_e) = −s_eᵀ J_e s_e,   (4)

where s_e is an extended vector of spin variables and J_e is an extended coefficient matrix, which now also encodes the Zeeman term. In the following, we refer to the problem of minimizing the Hamiltonian defined in (4) as an Ising problem. Note that the relation between the two variables x and s is bijective:

x = (s + 1)/2  ⇔  s = 2x − 1.   (5)

Therefore, every QUBO can be reformulated as an Ising problem and vice versa.
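The QUBO-to-Ising reformulation can be sketched numerically. The routine below assumes the convention H(s) = −sᵀJs − hᵀs + const; the exact normalization may differ in other works, so the factors are assumptions:

```python
import numpy as np

def qubo_to_ising(Q):
    """Convert a QUBO (minimize x^T Q x over x in {0,1}^n) into Ising
    couplings J, h and a constant c such that, with s = 2x - 1,
    x^T Q x = -s^T J s - h^T s + c.
    The convention H(s) = -s^T J s - h^T s is an assumption; other works
    put a factor 1/2 on the quadratic term."""
    Q = np.asarray(Q, float)
    Qs = (Q + Q.T) / 2.0                 # symmetrize without changing x^T Q x
    J = -Qs / 4.0
    np.fill_diagonal(J, 0.0)             # s_mu^2 = 1 moves the diagonal into c
    h = -Qs.sum(axis=1) / 2.0            # linear (Zeeman) part
    c = (Qs.sum() + np.trace(Qs)) / 4.0  # constant offset
    return J, h, c
```

Substituting x = (s + 1)/2 and collecting quadratic, linear, and constant parts yields exactly these coefficients; the identity can be verified exhaustively for small n.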
To give an example of problem mapping, we consider the maximum cut problem (max-cut), arguably the most popular problem in association with the Ising machine. This is partly due to the direct correspondence between the optimization problem and the Ising model. Given a weighted graph G = (V, E) with the weighted adjacency matrix W, the maximum cut problem asks for a partitioning of the graph's vertices into two complementary sets, such that the total weight of the edges between the sets is maximal. Evidently, the coupling coefficients in (1) are determined by the following: One spin variable sometimes does not suffice for representing local decisions with more than two possible outcomes. In this case, a common approach is to use multiple spin variables. Thus, we obtain subsets of spin variables, which are mutually coupled to encode the considered problem. For that reason, it can be beneficial to represent the spin variables in the form of a matrix, whose columns each represent a subset of spin variables: Here, m is the number of subsets, k is the number of spin variables in each subset, and vec(S) denotes the vectorization of the matrix S. Equivalently to (6), we define a matrix of bit variables: Throughout this manuscript, we will show that the Hamiltonian representation of many graph-related tasks can be formulated as where Q_λ ∈ ℝ^{m×m} and R_λ ∈ ℝ^{k×k} are real matrices (usually adjacency matrices), q_ℓ ∈ ℝ^k and r_ℓ ∈ ℝ^m are real vectors, and tr(·) denotes the trace operator. The relationship between (10) and the canonical formulation of the Ising Hamiltonian (1) is given by the following: where ⊗ denotes the Kronecker product. A thorough derivation of this relationship is given in Appendix A. Now, we would like to give a graph-theoretical interpretation of this mapping scheme, so we rewrite the partitioned Hamiltonians as a linear combination of bilinear forms: The first sum maps m topologically identical disjoint subgraphs onto the Ising Hamiltonian.
Their interconnections are described by R_λ, where the edge weights of the μ-th subgraph are uniformly scaled by the factor q_μμ. The second sum maps the interconnections between the vertices of the μ-th and ν-th subgraphs. Again, the interconnections and their weights are described by the adjacency matrix R_λ and are uniformly scaled by the factor q_μν. By comparing the coefficients of (1) with (11), we obtain the following relationship between both representations: To map an Ising problem to the Ising machine, we must encode the coupling coefficients into the couplings of the oscillators. Figure 1 depicts the solution of an Ising problem with four coupling coefficients J_μν. Here, the coefficients have been mapped onto the conductances of the Ising machine depicted on the right side. The following sections give a systematic derivation of the coupling coefficients from the underlying QUBO formulation. Here, we will deal with different kinds of optimization problems (including NP-problems), where each problem introduces a new type of cost function or constraint. The problems are hierarchically organized by the difficulty of their QUBO formulation. The purpose of this analysis is to obtain a reservoir of different cost functions and constraints, whose mapping onto the Ising machine should be simplified by our contribution. The resulting formalism is especially useful for systematically mapping densely constrained problems onto the Ising machine. In particular, we would like to draw the reader's attention to the recurring vector-valued representations, which appear in many graph-based optimization problems. These terms usually appear as multiple sums over node or edge subsets in the original QUBO formulations; see Lucas 14 for an overview. Such a representation makes it very difficult to discern the coupling structure and the associated weights. However, the interpretation becomes very simple and intuitive when the suggested formalism is applied.
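As a sanity check for the max-cut mapping discussed above, the following sketch picks one valid scaling of the coupling coefficients (J = −W/4 under the assumed convention H(s) = −sᵀJs), for which H differs from the negative cut size only by a constant:

```python
import numpy as np

def maxcut_couplings(W):
    """One valid max-cut mapping under the assumed convention H(s) = -s^T J s:
    with J = -W/4, the identity H(s) = W.sum()/4 - cut(s) holds, so
    minimizing H maximizes the cut. Any positive rescaling works as well."""
    J = -np.asarray(W, float) / 4.0
    np.fill_diagonal(J, 0.0)
    return J

def cut_size(W, s):
    """Total weight of edges crossing the partition encoded by s in {-1,+1}^n."""
    W = np.asarray(W, float)
    return 0.25 * float(np.sum(W * (1.0 - np.outer(s, s))))
```

Because cut(s) = (1/4) Σ W_μν (1 − s_μ s_ν), the quadratic part of the Hamiltonian and the cut differ only by the constant W.sum()/4, which is the essence of the max-cut mapping.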

| Binary integer linear programming (BILP)
BILP is an optimization task that asks us to maximize a linear cost function subject to a set of linear equality constraints 25 : The entries of the vector d ∈ ℝⁿ scale the bit variables according to their importance or impact. The entries of W ∈ ℝ^{k×n} are the coefficients of the considered linear constraints, while b contains the constant right-hand sides of the equality constraints. The QUBO formulation associated with this problem reads, 14 where ‖·‖ denotes the Euclidean norm. The coefficients c_ν weigh the different parts of the Hamiltonian according to their importance. In some Ising problems, certain choices of these coefficients are beneficial for enhancing the performance of the Ising machine. 14 Now, since we have no subsets of bit variables, the corresponding Hamiltonian of the Ising problem can be directly obtained by applying (5): Comparing the coefficients of (14c) with (1) then yields the desired coupling coefficients: The offset appearing in (14c) is not important w.r.t. the coupling coefficients. However, it ensures that the minimum of the Hamiltonian, corresponding to the solution of the problem, stays at zero.
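A hedged sketch of the BILP Hamiltonian as a QUBO matrix; the penalty weights c1 and c2 and the overall normalization are illustrative stand-ins for the weighting coefficients described above:

```python
import numpy as np

def bilp_qubo(d, W, b, c1=1.0, c2=1.0):
    """Hedged sketch: QUBO matrix Q and constant offset for the BILP
    Hamiltonian H(x) = c1 * ||W x - b||^2 - c2 * d^T x, so that
    H(x) = x^T Q x + offset for binary x. The names c1, c2 are illustrative
    stand-ins for the weighting coefficients of the Hamiltonian."""
    W = np.asarray(W, float)
    b = np.asarray(b, float)
    d = np.asarray(d, float)
    Q = c1 * (W.T @ W)                      # quadratic part of the penalty
    lin = -2.0 * c1 * (W.T @ b) - c2 * d    # linear coefficients
    Q = Q + np.diag(lin)                    # uses x_nu^2 = x_nu for binary x
    offset = c1 * float(b @ b)
    return Q, offset
```

The diagonal trick (x_ν² = x_ν) is what allows the linear cost and the linear part of the constraint penalty to be folded into a purely quadratic form.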

| Minimal vertex cover
Given an undirected graph G = (V, E) with m vertices and p edges, a vertex cover is a set of vertices covering all edges, i.e., every edge is incident to at least one vertex in the set. The minimal vertex cover problem asks for the minimal number of vertices that must be colored in order to obtain a vertex cover 26 :

F I G U R E 1 Left: an arbitrary spin configuration for an Ising lattice. Right: compact representation of an Ising machine with the same configuration as the left side. The appearing oscillators assume a relative phase shift of either 0 or π. A relative phase shift of π is interpreted as "spin up" (blue), while a relative phase shift of 0 is interpreted as "spin down" (red).
Here, the validity of the vertex cover is embedded into the inequality constraint. The QUBO formulation of this problem reads 14 : with the adjacency matrix A of G. Again, no subsets of bit variables appear in this problem. Applying (5) yields the Hamiltonian of the Ising problem. The desired coefficients are given by the following:
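The vertex-cover Hamiltonian can be evaluated directly from the adjacency matrix. The penalty weights c1 and c2 below are illustrative assumptions; the constraint term vanishes exactly when the bit vector encodes a valid cover:

```python
import numpy as np

def vertex_cover_energy(A, x, c1=2.0, c2=1.0):
    """Hedged sketch of the vertex-cover QUBO: the penalty
    c1 * sum over edges of (1 - x_mu)(1 - x_nu) vanishes exactly when x
    encodes a valid cover, while c2 * sum(x) counts the colored vertices.
    The weights c1 > c2 are illustrative assumptions."""
    A = np.asarray(A, float)
    x = np.asarray(x, float)
    y = 1.0 - x                            # indicator of uncovered vertices
    uncovered = 0.5 * float(y @ A @ y)     # each undirected edge appears twice in A
    return c1 * uncovered + c2 * float(x.sum())
```

On a triangle graph, covering two vertices yields zero penalty plus the counting term, while the empty set pays the full penalty for all three uncovered edges.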

| Graph coloring
Given an undirected graph G = (V, E) with m vertices and p edges, the graph coloring problem asks us to color the graph with k colors such that two adjacent vertices never share the same color. Here, two conditions must be fulfilled: every vertex must have exactly one color, and two adjacent vertices must not share the same color. Considering that every color can only be represented by one binary variable, we must introduce k binary variables for the different colors of each vertex. We can then formulate the optimization problem as a feasibility problem, min_X 1, s.t. the two conditions above hold. A possible QUBO formulation reads, 14 where A denotes the adjacency matrix of G. Every column of X contains k bit variables x_ν,ϰ, one for each possible color of the ν-th vertex. To obtain the equivalent Ising problem, we apply (9), which yields the Hamiltonian of the Ising problem. From (13), we can deduce the desired coupling coefficients. A graphical illustration of the coupling scheme is given in Figure 2.
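The two coloring conditions can be checked with the matrix of bit variables X. The following sketch evaluates an assumed penalty form of the coloring QUBO; valid colorings yield zero energy, and the weights c1, c2 are illustrative:

```python
import numpy as np

def coloring_energy(A, X, c1=1.0, c2=1.0):
    """Hedged sketch of the graph-coloring QUBO in matrix form: each column
    of the k x m bit matrix X holds the k color bits of one vertex.
    The first penalty enforces exactly one color per vertex; the second
    counts same-color adjacent pairs. Valid colorings give zero energy.
    The weights c1, c2 are illustrative assumptions."""
    A = np.asarray(A, float)
    X = np.asarray(X, float)
    one_color = float(np.sum((1.0 - X.sum(axis=0)) ** 2))
    clashes = 0.5 * float(np.trace(A @ X.T @ X))  # each edge counted twice in A
    return c1 * one_color + c2 * clashes
```

Note how the clash penalty appears as a trace of a product with Xᵀ X, which is exactly the kind of matrix-valued representation the framework in this section exploits.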

| Hamiltonian cycle
Given a directed or undirected graph G = (V, E) with m vertices and p edges, can one find a cycle that visits every vertex exactly once? Essentially, a Hamiltonian cycle imposes two conditions: every vertex must be visited exactly once, and only vertices adjacent to the current vertex are visitable. To formulate this problem as a binary optimization problem, one can introduce m bit variables representing the current position in the graph for every step of the graph traversal. However, this implies that we must add the condition that the current position must always be unique. Overall, we can formulate the problem as a feasibility problem, min_X 1, s.t. these conditions hold. The corresponding QUBO formulation then reads, 14 where Ā is the co-adjacency matrix defined in (2) and where every column of X represents a subset of bit variables. After applying the bijective mapping relation (9), we obtain the Hamiltonian of the Ising problem. The desired coupling coefficients are given by the following: A graphical illustration of the coupling scheme is given in Figure 3.

| Traveling salesman problem
Let G = (V, E) be a directed or undirected weighted graph with m vertices and the weighted adjacency matrix W, where each edge (μ, ν) is associated with a weight w_μν representing the distance between two cities. Can one find a Hamiltonian cycle such that the distance covered while traveling through the cycle is minimal? The optimization task is the same as (17a), except that we now have a nonconstant cost function involving the edge weights w_μν. Extending the Ising formulation of the Hamiltonian cycle problem to cover the traveling salesman problem is achieved by adding the following term to (17b): This ensures that the minimum of the Hamiltonian is given by a variable configuration that provides the shortest Hamiltonian cycle. In order to avoid violating the conditions of a Hamiltonian cycle, the constant coefficient c_0 must be chosen such that 0 < c_0 max(w_μν) < c_1. 14 Now, applying (9) yields the Hamiltonian of the Ising problem. The desired coupling coefficients are given by the following: A graphical illustration of the resulting coupling is presented in Figure 4.
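The added tour-length term admits a compact trace representation with a cyclic step-shift matrix, here called P (an assumption consistent with the trace representations in Table A1):

```python
import numpy as np

def tour_length_term(W, X):
    """Hedged sketch: X is an m x m bit matrix whose j-th column marks the
    city visited at step j. The tour-length cost added for the traveling
    salesman problem can be written as tr(W^T X P X^T), where P is a cyclic
    step-shift matrix; this form is an assumption consistent with the trace
    representations used by the framework."""
    W = np.asarray(W, float)
    X = np.asarray(X, float)
    m = X.shape[1]
    P = np.roll(np.eye(m), 1, axis=1)   # column j of X @ P is column j-1 of X
    return float(np.trace(W.T @ X @ P @ X.T))
```

For the identity assignment (city j visited at step j), the trace evaluates to the cyclic tour length w_{m-1,0} + w_{0,1} + … , which is the intended cost.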
To conclude this section, we have provided a systematic procedure for mapping QUBO formulations from the literature onto the coupling coefficients of an Ising problem. In principle, almost any graph-related task can be formulated as a linear combination of the terms that have appeared in the discussed problems. In fact, even problems that are not necessarily graph-related can be interpreted this way. For example, problems on images, such as image restoration, 27 can be interpreted as a lattice of vertices with nearest-neighbor interaction. By calculating the associated adjacency matrix, one can easily map such a problem onto our suggested formalism and obtain the coupling coefficients. Thus, we have bridged the gap between the QUBO formulation and the coupling coefficients needed for mapping a problem onto the Ising machine. For the sake of convenience, the essential relations are summarized in Table A1 of Appendix A.

4 | AN ISING MACHINE BASED ON PHASE OSCILLATORS
Now that we have addressed the systematic mapping of optimization problems onto oscillator-based Ising machines, we can discuss the reasons which qualify oscillator networks to function as Ising machines. The general idea is to let a diffusively coupled oscillator network emulate the behavior of coupled spins, whose energy is described by the Ising Hamiltonian. If the Ising Hamiltonian is somehow mapped onto the energy of the electrical system, then we expect the free-running system to find a state configuration that minimizes its energy and consequently the mapped Hamiltonian. However, this requires the oscillators to act like the spins from the Ising model. Therefore, every oscillator is subjected to a special type of forcing, so-called subharmonic injection locking (SHIL), which enforces bistable phase behavior in the stationary state, 28 that is, a phase shift of 0 or π with respect to some reference oscillation. To explain the functionality of the considered circuit in a formal but intuitive manner, we approximate the circuit's dynamics by phase reduction. 29 This results in a phase model, similar to the Kuramoto model, 30 which captures the phase behavior of the original circuit through a system of differential equations:

dφ_μ/dt = Δω_μ + ω_μ^c(φ) + ω_μ^s(φ_μ),  with  ω_μ^c(φ) = −k_c Σ_ν J_μν sin(φ_μ − φ_ν)  and  ω_μ^s(φ_μ) = −k_s sin(2φ_μ).   (19)

Here, φ_μ denotes the phase shift of the μ-th oscillator. The constant Δω_μ represents the difference between the frequency of the μ-th oscillator, ω_μ, and the network's central frequency, ω_0. The subfunction ω_μ^c represents the inter-oscillator coupling between the μ-th oscillator and the remainder of the network, with the bias coupling strength k_c and the coupling weights J_μν. Similarly, ω_μ^s represents the SHIL coupling of the μ-th oscillator with the coupling strength k_s. For a detailed derivation of the phase model, we refer the interested reader to Wang and Roychowdhury. 4

F I G U R E 4 Compact representation of the coupling scheme for the traveling salesman problem. The blue and green couplings are the same as in Figure 3. The magenta coupling is explicitly illustrated on the right side of the figure.
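The phase model can be integrated directly to emulate the Ising machine in software. The sketch below assumes the common Kuramoto-with-SHIL form of the right-hand side (zero frequency detuning, forward Euler integration); the coupling strengths, step size, and decoding rule are illustrative:

```python
import numpy as np

def simulate_ising_machine(J, h=None, k_c=1.0, k_s=0.5, dt=1e-3, steps=20000, seed=0):
    """Hedged sketch of the SHIL-forced Kuramoto-type phase model, assuming
    dphi_mu/dt = -k_c * sum_nu J[mu,nu] * sin(phi_mu - phi_nu)
                 - k_s * sin(2 * phi_mu)   (+ Zeeman coupling if h is given),
    integrated with forward Euler. The sin(2*phi) SHIL term binarizes the
    phases towards {0, pi}; frequency detuning is set to zero here."""
    n = J.shape[0]
    rng = np.random.default_rng(seed)
    phi = rng.uniform(0.0, np.pi, size=n)          # random initial phases
    for _ in range(steps):
        dphi = -k_c * np.sum(J * np.sin(phi[:, None] - phi[None, :]), axis=1)
        dphi -= k_s * np.sin(2.0 * phi)
        if h is not None:
            # Zeeman term via a phase-locked reference at phi = 0,
            # using the extended couplings J[mu, n+1] = h[mu] / 2
            dphi -= k_c * (np.asarray(h, float) / 2.0) * np.sin(phi)
        phi += dt * dphi
    return np.where(np.cos(phi) > 0.0, 1, -1)      # decode spins from phases
```

For a two-oscillator toy network, an antiferromagnetic coupling drives the phases towards a relative shift of π (opposite spins), while a ferromagnetic coupling synchronizes them (equal spins).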

| Stability analysis
The functionality of the phase oscillator-based Ising machine can be understood by analyzing the stability of the system. The considered phase model is a special case where this stability analysis is relatively simple, because it constitutes a gradient system. The trajectories of such a system always follow the steepest descent of an associated scalar potential function, 31 which leads to asymptotic stability in the sense of Lyapunov. A sufficient condition for the existence of a potential function is the symmetry of the Jacobian associated with the vector field of the considered system. 32 In the following, we derive an analytical expression for the Jacobian and show it to be indeed symmetric. To that end, we first rewrite (19) as Σ: where Δω = [Δω_1, Δω_2, …, Δω_n]ᵀ and φ = [φ_1, φ_2, …, φ_n]ᵀ. Furthermore, we introduce the vector s(φ) = e^{jφ}, which projects the phase values φ_μ onto continuous spin values s_μ ∈ [−1, 1], which can be interpreted like the spin variables of the Ising Hamiltonian. In general, the partial derivatives of the phase equation consist of only two types of elements: Consequently, the Jacobian in analytical form can be given by the following: Here, J_f^c(φ) is a Hermitian matrix containing the partial derivatives of ω^c, whereas J_f^s(φ) is a real diagonal matrix containing the partial derivatives of ω^s. Hence, the Jacobian J_f, as a linear combination of the two, is also Hermitian. Therefore, the considered network has a scalar potential function V(φ), which can be obtained by integrating the state equation with respect to φ and choosing the integration constant such that the negative gradient of V(φ) coincides with the state equation: The function V(φ) is a Lyapunov function candidate that must be modified, so it remains a potential function but fulfills the conditions of a strict Lyapunov function. We refer the interested reader to Lakshmikantham et al 33 for an overview of the properties and applications of Lyapunov functions.
Now, it can be verified that V(φ) fulfills two of the three conditions of a strict Lyapunov function. The third condition requires a valid Lyapunov function to be positive definite. Up to this point, V(φ) is only positive definite if the coupling coefficients J_μν are nonnegative, which generally does not hold. To construct a strict Lyapunov function, we suggest applying a simple rectification: The entries of J⁺ are the absolute values of the entries of J. This change ensures that the potential function stays positive when the coupling coefficients are negative, without having any effect on its gradient. Equation (24) now fulfills all the properties of a strict Lyapunov function, as opposed to the Lyapunov function appearing in the literature. 4 The asymptotic stability of the network can be inferred from examining the time derivative of its Lyapunov function: Here, we make use of (23) to obtain the last equality. The application of a SHIL signal forces the oscillators to assume the values φ_μ ∈ {0, π} in the stationary state, such that we have ℜ{sᵀs} = n and ℜ{s^H J s} = sᵀJ s. Thus, the Lyapunov function can be rewritten as follows, where c(φ) (asymptotically) represents a constant offset. The Lyapunov function hence coincides with the Ising Hamiltonian up to a constant offset. Equation (25) shows that the network minimizes the derived Lyapunov function and therefore also naturally minimizes the Ising Hamiltonian, which explains its functionality as an Ising machine.
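The stability argument can be checked numerically by evaluating the Jacobian at a phase configuration. The sketch below assumes the Kuramoto-with-SHIL form of the dynamics, so the concrete prefactors are assumptions; the symmetry of the matrix, however, holds for any symmetric J:

```python
import numpy as np

def phase_jacobian(phi, J, k_c=1.0, k_s=0.5):
    """Hedged sketch of the Jacobian of the phase dynamics, assuming
    f_mu = -k_c * sum_nu J[mu,nu] * sin(phi_mu - phi_nu) - k_s * sin(2*phi_mu).
    For symmetric J the Jacobian is symmetric, so its eigenvalues are real
    and the stability of an equilibrium reduces to checking their signs."""
    phi = np.asarray(phi, float)
    C = np.cos(phi[:, None] - phi[None, :])
    Jac = k_c * J * C                       # off-diagonal entries
    np.fill_diagonal(Jac, 0.0)
    # J has a zero diagonal, so the row sums below already exclude nu == mu
    diag = -k_c * np.sum(J * C, axis=1) - 2.0 * k_s * np.cos(2.0 * phi)
    return Jac + np.diag(diag)
```

At a phase configuration encoding the optimal solution, all eigenvalues should be negative; at a suboptimal binary configuration, at least one eigenvalue can remain positive, which is the basis of the decision criterion discussed with the emulation results.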

| Electrical model
In this section, we synthesize an electrical circuit for the considered phase model. This circuit serves two purposes: to emulate the behavior of the original circuit and to extend the underlying model so that it incorporates the Zeeman term. To this end, we apply an equivalency transformation to (19), where we associate the phase φ_μ with a scaled voltage quantity πu_μ/û, resulting in the following, with the voltage normalization constant û = 1 V. Now, consider the circuit on the left of Figure 5, which is governed by the following: Equation (28) is now used to model the dynamics of (27). Since ω_μ^c represents an inter-oscillator interaction, it is associated with the external coupling current i_μ. The term ω_μ^s accounts for the SHIL signal, which induces a self-coupling term in (19) and is therefore associated with the current i_{G_μ}. Finally, the residual current j_μ is associated with the frequency degeneration Δω_μ, resulting in the following:

F I G U R E 5 Left: equivalent circuit of the phase oscillator. Right: electrical coupling element representing the phase coupling in the modified Kuramoto model.
We can now use Ohm's law to obtain an expression for the nonlinear conductance: The external coupling currents model the inter-oscillator coupling of the original oscillator network. Considering that each phase oscillator (left side of Figure 5) is a reduced representative of an actual oscillator, we see that the topology of the original network is retained in the circuit of the phase model. Thus, the current i_μ results from coupling multiple phase oscillators via a diffusive coupling network, as depicted in Figure 6 for the example of four coupled phase oscillators. The coupling elements are given by nonlinear conductors (right side of Figure 5), because they implement the nonlinear coupling of the phase model. The conductances can, again, be determined by first writing out the particular Ohmic relation: Here, the orientation of the currents j_μν is chosen such that they originate from phase oscillators with lower indices and flow towards phase oscillators with higher indices; see Figure 6. This simplifies the description of the current flow and enables using Kirchhoff's current law to obtain the relationship: With Equations (31) and (32), the coupling conductances can be formulated as where si(0) = 1 and si(x) = sin(x)/x otherwise.
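The role of si(x) in the coupling conductance can be sketched as follows. The prefactors and the exact form of the conductance are assumptions (only the si(·) shape and the normalization û = 1 V follow the text); the point is that the sinusoidal phase coupling, divided by the voltage difference, yields a well-defined conductance even at zero difference:

```python
import numpy as np

def coupling_current(u_mu, u_nu, J_munu, k_c=1.0, u_hat=1.0):
    """Hedged sketch of the nonlinear coupling conductor: with the phase
    identified as phi = pi * u / u_hat, a sinusoidal phase coupling can be
    written in Ohmic form j = G(du) * du with du = u_mu - u_nu and
    G proportional to si(pi * du / u_hat). The prefactors are assumptions."""
    du = u_mu - u_nu
    x = np.pi * du / u_hat
    G = k_c * J_munu * np.sinc(x / np.pi)  # np.sinc(t) = sin(pi*t)/(pi*t); sinc(0) = 1
    return G * du
```

Because si(0) = 1, the conductance stays finite when both oscillators carry the same voltage, and the resulting coupling current vanishes there, as required of a diffusive coupling element.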
Note that the coupling of the original oscillator network, whose behavior is described by the phase model, is linear. This statement is based on the fact that the coupling coefficients J μν are constant.

| Implementing the Zeeman term
Up to this point, the electrical model does not support incorporating the Zeeman term. However, this term appears in many Ising formulations and should therefore be considered. The equivalent formulation of the Ising Hamiltonian in (4) reveals a possible realization of the Zeeman term, where all oscillators are coupled to a reference oscillator with a locked phase value corresponding to s_{n+1} = +1. The Zeeman term coefficients, given by J_{μ,n+1} = h_μ/2, are then incorporated into the conductances coupling the phase oscillators to the reference oscillator: In practice, this is a very simple approach, but it comes with one drawback. Simulations of the Ising machine using this approach show that the reference oscillator can break out of its locked state. In such situations, the phase evolution of the Ising machine is distorted, which usually results in an invalid or suboptimal solution. A much more reliable approach is to make use of an ideal voltage source supplying the reference signal; see the left of Figure 7. To decouple the circuit, we can split it at the node of the voltage source into n resistive voltage sources; see the right side of Figure 7. Then, every resistive voltage source can be combined with the corresponding oscillator circuit; see the left of Figure 8. To omit the voltage source, we can simply redefine G_{μ,n+1} as follows: Now, we have two parallel conductors, which can be simplified to one conductor with the conductance: These equivalent transformations allow reusing the same circuit as the one presented on the left side of Figure 5, but with a different internal conductance G(u); see the right side of Figure 8.

5 | EMULATION RESULTS AND DISCUSSION
Having derived an electrical circuit for the considered phase model, we now briefly discuss our preferred method of emulation. Here, we make use of the wave digital concept, 34 which is a powerful tool for emulating the behavior of large electrical networks in a highly parallel fashion. In previous work, 23 we have discussed how efficient wave digital models can be derived for the considered phase model. The electrical model of this work presents an extension of our past results, because it now also implements the Zeeman term. However, the electrical model is structurally identical, which allows exploiting the same methods discussed in Ochs et al 23 to emulate the phase oscillator-based Ising machine.
To test the Ising machine on a practical example, we solve a max-cut problem with a graph from Cook 35 that is known to have a cut size of 24; see the right side of Figure 9. The emulation parameters used for the electrical circuit are given in Table 1. Furthermore, the machine is started from random initial states in the interval [0, 1] V.
The machine partitions the graph into two subgraphs, as denoted by the gray and white vertices on the left side of Figure 9. This partitioning can be read off from the output voltages of the oscillators u_μ on the top left of Figure 9. The central top figure shows the oscillations of the original network, which have been reconstructed from the phase signals. Overall, the machine evolves such that it minimizes the Ising Hamiltonian, which can be seen on the bottom right side of Figure 9. Since the problem of maximizing the cut size has been mapped to the Hamiltonian (using the coefficients in (7)), the cut size is maximized when the Hamiltonian is minimized. The correlation between stability and functionality can be clearly seen in the central bottom plot of Figure 9. Here, we calculated the eigenvalues of the Jacobian (22) and plotted the eigenvalue with the maximal real part. All other eigenvalues are not plotted but lie beneath the drawn one. The plot shows that some eigenvalues initially have a positive real part but eventually evolve to have a negative real part once the machine finds the optimal solution. In other words, the optimal solution of the given task has been mapped onto a stable equilibrium in the phase space. To the best of our knowledge, this is the first numerical proof of stability of this kind; it relates to previous analyses 4,8 but has not been conducted before. Furthermore, it has another important implication, which we now would like to discuss: when the machine is turned on, it is generally not possible to know when it has solved the problem. However, if all eigenvalues have a negative real part, then the system trajectories have entered a local attractor with a stable equilibrium. Once the system enters this attractor, the machine cannot exit the attractor's domain (unless externally perturbed), which implies that the machine has solved the problem.
Otherwise, one can only presume that the machine is done when it exhibits some convergence behavior, which is not necessarily a reliable indicator. In principle, one could argue that it is possible to decode the solution after 1 s in this example.

6 | CONCLUSION AND OUTLOOK
Overall, this work extends previous findings on problem mapping and stability of oscillator-based Ising machines. In the first part, we have discussed the mapping of optimization problems onto an oscillator-based Ising machine. To simplify this process, we have proposed a mathematical framework which aids in mapping quadratic unconstrained optimization problems onto the couplings of an Ising machine. Here, we reviewed four representative optimization problems with different cost functions and constraint types to give an overview on how quadratic unconstrained optimization problems can generally be mapped onto an Ising machine. This analysis has shown that many problems can indeed be systematically mapped by an appropriate grouping of the decision variables.
In the second part of this work, we recapitulated the concept of a phase oscillator-based Ising machine. Here, we gave a generalization of our previous work by synthesizing an ideal circuit that also considers the linear term appearing in the Ising Hamiltonian, the so-called Zeeman term. In combination with the wave digital concept, this circuit can be used as an algorithm for simulated annealing. To explain the functionality and robustness of the proposed algorithm, we have conducted a thorough stability analysis. The results of this analysis have been numerically verified on a practical example, where we have discussed the effects of problem mapping on the phase space of the phase oscillator-based Ising machine. In future work, we hope to find a systematic mapping procedure of general binary optimization problems onto an oscillator-based Ising machine.

APPENDIX A
Here, the relationship between (1) and (10) is derived. We make use of the trace operator equalities,

tr(MNO) = tr(OMN) = tr(NOM),  tr(MᵀN) = vec(M)ᵀ vec(N),   (A1)

and the Kronecker product formula (A2), where all appearing matrices are real and square. Applying these equalities to every H_λ allows rewriting (10) as a sum of terms H_λ and H_ℓ, with

H_λ = sᵀ(Q_λ ⊗ R_λ)s  and  H_ℓ = (q_ℓ ⊗ r_ℓ)ᵀ s.   (A3)

T A B L E A 1 Mapping between the sum terms from QUBO formulations and terms of the suggested framework.

Sum terms in QUBO formulation | Representation within framework | Coupling coefficient matrix J
s_μ,ϰ s_ν,ϰ+1 | tr(Ā SᵀPᵀS) | J = −Ā ⊗ Pᵀ

Abbreviation: QUBO, quadratic unconstrained binary optimization.