An Integrated All‐Optical Ising Machine with Unlimited Spin Array Size and Coupling

Artificial spin systems are used to solve combinatorial optimization problems by mapping them to the ground state search of the Ising model. Ising machines of various designs, in fields as diverse as photonics and electronics, have been proposed and demonstrated in recent years. One important mathematical operation required in photonic Ising machines for nearly all Hamiltonian minimization algorithms is repeated matrix–vector multiplication, with the vector size limited by the physical size of the optical system. Herein, an integrated photonic Ising machine based on matrix partitioning and phase encoding is proposed, which effectively allows physically small optical circuits to handle arbitrarily large spin vectors. In this approach, all the computationally demanding operations are carried out optically rather than electronically. It is shown how the required surface area of a hypothetical chip‐based photonic Ising machine can be resized according to convenience, with solution time as the trade‐off, and that this trade‐off can be made in a way that does not impact the algorithm's efficacy. Through simulation, the system is predicted to have a high success rate, high noise tolerance, and high error tolerance.


Introduction
Combinatorial optimization problems are pervasive across many fields, including finance, logistics, biology, and chemistry. [1] However, finding the optimal solution is challenging or even impossible due to the exponentially increasing number of possibilities as the problem size grows. To address this, researchers have explored using artificial spin systems to solve such problems by mapping them onto the ground-state search of the Ising model, a mathematical model that ordinarily describes the interactions between magnetic moments in a lattice structure. [1,2] It was first proposed by Wilhelm Lenz in 1920 [3] and later developed significantly by Ernst Ising in 1925. [4] Ising machines have since been used to study various physical Ising problems and have recently gained attention due to their potential to solve nondeterministic polynomial-time hard (NP-hard) problems. [13,14] Among these platforms, the photonic Ising machine shows great advantages in power consumption, calculation speed, scalability, and noise tolerance. [1,2] In a photonic Ising machine, the spins are typically mapped to the phase of a propagating mode (with respect to a reference beam), with in-phase corresponding to "spin up" and out-of-phase corresponding to "spin down". In recent years, many studies have demonstrated that photonic Ising machines can efficiently solve the maximum cut (MAXCUT) problem, a well-known NP-hard benchmarking problem. [15,16] However, most of the reported photonic Ising machines are based on free space [17] or discrete fiber components, [18] which lead to large system sizes. With respect to previous work, [17] the advantage of the proposed approach lies in its potential for photonic integration. In addition, to reduce the Ising energy and ultimately reach the ground state (GS), an algorithm is required in some Ising machines; although there are several possibilities, [19-22] matrix multiplication is a key step in all of the algorithms suitable for chip-based photonic implementations.
For example, in some implementations of coherent Ising machines, [15,23] the systems rely on digital hardware such as field-programmable gate arrays (FPGAs) to perform matrix multiplications electronically rather than optically. [18] However, because the speed of optical components is generally much higher than that of FPGAs, the overall time required to solve an Ising problem is largely determined by the FPGA components rather than by the photonic portion of the system. [28,29] Photonic circuits have high bandwidth, low latency, and low power consumption, making them ideal for such high-performance computing applications. [13,24,25,30] A fundamental constraint, if the matrix–vector multiplication is performed optically, is that the required surface area of a hypothetical photonic chip will necessarily increase with the square of the spin vector size. This means that while the time to solution of an all-optical Ising system may be extremely fast, the total spin size will be greatly limited by the generally large size of integrated photonic chips, the largest vector size reported to date being 64 bits in a silicon photonics implementation.
[31] In this work, we propose an integrated photonic Ising machine (IPIM) that performs matrix multiplication optically, based on waveguides and optical components such as Mach–Zehnder interferometers (MZIs) and attenuators, but we adjust the algorithm so that time multiplexing of the matrix–vector multiplication becomes possible; the Ising spin vector length is thus no longer inherently limited by the physical chip area. Our algorithm allows a smooth trade-off between photonic chip area and solution time without sacrificing the robustness of the algorithm itself. To evaluate the potential of our IPIM design, we conducted simulations to assess its performance in solving MAXCUT problems. Using photonic circuits, our IPIM design can effectively handle unlimited spin array sizes through time multiplexing, while maintaining a high success rate, high noise tolerance, and high error tolerance. High speed is also ensured by implementing all the important calculations photonically rather than electronically.

Ising Model
The Ising model is a widely used artificial spin system involving N spins, each of which can be in one of two final states: up (σ_n = +1) or down (σ_n = −1). Each spin is coupled with one or more other spins, and the coupling between spin σ_n and spin σ_m is determined by J_mn, which is a measure of the interaction strength between the spins. An Ising machine is designed to find the global minimum of the Hamiltonian (H) as described by Equation (1). For a given problem, there is a specific coupling topology that corresponds to a real symmetric N × N matrix J, which encodes all interaction strengths between spins in the system. The spin states can be represented by a vector σ of length N, where each element of σ corresponds to the state of a single spin. Therefore, Equation (1) can be rewritten in the form of matrix multiplication as shown in Equation (2), where σ^T denotes the transpose of the vector σ.
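Since Equations (1) and (2) are not reproduced here, a minimal sketch in Python (NumPy) may help fix ideas; it assumes the conventional Ising form H = −½ σᵀJσ for a symmetric J with zero diagonal, which is consistent with the sign conventions used later in the text:

```python
import numpy as np

def ising_energy(J: np.ndarray, sigma: np.ndarray) -> float:
    """Ising Hamiltonian H = -1/2 * sigma^T J sigma for a symmetric J
    with zero diagonal; the factor 1/2 avoids double-counting pairs."""
    return -0.5 * sigma @ J @ sigma

# Two antiferromagnetically coupled spins (J_mn = -1):
J = np.array([[0.0, -1.0],
              [-1.0, 0.0]])
aligned = np.array([1.0, 1.0])      # same orientation
opposite = np.array([1.0, -1.0])    # opposite orientation

# With J_mn = -1, the opposite-spin state has the lower energy.
assert ising_energy(J, opposite) < ising_energy(J, aligned)
```

An Ising machine searches over the 2^N possible σ vectors for the one minimizing this quantity.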
The job of an Ising machine is to minimize the Hamiltonian, thereby determining σ given J in the process. Keeping in mind that J is constant, the process of finding the correct spin states is typically iterative, with candidate spin states σ updated algorithmically. The algorithm we propose is based on simulated bifurcation, in which the spin-coupled system spontaneously evolves toward a fixed equilibrium point, ideally corresponding to the GS of an Ising problem. [32] Within such a gain-dissipative dynamical system governed by an approximate Ising Hamiltonian, characterized by a specific nonlinearity and a set of hyperparameters, analog spins can deterministically converge to discrete values centered around zero via pitchfork bifurcations. [32] When the system reaches stable states, the optimal solution for a selected MAXCUT problem can be extracted by taking the sign information of the final spin configuration. Consequently, it becomes feasible to efficiently address a wide spectrum of optimization problems in the form of quadratic unconstrained binary optimization formulations by simulating the bifurcation behaviors of spin ensembles within analog Ising systems. [33]

Results

Iteration Equations
We first describe the algorithm mathematically and then describe how it can be implemented optically. The goal of the algorithm is to determine σ, all elements of which should ultimately be either −1 or +1. During the iterative process, we allow spins to take on analog values between −1 and +1 and denote the spin vector during the k-th iteration as x^k. Initial values of all spins are set to 0 (midway between the two possible final states). Equations (3) and (4) are then updated iteratively, with k being the counter. The status of the n-th spin during the k-th iteration is represented by x_n^k, an analog value that ranges between −1 and +1. The sign of x_n^k at the final iteration is used to determine σ_n. To kick off the process, we begin the first iteration with f_n^1 given a random value between −0.05 and +0.05 for all n (the range itself is arbitrary). f_n^k is a feedback signal injected into Equation (3), and g_n^k represents Gaussian noise, which will be explained later.
This algorithm is similar to the one used in the "poor man's" coherent Ising machine, [15] except that here the cosine function is used instead of cosine squared. It is worth mentioning that in our scheme the optical wave traverses the subsequent unit directly for computation, so we use the cosine form, while in ref. [15] the optical intensity is detected, effectively using cosine squared. (The algorithm performance is detailed later.) During our detection phase, coherent detection is employed to recover the phase information, and it does not contribute to the overall error of the scheme. Here, x_n^k can take on negative values. Note that Equations (3) and (4) can be further cast into matrix/vector form.
In Equation (6), a and b are free parameters which are adjusted to aid convergence, and I is an N × N identity matrix. T is an N × N transfer matrix applied throughout the entire iteration process.
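Equations (3)–(6) are not reproduced here; the update they describe can be sketched as follows, under our reading of the surrounding text that the spin vector is x^k = sin(f^k + g^k) and the feedback is f^{k+1} = T x^k with T = aI + bJ. The parameter values below are illustrative, not the authors':

```python
import numpy as np

def iterate(J, a, b, num_iter=200, noise_std=0.0, seed=0):
    """One possible reading of the iteration: x^k = sin(f^k + g^k),
    f^{k+1} = T x^k, with transfer matrix T = a*I + b*J (Equation (6))."""
    rng = np.random.default_rng(seed)
    N = J.shape[0]
    T = a * np.eye(N) + b * J
    f = rng.uniform(-0.05, 0.05, N)        # random initial feedback f^1
    for _ in range(num_iter):
        g = rng.normal(0.0, noise_std, N)  # Gaussian noise g^k
        x = np.sin(f + g)                  # analog spins in [-1, 1]
        f = T @ x                          # the optical matrix-vector step
    return np.sign(f)                      # final binary spin states

# Two antiferromagnetically coupled spins should end up opposite.
J = np.array([[0.0, -1.0], [-1.0, 0.0]])
sigma = iterate(J, a=0.5, b=0.9)
assert sigma[0] * sigma[1] == -1.0
```

With these illustrative values, the antisymmetric mode of T has gain 1.4, so it grows and saturates at the nonzero fixed point while the symmetric mode decays.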

IPIM and Matrix-Processing Unit Design
Figure 1 illustrates how this algorithm can be implemented optically. A single laser provides a reference beam with an amplitude of Ẽ_0, which is then divided through a splitting tree into N beams of uniform global phase (we first assume we have exactly N beams corresponding to a vector σ of length N; importantly, we will later show how larger vector lengths can be accommodated with a limited number of beams). Referring to Figure 1, we can denote the electrical field of each beam of light at position 1 as Ẽ_0. Each beam is directed through a Mach–Zehnder intensity modulator (MZM), and although the π/2 phase shift in Equation (3) effectively gives us a sine, it is interpreted as a cosine with a bias so that a standard MZM can be used. The fields at position 2 become Ẽ_n^k = Ẽ_0 sin φ_n^k = Ẽ_0 x_n^k, where φ_n^k is the additional phase shift introduced along one MZM arm and the global phase shift has been ignored. The vector components f_n^k are electronically imparted to phase modulators within the MZMs between positions 1 and 2, and together with the noise g_n^k they form the total phase φ_n^k. We now take note of the fact that the vector x^k is completely encoded by the amplitudes of N light beams, and it has been shown that any linear mathematical operation on this set of numbers can be accomplished by a mesh of MZIs.
[13,24,25] The details of this process will be shown in the next section; for now, we shall assume that a "Matrix Processing Unit" (MPU), equipped with suitable programming that incorporates the matrix J, subsequently produces the result denoted as f^{k+1}, so that at position 3 the field Ẽ′_n^k is proportional to f_n^{k+1}. If we were to measure the intensity of the beams at position 3, we could hypothetically determine f_n^{k+1} but without its sign; we therefore opt for homodyne detection (HD). This permits extraction of both the amplitude and the phase of Ẽ′_n^k, allowing us to find |f_n^{k+1}| from the amplitude and its sign from the phase. At position 4, the fields corresponding to a particular Ẽ′_n^k before and after HD are depicted in the lower section of Figure 1 along with their final intensities I_1 and I_2. To obtain f_n^{k+1}, we need only electronically subtract the two measured intensities and scale by a constant factor, which is simple enough that it can even be implemented with analog electronic circuitry. New phase biases f_n^{k+1} = (I_2 − I_1)/(2 E_0^2) are then applied for the subsequent iteration and the process repeats. We show later that this process tends to converge for various Ising problems, including MAXCUT. The key advantage of this process is that all-optical computation is essentially achieved with no electronic bottleneck. It is plausible that the entire feedback system could be implemented with fast analog electronics, eliminating analog-to-digital conversion in the measurement process as well. We now describe how the MPU functions as the central core of this design. To accomplish the objective of the IPIM, it is essential to execute the N × N transfer matrix T in Equation (6) in a photonic way. It has previously been demonstrated that a unitary matrix operation can be accomplished with a mesh of MZIs.
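The subtraction step at position 4 can be sketched numerically. Assuming a balanced 50:50 homodyne arrangement (an assumption on our part, chosen to be consistent with the stated (I_2 − I_1)/(2 E_0^2) scaling), the two detector intensities recover both magnitude and sign:

```python
import numpy as np

E0 = 1.0  # reference-beam amplitude (arbitrary units)

def homodyne_feedback(f_true: float) -> float:
    """Recover a signed amplitude from two balanced-detector intensities.
    Signal field E_s = E0*f interferes with reference E0 on a 50:50
    combiner: I_{1,2} = |E_s -/+ E0|^2 / 2, so f = (I2 - I1)/(2*E0^2)."""
    E_s = E0 * f_true
    I1 = abs(E_s - E0) ** 2 / 2
    I2 = abs(E_s + E0) ** 2 / 2
    return (I2 - I1) / (2 * E0 ** 2)

# Both sign and magnitude survive, unlike direct intensity detection.
for f in (-0.7, -0.1, 0.0, 0.3, 0.9):
    assert np.isclose(homodyne_feedback(f), f)
```

Direct detection would give only |E_0 f|², losing the sign; the interference term 2 Re(E_s E_0) is what carries it.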
[13,24,25] However, T might not be a unitary matrix, which implies that the MPU needs to be capable of handling any real matrix. Through singular value decomposition, [34] any real matrix T can be decomposed as T = U · Σ · V^T, where U and V^T are unitary matrices and Σ is a diagonal matrix. The diagonal matrix operation Σ can be accomplished by a column of programmable attenuators, i.e., MZMs. [35] This completes the description of the system if there are exactly N beams corresponding to a vector σ of length N.
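As a quick numerical illustration of this decomposition (not the authors' calibration procedure), NumPy's SVD returns exactly the factors described; note that, since attenuators can only scale amplitudes down, in practice the singular values would be normalized by the largest one and the constant absorbed elsewhere in the loop gain:

```python
import numpy as np

rng = np.random.default_rng(42)
T = rng.normal(size=(4, 4))             # an arbitrary real 4x4 transfer matrix

U, s, Vt = np.linalg.svd(T)             # T = U @ diag(s) @ V^T
assert np.allclose(T, U @ np.diag(s) @ Vt)
assert np.allclose(U @ U.T, np.eye(4))  # U is orthogonal (real unitary)
assert np.allclose(Vt @ Vt.T, np.eye(4))

# Attenuators can only implement values in [0, 1]; rescale by the
# largest singular value and compensate with a global gain factor.
attenuations = s / s.max()
assert np.all((0 <= attenuations) & (attenuations <= 1))
```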
In practice, for a given Ising problem, N can be an exceedingly large number, making it impractical to have N beams exiting the splitting tree on a single chip with the subsequent optical matrix operations. We now turn our attention to how the system will function if the number of waveguides after the splitting tree, M, is insufficiently large to accommodate N spins. To handle arbitrarily large numbers of Ising spins, we partition the large matrix T of size N × N into submatrices T_sub of size M × M and deal with them sequentially. Rather than directly processing T, the MPU now exclusively handles a smaller T_sub. Consider as an example an Ising computer limited to M = 4, as shown in the top portion of Figure 2. Here, 12 MZIs and 4 adjustable attenuators (implemented by tunable MZIs) are employed for the MPU of an IPIM. We decompose the 4 × 4 submatrix T_sub as U · Σ · V^T, where we have redefined U, V^T, and Σ to be the decomposition of a particular T_sub rather than of the entire matrix T.

Figure 1. Schematic of how the proposed scheme can be implemented optically. The system starts with an input light that is split into M beams (four beams as illustrated). To ensure uniform phase, these beams are guided through waveguides before entering MZMs, which modulate the light using radio-frequency input signals. Next, the MPU is employed, followed by homodyne detection (measurement of intensities). The feedback signal is calculated directly from the measured intensities using only simple operations. Finally, the signal is sent back to the amplitude modulators via digital-to-analog converters. In this scenario, the IPIM acts as a feedback oscillator.
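The partitioned multiplication itself is ordinary block-matrix arithmetic; a sketch, with M and N chosen arbitrarily and zero-padding applied when M does not evenly divide N:

```python
import numpy as np
from math import ceil

def blockwise_matvec(T: np.ndarray, x: np.ndarray, M: int) -> np.ndarray:
    """Compute T @ x using only M x M submatrices, as an MPU of width M
    would: ceil(N/M)^2 sequential passes, zero-padding the remainder."""
    N = T.shape[0]
    P = ceil(N / M)                       # sub-blocks per axis
    Tp = np.zeros((P * M, P * M))
    Tp[:N, :N] = T                        # zero-pad to a multiple of M
    xp = np.zeros(P * M)
    xp[:N] = x
    y = np.zeros(P * M)
    for i in range(P):                    # each (i, j) pass is one optical
        for j in range(P):                # traversal of the MPU
            T_sub = Tp[i*M:(i+1)*M, j*M:(j+1)*M]
            y[i*M:(i+1)*M] += T_sub @ xp[j*M:(j+1)*M]
    return y[:N]

rng = np.random.default_rng(0)
T = rng.normal(size=(10, 10))             # N = 10 spins
x = rng.normal(size=10)
assert np.allclose(blockwise_matvec(T, x, M=4), T @ x)
```

The electronic accumulation (`+=`) is the only operation that happens between optical passes.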
Equations (7)–(10) give the scattering matrices of the MZIs inside the MPU in an example where M = 4, where both parameters θ_i and δ_n can be adjusted. Each MZI implements a rotation-matrix multiplier, affecting the phases of the two input beams, and each attenuator scales the magnitude. In this case, two sets of MZIs act as the two unitary matrices U and V^T, while a row of attenuators acts as the diagonal matrix Σ. Equation (11) mathematically outlines the functioning of the MPU when applied to the input light. The calculation of the parameters θ_i and δ_n from a particular matrix T_sub is described in refs. [13,24,25], but it is notable that because J is fixed for any given Ising problem (and hence all the T_sub are constant), the calculation of these parameters needs to be done only a single time. As the system iterates through the various submatrices, the appropriate parameters θ_i and δ_n need to be applied to the optical components within the MPU, but no calculation steps are required to do this. It is worth mentioning that the required parameters can be configured using external electrical signals while the system is in operation. In this case, the only critical factor is the light travel time: by calculating and preprogramming the interval between each parameter change, the system's speed remains unaffected.

Figure 2. Working principle of the integrated photonic Ising machine (IPIM). To illustrate the design of the MPU, a 4 × 4 IPIM is considered here, comprising 12 MZIs and 4 attenuators. The process commences with an input coherent beam, which is split into equivalent beams (in this illustration, M = 4). Subsequently, these beams pass through a series of MZIs representing the matrix V^T, followed by M adjustable attenuators (implemented by tunable MZIs) representing Σ. Finally, another set of M(M − 1)/2 MZIs is utilized, representing U. Any N × N matrix can be partitioned into multiple submatrices of size M × M. As the light traverses the MPU, each pass processes one submatrix. In cases where N is not evenly divisible by M, the surplus elements can be populated with zeros. In this way, σ and J can be divided into ⌈N/M⌉ (the smallest integer no less than N/M) subvectors and ⌈N/M⌉² submatrices, respectively.
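The one-time parameter extraction can be illustrated with a real-valued sketch of the mesh-decomposition idea from the cited references (this is a plain Givens-rotation elimination, not the exact phase parameterization of refs. [13,24,25]): M(M − 1)/2 = 6 two-mode rotations suffice for a 4 × 4 orthogonal factor.

```python
import numpy as np

def givens(N, i, j, theta):
    """A 2x2 rotation (the action of one lossless MZI) embedded in an
    N-mode identity, mixing modes i and j."""
    G = np.eye(N)
    c, s = np.cos(theta), np.sin(theta)
    G[i, i], G[i, j], G[j, i], G[j, j] = c, -s, s, c
    return G

# A random 4x4 real orthogonal matrix stands in for U (or V^T).
rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.normal(size=(4, 4)))

# Zero the M(M-1)/2 = 6 subdiagonal entries one rotation at a time,
# analogous to programming a triangular MZI mesh.
R, rotations = U.copy(), []
for col in range(3):
    for row in range(col + 1, 4):
        theta = np.arctan2(-R[row, col], R[col, col])
        G = givens(4, col, row, theta)
        R = G @ R
        rotations.append(G)

D = np.round(R)  # residual is diagonal with +/-1 entries (a sign layer)
assert np.allclose(R, D, atol=1e-12)

# Undoing the rotations reassembles U: U = G1^T @ ... @ G6^T @ D.
Urec = np.eye(4)
for G in rotations:
    Urec = Urec @ G.T
assert np.allclose(Urec @ D, U)
```

The six angles (and the sign layer) are the quantities that would be written to the MZI phase shifters once per submatrix.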
The key advantage of the proposed IPIM is that it allows unlimited spin array sizes, realized through matrix partitioning. Breaking the matrix apart is a form of time multiplexing, so the scaling behavior matters. Consider the situation in which the IPIM works with a vector of size M but we need to compute with a vector of size N; the optical calculation will then scale in time by a factor of ⌈N/M⌉². Moreover, the number of input and output operations, instead of being just N inputs and N outputs, becomes ⌈N/M⌉² × M inputs and ⌈N/M⌉² × M outputs, as each submatrix must be computed individually, plus the addition operations needed to assemble the result. If M is too small, the optical processor is unlikely to offer any advantage over other Ising machines; the advantages appear only when M is large enough to avoid this growth in the number of input and output operations.
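The trade-off can be made concrete with a small helper (function and variable names are ours, not the paper's):

```python
from math import ceil

def pass_and_io_counts(N: int, M: int):
    """MPU passes and input operations per iteration when an M-wide
    optical core time-multiplexes an N-spin problem (one pass per
    submatrix; outputs scale identically to inputs)."""
    passes = ceil(N / M) ** 2
    io_ops = passes * M
    return passes, io_ops

# 256 spins on a 64-wide core: 16 passes, 1024 inputs instead of 256.
assert pass_and_io_counts(256, 64) == (16, 1024)
# The same problem on a 4-wide core needs 4096 passes: M matters.
assert pass_and_io_counts(256, 4) == (4096, 16384)
```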
For a larger Ising problem, a large electronic processor may potentially solve it in a single iteration. Nevertheless, the time and energy demands of the matrix computation could be greater than with the photonic method. [1,2] Here, employing matrix partitioning to address larger Ising problems helps strike a balance between the size of the Ising problem and the associated computation time and energy. In particular, as advancements in photonic technology enable larger decomposed subsystems, [31] the advantages of our photonic Ising machine will become even more apparent.

Spin Evolution
In an Ising machine, the sign information is crucial, as it determines the interactions between the spins in the system. As described earlier, the Ising model is a mathematical model of ferromagnetism, which describes the interaction between magnetic spins in a lattice. In the Ising model, the spin interaction is determined by the sign of the coupling strength between the spins: one sign of coupling favors spins of the same orientation, while the other favors opposite orientations. In the context of a photonic Ising machine, the spins are typically represented by the phase states of light. The sign of the coupling strength between two spins in the Ising model is therefore equivalent to the relative phase between the corresponding light beams. Thus, the phase information of the light is crucial in determining the sign of the coupling strengths between spins in an Ising machine.
To test the effectiveness of the cosine algorithm as an optimization problem solver, we consider MAXCUT, a well-known NP-hard problem often used for Ising-computer benchmarking. [32] By setting the connecting edges in the graph to be antiferromagnetic, with coupling topology J_mn = −1, MAXCUT can be mapped to an Ising problem. We have carried out simulations to study and predict the performance of our IPIM in solving MAXCUT problems.
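The mapping can be checked exhaustively on a toy instance: with J_mn = −1 on every edge, the minimum-energy spin state is exactly the maximum cut. The graph and helper names below are illustrative:

```python
import numpy as np
from itertools import product

# A 4-node example graph: the square 0-1-2-3 plus one chord 0-2.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
N = 4

J = np.zeros((N, N))
for m, n in edges:
    J[m, n] = J[n, m] = -1.0    # antiferromagnetic coupling on every edge

def ising_energy(sigma):
    return -0.5 * sigma @ J @ sigma      # H = -1/2 sigma^T J sigma

def cut_size(sigma):
    # An edge is "cut" when its endpoints carry opposite spins.
    return sum(1 for m, n in edges if sigma[m] != sigma[n])

# Brute force over all 2^N states: the energy minimum is the maximum cut.
states = [np.array(b, dtype=float) for b in product((-1, 1), repeat=N)]
ground = min(states, key=ising_energy)
assert cut_size(ground) == max(cut_size(s) for s in states)  # 4 edges cut
```

Here the partition {0, 2} versus {1, 3} cuts four of the five edges; the triangle 0-1-2 makes cutting all five impossible.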
Because our system involves electronic and analog signals, especially at the point of MZM signal input, there will be Gaussian noise present. This kind of noise can impact the efficiency and stability of the system. To study the noise tolerance of our IPIM, the iteration Equations (5) and (6) in matrix form were programmed with Gaussian noise, which follows the normal distribution (Equation (12)).
Here, r_n is a random variable, so the elements of the noise matrix follow a Gaussian distribution with standard deviation d. By adjusting d, the magnitude of the Gaussian noise applied to each MZI can be modeled. Initial conditions are also noisy for the same reason, which we initiate in the simulation with a random f^1. To determine whether the system will converge for a given problem, we need to compare x_n^k and x_n^{k+1} in Equation (3). By combining Equations (3) and (4), we obtain Equation (13), where c_n^k is defined in Equation (14); here we limit the Ising problem to antiferromagnetic situations where J_mn is negative and set the Gaussian noise g^k = 0 to eliminate randomness and better study the algorithm. It should be noted that c_n^k is not a function but a constant for the n-th spin during the k-th iteration. As the system evolves, typically c_n^k ≠ c_n^{k+1}, but if the system is to ultimately converge, then for large values of k we desire that c_n^k = c_n^{k+1}. In Equation (14), we note that x_n^k refers to a given Ising spin n, and x_m^k refers to the Ising spins coupled to it. Let us now study how these influence c_n^k.
When c_n^k < 0, the net effect of the coupled spin states is to flip spin x_n^k, since according to Equation (1) this will decrease the Ising energy of the system, and according to Equation (13) a sign change within the sine term leads to a sign change of x_n^k. The spin evolution in the different situations is shown in Figure 3. This discussion primarily focuses on Figure 3a, where x_final represents the value at the intersection point of y = sin(c_n^k · x) and y = x in the first quadrant. When x_n^k lies within the range (0, x_final), sin(c_n^k · x_n^k) exceeds x_n^k, causing x_n^k to increase during the next iteration; in essence, x_n^k is propelled closer to x_final during subsequent iterations. Conversely, when x_n^k falls within the range (x_final, 1), sin(c_n^k · x_n^k) is smaller than x_n^k, so x_n^k decreases during the next iteration; once again, x_n^k is pushed toward x_final. The situation is mirrored in the third quadrant, where x_n^k ultimately converges to −x_final. Figure 3b represents the negative counterpart of Figure 3a; both x_final and −x_final exist as final phases, however, given that c_n^k < 0, x_n^{k+1} and x_n^k will have opposite signs, indicating that the spins flip in each subsequent iteration at the final state. In Figure 3c, sin(c_n^k · x_n^k) is consistently smaller than x_n^k throughout the entire first quadrant; consequently, x_n^k ultimately converges to x_final = 0.

Figure 3. Spin-state evolution for different ranges of c_n^k (panels a-f). Inset figures depict the spin states using checkerboards, with a color gradient ranging from yellow to blue representing the positive and negative phases of the spins. Orange arrows indicate whether spins will flip in the subsequent iteration.

Similarly, scenario (d) is the negative version of Figure 3c. When |c_n^k| > π/2, the scenario may resemble Figure 3a, with final phases emerging at the intersection points. However, alternative situations, as depicted in Figure 3e, can also occur: since sin(c_n^k · x_n^k) is no longer strictly monotone increasing, the possibility arises of four or more final phases, where positive and negative mirror each other. In these cases, spins transition among these final phases without undergoing complete reversals. Figure 3f serves as the antithesis of Figure 3e, wherein the spins undergo full flips. The pivotal step in reducing the Ising energy lies in the "driving force" that propels x_n^k toward x_final in the algorithm. To examine the behavior of the spins in the final state, let us consider a square coupling topology consisting of a 20 × 20 square lattice with periodic boundary conditions.
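The per-spin dynamics just described are easy to reproduce numerically by iterating the scalar map x ← sin(c·x); the values of c below are chosen only to land in the regimes discussed:

```python
import numpy as np

def iterate_map(c: float, x0: float, steps: int = 500) -> float:
    """Iterate x <- sin(c * x): the per-spin dynamics when the coupling
    term c_n^k is held constant."""
    x = x0
    for _ in range(steps):
        x = np.sin(c * x)
    return x

# c in (1, pi/2]: a stable nonzero fixed point x_final (Figure 3a regime).
x_final = iterate_map(1.4, 0.2)
assert np.isclose(x_final, np.sin(1.4 * x_final))   # it is a fixed point
assert x_final > 0.5                                 # and it is nonzero

# c in [0, 1]: sin(c*x) < x for all x > 0, so the spin decays to zero (3c).
assert abs(iterate_map(0.5, 0.9)) < 1e-3

# c < -1: the magnitude settles but the sign flips every iteration (3b).
a = iterate_map(-1.4, 0.2, 500)
b = iterate_map(-1.4, 0.2, 501)
assert np.isclose(a, -b)
```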
Each of the 400 spins is coupled to its four nearest-neighbor spins with J_mn = −1. To better illustrate the gradual change in spin orientations during each iteration, a checkerboard pattern with different colors is used to represent the spin status. As previously mentioned, the values of the parameters a and b play a crucial role in successfully reaching the GS of the Hamiltonian, so it is essential to set these parameters to appropriate values. Once the system has reached its final state and there is no disturbance from noise, c_n^final should remain constant. To better understand the relationship between x_n^k, x_n^{k+1}, and x_n^{k+2}, the same method as before is used, as shown in Figure 4. It is found that spins only stay at a steady state when c_n^k ∈ [1, π/2], and in all such situations the Ising energies reach global minima. This is because the sign information, rather than the absolute value, determines the Ising energy. By looking at the checkerboards of two adjacent iterations in Figure 4, we can observe how spins fluctuate in the final state and the evolution trajectory of each spin during the iterations. In summary, when |c_n^k| ∈ [1, π/2], there is a pair of opposite steady spin values. When |c_n^k| ∈ [0, 1], the spin values approach zero gradually. From Figure 4, it can be concluded that the number of iterations required to reach the GS is not predetermined, due to the stochastic nature of the initial conditions. [15] However, a discernible pattern emerges in the number of iterations for each scenario after multiple runs of the system. Now that the functionality of the algorithm is confirmed, we consider whether an arbitrary initial state will lead to the GS. Checkerboards are used to visualize this information. When there is no randomness, or "noise", the only thing that matters is the initial state. In this study, we investigate two initial feedback signals, f^1′ and f^1″, which lead to opposite results. Figure 5 shows the evolution of the checkerboards.
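The benchmark topology is straightforward to construct; a sketch of the coupling matrix for the periodic 20 × 20 lattice, with the checkerboard ground state verified explicitly (the energy value follows from the assumed H = −½ σᵀJσ convention):

```python
import numpy as np

L = 20                      # 20 x 20 lattice, N = 400 spins
N = L * L

def idx(i, j):
    return (i % L) * L + (j % L)   # periodic boundary conditions

J = np.zeros((N, N))
for i in range(L):
    for j in range(L):
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            J[idx(i, j), idx(i + di, j + dj)] = -1.0   # antiferromagnetic

# Each spin couples to exactly four nearest neighbors, symmetrically.
assert np.allclose(J, J.T)
assert np.all(J.sum(axis=1) == -4)

# The checkerboard sigma_ij = (-1)^(i+j) satisfies all 2N bonds, so it
# is the GS of this Hamiltonian: H = -2N.
sigma = np.array([(-1.0) ** (i + j) for i in range(L) for j in range(L)])
assert -0.5 * sigma @ J @ sigma == -2.0 * N
```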
Figure 5a shows the evolution of the spins with f^1′. Initially, the unsettled spins form an eye diagram, which then separates into two parts from the joint area and forms a ring due to the periodic boundary condition. The ring gradually shrinks to solid circles and eventually reaches the GS. In contrast, with the other initial feedback signal f^1″, the system cannot reach the GS, as shown in Figure 5b. Again, an eye diagram of unsettled spins is observed at first. However, instead of a ring, the spins separate into two curves, which gradually straighten out. The key difference is that with f^1′ the joint area is separated horizontally, while with f^1″ it is separated vertically, leading to rings and curves, respectively. This means that for the system to reach the GS, convergent patterns must form on a global level. On an individual level, each spin is regulated by c_n^k as stated. When the net effect of the coupled neighbor spins is up, the center spin tends to face down, and vice versa. Regardless of the pattern of unsettled spins, whether a curve, a ring, or a solid circle, the outer arc has more neighbors than the inner arc, making spins in the outer areas easier to regulate. Thus, for nonconvergent patterns, the curvature decreases until they turn into straight lines with the same number of neighbors on both sides. For convergent patterns, the rings narrow until they disappear.

Discussion
Based on the analysis of the algorithm and the convergence of the spin diagrams, it is possible to gain insight into the performance of the IPIM for solving the MAXCUT problem. In this study, the success rate, noise tolerance, and error tolerance are investigated through simulations.

Noise Tolerance
We explore the tolerance of our system to Gaussian noise in three specific cases with different initial states. Here, the noise originates from the entire Ising circuit, for example from fluctuations in the electrical input of the modulators. [15] To calculate the success rate of the IPIM, 1000 runs are conducted for each standard deviation of the noise, and each run starts with a randomly generated feedback signal f^1. The success rates in reaching different ratios of the GS (defined as the final Hamiltonian energy divided by the GS energy) in terms of Ising energy are studied, and the results are shown in Figure 6. The success rate of reaching the GS for d = 1/2, which is far larger than normal, is slightly higher than for d = 0. Moreover, the number of iterations required to reach the final state is nearly the same. However, the success rate of reaching below 90% of the GS is much lower. As mentioned earlier, in certain scenarios two straight lines of spins can become trapped in local minima, which accounts for the remaining 10%. In contrast, when the standard deviation is raised to d = 1, the success rate suddenly drops to nearly zero. This indicates that, at the macrolevel, Gaussian noise within a certain range scarcely influences the success rate or the speed. However, when the noise exceeds a certain value and becomes comparable to the final phase states, which are approximately ±1 as demonstrated for the GS in Figure 4, random noise takes the lead and the system cannot reach the GS. The results demonstrate the high success rate of the IPIM within a considerable noise range.
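A success-rate experiment of this kind can be sketched as follows, reusing the iteration as we read it from Equations (3)–(6); the parameters, problem size, and run count are illustrative, far smaller than the paper's 1000-run study:

```python
import numpy as np

def run_once(J, a, b, d, rng, num_iter=300):
    """One IPIM run: x = sin(f + g), f <- (a*I + b*J) @ x, noise std d."""
    N = J.shape[0]
    T = a * np.eye(N) + b * J
    f = rng.uniform(-0.05, 0.05, N)      # random initial feedback f^1
    for _ in range(num_iter):
        x = np.sin(f + rng.normal(0.0, d, N))
        f = T @ x
    return np.sign(f)

def success_rate(J, gs_energy, a, b, d, runs=200, seed=0):
    """Fraction of random starts that reach the ground-state energy."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(runs):
        s = run_once(J, a, b, d, rng)
        hits += (-0.5 * s @ J @ s) == gs_energy
    return hits / runs

# Toy problem: 4-spin antiferromagnetic ring; its GS energy is -4.
J = np.zeros((4, 4))
for n in range(4):
    J[n, (n + 1) % 4] = J[(n + 1) % 4, n] = -1.0

# a + 2b = 1.2 puts the steady-state coupling inside (1, pi/2).
rate = success_rate(J, gs_energy=-4.0, a=0.2, b=0.5, d=0.1)
assert 0.0 <= rate <= 1.0
```

On this easy instance a moderate noise level leaves the success rate high; sweeping `d` reproduces the qualitative behavior described above.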
From another perspective, Gaussian noise brings randomness to the system. For f^1″ and other initial feedback signals that cannot lead to the GS, some spins get stuck in a local energy minimum. In this situation, a certain amount of noise can make the phase values fluctuate, which helps spins escape from local minima and explore a wider range of solutions, allowing the algorithm to converge to the global minimum. As a result, the nonconvergent scenario (Figure 5b) can be transformed into a convergent one (Figure 5a). Therefore, within a reasonable range, increasing the level of Gaussian noise can improve the performance and speed of the IPIM. In contrast, for f^1′, which can lead to the global minimum, each phase tends to change toward the final state. The noise may interfere with this tendency and delay the whole process.
With a small probability, however, the noise can also accelerate it. These results suggest that in some scenarios noise can enhance the performance of the integrated photonic Ising machine for nonconvergent initial states, while in others it may slow down the system. As demonstrated in Figure 6, the success rate over 1000 random initial feedback signals, discussed previously, stabilizes at the same level across this range of noise intensity. Overall, while noise may reduce the success rate and slow down the system, the IPIM exhibits a high noise tolerance.

Error Tolerance
Although the matrix multiplication can be done optically, waveguides cannot be fabricated perfectly, so there is a loss associated with light propagation. Such a loss can result from waveguide transmission or from an imbalance in the power-splitting ratio of passive devices. [36] Hence, the error tolerance of the integrated photonic Ising machine must be considered. To model the error, a random N × N matrix is added to Equation (6) to obtain Equations (15) and (16). Here, the error is a random function uniformly distributed over [−e_0, e_0], where e_0 is the error index. This error is added to the coupling-topology matrix.
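One plausible reading of this error model (the exact form of Equations (15) and (16) is not reproduced here) is an element-wise uniform perturbation of the transfer matrix:

```python
import numpy as np

def perturbed_transfer(J, a, b, e0, rng):
    """Transfer matrix with fabrication error: each element of
    T = a*I + b*J is offset by a uniform random value in [-e0, e0].
    This is our reading of the error model, not the exact equations."""
    N = J.shape[0]
    T = a * np.eye(N) + b * J
    return T + rng.uniform(-e0, e0, size=(N, N))

rng = np.random.default_rng(0)
J = np.array([[0.0, -1.0], [-1.0, 0.0]])
T_err = perturbed_transfer(J, a=0.5, b=0.9, e0=0.02, rng=rng)
T_ideal = 0.5 * np.eye(2) + 0.9 * J

# Every element deviates by at most the error index e0.
assert np.max(np.abs(T_err - T_ideal)) <= 0.02
```

Rerunning the iteration with `T_err` in place of the ideal T is how the success rates in Figure 7 would be regenerated.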
Figure 7 shows the results with this error but without noise. When d = 0, Figures 6a and 7a display nearly identical results; the only distinction is that in Figure 7a, e_0 = 1/100, whereas in Figure 6a, e_0 = 0. This implies that a small error has minimal impact on the system. For e_0 = 1/50, the success rate remains high. Similar to Figure 6b, where d = 1/2, the success rate stays the same when reaching the GS but drops when reaching below 90% of the GS. When the error index increases further to e_0 = 1/20, the success rate drops significantly. This error index is larger than the error levels of current waveguide fabrication. [37,38] Therefore, the IPIM is predicted to have a considerable degree of error tolerance.

Performance on Other Topologies
Reports have shown good performance of coherent Ising machines on various 2D graph problems. [32] In addition to the square lattice, our proposed IPIM also performs well on other coupling topologies, such as the triangular lattice and the Moebius ladder. The triangular lattice is similar to the square lattice, except that each spin is connected to two additional spins: its upper-left and lower-right neighbors. In the Moebius ladder configuration, spins are arranged in a circle, and each spin is connected to its two adjacent neighbors as well as the spin on the opposite side of the circle. As in the square-lattice study, an antiferromagnetic situation is applied, and 1000 simulations are conducted to determine success rates for 1000 random initial states. The results are illustrated in Figure 8. This indicates that the IPIM has broad applicability and is likely to perform well on other NP-hard problems. However, we note that proving wide applicability of any IPIM to a particular NP-hard problem is impossible, due to the very nature of NP. Although not all NP-hard problems can be solved by our proposed IPIM, it is expected to work in many cases where the specified problem can be mapped onto Equation (1). [33]
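To make the Moebius ladder coupling concrete, the following sketch (a hypothetical helper, not from the paper) builds its antiferromagnetic coupling matrix exactly as described above: spin i couples to its two ring neighbors and to the diametrically opposite spin i + n/2. The triangular-lattice matrix can be constructed analogously from its neighbor rule.

```python
import numpy as np

def moebius_ladder(n):
    """Antiferromagnetic coupling matrix of a Moebius ladder on n spins.

    Spins sit on a circle; each couples to its two adjacent neighbors
    and to the spin on the opposite side of the circle (n must be even).
    J_ij = -1 encodes the antiferromagnetic case used in the text.
    """
    assert n % 2 == 0, "the 'opposite spin' is only defined for even n"
    J = np.zeros((n, n))
    for i in range(n):
        for j in (i - 1, i + 1, i + n // 2):
            J[i, j % n] = -1.0
    return J

# 36 spins, matching the subset illustrated in Figure 8
J = moebius_ladder(36)
print(J.sum(axis=1))   # every spin carries exactly three couplings
```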

Conclusion
In conclusion, we propose a promising photonic Ising chip design that utilizes matrix decomposition to handle unlimited-sized arrays of Ising spins and has demonstrated the ability to solve MAXCUT problems. The IPIM uses waveguides and optical components to manipulate the phase of light, potentially on-chip, to solve optimization problems optically. The proposed iterative algorithm converges toward the GS of the various problems studied. Through our simulations, we have demonstrated high success rates, high noise tolerance, and high error tolerance. In some scenarios, noise can even enhance the system's performance for non-convergent initial states. Additionally, the system can tolerate a certain degree of fabrication error.

Figure 4 .
Figure 4. Evolution trajectories and diagrams of the spins during each iteration, corresponding to Figure 3. Ising energy evolutions are demonstrated here when c_n^k belongs to the following ranges: a) c_n^k ∈ 1, π

Figure 5 .
Figure 5. Simulated checkerboard results. Checkerboards displaying the status of spins for different initial feedback signals during iterations, considering two cases: a) k = 15/30/80/100 for f₁′ and b) k = 15/20/80/150 for f₁″. In the ground state of f₁′, only two spin states are present. However, in the final state of f₁″, spins exhibit four states. Some spins appear partially up or down, and they are arranged in parallel straight lines denoted by different colors in the checkerboard. These spins are trapped in local minima of the Ising energy and cannot escape from their partially up or down states.

Figure 6 .
Figure 6. Success rate results. The simulated success rates of achieving different percentages of the ground-state energy for 1000 random initial states with varying levels of noise, where a) d = 0, b) d = 1/2, and c) d = 1.

Figure 7 .
Figure 7. The simulated success rate with 1000 different initial feedback signals, which are the same as those used in the noise-tolerance study. Three different values of the error index e₀ are considered: a) e₀ = 1/100, b) e₀ = 1/50, and c) e₀ = 1/20.

Figure 8 .
Figure 8. The simulated success rates for two different coupling topologies. a) The Moebius ladder with parameters a = 0.25 and b = 0.29. b) The triangular lattice with parameters a = 0.45 and b = 0.25. In both cases, the standard deviation of Gaussian noise was set to d = 1/100. Note that while the simulation is executed with 400 spins, the figure illustrates only a subset of 36 spins for both the triangular lattice and the Moebius ladder.