Enhancing Robustness of Memristor Crossbar‐Based Spiking Neural Networks against Nonidealities: A Hybrid Approach for Neuromorphic Computing in Noisy Environments

Memristor crossbar‐based spiking neural networks (SNNs) face challenges caused by nonidealities associated with their hardware‐based neurons and synapses. The key nonidealities include electric‐field noise, conductance noise, and conductance drift. This study investigates the robustness of fully connected, convolutional, residual, and spike‐timing‐dependent plasticity‐based SNNs against hardware nonidealities using the MNIST, Fashion MNIST, and CIFAR10 datasets. In response to these challenges, a novel hybrid residual SNN (HRSNN) is proposed that incorporates a new neuron circuit and a weight‐dependent loss function. The HRSNN in a high‐intensity noise environment is evaluated using the neuromorphic DVS128 Gesture dataset. The achieved accuracy rate of 92.71% is only 2.15% lower than that of the noise‐free environment. These results demonstrate the robustness of the proposed HRSNN under high‐intensity noise conditions and present new possibilities for the advancement of neuromorphic computing in noisy environments.


Introduction
Memory resistors, known as memristors, have attracted significant research interest since their initial experimental demonstration in 2008. [1] Chua proposed the memristor as the fourth fundamental two-terminal passive circuit element five decades ago. [2] This idea holds significant implications for electronic components whose behavior depends on the quantity of charge traversing the device. [3] Memristors have emerged as promising candidates for constructing neuromorphic hardware due to their ability to fulfill nearly all criteria for artificial synapses. A prevalent application involves using memristor crossbars to implement spiking neural networks (SNNs) derived from conventional artificial neural networks (ANNs). [4] Several researchers have demonstrated the benefits of implementing novel SNN configurations with distinctive brain-inspired characteristics, leading to significantly reduced power requirements. SNNs are energy-efficient and biologically plausible neural network architectures that excel at processing temporal information. They operate in an event-driven manner, encoding data with precise spike times rather than continuous values, which enhances their robustness to noise and their suitability for dynamic environments. Furthermore, because SNNs are compatible with parallel and distributed computing, they enable efficient hardware implementation and real-time processing. However, training SNNs and integrating them into hardware pose significant challenges. One mitigation strategy utilizes memristor crossbars, which offer parallel computing power, energy efficiency, and retention of previous states. Owing to the grid-like structure of memristive devices, multiple inputs and outputs can be processed simultaneously to accelerate calculations and reduce energy consumption. The integration of memristor crossbars into computing systems thus offers great potential for advancing efficient and powerful ANN applications.
The application of memristors faces numerous nonidealities caused by the limitations of current manufacturing processes and materials. These deficiencies include conductance noise and conductance drift, which can significantly impact memristor-enabled SNN training. Therefore, it is crucial to develop alternative approaches to effectively mitigate these nonidealities. [7] Furthermore, several studies have explored new synaptic structures that utilize special designs to enhance resistance to conductance drift. [8,9] In this study, we analyze electric-field noise, conductance noise, and conductance drift in memristor synapses and propose three solutions to mitigate these challenges.
This study emphasizes the resilience of the residual SNN structure to electric-field and conductance noise. The main contributions are summarized as follows: 1) We introduce a novel neuron structure that effectively mitigates the impact of memristor synaptic conductance drift and electric-field noise on performance. The proposed structure compensates for the current input to the subsequent neuron by controlling the pulse frequency and utilizing an amplifying circuit, thereby eliminating electric-field noise and conductance drift and improving robustness. 2) A new weight-dependent loss function is introduced to mitigate conductance drift during training. This approach compensates the loss function so that the actual weights slightly surpass the ideal weights. Therefore, the impact of conductance drift is mitigated, and weight convergence is promoted. Extensive testing supports these contentions.
3) The novel hybrid residual SNN (HRSNN) is proposed and applied to three recognition tasks, yielding promising results. The experimental outcomes provide compelling evidence of the model's exceptional resilience in high-intensity noise environments when employed for neuromorphic tasks. Therefore, our findings strongly support the feasibility and potential advantages of integrating HRSNNs into future neuromorphic computing systems.

Training Algorithm
SNNs offer several advantages over conventional neural networks, but their reliance on nondifferentiable pulses hinders the application of the traditional gradient-descent learning algorithms used in ANNs. With ANNs, error backpropagation can be achieved by propagating the error through each synapse using the chain rule. However, SNNs operate on nondifferentiable impulse signals.
The complex and dynamic nature of impulse-based neural networks makes traditional backpropagation algorithms unsuitable for direct SNN application. Consequently, the lack of effective learning algorithms hinders the development and advancement of SNNs, despite their notable characteristics.
In terms of training, Neftci [10] introduced an algorithm to address the challenges of using gradient descent in pulse-based neural networks. The algorithm leverages the gradient of the sigmoid function instead of the pulse gradient. Rueckauer [11] presented another approach involving the conversion of parameters from traditional ANNs to those suitable for SNNs. This transformation allows trained ANNs to be easily adapted to SNNs, facilitating their construction. Diehl [12] utilized a spike-timing-dependent plasticity (STDP) learning mechanism inspired by biological synapses to train SNNs. Although the STDP learning mechanism is aligned with biological learning processes, its learning effectiveness is limited. To address this issue, Zheng [13] proposed an STDP-based supervised learning algorithm that approximates the gradient of neurons while preserving the characteristics of biological synapses, significantly enhancing learning effectiveness.
When implementing hardware SNNs, transferring trained network parameters to resistive random-access memory (RRAM) or phase-change memory (PCM) leads to information loss. Compared with simulations, hardware-based results always suffer from inherent synaptic noise, resulting in significant errors. Joshi [8] proposed a global compensation technique using parameter batch normalization in a residual convolutional neural network (CNN), producing highly favorable results. The hardware-based solution achieved an accuracy of 93.75% on the CIFAR10 dataset and consistently maintained accuracy above 92.6% for one day.
Conductance drift [24] also presents a significant challenge to network training. Boybat [9] studied the conductance drift of PCM devices and demonstrated the possibility of resetting the drift history by applying partial SET pulses. To solve this issue, Boybat [9] proposed a novel multi-PCM synaptic architecture that showed remarkable improvements in training deep neural networks. Similarly, Dai [25] incorporated the characteristics of conductance drift into a reinforcement learning system based on memristor arrays. They utilized the memristor array's topology to implement λ decay in the Sarsa(λ) algorithm, which reduced the computational burden associated with power-exponential decay calculations. These studies have addressed the challenges posed by conductance drift in memristors and offered innovative approaches to enhance the training and efficiency of SNN implementations.
Zheng [6] presented a novel learning algorithm tailored for SNNs. The algorithm considers the nonidealities present in both neurons and synapses, accounting for current leakage in neurons and conductance noise in synapses. Therefore, the proposed supervised STDP learning algorithm exhibits remarkable robustness when applied to SNNs.

Addressing the Nonidealities
To assess the resilience of different network structures and learning algorithms to different types of noise, we evaluated the performance of convolutional SNNs (CSNNs), residual SNNs (ResSNNs), fully connected SNNs, and unsupervised learning STDP SNNs on three classification datasets: MNIST, Fashion MNIST, and CIFAR-10.

Electric-Field Noise
Electric-field noise is produced by small perturbations in the membrane potential of a hardware neuron, V, expressed as

V(t) = V_m(t) + Ψ A_v sin(ωt)

where V_m is the noise-free membrane potential, Ψ is the length of the polarization, and A_v and ω are the intensity and angular frequency of the external electric field, respectively. In our simulation, we set Ψ = 1 mm and ω = 2πf, where f denotes the electric-field frequency.
The greater the magnitude of the electric-field noise, the greater the fluctuation of the membrane potential, resulting in missed pulses or misfires. To clearly determine the impact of electric-field noise on the neuron membrane potential, we applied a large, high-frequency electric-field noise of 40 sin(100πt) mV to the fully connected SNN with 784 × 10 dimensions. The membrane potential of the output-layer neurons was recorded, and the results are presented in Figure 1.
The color intensity corresponds to the membrane potential, wherein darker colors indicate lower values. Neurons fire when the membrane potential reaches −0.055 V. When the handwritten number "7" was provided as input, adding electric-field noise significantly reduced the number of spikes emitted by the associated eighth neuron. Because the SNN identifies numbers based on the neuron with the highest spike-firing rate, a decrease in the number of spikes led to inaccurate predictions. We investigated the robustness of different network structures by introducing electric-field noise with varying intensities and frequencies. The results, shown in Figures 2 and 3, indicate that as the intensity and frequency of the electric-field noise increase, the test accuracies of the four SNNs gradually decrease owing to the impact of the noise on the membrane potential; this, in turn, changes the firing frequency and affects the recognition outcomes. Notably, the CSNN exhibits the greatest sensitivity to electric-field noise, whereas the ResSNN shows minimal susceptibility and demonstrates strong robustness.
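To make the mechanism concrete, the following minimal Python sketch (the LIF parameter values are illustrative assumptions, not the paper's simulation settings) adds a sinusoidal term to a leaky integrate-and-fire membrane potential and shows that the spike count changes once the noise term is present:

```python
import numpy as np

def lif_spike_count(i_in, dt=1e-4, t_end=0.1, tau=0.02,
                    v_rest=-0.065, v_th=-0.055, noise_amp=0.0, f=50.0):
    """Count spikes of a leaky integrate-and-fire neuron; an optional
    sinusoidal term models electric-field noise on the membrane potential."""
    n_steps = int(t_end / dt)
    v = v_rest
    spikes = 0
    for k in range(n_steps):
        t = k * dt
        v += (-(v - v_rest) + i_in) / tau * dt  # leaky integration
        # effective membrane potential seen by the threshold comparator
        v_eff = v + noise_amp * np.sin(2 * np.pi * f * t)
        if v_eff >= v_th:
            spikes += 1
            v = v_rest  # reset after firing
    return spikes

clean = lif_spike_count(0.012)                    # no field noise
noisy = lif_spike_count(0.012, noise_amp=0.040)   # 40 mV sinusoidal noise
```

Here the noise perturbs when the comparator sees a threshold crossing, so the firing count (and hence the rate code the SNN relies on) is distorted relative to the clean case.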

Conductance Gaussian Noise
During the write operation, noise is introduced into the memristor circuits. Assuming that the write noise, denoted G_noise, is independent of the expected conductance change and the current conductance state, the conductance with write noise, G_real, is given by

G_real = G + G_noise

The write noise follows a Gaussian distribution, defined as

G_noise ~ N(0, (σ · (G_max − G_min)/2)²)

where N is the Gaussian distribution and σ is the dimensionless standard deviation of the conductance noise normalized to the range (G_max − G_min)/2. The test results for the four SNNs are presented in Figure 4.
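A minimal numpy sketch of this write-noise model (the conductance bounds, target values, and σ below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def write_with_noise(g_target, g_min=1e-6, g_max=1e-4, sigma=0.01):
    """G_real = G_target + G_noise, with G_noise drawn from a Gaussian
    whose standard deviation is sigma * (g_max - g_min) / 2, then
    clipped to the device's physical conductance range."""
    std = sigma * (g_max - g_min) / 2
    g_noise = rng.normal(0.0, std, size=np.shape(g_target))
    return np.clip(g_target + g_noise, g_min, g_max)

g_ideal = np.full((4, 4), 5e-5)     # target conductances for a 4x4 crossbar
g_real = write_with_noise(g_ideal)  # what actually lands on the devices
```

Each programming attempt thus lands near, but not exactly on, the target conductance, which is the source of the variability discussed below.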
As the conductance noise increases, the accuracy of the four SNNs gradually decreases. Among these, the CSNN exhibits the fastest rate of reduction, whereas the ResSNN exhibits the slowest.
In summary, conductance noise introduces randomness into the synaptic conductance values, leading to increased variability in the network's response. This variability can disrupt the precise timing of spikes and interfere with the reliable transmission of information. Consequently, it can reduce the accuracy and overall performance of SNNs.

Conductance Drift
Although memristor resistance variability can be utilized for information storage, memristors undergo changes over time, making them imperfectly nonvolatile. This phenomenon is known as conductance drift, whereby the conductance of a memristor gradually decreases over time, changing its synaptic weight. [28] The conductance is calculated as

G(t) = G(t_0) · (t/t_0)^(−v)

where G(t_0) is the conductance at time t_0, the initial state at which drift begins, and v is the conductance drift coefficient, typically equal to 0.1.
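The drift law can be evaluated directly; a short Python sketch (the initial conductance is an illustrative value):

```python
def drifted_conductance(g_t0, t, t0=1.0, v=0.1):
    """Power-law conductance drift: G(t) = G(t0) * (t / t0) ** (-v)."""
    return g_t0 * (t / t0) ** (-v)

g0 = 1e-4                                   # conductance at t0 (illustrative)
g_100s = drifted_conductance(g0, 100.0)     # ~0.63 * g0 after 100 s
g_10000s = drifted_conductance(g0, 1e4)     # ~0.40 * g0 after 10,000 s
```

With v = 0.1 the decay is slow but unbounded on a log-time scale, which is why long test intervals (hours to a month, as in the experiments below) matter.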
To evaluate the impact of conductance drift on network training, we conducted tests varying the drift coefficient v. The results are shown in Figure 5. Both the CSNN and ResSNN exhibited high sensitivity to conductance drift, particularly the CSNN, where even a slight amount of drift led to learning failure. This can be attributed to the larger number of parameters and deeper network structures of the CSNN and ResSNN compared with the other two SNNs.
In neural networks, the weight range typically falls within [−1, 1]. However, a single crossbar cannot directly represent negative weights. To overcome this limitation, a pair of crossbars is commonly used. [4] In our study, we employed a linear mapping rule to map the weight values to the conductance values of the memristor crossbar, with W+ and W−, the positive and negative parts of the weight matrix W (W = W+ − W−), mapped to the two crossbars so that the relationship between weight and conductance is linear. From Equations (7) and (8), we can conclude that if the conductance is attenuated to γ times its original value, the weights are attenuated to τ times their original values, where τ is the attenuation factor of the weight determined by γ. For every factor-of-γ decrease in the crossbar conductance, the network weights decrease by the factor τ.
Figure 6b shows the forward propagation of the pulse signal on the crossbars. Note that the signal output to the next neuron is the weighted sum of the currents passing through each memristor. Owing to conductance drift, the weights decay, and the current signal input to the next stage decays with them. The current signal attenuates each time it passes through a layer; therefore, a network with more layers is more affected by conductance drift. Evidently, in an n-layer network, if the weights decay by the factor τ, the output will be τ^n times the original value. Therefore, deeper networks are more susceptible to the effects of conductance drift. This also explains why ResSNNs and CSNNs are more affected than regular and STDP-based SNNs.
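Ignoring the spiking nonlinearity, the τ^n scaling can be verified in a linear toy model (the layer sizes and τ below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def forward(x, weights, tau=1.0):
    """Forward pass through n linear layers whose weights have all
    decayed by a common factor tau (tau = 1 means no drift)."""
    for w in weights:
        x = (tau * w) @ x
    return x

x = rng.normal(size=4)
weights = [rng.normal(size=(4, 4)) for _ in range(3)]  # n = 3 layers
ideal = forward(x, weights)
drifted = forward(x, weights, tau=0.9)
# every component of the drifted output scales by tau ** n = 0.9 ** 3
```

In a real SNN the thresholds make the effect nonlinear (neurons stop firing altogether once inputs fall below threshold), which is why deep spiking networks fail abruptly rather than degrade gracefully.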
In this study, we constructed four fully connected SNNs with varying numbers of hidden layers (i.e., 1, 3, 5, and 7) and tested them on the MNIST dataset.The results, as shown in Figure 7, reveal that the network with a greater number of hidden layers is more susceptible to conductance drift.

Method
This section outlines our methodology, which includes the construction of the HRSNN, its loss function, and the novel neuron circuitry. All simulations were conducted in Python (i.e., on the spikingjelly [29] and memtorch [30] platforms). We used spikingjelly to build the SNNs and novel neurons and implemented the hybrid training algorithm proposed in this article. Following each iteration, we used memtorch to map the SNN weights to the crossbar and introduce nonidealities, simulating the in situ training of the memristive neural network.

HRSNN
Our proposed HRSNN leverages the inherent resilience of the ResSNN to electric-field and conductance noise, enabling effective mitigation of the challenges discussed. The HRSNN's fully connected layer is trained using gradient descent with STDP sequences, whereas the remaining layers are trained using surrogate gradient learning. This hybrid training strategy empowers the HRSNN to achieve high robustness with significantly improved performance over other SNNs. The model adopts an in situ training process, as shown in Figure 8.

HRSNN Training Algorithm
The STDP gradient descent algorithm can capture the temporal dynamics of synaptic plasticity. [31] The STDP sequence and its arithmetic mean are defined over a learning duration D_L with a system delay T for evaluating spikes, where x_i^l[n] is the state sequence of the ith neuron in layer l. PyTorch is used for pretraining, after which the weights are mapped to the memristor crossbars and device nonidealities are added. Following forward propagation, the error is calculated, the gradient is updated, and the weights are updated using the error backpropagation algorithm. These variables are then remapped to the memristor crossbars.
where V_j^(l+1) is the membrane voltage, th_j^(l+1) is the threshold of the neuron, and R is a random variable obeying the Bernoulli distribution, used to achieve a stochastic refractory period. [13,31] According to the findings of Zheng and Mazumder, [31] the gradient components required for the stochastic gradient descent learning process can be estimated from μ_i^l and μ_j^(l+1), the mean firing rates of neurons x_i^l and x_j^(l+1), respectively, and the sample mean of the time sequence x_i^l[n] during one learning iteration. The weight update can be expressed as Δw = −α ∂E/∂w, where E is the loss function and α is the learning rate. In this study, we chose the cross-entropy loss function.
Another common training method uses the surrogate gradient learning algorithm, as described by Eshraghian et al. [32] This approach leverages the gradient of the sigmoid function instead of the actual neuron gradient. In the weight update rules for this algorithm, S is the spike operator function, which is nondifferentiable and cannot be used for gradient-descent optimization; therefore, the gradient of the sigmoid function is used as a substitute.
Although the STDP learning algorithm is robust to conductance noise, [6] it faces significant challenges when training large-scale networks. The algorithm requires recording each neuron's state at every time step, which results in substantial storage consumption, leading to increased energy consumption and computational costs. Therefore, the HRSNN provides direct benefits. It comprises one 3 × 3 convolutional layer, two pooling layers, five residual blocks, and one fully connected layer. Each residual block has two 3 × 3 convolutional layers, each followed by a batch normalization layer and a rectified linear unit (ReLU) activation function. We skip the two convolutional layers and add the input directly before the final ReLU activation. The architecture is shown in Figure 9.
The HRSNN training approach trains the convolutional layers using surrogate gradient algorithms, whereas the fully connected layer is trained with STDP-based gradient descent. The surrogate gradient in the convolutional layers avoids the need to record the timing of each neural pulse, which is more efficient and saves storage space. The STDP-based gradient descent in the fully connected layer enhances the robustness and biological plausibility of the network. This combination leads to highly optimized network parameters for error minimization while capturing local plasticity and temporal relationships.

HRSNN Simulation Result
We evaluated the HRSNN in the presence of electric-field noise of 5 sin(100πt) mV and conductance noise of N(0, ((G_max − G_min)/2 · 0.01)²) and tested its effectiveness on the MNIST, Fashion MNIST, and CIFAR10 datasets. Concurrently, we compared the HRSNN with a ResSNN trained using surrogate gradient training. The results, shown in Figure 10, reveal that training achieves remarkably high accuracy even in a noisy environment, affirming the efficacy of the algorithm and network structure proposed in this study in mitigating the effects of noise. Furthermore, we found that the accuracy of the HRSNN is higher than that of the ResSNN and fluctuates less: the HRSNN inherits the robustness of the STDP gradient descent algorithm, so its accuracy fluctuates less in a noisy environment and its training process is less affected by noise.

Weight-Dependent Cross-Entropy Loss
To address the issue of conductance drift during training, we introduced the weight-dependent cross-entropy loss function.
According to Equation (15), Δw can be calculated from the gradient of the loss. The weight-dependent cross-entropy loss function corrects the cross-entropy loss using the conductance drift coefficient v, a compensation coefficient k, and the mean weight value, such that the loss is brought closer to the ideal state.
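The paper's exact formula is given in its Equation (16); as a rough, hypothetical sketch of the mechanism described here, one can add to the cross-entropy a correction proportional to v, k, and the mean weight, so that minimizing the loss pushes the trained weights slightly above their ideal values and drift then decays them toward, rather than below, the ideal (the sign and form of the correction term are assumptions for illustration only):

```python
import numpy as np

def cross_entropy(probs, label):
    """Standard cross-entropy for a single sample."""
    return -np.log(probs[label])

def weight_dependent_ce(probs, label, weights, v=0.1, k=0.5):
    """Hypothetical sketch: CE corrected by a term built from the drift
    coefficient v, compensation coefficient k, and the mean weight.
    The minus sign rewards slightly larger weights, creating the
    intentional overshoot described in the text. (Assumed form.)"""
    return cross_entropy(probs, label) - k * v * np.mean(weights)

probs = np.array([0.1, 0.7, 0.2])            # softmax output (illustrative)
w = np.array([[0.2, -0.1], [0.4, 0.3]])      # current weights (illustrative)
loss = weight_dependent_ce(probs, 1, w)
```

Because the correction scales with v, a device with stronger drift receives a proportionally larger overshoot.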

Novel Loss Function Simulation Result
By comparing the data in Figures 10 and 11, we observe that the new loss function has little effect on convergence speed on the MNIST and Fashion MNIST datasets. However, on the CIFAR10 dataset, the new loss function shows a remarkable improvement in convergence speed. In Figure 10, the network without the new loss function achieves an accuracy of over 90% on CIFAR10 by epoch 12. In contrast, when incorporating the new loss function (Figure 11), the network surpasses 90% accuracy by epoch 5. This improved convergence can be attributed to the weight-dependent mechanism within the loss function, which allows for more efficient updates to network parameters during training. Consequently, the HRSNN benefits from faster learning and improved efficiency, leading to its superior training speed.

Novel Neuron Circuit Structure
Our model utilizes a low-power stochastic neuron [33] with a stochastic refractory period as the front end of a clock comparator. The voltage accumulates on the integrator, and a pulse is released by the clock comparator when the voltage exceeds a predetermined threshold. By employing clock-controlled neurons, the proposed model offers precise control over pulse firing, which effectively mitigates the influence of electric-field noise. Additionally, the model incorporates an amplifier circuit to mitigate the effects of conductance noise and drift during testing. By integrating these mechanisms into the neuron model, the objective is to minimize disruptions caused by nonidealities and enhance the reliability of training.
The clock signal period of the clock-signal neuron is π/ω, where ω denotes the angular frequency of the electric field. The neuron spikes when the membrane potential V[n] at clock edge n exceeds the threshold. When electric-field noise is considered, the membrane potential acquires the additive term Ψ A_v sin(ωt). Because the rising edge of the clock always occurs at t = Nπ/ω, the electric-field noise at this time is Ψ A_v sin(Nπ) = 0. Therefore, when the rising edge of the clock arrives, the electric-field noise is always zero; that is, the electric-field noise does not affect pulse emission (Figure 12). The second stage of the circuit incorporates an amplifier to mitigate the nonidealities of the memristor. The current signal input to the next stage is straightforward to obtain, where V_0 is the output spike of the clock comparator, R_3 = 1/G_0, and G_0 is the conductance of the synapse after training. G(t) is the conductance of the synapse, G_2(t) is the conductance of the memristor in the amplifier, and v_1 and v_2 are the drift coefficients of the synapse and the amplifier memristor, respectively. Comparing Equations (20) and (21), we observe that when v_1 and v_2 are equal, the current signal produced by the new neuron matches the ideal scenario. Nevertheless, owing to limitations in the existing manufacturing process, achieving an exact match of the drift coefficients of every memristor is not feasible, so v_1 and v_2 are only approximately equal. When v_1 and v_2 are close, t^(−v_1)/t^(−v_2) can be treated as a constant near 1. Consequently, the output current of the new neuron can be reasonably approximated as the ideal output current.
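The cancellation argument can be checked numerically; the sketch below (voltage and conductance values are illustrative) models the next-stage current as proportional to t^(−v_1)/t^(−v_2), per the comparison of Equations (20) and (21):

```python
def output_current(v0, g0, t, v1, v2, t0=1.0):
    """Next-stage current for the two-stage neuron: the synapse drifts as
    (t/t0)**(-v1) while the amplifier memristor drifts as (t/t0)**(-v2),
    so the two power laws largely cancel when v1 is close to v2."""
    return v0 * g0 * (t / t0) ** (-v1) / (t / t0) ** (-v2)

v0, g0 = 0.1, 1e-4              # spike amplitude and trained conductance
i_ideal = v0 * g0               # drift-free current
i_matched = output_current(v0, g0, t=1e4, v1=0.1, v2=0.1)    # exact cancel
i_mismatch = output_current(v0, g0, t=1e4, v1=0.1, v2=0.13)  # slow deviation
```

With matched coefficients the drift cancels exactly; with v_1 = 0.1 and v_2 = 0.13 (the values used in Table 1) the current only drifts as t^(v_2 − v_1) = t^0.03, a far slower deviation than the uncompensated t^(−0.1).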

Novel Neuron Circuit Simulation Result
We conducted MATLAB simulations to assess the output current of the novel neuron, and the outcomes are depicted in Figure 13. Figure 13a shows the current output through a memristor synapse to the next neuron using a conventional neuron circuit, and Figure 13b shows the synaptic output current through a memristor to the next neuron using the novel neuron circuit.
The traditional neuron exhibits a decrease in the pulse peak and current fluctuations over time, because its output voltage generates a series of pulses with equal amplitude while the conductance of the memristor synapse gradually decays. In contrast, the novel neuron compensates the output voltage, ensuring a consistent pulse peak value of the output current despite the conductance decay. Additionally, the novel neuron incorporates a clock signal to regulate the pulse firing frequency, thereby effectively minimizing fluctuations in the output current.
We implemented the novel neuron circuit in the HRSNN and tested it on different time scales and datasets to analyze its robustness and stability in a conductance-drift environment.
We set different values of v_1 and v_2 and tested on CIFAR10 over a time scale of 5,000 s. The test results are shown in Figure 14a. The accuracy loss of the HRSNN is relatively large within 1-500 s due to the increasing value of G(t)/G_2(t). This leads to a larger deviation between the current input to the next-level neuron and the ideal current, resulting in accuracy loss for the HRSNN. After 500 s, the value of G(t)/G_2(t) stabilizes and the current input to the next level of neurons starts decreasing, leading to a stable accuracy. Additionally, the greater the difference between v_1 and v_2, the more severe the accuracy loss, because a larger difference increases G(t)/G_2(t), widening the gap between the actual and ideal input currents.
We tested the HRSNN with different test sets over various time scales, as summarized in Table 1. We performed ten tests on each dataset; the accuracy shown in Table 1 is the mean and standard deviation of the ten runs, with v_1 = 0.1 and v_2 = 0.13. For the MNIST dataset, the accuracy decreases from an initial 99.72% to 96.94% after one month. Similarly, for Fashion MNIST, the accuracy decreases from 97.56% to 95.33%. Finally, for the CIFAR10 dataset, the accuracy decreases from 94.42% to 92.65%. These results suggest that the performance of the HRSNN employing the novel neurons is only mildly affected by conductance drift, highlighting the efficacy of the proposed neuron in mitigating the detrimental effects of electric-field noise and conductance drift on network performance. By effectively suppressing these disturbances, the novel neuron design ensures that the HRSNN maintains its accuracy during testing, particularly for simpler datasets. We also plot the change in accuracy of the HRSNN on a logarithmic time axis when tested on the CIFAR10 dataset; the result is shown in Figure 15. After about 700 s, the loss of accuracy becomes very slow, although accuracy still decreases with time. Nevertheless, the accuracy of the HRSNN remains above 90% after one month, demonstrating very good stability.

Experimental Results
To ascertain the effectiveness and advantages of the three proposed techniques, we tested their combinations on the CIFAR10 dataset and also analyzed their impact on the DVS128 Gesture dataset to assess the model's feasibility for neuromorphic computing tasks. In this section, all HRSNNs that integrate the novel neurons are trained using the weight-dependent cross-entropy loss function.

Comparison with Related Works
The Backpropagation with Gradient Accumulation (BP-GA) algorithm proposed by Dong [7] and the STDP gradient descent algorithm proposed by Zheng [6] effectively suppress conductance noise. The Defect Rescuing algorithm proposed by Liu [14] can overcome conductance drift. The locally connected memristive spiking neural network (LC-MSNN) proposed by Li [5] mitigates the influence of errors on network conductance. To evaluate these networks' performance in the presence of complex high-intensity noise, this study simultaneously applies to all of them an electric-field noise of 27 sin(45πt) mV, a conductance noise of N(0, ((G_max − G_min)/2 · 0.01)²), and a drift coefficient of v = 0.1. The networks are tested with the MNIST dataset, and the results are shown in Figure 16 and Table 2.
As depicted in Figure 16, the performance of the SNNs deteriorates over time due to nonidealities. The BP-GA algorithm proposed by Dong accumulates errors continuously; under nonideal characteristics, the errors grow, resulting in a sharp decline in SNN performance. The LC-MSNN proposed by Li can only partially overcome nonidealities; its accuracy fluctuates significantly and declines rapidly over time, with a decrease of 71.61% after 500 s. Although the STDP gradient descent algorithm proposed by Zheng is robust to conductance noise, it does not mitigate the influence of electric-field noise and conductance drift. Therefore, its performance also changes dramatically in the presence of complex, high-intensity noise, dropping from 94.7% to 36.34%. Although this reduction is smaller than that of the LC-MSNN, the impact of the nonidealities is still destructive. Defect Rescuing improves the accuracy of SNNs under conductance drift but ignores the influence of electric-field noise and conductance noise; under high-intensity complex noise, its accuracy decreases by 34.8%. Therefore, we can conclude that the hybrid-trained HRSNN proposed in this article has very obvious advantages under complex, high-intensity noise.

Results on the CIFAR10 Dataset
In this test, we constructed five HRSNN architectural combinations, as summarized in Table 3. We also developed five ResSNNs with identical structures to serve as baselines; note that the ResSNN models did not incorporate the three techniques proposed in this study. Tests were conducted on the CIFAR10 dataset. The results, depicted in Figure 17a, illustrate the impact of different noise levels (i.e., noise numbers) on network performance. The corresponding noise intensities are presented in Table 4. Noise No. 1 represents an ideal condition without noise, whereas Noise No. 10 represents the most challenging case, with maximum noise intensities across all dimensions. For each noise level, we performed ten tests per network, calculated the mean and standard deviation of the accuracy, and plotted the accuracy curves with error bars.
As observed from Figure 17a and Table 5, the accuracy decay of the HRSNNs is smaller than that of the ResSNNs. The accuracy attenuation of HRSNN1 and HRSNN2 is approximately 2%, whereas that of ResSNN1 and ResSNN2 reaches 60-75%. The accuracy of the HRSNNs changes minimally as the noise strengthens, while the accuracy loss of the ResSNNs increases with the noise. Under high-intensity noise, the standard deviation of HRSNN1 and HRSNN2 is approximately 1.5, while that of the ResSNNs reaches 4.5; therefore, the robustness of the HRSNN is better than that of the ResSNN.
Figure 17a also illustrates the impact of different noise intensities on the various network structures. The results show that the proposed HRSNN is robust against mixed high-intensity noise: as the intensity increases, the accuracy of the ResSNN gradually decreases, whereas that of the HRSNN with the same structure remains relatively stable. This observation highlights the noise resilience of the HRSNN.
Furthermore, Figure 17b illustrates the impact of the individual noise types on the various network structures. Comparing the data for ResSNN2 under Noise Nos. 13 and 10 shows that conductance drift reduces the accuracy of the neural network by 36%, whereas the other two noise types reduce it by 7% and 10%, respectively. These findings indicate that conductance drift has the most pronounced influence on neural networks, and networks affected by conductance drift generally exhibit lower accuracy than those exposed to the other types of noise.
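A power-law decay is a commonly used model for the conductance drift that dominates the accuracy loss above; the sketch below uses that generic form with illustrative parameter values, not the paper's exact device model.

```python
def drifted_conductance(g0, t, nu, t0=1.0):
    """Power-law conductance drift often used to model memristive
    devices: G(t) = G0 * (t / t0) ** (-nu), where nu is the drift
    coefficient. A larger nu means the stored weight decays faster."""
    return g0 * (t / t0) ** (-nu)

# A weight stored as conductance decays over time: after 100 time units
# with nu = 0.1, roughly 63% of the original conductance remains.
g = drifted_conductance(1.0, t=100.0, nu=0.1)
```

Because every synaptic weight decays in this multiplicative way, the effective weight matrix drifts away from its trained values, which is consistent with drift degrading accuracy more severely than the additive noise sources.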
The HRSNN's novel neurons, which synchronize their spiking frequency with the frequency of the electric-field noise, effectively minimize the noise on the membrane potential and suppress electric-field interference. During testing, the memristor's inverse amplification circuit compensates for the voltage inputs to the synapse, thereby ensuring that the current input to the subsequent neuron remains consistent with the ideal scenario even when the memristor's conductance decreases. For training, the HRSNN utilizes a weight-dependent cross-entropy loss function that allows the network parameters to update to slightly more than their ideal values and gradually decay toward them. This approach bridges the gap between the actual and ideal weights, leading to an improvement in network performance.
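The exact form of the weight-dependent cross-entropy loss is not reproduced here; the following is a hedged sketch of the idea, where a weight-magnitude term (with an assumed coefficient `lam`) biases training toward slightly larger weights, so that subsequent conductance drift decays them toward their ideal values rather than below them.

```python
import math

def weight_dependent_cross_entropy(logits, target, weights, lam=0.01):
    """Sketch: standard softmax cross-entropy for one sample plus a
    weight-dependent term. The term rewards larger weight magnitudes,
    illustrating (not reproducing) the HRSNN's drift-compensating loss."""
    # Numerically stable softmax cross-entropy
    m = max(logits)
    z = [x - m for x in logits]
    log_sum = math.log(sum(math.exp(x) for x in z))
    ce = -(z[target] - log_sum)
    # Weight-dependent term: lower loss for larger mean |weight|
    weight_term = -lam * sum(abs(w) for w in weights) / len(weights)
    return ce + weight_term

loss = weight_dependent_cross_entropy([2.0, 1.0, 0.1], 0, [1.0, -1.0])
```

With `lam = 0`, this reduces to the ordinary cross-entropy; increasing `lam` strengthens the bias toward larger weights, which in this sketch stands in for the "slightly more than ideal" update behavior described above.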

Result on the DVS128 Gesture Dataset
SNNs are widely used for neuromorphic data classification owing to their event-driven nature and resemblance to brain processes.
In this study, the DVS128 Gesture dataset was used to evaluate the performance of the HRSNN and ResSNN networks within this domain. The test results are shown in Figure 18, focusing on the most severe noise level (i.e., Noise No. 10) to examine its impact.
The results provide valuable insights into the differential impact of noise on the HRSNN and ResSNN models, highlighting their respective performances under challenging conditions. The HRSNN demonstrates remarkable resilience, exhibiting only a marginal decrease in performance in the presence of high-intensity noise. In noise-free conditions, the HRSNN achieves an accuracy rate of 94.75%, which decreases slightly to 92.71% when noise is present. Notably, the impact of high-intensity noise on the HRSNN is limited, amounting to a mere 2.15% decrease in accuracy.
The ResSNN exhibits a stark contrast in performance. Although it achieves an accuracy rate of 84.53% without noise, it experiences a substantial decline in the presence of noise: the accuracy rate plummets to a mere 28.125%, a relative decrease of 66.7%. Moreover, the ResSNN reached a critical point after 21 epochs, where its accuracy remained stagnant at 16.67%, indicating that the noise interference trapped the ResSNN in a local optimum and hindered its ability to complete training effectively.
The notable disparity in training stability between the HRSNN and ResSNN showcases the superior stability of the proposed model throughout training, characterized by minimal fluctuations in the accuracy rate compared with the ResSNN. This stability serves as a testament to the resilience and robustness of the HRSNN in mitigating the detrimental impacts of noise in neuromorphic tasks.

Conclusion
In this study, we conducted simulations and investigations into the nonidealities present in hardware SNNs, focusing on electric-field noise, conductance noise, and conductance drift.
We also examined the impacts of these properties on different network structures and derived the key insights from our findings. Despite these adverse conditions, the HRSNN successfully completed the learning process and achieved outstanding accuracy across various classification tasks. This robustness to noise and the ability to maintain high performance highlight the effectiveness of the HRSNN in real-world applications.
The results showed that the ResSNN structure exhibits remarkable robustness against electric-field and conductance noise. We provided a novel neuron circuit that successfully mitigated the issues associated with electric-field noise and conductance drift on network performance. Additionally, we introduced a novel weight-dependent cross-entropy loss function that effectively minimized the influence of conductance drift during network training. The experimental results demonstrated the exceptional performance of the HRSNN when utilizing the weight-dependent cross-entropy loss function. The network achieved high accuracy, which indicated successful convergence of the model. Accordingly, we proposed the novel HRSNN structure, which demonstrated exceptional performance in environments with high levels of noise and intense interference, as seen from our simulation results. The exceptional performance of the HRSNN can be attributed to its brain-inspired features that closely emulate the functioning of the human brain.
By leveraging the principles of neural information processing, the HRSNN showcases its potential to revolutionize in-memory computing and artificial intelligence. Moreover, its ability to operate reliably in noisy environments can help tackle complex and challenging tasks. This research holds significant promise for applications in neuromorphic systems based on hardware neural networks. It enhances the robustness of such networks against device nonidealities and yields impressive outcomes in neuromorphic tasks.

Figure 1 .
Figure 1. a) Membrane potential without electric-field noise and b) membrane potential with electric-field noise. Without electric-field noise, the eighth neuron fires at a very high frequency. However, the pulse frequency decreases significantly with noise; therefore, network accuracy decreases.
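The mechanism behind Figure 1 can be sketched with a minimal leaky integrate-and-fire model in which a sinusoidal term stands in for electric-field noise. All parameter values here are illustrative assumptions, not taken from the paper.

```python
import math

def lif_membrane_trace(inputs, tau=20.0, v_th=1.0, noise_amp=0.0,
                       noise_freq=0.0, dt=1.0):
    """Hedged sketch of a leaky integrate-and-fire neuron driven by an
    input current sequence, with an additive sinusoidal term standing
    in for electric-field noise. Returns (membrane trace, spike count)."""
    v, spikes, trace = 0.0, 0, []
    for step, i_in in enumerate(inputs):
        noise = noise_amp * math.sin(2 * math.pi * noise_freq * step * dt)
        v += (-v + i_in + noise) * dt / tau   # leaky integration
        if v >= v_th:                         # fire and reset
            spikes += 1
            v = 0.0
        trace.append(v)
    return trace, spikes

# Without noise, a constant suprathreshold input yields regular firing;
# the sinusoidal perturbation makes the membrane potential and thus the
# spike pattern irregular (cf. Figure 1).
clean_spikes = lif_membrane_trace([1.5] * 1000)[1]
noisy_spikes = lif_membrane_trace([1.5] * 1000, noise_amp=1.0,
                                  noise_freq=0.013)[1]
```

During the negative half-cycle of the noise, the effective input in this sketch drops below threshold and firing stalls, which illustrates how electric-field noise disrupts the regular pulse train shown in the figure.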

Figure 2 .
Figure 2. Effect of electric-field noise amplitude on different SNN structures: a) MNIST results, b) Fashion MNIST results, and c) CIFAR10 results. As the amplitude of electric-field noise increases, the accuracy of all networks decreases, with the ResSNN exhibiting the smallest decline in accuracy.

Figure 3 .
Figure 3. Effect of electric-field noise frequency on different SNN structures: a) MNIST results, b) Fashion MNIST results, and c) CIFAR10 results. As the frequency increases, the accuracy of all networks decreases, with the ResSNN exhibiting the smallest decline in accuracy.

Figure 4 .
Figure 4. Effect of conductance noise on different SNN structures: a) MNIST results, b) Fashion MNIST results, and c) CIFAR10 results. As the conductance noise increases, the performance of all SNNs decreases, with the ResSNN showing the smallest decline; therefore, it has very good robustness to conductance noise.

Figure 5 .
Figure 5. Effect of conductance drift on different SNN structures: a) MNIST results, b) Fashion MNIST results, and c) CIFAR10 results. As the drift coefficient increases, the accuracy of all networks decreases rapidly. Therefore, it is clear that conductance drift seriously impacts all SNN structures.

Figure 6 .
Figure 6. a) Architecture of DSNNs and b) crossbar and circuit structure corresponding to the network connection weight. The pulses output by the neurons at the upper level pass through the crossbar and become current signals that are input to the neurons at the next level. Because the conductance of the memristor decreases over time, the weights decay, causing the network to output incorrect results.

Figure 9 .
Figure 9. HRSNN architecture. The convolutional layer uses surrogate gradient learning, and the fully connected layer uses STDP learning. After the image is input to the convolutional layer, it reaches the fully connected layer after average pooling, and the classification result is output.

Figure 13 .
Figure 13. a) Conventional neuron output current and b) novel neuron output current. The output current of conventional neurons is unstable and slowly decays owing to the influence of conductance drift and electric-field noise. The output current of the novel neuron is very stable, with good robustness.

Figure 14 .
Figure 14. a) HRSNN accuracy under different drift coefficients and b) changes in G(t)/G2(t) over time. From (a), we can observe that the accuracy loss of the HRSNN increases with a greater difference between the drift coefficients of the two memristors. The accuracy loss tends to plateau around the 20th second, corresponding to the reduced variation of G(t)/G2(t), as shown in (b).

Figure 15 .
Figure 15. Test results of the HRSNN on CIFAR10 on a logarithmic time scale.

Figure 17 .
Figure 17. Effects of noises of different intensities on different SNNs. a) Accuracy change curves of the ten SNNs as the intensity of the three noises increases and b) the accuracy change of the SNNs after removing one type of noise. In (a), the analysis reveals a noticeable decline in the accuracy of the ResSNNs as the noise intensity increases. Conversely, the HRSNNs exhibit remarkable stability, with minimal impact on accuracy despite varying noise intensities. In (b), among the three types of noise, conductance drift exhibits the most pronounced effect on network accuracy.

Figure 18 .
Figure 18.Simulation result on the DVS128 Gesture dataset.

Table 1 .
Test results of the HRSNN on different time scales.

Table 2 .
Comparison with related works.