Low-power hybrid memristor-CMOS spiking neuromorphic STDP learning system

An electronic circuit that implements a neural network architecture with spiking neurons was proposed, studied, and evaluated, primarily considering energy consumption. CMOS transistors were used to implement the neurons, memristors were used as synapses, and the proposed network has a spike-timing-dependent plasticity (STDP) learning aspect. The circuit modules and the complete network architecture were validated using SPICE models. Since most data on commercial technologies is restricted, some universities provide predictive models to reproduce the real ones. In this paper, two types of Integrate and Fire (I&F) neuron using 32 nm CMOS technology are presented, simulated in LTspice with the BSIM4v4 model designed by the University of California, Berkeley, applying predictive parameters provided by the Predictive Technology Model (PTM). The simulation results obtained here reduce the bias voltage and the chip size compared to the most recent designs implemented. Finally, communication between neurons and synapses with STDP learning has been successfully simulated.


| INTRODUCTION
The creation of the transistor, the main reason for the 1956 Nobel Prize in Physics awarded to William Shockley, John Bardeen, and Walter Brattain, was the foundation for all existing electronic and semiconductor technology today. With the passing years, new and more powerful computational systems appear all the time, so that a cell phone from a few years ago is already considered outdated. This exponential growth was theorized by Moore [1] and confirmed over the years. However, this growth is reaching its limits: computer systems are arriving at a physical barrier, the size of the transistor. Different approaches are studied every year to increase the computational power of systems without necessarily changing the transistors. Many artificial intelligence systems have emerged over the years, demanding smaller and more energy-efficient computer systems. Many companies and universities invest heavily in machine learning systems to solve previously intractable problems. The leading problem is that the linear processing units used in computers today were not initially developed to execute activities such as pattern recognition and learning mechanisms, nor are they energetically efficient. Computational systems that use a CPU with the von Neumann architecture are not optimized for these functions. A functional processing unit capable of quickly performing these functions with high energy efficiency is the human brain [2]. So why not study and create architectures capable of mimicking the learning aspects of a brain, using its most basic elements, neurons and synapses? The human brain is a high-performance processing unit capable of storing, organizing, calculating, and executing various types of information simultaneously. It can recognize or recall data that has been 'stored' for over 70 years. Although we use high-performance computing tools to facilitate the processing of digital information, none of them is capable of performing the same functions as a human brain with such low power consumption in such a small space.
The neuron cell is responsible for numerous functions within an animal's biological system. It can fire nerve impulses, in other words, the action potential [3]. Synapse is the name given to the communication channel between two neurons; each neuron connects to a target neuron and excites or suppresses its activity, forming circuits that can process information and carry out a response to adjacent circuits. The synapses are strengthened by a biological process called spike-timing-dependent plasticity (STDP). It adjusts the connection strengths between neurons in the brain based on the relative timing of a particular neuron's output and input action potentials (or spikes). The action potential leaves the axon from the cell body and travels through the myelin sheath, arriving at the gap (synapse) [4]. At this point, the synaptic strength is adjusted, and the signal crosses it via neurotransmitters, activating or not the postsynaptic signal. The novel microelectronic systems that try to reproduce the human brain's functionality are commonly called neuromorphic systems. There are numerous forms, already developed and detailed, of different architectures capable of reproducing the functioning of neurons and synapses. In the scientific literature, it is possible to find highly complex integrated circuits that try to simulate practically all biological characteristics. In contrast, other circuits are simpler and reproduce only the most basic functions, such as the action potential. The need for computational systems with high processing capacity and ever greater energy efficiency has led studies of neuromorphic circuits to propose replacing or complementing the von Neumann architecture of conventional processors, offloading tasks that require neural networks. The development and implementation of microelectronic circuits with functions similar to biological neurons are presented in many works [5][6][7].
These works propose different approaches used in several practical applications, some more focused on the electrophysiology of neurons and others on the ability to create highly dense neural networks. Also, some projects already use memristors to implement synapses and show excellent results [7][8][9]. A large part of these novel brain-inspired architectures is implemented, simulated, and even manufactured using MOSFET transistors, because the MOSFET is the most commercially used device. Consequently, there is a vast collection of tools that work with MOSFET transistors, and it also has physical characteristics that benefit the energy consumption, speed, and reliability of integrated circuits. Another device that works very well in neuromorphic systems is the novel memristor, also known as the fourth fundamental circuit element. The scientific bibliography shows its high capacity to simulate biological synapses between neurons [10]. Its ability to 'remember', its resistive memory, makes it useful as a device to represent synaptic weights.
Considering that one of the ideas of neuromorphic systems implemented in hardware is to seek the integration density of biological systems, to perform functions similar to those of the brain, this work aims to study proposals for low-power and small-area neurons quantitatively. For this purpose, it aims to establish and evaluate an electronic circuit that implements a neural network architecture with spiking neurons using an analogue MOSFET neuron design. In turn, spiking neural networks (SNNs), besides mimicking the neural dynamics of the brain, can usually achieve lower power consumption than artificial neural network (ANN) implementations. This feature is quite impressive when it comes to hardware realization [11].
The proposed network has an STDP learning aspect, as it applies memristors to work as synapses. We evaluate two types of Integrate and Fire (I&F) neuron [5,12] using 32 nm CMOS technology, changing the technology in which they were originally proposed. Both circuits were simulated in LTspice with predictive MOS transistor models. The simulation results obtained here reduce total power consumption by more than 85% and reduce chip size compared to the most recent designs implemented, using a smaller CMOS technology. In the end, communication between neurons and synapses, with the STDP learning executed by the memristors, has been successfully simulated.

| NEURAL NETWORKS AND ARCHITECTURES
The brain inspires artificial neural networks (ANNs). They are massively parallel architectures composed of simple processors, called neurons, highly interconnected by synapses. ANNs are explicitly adapted to solving visual perception and dynamic control problems. Spiking Neural Networks (SNNs) have almost the same properties; however, the main difference between ANNs and SNNs is the time operation. SNNs work based on the timing of spike inputs, while ANNs are static. So, SNNs are closer to the brain's neural dynamics. Besides that, SNN-based systems have lower power dissipation than ANNs in the same application.
Energy-efficient hardware implementation of learning systems is a challenge when it comes to current von Neumann architecture-based computers. In the early 1990s, Carver Mead first described Neuromorphic Electronic Systems, 'biology-inspired' microelectronics, using the idea of an analogue electronic architecture, applying the very-large-scale integration (VLSI) process, that can perform activities equivalent to the biological neurons of mammals [13]. Those systems are intended to compete with von Neumann systems, improving the energy efficiency and cost of computational systems.
According to Indiveri [12], analogue VLSI technology is an appropriate process to create neurons and neuromorphic systems. There is a correlation between the biological neurons found in animals and analogue VLSI neuromorphic systems, such as conservation of charge, integration, amplification, and size reduction [14]. Collective computation in a densely parallel analogue VLSI circuit is probably one of the most advantageous approaches to process information in pattern recognition or sound recognition.
In neuromorphic systems, all the information processed is assigned and stored in each neuron and its synapses, responsible for the learning mechanism [15]. So, it is necessary to distinguish the types of neuron models. Many different approaches to neuromorphic implementation were studied in the last two decades, using various kinds of circuits, neural functionality, and styles of transferring data from neuron to neuron [16]. All of them try to approximate the biological neuron behaviour.
The need for faster, efficient, reliable, and smaller systems leads the studies to choose between digital and analogue approaches when designing hardware neurons. Both paths have pros and cons when comparing the following characteristics: ease-of-design, robustness, scaling, and storage [17].
In terms of reliability and robustness, the analogue approach demands substantial design care to minimize effects such as power supply variations, leakage, and temperature variations, unlike binary digital circuits, which have only two states. Despite being harder to design, analogue neuron circuits still require far less chip area than digital ones, and they are approximately 20 times more energy efficient, considering that digital implementations demand a high signal-to-noise ratio [17].
In addition to the discussion about analogue or digital design, one more way of classifying microelectronic neuron operation refers to how biologically realistic it is. The first class is the Integrate and Fire (I&F) model, and the other is the more complex cortical or conductance-based model [16].
The simpler I&F model has a straightforward implementation; although non-realistic compared to the biological system, it is successfully used to create dense networks and is one of the most utilized models for analysing neural network properties [18]. It defines the neuron through the membrane potential v(t), applying Kirchhoff's current law for the conservation of charge:

C_m dv(t)/dt = I_S(t) + I_inj(t) − I_leak(t)

The membrane is stimulated by postsynaptic input deriving from other neurons and their synapses, I_S(t). A current I_leak(t) models the leak conductance, making the potential decay over a constant time, and lastly, I_inj(t) represents an external current stimulation. When the membrane potential hits a fixed threshold, the spike is generated [19].
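A minimal numerical sketch of this leaky integrate-and-fire dynamic can make the behaviour concrete (forward Euler integration; all parameter values below are illustrative, not the paper's circuit parameters):

```python
# Minimal leaky integrate-and-fire simulation (forward Euler).
# Symbols follow the text: I_inj (injected current), a leak term, and a
# fixed threshold; numeric values are illustrative assumptions only.

def simulate_lif(i_inj, c_m=1e-12, g_leak=5e-9, v_thr=0.18, v_reset=0.0,
                 dt=1e-6, steps=20000):
    """Integrate C_m dv/dt = I_inj - g_leak*v; spike when v >= v_thr."""
    v, spikes = v_reset, []
    for k in range(steps):
        v += dt * (i_inj - g_leak * v) / c_m
        if v >= v_thr:            # threshold crossed: fire and reset
            spikes.append(k * dt)
            v = v_reset
    return spikes

# A larger injected current produces more spikes in the same window
# (the encoder behaviour discussed in the results section).
low = len(simulate_lif(5e-9))
high = len(simulate_lif(20e-9))
assert high > low
```

The same loop structure, with the reset replaced by a feedback capacitor term, underlies the Axon-Hillock circuits discussed later.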
Alternatively, the conductance-based neurons are more realistic because they have properties remarkably similar to those cortical neurons, by having protein molecule ion channels represented through conductance and their lipid bilayer by a capacitor, in other words, an equivalent circuit representation of a cell membrane [20].
First proposed by Hodgkin and Huxley, the Hodgkin-Huxley model [21] represents a simple biophysical picture of an excitable cell, in which current flows through the membrane due to the charging of the membrane capacitance, I_C, and the movement of ions across ion channels. Two ion channels, sodium and potassium, are characterized by I_Na and I_K, respectively. Thus, the membrane current I_m is given by

I_m = I_C + I_Na + I_K = C_m dV/dt + I_Na + I_K

| Analogue spike neuron
As part of neuromorphic systems, the analogue spiking neuron represents a fundamental piece. Several methods to reproduce a spike (the main activity for communication between cells, also known as a nerve impulse) of a biological neuron using VLSI have been described over the years. The action potential (spike) is the communication of neuronal cells, quickly carrying information between and within tissues, mainly inside the brain. When the excitation of the neuron membrane reaches a specific threshold voltage, firing occurs. The typical shape of an action potential is presented in Figure 1.
It is vital to take into account, as stated above, that an analogue spiking neuron applying the I&F model omits most details of the electrophysiology of real neurons. Instead, its simplicity is a significant advantage, adequately describing the network mechanisms [22]. That simplicity made it the focus of theoretical and computational studies over decades, and it is still extensively used and improved in large, dense neural networks.
An example related to this kind of hardware neuron is the Axon-Hillock (A-H), first proposed by Indiveri [12], presented in Figure 2, a simple implementation that can be useful to create dense networks and study neural network properties [18]. It usually has two additional inverters acting as a buffer for the output signal.
The central part of a hardware neural network is the neuron itself, responsible for adding multiple synaptic signals and, once a triggering threshold is exceeded, executing the spike or action potential. Thus, compact leaky integrate-and-fire neuron circuits have reasonable accuracy relative to the biological neuron and are a reliable abstraction of it, useful for building dense networks.
The circuit works as follows. The DC injection current I_nj charges the membrane capacitor C_m, causing V_mem to increase linearly with time. The differential pair works as a comparator, comparing V_mem with a fixed threshold value V_thr. While V_mem < V_thr, the output maintains a LOW state. Once V_mem exceeds V_thr, the output goes HIGH and, through the two inverters working as buffers, V_out also goes HIGH. At this moment, the positive feedback created by the capacitive divider formed by C_m and the feedback capacitor C_fb is activated, making V_mem jump by ΔV_mem = V_dd × C_fb/(C_m + C_fb), and V_out turns on the reset transistor M10. While V_out stays HIGH, C_m discharges at a rate determined by V_pw applied to the M11 gate: a reset current I_r flows towards the M10 drain terminal and, if it is higher than I_nj, the membrane capacitor discharges, taking V_mem down until it becomes less than V_thr, so V_out returns to LOW. At this time, the neuron enters the refractory period, during which no spike is possible. With V_out at a LOW state, the output of the first inverter is HIGH, driving M9 to turn on and C_r to discharge through M12 at a rate set by V_rfr.
V_pw is responsible for the pulse width of the output spike, and the injection current controls both t_low (inter-spike interval) and t_high (pulse duration), which also depends on I_r:

t_low = (C_m + C_fb) ΔV_mem / I_nj,  t_high = (C_m + C_fb) ΔV_mem / (I_r − I_nj)
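Under the standard first-order analysis of the Axon-Hillock circuit (constant charge and discharge currents into a total capacitance C_m + C_fb, an assumption here), the two intervals can be computed directly; the capacitance values below are illustrative, and only I_nj is taken from the text:

```python
# Inter-spike interval (t_low) and pulse width (t_high) for the
# Axon-Hillock neuron, first-order analysis with constant currents.

def ah_timing(i_nj, i_r, c_m, c_fb, dv_mem):
    c_tot = c_m + c_fb
    t_low = c_tot * dv_mem / i_nj             # charging: V_mem rises by dV_mem
    t_high = c_tot * dv_mem / (i_r - i_nj)    # discharge: net current I_r - I_nj
    return t_low, t_high

# Illustrative values (capacitances and I_r are assumptions):
t_low, t_high = ah_timing(i_nj=20e-9, i_r=100e-9,
                          c_m=100e-15, c_fb=25e-15, dv_mem=0.18)
print(f"t_low = {t_low * 1e6:.3f} us, t_high = {t_high * 1e6:.3f} us")
```

Note how I_r appears only in t_high, which is why it primarily sets how long the output spike stays high.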

| Low-power Axon-Hillock
Analysing the low-power A-H implementation proposed in Ref. [5], it has fewer transistors, composed of just two inverters (M1-M4) with the same feedback capacitor C_fb as the regular A-H. Some modifications were applied to the original circuit to reduce power consumption drastically and achieve higher energy efficiency per spike. As presented in Figure 4, for this design C_m, the reset circuit, and the threshold differential pair were fully removed, each part having its substitute. The replacement for the membrane capacitance is the parasitic capacitance of the first inverter. The reset circuit that controlled I_r through V_pw was replaced by a single transistor M5, and the W/L ratios of M3 and M5 are now responsible for increasing or decreasing I_r. Finally, the differential pair that evaluated whether the spike should occur is now replaced by the first inverter's switching voltage.
The behaviour is very similar to the regular neuron: the injection current I_nj increases the charge and the V_mem value. Once V_mem is sufficiently high for the switching voltage of the first inverter (V_thr), V_out goes high through the inverters. C_fb works again as a feedback capacitor, increasing V_mem and turning ON M5. At this point, I_r is controlled by the pull-up (M3) and pull-down (M5) transistors, causing V_mem to decrease over time, and the cycle begins again.

| Memristor synapse
The generic memristor SPICE model can reproduce several kinds of memristors [23]. Through its parameterization, different operating characteristics can be reached for different applications.
All the fitting parameters, and their initial values, for this particular model, are presented in Table 1. Using these default variables, we are able to plot the famous memristor pinched hysteresis loop represented in Figure 5. These variables are responsible for the three main characteristics: electron tunnelling, non-linear drift, and a voltage threshold [23].
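The behaviour described above can be sketched in software. The following is a simplified, behavioural rendering of a Yakopcic-style generic memristor (a threshold-gated state variable plus a hyperbolic-sine I-V term); all parameter values are illustrative assumptions, not the fitted values of Table 1:

```python
import math

# Behavioural sketch of the Yakopcic-style generic memristor model [23]:
# a state variable x in [0, 1] sets the conductivity, and x only moves
# when the applied voltage exceeds a threshold (V_p positive, V_n
# negative). Simplified for illustration; not the full SPICE model.

def g(v, v_p=0.16, v_n=0.15, a_p=4000.0, a_n=4000.0):
    """Threshold-gated drive on the state variable."""
    if v > v_p:
        return a_p * (math.exp(v) - math.exp(v_p))
    if v < -v_n:
        return -a_n * (math.exp(-v) - math.exp(v_n))
    return 0.0

def current(v, x, a1=0.17, a2=0.17, b=0.05):
    """Hyperbolic-sine I-V characteristic, scaled by the state x."""
    return (a1 if v >= 0 else a2) * x * math.sinh(b * v)

def step(x, v, dt=1e-6):
    x += g(v) * dt            # window function omitted for brevity
    return min(max(x, 0.0), 1.0)

x = 0.5
for _ in range(1000):         # sub-threshold voltage: state must not move
    x = step(x, 0.1)
assert x == 0.5
for _ in range(1000):         # above-threshold voltage: state increases
    x = step(x, 0.3)
assert x > 0.5
```

The voltage threshold is the property exploited below: spikes smaller than V_p or V_n read the device without disturbing its state.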
The main reason for using memristors in this work is their behaviour, similar to neurological synapses, and their ability to perform the STDP learning mechanism. It is necessary to configure the presented model as a device that acts the same way: a minimal threshold voltage from the spike activates a change through multiple resistance states. Pre- and post-synaptic spikes work together to modify the synaptic weight, in this case the state variable (a normalized value for conductivity, between 0 and 1).

| Spike neural network
One of the main characteristics of a neural network is how synaptic weights are updated. Each pre- and post-synaptic spike originating from a biological neuron starts the process of adjusting the weights based on the time difference between the pulses. The learning process previously explained, STDP, requires at least one neuron-synapse-neuron triad, since the synaptic weight depends on the inputs and outputs. In the current work, the triad is composed of an I&F neuron-memristor-I&F neuron, as shown in Figure 6. Figure 6 shows a pre-synaptic green pulse and a post-synaptic blue pulse at different times, giving rise to an STDP curve as a function of Δt = t_post − t_pre. The shape of the action potential profoundly influences the resulting STDP learning function; it must have a narrow positive spike with a broad voltage peak and a softened negative tail.
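As a stand-in for the memristive curve, the weight update as a function of Δt = t_post − t_pre can be sketched with the textbook exponential STDP window; the amplitudes and time constant below are illustrative assumptions, not values fitted to the memristor:

```python
import math

# Textbook exponential STDP window: the weight change depends on
# dt = t_post - t_pre. Parameter values are illustrative assumptions.

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20e-3):
    if dt > 0:      # pre before post: potentiation
        return a_plus * math.exp(-dt / tau)
    if dt < 0:      # post before pre: depression
        return -a_minus * math.exp(dt / tau)
    return 0.0

w = 0.5
w += stdp_dw(+5e-3)     # causal pairing strengthens the synapse
w += stdp_dw(-5e-3)     # anti-causal pairing weakens it
assert 0.0 < w < 1.0
```

In the memristive implementation, the same qualitative window emerges from the overlap of the pre- and post-synaptic pulse shapes across the device threshold.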
After configuring the circuit block proposed in this work, the system can be scaled up to a crossbar array, the fully comprehensive spiking neural network. Being scalable, large networks can be achieved with that set-up. Figure 7 represents a 4 × 4 crossbar array composed of the primary cell described above.

| METHODOLOGY
All the steps and choices made in the development of this work, from model selection to the simulation of the spiking neural network, are described below.

| MOSFET
The choice of transistors used in this work was based on proximity to commercial ones, availability, and adaptability to the LTspice platform, bringing the whole circuit closer to reality. As these industrial models are hidden by companies, it was necessary to select mathematical, predictive models that reproduce a transistor's physical characteristics. Composed of more than 220 variables, the BSIM4 model [24] and the ASU Predictive Technology Model (PTM) are suggested by the Compact Model Coalition (CMC), a group responsible for standardizing the design of semiconductor device models. There are also more contemporary models for multi-gate devices, such as the FinFET, available on the BSIM Group site as BSIM-CMG. The problem is that this model is no longer implemented in SPICE but in Verilog-A, and LTspice XVII is not prepared to compile such a file. The parameter characterization file provided by ASU PTM was chosen based on analyses developed in previous works [26,27]. So, the ASU PTM HP HKMG 32 nm model made available by Arizona State University [28] was used in this work.

FIGURE 6: Pre- and post-synaptic pulses between spike neurons with a memristor synapse
MARANHÃO AND GUIMARÃES

| Memristor
As an extremely new circuit element, commercial manufacturing models do not yet exist. Some prototypes have already been created to physically represent the functioning of a memristor. Those physical models are usually adapted for only a single function, such as memory, ROIC circuits, or synapses. Although they are functional and reliable, translating these devices into SPICE simulations is complex. The memristor model proposed by Yakopcic et al. [23] was chosen as the synaptic circuit, as an alternative to those physical models. This design translates the equations of different configurations from the scientific literature and groups them all in the so-called Generic Model, capable of representing different types of memristors with equivalence, all developed for the LTspice platform. With a simple adjustment of parameters, a generic memristor functioning as a synapse capable of STDP learning can be implemented.

Over the years, some researchers have created real memristors with the ability previously mentioned [10], ideal for sweep behaviour and power-efficient when driving the STDP learning operation. Using these real memristors as exemplars and fitting the SPICE model presented in this section to the same characteristics, the ideal hysteresis can be achieved.

| Circuits
Two types of I&F neurons were chosen to be implemented due to their simple operation and easy implementation, thus enabling them to function together with synapses and higher neuron density. The regular A-H circuit on Figure 3 and the low-power A-H circuit in Figure 4 were used to develop the neural network.

| Simulations
The most common and first approach used by companies and universities to simulate the behaviour of integrated circuits is SPICE, capable of simulating the circuit and all the characteristics of a transistor before committing to the manufacture of an integrated circuit. The simulations performed in this work use the LTspice XVII software, and the collected data were manipulated with MATLAB. A hierarchical methodology was used to perform the simulations, starting with the minor blocks and ending with the major project.

FIGURE 8: Simulations flow chart
A brief diagram in Figure 8 shows how the simulation hierarchy went step by step, starting with the most basic simulations of the NMOS and PMOS transistors and ending with the neuron-synapse-neuron triad. First, a MOSFET evaluation ensures the expected operation: high gain in the weak inversion regime with a low supply voltage. Second, the neuron circuits are tested with the selected transistor model, observing parameters for better operation and making adjustments. After that, neuron simulations were paused to find and test the best way to recreate the synapses. The first tests with the memristor guaranteed its functioning on the LTspice platform, followed by its synaptic configuration, sweeping several parameters to find the ideal curve. Finally, the two main blocks were placed to work together, with final adjustments to ensure the best STDP curve for updating the synaptic weights.

| RESULTS AND ANALYSIS
This work used 32 nm MOSFETs to design the neurons. Different gate widths mainly imply altered gain and power consumption, both directly proportional to the gate width. Smaller gate widths presuppose a smaller area and power consumption, although the gain drops too. Another characteristic influenced by the gate width is the neuron's spike frequency, which will be discussed later [29]. The first neuron, a regular A-H, was built using the circuits mentioned previously, with a power supply voltage of 0.9 V. Besides, the capacitive divider was calculated to have a value of 0.2, that is, ΔV_mem = 0.2 × V_dd, so ΔV_mem = 0.18 V. This variation is how much V_mem increases and decreases when V_out goes from ground to 0.9 V, the exact spike pulse. The remaining parameters were I_nj = 20 nA, V_pw = 0.22 V, and V_rfr = 0.53 V.
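The divider calculation can be checked numerically; the standard coupling relation ΔV_mem = V_dd × C_fb/(C_m + C_fb) is assumed here, and the absolute membrane capacitance is an illustrative assumption (the paper only fixes the 0.2 ratio):

```python
# Capacitive divider ratio from the text: dV_mem = 0.2 * V_dd.
# Solving C_fb / (C_m + C_fb) = 0.2 gives C_fb = C_m / 4.

v_dd = 0.9
ratio = 0.2
c_m = 100e-15                        # assumed membrane capacitance
c_fb = ratio * c_m / (1 - ratio)     # C_m / 4 for the assumed C_m
dv_mem = v_dd * c_fb / (c_m + c_fb)
assert abs(dv_mem - 0.18) < 1e-12    # matches the 0.18 V in the text
```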
Two other important components to show are V_out (the spike train) and I_r. Equation (5) showed that I_r has primary importance in determining t_high, how long the output spike stays at 0.9 V. Figure 9 presents a complete graph with the three main characteristics together. As expected, when the membrane voltage reaches the predetermined value, the spike happens, represented by the red output signal. At this point, I_r increases, discharging C_m and starting the system's refractory period and a new cycle.
In this circuit, the spike frequency is influenced directly by the refractory period: the shorter its value, the higher the pulse frequency. I_nj and V_rfr mainly determine this factor. When evaluating only I_nj, the I&F neuron can also behave like an encoder, encoding the input current into a spike train (Figure 10); the number of spikes is proportional to the input current. Finally, using default parameters such as V_DD = 0.9 V, I_nj = 10 nA, V_rfr = 0.53 V, V_pw = 0.20 V, and W_NMOS = 52 nm, the power consumption of one neuron turns out to be 4.75 nW, and it is directly proportional to the spike frequency, since more spikes dissipate more energy. Thus, all significant parameters that influence pulse frequency directly impact power consumption. More important to analyse than energy dissipation is energy efficiency (joules/spike); for the regular neuron, 48.5 fJ/spike was achieved, directly dependent on the variables that considerably change the t_high and t_low characteristics. At the end of the next section, a table presents the power dissipation and energy efficiency values of this work and others.

FIGURE 13: I-V characteristics of the memristor with synaptic behaviour using triangular pulses after adjusting voltage parameters: (a) voltage sweeps of the adjusted synaptic memristor and (b) applied pulse train to the fitted synaptic memristor

| Analogue spike neuron 2
A different approach was used to extract the simulation results for this neuron, taking into account that there are fewer parameters to configure. In keeping with the low power consumption, a 0.15 V power supply and a current source of I_nj = 35 pA were selected. With C_fb = 5 fF, the capacitor divider works the same way as in the first neuron; the difference is that the parasitic capacitance of the first inverter now has a significant influence. Figure 11 presents the spike pulse for the low-power spike neuron with V_mem and V_out. Compared with Figure 9, the results shown here differ in the shape of the curves and the peak voltage values. There is no longer a square-wave output signal, since the differential pair was removed, resulting in a spike much closer to the ideal action-potential waveform of Figure 1, with a voltage peak near 70 mV.
Like the previous neuron, the injection current I_nj remains one of the main factors responsible for adjusting the pulse period and frequency. However, as mentioned earlier, the pulse width is no longer adjusted by V_pw; with the removal of the second transistor next to M5, the W/L ratios of M5 and M3 are now responsible for controlling I_r and consequently the width of the spike, in other words, the period in which the pulse remains HIGH.
The low-power neuron can also behave like an encoder, as shown in Figure 10 in the last section. Applying a sine wave as the injection current, the spikes fire proportionally to the current, as shown in Figure 12. I_nj alters the spike pulse time and the frequency at which it occurs, just as in the regular neuron.
Lastly, to extract the power dissipation and energy efficiency, a standard model was used without significant changes to the variables: V_DD = 150 mV, I_nj = 38 pA, and W_NMOS/PMOS = 52 nm. The power consumption of a single neuron was 23.73 pW, and the energy efficiency of a single spike was 0.232 fJ/spike. As with the first neuron presented, the parameters that modify pulse frequency and pulse width directly influence these values.
Concluding the simulation results for the neurons, Table 2 presents the results of this work and the main differences from others when evaluating power consumption, energy per spike, and firing frequency. Energy per spike should be used as a figure of merit, as it provides a fair comparison of power consumption with respect to the processing capacity of the neurons. The two proposed circuits achieved good results among these projects.
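Since power, energy per spike, and mean firing rate are linked by P = E × f, the reported figures imply a mean spike rate for each neuron. The quick check below is a sanity-check sketch using the values from the text, not a reported measurement:

```python
# Energy per spike as a figure of merit: P = E * f, so f = P / E.

def implied_rate(power_w, energy_per_spike_j):
    """Mean firing rate implied by a power and energy-per-spike pair."""
    return power_w / energy_per_spike_j

f_regular = implied_rate(4.75e-9, 48.5e-15)      # regular A-H neuron
f_lowpower = implied_rate(23.73e-12, 0.232e-15)  # low-power neuron
print(f"regular: {f_regular / 1e3:.0f} kHz, "
      f"low-power: {f_lowpower / 1e3:.0f} kHz")
```

Both neurons come out near 100 kHz under their reported operating conditions, which is why energy per spike, rather than raw power, gives the fairer comparison.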
Of the eight referenced models, two deserve more attention: Indiveri [30] and Danneville [5]. As already said, the two neuron circuits used here are almost the same as those proposed in these works, with considerable changes and adjustments to the MOSFET model and transistor W/L ratios. The table shows that both circuits presented here obtained better energy efficiency per spike: the regular I&F neuron dropped from 900 pJ/spike to 0.0485 pJ/spike and the low-power I&F from 2 fJ/spike to 0.232 fJ/spike. An estimate of the areas occupied by analogue neurons 1 and 2, obtained by scaling down previous implementations [5], is also included in the table.

| Memristor synapse
Given the generic memristor model presented, Yakopcic also demonstrates a synaptic representation of his memristor by modelling the device reported in [10]. Through these data, it was possible to extract the parameters needed to compose the final memristor used here: a model operating at low voltages and a higher frequency than the one he presented, while reproducing the characteristic I-V curve of the synaptic memristor.
To configure each of the fitting parameters presented, Yakopcic prepared a detailed study [35] showing how to design a wide range of memristors by changing each variable, creating a device that operates as desired. With this, it was possible to verify which parameters required adjustment. First, the threshold voltages V_n and V_p, setting the points where the conductivity change begins on both the positive and negative slopes. Then A_n and A_p, which represent the variation of conductivity over time, matching the neurons' operating frequencies.
In order to adjust the model to perform using triangular peaks of 60 mV (an approximation of the neurons' action potential), as indicated in Figure 13b, the threshold voltages are set to V_p = 55 mV and V_n = 50 mV. The variables responsible for fitting how quickly the state variable changes once the threshold is surpassed are A_p and A_n; changing them to the 10^9 scale adjusts the time response to microseconds, resulting in Figure 13a. Table 3 presents a list of all fitting parameters for the memristor synapse model proposed by Yakopcic and the values adjusted for this work.

| STDP learning
After presenting the results for each circuit block of this project, it is time to show these pieces working together.
From the neuron-memristor-neuron configuration in Figure 6, the STDP curve, responsible for adjusting the state variable or synaptic weights at the memristor between the implemented neurons, is revealed in Figure 14 for each neuron circuit.
To show how these two curves are capable of modifying the synaptic weights, Figure 15 shows, for both neurons, the gradual alteration of the memristor's state variable, which is precisely the change in the synaptic weights.
The increase and decrease of the synaptic weight depend on how the STDP acts. It is also possible to notice in Figure 15b that the positive peak decreases over time until it becomes less than the threshold voltage of the memristor; thus, the positive changes in the state variable stop occurring. Note also that, in both cases, the state variable keeps its value constant if nothing is applied. Finally, a system was created to show how pre-synaptic pulses from two different neurons at different times change the synaptic weights of their respective synapses using the same postsynaptic signal from a third neuron. In other words, Figure 16 presents the beginning of a spiking neural network and shows how the presented circuits work together, considering the blue neuron a regular spike generator and the red one a random spike generator.
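The saturation seen in Figure 15b can be illustrated with a toy model in which the effective voltage across the memristor shrinks as the weight grows, so that potentiation eventually falls below the device threshold and stops. The attenuation factor and step size are assumptions for illustration only:

```python
# Toy reproduction of the saturation effect described for Figure 15b:
# as the synaptic weight grows, the voltage available across the
# memristor shrinks; once it falls below the device threshold, the
# positive updates stop. All numbers are illustrative.

V_PEAK = 60e-3      # pre/post pulse peak (from the text)
V_TH = 55e-3        # memristor positive threshold V_p (from the text)
GAIN = 0.1          # assumed attenuation of the pulse as weight grows

w = 0.5
history = [w]
for _ in range(50):
    v_eff = V_PEAK * (1 - GAIN * w)   # effective voltage on the device
    if v_eff > V_TH:
        w = min(w + 0.05, 1.0)        # potentiation while above threshold
    history.append(w)

assert history[1] > history[0]        # early updates did occur
assert history[-1] == history[-2]     # but eventually they stop
```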
In Figure 17, it is possible to observe the second neuron being responsible for updating the synaptic weights not only of the second memristor but also taking part in the update of the first memristor; this is because both pulses influence the postsynaptic signal. This phenomenon will occur at all times, even in more extensive neural networks. So the synapses of the existing neural network are updated; in other words, they are learning patterns.

FIGURE 15: Synaptic weights updated by spike-timing-dependent plasticity: (a) regular neuron and (b) low-power neuron
FIGURE 16: The simple spike neuron network
FIGURE 17: Neurons operation and spike-timing-dependent plasticity learning with two input neurons and one output neuron. Each colour corresponds to a specific neuron in Figure 16

| CONCLUSION
This work proposed and validated the basis of a spiking neural network architecture using two different types of analogue CMOS neurons and configuring a memristor to function as a synapse, enabling the STDP learning process, with the aim of achieving a low-power neuromorphic configuration. Through a bibliographic review covering other studies, and evaluating methods for simulation and data acquisition, two existing neuron designs were implemented and simulated using a different CMOS process technology from previous works [5,6,30]. We were able to reproduce the same results even with a reduced power supply (lower power consumption) and a transistor length almost six times shorter, leading to a smaller effective area. In the second part, a generic memristor model was configured to work together with these neurons, creating a neuromorphic system able to update synaptic weights, in other words, to perform online training of the neural network.
Studies on the performance of networks are also foreseen, going beyond 32 nm technology, to assess how power dissipation would change with further scaling of the technological process. In the future, the SNN proposed in this work will be enlarged to perform a specific application, an image recognition task, by simulation. A training methodology will also be proposed. If necessary, the network circuit will be adjusted to fit the application. Depending on the performance results, the physical design and fabrication are planned.