High‐speed adaptive analog filter based on fully analog artificial neural network

This paper presents an innovative concept of a high‐speed, higher order adaptive filter. A fully analog artificial neural network handles the adaptation, using a filter bank for filtering and learning through analog feedback inspired by backpropagation. No clock control is used in this concept, which makes real‐time adaptation possible even at high frequencies. Its functionality is demonstrated by several electrical circuit simulations that include component inaccuracies, as well as by a commonly used filter example.


| INTRODUCTION
Filters are an essential building block for signal processing in almost every modern electronic system. When used in environments where conditions change over time or are simply poorly controlled, filter parameters must be adjustable to ensure proper system functionality [5, 6]. Adaptive filters are used at lower frequencies in acoustics, including echo cancellation, noise control, array processing, and acoustic communication filtering [7]. At higher frequencies, they are used, for example, in antenna arrays to improve the reception quality of multiple signals in radio engineering systems [8]. Digital implementations [10, 11] are often preferred due to their simplicity and efficiency; however, they are primarily suited for lower frequency applications. When it comes to adapting both digital and analog filter types, algorithms such as least mean squares (LMS) or heuristic methods are predominantly used [1]. These adaptive algorithms inherently operate in digital form, which limits the computational power available for digital processing at higher frequencies.
Specific implementations have been developed to address these challenges using field-programmable gate array (FPGA) technology [10, 11]. Switched filters, such as switched capacitors, or filter-based adaptive fuzzy finite-time control for switched nonlinear systems, are examples of technologies that filter analog signals. Despite their analog nature, the adaptation mechanism in these filters is often executed by algorithms, or their switching speed ultimately imposes limitations for high-frequency applications [12, 13]. All the adaptive filters mentioned above face limitations stemming from factors such as sampling, analog-to-digital converters (ADCs), and the clock speed of computing machines. Furthermore, filters capable of operating at high frequencies do not offer real-time adaptation, further limiting their applicability in dynamic situations. These issues can be particularly problematic in systems with restricted computational resources for digital signal processing, such as satellite systems, making them less adaptable to dynamic operational requirements [10, 14]. Due to the limitations of existing adaptive filters, this paper introduces an innovative concept of a fully analog adaptive filter based on a fully analog artificial neural network (FAANN) [15]. The innovative aspect of this approach lies in its analog nature: it does not use any clock control, which enables both high-frequency operation and real-time adaptation, thus overcoming the limitations of traditional adaptive filter technologies. In addition, the adaptive mechanism of this new concept allows for the adaptation of a wide range of filter types, including high-order filters, offering unprecedented versatility in the field of analog adaptive filtering.
The remainder of this paper is organized as follows: Section 2 discusses the implementation of fully analog artificial neural networks and the effect of their primary parameters on adaptive filtering. Section 3 covers the types and properties of filters in a filter bank with respect to filter adaptation and their limits. Section 4 delves into the adaptation process itself and provides an overview of the adaptation possibilities of one configuration of the proposed filter. Finally, Section 5 concludes the paper with a summary of the findings and potential future work.

| FULLY ANALOG ARTIFICIAL NEURAL NETWORK
A fully analog artificial neural network is designed as near-sensor hardware to process signals [15, 17-19]. The learning process here is fully parallelized by using circuit feedback [17, 20-22]. The whole FAANN consists of two parts: forward and backward propagation. The hardware implementation of forward propagation in neural networks has been addressed many times [16, 20, 21, 23, 24]. From the perspective of an individual neuron, the behavior of this process is defined as

V_out = S( Σ_{i=0}^{n} V_w,i · V_in,i ),  (1)

where V_out is the output signal, S(·) is the activation function, V_in,i is one of the input signals, and V_w,i is the corresponding weight; for i = 0, V_in,0 is the bias input. However, the FAANN brings an innovative implementation of the learning process, inspired by classical backpropagation with gradient descent [15, 20, 25]. Here, the neuron's weight is changed by charging a capacitor with a current, and the resulting voltage on the capacitor is the weight itself, as shown in Figure 1. The whole learning process can be fitted into the familiar formula for charging a capacitor with a current, which eventually comes out as

C · dV_w(t)/dt = −K_η · ∂E(t)/∂V_w(t),  (2)

where V_w(t) is the weight as a function of continuous time t ∈ ℝ, C is the weight-storage capacitance, K_η [S] is the conductance coefficient, and E(t) [V²] is the error of the neural network, in this case calculated as the MSE [26]. Figure 2 shows a detailed block diagram of the single neuron. There are two inputs relevant to this paper. The first is the input V_η, which is an analog alternative to the learning rate of digital networks. It directly and proportionally affects the coefficient K_η from formula (2). Thus, if V_η = 0 V, the network does not learn. Conversely, if V_η > 0 V, the weights change according to the magnitude of the error and in proportion to the magnitude of this voltage. The second important input is V_E, which defines the error of the neuron. If it is an output neuron, a voltage is applied to V_E according to the derivative of the MSE, namely

V_E = V_out − V_target,  (3)

where V_target is the desired voltage at the output of the neuron V_out itself [26].
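The capacitor-based weight update of formula (2) can be illustrated with a minimal numerical sketch. The component values below (C, K_eta, the time step, and the neuron's input and target voltages) are illustrative assumptions, not values from the paper; the loop Euler-integrates the capacitor-charging rule for a single linear neuron with one input:

```python
# Illustrative (hypothetical) component values -- not taken from the paper.
C = 1e-9        # weight-storage capacitance [F]
K_eta = 1e-4    # conductance coefficient K_eta [S], set via V_eta
dt = 1e-8       # Euler integration step [s]

v_in = 0.5      # constant input voltage [V]
v_target = 0.8  # desired neuron output [V]
v_w = 0.0       # weight voltage stored on the capacitor [V]

# Euler-integrate C * dVw/dt = -K_eta * dE/dVw for a linear neuron,
# where Vout = Vw * Vin and E = (Vout - Vtarget)^2 (squared error).
for _ in range(100_000):
    v_out = v_w * v_in
    dE_dVw = 2.0 * (v_out - v_target) * v_in  # error gradient w.r.t. the weight
    v_w += dt * (-K_eta / C) * dE_dVw         # charging current into C

print(round(v_w * v_in, 3))  # 0.8 -- the output has converged to the target
```

Setting K_eta to zero in this sketch freezes the weight, mirroring the role of V_η = 0 V in the circuit.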
A frequency filter is a linear system, whereas a neural network is a highly nonlinear system and therefore exhibits considerable potential for distortion. For this reason, total harmonic distortion (THD) must be evaluated and minimized. The degree of distortion depends on factors such as the activation functions employed in the neural network and the level of adaptation that can be achieved to minimize the distortion.
When utilizing a neural network for adaptive filtering, it is crucial to select appropriate parameters, one of which is the choice of the activation function. Thus far, the FAANN has been implemented with three activation functions: "linear," "tanh," and "sigmoid." Consequently, three distinct neural networks have been created, each featuring a different activation function in its hidden layers. The adaptation process is simulated for each network in accordance with Section 4.
FIGURE 2 The block implementation of a fully analog neuron with learning and two inputs.
The THD is calculated at each measured frequency and depicted in the graph shown in Figure 3. Five adaptations were performed for each neural network, starting from different random weights. The graph reveals that, in terms of signal distortion, the linear activation function is the most suitable for filtering applications.
Subsequent simulations demonstrated that, with the linear activation function, the THD rapidly falls below 5% for the pass-band frequencies of the tested filters and continues to decrease as the learning process progresses.
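A common way to estimate THD from a simulated time-domain waveform is to compare the RMS of the harmonics with the fundamental in the FFT spectrum. The sketch below is a generic illustration, not the authors' procedure (which the paper does not detail); the 10 MHz test tone with a 3% second harmonic is a made-up input:

```python
import numpy as np

def thd_percent(x, fs, f0, n_harmonics=5):
    """THD as the ratio of harmonic RMS to fundamental amplitude, in percent."""
    spectrum = np.abs(np.fft.rfft(x)) / len(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    def amp(f):  # amplitude of the bin nearest frequency f
        return spectrum[np.argmin(np.abs(freqs - f))]
    fund = amp(f0)
    harm = np.sqrt(sum(amp(k * f0) ** 2 for k in range(2, 2 + n_harmonics)))
    return 100.0 * harm / fund

fs, f0 = 1e9, 10e6                 # 1 GSa/s, 10 MHz test tone (exact FFT bins)
t = np.arange(1000) / fs
clean = np.sin(2 * np.pi * f0 * t)
distorted = clean + 0.03 * np.sin(2 * np.pi * 2 * f0 * t)  # 3% second harmonic
print(round(thd_percent(distorted, fs, f0), 1))  # 3.0
```

The record length is chosen so that the tone and its harmonics fall exactly on FFT bins, avoiding leakage; with arbitrary frequencies, windowing would be needed.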
Another important parameter of a neural network is its structure. A neural network with only one output neuron was created first; its results can be seen in Figure 4. Subsequently, structures with one and two hidden layers were created, where each layer contained 3, 6, or 12 neurons. The results of all these simulations are shown in Figure 5. Again, five adaptation processes were performed for each structure.
In order to compare changes in the adaptive filter attributes, a reference fourth-order Chebyshev filter with a cutoff frequency of 10 MHz is used throughout the paper unless otherwise noted.
From the frequency characteristics in Figure 4, it can be seen that even one neuron handles the adaptation reasonably well. A better choice of the other neural network parameters or longer learning could remove the visible minor attenuation in the pass-band part of the filter. The graph in Figure 5 shows that smaller neural network structures perform better with the current adaptive filter training parameters. One of these parameters is the learning duration, set to 100 μs for all simulations in this paper to allow comparison of the other parameters. However, tests not included in this paper show that larger neural network structures have a greater potential to adapt to the desired reference filter but require a longer learning time.
For the purposes of this paper, a structure with one hidden layer of six neurons with linear activation functions was chosen; it is used in all subsequent simulations.
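As a rough software analogy (not the authors' circuit), the chosen structure evaluates formula (1) once per neuron; with linear activations the whole network reduces to an affine map of the filter-bank outputs. The sketch below assumes the 24-filter bank used later in the paper and hypothetical random initial weights:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bank, n_hidden = 24, 6  # filter-bank outputs and hidden neurons

# Hypothetical random initial weights; column 0 is the bias weight
# (the bias input V_in,0 is fixed at 1 V).
W_hidden = rng.normal(0.0, 0.1, (n_hidden, n_bank + 1))
W_out = rng.normal(0.0, 0.1, (1, n_hidden + 1))

def forward(v_bank):
    """One evaluation of the 24-6-1 structure with linear activations S(x) = x."""
    h = W_hidden @ np.concatenate(([1.0], v_bank))  # hidden layer
    return (W_out @ np.concatenate(([1.0], h)))[0]  # single output neuron

# With linear activations the network is affine: doubling the input deviation
# doubles the output deviation from the zero-input (bias-only) response.
y0 = forward(np.zeros(n_bank))
y1 = forward(np.ones(n_bank))
y2 = forward(2.0 * np.ones(n_bank))
print(abs((y2 - y0) - 2.0 * (y1 - y0)) < 1e-9)  # True
```

This affine property is what keeps the output distortion low with the linear activation: the network can only scale and sum the (linear) filter-bank responses.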

| FILTER BANK
In discrete-time filters, the fundamental element is the delay [1]. Neural networks that process sequences or signals, such as recurrent neural networks (RNNs) or long short-term memory (LSTM) networks, apply this same element. However, the components that function as signal delay elements in analog circuits are unsuitable for these purposes. This problem can be solved in several ways, such as switched capacitors or operational transconductance amplifier (OTA) tunable filters. In this article, a solution based on a filter bank was chosen [6, 14].

FIGURE 5 Frequency characteristics of adapted filters with 1 or 2 layers and 3, 6, or 12 neurons in each layer.
The structure of a simple, generic, fully analog adaptive filter for frequencies from 1 to 100 MHz is proposed in this paper.
In this concept, an input signal is fed into the filter bank. The outputs of the individual filters in the bank are then fed to the neural network as inputs. Figure 6 shows an example of such a circuit. For other practical purposes, there may be multiple input and output signals.
Similar to a neural network, a filter bank has a large number of parameters that require selection, and some of these parameters come with inherent limitations. One such parameter is the type of filters included in the bank. A neural network equipped with a first-order filter bank can adapt from low-pass to high-pass filters and vice versa [26]. Therefore, the focus is primarily on low-pass filters, which should cover all possible filter structures.
However, it remains to be seen whether the simpler filters in the bank can produce a higher order filter. To investigate this, first-order filters and second-order filters with varying quality factors (Q) are chosen for simulations, the results of which are shown in Figure 7. From the figure, it can be seen that the first-order filter and filters with Q ≤ 0.5 are unable to adapt to the higher order filter. In contrast, second-order filters with Q > 0.5 possess this capability. Figure 7 also shows that when Q is small, the amplitude's rate of decline in the frequency domain is more gradual. Conversely, if Q is too high, the rapidly decreasing tendency prevents a flat response in the pass-band region, which may be mitigated through various measures, such as adjusting the number of filters in the bank. These findings emphasize the importance of carefully selecting filter parameters to optimize the filter's performance and adaptability.
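The role of Q can be checked directly from the standard second-order low-pass magnitude response. The sketch below evaluates |H(jf)| and confirms that the gain at f_c equals Q, so Q ≤ 0.5 gives no peaking and only a gradual roll-off, while a large Q produces a pronounced resonance peak:

```python
import numpy as np

def lp2_mag(f, fc, Q):
    """Magnitude of H(s) = wc^2 / (s^2 + (wc/Q) s + wc^2) at frequency f."""
    w = f / fc  # frequency normalized to the cutoff
    return 1.0 / np.sqrt((1.0 - w**2) ** 2 + (w / Q) ** 2)

fc = 10e6  # 10 MHz cutoff, matching the reference filter's corner
for Q in (0.5, 1.0, 5.0):
    peak_db = 20 * np.log10(lp2_mag(fc, fc, Q))  # gain at f = fc equals Q
    print(Q, round(peak_db, 1))
```

Far above f_c the response falls at -40 dB/decade regardless of Q; only the behavior around the corner, which the adaptation exploits, changes with Q.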
Another critical parameter is the number of filters in the bank itself. Thus, banks with varying numbers of second-order filters with Q = 5 are created so that their cutoff frequencies are logarithmically uniformly distributed between 1 and 100 MHz. The results of these simulations are shown in Figure 8. The results show that with fewer filters, it is more challenging to adapt between the cutoff frequencies of the filters in the bank. Conversely, the more filters there are in the bank, the more likely and easier the adaptation.
As shown, one of the significant drawbacks of filter banks is, first, that they should contain a larger number of filters and, second, that the filters they contain should be very accurate. The second drawback can be mitigated with the described adaptive filter concept, because the adaptation does not assume specific filter values but uses what is available. In the following simulations, each filter bank is set up with random variations of the filter parameters, namely Q ± 20% and cutoff frequency f_c ± 10%. The resulting frequency responses in Figure 9 show that even with such parameter variance, the result is not much worse. This allows the filter bank to be assembled from higher tolerance components, thus simplifying the manufacturing process.
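The logarithmically uniform spacing and the simulated component tolerances can be sketched as follows; the seed and the uniform distribution are illustrative assumptions (the paper says only "random variation"):

```python
import numpy as np

rng = np.random.default_rng(1)  # seed chosen arbitrarily for the sketch
n_filters = 24

# Cutoff frequencies log-uniformly spaced over the 1-100 MHz adaptation band.
fc_nominal = np.logspace(np.log10(1e6), np.log10(100e6), n_filters)

# Apply the simulated tolerances: f_c +/- 10 %, Q +/- 20 %.
fc = fc_nominal * rng.uniform(0.9, 1.1, n_filters)
Q = 5.0 * rng.uniform(0.8, 1.2, n_filters)

print(len(fc), round(fc_nominal[0] / 1e6, 3), round(fc_nominal[-1] / 1e6, 3))
# 24 1.0 100.0
```

Each (fc, Q) pair would then parameterize one second-order section of the bank; the adaptation absorbs the per-filter deviations.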

| ADAPTATION
The aim of this paper is to show that this adaptive filter concept is able to adapt to filters of higher frequencies and higher orders. For that purpose, the whole created structure is translated into electrical circuits using a netlist notation. An open-source SPICE simulator for electrical and electronic circuits called ngspice is then used for the simulations. Ngspice supports various analyses; however, the FAANN structure is highly nonlinear, so AC analysis, which would otherwise be convenient for filters, cannot be used. For this reason, all simulations performed are of the transient type.
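As a stand-alone illustration of this workflow (not the authors' generator), the sketch below emits a minimal ngspice netlist for a single RC low-pass with a `.tran` directive, the transient analysis used throughout the paper:

```python
def rc_lowpass_netlist(r_ohm, c_farad, t_stop):
    """Emit a minimal ngspice netlist: RC low-pass driven by a 10 MHz sine."""
    return "\n".join([
        "* RC low-pass test bench",
        "V1 in 0 SIN(0 1 10MEG)",    # 1 V amplitude, 10 MHz sinusoidal source
        f"R1 in out {r_ohm}",
        f"C1 out 0 {c_farad}",
        f".tran 0.1n {t_stop}",       # transient analysis, as in the paper
        ".end",
    ])

# R = 1 kOhm, C = 15.9 pF gives f_c = 1/(2*pi*R*C) of roughly 10 MHz.
netlist = rc_lowpass_netlist(1e3, 15.9e-12, "1u")
print(".tran" in netlist and netlist.endswith(".end"))  # True
```

For the full adaptive filter, the same idea scales up: the program assembles neuron and filter-bank subcircuits into one netlist and hands it to ngspice for transient simulation.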
The simulated neural networks and filter banks can be quite large and, more importantly, variable, especially in the conceptual phase of development. Creating these structures and reading frequency characteristics from transient analyses in graphical programs would be very challenging. Therefore, a program was created whose flowchart is shown in Figure 10. Here, it is essential to mention that in the learning phase of the adaptive filter, a source with a random waveform close to white noise is used as the input signal. That ensures all frequencies are represented in the input at a similar rate during learning. The target signal is then obtained using the reference filter according to Figure 11. In the measurement phase, V_η is set to 0, and a sinusoidal source of the particular frequency is used as the input signal.

FIGURE 11 Block diagrams used in adaptive filter learning simulations.

FIGURE 12 Demonstration of continuous real-time FAANN filter learning using transient analysis with the input signal V_in as random noise.
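The learning-phase stimulus and target described above can be sketched in software, using a digital stand-in for the reference filter (the sample rate and the 1 dB passband ripple are assumed values; the paper does not state them):

```python
import numpy as np
from scipy import signal

fs = 1e9   # sample rate for the sketch [Sa/s]
n = 8192
rng = np.random.default_rng(2)

# Learning-phase input: near-white noise, so all frequencies are represented
# at a similar rate during learning.
v_in = rng.standard_normal(n)

# Digital stand-in for the reference filter: fourth-order Chebyshev low-pass,
# 10 MHz cutoff (1 dB passband ripple assumed).
b, a = signal.cheby1(4, 1, 10e6, btype="low", fs=fs)
v_target = signal.lfilter(b, a, v_in)  # the waveform fed to V_target

print(v_target.shape == v_in.shape, np.std(v_target) < np.std(v_in))  # True True
```

In the measurement phase the noise source would be swapped for a single-frequency sine and the learning input disabled, mirroring V_η = 0.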
The FAANN learns on a supervised-learning (teacher-assisted) principle using analog feedback based on backpropagation. Since the desired output of the whole neural network after adaptation is an already filtered signal, the signal to be learned has to be fed to V_target. This adaptive filter learns in the time domain, not in the frequency or phase domain. Figure 12 shows an example of such an adaptation signal waveform. All learned features, such as the filtering of specific frequencies or phase shifts, are unknown to this filter; all of them are learned purely from the time-domain waveform. This makes it possible to adapt to a filter with an unknown transfer function simply by feeding the desired signal waveform into V_target.
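As a discrete-time analogy of this time-domain learning (not the analog circuit itself), the sketch below adapts one layer of weights over a toy filter bank, sample by sample, toward a target produced by a reference filter that is unknown to the learner; a normalized-LMS update stands in for the capacitor-charging rule, and all cutoff values are illustrative:

```python
import numpy as np
from scipy import signal

fs, n = 1e9, 4096
rng = np.random.default_rng(3)
v_in = rng.standard_normal(n)  # near-white learning input

# Toy bank: first-order low-pass sections as digital stand-ins for the bank.
cutoffs = [2e6, 5e6, 10e6, 20e6, 50e6]
bank = np.stack([signal.lfilter(*signal.butter(1, fc, fs=fs), v_in)
                 for fc in cutoffs])

# Reference with a transfer function unknown to the learner.
b, a = signal.butter(2, 10e6, fs=fs)
v_target = signal.lfilter(b, a, v_in)

# Sample-by-sample weight update: the only information used is the
# time-domain error against the target waveform, as in the FAANN.
w = np.zeros(len(cutoffs))
eta = 0.5
for k in range(n):
    x = bank[:, k]
    err = w @ x - v_target[k]
    w -= eta * err * x / (x @ x + 1e-12)  # normalized-LMS step

mse = np.mean((w @ bank - v_target) ** 2)
print(mse < np.mean(v_target ** 2))  # True: error well below the signal power
```

The weights never see a frequency response; they converge purely from waveform error, which is the property that lets the concept imitate filters with unknown transfer functions.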
In this paper, we have developed a simple general adaptive filter for adaptation in the range of 1 to 100 MHz, using a filter bank containing 24 second-order filters with Q = 5, six hidden-layer neurons, and one output neuron. This particular filter was designed and subjected to adaptation tests for low-pass and high-pass fourth-order Chebyshev-type filters with cutoff frequencies of 3, 7, 15, and 40 MHz. Subsequently, eighth-order Chebyshev-type band-pass and band-stop filters were also tested, with bands of 2.5-5.5, 6-9, 13-21, and 32-56 MHz.
FIGURE 13 Adaptation of one particular adaptive filter to low-pass and high-pass fourth-order Chebyshev-type filters.

FIGURE 14 Adaptation of one particular adaptive filter to the eighth-order Chebyshev-type band-pass and band-stop filters.
Each of these adaptations was run five times with different initial weights. The results of these simulations can be seen in Figures 13 and 14. All adaptation runs lasted only 100 μs of simulated time. Despite this short learning time, the results are very promising.
The last simulation in this paper is a load test of this adaptive filter. For this purpose, a dual band-pass filter used, for example, in VDSL modems was chosen [27]. The attempt to adapt to this more complex filter can be seen as the red lines in Figure 15. The THD of the filter learned in this way can also be seen as the red lines in Figure 3. The plot shows that an adaptive filter set up in this way is unsuitable for such a challenging task. However, after just a small change in the filter bank, to 48 second-order filters, the result improves considerably, as the green line in the same Figure 15 shows.

| CONCLUSION
This paper presents a new concept of an adaptive filter based on a fully analog artificial neural network with a filter bank. The proposed structure is not affected by the von Neumann bottleneck and avoids all limitations based on sampling and clock speed, thus enabling high-speed adaptation.
The properties of the most critical parameters of both the neural network used and the filter bank are described through simulations:

• Neural network activation function. For filtering purposes, linear activation functions are the most suitable, mainly due to the smaller output signal distortion, as shown in Figure 3.
• Neural network structure. In Figures 4 and 5, it can be seen that simpler structures adapt more efficiently in the same learning time. However, larger structures can adapt better after a longer learning time.
• Types of filters in the bank. Simulations demonstrate that only low-pass filters in the bank are necessary to adapt to all other types of filters (low-pass, high-pass, band-pass, or band-stop). In addition, to adapt to a high-order filter, at least second-order filters with quality factor Q > 0.5 need to be used in the bank, as shown in Figure 7.
• Number of filters in the bank. Figure 8 shows that the adaptive filter learns more effectively with a higher number of filters in the bank. Twenty-four filters appear to be adequate for an adaptation frequency range of two decades.
However, there are further parameters, such as the learning duration, the voltage V_η (the learning rate), and the learning modes. All these parameters interact with each other and affect the progress and accuracy of the adaptation. Unfortunately, the number of combinations of these parameters does not allow us to illustrate all of their behavior.
Based on these findings, a simple general adaptive filter is proposed for adaptation in the frequency band from 1 to 100 MHz. The specific parameter values are chosen to show both the favorable properties and the disadvantages. The presented filter thus consists of only six neurons in one hidden layer, one output neuron, and a bank of only 24 second-order low-pass filters. Simulations with this simple adaptive filter configuration have shown that it can adapt within 100 μs to various fourth-order Chebyshev low-pass and high-pass filters and eighth-order band-pass and band-stop filters. The same filter was tested against a more complex dual-band filter in common use, and the result was satisfactory once the number of filters in the bank was increased to 48, as shown in Figure 15.
In future work, it is planned to investigate a full implementation of the adaptive filter at the transistor level and to explore further limitations of this concept, as well as the possibility of using operational transconductance amplifier (OTA) tunable filters instead of the filter bank to enhance performance and adaptability. These advances may lead to a new generation of efficient, high-speed adaptive analog systems.

FIGURE 3 Measured THD of the adapted filter with different activation functions.

FIGURE 4 Frequency characteristics of adapted filters containing only one neuron.

FIGURE 7 Frequency characteristics of adapted filters with different filter types in the bank. Filters with defined Q are second order.

FIGURE 8 Frequency characteristics of adapted filters with a different number of filters in the bank.

FIGURE 9 Comparison of frequency characteristics of adapted filters with precisely defined filters in the bank versus filters with deviations f_c ± 10% and Q ± 20% in the bank.

FIGURE 10 Algorithm flow diagram for the simulation of a fully analog adaptive filter.

FIGURE 15 Frequency characteristics of the filter adapted to the dual band-pass filter.