A Biologically Interfaced Evolvable Organic Pattern Classifier

Abstract

Future brain–computer interfaces will require local and highly individualized signal processing of fully integrated electronic circuits within the nervous system and other living tissue. New devices will need to be developed that can receive data from a sensor array, process these data into meaningful information, and translate that information into a format that can be interpreted by living systems. Here, the first example of interfacing a hardware‐based pattern classifier with a biological nerve is reported. The classifier implements the Widrow–Hoff learning algorithm on an array of evolvable organic electrochemical transistors (EOECTs). The EOECTs’ channel conductance is modulated in situ by electropolymerizing the semiconductor material within the channel, allowing for low voltage operation, high reproducibility, and an improvement in state retention by two orders of magnitude over state‐of‐the‐art OECT devices. The organic classifier is interfaced with a biological nerve using an organic electrochemical spiking neuron to translate the classifier's output to a simulated action potential. The latter is then used to stimulate muscle contraction selectively based on the input pattern, thus paving the way for the development of adaptive neural interfaces for closed‐loop therapeutic systems.


The Least Mean Squares Algorithm
The purpose of the LMS algorithm is to minimize the error between the class labels (often chosen as +1 and -1) and the weighted sums ("regression values") generated by the network.
The expectation is that this procedure should also minimize the classification error rate when the threshold circuit is applied. This is not always the case, however, since the two criteria are different.
However, the LMS optimization also converges for patterns that are not linearly separable, in contrast to the Perceptron learning algorithm, which is based on minimizing the error rate. [1] The error considered in the LMS algorithm is the sum of the squared differences between the regression values r_j = Σ_i w_i V_ij and the class labels:

E = Σ_j (r_j − l_j)²,

where w_i is the i:th weight, V_ij is the i:th component of a labelled input pattern V_j (being part of the "training set"), and l_j is its label. As is seen, the error is a quadratic function of the weights, so the weights that minimize the error can be solved for analytically. This is, however, a tedious procedure that requires solving a linear equation system with as many equations as there are weights. It also requires that all the training samples be available simultaneously.
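The analytic solution mentioned above amounts to solving the normal equations of the least-squares problem. A minimal NumPy sketch (the training set below is illustrative, not from the paper):

```python
import numpy as np

# Illustrative labelled training set: rows are binary input patterns V_j
# (components +1/-1), l holds the class labels (+1 or -1).
V = np.array([[ 1,  1, -1],
              [ 1, -1,  1],
              [-1,  1,  1],
              [-1, -1, -1]], dtype=float)
l = np.array([1.0, 1.0, -1.0, -1.0])

# Minimizing E = sum_j (sum_i w_i V_ij - l_j)^2 analytically means solving
# the normal equations (V^T V) w = V^T l: one linear equation per weight,
# with all training samples available simultaneously.
w, *_ = np.linalg.lstsq(V, l, rcond=None)

# After solving, the signs of the regression values V @ w give the classes.
print(np.sign(V @ w))
```

For this small consistent system the solution is exact; in general, lstsq returns the weight vector minimizing the quadratic error E.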
In contrast, the LMS algorithm iterates over the training patterns to approach the correct solution. Not only is the computational burden distributed over time, but the procedure also allows new patterns and/or changed labelling to be introduced, which facilitates adaptivity. The basic idea is the following.
For each training pattern the weight vector W is adjusted to bring the regression value r_j closer to the label associated with that pattern. This can be expressed as:

w_i ← w_i + step · (l_j − r_j) · V_ij,

where step controls the convergence rate. As an example, consider a case where the label is +1 for a certain input pattern while the regression value is less than 1. Then, all weights that are multiplied by a positive input should be increased, while those multiplied by a negative input should be decreased. Altogether, this brings the new regression value closer to +1. A similar procedure is applied to an input pattern labelled -1, but there the adjustment of the weights is reversed. The amount added to each weight should be small enough not to cause overshoots but rather a smooth convergence. Following Widrow's ADALINE implementation [10a], we will only consider binary input patterns, where each pattern component is assumed to have the value 1 or -1.

Figure S1. Current ratio between positive and negative voltages for a set of devices of varying conductance. Data taken from Figure 2e.
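The iterative update described above can be sketched as a few lines of NumPy (step size, number of epochs, and patterns are illustrative choices, not values from the paper):

```python
import numpy as np

# Binary training patterns V_j (components +1/-1) with labels l_j, as in
# Widrow's ADALINE setting; the patterns and labels are illustrative.
V = np.array([[ 1,  1, -1],
              [ 1, -1,  1],
              [-1,  1,  1],
              [-1, -1, -1]], dtype=float)
l = np.array([1.0, 1.0, -1.0, -1.0])

w = np.zeros(3)   # weight vector W
step = 0.05       # small step avoids overshoots and gives smooth convergence

for epoch in range(200):
    for Vj, lj in zip(V, l):
        rj = w @ Vj                  # regression value r_j
        w += step * (lj - rj) * Vj   # Widrow-Hoff / LMS update

# After convergence, thresholding the regression values reproduces the labels.
print(np.sign(V @ w))
```

Note how each update nudges every weight in the direction that moves r_j toward l_j, so the computational burden is spread over the training iterations rather than concentrated in one linear solve.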

Read voltage scaling
A non-destructive read can be obtained by keeping the potential differences between the source/drain terminals and the gate well below 0.2 V. In read mode, however, it is very important that the positive and negative components of a given pixel are modulated so that the ammeter measures only the difference in the effective channel volume, rather than any difference in the redox state of the ETE-S channel. This is obtained by introducing an offset coefficient by which we multiply the negative input voltage to compensate for the reduction in the doping level. The offset coefficient is calculated from the IV data in Figure
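The compensation scheme above can be illustrated with a short sketch. The coefficient value, voltage level, and function name below are hypothetical placeholders; in practice the coefficient would be extracted from the measured IV data:

```python
# Illustrative sketch of the compensated read scheme: the negative read
# voltage is scaled by an offset coefficient so that the measured current
# difference reflects only the effective channel volume, not the reduced
# doping level at negative bias. All numeric values here are hypothetical.

V_READ = 0.1  # read voltage magnitude (V), kept well below 0.2 V


def read_voltages(offset_coeff: float) -> tuple:
    """Return the (positive, negative) read voltages for one pixel.

    offset_coeff compensates the lower doping level at negative bias; it
    would be obtained from IV measurements, e.g. the current ratio between
    positive and negative voltages for a device of a given conductance.
    """
    return V_READ, -offset_coeff * V_READ


v_pos, v_neg = read_voltages(offset_coeff=1.08)
print(v_pos, v_neg)
```

With offset_coeff = 1 the scheme reduces to symmetric ±V_READ biasing; a coefficient slightly above 1 boosts the negative branch to balance the reduced conduction there.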