Global shutter CMOS vision sensors and event cameras for on-chip dynamic information

The on-chip extraction of dynamic information from a scene can be addressed with either frame-based CMOS vision sensors, also called smart image sensors, or event cameras.


| INTRODUCTION
Temporal and spatial redundancy of a static background scene does not call for continuous streaming or the continuous running of elaborate computer vision models such as object detection, tracking, or action recognition. Indeed, it is the entrance of an object into the field of view of the camera, or, in general, object motion, that triggers further computer vision processing and continuous streaming. 2,3 In this context, dynamic information extraction is key to separating said two operation modes.
Dynamic vision sensor (DVS) pixels generate events when the relative change in illumination exceeds a user-defined threshold. 5,6,7 These events are sent outside of the pixel array in address-event representation (AER). They implement a logarithmic photodiode that offers high dynamic range and allows for continuous operation of the pixel, leading to finer temporal resolution. The DVS is usually accompanied by a 3T-APS structure, leading to a Dynamic and Active Pixel Vision Sensor (DAVIS); see Figure 1.
Global shutter pixels working in integration mode with frame differencing functionality are another approach to generating events. 9,10 In this case, frame differencing is performed through two consecutive frames with a fixed integration time and a global reset phase in between. Global reset keeps false event generation through leakage currents at bay, although at the cost of worse temporal resolution than that of event pixels. 7,11 As a benefit, the pinned photodiode (PPD) in 4T-APS allows for lower dark current and reset noise with correlated double sampling (CDS) techniques. Apart from their worse temporal resolution, global shutter pixels with frame difference features lack HDR capability. The purpose of this design is to narrow the gap in dynamic range between dynamic vision sensors and frame difference-based event cameras. Former frame differencing pixels either do not output events 12,13,14 or do not implement any dynamic range extension methods. 10,15 This paper describes a frame differencing-based event pixel with a lateral overflow capacitor that collects saturated photoelectrons as a dynamic range extension method. 16 We give performance metrics of area, speed, and power consumption through post-layout simulations, as well as of false event generation through Monte Carlo and noise simulations.
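The two-frame differencing scheme described above can be sketched numerically. This is an illustrative behavioral model only, with NumPy arrays standing in for pixel outputs and a hypothetical threshold, not the circuit itself:

```python
import numpy as np

def frame_difference_events(frame_prev, frame_curr, v_th_event):
    """Toy model of frame-differencing event generation.

    Two consecutive global-shutter frames, separated by a global
    reset, are subtracted; pixels whose absolute intensity change
    exceeds a user-defined threshold emit an event whose polarity
    is the sign of the change.
    """
    diff = frame_curr.astype(float) - frame_prev.astype(float)
    events = np.abs(diff) > v_th_event          # event map
    polarity = np.where(diff > 0, 1, -1)        # ON (+1) / OFF (-1)
    return events, np.where(events, polarity, 0)

# One pixel brightens by 0.3 V against a 0.1 V threshold -> ON event;
# the other pixel is static -> no event.
prev = np.array([[0.5, 0.5]])
curr = np.array([[0.8, 0.5]])
ev, pol = frame_difference_events(prev, curr, 0.1)
```
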

| GLOBAL SHUTTER HDR 4T-APS PIXEL FOR EVENT GENERATION
Our global shutter HDR 4T-APS pixel provides events in the form e_k = (x_k, t_k, p_k), with the event's position in the array x_k, the timestamp t_k, and the polarity of the intensity change p_k. A preliminary version of our pixel was introduced in Jaklin et al. 18 Apart from a less in-depth description of all the concepts and circuits, that paper lacks layouts, post-layout performance metrics, noise analyses, and a comparison with dynamic vision sensors.

| Pixel operation
The schematics and timing diagram of our pixel can be seen in Figures 2 and 3, with three main operations: (i) image acquisition, (ii) HDR algorithm, and (iii) frame differencing. Figure 2 also includes simplified schematics for key phases of operation of the pixel, obtained by configuring switches along the data path. All these phases are explained in the paragraphs below. The analog memory bank and the comparator are labeled AMB and CMP, respectively.
The image acquisition of our pixel features conventional and HDR integration modes. The conventional mode, named S1, integrates electrons onto the floating diffusion node FD. The HDR mode, referred to as S2, adds the capacitance of the CS node to the floating diffusion node FD. 16 This permits storing more electrons at FD + CS, extending the dynamic range at the price of a smaller conversion gain. The selection is automatic, that is, each pixel on its own determines whether to use S1 or S2. Figure 2B illustrates the decision between conventional and HDR mode through the comparison of signal S1 against a user-programmed signal V_THS1/S2 by circuit CMP.
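The automatic S1/S2 selection can be illustrated with a short behavioral sketch. The values follow figures reported later in this paper (C_FD estimated at 12 fF, programmable CS, V_THS1/S2 = 1.1 V), but the function itself is a simplified model of the decision, not the circuit:

```python
Q_E = 1.602e-19  # elementary charge [C]

def pixel_output(n_electrons, c_fd=12e-15, c_cs=100e-15, v_th=1.1):
    """Behavioral model of the per-pixel S1/S2 decision.

    S1 reads the charge on FD alone (high conversion gain); if S1
    would exceed the threshold V_THS1/S2, the pixel switches to S2,
    reading the same charge on the shunted FD + CS node (lower
    conversion gain, higher full well).
    """
    s1 = n_electrons * Q_E / c_fd              # conventional mode
    if s1 <= v_th:
        return "S1", s1
    s2 = n_electrons * Q_E / (c_fd + c_cs)     # HDR mode, FD + CS
    return "S2", s2
```

With these assumed values, 50,000 collected electrons stay in S1 mode, while 200,000 electrons trip the threshold and are read out in S2 mode at a lower output voltage, reflecting the reduced conversion gain.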
FIGURE 1 DAVIS solution to the dynamic vision sensor concept. 17

The operation of our global shutter HDR 4T-APS pixel in more detail is as follows. First, right after reset at t1 in Figure 3 and with signal S pulsed high, the noise level N2 of the node FD + CS is read and stored on the capacitor C_S2, driven by the source follower (SF). During the integration time, signal S is kept high, keeping the M3 transistor active and allowing any saturated electrons that flow through M1 while TX is low to be stored on the overflow capacitor CS. Just before the integration time ends, signal S is pulled low, isolating FD from CS; immediately after, at t2, the noise level N1 associated with the electrons of signal S1 of the now isolated FD node is read and stored on C_S1. Next, the switch TX is pulsed high, transferring the electrons from the PPD to the FD node, thus generating N1 + S1 and sending it to C_S1. The CDS is performed upon the arrival of N1 + S1 at t3 by setting signal phi2 high in the subtraction unit, yielding S1 on the node V_S. As said above, this is illustrated in Figure 2B. Signal S1 represents the voltage value generated by nonsaturated electrons only, while signal S2, which is calculated shortly after S1, represents the value generated by both nonsaturated and saturated electrons. Similarly to S1, Figure 2C shows the circuit at the time of calculating S2. At t4, in order to set the correct amount of charge on the quiescent node, phi_CS2 is pulsed high. Also at t4, TX is again pulsed high to deposit the electrons collected during the calculation of the S1 signal. At this moment, the new integration cycle begins. At t5, S2 is calculated on the node V_S. The S1-S2 crossing is given by a user-defined threshold V_THS1/S2. If the accumulated voltage S1 exceeds said threshold, the analog memory stores signal S2; otherwise it stores S1. The COMP signal enables the comparator and is also pulsed high at t3 to compare S1 to V_THS1/S2.
Frame differencing, executed after the polarity decision through F_n > F_{n-1}, is performed with the subtraction and comparator circuits, which are also used for the decision on the switch from S1 to S2. Frames F_n and F_{n-1} are compared to each other between t5 and t6, when signal COMP is pulsed high. Figures 2D and 2E show the polarity decision and frame differencing phases, respectively. At t7, the frame differencing is complete and the result is compared to a user-defined threshold voltage V_THevent. After the comparison, the cycle begins anew by pulsing the reset signal R high. We apply circuit sharing techniques, as said circuits are the same for ON/OFF event generation and for the CDS operations that mitigate the effect of mismatch and of the noise levels N1 and N2 corresponding to signals S1 and S2, respectively.

| Pixel circuits
All our circuits feature power gating in order to decrease power consumption. The supply voltage of the 4T-APS is V_dd = 3.3 V, seeking a wide dynamic range. The supply is set to 1.8 V for the rest of the circuitry, aiming for low power consumption.

| 4T-APS
Our 4T-APS sensing structure with a PPD comprises a programmable overflow metal-insulator-metal (MIM) capacitor CS, settable to 50, 100, and 150 fF for the HDR algorithm, which allows for different upper limits in the incident light. The aspect ratios W/L (μm/μm) of the transistors in our 4T-APS implementation are 5.4/0.8, 0.35/0.35, and 0.35/0.5 for M1, M2, and M3, respectively. The source follower is biased with a current I_SFAPS = 0.5 μA and has an aspect ratio of 0.22/0.9 (μm/μm), while the biasing transistor aspect ratio is 0.22/1.5, again in (μm/μm). The capacitance of the floating diffusion node FD has been estimated by post-layout simulations as 12 fF.

FIGURE 3 Timing diagram of our HDR 4T-APS for event generation. Control signals during two sequential frames, "Fn-1" and "Fn," are shown. Red lines represent steps for the S2 calculation, blue lines for S1, and green ones the frame difference operation.

| Subtraction unit
The subtraction unit is a double cascode inverting amplifier in feedback mode with the capacitor C_2 and the reset switches phi_1 and phi_2. This unit runs CDS for signals S1 and S2 and frame differencing between the current frame F_n and the previous frame F_{n-1}. This structure is the same as that of the dynamic vision sensor 6 or of other CMOS vision sensor solutions. 19 The sensitivity of the frame differencing and of the S1/S2 signals is given by the ratios of C_FD/C_S1/C_S2 to C_2. We have made C_2 programmable for 1x, 2x, and 3x gains. Capacitors C_FD/C_S1/C_S2 have been sized to 90 fF, while C_2 is scaled accordingly. Signal V_3 adds a user-defined offset, which in our case sets the comparator input transistors to saturation. The input transistors of the comparator are NMOS devices with a threshold voltage of around 650 mV, so a V_3 of at least 700 mV is advised. All the switches have been designed as NMOS transistors with minimum dimensions. The double cascode inverting amplifier has an open-loop gain of 100 dB.
The sequence of operations of the different switches can be seen in Figure 3. The CDS operation for signals S1 and S2 to mitigate the effect of noise N1 and N2 is given by

V_S = (C_i / C_2) [(N_i + S_i) − N_i] = (C_i / C_2) S_i,    (1)

with subindex i referring to either S1/S2 or N1/N2. This result is stored in the analog memory bank, which holds the values of the current frame F_n and the previous one F_{n-1}. The programmability of the overflow capacitor CS offers flexibility in HDR mode. Figure 4A collects a simulation for monochromatic light at a wavelength of 555 nm (green) for the three gains defined in our pixel. The Y-axis is the voltage V_S in Figure 2A after running CDS. The pixel operates only with the floating diffusion capacitance, and thus with high conversion gain (signal S1), at low illumination levels. At a given illumination level, the pixel enters the HDR region, with the excess electrons being collected on the overflow capacitor CS. This is the region of signal S2, where the conversion gain decreases due to the capacitance of the two shunted nodes, FD + CS. The higher the curve in the HDR region, the lower the CS capacitance. The three curves in Figure 4A correspond to our three cases of CS: 50, 100, and 150 fF. The peak seen in Figure 4A occurs when the pixel switches from the S1 to the S2 signal, that is, when S1 > V_THS1/S2. In this case, V_THS1/S2 is set to 1.1 V.
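The CDS subtraction performed by the unit condenses to one line: the stored noise sample is subtracted from the later signal-plus-noise sample, scaled by the programmable capacitor ratio. The numeric values below are hypothetical:

```python
def cds(n_plus_s, n, c_in=90e-15, c2=90e-15):
    """Correlated double sampling as performed by the subtraction
    unit: subtract the stored reset/noise sample N_i from the later
    N_i + S_i sample, with gain set by the C_i / C_2 ratio."""
    return (c_in / c2) * (n_plus_s - n)

# A 0.12 V noise level riding on a 0.45 V signal cancels out at 1x gain:
v_s_1x = cds(0.57, 0.12)                  # -> 0.45
# Halving C_2 doubles the sensitivity (the 2x gain setting):
v_s_2x = cds(0.57, 0.12, c2=45e-15)       # -> 0.90
```
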
The frame differencing is calculated as

V_S = (C_FD / C_2) |F_n − F_{n-1}|.    (2)

FIGURE 4 Simulations showing HDR extension and gain programmability.
This value is compared to a user-defined threshold voltage V_THevent to decide whether or not there is an event. We have implemented the absolute difference operation in Equation (2) in order not to yield negative voltages and to have only one comparator instead of two dedicated comparators for ON and OFF events, as is the case in the classical dynamic vision sensor. 7 The operation of our subtraction unit requires the pixel to store on C_FD the higher voltage of the current frame F_n and the previous frame F_{n-1}. This is carried out with the comparator labeled CMP in Figure 2A. This comparison can also be used as the polarity flag p_k, latching the result onto the digital memory block of our pixel. Figure 2D shows a simplified version of the circuit during the comparison of F_n and F_{n-1}, with both signals coming from AMB. In terms of the timing diagram of Figure 3, when phi_1 is pulsed high at t6, the higher-value frame arrives on C_FD. The arrival of the second frame with phi_2 pulsed high completes the frame differencing operation. Figure 2E shows the circuit at the time of calculating the frame difference. Figure 4B conveys simulations showing the absolute value of the frame differencing operation for our three different gains. According to Equation (2), higher slopes come from lower capacitance values of C_2. The dashed line represents an example of the user-defined threshold V_THevent to trigger an event.
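The single-comparator decision can be sketched behaviorally: the sign comparison that orders the two frames doubles as the polarity flag p_k, so the absolute difference only needs one threshold comparison. The threshold value in the example is hypothetical:

```python
def event_decision(f_n, f_prev, v_th_event):
    """Single-comparator event decision on |F_n - F_{n-1}|.

    The comparison F_n > F_{n-1}, used to put the higher frame first
    in the subtraction, is latched as the polarity flag p_k; one
    comparator thus serves both ON and OFF events.
    """
    polarity = 1 if f_n > f_prev else -1       # p_k
    if abs(f_n - f_prev) > v_th_event:
        return polarity                        # ON (+1) or OFF (-1)
    return 0                                   # no event

on_event = event_decision(0.6, 0.4, 0.1)       # brightening -> +1
off_event = event_decision(0.4, 0.6, 0.1)      # darkening -> -1
quiet = event_decision(0.50, 0.55, 0.1)        # below threshold -> 0
```
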

| Analog memory bank
The analog memory bank, shown in Figure 5, stores the previous F_{n-1} and current F_n frames, which are the results of CDS. It is implemented as an open-loop sample-and-hold configuration. The AMB is needed because the switched-capacitor circuit (SCC) of the subtraction unit cannot take two frames at once. It is designed with C_Mi = 40 fF, and with source followers acting as buffers, built with low-threshold-voltage NMOS transistors to give as wide a linearity range as possible. Open-loop sample-and-hold architectures are up to the challenge of keeping the image for hundreds of ms with acceptable accuracy degradation due to long-term storage losses in more demanding solutions for on-chip dynamic information extraction. 20 An alternative implementation of our pixel would store one frame instead of two. This can be achieved with two SCCs instead of only one: one SCC would perform CDS, while the other would run frame differencing. We have found that storing two frames leads to simpler logic and control than using two SCC circuits.

| Comparator for event generation and S1/S2 crossing with its input logic
The decision on when to switch from the conventional integration mode with signal S1 to the HDR extension through signal S2, and on whether or not there is an event, is carried out by the comparator. The comparator takes as input IN1 the difference given by the subtraction unit, V_S, that is, either the CDS output of S1, the frame differencing value, or signal S2; and as input IN2 a user-programmable threshold, V_THS1/S2 for the S1-S2 crossing or V_THevent for the event generation. The block labeled Input Logic in Figure 2A sets the appropriate input at the right instant. Such input logic is implemented with a bank of switches realized with NMOS transistors of minimum dimensions.
FIGURE 5 Analog memory bank (AMB) of our pixel (see Figure 2).

The comparator is implemented as a two-stage open-loop amplifier with a 5-transistor operational transconductance amplifier (OTA) architecture and a differential input driving an inverter. Noise and mismatch can cause an incorrect polarity when two frames are close; hence, although it does not feature offset cancellation, it has been designed with large transistors in order to make it mismatch resilient. Post-layout simulations yield a static resolution of the comparator of ξ_s = 1.1 mV. This metric is achieved with a large input differential pair, sized to W = 5 μm, L = 0.58 μm.

| Digital memory block: Event generation and read-out
The digital memory block shown in Figure 6A contains four D-latches to (i) provide events; (ii) set the polarity of the event; and (iii) assess the signal uniformity of S1 or S2 between frames.
The behavior of the four D-latches is conveyed in Figure 6B. If D_e is set to logical "1" and D_SFn and D_SFn-1 hold the same logical value, either "0" or "1," an event is issued. In this case, the value latched at D_FD determines the polarity of the event: a logical "1" means that the voltage variation along the integration time at the current frame exceeds that of the previous frame, providing an ON event, and vice versa. The case of two consecutive frames coming from two different sensitivities (S1 and S2) is accounted for by D_SFn and D_SFn-1 issuing different logical states. In this situation, there is an ON event if the state from D_SFn is a logical "1" and that from D_SFn-1 is a logical "0," regardless of D_e and D_FD; there is an OFF event if the state from D_SFn is a logical "0" and that from D_SFn-1 is a logical "1," again regardless of D_e and D_FD. When D_SFn and D_SFn-1 hold the same logical value, events are generated by the frame difference itself, as described above.
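The decision rules above can be captured as a small decoder of the 4-bit latch word. The bit ordering (D_e, D_FD, D_SFn, D_SFn-1) is an assumption of this sketch, chosen so that it reproduces the OFF-event code "1011" reported for Figure 7:

```python
def decode_event(d_e, d_fd, d_sfn, d_sfn1):
    """Decode the D-latch states into ON (+1), OFF (-1), or no event (0).

    A sensitivity change between frames (D_SFn != D_SFn-1) forces an
    event regardless of D_e and D_FD; with equal sensitivities, D_e
    gates the event and D_FD gives the polarity. Bit ordering of the
    serial word is assumed, not taken from the paper's figure.
    """
    if d_sfn != d_sfn1:                 # S1/S2 crossing between frames
        return 1 if d_sfn == 1 else -1
    if d_e == 1:                        # same sensitivity: D_e gates
        return 1 if d_fd == 1 else -1   # D_FD sets the polarity
    return 0

# The code "1011" (D_e=1, D_FD=0, D_SFn=D_SFn-1=1) decodes to OFF.
example_off = decode_event(1, 0, 1, 1)
```
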
Finally, our chip provides both raw images and events. This is managed by the "Output Select" block in Figure 2A. The raw image, resulting from the CDS operation, is read out as an analog signal through a unity-gain buffer. The status of the D-latches, a 4-bit word, is read out serially and sent for off-chip processing which, based on the table in Figure 6B, determines what kind of event occurs, either ON or OFF. An example of an OFF event is shown in Figure 7. The input in Figure 7A is an abrupt decrease in input light power in the S2 regime. Accordingly, the output corresponds to an OFF event with the code "1011." Said output is a continuous-time signal on the output column line or bus of the pixel (see Figure 2), as shown by the blue curve in Figure 7B. This continuous-time signal is the result of the sequential writing of the outputs of the four D-latches, as labeled within vertical green lines in Figure 7B. Finally, the read-out speed is estimated at around 1000 efps by post-layout simulations.
FIGURE 6 Digital memory block: structure and logic.

| Spatial accuracy
The spatial uniformity of a pixel array of our global shutter HDR 4T-APS is given by the mismatch of every pixel along the data path. Our solution comprises the two integration modes S1 and S2, so we have run Monte Carlo simulations for both cases in order to assess the sensitivity of our approach to intensity changes.
The mismatches of the individual circuits are listed below, with σ_CMP being the standard deviation of the offset of the comparator CMP, σ_SCC the standard deviation of the output of the subtraction unit SCC, and σ_AMB_SF and σ_FD_SF the standard deviations of the outputs of the source followers or buffers in the analog memory bank AMB and in the 4T-APS, respectively. As seen, the comparator CMP and the different source followers are the major sources of mismatch in our pixel. All these contributions accumulate along the data path of the pixel, contributing to false event generation, as shown in Figure 8A,B, which collect the effect of mismatch on event generation for S1 and S2. The X-axis is the intensity change in percentage between two consecutive frames, while the Y-axis shows the percentage of ON and OFF events from Monte Carlo simulations for different user-defined threshold voltages, namely, V_THevent = 7, 17, and 23 mV, which correspond to percentage changes in the light intensity of 0.9%, 2.6%, and 3%. An ideal scenario is that of a sudden jump from 0% to 100% of events at a given threshold, shown with continuous lines in Figure 8A,B. Mismatch and temporal noise in actual circuits impose a minimum threshold to generate events.
We have run Monte Carlo simulations to emulate a whole array of global shutter HDR 4T-APS pixels for event generation. Every dot in the plots of Figure 8A,B is a percentage over 300 nominal Monte Carlo simulations, which we interpret as the percentage of pixels in an array yielding events. Thus, the plots in Figure 8 can be seen as a simulation of pixel fixed pattern noise (PFPN). The percentage of intensity change per frame along the X-axis is calculated as (P1 − P2)/FSO_Si, where P1 and P2 are the light input powers of two consecutive frames, and FSO_Si is the full-scale output of either signal S1 or S2. The results expressed as percentages of events in Figure 8A,B are similar to one another, but it should be taken into account that the powers of signals S1 and S2 differ. The light input power is kept constant during the first frame (P1) at 117 lux for the S1 signal (low illumination) and at 4675 lux for the S2 signal (high illumination), while it is subject to increasing and decreasing variations during the second frame to generate the percentage of illumination change along the X-axis. The input light is simulated as green monochromatic light with a wavelength λ = 555 nm, making the conversion from photometric to radiometric units more straightforward. The integration time has been set to 1 ms. Finally, Figure 8A,B shows that 100% of correct cases is only met at around 5% of intensity change for the S1 (conventional mode) and S2 (HDR mode) signals, as can be seen by the red points. Temporal noise adds further inaccuracies. As is apparent, the application dictates the user-defined threshold voltage V_THevent. The simulations include the leakage caused by the off-resistance of the switches and that caused by the reverse-biased pn junctions of the transistors. Leakage caused by the impact of light, that is, parasitic light sensitivity, is not included.
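The Monte Carlo procedure can be mimicked with a toy model: each simulated pixel adds a Gaussian mismatch offset to the frame difference before thresholding, and the fraction of firing pixels plays the role of the Y-axis in Figure 8. The mismatch sigma here is an assumed lumped value, not the paper's figure:

```python
import numpy as np

rng = np.random.default_rng(0)

def false_event_rate(delta_v, v_th_event=0.017, sigma=0.005, n=300):
    """Percentage of n simulated pixels issuing an event for a common
    frame difference delta_v [V], each pixel carrying a Gaussian
    mismatch offset (sigma is an assumed lumped value)."""
    offsets = rng.normal(0.0, sigma, n)
    fired = np.abs(delta_v + offsets) > v_th_event
    return 100.0 * fired.mean()

# Far below threshold almost no pixel fires; far above, nearly all do,
# and mismatch smears the ideal 0% -> 100% step in between.
low = false_event_rate(0.0)
high = false_event_rate(0.05)
```
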

| Temporal noise
The temporal noise has to be added to the spatial noise to determine the noise floor of our implementation and, in turn, the dynamic range of our HDR 4T-APS pixel for frame differencing.

| Circuit noise
The effect of thermal noise in our implementation has been obtained by averaging 10,000 nominal transient noise simulations. N1 and N1S1 (N1 + S1 in Figure 3) are noise samples taken on the FD node of our circuit (Figure 2) at their corresponding time instants of the timing diagram of Figure 3, while N2 and N2S2 (N2 + S2 in Figure 3) are taken on the FD + CS node. The subtraction of N1 and N2 from N1S1 and N2S2, respectively, runs the CDS operation.

| Photon shot noise
Photon shot noise has also been added for a given input light, with the number of photons N_p = P·T_e/E_f, where P is the light input power at a given wavelength, in our case λ = 555 nm for an easy conversion between radiometric and photometric units, T_e is the integration time, and E_f is the energy of a single photon. From this number of photons, a Poisson distribution has been derived and added to our CAD simulator.
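The photon count and its Poisson sampling can be written directly from the formula, with E_f = h·c/λ for a monochromatic input at 555 nm as in the text. The power and integration time below are example values, not the paper's operating point:

```python
import numpy as np

H = 6.626e-34   # Planck constant [J*s]
C = 2.998e8     # speed of light [m/s]

def photon_count(power_w, t_int_s, wavelength_m=555e-9):
    """Mean photon count N_p = P * T_e / E_f, with E_f = h*c/lambda
    the energy of a single photon."""
    e_photon = H * C / wavelength_m
    return power_w * t_int_s / e_photon

def shot_noise_sample(power_w, t_int_s, rng):
    """Photon shot noise: draw the collected count from a Poisson
    distribution with the mean photon count above."""
    return rng.poisson(photon_count(power_w, t_int_s))

# Example: 1 pW of 555 nm light over a 1 ms integration time
# corresponds to roughly 2800 photons on average.
n_mean = photon_count(1e-12, 1e-3)
sample = shot_noise_sample(1e-12, 1e-3, np.random.default_rng(0))
```
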

| Total noise
Figure 9A shows root mean squared (RMS) values of the photon shot and circuit noise on the output node (V_S) of the subtraction circuit of Figure 2A, with and without CDS, for a sudden transition from low illumination, where photon shot noise dominates and the global shutter HDR 4T-APS works in the normal region with signal S1, to high illumination, where circuit noise prevails and the pixel works within the HDR extension, with a lower conversion gain, with signal S2. Circuit simulations do not account for parasitic light sensitivity or for the effects of leakage currents caused by the reset transistor of the subtraction unit. 21 Nevertheless, we do not expect a large impact, because the global shutter HDR 4T-APS works with a reset between two consecutive frames, and integration times are usually in the order of ms, too short for the leakage currents to have a significant effect. 8 Figure 9B shows the temporal noise along the data path of our HDR 4T-APS for event generation. The noise floor is estimated to be 0.5 mV rms.

| Dynamic range
The estimation of the dynamic range (DR) is given by

DR = 20·log10(E_v,sat2 / E_v,floornoise),    (3)

where E_v,sat2 is the upper limit of S2, beyond which signal N1 saturates, resulting in nonlinearity and false V_THS1/S2 threshold detection, and E_v,floornoise is the lowest illuminance that can be detected.
The HDR extension raises the dynamic range from 53 dB up to 85 dB, which is lower than that of the original HDR pixel with overflow capacitor, 16 which achieves 100 dB. This is due to two factors: the higher number of active elements in our circuit to generate events, which increases the noise floor, and the lower power supply voltage, 3.3 V versus 5 V. 16
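These figures can be checked with the standard 20·log10 definition of dynamic range; the illuminance arguments below are placeholders, not measured values:

```python
import math

def dynamic_range_db(e_sat, e_floor):
    """Dynamic range DR = 20*log10(E_sat / E_floor), the usual
    image-sensor definition used in Equation (3)."""
    return 20.0 * math.log10(e_sat / e_floor)

# A decade of illuminance is 20 dB:
one_decade = dynamic_range_db(10.0, 1.0)       # 20 dB
# The reported jump from 53 dB to 85 dB is a 32 dB extension, i.e.
# roughly a 40x larger saturation illuminance for the same noise floor:
extension = dynamic_range_db(40.0, 1.0)        # ~32 dB
```
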

| Design considerations for CS capacitance
As stated in Akahane et al., 16 it is important to maintain a high enough signal-to-noise ratio (SNR) at the S1/S2 switching point. The signal-to-noise ratio of signal S2 at the S1/S2 switching point is given by

SNR_S2 = S2_S1/S2 / NS2_S1/S2,    (4)

where S2_S1/S2 is the voltage level of signal S2 at the switching point and NS2_S1/S2 is the noise of S2.
FIGURE 9 Noise analysis of our HDR 4T-APS pixel for event generation.

At the S1/S2 switching point, signal S2 is

S2_S1/S2 = S1_S1/S2 · C_FD / (C_FD + C_CS),
where S1_S1/S2 is the signal S1 at the S1/S2 switching point, C_FD is the capacitance of the FD node, and C_CS is the capacitance of the CS capacitor. The noise of S2 is given by

NS2_S1/S2 = q · Q_μ / (C_FD + C_CS),

where q is the elementary charge and Q_μ is the number of residual charges still remaining after the CDS of S2.
The dynamic range, in turn, grows with the total capacitance C_FD + C_CS, which sets the upper limit E_v,sat2 in Equation (3). The sizes of C_FD and C_CS therefore affect both the SNR of S2 and the DR of the pixel. From Equation (4), we see that the SNR increases with the level of signal S2 at the S1/S2 switching point. This can be achieved through the capacitance ratio C_FD/(C_FD + C_CS). However, this negatively affects the DR. Thus, the choice of capacitances is a trade-off between DR and SNR.
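The trade-off can be explored numerically with the capacitor settings reported in this paper (C_FD estimated at 12 fF, CS programmable to 50/100/150 fF, V_THS1/S2 = 1.1 V). The DR-extension expression is a simplified form that ignores changes in the noise floor:

```python
import math

def s2_at_crossing(s1_cross_v, c_fd, c_cs):
    """S2 level at the S1/S2 switching point: the same charge read
    on the shunted FD + CS node, i.e. S1 * C_FD / (C_FD + C_CS)."""
    return s1_cross_v * c_fd / (c_fd + c_cs)

def dr_extension_db(c_fd, c_cs):
    """DR gained by the overflow capacitor, 20*log10((C_FD+C_CS)/C_FD);
    a simplified figure that neglects the noise-floor change."""
    return 20.0 * math.log10((c_fd + c_cs) / c_fd)

# Sweeping the programmable CS values against C_FD = 12 fF shows the
# trade-off: a larger CS buys dynamic range but shrinks S2 (and SNR).
sweep = [(c_cs, s2_at_crossing(1.1, 12e-15, c_cs),
          dr_extension_db(12e-15, c_cs))
         for c_cs in (50e-15, 100e-15, 150e-15)]
```
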

| Chip data and comparison with prior art
We have laid out a 64 × 64 pixel array in 180 nm CMOS technology. Our pixel is able to run at 1000 event frames per second (efps), as measured in post-layout simulations. The pixel pitch is 32.3 μm, and the photodiode size is 5.4 × 5.4 μm². The layout of the pixel with all its individual blocks labeled can be seen in Figure 10A. The layout of the complete chip is displayed in Figure 10B. The area of the chip is 2.81 × 2.90 mm². The array is surrounded by row decoders on the left, row drivers on the right, column drivers on the top, and select circuits on the bottom side of the array.
Table 1 shows a comparison with prior art. Post-layout simulations show a higher power consumption, in the range of 505-555 nW per pixel, depending on the incident light power. Our excess power consumption comes mainly from the in-pixel HDR algorithm; for example, the peak power consumption occurs when the input power generates an S1 signal close to V_THS1/S2. A way to cut power consumption is to exchange the conventional source followers used in our implementation for dynamic source followers (DSFs). We have seen that the power consumption could be lowered by 32% to 45% with the adoption of DSFs, again depending on the power of the incident light. 22 As seen in Table 1, this would make our chip in 180 nm competitive with that of reference 6, manufactured in 65 nm.

FIGURE 10 Pixel and chip layouts of our HDR 4T-APS for event generation.
Compared to the prior art in Table 1, our design suffers from a low fill factor and an overhead in pixel pitch, caused mainly by the in-pixel HDR algorithm dealing with signals S1 and S2, and by a less advanced technology node compared with the solutions presented in the literature. 5,14 Nevertheless, this can be partially alleviated in two-tier vertical technologies by splitting our pixel into a pattern of 2 × 2 photosites sharing common processing circuitry, similarly to previous event pixel sensors 5,23 and focal-plane processors by previous authors. 8,19 The same methodology applied to our pixel, by sharing the subtraction unit, the comparator, and the logic unit, would increase the fill factor by four times, that is, from 2.8% up to approximately 11.2%. The remaining gap with the prior art in Table 1 would have to be tackled through a further effort in diminishing the area of the bulkiest blocks of the pixel, that is, the digital memories and the input logic, for example, through dynamic logic instead of the current static logic.
The dynamic range of our implementation falls below 120 dB, the state of the art for event cameras. The dynamic range can be extended by increasing the size of the overflow capacitor CS. This would lead to a lower fill factor and to a decrease in the conversion gain, resulting in a trade-off between dynamic range and sensitivity. Also, as stated in Akahane et al., 16 an increase in the power supply could be another method to extend the dynamic range of pixels with the overflow capacitance approach. Nevertheless, this solution cannot be adopted in a straightforward manner in our pixel due to its complexity, with its many blocks and the interactions among them. It might require further modifications to our pixel, which could result in a worse fill factor.
In summary, event generation through frame differencing with global shutter CMOS vision sensors, with HDR through an overflow capacitor, could be competitive in scenarios or applications with slow enough objects and a relatively high dynamic range, while providing low power consumption. More demanding situations with fast-moving objects and a very wide dynamic range would call for dynamic vision sensors.

| OUTLOOK AND CONCLUSION
This paper has delved into the two main CMOS options to extract on-chip dynamic information from a scene, namely, global shutter pixels and dynamic vision sensors. We provide post-layout simulations of a global shutter 4T-APS with HDR extension for event generation by means of a local algorithm on an in-pixel overflow capacitor, in order to close the HDR gap of global shutter pixels with respect to dynamic vision sensors. Our HDR mode extends the dynamic range from 53 to 85 dB, which, although it can be improved, is still far from the state-of-the-art value of 124 dB of dynamic vision sensors. The additional circuits in global shutter pixels for HDR extension and event generation hamper the fill factor and increase the noise floor. Both figures of merit can be tackled through low-power techniques like the inclusion of dynamic source followers instead of conventional ones, and the splitting of the pixel across several tiers with CMOS-3D technologies. Still, our solution provides low enough power consumption and would be competitive, when compared to dynamic vision sensors, in scenarios with slow enough moving objects and a relatively high dynamic range.

FIGURE 7 The two graphs show the relation between the input light power during two sequential frames and the output of the second frame. The simulations provide an OFF event in S2 mode with the code "1011."

FIGURE 8 False event simulations due to mismatch effects in our HDR 4T-APS solution. Full lines represent the transition in the absence of mismatch.
TABLE 1 Chip data and comparison with prior art.