Wearable Recognition System for Complex Motions Based on Hybrid Deep‐Learning‐Enhanced Strain Sensors

Wearable recognition systems based on flexible electronics present immense potential for applications in human–machine interfaces, medical care, soft robots, etc. However, they experience challenges in terms of the nonideal consistency and stability of flexible sensors, which are responsible for detecting physical signals from human motions. These challenges hinder the improvement of recognition precision and capability in the wearable systems. Furthermore, the computational consumption for the recognition increases as more sensors are used to extensively gather information for distinguishing between complex motions. Herein, a wearable recognition system based on deep‐learning‐enhanced strain sensors for distinguishing between the complex motions of the human body is presented. A strain sensor based on peak–valley microstructures is fabricated and packaged to improve consistency and stability. Moreover, a lightweight hybrid convolutional neural network long short‐term memory model is designed to lower the computational costs of the deep learning process. In particular, by designing Butterworth filtering and Z‐score normalization algorithms, the error in feature extraction caused by sensor signal fluctuation is reduced, thereby improving the recognition accuracy of the proposed wearable system to 95.72% for seven gait motions and 100% for four different continuous series of Tai Chi forms.


Introduction
[12] Although the aforementioned requirements can be achieved with commercial inertial measurement units[13,14] or pressure sensors,[15,16] their limited wearability and comfort due to stiffness render them less suitable for wearable electronic systems. In this regard, flexible strain sensors, with their suitable stretchability, offer a promising solution for detecting human body signals in wearable systems.
Flexible strain sensors are primarily classified as resistive,[17,18] capacitive,[19,20] piezoelectric,[21] or triboelectric[22] sensors based on their operating mechanisms. Among these, resistive strain sensors have received significant attention owing to their simple structure,[23] ease of fabrication,[24] convenient signal collection, and high reliability.[25] Generally, resistive strain sensors consist of a flexible substrate matrix and active materials, which are crucial for the sensing performance.[35] However, for mass production and commercial applications, the output signal fluctuation of flexible strain sensors, resulting from inconsistencies in standardized manufacturing or tensile fatigue from long-term repeated operation, remains a common challenge. Such signal fluctuations can lead to feature extraction errors, thereby compromising the recognition performance of machine learning models. Therefore, the recognition accuracy of a wearable system employing flexible sensors must be improved. To address this challenge, suitable compensation or recognition algorithms have been developed,[36-40] such as traditional machine learning algorithms (support vector machines,[41] K-nearest neighbors,[42] and linear discriminant analysis[43]) and deep learning algorithms (artificial neural networks,[44] convolutional neural networks [CNNs],[45] and the CNN-long short-term memory network [CNN-LSTM]).
[46,47] These methods have proven effective in improving the accuracy of static gesture or repetitive gait motion (walking and running) recognition. However, full-body complex motion recognition is required in real application scenarios, such as virtual reality (VR)/augmented reality (AR), medical rehabilitation training, and smart sports exercises. In such applications, to obtain sufficient sensing information, a series of dynamic physical signals must be collected from whole-body movements using large-scale sensor arrays assembled at various joints of the human body, thereby ensuring accurate detection and recognition. Compensating for the output signal fluctuations of large-scale sensor arrays then becomes more challenging, necessitating advances in multiplex readout circuits and algorithms. Therefore, the main challenge in complex motion recognition is the tradeoff between the requirement for mass data collection and the compact design of wearable systems, a balance considerably affected by the number of sensors and the algorithm design. Thus far, only a few studies have focused on the detection and recognition of complex motions based on a limited number of sensors.
This study proposes a convenient and cost-effective wearable recognition system that utilizes packaged flexible strain sensors based on double-layer microstructures of carbon nanotubes (CNTs) and Ecoflex as the sensing component. A lightweight hybrid deep learning CNN-LSTM algorithm is designed to realize the effective recognition of complex motions. Butterworth filtering and Z-score normalization are employed to mitigate sensor fluctuations. The fabricated sensors and processing circuits are integrated into a wearable system to simultaneously collect signals from the motion of different body parts (such as the neck, elbow, wrist, and knee). Upon extracting the composite spatiotemporal features of the movement signals, we achieve accurate recognition of seven gait motions and four different sets of Tai Chi forms using only four and seven discrete strain sensors, respectively (Figure S1, Supporting Information).

Mechanical and Electrical Property Characterization of the Sensor
The layer-by-layer casting process for the strain sensor is illustrated in Figure S2, Supporting Information. First, a polyvinyl chloride (PVC) mold was attached to a polyimide (PI) substrate. To obtain semicured Ecoflex for use as a flexible substrate, the prepared Ecoflex mixture was poured into the PVC mold and cured for 50 min. Next, CNTs were brushed onto the semicured Ecoflex surface to form a conducting network. Owing to the pressure applied during brushing, some of the CNTs were embedded in the Ecoflex, forming a hybrid CNT/Ecoflex thin-film layer after 3 h of full curing. Subsequently, conductive tapes were pasted onto the two ends of the hybrid layer as electrodes.[48] Thereafter, the Ecoflex casting process was repeated to encapsulate the structure and ensure sensor stability. Finally, the sensor was peeled off the PI substrate, and the PVC molds were removed to obtain the completed sensor. A digital photograph of the strain sensor is shown in Figure S3, Supporting Information. The electromechanical properties of the fabricated sensor were measured using a system equipped with an LCR meter (TH2826A) and a motorized push/pull test stand (ESM303). The test system measured the output resistance of the sensor in response to the applied strain, driven by a bias voltage of 1 V. The gauge factor (GF) of the sensor is defined as (ΔR/R0)/ε, where ε is the tensile strain applied to the sensor, ΔR is the change in resistance, and R0 is the initial resistance of the sensor.
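The gauge-factor definition above can be captured in a small helper; the function name and the resistance values in the usage check are illustrative, not measured data from the paper.

```python
def gauge_factor(r0, r, strain):
    """Gauge factor GF = (ΔR/R0) / ε, as defined in the text.

    r0: initial resistance, r: resistance under strain,
    strain: applied tensile strain as a fraction (e.g., 0.65 for 65%).
    """
    return ((r - r0) / r0) / strain
```

For example, a hypothetical sensor whose resistance triples at 100% strain has ΔR/R0 = 2.0 and therefore GF = 2.0.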
Figure 1a presents the relative resistance ratios (ΔR/R0) as a function of the applied stretching strain for different CNT weight densities (1.163, 1.395, 1.682, and 1.860 mg cm−2), showing that the sensitivity of the sensor can be adjusted via the CNT weight density. Evidently, the strain sensor with a CNT density of 1.628 mg cm−2 exhibited the best strain sensitivity: 4.7 in the range of 0-65% (region I) and 24.9 in the range of 65-200% (region II). As the CNT density increases, the CNT overlap within the CNT/Ecoflex hybrid layer increases, creating more conductive paths. Under strain, more conductive paths are destroyed, resulting in a larger change in resistance and an increase in sensitivity. However, when the CNT density exceeds a certain value, the large number of overlapping CNTs forms an excessive number of conductive paths. This leads to a relative decrease in the fraction of conductive paths destroyed under the same strain, thus reducing the sensitivity of the sensor. In general, the sensitivity of the device tends to first increase and then decrease as the CNT density increases. The experimental results in Figure 1a show that the sensitivity reaches its maximum at a CNT density of 1.628 mg cm−2 and is similar at CNT densities of 1.395 and 1.860 mg cm−2.
The relative resistance ratios of the five sensor samples were similar, indicating the satisfactory consistency of the packaged sensor (Figure 1b). However, certain discrepancies in consistency persisted, which could be mitigated by designing appropriate signal preprocessing algorithms. The average strain sensitivities of the five sensors were 6.0 in the 0-65% range and 21.9 in the 100-200% range, indicating adequate strain-sensing capabilities.
Figure S4a, Supporting Information, illustrates the strain-sensing mechanism of the sensor, which depends on the synergetic concave-convex conductive areas formed by overlapping CNTs in the CNT/Ecoflex hybrid thin-film layer. Initially, in region I (0 < ε < 65%), the concave regions predominantly extended along the stretching direction, whereas the convex regions maintained their initial shapes, resulting in a relatively small change in resistance. As the applied strain increased further in region II (65% < ε < 200%), the convex regions were gradually stretched, resulting in a relatively large change in resistance. This implies that the random peak-valley microstructures of the hybrid CNT/Ecoflex thin-film layer play a crucial role in the sensor's performance. Scanning electron microscopy (SEM) images of the strain sensor are shown in Figure S4b, Supporting Information. Figure 1c presents the mechanical hysteresis stress-strain curves of the sensor at 50%, 100%, and 150% strain; evidently, the sensor exhibited a linear elastic region within 150% strain and almost completely returned to the initial point for each strain during the loading and unloading cycles. The electrical hysteresis across the same strain ranges (50%, 100%, and 150%) is plotted in Figure 1d. Within 50% strain, the strain sensor exhibited nearly identical curves during the stretch-release cycle, as shown in the inset of Figure 1d. The long-term cyclic stability under a 40% loading strain over 5000 stretch cycles is presented in Figure 1e, demonstrating that the fabricated strain sensor exhibits excellent stability.
Real-time physical signal monitoring of human motion was realized by attaching the sensor to human body joints, such as the neck, wrist, elbow, and knee, as demonstrated in Figure 1f-i. Note that the range of joint bending is 0°-60° for the neck and 0°-90° for the elbow, wrist, and knee. Evidently, the responses to the various motions differed: owing to the different stretching of the strain sensor in each motion, the relative resistance ratios (ΔR/R0) were approximately 1.8, 0.9, 2.4, and 1.4 for neck, wrist, elbow, and knee bending, respectively. These results demonstrate highly sensitive real-time detection, providing sufficient information to the wearable recognition system.
Moreover, the reusability of the flexible strain sensor was validated through ten adhesion and peel-off tests. Figure S5, Supporting Information, shows the bending signals of the right elbow for the 1st, 4th, 7th, and 10th adhesion and peel-off tests; each test comprises five cycles of elbow bending. The output signals were similar, indicating the excellent reusability of the strain sensor.

Wearable Recognition System
We constructed a wearable system for recognizing complex motions, as shown in Figure 2a. Several strain sensors were mounted on the required joints of the body using double-sided tape. A customized wireless circuit combining multiple modules, including a signal readout circuit, microcontroller unit (MCU), Bluetooth module, power management module, and serial module, performed multiple functions, including multichannel signal acquisition (up to seven channels), signal amplification, analog-to-digital conversion (ADC), and wireless transmission. Moreover, the signal readout circuit determined the output voltage of each sensor, and the built-in ADC module of the MCU converted the gathered output signals into digital signals, which were subsequently transferred to a computer via Bluetooth for further signal preprocessing. Figure 2b depicts the flowchart of the signal preprocessing algorithm used to compensate for the sensor signal fluctuations caused by variances in consistency or sensor fatigue. To filter noise and reduce interference in recognition, a moving average filter (with a window size of 5) and a first-order Butterworth low-pass filter were devised. In brief, the moving-average filter considers N consecutive sampling data points within a queue. Each time a new data point is added to the queue, the data point at the top is released. Filtering is achieved by arithmetically averaging the N data points in the queue, which helps reduce periodic interference. Evidently, the amplitude-frequency characteristic curve of the Butterworth filter exhibits a fairly flat passband and a rapid amplitude decline beyond the cutoff frequency, thus effectively suppressing high-frequency noise. Because the frequency of most of the selected motions is less than 3 Hz, the cutoff frequency of the Butterworth filter was set to 3 Hz. Additionally, data normalization scales features to the same order of magnitude, thus allowing faster convergence of the gradient descent approach to the minima, thereby improving model performance and classification accuracy. Z-score normalization was used to process the filtered data, as defined in Equation (1):

z = (x − μ)/σ (1)

where x is the filtered data, μ is its mean, and σ is its standard deviation. The signal preprocessing algorithm successfully enhances the signal quality of the sensor.
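The preprocessing chain described above (moving average with window 5, first-order Butterworth low-pass at 3 Hz, then Z-score normalization) can be sketched in NumPy. This is a minimal sketch, not the authors' code: the 100 Hz sampling rate is an assumption (the paper does not state one), the moving average uses a centered window for simplicity rather than the causal queue described, and the first-order Butterworth filter is implemented directly via the bilinear transform instead of a library call.

```python
import numpy as np

def moving_average(x, n=5):
    # Centered N-point moving average (window size 5 as in the text)
    kernel = np.ones(n) / n
    return np.convolve(x, kernel, mode="same")

def butter_lowpass_1st(x, fc, fs):
    # First-order Butterworth low-pass, discretized via the bilinear
    # transform: y[i] = b0*x[i] + b1*x[i-1] - a1*y[i-1]
    k = np.tan(np.pi * fc / fs)
    b0 = b1 = k / (k + 1.0)
    a1 = (k - 1.0) / (k + 1.0)
    y = np.empty(len(x), dtype=float)
    y[0] = b0 * x[0]
    for i in range(1, len(x)):
        y[i] = b0 * x[i] + b1 * x[i - 1] - a1 * y[i - 1]
    return y

def zscore(x):
    # Z-score normalization: z = (x - mu) / sigma (Equation (1))
    return (x - np.mean(x)) / np.std(x)

def preprocess(x, fc=3.0, fs=100.0):
    # Full chain: smoothing -> low-pass filtering -> normalization
    return zscore(butter_lowpass_1st(moving_average(x), fc, fs))
```

The filter has unity DC gain, so slow joint-bending signals pass through essentially unchanged while components above 3 Hz are attenuated.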
Figure 2c illustrates the topology of the lightweight CNN-LSTM hybrid neural network. CNN-LSTM combines the benefits of a CNN and an LSTM, which extract essential spatial and temporal information from the data and improve neural network learning. To build a lightweight neural network and minimize computational costs, one CNN layer and one LSTM layer were employed. The lightweight CNN-LSTM was developed to analyze the large amount of signal information from the detecting sensors for recognizing complex motions. A skip connection structure was designed to compensate for critical features lost during the network learning process: the output vector of the LSTM layer and the input vector of the neural network were overlaid at the input of the fully connected layer. Equation (4) defines the input of the fully connected layer x_FC:

x_FC = o_LSTM ⊕ x (4)

where o_LSTM is the output of the LSTM layer, x is the input of the CNN-LSTM neural network, and ⊕ denotes the overlay operation. The neural network output was converted using a sigmoid activation function, and the class with the highest conversion value was considered the prediction result for multiclassification. The sigmoid function is defined in Equation (5) as

S(η_i) = 1/(1 + e^(−η_i)) (5)

where i is a category in the multiclassification, η_i is the neural network output value of this category, and S(η_i) is the conversion value of this category, which can be considered a prediction of probability.
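The topology described above can be sketched as a PyTorch module. This is an illustrative reconstruction, not the authors' implementation: the convolution width, kernel size, and sequence length are assumptions, and since the text's "overlaid" skip connection could mean addition or concatenation, concatenation of the LSTM output with the flattened raw input is used here.

```python
import torch
import torch.nn as nn

class LiteCNNLSTM(nn.Module):
    """Sketch of the lightweight CNN-LSTM with a skip connection.

    One CNN layer and one LSTM layer (64 hidden units), per the text;
    the skip connection feeds the raw input alongside the LSTM output
    into the fully connected layer (Equation (4)).
    """
    def __init__(self, n_channels=4, seq_len=100, hidden=64, n_classes=7):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(input_size=16, hidden_size=hidden,
                            batch_first=True)
        # skip connection: LSTM output concatenated with flattened input
        self.fc = nn.Linear(hidden + n_channels * seq_len, n_classes)

    def forward(self, x):                    # x: (batch, channels, seq_len)
        h = self.conv(x)                     # (batch, 16, seq_len // 2)
        h, _ = self.lstm(h.transpose(1, 2))  # (batch, seq_len // 2, hidden)
        o_lstm = h[:, -1, :]                 # last time step of the LSTM
        x_fc = torch.cat([o_lstm, x.flatten(1)], dim=1)  # Equation (4)
        return self.fc(x_fc)                 # per-class scores (eta_i)
```

The per-class scores returned here correspond to η_i in Equation (5); applying the sigmoid conversion and taking the maximum yields the predicted class.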
The cross-entropy loss of the model is calculated using Equations (6) and (7):

L = −Σ_i y_i log(x_i) (6)

y_i = 1 if class i is the true class of the sample, and y_i = 0 otherwise (7)

where x_i denotes the predicted probability of a sample belonging to class i. Finally, a gradient descent algorithm was used to optimize the model parameters.
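The conversion and loss described in Equations (5)-(7) can be sketched in NumPy; the function names are illustrative, and the small epsilon guarding the logarithm is a standard numerical-stability addition not mentioned in the paper.

```python
import numpy as np

def class_probs(eta):
    # Per-class sigmoid conversion S(eta_i) = 1 / (1 + exp(-eta_i))
    # (Equation (5)); the class with the largest value is the prediction.
    return 1.0 / (1.0 + np.exp(-np.asarray(eta, dtype=float)))

def cross_entropy(y_true, p_pred, eps=1e-12):
    # L = -sum_i y_i * log(x_i) (Equations (6)-(7)), with y_true one-hot
    # and p_pred the predicted per-class probabilities.
    y_true = np.asarray(y_true, dtype=float)
    p_pred = np.asarray(p_pred, dtype=float)
    return -np.sum(y_true * np.log(p_pred + eps))
```

Because y_i is one-hot, the loss reduces to the negative log probability assigned to the true class, which gradient descent then drives toward zero.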

Gait Motion Recognition
To demonstrate the performance of the designed recognition system with preprocessing and the hybrid lightweight CNN-LSTM algorithm, signals were collected (Note S1, Supporting Information) from four attached sensors (left knee, right knee, left elbow, and right elbow) during seven gait motions (slow walk, walk, fast walk, uphill, run, upstairs, and downstairs), as shown in Figure 3a. The collected signals were filtered and normalized; subsequently, they were split into training and testing sets in a ratio of 80:20 for CNN-LSTM model training and testing, respectively. Note that neural network training and testing are influenced by the network parameters, particularly the number of hidden layers and hidden-layer neurons, which significantly affect the network's learning speed and classification results. Therefore, these parameters must be optimized. First, as shown in Figure 3b(i) and Figure S6, Supporting Information, the classification accuracy increased with the number of neurons and plateaued at approximately 88.9%. Consequently, the number of neurons in the hidden layer was set to 64. Then, Figure 3b(ii) shows that the optimal classification accuracy was achieved with one hidden layer. Therefore, the improved CNN-LSTM network contained one hidden layer with 64 neurons. Figure 3b(iii) shows the training loss and test accuracy for these settings: as the number of training batches increased, the training loss decreased, whereas the test accuracy increased and gradually saturated.
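The 80:20 split mentioned above can be sketched as follows; the helper name and the fixed seed are illustrative choices, not details from the paper.

```python
import numpy as np

def train_test_split(X, y, test_ratio=0.2, seed=0):
    # Shuffle sample indices, then hold out 20% for testing,
    # matching the 80:20 split used in the text.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_test = int(len(X) * test_ratio)
    test, train = idx[:n_test], idx[n_test:]
    return X[train], y[train], X[test], y[test]
```

Shuffling before splitting matters here: motion recordings are collected sequentially, so an unshuffled split could place whole motion classes entirely in one partition.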
A comparison of the different preprocessing algorithms is shown in Figure S7, Supporting Information. Figure 4a illustrates the effectiveness of the classification results achieved using Butterworth filtering and Z-score normalization for signal preprocessing. The confusion matrix without signal preprocessing, shown in Figure 4a(i), reveals poor classification performance for slow walking, upstairs, and downstairs (accuracy <85%), because raw data without preprocessing contribute little to model learning. With Butterworth filtering (confusion matrix in Figure 4a(ii)), the classification accuracy of the various motions increases up to 93.1%, but the classification of upstairs and uphill motions remains unsatisfactory because these two motions are similar in nature; distinguishing them using only data from the elbow and knee sensors is challenging. To overcome this, both Butterworth filtering and normalization were applied, significantly improving the classification accuracy, as demonstrated in Figure 4a(iii). This indicates that normalization can substantially enhance the model's classification performance. The ROC curve illustrated in Figure 4a(iv) is an effective assessment tool for classification performance: the ordinate represents the true positive rate (TPR), the proportion of positive samples correctly classified as positive, whereas the abscissa represents the false positive rate (FPR), the proportion of negative samples incorrectly classified as positive. The area under the curve (AUC) is used to evaluate the classification performance of the model. The ROC curve obtained using Butterworth filtering and Z-score normalization was closest to the upper left corner, with an AUC of 0.9988, indicating the highest classification performance. According to the bar chart of the classification results in Figure 4a(v), classification with Butterworth filtering and normalization yielded the best results, with an overall accuracy of 95.72%, whereas classification without signal preprocessing performed worst, with an overall accuracy of 86.29%. These findings further confirm the effectiveness of Butterworth filtering and Z-score normalization.
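The AUC described above (the area under the TPR-vs-FPR curve) can be computed without plotting via the equivalent rank formulation; this is a general-purpose sketch for a binary (one-vs-rest) setting, not the authors' evaluation code.

```python
import numpy as np

def roc_auc(y_true, scores):
    """AUC via the rank (Mann-Whitney) formulation, which equals the
    area under the ROC curve of TPR versus FPR."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    # Fraction of (positive, negative) pairs where the positive sample
    # is scored higher; ties count as half.
    correct = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (correct + 0.5 * ties) / (len(pos) * len(neg))
```

A perfect classifier scores 1.0 and a random one 0.5, which is why the paper's AUC of 0.9988 indicates near-perfect separation.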
To validate the proposed algorithm model, the participants executed the seven motions continuously. The corresponding signals were fed into the trained model for recognition after Butterworth filtering and Z-score normalization. Figure 4b shows the signals and results of the continuous motion recognition; classification errors occurred mainly during the transition phases between two motions.
To further demonstrate the recognition ability of the proposed system for complex motions, two new motions were incorporated into the gait recognition: walking with a phone (WWAP, i.e., holding a mobile phone to the ear with the right elbow bent while walking) and 40° uphill walking. As shown in Figure 4c(i) and (ii), the proposed system achieved an overall accuracy of 95.08%, and the two new motions were classified completely correctly, demonstrating excellent recognition performance for complex motions.

Tai Chi Gait Recognition
The Chinese traditional kung fu art of Tai Chi was used to demonstrate the capability of the recognition system in recognizing complex human motions. The dataset consisted of signals from four series of Tai Chi forms (Note S2, Supporting Information), and the recognition of the four Tai Chi forms is shown in Figure 5. Figure 5a presents the output voltage signals recorded by the seven strain sensors placed on the neck, left elbow, right elbow, left wrist, right wrist, left knee, and right knee during the four Tai Chi forms. The signals were filtered and normalized before being separated into training and testing sets in a ratio of 80:20 to train and test the CNN-LSTM network, respectively. Figure 5b(i) and (ii), and Figure S7, Supporting Information, show the comprehensive optimization procedure for the network parameters. In terms of computational costs and classification performance, the optimal numbers of hidden layers and neurons were 1 and 64, respectively. Figure 5b(iii) illustrates the training loss and test accuracy with the set parameters: the training loss decreased and the test accuracy increased rapidly within the first 250 batches, and both tended to saturate over 250-1500 batches, indicating that the CNN-LSTM model with the chosen parameters performs optimally. Figure S8, Supporting Information, compares the different preprocessing algorithms. The confusion matrices for the unprocessed signals, Butterworth filtering alone, and Butterworth filtering with Z-score normalization are shown in Figure 5c(i)-(iii); all four Tai Chi forms achieved excellent classification accuracy (>98%). Moreover, the recognition accuracy is highly related to the number of sensors, which supply ample information to the deep learning model, whereas the developed hybrid lightweight CNN-LSTM algorithm is crucial for simplifying the computation on the signal data acquired from multiple sensors. Figure 5c(iv) and (v) show the overall and individual-form classification accuracies. Classification using Butterworth filtering and normalization yielded the maximum accuracy for all motions, with an overall accuracy of 100%, indicating that the proposed wearable system based on strain sensors is well suited for complex full-body motion recognition. Table 1 compares wearable motion recognition systems based on flexible sensors reported in the literature. Notably, this study combines a tailored circuit, signal preprocessing, a deep learning algorithm, and an integrated wearable system, enabling high-accuracy recognition of complex full-body motions, including repetitive and nonrepetitive motions, which has not been reported in previous research.

Conclusion
In summary, we presented a stretchable and flexible strain sensor composed of CNTs and Ecoflex. The strain sensor was formed from a hybrid CNT/Ecoflex layer and encapsulated with Ecoflex. The conductive network of CNTs enabled the sensor to exhibit consistent conductivity, high strain sensitivity (GF = 24.9), and remarkable stability over 5000 stretch/release cycles. A lightweight hybrid CNN-LSTM model was developed to implement a simple and low-cost wearable recognition system based on the fabricated strain sensors for distinguishing between complex human body motions. Butterworth filtering and Z-score normalization algorithms were designed to filter out noise and compensate for variances in sensor consistency. This approach yielded high-accuracy recognition of complex full-body motions, with an accuracy of 95.72% for gait motions and 100% for Tai Chi motions. Overall, this study offers a practical method for developing wearable systems based on flexible strain sensors for intricate recognition applications in healthcare, intelligent robot control, and human-machine interfaces (HMIs).

Experimental Section
Materials: CNTs with an average diameter of 50 nm and lengths of less than 10 μm were obtained from XFNANO Inc., Nanjing, China. Ecoflex 00-30 was obtained from Smooth-On, Inc., Macungie, USA. Conductive tape was obtained from MaoYe, Inc., Shenzhen, China. All materials were used as received, without further treatment. Motion Complexity: Complex motions are generally distinguished by three fundamental characteristics: 1) time-varying dynamic motions comprising a series of motions; 2) cooperative motions of numerous body parts; and 3) nonrepetitive activities. Minimal motion complexity indicates that the motion lacks all of these characteristics (e.g., static hand gestures, static postures, and repeated joint bending). Medium motion complexity indicates that the motion has some of these characteristics (e.g., dynamic hand gestures and repeated gait motions). High motion complexity indicates that the motion exhibits all of these characteristics (e.g., Tai Chi motions).
Preparation of the Flexible Sensor: The preparation process for the flexible sensor is shown schematically in Figure S2, Supporting Information. First, the PVC mold (600 μm deep, fabricated by laser cutting) was pasted onto the PI substrate. Ecoflex Part A (50 mg) and Part B (50 mg) were then mixed in a 1:1 mass ratio and placed in a vacuum oven to remove bubbles from the Ecoflex mixture. Subsequently, the prepared Ecoflex mixture was poured into the PVC mold and cured for 50 min to obtain a semicured Ecoflex film. Thereafter, CNTs (5-8 mg) were brushed onto the semicured Ecoflex surface to form a conducting network. Owing to the pressure applied during brushing, some of the CNTs were embedded into the Ecoflex, forming a hybrid CNT/Ecoflex thin-film layer after 3 h of full curing. Subsequently, the electrodes were obtained by cutting the conductive tape into the required shape and attaching it to the two ends of the hybrid layer. Two copper wires were connected to the electrodes for electrode extraction. Thereafter, another PVC mold was pasted onto the sensor, Ecoflex was poured into this mold, and the assembly was cured for 3 h to encapsulate the sensor and ensure its stability. Finally, the sensor was peeled off the PI substrate, and the PVC molds were removed to obtain the completed sensor. All fabrication steps were completed at ambient conditions (20 °C, 50% humidity).
Characterization and Measurement: The morphology of the sensor was observed using SEM (Helios 5 CX, Thermo Scientific Inc.). A motorized push/pull test stand (ESM303, Mark-10 Co.) was used for the mechanical measurements, and an LCR meter (TH2826A, Tonghui Electronic Co., Ltd.) was used for the electrical measurements.
Signal Preprocessing and Deep Learning: The signal preprocessing and deep learning programs were built in PyCharm (Community Edition 2021.3.1). Construction, training, and testing of the neural network were performed using PyTorch (version 1.10.0).
Informed consent was obtained from the participants who volunteered to perform all experiments and studies (i.e., wearable testing and image publication).All testing reported conformed to the ethical requirements of Southeast University.No animal or medical experiments were performed in this study.

Figure 1 .
Figure 1. a) Relative resistance ratios (ΔR/R0) versus stretching strain of the sensors with different CNT weight densities (1.163, 1.395, 1.682, and 1.860 mg cm−2) within 200% strain. b) Consistency comparison of sensors with a CNT density of 1.628 mg cm−2. c,d) Mechanical and electrical hysteresis characteristics of the sensor for stretching strains of 50%, 100%, and 150%. e) Stability test of the sensor. f-i) Bending signal monitoring for the neck, wrist, elbow, and knee, respectively.

Figure 2 .
Figure 2. a) Schematic of the wearable recognition system with strain sensors.b) Signal preprocessing flowchart, and the signal preprocessing results with and without Butterworth filtering and Z-score normalization.c) Diagram of the lightweight CNN-LSTM hybrid neural network.

Figure 3 .
Figure 3. a) Outputs from four sensors (placed on the left knee, right knee, left elbow, and right elbow) during seven gait motions: slow walk, walk, fast walk, uphill, run, upstairs, and downstairs. b) CNN-LSTM parameter optimization, including the classification accuracy with (i) different numbers of layers and (ii) different numbers of neurons; (iii) training loss and test accuracy with the chosen parameters.

Figure 4 .
Figure 4. a) Gait motion recognition results, including (i) the confusion matrix without signal preprocessing, (ii) the confusion matrix using Butterworth filtering, (iii) the confusion matrix using Butterworth filtering and Z-score normalization, (iv) the ROC curve, and (v) a bar chart of the classification results. b) Signal and result of continuous motion recognition: the top panel shows the right elbow signal for the various motions, and the bottom panel shows the corresponding classification results. The ordinate is the predicted result; samples marked in green indicate correct classification, and samples marked in red indicate incorrect classification. c) Recognition results for nine gait motions, including (i) the confusion matrix and (ii) a bar chart of the classification results.

Figure 5 .
Figure 5. Tai Chi motion recognition using the deep learning algorithm. a) Outputs from seven sensors: neck, left elbow, right elbow, left wrist, right wrist, left knee, and right knee. b) CNN-LSTM parameter optimization, including the classification accuracy with (i) different numbers of layers and (ii) different numbers of neurons; (iii) training loss and test accuracy with the selected parameters. c) Tai Chi motion recognition results, including (i) the confusion matrix without signal preprocessing, (ii) the confusion matrix using Butterworth filtering, (iii) the confusion matrix using Butterworth filtering and Z-score normalization, (iv) a bar chart of the classification results, and (v) the classification accuracy for each Tai Chi form and the overall accuracy. The three distinct preprocessing methods are illustrated.

Table 1 .
Summary of wearable motion recognition systems based on flexible sensors.