A Review of Hand Gesture Recognition Systems Based on Noninvasive Wearable Sensors

Hand gestures are one of the essential ways for humans to convey information and express intuitive intention. Their high degree of differentiation, substantial flexibility, and robustness of information transmission make hand gesture recognition (HGR) a research hotspot in the fields of human–human, human–computer, and human–machine interaction. Noninvasive, on-body sensors can monitor, track, and recognize hand gestures for applications such as sign language recognition, rehabilitation, myoelectric control of prosthetic hands, and human–machine interfaces (HMIs). This article systematically reviews recent achievements in noninvasive upper-limb sensing techniques for HGR, multimodal sensor fusion to gain additional user information, and wearable gesture recognition algorithms that deliver more reliable and robust performance. Research challenges, progress, and emerging opportunities for sensor-based HGR systems are also analyzed to provide perspectives for future research.


Introduction
Wearable hand gesture systems have been widely implemented for decoding human movement and identifying intention, owing to their ever-decreasing cost and fewer safety concerns compared with invasive sensing methods. Integrating hand gesture recognition (HGR) sensors into wearable systems provides novel solutions to improve the quality of life of disabled people, [1] help assess rehabilitation progress, [2] and retain intention and neural motor control for amputees. [3] As illustrated in Figure 1, a wearable human–machine interface (HMI) system with one or multiple types of skin-mounted sensors can capture and transmit upper-limb gestures from an operator to a connected machine, which processes and decodes the gesture-related biosignals using predictive analytics to recognize human intentions and generate corresponding commands for machine response. [4,5] These wearable sensors include inertial measurement units (IMUs) for sensing arm/finger movements, flexible strain sensors for sensing physical movements, surface electromyography (sEMG) for sensing electrical signals from muscle contractions, etc. An example of an HMI application, in which a robot mimics the gestures of a human through a sensing system, is shown in Figure 2. [6] To enable natural and multifunctional command, conventional machine learning (ML) approaches, that is, classification and regression, have been widely used to decode a user's movement intentions from the biosignals provided by wearable sensors. Predictive and statistical learning approaches such as expectation maximization (EM) and maximum a posteriori estimation are used to identify classes of movements and to continuously estimate the joint kinematics of human arms and hands. [7] To better use the sensory information, deep learning (DL) techniques are also widely employed for hand gesture classification, using complex multilayered neural networks without requiring a priori knowledge.
Researchers have recently developed various upper-limb wearable devices, [8,9,12-14] including wearable data gloves and wrist/armbands. [15] Wrist- or arm-worn devices may include several sensors to continuously monitor various biosignals carrying important indications (e.g., muscle volume change, skin vibration, blood pressure, etc.) and environmental conditions (e.g., subtle arm movements via IMU). Recent advances in wearable electronics and materials research have allowed wrist/arm devices to take the form of wristbands, rings, smartwatches, and bracelets for HGR. Some commercially available products of this type, for example, the Myo Armband, can acquire enriched sEMG signals from muscle activation to detect intended hand gestures. [16] Another method for acquiring signals is a data glove, which is equipped with sensors to detect the physical movements of a hand, for example, the bending and abduction of the fingers, the flexion and abduction of the wrist, etc. [17,18] However, although data-glove-based systems can achieve a high classification rate, [19] they can be cumbersome for practical daily use. [20,21] To differentiate it from some existing reviews, [18,21] this review focuses explicitly on noninvasive sensor-based systems for upper-limb motion detection. By narrowing the scope, the review aims to provide a more in-depth analysis of three aspects: 1) state-of-the-art sensing interfaces; 2) multimodal fusion techniques that provide additional information; and 3) data processing approaches, including ML and DL, that enhance the reliability of estimation outcomes.
The remainder of this review is organized as follows. Section 2 investigates noninvasive wearable sensing technology, along with multisensor fusion principles. Section 3 presents acquisition methods, feature extraction techniques, and conventional decoding algorithms for classification. Section 4 analyzes the applications of HGR systems. Section 5 describes future research directions and challenges of wearable sensor-based HGR systems.

Sensing Modalities
Hand gesture movements are caused by muscle activity and tendon movement in the upper limb, accompanied by blood vessel distortion and bone motion. Various sensing modalities can record the alterations of biological and physical characteristics during hand and finger motions. These alterations can be measured using electrical, mechanical, acoustical/vibrational, or optical sensing techniques in the form of wristbands and data gloves. As illustrated in Figure 3, different sensing modalities in the form of a wrist/armband are used to measure the biosignals related to upper-limb motions. Some of them, such as IMUs, are also used in data gloves.

(Figure caption: Wearable system, consisting of skin-mounted sensors and a customized interface circuit integrated with ML algorithms, to accurately recognize hand gestures. Adapted with permission. [6] Copyright 2021, American Chemical Society.)

Electrical Sensing
Muscle contraction can be measured by electrical sensing techniques using either intrinsically generated electrical signals (electromyography [EMG]) or electrical impedance tomography (EIT), which measures the response to an externally applied electric current (within a safe range). By measuring these electric signals, EMG and EIT sensors can identify upper-limb movements, including compression, tension, and twist, caused by hand/wrist movements. [24-27] For instance, in Figure 4, stretchable and multifunctional sEMG electrodes were used to record muscle activities and joint motions. [28] However, sEMG signals can be easily influenced by numerous disturbances in practical environments, which may affect feature extraction and hence hand gesture classification/regression. Various confounding factors affect EMG signals, for example, electrode shift, limb position, muscle contraction intensity, and concept drift. [29] Surface EMG signals are usually classified for control purposes using the electromyographic control technique known as myoelectric control systems (MCSs). Among the main potential applications of MCSs are powered upper-limb prostheses and electric-powered wheelchairs. Dwivedi et al. [30] used EMG and fiducial-marker-based tracking to capture the myoelectric activations of a user during the execution of specific hand gestures. The device was demonstrated to be capable of controlling a dexterous robot arm-hand system and translating the gestures into the desired grasp type for the robot hand.
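A myoelectric control pipeline of this kind typically windows the sEMG stream, extracts amplitude features, and feeds them to a classifier such as LDA or SVM. The following minimal sketch uses a nearest-centroid rule as a simple stand-in for those classifiers; the channel count, window length, gesture labels, and synthetic signals are illustrative assumptions, not taken from any cited study.

```python
import numpy as np

def rms_features(window):
    """Root-mean-square amplitude of each sEMG channel in a window (channels x samples)."""
    return np.sqrt(np.mean(window ** 2, axis=1))

def nearest_centroid_predict(centroids, feature_vec):
    """Assign the gesture whose class centroid is closest in feature space."""
    labels = list(centroids)
    dists = [np.linalg.norm(feature_vec - centroids[g]) for g in labels]
    return labels[int(np.argmin(dists))]

# Synthetic 2-channel sEMG windows: "rest" has low amplitude, "fist" high amplitude.
rng = np.random.default_rng(0)
rest = rng.normal(0.0, 0.05, size=(2, 200))
fist = rng.normal(0.0, 0.50, size=(2, 200))

centroids = {"rest": rms_features(rest), "fist": rms_features(fist)}
new_window = rng.normal(0.0, 0.45, size=(2, 200))   # unseen window, closer to "fist"
print(nearest_centroid_predict(centroids, rms_features(new_window)))
```

In a real MCS, the centroids would be replaced by a trained classifier and the features computed over a sliding window of the live sEMG stream.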
EIT is a noninvasive imaging method that detects the inner impedance distribution and conductivity changes of a particular tissue. To obtain the internal resistivity of the tissue, EIT passes high-frequency, low-amplitude currents between pairs of electrodes on the body and simultaneously records the resulting potentials between all other pairs of electrodes. In parallel with the research into EIT for medical and physiological applications, there have been studies on the use of EIT for HGR. Zhang et al. [31] developed a system named Tomo, based on EIT, to recognize hand gestures. This system has high accuracy, but the electrodes require tight contact with the skin for proper operation. EIT is widely used in HGR as it offers more design flexibility than sEMG in terms of the number of electrodes and the measurement pattern. [32,33]

Mechanical Sensing
Mechanical sensing can be categorized into four types: force myography (FMG), IMU, strain sensing, and flex sensor sensing. FMG, sometimes referred to as muscle pressure mapping (MPM) or topographic force mapping (TFM), measures the force variation arising from the volumetric and stiffness changes of muscle movements to obtain information about the underlying musculotendinous complex. Gesture recognition devices based on FMG offer a wide range of sensor options, such as capacitive, [34] piezoresistive, [35] pneumatic-based, [36] and piezoelectric [37] sensors, as well as force-sensitive resistors (FSRs). [38] As shown in Figure 5, 4 × 3 capacitive sensor arrays are sewn into denim fabric using conductive wires. [39] The system can recognize gestures of varying complexity with an average accuracy of 99% and minimal training, using a hidden Markov model and dynamic time warping. Similarly, Wong et al. [40] developed a capacitance-based glove for sign language recognition (SLR). In this system, 15 features are extracted from the signals, and a support-vector machine (SVM) and k-nearest neighbors (KNN) are used to classify various letters. Chapman et al. [38] designed an armband composed of six FSRs for HMI to decode four different gestures (pinch, power, tripod, extension) using linear discriminant analysis (LDA), SVM, and random forest (RF), and concluded that FSR-based interfaces performed better in recognizing grasp motions (accuracy of 91.2%) than sEMG-based interfaces (accuracy of 84.6%). Despite the high performance of these mechanical sensing methods in recognizing individual gestures, they are still incapable of acquiring movement information from deep muscles. [41] IMUs, which usually consist of 3-axis accelerometers (ACCs) and/or 3-axis gyroscopes (GYRs), are used to record the arm's overall movement, such as compensatory balancing movements and limb orientation, and even information related to fatigue, smoothness, and the velocity of arm movements. [42] However, IMUs are rarely used on their own for hand and wrist gesture recognition, as kinematic and orientation information alone is not sufficient to detect gesture patterns.

(Figure 4 caption: Basic layouts of the stretchable, multifunctional sEMG electrodes. a) Stretchable, multifunctional epidermal sEMG adhered to a stretchable, transparent medical tape as a soft sensor patch. b) Gel-based electrodes and the stretchable, skin-interfaced sEMG electrodes mounted closely on the forearm. Reproduced with permission. [28] Copyright 2021, Wiley-VCH.)
Strain sensors measure the elastic deformation of tendons, skin, and muscles to recognize hand gestures. [43] The advantage of strain sensors is that they often have better resolution [44] and are more durable (less prone to electromigration and environmental electrical interference). [45] The authors of another study [46] designed a novel system for monitoring hand gestures based on a comfortable wristband equipped with stretchable strain gauge sensors. The system combines low power consumption with stretchable devices mounted on a breathable cotton wristband, reaching a reproducibility of over 98% and making this device unique in the landscape of gesture recognition. The drawback of strain sensors is that their desirable characteristics (affordable, invisible, thin, lightweight, stretchable, and attachable) are typically not achievable concurrently, which prevents their practical use. Wearable strain sensors are comprehensively studied and reported in another study. [47] Flex sensors are also commonly used as strain sensors to measure the angle of bending. [48] For example, Bhuyan et al. [49] attached five flex sensors, one on each finger, to recognize hand and finger gestures. As shown in Figure 6, flex sensors are usually embedded in a smart glove, one per finger, where each finger gesture is observed through the flexion of the flex sensors. [50] However, for a short finger, the flex sensor's midpoint may no longer lie precisely on the proximal interphalangeal (PIP) joint, which affects the accuracy of the measured bending angle.
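A resistive flex sensor of this kind is typically read through a voltage divider and mapped to a bend angle with a simple calibration. The sketch below illustrates that idea; the resistor values, ADC range, and two-point calibration are hypothetical assumptions for illustration, not parameters of any cited glove.

```python
def adc_to_resistance(adc_counts, v_ref=3.3, adc_max=4095, r_fixed=10_000.0):
    """Voltage divider: flex sensor on top, fixed resistor to ground; the ADC reads the midpoint."""
    v_out = adc_counts / adc_max * v_ref
    # R_flex = R_fixed * (V_ref - V_out) / V_out
    return r_fixed * (v_ref - v_out) / v_out

def resistance_to_angle(r_flex, r_flat=25_000.0, r_bent=100_000.0, angle_bent=90.0):
    """Linear interpolation between a flat (0 deg) and fully bent (90 deg) calibration point."""
    frac = (r_flex - r_flat) / (r_bent - r_flat)
    return max(0.0, min(angle_bent, frac * angle_bent))

r = adc_to_resistance(1000)      # resistance implied by this ADC reading
angle = resistance_to_angle(r)   # bend angle from the assumed calibration
print(round(r), round(angle, 1))
```

A linear map is a simplification; in practice the resistance-to-angle curve is nonlinear and per-finger calibration is usually needed, as the text notes.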
The unique operating constraints of a wearable sensor require a careful choice of substrate materials with critical properties. With advances in materials science, wearable strain sensors consist of numerous ultrathin layers of various synthetic polymers and composite materials assembled in a complex but affordable process. Many polymers, such as silicone, polydimethylsiloxane (PDMS), polylactic acid, polyvinylidene fluoride, polytetrafluoroethylene, polyimide, and polyvinylidene chloride, are used for wearable interfaces, which are expected to be compatible with human skin while offering improved stretchability, sensitivity, and reliability. Hence, researchers have produced epidermal e-skin sensors based on epidermal iontronic sensing [51] and carbon nanotubes [52] that follow similar principles for detecting finger bending angles while providing a better user experience. However, nonideal characteristics such as hysteresis, signal drift, and sensitivity to ambient noise remain common challenges in wearables. [7]

With more research on the properties of new materials, the use of flexible sensors in data gloves has become widespread. Compared with inertial sensors, flexible sensors are lighter, fit gloves better, and offer a better user experience. However, their change of resistance is nonlinear, and the sensors are extremely sensitive to the slightest movement or compression. [55,56]

Acoustical/Vibratory Sensing

Acoustic sensors are employed in HGR because changes in the physical structure of the wrist and forearm result in varied echoing acoustic qualities generated by an external source apparatus or by the muscle itself. There are three primary acoustical sensing techniques: sonomyography (SMG), mechanomyography (MMG), and bone-conducted sound sensing. Researchers have employed SMG to provide information on the deeper tissue layers of the human muscle-tendon complex. It is a noninvasive approach that uses ultrasound (US) imaging to obtain this information. There are two kinds of US imaging: A-mode (portable, 1D sonomyography) and B-mode (high-resolution, 2D sonomyography).

(Figure 5 caption: The top photo shows the prototype system worn by a person with a spinal cord injury. The bottom figure demonstrates a 4 × 3 capacitive sensor array and an accelerometer wristband used to find the orientation of the capacitive sensor array with respect to the hand. Reproduced with permission. [39] Copyright 2015, IEEE.)

(Figure 6 caption fragment: the adopted bent sensors. Reproduced with permission. [50] Copyright 2013, IEEE.)

Yang et al. [57] developed a multi-A-mode US system with an offline accuracy of 98.87% and an online accuracy of 95.4%. Hettiarachchi et al. [58] introduced a new wearable ultrasonic radial muscle activity detection system and obtained a recognition accuracy of 72%. Similarly, Yan et al. [59] designed a new lightweight A-mode ultrasonic probe and obtained a recognition rate of 97.6%. Due to the high commercial demand for B-mode US in the clinical field, research on B-mode US started earlier than on A-mode US. B-mode US reflects more information about complex grasps in able-bodied subjects via imaging of the anterior forearm musculature. A recent review investigates 17 studies using B-mode US for biomonitoring muscle and tendon functions during locomotion. [60] Akhlaghi et al. [61] also used B-mode US images to identify four different gestures in real time with an accuracy of 77%. Compared with sEMG, B-mode US has shown remarkable classification accuracy for various muscle states and activities; however, it is not suited to simple, low-cost, wearable solutions, as B-mode US imaging requires complex algorithms and visual identification of the relevant features.
MMG quantifies the low-frequency (5-100 Hz) vibrations and lateral changes in the muscles originating from muscle contraction. MMG, sometimes referred to as vibromyography, uses distinct types of sensors to measure the vibrations, such as condenser microphones, [62] low-mass accelerometers, [63] laser distance sensors, [64] and piezoelectric films. [65] The excellent reliability of accelerometer measurements for muscle fiber sensing has been analyzed in particular. [66] However, MMG's reliability is prone to being affected by motion artefacts and a low signal-to-noise ratio (SNR), and it often experiences interference from ambient noise. [66,67] As shown in Figure 7, most studies measure MMG signals from muscles on the posterior of the forearm, as the main muscle groups are the wrist flexors and the wrist extensors. [68] If the MMG sensors are placed in the right position, the classification performance can exceed 90%, but it can drop to 20% otherwise. [69] In the same year, Booth et al. [70] positioned six piezoelectric films on the wrist to measure MMG signals for finger tap recognition, which alleviated the issue of wearing discomfort.
Bone-conducted sound sensing is an active-bioacoustic-based HGR method. Unlike MMG, bone-conducted sound sensing requires an external source of vibration rather than merely receiving the sounds and vibrations intrinsically generated by structural muscle contractions. Due to the morphological variations of muscles, the active vibration results in changes in amplitude [71-73] and power spectral density (PSD). [74] Bone-conducted sound sensing has the advantage of simultaneously using a subtle vibration (to enable sensing) and a noticeable vibration (to supply haptic feedback), making it one of the essential techniques for intuitive interfaces and immersive configurations in HMIs. Hiroyuki et al. proposed a hand pose (or gesture) estimation method based on active bone-conducted sound sensing and showed that the recognition rate of the proposed method exceeds 88% when seven hand poses are classified.
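Amplitude and PSD changes of this kind are straightforward to quantify from the received vibration signal. Below is a minimal numpy sketch of a one-sided periodogram PSD estimate applied to two synthetic vibrations; the 40 Hz tone, amplitudes, and pose labels are illustrative assumptions only, not measurements from the cited work.

```python
import numpy as np

def periodogram_psd(signal, fs):
    """One-sided periodogram estimate of power spectral density (Hann-windowed)."""
    n = len(signal)
    window = np.hanning(n)
    spectrum = np.fft.rfft(signal * window)
    # Normalize by the window power so the PSD scale stays consistent.
    scale = fs * np.sum(window ** 2)
    psd = (np.abs(spectrum) ** 2) / scale
    psd[1:-1] *= 2.0                      # fold negative frequencies into one-sided estimate
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, psd

fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
# Assumed example: the same 40 Hz vibration with different amplitude in two hand poses.
open_hand = 0.2 * np.sin(2 * np.pi * 40 * t)
closed_fist = 1.0 * np.sin(2 * np.pi * 40 * t)

f, psd_open = periodogram_psd(open_hand, fs)
_, psd_fist = periodogram_psd(closed_fist, fs)
peak = f[np.argmax(psd_fist)]
print(peak, psd_fist.max() > psd_open.max())
```

In an actual system, such PSD features per hand pose would feed a classifier rather than a direct comparison.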

Optical Sensing
Optical sensors are often lightweight, portable, and simple to integrate into consumer electronics such as smartwatches. The most common optical sensing technology is photoplethysmography (PPG), which employs a light source and a photodetector at the skin surface to detect volumetric changes in blood circulation. When gestures are performed, the blood volume in the microvasculature of skin tissues changes due to muscle and tendon movements, which changes the reflected light intensity. PPG can also track these motion artefacts and recover the gesture-related signals. [75] The PPG approach has the practical benefits of being inexpensive, lightweight, and easily implemented on a wearable device. Zhao et al. [76] used PPG sensors to detect over 7000 gestures and reached a 98% recognition rate. Subramanian et al. [77] recognized four simple hand gestures across four individuals with 92.4% accuracy using three PPG sensors; in that study, [77] the gesture recognition experiments were conducted in a static scenario. In contrast, Ling et al. [78] used four PPG sensors and reached 88.8-95.4% accuracy across 14 gestures with a subject-dependent strategy.
The key challenges with PPG technology are that it relies on light reflection, which is susceptible to ambient light and to skin conditions and tone, and that it is prone to physical motion artefacts, making it sensitive to noise from the surrounding environment. [76] Near-infrared spectroscopy (NIRS) measures concentration changes in blood diffusion by placing a certain number of NIR light sources on the skin. During muscle contraction, NIRS detects variations in amplitude, which are influenced by blood reflux into the muscle. [79] Due to recent noteworthy technological advancements, NIRS devices are more portable and miniaturized, allowing measurements in everyday life scenarios with minimal restraint. However, there is limited literature using the single modality of NIRS for discriminating hand motion patterns, despite it appearing robust to muscular fatigue. [80] A similar method to NIRS, proposed by Shahmohammadi et al., [81] is called lightmyography (LMG); it can be used to efficiently decode human hand gestures, motion, and forces from the detected contractions of the human muscles. The results demonstrate that LMG outperforms EMG for most methods and subjects.

(Figure 7 caption fragment: Reproduced with permission. [68] Copyright 2020, IEEE.)

Multimodal Sensing
Relying on a single sensor, or on multiple sensors of the same type, is less desirable and not readily adopted in real-world HGR applications, as it suffers from several issues, such as sensor failure, limited spatial coverage, limited precision, and uncertainty. A solution to these limitations is multisensory fusion: establishing robust sensing systems using multiple (heterogeneous or homogeneous) sensors. Sensor fusion integrates different sensing modalities with data fusion techniques to cover the weaknesses of each individual modality and to provide the HGR algorithm with more complete information from which to associate gesture or movement patterns.
There have been extensive attempts to fuse two or more noninvasive body-sensing modalities for HGR. The literature reveals that multimodal sensing can interpret hand movement with higher accuracy than unimodal signals. Fusing multiple sensing modalities is a purposeful solution because, where each individual modality is limited, the others can compensate. [35] In general, data from MMG, FMG, IMU, NIRS, and SMG are often combined with sEMG for HGR applications. Some sensor configurations for multimodal signals are reviewed next.

sEMG and IMU
This is the most studied modality combination for capturing the orientation and movement of an upper limb during a gesture. There are commercially available products of this type, that is, the DELSYS Trigno [82] and the Myo Armband, [27] which are worn just below the elbow and embed both sEMG and IMU. Across the studies, the addition of an IMU sensor improved pattern recognition accuracy compared with using sEMG signals alone. [83] It is worth noting that if the number of gestures is not high, and no gestures are remarkably similar in motion, inertial sensors alone may be sufficient for the task, reducing the system cost. [84] The major disadvantage of fusing these two modalities is that most systems are analyzed and designed to recognize discrete signs, assuming a delay exists between two gestures. However, in daily conversation, a whole sentence may be performed continuously without a clear pause between words, especially in sign language. To recognize continuous sentences, a different segmentation approach or other models should be considered. Despite the high correlation between sEMG and the intensity of the neural drive to target muscles, sEMG signals alone may not be adequate for many practical applications of multifunctional upper-limb HMI, mainly because of 1) the many sources of motion artefacts in sEMG-based systems, including the interface between the electrode and skin and artefacts from the cable connecting the electrode to the amplifier, which cause inconsistencies in the sensor data and pattern results, and 2) the large number of degrees of freedom (DoFs) and the noncyclic nature of upper-limb movements. Therefore, the fusion of sEMG with other signals has gained considerable attention, such that more complementary information can be obtained to compensate for the shortcomings of sEMG. FMG is a much more recent development that senses the volumetric changes associated with muscle activation/deactivation caused by mechanical changes in the muscles or tendons, but it is still limited in robustness and accuracy.

sEMG and FMG
Researchers have investigated elbow, forearm, and wrist location detection performance using EMG and FMG and found that both modalities generate useful information for upper-limb movement recognition. For instance, McIntosh et al. [85] fused sEMG and FMG to detect finger movements and rotations around the wrist and forearm. Other researchers have integrated the sensing system by colocating the sensors, which provides a fair way to compare the sensing modalities, since both sensors share the same points of contact on the forearm. For example, Jiang et al. [86] presented a new colocated sEMG-FMG sensing armband that records FMG and EMG signals simultaneously. The accuracy of EMG-only gesture recognition was 81.5%, FMG-only was 80.6%, and the colocated EMG-FMG combination had the best performance of 91.6%. Although an off-located configuration is simpler to implement, a colocated configuration is most likely to achieve better hand motion classification accuracy by providing complementary information from different sensors at the same body site. However, these studies simply employ two or more types of sensory systems to collect data for offline analysis. A further direction for multimodal sensory fusion is to integrate the modalities in both hardware and software.

Other Multimodal Sensors
Multisensing fusion with more than two modalities can be another promising approach for enhancing capability, because using more sensing modalities can potentially overcome the drawbacks of single sensing, generate richer information, and increase robustness on the user side. The combination of SMG and sEMG has also gained considerable attention. Xia et al. [87] designed a portable hybrid system using an A-mode SMG transducer. Experiments validated that hybrid features contributed to a significant improvement in HGR (20.6% compared with single-modality features alone). Robustness and performance improvements have consistently been reported when combining two sensor modalities, suggesting that even better performance may come from combining three. A few researchers have combined three modalities for HGR, such as EMG + FMG + MMG, [88] PPG + FMG + ACC, [89] FMG + EMG + IMU, [90] and sEMG + MMG + NIRS. [91] As shown in Figure 8, the authors reported that adding MMG and NIRS to sEMG achieved higher classification of fatigued muscles, which typically occurred after 1 min of sustained muscle contraction. [91] Extensive research is still needed to explore the actual benefits and drawbacks of integrating three modalities for upper-limb intention recognition, within a strong framework that asks whether using three sensing modalities makes the system more robust, accurate, and cost-effective while still allowing online processing of such large data volumes.

Sensor Fusion Strategies
Regarding the level of abstraction at which data are processed, fusion can be performed at the data level, feature level, or decision level. [92] Data-level fusion, often referred to as early fusion, combines multiple homogeneous sources of raw or preprocessed data into a single feature vector before it is used as the input to an ML algorithm. [93] However, one of the drawbacks of data-level fusion is ensuring time synchronicity between the different data sources.
Usually, these signals are resampled at a common sampling rate. To address this limitation, Martinez et al. [94] proposed several approaches, such as convolution, training, and pooling fusion, to integrate sequences of discrete events with continuous signals.
The second form of multimodal fusion is feature-level fusion, which extracts features from multiple data sources to create a new high-dimensional feature vector. However, finding the most significant feature subset usually requires large training sets. [95] The authors of one study [92] used feature-level fusion and achieved 95.1% accuracy for Chinese SLR of 150 subwords using sEMG, ACC, and GYR sensors. Multimodal fusion techniques in the category of decision-level fusion select one hypothesis from the set of hypotheses generated by the individual decisions of different sources. While multimodal sensing is used for performance improvement, the reliance on different sensors, in terms of both sensing type and location, requires the use of pattern recognition (PR) and ML. [20]

Summary

The acquisition methods for HGR include a range of sensing modalities in the form of data gloves and wrist/armbands. Through comparison and analysis of the accuracy results, it is evident that the sensing modalities perform differently in recognizing gestures. The accuracy variation within the same sensing modality could be due to sensitivity to a specific type of gesture, the size of the datasets, and the experimental designs. In fact, from the content of Table 1, we can see that most researchers used offline analysis and a fixed arm configuration for their sensing systems to obtain high classification accuracy. In clinical applications, the characteristics of the collected signals vary with time, so every wearable HGR system faces exponentially rising errors over long-term operation. Online training, where a classifier is trained continuously on new patterns during operation, promotes longer-term use. Moreover, to improve motion classification, researchers have combined many sensing modalities, integrating data from multiple sensors to provide a more comprehensive and accurate interpretation of hand motion than unimodal biosignals (Table 2). Fusion strategies for data processing can be applied at the data level, feature level, and decision level, and multiple fusion strategies may sometimes be combined to achieve optimal performance.

(Figure 8 caption fragment: b) signal conditioning module; c) "banana-like" transmission channel from near-infrared light source to photodetector. Reproduced with permission. [91] Copyright 2018, IEEE.)
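The feature-level fusion described above can be sketched in a few lines: features are computed per modality and concatenated into one vector before classification. The channel counts and the RMS/MAV feature choice below are illustrative assumptions, not the configuration of any cited system.

```python
import numpy as np

def td_features(window):
    """Simple time-domain features per channel: RMS and mean absolute value (MAV)."""
    rms = np.sqrt(np.mean(window ** 2, axis=1))
    mav = np.mean(np.abs(window), axis=1)
    return np.concatenate([rms, mav])

def feature_level_fusion(semg_window, imu_window):
    """Concatenate per-modality feature vectors into one fused high-dimensional vector."""
    return np.concatenate([td_features(semg_window), td_features(imu_window)])

rng = np.random.default_rng(1)
semg = rng.normal(size=(8, 200))   # assumed: 8 sEMG channels, 200-sample window
imu = rng.normal(size=(6, 200))    # assumed: 3-axis ACC + 3-axis GYR
fused = feature_level_fusion(semg, imu)
print(fused.shape)  # (8 + 6) channels x 2 features = 28-dimensional vector
```

The fused vector would then be passed to a single classifier, which is what distinguishes feature-level fusion from decision-level fusion, where each modality is classified separately first.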

Data Processing for HGR
Table 1. Sensing modalities and their use in wrist/arm bands and data gloves, and their performance with variable or fixed arm configurations ((fixed): fixed limb position; (dynamic): dynamic limb position; (offline): offline testing and validation; (online): online testing and validation).

A typical HGR data processing pipeline comprises four stages: 1) acquiring the input data from the wearable sensors; 2) preprocessing and segmentation; 3) extracting relevant features from the signal and eventually transmitting only those to a remote server for interpretation; and 4) classifying the input data using a suitable classifier to achieve gesture recognition.

Data Acquisition
Data acquisition refers to capturing the input data with the wearable device. Common to all data-capturing wearable devices are analogue-to-digital converters (ADCs), which sample the continuous data from the sensors and convert them to digital signals. To capture signals from a sensor using an ADC, the sampling rate must be sufficient to acquire data at the relevant frequencies. [96] Furthermore, the Nyquist criterion states that the sampling frequency must be more than twice the highest frequency of the signal to be measured, which differs for each modality. [97]
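As a concrete illustration of the Nyquist criterion, the sketch below derives a minimum sampling rate from an assumed signal bandwidth; the per-modality bandwidths and the 2.5x practical margin are assumptions for illustration, not values from the cited sources.

```python
def min_sampling_rate(f_max_hz, margin=2.5):
    """Nyquist requires fs > 2 * f_max; a practical margin somewhat above 2 is common."""
    return margin * f_max_hz

# Assumed upper signal bandwidths for two modalities discussed in the text (Hz).
bandwidth = {"sEMG": 500.0, "MMG": 100.0}
for modality, f_max in bandwidth.items():
    print(modality, min_sampling_rate(f_max), "Hz minimum sampling rate")
```

Sampling below this rate aliases higher-frequency content into the band of interest, which no later filtering stage can undo.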

Preprocessing
Raw data from wearable devices may not be directly suitable for classification, or may contain noisy features resulting in poor classification accuracy. [96] Therefore, once the signal is acquired and saved, noise and unwanted signals are eliminated before proceeding to the next processing stages. Unwanted signals can be removed by filtering, which reduces unintended artefacts, such as electrical and ambient noise, from the signal of interest. [97] FMG and IMU signals are often used without filtering, as they have acceptable SNRs. [98] Other examples of preprocessing for sEMG include offset compensation, presmoothing, rectification, and amplification. [99] Another primary processing task performed on on-body sensor data is data sampling. Data sampling techniques include fixed rate, variable rate, adaptive sampling, compressed sensing, and sensor bit-resolution tuning. [100] Depending on the sensor type and signal quality, these and other preprocessing techniques can be used to prepare the signal for the next stage of the process. However, the time synchronization of signals collected across different devices still needs to be improved for multimodal systems. This type of processing challenge can arise in data gloves because of restrictions on transmitting data from a pair of gloves, which can lead to unsynchronized processing. [101] This can result in data mismatched in time and, consequently, incorrect predictions. [102,103] Several studies have proposed time alignment solutions for multimodal signals. Jiang et al. [104] employed a dynamic time warping (DTW) approach, termed event DTW, which uses data from defined events (i.e., upslopes and downslopes in the signal) as the basis for acquiring the optimal time alignment between two signals. [105] Time alignment is, again, the foundation for building such synchronous multimodal data collection.
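The sEMG preprocessing chain mentioned above (offset compensation, rectification, and smoothing) can be sketched as follows; the window length and the synthetic signal are illustrative assumptions, and a real pipeline would typically also include bandpass filtering.

```python
import numpy as np

def preprocess_semg(raw, smooth_len=11):
    """Offset compensation -> full-wave rectification -> moving-average smoothing."""
    centered = raw - np.mean(raw)          # remove the DC offset
    rectified = np.abs(centered)           # full-wave rectification
    kernel = np.ones(smooth_len) / smooth_len
    return np.convolve(rectified, kernel, mode="same")  # envelope smoothing

rng = np.random.default_rng(2)
raw = 0.5 + rng.normal(0.0, 0.3, size=500)   # noisy signal riding on a 0.5 V offset
envelope = preprocess_semg(raw)
print(envelope.shape)
```

The resulting envelope approximates muscle activation intensity and is a common input to the feature extraction stage that follows.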

Feature Extraction and Feature Selection
It is necessary to understand which features are best to extract, as reducing the number of features also decreases the computational complexity of real-time signal processing on low-power wearable sensors. In ML, handcrafted features can be extracted from digital signals in the time domain (TD), frequency domain (FD), or time-frequency domain (TFD). [106] TD features extract meaningful information directly from the signal amplitudes, and FD features offer information regarding the PSD, including frequency ratio (FR), SNR, and spectral momentum (SM). In comparison, TFD features combine signal amplitudes and PSD information. [107] To further improve the robustness of feature extraction, several new features, such as time-dependent power spectrum descriptors (TD-PSD) and temporal-spatial descriptors (TSD), [108] have also been proposed recently. Since one feature can only provide limited information, it is practical to combine multiple features from distinct groups, such as Hudgins' feature set [109] and Phinyomark's feature set. [106] For HGR tasks, sEMG usually performs better with TD features, which are also less computationally expensive. [110] TD features of IMU signals (e.g., the mean values of accelerometer, gyroscope, and magnetometer readings within a sliding window) are used in sensor fusion. Unlike ML, which relies on handcrafted features, DL derives representative high-level features from raw signals, which can be more effective in many cases. DL models do not require manual feature selection and instead learn hierarchical representations automatically from raw data through a process known as representation learning.
Some typically used features with their corresponding equations are listed in Table 3. Root mean square (RMS) is the most basic and commonly used feature in statistical analysis, representing the square root of the signal's average power over a given time window. Zero crossing (ZC) is another feature used in detecting muscle fatigue; it counts the number of times the signal crosses the zero value in each time frame. A summary of the different features, along with the calculation details of each, can be found in another study. [107] Research studies usually extract many features and then apply a feature selection tool to optimize the feature sets and processing procedures.
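The two features just described can be written as minimal pure-Python sketches; the optional amplitude threshold on ZC is a common noise guard and an assumption here, not part of Table 3.

```python
import math

def rms(window):
    """Root mean square: square root of the mean signal power in a window."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def zero_crossings(window, threshold=0.0):
    """Count sign changes between consecutive samples, ignoring changes
    whose amplitude difference stays at or below the threshold."""
    return sum(1 for a, b in zip(window, window[1:])
               if a * b < 0 and abs(a - b) > threshold)
```

In a full pipeline these would be evaluated per channel over each sliding window, yielding one feature vector per window.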

Classification
Robust classification is needed to recognize gestures at real-time processing speed, avoid interference, and eliminate the capture of unintentional gestures. It is essential to acquire the real meaning of a hand gesture so that a proper response can be sent back. Supervised classification is used more often than unsupervised classification. It records the data of each gesture separately and assigns a discrete label (the user's action, e.g., "hand opening" or "hand lifting") for training a model. The classification models used on datasets obtained from wearable devices can be divided into two main categories: conventional ML and DL. In conventional ML, feature extraction and classification are separate stages, whereas in DL they are combined and no separate feature extraction/selection is required.
Table 3. Some statistical indexes for feature extraction. Reproduced (adapted) with permission. [68] Copyright 2020, IEEE.

For classification tasks, model training has been examined in the HGR literature, including SVM, [99,111-113] decision tree (DT), and RF. [114] The classification models can be modified to fulfill regression tasks, including multiple regression, Gaussian process regression, support vector regressors, RF regressors, and neural networks. There is no universal model feasible for all applications and datasets; most studies have tried several different algorithms [115] and selected a suitable one based on their requirements, such as performance and computational expense. Unsupervised learning uses unlabeled datasets for tasks such as extracting generative features and identifying meaningful trends and structures. Unsupervised learning models include principal component analysis (PCA) and self-organizing feature maps (SOFMs). Reinforcement learning applies rewards to desired behaviors and/or punishes undesired ones. Since most studies on sensor-based gesture recognition tasks are based on supervised classifiers, a huge amount of multimodal data must be labeled before being fed into the ML model. This setup is very time-consuming and costly. [116] Therefore, advanced unsupervised or semisupervised ML algorithms that can automatically label new data for learning are needed for multimodal systems to decrease the time and cost of data labeling. [117]
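As an illustrative sketch only (all numbers hypothetical, scikit-learn assumed available), the snippet below trains two of the classifiers named above, an SVM and an RF, on synthetic two-class data standing in for windowed (RMS, ZC) feature vectors:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Hypothetical (RMS, ZC) features for two gestures, 40 windows each
X_open = rng.normal(loc=[0.2, 5.0], scale=0.05, size=(40, 2))
X_fist = rng.normal(loc=[0.8, 15.0], scale=0.05, size=(40, 2))
X = np.vstack([X_open, X_fist])
y = np.array([0] * 40 + [1] * 40)   # 0 = "hand opening", 1 = "fist"

svm = SVC(kernel="rbf").fit(X, y)
rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
```

On well-separated synthetic clusters both models fit perfectly; on real biosignals, performance and computational expense differ, which is why most studies compare several algorithms before choosing one.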

Deep Learning
Compared with conventional ML algorithms, DL has shown clear advantages for wearable sensor-based gesture recognition, including 1) attaining better accuracy and robustness for gesture detection; 2) learning deep features automatically, without manual extraction and selection; [92] and 3) showing promise for extracting cross-modality features in hybrid-modality tasks. [118] DL can learn from raw data without handcrafted features, which eliminates some of the data preprocessing typically involved in ML. [119] However, using DL techniques for HGR may not guarantee better performance than conventional methods. Various factors should be considered when evaluating whether to adopt DL, including dataset size, task complexity, and real-time computational expense. [29] DL techniques such as convolutional neural networks (CNNs) and artificial neural networks (ANNs) can ingest and process unstructured data and automate feature extraction, which removes some of the dependency on the trainer. The adaptive nature of ANNs enables connectionist approaches to incorporate learning in data-rich environments. Various connectionist models are used in the HGR literature, including the multilayer perceptron (MLP), time-delay neural network (TDNN), and radial basis function neural network (RBFN). [33] Unlike ANNs, CNNs require much more input data to achieve their high accuracy rates. Atzori et al. [120] adopted a simplified CNN to recognize 50 gestures using the sEMG modality. Similarly, Tan et al. [121] presented a real-time HGR system using an embedded CNN for classification. Leveraging hyperdimensional EMG (HD-EMG) and DL in a hand prosthesis, the authors improved reliability and execution and minimized reaction times. Moreover, Dwivedi et al.
[122] proposed two novel DL techniques, called temporal multichannel transformers (TMC-T) and vision transformers (ViT), for decoding object motions in dexterous, in-hand manipulation tasks using EMG signals. Their TMC-ViT model surpassed the CNN benchmark model in both correlation and accuracy for object motion decoding, achieving 89.68% and 79.09%, respectively. The study shows that the performance of muscle-machine interfaces (MuMIs) can be improved using DL-based models with raw myoelectric activations instead of developing DL or classic ML models with handcrafted features.
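To make the contrast with handcrafted features concrete, the pure-Python sketch below shows the elementary operation a CNN layer applies to a raw sEMG window: sliding a kernel across the signal (cross-correlation, as DL frameworks implement "convolution") followed by a ReLU nonlinearity. The kernel values here are hypothetical; in a real network they are learned.

```python
def conv1d_relu(signal, kernel):
    """One 1-D filter with ReLU activation: slide the kernel across the
    signal (valid cross-correlation) and clamp negative responses to zero."""
    k = len(kernel)
    return [max(0.0, sum(signal[i + j] * kernel[j] for j in range(k)))
            for i in range(len(signal) - k + 1)]
```

Stacking many such filters, pooling, and a final dense layer yields CNN classifiers like those in the studies above; the kernels, rather than RMS- or ZC-style formulas, are what training optimizes.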
Despite the improvements in previous studies, there are still limitations to DL for HGR-based tasks. Existing DL studies still rely heavily on labeled data, which is expensive and time-consuming to collect. It is desirable to develop robust and privacy-conscious methods for gathering valuable data. A potential solution is deep transfer learning using labeled data from other gesture recognition domains. [119] DL approaches usually classify data from the same feature space, which requires substantial amounts of data. For sensing modalities like sEMG, PPG, FMG, and IMU, transfer learning can exploit auxiliary information by adjusting existing model parameters or reformulating the model to suit a new task, requiring fewer training iterations and thus reducing the overall training time. [123] Other methods to speed up modeling and reduce the number of model parameters include PCA and hyperparameter optimization. These methods, combined with appropriate hardware accelerators (such as graphics processing units (GPUs) or tensor processing units (TPUs)) and efficient implementation frameworks, can mitigate overfitting, reduce training time, and decrease the number of model parameters while maintaining or improving performance. Pruning is another common technique for reducing the number of model parameters in DL. It is important to understand that complex models do not necessarily mean better performance; sometimes simpler models with fewer parameters outperform complex ones.
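As a sketch of the PCA option mentioned above (NumPy assumed available; shapes and k are illustrative), a feature matrix can be projected onto its top principal components before classification, shrinking the model's input dimension:

```python
import numpy as np

def pca_reduce(X, k):
    """Project an (n_samples, n_features) matrix onto its top-k principal
    components via SVD of the mean-centered data."""
    Xc = X - X.mean(axis=0)                 # center each feature
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                    # scores in the reduced space
```

Feeding the reduced matrix to a classifier cuts its input-layer parameter count roughly in proportion to k over the original feature count.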
DL has shown significant potential for wearable sensor-based gesture recognition, as it can automatically extract deep features from raw data, improving the accuracy and robustness of gesture detection. However, adopting DL depends on several factors, such as dataset volume, task complexity, and real-time computational expense. While DL approaches rely heavily on labeled data, transfer learning can leverage labeled data from other domains, reducing the cost of data annotation. Despite the potential benefits of DL for gesture recognition, limitations still need to be addressed, such as the need for labeled data and the privacy concerns associated with collecting such data. Critical techniques in DL-based gesture/movement recognition are further reviewed in another study. [124]

Applications
Major applications have been envisioned and studied, from consumer and medical wearable devices to industrial use. For example, controlling home devices, known as smart home or home automation, enables individuals to interact with household appliances such as TVs, radios, fans, and doors, along with controlling channels, temperature, and volume. [125] Extending the capabilities of wearables beyond the house to the car and workplace, [126] together with myoelectric control-based prosthetic hands and the control of assistive devices for amputees, orthotic devices, and exoskeletons, paves the way for third-generation wearable interfaces in both clinical and nonclinical scenarios. Another application of HGR is identity verification, often used in security systems to provide an extra layer of authentication. For instance, instead of using a traditional password or PIN, users can perform their unique hand gestures to gain access to a system or device. In the gaming and VR industries, HGR is also used to enhance user experiences. By tracking a player's hand movements, game developers can create more immersive environments that respond to the player's actions, from controlling game characters to manipulating virtual objects. In addition, assistive robotic devices can be designed to assist individuals with a wide range of disabilities or impairments, including those who have difficulty with fine motor skills or mobility. These devices can help individuals perform everyday tasks and improve their quality of life. For instance, one study [125] used a combination of human and robot strength to manipulate objects, with the robot providing additional support as needed. The system showed improved handling performance and reduced physical fatigue compared to manual handling alone, demonstrating the potential for assistive physical robots to enhance human capabilities and improve efficiency in physical tasks. Figure 10 illustrates some common applications based on the upper-limb gesture interface. The following sections present the major applications of wearable hand gesture systems.

Human-Machine Interface
HMI allows humans to communicate with computers (machines) when controlling them and conveying meaningful information. Detecting gesture intention is therefore the key point in the related research. [1] Signals from hand motions are continuously transferred to an external machine to realize the designed HMIs. Examples include playing games with a mouse- and keyboard-free experience; controlling robots in hazardous conditions, for example, underwater environments where it is inconvenient to use speech or typical input devices; or simply providing translation for individuals who use different sign languages. The interest in HMI has led to a large body of research on HGR. For instance, Iba et al. [19] used a CyberGlove to control a mobile robot. The user controlled the robot using gestures (closed fist, open fist, point, and wave left/right) that corresponded to commands. The response time of the robotic arm movement is a crucial factor in measuring the system's success; hence, the delay time should be minimized to improve system response. Puchuan et al. [121] also developed a sophisticated gesture recognition wristband for keyboard and multicommand input. HGR systems not only support activities of daily living (ADLs) but can also ease the load on specialized clinicians in hospitals. In another study, [127] the authors developed a smartwatch to control a smart TV using multidimensional DTW (MDDTW) to recognize six 3D gestures with natural interaction. DTW mitigates the problems of time series with dynamic features by speeding up the algorithm and finding the optimal alignment between two nonuniform time series. [128,129] The findings of this study show that gestures can be used for daily tasks given the right application of the algorithm. Using a ring as a TV remote control was also studied, to change channels and control volume, among other functionality, with intuitive finger movements.
[130]

Sign Language Recognition
SLR promises to analyze sign languages automatically by computer to help deaf people communicate with hearing individuals. SLR can read and process hand movements, which could pave a path toward barrier-free communication. [131] Most researchers have concentrated on a small number of gestures, such as some alphabets, [132,133] some numbers, [134,135] a few word gestures, [136,137] or a combination of alphabets, numbers, and words. [138,139] Each country has its own sign language, such as Australian Sign Language and Arabic Sign Language, which makes it challenging to develop a standardized sign detection system for worldwide use. Moreover, in SLR studies, gloves are widely used to capture between 5 and 22 degrees of freedom (DoF) of the hand. [140] For instance, Sousa et al. [141] presented GyGSLA, a wearable glove system that helps individuals learn the Portuguese sign language alphabet. For a different use, Caporusso et al. [142] introduced dbGLOVE, a wearable device for supporting deaf-blind people in communicating with others. Based on the recommendations of many researchers, additional progress may be achieved by expanding the database sets, [143] so that the hearing-impaired population can easily interact with others using an extensive Sign Language vocabulary. [144-149]

Rehabilitation
Rehabilitation is essential for motor function recovery after a stroke, disease, or disorder. Rehabilitation supports patients with neurological diseases and neural damage through goal-directed and nongoal-directed movements, which are integral to the patient's daily life. [150] Most of the movements targeted at accomplishing ADLs include wrist flexion, wrist extension, hook-like grasp, opposition (hand pinch), thumb adduction (lateral hand pinch), cylinder grip, and spherical grip. [151,152] Gesture recognition based on wearable sensor techniques can be used in the rehabilitation process to acquire physiological signals from the patient's body surface, allowing active interaction between the patient and the rehabilitation system, which is essential to recovery. [153] Intention detection based on sEMG can provide an objective measure of muscle activation and fatigue, serving as a tool for assessing and treating patients. [154] IMUs are also attached to the wrists to monitor stroke patients and identify abnormal activation patterns. However, wearing equipment can be exhausting for patients, especially when calibration is required before each use.
HGR can also be used to control hand function rehabilitation robots and complete hand function training in the active mode. [155] For instance, a game-based upper-limb rehabilitation system was designed for children with cerebral palsy, combining one accelerometer and three sEMG sensors to maximize neurological restoration and optimize patient engagement. [156] The wearable devices can also transmit corresponding motion instructions to the hand rehabilitation robot to drive the patient's movements. Similarly, Jing et al. [157] created an upper-limb rehabilitation platform using the commercial Myo gesture control armband to recognize the gesture intention of patients. The patient's hand-grasping functions are then trained through virtual reality (VR) glasses and an immersive game. The wearable system is composed of an inertial sensor unit, eight sEMG sensors, and a Bluetooth receiver, making it low-cost and portable. Mukai et al. [158] also built a hand rehabilitation robot, "ReRoH", made of flexible pneumatic gloves, ERB, an electrical stimulator, a Leap Motion noncontact sensor, and a game controller. The ability to grasp and stretch the hand can be trained with rehabilitation games, and the motor function of the fingers and hands can also be assessed. However, the balance and optimization among comfort, diversity, stability, accuracy, and timeliness need further study, and the safety of unsupervised training is also worthy of attention.

Myoelectric Control for Robotic Prosthetic Hands
A wide choice of devices is available to restore the capabilities of hand amputees with myoelectric robotic prostheses. Such devices continuously evolve with technology, scientific research, market needs, and user requirements. Myoelectric-based commercially available hand prostheses include the Ottobock DMC, Touch Bionics i-Limb, Ottobock Michelangelo hand, RSL Steeper Bebionic, and TASKA hand. The hand prostheses vary in complexity and components, with some offering different grip patterns. [140] However, these commercial systems have yet to be widely adopted to replace the traditional direct control used by myoelectric hand users, as the online robustness of HGR still needs to be improved for clinical use.
Despite the robustness and speed of these commercial myoelectric control systems, the number of movements produced by proportional and direct control systems is limited. Because of this, current prosthetic upper limbs are restricted to basic operations like producing a power grip or elbow flexion/extension, far from the complex multifunctional control of the human arm. Considering this, more advanced pattern recognition control strategies have been created to provide more degrees of freedom for prostheses. This allows more natural and dexterous control, thereby overcoming the deficiency in multifunctional control. [23] It is also time for researchers to stop experimenting only under controlled laboratory conditions with nonamputated subjects, as such setups do not adapt to the different real-life conditions of amputees. [159] Many studies have proposed techniques for natural control of robotic hands for individuals with transradial amputation. [159] One innovative technique is based on targeted muscle reinnervation (TMR) to actuate the hand and improve the control of myoelectric upper-limb prostheses. TMR surgery involves rerouting nerves into a muscle to move a prosthesis. However, surgery cost and clinical risk discourage patients from taking this approach. [160] It is also currently unclear to what extent a person with a more distal amputation level can benefit from improved prosthesis control after TMR in the residual forearm. [159] It should be noted that in higher-level amputations, such as shoulder disarticulation, the choices of HMIs for prosthesis control are more limited than in lower-level amputations. The reason is that in high-level amputations, the muscles responsible for finger movements and hand gestures are not accessible; therefore, FMG- and MMG-based HMIs cannot be implemented. In such cases, TMR or brain-machine interfaces (BMIs) are more viable options for prosthesis control.
Yang et al. [161] summarized confounding factors that can affect the stability of myoelectric control and reviewed strategies aiming to improve the adaptation of classifiers among individuals and for long-term use. Finally, in ref. [159], researchers reviewed recent developments in the design of prosthetic hands from four aspects: stable interfaces, advanced decoding algorithms, somatosensory feedback, and assessment methods.

Improving Robustness and Wearability for Sensors
Several wearable technologies for recognizing hand gestures have been proposed to make human-machine interactions more natural and intuitive. For example, glove-based methods are commonly used to track finger movements and determine which gestures the user is performing. [165] Notably, when flex sensors are improperly positioned, sensor deviation can damage the sensor and thus degrade recognition responses. [166] This is one of the significant problems with sensors on the market. [167,168] Hence, a glove must fit the user's hand properly to ensure accurate detection of hand gestures; any looseness or tightness in the glove can affect gesture recognition accuracy. To address this challenge, researchers have implemented transfer learning to compensate for glove shifts. [139,165] Comfort and skin health are also significant concerns, especially for applications requiring long wearing times. For example, data gloves do not have to fully cover the hand at all times; leaving some areas of the hand exposed to air may mitigate discomfort. Additionally, this arrangement can prevent the glove from interfering with or blocking the function of the biological hand as a vital sensory receptor.
Furthermore, wearable devices may be unable to accommodate large batteries for power management, limiting their operating time and functionality. However, sensor developers are addressing this challenge by incorporating miniaturized power sources, solar cells, and wireless power delivery that eliminate the need for a charging circuit. As the field evolves, more researchers are expected to present fully integrated, powered devices assessed in relevant scenarios. Demonstrating true continuous monitoring capabilities, instead of serial measurements, will also be essential in these systems. It is also worth noting that recognizing thousands of gestures is computationally costly.
Undoubtedly, wearable devices face significant challenges, particularly in hardware development. The computational load associated with power consumption can be another critical issue for deploying ML and DL techniques. [169] Processing sensor data in real time can consume a significant amount of power, reducing battery life. [170] This can be a significant challenge, especially for wearable devices designed to be portable and used for extended periods. [171] To address this contradiction, researchers have started investigating neuromorphic computing, which exhibits desirable properties including analogue computation, low power consumption, fast inference, event-driven processing, online learning, and massive parallelism. [172] Designing hybrid digital-analogue systems to enable conventional ML/DL models in neuromorphic computing is now possible, and the combination of the two techniques can be further explored for myoelectric control. [128] The robustness of HGR systems needs to be improved against confounding factors such as sensor location shifting, limb position, muscle contraction intensity (for wristbands/armbands), and sensor malfunction due to local conditions (e.g., sweating for sEMG sensors). For individual HGR systems, the classification accuracy between similar hand gestures needs to be improved so that more hand gestures can be recognized and the number of supported functions increased. Moreover, most reported studies work on recognizing static or isolated hand gestures; more effort must be devoted to dynamic HGR, especially when aiming for a more practical and complete SLR system for deaf-mute people. Furthermore, intersubject HGR needs to be enhanced to lower the adaptation burden on new users of such HGR systems, which can raise their popularity and acceptance. [173] Transfer learning is an appealing approach for such intersubject recognition and even for intertask recognition.

ML/DL Implementations: Sensor Fusion and Online Learning
Despite the high recognition accuracy of specific HMI applications, the number of gestures classified is limited. To overcome this limitation, hybrid HMIs have been used to fuse two or more biosignals to increase performance. Multimodality fusion combines signals from different modalities, or features extracted from them, using different fusion strategies, and then processes the combined information to grasp the user's intentions better than unimodal systems. However, evaluations of multimodal systems are typically conducted using offline analysis or virtual assessments restricted to laboratory settings. For an HMI system to be useful in real-life applications, a myoelectric prosthesis/exoskeleton must support the daily activities of people with limb differences. Unfortunately, offline analysis has only a minor impact on improving HMIs for prosthetic control, because the motion patterns obtained in a training session can severely mismatch the real patterns of the same intended motion in daily life. Offline signal processing and ML-based pattern recognition remain far from the reality of real-time, real-life signal acquisition, processing, pattern recognition, and device control. Therefore, there is an increasing need for online signal acquisition, processing, and pattern identification in HMI systems.
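The simplest fusion strategy, feature-level fusion, just concatenates time-aligned per-window feature vectors from each modality; the minimal sketch below (hypothetical feature values) assumes the windows have already been synchronized:

```python
def fuse_features(semg_windows, imu_windows):
    """Feature-level fusion: concatenate per-window feature vectors from
    two modalities into one joint vector per window."""
    if len(semg_windows) != len(imu_windows):
        raise ValueError("modalities must yield the same number of windows")
    return [list(s) + list(m) for s, m in zip(semg_windows, imu_windows)]
```

Decision-level fusion, by contrast, runs one classifier per modality and merges their outputs, e.g., by voting.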

ML/DL Resources: Public Databases
Although some researchers may prefer to use their own databases, more public databases are needed to enable the development and testing of new algorithms. Public datasets such as CapMyo, [174] CSL-HDEMG, [175] HandCorpus, [176] and NinaPro [163] have enabled the implementation of more advanced ML techniques, for example, DL and transfer learning, for established HMIs. Such developments could lead to HGR systems that can function and predict users' intentions with no or minimal training for new users. Using open-source datasets, researchers can examine how different algorithms perform with different biosignals, [176] which supports the development of new classifiers/regressors that maximize the match between algorithm and biosignal. However, the number of studies that use an available database is significantly lower than the number using their own databases. To this end, continuous efforts are highly urged to enrich open-source resources, with more practical scenarios included, multiple sensing systems involved, and state-of-the-art decoding methods updated. [177]

System Vulnerabilities: Data Ethics and Data Safety
Researchers may also face ethical challenges related to data collection, user compliance, and data safety, which can lead them to overlook utility, deep integration with daily activities, and social acceptance at each step of product development. Wearable devices may not meet universal design criteria, may need costly fixtures, and may fail to address the requirements of a range of market opportunities, which hinders wearable HGR devices from widespread adoption and practical use. This work highlights the need to develop new, modern data collection techniques to reduce the ethical dilemmas faced by researchers. Furthermore, privacy and data integrity are critical challenges. Privacy controls should be comprehensive and transparent, allowing users to opt in to terms and services easily. In the future, leakage of health data may have more profound consequences as wearable functionality advances. Unfortunately, technology companies have poor records of preserving user data against security attacks (hardware or software), suggesting the need for broader government oversight. [171] There is no unified solution that tackles all threats to wearable technology security; thus, further research and development are necessary to mitigate potential risks. It is crucial for developers to consider these ethical challenges when creating wearable HGR devices to ensure that they are safe, reliable, and respectful of user privacy.

Conclusion
This review shows that sensor-based HGR is an active field of research, with many studies comparing the performance of various gesture detection modalities, proposing algorithms for the effective classification of gestures, and describing the development of novel wearable sensors. Various state-of-the-art sensing interfaces for the upper limb were discussed and categorized by sensing principle and their use in sensory fusion. Irrespective of sensing modality, the study reviewed conventional ML algorithms and emerging DL techniques. This review also provided an overview of potential application areas in VR, rehabilitation, prosthesis control, SLR, and other HMI areas. The insights drawn from this review reveal numerous challenges, future research directions, and opportunities for noninvasive wearable HGR interfaces, despite the advances achieved to date.

Figure 1 .
Figure 1. Illustration of the general process of a wearable HMI system connecting an operator to the machine, with skin-mounted sensors recognizing upper-limb gestures conveyed by the operator.

Figure 2 .
Figure 2. Wearable system, consisting of skin-mounted sensors and a customized interface circuit integrated with ML algorithms, to accurately recognize hand gestures. Adapted with permission. [6] Copyright 2021, American Chemical Society.

Figure 6 .
Figure 6. Adopted bent sensors and positions. a) Illustration of a flex-based glove and b) the adopted bent sensors. Reproduced with permission. [50] Copyright 2013, IEEE.

Figure 9
Figure 9 illustrates a common structure for processing biosignals and recognizing hand gestures. It is usually composed of four stages: data acquisition, preprocessing, feature extraction or feature selection, and classification. More specifically, these stages provide the following functions: 1) acquisition of hand gesture-related biosignals based on the chosen modality(s); 2) preprocessing of the received signal, such as filtering and synchronization; 3) extraction or selection of features; and 4) classification of the gesture.

Figure 8 .
Figure 8. Signal conditioning module of a hybrid sEMG+MMG+NIRS sensor system: a) printed circuit board (PCB) of the signal conditioning module; b) signal conditioning module; c) "banana-like" transmission channel from near-infrared light source to photodetector. Reproduced with permission. [91] Copyright 2018, IEEE.

Figure 9 .
Figure 9. Processes followed for an HGR system.

Figure 10 .
Figure 10. Upper-limb hand gesture interfaces connect human intention with smart hardware for control and guidance, descriptive communication, and assistive robotic devices in a variety of applications, including rehabilitation, prosthesis control, collaborative robots, SLR, HMI, and physical, social, and mixed assistance.

Table 2 .
Multimodal sensor fusion, its use in wrist/arm bands, and performance. Discrete position: classification accuracies of the LDA and SVM classifiers were 84.3 ± 3.0% and 97.5 ± 0.5%, respectively. Continuous position: classification accuracy was 84.3 ± 3.7% for the LDA and 98.7 ± 0.3% for the SVM classifier.
The parameters in DL are optimized through gradient-based techniques to minimize the prediction error, ensuring that the model learns complex patterns and sequences. The choice between the two approaches (ML or DL) depends on the specific problem, the available data, and the computational resources.