Using IoT Smart Basketball and Wristband Motion Data to Quantitatively Evaluate Action Indicators for Basketball Shooting

Traditional approaches to improving basketball players’ shooting skills rely on coaches’ experience in adjusting players’ biomechanical motions. However, such an approach cannot provide specific instructions or facilitate immediate feedback for improvement of the shooting motion. In this article, a method is presented to quantitatively evaluate four key action indicators of shooting basketballs using a machine‐learning model based on Bayesian optimization of a light gradient boosting machine (LightGBM). Important motion data for the model are collected by micro‐inertial measurement units embedded in a wrist motion sensor and an internet of things (IoT) smart basketball. Basketball shooting motion data are collected from 16 subjects and used for model training and data testing, and four important action indicators that influence the shot quality are selected for quantitative assessment. The LightGBM model is then developed for the regression prediction of the four action indicators of shooting. In the results, it is indicated that for an individual player, the highest correlation scores of the four indexes range from 97.6% to 99.3%. The proposed approach for quantitatively assessing shooting indexes can provide objective and data‐based guidance to improve players’ shooting performance. Foreseeably, the prediction model can be embedded into a chip of a wearable device to evaluate the real‐time shot quality quantitatively.


Introduction
Scoring in basketball games depends on the shooting performance of players, especially longer-range shots, and thus several researchers have focused on enhancing the shooting skills of basketball players.[1] Traditional approaches to enhancing shooting skill typically depend on coaches' training experience, which is highly subjective and thus makes it difficult to quantitatively evaluate changes in the shooting skill of players. With the integration of scientific and technological tools, the analysis of players' shooting techniques has undergone a shift from subjective assessment to objective quantitative assessment.[2] Specifically, the application of internet of things (IoT) intelligent sensing technology has become increasingly widespread in areas such as home monitoring and automation, remote healthcare monitoring, and autonomous transportation,[3][4][5][6] and is also extensively applied in sports analytics. For instance, by utilizing IoT intelligent sensing technology, it is possible to collect motion data of basketball players, and the data can be quantitatively analyzed in conjunction with machine-learning algorithms to identify key indicators affecting shooting performance. Such motion analysis can provide coaches with automated and objective guidance, thereby improving the training effectiveness and game performance of athletes.
Basketball shooting activities can be tracked using image-based or motion-sensor-based techniques.[7] Image-based techniques are generally used for motion recognition,[4] motion evaluation,[7,8] motion tracking,[6] and motion analysis.[5] Iosifidis et al. proposed a method for the single-view action representation and classification of videos describing human motions.[8] Notably, this method cannot recognize motions with high similarity (e.g., eating and drinking) or identify the minor differences among similar actions. Abebe et al.
proposed powerful multidimensional motion features for recognizing human activities from first-person videos.[9] This method can recognize eight indoor activities (walking, running, standing, sitting, ascending, descending, turning, and jumping) and 11 basketball activities (bow, sit/stand, left/right turn, walk, jog, run, sprint, pivot, shoot, dribble, and defend). However, the high feature dimensions and large data volumes of images and videos lead to high redundancy and low robustness. Ivankovic et al. proposed an image-based framework with a hybrid undirected graph structure to automatically detect player positions in basketball games.[10] Because this approach is based only on game data, it cannot be used for players' personal training and cannot provide information on specific motions. Liu et al. proposed a deep-learning-based basketball video analysis system to automatically detect highlight score clips and replay them.[11] The system is aimed at enhancing the spectacle of basketball games and does not provide detailed action information. Therefore, this approach cannot be used for daily basketball training. Bertasius et al. proposed a scheme for assessing basketball skills based on first-person video analysis to detect the ball and identify whether it is being held. The objective was to build a basketball skill assessment model to predict players' performance index rating (PIR) scores.
[12] However, this technique can only evaluate the different skills of the players in a game. Furthermore, first-person-view video does not contain adequate details for motion analysis. In conclusion, the existing video-based methods are mainly used for motion tracking and recognition, whole-court videotaping, and first-person-view PIR scoring, and cannot be applied to the daily training of players in aspects such as shooting. Moreover, real-time analysis using video-based methods is challenging because of the large data size[13] and the time cost associated with video data computation.[14] Additionally, fixed cameras are vulnerable to blind spots[15] and ambient light variations.[16] With the rapid development of microelectromechanical systems (MEMS) technologies, MEMS motion-sensing devices have emerged, the size and cost of which are considerably smaller than those of conventional motion sensors. A MEMS inertial sensor (e.g., 5 × 5 × 0.8 mm) can be used for human motion measurement as a small wearable device. Several research groups have developed microsensor-based methods for basketball-related motion recognition and analysis. Kuhlman et al. used a wearable sensor to classify four shooting styles.[17] Notably, this approach cannot analyze motion details or evaluate skill. Peng et al. used smart insoles with integrated multisensors to identify five basic basketball steps.[18] However, this method cannot be used to analyze the influence of steps in basketball on skills. Shankar et al. proposed a heuristic classification method to evaluate the performance and efficiency of free-throw actions.[19] This method uses the speed and angle of the shot to assess whether the basketball goes into the basket but cannot quantitatively analyze the key motions that affect shooting performance. Furthermore, the large size of the wearable sensors (≈50 × 50 × 10 mm) inevitably influences hand motion, resulting in inaccurate experimental results. Bai et al.
proposed a system involving a wristband sensor to detect the activity of a player in basketball games or a shot in a one-on-one game.[20] This method requires players to carry a smartphone to ensure data transmission, which affects their performance during games. Moreover, only the number of shots is counted, and no quantitative analysis is performed to enhance the shooting performance. Atsushi et al. used a wireless hybrid sensor (WAA-010) with a triaxial accelerometer and triaxial gyroscope to extract the motion characteristics that lead to a good jump shot.[21] When players practice jump shots with the device mounted on the back of their hands, the device can detect differences in the features of the jump and provide acoustic feedback to help players correct errors. The feedback on motion irregularity is generated by comparing the feature differences between novice and experienced players; however, this approach cannot identify specific motion differences. Overall, the existing research involving MEMS sensors is aimed at identifying and classifying shooting actions, counting the number of shots, and determining whether the ball goes into the basket. These approaches enable only qualitative analyses, cannot provide data on specific indicators that influence shot quality, and do not yield information that can help players analyze changes in shot quality.
The basketball shooting trajectory can be approximated as a parabola.[22] The shooting angle (SA), which is the angle of inclination of the initial velocity from the horizontal axis, considerably influences the flight arc of the ball.[23] The backspin of the ball contributes to the stability of the shooting trajectory during the flight phase and thereby enhances the shooting performance.[24] Moreover, upper limb strength considerably influences the shooting performance, elbow orientation influences the shooting trajectory, and the angle of elbow abduction during the ball-holding action is directly related to upper limb strength.[25] Considering these physical aspects, four key shooting action indicators can be used to evaluate shooting performance: shooting angle (SA), backspin speed (BS), elbow abduction angle (EA) at the moment of the shot, and maximum elbow abduction angle (MA). By quantifying these four specific factors for each shot in real time, players can potentially adjust their actions to enhance their shooting performance.
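As a back-of-the-envelope illustration of how the SA shapes the flight arc, the drag-free parabolic model mentioned above can be sketched as follows (the function names and parameter values are illustrative, not from the original study):

```python
import math

G = 9.81  # gravitational acceleration [m s^-2]

def apex_height(v0, sa_deg):
    """Peak height of the arc above the release point for a shot
    released at speed v0 [m/s] and shooting angle sa_deg [deg]."""
    vy = v0 * math.sin(math.radians(sa_deg))
    return vy * vy / (2 * G)

def flight_arc(v0, sa_deg, steps=50):
    """Sample the ideal (drag-free) parabolic trajectory until the
    ball returns to its release height; returns (x, y) pairs [m]."""
    sa = math.radians(sa_deg)
    vx, vy = v0 * math.cos(sa), v0 * math.sin(sa)
    t_total = 2 * vy / G  # time to return to release height
    return [(vx * t, vy * t - 0.5 * G * t * t)
            for t in (t_total * i / steps for i in range(steps + 1))]
```

For example, a 7 m/s release at 52° peaks roughly 1.55 m above the release point, whereas a flatter 40° release at the same speed peaks near 1.03 m, illustrating how SA controls the arc.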
To address the limitations associated with existing video-based and sensor-based methods in sports performance evaluation, we propose here a method to quantitatively assess the key action indicators for shooting actions. The four shooting indicators were quantified using a light gradient boosting machine (LightGBM) regression model combined with Bayesian optimization, as shown in Figure 1. The indicators were estimated based on raw micro-inertial measurement unit (μIMU) data captured by a custom-built smart wristband.
The remainder of this article is organized as follows. Section 2 describes the experimental process, including the experimental method and data acquisition process. Section 3 describes the data processing method, including data segmentation, regression prediction based on the LightGBM model, and edge computing. Section 4 discusses the experimental results, and Section 5 presents concluding remarks.

Experimental Setup
To develop a quantitative index of shooting performance, experiments were performed with 16 participants (male nonathletes, S1-S16). The experimental procedures (internal reference no. NEU-EC-2021B023S) were approved by Northeastern University's Biological and Medical Ethics Committee. Table 1 lists the basic characteristics of the participants. Figure 2 shows the experimental scenario. A μIMU device was embedded in a bracelet, which the participant wore on the right wrist, as shown in Figure 2a. A smart basketball was used to determine the ball status. The participant stood at the free-throw line and performed a one-handed shot. Two cameras were placed at the front and side of the participants to record a motion video for each shot. Each participant was requested to shoot the ball 60 times. Participant S1 was requested to shoot the ball 200 times to generate an individual dataset with 200 samples. The two datasets allowed us to compare the differences between the individual model (model trained on individual data) and the general model (model trained on mixed data from multiple people).

Data Acquisition
Three systems were used for data acquisition in the experiments: a custom-built wrist-worn μIMU sensor, a smart basketball embedded with a μIMU sensor, and cameras.

Wrist-Worn μIMU
The wrist motion data were collected by a μIMU sensor that acquires triaxial acceleration (Ax, Ay, Az) and triaxial angular velocity (Wx, Wy, Wz). The data were processed to extract features to quantify and analyze the four key motion indicators.

ERock Smart Basketball (Stone Sports Intelligence Technology, China)
The basketball contained a nine-axis μIMU embedded in a rectangular housing. Table 2 summarizes the technical features of the ERock smart basketball.[26] The nine-axis data from the smart basketball were used to calculate two key indicators, SA and BS, as shown in Figure 3. The data were transmitted via Bluetooth and displayed on a smartphone application. In general, because a basketball might exhibit a side spin after being released by the shooter, accurate backspin information cannot be obtained using image-based methods. To avoid this problem, the sensor inside the ERock smart basketball was used to sense the motion in 3D space. The SA and BS acquired from the ERock smart basketball were used as the true values to train the quantification model.
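The paper does not disclose how the ERock firmware derives SA and BS internally; a plausible sketch from a release-velocity vector and a gyroscope reading might look like the following (the axis conventions and function names are assumptions):

```python
import math

def shooting_angle(vx, vy, vz):
    """Shooting angle (SA): inclination of the release-velocity
    vector from the horizontal plane, in degrees (vz is vertical)."""
    return math.degrees(math.atan2(vz, math.hypot(vx, vy)))

def backspin_speed(wx, wy, wz):
    """Backspin speed (BS) in revolutions per second from the ball's
    angular velocity [rad/s], assuming spin about a single axis."""
    return math.sqrt(wx * wx + wy * wy + wz * wz) / (2 * math.pi)
```

A release velocity with equal horizontal and vertical components gives SA = 45°, and an angular speed of 2π rad/s corresponds to one revolution per second of backspin.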

Camera (GoPro, USA)
To obtain the EA and MA, the images at the moment of shooting and at maximum elbow abduction were extracted from the videos recorded by the two cameras (in front of and at the side of the participant). OpenPose[27] joint point recognition was performed to track the positions of the human joints. The OpenPose framework is based on a two-branch multistage CNN model, as shown in Figure 4. Branch 1 was used to predict the confidence maps (S), and Branch 2 was used to predict the part affinity fields (L). S = (S1, S2, ..., Sj) represents a heatmap, where j is the number of joints to be detected (and the background, in certain cases).
L = (L1, L2, ..., Lc) represents a vector map, where c is the number of joint pairs to be detected.
Feature F was obtained using a VGG-19 network and processed by the Branch 1 and Branch 2 networks to determine S1 and L1, respectively. From stage 2 onward, the input of the stage-t network consisted of three parts: S(t−1), L(t−1), and F. The processed images are shown in Figure 5, in which the angle between the two red lines is the elbow abduction angle. The angles shown in Figure 5a,b are the EA and MA, respectively. The data represented in Figure 5a,b were recorded by the camera in front of the participant. The method for solving the EA θ is shown in Figure 5c, where the coordinates of A and C are known, i.e., x1, x2, y1, and y2 are known quantities, and θ follows from Equation (3). Notably, this method ignores the angle between the upper arm and the camera photographic plane, as shown in Figure 5d. Therefore, the data represented in Figure 5d were recorded by the camera at the side of the participant. To make the data realistic and scientific, angle correction was incorporated. θ1 in Figure 5d is the angle calculated using the method illustrated in Figure 5c, AD is the measured length of the participant's upper arm, and θ2 is the true EA after considering the angle between the camera plane and the upper arm in 3D space. Taking the 200 samples of S1 as an example, the distributions of the EA and MA calculated in the 2D plane and in 3D space are shown in Figure 5e. The mean EA was 17.7° (2D) versus 20.9° (3D), and the mean MA was 52.5° versus 57.9°. We used the EA and MA obtained by both calculation methods as the output values of the regression model, and the difference between the resulting correlation scores was less than 1%. Thus, either method could reasonably be used for subsequent analysis; in the follow-up analysis, we used the elbow abduction angle calculated in the 2D plane (the method for calculating θ1). This observation demonstrates the validity of the proposed method for obtaining the angle using the configuration shown in Figure 5a,b. By processing the video captured from the front of the participant, the MA and EA for each shot were obtained as the true values to train the quantification model. Using the smart basketball and cameras, the four key shooting indicators of the 16 participants were collected, as shown in Figure 6. The range of each indicator differed across participants. This finding indicates that different players have different shooting motions, which are related to the basic physical condition and training habits of each player.[28]

Data Processing

Preprocessing
Data segmentation must be performed to extract the IMU motion data of a single shot from the continuous sensor readings. In this study, the sliding-window algorithm was used for data segmentation, and Figure 7 shows a segmentation sample. The sliding-window algorithm determines the segment boundaries by setting left and right thresholds and a window size. The beginning and end of the shooting motion are characterized by distinct changes in the angular velocity and acceleration. When the acceleration and angular velocity reach the thresholds, the shooting motion is considered to begin; when they fall below the thresholds, the motion is considered to have ended.
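A minimal sketch of the threshold-based segmentation described above, using a single motion-magnitude stream (the actual algorithm uses separate left/right thresholds and a window size; the threshold values and minimum segment length below are illustrative):

```python
def segment_shots(magnitude, threshold, min_len=10):
    """Split a continuous 1-D motion-magnitude stream into candidate
    shot segments: a segment starts when the signal rises to
    `threshold` and ends when it falls back below it."""
    segments, start = [], None
    for i, v in enumerate(magnitude):
        if start is None and v >= threshold:
            start = i                      # motion onset
        elif start is not None and v < threshold:
            if i - start >= min_len:       # reject spurious blips
                segments.append((start, i))
            start = None
    # close a segment still open at the end of the stream
    if start is not None and len(magnitude) - start >= min_len:
        segments.append((start, len(magnitude)))
    return segments
```

In practice the magnitude stream would be, e.g., the Euclidean norm of the triaxial acceleration or angular velocity.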

Feature Extraction
The nine features listed in Table 3 were extracted from each axis of the six-axis motion data (triaxial acceleration and angular velocity); i.e., 54D (six axes × nine features) feature vectors were extracted from each segment of the motion data.
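To make the 54D feature construction concrete, here is an illustrative per-axis extractor covering four of the nine features in Table 3 (the exact feature definitions and the sampling interval are assumptions):

```python
import math

def extract_features(x, dt=0.01):
    """Illustrative per-axis features in the spirit of Table 3
    (mean, standard deviation, kurtosis, integral); the full model
    uses nine features per axis, i.e., 6 axes x 9 = 54 dimensions."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    std = math.sqrt(var)
    # kurtosis as E[(x - mu)^4] / sigma^4 (definition assumed)
    kur = sum((v - mean) ** 4 for v in x) / (n * var ** 2) if var else 0.0
    # trapezoidal integral of the signal over the segment
    integral = sum((x[i] + x[i + 1]) / 2 * dt for i in range(n - 1))
    return {"mean": mean, "std": std, "kurtosis": kur, "integral": integral}
```

Applying such an extractor to each of the six axes of a segmented shot yields one fixed-length feature vector per shot.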
The use of many features can increase the model complexity and computational cost. The traditional approach to feature reduction is to eliminate the features with low importance in the model, although this may lower the model accuracy. In contrast, the LightGBM algorithm used to quantify the indicators performs dimensionality reduction internally, enhancing the training speed without compromising accuracy, through gradient-based one-side sampling (GOSS) and exclusive feature bundling (EFB). GOSS downsamples the data instances with small gradients, such that all instances with large gradients are retained. The time and computation costs associated with this method are considerably lower than those of methods such as XGBoost that traverse all feature values. EFB bundles many mutually exclusive features into a single feature, thereby decreasing the dimensionality.
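A sketch of GOSS as described above, assuming the standard formulation (keep the top a·n instances by gradient magnitude, randomly sample b·n of the rest, and re-weight the sampled ones by (1 − a)/b so the estimated gain stays approximately unbiased); the parameter values are illustrative:

```python
import random

def goss_sample(gradients, a=0.2, b=0.1, seed=0):
    """Gradient-based one-side sampling (GOSS) sketch: returns a list
    of (instance_index, weight) pairs for the retained instances."""
    rng = random.Random(seed)
    n = len(gradients)
    order = sorted(range(n), key=lambda i: abs(gradients[i]), reverse=True)
    top_k, rest_k = int(a * n), int(b * n)
    kept = [(i, 1.0) for i in order[:top_k]]          # large gradients, weight 1
    sampled = rng.sample(order[top_k:], rest_k)       # random small-gradient subset
    kept += [(i, (1 - a) / b) for i in sampled]       # up-weighted
    return kept
```

With a = 0.2 and b = 0.1, only 30% of the instances are used per split search, with sampled instances weighted by a factor of 8.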

LightGBM Model Combined with Bayesian Optimization
Regression analysis is a statistical method for determining the quantitative relationship between a dependent variable and one or more independent variables. In this study, 54 independent variables were used; therefore, multiple regression analysis was performed to map these variables to the key indicators. The multiple regression model is presented in Equation (6). The triaxial acceleration and triaxial angular velocity of the wrist motion during shooting are intrinsically related to the four key action indicators, and we defined this relationship using a regression model based on the LightGBM algorithm.[29] Each sample in the dataset contained 54D features and four target values (key indicators). For each target value, a LightGBM model was trained on the 54D features. The gradient boosting tree model and an L1 regularization term were used to create the model. The model hyperparameters were optimized through Bayesian optimization to achieve the highest correlation score. The histogram algorithm was incorporated in the LightGBM model (Figure 8a) to discretize the continuous floating-point feature values into m integers and construct a histogram of width m. The optimal splitting points were then determined by traversing the histogram bins rather than all of the datapoints. A leaf-wise tree growth strategy was used to perform the splitting: the leaf with the largest splitting gain was identified, and new leaves were grown on it, as shown in Figure 8. To avoid overfitting while maintaining high efficiency, a maximum depth was introduced to limit the depth of the tree.
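The histogram algorithm described above can be sketched as follows; the bin count m and the accumulation of per-bin gradient sums follow the standard LightGBM formulation rather than details given in the paper:

```python
def build_histogram(values, gradients, m=16):
    """Histogram algorithm sketch: discretize a continuous feature
    into m integer bins and accumulate per-bin sample counts and
    gradient sums; split points are then scanned over the m bins
    instead of over every distinct feature value."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / m or 1.0
    counts = [0] * m
    grad_sums = [0.0] * m
    for v, g in zip(values, gradients):
        b = min(int((v - lo) / width), m - 1)   # clamp the max value
        counts[b] += 1
        grad_sums[b] += g
    return counts, grad_sums
```

Scanning m bins instead of n distinct values is what makes LightGBM's split search fast for the 54D feature vectors used here.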
The regression fitting results were evaluated using the root-mean-squared error (RMSE, Equation (7)). The RMSE measures the difference between the predicted and true values; it is the square root of the average of the squared differences between the predicted and actual observations. However, owing to the large differences in the units and data ranges of the four key action indicators, the RMSE could not reflect the performance differences across the key motion indicators. Therefore, the correlation score (Equation (8)) was used to indicate the correlation between the predicted and true values. These two evaluation metrics were used to describe the fitting error of the regression model.
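The two evaluation metrics can be sketched in a few lines; note that the paper's exact Equation (8) is not reproduced here, so the correlation score below is implemented as the coefficient of determination (R²), which is an assumption:

```python
import math

def rmse(y_true, y_pred):
    """Root-mean-squared error between predictions and ground truth."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

def correlation_score(y_true, y_pred):
    """Unit-free agreement score, here the coefficient of
    determination R^2 (assumed form of the paper's Equation (8))."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot
```

Because R² is scale-free, it allows the four indicators (angles in degrees, backspin in revolutions per second) to be compared on a common footing, which the RMSE cannot do.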
To enhance the accuracy of the regression model, Bayesian optimization[30] was applied for hyperparameter tuning of the LightGBM model. Bayesian optimization involves two key components: a probabilistic model of the objective function and an acquisition function. The probabilistic model typically uses Gaussian process regression. The acquisition function can be defined using the expected improvement, probability of improvement, or upper confidence bound; the balance between exploration and exploitation is ensured by the acquisition function, aq(x). The Bayesian optimization process involves the following steps: 1) based on the Gaussian process, initialize the probability distribution of the surrogate function; 2) sample several points according to the prior distribution of the current surrogate function and aq(x); 3) determine the value of the objective function at the samples obtained in Step 2; 4) update the prior distribution of the surrogate function based on the output of Step 3; and 5) repeat Steps 2-4 until the optimal solution is found.
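Steps 1-5 above can be sketched as a minimal 1D Bayesian optimization loop with a Gaussian-process surrogate and an expected-improvement acquisition function (the kernel, length scale, and grid-based acquisition maximization are illustrative simplifications, not the paper's implementation):

```python
import numpy as np
from math import erf

def rbf(a, b, length_scale=1.0):
    """Squared-exponential kernel matrix between 1-D point sets."""
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(X, y, Xs, jitter=1e-6):
    """Gaussian-process posterior mean and std at candidate points Xs."""
    K_inv = np.linalg.inv(rbf(X, X) + jitter * np.eye(len(X)))
    Ks = rbf(X, Xs)
    mu = Ks.T @ K_inv @ y
    var = 1.0 - np.sum(Ks * (K_inv @ Ks), axis=0)  # prior variance is 1
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mu, sigma, best):
    """Acquisition function aq(x) for maximization."""
    z = (mu - best) / sigma
    Phi = 0.5 * (1.0 + np.vectorize(erf)(z / np.sqrt(2.0)))
    phi = np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)
    return (mu - best) * Phi + sigma * phi

def bayes_opt(f, lo, hi, n_init=4, n_iter=15, seed=0):
    """Steps 2-5: propose via EI, evaluate, update surrogate, repeat."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, n_init)
    y = np.array([f(x) for x in X])
    grid = np.linspace(lo, hi, 201)
    for _ in range(n_iter):
        mu, sigma = gp_posterior(X, y, grid)
        x_next = grid[int(np.argmax(expected_improvement(mu, sigma, y.max())))]
        X = np.append(X, x_next)
        y = np.append(y, f(x_next))
    i = int(np.argmax(y))
    return X[i], y[i]
```

In the paper's setting, f would be the cross-validated correlation score of a LightGBM configuration rather than the toy 1D objective shown here.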
The Bayesian tuning involves fewer iterations and exhibits a higher speed than grid search, whose large number of parameter combinations can lead to dimension explosion.[31] Random search is faster than grid search because it does not train over all parameter combinations.[32] Bayesian tuning may be faster or slower than random search, depending on the application. Bayesian parameter tuning is robust for nonconvex problems, grid search can easily become trapped in local optima for nonconvex problems, and random search cannot guarantee finding the optimum for any problem. Therefore, Bayesian optimization is superior to grid search and random search. To compare the performance of the three parameter tuning methods, tests over the experimental data were conducted using four models: the original model without parameter tuning and models tuned by Bayesian optimization, random search, and grid search. The results are presented in Figure 9 and Table 4. The LightGBM model tuned by Bayesian optimization consumed less time and achieved higher performance (RMSE of 0.837 and correlation score of 98.7%) than the other two tuning methods.

(The definitions of the nine features, including the mean, standard deviation, kurtosis, and integral of each data segment, are given in Table 3.)

Edge Computing
Edge computing provides network, computing, storage, and application capabilities close to the object or data source, offering services near the end device, thereby reducing the load on the data center and improving data security. The edge computing framework includes the device layer, network layer, and data layer. The device layer monitors the environment and collects data, for instance, from different sensor devices. The network layer transmits the sensor readings to the data layer. The data layer stores, analyzes, and displays the received data.
To enhance the applicability of the proposed approach, we could further combine the Bayesian-optimized LightGBM model with edge computing technology to design a quantitative basketball shot performance evaluation system in the future. The proposed system structure is shown in Figure 10. The system collects acceleration and angular velocity data through an MPU9250 nine-axis IMU, as shown in Figure 10b. Because machine-learning models are computationally expensive, embedding the entire LightGBM model into the chip would strain its limited resources. We therefore trained the model on a computer, determined the most accurate model by Bayesian optimization, and visualized the generated trees using the plot function of the LightGBM library. The model could then be optimized for embedding into the wristband for edge computing on the device side, and the quantified shooting performance indicators would be transmitted to a mobile device for display through wireless communication, providing real-time objective guidance during player training. Owing to hardware performance limitations, the model accuracy on the wearable device would be lower than that on a computer. To prevent the wristband from interfering with hand movements, we used a device that is smaller (≈47 × 20 × 11 mm) and lighter (7.9 g) than most commercial smart wristbands.

Experimental Results
As mentioned previously, the four shooting performance indicators varied across the 16 participants. To verify the performance of the proposed regression model, we built individual models for each participant. The 60 samples collected from each of the 16 participants were randomly divided into training and test sets with a ratio of 9:1 and used to train and test the models, respectively. To prevent overfitting, fivefold cross-validation was used for validation. The prediction correlation scores of the four action indicators for S1-S16 are shown in Table S1, Supporting Information. To examine whether a general model could be used to predict the shooting indicators of the 16 participants, we built two groups of regression models using 200 samples from S1 and 200 samples randomly chosen from S1-S16. To assess the prediction accuracy of the established models, two assessment metrics (RMSE and correlation score) were chosen. It should be noted that the RMSE percentage was determined by dividing the RMSE by the data range to more easily demonstrate the model's accuracy. Figure 11 shows the quantitative prediction results for the S1 samples. The values predicted by the S1 model were closer to the true values than those of the general model, demonstrating that the individual model outperformed the general model. The results of the two models are compared in Table 5. The performance of the S1 model was clearly superior to that of the general model; for example, the average prediction correlation scores of the S1 and general models were 98.4% and 80.4%, respectively, and the difference in the RMSE percentage ranged between 0.46% and 9.42%. Therefore, individual models should be built to forecast the four indicators for each player, because different people's shooting performance indicators have distinct characteristics.
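The fivefold cross-validation described above can be sketched with a deterministic fold generator (illustrative; the study's exact shuffling and stratification are not specified):

```python
import random

def kfold_indices(n, k=5, seed=0):
    """Deterministic k-fold index splitter: yields (train, test)
    index lists such that each sample appears in exactly one test fold."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]           # k interleaved folds
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test
```

With the 60 samples per participant used here, each cross-validation round trains on 48 samples and validates on the remaining 12.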
Moreover, the sample size influenced the prediction accuracy. The accuracy of the regression prediction based on 200 samples from Subject1 was higher than that based on 60 samples, with the correlation score increasing from 94.5% to 98.4%.
Throughout the trial, we recorded the shooting percentage of participants S1-S16 and examined whether the four selected shooting indicators could predict whether a shot was successful. We used the LightGBM classification model for this prediction, with the four indicators of S1-S16 as input and the shot outcome as the prediction label, and evaluated the results using the accuracy metric. The results are shown in Table S2, Supporting Information; the prediction accuracy ranged from 66.7% to 91.7%.[35] The prediction accuracy of the Bayesian-optimized LightGBM model proposed in this article is significantly better than the results of existing research (Table 6). Based on the shot prediction results, we found that the performance indicators of a successful shot vary among participants. The prediction accuracy is affected by the sample distribution, and the prediction accuracy for samples with balanced categories may be higher. Therefore, each participant requires a unique model to predict the shot outcome from the performance indicators, which can also serve as feedback for the player to improve shooting performance. For example, a player could collect the performance indicators of multiple shooting motions and determine the ranges corresponding to a successful shot; if one or several indicators fall outside these ranges, the player can adjust the shooting motion to fit the ranges and achieve good performance. To analyze the significance of each indicator's impact on the shooting percentage, we analyzed the feature importance of each indicator when predicting the shot outcome. The LightGBM model calculates feature importance by accumulating the split gains of each feature. The split gain is the mean-square-error change of each feature before and after splitting, that is, the mean square error of the current node minus that after the split, as shown in Equation (9). The LightGBM
model calculates the split gain of each feature over all decision tree nodes and sums them to obtain the total split gain of the feature, as shown in Equation (10), where T represents the total number of decision trees and Split gain_i represents the sum of the split gains over all nodes of the ith decision tree; this total is the feature importance. The color-level diagram of the feature-importance results is shown in Table S3, Supporting Information. We found that the BS, EA, and MA indicators have a relatively large impact on the shooting percentage, whereas the SA indicator has less impact.

Feature importance = Σ_{i=1}^{T} Split gain_i (10)
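The accumulation of split gains into a feature importance, as in Equation (10), can be sketched directly (the tree representation below is a deliberate simplification; LightGBM stores this internally):

```python
def feature_importance(trees):
    """Total split gain per feature, summed over every split of every
    tree. Each tree is represented here as a list of
    (feature_name, split_gain) pairs, one per internal node."""
    importance = {}
    for tree in trees:
        for feature, gain in tree:
            importance[feature] = importance.get(feature, 0.0) + gain
    return importance
```

Ranking the four indicators by this accumulated gain is what yields the observation that BS, EA, and MA influence the shooting percentage more than SA.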

Conclusions
To quantitatively assess the quality of the shooting action in basketball, four key action indicators, SA, BS, EA, and MA, were selected. A Bayesian-optimized LightGBM model was used to perform regression fitting analysis on the four shot evaluation indicators. Experiments were conducted to collect data for building the individual models and a general model. The average correlation score and RMSE of the fitting results for the individual models were 98.4% and 1.198, respectively, and the corresponding values for the general model were 80.4% and 3.908. To enhance the applicability of the proposed approach, edge computing could in the future be combined with the wearable smart sensor and the machine-learning model to build a basketball shot quality assessment system that feeds the quantitative assessment results back to the player in real time.

Figure 1. Proposed system architecture to quantitatively assess the key action indicators in basketball shooting.

Figure 2. Experimental setup to acquire the subject's motion data. Camera 1 records from in front of the subject: 1.4 m from the ground and 1.82 m from the subject. Camera 2 records from the side of the subject: 1.4 m from the ground and 2.12 m from the subject. A computer receives and saves the micro-inertial measurement unit (μIMU) data. a) Enlarged view of the wearable μIMU device worn on the left hand. b) Schematic diagram of the three-axis directions of the μIMU. c) View captured by Camera 1.

Figure 3. Schematic of the shooting angle (SA) and backspin speed (BS). SA and BS are obtained by the IMU sensor embedded in the smart basketball.

Figure 4. OpenPose processing to identify the joint positions. The network is a two-branch, multistage convolutional neural network. The first branch predicts the confidence maps, which can be regarded as scoring maps. The second branch predicts the part affinity fields. Each branch has multiple stages, and the input of each stage is the fusion of the outputs of the two branches of the previous stage with the original image features.

Figure 5. Schematic of calculating the elbow abduction angle (EA). a) Schematic diagram of the EA. b) Schematic diagram of the maximum elbow abduction angle (MA). c) Calculation model of the EA. d) Schematic diagram of the EA in 3D space and the 2D plane. e) Comparison of the EA and MA for 200 samples from Subject1 with the two calculation methods (2D plane and 3D space).

Figure 6. Distribution of the four shooting indicators for Subject1 to Subject16.

Figure 7. Triaxial a) acceleration and b) angular velocity data segments for one shooting action by Subject14. c) Shooting action process.

Figure 9. Comparison of the results of the three parameter tuning methods.
(In Equation (9): N represents the number of samples; G_L and G_R represent the average values of the left and right sub-nodes, respectively; T_L and T_R represent the target values of the left and right sub-nodes [the true values in the regression problem]; H_L and H_R represent the Hessian matrix sums of the left and right sub-nodes, respectively; and λ is a regularization parameter.)

Figure 10. Proposed system to quantitatively evaluate key basketball movements. a) Overall appearance of the wristband. b) Circuit board of the smart basketball wristband. c) Drawing and dimensions of the designed printed circuit board. d) The blue light-emitting diode is on when the smart basketball works.

Figure 11. Regression results of the four shooting indicators obtained by the LightGBM regression model using 200 samples from Subject1 as input.

Table 3. Nine features to describe the shooting motion.

Table 4. Comparison of the three parameter tuning methods.

Table 5. Data analysis results.

Table 6. Prediction accuracy compared with previous studies.