A Wearable Device Integrated with Deep Learning‐Based Algorithms for the Analysis of Breath Patterns

Sleep disorders, including sleep apnea, are serious issues that diminish quality of life. Sleep apnea, defined as a cessation of breathing lasting more than 10 s, is linked to severe health problems due to the serious damage it can induce. To mitigate the risk of these disorders, continuous monitoring of patients has become increasingly important. Wearable technologies offer an effective healthcare solution for remote patient monitoring and diagnosis. A novel wearable system based on Arduino technology is introduced, specifically designed to monitor the breath patterns of patients. The analysis of breath data from patients holds great importance for the diagnosis and continuous monitoring of sleep apnea. To address this need, an advanced image processing system based on deep learning techniques is presented. This system automatically detects respiratory patterns, including inhalation, exhalation, and breathlessness. The device achieves an average of 97.6% sensitivity, 79.7% specificity, and 96% accuracy in identifying breath patterns. The designed device can offer patients and healthcare institutions a simple, inexpensive, noninvasive, and ergonomic system for the analysis of breath patterns that can be further extended for sleep apnea diagnosis.

for cost-effective and automated products that can provide straightforward and real-time measurements while ensuring high accuracy, sensitivity, and specificity rates.
Another highly useful technique in deep learning is object detection.[24] This technique provides a fast and accurate way to predict the position of an object in an image.[25] In recent studies, YOLO, which allows faster prediction than other algorithms, was presented.[26] The use of Tensorboard and TensorFlow tools in this algorithm is important for deep learning and object detection techniques, as they enable more interpretable models and measurable performance parameters.[27] Tensorboard provides metrics and visualizations of the TensorFlow deep learning dataflow and includes tools for graph visualization of the object detection model.[28] YOLO algorithms and object detection have been used for sleep apnea detection and monitoring by analyzing polysomnography signals[29] and by observing real-time human poses involving leg, arm, and body rotations during breathing.[30]

Here, we present a wearable device that utilizes an image-based strategy to detect breath patterns, including inhalation, exhalation, and breathlessness, in real time (Figure 1). The developed system measures the diaphragm movement resulting from respiration with an accelerometer placed on the diaphragm. A temperature sensor on the device was also utilized to validate breath patterns by monitoring nasal airflow as a reference measurement. To simulate sleep apnea conditions, breathing and breathless situations were realized in different body positions in healthy individuals. Afterward, the accelerometer data were remotely transferred and visualized for breath analysis. Image processing and YOLO algorithms were applied to the visualized data to determine breathing patterns. Using the trained dataset, the numbers of inhaled breaths, exhaled breaths, and breathless durations could be automatically determined in new subjects. Thus, our deep learning algorithms enhance prediction performance in detecting breath patterns from a simple accelerometer. Moreover, the detection speed has been accelerated, leading to real-time breath pattern detection. As a result, we developed a cost-effective, portable, noninvasive, easy-to-use platform for breath pattern analysis, which holds great potential for sleep apnea monitoring and detection.

Design of the Wearable Device
The developed wearable device, which analyzes breath patterns, is shown in Figure 1a. The device consists of a high-resolution (13-bit) triple-axis accelerometer (ADXL345, Analog Devices, USA) located on the diaphragm to detect diaphragm movements and a microcontroller (Arduino Uno) to process and transfer the sensor data. The breath cycle can be measured by the accelerometer because the diaphragm moves along an axis perpendicular to the abdomen during breathing. Maximum and minimum peak acceleration values of the diaphragm occur in the inhaled and exhaled states; hence, counting these peak locations can be used to analyze breathing frequency. Moreover, breathlessness, a symptom of sleep apnea, can also be determined from a time period in which a constant reading is obtained from the accelerometer. To verify the breath pattern, a temperature sensor (MLX90615, Melexis, Belgium) was also placed on the upper lip near the nostrils during measurements. By doing so, hot and cold airflows during exhaling and inhaling could be identified, and the relationship between nasal breathing and diaphragm movements could be matched. The collected data were transferred over the microcontroller unit to a remote computer by binary streaming of the acceleration and temperature measurements. Both sensors used the inter-integrated circuit (I²C) communication protocol for transferring data over the Arduino (Figure 1b). Each sensor has a unique I²C address that determines the sensor behavior, and the designed system set the measurement parameters according to the I²C communication protocol. In the measurements, the accelerometer provided three outputs measuring the gravitational acceleration (g) along three axes, and the temperature sensor provided two outputs measuring the breath and environment temperatures. The real-time transmission of these data was realized using the data streamer on the remote computer (Monster Tulpar T7 V20.4, Monster Notebook, Turkey) over the microcontroller. The real-time data stream was visualized on the computer and processed using splines to remove noise. Afterward, the breath pattern classes on the obtained images were annotated, trained, and tested using the YOLOv5m (medium) architecture (Figure 1c).
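The peak-counting and constant-reading logic described above can be sketched in a few lines. This is a minimal illustration on synthetic values; the function names, thresholds, and sample data are our own assumptions, not the device firmware:

```python
def count_breaths(z_accel, min_prominence=0.05):
    """Count breaths by locating local maxima (inhale peaks) in the
    z-axis acceleration; a sample must rise at least `min_prominence`
    (in g) above both neighbors to be counted as a peak."""
    peaks = 0
    for i in range(1, len(z_accel) - 1):
        if (z_accel[i] - z_accel[i - 1] >= min_prominence and
                z_accel[i] - z_accel[i + 1] >= min_prominence):
            peaks += 1
    return peaks


def find_breathless(z_accel, dt, flat_tol=0.01, min_hold_s=10.0):
    """Flag breathlessness: a run of near-constant readings (deviation
    below `flat_tol` g from the run's first sample) lasting at least
    `min_hold_s` seconds, matching the >10 s apnea criterion.
    Returns (start_time, end_time) tuples in seconds."""
    run_start = 0
    events = []
    for i in range(1, len(z_accel) + 1):
        if i == len(z_accel) or abs(z_accel[i] - z_accel[run_start]) > flat_tol:
            if (i - run_start) * dt >= min_hold_s:
                events.append((run_start * dt, i * dt))
            run_start = i
    return events
```

On real data the prominence and flatness tolerances would need tuning to the sensor's noise floor; the deep learning pipeline described later replaces this hand-tuned logic.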

Simulation of Breath Patterns under Sleep Apnea Conditions
For training the designed model, 150 breath measurements of ≈1.5 min were taken from three healthy volunteers aged between 18 and 24 in a supine position using the accelerometer and temperature sensors. While the data of these three healthy volunteers were used to train the system, the system was tested with the data of another 10 healthy volunteers. In the measurements, the simulation of breathing and breathlessness was carried out in a controlled manner: the subject took 5 nasal breaths in ≈30 s and then held his/her breath for ≈10 s. This procedure was repeated in a loop throughout the measurement period. The collected data from these measurements were used for the development of the object detection algorithms as training, validation, and test datasets.
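The measurement protocol above (5 breaths in ≈30 s followed by a ≈10 s hold) can be rendered as a synthetic signal, which is useful for sanity-checking downstream processing. All values here (sampling rate, amplitude, baseline) are illustrative assumptions, not recorded sensor data:

```python
import math


def simulate_cycle(fs=10, breaths=5, breathe_s=30.0, hold_s=10.0,
                   g0=1.0, amp=0.1):
    """Generate one synthetic breathe-and-hold cycle of z-axis
    acceleration (in g): `breaths` sinusoidal breaths spread over
    `breathe_s` seconds, followed by a flat `hold_s`-second hold."""
    samples = []
    for i in range(int(breathe_s * fs)):
        t = i / fs
        # one full sine period per breath
        samples.append(g0 + amp * math.sin(2 * math.pi * breaths * t / breathe_s))
    samples += [g0] * int(hold_s * fs)  # breath-hold: constant reading
    return samples


cycle = simulate_cycle()  # 30 s breathing + 10 s hold at 10 Hz -> 400 samples
```

Looping this cycle reproduces the simulation sequence used throughout the measurement period.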
The trained deep learning-based method was also tested in standing, sitting, and supine positions. Hyperpnea and mouth breathing were also tested to evaluate the performance of the wearable device and image processing algorithm under different conditions. For the standing position, the healthy individual repeated the simulation cycle while standing on both feet with the thorax perpendicular to the ground surface and the spine upright. In the sitting position, the healthy individual was tested on an upright chair, with the upper legs and trunk perpendicular to each other, without spinal curvature, and with the hands resting on the upper legs. In the case of mouth breathing, a healthy individual underwent mouth breathing in the supine position. In all positions and conditions, the accelerometer and temperature sensor were positioned as shown in Figure 1a. Except for mouth breathing, all trials were performed with nasal breathing. In hyperpnea, the healthy individual completed the simulation cycle in ≈35 s by breathing at twice the normal rate while in the supine position.

Image Processing
In the breath measurement experiments, the sensor data were collected over the Arduino serial interface used for communication between the sensors and the computer. The collected raw data were plotted using GraphPad Prism 8.0.2. The data from the acceleration sensor were expressed as multiples of the gravitational acceleration and plotted against time. The raw accelerometer data contained too much noise for accurate analysis of the breath patterns, so they were processed to obtain smooth breathing and breathless patterns. For this purpose, spline interpolation with 50 knots was applied to the raw accelerometer data. The interpolated curves were drawn in red and then converted to grayscale.
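The knot-based spline smoothing described above can be approximated with a least-squares spline; a minimal sketch, assuming SciPy's `LSQUnivariateSpline` as a stand-in for the spline tool used in the study (the knot placement and function name are our assumptions):

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline


def smooth_breath_signal(t, z, n_knots=50):
    """Fit a least-squares cubic spline with roughly `n_knots` knots
    (analogous to the 50-knot spline applied to the accelerometer
    data) to suppress sensor noise. Interior knots are spaced evenly
    over the time range; endpoints are handled by the fit itself."""
    knots = np.linspace(t[1], t[-2], n_knots - 2)  # interior knots only
    spline = LSQUnivariateSpline(t, z, knots, k=3)
    return spline(t)
```

With ≈50 knots over a 1.5 min recording, the spline is flexible enough to follow individual breaths while averaging out sample-to-sample accelerometer noise.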

Classification
A dataset containing a total of 150 images from breath measurements in the supine position was used for classification. The dataset was divided randomly into 120 training and 30 validation images. Classification is one of the important steps for the detection model. For this purpose, the data on the images were annotated using Roboflow (Roboflow Inc., USA) as inhale, exhale, and duration (i.e., breathless duration) classes with rectangular bounding boxes (Figure S1, Supporting Information).
After the annotation of all images was completed, we exported the annotated data in YOLOv5 format from Roboflow. The YOLOv5m architecture was used for the dataset consisting of 640 × 640 pixel training and test images of breathing. The training process was conducted on Google Colaboratory (Colab) using the CPU provided by the Colab system and the Python programming language (Figure S2, Supporting Information).[31] The whole process was operated from an external computer (Tulpar 17 V20.4, Monster Notebook, Turkey). Before starting the training, the necessary parameters, such as image size, batch size, epoch count, and YOLO model, were defined. These parameters were set to 416 × 416 pixels for image size, 16 for batch size, 230 for epochs, and medium for the YOLO model. The model was trained for 230 epochs to determine the maximum-fitness epoch. In the training steps, the image was divided into a grid of cells, each containing probability values for the location and class of an object. Bounding boxes were formed from combinations and intersections of these cells and were used to detect objects in the image. The box with the highest similarity was selected by comparing all predicted boxes using the intersection over union; in this way, only the boxes with the highest detection probability were retained. YOLOv5m was chosen because of its fast and detailed image processing capacity, instant processing of images, and strong predictions. When the training process was done, the training performance was evaluated, and metrics such as recall, precision, mAP@0.5 (mean average precision), and object loss were obtained from YOLOv5 with the Tensorboard machine learning tool. The maximum-fitness epoch was determined as 130, based on the highest calculated mAP@0.5 value. As a result of the training, the weights file corresponding to the 130th epoch, which yielded the best fit, was obtained. These training weights were used with the YOLO detection algorithm to detect objects.
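The intersection-over-union criterion used above to keep only the highest-overlap predicted box is a standard computation; a minimal sketch on corner-coordinate boxes (the box format and example values are illustrative, not the model's actual outputs):

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as
    (x1, y1, x2, y2) corner coordinates, as used by YOLO's
    non-maximum suppression to discard overlapping predictions."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # overlap area
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

During training, a predicted box is matched to an annotated inhale, exhale, or duration box when their IoU exceeds the chosen threshold (0.5 for the mAP@0.5 metric reported here).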
The diaphragm acceleration data were collected from the Arduino data streamer into a data sheet. The collected data were smoothed to remove noise using spline curves with 50 knots. The data were visualized as a video stream using the matplotlib and animation libraries of Python (Figure S3, Supporting Information). The animated video was used for the real-time detection of the inhale, exhale, and duration breath classes using the deep learning-based image classification algorithm. The classification was conducted using YOLOv5; the inhale, exhale, and duration values were calculated for each frame of the real-time video stream, and the finalized result was reported.
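The per-frame detections must be aggregated into final inhale, exhale, and duration counts. The exact counting logic is not specified in the text; the sketch below is one plausible approach, keeping the highest-confidence label per frame and merging consecutive identical labels so each breath event is counted once (the data format and merging rule are our assumptions):

```python
def summarize_detections(frames):
    """Aggregate per-frame detections into final class counts.
    `frames` is a list of per-frame detection lists, each detection
    a (class_name, confidence) pair. The highest-confidence label
    per frame is kept; consecutive duplicates are merged so one
    breath event spanning several frames is counted only once."""
    counts = {"inhale": 0, "exhale": 0, "duration": 0}
    prev = None
    for dets in frames:
        if not dets:          # no detection in this frame
            prev = None
            continue
        label = max(dets, key=lambda d: d[1])[0]
        if label != prev:     # new event begins
            counts[label] += 1
        prev = label
    return counts
```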

Sensor Data Obtained during Sleep Apnea Simulation
During the sleep apnea simulation in the supine position, the acceleration data were recorded by the device along three different axes (Figure 2a-c). Since the diaphragm accelerates perpendicular to the thorax (i.e., along the z-axis) during breathing, the acceleration data on this axis showed clear diaphragm movements. In contrast, breathing patterns were not clearly observed in the acceleration data obtained from the other axes. Breathing patterns can be verified using breath temperature analysis[32-34] with an infrared temperature sensor placed near the nostrils (Figure 1a). The measured breath temperature data proceeded harmonically with the acceleration data from the diaphragm movements in the z-axis (Figure 2d). Hence, the acceleration data could be used to detect breath patterns as reliably as the breath temperature.

Deep Learning-Based Breath Analysis
A deep learning-based method has been developed for breath analysis on the z-axis acceleration data. In this method, real-time image classification was performed using the YOLO architecture to identify inhaling, exhaling, and duration in the breath patterns. The model was trained for 230 epochs to determine the maximum-fitness epoch, based on the highest mAP@0.5 value, for better prediction results. The mAP@0.5 value reached 0.9 after the 60th epoch, and the maximum fitness was measured at the 130th epoch of the trained model. The prediction performance parameters of the model on the trained dataset were examined for the 130th epoch and calculated as 0.94629 for precision, 0.94772 for recall, 0.97185 for mAP@0.5, and 0.077332 for object loss (Figure S4, Supporting Information). These results indicate that the trained model is well suited to detect inhalation, exhalation, and breathlessness from the diaphragm acceleration.
Moreover, the confusion matrix comparing predicted and true classes is given for the performance analysis of the trained model (Figure 3). According to the confusion matrix, all classes were predicted with ≥0.96 accuracy. The sensitivity values of the inhale, exhale, and duration classes are 0.99, 0.98, and 0.96, respectively, so the model accurately predicts objects in the related classes. The specificity values for the inhale, exhale, and duration classes are 0.678, 0.741, and 0.979, respectively; thus, background regions can occasionally be misclassified as inhale or exhale. However, the specificity of the duration class is more important than that of the inhale and exhale classes, since the breathless condition is specifically associated with sleep apnea. In this sense, the model can detect breathlessness conditions with high sensitivity and specificity, enabling its potential use in the diagnosis and monitoring of sleep apnea.
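The per-class sensitivity and specificity figures discussed above follow directly from a confusion matrix via the standard one-vs-rest definitions; a minimal sketch (the example counts are illustrative, not the study's actual matrix):

```python
def sensitivity_specificity(cm):
    """Per-class sensitivity (TP / (TP + FN)) and specificity
    (TN / (TN + FP)) from a square confusion matrix `cm`, where
    cm[i][j] counts samples of true class i predicted as class j.
    Returns a list of (sensitivity, specificity) tuples."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    results = []
    for c in range(n):
        tp = cm[c][c]
        fn = sum(cm[c]) - tp                       # missed class-c samples
        fp = sum(cm[r][c] for r in range(n)) - tp  # other classes called c
        tn = total - tp - fn - fp
        results.append((tp / (tp + fn), tn / (tn + fp)))
    return results
```

Applied to the trained model's confusion matrix, this computation yields the reported 0.99/0.98/0.96 sensitivities and 0.678/0.741/0.979 specificities for the inhale, exhale, and duration classes.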

Effect of Different Conditions on Breath Analysis
The trained deep learning-based method was also tested in different body positions (standing, sitting, and supine) and under hyperpnea and mouth breathing in the supine position to examine the tolerance of the method. The z-axis acceleration data clearly showed inhaling, exhaling, and duration in the breath patterns for the different body positions, which was also verified with the temperature measurements (Figure S5, Supporting Information). Furthermore, the developed deep learning method successfully predicted breath cycles and durations for each position and condition (Figure 4). The breath patterns can be determined from both the acceleration and temperature data in the supine, sitting, and standing positions. However, the temperature data were not sufficient to accurately determine the breathing patterns during the mouth breathing and hyperpnea conditions (Figure S5, Supporting Information). Thanks to the acceleration data, breath analysis can be conducted under all conditions. In addition, changing the patient's position does not affect the performance of the prediction model.

Real-Time Breath Analysis
The real-time detection of the inhale, exhale, and duration classes in breath patterns was conducted with the trained deep learning-based algorithm. For this purpose, the collected acceleration data were drawn in real time as a spline-fit curve at 100 ms intervals as a video stream, and the algorithm was applied to this video (Video S1, Supporting Information, and Figure 5). Although the algorithm was trained on still images, it can also accurately detect the different classes in the video. The model, which can detect a breath class in 73.77 ± 3.36 ms, can rapidly classify an incoming video stream while the data are being plotted. As a result, the model can provide real-time breath analysis.

Comparison of Breath Analysis Methods
Wearable devices used for analyzing breathing patterns are reviewed in Table S1, Supporting Information, together with their features and performance. There are several techniques, such as acoustical,[35] photoelectrical,[36] electrical,[37-40] electrochemical,[41] and mechanical[42-48] methods, to perform breath analysis and detect breath patterns.[51] In addition to direct breath analysis methods, the analysis can be made with machine learning and deep learning techniques for automatic and classified results.[35,39,46,47,50] These techniques are differentiated by their sensors and the developed analysis models. In most of these studies, classification is made according to the presence of the disease; thus, breath patterns are not interpreted. In our designed system, however, diaphragm movements are detected by means of an accelerometer for sensitive detection of breathing patterns. Moreover, the deep learning algorithm developed with the data obtained from the diaphragm movements can classify the breath patterns (inhalation, exhalation, and breathlessness), and this classification can be done in different body positions and conditions. The detection of breath patterns from diaphragm acceleration with image processing and deep learning algorithms has been shown for the first time in this study. Besides, the designed system shows superior performance to other devices and systems in terms of its high accuracy and sensitivity (>0.96). The system, which allows real-time breath analysis, is also easily accessible owing to its low-cost (<$30) and easy-to-find components (Table S2, Supporting Information). Hence, the presented method shows that deep learning-based algorithms can boost the performance of simple accelerometers for accurate and automated breath pattern analysis.

Conclusion
In this study, a wearable device has been developed for monitoring breath patterns by measuring diaphragm movements in real time. The diaphragm movements were monitored with an acceleration sensor attached to the patient's diaphragm. The obtained acceleration data were preprocessed to eliminate noise and then visualized. Afterward, the visualized data were used to train the deep learning algorithm to detect inhale, exhale, and breathless duration. With the trained model, the inhalation, exhalation, and breathlessness conditions were detected successfully from the acceleration of the diaphragm with 97.6% sensitivity, 79.7% specificity, and >96% accuracy on average. Furthermore, breath analysis in different body positions did not affect the measurement accuracy. Thus, the proposed system allows robust classification of breath patterns. This intelligent system with miniaturized and portable components also provides real-time breath analysis. Therefore, the system equipped with these features would enable remote and automated patient monitoring, thereby facilitating the diagnosis of sleep apnea.

Figure 1 .
Figure 1. The developed wearable device for the analysis of breath patterns. a) Breath measurement setup. An accelerometer and a temperature sensor connected to an Arduino microcontroller were utilized to sense diaphragm movements and nasal airflows, respectively, to determine breathing patterns. b) Wiring diagram of the setup. c) Analysis of breath patterns. Accelerometer data were processed using splines and annotated with the breath pattern classes (i.e., inhale, exhale, and duration (breathlessness)). Then, the YOLOv5m architecture was trained with these data to automatically detect the breath pattern classes. Temperature data were used only to validate the obtained breath pattern classes as a reference measurement.

Figure 2 .
Figure 2. Acceleration measurements of the diaphragm along the a) x-axis, b) y-axis, and c) z-axis, and d) their comparison with breath temperature measurements during the sleep apnea simulation. In the simulation, a sequence of 5 breaths (inhaling and exhaling) followed by 1 breathless duration (i.e., breath-holding) was tested as one cycle. The acceleration data are represented as multiples of the gravitational acceleration (g = 9.8 m s⁻²). The colored lines on the graphs are the spline curves fitted to the data.

Figure 3 .
Figure 3. Confusion matrix of the trained model for predicted and true classes.

Figure 4 .
Figure 4. Predicted inhale, exhale, and duration classes in breath patterns monitored using acceleration data collected for a) supine, b) standing, c) sitting, d) mouth breathing, and e) hyperpnea conditions. The time scale bar is given for each condition.

Figure 5 .
Figure 5. Real-time detection of classes in breath patterns at the a) 10 s, b) 30 s, c) 50 s, d) 70 s, and e) 90 s time intervals.