Quantitative and Real-Time Control of 3D Printing Material Flow Through Deep Learning

3D printing could revolutionize manufacturing through local and on-demand production while enabling uniquely complex and custom products. However, 3D printing's propensity for production errors prevents autonomous operation and the quality assurance necessary to realize this vision. Human operators cannot continuously monitor or correct errors in real time, while automated approaches predominantly only detect errors. Newer methodologies correct parameters either offline or with slow response times and poor prediction granularity, limiting their utility. Here, commonly available 3D printing process metadata is harnessed, alongside video of the printing process, to build a unique image dataset. Regression models are trained to precisely predict how printing material flow should be altered to correct errors, and these predictions are used to build a fast control loop capable of 3D printing parameter discovery and few-shot correction. Demonstrations show that the system can learn optimal parameters for unseen complex materials and achieve rapid error correction on new parts. Similar metadata exists in many manufacturing processes, and this approach could enable the adoption of fast data-driven control systems more widely in manufacturing.


Introduction
Material extrusion is the most widespread 3D printing or additive manufacturing technology due to its low cost, ease of use, minimal post-processing, and compatibility with a broad palette of functional materials. [1][2][3] Presently, the technology is promising across a range of fields including medical devices, [4] soft robotics, [5] and building construction. [6] However, extrusion 3D printing is vulnerable to a plethora of errors that limit the adoption of these 3D printed products. These errors often arise from the open-loop nature of the manufacturing process, [7][8][9] as process monitoring, parameter selection, and the application of corrections are currently entirely manual procedures performed by expert human operators, primarily through visual inspection and learned experience. Thus, there is considerable scope for the deployment of deep learning and computer vision techniques to automate and improve these processes. Furthermore, due to the slow manufacturing speed, it is beneficial to identify and correct defects at an early stage to avoid wasted material, energy, and time, making automated monitoring ideally suited to this setting. Recently, reinforcement learning techniques have been applied to control in numerous applications [10] such as robotics, [11] the electrical grid, [12] and even nuclear fusion. [13] Additionally, supervised methods have been used effectively in many fields such as autonomous vehicles, with end-to-end networks enabling impressive real-time control. [14,15] These supervised approaches often utilize the latest deep convolutional neural networks and vision transformers [16][17][18] to determine appropriate actions, such as a steering angle, from input images. Limited work exists on applying such controllers to manufacturing, especially through the utilization of readily available metadata as continuous image labels.
Furthermore, many existing control approaches apply large models which require considerable data and compute resources and are thus challenging to scale.
Indirect methods have been developed to detect errors during the additive manufacture of parts by monitoring acoustic emissions and printer vibrations [19,20] as well as motor current. [21,22] However, these methods often require accurate physics models in addition to expensive equipment, and their signals are not sufficiently rich to identify or correct diverse errors. Correct extrusion creates highly repeatable and uniform patterns, and thus a broad range of defects can be determined with vision. There have been several attempts at using vision-based approaches to detect errors during the 3D printing of parts. Traditional image processing and computer vision techniques have been applied with a single camera to detect large errors such as layer shifts and poor infill. [23][24][25][26] These methods, though, often struggle to detect finer-scale error modes and image features. Other methods use multiple cameras to better handle occlusions and to enable 3D reconstructions of parts via techniques such as structured light scanning. [27][28][29][30] These approaches can detect a wider range of errors in greater detail. However, they are often monetarily and computationally expensive, create extra work for users with calibration steps, are sensitive to lighting conditions and part surface properties, and can be limited by scanner resolution.

DOI: 10.1002/aisy.202200153
While the automated detection of errors is useful, expert operators are still required to make the necessary online and offline parameter corrections. Recently, deep learning vision-based approaches have been used to autonomously apply corrections offline after printing for a range of errors. [31][32][33] These methods are very useful for errors that build over time as the material cools, such as cracking and warping, and for nonrecoverable failures, for example, poor bridging. Although a significant step forward, these approaches still fail to catch many internal error modalities owing to their externally mounted cameras, and they result in wasted prints as corrections are applied after the fact.
Errors can be reduced further and corrected online during printing by combining vision with traditional real-time feedback loop strategies. [33][34][35] Object detection networks [36] have been used along with image processing techniques to localize specific errors and estimate their severity to facilitate the appropriate response. [33] However, these methods were focused on slow-growth errors and thus do not present true real-time control. Additionally, manual labeling of the training dataset was required. Real-time control of flow rate has been explored using classification methods such as k-NN [34] and ResNet models. [35] These approaches, although a large step in the right direction, currently yield slow response times. This is primarily due to the classification-based approach used in these works, where the suboptimal flow rate is categorized as either under- or over-extrusion. Corrections are thus applied with single-scale fixed increments or estimations derived from classifications, and large errors take numerous updates to fix. Additionally, choosing the appropriate step size involves a compromise between steady-state error and correction speed. The slow response time is further compounded by slow sampling frequencies of 1 [34] and 3.33 Hz, [35] lengthy prediction filtering strategies, and G-code execution delays. In reality, flow rate is a continuous variable, and thus for real-world deployment, systems capable of estimating the precise level of flow rate are required for faster correction and improved performance.
In this work, commonly available 3D printing process metadata from firmware was harnessed, alongside real-time video of printing, to build a new dataset of over 250 000 automatically labeled images. With this dataset, vision regression models were trained to precisely predict material flow as a continuous variable, in turn enabling true proportional correction. We coupled this monitoring of real-time video with a new feedback loop capable of 3D printing parameter discovery and few-shot correction. This combined control system ran at sampling rates of nearly 15 Hz, over 4 times faster than the previous state of the art. Finally, response times were further reduced through toolpath splitting and optimized prediction filtering. Experiments showed that the system can learn the optimal flow rate for unseen complex materials and achieve rapid few-shot error correction on new parts. Our method improves on prior work in 3D printing error detection, correction, and flow rate control by applying models capable of continuous prediction of a single parameter, compared to the previous discrete prediction approaches. This is achieved using a lightweight model and efficient sampling to enable significantly faster feedback. The required data can be rapidly collected and the models quickly trained, with the whole process from no data to a deployed control system taking only 8 h in total. Similar metadata exists in many manufacturing processes, and this easy-to-deploy approach could enable the adoption of fast data-driven control systems more widely in manufacturing.

Autolabeled Dataset Generation
A unique data acquisition system was developed to capture high-quality images of the material deposition process in extrusion AM and to label each image with metadata, specifically the current material flow rate. An endoscope camera was attached to a low-cost 3D printer to capture images of the extrusion process during printing (see Figure 1A). An overview of the steps in this data acquisition system is shown in Figure 1B. This pipeline enabled the creation of an entirely new autolabeled dataset of 253 405 images in less than 5 h. Sample images from this dataset alongside their labels are shown in Figure 1C. Uniquely, the approach is scalable to any number of printers, in the future enabling the creation of large datasets from fleets of machines.
A range of labeled flow rate levels was required to enable the precise real-time prediction of flow rate for future unseen prints. At run time, the printer operates on relative flow rate levels (in integer percentages), which act as multipliers for the specified absolute flow rate. Upper and lower relative flow rate limits were set to three times (300%) and one-third (33%) of optimal extrusion. The natural log space of this range was sampled at regular intervals to ensure even coverage and a symmetric dataset with an equal number of over- and under-extrusion levels. In total, 19 flow rate levels were used. The resolution and spacing of these levels were determined qualitatively through experimentation by printing samples at a range of flow rates centered around 100% to determine the smallest relative change with a noticeable impact on part quality. For under-extrusion, this was when gaps first appeared between the extruded paths, and for over-extrusion, when the extruded paths overlapped to increase surface roughness, with the optimal range resulting in completely uniform extrusion over the build surface.

Figure 1. Data collection process for a range of flow rates across single-layer geometries. A) Creality CR-20 Pro used for data collection, equipped with a Raspberry Pi Model 4 B+ and an endoscope camera recording video at 1080 pixel resolution. The endoscope was mounted to the printer using an in-house printed clip-on mount. B) Outline of the data collection pipeline. Nineteen levels of flow rate are used in log space, mapping to a range of 1/3 to 3 times optimal flow. C) Image of each flow rate level randomly sampled from the test set.
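For illustration, the log-space level scheme described above can be reproduced with a few lines of NumPy (a sketch, not the exact code used in this work):

```python
import numpy as np

# 19 relative flow rate levels, evenly spaced in natural-log space
# between 1/3 (33%) and 3 (300%) of optimal extrusion.
levels_log = np.linspace(np.log(1 / 3), np.log(3), 19)
levels_pct = np.exp(levels_log) * 100  # relative flow rate in percent

# The spacing is symmetric: each under-extrusion level mirrors an
# over-extrusion level by the same multiplicative factor.
assert np.allclose(levels_log, -levels_log[::-1])
```

The middle level maps back to 100% (optimal flow), giving an equal number of under- and over-extrusion levels on either side.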
In total, 24 prints were completed. These prints all consisted of single-layer circular geometries with a layer height of 0.2 mm and a diameter of 150 mm. All parts were printed with a hotend temperature of 205°C, bed temperature of 60°C, lateral speed of 45 mm s⁻¹, cooling fan enabled, and 3 external perimeters. Due to the constant speed during printing, the flow rate for all sliced parts was 3.60 mm³ s⁻¹ when using a relative flow of 100%. As such, with the selected bounds, the range of flow covered was from 1.20 to 10.8 mm³ s⁻¹. Twelve different infill types were used, with 2 samples printed of each. Six types used 100% linear infill at 30° increments from 0° to 150°. Two used concentric infill at 25% and 100% densities. The remaining four used cross, grid, gyroid, and triangle infill patterns, all at 25% density. This range of infill types was chosen to allow the vision models to generalize across different geometries.
For each print, a flow rate value was randomly sampled without replacement from a uniform distribution over the 19 possible values in log space. This value was then converted back to a relative percentage and sent to the printer to update the flow rate. Upon execution, 1200 images were collected and labeled for that level. Subsequently, another flow rate was sampled without replacement and 1200 images were again captured, until no levels remained. A new complete set of the 19 flow rates was then generated to start the process again. With this method, a total of 253 405 labeled images were collected, taking only 5 h of printing time on a single machine. With this system, the mean sampling rate across the dataset was 14.37 Hz. Overheads involved in retrieving information from the printer's firmware, capturing the snapshot, sending it over the network, and labeling the image reduced the average sampling rate below the endoscope's 30 Hz; however, this rate was still occasionally reached. Thus, the developed controller had to be capable of running faster than this maximum sampling frequency.
It was important to consider the response time to parameter changes when labeling the captured images during printing. When the relative flow rate was updated on the printer, there was both a software execution delay and a mechanical response delay before the change could be visually seen. The majority of this delay comes from the mechanical response of the system, as pressure needs to be increased or decreased rapidly in the hotend. To determine the maximum duration of this delay, a series of experiments was run going from the minimum relative flow rate of 33% to the maximum of 300% and vice versa. With the print settings used during data collection, it was determined that on average the new desired level was reached after approximately 10 s. As such, 150 images were removed after each parameter change to ensure that the labeled data later used for training models was correct and did not contain any images from the transition region.
From the complete dataset, an equal number of images at each of the 19 flow rate levels was then sampled, this number being the total for the flow rate level with the fewest samples. This resulted in a final dataset of 125 077 labeled images, which was then split through random sampling into train, validation, and test sets with a ratio of 80:10:10. Finally, all images were pre-cropped to a 256 × 256 pixel region centered around the nozzle tip, using the coordinates stored during collection, to significantly speed up training. See Figure 1C for examples.
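The balancing and splitting procedure can be sketched as follows (illustrative only; the function name and seed handling are assumptions):

```python
import random
from collections import defaultdict

def balance_and_split(samples, seed=0, ratios=(0.8, 0.1, 0.1)):
    """samples: list of (image_path, flow_level) pairs.
    Downsample every flow level to the size of the rarest level,
    then randomly split into train/validation/test (80:10:10)."""
    rng = random.Random(seed)
    by_level = defaultdict(list)
    for sample in samples:
        by_level[sample[1]].append(sample)
    n = min(len(group) for group in by_level.values())  # rarest level's count
    balanced = [s for group in by_level.values() for s in rng.sample(group, n)]
    rng.shuffle(balanced)
    i = int(ratios[0] * len(balanced))
    j = i + int(ratios[1] * len(balanced))
    return balanced[:i], balanced[i:j], balanced[j:]
```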

Model Training and Performance
For the precise prediction of flow rate, various-sized models with a RegNet backbone [37] and a single-output fully connected linear layer were trained. The initial weights of these network backbones were pre-trained on the ImageNet1k dataset. [38] The models used the log-space flow rates as targets; using this normalized, linear, and evenly spaced data was vital for achieving good performance.
During training, the pixel values across each red, green, and blue (RGB) channel were normalized. Additionally, several common data augmentation techniques were used, as popularised on standard datasets such as ImageNet [38,39] (see Figure 2A). The pre-cropped and normalized 256 × 256 image centered on the nozzle tip was randomly cropped to a 224 × 224 square. The resultant cropped image was then horizontally flipped with a probability of 0.5. After these geometric augmentations, principal component analysis (PCA) color augmentation was applied to each image, as popularised by AlexNet. [39] These augmentations were only applied during the training pass; for validation passes, the input 256 × 256 pixel images were center cropped to the required 224 × 224 input shape.
Five model sizes were trained for 25 epochs across 2 graphics processing units (GPUs) in parallel. The performance of these models can be seen in Table 1. Increasing the model size reduced the in-distribution test set loss at the cost of computation time and resource needs. Out-of-distribution (OOD) print performance appeared to be less coupled to model size, suggesting that the very large models overfitted the training data and thus did not generalize as well. The OOD print used can be seen in Figure 3A and was later used for tuning averaging and filtering parameters, as described in further detail in Section 2.2.2. The use of a small and fast model was just as important as accuracy for this real-time control application, where memory footprint and the number of iterations achievable per second were key requirements. Therefore, for the remainder of the work the smallest model (RegNetY 400MF backbone) was used. Five of these models were trained using different random seeds; the mean training and validation loss along with 95% confidence intervals for these seeds can be seen in Figure 2B. These random seeds primarily affected the order in which the model saw the data and the augmentations applied, as all network backbones were initialized with the pre-trained weights, with only the final output linear layer initialized differently given the seed. The mean squared error (MSE) was minimized using stochastic gradient descent, and the learning rate was updated during training for all models using a cosine annealing learning rate scheduler (see Figure 2C).
At test time, to improve predictive performance, each 256 × 256 input image was cropped into five 224 × 224 windows: top left, top right, bottom left, bottom right, and center. Each of these five images was then horizontally flipped, resulting in 10 images produced from the single input (see Figure 2E). These 10 images were stacked together and passed through the network in a single forward pass, and the 10 output predictions were averaged using the mean. Without this augmentation, when using a single center-cropped image, an MSE of 0.0187 was achieved for the RegNetY 400MF backbone, compared to 0.0159 MSE with augmentation. Thus, the multi-crop approach led to a 14.6% boost in performance at test time.
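The ten-crop test-time augmentation can be sketched as follows (illustrative; `ten_crop_predict` is a hypothetical helper name):

```python
import torch

def ten_crop_predict(model, image, crop=224):
    """Five corner/center crops of a (3, 256, 256) image, each also
    horizontally flipped, batched through the model in one forward pass."""
    _, h, w = image.shape
    corners = [(0, 0), (0, w - crop), (h - crop, 0), (h - crop, w - crop),
               ((h - crop) // 2, (w - crop) // 2)]
    crops = [image[:, y:y + crop, x:x + crop] for y, x in corners]
    crops += [torch.flip(c, dims=[2]) for c in crops]  # horizontal mirrors
    batch = torch.stack(crops)                          # shape (10, 3, 224, 224)
    with torch.no_grad():
        preds = model(batch)
    return preds.mean().item()                          # mean of 10 predictions
```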
We also tested each model on an out-of-distribution print and noticed a drop in performance; however, the predictions were sufficiently accurate to still enable real-time control.
The results of the small trained model on the held-back test set can be seen in Figure 2D. The trained network accurately predicted flow rate, with the light green shaded region showing the input resolution used in training for the 19 levels of flow rate. In addition, 95% and 99% intervals are shown, with the network producing relatively few outliers; these can be easily removed with suitably chosen filtering procedures. The residuals of this test data are then plotted along with a fitted Gaussian distribution, illustrating that the predictions are well centered. The smallest model with test augmentation achieved an inference rate of ≈100 Hz on a single GPU, significantly higher than the endoscope's 30 Hz sampling rate or the 14.37 Hz seen during data collection. As such, there is considerable scope for running multiple printers in parallel or an ensemble of models to improve predictive performance and enable uncertainty estimation. [40]

Prediction Filtering and Smoothing
The trained model was capable of fast and precise flow rate predictions from a single input image on the training, validation, and unseen test sets. However, increased robustness to noise and inaccurate predictions was required for handling OOD samples and to smooth predictions, given the significant number made during a single print. To tackle this issue, an unseen OOD print of a single-layer geometry was produced with 100% linear infill at an unseen angle of 15°. This print is shown in Figure 3A and the results in Table 1. Due to the temporal nature of the printing process, more information can be gained by looking at a series of images rather than single images in isolation. Therefore, a first-in-first-out (FIFO) buffer was used to determine a rolling mean prediction over a set of images. The length of this FIFO had a significant impact on reducing the MSE. Buffer lengths ranging from 1 to 100 predictions were tested and the MSE loss was computed (see Figure 3B). To calculate this loss, the mean of the target flow rate was compared to the mean of the predictions in the buffer. A short FIFO was more susceptible to noise in the network's predictions, and increasing the buffer length to cover just the previous second of printing resulted in an appreciable drop in error. A very long FIFO buffer also harmed performance, as the predictions used were too far away, both spatially and temporally, from the current point of printing and thus the current flow rate level. The optimal length was found to be 29 predictions, and therefore the last 29 input images, approximately corresponding to the previous 2 s of printing.

Figure 2. Training of the RegNetY model and results on the test set. A) Data augmentation used during training. Input 256 × 256 images are randomly cropped to 224 × 224 and a horizontal flip is applied with a probability of 0.5, effectively increasing the number of training samples by 2^11. Additionally, PCA-based RGB noise was added to the images with a standard deviation of 0.1. B) Training and validation loss for 5 random seeds of a pre-trained RegNetY-400MF model; the mean is shown along with the 95% confidence interval. C) Cosine annealing scheduler decaying the learning rate during training. D) Predicted versus actual plot on the unseen test set showing boundaries containing 95% and 99% of predictions. The shaded green region is the spacing between flow rate values in the training data, representing the input label resolution. Residuals of the predictions are also plotted along with their density. E) At test time, each input image is cropped at the five locations specified and mirrored, creating 10 images from a single input. These are all fed through the trained network and their predictions are averaged using the mean.
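A minimal sketch of the rolling-mean FIFO buffer (class name assumed):

```python
from collections import deque

class RollingPredictor:
    """Keeps the last N network predictions (N = 29 was found optimal,
    roughly the previous 2 s of printing) and returns their rolling mean."""
    def __init__(self, length=29):
        self.buffer = deque(maxlen=length)  # oldest prediction drops out first

    def update(self, prediction):
        self.buffer.append(prediction)
        return sum(self.buffer) / len(self.buffer)
```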
To further improve the performance of the system, outlier predictions were removed from the buffer before the mean was computed. For this, the median absolute deviation (MAD) statistical measure was used: for the set of 29 predictions, the MAD is the median of the absolute values of the predictions' residuals from the median of the set. Similarly to optimizing the FIFO length, a range of MAD threshold values was swept from 1 to 5 at 0.1 intervals (see Figure 3C). Predictions with MAD values greater than this threshold were removed. A low MAD threshold removed too many predictions and thus was a poor choice; for sparse distributions, the majority of values were removed. A high threshold did not successfully remove outlier predictions, skewing the mean. A threshold value of 2 provided the best results and was selected alongside the FIFO length of 29. In Figure 3D, the effect of these established hyperparameters for prediction buffer length and MAD threshold is visualized. The ground truth relative flow rate is shown in log space along with all the predictions from the network. Overlaid is a tighter distribution of predictions, filtered and smoothed with the method described.
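One common interpretation of this MAD-based filter can be sketched as follows (the exact scoring is an assumption; the threshold of 2 matches the text):

```python
import statistics

def mad_filtered_mean(predictions, threshold=2.0):
    """Drop predictions whose absolute deviation from the median exceeds
    `threshold` times the median absolute deviation (MAD), then average."""
    med = statistics.median(predictions)
    deviations = [abs(p - med) for p in predictions]
    mad = statistics.median(deviations)
    if mad == 0:  # (nearly) identical predictions: nothing to filter
        return med
    kept = [p for p, d in zip(predictions, deviations) if d / mad <= threshold]
    return sum(kept) / len(kept)
```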

Learning Flow Rates for Novel Materials
New materials with unknown printing parameters are continually being developed. These include foaming materials that expand upon extrusion and can be used, for example, to reduce weight or for insulation. The level of foaming achieved by these materials is directly coupled to the temperature during material deposition, with higher temperatures, in general, leading to increased foaming. The more foaming, the greater the volume; thus, to achieve uniform and dimensionally accurate prints with a good surface finish, the amount of material extruded must be proportionally adjusted to account for the expansion at different temperatures. The relationships between temperature, foaming, volume expansion, and therefore flow rate are unknown to a new user, and currently these relationships must be found through intensive manual experimentation.
The ability to discover the correct flow rates at given temperatures for these new materials demonstrates the effectiveness of the trained model. Figure 4A shows a test part printed from foaming PLA with 100% relative flow (3.60 mm³ s⁻¹) at 45 mm s⁻¹ lateral speed, with 100% linear infill and 3 outer perimeters, while the temperature was increased in 5°C increments from 195°C to 240°C. The target and actual temperatures during this print are shown in Figure 4B along with the model's mean rolling flow rate predictions. This plot highlights that the model generalized to an unseen material and successfully predicted under extrusion at low temperatures, good extrusion at 215°C, and over extrusion at higher temperatures. The predictions for each target temperature were then averaged and plotted with their standard deviations in Figure 4C. The value of such an automated learning system for determining the couplings and relationships between parameters is clear. Users could apply such systems to autonomously predict optimal parameter levels for new unseen materials in a single sample print and subsequently use these levels on future parts. We go further and show that human operators are not required to manually correct parameters using the predictions of the machine learning model. The model enables rapid real-time closed-loop control, allowing the printer to self-correct and autonomously learn the optimal combination of parameters (see Figure 4D). A part was printed in the foaming PLA at 235°C with the same settings as described previously (see Figure 4E).
During the printing process, images were captured and sent over the network for inference with test-time data augmentation. By taking the inverse of the predicted relative flow rate, updates were generated to proportionally correct toward the optimal level and then sent to the printer over the network. To increase the likelihood that the steady-state prediction for this unseen material was correct, and to reduce the chance of overshooting, slower updates were made, with 150 predictions averaged before sending a correction. Prior to printing, the G-code toolpath of the object was split into subpaths with a maximum length of 1 mm to reduce the firmware response time for updates: G-code commands are executed sequentially, so without splitting, long moves result in significant correction delays. The plots in Figure 4D show the predicted flow rates and the updates made to the actual flow rate until the steady state was achieved. Figure 4E shows the part in its entirety; changes in extrusion level are visible, especially between the 1 and 2 min markers. During the main printing sequence from 2.5 to 5 min, the closed-loop system was remarkably stable and kept the flow rate within a tight range. The very beginning and very end of the predictions in Figure 4D show detection of under extrusion. During the initial lines of printing, predicting the level of extrusion is challenging due to the lack of interaction with adjacent paths. This is compounded by the current bias within the training dataset, as there are more images of dense infill than of the initial paths. Nozzle images at the end of printing appear as under extrusion because the bed is visible between deposited paths, the primary feature of under extrusion. As such, greater training data of these final lines is required for accurate predictions; however, this limitation was acceptable as only the final seconds of printing were affected, causing minimal impact. Figure 4F shows close-up images at the start of the print with a 100% flow rate at 235°C and at the end with an optimal steady-state flow rate of 48%. These images show a clear improvement in print quality with minimal overshooting and oscillation.

Figure 4. Self-learning the optimal flow rate for unseen foaming PLA. A) Printed sample of foaming PLA with the constant flow rate set to 100% and increasing temperature in 5°C increments from 195°C to 240°C. B) Plot showing the target and actual temperature during the print alongside rolling mean relative flow rate predictions. C) Relationship of hotend temperature to predicted flow rate. The model correctly predicts that higher temperatures cause greater foaming. D) Self-determining the optimal flow rate for foaming PLA at a temperature of 235°C. The model stabilizes at 48% relative flow (1.728 mm³ s⁻¹), providing good and consistent extrusion. E) Image of the flow rate corrected part. F) Macro shots of the start of the print at 100% flow and after automatically reaching the correct level.
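The inverse-proportional flow update and the toolpath splitting can be sketched as follows (a simplified illustration; the function names and the clamping range are assumptions):

```python
import math

def flow_correction(current_flow_pct, predicted_relative_flow):
    """Inverse-proportional update: if extrusion is predicted at 2.0x optimal,
    halve the commanded flow. Clamped to the 33-300% range used in training."""
    new_flow = current_flow_pct / predicted_relative_flow
    return max(33.0, min(300.0, new_flow))

def split_move(x0, y0, x1, y1, max_len=1.0):
    """Split a long toolpath move into subpaths no longer than max_len (mm),
    so that flow updates queued behind it take effect sooner."""
    n = max(1, math.ceil(math.hypot(x1 - x0, y1 - y0) / max_len))
    return [(x0 + (x1 - x0) * i / n, y0 + (y1 - y0) * i / n) for i in range(1, n + 1)]
```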

Rapid One-Shot Online Correction
For the production of end-use parts, control systems must correct errors rapidly to minimize their impact on dimensional accuracy and mechanical performance. Being able to precisely predict the exact distance from optimal extrusion allows the developed system to correct errors in one or very few actions. Due to the sampling rate and split toolpath, this update can be applied rapidly after an error occurs.
In Figure 5A, the first layer of a spanner geometry is shown, with the overall printing direction indicated by the black arrow. An error was introduced, reducing the flow rate to 50% of optimal (3.60 to 1.80 mm³ s⁻¹). No feedback or automated correction was applied; thus, significant under extrusion is visible for the remainder of the print. The print settings were the same as used for previous prints in this work: a layer height of 0.2 mm, hotend temperature of 205°C, bed temperature of 60°C, lateral speed of 45 mm s⁻¹, cooling fan enabled, and 3 external perimeters. A second print was then run with identical settings but with error correction enabled, using a FIFO buffer of length 29 and a MAD threshold of 2 as described previously. Figure 5B shows that the system was able to accurately predict and correct the flow rate rapidly. Unlike previous work, the controller does not iteratively approach the optimal flow, but jumps in one or a few shots to the correct level. The controller showed good steady-state behavior with no large incorrect updates applied; however, minor oscillations did sometimes occur. To mitigate this, the response time was varied depending on the flow rate prediction made. For predictions near 100%, updates are less time critical and a greater level of accuracy was required; therefore, for FIFO predictions between 89% and 113%, the average was taken across 20 buffers before an update was made. Additionally, to prevent overshooting caused by the mechanical and software response time of the system, after very large updates a delay of 150 images was used to ensure that new predictions were only made after the previous update had taken effect. This delay was deemed acceptable because, given the one-shot nature of our control algorithm, a single update was often all that was required.
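The adaptive update policy described above can be sketched as a single controller step (illustrative; the state layout and the "very large update" threshold are assumptions, while the 89-113% band, the 20-buffer averaging, and the 150-image delay come from the text):

```python
def control_step(state, buffer_mean_pct, flow_pct):
    """One controller iteration. state: dict with 'pending' (list of buffer
    means awaiting averaging) and 'hold' (images to skip after a big update)."""
    if state["hold"] > 0:                    # wait out the response delay
        state["hold"] -= 1
        return flow_pct
    state["pending"].append(buffer_mean_pct)
    near_optimal = 89.0 <= buffer_mean_pct <= 113.0
    needed = 20 if near_optimal else 1       # average 20 buffers near 100%
    if len(state["pending"]) < needed:
        return flow_pct
    prediction = sum(state["pending"]) / len(state["pending"])
    state["pending"].clear()
    new_flow = flow_pct * 100.0 / prediction  # inverse-proportional correction
    if abs(new_flow - flow_pct) > 50.0:       # "very large" update (assumed)
        state["hold"] = 150                   # skip the next 150 images
    return new_flow
```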

Conclusion
Here we report a real-time closed-loop control system for 3D printing which is capable of predicting the precise level of material flow rate, enabling parameter discovery for unseen materials and rapid few-shot correction. Commonly available 3D printing parameter information was harnessed, alongside video of the printing process, to autonomously create a new dataset of over 250 000 labeled images. With this dataset, multiple regression models were trained to precisely predict the level of relative material flow as a continuous variable from a single input image. The temporal nature of extrusion printing was exploited by combining multiple predictions to further improve the accuracy of the system. The trained model was then used on complex materials to predict and learn the relationship between printing temperature and material flow rate in both an offline and online fashion, enabling autonomous parameter discovery for unknown materials, a previously manual and tedious process. The same model was also used as a few-shot controller to rapidly correct flow rate and recover parts after severe errors were introduced. Due to our unique data generation and labeling procedure of combining readily available manufacturing metadata with video, the system was even capable of correcting errors in a single action. Importantly, the aforementioned was achieved in a short time, ≈8 h, with data collection taking only 5 h on a single printer and the lightweight model taking 3 h to train. This speed of deployment to new settings, alongside the use of metadata that is available for many manufacturing processes, may aid industry uptake and the potential future applications of data-driven control.

Figure 5. Demonstration of rapid online flow rate correction in the first layer of an adjustable spanner. A) Control sample with an error introduced, reducing the flow rate to 50%; after the error, the remainder of the print was severely under-extruded. B) Correction sample with an error introduced, reducing the flow rate to 50%; the few-shot control system detected the error and accurately predicted the required correction.
There is significant potential to build upon various aspects of this work. The current dataset is focused on the initial layer of printing, was collected using one material on one machine, and had a bias toward densely filled parts. A larger and more diverse dataset is important to achieve good generalization across systems. Expanding the problem space to include more layers and infill types would increase data collection times. However, this could be addressed with parallelization, using a scalable fleet of printers to collect data simultaneously, and with intelligent sampling, to obtain the most informative data within the problem space. The algorithm and process described in this work could also generalize to a range of parameters other than flow rate (e.g., hotend temperature, Z offset, and cooling fan speed) if provided with the appropriate data. No alteration of the system would be required for parameters that present themselves in optical images. However, a wider range of parameters could be covered with the addition of further sensors such as accelerometers and infrared cameras. Furthermore, image-based models were chosen to achieve the fast training times in this work. However, moving toward video approaches [41,42] which combine spatial and temporal information would likely be beneficial and improve model performance. The printer side of the control loop should be developed further to increase the sampling rate and enable even faster response times, as this is the current bottleneck. This could be achieved by not retrieving flow rate information with every image captured, upgrading the endoscope and Raspberry Pi, and replacing OctoPrint with a less computationally intensive alternative.
Finally, this work reinforces the need for both predictive and in-situ solutions to 3D printer control and optimization. Even with future developments in corrective systems, errors will still be present in parts due to processing times and machine limitations. This could result in failed prints depending on the design requirements of the part. As such, there is scope for using monitoring systems like the one presented in this work to train predictive and preventative models capable of determining the likelihood of an error prior to printing.

Experimental Section
Equipment Setup: The hardware used in the setup consisted of a 3D printer (Creality CR-20 Pro) equipped with a low-cost USB endoscope (Pancellent END-AU-108A) capable of recording 5 megapixel video at a frame rate of 30 Hz. The endoscope was mounted to the printer using a custom-designed and 3D-printed mount, which required no fastenings and as such could be attached to the print head carriage with ease. The positioned endoscope focused on the nozzle tip and thus the latest material deposition from the 0.4 mm diameter nozzle on the printer. This endoscope was connected over USB to a Raspberry Pi 4 Model B+, which in turn was connected to the 3D printer over a USB serial connection. This Pi was running a Raspbian-based distribution and an OctoPrint server with a custom data collection plugin. The in-house plugin captured 1920 × 1080 pixel snapshots from the endoscope at regular intervals. Each snapshot captured was then labeled with the current relative flow rate of the printer; this value was retrieved directly from the printer's firmware, in this case Marlin version 1.1.9, using the M221 G-code command. The printer ran a configured version of Marlin with electrically erasable programmable read-only memory (EEPROM) chit-chat enabled, as well as features such as thermal runaway protection to ensure safety when printing unattended. The plugin, after pairing each captured image with its respective relative flow rate, subsequently sent this information over the network to a custom server for storage. This server application saved each incoming image and created a CSV file for each print containing the relative flow rates, images, timestamps, and nozzle tip coordinates in the image.
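The labeling step, pairing each snapshot with the flow rate reported by the firmware, can be illustrated as follows. This is a hedged sketch, not the authors' plugin: the response-parsing regex assumes the reply to a bare M221 query contains a line of the form "Flow: <n>%" (the exact format varies between Marlin versions), and all function and field names are hypothetical.

```python
import re

# Marlin replies to a bare "M221" query with the current relative flow
# percentage. The exact line format varies by firmware version; this
# parser assumes a response containing "Flow: <n>%".
FLOW_RE = re.compile(r"Flow:\s*(\d+)\s*%")

def parse_flow_rate(response_lines):
    """Extract the relative flow rate (%) from raw printer response lines,
    or return None if no flow report is found."""
    for line in response_lines:
        m = FLOW_RE.search(line)
        if m:
            return int(m.group(1))
    return None

def label_snapshot(image_path, response_lines, timestamp, nozzle_xy):
    """Pair one endoscope snapshot with its flow-rate label, producing one
    dataset row ready to be sent to the storage server."""
    flow = parse_flow_rate(response_lines)
    if flow is None:
        return None  # skip frames where the M221 query failed
    return {"image": image_path, "flow_pct": flow,
            "timestamp": timestamp,
            "nozzle_x": nozzle_xy[0], "nozzle_y": nozzle_xy[1]}
```

Rows of this shape map directly onto the per-print CSV described above (flow rate, image, timestamp, and nozzle tip coordinates).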
Model Training: The models were written in PyTorch [43] and trained on two Nvidia Quadro RTX 5000 GPUs with an i9-9900K CPU (8 cores, 16 threads) and 64 GB of random-access memory (RAM). The same setup, with only a single GPU, was used for real-time correction. The geometric augmentation applied with cropping and horizontal flipping effectively increased the size of the dataset by a factor of 2¹¹. For the color augmentation, PCA was performed on the RGB pixel values of the cropped images in the full dataset prior to training. During training, multiples of the three principal component eigenvectors were proportionally added to each image pixel across the RGB channels, with each eigenvector scaled by its eigenvalue multiplied by a random variable drawn from a zero-mean Gaussian distribution with a standard deviation of 0.1. For training the model, the mean squared error (MSE) loss was minimized using a stochastic gradient descent optimizer with an initial learning rate of 1 × 10⁻³, momentum of 0.9, and weight decay of 5 × 10⁻⁵. A batch size of 8 was used throughout training; small batch sizes resulted in faster convergence.

Supporting Information
Supporting Information is available from the Wiley Online Library or from the author.