Spatiotemporal Projection-Based Additive Manufacturing: A Data-Driven Image Planning Method for Subpixel Shifting in a Split Second

speed in serial processes) or in the spatial domain (e.g., more tools in parallel processes). To improve the resolution without sacrificing efficiency, a data-driven mask image planning method based on subpixel shifting in a split second, which tunes the process in both the temporal and spatial domains, is presented. The method is based on the optimized pixel blending principle and a fast error diffusion-based optimization model. Various simulation and experimental tests are carried out to verify the developed subpixel shifting method. The experimental results demonstrate that the data-driven mask image calibration and planning techniques significantly improve the fabricated part quality without compromising the process efficiency. The presented spatiotemporal strategy may shed light on future research on projection-based AM processes.


Introduction
Additive manufacturing (AM) is a new manufacturing process that can directly fabricate a physical object from a computer-aided design (CAD) model without tooling and fixturing. [1] It is a collection of technologies for manufacturing solid objects by the sequential delivery of material or energy to specified points in space to produce the objects in a layer-by-layer manner. [2,3] Due to their additive and discrete nature, these technologies need to find a trade-off between process resolution and layerwise throughput. The process resolution is generally controlled by the motion resolution (e.g., linear-stage plotters for planar motion, and galvano-mirror scanners for rotational motion [4,5]) and the tool resolution, including the size of the tool (e.g., nozzle diameter and laser spot size [6]) and the tools' locations for multiple-tool processes (e.g., the print-head array for the multijetting process, and the micromirror array for the projection stereolithography process [7,8]). The layerwise throughput is primarily determined by the changing frequency of the tool, including the update of the tool position and the tool status (e.g., nozzles' on/off state and the change of energy intensity), as well as the forming volume per controlled status (e.g., deposited material per motion step [9] and cured material per laser spot). Due to several common factors, the resolution and throughput are generally at odds with each other. [10] For example, a reduced tool size results in higher resolution but lower throughput.
Two common strategies to improve the layerwise throughput without compromising the process resolution include: 1) increasing the number of tools and 2) increasing the changing frequency of a tool. These two strategies represent the fundamental mechanisms for optimizing AM performance by process planning in the spatial domain (using multiple tools) and the temporal domain (controlling the changing frequency of the tool). More specifically, Figure 1a shows the most commonly used vat photopolymerization (VPP) processes and their locations in the spatiotemporal domain. The horizontal axis represents the size of the exposure light pattern, which is associated with the spatial domain. The vertical axis represents the changing frequency of the exposure light pattern in the temporal domain. For example, the laser-based stereolithography apparatus (SLA) and scan, spin, and selectively photocure (3SP) [11] processes use a single tool (a laser) in a serial fabrication process, in which the toolpaths are mainly planned in the temporal domain by changing the tool status at desired timestamps. Their resolution is controlled by the laser spot size (≈0.1 mm) and the motion resolution of the scanning mirrors (≈0.01-0.1 mm). The throughput of the SLA and 3SP processes is mainly controlled by the changing frequency of the laser location. Consequently, the 3SP process has a higher throughput than the SLA process due to its higher changing frequency in the temporal domain.
Different from the laser-based SLA and 3SP processes, the mask-image-projection-based stereolithography (MIP-SL) processes use multiple tools, such as millions of micromirrors, and are essentially parallel processes, in which the process planning is mainly in the spatial domain by tuning the tools' status at desired spatial locations. The regular MIP-SL process forms a 2D layer by exposing a static light pattern defined by a digital mask image, where the pixels of the light pattern serve as the parallel fabrication tools. As the motion of the tool is not involved during the curing process, the resolution is mainly defined by the number of pixels provided by the mask projection device. A projection device such as a Digital Micromirror Device (DMD) has a limited number of micromirrors (e.g., 1024 × 768). Consequently, the process resolution and the layerwise throughput have to be balanced when using the limited number of tools to form a layer simultaneously. For example, macroscale MIP-SL processes using a large pixel size (e.g., 0.15 mm) can form a large 2D layer with high throughput, but the process resolution is low. In comparison, microscale MIP-SL processes using a small pixel size (e.g., 0.01 mm) can have a high resolution, but the throughput is low, as the projection area is small and needs to be moved to form a larger layer. [12] In the past decade, tremendous effort has been devoted to various process planning methods for the MIP-SL processes, either in the temporal domain or the spatial domain, to improve process performance, including resolution and throughput.
[13,14] However, little work has been done on process planning in both the spatial and temporal domains. In this article, we develop a process planning method for a subpixel shifting MIP-SL (sps-MIP-SL) process that combines spatiotemporal considerations to significantly improve the overall process performance (both resolution and throughput). In the sps-MIP-SL process, the projection image is moved by a subpixel distance (Size_pixel/n, n ≥ 2) in a split second. Figure 1b shows the basic concept and hardware configuration of the sps-MIP-SL process for n = 2. For each layer, we precompute a set of n² optimized images and then project the mask images in a predefined sequence (from Mask 1 to Mask 4) with a controlled exposure time (T_curing/n²). Between two consecutive exposures, the projection device is moved by a small distance Size_pixel/n using a fast and precise XY linear stage. The total exposure time of the layer is still equal to the regular curing time for photopolymerizing a layer. Such a method extends the traditional static MIP-SL process from the pure spatial domain to the spatiotemporal domain by increasing the changing frequency of the projection images by n² times (in 1-10² Hz). Consequently, the process resolution of sps-MIP-SL can be significantly improved using better-controlled light exposure in the curing process. However, the hardware change alone, using the traditional image planning method, cannot achieve such performance improvement. A key challenge of the sps-MIP-SL process is how to plan a set of mask images for accurate light exposures at n² tool locations. The traditional 2D layer slicing method is insufficient; instead, we present a fast error diffusion-based optimization method for the sps-MIP-SL process. Figure 1c-e shows the comparison of the mask images and the simulated curing profiles between the static MIP-SL process and the subpixel shifting method for a circular layer. The sps-MIP-SL process with optimized mask images can achieve
higher product surface quality without sacrificing the process throughput. The main contribution of this article is to explore a data-driven and hardware-software-integrated intelligent system, which leverages mask image planning and optimization, process synchronization and control, light and material characterization, and system calibration. Both subpixel shifting and error diffusion-based optimization are generic methods. They can be used in various MIP-SL systems, including the moving-projection systems, [14-16] as well as continuously printing AM systems such as CLIP, [17] HARP, [18] and FLOAT. [19]
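The layerwise exposure schedule described above can be sketched in a few lines. This is a minimal illustration for a general shifting level n, not the authors' implementation; the function name, the serpentine traversal order, and the numeric values are assumptions.

```python
# Sketch of the sps-MIP-SL exposure schedule for a shifting level n
# (illustrative helper, not the authors' implementation): each layer is
# split into n^2 shifted exposures of equal duration, and the stage is
# moved by pixel_size/n between consecutive masks.

def exposure_schedule(pixel_size, t_curing, n=2):
    """Return a list of (dx, dy, exposure_time) steps for one layer."""
    assert n >= 2, "subpixel shifting requires n >= 2"
    step = pixel_size / n          # subpixel shift distance
    t_mask = t_curing / (n * n)    # each of the n^2 masks gets T_curing/n^2
    schedule = []
    for i in range(n):             # serpentine order keeps stage moves short
        cols = range(n) if i % 2 == 0 else range(n - 1, -1, -1)
        for j in cols:
            schedule.append((j * step, i * step, t_mask))
    return schedule

steps = exposure_schedule(pixel_size=0.15, t_curing=8.0, n=2)
# 4 masks; the total exposure still equals the regular layer curing time
assert len(steps) == 4
assert abs(sum(t for _, _, t in steps) - 8.0) < 1e-9
```

Note that the sum of the per-mask exposure times always equals the regular curing time, which is why the method gains resolution without losing throughput.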

Principle of Subpixel Shifting
In the traditional MIP-SL process, a set of dynamically controlled mask images is projected onto a surface area to selectively convert liquid resin into solid layers. Before the building process, the 3D CAD model of an object is first sliced by a set of horizontal planes. Each slice is then converted into a 2D image and sent to a digital device used in the light projection system, such as a DMD. The planned mask images are then projected onto the resin surface to selectively solidify liquid resin into 2D layers of the object. By repeating the layer-based fabrication process, the 3D object can be formed from multiple 2D layers.
Compared with laser-based SLA, the MIP-SL process can be much faster by simultaneously forming a layer using millions of pixels. However, the accuracy and resolution of the MIP-SL process are constrained by the limited number of tools, such as the micromirrors of a DMD. The energy delivered by the micromirrors is spatially discontinuous, which causes the well-known aliasing effect that is unique to digital devices. Such an effect becomes significant for a large-scale MIP-SL process when its XY resolution is close to or even larger than its Z resolution. Figure S1, Supporting Information, shows a test example in which a binary bitmap generated from a sliced contour was used. The surface quality is poor due to the aforementioned aliasing effect (Figure S1-left, Supporting Information). In our previous work, [20] we presented an optimized pixel blending method to address the aliasing issue, which can significantly improve the part accuracy and surface finish by intelligently setting the mask images used in MIP-SL. The fast error diffusion model in this work is built upon the optimized pixel blending method.
Like the toolpath used in computer numerical control (CNC) machining, the mask images used in the light projection are among the most critical process parameters in the MIP-SL process. We studied the interaction between the projection light of neighboring pixels and how to manipulate the pixel values to achieve the desired accuracy and resolution. In particular, we mathematically formulated the pixel blending problem in an optimization model and developed a two-stage optimization method. [20] Accordingly, an optimization tool was built to solve the optimized pixel blending problem by intelligently manipulating the grayscale value of each pixel so that the energy input required to cure a layer can be controlled. Such an approach based on intelligently setting mask images provides an effective way to achieve the desired accuracy and resolution in the MIP-SL process (Figure S1-right, Supporting Information). Based on the concept of pixel blending, Kang et al. proposed a pixel-based solidification model for MIP-SL to predict the patterning results. [21] Emami et al. proposed an energy-based model for the scanning-projection system to study the scanning and curing properties. [22] Recently, Emami et al. studied the light field effect in a grayscale MIP-SL system by finite element-based multiphysics simulation. [23] However, all these models focus on the vertical resolution instead of the lateral resolution. The optimized mask image planning methods in the aforementioned work and others [24-26] do not consider the subpixel shifting problem that is critical in the sps-MIP-SL process.
The cutting behavior of a tool in CNC machining is uniform both across the cutter and across the part (i.e., the cutting depth under the cutting tool is the same, while it is zero outside the cutting tool). Unlike CNC machining, the material's photocuring behavior in MIP-SL is highly nonuniform: it is 1) nonuniform within each tool (a projection image pixel) and 2) nonuniform across the whole layer.
1) Nonuniformity within a pixel: the light intensity of a projection image pixel is Gaussian distributed rather than uniformly distributed. [27] Hence, the curing depth varies at different locations according to the Beer-Lambert law of absorption. [28] In particular, a single pixel generates a paraboloidal shape instead of a cubic or cylindrical shape. Figure 2a-left shows the energy deposition of a single pixel, and Figure 2b-left shows the simulated curing profile in the paraboloidal shape (blue color) for a single pixel. Also, each pixel spreads into neighboring pixels and causes additional energy deposition, which is called pixel blending. [20] Figure 2a-right shows the energy-deposition effect of multiple overlapped pixels, and the red curve shows the accumulated energy contributed by all the associated pixels. Figure 2b-right shows the curing result of these pixels, and the blue curve shows the curing profile with a top-hatted paraboloidal shape. Depending on the optic configuration, the projection image pixels have different Gaussian profiles at the focal plane. Figure 2c-left shows the pixel profile of a highly focused image from a high-quality projector. The crests and troughs of the ripples correspond to the centers of the micromirrors and the gaps between adjacent micromirrors. The highly focused image is beneficial for high-quality fabrication in MIP-SL systems; however, the ripple profile will be replicated onto the printed part as undesired marks. [16,23] In contrast, Figure 2c-right shows the pixel profile of a less focused image, where the pixels are blurred and spread into neighboring pixels. According to the curing profile (Figure 2b-right), the over-blurred image will affect the edge sharpness of the cured part and the accuracy of the lateral dimension. Figure 2c-middle shows a pixel profile that achieves a good balance between the fully focused and over-blurred profiles.
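The pixel blending effect described above can be illustrated with a minimal one-dimensional sketch, using purely illustrative values (the unit pixel pitch, amplitude, and sigma below are assumptions, not calibrated data): each pixel contributes a Gaussian intensity profile, and the tails of neighboring pixels accumulate into a flattened, top-hat-like exposure across the interior of a feature.

```python
import math

# 1-D sketch of pixel blending (illustrative values, not calibrated data):
# each pixel projects a Gaussian intensity profile, and the tails of
# neighboring pixels accumulate into a flattened ("top-hatted") exposure.

def pixel_profile(x, center, sigma=0.6, amplitude=1.0):
    """Gaussian intensity of one pixel evaluated at position x."""
    return amplitude * math.exp(-((x - center) ** 2) / (2 * sigma ** 2))

def accumulated_energy(x, centers, sigma=0.6):
    """Sum the blended contributions of all pixels at position x."""
    return sum(pixel_profile(x, c, sigma) for c in centers)

centers = [0.0, 1.0, 2.0, 3.0, 4.0]   # five adjacent pixels, unit pitch
mid = accumulated_energy(2.0, centers)
edge = accumulated_energy(0.0, centers)
single = pixel_profile(2.0, 2.0)
# Blending raises the mid-span energy well above a single pixel's peak,
# while boundary pixels receive less support from their neighbors.
assert mid > single
assert edge < mid
```

This is why the accumulated energy curve in Figure 2a-right rises above any individual pixel's peak, and why boundary pixels cure less deeply than interior ones.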
Based on the projection image calibration, [29] this pixel profile will be used for the process planning in our study.
2) Nonuniformity across the whole layer: MIP-SL is a multiple-tool process (using a pixel array), and the pixels have different geometries and intensities. The approach to characterize the pixels' geometric and intensity nonuniformity will be discussed in Section 3. In addition, CNC machining produces sharp edges by cutting, since only the material along the toolpath is removed. In the photopolymerization process, however, the pixels of a projection image solidify the material by delivering exposed energy doses. Hence, sharp edges in MIP-SL can only be achieved using high-contrast energy doses defined by the projection images.
In the remainder of this section, we discuss how the subpixel shifting technique can address the energy dose control challenge in the MIP-SL process. The quality of a manufacturing process is typically evaluated by how tightly the dimensional and geometric tolerances are controlled. [30] The geometric tolerance includes positional tolerance and shape tolerance (e.g., straightness, flatness, circularity, squareness, etc.). The sps-MIP-SL system can meet these geometric dimensioning and tolerancing (GD&T) criteria in the following ways: a) Dimensional tolerance: although the pixels are Gaussian distributed and overlap with each other, these properties can be taken advantage of by intelligently setting the grayscale values of the projection images to tune the part dimension. As shown in Figure 2d, a boundary pixel merges with the neighboring pixels, so the cured boundary can be fine-tuned by assigning the grayscale level of the boundary pixel. Accordingly, a tight dimensional tolerance can be achieved by the optimized pixel blending algorithm. The subpixel shifting technique provides another degree of design freedom to control the dimension by moving the location of the boundary pixel (green color in Figure 2g). As both the grayscale value and the location of the boundary pixel can be adjusted in the near-continuous motion mode, the dimensional accuracy can be more precisely controlled. The blue curves in Figure 2 show the predicted shapes of the cured part when adjusting both the grayscale value and the location of the boundary pixel. b) Positional tolerance: positional tolerance is crucial for part assemblies. Although the pixel blending algorithm can achieve reasonable control of the dimensional tolerance, the discretized nature of the light projection device brings challenges to positional tolerance control. It is almost impossible to precisely align all the features with the pixels in a layer for a generic digital model. This problem can be effectively
addressed by subpixel shifting. An example is shown in Figure 2e, in which a slab part has six slot-hole features. These holes are misaligned with the pixels, and the size of the gap between adjacent holes is not a multiple of the pixel size. Hence, the locational tolerance cannot be tightly controlled by adjusting the layout of the part. In comparison, with subpixel shifting, the features can be fabricated by exposing shifted projection images that may be perfectly aligned with the features. An example of these pixels is shown in cyan color in Figure 2e. c) Shape tolerance: the MIP-SL process fabricates parts by tessellating the target geometry with discretized unit cells. Such approximation introduces shape errors. For example, as shown in Figure 2f, the inclined line feature and the curve feature lose their linearity and circularity during the tessellation process (top figure). This problem can be addressed by moving the mask image a subpixel distance to better approximate the features (bottom figure). Essentially, shape tolerance benefits from the increased resolution provided by the subpixel shifting motion. d) Surface finish and sharpness: due to the Gaussian distribution and the pixel blending effect, it is challenging to achieve a high-contrast image; i.e., the accumulated energy gradually increases from 0 to the maximum value across the boundary (the red curve in Figure 2d,g,h). It is desirable to plan the images such that the accumulated energy curve has a steeper slope around the shape boundary. Hence: 1) a higher contrast can facilitate the separation between the solid and liquid areas without resulting in the gel state (partially cured), because the gelled features affect the surface finish and part accuracy during the post-processing stage.
2) According to the absorption law (C_d = D_p ln(E/E_c)), the curing depth is logarithmically proportional to the accumulated energy. Hence, a higher contrast results in a steeper side surface in the vertical direction, which is more desirable in the layer-based AM process. As shown in the simulated result in Figure 2h, the subpixel shifting-based image projection can form a steeper energy curve (solid red curve) than the regular pixel blending images (dashed red curve). Consequently, sharper side edges (blue curves) can be achieved for a high-contrast layer boundary.
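The link between exposure contrast and sidewall steepness follows directly from the absorption law and can be checked with a small worked sketch. The resin constants and the two exposure profiles below are illustrative assumptions, not measured values.

```python
import math

# Worked sketch of the absorption law C_d = D_p * ln(E / E_c), with
# illustrative resin constants (D_p and E_c are assumptions, not measured).
D_P = 0.1   # penetration depth, mm
E_C = 10.0  # critical exposure, mJ/cm^2

def curing_depth(energy):
    """Cured depth for a given accumulated exposure; 0 below the gel point."""
    return D_P * math.log(energy / E_C) if energy > E_C else 0.0

# Two exposure profiles across a layer boundary (position -> energy):
# the high-contrast profile drops from full dose to ~0 over a shorter span.
low_contrast = [40.0, 30.0, 20.0, 12.0, 8.0]
high_contrast = [40.0, 39.0, 35.0, 5.0, 1.0]

low_wall = [curing_depth(e) for e in low_contrast]
high_wall = [curing_depth(e) for e in high_contrast]
# The steeper energy curve produces a steeper (more vertical) side wall:
# depth stays near maximum longer, then falls to zero abruptly.
assert high_wall[2] > low_wall[2]
assert high_wall[3] == 0.0 and low_wall[3] > 0.0
```

Because the depth depends only logarithmically on energy, a gradual energy ramp smears the cured sidewall over several positions, while a sharp ramp confines the transition to a single step.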
The aforementioned discussion elucidates the basic principle of subpixel shifting for improving MIP-SL's process resolution and fabrication accuracy without sacrificing production throughput. However, the shapes of all the demonstrated cases are relatively simple, and the forward problem (i.e., from mask images to the cured profile) is computationally inexpensive. Real-world applications of the sps-MIP-SL process are more complicated in terms of the given geometric shape, the number of pixels, and the nonuniformity of the projection pixels. More importantly, the mask image planning that computes a set of mask images from a given cured profile is an inverse problem. Such a backward problem is much more computationally intensive, typically requiring an iteration-based optimization solver. In the next section, we present a highly efficient optimization solver, namely error diffusion, to address the complex mask image planning problem for sps-MIP-SL and account for the nonuniformity of the projection pixels.

Optimized Pixel Blending Model Based on Image Calibration Results
As demonstrated in our previous work, [20] the pixel blending of light intensity presents a tremendous capability for selectively solidifying liquid resin into desired shapes. However, a single Gaussian function was used in that work to approximate the light intensity of all the pixels. The actual projection system has inherent optical defects caused by spherical aberration, astigmatism, dispersion, spatial incoherence, and distortion. [29] Hence, the energy distribution is generally nonuniform. These geometric and energy differences in a projection image would affect the pixel blending results and, therefore, part quality. Also, the original pixel blending method can only solve small-scale problems. It falls short when dealing with real-world CAD models to be fabricated in the sps-MIP-SL process. We previously presented a mask image calibration method for MIP-SL to address the nonuniformity of a DMD-based projector. [29] Our approach was based on two types of calibration systems, i.e., a geometric calibration system that can calibrate the position, shape, size, and orientation of a pixel, and an energy calibration system that can calibrate the light intensity of a pixel.
By integrating the geometric and energy calibration results, we can compute an approximation function for the light intensity of each calibrated pixel using the rotated anisotropic Gaussian

g(x, y) = A exp(−[a(x − x0)² + 2b(x − x0)(y − y0) + c(y − y0)²])

with a = cos²θ/(2σx²) + sin²θ/(2σy²), b = −sin(2θ)/(4σx²) + sin(2θ)/(4σy²), and c = sin²θ/(2σx²) + cos²θ/(2σy²), where the parameters x0, y0, σx, σy, and θ are computed from the geometric calibration system, and the parameter A is computed from the energy calibration system. Accordingly, a database of such functions g(x, y) can be built by storing all the parameters (x0, y0, σx, σy, θ, A). For a pixel that is not directly calibrated, an approximate function can be computed by identifying its direct neighbors and using simple linear interpolation to calculate the related parameters. Based on the calibrated approximation function g(x, y), we can derive a new optimized pixel blending model (Equations (2)-(5)), where x0ij, y0ij, Aij, σxij, σyij are the calibrated parameters for the native pixel (i, j) with full light intensity (grayscale = 255, or Hij = 1), and the corresponding Aij, σxij, σyij are the Gaussian parameters for pixel (i, j) with partial light intensity (i.e., grayscale < 255, or Hij < 1). Compared with the original pixel blending model, [20] the objective function and the constraints are identical; however, the Gaussian function G_x,y has been replaced by the image calibration results. The new optimized pixel blending model is more general than the original model; however, it is also more difficult to solve, since G_x,y may vary from pixel to pixel in the newly formulated model.
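The calibrated per-pixel intensity function above can be sketched as follows. The rotated anisotropic parameterization is the standard form consistent with the listed parameter tuple (x0, y0, σx, σy, θ, A); the function names and numeric values are illustrative assumptions, not the authors' implementation.

```python
import math

# Sketch of the calibrated per-pixel intensity function g(x, y), assuming
# the standard rotated anisotropic Gaussian parameterization for the
# calibrated tuple (x0, y0, sigma_x, sigma_y, theta, A).

def calibrated_gaussian(x, y, x0, y0, sx, sy, theta, A):
    """Light intensity of one calibrated pixel at point (x, y)."""
    a = math.cos(theta)**2 / (2*sx**2) + math.sin(theta)**2 / (2*sy**2)
    b = -math.sin(2*theta) / (4*sx**2) + math.sin(2*theta) / (4*sy**2)
    c = math.sin(theta)**2 / (2*sx**2) + math.cos(theta)**2 / (2*sy**2)
    dx, dy = x - x0, y - y0
    return A * math.exp(-(a*dx*dx + 2*b*dx*dy + c*dy*dy))

def interpolate_params(p_left, p_right, t):
    """Linear interpolation of the calibration parameter tuple for an
    uncalibrated pixel lying a fraction t between two calibrated neighbors."""
    return tuple((1 - t) * u + t * v for u, v in zip(p_left, p_right))

# The peak intensity occurs at the calibrated center and equals A.
assert calibrated_gaussian(1.0, 2.0, 1.0, 2.0, 0.5, 0.7, 0.3, 0.9) == 0.9
```

Storing only the six parameters per pixel keeps the calibration database compact, while `interpolate_params` fills in any pixel that was not directly measured.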

Error Diffusion Algorithm
In the formulated model defined in Equation (1), the shape parameters do not decrease linearly with the light intensity, even though all the calibrated parameters are constant coefficients. Thus, such an optimized pixel blending model cannot be solved by linear programming solvers. An iteration-based optimization method called discrete direct search (DDS) was developed in our previous work. [15] The DDS method is independent of the properties of the constraints, variables, and objective functions in the optimization model, and can only provide a suboptimal solution for images of medium size. It cannot be used for the large-scale mask image planning required by the sps-MIP-SL process. In this work, we investigated two fast optimization methods: boundary erosion and error diffusion, where error diffusion was built upon boundary erosion. Experimental results demonstrated that both algorithms can solve the optimized pixel blending problem in a reasonable time. However, the error diffusion method performed substantially better than the boundary erosion method.
The boundary erosion method is purely based on geometric information. The detailed algorithm is discussed in Section S1, Supporting Information. Although boundary erosion works well for parts with large sizes, it has difficulty dealing with sharp features and small features. To overcome this problem, we combined the boundary erosion method and the pixel blending model and developed a new method called error diffusion. The error diffusion method is based on both geometric and energy information. Unlike the optimization methods used before, the error diffusion method considers the error information and compensates the energy distribution by diffusing the errors back into the planned mask images.
The basic idea of the pixel blending process is schematically shown in Figure S3, Supporting Information. A 3D solid model (a) is first sliced by a set of horizontal planes. Each slice is then converted into a 2D image (b) by a sampling method; in our case, we use super-sampling. That is, the sampling resolution is n times higher than the projection image's resolution. Hence, each pixel of the projection image is subdivided into n × n subpixels, and the subpixel resolution is used as the input image's resolution. According to the target image (b), we use optimization algorithms (a geometric algorithm, linear programming optimization, an iterative optimization algorithm, etc.) to obtain a mask image (c). The grayscale level H of each pixel in mask image (c) represents the corresponding light intensity. Since each pixel follows a Gaussian function (d), the mask image is convolved with the Gaussian function to obtain the accumulated intensity (e) as I = H ⊗ G. According to the polymerization reaction process, the material will be cured only when the energy is higher than the critical energy. Thus, the blending result (g) is computed as F′ = T(I), where T is the gate function (f) with the threshold of critical energy for a given resin, and F′ (g) is a binary image representing the blending result. Comparing the blending result F′ with the target F, we obtain the error image E (h). The colored pixels show the errors (red denotes extra portions and green denotes missing portions). The objective of mask image planning is to reduce the error as much as possible without violating the constraints.
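The forward pass just described can be sketched on a tiny grid. This is a minimal illustration of I = H ⊗ G, F′ = T(I), and E = F − F′; the grid size, kernel sigma, and threshold are illustrative assumptions, and the threshold is deliberately chosen so that the sharp corners of a square target are lost, mirroring the corner-loss behavior discussed later.

```python
import math

# Minimal sketch of the forward pixel blending pass on a tiny grid:
# accumulated intensity I = H (x) G, gate F' = T(I), error E = F - F'.
# Grid size, sigma, and threshold are illustrative assumptions.

def gaussian_kernel(radius=2, sigma=1.0):
    return [[math.exp(-(i*i + j*j) / (2*sigma*sigma))
             for j in range(-radius, radius + 1)]
            for i in range(-radius, radius + 1)]

def convolve(H, G):
    """Accumulated intensity I = H (x) G (zero padding at borders)."""
    n, r = len(H), len(G) // 2
    I = [[0.0] * n for _ in range(n)]
    for y in range(n):
        for x in range(n):
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < n and 0 <= xx < n:
                        I[y][x] += H[yy][xx] * G[dy + r][dx + r]
    return I

def gate(I, threshold):
    """Binary blending result F' = T(I): cured where I exceeds E_c."""
    return [[1 if v > threshold else 0 for v in row] for row in I]

H = [[0, 0, 0, 0, 0],
     [0, 1, 1, 1, 0],
     [0, 1, 1, 1, 0],
     [0, 1, 1, 1, 0],
     [0, 0, 0, 0, 0]]            # mask image (grayscale levels in [0, 1])
F = H                            # target: cure exactly the 3x3 block
I = convolve(H, gaussian_kernel())
Fp = gate(I, threshold=3.2)
E = [[F[y][x] - Fp[y][x] for x in range(5)] for y in range(5)]
# +1 entries mark missing portions, -1 entries mark extra cured portions:
# the four sharp corners fall below the gel threshold and are lost.
assert all(E[y][x] == 1 for y, x in [(1, 1), (1, 3), (3, 1), (3, 3)])
assert sum(v for row in Fp for v in row) == 5
```

Even with a binary mask that exactly matches the target, the Gaussian blending leaves the corners under-exposed, which is precisely the error that the planning step must correct.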
Although the theoretical pixel blending model is straightforward, solving it for the global optimum is difficult, if not impossible, in practice. It becomes even harder when the calibrated database is integrated into the optimization model. Different optimization techniques, such as the gradient-based optimization approach and the iteration-based heuristic approach, can be used. Unfortunately, they cannot achieve an acceptable solution in a reasonable time. The main problem is that they work in an open-loop manner without considering the essential property of the pixel blending process. For example, the iteration-based DDS method arbitrarily assigns light intensity to pixels without considering the direction and amount of the pixel change, [15] while the gradient-based method relies only on the mathematical model, which is quite generic. [20] Here, a new error diffusion method has been developed to solve the mask image planning problem. It updates the light intensity according to the error changes, as shown in Figure 3a, using notations similar to those in Figure S3, Supporting Information. In essence, the pixel blending process now evolves from an open-loop system into a closed-loop system.
The error diffusion method is similar to the other algorithms listed before, except that it diffuses the errors and updates the light intensity according to the diffused errors. An initial solution from the geometric method or the aforementioned boundary erosion method is used as the starting point of the error diffusion method. According to the current light intensity and the calibrated Gaussian distribution of each pixel, the accumulated energy for each subpixel is calculated. Based on the accumulated energy and the threshold energy, the error of each subpixel is judged based on Equation (6). Accordingly, the error is diffused to its neighboring pixels to rectify their light intensity, as in Equation (7). Note that the error is diffused with a weight proportional to the Gaussian distance used for convolution. The flowchart of the error diffusion method is shown in Figure S4, Supporting Information, and is discussed using examples as follows.
Like other closed-loop systems in engineering problems, this algorithm converges asymptotically. A constant parameter λ can be used as a multiplier for the diffused error to control the convergence speed. The larger the parameter, the faster the algorithm converges, but it may oscillate when approaching the steady state. In contrast, a smaller parameter leads to slower but smoother convergence. Figure 3b shows the error convergence with different coefficients. The algorithm converges quickly in the initial steps, during which it erodes the boundary pixels to match the target dimensions. The algorithm converges much more slowly in the following steps, as it manipulates the pixel values to recover the sharp features. Figure 3c shows the detailed error diffusion procedure based on a simplified example. Suppose the image size is 4 × 4, and the target geometric shape is a circle. In this example, the super-sampling level is n = 3. According to the image coverage, a super-sampled image with small pixels is shown in Figure 3c(a). An initial mask image is generated by counting the ratio of small white pixels, as shown in Figure 3c(b). After applying the Gaussian function to each big pixel and distributing the light intensity to its neighborhood, the accumulated effect for each small pixel is shown in Figure 3c(c). In this case, the Gaussian parameter is σ = 1.0. After comparing the accumulated effect with the predefined threshold, the error of each small pixel can be calculated according to Equation (1). The threshold is set to 1.8 in this example. As shown in the figure, errors occur on the boundary of the geometry. In this case, the circular feature is a small feature, and some portions of the boundary are missing. Accordingly, the error of each super-sampled pixel is diffused into its neighboring original pixels following the Gaussian function. Hence, all the diffused errors on each original pixel will be
accumulated together. The final result is shown in Figure 3c(d). After adding the diffused errors to each pixel of the previous mask image, the updated mask image is generated, as shown in Figure 3c(e). Note that the grayscale value is truncated to the range [0, 1]. In this example, as the boundary has missing portions, the grayscale values of the updated mask image are higher than those of the initial mask image. With the new mask image, the accumulated effect is updated. This process is repeated until the error converges.
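The iteration just described can be condensed into a simplified one-dimensional sketch. This is not the paper's 4 × 4 example; the sigma, threshold, and λ values below are illustrative assumptions chosen so that the initial binary mask under-cures the feature's ends and the diffused errors recover them.

```python
import math

# Simplified 1-D sketch of the error diffusion update (illustrative
# constants, not the authors' exact weights): the residual at each sample
# is diffused back to the mask pixels with Gaussian weights, scaled by a
# relaxation factor lambda, and the grayscale values are clamped to [0, 1].

SIGMA, THRESHOLD, LAM = 0.8, 1.6, 0.3

def blend(H):
    """Accumulated energy at each pixel from Gaussian pixel blending."""
    n = len(H)
    return [sum(H[j] * math.exp(-((i - j) ** 2) / (2 * SIGMA ** 2))
                for j in range(n)) for i in range(n)]

def diffuse_errors(H, target, iters=50):
    """Iteratively correct H so that gating blend(H) approaches target."""
    H = list(H)
    n = len(H)
    for _ in range(iters):
        I = blend(H)
        # signed error: +1 where cure is missing, -1 where it is extra
        E = [target[i] - (1 if I[i] > THRESHOLD else 0) for i in range(n)]
        for j in range(n):      # diffuse each error back to the mask pixels
            delta = sum(E[i] * math.exp(-((i - j) ** 2) / (2 * SIGMA ** 2))
                        for i in range(n))
            H[j] = min(1.0, max(0.0, H[j] + LAM * delta))
    return H

target = [0, 0, 1, 1, 1, 1, 0, 0]      # desired cured region
H0 = [float(t) for t in target]        # initial mask = sliced binary image
# The naive binary mask under-exposes the ends of the cured region...
assert [1 if v > THRESHOLD else 0 for v in blend(H0)] != target
# ...but a few diffusion iterations recover the target exactly.
H = diffuse_errors(H0, target)
assert [1 if v > THRESHOLD else 0 for v in blend(H)] == target
```

As in the paper's example, the correction raises the grayscale of pixels just outside the under-cured boundary, and the clamp to [0, 1] plays the role of the grayscale truncation.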
Once the parameters are obtained using the image calibration method, the error diffusion method is ready to solve the optimization problem. However, the error diffusion method is still iterative, and its efficiency could still be a problem if the image size is large. The convolution process shows that the pixels around the part boundaries are more important and more challenging to solve in the mask image planning. Based on this observation, a more efficient error diffusion method using adaptive sampling has been developed. Figure 3d shows two examples of adaptive sampling. That is, only the regions (defined as B_p in Figure S4, Supporting Information) that are close to the boundary (shown as blue and green dots) are finely sampled. The adaptive error diffusion method can dramatically improve the mask image planning efficiency without losing solution quality. Also, the algorithm complexity now depends only on the length of the contour rather than the image area. Thus, the adaptive sampling-based error diffusion is a more general and robust method for the sps-MIP-SL process.
The subpixel shifting method significantly expands the search space of mask images. However, the dramatically increased amount of data brings significant challenges to the computational algorithms. The error diffusion method must consider the interplay between the pixels and the light-material interactions, the coordination of the shifted mask images, and the optical and material properties. The revised optimization model incorporating the calibrated pixel blending model in Equations (2)-(5) can be augmented by replacing the single image (pixel location: x0_ij, y0_ij) with a set of images (pixel locations: x0_ij + Δx, y0_ij + Δy, where Δx and Δy are the shifted subpixel distances). Similarly, the error diffusion algorithm shown in Figure S4, Supporting Information, can be revised by adding another loop to evaluate all the subpixel images in each iteration. Note that the supersampling level n can be the same as or larger than the subpixel shifting level.
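The bookkeeping for a set of shifted images can be illustrated as follows. This sketch is my own construction and deliberately ignores the Gaussian blending and calibration terms; it only shows how n² masks shifted by (Δx, Δy) subpixels add up on the n-times supersampled grid:

```python
import numpy as np

def accumulate_shifted(masks, n):
    """Sum the exposure of n*n subpixel-shifted masks on an n-times
    supersampled grid. masks[k] is assumed to be projected at the shift
    (dx, dy) = (k % n, k // n) subpixels."""
    h, w = masks[0].shape
    fine = np.zeros((h * n, w * n))
    for k, mask in enumerate(masks):
        dx, dy = k % n, k // n
        # each coarse pixel covers an n x n block of fine cells,
        # offset by the current subpixel shift
        up = np.kron(mask, np.ones((n, n)))
        fine[dy:, dx:] += up[:h * n - dy, :w * n - dx]
    return fine

# Example: n = 2, four all-on 2 x 2 masks.
n = 2
masks = [np.ones((2, 2)) for _ in range(n * n)]
fine = accumulate_shifted(masks, n)
```

In the full model, each shifted mask would additionally be convolved with the calibrated Gaussian before accumulation, and the extra loop over the n² shifts is exactly the loop added to the algorithm of Figure S4.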

Computational Results and Simulation Verification
The presented mask image planning algorithm based on adaptive error diffusion was implemented in the C++ programming language with the Microsoft Visual C++ compiler. The test platform was a commodity personal computer with a 3.2 GHz processor and 16 GB of random-access memory running Windows 10. A 3D simulation software tool was developed using the Microsoft Foundation Class (MFC) and OpenGL libraries to verify the effectiveness of the computation results. The simulation software visualizes the profile of individual pixels (gold), the accumulated energy (red), and the cured part (blue) for any planned mask image or set of shifted mask images (Figure 4a). Several test cases were designed to demonstrate the effectiveness of the subpixel shifting technique based on adaptive error diffusion in controlling dimensional and geometric tolerances.

Shape Tolerance Control
A test case of a simple circular shape was used to verify the circularity control of the developed method. Figure 4a shows the comparison of the accumulated energy (red) between the traditional image planning method (top) and the subpixel shifting method (bottom). The mask images generated by the subpixel shifting method achieve a much smoother distribution of the accumulated energy. Figure 4c shows the top view of the accumulated energy (red) and the cured profile (black) for the static MIP-SL process (top), the regular pixel blending approach (middle), and the subpixel shifting approach (bottom), respectively. Among them, the subpixel shifting approach yields a smoother boundary and higher circularity. Due to the Gaussian distribution and pixel blending in the layer curing process, the traditional MIP-SL process cannot preserve sharp features. A set of square features (Figure 4e-left, magenta) was designed to validate the squareness control of the subpixel shifting technique. Figure 4d shows the planned images (left) and the associated cured profiles (right, magenta) for the traditional grayscale image planning approach (top) and the error diffusion-based image planning approach (bottom). Note that the error diffusion method generated optimized mask images with intriguing grayscale patterns around the corners, which are nonintuitive to humans. The underlying reason is that the corner features are located at the boundaries of the part; hence, less exposure energy is allocated there due to the pixel blending effect. Therefore, a sharp corner will be lost wherever the light energy is less than the critical exposure energy (E_c). To retrieve the missing portions, the optimized mask images computed using adaptive error diffusion automatically assign positive grayscale values outside the part and around the corners to increase the accumulated energy at the sharp corners. However, the grayscale values of the outside pixels must be delicately assigned to avoid curing the regions outside the CAD model. In the optimized mask image result, higher grayscale values were assigned to the pixels close to the corner. The simulated curing results (Figure 4d-right) clearly show that the subpixel shifting method (bottom) preserves the sharp corners, whereas the static MIP-SL process generates undesired round corners (top).

Dimensional Tolerance Control
Section 2 demonstrated that adjusting the grayscale value of the boundary pixels and using subpixel shifting can, in principle, achieve tight dimensional control (refer to Figure 2d,g). In this section, we mainly focus on validating the dimensional tolerance control for small features. Figure 4e-right (red) shows the test cases with a set of line strips of varying width, from a single pixel to multiple pixels. We observed that the dimensional tolerance of large features (over 10 pixels) is easy to control for both the traditional MIP-SL and sps-MIP-SL processes. However, the traditional MIP-SL process completely loses the small features that are only a few pixels wide (e.g., the 1-pixel-wide strip in Figure 4f-top). In comparison, the adaptive error diffusion method preserves such small features by automatically assigning positive grayscale values to the pixels outside them (Figure 4f-bottom).

Positional Tolerance Control
Figure 2e demonstrates the positional tolerance control for some simple test cases. In this section, we validate the effectiveness of the error diffusion method on more nonintuitive test cases. Figure 4g shows a test case with a micropillar array consisting of micropillars of 2-pixel width and incremental gaps of subpixel size (g.1). The traditional image planning method generates rounded corners and incorrect gap sizes (g.2) due to the combination of small geometric features and subpixel-size locations. As in the sharp-feature and small-feature test cases, the error diffusion method intelligently tuned the grayscale values of the neighboring pixels and computed nonintuitive mask images (two example mask images are shown in g.3 and g.4). The light energy accumulation of the planned mask images generates both sharp corners and the desired gap sizes (g.5).

Sharpness Control
In Section 2, we discussed that a steeper accumulated energy profile with higher energy contrast can generate a sharper part boundary and enhance the surface smoothness. In this section, we used the same cylindrical feature from the shape tolerance study to validate the sharpness control. Figure 4b shows the side view of the accumulated energy (red) and the cured part (blue). The subpixel shifting approach (bottom) achieves a steeper accumulated energy profile and a sharper cured surface. The improvement is mainly attributed to the larger design search space introduced by the mask image motion in the sps-MIP-SL process; the error diffusion method can intelligently assign grayscale values to the mask image pixels to maximize the contrast of the accumulated energy. Such optimized results are nonintuitive to users and cannot be achieved by simple mask image planning methods.

Computational Cost Study
The adaptive error diffusion method considers the specific properties of the MIP-SL mask image planning problem. Hence, it can significantly improve computational efficiency and part quality. In this study, different mask image planning methods, including the linear programming method, the DDS method, and the error diffusion method, were compared on various image sizes for one slicing layer of a Stanford dragon model.[20] In the tests, the iteration number for the error diffusion method was set to 15, and the CPU time for optimizing each image was less than 30 s per layer. The results are shown in Table 1. The linear programming method can only solve very small-scale problems, which is not acceptable for industrial applications. Both the DDS and error diffusion methods can achieve satisfactory results in a reasonable time; however, the error diffusion method, as a closed-loop approach, achieves smaller errors in much less computational time.

Prototype Machine
A prototype machine implementing the sps-MIP-SL process was built for physical tests (Figure 5a). In the prototype system, a DMD-based projector from SprintRay Inc. (Los Angeles, CA) was used as the mask image projection device. The pixel number of the projector was 1024 × 768, and the envelope size of the projection image was 160 × 120 mm²; hence, each pixel size was 0.156 mm. A commercial photocurable resin (Perfactory SI500 from EnvisionTec Inc., Dearborn, MI) was used in the tests. The layer exposure time was 4 s based on the curing depth analysis.[31] A high-performance eight-axis motion control board, KFLOP + SnapAmp (Dynomotion Inc., Calabasas, CA), was used to drive the linear stages. The piezo stage was controlled by a high-precision, fast-response piezo driver (Viking Industrial Products, Marlboro, MA) with an input voltage from 0 to 200 V. For each layer with subpixel level n = 4, the piezo-driven XY linear stage moves the projected mask images in a step-wise mode within a pixel (Figure 5b). The increment of each step is 1/4 of the pixel size (i.e., 0.039 mm), and the projection time of each mask image is 4/16 s (i.e., 0.25 s, or 4 Hz) (Figure 5d). The accuracy and resolution of the piezo stage were verified, as shown in Figure 5c for one voltage pulse applied to the Y-piezo.
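As a sanity check, the stage-planning numbers quoted above follow directly from the projector geometry. The helper below (hypothetical, not part of the control software) reproduces the arithmetic for the reported 1024-pixel, 160 mm-wide image with n = 4 and a 4 s layer exposure:

```python
def subpixel_plan(envelope_mm, pixels, n, layer_exposure_s):
    """Derive pixel size, piezo step, image count, and per-mask exposure."""
    pixel_mm = envelope_mm / pixels          # projected pixel size
    step_mm = pixel_mm / n                   # piezo step = 1/n of a pixel
    images = n * n                           # shifted masks per layer
    t_image_s = layer_exposure_s / images    # projection time per mask
    return pixel_mm, step_mm, images, t_image_s

pixel_mm, step_mm, images, t_image_s = subpixel_plan(160.0, 1024, 4, 4.0)
# pixel_mm ~ 0.156 mm, step_mm ~ 0.039 mm, 16 masks at 0.25 s each (4 Hz)
```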

Dimensional Accuracy Study
A part built using MIP-SL is usually larger than the desired size if no compensation is applied. To verify the adaptive error diffusion method in building parts with accurate dimensions, a test case with incremental stair sizes was designed, as shown in Figure 6a. Furthermore, the part was duplicated six times and placed at six different positions on the building platform to verify the algorithm's effectiveness in considering the mask image calibration result. The size of each built stair was measured using a micrometer in both the X and Y directions. The measurement results of the eight stairs are shown in Figure 6a. Almost all the dimensions are controlled within 50 μm (note the pixel size was 156 μm and the subpixel size was 39 μm), and the error percentage is less than 1% (an average of 0.15% with a standard deviation of 0.2%).

Thin Feature Verification
The simple boundary erosion method may lose thin features, whereas the error diffusion method generalizes to parts with different dimensions. This test case verifies the error diffusion method on building such thin features. The CAD model is composed of thin strips and cubes with incremental pixel sizes, as shown in Figure 6b. Note that the same model was used in the simulation test shown in Figure 4e. All the tiny features were successfully built in the tests, and the size variation is within 50 μm. Figure 6b shows the built results of the error diffusion method, in which all the thin features are retrieved. In comparison, the same part built using the boundary erosion method lost all the one-pixel-size features.

Sharp Feature Verification
The boundary erosion method does not consider the energy information; hence, the fabricated sharp features are rounded due to the pixel blending effect. In comparison, the error diffusion algorithm considers both the geometric and energy information to recover the sharp corners. The same square features from the previous section were used in the sharp feature tests. The mask images generated by the error diffusion method were used in the building process. The built parts were digitally scanned using a high-resolution scanner (2400 dpi, HP Scanjet 7400); the scanned images are shown in Figure 6c. All of the corners are rounded for the part built by the boundary erosion method (Figure 6c-left), while the corners are much sharper for the part built by the error diffusion method (Figure 6c-right). Some extra material exists at the built corners, which can be further reduced by better calibration and optimization procedures in the future.

Real-World Testcase Verification
Finally, the error diffusion-based subpixel shifting technique was tested using a real-world test case. As shown in Figure 6d, a teeth model was fabricated using the traditional MIP-SL process, the regular pixel blending technique,[20] and the subpixel shifting method. The subpixel shifting method clearly outperforms the other two approaches in terms of surface finish and process resolution, which verifies the effectiveness of the data-driven subpixel shifting method for MIP-SL. The presented technique may also benefit other emerging processes[36,37] and volumetric lithography.[38] More broadly, although developed for the sps-MIP-SL process, the presented data-driven process planning framework is applicable to other energy-deposition-based AM processes by considering their spatiotemporal properties.

Conclusion
AM processes suffer from the limited tools available for material deposition and energy control. Most AM processes are planned in either the temporal domain or the spatial domain to balance process resolution and production throughput. In this work, we have presented a subpixel shifting-based MIP-SL process that rapidly moves the projection device by a subpixel distance in a split second, so the mask images used in the sps-MIP-SL process have a significantly enlarged design space to address the limited tools available. Furthermore, an efficient error diffusion-based optimization method built on calibrated light projection systems has been developed to fully utilize the spatiotemporal property of the MIP-SL process. Both simulation and experimental results show that the data-driven mask image planning method based on subpixel shifting can significantly improve the process accuracy and resolution without losing production throughput. Finally, the developed subpixel shifting technique is a generic method that can be integrated into various MIP-SL systems to advance their use in future industrial applications.

Figure 1. Subpixel shifting in a split second. a) Various vat photopolymerization processes in the spatiotemporal domain. b) Principle of subpixel shifting for n = 2; four mask images are used to photocure a 2D layer. c) The binary mask image (middle) for one layer of a given cylinder model (left) and its simulated curing profile in the lateral direction (right). d) The grayscale mask image (left) for the same layer and its simulated curing profile (right). e) A set of optimized mask images (Mask 1-Mask 4) for the same layer using the subpixel shifting method and their simulated curing profile (right).
Figure 2a-left shows the Gaussian profile of a pixel.

Figure 2. Principle of subpixel shifting. a) 2D (projection of 3D) and 3D visualizations of the Gaussian-shaped energy distribution of a pixel and the accumulated energy of multiple pixels. b) The simulated cured shape based on the accumulated energy and curing threshold. c) The accumulated energy of pixels with differently shaped Gaussian distributions. d) A 1D simulation shows the dimensional accuracy can be tuned by adjusting the grayscale value of the boundary pixel. e) An example showing the pixel shifting technique can achieve high positional tolerance. f) Two examples showing the pixel shifting technique (bottom) can achieve high shape tolerance. g) The dimensional accuracy can also be tuned by the location of the boundary pixel in the pixel shifting system. h) The comparison between regular pixel blending and pixel shifting shows the pixel shifting approach can achieve higher-contrast images and hence sharper and cleaner cured edges.
In Figure 3b, (a) shows a portion of a typical target image, and (b-d) show the error images corresponding to the target image at different optimization stages.

Figure 3. Calibrated image planning and optimization. a) Error diffusion algorithm based on a closed-loop pixel blending process. b) Iteration process controlled by the optimization parameters. c) An example showing the error diffusion process. d) Adaptive error diffusion with improved efficiency.

Figure 4. Subpixel shifting process. a) The perspective view of the 3D simulation software. b) The side view of the 3D simulation software shows the pixel shifting approach (bottom) can achieve steeper accumulated energy and a sharper cured surface. c) The top view of the 3D simulation software shows the pixel shifting approach (bottom) can achieve higher circularity of the printed feature than the static MIP-SL process (top) and the regular pixel blending approach (middle). d) Sharp feature simulation: the mask image (left) and simulated cured profile in the lateral direction (right) for the grayscale-based mask image planning method (top) and the error diffusion-based mask image planning method (bottom). e) The benchmark test cases designed to simulate the mask image planning algorithms for sharp features and small features. f) Small feature simulation: the mask image (left) and simulated cured profile in the lateral direction (right) for the grayscale-based mask image planning method (top) and the error diffusion-based mask image planning method (bottom). g) Dimensional and positional accuracy simulation: a micropillar array consists of micropillars with 2-pixel width and incremental gaps with subpixel size (g.1). The regular image planning results in rounded corners and incorrect gap size (g.2), whereas the masks (g.3 and g.4) from the error diffusion method can achieve both sharp corners and the desired gap size (g.5).

Figure 5. a) Experimental setup of the sps-MIP-SL system. b) The subpixel shifting sequence of Mask 1-Mask 16 for n = 4. c) The mask image movement by a piezo-driven XY linear stage for one voltage pulse in the Y-axis. d) The XY-piezo stage control to achieve the subpixel shifting sequence in building a layer.

Figure 6. Experimental results and analysis. a) Dimensional accuracy study. b) Thin feature verification. c) Sharp feature verification. d) A real-world case study.