#### OSVR

The SVR method applies a support vector machine (SVM) to regression analysis and, like the SVM, can construct nonlinear models through the kernel trick. The OSVR method efficiently updates an SVR model so that it continues to satisfy the Karush–Kuhn–Tucker (KKT) conditions when training data are added or deleted.

The primal form of SVR is given by the following optimization problem.

Minimize

- $\min_{\mathbf{w},\, b}\ \frac{1}{2}\left\| \mathbf{w} \right\|^{2} + C \sum_{i=1}^{N} \left| y_{i} - f(\mathbf{x}_{i}) \right|_{\varepsilon}$ (A1)

where *y*_{i} and **x**_{i} are training data, *f* is an SVR model, **w** is a weight vector, *ε* is a threshold, and *C* is a penalty factor that controls the trade-off between model complexity and training errors. The second term of Eq. A1 is the *ε*-insensitive loss function, which is given as follows

- $\left| y - f(\mathbf{x}) \right|_{\varepsilon} = \max\left( 0,\ \left| y - f(\mathbf{x}) \right| - \varepsilon \right)$ (A2)
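The loss of Eq. A2 ignores residuals smaller than *ε* and penalizes only the excess beyond the tube. A minimal sketch (the function name is ours, not the paper's):

```python
import numpy as np

def eps_insensitive_loss(y, f_x, eps):
    """Epsilon-insensitive loss of Eq. A2:
    |y - f(x)|_eps = max(0, |y - f(x)| - eps)."""
    return np.maximum(0.0, np.abs(y - f_x) - eps)
```

For example, a residual of 0.05 with *ε* = 0.1 incurs zero loss, while a residual of 0.3 incurs 0.2.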

Through the minimization of Eq. A1, we can construct a regression model that balances generalization capability against fit to the training data. The **y**-value predicted for input data **x** is represented as follows

- $f(\mathbf{x}) = \sum_{i=1}^{N} \left( \alpha_{i} - \alpha_{i}^{*} \right) K(\mathbf{x}_{i}, \mathbf{x}) + b$ (A3)

where *N* is the number of training data, *b* is a constant term, and *K* is a kernel function. The kernel function in our application is a radial basis function

- $K(\mathbf{x}_{i}, \mathbf{x}_{j}) = \exp\left( -\gamma \left\| \mathbf{x}_{i} - \mathbf{x}_{j} \right\|^{2} \right)$ (A4)
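The kernel of Eq. A4 and the predictor of Eq. A3 can be sketched as follows (function names are ours; `theta` anticipates the substitution *θ*_{i} = *α*_{i} − *α*_{i}^{*} made later in Eq. A9):

```python
import numpy as np

def rbf_kernel(xi, xj, gamma):
    """Radial basis function kernel of Eq. A4:
    K(xi, xj) = exp(-gamma * ||xi - xj||^2)."""
    d = np.asarray(xi, float) - np.asarray(xj, float)
    return np.exp(-gamma * np.dot(d, d))

def predict(x, X_train, theta, b, gamma):
    """SVR prediction of Eq. A3 with theta_i = alpha_i - alpha_i^*:
    f(x) = sum_i theta_i * K(x_i, x) + b."""
    return sum(t * rbf_kernel(xi, x, gamma)
               for t, xi in zip(theta, X_train)) + b
```

Identical inputs give a kernel value of 1, and all-zero coefficients reduce the prediction to the constant term *b*.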

where *γ* is a tuning parameter controlling the width of the kernel function. From Eqs. A1 and A2, *α*_{i} and *α*_{i}^{*} in Eq. A3 are obtained by minimizing the following objective function

- $\frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} \left( \alpha_{i} - \alpha_{i}^{*} \right)\left( \alpha_{j} - \alpha_{j}^{*} \right) K_{ij} - \sum_{i=1}^{N} y_{i} \left( \alpha_{i} - \alpha_{i}^{*} \right) + \varepsilon \sum_{i=1}^{N} \left( \alpha_{i} + \alpha_{i}^{*} \right)$ (A5)

subject to

- $\sum_{i=1}^{N} \left( \alpha_{i} - \alpha_{i}^{*} \right) = 0$ (A6)

- $0 \le \alpha_{i},\ \alpha_{i}^{*} \le C$ (A7)

*K*_{ij} in Eq. A5 is represented as follows

- $K_{ij} = K(\mathbf{x}_{i}, \mathbf{x}_{j})$ (A8)
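The Gram matrix *K*_{ij} of Eq. A8 under the RBF kernel of Eq. A4 can be built in a vectorized form; a sketch (the function name is ours):

```python
import numpy as np

def gram_matrix(X, gamma):
    """Gram matrix K_ij = K(x_i, x_j) (Eq. A8) for the RBF kernel
    of Eq. A4, computed for all pairs of training data at once."""
    X = np.asarray(X, float)
    # squared Euclidean distances between every pair of rows
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * sq)
```

The result is symmetric with a unit diagonal, and entries decay toward 0 as points grow farther apart.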

Now, we define *θ*_{i} as follows

- $\theta_{i} = \alpha_{i} - \alpha_{i}^{*}$ (A9)

From Eqs. A3, A8, and A9, the predicted **y**-value of data **x**_{i} is given as

- $f(\mathbf{x}_{i}) = \sum_{j=1}^{N} \theta_{j} K_{ij} + b$ (A10)

where *θ*_{i} meets the following equation

- $\sum_{i=1}^{N} \theta_{i} = 0$ (A11)

The error function *h* is defined as

- $h(\mathbf{x}_{i}) = f(\mathbf{x}_{i}) - y_{i}$ (A12)

Each training datum must meet one of the following KKT conditions:

- $h(\mathbf{x}_{i}) \ge \varepsilon, \quad \theta_{i} = -C$ (A13)
- $h(\mathbf{x}_{i}) = \varepsilon, \quad -C < \theta_{i} < 0$ (A14)
- $-\varepsilon \le h(\mathbf{x}_{i}) \le \varepsilon, \quad \theta_{i} = 0$ (A15)
- $h(\mathbf{x}_{i}) = -\varepsilon, \quad 0 < \theta_{i} < C$ (A16)
- $h(\mathbf{x}_{i}) \le -\varepsilon, \quad \theta_{i} = C$ (A17)

All training data can therefore be divided into the following sets: error support vectors, E, which meet Eq. A13 or A17; margin support vectors, S, which meet Eq. A14 or A16; and remaining vectors, R, which meet Eq. A15.
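The assignment of a training datum to E, S, or R can be written as a direct membership test on *h*(**x**_{i}) and *θ*_{i}. A sketch assuming the standard pairing of the *h*- and *θ*-conditions (the function name and tolerance handling are ours):

```python
def classify_region(h_i, theta_i, eps, C, tol=1e-8):
    """Assign a training datum to E, S, or R from the KKT conditions
    (Eqs. A13-A17), with h(x_i) = f(x_i) - y_i as in Eq. A12."""
    # Eq. A15: theta_i = 0 and -eps <= h <= eps -> remaining vector
    if abs(theta_i) < tol and -eps - tol <= h_i <= eps + tol:
        return "R"
    # Eqs. A14 / A16: h = +/-eps with theta strictly inside (0, +/-C)
    # -> margin support vector
    if abs(h_i - eps) < tol and -C < theta_i < 0:
        return "S"
    if abs(h_i + eps) < tol and 0 < theta_i < C:
        return "S"
    # Eqs. A13 / A17: theta pinned at -C or +C with |h| >= eps
    # -> error support vector
    if abs(theta_i + C) < tol and h_i >= eps - tol:
        return "E"
    if abs(theta_i - C) < tol and h_i <= -eps + tol:
        return "E"
    return "none"  # KKT violated; the OSVR update must adjust theta and b
```

A point inside the *ε*-tube with zero coefficient lands in R; a point exactly on the tube boundary with an intermediate coefficient lands in S; a point outside the tube with its coefficient at a bound lands in E.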

When new data **x**_{c}, *y*_{c} are added, there is no need to update the SVR model (*θ*_{i}, *b*) if **x**_{c} belongs to R. On the other hand, if **x**_{c} belongs to E or S, the initial value of *θ*_{c}, i.e., the *θ*-value corresponding to **x**_{c}, is set to 0, and *θ*_{c}, *θ*_{i}, and *b* are gradually changed to meet the KKT conditions. These changes may cause individual training data to move to other regions; assuming no such movements, however, the variations of *h*(**x**_{i}), *θ*_{c}, *θ*_{i}, and *b* (denoted Δ*h*(**x**_{i}), Δ*θ*_{c}, Δ*θ*_{i}, and Δ*b*, respectively) can be derived from Eqs. A10–A12 as follows

- $\Delta h(\mathbf{x}_{i}) = K_{ic} \Delta\theta_{c} + \sum_{j=1}^{N} K_{ij} \Delta\theta_{j} + \Delta b$ (A18)

- $\Delta\theta_{c} + \sum_{i=1}^{N} \Delta\theta_{i} = 0$ (A19)

where $K_{ic} = K(\mathbf{x}_{i}, \mathbf{x}_{c})$.

The *θ*_{i}-values of the training data belonging to E and R do not change, because of Eqs. A13, A15, and A17; thus, Eq. A18 can be transformed into

- $\Delta h(\mathbf{x}_{i}) = K_{ic} \Delta\theta_{c} + \sum_{j \in S} K_{ij} \Delta\theta_{j} + \Delta b$ (A20)

The *h*(**x**_{i})-values of the training data belonging to S remain fixed, owing to Eqs. A14 and A16. Thus, Eqs. A19 and A20 become

- $\Delta\theta_{c} + \sum_{j \in S} \Delta\theta_{j} = 0$ (A21)

- $K_{ic} \Delta\theta_{c} + \sum_{j \in S} K_{ij} \Delta\theta_{j} + \Delta b = 0 \quad (\mathbf{x}_{i} \in S)$ (A22)

Here, *M* is the number of training data that belong to S. From Eqs. A20, A23, and A24, Δ*h*(**x**_{i}) for the training data belonging to E and R can be transformed into

- $\Delta h(\mathbf{x}_{i}) = \gamma_{i} \Delta\theta_{c}$ (A27)

where

- $\gamma_{i} = K_{ic} + \sum_{j \in S} K_{ij} \beta_{j} + \beta_{0}$ (A28)

with $\beta_{0} = \Delta b / \Delta\theta_{c}$ and $\beta_{j} = \Delta\theta_{j} / \Delta\theta_{c}$ ($j \in S$), the sensitivities given by Eqs. A23 and A24.
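The sensitivities *β* (from the linear system of Eqs. A21 and A22, as solved in Eqs. A23 and A24) and *γ* (Eq. A28) can be computed numerically. A sketch following the standard incremental SVR derivation (function name, argument layout, and the bordered-matrix construction are our assumptions, since Eqs. A23–A26 are not reproduced in this appendix):

```python
import numpy as np

def betas_gammas(K, S, kc):
    """Solve Eqs. A21-A22 for the beta sensitivities and evaluate the
    gamma coefficients of Eq. A28.

    K  : (N, N) Gram matrix K_ij (Eq. A8)
    S  : list of margin-support-vector indices (|S| = M)
    kc : (N,) kernel values K(x_i, x_c) against the added datum x_c
    """
    M = len(S)
    # Bordered system: row/column 0 carry the Delta b unknown and the
    # equality constraint of Eq. A21; the block is the Gram matrix on S.
    A = np.zeros((M + 1, M + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K[np.ix_(S, S)]
    rhs = -np.concatenate(([1.0], kc[S]))
    # beta = [Delta b, Delta theta_S] per unit Delta theta_c
    beta = np.linalg.solve(A, rhs)
    # Eq. A28: gamma_i = K_ic + sum_{j in S} K_ij beta_j + beta_0,
    # i.e. Delta h(x_i) per unit Delta theta_c
    gamma = kc + K[:, S] @ beta[1:] + beta[0]
    return beta, gamma
```

By construction, *γ*_{i} = 0 for every margin support vector (their *h*-values stay fixed), and the *θ*-components of *β* sum to −1, matching Eq. A21.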

From Eqs. A24 and A27, the Δ*θ*_{c} required for the movement of each training datum is represented as

- $\Delta\theta_{c} = \Delta\theta_{i} / \beta_{i} \quad (\mathbf{x}_{i} \in S)$ (A29)

- $\Delta\theta_{c} = \Delta h(\mathbf{x}_{i}) / \gamma_{i} \quad (\mathbf{x}_{i} \in E,\ R)$ (A30)

The absolute Δ*θ*_{c}-values required for each training datum to move from its current region to another (from E to S, from S to E or R, and from R to S) are calculated using Eqs. A29 and A30. The minimum of these absolute values over all training data is selected, and the datum attaining it is actually moved to its new region. The calculation of the absolute Δ*θ*_{c}-values and the movement of the datum with the minimum value are repeated until every training datum meets the KKT conditions, namely one of Eqs. A13–A17. When a datum is deleted from the training data, the same iterative calculation is performed until all the data meet the KKT conditions.
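The search for the limiting step can be sketched as follows. This is a simplified illustration (the function name is ours; the sign of the update direction and degenerate *β*/*γ* values are glossed over in the full algorithm):

```python
def candidate_steps(S, ER, beta_S, gamma, theta, h, eps, C):
    """Return the datum index and the smallest-|value| Delta theta_c
    that moves some training datum to the border of an adjacent region.

    S, ER    : index lists (margin SVs; error + remaining vectors)
    beta_S   : dict i -> beta_i for i in S (Eq. A24 sensitivities)
    gamma    : dict i -> gamma_i (Eq. A28)
    theta, h : dicts of current theta_i and h(x_i) values
    """
    cands = []
    # Eq. A29: a margin SV leaves S when theta_i reaches 0 (-> R)
    # or +/-C (-> E)
    for i in S:
        if abs(beta_S[i]) > 1e-12:
            for target in (0.0, C, -C):
                cands.append((i, (target - theta[i]) / beta_S[i]))
    # Eq. A30: an E or R datum enters S when h(x_i) reaches +/-eps
    for i in ER:
        if abs(gamma[i]) > 1e-12:
            for target in (eps, -eps):
                cands.append((i, (target - h[i]) / gamma[i]))
    # the minimum-|Delta theta_c| rule of the text
    return min(cands, key=lambda c: abs(c[1]))
```

Each iteration of the OSVR update applies the returned step, moves the corresponding datum to its new region, recomputes *β* and *γ*, and repeats until the KKT conditions hold for all data.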