Extension of simultaneous Diophantine approximation algorithm for partial approximate common divisor variants

Korea Institute for Advanced Study Individual Grant, Grant/Award Numbers: CG078201, CG080601; Institute for Information and Communications Technology Promotion, Grant/Award Number: 2016-6-00598

Abstract: A simultaneous Diophantine approximation (SDA) algorithm takes instances of the partial approximate common divisor (PACD) problem as input and outputs a solution. While several encryption schemes have been published whose security depends on the presumed hardness of variants of the PACD problem, few studies have attempted to extend the SDA algorithm to these variants. In this study, the SDA algorithm is extended to solve the general PACD problem. To proceed, the variants of the PACD problem are first classified, and an extension of the SDA algorithm is suggested for each. Technically, the authors show that a short vector of a certain lattice used in the SDA algorithm yields an algebraic relation between the secret parameters; all the secret parameters can then be recovered by finding this short vector. It is also confirmed experimentally that the algorithm works well.


| INTRODUCTION
Simultaneous Diophantine approximation (SDA) algorithms have been proposed to analyse the partial approximate common divisor (PACD) problem suggested by Howgrave-Graham [1]. Informally, PACD is the problem of recovering a secret prime p when approximate multiples of the form x_i = p · q_i + r_i, for small integer terms r_i, are given. Since the PACD problem was exploited to construct the fully homomorphic encryption over the integers suggested by van Dijk, Gentry, Halevi, and Vaikuntanathan (called the DGHV scheme) [2], both PACD and SDA have received considerable attention.
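As a concrete illustration, a toy PACD instance generator can be sketched as follows (the prime, the bit lengths, and the function name are illustrative choices of ours, far below cryptographic sizes):

```python
import random

def gen_pacd(p, num, q_bits=32, rho=8):
    """Return toy PACD instances x_i = p*q_i + r_i with |r_i| < 2^rho."""
    out = []
    for _ in range(num):
        q = random.getrandbits(q_bits)
        r = random.randrange(-2**rho + 1, 2**rho)
        out.append(p * q + r)
    return out

p = 1_000_003  # a small prime standing in for the secret
xs = gen_pacd(p, 5)
# Each instance is a near-multiple of p: its centred residue mod p is small.
for x in xs:
    r = ((x + p // 2) % p) - p // 2
    assert abs(r) < 2**8
```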
Shortly thereafter, several variants of the PACD problem were introduced. To encrypt multiple messages at once using the DGHV scheme, for efficiency, Cheon et al. proposed a variant of the PACD problem and suggested batch fully homomorphic encryption over the integers [3]. In addition, Coron, Lepoint, and Tibouchi proposed a practical candidate for a multilinear map employing a variant of PACD [4]. In Cheon et al.'s paper, this new base problem is defined as ℓ-DACD, while Galbraith, Gebregiyorgis, and Murphy referred to it as CRT-ACD [5]. To avoid confusion, throughout this paper, we call this variant problem CCK-ACD. Similarly, we rename the problem used in Coron et al.'s paper scaled CRT-ACD. Because these primitives offer improved functionality and efficiency, they have been used in a variety of applications, especially homomorphic encryption over the integers and indistinguishability obfuscation [6][7][8][9][10][11][12][13][14][15][16][17][18][19][20].
Although these problems share a common mathematical structure, each has identifying characteristics. Informally, CCK-ACD uses an extra-large prime factor, and scaled CRT-ACD instances are multiplied by a common secret constant. According to these two characteristics, we define scaled CCK-ACD and CRT-ACD and classify four distinct problems. Each of the four problems asks to find the η-bit integer factors of N from given γ-bit instances b_j and the γ-bit integer N. For example:

[CCK-ACD] N = p_0 · ∏_{i=1}^n p_i with |p_0| ≫ |p_i|, and b_j ≡ r_{i,j} (mod p_i) with r_{i,j} ∈ [−2^ρ, 2^ρ] for each i ∈ {1, …, n}.

CRT-ACD omits the extra-large factor p_0, and the scaled versions provide the same instances multiplied by a common unknown constant c ∈ Z_N.
Because the SDA algorithm was only proposed for solving the PACD problem, analysing the variant problems emerged as follow-up research. Recently, Cheon et al. first suggested an extended SDA algorithm to solve the CCK-ACD problem [21] under the condition n · (2 log β/(β − 1) + 1) = O(η). However, to date, extending the SDA algorithm to solve the other variants has not been clearly established.

| Our work
The original SDA algorithm for PACD described by Howgrave-Graham (see Section 2 of [1]) is designed to find a short vector of the column lattice L generated by the (k + 1) × (k + 1) matrix whose first column is (1, b_1, …, b_k)^T and whose remaining columns are N · e_i for 2 ≤ i ≤ k + 1, where N = p · q_0 and the b_i's are PACD instances of the form p · q_i + r_i. We note that the lattice L then contains the short vector (q_0, q_0 · r_1, …, q_0 · r_k)^T. Therefore, computing this short vector is equivalent to recovering a non-trivial factor of N in the case of PACD. However, for the other variants of the PACD problem, it has not been shown that a short vector of the SDA lattice is related to the prime factors.
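This lattice membership can be verified directly on toy numbers. The following sketch (with small parameters chosen by us) checks that the integer combination of the basis columns with coefficients (q_0, −q_1, …, −q_k) yields exactly (q_0, q_0·r_1, …, q_0·r_k)^T, since q_0·b_i − q_i·N = q_0·r_i:

```python
# Toy check that (q0, q0*r1, ..., q0*rk)^T lies in the column lattice of
# the SDA basis matrix B: first column (1, b1, ..., bk)^T, rest N*e_i.
p, q0 = 10007, 97
qs = [31, 45, 62]
rs = [3, -5, 7]
N = p * q0
bs = [p * q + r for q, r in zip(qs, rs)]

# Integer combination with coefficients (q0, -q1, ..., -qk):
vec = [q0] + [q0 * b - q * N for b, q in zip(bs, qs)]
assert vec == [q0] + [q0 * r for r in rs]
```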
In this paper, we present SDA-variant algorithms for solving scaled CCK-ACD as well as CRT-ACD and its scaled version. In other words, the SDA approach is applicable to problems beyond PACD and CCK-ACD, and we can obtain all the secret factors of this class of problems.
Our algorithms employ lattice reduction algorithms such as the BKZ algorithm to obtain a short vector generated by multiple instances of the target problems, and the short vector obtained is used to reconstruct algebraic relations for recovering secret primes, similar to previous approaches [21][22][23]. Asymptotically, we can find all prime factors of CRT-ACD and scaled CRT-ACD under the condition n · (2 log β/(β − 1) + 1) = O(η).
More precisely, we first suggest a new algorithm to solve the CRT-ACD problem under the condition n · (2 log β/(β − 1) + 1) = O(η), where β is the block size of the BKZ algorithm, and η and ρ are the bit-sizes of the primes p_i and the integers r_i, respectively. Under this condition, our algorithm recovers all secret factors in time max{poly(n, η), T(A_β)}, where T(A_β) is the time complexity of running the BKZ algorithm with block size β. Second, we provide a similar algorithm to solve the scaled CRT-ACD problem. More precisely, the scaled problem provides instances of the form c · CRT_{(p_1,…,p_n)}(r_1, …, r_n) and N = ∏_{i=1}^n p_i for an unknown fixed constant c ∈ Z_N, instead of the original instances CRT_{(p_1,…,p_n)}(r_1, …, r_n) and N. By exploiting a similar algorithm, we show that the problem is solved under an analogous condition. The algorithm likewise takes time max{poly(n, η), T(A_β)}, where T(A_β) is the time complexity of running the BKZ algorithm with block size β.
Finally, we produce an algorithm for reducing the scaled CCK-ACD problem to the scaled CRT-ACD problem under the asymptotic condition η ≥ 2ρ + √(2γ). Thus, we can find all secret factors of scaled CCK-ACD in two steps: (1) reducing the scaled CCK-ACD problem to the scaled CRT-ACD problem, and (2) solving the scaled CRT-ACD problem.
Related work. In addition to the SDA algorithm, the orthogonal lattice attack (OLA) is another way to solve the PACD problem. For example, Coron and Pereira recently proposed an algorithm that solves the scaled CRT-ACD problem by extending the OLA [23]. Compared with our extended SDA algorithm, their approach has the same complexity for solving the scaled CRT-ACD problem.
The extended OLA and our SDA-type algorithm both aim at recovering a linear summation of the r_{i,j}, where r_{i,j} is an integer of each ACD variant problem. Once it is obtained, the next step for recovering the secret prime factor is the same; the two approaches differ only in how they obtain the linear summation.
As the name suggests, the extended OLA for ACD variants considers an orthogonal lattice to find the linear summation. Let {b_j}_{1≤j≤t} be scaled CCK-ACD instances with b_j ≡ c · r_{i,j} (mod p_i). Let us denote by b and r_i the t-dimensional vectors (b_1, …, b_t) and (r_{i,1}, …, r_{i,t}), respectively. Two lattices are then considered: the lattice L^⊥ of integer vectors orthogonal to b modulo N, and the lattice L^⊥_1 of integer vectors orthogonal to all the r_i over the integers. By definition, L^⊥_1 ⊂ L^⊥, and L^⊥_1 contains a relatively short basis. Intuitively, OLA expects that short vectors of L^⊥ form a basis of L^⊥_1. Therefore, the orthogonal lattice of these short vectors reveals a linear summation of the r_i, the target of the algorithm, under a certain condition. Because the common factor c is cancelled out when considering the orthogonal lattice L^⊥, the OLA is applicable to all the ACD variants, as is our algorithm.
Organisation. In Section 2, we introduce preliminary information related to the lattice and previous works. In addition, we describe how to obtain an auxiliary input from CRT-ACD instances and variant samples in Sections 3 and 3.1, respectively. Next, we provide a heuristic analysis of scaled CCK-ACD in Section 3.2. Finally, we provide an experimental result in Section 4.

| PRELIMINARIES
In this section, we introduce notations and some background about CRT-ACD before presenting our theorems. First, we introduce some notions regarding lattices. Then, we define the main problem CRT-ACD and a variant, the so-called CRT-ACDwAI. Lastly, we review the polynomial-time analysis of CRT-ACDwAI described by Cheon et al. [24].
Notation. Throughout this paper, we use a ← A to denote the uniform sampling operator that selects a from A for a finite set A. Moreover, when the distribution D is given, we use a ← D to denote the selection operator from D.
For integers t and p, we denote by [t]_p the integer in (−p/2, p/2] satisfying [t]_p ≡ t (mod p). In general, for distinct primes p_1, p_2, …, p_n, we define CRT_{(p_1,p_2,…,p_n)}(r_1, r_2, …, r_n) (abbreviated as CRT_{(p_i)}(r_i)) as the unique integer in (−N/2, N/2] that is congruent to r_i modulo p_i for each i, where N = ∏_{i=1}^n p_i. We use bold letters to denote vectors or matrices. For any matrix A, we write A = (a_{i,j}) when a_{i,j} is the (i, j)-entry of A, denote the transpose of A by A^T, and denote the i-th row vector of A by [A]_i. Moreover, we denote by size(A) the logarithm of the maximal absolute value of the entries. In addition, we define the infinity norm ∥A∥_∞ as max_{1≤j≤m} ∑_{i=1}^n |a_{i,j}| for A = (a_{i,j}). We also denote by diag(a_1, …, a_n) the diagonal matrix with diagonal coefficients a_1, …, a_n. Finally, we denote by A (mod N) the matrix whose (i, j)-entry is [a_{i,j}]_N for an integer N.
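The centred residue [t]_p and the operator CRT_{(p_i)}(r_i) can be sketched in a few lines of Python (toy parameters; the helper names are our own):

```python
from math import prod

def centered_mod(t, p):
    """[t]_p: the representative of t mod p in (-p/2, p/2]."""
    r = t % p
    return r - p if r > p // 2 else r

def crt(primes, residues):
    """CRT_{(p_1,...,p_n)}(r_1,...,r_n): the integer in (-N/2, N/2]
    congruent to r_i mod p_i for each i, where N = p_1*...*p_n."""
    N = prod(primes)
    x = 0
    for p, r in zip(primes, residues):
        Np = N // p
        x += r * Np * pow(Np, -1, p)  # standard CRT reconstruction
    return centered_mod(x, N)

primes = [101, 103, 107]
residues = [5, -7, 11]
b = crt(primes, residues)
for p, r in zip(primes, residues):
    assert centered_mod(b, p) == r
```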
For a vector v = (v_1, …, v_n), we compute the 2-norm ∥v∥ and the 1-norm ∥v∥_1 as √(∑_{i=1}^n v_i²) and ∑_{i=1}^n |v_i|, respectively. We also use the notation v mod N, regarding v as a matrix.

| Lattice
A lattice generated by a set of linearly independent vectors A = {a_1, a_2, …, a_n} ⊂ R^m, denoted by L(A), is the set of all integer linear combinations of A. The elements of A are called a lattice basis of L(A), and the rank of L(A) is defined as n. If A is a matrix, then L(A) is the lattice generated by the set of all column vectors of A, and we call A a basis matrix of L(A). Generally, we define the determinant of a matrix A as √(det(AᵀA)). Additionally, the determinant of the lattice with basis matrix A, denoted by det L(A), is defined as the determinant of the basis matrix, det A.
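As a small numeric illustration of det L(A) = √(det(AᵀA)) for a non-square basis (the rank-2 basis in R^3 below is a toy example of ours):

```python
import math

# Toy computation of det L(A) = sqrt(det(A^T A)) for a rank-2 lattice in R^3.
a1 = (1, 0, 2)
a2 = (0, 3, 1)

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

# Gram matrix A^T A of the basis {a1, a2}, and its determinant.
g11, g12, g22 = dot(a1, a1), dot(a1, a2), dot(a2, a2)
gram_det = g11 * g22 - g12 * g12
det_L = math.sqrt(gram_det)
```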
Let L be a lattice of rank n. Then, the successive minima λ 1 ; …; λ n ∈ R þ of L are defined as follows. For any 1 ≤ i ≤ n, λ i is a minimum value such that there exist i linearly independent vectors in L whose sizes do not exceed λ i . We use λ i ðLÞ to denote the i-th successive minimum of the lattice L. In relation to the successive minima, there is a useful result to restrict them, which is called Minkowski's Theorem [25].

Theorem (Minkowski)
Let L ⊂ R^m be an n-rank lattice. Then λ_1(L) ≤ √n · (det L)^{1/n}, and more generally ∏_{i=1}^n λ_i(L) ≤ n^{n/2} · det L. Since λ_i(L) ≥ λ_2(L) for all i ≥ 2, the second inequality also yields an upper bound for λ_2(L): λ_2(L) ≤ (n^{n/2} · det L / λ_1(L))^{1/(n−1)}. Finding a short vector of a lattice is essential in our attack. Fortunately, there are algorithms, called lattice reduction algorithms, for finding a short vector of a lattice.
Lattice reduction algorithms. The LLL algorithm and the BKZ algorithm, introduced in [25, 26], are lattice reduction algorithms. We mainly use these algorithms to find an approximately short vector of a lattice in finite time. According to [25], the LLL algorithm on an n-rank lattice L with basis matrix B outputs a short vector v satisfying ∥v∥ ≤ 2^{(n−1)/2} · λ_1(L). We denote the time complexity of the LLL algorithm by T_L(n, size(B)), which is polynomial in its inputs.
For the BKZ algorithm, according to [26], the block size β determines how short the output is. Applying the BKZ algorithm to an n-rank lattice L with basis B, we can obtain in poly(n, size(B)) · C_HKZ(β) time a short vector v satisfying ∥v∥ ≤ 4 · γ_β^{(n−1)/(β−1)} · λ_1(L), where γ_β is the Hermite constant of rank β, which does not exceed β, and C_HKZ(β) denotes the time required to obtain the shortest vector of a β-dimensional lattice. Because this takes 2^{O(β)} time, we regard C_HKZ(β) as 2^{O(β)} henceforth.

| Cryptanalysis of the CRT-ACD with auxiliary input
The PACD problem, introduced by Howgrave-Graham [27], asks to find a secret prime p given many instances that are near-multiples of p. Using the Chinese remainder theorem (CRT) on multiple primes, we define a multi-prime version of PACD, called the CRT-ACD problem, as follows.

Definition 1 (CRT-ACD) Let n, η, ρ be positive integers. For given η-bit primes p_1, …, p_n, the CRT-ACD problem is: given N = ∏_{i=1}^n p_i and many samples of the form CRT_{(p_i)}(r_i) with r_i ← Z ∩ (−2^ρ, 2^ρ), find p_i for all i.
While the CRT-ACD problem is believed to be hard for proper parameters, Cheon et al. [24] described an analysis of the CRT-ACD problem that runs in time polynomial in n, η, ρ when an auxiliary input CRT_{(p_i)}(N/p_i) is given. In this Section, we define the CRT-ACD problem with an auxiliary input (CRT-ACDwAI) by importing Definition 1, and introduce a result of Cheon et al.'s research.
Definition 2 (CRT-ACDwAI) Let n, η, ρ be positive integers. For given η-bit primes p_1, …, p_n, define the distribution D_{χ_ρ,η,n}(p_1, …, p_n) = { CRT_{(p_i)}(r_i) : r_i ← Z ∩ (−2^ρ, 2^ρ) }. The CRT-ACDwAI problem is: given many samples from D_{χ_ρ,η,n}(p_1, …, p_n), the integer N = ∏_{i=1}^n p_i, and the auxiliary input P̂ = CRT_{(p_i)}(N/p_i), find p_i for all i. We state a useful lemma and a result described in [24] below.

Lemma 3 ([22], Section 3.1) For a given CRT-ACD sample b = CRT_{(p_i)}(r_i) and the auxiliary input P̂ = CRT_{(p_i)}(p̂_i), where p̂_i = N/p_i, if |∑_{i=1}^n r_i · p̂_i| < N/2, then [P̂ · b]_N = [∑_{i=1}^n r_i · p̂_i]_N = ∑_{i=1}^n r_i · p̂_i.
Proof: The first equality follows from the definition of the Chinese remainder theorem. To show that the second equality is correct, consider the equation modulo p_i for each i. Then the left-hand side is r_i · p̂_i, and so is the right-hand side; moreover, the absolute value of ∑_{i=1}^n r_i · p̂_i is less than N/2 by assumption. Hence, by the uniqueness of CRT, the second equality holds. Lemma 3 implies that the product of a CRT-ACD instance and an auxiliary input is small compared with the integer N. Simultaneously, the product is an integer equation between secret elements.
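The statement of Lemma 3 is easy to check numerically on toy parameters (the primes and residues below are illustrative choices of ours):

```python
from math import prod

# Numeric check of Lemma 3: for b = CRT_{(p_i)}(r_i) and the auxiliary
# input bP = CRT_{(p_i)}(N/p_i), the centred residue [bP * b]_N equals
# sum_i r_i * (N/p_i), which is far smaller than N.
primes = [10007, 10009, 10037]
rs = [3, -6, 9]                    # small CRT residues
N = prod(primes)
phat = [N // p for p in primes]    # \hat{p}_i = N/p_i

def centered_mod(t, m):
    r = t % m
    return r - m if r > m // 2 else r

def crt(res):
    return centered_mod(sum(r * h * pow(h, -1, p)
                            for r, h, p in zip(res, phat, primes)), N)

b = crt(rs)
bP = crt(phat)                     # auxiliary input CRT_{(p_i)}(N/p_i)

lhs = centered_mod(bP * b, N)
rhs = sum(r * h for r, h in zip(rs, phat))
assert lhs == rhs and abs(rhs) < N // 2
```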
Even when the auxiliary input P̂ is replaced by an integer of the form ∑_{i=1}^n a_i · p̂_i with small coefficients a_i, Cheon et al.'s algorithm can still be applied. Therefore, throughout this paper, we generally regard the auxiliary input P̂ in CRT-ACDwAI as an integer of the form ∑_{i=1}^n a_i · p̂_i with small a_i.

| EXTENSION OF THE SDA ALGORITHM FOR ACD VARIANTS
In this Section, we introduce an approach to solve the CRT-ACD problem by using the SDA algorithm. To achieve our goal, we recover two auxiliary inputs of the CRT-ACDwAI problem from the CRT-ACD instances. Then, by applying the result of Section 2.2, CRT-ACD is solved.

We note here that any element A ∈ Z_N can be written as A = ∑_{j=1}^n a_j · p̂_j for some integers a_j, because the integers p̂_j = N/p_j have no common factor. Multiplying A by the given CRT-ACD instances b_i = CRT_{(p_j)}(r_{j,i}), we obtain elements of the form [∑_{j=1}^n r_{j,i} · a_j · p̂_j]_N. The main idea for recovering an auxiliary input is that if A is an auxiliary input, then by Lemma 3, [A · b_i]_N is equal to ∑_{j=1}^n r_{j,i} · a_j · p̂_j, which is relatively small compared with N for each i.
To exploit this observation, we now consider the column lattice L generated by the (k + 1) × (k + 1) basis matrix whose first column is (1, b_1, …, b_k)^T and whose remaining columns are N · e_i for 2 ≤ i ≤ k + 1.
Our main observation is then that (A, [A · b_1]_N, …, [A · b_k]_N)^T is a short vector of L when A is an auxiliary input. Hence, by finding a short vector of the lattice L, one may recover an auxiliary input. More precisely, we obtain the following result. Proof: Assume that k is greater than n, and let c = (c_0, c_1, …, c_k)^T be a short vector in L. Then c_0 can be written as d = ∑_{j=1}^n α_j · p̂_j for some integers α_j, and c_i = [d · b_i]_N for 1 ≤ i ≤ k. Discarding the first entry of c, we define the vector c̃ = (c_1, …, c_k). The vector c̃ then decomposes as c̃ = a · P̂ · R (mod N), where a = (α_1, …, α_n), P̂ = diag(p̂_1, …, p̂_n), and R = (r_{j,i}) ∈ M_{n×k}(Z).
For the matrix R and k ≥ n, there exists a right inverse R* ∈ M_{k×n}(Z) such that R · R* = I_n holds with overwhelming probability, where I_n is the n × n identity matrix. Thus, from c̃ · R* ≡ a · P̂ (mod N), we can bound the sizes of the [α_i · p̂_i]_N as |[α_i · p̂_i]_N| ≤ ∥c̃∥ · ∥R*∥_∞.
Our goal is to make ∥c̃∥ · ∥R*∥_∞ less than 2^{nη−n−2ρ−log n−1}, which implies that |[α_i · p̂_i]_N| ≤ 2^{nη−n−2ρ−log n−1} for all i. This condition establishes that [d]_N = [∑_{j=1}^n α_j · p̂_j]_N = ∑_{j=1}^n [α_j]_{p_j} · p̂_j holds, and d would be an auxiliary input for the CRT-ACD. Therefore, c_0 = [d]_N is exactly ∑_{j=1}^n [α_j]_{p_j} · p̂_j with |[α_j]_{p_j}| ≤ 2^{nη−n−2ρ−log n−1}/p̂_j, and we can regard c_0 as an auxiliary input of CRT-ACD.
Size of ∥R*∥_∞: To estimate ∥R*∥_∞, we describe how to build R*. Let R̃_i denote the matrix R without its i-th row.
We next define the lattice L_i = {v ∈ Z^k : R̃_i · v = 0}. Its rank is k − (n − 1), and its determinant is less than (√k · 2^ρ)^{n−1} by Hadamard's inequality. Then, by the Gaussian heuristic, there exist two vectors w_i, w'_i ∈ L_i such that ∥w_i∥_∞, ∥w'_i∥_∞ ≤ (√k · 2^ρ)^{(n−1)/(k−n+1)}. We now consider the two integers ⟨R_i, w_i⟩ and ⟨R_i, w'_i⟩, where R_i is the i-th row of R. We expect the two integers to be relatively prime because L_i is independent of the vector R_i. Because the size of each entry of R_i is less than 2^ρ, it is evident that both integers have an upper bound of n · (√k · 2^ρ)^{(n−1)/(k−n+1)} · 2^ρ. This size bound implies, by Bézout's identity, that there exist two integers J_i and J'_i of at most that size such that J_i · ⟨R_i, w_i⟩ + J'_i · ⟨R_i, w'_i⟩ = 1. We now define v_i as J_i · w_i + J'_i · w'_i. By the linear homomorphic property, the equation R · v_i = e_i holds. It implies that R* = (v_1, …, v_n) is a right inverse, with ∥v_i∥_∞ ≤ 2n · (√k · 2^ρ)^{2(n−1)/(k−n+1)} · 2^ρ, which bounds ∥R*∥_∞ accordingly. Size of ∥c∥: Because (p̂_1, r_{1,1} · p̂_1, …, r_{1,k} · p̂_1)^T is in L, λ_1(L) does not exceed the size of this vector. Therefore, we at least establish that λ_1(L) ≤ √(k+1) · 2^ρ · p̂_1 ≤ √(k+1) · 2^{ρ+(n−1)η}.
Taking c as the shortest vector of the LLL algorithm output on L, we can bound ∥c∥ as described in Section 2.1. In other words, the size of the vector c is less than 2^{k/2} · λ_1(L) ≤ 2^{k/2} · √(k+1) · 2^{ρ+(n−1)η}. Thus, the upper bound of ∥c̃∥ · ∥R*∥_∞ is computed by multiplying this bound by the bound on ∥R*∥_∞ above.
To guarantee our goal, the inequality ∥c̃∥ · ∥R*∥_∞ < 2^{nη−n−2ρ−log n−1} is required. This condition can then be rewritten as an asymptotic condition on n, η, ρ, and k. Therefore, when this condition holds, we can regard c_0 as an auxiliary input of CRT-ACD. This completes the proof.
Remark: Formally, our algorithm does not reduce CRT-ACD to CRT-ACDwAI, since we employ 'two' auxiliary inputs even though the original CRT-ACDwAI problem only provides a single auxiliary input. However, our algorithm is almost the same as the algorithm for solving CRT-ACDwAI, except that it requires 'two' auxiliary inputs. Thus, in this paper, we simply say that CRT-ACD can be reduced to CRT-ACDwAI.

Remark: In the event that the BKZ algorithm with block size β is exploited instead of the LLL algorithm, we can obtain a lattice point c such that ∥c∥ ≤ 4 · β^{k/(β−1)} · λ_1(L). As a result, if the corresponding bound on ∥c̃∥ · ∥R*∥_∞ is less than 2^{nη−n−2ρ−log n−1}, we can ensure that c_0 is an auxiliary input of CRT-ACD. This condition can be written as n · (2 log β/(β − 1) + 1) = O(η).

Now, we introduce our algorithm to solve a CRT-ACD problem using two auxiliary inputs. This process is almost the same as the algorithm in Cheon et al.'s paper [22]. First, by applying the above algorithm twice to different CRT-ACD instances, we assume that two auxiliary inputs d = ∑_{j=1}^n α_j · p̂_j and d' = ∑_{j=1}^n α'_j · p̂_j are given, where p̂_j = N/p_j. By Lemma 3, the equations [d · b_i]_N = ∑_{j=1}^n r_{j,i} · α_j · p̂_j and [d' · b_i]_N = ∑_{j=1}^n r_{j,i} · α'_j · p̂_j hold,
where the b_i's are CRT-ACD instances. For n CRT-ACD samples {b_j = CRT_{(p_k)}(r_{k,j})}_{1≤j≤n}, we denote by w_{i,j} and w'_{i,j} the quantities [b_i · b_j · d]_N and [b_i · b_j · d']_N, respectively. Then we obtain the matrix equations W = R^T · diag(α_1 · p̂_1, …, α_n · p̂_n) · R and W' = R^T · diag(α'_1 · p̂_1, …, α'_n · p̂_n) · R, where the columns R_i of R are of the form (r_{1,i}, r_{2,i}, …, r_{n,i}). By ranging over 1 ≤ i, j ≤ n, we can construct the two matrices W = (w_{i,j}) and W' = (w'_{i,j}) ∈ M_{n×n}(Z). By computing all eigenvalues of the matrix Y = W · (W')^{−1} ∈ M_{n×n}(Q), we can recover α_j/α'_j for each j in time polynomial in η and n. More precisely, the overall complexity for computing the eigenvalues is Õ(n^{2+ω} · η), where ω is a constant less than 2.38. Once we obtain the ratios, we can also get α_j/g_j and α'_j/g_j, where g_j is the greatest common divisor of α_j and α'_j. In contrast, from the setting of d and d', we know that d ≡ α_j · p̂_j and d' ≡ α'_j · p̂_j (mod p_j) for all j. This implies that d · α'_j/g_j − d' · α_j/g_j ≡ 0 (mod p_j). Thus, by computing gcd(N, d · α'_j/g_j − d' · α_j/g_j), we can find p_j for each j.
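The final gcd step can be illustrated with toy parameters (the primes and the coefficients α_j, α'_j below are our own illustrative choices; the lattice step that would produce d and d' is skipped and the auxiliary inputs are constructed directly):

```python
from math import gcd, prod

# Given two auxiliary inputs d = sum_j alpha_j*(N/p_j) and
# d' = sum_j alpha'_j*(N/p_j), the quantity gcd(N, d*alpha'_j - d'*alpha_j)
# exposes the factor p_j, since d*alpha'_j - d'*alpha_j vanishes mod p_j.
primes = [10007, 10009, 10037]
N = prod(primes)
phat = [N // p for p in primes]

alphas  = [3, 5, 7]
alphas2 = [4, 11, 2]
d  = sum(a * h for a, h in zip(alphas,  phat))
d2 = sum(a * h for a, h in zip(alphas2, phat))

for j, p in enumerate(primes):
    g = gcd(N, d * alphas2[j] - d2 * alphas[j])
    assert g % p == 0 and g != N   # non-trivial multiple of p_j dividing N
```

Here the ratios α_j/α'_j are pairwise distinct, so each gcd yields exactly one prime factor.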
We note that if there is an index k such that α_j/α'_j = α_k/α'_k, then gcd(N, d · α'_j/g_j − d' · α_j/g_j) outputs a multiple of p_j · p_k. Because d and d' are distinct integers, not all eigenvalues can coincide. This means that if n = 2, computing gcd(N, d · α'_j/g_j − d' · α_j/g_j) outputs exactly p_j. In the general case, we therefore obtain at least a non-trivial factor of N. This allows us to reduce the CRT-ACD problem to a CRT-ACD problem with fewer factors than n. In other words, by repeating the above algorithm on the reduced problem, we can obtain all prime factors of N. In summary, we have the following results.

| Scaled CRT-ACD problem
As an extension of the previous result, we introduce a variant of the CRT-ACD problem and its analysis. First, we provide a precise definition of the scaled CRT-ACD (SCRT-ACD) problem. Definition 7 (scaled CRT-ACD) Let n, η, ρ be positive integers. For given η-bit primes p_1, …, p_n, k + 1 modified CRT-ACD instances b_i = [c · CRT_{(p_j)}(r_{j,i})]_N for 0 ≤ i ≤ k are given, where c ← Z_N. The scaled CRT-ACD problem is defined as follows: given such modified samples of CRT-ACD and N = ∏_{i=1}^n p_i, find p_i for all i. Because the size of c is unknown, the algorithm described in Section 3 is not directly applicable to the given modified instances. Before applying the previous algorithm, we compute ratios between scaled CRT-ACD instances. In other words, we obtain the new quantities b'_i = [b_i · b_0^{−1}]_N for 1 ≤ i ≤ k, which satisfy b'_i ≡ r_{j,i} · r_{j,0}^{−1} (mod p_j) and no longer involve c. For the new samples b'_i, we consider a new auxiliary input d of the form ∑_{j=1}^n α_j · r_{j,0}² · p̂_j, where p̂_j = N/p_j. Suppose the sizes of the r_{j,i} and α_j are sufficiently small. Then, by an argument similar to the proof of Lemma 3, [d · b'_i]_N = ∑_{j=1}^n α_j · r_{j,0} · r_{j,i} · p̂_j holds.
More precisely, the above equation holds under the condition |∑_{j=1}^n α_j · r_{j,0} · r_{j,i} · p̂_j| < N/2. From this observation, we consider a new lattice L' generated by the (k + 1) × (k + 1) matrix whose first column is (1, b'_1, …, b'_k)^T and whose remaining columns are N · e_i for 2 ≤ i ≤ k + 1.
Similar to the analysis in Section 3, the first entry of a short vector of L' becomes a new auxiliary input. As mentioned in Section 3, optimising the number of samples k did not improve the results significantly; therefore, for convenience of computation, k is fixed to 2n here. Let c' = (c'_0, c'_1, …, c'_{2n})^T be a short vector of L', and let c̃' = (c'_1, …, c'_{2n}). Then we can write c̃' = a · P̂ · R' (mod N), where a = (α_1, …, α_n), P̂ = diag(p̂_1, …, p̂_n), and R' = (r_{j,0} · r_{j,i}) ∈ M_{n×2n}(Z).
Similar to Section 3, there exists a right inverse (R')* ∈ M_{2n×n}(Z) satisfying R' · (R')* = I_n. Then, |[α_i · p̂_i]_N| ≤ ∥c̃'∥ · ∥(R')*∥_∞ holds.
In short, if ∥c̃'∥ · ∥(R')*∥_∞ is bounded by 2^{nη−n−2ρ−log n−1}, the first entry d is a new auxiliary input.
Conversely, we already know that (r_{1,0}² · p̂_1, r_{1,0} · r_{1,1} · p̂_1, …, r_{1,0} · r_{1,2n} · p̂_1)^T ∈ L'. Therefore, λ_1(L') ≤ √(2n+1) · 2^{(n−1)η+2ρ}. As mentioned above, the vector c' is the shortest output vector of the LLL algorithm. Therefore, the size of c' is bounded by ∥c'∥ ≤ 2^n · λ_1(L') ≤ 2^n · √(2n+1) · 2^{(n−1)η+2ρ}. Combining the above inequalities, we can bound ∥c̃'∥ · ∥(R')*∥_∞.
Therefore, the condition for finding the new auxiliary input can be concisely written as 2n ≤ η − 10ρ − 4 log n − 3. Hence, we have the following result.

Lemma 8
Let n, η, and ρ be parameters of a scaled CRT-ACD instance. When O(n) scaled CRT-ACD instances are given, we can find an auxiliary input for the SCRT-ACD instances under the asymptotic condition 2n ≤ η − 10ρ − 4 log n − 3 in T_L(n, n · η) time by using the LLL algorithm.
Suppose we now have two auxiliary inputs d = ∑_{j=1}^n α_j · r_{j,0}² · p̂_j and d' = ∑_{j=1}^n α'_j · r_{j,0}² · p̂_j. We can check that d · (d')^{−1} ≡ α_j · (α'_j)^{−1} (mod p_j). With the same algorithm as in Section 3, we can compute α_j and α'_j for all j in time polynomial in η, n, and ρ. From the relation d · (d')^{−1} ≡ α_j · (α'_j)^{−1} (mod p_j), we can find p_j for each j by computing gcd(N, d · α'_j − d' · α_j), and we obtain the following theorem. Remark: If we use the BKZ algorithm with block size β instead of the LLL algorithm for the scaled CRT-ACD, the upper bound on the vector c' from the lattice L' is ∥c'∥ ≤ 4 · β^{2n/(β−1)} · λ_1(L'). Therefore, the required condition changes to n · (2 log β/(β − 1) + 1) = O(η).
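The cancellation of the unknown constant c by taking ratios modulo N can be checked on toy parameters (all values below are illustrative choices of ours):

```python
from math import prod

# Toy check that dividing scaled instances cancels the unknown constant c:
# for b_i = [c * CRT_{(p_j)}(r_{j,i})]_N, the ratio b'_1 = [b_1 * b_0^{-1}]_N
# satisfies b'_1 ≡ r_{j,1} * r_{j,0}^{-1} (mod p_j), independent of c.
primes = [10007, 10009, 10037]
N = prod(primes)
phat = [N // p for p in primes]

def crt(res):
    return sum(r * h * pow(h, -1, p) for r, h, p in zip(res, phat, primes)) % N

c = 123456789                      # the unknown scaling constant
r0 = [3, 5, 7]                     # residues of the instance b_0
r1 = [2, -4, 6]                    # residues of the instance b_1
b0 = (c * crt(r0)) % N
b1 = (c * crt(r1)) % N

b1_ratio = (b1 * pow(b0, -1, N)) % N
for p, s0, s1 in zip(primes, r0, r1):
    assert b1_ratio % p == (s1 * pow(s0, -1, p)) % p
```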

| Scaled CCK-ACD problem
In the previous Section, we gave an algorithm to solve the scaled CRT-ACD problem. Similarly, we can define a scaled CCK-ACD problem by multiplying CCK-ACD instances by an unknown constant. In this Section, we provide an algorithm to solve the scaled CCK-ACD problem. Before extending the method, we give a definition of the scaled CCK-ACD problem.
Definition (scaled CCK-ACD) Let n, η, η_0, ρ be positive integers, and let N = p_0 · ∏_{i=1}^n p_i for an η_0-bit prime p_0 and η-bit primes p_1, …, p_n. The instances are of the form [c · b_i]_N with b_i = CRT_{(p_j)}(r_{j,i}) ← D_{χ_ρ,η,η_0,n}(p_0, …, p_n), where c ← Z_N. The scaled CCK-ACD problem is given as follows: given k scaled CCK-ACD samples and N, find p_i for all i.
Intuitively, except for the factor p_0, scaled CCK-ACD coincides with scaled CRT-ACD, and [b_i]_{p_0} takes an arbitrary large value in Z_{p_0}.
Because the size of c is unknown, the algorithm in [21] is not directly applicable to the given modified samples. Thus, we apply the analogous technique of Section 3.1. In other words, we divide modulo N and obtain the new instances b'_i = [b_i · b_0^{−1}]_N, for which b'_i ≡ r_{j,i} · r_{j,0}^{−1} (mod p_j) holds for all 0 ≤ j ≤ n.
To apply the method of [21] analogously, we define a new auxiliary input d of the form ∑_{j=1}^n α_j · r_{j,0}² · p̂_j for scaled CCK-ACD, where p̂_j = N/p_j. We note here that the auxiliary input has no p̂_0 term. Similar to the analysis in Section 3.1, if the sizes of the r_{j,i} and α_j are sufficiently small, we can guarantee that [d · b'_i]_N = ∑_{j=1}^n α_j · r_{j,0} · r_{j,i} · p̂_j holds. Because the auxiliary input has no p̂_0 term and p_0 divides every p̂_j for j ≥ 1, the new auxiliary input is a multiple of p_0. This implies that p_0 can be recovered by computing the greatest common divisor of the auxiliary input and the modulus N. Once the integer N/p_0 is recovered, scaled CCK-ACD evidently reduces to scaled CRT-ACD, and we can apply the algorithm of Section 3.1. In short, our main goal is to recover the auxiliary input in order to recover p_0.

We now consider a lattice L'' generated by a basis matrix B analogous to that of Section 3.1, built from the instances b'_i. Because (r_{1,0}² · p̂_1, r_{1,0} · r_{1,1} · p̂_1, …, r_{1,0} · r_{1,k} · p̂_1)^T is in the lattice L'', λ_1(L'') is smaller than 2^{(n−1)η+η_0+2ρ}. For λ_2(L''), we rely on the Gaussian heuristic, i.e. λ_2(L'') ≈ √(k+1) · N^{k/(k+1)}. Hence, we can expect that the first component of the shortest output vector c produced by the LLL algorithm on the matrix B is a multiple of p_0 as long as 2^{(k+1)/2} · λ_1(L'') is less than λ_2(L''). Therefore, we require 2^{(k+1)/2} · 2^{(n−1)η+η_0+2ρ} < √(k+1) · N^{k/(k+1)}. Taking the logarithm of both sides and writing γ = log N, the inequality can be asymptotically rearranged as η − 2ρ ≥ γ/(k+1) + (k+1)/2 ≥ √(2γ), where the second inequality comes from the arithmetic–geometric mean inequality with (k+1)² = 2γ. Hence, the following result holds.
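The key divisibility fact behind this reduction, namely that an auxiliary input without a p̂_0 term is automatically a multiple of p_0, can be checked directly (toy parameters and illustrative values of ours; the lattice step that would actually produce d is skipped):

```python
from math import gcd, prod

# Toy check: the auxiliary input d = sum_{j>=1} alpha_j * r_{j,0}^2 * (N/p_j)
# contains no N/p_0 term, so every summand is divisible by p_0 and hence
# gcd(d, N) reveals (a multiple of) p_0.
p0 = 1048583                       # the extra-large prime factor (toy size)
primes = [10007, 10009, 10037]     # p_1, ..., p_n
N = p0 * prod(primes)

alphas = [3, 5, 7]
r0 = [2, -4, 6]                    # residues r_{j,0}
d = sum(a * r * r * (N // p) for a, r, p in zip(alphas, r0, primes))

assert d % p0 == 0                 # each N/p_j, j >= 1, is divisible by p_0
assert gcd(d, N) % p0 == 0 and gcd(d, N) != N
```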