In data generated by “natural” processes, approximation by the normal (Gauss) distribution is feasible when the sample size is large enough, i.e., n ≥ 30(31, 33, 34). For smaller numbers of patients, the *t*-distribution replaces the normal distribution(34). In both cases, SDD and sample size can be determined by the calculation rules of the normal distribution as follows(31, 34–37):

*General equation for the sample size (n).*

Effects were measured as differences (d) between 2 groups. For example, d is the difference between the mean of the intervention group and the mean of the control group. In large samples (n ≥ 30), d can be considered normally distributed with mean μ_{d} and standard error SE(d). Specifically, this holds when the scores of both groups are normally distributed, because, by the rules of calculation for normally distributed variables, the difference of 2 normally distributed variables is also normally distributed. The null hypothesis is that there is no effect: μ_{d} = 0. The alternative hypothesis(31, 34) is that
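μ_{d} ≠ 0, i.e., the intervention has a real effect. The displayed formula [0] is not legible here; reconstructed from the definitions in the surrounding text, it is the standard condition under which an effect of mean μ_{d} is detected with type I error α and power 1 – β:

μ_{d} = (z_{α} + z_{β}) · SE(d)  [0]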

The z-values come from the standard normal distribution (mean = 0, standard deviation = 1), where α = two-sided type I error (typically α = 0.05) and β = one-sided type II error; thus 1 – β = power (typically power = 0.8). In the case of n < 30, the z-values must be replaced by *t*-values from the *t*-distribution(34).

When comparing the difference (d) of 2 (effect) variables, the mean of the difference equals the difference of the 2 means (linearity of the mean). By the rules of calculation with normally distributed variables, the variance of the difference is the sum of the variances of the 2 means: variance(d) = s^{2}/n_{1} + s^{2}/n_{2}, when both effect variables have the same (or a comparable) “a priori” standard deviation, SD, and n_{1}, n_{2} are the sample sizes of the variables. In paired followup data, or when the control and treatment groups have the same size, we can set n_{1} = n_{2} = n. Thus, SE(d) can be replaced by SE(d) = √(s^{2}/n + s^{2}/n) = √(2s^{2}/n) in formula [0], resulting in the general equation for the sample size:
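The displayed formula [1] is reconstructed here from the substitution just described, by setting Δ = (z_{α} + z_{β}) · √(2SD^{2}/n) and solving for n:

n = 2 · (z_{α} + z_{β})^{2} · SD^{2} / Δ^{2}  [1]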

where z_{α} and z_{β} are the values of the standard normal distribution (mean = 0, standard deviation = 1) at the probability of α or β, respectively

α = two-sided type I error

β = one-sided type II error (thus, 1 – β = power)

Δ = mean effect, i.e., the mean score of the intervention group (or followup score) minus the mean score of the control group (or baseline score); this equals the mean of the differences, μ_{d}

SD = standard deviation of the scores at baseline (a priori standard deviation)

In the case of followup studies, the same subjects form the control group (before the intervention) and the intervention group (after the intervention). Therefore, n is the total required sample size.

Conversely, given a sample size (n) and the a priori baseline standard deviation (SD), for example from a pilot study, the smallest statistically detectable difference (SDD = Δ) can be determined from formula [1]:
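Solving formula [1] for Δ gives (a reconstruction consistent with the definitions above):

SDD = Δ = (z_{α} + z_{β}) · SD · √(2/n)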

*Determination of n by ES.*

If we know the effect size (ES) from a pilot study, and we assume that the standard deviation in the control group of the main study is equal or comparable to the a priori standard deviation of the control group in the pilot study, then (SD / Δ) = 1 / ES by the definition of ES. From formula [1] it follows that
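Substituting (SD / Δ) = 1 / ES into formula [1] (reconstructed from the derivation above):

n = 2 · (z_{α} + z_{β})^{2} / ES^{2}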

*Determination of n by SRM.*

If we have paired observations and we know the variance of the differences, SD_{Δ}^{2} (from a pilot study), then we can replace the standard error of the mean difference by SE(d) = √(SD_{Δ}^{2}/n) in formula [0](32, 35):
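The substitution yields (reconstructed from formula [0] as described above):

Δ = (z_{α} + z_{β}) · √(SD_{Δ}^{2}/n), i.e., n = (z_{α} + z_{β})^{2} · SD_{Δ}^{2} / Δ^{2}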

Because SRM is equal to μ_{d} / SD_{Δ} (and μ_{d} = Δ), it follows from formula [0] that
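Replacing SD_{Δ} / Δ by 1 / SRM in the preceding equation gives (a reconstruction consistent with the definitions above):

n = (z_{α} + z_{β})^{2} / SRM^{2}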

For the most commonly used type I and II errors, the expression (z_{α} + z_{β})^{2} from the standard normal distribution can be replaced by
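(1.96 + 0.84)^{2} ≈ 7.85, often rounded to 8, since z_{α} = 1.96 for two-sided α = 0.05 and z_{β} = 0.84 for power = 0.8. This value can be verified with a short calculation (a sketch using Python's standard library; the pilot values SD = 10 and Δ = 5 are hypothetical, for illustration only):

```python
import math
from statistics import NormalDist  # standard normal quantile function

z = NormalDist()                    # mean = 0, standard deviation = 1
z_alpha = z.inv_cdf(1 - 0.05 / 2)   # two-sided alpha = 0.05 -> 1.96
z_beta = z.inv_cdf(0.8)             # power = 1 - beta = 0.8 -> 0.84
factor = (z_alpha + z_beta) ** 2    # (1.96 + 0.84)^2, approximately 7.85

# General sample-size equation [1], n = 2 * (z_alpha + z_beta)^2 * SD^2 / Delta^2,
# evaluated with hypothetical pilot values SD = 10 and Delta = 5:
n = 2 * factor * (10 / 5) ** 2

print(round(factor, 2))   # 7.85
print(math.ceil(n))       # 63, i.e., 63 subjects per group
```

Rounding (z_{α} + z_{β})^{2} up to 8 gives the familiar shortcut n ≈ 16 · (SD/Δ)^{2} for two equal groups, which slightly overestimates the required sample and is therefore conservative.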