### Introduction

Many journals, including the *Korean Journal of Anesthesiology*, have been striving to identify and reduce statistical errors in medical papers [2,3,4,5]. As a result, a wide array of statistical errors has been found in many papers, which has further motivated the editors of each journal to enhance quality by developing checklists or guidelines for authors and reviewers [6,7,8,9]. One of the most common statistical errors found in journals is the application of parametric statistical techniques to nonparametric data [4,5]. This is presumably because medical researchers have had relatively few opportunities to use nonparametric techniques: they have been trained mostly in parametric statistics, and many statistics software packages strongly support parametric methods. The present paper therefore seeks to improve the understanding of nonparametric statistical analysis by presenting actual cases in which nonparametric techniques are used, cases that have rarely been introduced in the past.

### The History of Nonparametric Statistical Analysis

### The Basic Principle of Nonparametric Statistical Analysis

### Advantages and Disadvantages of Nonparametric Statistical Analysis

### Types of Nonparametric Statistical Analyses

### Median test for one sample: the sign test and Wilcoxon's signed rank test

### Sign test

The sign test concerns the median θ₀ of a population, and it involves testing the null hypothesis H₀: θ = θ₀. If an observed value (Xᵢ) is greater than the reference value (θ₀), it is marked +, and it is given a − sign if it is smaller than the reference value, after which the number of + values is counted. If an observed value in the sample is equal to the reference value (θ₀), that value is eliminated from the sample, and the sample size is reduced accordingly before proceeding with the sign test. The number of data points given the + sign is denoted B and is referred to as the sign statistic. If the null hypothesis is true, the numbers of + and − signs are expected to be equal. The sign test ignores the actual values of the data and uses only the + or − signs; it is therefore useful when the values themselves are difficult to measure.
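As a minimal sketch of the mechanics just described (the data below are hypothetical, not taken from the paper's tables), the sign statistic B and an exact binomial p-value can be computed by hand:

```python
from math import comb

def sign_test(data, theta0):
    """One-sample sign test against a hypothesized median theta0.

    Values equal to theta0 are dropped and the sample size reduced,
    as described above. Returns the sign statistic B (number of +
    signs), the effective sample size n, and an exact two-sided
    binomial p-value.
    """
    diffs = [x - theta0 for x in data if x != theta0]
    n = len(diffs)
    b = sum(1 for d in diffs if d > 0)  # sign statistic B
    # Under H0, B ~ Binomial(n, 1/2); double the smaller tail.
    k = min(b, n - b)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    p = min(1.0, 2 * tail)
    return b, n, p

# Hypothetical sample: is the median 50?
data = [35, 40, 48, 50, 60, 65, 70, 75, 80]
b, n, p = sign_test(data, 50)
```

Note that the observation equal to 50 is discarded, so the test is run on the remaining eight values.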

### Wilcoxon's signed rank test

The sign test considers only whether each observed value lies above or below θ₀. In contrast, Wilcoxon's signed rank test not only compares the observed values with θ₀ but also takes their relative magnitudes into account, thus mitigating this limitation of the sign test. Because it reduces the loss of information that arises from using only signs, Wilcoxon's signed rank test has greater statistical power. As in the sign test, an observed value equal to the reference value θ₀ is eliminated from the sample and the sample size is adjusted accordingly. Here, given a sample with five data points (Xᵢ), as shown in Table 2, we test whether the median (θ₀) of this sample is 50.

First, subtract θ₀ from each data point (Rᵢ = Xᵢ − θ₀), take the absolute values, and rank them in increasing order; the resulting ranks are the values in parentheses in Table 2. In Wilcoxon's signed rank test, only the ranks belonging to the positive differences are then summed to give the test statistic W⁺.
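The calculation of W⁺ can be sketched as follows; the five data points here are hypothetical, since the values of Table 2 are not reproduced in this text:

```python
def wilcoxon_w_plus(data, theta0):
    """One-sample Wilcoxon signed rank statistic W+.

    Subtract theta0 from each point (R_i = X_i - theta0), drop zeros,
    rank the absolute values in increasing order (average ranks for
    ties), and sum the ranks belonging to positive R_i.
    """
    r = [x - theta0 for x in data if x != theta0]
    abs_sorted = sorted(abs(v) for v in r)

    def rank(v):
        # average rank of |v| among all absolute values (handles ties)
        idxs = [i + 1 for i, a in enumerate(abs_sorted) if a == abs(v)]
        return sum(idxs) / len(idxs)

    return sum(rank(v) for v in r if v > 0)

# Hypothetical five-point sample, testing whether the median is 50
sample = [37, 49, 55, 57, 68]
w_plus = wilcoxon_w_plus(sample, 50)
```

For this sample the positive differences 5, 7, and 18 receive ranks 2, 3, and 5, so W⁺ = 10.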

### Comparison of a paired sample: sign test and Wilcoxon's signed rank test

### Sign test

Whereas the one-sample sign test compares each observation with a reference value (θ₀), the sign test for a paired sample compares the scores before and after treatment; everything else is identical to the one-sample procedure. The sign test does not use the ranks of the scores but only counts the + and − signs, so it is rarely affected by extreme outliers. At the same time, it cannot use all of the information in the data: it conveys only the direction of the difference between the two paired measurements, not its size.
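Because the paired sign test simply applies the one-sample machinery to the within-pair differences, it can be sketched in a few lines (the before/after scores here are hypothetical):

```python
def paired_sign_statistic(pre, post):
    """Sign test for a paired sample: only the direction of each
    within-pair change is used. Ties (no change) are dropped.
    Returns (number of + signs, effective sample size)."""
    diffs = [a - b for a, b in zip(pre, post) if a != b]
    return sum(1 for d in diffs if d > 0), len(diffs)

# Hypothetical before- and after-treatment scores for six subjects
pre = [12, 15, 9, 20, 14, 14]
post = [10, 18, 9, 15, 11, 13]
plus, n = paired_sign_statistic(pre, post)
```

The tied pair (9, 9) is dropped, leaving four + signs out of five usable pairs; the p-value then follows from the same Binomial(n, ½) reference distribution as in the one-sample case.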

### Wilcoxon's signed rank test

The one-sample Wilcoxon signed rank test compares each observation with a reference value (θ₀), while the paired test compares the pre- and post-treatment scores. In the example with five paired data points (Xᵢⱼ), as shown in Table 3, which lists scores before and after education, X₁ⱼ refers to the pre-score of student j and X₂ⱼ to the post-score of student j. First, we calculate the change in score for each student (Rⱼ = X₁ⱼ − X₂ⱼ). When the Rⱼ are ranked by their absolute values, the resulting ranks are the values within the parentheses in Table 3. Wilcoxon's signed rank test is then conducted by summing the ranks of the positive differences, as in the one-sample test. If the null hypothesis is true, the sums of the positive and negative ranks should be nearly equal.
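The paired calculation can be sketched as follows, with hypothetical pre- and post-education scores standing in for the values of Table 3:

```python
def paired_wilcoxon_w_plus(x1, x2):
    """Paired Wilcoxon signed rank statistic: rank the |R_j| where
    R_j = X_1j - X_2j (zeros dropped, average ranks for ties) and
    sum the ranks of the positive differences."""
    r = [a - b for a, b in zip(x1, x2) if a != b]
    abs_sorted = sorted(abs(v) for v in r)

    def rank(v):
        idxs = [i + 1 for i, a in enumerate(abs_sorted) if a == abs(v)]
        return sum(idxs) / len(idxs)

    return sum(rank(v) for v in r if v > 0)

# Hypothetical pre-/post-education scores for five students
pre = [72, 65, 80, 58, 77]
post = [70, 72, 79, 66, 71]
w_plus = paired_wilcoxon_w_plus(pre, post)
```

Since W⁺ + W⁻ = n(n + 1)/2 = 15 here, W⁺ = 6 implies W⁻ = 9; under the null hypothesis the two sums would be close to each other.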

### Comparison of two independent samples: Wilcoxon's rank sum test, the Mann-Whitney test, and the Kolmogorov-Smirnov test

### Wilcoxon's rank sum test and Mann-Whitney test

The Mann-Whitney test compares all data xᵢ belonging to the X group with all data yᵢ belonging to the Y group and examines the probability of xᵢ being greater than yᵢ: P(xᵢ > yᵢ). The null hypothesis states that P(xᵢ > yᵢ) = P(xᵢ < yᵢ) = ½, while the alternative hypothesis states that P(xᵢ > yᵢ) ≠ ½. The process of the Mann-Whitney test is illustrated in Table 5. Although the Mann-Whitney test and Wilcoxon's rank sum test differ somewhat in their calculation processes, they are widely considered equivalent methods because they are based on the same statistic.

### Kolmogorov-Smirnov test (K-S test)

To perform the K-S test, the cumulative distributions of the two samples are first obtained (S_X, S_Y), and the greatest difference between the two cumulative distributions, Max|S_X − S_Y|, is determined. This maximum difference is the test statistic, and it is compared with a reference value to test the homogeneity of the two samples. The actual analysis process is described in Table 6.
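A minimal sketch of the K-S statistic, using hypothetical samples rather than the data of Table 6:

```python
def ks_statistic(xs, ys):
    """Two-sample Kolmogorov-Smirnov statistic: the largest absolute
    difference between the empirical cumulative distributions S_X and
    S_Y, evaluated at every observed value."""
    def ecdf(sample, t):
        # fraction of the sample at or below t
        return sum(1 for v in sample if v <= t) / len(sample)

    points = sorted(set(xs) | set(ys))
    return max(abs(ecdf(xs, t) - ecdf(ys, t)) for t in points)

# Hypothetical samples from two groups
a = [1.2, 2.1, 3.3, 4.0, 5.5]
b = [2.8, 3.9, 4.4, 6.1, 7.0]
d = ks_statistic(a, b)
```

Checking only the observed values suffices, because the empirical distribution functions are step functions that change nowhere else.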

### Comparison of k independent samples: the Kruskal-Wallis test and the Jonckheere test

### Kruskal-Wallis test

### Jonckheere test

The Jonckheere test is used when the treatment effects are expected to follow a specific order, that is, when the ordered alternative hypothesis H₂ is better suited to the research question than the general alternative hypothesis H₁:

H₀: [τ₁ = τ₂ = τ₃]

H₁: [τ₁, τ₂, τ₃ not all equal]

H₂: [τ₁ ≤ τ₂ ≤ τ₃, with at least one strict inequality]
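Because the ordered alternative accumulates Mann-Whitney-type counts over every ordered pair of groups, the Jonckheere-Terpstra statistic can be sketched as follows (the three groups are hypothetical):

```python
def jonckheere_statistic(groups):
    """Jonckheere-Terpstra statistic for k ordered samples.

    For every pair of groups (i, j) with i < j in the hypothesized
    order tau_1 <= tau_2 <= ..., count the pairs in which a group-i
    value is smaller than a group-j value (ties count 1/2), and sum
    these Mann-Whitney-type counts. Large values support the ordered
    alternative H2."""
    j_stat = 0.0
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            for x in groups[i]:
                for y in groups[j]:
                    if x < y:
                        j_stat += 1.0
                    elif x == y:
                        j_stat += 0.5
    return j_stat

# Hypothetical data for three treatments expected to increase in effect
g1 = [10, 12, 14]
g2 = [13, 15, 17]
g3 = [16, 18, 20]
jt = jonckheere_statistic([g1, g2, g3])
```

Here the maximum possible value is 3 × 9 = 27 cross-group pairs; the observed value of 25 is close to that maximum, as expected for data that follow the hypothesized increasing order.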