" alt="Top of the page image, it serves as decoration.">

Statistical Tables

This appendix contains the thirteen statistical tables you will need to carry out sampling operations or to verify the results of a statistical test. Each table is cross-referenced from the sessions in which it is used.





The random number table is a statistical tool used to select random samples from a population. It consists of columns and rows filled with digits that are chosen randomly.

Generate a Random Number Table with Python

In Python, you can generate random numbers using the random module (refer to the online compiler Trinket, available since Session 1 of Block II), which offers a variety of functions for generating random numbers in different contexts.
Here is the procedure to follow to generate random numbers:

Import the random module

First, you need to import the random module:

import random
Generate a random integer

Use random.randint(a, b) to generate a random integer N such that a <= N <= b.

import random

# Generate a random integer between 1 and 10
random_number = random.randint(1, 10)
print(random_number)
Generate a random floating-point number

Use random.uniform(a, b) to generate a random floating-point number N such that a <= N <= b.

import random

# Generate a random floating-point number between 1 and 10
random_number = random.uniform(1, 10)
print(random_number)
Generate a random floating-point number between 0 and 1

Use random.random() to generate a random floating-point number between 0 and 1.

import random

# Generate a random floating-point number between 0 and 1
random_number = random.random()
print(random_number)
Generate a random number following a normal distribution

Use random.gauss(mu, sigma) to generate a random number following a normal distribution with mean mu and standard deviation sigma.

import random

# Generate a number following a normal distribution with mean 0 and standard deviation 1
random_number = random.gauss(0, 1)
print(random_number)
Select a random element from a list

Use random.choice(sequence) to select a random element from a sequence (such as a list or a string).

import random

# Example list
list_example = [1, 2, 3, 4, 5]

# Select a random element from the list
random_element = random.choice(list_example)
print(random_element)
Shuffle a list randomly

Use random.shuffle(sequence) to randomly shuffle the elements of a list.

import random

# Example list
list_example = [1, 2, 3, 4, 5]

# Shuffle the list
random.shuffle(list_example)
print(list_example)
Generate a sequence of random numbers

Use random.sample(population, k) to generate a list of k unique elements randomly chosen from a population.

import random

# Example population
population = list(range(1, 101))  # 1 to 100

# Generate a list of 10 unique elements randomly chosen
random_sample = random.sample(population, 10)
print(random_sample)
Random number table (columns: Number 1 to Number 6).
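Taken together, the functions above are enough to reproduce a random number table like the one used in this appendix. The sketch below is illustrative: the function name, the 5-digit format, and the seed value are our own choices, the seed being there only so the output is reproducible.

```python
import random

def random_number_table(rows, cols, digits=5, seed=None):
    """Build a table of random `digits`-digit strings (illustrative helper)."""
    rng = random.Random(seed)  # a seeded generator makes the table reproducible
    return [
        ["".join(str(rng.randint(0, 9)) for _ in range(digits)) for _ in range(cols)]
        for _ in range(rows)
    ]

table = random_number_table(rows=4, cols=6, seed=42)
for row in table:
    print(" ".join(row))
```

To draw a sample from a numbered population you can read successive digits from such a table, or simply call random.sample directly as shown above.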

The table of the cumulative distribution function of the standard normal distribution, also known as the standard normal table or the standard normal cumulative distribution table, is a statistical tool used to find probabilities associated with specific values of a random variable following a standard normal distribution.

Definition of the Standard Normal Distribution
The standard normal distribution is a normal distribution with a mean of 0 and a standard deviation of 1. Its cumulative distribution function is denoted by \( \Phi(z) \) and gives the probability that a random variable \( Z \) following this distribution is less than or equal to a value \( z \). Mathematically, it is written as:

\[ \Phi(z) = P(Z \leq z) \]

where \( Z \) follows a standard normal distribution, i.e., \( Z \sim \mathcal{N}(0,1) \).

Using the Table
The table of the cumulative distribution function of the standard normal distribution provides the values of \( \Phi(z) \) for different values of \( z \). Here is how to use it:

  • Find the value of \( z \): identify the value of \( z \) for which you want the cumulative probability.
  • Look up in the table: the row gives \( z \) to one decimal place, and the column supplies the second decimal (the table proceeds in steps of 0.01).
  • Read the probability: the value at the intersection of the row and column is \( \Phi(z) \), the probability that the random variable is less than or equal to \( z \).
  • To find the probability that the random variable is greater than a value \( z \), use complementarity:

    \[ P(Z > z) = 1 - \Phi(z) \]
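If you are working in Python rather than with the printed table, the standard library (Python 3.8+) can evaluate \( \Phi(z) \) directly; a minimal sketch using statistics.NormalDist:

```python
from statistics import NormalDist

phi = NormalDist(mu=0, sigma=1).cdf  # Φ(z), the standard normal CDF

print(round(phi(1.96), 4))      # matches the table entry 0.9750 at row 1.9, column 0.06
print(round(1 - phi(1.96), 4))  # P(Z > 1.96), by complementarity
```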

\(z\) 0.00 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09
0.0 0.5000 0.5040 0.5080 0.5120 0.5160 0.5199 0.5239 0.5279 0.5319 0.5359
0.1 0.5398 0.5438 0.5478 0.5517 0.5557 0.5596 0.5636 0.5675 0.5714 0.5753
0.2 0.5793 0.5832 0.5871 0.5910 0.5948 0.5987 0.6026 0.6064 0.6103 0.6141
0.3 0.6179 0.6217 0.6255 0.6293 0.6331 0.6368 0.6406 0.6443 0.6480 0.6517
0.4 0.6554 0.6591 0.6628 0.6664 0.6700 0.6736 0.6772 0.6808 0.6844 0.6879
0.5 0.6915 0.6950 0.6985 0.7019 0.7054 0.7088 0.7123 0.7157 0.7190 0.7224
0.6 0.7257 0.7291 0.7324 0.7357 0.7389 0.7422 0.7454 0.7486 0.7517 0.7549
0.7 0.7580 0.7611 0.7642 0.7673 0.7704 0.7734 0.7764 0.7794 0.7823 0.7852
0.8 0.7881 0.7910 0.7939 0.7967 0.7995 0.8023 0.8051 0.8078 0.8106 0.8133
0.9 0.8159 0.8186 0.8212 0.8238 0.8264 0.8289 0.8315 0.8340 0.8365 0.8389
1.0 0.8413 0.8438 0.8461 0.8485 0.8508 0.8531 0.8554 0.8577 0.8599 0.8621
1.1 0.8643 0.8665 0.8686 0.8708 0.8729 0.8749 0.8770 0.8790 0.8810 0.8830
1.2 0.8849 0.8869 0.8888 0.8907 0.8925 0.8944 0.8962 0.8980 0.8997 0.9015
1.3 0.9032 0.9049 0.9066 0.9082 0.9099 0.9115 0.9131 0.9147 0.9162 0.9177
1.4 0.9192 0.9207 0.9222 0.9236 0.9251 0.9265 0.9279 0.9292 0.9306 0.9319
1.5 0.9332 0.9345 0.9357 0.9370 0.9382 0.9394 0.9406 0.9418 0.9429 0.9441
1.6 0.9452 0.9463 0.9474 0.9484 0.9495 0.9505 0.9515 0.9525 0.9535 0.9545
1.7 0.9554 0.9564 0.9573 0.9582 0.9591 0.9599 0.9608 0.9616 0.9625 0.9633
1.8 0.9641 0.9649 0.9656 0.9664 0.9671 0.9678 0.9686 0.9693 0.9699 0.9706
1.9 0.9713 0.9719 0.9726 0.9732 0.9738 0.9744 0.9750 0.9756 0.9761 0.9767
2.0 0.9772 0.9778 0.9783 0.9788 0.9793 0.9798 0.9803 0.9808 0.9812 0.9817
2.1 0.9821 0.9826 0.9830 0.9834 0.9838 0.9842 0.9846 0.9850 0.9854 0.9857
2.2 0.9861 0.9864 0.9868 0.9871 0.9875 0.9878 0.9881 0.9884 0.9887 0.9890
2.3 0.9893 0.9896 0.9898 0.9901 0.9904 0.9906 0.9909 0.9911 0.9913 0.9916
2.4 0.9918 0.9920 0.9922 0.9925 0.9927 0.9929 0.9931 0.9932 0.9934 0.9936
2.5 0.9938 0.9940 0.9941 0.9943 0.9945 0.9946 0.9948 0.9949 0.9951 0.9952
2.6 0.9953 0.9955 0.9956 0.9957 0.9959 0.9960 0.9961 0.9962 0.9963 0.9964
2.7 0.9965 0.9966 0.9967 0.9968 0.9969 0.9970 0.9971 0.9972 0.9973 0.9974
2.8 0.9974 0.9975 0.9976 0.9977 0.9977 0.9978 0.9979 0.9979 0.9980 0.9981
2.9 0.9981 0.9982 0.9982 0.9983 0.9984 0.9984 0.9985 0.9985 0.9986 0.9986

The standard normal distribution (also called the standardized normal distribution) is a normal distribution with a mean of 0 and a standard deviation of 1. While the previous table gives its cumulative distribution function, the table below works in the opposite direction: for a chosen risk \(\alpha\), it provides the critical value \(z_\alpha\) such that \(P(|Z| > z_{\alpha}) = \alpha\).

These critical values are used constantly in statistical tests and confidence interval calculations; for example, \(z_{0.05} = 1.960\) is the familiar value for a 95% confidence interval. To read the table, combine the row (first decimal of \(\alpha\)) with the column (second decimal).
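In Python, these critical values can be recovered from the inverse CDF of the standard normal; the helper name z_alpha below is ours, and it treats the tabulated \(\alpha\) as two-sided:

```python
from statistics import NormalDist

def z_alpha(alpha):
    """Two-sided critical value: P(|Z| > z_alpha) = alpha."""
    return NormalDist().inv_cdf(1 - alpha / 2)

print(round(z_alpha(0.05), 3))  # ≈ 1.96, as in the table at α = 0.05
print(round(z_alpha(0.01), 3))  # ≈ 2.576, as in the table at α = 0.01
```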

α 0.00 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09
0.0 - 2.576 2.326 2.170 2.054 1.960 1.881 1.812 1.751 1.695
0.1 1.645 1.598 1.555 1.514 1.476 1.440 1.405 1.372 1.341 1.311
0.2 1.282 1.254 1.227 1.200 1.175 1.150 1.126 1.103 1.080 1.058
0.3 1.036 1.015 0.994 0.974 0.954 0.935 0.915 0.896 0.878 0.860
0.4 0.842 0.824 0.806 0.789 0.772 0.755 0.739 0.722 0.703 0.690
0.5 0.674 0.659 0.643 0.628 0.613 0.598 0.583 0.568 0.553 0.539
0.6 0.524 0.510 0.496 0.482 0.468 0.454 0.440 0.426 0.412 0.399
0.7 0.385 0.372 0.358 0.345 0.332 0.319 0.305 0.292 0.279 0.266
0.8 0.253 0.240 0.228 0.215 0.202 0.189 0.176 0.164 0.151 0.138
0.9 0.126 0.113 0.100 0.088 0.075 0.063 0.050 0.038 0.025 0.013

Student's t-distribution Table
The Student's t-distribution (or t-distribution) is used to estimate statistical parameters in the context of small samples when the population variance is unknown. The Student's t-distribution table provides the critical values of the t-distribution based on degrees of freedom and confidence level.

Using the Student's t-distribution Table
To use the Student's t-distribution table, follow these steps:

  • Determine the confidence level: for a 95% confidence level, use the \(\alpha = 0.05\) column (the tabulated \(\alpha\) is two-sided, leaving 0.025 in each tail of the distribution).
  • Calculate the degrees of freedom: Often, the degrees of freedom are equal to the sample size minus one (n - 1).
  • Find the critical value: Look up the critical value in the table corresponding to your degrees of freedom and confidence level.
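The lookup itself can be sketched as a plain dictionary built from an excerpt of the table (the three rows below are copied from it; T_TABLE and t_critical are hypothetical names):

```python
# Excerpt of the table: outer keys are degrees of freedom, inner keys two-sided α.
T_TABLE = {
    9:  {0.10: 1.833, 0.05: 2.262, 0.01: 3.250},
    10: {0.10: 1.812, 0.05: 2.228, 0.01: 3.169},
    20: {0.10: 1.725, 0.05: 2.086, 0.01: 2.845},
}

def t_critical(n, alpha):
    """Critical t for a sample of size n (degrees of freedom v = n - 1)."""
    return T_TABLE[n - 1][alpha]

# 95% confidence interval with a sample of size 10 → v = 9, α = 0.05
print(t_critical(10, 0.05))  # 2.262
```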
\(v\) \ \(\alpha\) 0.90 0.50 0.30 0.20 0.10 0.05 0.02 0.01 0.001
1 0.158 1.000 1.963 3.078 6.314 12.706 31.821 63.657 636.619
2 0.142 0.816 1.386 1.886 2.920 4.303 6.965 9.925 31.598
3 0.137 0.765 1.250 1.638 2.353 3.182 4.541 5.841 12.924
4 0.134 0.741 1.190 1.533 2.132 2.776 3.747 4.604 8.610
5 0.132 0.727 1.156 1.476 2.015 2.571 3.365 4.032 6.869
6 0.131 0.718 1.134 1.440 1.943 2.447 3.143 3.707 5.959
7 0.130 0.711 1.119 1.415 1.895 2.365 2.998 3.499 5.408
8 0.130 0.706 1.108 1.397 1.860 2.306 2.896 3.355 5.041
9 0.129 0.703 1.100 1.383 1.833 2.262 2.821 3.250 4.781
10 0.129 0.700 1.093 1.372 1.812 2.228 2.764 3.169 4.587
11 0.128 0.697 1.088 1.363 1.796 2.201 2.718 3.106 4.437
12 0.128 0.695 1.083 1.356 1.782 2.179 2.681 3.055 4.318
13 0.128 0.694 1.079 1.350 1.771 2.160 2.650 3.012 4.221
14 0.128 0.692 1.076 1.345 1.761 2.145 2.624 2.977 4.140
15 0.128 0.691 1.074 1.341 1.753 2.131 2.602 2.947 4.073
16 0.128 0.690 1.071 1.337 1.746 2.120 2.583 2.921 4.015
17 0.128 0.689 1.069 1.333 1.740 2.110 2.567 2.898 3.965
18 0.127 0.688 1.069 1.330 1.734 2.101 2.552 2.878 3.922
19 0.127 0.688 1.066 1.328 1.729 2.093 2.539 2.861 3.883
20 0.127 0.687 1.064 1.325 1.725 2.086 2.528 2.845 3.850
21 0.127 0.686 1.063 1.323 1.721 2.080 2.518 2.831 3.819
22 0.127 0.686 1.061 1.321 1.717 2.074 2.508 2.819 3.792
23 0.127 0.685 1.060 1.319 1.714 2.069 2.500 2.807 3.767
24 0.127 0.685 1.059 1.318 1.711 2.064 2.492 2.797 3.745
25 0.127 0.684 1.058 1.316 1.708 2.060 2.485 2.787 3.725
26 0.127 0.684 1.058 1.315 1.706 2.056 2.479 2.779 3.707
27 0.127 0.684 1.058 1.314 1.703 2.052 2.473 2.771 3.690
28 0.127 0.683 1.056 1.313 1.701 2.048 2.467 2.763 3.674
29 0.127 0.683 1.055 1.311 1.699 2.045 2.462 2.756 3.659
30 0.127 0.683 1.055 1.310 1.697 2.042 2.457 2.750 3.646
40 0.126 0.681 1.050 1.303 1.684 2.021 2.423 2.704 3.551
80 0.126 0.679 1.046 1.296 1.671 2.000 2.390 2.660 3.460
120 0.126 0.677 1.041 1.289 1.658 1.980 2.358 2.617 3.373
∞ 0.126 0.674 1.036 1.282 1.645 1.960 2.326 2.576 3.291
Note: When the degrees of freedom are infinite, it corresponds to the number \(z_\alpha\) from the standard normal distribution (see the previous table).

The Chi-Square Distribution (\(\chi^2\)), or Pearson's Chi-Square, is a statistical distribution primarily used for hypothesis testing. It is used to assess the goodness of fit between an observed distribution and a theoretical distribution, as well as to test the independence between two qualitative variables in a contingency table.

If \(Y\) is a random variable following the Chi-Square Distribution (\(\chi^2\)) with \(v\) degrees of freedom, the table provides for a chosen \(\alpha\) the number \(\chi_{\alpha}^{2}\) such that \(P(Y \geq \chi_{\alpha}^{2}) = \alpha\).

\(v\) \ \(\alpha\) 0.99 0.975 0.95 0.90 0.10 0.05 0.025 0.01 0.001
1 0.0002 0.001 0.004 0.016 2.71 3.84 5.02 6.63 10.83
2 0.02 0.05 0.10 0.21 4.61 5.99 7.38 9.21 13.82
3 0.12 0.22 0.35 0.58 6.25 7.81 9.35 11.34 16.27
4 0.30 0.48 0.71 1.06 7.78 9.49 11.14 13.28 18.47
5 0.55 0.83 1.15 1.61 9.24 11.07 12.83 15.09 20.52
6 0.87 1.24 1.64 2.20 10.64 12.59 14.45 16.81 22.46
7 1.24 1.69 2.17 2.83 12.02 14.07 16.01 18.47 24.32
8 1.65 2.18 2.73 3.49 13.36 15.51 17.53 20.09 26.13
9 2.09 2.70 3.33 4.17 14.68 16.92 19.02 21.67 27.88
10 2.56 3.25 3.94 4.87 15.99 18.31 20.48 23.21 29.59
11 3.05 3.82 4.57 5.58 17.27 19.67 21.92 24.72 31.26
12 3.57 4.40 5.23 6.30 18.55 21.03 23.34 26.22 32.91
13 4.11 5.01 5.89 7.04 19.81 22.36 24.74 27.69 34.53
14 4.66 5.63 6.57 7.79 21.06 23.68 26.12 29.14 36.12
15 5.23 6.26 7.26 8.55 22.31 25.00 27.49 30.58 37.70
16 5.81 6.91 7.96 9.31 23.54 26.30 28.84 32.00 39.25
17 6.41 7.56 8.67 10.08 24.77 27.59 30.19 33.41 40.79
18 7.01 8.23 9.39 10.86 25.99 28.87 31.53 34.80 42.31
19 7.63 8.91 10.12 11.65 27.20 30.14 32.85 36.19 43.82
20 8.26 9.59 10.85 12.44 28.41 31.41 34.17 37.57 45.32
21 8.90 10.28 11.59 13.24 29.61 32.67 35.48 38.93 46.80
22 9.54 10.98 12.34 14.04 30.81 33.92 36.78 40.29 48.27
23 10.20 11.69 13.09 14.85 32.01 35.17 38.08 41.64 49.73
24 10.86 12.40 13.85 15.66 33.20 36.41 39.37 42.98 51.18
25 11.52 13.12 14.61 16.47 34.38 37.65 40.65 44.31 52.62
26 12.20 13.84 15.38 17.29 35.56 38.88 41.92 45.64 54.05
27 12.88 14.57 16.15 18.11 36.74 40.11 43.19 46.96 55.48
28 13.57 15.31 16.93 18.94 37.92 41.34 44.46 48.28 56.89
29 14.26 16.05 17.71 19.77 39.09 42.56 45.72 49.59 58.30
30 14.95 16.79 18.49 20.60 40.26 43.77 46.98 50.89 59.70
Note:
  • When the degrees of freedom \(v\) are such that \(v > 30\), the random variable \(Z = \sqrt{2Y} - \sqrt{2v - 1}\) approximately follows the standard normal distribution (see the standard normal table above);
  • The Chi-Square test is sensitive to sample size. For very small or very large samples, it may not be appropriate.
  • Theoretical frequencies \( E_i \) should be sufficiently large (generally at least 5) for the test to be valid.
  • The Chi-Square test does not provide information on the direction or size of the difference, only on the existence of a significant difference.
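The statistic compared against this table is \( \chi^2 = \sum_i (O_i - E_i)^2 / E_i \). A minimal sketch with made-up die-roll counts (the data are hypothetical):

```python
def chi_square_statistic(observed, expected):
    """Goodness-of-fit statistic: sum of (O_i - E_i)^2 / E_i over all categories."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical counts for 60 rolls of a die, against a fair-die expectation
observed = [5, 8, 9, 8, 10, 20]
expected = [10] * 6
chi2 = chi_square_statistic(observed, expected)
print(round(chi2, 2))  # 13.4
# Table: v = 6 - 1 = 5 degrees of freedom, α = 0.05 → critical value 11.07
print(chi2 >= 11.07)   # True: reject the fair-die hypothesis at the 5% level
```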

The Snedecor table, or F-distribution table, is used in hypothesis testing to compare the variances of two populations. The F-distribution is the distribution of the F-statistic, which is the ratio of two estimated variances.

The Snedecor table is primarily used in analysis of variance (ANOVA) and F-tests. It helps determine whether the observed differences between groups are statistically significant.

If \(F\) is a random variable following the Snedecor-Fisher distribution with degrees of freedom (\(v_{1}, v_{2}\)), the table provides the number \(f_\alpha\) such that \(P(F \geq f_{\alpha}) = \alpha = 0.025\).

To use the Snedecor table:

  • Determine the degrees of freedom for the numerator (\(v_1\)) and the denominator (\(v_2\)).
  • Find the critical value of F in the table corresponding to your significance level (alpha = 0.025), (\(v_1\)) and (\(v_2\)).
  • Compare the calculated \(F\) statistic to the critical value to decide whether to accept or reject the null hypothesis.
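These steps can be sketched in Python with the standard library; the two samples below are invented, and the table is still needed for the critical value:

```python
from statistics import variance

def f_statistic(sample1, sample2):
    """F ratio of sample variances (larger over smaller) and its (v1, v2)."""
    v1, v2 = variance(sample1), variance(sample2)
    if v1 >= v2:
        return v1 / v2, len(sample1) - 1, len(sample2) - 1
    return v2 / v1, len(sample2) - 1, len(sample1) - 1

a = [20.1, 22.3, 19.8, 21.0, 20.6, 23.2, 18.9, 21.7]
b = [20.4, 20.9, 20.2, 20.7, 20.5, 21.0, 20.3, 20.8]
f, df_num, df_den = f_statistic(a, b)
print(round(f, 2), df_num, df_den)
# Table at (v1, v2) = (7, 7), α = 0.025: critical value 4.994; reject equality of
# variances when f ≥ 4.994
```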
\(v_{2}\) \ \(v_{1}\) 1 2 3 4 5 6 7 8 9 10 12 15 20 30 ∞
1 647.7 799.5 864.1 899.5 921.8 937.1 948.2 956.6 963.2 968.6 976.7 984.8 993.1 1001 1018
2 38.50 39.00 39.16 39.24 39.29 39.33 39.35 39.37 39.38 39.39 39.41 39.43 39.44 39.46 39.49
3 17.44 16.04 15.43 15.10 14.88 14.73 14.62 14.53 14.47 14.41 14.33 14.25 14.16 14.08 13.90
4 12.21 10.64 9.979 9.604 9.364 9.197 9.074 8.979 8.904 8.843 8.751 8.656 8.559 8.461 8.257
5 10.00 8.433 7.763 7.387 7.146 6.977 6.853 6.757 6.681 6.619 6.524 6.427 6.328 6.227 6.015
6 8.813 7.259 6.598 6.227 5.987 5.819 5.695 5.599 5.523 5.461 5.366 5.268 5.168 5.065 4.849
7 8.072 6.541 5.889 5.522 5.285 5.118 4.994 4.899 4.823 4.761 4.665 4.567 4.466 4.362 4.142
8 7.570 6.059 5.416 5.052 4.817 4.651 4.528 4.433 4.357 4.295 4.199 4.101 3.999 3.894 3.670
9 7.209 5.714 5.078 4.718 4.484 4.319 4.197 4.102 4.026 3.963 3.868 3.769 3.666 3.560 3.333
10 6.936 5.456 4.825 4.468 4.236 4.072 3.949 3.854 3.779 3.716 3.620 3.521 3.418 3.311 3.080
11 6.724 5.255 4.630 4.275 4.044 3.880 3.758 3.663 3.587 3.525 3.429 3.329 3.226 3.118 2.883
12 6.553 5.095 4.474 4.121 3.891 3.728 3.606 3.511 3.435 3.373 3.277 3.177 3.072 2.963 2.725
13 6.414 4.965 4.347 3.995 3.766 3.604 3.482 3.388 3.312 3.249 3.153 3.052 2.947 2.837 2.595
14 6.297 4.856 4.241 3.891 3.663 3.501 3.379 3.285 3.209 3.146 3.050 2.949 2.843 2.732 2.487
15 6.199 4.765 4.152 3.804 3.576 3.414 3.293 3.198 3.122 3.060 2.963 2.862 2.755 2.644 2.395
16 6.115 4.686 4.076 3.729 3.502 3.340 3.219 3.124 3.048 2.986 2.889 2.787 2.680 2.568 2.316
17 6.042 4.618 4.011 3.664 3.437 3.276 3.155 3.061 2.984 2.922 2.824 2.723 2.615 2.502 2.247
18 5.978 4.559 3.953 3.608 3.382 3.220 3.099 3.005 2.929 2.866 2.768 2.666 2.559 2.445 2.187
19 5.921 4.507 3.903 3.558 3.332 3.171 3.050 2.956 2.880 2.817 2.719 2.617 2.508 2.394 2.133
20 5.871 4.461 3.858 3.514 3.289 3.128 3.007 2.912 2.836 2.773 2.675 2.573 2.464 2.349 2.085
21 5.826 4.419 3.818 3.475 3.250 3.089 2.968 2.874 2.797 2.734 2.636 2.533 2.424 2.308 2.042
22 5.786 4.382 3.782 3.440 3.215 3.054 2.933 2.839 2.762 2.699 2.601 2.498 2.389 2.272 2.003
23 5.749 4.349 3.750 3.408 3.183 3.023 2.902 2.807 2.731 2.668 2.569 2.466 2.356 2.239 1.968
24 5.716 4.318 3.721 3.379 3.154 2.994 2.873 2.779 2.702 2.639 2.541 2.437 2.327 2.209 1.935
25 5.686 4.290 3.694 3.353 3.128 2.968 2.847 2.753 2.676 2.613 2.514 2.411 2.300 2.182 1.906
26 5.658 4.265 3.669 3.328 3.104 2.944 2.824 2.729 2.652 2.589 2.490 2.386 2.275 2.157 1.878
27 5.633 4.242 3.647 3.306 3.082 2.922 2.802 2.707 2.630 2.567 2.468 2.364 2.253 2.133 1.853
28 5.609 4.220 3.626 3.286 3.062 2.902 2.782 2.687 2.610 2.547 2.448 2.343 2.232 2.112 1.829
29 5.587 4.200 3.607 3.267 3.043 2.884 2.763 2.668 2.591 2.528 2.429 2.324 2.213 2.092 1.807
30 5.567 4.182 3.589 3.249 3.026 2.866 2.746 2.651 2.574 2.511 2.412 2.307 2.195 2.074 1.787
40 5.423 4.051 3.463 3.126 2.903 2.744 2.623 2.528 2.451 2.388 2.288 2.181 2.067 1.943 1.637
60 5.285 3.925 3.342 3.007 2.786 2.627 2.506 2.411 2.334 2.270 2.169 2.061 1.944 1.815 1.482
120 5.152 3.804 3.226 2.894 2.674 2.515 2.394 2.299 2.221 2.157 2.054 1.945 1.824 1.690 1.310
∞ 5.023 3.688 3.116 2.785 2.566 2.408 2.287 2.191 2.113 2.048 1.944 1.832 1.708 1.566 1.000

The Snedecor table, or F-distribution table, is a statistical table that provides the critical values of the F-distribution. This distribution is used to compare the variances of two populations and for hypothesis testing in analysis of variance (ANOVA).

If \(F\) is a random variable following the Snedecor-Fisher distribution with degrees of freedom (\(v_{1}, v_{2}\)), the table provides the number \(f_\alpha\) such that \(P(F \geq f_{\alpha}) = \alpha = 0.05\).

The Snedecor table is used in F-tests and analysis of variance (ANOVA) to determine whether the observed differences between groups are statistically significant. It helps to check the homogeneity of variances and to compare multiple means.

\(v_{2}\) \ \(v_{1}\) 1 2 3 4 5 6 7 8 9 10 12 15 20 30 ∞
1 161.4 199.5 215.7 224.5 230.1 233.9 236.7 238.8 240.5 241.8 243.9 245.9 248.0 250.0 254.3
2 18.51 19.00 19.16 19.24 19.29 19.32 19.35 19.37 19.38 19.39 19.41 19.42 19.44 19.46 19.49
3 10.12 9.552 9.276 9.117 9.013 8.940 8.886 8.845 8.812 8.785 8.744 8.702 8.660 8.616 8.526
4 7.708 6.944 6.591 6.388 6.256 6.163 6.094 6.041 5.998 5.964 5.911 5.857 5.802 5.745 5.628
5 6.607 5.786 5.409 5.192 5.050 4.950 4.875 4.818 4.772 4.735 4.677 4.618 4.558 4.495 4.365
6 5.987 5.143 4.757 4.533 4.387 4.283 4.206 4.146 4.099 4.060 3.999 3.938 3.874 3.808 3.668
7 5.591 4.737 4.346 4.120 3.971 3.866 3.787 3.725 3.676 3.636 3.574 3.510 3.444 3.375 3.229
8 5.317 4.459 4.066 3.837 3.687 3.580 3.500 3.438 3.388 3.347 3.283 3.218 3.150 3.079 2.927
9 5.117 4.256 3.862 3.633 3.481 3.373 3.292 3.229 3.178 3.137 3.072 3.006 2.936 2.863 2.706
10 4.964 4.102 3.708 3.478 3.325 3.217 3.135 3.071 3.020 2.978 2.913 2.845 2.774 2.699 2.537
11 4.844 3.982 3.587 3.356 3.203 3.094 3.012 2.948 2.896 2.853 2.787 2.718 2.646 2.570 2.404
12 4.747 3.885 3.490 3.259 3.105 2.996 2.913 2.848 2.796 2.753 2.686 2.616 2.543 2.466 2.296
13 4.667 3.805 3.410 3.179 3.025 2.915 2.832 2.766 2.714 2.671 2.603 2.533 2.458 2.380 2.206
14 4.600 3.738 3.343 3.112 2.958 2.847 2.764 2.698 2.645 2.602 2.534 2.463 2.387 2.308 2.130
15 4.543 3.682 3.287 3.055 2.901 2.790 2.706 2.640 2.587 2.543 2.475 2.403 2.327 2.246 2.065
16 4.494 3.633 3.238 3.006 2.852 2.741 2.657 2.591 2.537 2.493 2.424 2.352 2.275 2.193 2.009
17 4.451 3.591 3.196 2.964 2.810 2.698 2.614 2.548 2.494 2.449 2.380 2.307 2.230 2.147 1.960
18 4.413 3.554 3.159 2.927 2.772 2.661 2.576 2.510 2.456 2.411 2.342 2.268 2.190 2.107 1.916
19 4.380 3.521 3.127 2.895 2.740 2.628 2.543 2.476 2.422 2.377 2.308 2.234 2.155 2.071 1.878
20 4.351 3.492 3.098 2.866 2.710 2.599 2.514 2.447 2.392 2.347 2.277 2.203 2.124 2.039 1.843
21 4.324 3.466 3.072 2.840 2.684 2.572 2.487 2.420 2.366 2.321 2.250 2.175 2.096 2.010 1.811
22 4.300 3.443 3.049 2.816 2.661 2.549 2.463 2.396 2.341 2.296 2.225 2.150 2.070 1.984 1.783
23 4.279 3.422 3.028 2.795 2.640 2.527 2.442 2.374 2.320 2.274 2.203 2.128 2.047 1.960 1.757
24 4.259 3.402 3.008 2.776 2.620 2.508 2.422 2.355 2.300 2.254 2.183 2.107 2.026 1.939 1.733
25 4.241 3.385 2.991 2.758 2.603 2.490 2.404 2.337 2.282 2.236 2.164 2.088 2.007 1.919 1.711
26 4.225 3.369 2.975 2.742 2.586 2.474 2.388 2.320 2.265 2.219 2.147 2.071 1.989 1.901 1.690
27 4.210 3.354 2.960 2.727 2.571 2.459 2.373 2.305 2.250 2.204 2.132 2.055 1.973 1.884 1.671
28 4.196 3.340 2.946 2.714 2.558 2.445 2.359 2.291 2.236 2.190 2.117 2.041 1.958 1.868 1.654
29 4.183 3.327 2.934 2.701 2.545 2.432 2.346 2.278 2.222 2.176 2.104 2.027 1.944 1.854 1.637
30 4.170 3.315 2.922 2.689 2.533 2.420 2.334 2.266 2.210 2.164 2.092 2.014 1.931 1.840 1.622
40 4.084 3.231 2.838 2.606 2.449 2.335 2.249 2.180 2.124 2.077 2.003 1.924 1.838 1.744 1.508
60 4.001 3.150 2.758 2.525 2.368 2.254 2.166 2.097 2.040 1.992 1.917 1.836 1.748 1.649 1.389
120 3.920 3.071 2.680 2.447 2.289 2.175 2.086 2.016 1.958 1.910 1.833 1.750 1.658 1.554 1.253
∞ 3.841 2.995 2.604 2.371 2.214 2.098 2.009 1.938 1.879 1.830 1.752 1.666 1.570 1.459 1.000

The Mann-Whitney test, also known as the Wilcoxon-Mann-Whitney test, is a non-parametric statistical test used to determine if two independent samples come from the same population. It is often used when the data do not follow a normal distribution.

The table provides the value \(m_\alpha\) such that \(P(M \leq m_{\alpha}) = \alpha = 0.05\) for two samples with sizes \(n_1\) and \(n_2\) where \(n_{1} \leq n_{2}\).

The Mann-Whitney test is useful for comparing two groups when the conditions for using parametric tests, such as Student's t-test, are not met. It does not require the data to be normally distributed and can be used with samples of different sizes.
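A sketch of the U statistic itself: count, over all pairs, how often a value of one sample beats a value of the other (ties counting one half, a common convention), and keep the smaller of the two counts. The samples are invented:

```python
def mann_whitney_u(x, y):
    """Smaller of the two U statistics, counting pairwise wins (ties as 1/2)."""
    u_x = sum(
        1.0 if xi > yj else 0.5 if xi == yj else 0.0
        for xi in x for yj in y
    )
    u_y = len(x) * len(y) - u_x
    return min(u_x, u_y)

a = [12, 15, 9, 20]
b = [14, 22, 25, 30, 28]
u = mann_whitney_u(a, b)
print(u)  # 2.0
# Table at α = 0.05, n1 = 4, n2 = 5: critical value 1; since 2 > 1, do not reject
```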

\(n_{1}\) \ \(n_{2}\) 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
2 - - - - 0 0 0 0 1 1 1 1 1 2 2 2 2
3 - 0 1 1 2 2 3 3 4 4 5 5 6 6 7 7 8
4 0 1 2 3 4 4 5 6 7 8 9 10 11 11 12 13 14
5 2 3 5 6 7 8 9 11 12 13 14 15 17 18 19 20
6 5 6 8 10 11 13 14 16 17 19 21 22 24 25 27
7 8 10 12 14 16 18 20 22 24 26 28 30 32 34
8 13 15 17 19 22 24 26 29 31 34 36 38 41
9 17 20 23 26 28 31 34 37 39 42 45 48
10 23 26 29 33 36 39 42 45 48 52 55
11 30 33 37 40 44 47 51 55 58 62
12 37 41 45 49 53 57 61 65 69
13 45 50 54 59 63 67 72 76
14 55 59 64 69 74 78 83
15 64 70 75 80 85 90
16 75 81 86 92 98
17 87 93 99 105
18 99 106 112
19 113 119
20 127

The table provides the value \(m_\alpha\) such that \(P(M \leq m_{\alpha}) = \alpha = 0.01\) for two samples with sizes \(n_1\) and \(n_2\) where \(n_{1} \leq n_{2}\).

\(n_{1}\) \ \(n_{2}\) 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
2 - - - - - - - - - - - - - - - 0 0
3 - - - - - 0 0 0 1 1 1 2 2 2 2 3 3
4 - - 0 0 1 1 2 2 3 3 4 5 5 6 6 7 8
5 0 1 1 2 3 4 5 6 7 7 8 9 10 11 12 13
6 2 3 4 5 6 7 9 10 11 12 13 15 16 17 18
7 4 6 7 9 10 12 13 15 16 18 19 21 22 24
8 7 9 11 13 15 17 18 20 22 24 26 28 30
9 11 13 16 18 20 22 24 27 29 31 33 36
10 16 18 21 24 26 29 31 34 37 39 42
11 21 24 27 30 33 36 39 42 45 48
12 27 31 34 37 41 44 47 51 54
13 34 38 42 45 49 53 57 60
14 42 46 50 54 58 63 67
15 51 57 60 64 68 73
16 60 65 70 74 79
17 70 75 81 86
18 81 87 92
19 93 99
20 105

The Wilcoxon test compares the differences between pairs of values in matched samples. It assesses whether the observed differences are significantly different from zero. The test is also used to determine if the median of a symmetric distribution differs from a hypothetical value.

The table provides the value \(w_{\alpha}\) such that \(P (W \leq w_{\alpha}) = \alpha\), for cases \(\alpha = 0.05\) and \(\alpha = 0.01\).

The Wilcoxon test is used in the following situations:

  • Comparison of two matched samples: When comparing two repeated measures on the same subject, for example, before and after a treatment.
  • Small samples: Useful when sample sizes are small and parametric tests are not appropriate.
  • Non-normality of data: When the data do not follow a normal distribution and parametric tests are not applicable.
  • Ordinal or continuous data: Can be used with ordinal or continuous data.
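A sketch of how \(W\) is obtained: rank the non-zero differences by absolute value (average ranks for ties), then take the smaller of the positive- and negative-rank sums. The before/after data below are invented:

```python
def wilcoxon_w(before, after):
    """Smaller of the positive- and negative-rank sums of the paired differences."""
    diffs = [b - a for a, b in zip(before, after) if b != a]  # drop zero differences
    ordered = sorted(diffs, key=abs)
    ranks = {}  # absolute difference → (average) rank
    i = 0
    while i < len(ordered):
        j = i
        while j < len(ordered) and abs(ordered[j]) == abs(ordered[i]):
            j += 1
        for k in range(i, j):
            ranks[abs(ordered[k])] = (i + 1 + j) / 2  # average of ranks i+1 .. j
        i = j
    w_plus = sum(ranks[abs(d)] for d in diffs if d > 0)
    w_minus = sum(ranks[abs(d)] for d in diffs if d < 0)
    return min(w_plus, w_minus)

before = [125, 115, 130, 140, 140, 115, 140, 125]
after = [110, 122, 125, 120, 140, 124, 123, 137]
print(wilcoxon_w(before, after))  # 9.0
# Table: N = 7 non-zero differences, α = 0.05 → critical value 2; 9 > 2, do not reject
```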
\(N\) 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
\(\alpha = 0.05\) 2 4 6 8 11 14 17 21 25 30 35 40 46 52 59 66 73 81 89
\(\alpha = 0.01\) - 0 2 3 5 7 10 13 16 20 23 28 32 38 43 49 55 61 68

The table of the linear correlation coefficient provides critical values that help assess the statistical significance of \(r\). These values depend on the sample size (\(n\)) and the significance level (\(\alpha\)).

The table of the linear correlation coefficient is used to:

  • Evaluate the relationship between two variables: It helps quantify the strength and direction of the linear relationship between two quantitative variables;
  • Test the correlation hypothesis: It allows testing whether the observed correlation is statistically significant or if it could be due to chance;
  • Forecasting: A strong correlation between two variables can be used to make predictions about one variable given the value of the other.
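The coefficient \(r\) itself can be computed with a short function before consulting the table; the data below are invented:

```python
from math import sqrt

def pearson_r(x, y):
    """Linear correlation coefficient r between two samples of equal length."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

x = [1, 2, 3, 4, 5, 6, 7]
y = [2.1, 3.9, 6.2, 8.0, 9.8, 12.1, 14.2]
r = pearson_r(x, y)
print(round(r, 4))
# d.f. = n - 2 = 5; the table gives 0.7545 at α = 0.05, so this r is significant
```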

\(d.f.\) \ \(\alpha\) 0.10 0.05 0.01
1 0.9877 0.9969 0.9999
2 0.9000 0.9500 0.9900
3 0.8054 0.8783 0.9587
4 0.7293 0.8114 0.9172
5 0.6694 0.7545 0.8745
6 0.6215 0.7067 0.8343
7 0.5822 0.6664 0.7977
8 0.5494 0.6319 0.7646
9 0.5214 0.6020 0.7348
10 0.4973 0.5760 0.7079
11 0.4762 0.5529 0.6835
12 0.4575 0.5324 0.6614
13 0.4409 0.5139 0.6411
14 0.4259 0.4973 0.6226
15 0.4124 0.4821 0.6055
16 0.4000 0.4683 0.5897
17 0.3887 0.4555 0.5751
18 0.3783 0.4438 0.5614
19 0.3687 0.4329 0.5487
20 0.3598 0.4227 0.5368
25 0.3233 0.3809 0.4869
30 0.2960 0.3494 0.4487
35 0.2746 0.3246 0.4182
40 0.2573 0.3044 0.3932
45 0.2428 0.2875 0.3721
50 0.2306 0.2732 0.3541
60 0.2108 0.2500 0.3248
70 0.1954 0.2319 0.3017
80 0.1829 0.2172 0.2830
90 0.1726 0.2050 0.2673
100 0.1638 0.1946 0.2540

The Spearman's Rank Correlation Coefficient table provides critical values for assessing the statistical significance of the Spearman correlation coefficient (\(\rho\)). This coefficient measures the strength and direction of the monotonic relationship between two ordinal or quantitative variables. The values of \(\rho\) range from -1 to 1, where -1 indicates a perfect negative correlation, 1 indicates a perfect positive correlation, and 0 indicates no correlation.

The Spearman's Rank Correlation Coefficient table is used to:

  • Evaluate the relationship between two variables: It helps quantify the strength and direction of the monotonic relationship between two variables, whether ordinal or quantitative.
  • Test the correlation hypothesis: It allows testing whether the observed correlation is statistically significant or if it could be due to chance.
  • Analysis of ordinal data: It is particularly useful for ordinal data where parametric methods are not appropriate.

The table provides the value \(r_{\alpha}\) such that \(P(|R_{s}| > r_{\alpha}) = \alpha\).
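Without tied values, the classical formula \( \rho = 1 - \frac{6 \sum d_i^2}{n(n^2 - 1)} \) applies, where \(d_i\) is the rank difference for pair \(i\). A sketch with invented rankings of six items by two judges:

```python
def spearman_rho(x, y):
    """Spearman rank correlation, assuming no tied values within a sample."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    n = len(x)
    d2 = sum((rx - ry) ** 2 for rx, ry in zip(ranks(x), ranks(y)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

judge_a = [1, 2, 3, 4, 5, 6]  # invented rankings from two judges
judge_b = [2, 1, 4, 3, 6, 5]
rho = spearman_rho(judge_a, judge_b)
print(round(rho, 2))  # 0.83
# Table at n = 6, α = 0.05: 0.85; since 0.83 < 0.85, the correlation is not significant
```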

\(\alpha\) \ \(n\) 4 5 6 7 8 9 10 11 12 13
0.10 0.99 0.87 0.77 0.69 0.64 0.59 0.56 0.53 0.51 0.49
0.05 - 0.95 0.85 0.78 0.73 0.68 0.64 0.61 0.59 0.56
0.02 - 0.99 0.93 0.87 0.82 0.77 0.73 0.70 0.67 0.64
0.01 - - 0.97 0.91 0.86 0.82 0.79 0.75 0.72 0.70

The Kruskal-Wallis test table provides critical values needed to assess the results of the eponymous test. It is used to determine whether three or more samples come from the same distribution.

The Kruskal-Wallis test is used in the following situations:

  • Comparison of multiple independent groups: It allows comparing the distribution of three or more groups to determine if they come from the same population.
  • Ordinal or non-normally distributed continuous data: Useful when data do not follow a normal distribution and parametric tests (such as ANOVA) are not appropriate.
  • Analysis of variance between groups: It helps identify significant differences in the distributions among groups.

The table provides the value \(h_{\alpha}\) such that \(P(H \geq h_{\alpha}) = \alpha\).
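The statistic compared against this table is \( H = \frac{12}{N(N+1)} \sum_i \frac{R_i^2}{n_i} - 3(N+1) \), where \(R_i\) is the rank sum of group \(i\) in the pooled ranking and \(N\) the total number of observations. A sketch without tie correction, on three invented groups:

```python
def kruskal_wallis_h(*samples):
    """H statistic without tie correction; assumes no tied values across groups."""
    pooled = sorted(v for s in samples for v in s)
    rank = {v: i + 1 for i, v in enumerate(pooled)}  # value → rank (no ties assumed)
    n = len(pooled)
    rank_sum_term = sum(sum(rank[v] for v in s) ** 2 / len(s) for s in samples)
    return 12 * rank_sum_term / (n * (n + 1)) - 3 * (n + 1)

g1 = [27, 31, 35]
g2 = [21, 24, 29]
g3 = [18, 20, 22]
h = kruskal_wallis_h(g1, g2, g3)
print(round(h, 2))  # 5.69
# Table for sample sizes 3, 3, 3: 5.60 at α = 0.05; since 5.69 ≥ 5.60, the groups differ
```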

Sample sizes \(\alpha = 0.05\) \(\alpha = 0.01\)
3 3 2 4.71 -
3 3 1 5.10 -
3 2 2 5.22 6.26
3 3 3 5.60 6.50
4 2 1 4.94 -
4 2 2 5.15 6.30
4 3 1 5.21 -
4 3 2 5.42 6.35
4 3 3 5.73 6.75
4 4 1 4.93 6.67
4 4 2 5.45 6.90
4 4 3 5.60 7.14
4 4 4 5.70 7.60
5 2 1 5.00 -
5 2 2 5.10 6.40
5 3 1 4.91 6.42
5 3 2 5.25 6.82
5 3 3 5.66 7.03
5 4 1 4.92 6.90
5 4 2 5.27 7.12
5 4 3 5.63 7.44
5 4 4 5.62 7.75
5 5 1 5.00 7.08
5 5 2 5.27 7.30
5 5 3 5.64 7.55
5 5 4 5.64 7.80
5 5 5 5.72 7.98