Making statistical significance more significant
We routinely set significance levels at 0.05, giving ourselves a one-in-20 chance of a false positive result when the null hypothesis is true. Why? Why not instead choose the value that minimises the combined chances of both false positives and false negatives? It is easy, say Leanne F. Baker and Joseph F. Mudge, so why not do it?
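The idea can be sketched numerically. The snippet below is a minimal illustration, not the authors' own procedure: it assumes a one-sided, one-sample z-test, a known standardised effect size, equal weighting of the two error types, and a simple grid search for the alpha that minimises the average of the Type I rate (alpha) and the Type II rate (beta).

```python
from statistics import NormalDist

def combined_error(alpha: float, effect: float, n: int) -> float:
    """Average of the Type I rate (alpha) and Type II rate (beta) for a
    one-sided, one-sample z-test with standardised effect size `effect`
    and sample size `n` (illustrative assumptions, not the paper's setup)."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha)               # critical value under H0
    beta = z.cdf(z_crit - effect * n ** 0.5)    # miss probability under H1
    return (alpha + beta) / 2

def optimal_alpha(effect: float, n: int, grid: int = 10_000) -> float:
    """Grid-search the alpha in (0, 0.5] that minimises the combined error."""
    alphas = [(i + 1) / (2 * grid) for i in range(grid)]
    return min(alphas, key=lambda a: combined_error(a, effect, n))

# For a medium effect (d = 0.5) and n = 20, the optimal alpha sits well
# above the conventional 0.05, and achieves a lower combined error rate.
a_opt = optimal_alpha(effect=0.5, n=20)
print(f"optimal alpha: {a_opt:.3f}")
print(f"combined error at optimum: {combined_error(a_opt, 0.5, 20):.3f}")
print(f"combined error at 0.05:    {combined_error(0.05, 0.5, 20):.3f}")
```

Because beta depends on the effect size and sample size, the optimal alpha shifts from study to study, which is precisely the point: a fixed 0.05 trades off the two error types differently in every experiment.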