The practice of testing whether statements are True or False is intimately connected to the Theory of Probability. This is the scientific way to look at the subjects of Truth, Falsehood, and Ignorance, as I have explained before.

Any statement, to be known as True or not, is tested through probability statements. Hence an experimenter, after collecting the data, will perform statistical tests and conclude with, say, 99% confidence that the statement holds (i.e. whether the Null Hypothesis stands against the Alternative Hypothesis).

Hidden in all of this exercise is one major assumption: that the process generating the data (observed or collected) is KNOWN, that it follows some probability function that converges and is stationary, and that the testing methods exhibit the smallest possible errors. In other words: the experimenter is NOT IGNORANT about any of these matters (or is at least assumed to have full knowledge of them).

The second problem, which can never be solved, is the hidden Null Hypothesis: that ALL the assumptions regarding the probability functions and the data are TRUE. Therefore, any statistical test is always a joint test of the actual Null Hypothesis in question and the hypothesis about the probability model.
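The joint-test point can be demonstrated with a small simulation, sketched here in Python (standard library only; all names and parameters are illustrative assumptions). The naive z-test below tests "mean = 0" while silently assuming the data are i.i.d. When the hidden half of the joint null is false, here because the data are autocorrelated AR(1) noise whose mean really is zero, the nominal 1% test rejects far more often than 1%:

```python
import random
import statistics

random.seed(0)

Z_CRIT = 2.576  # two-sided 1% critical value of the standard normal

def naive_z_test_rejects(xs):
    """Test H0: mean = 0, silently assuming the xs are i.i.d.
    (the hidden part of the joint null)."""
    n = len(xs)
    z = statistics.fmean(xs) / (statistics.stdev(xs) / n ** 0.5)
    return abs(z) > Z_CRIT

def ar1_series(n, rho):
    """Zero-mean AR(1) noise: the mean truly is 0, but draws are dependent."""
    x, out = 0.0, []
    for _ in range(n):
        x = rho * x + random.gauss(0, 1)
        out.append(x)
    return out

def rejection_rate(make_series, trials=1000):
    """How often the test (wrongly) rejects a true 'mean = 0'."""
    return sum(naive_z_test_rejects(make_series()) for _ in range(trials)) / trials

iid_rate = rejection_rate(lambda: [random.gauss(0, 1) for _ in range(100)])
dep_rate = rejection_rate(lambda: ar1_series(100, rho=0.9))

print(f"false positives, assumptions true:        {iid_rate:.3f}")  # near the nominal 0.01
print(f"false positives, i.i.d. assumption false: {dep_rate:.3f}")  # far above 0.01
```

The test never distinguishes "the mean is not zero" from "the independence assumption is wrong"; rejection only tells us that the joint hypothesis fails somewhere.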

The problem is that this joint Null Hypothesis is never directly tested, and the joint test may well be performed in ignorance. In fact, we do not know whether the data follow the assumed process (i.e. the probability functions) or not. Most of the time we have very limited knowledge and information with which to determine with certainty that such a process even exists.

For that reason, many researchers, I have observed, are quick to go straight into statistical testing and jump to conclusions based on whatever test they have designed, without having truly thought about and investigated the process in question. This happens a lot in finance and economics, as well as in much social science research.

Truly, the domain of probability is the domain of ignorance (i.e. of how little we know). What I mean here is not Mathematical Statistics, which is called the “Calculus of Probability”, but the existence of probability itself, sometimes called “Epistemic Probability”. It is quite a big mistake to jump into the Calculus of Probability without even knowing the essence of probability itself, which can lead to massive errors.

For example, we often use past (historical) data as the benchmark, concluding that the data are stationary and exhibit certain statistical properties (such as a normal distribution). While this approach is commendable (it has been shown to work well in many cases), it is mired in hidden problems. So much is obscured from the researcher that hidden errors creep into the testing process. More often than not, the statement “I found this statement to be True with 99% confidence” is so vacuous that it means nothing. Worse still is using it to predict future events: claiming “I have 99% confidence that the same thing is True” for the next observation as well.
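The danger of projecting historical properties forward can be sketched in a few lines of Python (standard library only; the regimes, seed, and numbers are hypothetical, chosen purely for illustration). A nominal 99% interval is calibrated on a calm historical sample; it keeps its promised coverage only while the process stays in the same regime:

```python
import random
import statistics

random.seed(1)

# "Historical" data: a calm regime, N(0, 1)
past = [random.gauss(0, 1) for _ in range(1000)]
mu, sigma = statistics.fmean(past), statistics.stdev(past)
# Nominal 99% prediction interval, assuming normality and stationarity
lo, hi = mu - 2.576 * sigma, mu + 2.576 * sigma

def coverage(xs):
    """Fraction of future observations the interval actually captures."""
    return sum(lo <= x <= hi for x in xs) / len(xs)

# Future draws from the same regime: coverage stays near the nominal 99%
same = [random.gauss(0, 1) for _ in range(1000)]
# Future draws after a hidden regime shift (volatility triples): coverage collapses
shifted = [random.gauss(0, 3) for _ in range(1000)]

print(f"coverage, stationary future: {coverage(same):.3f}")
print(f"coverage, regime shift:      {coverage(shifted):.3f}")
```

Nothing in the historical sample warns of the shift; the “99% confidence” statement was only ever conditional on the stationarity assumption holding.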

The deeper discussion of this subject relates to the subject of Ignorance that I alluded to earlier.