Bayesianism has been an object of discussion in the past, and the issue is sufficiently important to warrant a revisit. In recent years, Bayesianism has been growing in popularity and visibility. Outside of the probability community there are many misconceptions about Bayesianism: in many cases, "Bayesian X" merely means that probability theory is used in X. Within the probability community, Bayesianism has a much narrower meaning. In this perspective, the term "Bayesian networks" is a misnomer; Bayesian networks are simply probabilistic networks. Note that there are also possibilistic networks.
So what is Bayesianism? The answer is rooted in the difference between "can" and "should." Let me elaborate on this statement. Traditionally, probability theory has been associated with randomness and repeated events. For example: what is the probability that a fair coin will fall heads m times in n tosses? However, consider the statement, p: It is likely that Robert is rich. In this case, no repetition of trials is involved and there is no overt randomness; what is involved is lack of knowledge. This simple example suggests that probability theory can be applied when there is uncertainty, but no repetition of trials and no overt randomness. This is the principal tenet of Bayesianism. So far, nothing is controversial. A problem arises when "can" is replaced with "should," as in the following dictum of a noted Bayesian, Professor D. Lindley.
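The coin-tossing question above has a standard closed-form answer, which a few lines of Python make concrete (the function name is mine, introduced only for illustration):

```python
from math import comb

def p_heads(m, n):
    """Probability that a fair coin falls heads exactly m times in n tosses:
    C(n, m) equally likely favorable sequences out of 2**n total."""
    return comb(n, m) / 2 ** n

print(p_heads(5, 10))  # 252 / 1024 = 0.24609375
```

This is precisely the repeated-trials setting where probability theory is uncontroversial; the point of the paragraph is that statements like "Robert is rich" do not fit this mold.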
The only satisfactory description of uncertainty is probability. By this I mean that every uncertainty statement must be in the form of a probability; that several uncertainties must be combined using the rules of probability; and that the calculus of probabilities is adequate to handle all situations involving uncertainty…probability is the only sensible description of uncertainty and is adequate for all problems involving uncertainty. All other methods are inadequate…anything that can be done with fuzzy logic, belief functions, upper and lower probabilities, or any other alternative to probability can better be done with probability (Lindley 1987).
Professor Lindley's dictum is challenged by fuzzy logic. In the perspective of fuzzy logic there are many different kinds of uncertainty, and the Bayesian "one size fits all" approach is misdirected. This view is the point of departure in my 2002, 2005 and 2006 papers. Here are two deceptively simple problems which are a challenge to Bayesians; the second is less simple than the first. Problem 1: A box contains twenty balls of various sizes. Most are large. How many are small? Problem 2: A box contains approximately twenty balls of various sizes. There are many more large balls than small balls. What is the number of small balls? Note that an interval interpretation of fuzzy terms is not acceptable. Comments are welcome.
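To make Problem 1 concrete, here is a minimal sketch of one fuzzy treatment: represent the quantifier "most" as a membership function over proportions and induce a possibility distribution over the number of small balls. The piecewise-linear shape of `mu_most` is my assumption for illustration, not a definition from the papers cited above:

```python
def mu_most(p):
    """Membership of proportion p in the fuzzy quantifier 'most'.
    Assumed piecewise-linear: 0 up to 0.5, rising to 1 at 0.8."""
    if p <= 0.5:
        return 0.0
    if p >= 0.8:
        return 1.0
    return (p - 0.5) / 0.3

N = 20  # Problem 1: exactly twenty balls

# If k balls are small, then N - k are large, so the proportion of large
# balls is (N - k) / N. Its compatibility with "most" is taken as the
# possibility that the number of small balls equals k.
poss = {k: mu_most((N - k) / N) for k in range(N + 1)}

for k, v in sorted(poss.items()):
    if v > 0:
        print(k, round(v, 3))
```

The answer is not a single number or an interval but a possibility distribution: k = 0..4 are fully possible, possibility decays linearly for k = 5..9, and k ≥ 10 is excluded. Problem 2 compounds this with a fuzzy total ("approximately twenty") and a fuzzy comparison ("many more"), which this sketch does not attempt.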