One possible escape from the riddle of causation and inference that I have been exploring is the notion that there is a distinction between causal theory and explanation: there are (some observers think) valid or good explanations that are not yet -- or theories or accounts that fall short of -- potentially deeper causal theories or explanations.
If this distinction between causal and other explanations works, it provides a solution to the mystery of why "mere association" bereft of causal theory works. The suggested answer: sometimes mere association is not mere association at all; the so-called mere association sometimes rests on an explanation.
But in science -- and perhaps (one hopes) in other fields as well -- mere explanation is not enough; an explanation must be a good explanation.
In science -- and, again one hopes, in other intellectual domains as well -- a theory, even a "merely explanatory" one [n.b., my phrase], must prove its mettle; it must be put to the test. In science this often means that it must be shown that the theory or explanation in question can predict the occurrence or non-occurrence of phenomena or events better than pot-shot strategies do, better than helter-skelter guessing does.
By embracing "explanation" some theorists do seem to be advancing the claim that a theory short of a causal one can have predictive force, that it can achieve a genuine understanding of Nature without having in hand anything that we might call a theory of what causes Nature to act as it does.
But if explanation sans causality can achieve such understanding -- if it can indeed have this kind of epistemological power -- a big question quite naturally arises:
How? How does "explanation without causation" achieve such understanding?
The difficulty here is that a "mere explanation" falls short (by hypothesis) of a causal theory: it is short of, it is less than, a theory that rests on an understanding of the mechanisms or principles that (may) underlie phenomena and events in nature.
The move to "explanation" suggests that those who make this move accept the notion that ignorant human beings can (somehow) achieve (scientific) understanding. Paolo Garbolino -- revealingly and perceptively -- points to this issue (and attempts to resolve it) by suggesting that the proper epistemological alternative to reliance on (i) causal explanations and (ii) statistical explanations is the deployment of (iii) "potential explanatory accounts" [P. Garbolino, "Explaining Relevance," in The Dynamics of Judicial Proof: Computation, Logic, and Common Sense 179, at 187-191 (M. MacCrimmon & P. Tillers, eds., Physica-Verlag, 2002)].
A problem is not solved merely because it is named -- even if the name happens to be a seductive word such as "explanation." (But, note, attaching a name to a problem or process sometimes does advance understanding "simply" by identifying a problem, phenomenon, or process.)
The label "explanation" does not fully explain -- but whoever thought it would? The label "explanation" does not resolve the question of how ignorant human beings -- human beings who lack a full understanding of Nature and of the mechanisms or principles that drive or explain it -- sometimes manage to make very good judgments about the behavior of Nature. A phrase such as "explanatory theory" does not by itself explain this.
The riddle just posed may provide us with part of an answer; it may at least yield or suggest a useful "non-answer answer," viz., a partial answer that helps even if it does not altogether satisfy the hunger for knowledge.
The non-answer answer I have in mind is that human beings know more than they know: they have understanding that eludes their comprehension; they have some tacit knowledge.
There can no longer be any serious doubt that this is the case; it is no longer possible to doubt that people have knowledge that they cannot articulate, spell out, make (fully) explicit. (This general insight is not new; it goes back at least to Plato.)
But, of course, there is at least one big problem with being told that you know more than you know, that you already have much knowledge: sometimes -- and now is one of those times! -- you would like to know more than you now know. Does it do any good to be told you know more than you know? For example, does the notion of unknowing knowing tell us -- or help us to decide -- how a judge should instruct a jury to think about evidence suggesting that this or that substance causes cancer, or when, if ever, such evidence should be withheld from a jury?
So -- to repeat the general question -- is the notion of unknowing knowing helpful?
I must, alas, suspend discussion of this big question for now. But let us leave our conversation (are we having a conversation?) -- let us suspend this discussion -- with one caveat in mind: no theory of knowing that fails to account for advances in human knowledge can satisfy. It is true that we know more than we know. But we also manage to learn; there are new things -- and new insights -- under the sun; and knowledge -- including our knowledge of how we know -- is not entirely circular.