Some knowledge is explicit knowledge. But it does not necessarily follow that a given body of developed explicit knowledge (even esoteric explicit knowledge -- e.g., mathematically formulated knowledge about the world) altogether avoids reliance on tacit knowledge. For example, perhaps the predictions of a physicist, an artillery officer, or some other learned person about the trajectory of a projectile fired from a cannon ordinarily depend on a certain amount (perhaps a large amount) of tacit knowledge as well as on explicit calculations involving Newtonian mechanics, friction coefficients, and other such matters. (But it is possible -- is it possible? -- that some explicit human knowledge can be fully automated by being deposited into an autonomous non-human device -- i.e., that such knowledge may take the form of a fully autonomous robot whose intended operations never require or depend on human intervention.)
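To make the contrast a bit more concrete, here is a minimal sketch of the purely explicit, rule-governed side of such a prediction -- a toy Newtonian trajectory calculation with air resistance. Every name and number in it (the quadratic drag model, the muzzle speed, the drag coefficient) is an illustrative assumption of mine, not anything taken from an actual physicist's or artillery officer's practice; the point is only that this part of the prediction can be written down and executed mechanically.

```python
import math

def projectile_range(speed, angle_deg, drag_coeff=0.01, mass=10.0,
                     g=9.81, dt=0.001):
    """Numerically integrate a trajectory with quadratic air drag and
    return the horizontal distance (in meters) at which it lands."""
    angle = math.radians(angle_deg)
    vx, vy = speed * math.cos(angle), speed * math.sin(angle)
    x, y = 0.0, 0.0
    while y >= 0.0:
        v = math.hypot(vx, vy)
        # Drag opposes the velocity vector; gravity pulls straight down.
        ax = -(drag_coeff / mass) * v * vx
        ay = -g - (drag_coeff / mass) * v * vy
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
    return x

# Illustrative values only: 300 m/s muzzle speed, 45-degree elevation.
print(round(projectile_range(speed=300.0, angle_deg=45.0), 1))
```

What such a sketch cannot capture, of course, is the tacit judgment involved in deciding whether the model and its parameters fit the situation at hand.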
Can expressly-formulated principles improve the inferential performance of a device -- be that device mechanical or biological -- whose inferential processes are only imperfectly and partially understood?
I realize my ruminations here are primitive, probably even sophomoric. So forgive me for that. I am taking the liberty of doing some exploratory thinking "out loud." Later (probably only much later) I will make an effort to be more systematic.

Consider recipes for batters in baseball games. Can expressly-formulated recipes, maxims, or precepts for batters "work"? (Plainly such recipes -- "Keep your eye on the ball!", "Watch the pitcher's grip!", etc. -- do not fully capture or express the way a batter's brain, eyes, etc., work to lead the batter to draw certain inferences -- very quickly! -- about the velocity and trajectory of the ball that he or she hopes to hit out of the ballpark.)

Consider, alternatively, rules built into thermostats -- e.g., "Thermostat, turn on switch X when sensor B shows t - 1 or less; but turn on switch Y when sensor B shows t + 1 or more." Can such a rule work if the physical processes by which the thermostat's sensors detect signals are not perfectly understood? (The answer would seem to be "yes." What are the implications of that?!)
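Here is an equally minimal sketch of the thermostat rule just quoted, to make vivid how such a rule can be stated -- and can "work" -- without saying anything about the physics of the sensor. The names (switch X, switch Y, sensor B) and the setpoint t come from the example above; everything else is a placeholder of mine, not any real device's interface.

```python
def thermostat_step(sensor_b_reading, t):
    """Apply the quoted rule to a single sensor reading."""
    if sensor_b_reading <= t - 1:
        return "turn on switch X"   # reading at or below t - 1
    if sensor_b_reading >= t + 1:
        return "turn on switch Y"   # reading at or above t + 1
    return "do nothing"             # within the two-degree dead band

# The rule operates on the reading alone; nothing in it explains how
# sensor B physically produces that reading.
print(thermostat_step(sensor_b_reading=18.5, t=20))  # -> turn on switch X
```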
N.B. The scholarship of Nancy Cartwright and the fuzzy-logic-based science of Lotfi Zadeh have an important bearing, I think, on the questions I am raising here. So, of course, does the vast body of learning now being produced by the army of scholars who are carefully studying the neurobiological, neurochemical, neuroelectrical [and neuromagnetic?] computational processes of the human animal. The next generation will be much better equipped than this one to tackle some of the epistemological and inferential puzzles that have bedeviled logicians, philosophers, epistemologists, psychologists, and legal scholars for many, many years.
A further N.B.: I have been talking here, once again, about the phenomenon and puzzle of partial knowledge. I think I have also been viewing -- have I not? -- the human creature as an organism. These ways of thinking about the human situation have implications for attempts at conscious regulation of inference.
Coming soon: the law of evidence on Spindle Law