Saturday, April 11, 2009

Partial Knowledge of Boxes with Partial Self-Knowledge

Imagine a box.

The box sometimes hoists an umbrella. Sometimes it does not.

I wonder: Can I use the box's umbrella hoist to determine whether it is raining? (I am too lazy to go outside and put my finger in the air.) So I interrogate the box (an intelligent box):

Q. Box, when do you hoist an umbrella?
A. When it rains.
Q. How do you determine if it is raining?
A. When I hear raindrops splatter.
Q. How do you tell the difference between raindrop-splatters and other sounds?
A. I'm not sure. But there are some signs I think I use -- for example, the sounds come frequently but not with invariable regularity, they create an echoing or pinging sound within me (I'm made out of metal), that sort of thing. I'm not sure I can tell you all the clues I use. But I'm sure I can distinguish rain-drop sounds from other sounds. Why do you ask?
Q. Oh, I'm just curious. Do you hoist an umbrella when it is not raining?
A. No, not usually. Why would I?
Q. Thank you, Box.
A. You're most welcome.
Later, looking down from the fourth floor of a building, I cannot see if it is raining but I see Mr. Box hoist an umbrella. I notice some people nearby also have hoisted their umbrellas. But I notice that some people are carrying umbrellas but have not opened or raised them. I wonder to myself: Is it raining?

I decide to focus on the behavior of Mr. Box. I decide, first, that it was trying to be truthful when it told me what leads it to hoist an umbrella. (So, to that extent, I think I can see inside the unusually-articulate and -intelligent box.) N.B. I need to keep in mind that Box might have bad sound sensors. But I decide to ignore this complication for now.

I recall that Box itself said it could not list all of the factors (sounds) that lead it to conclude that it is raining. So Box itself, if it is being honest, cannot clearly identify the factors that make it think (about rain or not-rain) what it thinks and do (hoist or not-hoist) what it does. But I think: with some effort perhaps both the box and I could imagine the factors that influence the box's decision or belief about whether it is raining.

I then think: "Mr. Box was trying to be truthful. But is it possible Box uses clues and signals other than sound to determine whether it is raining? Yes, I think that's possible, even unbeknownst to Box. For example, perhaps umbrella-hoisting by people influences what Box thinks about rain or not-rain. Or perhaps, unbeknownst to Box, it senses increases in moisture levels in the air." However, I caution myself: "It does not necessarily follow, of course, that Box's beliefs are uninfluenced by the clues it listed."

&&&

This box metaphor or parable (as ungainly as it is) highlights some factors that may have to be taken into account when we attempt to assess the accuracy of the reports of, say, people who claim they are "bite-mark experts" or, say, "fingerprint identification experts" or, say, "polygraph experts." (One interesting potential lesson is that bogus experts may nevertheless be good detectors [of, e.g., rain or not-rain], and may be influenced by their bogus methods, even though they have little self-understanding -- little understanding, that is, of what leads them to reach the conclusions and make the reports that they do. [Consider, for example, a polygraph expert who has good hunches about the people she tests with her fancy-looking polygraph equipment.])

But my ungainly parable also has a very broad moral: the possible omnipresence of genuine but partial knowledge and the frightening difficulties (both practical and theoretical) this sort of knowledge presents.

One illustration of partial but genuine knowledge: without knowing everything about the box but by having or making some pretty good explicit or tacit guesses about the workings of the box, I may be able to make pretty good guesses about rain or not-rain if I see the box hoist an umbrella. Of course, it is also possible -- particularly if I am dealing with a strange metal box -- that my guesses based on the box's umbrella-hoisting will be almost entirely worthless.
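The guess from umbrella-hoisting back to rain can be made concrete with Bayes' rule. The sketch below uses invented numbers (the base rate of rain and the box's hoisting rates are assumptions for illustration, loosely matching Box's "No, not usually" answer about hoisting in dry weather):

```python
# Illustrative Bayes-rule calculation: how informative is the box's
# umbrella-hoisting about rain? All probabilities are invented.

p_rain = 0.3                 # assumed prior probability that it is raining
p_hoist_given_rain = 0.9     # box usually hoists when it rains
p_hoist_given_dry = 0.05     # box rarely hoists otherwise ("No, not usually")

# Total probability of observing a hoist.
p_hoist = p_hoist_given_rain * p_rain + p_hoist_given_dry * (1 - p_rain)

# Posterior probability of rain, given that the box hoists its umbrella.
p_rain_given_hoist = p_hoist_given_rain * p_rain / p_hoist

print(round(p_rain_given_hoist, 3))  # → 0.885
```

On these made-up numbers a hoist lifts the probability of rain from 0.3 to roughly 0.89 -- genuine but partial knowledge of the box yields a pretty good, though fallible, guess. If the box's true hoisting behavior departs from the assumed rates (the strange-metal-box worry), the same arithmetic can make the guess almost entirely worthless.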

An important hypothesis: Sometimes we know -- whether tacitly or explicitly -- much but not everything about people who know much or something but not everything about themselves.

Question: Suppose we have such fragmentary but real knowledge. What steps (if any) can we take to improve it? And what steps (if any) can we take to increase the accuracy of judgments that other people (e.g., jurors) make about the reports of yet other people (e.g., witnesses) about still other people (e.g., a defendant who is, say, related to the witness who professes to be reporting and explaining that defendant's behavior)?

the dynamic evidence page

coming soon: the law of evidence on Spindle Law

Friday, April 10, 2009

Boxes That Misunderstand Themselves

The explanations that some putative experts give for their decisions may not be accurate; i.e., those given reasons may not explain why those putative experts decide as they do (or say what they say). If that is the case, understanding these experts' stated reasons would not allow us to predict the decisions (or inferences) of these supposed experts. However, it does not follow that such putative experts do not follow some rules or principles (that are unknown to them). In short, sometimes we will distrust the explanations that some experts give but we may yet believe that these supposed experts will sometimes be useful barometers (for reasons they themselves do not accurately understand). But to figure out just when these nincompoopish experts will be useful barometers and when they won't, we need to understand what makes them tick -- what really makes them tick. Otherwise we may foolishly say, "Well, these people correctly predicted the last recession. Even if they can't explain what leads them to make the predictions they do, it's a good bet they'll correctly predict the next recession." Well, maybe, and maybe not.


B.F. Skinner's Rats & Pigeons in B.F. Skinner's Mazes

B.F. Skinner (the famous behaviorist) wasn't interested in the internal mechanisms of the rats and pigeons in his mazes. B.F. Skinner was interested only (he said, as I recall) in the responses of his animals to positive and negative reinforcement (rewards and punishments).

But B.F. Skinner's animals would not have responded the way they did in the past if someone had snipped the chains of neurons and axons (or whatnot) that transmitted sensory signals from the animals' environments to the innards of the animals that B.F. Skinner (said he) didn't care about.

So what does this have to do with proficiency testing of experts?


Predicting (Inferring) a Box's Behavior (Outputs, Reports)

Assume:
Input: an observation (a signal, a possibly-sensed event).

Output: a statement.

Intermediary: a box.

Question: To predict a box's outputs given specified inputs, do you have to be able to see (or infer) the innards (or workings) of the box or is it sufficient to be able to observe the box's outputs in the past given specified inputs?
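One purely behavioral answer can be sketched in code: a predictor built only from the box's recorded (input, output) history, with no model of its innards. The signals and the toy history below are invented for illustration:

```python
from collections import Counter, defaultdict

# A purely behavioral predictor: it never inspects the box's innards,
# only its recorded (input, output) history. The data are invented.
history = [
    ("splatter-sound", "hoist"),
    ("splatter-sound", "hoist"),
    ("traffic-noise", "no-hoist"),
    ("splatter-sound", "no-hoist"),
    ("traffic-noise", "no-hoist"),
]

outputs_by_input = defaultdict(Counter)
for signal, report in history:
    outputs_by_input[signal][report] += 1

def predict(signal):
    """Return the box's most frequent past output for this input;
    with no history for the input, the predictor must abstain."""
    if signal not in outputs_by_input:
        return "unknown"
    return outputs_by_input[signal].most_common(1)[0][0]

print(predict("splatter-sound"))   # past behavior suggests "hoist"
print(predict("moisture-rise"))    # never observed, so "unknown"
```

Such a predictor works exactly as far as past input-output pairs go: it must abstain on inputs it has never seen, and it fails silently when the box's output depends on unrecorded inputs or on internal state -- which is one reason seeing (or inferring) the innards might matter after all.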


There Is No Law against Blue Skies, Is There?


Evidence of Things to Come


Spring: The Slow-Thinking Season

Legal business, law schools, and analytical thinking all slow during the "spring break," while everyone (here in the Northeast, in any event) impatiently awaits the arrival of warm & sunny weather. (But migratory ducks are already here and are almost gone.)

Well, in recompense perhaps I'll post something tonight or tomorrow about measuring the proficiency of putative experts. Otherwise I'll post a digital image or two.
