Since I don't share Noam Chomsky's political views, I wish he would stick to topics such as language and logic. There is a wonderful article in The Atlantic that purportedly deals with Chomsky's views of (the limitations of) "artificial intelligence," a/k/a computational intelligence. The article includes a perceptive introduction by the author (Yarden Katz) and a transcript of an interview with Chomsky. (A link to a video of the interview is also provided.)
In the course of the interview Chomsky makes points about a great variety of matters. Among them are associationist psychology, Bayesian techniques, and statistical ("big data") approaches to understanding natural and social phenomena. He says, for example:
"Suppose you want to predict tomorrow's weather. One way to do it is okay I'll get my statistical priors, if you like, there's a high probability that tomorrow's weather here will be the same as it was yesterday in Cleveland, so I'll stick that in, and where the sun is will have some effect, so I'll stick that in, and you get a bunch of assumptions like that, you run the experiment, you look at it over and over again, you correct it by Bayesian methods, you get better priors. You get a pretty good approximation of what tomorrow's weather is going to be. That's not what meteorologists do -- they want to understand how it's working. And these are just two different concepts of what success means, of what achievement is. In my own field, language fields, it's all over the place. Like computational cognitive science applied to language, the concept of success that's used is virtually always this. So if you get more and more data, and better and better statistics, you can get a better and better approximation to some immense corpus of text, like everything in The Wall Street Journal archives -- but you learn nothing about the language.
"A very different approach, which I think is the right approach, is to try to see if you can understand what the fundamental principles are that deal with the core properties, and recognize that in the actual usage, there's going to be a thousand other variables intervening -- kind of like what's happening outside the window, and you'll sort of tack those on later on if you want better approximations, that's a different approach. These are just two different concepts of science. The second one is what science has been since Galileo, that's modern science. The approximating unanalyzed data kind is sort of a new approach, not totally, there's things like it in the past. It's basically a new approach that has been accelerated by the existence of massive memories, very rapid processing, which enables you to do things like this that you couldn't have done by hand. But I think, myself, that it is leading subjects like computational cognitive science into a direction of maybe some practical applicability..."
Comment by Tillers: I have always liked what I take to be Chomsky's neo-Kantian and neo-Platonic approach to understanding "human behavior" such as language. In a broad sense, he takes the position that some sort of logic dwells within the human animal, a logic that generates, causes, or explains what the human animal does (with, for example, language).
- If you share Chomsky's general theoretical orientation, I think it follows that Bayesian accounts of factual inference about human behavior must be supplemented by "nomological structures" (usually called "generalizations"), which, in the case of the human animal, must describe -- or attempt to describe -- the internal (cognitive) "operating system," or mental world (both tacit and explicit), that the human animal in question uses. Cf. Peter Tillers, "Are There Universal Principles or Forms of Evidential Inference? Of Inference Networks and Onto-Epistemology," in William Twining, Philip Dawid & Dimitra Vasilaki, eds., Evidence, Inference and Enquiry (Oxford University Press & British Academy, 2011) (an SSRN prepublication version of the paper is available online).
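A toy calculation may make this point concrete (the scenario and the numbers are invented, and this is my gloss, not anything in the cited paper): Bayes' rule itself is trivial arithmetic, and the substantive work is done by the likelihoods, which are nomological premises -- generalizations -- about how people behave.

```python
# A toy illustration (my gloss, not Tillers's paper) of why Bayesian inference
# about human behavior needs generalizations: the likelihoods
# P(evidence | hypothesis) are nomological premises about how people behave;
# Bayes' rule supplies only the arithmetic that combines them with the prior.

# Hypothetical hypothesis: the witness is telling the truth.
prior_truthful = 0.5

# Generalizations (invented numbers): how often a truthful vs. an untruthful
# witness gives an account consistent with the physical evidence.
p_consistent_given_truthful = 0.9    # nomological premise, not data
p_consistent_given_untruthful = 0.3  # nomological premise, not data

# Bayes' rule: P(truthful | consistent) =
#   P(consistent | truthful) * P(truthful) / P(consistent)
numerator = p_consistent_given_truthful * prior_truthful
denominator = numerator + p_consistent_given_untruthful * (1.0 - prior_truthful)
posterior_truthful = numerator / denominator

print(f"P(truthful | consistent account) = {posterior_truthful:.3f}")  # 0.750
```

Change the two likelihoods and the posterior changes with them: the generalizations, not the data alone, carry the inferential weight -- which is the point about the internal "operating system" above.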
2 comments:
From the same Atlantic interview: "Chomsky: That's true. But what he says is: Let's ask ourselves how the biological system is picking out of that noise things that are significant. The retina is not trying to duplicate the noise that comes in. It's saying I'm going to look for this, that and the other thing. And it's the same with, say, language acquisition. The newborn infant is confronted with massive noise, what William James called "a blooming, buzzing confusion," just a mess. If, say, an ape or a kitten or a bird or whatever is presented with that noise, that's where it ends. However, the human infant, somehow, instantaneously and reflexively, picks out of the noise some scattered subpart which is language-related. That's the first step. Well, how is it doing that? It's not doing it by statistical analysis, because the ape can do roughly the same probabilistic analysis. It's looking for particular things. So psycholinguists, neurolinguists, and others are trying to discover the particular parts of the computational system and of the neurophysiology that are somehow tuned to particular aspects of the environment."
Id.: "Chomsky: You don't study the lung by asking what cells compute. You study the immune system and the visual system, but you're not going to expect to find the same answers. An organism is a highly modular system, has a lot of complex subsystems, which are more or less internally integrated. They operate by different principles. The biology is highly modular. You don't assume it's all just one big mess, all acting the same way."