This interesting article describes a Chinese research venture that rests on the assumption that truly intelligent machines must engage in "deep learning" rather than (merely) data crunching: Daniela Hernandez, "'Chinese Google' Opens Artificial-Intelligence Lab in Silicon Valley," Wired (April 12, 2013).

Hernandez seems to assert that the lab's research project rests on the assumption that conceptual structures, or representations, must be used to develop the most intelligent nonhuman machines possible, machines that are capable of "deep learning." But in at least part of the article Hernandez also seems to assert that the backers of this research project assume that such representations, or conceptual structures, can be identified simply by discovering and studying the physical structure and operations of the human brain (e.g., its neural networks). It is not at all clear to me that this assumption (if the lab does in fact make it) is warranted. The assumption does not follow merely from the plausible premise that the logical operations of human machines must be carried out in or through the brain.
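To make the worry concrete, here is a minimal sketch, not drawn from the article, of what a "representation" looks like inside a deep-learning system: the hidden-layer activations of a tiny neural network. The network sizes and values below are hypothetical; the point is only that what such a system "knows" is stored as numeric weight matrices and activation vectors, and inspecting that physical structure does not, by itself, hand you a human-readable conceptual structure.

```python
# Illustrative sketch only: a tiny one-hidden-layer network (hypothetical sizes).
# The hidden activation h is the network's learned "representation" of the input.
import numpy as np

rng = np.random.default_rng(0)

W1 = rng.normal(size=(4, 3))   # input-to-hidden weights (the "physical" parameters)
W2 = rng.normal(size=(3, 2))   # hidden-to-output weights

def forward(x):
    """Forward pass: returns the hidden representation and the output scores."""
    h = np.tanh(x @ W1)        # a distributed, purely numeric representation
    y = h @ W2                 # output computed from that representation
    return h, y

x = np.array([1.0, 0.0, -1.0, 0.5])
h, y = forward(x)
print("hidden representation:", h)   # a vector of numbers, not a labeled concept
print("output scores:", y)
```

Nothing in the printed vector announces which concept, if any, it stands for; any conceptual reading has to be supplied by interpretation from outside the physical description of the system.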
Evidence marshaling software MarshalPlan