- Paul M. Churchland, “Cognitive Activity in Artificial Neural Networks”, in D. Osherson and E. E. Smith (editors), An Invitation to Cognitive Science, Volume 3: Thinking, MIT Press, 1990
- This paper discusses NetTalk, a neural network program that learned to convert English text to speech. You can see a demo of NetTalk on YouTube; the voice in the demo sounds like a child's. Does this bias us toward believing that NetTalk is a truer simulation of child language learning?
- David Ferrucci, Eric Brown, Jennifer Chu-Carroll, James Fan, David Gondek, Aditya A. Kalyanpur, Adam Lally, J. William Murdock, Eric Nyberg, John Prager, Nico Schlaefer, and Chris Welty, “Building Watson: An Overview of the DeepQA Project”, AI Magazine 31(3), Fall 2010
- DeepQA uses a “kitchen sink” approach to computing answer probabilities. Rather than committing to a single technique, it lets many different techniques weigh in on each candidate answer, applying the Yogi Berra adage, “When you come to a fork in the road, take it.” Does this tell us anything about debates over which framework is the correct one for understanding cognition (e.g. Bayesian nets, neural nets, logic, heuristic rules)?
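The “many techniques weigh in” idea can be sketched as a weighted combination of candidate-answer scores. This is only an illustrative stand-in, not DeepQA's actual method (DeepQA trains a machine-learned model over hundreds of evidence features); the technique names and scores below are hypothetical.

```python
def combine_scores(candidate_scores, weights):
    """Combine per-answer scores from several techniques into one ranking.

    candidate_scores: {technique_name: {answer: score}}
    weights: {technique_name: weight} (missing techniques default to 1.0)
    Returns the best answer and the combined score table.
    """
    combined = {}
    for technique, scores in candidate_scores.items():
        w = weights.get(technique, 1.0)
        for answer, score in scores.items():
            combined[answer] = combined.get(answer, 0.0) + w * score
    best = max(combined, key=combined.get)
    return best, combined

# Hypothetical evidence from three techniques scoring two candidate answers.
best, scores = combine_scores(
    {
        "parsing":        {"Toronto": 0.2, "Chicago": 0.6},
        "retrieval":      {"Toronto": 0.7, "Chicago": 0.3},
        "knowledge_base": {"Chicago": 0.9},
    },
    {"parsing": 1.0, "retrieval": 1.0, "knowledge_base": 0.5},
)
# No single technique decides; the aggregate does.
```

The point of the sketch is that no individual technique has to be “the” correct framework: each contributes evidence, and the final answer emerges from the aggregate.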
- Jonathan Rauch, “Seeing Around Corners”, The Atlantic Monthly, April 2002
- With agent-based modeling, as with many other kinds of simulation, it can be hard to tell whether we should take a model seriously just because it reproduces some observed pattern of behavior in the real world, when we also know that it makes simplifying or outright incorrect assumptions. Nonetheless, the methodology is interesting and continues to attract growing attention.
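A classic example of the tension described above is Schelling's segregation model, which Rauch's article discusses: drastically simplified agents nonetheless reproduce real-world-looking segregation patterns. The one-dimensional toy version below is my own minimal sketch (grid size, threshold, and move rule are all illustrative choices, not Schelling's original parameters).

```python
import random

def schelling_1d(n=50, threshold=0.34, steps=200, seed=0):
    """Toy 1-D Schelling segregation model.

    Cells hold 0 (empty) or an agent of type 1 or 2. An agent is
    "unhappy" when fewer than `threshold` of its non-empty neighbors
    share its type; each step, one unhappy agent moves to a random
    empty cell. Even mild preferences tend to produce clustering.
    """
    rng = random.Random(seed)
    # Roughly 20% empty cells, the rest split between the two types.
    grid = [rng.choice([0, 1, 1, 2, 2]) for _ in range(n)]

    def unhappy(i):
        t = grid[i]
        if t == 0:
            return False
        nbrs = [grid[j] for j in (i - 1, i + 1) if 0 <= j < n and grid[j] != 0]
        if not nbrs:
            return False
        return sum(1 for x in nbrs if x == t) / len(nbrs) < threshold

    for _ in range(steps):
        movers = [i for i in range(n) if unhappy(i)]
        empties = [i for i in range(n) if grid[i] == 0]
        if not movers or not empties:
            break
        i, j = rng.choice(movers), rng.choice(empties)
        grid[j], grid[i] = grid[i], 0
    return grid

final = schelling_1d()
```

The model's assumptions are plainly false of real people, which is exactly the methodological puzzle: the clustering it produces looks like real segregation, but that resemblance alone does not validate the mechanism.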
- I defined “model” circularly here, or as we say in computing, “recursively.” 🙂