Intelligence without Representation: A Historical Perspective
Abstract
1. Introduction
2. The State of Artificial Intelligence Research Prior to 1991, as the Context for Intelligence without Representation
“No one talks about replicating the full gamut of human intelligence any more. Instead we see a retreat into specialized subproblems … Amongst the dreamers still in the field of AI … there is a feeling that one day all these pieces will all fall into place and we will see “truly” intelligent systems emerge.
However I, and others, believe that human level intelligence is too complex and little understood to be correctly decomposed into the right subpieces at the moment.”[1] (p. 140)
“The physical symbol system approach seems to be failing because it is simply false to assume that there must be a theory of every domain.”[10] (p. 330)
“It just couldn’t be right. We had a very complex mathematical approach, but that couldn’t be what was going on with animals moving their limbs about. Look at an insect, it can fly around and navigate with just a hundred thousand neurons. It can’t be doing this very complex symbolic mathematical computations. There must be something different going on.”[14]
3. Fundamental Aspects of Intelligence without Representation
3.1. Discarding Representation in Favour of Physical Embodiment in the Real-World Environment
“Representation is the wrong unit of abstraction in building the bulkiest parts of intelligent systems”[1] (p. 140)
“The idea was that by representing only the pertinent facts explicitly, the semantics of a world (which on the surface was quite complex) were reduced to a simple closed system once again. Abstraction to only the relevant details thus simplified the problems.”[1] (p. 140)
“(CAN (SIT-ON PERSON CHAIR)), (CAN (STAND-ON PERSON CHAIR))”[1] (p. 140)
“The common view in Artificial Intelligence, and particularly in the knowledge representation community, is that there is a central storage system which links together the information about concepts, individuals, categories, goals, intentions, desires, and whatever else might be needed by the system. In particular there is a tendency to believe that the knowledge is stored in a way that is independent from the way or circumstances in which it was acquired.”[16] (p. 14)
“Over the years within traditional Artificial Intelligence, it has become accepted that they will need an objective model of the world with individuated entities, tracked and identified over time—the models of knowledge representation that have been developed expect and require such a one-to-one correspondence between the world and the agent’s representation of it.”[16] (p. 16)
“My earlier paper [Intelligence without representation] is often criticized for advocating absolutely no representation of the world within a behavior-based robot. This criticism is invalid. I make it clear in the paper that I reject traditional Artificial Intelligence representation schemes (see Section 5). I also made it clear that I reject explicit representations of goals within the machine. There can, however, be representations which are partial models of the world.”[16] (p. 21)
3.2. Subsumption Architecture and Emergent Behaviour
“We don’t have any plans! [for our robots] These are all research robots built experimentally … patch upon patch upon kludge.”
4. Explaining Cognitive Processes Through Brooks’ Approach: How Does This Differ from Other AI Approaches?
“AI can help us understand the mind: what it is, as well as how it works.”[7]
“Computational studies, investigating similar problems being solved in a mind-body complex, help develop a more rigorous model and, more importantly, provide an understanding of the flow of information in and between these processes.”[19]
“Classicists believe that thinking just is the manipulation of items having propositional or logical form; connectionists insist that this is just the icing on the cake and that thinking (“deep” thinking, rather than just sentence rehearsal) depends on the manipulation of quite different structure. As a result, the classicist attempts to give a level 2 processing model which is defined over the very same kinds of structure as figure in her level 1 theory [note 5]. Whereas the connectionist insists on dissolving that structure and replacing it with something quite different.
A curious irony emerges. In the early days of Artificial Intelligence, the rallying cry was ‘Computers do not crunch numbers, they manipulate symbols’. This was meant to inspire a doubting public by showing how much computation was like thinking. Now the wheel has come full circle. The virtue of connectionist systems, it seems, is that ‘they do not manipulate symbols, they crunch numbers’. And nowadays we all know (don’t we?) that thinking is not mere symbol manipulation! So the wheel turns.”[21] (p. 306)
“neither classical nor nouvelle [behaviour-based] AI seem close to revealing the secrets of the holy grail of AI, namely general purpose human level intelligence equivalence”[8] (p. 4)
“Symbol systems in their purest forms assume a knowable objective truth. It is only with much complexity that modal logics, or non-monotonic logics, can be built which better enable a system to have beliefs gleaned from partial views of a chaotic world.
As these enhancements are made, the realization of computations based on these formal systems becomes more and more biologically implausible. But once the commitment to symbol systems has been made it is imperative to push on through more and more complex and cumbersome systems in pursuit of objectivity.”[8] (p. 4)
“Under the current scheme the abstraction is done by the researchers leaving little for the AI programs to do but search.”[1] (p. 143)
“In classical AI, none of the modules themselves generate the behavior of the total system. Indeed it is necessary to combine together many of the modules to get any behavior at all from the system.”[8] (p. 3)
5. Discussion of Brooks’ Approach
“This perspective has its intellectual roots in parts of recent sociological thinking which reject the entire fabric of western science.”[27]
“There are no variables … that need instantiation in reasoning processes. There are no rules which need to be selected through pattern matching. There are no choices to be made. To a large extent the state of the world determines the action of the Creature”[1] (p. 149)
“As a very approximate hand waving model of evolution, things get built up and accreted over time, and maybe new accretions interfere with the lower levels.”[14]
“Nouvelle AI relies on the emergence of more global behavior from the interaction of smaller behavioral units. As with heuristics there is no a priori guarantee that this will always work. However, careful design of the simple behaviors and their interactions can often produce systems with useful and interesting emergent properties.”[1]
6. Brooks’ Impact and Views on Subsequent Artificial Intelligence Research
6.1. Behaviour-Based Robotics after Intelligence without Representation
“automatic representation development, evolution, and repair must be a major goal of AI research over the next 50 years.”[50] (p. 85)
[and]
“Reasoning systems must be able to develop, evolve, and repair their underlying representations as well as reason with them. The world changes too fast and too radically to rely on humans to patch the representations.”[50] (p. 86)
“[The hope of Traditional AI] is that the ideas used will generalize to robust behavior in more complex domains. … [The hope of Nouvelle AI] is that the ideas used will generalize to more sophisticated tasks.
Thus the two approaches appear somewhat complementary. It is worth addressing the question of whether more power may be gotten by combining the two approaches.”
“As tasks become more representation-hungry (more concerned with the distal, abstract and non-existent) we will see more and more evidence of some kinds of internal representation and inner models.”[34] (p. 349)
“I believe myself and my children all to be mere machines. But this is not how I treat them. I treat them in a very special way, and I interact with them on an entirely different level … I maintain two sets of inconsistent beliefs and act on each of them in different circumstances. It is this transcendence between belief systems that I think will be what enables mankind to ultimately accept robots as emotional machines.”[60]
6.2. Brooks’ Views on Other Areas in Current AI Research
“In 1991 I wrote a long … paper [note 16] on the history of Artificial Intelligence and how it had been shaped by certain key ideas. In the final paragraphs of that paper I lamented that there was a bandwagon effect in Artificial Intelligence Research, and said that ‘[m]any lines of research have become goals of pursuit in their own right, with little recall of the reasons for pursuing those lines’.
I think we are in that same position today in regard to Machine Learning. The papers in conferences fall into two categories. One is mathematical results showing that yet another slight variation of a technique is optimal under some carefully constrained definition of optimality. A second type of paper takes a well-known learning algorithm, and some new problem area, designs the mapping from the problem to a data representation … and shows the results of how well that problem area can be learned.
This would all be admirable if our Machine Learning ecosystem covered even a tiny portion of the capabilities of human learning. It does not. And, I see no alternate evidence of admirability.
Instead I see a bandwagon today, where vast numbers of new recruits to AI/ML have jumped aboard after recent successes of Machine Learning, and are running with particular versions of it as fast as they can. They have neither any understanding of how their tiny little narrow technical field fits into a bigger picture of intelligent systems, nor do they care. They think that the current little hype niche is all that matters, are blind to its limitations, and are uninterested in deeper questions.” (Postscript to [65])
“we are in an intellectual cul-de-sac, in which we model brains and computers on each other, and so prevent ourselves from having deep insights that would come with new models.”[66]
“The dramatic success in machine learning has led to an explosion of artificial intelligence (AI) applications and increasing expectations for autonomous systems that exhibit human-level intelligence. These expectations have, however, met with fundamental obstacles that cut across many application areas. One such obstacle is adaptability, or robustness. Machine learning researchers have noted current systems lack the ability to recognize or react to new circumstances they have not been specifically programmed or trained for.”[67]
7. Concluding Remarks
“Can there be a theoretical analysis to decide whether one organization for intelligence is better than another? Perhaps, but I think we are so far away in understanding the correct way of formalizing the dynamics of interaction with the environment that no such theoretical results will be forthcoming in the near term.”[8] (p. 13)
“We have only just begun to explore the space of computational possibilities [for modelling our minds]. Changes of direction, and even the occasional dead end, should not be scorned as folly. Science grows not only by conjectures, but also by refutations.”[26] (p. 9)
Funding
Acknowledgments
Conflicts of Interest
References
- Brooks, R.A. Intelligence without representation. Artif. Intell. 1991, 47, 139–159.
- McCulloch, W.S.; Pitts, W.H. A Logical Calculus of the Ideas Immanent in Nervous Activity. Bull. Math. Biophys. 1943, 5, 115–133.
- Piccinini, G. The First Computational Theory of Mind and Brain: A Close Look at McCulloch and Pitts’s “Logical Calculus of Ideas Immanent in Nervous Activity”. Synthese 2004, 141, 175–215.
- Hayes, P.J. The Naive Physics Manifesto. In The Philosophy of Artificial Intelligence (1990); first published in Expert Systems in the Micro-Electronic Age; Edinburgh University Press: Edinburgh, UK, 1979; pp. 171–205.
- Newell, A.; Simon, H.A. Computer Science as Empirical Inquiry: Symbols and Search. Commun. ACM 1976, 19, 113–126.
- Paulius, D.; Sun, Y. A Survey of Knowledge Representation in Service Robotics. Robot. Auton. Syst. 2019, 118, 13–30.
- Boden, M.A. (Ed.) The Philosophy of Artificial Intelligence; Oxford University Press: Oxford, UK, 1990.
- Brooks, R.A. Elephants Don’t Play Chess. Robot. Auton. Syst. 1990, 6, 3–14.
- Turing, A.M. Computing Machinery and Intelligence. Mind 1950, LIX, 433–460.
- Dreyfus, H.L.; Dreyfus, S.E. Making a Mind versus Modelling the Brain: Artificial Intelligence Back at a Branch-Point. In The Philosophy of Artificial Intelligence (1990); first published in Daedalus 117, No. 1, 1988; Oxford University Press: Oxford, UK, 1990; pp. 309–333.
- Dreyfus, H.L. What Computers Can’t Do: The Limits of Artificial Intelligence; Harper and Row: New York, NY, USA, 1979.
- McDermott, D. A Critique of Pure Reason. Comput. Intell. 1987, 3, 151–160.
- Brooks, R.A. Rodney Brooks-Roboticist. 2008. Available online: https://people.csail.mit.edu/brooks/publications.html (accessed on 14 September 2020).
- Brooks, R.A.; Brockman, J. The Deep Question: A Talk with Rodney Brooks. 1997. Available online: https://www.edge.org/conversation/rodney_a_brooks-the-deep-question (accessed on 14 September 2020).
- Brooks, R.A.; Connell, J.; Ning, P. Herbert: A Second Generation Mobile Robot; AI Memos (1959–2004) AIM-1016; MIT: Cambridge, MA, USA, 1988.
- Brooks, R.A. Intelligence without Reason; Technical Report [Computers and Thought, IJCAI-91]; Artificial Intelligence Laboratory, Massachusetts Institute of Technology: Cambridge, MA, USA, 1991.
- Simon, H.A. The Sciences of the Artificial; MIT Press: Cambridge, MA, USA, 1969.
- Brooks, R.A. FAQ. 2007. Available online: https://web.archive.org/web/20070225132932/http://people.csail.mit.edu/brooks/faq.shtml (accessed on 14 September 2020).
- Yeap, W.K. Emperor AI, where is your new mind? AI Mag. 1997, 18, 137.
- Marr, D. Artificial intelligence: A personal view. In Mind Design; Haugeland, J., Ed.; MIT/Bradford Books: Cambridge, MA, USA, 1977.
- Clark, A. Connectionism, Competence and Explanation. In The Philosophy of Artificial Intelligence (1990); also published in The British Journal for the Philosophy of Science, 1990; Oxford University Press: Oxford, UK, 1990; pp. 281–308.
- Kuhn, T.S. The Structure of Scientific Revolutions; The University of Chicago Press: Chicago, IL, USA, 1962.
- Searle, J.R. Minds, Brains and Programs. Behav. Brain Sci. 1980, 3, 417–424.
- Gibson, J.J. The Ecological Approach to Visual Perception; Houghton-Mifflin: Boston, MA, USA, 1979.
- Harvey, I. Misrepresentations. In Proceedings of the Eleventh International Conference on Artificial Life; Bullock, S., Noble, J., Watson, R.A., Bedau, M.A., Eds.; MIT Press: Cambridge, MA, USA, 2008; pp. 227–233.
- Boden, M.A. New breakthroughs or dead-ends? Philos. Trans. Phys. Sci. Eng. 1994, 349, 1–13.
- Hayes, P.J.; Ford, K.M.; Agnew, N. On Babies and Bathwater: A Cautionary Tale. AI Mag. 1994, 15, 15–26.
- McCarthy, J.; Hayes, P.J. Some Philosophical Problems from the Standpoint of Artificial Intelligence. In Machine Intelligence 4; Meltzer, B., Michie, D., Eds.; Edinburgh University Press: Edinburgh, UK, 1969; pp. 463–502.
- Avraham, H.; Chechik, G.; Ruppin, E. Are There Representations in Embodied Evolved Agents? Taking Measures. Lect. Notes Artif. Intell. 2003, 2801, 743–752.
- Steels, L. Intelligence with Representation. Philos. Trans. Math. Phys. Eng. Sci. 2003, 361, 2381–2395.
- Min, H.; Yi, C.; Luo, R.; Zhu, J.; Bi, S. Affordance research in developmental robotics: A survey. IEEE Trans. Cogn. Dev. Syst. 2016, 8, 237–255.
- Zech, P.; Haller, S.; Lakani, S.R.; Ridge, B.; Ugur, E.; Piater, J. Computational models of affordance in robotics: A taxonomy and systematic classification. Adapt. Behav. 2017, 25, 235–271.
- Jonschkowski, R.; Brock, O. Learning State Representations with Robotic Priors. Auton. Robot. 2015, 39, 407–428.
- Clark, A. An embodied cognitive science? Trends Cogn. Sci. 1999, 3, 345–351.
- Etzioni, O. Intelligence without Robots: A Reply to Brooks. AI Mag. 1993, 14, 7–13.
- Nilsson, N.J. Eye on the Prize. AI Mag. 1995, 16, 9–17.
- Brooks, R. The big problem with self-driving cars is people. In IEEE Spectrum: Technology, Engineering, and Science News; IEEE: Piscataway, NJ, USA, 2017.
- Arkin, R.C. Behavior-Based Robotics; MIT Press: Cambridge, MA, USA, 1998.
- Michaud, F.; Nicolescu, M. Behavior-based systems. In Springer Handbook of Robotics; Siciliano, B., Khatib, O., Eds.; Springer: Cham, Switzerland, 2016.
- Watanabe, S.; Dunbar, B. People Are Robots, Too. Almost. 2007. Available online: https://www.nasa.gov/vision/universe/roboticexplorers/robots_like_people.html (accessed on 14 September 2020).
- Lyons, D.M.; Arkin, R.C.; Jiang, S.; Liu, T.M.; Nirmal, P. Performance verification for behavior-based robot missions. IEEE Trans. Robot. 2015, 31, 619–636.
- Martín, F.; Aguero, C.E.; Canas, J.M. A Simple, Efficient, and Scalable Behavior-Based Architecture for Robotic Applications. In Robot 2015: Second Iberian Robotics Conference; Springer: Berlin/Heidelberg, Germany, 2016; pp. 611–622.
- Lee, G.; Chwa, D. Decentralized behavior-based formation control of multiple robots considering obstacle avoidance. Intell. Serv. Robot. 2018, 11, 127–138.
- Nolfi, S.; Bongard, J.; Husbands, P.; Floreano, D. Evolutionary robotics. In Springer Handbook of Robotics; Springer: Berlin/Heidelberg, Germany, 2016; pp. 2035–2068.
- Rajagopalan, P.; Holekamp, K.E.; Miikkulainen, R. Factors that Affect the Evolution of Complex Cooperative Behavior. In Proceedings of the ALIFE 2019: The 2019 Conference on Artificial Life, Newcastle, UK, 29 July–2 August 2019; pp. 333–340.
- Urashima, H.; Wilson, S.P. A Self-organising Animat Body Map. In Living Machines: Conference on Biomimetic and Biohybrid Systems; Springer International Publishing: Milan, Italy, 2014; pp. 439–441.
- Williams, P.; Beer, R. Environmental Feedback Drives Multiple Behaviors from the Same Neural Circuit. In Proceedings of the ECAL 2013: The Twelfth European Conference on Artificial Life, Taormina, Italy, 2–6 September 2013; pp. 268–275.
- Brooks, R. The Philosophical Underpinnings of Work in Artificial Life. In Proceedings of the ALIFE 2018: The 2018 Conference on Artificial Life, Tokyo, Japan, 23–27 July 2018.
- Brooks, R.A. The Seven Deadly Sins of Predicting the Future of AI. 2017. Available online: https://rodneybrooks.com/the-seven-deadly-sins-of-predicting-the-future-of-ai/ (accessed on 14 September 2020).
- Bundy, A.; McNeill, F. Representation as a Fluent: An AI Challenge for the Next Half Century. IEEE Intell. Syst. 2006, 21, 85–87.
- Cunha, J.M.; Martins, P.; Lourenço, N.; Machado, P. Emojinating Co-Creativity: Integrating Self-Evaluation and Context-Adaptation. In Proceedings of the 11th International Conference on Computational Creativity, Coimbra, Portugal, 29 June–3 July 2020.
- Dahl, S.; Cenek, M. Towards Emergent Design: Analysis, Fitness and Heterogeneity of Agent Based Models Using Geometry of Behavioral Spaces Framework. In Proceedings of the ALIFE 2016: The Fifteenth International Conference on the Synthesis and Simulation of Living Systems, Cancún, Mexico, 4–8 July 2016; pp. 46–53.
- Berners-Lee, T.; Hendler, J.; Lassila, O. The Semantic Web. Sci. Am. 2001, 284, 34–43.
- Wallis, P. Intention without Representation. Philos. Psychol. 2004, 4, 209–223.
- Konidaris, G.D.; Hayes, G.M. An architecture for Behavior-Based Reinforcement Learning. Adapt. Behav. 2005, 13, 5–32.
- Wilson, S.W. The animat path to AI. In From Animals to Animats: Proceedings of the First International Conference on Simulation of Adaptive Behavior; Meyer, J.A., Wilson, S.W., Eds.; The MIT Press: Cambridge, MA, USA, 1991; pp. 15–21.
- Braitenberg, V. Vehicles: Experiments in Synthetic Psychology; MIT Press: Cambridge, MA, USA, 1986.
- al Rifaie, M.; Bishop, J.; Caines, S. Creativity and Autonomy in Swarm Intelligence Systems. Cogn. Comput. 2012, 4, 320–331.
- Jordanous, A. A Fitness Function for Creativity in Jazz Improvisation and Beyond. In Proceedings of the International Conference on Computational Creativity, Lisbon, Portugal, 7–9 January 2010; pp. 223–227.
- Brooks, R.A. Flesh and Machines: How Robots Will Change Us; Pantheon Books: New York, NY, USA, 2002.
- Stone, P.; Brooks, R.; Brynjolfsson, E.; Calo, R.; Etzioni, O.; Hager, G.; Hirschberg, J.; Kalyanakrishnan, S.; Kamar, E.; Kraus, S.; et al. “Artificial Intelligence and Life in 2030.” One Hundred Year Study on Artificial Intelligence: Report of the 2015–2016 Study Panel; Technical report; Stanford University: Stanford, CA, USA, 2016. Available online: http://ai100.stanford.edu/2016-report (accessed on 14 September 2020).
- Brooks, R.; Caine, M. Patent: Hybrid Training with Collaborative and Conventional Robots. 2019. Available online: https://patents.google.com/patent/US10514687B2/ (accessed on 14 September 2020).
- Ivanov, Y.A.; Brooks, R. Patent: Robotic Placement and Manipulation with Enhanced Accuracy. 2016. Available online: https://patents.google.com/patent/US9457475B2/ (accessed on 14 September 2020).
- Brooks, R.; Buehler, C.J.; Cicco, M.D.; Ens, G.; Huang, A.; Siracusa, M.; Williamson, M.M. Patent: Training and Operating Industrial Robots. 2015. Available online: https://patents.google.com/patent/US8965580B2/ (accessed on 14 September 2020).
- Brooks, R.A. Machine Learning Explained. 2017. Available online: https://rodneybrooks.com/forai-machine-learning-explained/ (accessed on 14 September 2020).
- Brooks, R. Avoid the Cerebral Blind Alley. Response to: Is the Brain a good model for machine intelligence? Nature 2012, 482, 462.
- Pearl, J. The seven tools of causal inference, with reflections on machine learning. Commun. ACM 2019, 62, 54–60.
- Brooks, R. The Seven Deadly Sins of AI Predictions. Mistaken extrapolations, limited imagination, and other common mistakes that distract us from thinking more productively about the future. MIT Technol. Rev. 2017, 6. Available online: https://www.technologyreview.com/s/609048/the-seven-deadly-sins-of-ai-predictions/ (accessed on 14 September 2020).
- Strannegård, C.; Svangård, N.; Lindström, D.; Bach, J.; Steunebrink, B. The animat path to artificial general intelligence. In Proceedings of the Workshop on Architectures for Generality and Autonomy, IJCAI-17, Melbourne, Australia, 19 August 2017.
- Wiedermann, J.; van Leeuwen, J. Understanding and Controlling Artificial General Intelligent Systems. In Proceedings of the 10th AISB Symposium on Computing and Philosophy, AISB Symposium X, Atlanta, GA, USA, 19–23 June 2017.
- Sloman, A. A Philosopher-Scientist’s View of AI. J. Artif. Gen. Intell. 2020, 11, 91–96.
- Brooks, R.A. My Dated Predictions. 2018. Available online: https://rodneybrooks.com/my-dated-predictions/ (accessed on 14 September 2020).
- Russell, S.; Norvig, P. Artificial Intelligence: A Modern Approach, 3rd ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2009.
1. See the Concluding Remarks for how this definition has evolved since.
2. Certainly AI was not progressing as hoped for: Turing expected his test of intelligence [9] to have been passed by the end of the twentieth century; however, at the time of writing, the Turing test still has not been passed.
3. It is interesting to note that although Brooks includes these publications in his list of publications on his academic profile [13], none of the symbolic AI papers are linked to full texts (though the full texts are accessible from other sources), whereas the vast majority of his post-1985 papers do have links.
4. As noted by a reviewer of this paper, “this failure to define the term “representation” fuels a lot of discussion in cognitive science. This implies that the term is not intuitively precise and that it means different things to different people”.
5. This is a reference to Marr’s 3-level cognitive model of information processing [20]. Level 1, the computational level, looks at what a system does, what functions it performs, and why. Level 2, the algorithmic level, looks at how a system operates, and the representations and processes it employs. Level 3, not mentioned here, looks at the physical realisation of the system, and is perhaps the most relatable to Brooks’ ideas (though by no means an accurate one-to-one mapping). Clark is making the point that, in contrast to a connectionist’s perspective, someone from a classicist (traditional AI) perspective treats how we think (Level 2) in terms of what functions may be occurring (Level 1).
6. Even then, Searle says this is only Weak AI, or the simulation of intelligence, rather than the demonstration of true intelligence itself.
7. And still emphasises; see the discussions later in this paper on Brooks’ impact on and opinions of modern-day AI research.
8. However, Brooks’ emphasis is on research of an implementational nature rather than theorising, so it is unlikely he would enter into much philosophical debate about his approach. Instead Brooks offers his robots as direct evidence of his theory.
9. Uncertainty may, of course, be no bad thing.
10. As a reviewer of this article notes, arguably this argument could be extended to animals as well.
11. Brooks refers to the inability of elephants to understand the game of chess rather than the obvious physical difficulties they would have in picking up the pieces.
12. Although Brooks later goes on to say that it is best to use his approach on its own rather than in combination with other AI approaches [14].
13. https://www.ijcai.org/awards, last accessed June 2020.
14.
15.
16. The paper Brooks refers to here is [16].