Does my thermostat have feelings?
In other words: does Artificial Intelligence 'know' in the same way that humans know?
Starting Points.
Artificial Intelligence (AI) has made remarkable progress in recent years, but it remains an open question whether AI can know in a way similar to humans. The philosophical debate about AI and its capacity for knowledge has been ongoing for decades: some argue that AI can truly know, while others claim that AI lacks the necessary cognitive faculties.
The first person to use the phrase “Artificial Intelligence” was the mathematician and computer scientist John McCarthy, at the Dartmouth Conference of 1956. In 1979 McCarthy wrote an insightful, and prophetic, paper directly addressing the relationship between AI and human knowledge, titled “Ascribing Mental Qualities to Machines”. This blog will draw upon McCarthy’s paper, which seems more pertinent today than ever, whilst also offering alternative perspectives.
Ascribing Mental Qualities to Machines
John McCarthy's 1979 paper "Ascribing Mental Qualities to Machines" discusses the idea of machines having mental qualities, such as beliefs and desires. McCarthy argues that machines can have mental states and that it is possible to describe them using a system of mental attribution. He proposes that mental attribution should be based on the machine's behaviour and its goals, rather than its physical structure. The paper highlights the importance of considering the relationship between mental states and physical systems in artificial intelligence research.
McCarthy explains that the "beliefs" a thermostat may be said to have relate to its operational functions, i.e. determining whether it should switch the heater on to make the room warmer, or switch it off to let the room cool. McCarthy further explains that the thermostat could not be said to have "beliefs about beliefs": it doesn't have beliefs about what temperature the room should be, or about whether it believes its beliefs. It certainly doesn't have beliefs outside of its operational (programmed) sphere, such as who won the Battle of Waterloo.
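To make McCarthy's point concrete, here is a minimal sketch (in Python; the set point and readings are made-up values) of a thermostat whose entire "belief system" is a single comparison:

```python
# A toy thermostat in the spirit of McCarthy's example.
# Its only possible "beliefs": the room is too cold, or it is not.

class Thermostat:
    def __init__(self, set_point: float):
        self.set_point = set_point          # the target temperature

    def heater_should_be_on(self, room_temp: float) -> bool:
        # The machine's entire "belief" about the world:
        # one comparison between a reading and the set point.
        return room_temp < self.set_point

t = Thermostat(set_point=20.0)
print(t.heater_should_be_on(17.5))  # True: it "believes" the room is too cold
print(t.heater_should_be_on(22.0))  # False: it "believes" the room is warm enough
# Note what is absent: no view on whether 20.0 is a *good* set point,
# no beliefs about its own beliefs, and nothing about the Battle of Waterloo.
```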
Some common criticisms of McCarthy’s position (as outlined in his 1979 paper) include the problems of defining mental states and attributing them to machines, the difficulty in creating a systematic method for mental attribution, and the concern that attributing mental states to machines may lead to anthropomorphizing them and losing sight of their true nature as mechanical systems.
Daniel Dennett’s critique of McCarthy
Daniel Dennett, philosopher and cognitive scientist, has written extensively on the topic of artificial intelligence and the idea of ascribing mental qualities to machines. Dennett has been critical of the idea that machines can have mental states, and he has put forward the argument that mental states are inherently subjective and cannot be reduced to purely physical or computational processes.
In his book "Consciousness Explained", Dennett argues that consciousness is an emergent property of complex systems, and that it is not possible to ascribe mental states to machines in the same way that we ascribe mental states to people. He proposes that the idea of mental states in machines is a form of anthropomorphism, and that it is more productive to focus on understanding the computational processes that underlie machine behaviour, rather than attributing mental qualities to them.
Dennett's views on this issue are significant because they challenge the idea that machines can have mental states and provide an alternative perspective on the relationship between mental states and physical systems in artificial intelligence.
Knowledge as Justified True Belief.
Questions on Artificial Intelligence in ToK inevitably start with the question of what knowledge is, and what it means to know. Many students are attracted to the classical definition of knowledge as “justified true belief” (the definition famously challenged by Gettier). Obviously, this definition has a number of implications for our studies in ToK. However, it's a useful place to start exploring the question of whether AI knows in the same way that humans know.
From a philosophical perspective, the concept of knowledge requires the existence of beliefs and understanding, as well as the ability to justify those beliefs through reasoning and evidence. Some argue that AI, with its vast amount of data and sophisticated algorithms, can possess knowledge in this sense. However, others argue that AI lacks the consciousness, introspection, and self-awareness that are necessary for true knowledge.
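Stated formally (one standard rendering; the symbols B_S and J_S are simply shorthand introduced here for "S believes" and "S is justified in believing"), the justified-true-belief analysis reads:

```latex
% The classical "justified true belief" (JTB) analysis of knowledge:
% a subject S knows a proposition p if and only if all three conditions hold.
S \text{ knows that } p \iff
  \underbrace{p}_{p \text{ is true}}
  \;\land\;
  \underbrace{B_S(p)}_{S \text{ believes } p}
  \;\land\;
  \underbrace{J_S(p)}_{S \text{ is justified in believing } p}
```

Gettier's celebrated counterexamples describe cases in which all three conditions hold and yet we hesitate to say the subject knows; this is worth bearing in mind when asking whether an AI's outputs could satisfy the belief and justification conditions at all.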
It's difficult to be rigorous about whether a machine really 'knows', 'thinks', etc., because we're hard put to define these things. We understand human mental processes only slightly better than a fish understands swimming.
John McCarthy, "The Little Thoughts of Thinking Machines", Psychology Today, December 1983, pp. 46–49. Reprinted in Formalizing Common Sense: Papers by John McCarthy, 1990, ISBN 0893915351
Characteristics of Knowing
(from McCarthy’s article “The Little Thoughts of Thinking Machines”):
Intention
Trying
Likes & Dislikes
Self-consciousness
Despite being written over 40 years ago, McCarthy’s work is highly relevant today, and very easily accessible (I strongly recommend both articles cited in this blog). His four characteristics of knowing provide a practicable starting point for ToK students exploring questions on AI and knowing. Whilst, prima facie, his characteristics may seem rather reductionist, they are pertinent to both human and machine knowledge, as they encompass goals, trial and error, reward and reflection.
Belief.
The difference between human belief and AI algorithms lies in the nature of their cognitive processes and the sources of their knowledge.
Human beliefs are shaped by a combination of personal experiences, cultural influences, emotions, and reasoning. They are based on a subjective understanding of the world and can change over time as new information becomes available.
In contrast, AI algorithms are based on objective rules and mathematical models that process data in a systematic and impartial way. They make decisions based on patterns in the data they have been trained on and do not have personal experiences or emotions that can influence their beliefs.
Furthermore, human beliefs often rely on intuition, speculation, and assumptions that may not be grounded in evidence, while AI algorithms can only make decisions based on the data they have been trained on and the rules they have been programmed with.
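To make the last point concrete, here is a minimal sketch (in Python, with invented data and numbers) in which an algorithm's entire "belief" about the world is two parameters fitted to four training examples. Change the data and the belief changes; nothing else can change it:

```python
# A toy "believer": logistic regression trained by gradient descent.
# Its whole belief system is the pair (w, b) below, fitted to data.
import math

# Invented training data: hours of cloud cover -> did it rain? (0/1)
data = [(0.0, 0), (2.0, 0), (6.0, 1), (9.0, 1)]

w, b = 0.0, 0.0                 # the model's entire "belief" about the world
for _ in range(5000):           # repeatedly nudge the belief toward the data
    for x, y in data:
        p = 1 / (1 + math.exp(-(w * x + b)))  # predicted probability of rain
        w += 0.05 * (y - p) * x               # gradient step on the weight
        b += 0.05 * (y - p)                   # gradient step on the bias

# No experience, emotion or intuition is involved: only the data speaks.
prob = 1 / (1 + math.exp(-(w * 7 + b)))
print(f"P(rain | 7 hours of cloud) = {prob:.2f}")
```

McCarthy himself anticipated this deflationary reading of machine "belief":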
Machines as simple as thermostats can be said to have beliefs, and having beliefs seems to be a characteristic of most machines capable of problem solving performance. However, the machines mankind has so far found it useful to construct rarely have beliefs about beliefs, although such beliefs will be needed by computer programs that reason about what knowledge they lack and where to get it. Mental qualities peculiar to human-like motivational structures, such as love and hate, will not be required for intelligent behavior, but we could probably program computers to exhibit them if we wanted to, because our common sense notions about them translate readily into certain program and data structures. Still other mental qualities, e.g. humor and appreciation of beauty, seem much harder to model.
John McCarthy, "Ascribing Mental Qualities to Machines," 1979; republished at www-formal.stanford.edu.
Descartes on Machines & Thinking.
René Descartes, French philosopher and mathematician, was one of the earliest thinkers to address the question of whether machines could think. Descartes believed that the mind and the body were separate entities, and that the mind was capable of thinking, reasoning, and having experiences, while the body was merely a physical machine.
In "Meditations on First Philosophy", Descartes argued that the mind was a non-physical substance that was distinct from the physical body. He believed that the mind was capable of independent thought and consciousness, and that it was not possible for a machine, no matter how sophisticated, to have these same qualities.
Descartes' views on this issue were influential in shaping the philosophical debates about the nature of consciousness and the relationship between mental states and physical systems. Although his views have been widely criticized and challenged in the centuries since they were first articulated, they remain an important part of the philosophical discourse on the topic of artificial intelligence and the possibility of machines having mental states.
In summary, the difference between human belief and AI algorithms lies in their sources of knowledge, their cognitive processes, and the nature of their decision-making. While AI algorithms can provide a highly accurate and objective approach to decision-making, they lack the subjective and personal aspects of human beliefs.
In terms of academic research, there is a growing body of evidence that AI can acquire knowledge through machine learning and other methods. For example, deep learning algorithms have been shown to successfully identify objects in images and perform other tasks that require knowledge of the world. However, there is still much work to be done in order to determine the limits of AI's knowledge acquisition and whether it can truly know in the same way as humans.
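As a concrete illustration of the kind of "knowledge" such systems display, here is a minimal sketch of object identification with an off-the-shelf pretrained network (this assumes the torchvision library, version 0.13 or later, and its bundled ImageNet weights; "cat.jpg" is a hypothetical local image file):

```python
# Object identification with a pretrained network: the model's
# "knowledge" is a fixed, trained mapping from pixels to class scores.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet18_Weights.DEFAULT        # ImageNet-pretrained weights
model = models.resnet18(weights=weights).eval()  # inference mode

preprocess = weights.transforms()                # resize/normalise pipeline
batch = preprocess(Image.open("cat.jpg")).unsqueeze(0)

with torch.no_grad():
    scores = model(batch)                        # scores over 1000 classes
best = scores.argmax(dim=1).item()
print(weights.meta["categories"][best])          # human-readable label
```

Whether mapping pixels to the label "tabby cat" amounts to knowing that a cat is present is, of course, the very question this blog is asking.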
Consciousness.
Can artificial intelligence be said to have consciousness?
Obviously, in order to answer the question of whether AI can be said to have consciousness, we first need to consider different perspectives on what constitutes consciousness.
Many writers have attempted to describe the different characteristics which could be said to constitute consciousness. A useful writer in this area is David Chalmers (1995), who made the distinction between the “easy problems” of consciousness (characteristics which can be described by neurology and cognitive science) and “the hard problem” (explaining why and how physical processes give rise to subjective experience).
The “easy problems” he describes include:
The focus of attention.
Assimilation of new knowledge into a knowledge system.
The communication of mental states.
Chalmers' 'easy problems' are a useful list for ToK students, as they provide a model which can easily be used to compare AI with humans: we can identify both humans and AI executing, solving and displaying these processes. However, are they 'the same' in both humans and AI? To answer such a question we would have to consider what we mean by 'the same', but we could also introduce the notion of qualitative differences at this point. Those arguing for a more distinct difference between humans and AI might argue that whilst we can apply the same labels to processes in both entities, there is a qualitative difference in the ways in which those processes are realised and experienced. This (conveniently) takes us on to the issue of consciousness.
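Before we turn to consciousness, it is worth seeing how literal the AI version of one of these processes is. In modern machine learning, "attention" names a concrete mechanism: items are weighted by similarity scores and averaged. The sketch below (in Python, with made-up scores; using softmax attention as an assumed analogue of Chalmers' "focus of attention") shows how narrow that mechanism is:

```python
# Softmax "attention": the machine analogue of "focus" is just a
# normalised weighting of items by relevance score.
# The four scores below are made-up numbers for illustration.
import math

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

scores = [0.1, 2.3, 0.4, 1.1]          # relevance of four items to a query
weights = softmax(scores)               # the "focus": most weight on item 2
print([round(w, 2) for w in weights])   # -> [0.07, 0.64, 0.1, 0.19]
```

Whether this mechanical reweighting is 'the same' as a human consciously attending to something is exactly the qualitative question raised above.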
What is Consciousness?
In this blog I only address the nature of consciousness insofar as it helps us to answer the question of whether AI knows in a way similar to the way in which humans know. A comprehensive discussion of what consciousness is would require a very long, and complicated, book - which is not the purpose of this blog!
Some of the main philosophical theories of human consciousness include:
Dualism: This theory, first proposed by René Descartes, holds that consciousness is a non-physical substance that exists separate from the physical body. According to dualism, the mind and body interact, but they are distinct entities.
Physicalism: This theory, as put forward by J.J.C. Smart, D.M. Armstrong, and Paul Churchland, proposes that consciousness is a byproduct of physical processes in the brain, and that there is no need to postulate a separate, non-physical substance to explain it. Physicalism is sometimes referred to as materialism or reductionism.
Idealism: Developed by writers including George Berkeley, Immanuel Kant, and Georg Wilhelm Friedrich Hegel, this theory holds that consciousness is the primary reality, and that everything else, including the physical world, is derived from consciousness.
Emergentism: This theory, developed by writers such as Samuel Alexander, C. Lloyd Morgan, and C.D. Broad, takes a more systems-based approach to the problem of consciousness. It suggests that consciousness emerges from complex interactions between physical processes, but that once it emerges, it cannot be reduced to these underlying processes.
It's worth noting that these theories have been developed and modified over time by many philosophers, and that the list of proponents for each theory is not exhaustive.
How do these theories relate to Artificial Intelligence?
Human & Machine Consciousness (David Gamez)
David Gamez, in his book Human and Machine Consciousness, gives us some useful ways in which to approach the question of whether AI can achieve consciousness. The first distinction that Gamez makes is that we can clearly build AI which exhibits behaviours associated with human consciousness, such as writing essays, driving cars and playing computer games. However, most commentators would agree that an external behaviour does not, in itself, constitute consciousness. For example, we often do things “without thinking”, i.e. we have not directed conscious attention towards our own behaviours.
Further, we can even build AI which is modelled on human consciousness, which will display conscious-like behaviours, and may even display evidence of self-awareness. However, this does not necessarily mean that the AI is experiencing consciousness.
To paraphrase Gamez:
A model of a river looks like a river, tells us about the form of a river, even about the functions of a river, but it is not necessarily wet.
Here we continue to see some of the distinctions between how and why questions, and necessary and sufficient conditions. To understand the function (how) of a river it is not necessary to make a river, to simulate a river, nor to even be like a river. Could this be the same with consciousness?
Further, Gamez argues that until we understand the relationship between human consciousness and the physical world we will not be able to simulate consciousness in artificial systems. This challenge presents itself even before we attempt to demarcate the necessary and sufficient conditions of consciousness, and the functions and causes of consciousness.
In many ways we could argue that Gamez is drawing upon a more physicalist tradition in thinking about consciousness. He treats AI systems as outcomes which are, to a degree, dependent upon the physical processes underlying human consciousness.
Free Will.
Consciousness, for most people, requires a degree of free will; or at least free will constitutes a desirable component of consciousness. However, Nahmias et al., in their paper “When Do Robots Have Free Will?”, challenge this apparent link between consciousness and free will. Both consciousness and free will are context dependent: we could easily be in a state in which we are highly aware (conscious) of our lack of free will. As such, they argue that free will is neither a necessary nor a sufficient condition for AI to be said to be self-aware, or conscious. This argument is a strong counterargument against the “AI achieves Singularity and takes over the world” scenarios so prevalent in popular fiction.
In this (very brief) discussion of free will and AI we raise some of the issues arising from the Dualist and Idealist theories of consciousness. If consciousness is only tenuously related to the physical structures within which it occurs, or whence it is produced, then it could be said to be ‘beyond’ the limitations of those structures. This discussion brings into sharp focus the question of the purpose of consciousness, and free will, in an AI system.
Imagination.
Most people view imagination as being central to human consciousness, and it is often posited as a necessary condition of our free will. Critics of the notion that AI can achieve consciousness often cite the difficulty of simulating imagination as evidence that AI cannot become conscious. Hilary McLellan, in her work AI & Imagination, takes a systems-theory approach rather than a functionalist, atomistic approach to the issue of imagination. In systems theory the emphasis is placed on the processes and interactions between the components of the system, rather than on the components themselves; it is in these processes and interactions that we see the product or output. The processes work in a synergistic manner, in which the outcome is greater than the sum of the products of the individual components. As such, the product of the AI is greater than the sum of the outputs of the individual units of code.
McLellan situates AI in the symbiotic relationship between culture and technology: culture changes technology, and technology changes culture. In such an environment, adaptation and evolution have low context relevancy (adaptation happens as context changes), whereas stability has very high context relevancy (the organism, or machine, can only remain stable if the context remains stable). This observation helps us to start to answer the questions of why AI would need imagination, and why AI might need imagination to attain consciousness.
In evolutionary psychology imagination is often seen as a survival function, allowing us to predict threats, and to prepare for and take actions to avoid them. If we see imagination as a necessary condition for AI to be said to be conscious, we will need to consider whether AI has the same need for threat protection.
AI, Context & Consciousness.
Both Heidegger and Dreyfus discussed the importance of context when considering whether AI knows in a similar way to humans. Heidegger, in explaining Dasein, described how we achieve many things because of the way in which the world works, rather than because of our explicit knowledge of how things work. For example, we can fly from one country to another by getting on a plane; we don’t actually have to know how a jet engine works to achieve this goal.
All of our human behaviours are highly context specific: we are consciously aware of, and subconsciously sense, the immense range of factors constituting context. Dreyfus called this immensely complex context “background”. Both writers argued that it is very difficult, nigh impossible, to code for or artificially simulate context. Heidegger argued that this difficulty is compounded because we have very limited understanding of our human contexts, and of the influences on, and experiences of, those contexts.
Dreyfus further developed Heidegger’s Dasein in his discussion of the unconscious mental processes which give us a sense of context, and in turn allow us to be consciously aware of our position and intention in context. Contextual understanding, and contextually appropriate conscious thought and behaviour, require the corralling of a seemingly infinite number of facts. We have to select this huge number of facts from an even greater number of facts; to make correct selections we need rules about context-based selection, and these rules are, in turn, underpinned by even more facts about the selection rules. As such we see a mutually inclusive relationship between facts, rules, selection and context. Both writers argued that such endless possibilities are nigh impossible to code for.
Critics of Heidegger and Dreyfus level a key critique at this argument: AI is only required to operate within a very narrow band of contexts. AI doesn’t need to understand the wide range of contexts that a human may encounter; an AI which is developed to identify a particular type of fruit has a narrow band of contexts, with very limited variation.
Creativity.
AI can simulate certain aspects of human creativity, but it cannot truly replicate the full extent of human creativity. Human creativity involves the ability to generate novel and original ideas, to make connections between seemingly disparate concepts, and to imagine new possibilities. While AI can generate novel outputs based on patterns it has learned from data, it still lacks the ability to truly understand the context and meaning behind its outputs, and to experience the emotions and intuition that drive human creativity.
Additionally, AI algorithms are limited by the data and parameters they are trained on, and they may generate outputs that are not truly creative or original. In this sense, AI can only simulate certain aspects of human creativity and cannot fully replicate the richness and complexity of human imagination.
There have been some fascinating attempts to develop AI that displays archetypal creative behaviours. At a surface level, these involve designing the AI to write music or paint in the style of pre-existing human artists.
Daddy's Car is a song devised and produced by an AI, written in the style of the Beatles.
Art created by the AI DALL·E 2. I asked the AI to create "an oil painting in the style of Lucian Freud of a man reading a book in a library".
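For the curious, here is a hedged sketch of how such an image could be requested programmatically (this assumes the openai Python package as it stood in early 2023, with a valid API key; the interface may well have changed since):

```python
# Requesting a DALL-E image via the OpenAI API (early-2023 interface).
import openai

openai.api_key = "sk-..."  # placeholder; set your own API key

response = openai.Image.create(
    prompt=("an oil painting in the style of Lucian Freud "
            "of a man reading a book in a library"),
    n=1,                # one image
    size="1024x1024",   # output resolution
)
print(response["data"][0]["url"])  # URL of the generated image
```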
However, the field of AI and creativity is an active area of research, and AI is being developed to perform increasingly complex tasks related to creativity, such as generating art and music, or creating new designs and products. As AI continues to evolve, it is possible that it may one day be able to simulate human creativity in more sophisticated ways. But for now, human creativity remains a unique and valuable aspect of human cognition that cannot be fully replicated by AI.
Conclusion
Overall, the question of whether AI can know in a similar way to humans is still a matter of ongoing debate and research. While AI has made great strides in recent years, there is still much that is unknown about its capacity for knowledge and understanding. As the field of AI continues to evolve and advance, it is likely that we will gain a better understanding of the limits of AI's knowledge and its potential for truly knowing in the same way as humans.
Most crucially for ToK students, we could argue that as we develop AI's capacity and abilities, our definitions, or labels, of what it is doing will develop. We will develop labels beyond "machine learning", "algorithm", etc. to describe the new and richer processes that AI is capable of. In turn we may also develop new labels for human ways of knowing. As such, this very question ("Can AI know in the same way as humans?") will develop and change in the future.
Daniel, Lisbon, February 2023
Other posts / videos on Technology:
We need to talk about Pune India.
How does technology change our pursuit of knowledge?
Bibliography
Brandhorst, Kurt. Descartes’ Meditations on First Philosophy. Bloomington, Ind., Indiana University Press, 2010.
Chalmers, David. 1995. “Facing up to the Problem of Consciousness.” Journal of Consciousness Studies 2(3): 200-219.
Dennett, Daniel C. Consciousness Explained. Little, Brown and Company, 2007.
Gamez, David. “Machine Consciousness.” Human and Machine Consciousness, 1st ed., Open Book Publishers, 2018, pp. 135–48. JSTOR, http://www.jstor.org/stable/j.ctv8j3zv.14. Accessed 31 Jan. 2023.
McCarthy, John. "Ascribing Mental Qualities to Machines." In Martin Ringle (ed.), Philosophical Perspectives in Artificial Intelligence. Humanities Press, 1979.
McCarthy, John. "The Little Thoughts of Thinking Machines." Psychology Today, December 1983, pp. 46–49. Reprinted in Formalizing Common Sense: Papers by John McCarthy, 1990, ISBN 0893915351.
Nahmias, Eddy, et al. “When Do Robots Have Free Will?: Exploring the Relationships between (Attributions of) Consciousness and Free Will.” Free Will, Causality, and Neuroscience, edited by Bernard Feltz et al., vol. 338, Brill, 2020, pp. 57–80. JSTOR, http://www.jstor.org/stable/10.1163/j.ctvrxk31x.8. Accessed 31 Jan. 2023.
Preston, Beth. “Heidegger and Artificial Intelligence.” Philosophy and Phenomenological Research, vol. 53, no. 1, 1993, pp. 43–69. JSTOR, https://doi.org/10.2307/2108053. Accessed 31 Jan. 2023.