As part of my work on the CRONOS project I developed a new approach to synthetic phenomenology that tackles both the a priori question of whether robots are capable of conscious states and the empirical analysis and description of a machine's consciousness at any point in time. This page briefly covers my approach to these problems; much more information about this work can be found in Chapter 4 and Chapter 7 of my recent PhD thesis.
Can a Machine Experience Phenomenal States?
One of the most common questions that people ask about a purportedly conscious artificial system is: "It is behaving in an apparently purposive and even conscious manner, but is it really conscious?" The discussion then shifts to an examination of the type of architecture that is used to create the system's behaviour. If the behaviour is produced by the population of China communicating with radios and satellites, or through some arbitrary manipulation of symbols, then people are likely to say that the system is not conscious and has only a surface appearance of consciousness. On the other hand, if the system is using biological neurons to produce its behaviour, then people are much more likely to attribute real consciousness to it.
Since type I potential correlates of consciousness cannot be empirically separated out, we will never be able to say for certain whether or not a system is conscious. Although we might be tempted by some form of mysterianism, this leaves ethical questions unanswered, and in the future we are likely to attribute phenomenal states to machines independently of any justification that we have for this. The solution that I have developed to this problem is an ordinal machine consciousness (OMC) scale that models our subjective judgements about consciousness and makes predictions about what people would say about the consciousness of non-human systems based solely on their type I potential correlates of consciousness. More information about the ordinal probability scale can be found here.
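The basic idea of an ordinal scale can be sketched in a few lines of code. To be clear, everything in this sketch is a hypothetical illustration: the substrate categories, the `omc_rank` function and the numeric ranks are invented here, and are not the actual OMC scale, which is built from modelled human judgements as described in the thesis.

```python
# Toy sketch of an ordinal scale in the spirit of the OMC scale.
# The categories and ranks below are HYPOTHETICAL illustrations only;
# the real scale is derived from modelled human judgements about
# type I potential correlates of consciousness.

def omc_rank(system):
    """Map a system's (hypothetical) physical substrate to an ordinal rank.

    A higher rank means people would be predicted to be more willing
    to attribute consciousness to the system. The numbers only encode
    an ordering, not a quantity of consciousness.
    """
    ranks = {
        "biological_neurons": 3,     # e.g. a human brain
        "neuromorphic_hardware": 2,  # e.g. a silicon spiking network
        "conventional_computer": 1,  # e.g. symbol manipulation, China brain
    }
    return ranks.get(system.get("substrate"), 0)

human_brain = {"substrate": "biological_neurons"}
china_brain = {"substrate": "conventional_computer"}

# Only the ordering is meaningful on an ordinal scale.
print(omc_rank(human_brain) > omc_rank(china_brain))  # True
```

Because the scale is ordinal, differences between ranks carry no meaning; the sketch only predicts that one system would be judged more likely to be conscious than another.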
The Contents of a Machine's Consciousness
Once the a priori question about a machine's consciousness has been answered, there remains the empirical question of how much consciousness the system is experiencing on a moment-to-moment basis and what the contents of this consciousness are. My approach is based on precise definitions of mental states and representational mental states, with different theories of consciousness being used to make predictions about the association between mental states and phenomenal states. In my recent PhD I used Tononi's, Aleksander's and Metzinger's theories to make predictions about the distribution of consciousness in a spiking neural network, which are shown in figures 1-3 below.
Figures 1-3 show that Tononi's, Aleksander's and Metzinger's theories make very different predictions about the distribution of consciousness in the network, which could be tested by applying this methodology to the human brain. Information about how these predictions were generated can be found in Chapter 7 of my recent thesis.
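To give a flavour of how a theory can be turned into a measurable prediction, the toy sketch below computes the mutual information between two groups of units in a small binary system, loosely in the spirit of Tononi's information-integration approach. This is not the calculation used in the thesis; the three-unit system and the measure itself are assumptions made here purely for illustration.

```python
# Toy information-integration style measure: mutual information (in bits)
# between two groups of units, over a uniform distribution of system states.
# This is an ILLUSTRATION only, not the Phi calculation from the thesis.
import math

def mutual_information(states, part_a, part_b):
    """I(A;B) = H(A) + H(B) - H(A,B) for equally likely binary state tuples.

    `states` is a list of tuples; `part_a` and `part_b` are tuples of
    unit indices selecting the two groups being compared.
    """
    n = len(states)

    def entropy(indices):
        counts = {}
        for s in states:
            key = tuple(s[i] for i in indices)
            counts[key] = counts.get(key, 0) + 1
        return -sum(c / n * math.log2(c / n) for c in counts.values())

    return entropy(part_a) + entropy(part_b) - entropy(part_a + part_b)

# A hypothetical 3-unit system: unit 2 copies unit 0, unit 1 is independent.
states = [(a, b, a) for a in (0, 1) for b in (0, 1)]

print(mutual_information(states, (0,), (2,)))  # 1.0 bit: units are integrated
print(mutual_information(states, (0,), (1,)))  # 0.0 bits: units are independent
```

A theory that ties consciousness to integration would predict phenomenal states in the first pair of units but not the second; the theories in the thesis make much richer, and mutually incompatible, predictions of this general kind.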
Describing a Machine's Phenomenal States
Nagel and Chrisley have highlighted a number of issues with the description of artificial phenomenal states in human language. My solution to these problems is to use XML to describe phenomenal states, which makes fewer assumptions about the common ground between the consciousness of human and artificial systems. More information about this approach to synthetic phenomenology can be found in Chapter 4 of my recent PhD thesis.
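The snippet below sketches what such a machine-readable description might look like, using Python's standard XML library. Every element and attribute name here is invented for illustration; the actual markup scheme is the one defined in Chapter 4 of the thesis.

```python
# Hypothetical sketch of an XML description of a machine's phenomenal state.
# Element and attribute names are INVENTED for illustration; the real
# scheme is defined in Chapter 4 of the thesis.
import xml.etree.ElementTree as ET

# A phenomenal state at a (hypothetical) time point in a network run.
state = ET.Element("phenomenal_state", time_ms="1250")
rep = ET.SubElement(state, "representation", modality="vision")
ET.SubElement(rep, "neuron_group", layer="input_analogue", active_neurons="312")

xml_text = ET.tostring(state, encoding="unicode")
print(xml_text)
```

Because the description is structured data rather than natural language, it does not force artificial phenomenal states into human sensory vocabulary, which is the point of the approach.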