“‘Come now, and let us reason together,’
Says the Lord,
‘Though your sins are like scarlet,
They shall be as white as snow;
Though they are red like crimson,
They shall be as wool’” (Isaiah 1:18)
In part 1 and part 2 of this series I hope I explained with a modicum of clarity that NLP is hard — in fact, NP-hard, which is to say we probably do not yet have the means or the materials to get there. What does it mean to ‘get there’? Per Turing, when a machine cannot be distinguished from a human in an open-ended conversation, we might presume that the machine is ‘thinking’, simply because we have no other mechanism for measuring ‘thinking’. Three questions come to mind:
- Can we reliably measure thinking in some other way besides conversation?
- If a machine does ‘think’ does that imply machine consciousness?
- Is thinking required in order for ‘reason’ to emerge?
Measuring Thinking
Human brain activity is routinely measured via fMRI, and some tools are sensitive enough to record the activity of a single neuron. https://engineering.mit.edu/engage/ask-an-engineer/how-are-thoughts-measured/
Neuron activity does not necessarily correlate to ‘thinking’, however. Your heart beats whether you are aware of it or not. Does that sort of nervous system activity mean ‘thought’ or thinking is involved? If so, then doesn’t a computer program that manages some real-time process also qualify? If so, then doesn’t any in silico neural network also qualify? If not, then perhaps neuron activity alone is not sufficient to conclude that a human (or machine) is actually thinking, and therefore, other measures must at least be included.
What does it mean to think? Per Turing once more, from the first paragraph of his seminal paper:
I propose to consider the question, ‘Can machines think?’ This should begin with definitions of the meaning of the terms ‘machine’ and ‘think’. The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words ‘machine’ and ‘think’ are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, ‘Can machines think?’ is to be sought in a statistical survey such as a Gallup poll. But this is absurd.
Brain activity alone cannot be the measure of thinking. Yes, there are clearly strong correlations between thought and measurable brain activity. But what is the cause? Does brain activity ‘cause’ thinking? Or is it the other way around? Or is brain activity Plato’s shadows on the cave wall, as it were, with consciousness being the source of light? I would argue that we are still swimming in very shallow waters when it comes to a science-based (i.e., neurological) understanding of consciousness. When we have evidence for human perception absent brain activity in near-death experiences, and no tractable explanation from reductionist approaches, I believe it is safe to say there is much we just don’t know. Rather than delve further into that topic, let me simply say that we do not yet have a satisfactory explanation from science for either the emergence or the nature of human consciousness. Theories, yes. But science is still watching shadows dance on the wall.
Where does that leave us with respect to measuring thinking? Turing thought it through pretty well, in my view. Short of joining the Society of Solipsists on Facebook, which is absurd, how can I know thought exists outside my own internal universe? Which brings us back to a belief: I can start to presume you think too if we satisfactorily engage in conversation of some kind. Why? Because language is infinite in possibilities. Yes, I can and do predict what you might say in response to something I might say. But more often than not, I am surprised by what you say, and whether we agree, disagree, entertain, or confound each other, we are engaged in a process that, by its very nature, is delicately poised on an infinite adjacent possible.
While we can produce applications that do appear to exhibit some level of ‘understanding’ of human language, no system to date has truly passed Turing’s test. None. Despite decades of research, incredible Moore’s-Law-fueled supercomputers, application-specific chips, trillions of gigabytes of data, and well-funded teams of researchers all around the world, we have yet to pass Turing’s (simple?) test with clarity and consistency. Can we reliably, consistently, and accurately measure ‘thinking’ in some way other than conversation? No. We cannot. We can measure, to some extent, use-case-specific language abilities. But nothing in the way of general conversational skills when it comes to machines.
Thinking vs Consciousness
Is there a difference between thinking and consciousness? Can one be conscious without thinking? Can one think without being conscious? Are they synonyms? Depending on academic perspective, the specific answers to those questions might take on different tones, but the colors would likely be similar, whether neurological, philosophical, psychological, or spiritual. Thinking and consciousness are not the same thing. Consciousness is a state of being. The car is running and the engine is idling, but nothing is moving until we put it in gear and hit the gas pedal. Thinking is what happens when we start moving. Energy is expended and a series of actions (thoughts) occur or are allowed to occur. Although some may take issue with this wording, I think it is fair to suggest, based on evidence, that one can be conscious, or aware of existing, but not in a thinking state. Meditation techniques routinely encourage practitioners to achieve such pure, albeit difficult, states of consciousness. We can measure brain waves that tend to correlate with categorical states of consciousness, some of which imply a type of thinking. Perhaps we could do the same with machines? Would that then mean the machine is ‘thinking’?
Some noted philosophers would argue that conscious thoughts are an illusion. That is not necessarily to say that consciousness is an illusion, but that being conscious of a thought is an illusion. Others would argue that consciousness itself is an illusion. (https://mindmatters.ai/2019/01/has-science-shown-that-consciousness-is-only-an-illusion/) Given the unsettled and often unsettling debates regarding the nature of human consciousness, and of thought as a function of it, illusion or not, it should be clear to any thinking person that the experience of consciousness and the experience of thinking can be quite distinct, although very much related. One (consciousness) contains the other (thinking), and not the other way around. Mind gives rise to brain. Mind can change brain (neuroplasticity).
For humans, thinking implies consciousness. Might we say the same for machines? If so then the question Turing posed: “Can machines think?”, must also therefore imply, “Can a machine be conscious?” And we continue to be faced with the solipsistically unsolvable question: How can we ever know?
Whence Reason
To the third question above, is thinking required for ‘reason’ to emerge? Clearly it depends on what we mean by ‘reason.’
Depending on the definition, ‘reason’ implies the conscious application of logic. In other words, thinking. Yes, we can produce logic-based applications that blindly apply rules to given inputs. And we can produce networks of simple logical mechanisms named for what we believe are the basic functions of human neurons. Such networks also learn and then apply rules to a given input. Strictly speaking, is that reasoning? If we leave thinking out of the definition, is it accurate to say that output based on the application of logical rules to a given input is ‘reason’? If so, then isn’t a mechanical relay machine, as proposed 70 years ago, a reasoning device? Philosophically, reason might be reduced to the process of drawing logical inferences, in which case our machines are becoming increasingly reasonable, are they not?
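To make the question concrete, here is a minimal sketch (my own illustrative example, not drawn from any particular system) of one of those ‘simple logical mechanisms’: a single artificial neuron whose weights and bias happen to implement logical AND. It blindly applies a learned rule to its inputs; whether that counts as ‘reason’ is exactly the question.

```python
def neuron(inputs, weights, bias):
    """Fire (1) if the weighted sum of inputs crosses the threshold."""
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if activation > 0 else 0

def logical_and(a, b):
    # These particular weights and bias make the unit compute AND:
    # only 1 + 1 - 1.5 = 0.5 exceeds the threshold.
    return neuron([a, b], weights=[1.0, 1.0], bias=-1.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", logical_and(a, b))
```

A relay machine could do the same with switches; the network merely learns its weights instead of having them wired in.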
If reason emerges from machines, or at least a level of reason such as to actually give rise to an illusion of consciousness, answering an emphatic ‘yes’ to Turing’s original question, then perhaps NLP is not the answer. At least not NLP alone.
A relatively recent blog entry from DeepMind at Google, discussing Open-Ended Learning Leads to Generally Capable Agents, points to some interesting possibilities. The upshot of the research is promising insofar as AGI is concerned, which could also bring us asymptotically closer to clearing the Turing Test hurdle. Imagine a game space, some 3-D environment in which autonomous agents learn and compete for game-space dominance. Borrowing lessons learned from other game-solving approaches such as Q-Learning, populations of agents are trained sequentially, with a Darwinian seasoning: each new generation of agents, bootstrapping from the best agent in the previous generation, iteratively improves the frontier of normalized score percentiles while at the same time redefining the evaluation metric itself – an open-ended learning process. Reading this blog in 2021, I was reminded of many of my own musings, published in 2004, regarding autonomous agents, Darwinian selection processes, and the evolution of intelligence in silico. If you haven’t read my book, it may yet be of some interest as the Network Age continues to unfold.
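As a thought experiment only (the agents, tasks, and selection scheme below are my own toy assumptions, not DeepMind’s actual method), the generational loop described above might be sketched like this: each generation mutates copies of the previous best agent, and the task distribution itself shifts as the frontier advances.

```python
import random

random.seed(0)

def evaluate(agent, tasks):
    """Normalized score: fraction of tasks this 'agent' solves."""
    return sum(1 for t in tasks if agent >= t) / len(tasks)

def new_tasks(frontier, n=50):
    """Open-endedness: tasks are resampled around the current frontier,
    so the evaluation metric moves as the population improves."""
    return [random.uniform(0.0, frontier + 1.0) for _ in range(n)]

best = 0.0        # seed 'agent' (a bare skill level, for illustration)
frontier = 1.0
for generation in range(10):
    tasks = new_tasks(frontier)
    # Darwinian seasoning: children are mutated copies of the last best.
    population = [best + random.gauss(0, 0.3) for _ in range(20)]
    best_score, best = max((evaluate(a, tasks), a) for a in population)
    frontier = max(frontier, best)   # the goalposts themselves advance
```

The real system trains neural agents in rich 3-D worlds, of course; the point of the sketch is only the shape of the loop: select, mutate, re-measure, and let the measure itself evolve.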
If general capabilities can and do emerge in such a fitscape, as described by DeepMind, then it seems to me that NLP, married to agents in such an environment, perhaps even married to hybrid systems such as Marcus described, might actually allow Pinocchio to at least discuss matters with us, allowing us to suspend disbelief long enough to accept Turing’s wisdom and the test he provided so many years ago.
So here today I declare there is yet hope. These are dangerous times. Despite disturbing evidence to the contrary in recent years, I still believe there is yet hope for humanity. And software may yet be tamed to work with us lowly humans, rather than wielded by nefarious global entities to enslave us.
A final word on Turing for this series. Assembling notes from several students who attended Wittgenstein’s Lectures on the Foundations of Mathematics at Cambridge in 1939, Cora Diamond of UVA gave us a glimpse into a special moment in the 20th century, when the gifted student Alan Turing studied and debated with the renowned philosopher, who presumably influenced Turing’s views regarding both language and mathematics.
Human language has infinite possibilities — the infinite adjacent possible. Words alone are symbols. The meaning of words provides the essence of language understanding. Humans have experience, culture, interactions, cycles, seasons, and a mortal coil from which to breathe. Machines do not. Meaning comes from our human lives; it is not inherent in the language. Per Wittgenstein, “For a large class of cases – though not for all – in which we employ the word meaning it can be defined thus: the meaning of a word is its use in the language.” From this small bit of insight, perhaps the path to Turing’s ultimate test was paved.
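Modern NLP takes Wittgenstein’s dictum quite literally. In distributional semantics, a word’s representation is built entirely from the company it keeps. A toy sketch (the corpus and the similarity measure are my own assumptions, far cruder than real word embeddings):

```python
from collections import Counter

corpus = (
    "the king rules the realm . the queen rules the realm . "
    "the dog chases the ball . the cat chases the ball ."
).split()

def context_vector(word, window=2):
    """Count the neighbors of `word` within +/- `window` positions:
    its 'use in the language' becomes its representation."""
    counts = Counter()
    for i, w in enumerate(corpus):
        if w == word:
            lo = max(0, i - window)
            counts.update(corpus[lo:i] + corpus[i + 1:i + window + 1])
    return counts

def overlap(a, b):
    """Crude similarity: how much context the two words share."""
    va, vb = context_vector(a), context_vector(b)
    return sum(min(va[w], vb[w]) for w in set(va) | set(vb))

# Words used in similar ways end up with similar vectors:
print(overlap("king", "queen"), "vs", overlap("king", "ball"))
```

No meaning was put into the machine; what similarity emerges comes only from use, which is precisely Wittgenstein’s point, and precisely its limit: the machine has the use without the life behind it.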
Let us reason together. We can still do this.