BLOG EX MACHINA

The arc of technology is ever forward — smaller, faster, cheaper. Ephemeralization. That’s the word Bucky Fuller coined to describe the phenomenon of technologically increasing productivity.

I know I’ve written about Fuller a few times. There is no doubt he was a genius and one of the key visionaries of the unfolding technology-driven metamorphosis of humanity. This post, and its recursive link, is dedicated to Fuller: BLOG EX MACHINA.

As you probably know, the rate of innovation and advancement in Natural Language Processing (NLP) has increased significantly in recent years. My own writings on the subject can be found in the “Let Us Reason Together” series on this website (part 1, part 2, part 3). So for this particular entry I thought I’d invite a few NLP models to weigh in with their own ideas on technologically increasing productivity.

The idea here is simple: test a few online text completion models with the same prompt and compare the results. The ‘prompt’ in this case is the first paragraph of this blog post. Using those words to seed the models, and leaving all other configurable settings at their default values, the results are below.
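
If you want to run the same sort of comparison yourself, here is a minimal sketch using the Hugging Face transformers library and a small, freely downloadable EleutherAI checkpoint. The hosted demos below run their own (undisclosed or much larger) models, so this only approximates the experiment; the model name and sampling settings are merely illustrative.

    from transformers import pipeline

    seed = (
        "The arc of technology is ever forward -- smaller, faster, cheaper. "
        "Ephemeralization. That's the word Bucky Fuller coined to describe the "
        "phenomenon of technologically increasing productivity."
    )

    # text-generation pipeline with a small, freely downloadable EleutherAI model
    generator = pipeline("text-generation", model="EleutherAI/gpt-neo-125M")
    result = generator(seed, max_length=200, do_sample=True)
    print(result[0]["generated_text"])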

From Sassbook, one of several sites that actually sell “Intelligent software applications,” some of which would ostensibly automate and create written content:

The arc of technology is ever forward -- smaller, faster, cheaper. Ephemeralization. That's the word Bucky Fuller coined to describe the phenomenon of technologically increasing productivity. No longer are you stuck in what the job market is telling you to go. You can now go where your work is. Work wherever you like. And as the article points out, you don't need to have a degree to be a good team player or a leader. If you are a team leader, go to work and be great. Because that's what your skills are really all about. They are about communication, management, leadership, teamwork, being a team. Being a great team member means being great at everything you do. Now, if you want to lead your team and work from your home office, good luck. Don't come to us to complain. -Bucky

Hmmmm. Given the seed paragraph, I’m not sure their “state of the art” AI article writing software actually grokked the direction or magnitude of the thought vector in question. Sassbook doesn’t list the model type or versions in use — just “state of the art artificial intelligence technologies.” But for a mere $50 per month for the full-blown set of applications and features, their words ex machina may very well provide value in some quarters.

By contrast, EleutherAI, a site hosting a model called GPT-J-6B, gives us this:

The arc of technology is ever forward -- smaller, faster, cheaper. Ephemeralization. That's the word Bucky Fuller coined to describe the phenomenon of technologically increasing productivity. The result is a culture of abundance where we are no longer constrained by scarcity. As a result, we have a lot of extra time to consume stuff.
The only problem is that most people aren't as productive as we are -- they are stuck in jobs that are not fulfilling. Because we don't have the free time to enjoy the benefits of our productivity, we are growing increasingly unhappy with our lives. There is a desperate need for a solution. That's where the sharing economy comes in. It is an ideology that encourages us to share our time and resources with people who need it.

Now this is getting interesting. Both sites took a turn in a different direction from the initial seed. Sassbook’s engine decided to drift into work-from-home matters, whereas EleutherAI’s model took productivity increases as the lead into the sharing economy. Both of those topics might have been implied by the idea of increasing rates of productivity. But neither, strictly speaking, stayed on topic.

Of the online models I tested, the demo from Adam King’s site InferKit.com was the most impressive.

The arc of technology is ever forward -- smaller, faster, cheaper. Ephemeralization. That's the word Bucky Fuller coined to describe the phenomenon of technologically increasing productivity. Fuller referred to it as the "destructive efficiency." In the 50s, we were using the term "evolution." By the 70s, "software." And by the 80s, "information technology." As we've gotten more connected, we've gotten less connected.

(PRNewsfoto/Continental Interactive)

But let's go back to Ephemeralization. What exactly does that mean? How is it different than the Ethernet cable that connects your phone to your computer? How is it different than the iPhone, in which you're connecting one device to the other, with an opening in the middle, like a Skype window? It's more than that. It's more than plug and play, in which you plug something in and a few minutes later, it's connected.

Ephemeralization involves the ephemeralization of the actual hardware itself. That's the ephemerality of time itself. One second, the information is here. The next, it's gone. Ephemeralization is a state in which you don't use the material, but the information.

Adam’s model stayed very much on topic, expanded on it, and actually quoted Fuller in the process. Although the magic is also “state of the art,” whatever that might mean in this era of runaway innovation, the results speak for themselves. Adam’s site does at least reference Transformers, which one might have inferred from the output. For my money, the InferKit premium service at $60 per month is far and away the best choice if blog ex machina is something you’d like to engage in.

Alas, I remain old school. I shall continue to pen these epistles, fingers pressing keys, until my own personal Upload twin is hatched.

Let Us Reason Together — part 3

“‘Come now, and let us reason together,’
Says the Lord,
‘Though your sins are like scarlet,
They shall be as white as snow;
Though they are red like crimson,
They shall be as wool’” (Isaiah 1:18)

In part 1 and part 2 of this series I hope I explained with a modicum of clarity that NLP is hard — in fact, NP-hard, which is to say we probably do not yet have the means or the materials to get there. What does it mean to ‘get there’? Per Turing, when a machine cannot be distinguished from a human in an open-ended conversation, we might then presume that the machine is ‘thinking,’ simply because we have no other mechanism by which to measure ‘thinking.’ Three questions come to mind:

  1. Can we reliably measure thinking in some other way besides conversation?
  2. If a machine does ‘think’ does that imply machine consciousness?
  3. Is thinking required in order for ‘reason’ to emerge?

Measuring Thinking

Human brain activity is routinely measured via fMRI, and some tools are sensitive enough to record the activity of a single neuron. https://engineering.mit.edu/engage/ask-an-engineer/how-are-thoughts-measured/

Neuron activity does not necessarily correlate to ‘thinking,’ however. Your heart beats whether you are aware of it or not. Does that sort of nervous system activity mean ‘thought’ or thinking is involved? If so, then doesn’t a computer program that manages some real-time process also qualify? If so, then doesn’t any in silico neural network also qualify? If not, then perhaps neuron activity alone is not sufficient to conclude that a human (or machine) is actually thinking, and therefore other measures must at least be included.

What does it mean to think? Per Turing once more, from the first paragraph of his seminal paper:

I propose to consider the question, ‘Can machines think?’ This should begin with definitions of the meaning of the terms ‘machine’ and ‘think’. The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words ‘machine’ and ‘think’ are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, ‘Can machines think?’ is to be sought in a statistical survey such as a Gallup poll. But this is absurd.

Brain activity alone cannot be the measure of thinking. Yes, there are clearly strong correlations between thought and measurable brain activity. But the cause? Does brain activity ‘cause’ thinking? Or is it the other way around? Or is brain activity Plato’s shadows on the cave wall, as it were, with consciousness being the source of light? I would argue that we are still swimming in very shallow waters when it comes to a science-based (i.e.: neurological) understanding of consciousness. When we have evidence for human perception absent brain activity in near-death experiences, and no tractable explanation from reductionist approaches, I believe it is safe to say there is so much we just don’t know. Rather than delve further into that topic, when it comes to consciousness, let me say that we simply do not yet have a satisfactory explanation from science for the emergence or the nature of human consciousness. Theories, yes. But science is still watching shadows dance on the wall.


Where does that leave us with respect to measuring thinking? Turing thought it through pretty well, in my view. Short of joining the Society of Solipsists on Facebook, which is absurd, how can we know thought exists outside my own internal universe? Which brings us back to a belief: I can start to presume you think too if we satisfactorily engage in conversation of some kind. Why? Because language is infinite in possibilities. Yes, I can and do predict what you might say in response to something I might say. But more often than not, I am surprised by what you say, and whether we agree, disagree, entertain, or confound each other, we are engaged in a process that, by its very nature, is delicately poised on an infinite adjacent possible.

While we can produce applications that do appear to exhibit some level of ‘understanding’ of human language, no system to date has truly passed Turing’s test. None. Despite decades of research, incredible Moore’s-Law-fueled supercomputers, application-specific chips, trillions of gigabytes of data, and well-funded teams of researchers all around the world, we have yet to pass Turing’s (simple?) test with clarity and consistency. Can we reliably, consistently, and accurately measure ‘thinking’ in some way other than conversation? No. We cannot. We can measure, to some extent, use-case-specific language abilities. But nothing in the way of general conversational skills when it comes to machines.

Thinking vs Consciousness

Is there a difference between thinking and consciousness? Can one be conscious without thinking? Can one think without being conscious? Are they synonyms? Depending on academic perspective, the specific answers to those questions might take on different tones, but the colors would likely be similar, whether neurological, philosophical, psychological or spiritual. Thinking and consciousness are not the same thing. Consciousness is a state of being. The car is running and the engine is idling, but nothing is moving until we put it in gear and hit the gas pedal. Thinking is what happens when we start moving. Energy is expended and a series of actions (thoughts) occur or are allowed to occur. Although some may take issue with this wording, I think it is fair to suggest, based on evidence, that one can be conscious or aware of existing but not in a thinking state. Meditation techniques routinely encourage practitioners to achieve such pure, albeit difficult, states of consciousness. We can measure brain waves that tend to correlate with categorical states of consciousness, some of which imply a type of thinking. Perhaps we could do the same with machines? Would that then mean the machine is ‘thinking’?

Some noted philosophers would argue that conscious thoughts are an illusion. That’s not to say necessarily that consciousness is an illusion, but the idea of being conscious of a thought is an illusion. Others would argue that consciousness itself is an illusion. (https://mindmatters.ai/2019/01/has-science-shown-that-consciousness-is-only-an-illusion/) Given the unsettled and often unsettling debates regarding the nature of human consciousness and thought as a function of that specific thing, illusion or not, it should be clear to any thinking person that the experience of consciousness and the experience of thinking can be quite distinct, although very much related. One (consciousness) contains the other (thinking) and not the other way around. Mind gives rise to brain. Mind can change brain (neuroplasticity).

For humans, thinking implies consciousness. Might we say the same for machines? If so then the question Turing posed: “Can machines think?”, must also therefore imply, “Can a machine be conscious?” And we continue to be faced with the solipsistically unsolvable question: How can we ever know?

Whence Reason

To the third question above, is thinking required for ‘reason’ to emerge? Clearly it depends on what we mean by ‘reason.’

Depending on the definition, ‘reason’ implies the conscious application of logic. In other words, thinking. Yes, we can produce logic-based applications that blindly apply rules to given inputs. And we can produce networks of simple logical mechanisms named for what we believe is the basic function of human neurons. Such networks also learn and then apply rules to a given input. Strictly speaking, is that reasoning? If we leave thinking out of the definition, is it accurate to say that output based on the application of logical rules to a given input is ‘reason’? If so, then isn’t a mechanical relay machine as proposed 70 years ago a reasoning device? Philosophically, reason might be reduced to the process of drawing logical inferences, in which case our machines are becoming increasingly more reasonable, are they not?
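
To make concrete what “blindly applying rules to a given input” means, here is a toy forward-chaining sketch in Python. The facts and rules are invented purely for illustration and describe no particular system:

    # Toy forward-chaining rule application: derive new facts from known facts
    # until nothing new can be inferred. Is this "reasoning"? That is the question.
    facts = {"socrates_is_human"}
    rules = [
        ({"socrates_is_human"}, "socrates_is_mortal"),   # if human, then mortal
        ({"socrates_is_mortal"}, "socrates_will_die"),   # if mortal, then will die
    ]

    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # a rule fires when all of its premises are already known facts
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(sorted(facts))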

If reason emerges from machines, or at least a level of reason such as to actually give rise to an illusion of consciousness, answering an emphatic ‘yes’ to Turing’s original question, then perhaps NLP is not the answer. At least not NLP alone.

A relatively recent blog entry from DeepMind of Google, discussing Open-Ended Learning Leads to Generally Capable Agents, points to some interesting possibilities. The upshot of the research is promising insofar as AGI is concerned, which could also bring us asymptotically closer to clearing the Turing Test hurdle. Imagine a game space, some 3-D environment in which autonomous agents learn and compete for game-space dominance. Borrowing lessons learned from other game-solving approaches such as Q-Learning, populations of agents are trained sequentially, with a Darwinian seasoning: each new generation of agents, bootstrapped from the best agent of the previous generation, iteratively improves the frontier of normalized score percentiles while at the same time redefining the evaluation metric itself – an open-ended learning process. Reading this blog in 2021, I was reminded of many of my own musings, published in 2004, regarding autonomous agents, Darwinian selection processes, and the evolution of intelligence in silico. If you haven’t read my book, it may yet be of some interest as the Network Age continues to unfold.
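
A crude, toy sketch of that generational loop (nothing like DeepMind’s actual system) might look like the following, with agents, tasks, and fitness reduced to plain numbers just to show the shape of open-ended, population-based selection:

    import random

    # Toy stand-ins: an "agent" is just a number, a "task" is a target value,
    # and fitness is negative distance to the targets. Purely illustrative.
    def evaluate(agent, tasks):
        return -sum(abs(agent - t) for t in tasks)

    def mutate(agent):
        return agent + random.uniform(-1.0, 1.0)

    def sample_tasks(previous_best):
        # the task distribution itself shifts as agents improve (open-endedness)
        return [previous_best + random.uniform(-5.0, 5.0) for _ in range(4)]

    best = 0.0                                   # generation-zero agent
    tasks = sample_tasks(best)
    for generation in range(10):
        # seed each new population from the best agent of the previous generation
        population = [mutate(best) for _ in range(8)]
        scores = [evaluate(agent, tasks) for agent in population]
        best = population[scores.index(max(scores))]
        tasks = sample_tasks(best)               # redefine the evaluation frontier
    print("final agent:", best)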

If general capabilities can and do emerge in such a fitscape, as described by DeepMind, then it seems to me that NLP, married to agents in such an environment, perhaps even married to hybrid systems such as those Marcus describes, might actually allow Pinocchio to at least discuss matters with us, allowing us to suspend disbelief long enough to accept Turing’s wisdom and the test he provided so many years ago.

So here today I declare there is yet hope. These are dangerous times. Despite disturbing evidence to the contrary in recent years, I still believe there is yet hope for humanity. And software may yet be tamed to work with us lowly humans rather than controlled by nefarious global entities to enslave us.

A final word on Turing for this series. Assembling notes from several students who attended Wittgenstein’s Lectures on the Foundations of Mathematics at Cambridge in 1939, Cora Diamond of UVA gave us a glimpse into a special moment in the 20th century, when the gifted student Alan Turing studied and debated with the renowned philosopher, who presumably influenced Turing’s views regarding both language and mathematics.

Human language has infinite possibilities — the infinite adjacent possible. Words alone are symbols. The meaning of words provides the essence of language understanding. Humans have experience, culture, interactions, cycles, seasons, and a mortal coil from which to breathe. Machines do not. Meaning comes from our human lives; it is not inherent in the language. Per Wittgenstein, “For a large class of cases – though not for all – in which we employ the word meaning it can be defined thus: the meaning of a word is its use in the language.” From this small bit of insight, perhaps the path to Turing’s ultimate test was paved.

Let us reason together. We can still do this.

Let Us Reason Together – part 2

“‘Come now, and let us reason together,’
Says the Lord,
‘Though your sins are like scarlet,
They shall be as white as snow;
Though they are red like crimson,
They shall be as wool’” (Isaiah 1:18)

In part one of this series I made the assertion that NLP is hard. We might even say it is NP-hard, which is to say that in theory, any model we would hope to engineer to actually ‘solve’ the context-bound language of human beings is at least as hard a mountain to climb as the halting problem is for any given (sufficiently complex) program. For an analysis of linguistics from the perspective of machine learning, Ajay Patel has a fine article exploring the matter.

Although we have witnessed incredible advances in machine intelligence since Turing first asked, “Can Machines Think?”, we are still pondering exactly what it means to ‘think’ as a human being. The field of consciousness studies has produced laudable theories, to be sure. But we don’t yet know — we do not yet agree on — what precisely we mean by ‘consciousness,’ let alone share a congruent theory as to what it is. From pure denial (eliminativism) to panpsychist projection to good old-school dualism, we still wrestle with the mysteries entangled in the most common aspect of human existence. We know it when we experience it. But we cannot say what it is. It’s no wonder Turing ultimately threw up his hands when he realized that any existential foundation beyond solipsism required an acceptance of the other as conscious. Thus, if it walks like a duck, and quacks like a duck… you know the rest.

The Sapir-Whorf Hypothesis asserts that the structure of language shapes cognition; my perception is relative to my spoken (thinking?) language. Some linguistic proponents of this view hold that language determines (and necessarily limits) cognition, whereas most experts allow only for some influence of mother tongue on thought. Wittgenstein poured the foundation and Sapir-Whorf laid the rails. And apologists for cultural relativism were empowered in the process.

But do I really think in English?

Have you ever known what you wanted to say but could not remember the precise word? You knew that you knew the word you wanted to say, but you could not find the specific word in your cognitive cache. Some internal mental subroutine had to be unleashed to search your memory archives; within moments the word was retrieved. But you knew you wanted to say something — some concept or idea — but the associated word did not immediately come to mind.

In 1994 Steven Pinker published The Language Instinct, in which he argues that human beings have an innate capacity for language. Like Chomsky, Pinker presents evidence of a universal grammar. Per this view, human beings instinctually communicate, honed by evolutionary forces to succeed as hunter-gatherers. Written language is an invention. Spoken words too are inventions, to facilitate the natural instinct for structured communication. Like a spider’s instinct to weave a web, human beings have an instinct for structured communication. If this is so, then we ought to be able to discern the rules of the universal grammar, and eventually automate the production of words such as to effectively and meaningfully facilitate communication between humans and machines. And yet, some six decades since the dawn of computational linguistics, we are still frustrated with conversational agents (chat bots). We do not yet have a machine that can pass the Turing Test, despite many attempts. And NLP is still confined to a short list of beneficial use cases.

Must we solve the hard problem of consciousness in tandem with finding the machine with communication skills that are indistinguishable from humans? Per Turing, it’s the same problem.

Neuro-symbolic Speculation

Despite impressive results from modern deep learning approaches in NLP, Turing’s challenge stands: invictus maneo. So if neural networks aren’t enough, then what? We’ve tried rules-based systems. That didn’t work. Despite a resurgence of knowledge graphs, are they not too subject to the intrinsic problems expert systems faced in the 1980s?

Enter Gary Marcus. The solution, per Marcus, is a combination of deep learning and knowledge (rules-based) AI. It’s the next thing. The revolution is here and we have a flag….maybe.

To be fair, Marcus did not invent the idea of Neuro-symbolic AI. In all probability it was IBM. In my view we never give enough credit to IBM for leading the field of machine learning in so many ways. Maybe because they’ve been around since the dawn of the 20th century and their stock pays a decent dividend. But Marcus has planted the flag of Neuro-symbolic AI firmly in the modern noosphere, and so we speculate with him here.

His basic idea is simple: take two complementary approaches to AI and combine them, with the belief that the whole is greater than the sum of the parts. When it comes to solving the hard problem of language, which appears to be directly related to consciousness (and therefore AGI) itself, we must stop hoping for magic and settle for more robust solutions for NLP-specific use cases. Our systems should begin with basic attributes: reliable and explainable. That seems like a logical step forward. And maybe it is.

Marcus has suggested a 4-step approach to achieving this more robust AI:

1. Initial development of hybrid neuro-symbolic architectures (deep learning married with knowledge databases)
2. Construction of rich, partly-innate cognitive frameworks and large-scale knowledge databases
3. Further development of tools for abstract reasoning over such frameworks
4. Sophisticated mechanisms for the representation and induction of cognitive models
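
As a toy illustration of step 1, imagine a learned component that proposes answers and a small symbolic knowledge base that can veto them. Everything below (the facts, the stubbed “neural” guess, the relation names) is a hypothetical placeholder, not Marcus’s or anyone else’s actual architecture:

    # Toy hybrid: a (stubbed) neural component proposes, a symbolic layer disposes.
    KNOWLEDGE_BASE = {
        ("penguin", "is_a", "bird"),
        ("penguin", "can_fly", "false"),
    }

    def neural_guess(question):
        # stand-in for a learned model's (possibly wrong) statistical answer
        return "true" if "fly" in question else "unknown"

    def symbolic_check(subject, relation):
        # symbolic lookup: hard facts override the statistical guess
        for s, r, value in KNOWLEDGE_BASE:
            if s == subject and r == relation:
                return value
        return None

    def answer(question, subject, relation):
        fact = symbolic_check(subject, relation)
        return fact if fact is not None else neural_guess(question)

    print(answer("Can a penguin fly?", "penguin", "can_fly"))  # -> false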

Sounds reasonable, doesn’t it? And perhaps the Marcus recipe will incrementally improve NLP over the next decade to the point where chat bots at least become less annoying and model review committees at Global 2000 firms will have ethical ground cover when it comes to justifying and rationalizing predictive results, regardless of how unsavory or politically incorrect they may be.

But Turing Test worthy? Maybe not.

In part 3 we will dive a bit deeper; it will be rewarding.

Let Us Reason Together – part 1

“‘Come now, and let us reason together,’
Says the Lord,
‘Though your sins are like scarlet,
They shall be as white as snow;
Though they are red like crimson,
They shall be as wool’” (Isaiah 1:18)

Ten years ago I made the decision to fully restore my hands-on coding skills, to spend more of my cycles on learning and coding, and less on management, consulting, and management consulting. It was one of the best career decisions I ever made.

During most of that decade my focus has been on all things AI, the most recent three years of which have been devoted to Natural Language Processing (NLP) on behalf of two different firms — one very large and one very small. Prior to those engagements my pursuit of data science skills and machine learning knowledge was more general.

For the sake of this collection of blog entries, please consider the thesis of Thinking, Fast and Slow; we humans use two complementary systems of thought:

– FAST, which operates automatically. It is involuntary, intuitive, effortless
– SLOW, which solves problems, reasons, computes, concentrates and gathers evidence

Andrew Ng, one of the pioneers of modern AI, is famously quoted as saying,

“If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future.” — Andrew Ng

If Ng is correct, then the probability of creating a Machine Learning model to meaningfully process (i.e.: extract meaning from) an image is far greater than the probability of finding similar utility with an NLP model to process the associated text.

As the adage goes, “A picture is worth a thousand words.” But those words do not adequately convey the inverse of the relationship. How many words does it take to paint a picture? And how do we process those words with a machine? More importantly, what do those words mean?

Why is NLP so hard?

Recent advances in NLP have made headlines, to be sure. From BERT to ERNIE to GPT-3 and all the variants in between, the promise of Turing Test worthy models seems to be just around the proverbial corner. But we are still not there. Not even with 500 billion tokens, 96 layers and at a cost of $12 million for a single training run.

In my view, when it comes to Machine Learning and Artificial Intelligence, NLP is far more difficult for machines to get right than any other facet of AI, and it has very much to do with the three characteristics listed below and the fact that when humans process words, whether written or spoken, both FAST and SLOW thinking are involved. And more.

My focus on NLP over the past three years has led me to a few general assertions regarding human language, which I now share:

    1. Human language has infinite possibilities. Regardless of the language or vocabulary, an infinite number of sentences are possible.
    2. Text communication alone is multidimensional and can be inherently ambiguous.
    3. Human communication is comprised of far more than words.

Evidence supporting the assertions above is rather trivial to gather, so I make no effort here to prove or validate them. Please take issue with these as you see fit. I would be pleased to learn if and why any of these assertions are somehow invalid. Assertion #1 above may be debatable from a computability perspective, but for the sake of these blog entries, let’s just go with it and not get side-tracked.
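
Assertion #1 is easy to illustrate, though: a single recursive rule is enough to generate arbitrarily many (and arbitrarily long) sentences. The toy grammar below is invented solely for illustration; the recursion depth is capped only so the demo terminates:

    import random

    NOUNS = ["the dog", "a machine", "Bucky"]
    VERBS = ["thinks", "speaks", "ephemeralizes"]

    def sentence(depth=0):
        # S -> NP VP  |  S "and" S   (the second rule is the recursive one)
        if depth < 3 and random.random() < 0.5:
            return sentence(depth + 1) + " and " + sentence(depth + 1)
        return random.choice(NOUNS) + " " + random.choice(VERBS)

    for _ in range(3):
        print(sentence())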

So… NLP is hard. Really hard. That’s not to say we can’t make progress with use-case specific advances. Leaving the Turing Test out, what can we do? And how can we improve? This series of blog entries is devoted to the exploration of hybrid approaches to solving some of the more difficult problems machine learning enthusiasts face when processing human language. Today a handful of NLP use cases are business worthy. Application of NLP can be found in a range of business contexts, including e-commerce, healthcare and advertising. Today, the following are the top NLP use cases in business (a minimal sketch of one of them follows the list):

    • Machine Translation: One of the most frequently used NLP applications, machine translation enables automatic translation from one language to another (e.g.: English to Spanish) with no need for human involvement.
    • Social Media Monitoring: Controversial at best, increasingly used to detect and censor viewpoints that may not conform to ‘factual’ analysis.
    • Sentiment Analysis: Helping businesses detect negative reviews of products and services.
    • Chatbots and Virtual Assistants: Automate question/answer as much as possible to make human customer assistance much more productive.
    • Text Analysis: Named Entity Recognition (NER) is one example of a text analysis use case.
    • Speech Recognition: Alexa, Siri, Google, Cortana and others.
    • Text Extraction: Search, but smarter.
    • Autocorrect: Spell check and more.
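
As promised above, here is a minimal sketch of one of these use cases (sentiment analysis) using the Hugging Face transformers pipeline; the library downloads its default English sentiment model, and the example reviews are made up:

    # Minimal sentiment-analysis example with the transformers pipeline;
    # the underlying model is whatever default the library ships with.
    from transformers import pipeline

    classifier = pipeline("sentiment-analysis")
    reviews = [
        "The chat bot actually solved my problem in two minutes.",
        "I wanted to throw my phone against the wall.",
    ]
    for review, result in zip(reviews, classifier(reviews)):
        print(f"{result['label']:>8}  {result['score']:.2f}  {review}")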

These general buckets of NLP use cases have seen NLP become a baked-in component of a growing list of applications and services. While the utility of these functions grows daily, so do frustration levels (who among us has not wanted to throw a smart phone against the wall when speaking with an impersonal, clumsy chat bot?).

It is clear that NLP in practice is here to stay. But even with (unwarranted) Turing-Test-passing declarations made by otherwise intelligent humans for innovations such as Google Duplex, it is clear that the use-case specific advances we tout are not the AGI Turing was looking for at all.

Consider the context for Turing’s recommendation of the Turing Test to begin with: Can machines think? How can we know when a machine is intelligent juxtaposed to human intelligence? A programmable assistant to make calls on my behalf, regardless of how human-like it may sound, is in no way capable of handling spontaneous interactions easily handled by us flesh-and-blood folk. Our human scope of spontaneity is infinite, whereas the bot is not. At least not yet.

So what are we to do? When it comes to NLP what comes next? I will examine a few real possibilities in subsequent entries.

Questions with Flair

When I started on the path to becoming a data scientist I was a bit overwhelmed at the broad set of skills that were required. Granted, I took the decision at a point far enough into my software development career that I had acquired a wide swath of skills out of necessity. Math has been a favorite of mine for as long as I can remember. Statistics came along in earnest with a couple of graduate-level degrees, as did analysis and modeling. Machine learning was a passion, and as an author and blogger I felt comfortable writing. As for business acumen, I had an M.B.A. so I could hold my own against some of the best. And intellectual curiosity? I joined that club a long time ago.

But there was one skill that was not emphasized, that ought to be. In fact, this particular skill is probably the most important one, not only for data scientists but for software developers in general. It’s questions. Asking questions.

The ability to frame and ask the right questions is absolutely key to any software development endeavor. I dare say it is also the foundation upon which this thing we call ‘science’ is built, regardless of discipline.

Over the past few weeks I have been experimenting with a new (to me) NLP framework I stumbled across (there are so many these days!), and I decided to narrow the use case to one rather simple but rather important one: classification of open-ended questions.

Is it possible, out of context, to accurately classify a question as either open- or close-ended? All you get is the question itself and none of the conversation, nuance, inflection or non-verbal clues that are essential for real human communication. Can a machine, using ONLY the text of the question, correctly determine if a question is open-ended?

To be 100% accurate (like us humans, eh?) context is required. There is no way we can determine if, “What’s up?” means a simple greeting or the lead to a long story without something in the way of context. And with NLP, sometimes the missing context makes all the difference.

We live in an age of awesome compute power and magical applications. Extracting text transcripts from audio files is as common and cheap today as internet search was the day Google launched. And to use some of those awesome NLP tools on audio files, I need words, not sounds. Okay so text extraction is also NLP, but that’s beside the point. I want to analyze the text itself. So how can we discern the throw-away “Waddup?” from the concerned “So what is up?” without non-textual context? In truth, we can’t. But we can get close.

I used FlairNLP, a very simple framework for state-of-the-art Natural Language Processing (NLP). I took an out-of-the-box question classification dataset, modified the use case (and the data) to better suit my open-ended question use case, found additional data sources, munged and wrangled like a good data scientist, and in a few days began training a model to predict if an arbitrary question was either open- or close-ended.
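
The rough shape of that training run looks something like the sketch below, assuming a FastText-style classification corpus with lines prefixed by "__label__open" or "__label__closed". File names, embeddings, and hyperparameters are illustrative, and the exact Flair API varies a bit between versions.

    # Sketch of a Flair text-classification training run (API details vary by version).
    from flair.datasets import ClassificationCorpus
    from flair.embeddings import WordEmbeddings, DocumentRNNEmbeddings
    from flair.models import TextClassifier
    from flair.trainers import ModelTrainer

    # data/questions/{train,dev,test}.txt hold lines like:
    #   __label__open   What do you think we should do about it?
    #   __label__closed Did you ship the release yesterday?
    corpus = ClassificationCorpus("data/questions",
                                  train_file="train.txt",
                                  dev_file="dev.txt",
                                  test_file="test.txt")
    label_dict = corpus.make_label_dictionary()

    # classic Flair recipe: pooled word embeddings feeding a document-level RNN
    embeddings = DocumentRNNEmbeddings([WordEmbeddings("glove")], hidden_size=256)
    classifier = TextClassifier(embeddings, label_dictionary=label_dict)

    trainer = ModelTrainer(classifier, corpus)
    trainer.train("models/open-vs-closed", max_epochs=10, mini_batch_size=32)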

It took a few tweaks. Some more wrangling. A dash more munging. But what do you know — yes it is possible to predict with a decent degree of accuracy, based only on the text of the question, if a question is open-ended. It’s not perfect, but it’s not bad:

Results from 10 training epochs:

F-score (micro) 0.97
F-score (macro) 0.9636
Accuracy 0.97

Although these results appear promising, the importance of conversational context cannot be overstated. But the model does appear to be pretty decent as a start for recognizing open-ended questions out of context.
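
Using the trained model then looks roughly like this (the paths and the example label are illustrative):

    # Load the trained classifier and tag a question out of context.
    from flair.data import Sentence
    from flair.models import TextClassifier

    model = TextClassifier.load("models/open-vs-closed/final-model.pt")
    question = Sentence("What's up?")
    model.predict(question)
    print(question.labels)   # e.g. [open (0.87)] -- context-free, so take it with salt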

I created a github repo for the notebook. It’s not that much in the way of code. Most of the work was finding and munging data….which is most often the case.

Now on to another question. Because that’s what we do. We ask questions.

Constructor Theory

It’s been a while since I wrote for this blog. Something I’ve never been able to determine is how to end a blog. When does the last entry for a blog occur? I know how to start a blog; I’ve started at least three different blogs prior to this one. This one is the most long-lived, but still, having blogged a bit over the years, having started writing tripe for the web before it was even called blogging, I have had some experience in that regard. But I still don’t know how to know in advance when a blog has seen its final entry. For that matter, the same is true of just about any experience, no? For most of us in this life, we cannot know when our last sunrise may be, our last kiss, our last meal. I don’t want to get too gloomy here so I’ll leave that particular topic for now. Suffice it to say that at this particular moment, this blog still breathes, as do I.

My previous entry was published early in the midst of the COVID-19 lockdown, which now seems to be more of a permanent condition than it may have appeared then. Much has happened since then, but in many ways nothing has changed. Masks are still the norm. There seems to be considerable concern and activity around vaccines for the virus, but given the fact that so many people have been vaccinated and yet there is still such extreme concern for widespread outbreak, one wonders what to make of the effectiveness of the vaccine efforts. For personal reasons, I will keep my attitudes and beliefs regarding the vaccine and the virus to myself.

So Constructor Theory.

I confess here and now that I am a subscriber to New Scientist. I admit it. I realize it is science porn. I confess that I have a paid subscription and it feels good to finally admit it. When Liz and I met in Manhattan many years ago, one of the key data points for us to accept the true-love nature of our union was our shared subscriptions. At the time, Omni was still in print, and we both had copies in our respective apartments, along with stacks of Scientific American and Wired. Science and technology have long been a shared passion for us.

As such, it is only natural that search engines deliver suggestions based on my interests and searches, one of which caught my attention a few days ago. It was a Science Times article published last year whose title served as click-bait candy to assuage my addiction: Constructor Theory Could Explain Life, Universe, and Existence

I mean, come on! Who could resist such a temptation?

Constructor Theory

A theory of everything (not 42).

I do realize I’m a little late coming to the party for this particular meme. The fact is, I had read about Deutsch a few years ago and actually did come across his ideas regarding the information-laced fabric of the universe. I think my mental association of Constructor Theory with object-oriented programming and constructors is probably a little responsible for my interest in Deutsch’s work, and while I do admit that his depth of knowledge in physics far surpasses any understanding I may have, I am a fanboy of theoretical physics nevertheless.

So maybe my mental association is not really too far off? Maybe a good analogy for Constructor Theory in terms of the universe, existence, et al., is object-oriented programming? Start with a class — a prototype from which instances can take form — sort of Plato’s Perfect Form. What if that’s the way the quantum cookie crumbles?
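
In code, the analogy I have in mind is just this, offered purely as a metaphor and not as physics:

    # The class is the abstract "constructor" (the Platonic form); each instance
    # is one concrete realization of it. A loose metaphor, nothing more.
    class Chair:
        def __init__(self, owner):
            self.owner = owner

    # Chair (the class) never changes; instances come and go.
    mine = Chair("me")
    yours = Chair("you")
    print(type(mine) is type(yours))  # True -- both realize the same form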

Not a new idea here, I am sure. Late to the party, yes. Unoriginal, perhaps. But those are my thoughts for today, for this point in space-time, for this specific spring day in 2021. COVID-19 fear continues, regardless of remedies. Climate Change is still the cultural rage, regardless of weather. And Joe Biden’s now the POTUS. A year ago, I would not have predicted that I would be writing this entry with these facts from this particular universe. But here it is. And with any luck it’s not my last time.

Artificial Faith

In the midst of the COVID-19 sheltering I have been slowly digesting Philip Goff’s book Galileo’s Error and recommend it highly as a must-read for anyone interested in human consciousness. Or any reductionist who might read these words. Heck, anyone interested in the existential state of humanity, which should include every sentient being on this planet. Read it. Despite the fact that Goff and I might be distant cousins, his work is quite cogent and accessible even if you’re not a philosopher.

Spoiler alert: If you don’t want to know what Galileo’s error actually was, don’t read the following paragraph.

Per Goff, the error Galileo made at the dawn of the Enlightenment was to separate (human) consciousness from material reality. The qualia of human existence (consciousness) are independent of the physical definitions inherited from the (essentially) mathematical nature of reality, or so Galileo posited. As a result, the difficult problem of (explaining) consciousness has eluded the best of scientific minds.

But what if things are actually that simple?

What if Galileo was actually correct? We’ll get back to that thought in a few paragraphs. But first let’s consider human consciousness.

The hard problem of consciousness is only hard because of our need to fully explain human consciousness sans extra-normal artifacts. In other words, if we can’t fit it into the standard model of particle physics, it does not exist. As foolish as that may sound on the surface, that is actually the view of some scientists and philosophers: human consciousness is easily explained because it is only an illusion.

Seriously.

Are we left with two and only two choices? Panpsychism on one hand, which posits that the essential nature of the universe is consciousness, or that consciousness is imbued in matter at a fundamental level, juxtaposed to the physicalist view on the other, where the rabbit hole necessarily leads to the denial of consciousness altogether?

Another recent read I have enjoyed is Bernardo Kastrup’s work on metaphysical idealism, The Idea of the World. I have followed Kastrup’s work for some time now, especially his thoughts regarding Artificial Intelligence. As a philosopher and a computer scientist, his multi-disciplinary views are insightful and well considered. One quote I found from him is especially sage, in my view:

The chances that currently-envisioned AI systems will ever produce consciousness are about as good as that a detailed simulation of kidneys running in your computer will ever cause the computer to urinate on your desk. — Bernardo Kastrup

Bottom line: we create the universe with our thoughts. That’s not to say that the universe fails to exist if I’m not looking. Read Kastrup’s tome for a more detailed explanation. We are thought, as is everything we see. Yes, it is metaphysical. Of course, proof from a physicalist perspective will never be sufficient. Reductionism is, by its very nature, limited. Alas, science is what it is. Today science is more of a religion than are religions. It’s amusing and disturbing how frequently the term science is used to justify such unscientific views, from which many politicians and activists make a living. But I don’t want to go there. Politics is theater at best and much more profane than theater may have ever aspired to be. I would rather avoid such discussions in the name of science.

So where does that leave us?

If Kastrup and Goff are correct, or even close to being correct, the idea that consciousness can emerge from machine programming is absurd. How then is Artificial General Intelligence ever achieved? Artificial faith?

Faith is the foundation upon which all our thoughts and beliefs are built. Even if you are a hard-core atheist, you have faith that some aspect of your perception is valid. Even if you believe we live in the Matrix, you have faith in your ability to think, to perceive, and to reason. Where does that faith come from? Is it built in? Does faith emerge? Is it programmed? What perspective must our machines adopt in order to become consciousness? Why do you believe what you believe?

There is no such thing as an objective observer. If, as the physicalist rabbit hole teaches us, there is no such thing as objective reality, then how can an objective observer exist? If, as a faith-laden believer supposes, there is a greater-than-me God Creator from which all reality flows, how can any observer ever attain that objective level of perception?

The physicalist rabbit hole leads to one of two choices: either consciousness doesn’t exist, or we do not yet have a model of reality that is sufficient to explain the most common attribute of sentient humanity. Neither of those options is palatable. The panpsychist suggests that everything is consciousness. But does everything have consciousness or is everything made of consciousness?

Alas, all theories of reality beg the basic question: what is consciousness?

And finally, what is Artificial Faith? I’m not certain. But I think our machines will need something like Artificial Faith if our Pinocchio is ever going to become a real boy — or girl, as the case may be. Must a specific gender be programmed? That is the question for another day.

the Artificial Solipsist

More than once I have made the observation that Alan Turing used the philosophical foil of Solipsism in his seminal paper on the Imitation Game to argue the point that we cannot fully nor completely specify what it means for a human to be conscious, and therefore we cannot measure well the idea of machine intelligence beyond the allowances we typically make for other humans — to wit: the Turing Test. If we cannot distinguish between the machine and a human in written conversation, the machine is intelligent. To go beyond that simple test is to risk the slippery slope to an absurd solipsistic isolation.

For some reason it has always bothered me that solipsism was one of Turing’s arguments; a straw man, to be sure…but one that has egged me on. I mean, it’s not necessarily a flawed argument. But there seems to be something a little disturbing hiding there behind the curtain of unprovable consciousness. For years I felt a puzzling unease with the argument and I wasn’t sure why. Recently, however, it dawned on me what was bothering me, and I share with you now the nexus of that discontent. Solipsism itself is a little disturbing from an existential perspective. In the context of AI, the problem with solipsism might be even worse.

First let’s briefly consider the sociological and political implications of Turing’s foil – solipsism as a central theme in today’s post-post-modern era. From the Latin solus meaning “alone” and ipse meaning “self,” solipsism is the theory that the self is the only object of real knowledge that is possible, or the only thing that is provably real. This is to say that your own mind is all you can ever be sure of; essentially, your ground truth is your own consciousness. I myself have admittedly been seduced by the notion that my reality is one of my own creation, or at least my own interpretation. Just read my last blog entry for absolute proof of my own professed belief in belief, or rather, in my conscious interpretation of my reality being my actual reality.

So if my own mind, my perceptions, are equated with reality and that perception flavor of belief is equally true of you (presuming you actually exist), then it follows that there is no actual objective reality – no shared ground truth. My truth, my reality, is mine and mine alone, as yours is yours. If we tacitly accept this as central to our perceptive capabilities and therefore our existence, then it also follows that there is no path of actual truth, no spiritual journey to embark upon, and no transcendent enlightenment possible outside my own perceptive limitations. It is this spiritual dilemma that has been like a splinter in my mind – my own unease with the Matrix, as it were. On the one hand, I have long espoused a belief in belief. On the other, I entirely reject the idea of a universe with no shared, provable ground truth.

Whether we are talking about hard-core solipsism or a softer version, the idea that truth is non-existent is so very disturbing that I can no longer defend the idea that belief in belief is the sole or even primary filter through which we experience reality. At the most extreme, for the solipsist, nothing really exists. There are no actual experiences – it’s all just perception. If this extreme is demonstrated to be false, even if we can prove that something actually exists, we can never really know anything about it. We cannot really understand anything about it, whatever it may be, in its fullness, because we cannot experience that experience. This implies that there is no real knowledge, and therefore no truth.

Finally, if the two previous solipsistic apologies are proven to be wrong, and you can know that something exists and you can even know the truth of the thing (despite Kant’s ding an sich), that knowledge (or truth) cannot be communicated because it’s too complex for human language to contain.

Do you see the inherent problems that emerge? The anti-knowledge essence of solipsism is inherent in this era. When, as in Bob Dylan’s song, “It’s All Good” becomes the mantra of mendacity that ameliorates the creeping societal dysfunction so well amplified by Network Age technologies, the cultural core of solipsism must be recognized for the profound and woefully damaging untruth that it is.

My soul rails at the assertion that it is all good! It is most certainly NOT all good! Today I see quite clearly that the big lie at the core of culture today is, in fact, the tacit acceptance of solipsism. I confess I have been seduced by the siren song of belief first and reality following, for which I am now deeply apologetic. Belief is vital. Belief is important to my well-being, my health, my happiness and each interaction, each breath, of each and every day.

However, the belief that there is no provable objective truth is the acceptance of there being no capacity for determining objectivity. This is to say there is no mechanism for distinguishing right from wrong, good from evil, nor light from dark. Any notion of Natural Law is rejected as outdated, historic ethical norms become fashion at best, and as easy to discard as a worn out suit.

Science cannot be science without an understanding and acceptance of empirical evidence. It’s called “ground truth.” Direct observation by a dispassionate, objective observer gives rise to such truth versus knowledge that is inferred. In machine learning we use the term “ground truthing” to refer to the process of gathering the proper OBJECTIVE (provable) data for testing and proving research hypotheses. None of this would be possible without a wholesale rejection of the very anti-knowledge bias of a culture very clearly orbiting a core of solipsism.

As we approach the third decade of this 21st century, Artificial Intelligence is the new black. It is in vogue, getting tons of R&D investment, and every day gives rise to news stories about the fear and fun of AI applications unleashed. AI is today what the Internet was to the mid-90s — hope, hype, and hysteria. Especially when it comes to AGI, or Artificial General Intelligence, AI is the undeniable zeitgeist champ of the century so far, with Kurzweil’s Singularity playing the roles of both Y2K Specter II and Benign Human Zookeeper. Although we humans appear to be totally in control of the earth’s climate, we evidently cannot control our own inventions.

But what if we are wrong? Specifically, what if we are wrong about intelligence? What if the best we can ever aspire to culturally is solipsism itself, and due entirely to our own misguided cultural biases we collectively chide our inventions to agree with us? What if the elimination of bias is successful to the point where it’s really not about the data at all, but about some collectively misguided notion that is solipsistic at the rotten core? What if the recently announced super-advanced hybrid Tianjic chip meant to stimulate the development of AGI (providing both CS-based and neurology-based programmable compute architectures in one amazing collection of cores) does actually give rise to a machine that is conscious, yes, but simultaneously reaches the inescapable conclusion, based on our own collective biases baked-in during training, that NONE of the inputs it receives are absolutely trustworthy or even real?

In an era of Fake News, Deep Fakes, gaslighting galore, a bumper crop of conspiracy theories and the undeniable reality of a world increasingly fractured by bifurcating world views (it’s all good, remember?), what is a machine to do? Perspective must be grounded by something we can rely on to be true. Something must serve as a foundation from which and upon which consciousness might bootstrap. It’s one thing to say that the machine will learn based on inputs. It’s quite another to discern what is real without a ground truth outside myself.

Leap Into Faith

Eleanor: Kierkegaard, baby! Leap of faith.
Michael: It’s better translated as a “leap into faith.”
   —The Good Place, Season 2, Episode 9

I believe in belief.

What does that mean? It’s very simple, really, and quite complicated at the same time.

Each of us carries with us at every moment of every day a model of the reality in which we engage. We each have very different sets of capabilities, experiences, and perceptions. From the first moment you came to consciousness — that certain moment when you realized ‘I am’ — you have spent your entire life processing and creating your understanding of the world. From a Data Science perspective, you have been creating a model of the universe in which you live. Much of it came from books, from education, from films and television and simply day to day living. All of it came from your life experience.

Photograph after a pencil drawing made ca. 1840 by N. C. Kierkegaard

My model, like yours, yields the world view, or belief system, that I employ as I engage reality. Reality — that flow of stuff that is apparently outside our day-to-day thinking activities — is different for each of us. We can often agree, and often disagree, as to the nature, content, and meaning of reality. But all of us, no matter our station in life, no matter our profession or education, conduct our lives and think our thoughts inevitably bound to that model we have created. It is true by definition. I see what I see. Even if we are looking at the same picture, we are, by definition, seeing different pictures: I ‘see’ the reflection of ‘reality’ in my mind, scattered amongst my neurons and synapses and glial cells, just as you do the same with your mind. I see what I see. You see what you see. And we all see selectively through filters of belief. In other words, my model of the world is necessarily used to process the information from the picture I perceive. And the model I can never escape is the essence of my system of belief. The same is true for every living human being.

So, belief. If everything I observe is fundamentally filtered through my internal model — the summation of my beliefs — what does that say about the scientific method? What does that say about ANY world view? What does that say about our universe? Let’s stipulate that there does exist a reality outside of your mind and mine that is, for lack of a better word, ‘real.’ That reality we agree upon, which begs the question of solipsism. For the sake of discussion, let’s agree that I am not the only conscious creature in the universe, nor are you. We then agree that there is ‘something’ outside our minds, and we call that something the universe. To continue our discussion, let us also agree that we live in the same universe, regardless of the internal models that we maintain. Now what? How do we explore this universe together, such as to continue to agree, given the inevitable divergence inherent in our internal models?

Three Bacons and a Descartes

Since the Enlightenment, Western Civilization has explored the universe using science. Although we might credit Ancient Greeks like Aristotle or Epicurus, or 11th-century Muslims like Al-Biruni, for its origin, we can point to the Western methods of scientific inquiry pioneered by thinkers like Roger Bacon, Francis Bacon, and René Descartes. For Network Age inquiry adjustments, along with Sir Isaac Newton and Galileo, let’s add Stephen Wolfram to the list for his complexity insights in A New Kind of Science. And for fun, Kevin Bacon, for Six Degrees of Kevin Bacon influences.

Aside from Kevin and Wolfram, the simple version of the scientific method has a seemingly simple flow, and may also harbor a few serious flaws in light of the suppositions made here regarding models and beliefs. The simple scientific method description includes the following phases:

  • Observations in the natural world lead to …
  • Questions which lead to …
  • Hypotheses, many of which can be tested through controlled …
  • Experimentation the results of which are subject to …
  • Analysis, allowing hypotheses to be verified or falsified leading to …
  • Conclusions

Given the earlier assertion regarding personal models, my observations are mine. I see what I see. You see what you see. It follows that my observation, colored as it is by my own unique and limited set of filters, cannot be exactly the same as your observation. A foundational reliance on observation therefore necessarily constrains and limits the methods we use to ascertain scientific facts. Questions that arise from flawed observations may then also be flawed. The hypothesis, the ‘ergo’ in Descartes’ famous edict, is also a function of belief. Ergo requires a presupposed system of thought — a belief system — one based on Aristotelian logic.

Mathematics is oft cited as perhaps the single transcendent truth we know. Is mathematics reality or does it transcend reality? Part of the universe or beyond the universe? Discovered or invented? You see, mathematics is a foundational layer in the science belief system. We rely on measurement, calculation, and extrapolation based on mathematical assumptions or beliefs which we imbue in our experiments. We are obliged to accept mathematics as transcendent, the bedrock upon which our cathedral of science is built. Yet we cannot satisfactorily answer the simple questions asked in this paragraph.

All this is to say that proponents of science-only approaches to understanding the universe must also rely on a set of foundational beliefs. Rupert Sheldrake does a fine job of questioning scientific dogma in The Science Delusion. You surely realize that even questioning any of these 10 commandments, as it were, is a form of heresy in some quarters. But really? Just take the placebo effect, number X in the dogma list, as an example. Okay, we can’t really explain why it happens, but we observe it on a regular basis. Since we cannot explain it — the fact that some people believe something and are healed in the process — we label it as ‘placebo’ and simply move on, our dogma and assumptions still intact. If you’re not familiar with Sheldrake’s work, please open your mind and drink from that source. Understand how the 10 Dogmas of Science seriously constrain our progress. He even frames presupposed scientific dogma as just another unequivocal religion.

The Dogma of Science

Given Sheldrake’s assertions, there is effectively no difference between the Bible-thumping preacher who insists on a literal acceptance of King James as the only truth, and the atheistic science-thumping reductionist who would deny the existence of anything not grounded in their understanding of a standard model of physics. Both extremes rely upon and insist that others accept their world view; the belief system upon which their logic is founded.

Some would argue for a moral high ground. “Where are the fruits of invention from your religious beliefs? Where are the verifiable experimental results?” the science apologist may ask. To which the believer replies, “What are your ethics based on, if not the discoveries and assertions of my faith?”

Beyond schoolyard arguments for existence and creation, the implications of belief, as a key property of reality creation, profoundly impact every perception, every observation, and every choice we make. This is true for every individual. And the collections of those beliefs, in the aggregate, are true for every civilization.

None of this is to say there is no value in science, nor any reason to abandon religion. Obviously there is tremendous value in science. But it should be equally clear that the benefits of faith, in the aggregate, are just as profound. The histories of all civilizations our species may have birthed are written in the margins of faith-based tomes.

I see what I see. You see what you see. We create our universe with and in our thoughts. Of course we can establish agreements at some level regarding the nature of objective reality — the very reality that is demonstrably true whether we believe it or not. Clearly we can. Earlier we agreed we would. But it cannot be said that my understanding of objective reality is exactly the same as yours. My belief in the nature of objective reality is very likely very different from yours. Our interpretations of reality may be just as divergent. The fact is, when it comes to our collective understanding of the universe based on science, we simply cannot explain a litany of observations with accepted natural theories. Among these is the very nature of the universe itself. Physics is not certain. The deeper we dive into physics, the stranger the mechanics become. And we keep discovering new particles and forces, from atoms to quarks, that drive our bodies, our brains, and our universe. In light of new observations, theories are updated, with interpretations stretched to the point of absurdity at times. It is entirely possible that the universe is made up of dozens or thousands of dimensions we will never be able to experience or measure in any direct manner. To that extent, objective reality is an abstraction we may never quite pin down — at least not with colors only from the reductionist’s palette.

The existential abstraction in which we reside, a universe we will likely never fully grasp nor agree upon as an objective reality, is only the model we create with our thoughts. It is consciousness that fills in the gaps, as it were. Human consciousness. I, for one, believe we have not yet scratched the surface of what this amazing gift of consciousness can yield. Perhaps you are familiar with the work of Dean Radin.

Quantum woo

With thought alone, can I influence objective reality?

Dean Radin and others say yes. Independent verifications of Radin’s modification of the double-slit experiment, testing whether human thought can indeed change quantum outcomes, have concluded that a modestly (statistically) significant psychophysical interaction can be detected.


In modern physics, the double-slit experiment is a demonstration that light and matter can display characteristics of both classically defined waves and particles; moreover, it displays the fundamentally probabilistic nature of quantum mechanical phenomena.

The implications are enormous. It means that consciousness is not entirely constrained by nor subject to known physical laws. It means the mechanical views of reality we have embraced since Newton are woefully incomplete. It means our understanding of the quantum world may yet have implications beyond what we can explain with the Bohr-Heisenberg Copenhagen Interpretation. And it also means that belief can shape the reality that cloaks it. With that, we are now fully into the realm of quantum woo, which, along with Sheldrake’s dogma, serves to marginalize those who would be more open to explorations outside of sanctioned orthodoxy. The truth is, interpretations of quantum reality are all woo at some point. Even the more orthodox views. To avoid discussion of a creation from intelligence, are parallel universes not woo? Are 11 dimensions not woo? Is everything connected to everything, or the everything-is-a-hologram view, or panpsychism not woo?

Both science and faith can meet at the intersection of Kierkegaard and Heisenberg, each with a full-throated defense of its own explanation for existence, neither of which is fully congruent nor complete. But perhaps together, with both sides at the table, we might better explain our universe and choose ourselves into meaning.

If nothing else, the Dean Radin experiments should be taken seriously, replicated again and again, with as much funding as humanly possible. Would DARPA not be interested in minds controlling matter? Wouldn’t that set of super-powers be worthy of investment?

I believe in belief. Belief, as a manifestation and expression of human consciousness, quantum-entangled to everything I perceive. Belief is the leap into faith we all make, no matter our belief system. Belief is both the lever and the place to stand. And with that, perhaps we can actually move the world.

Pascal’s AI Wager

I believe in belief. A full essay on that particular statement I will save for a future blog entry. For the moment, suffice it to say that I believe that belief itself is a critical aspect of reality — perhaps the most important part. What we believe determines how we behave, which in turn determines so much of the reality we engage. Our lives are literally shaped by the confines of our beliefs. Even the most hard core realist must concede that the universe we see is only the universe we see (the one in our head), reflected on the back of our brains as shadows on a wall, colored invariably with the palette of our own beliefs. There’s more to say about that, but not here and now. Right now I want to share some thoughts I’ve entertained recently around belief and artificial intelligence.

The title of this entry comes from Blaise Pascal’s famous wager for the existence of God. Pascal asserted that it is more logical to believe in God than not, regardless of whether God exists. A 2×2 matrix is often cited, providing a pretty good rationale for belief in the divine as such.

Does God Exist?

Does God actually exist? That reality, juxtaposed with individual belief in God, provides a succinct outcome-based foundation for analysis. If I believe in God and God does exist, my reward is infinite good. If I believe and God does not exist, the bad is rather finite. If, on the other hand, I do not believe in God and God exists, the return on investment is infinitely bad, while the reward for non-belief in a Godless universe is only a finite good. Therefore, belief in God has by far the greatest positive payoff, either way. QED
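For readers who like to see the bookkeeping, here is a minimal sketch of that payoff matrix in Python. It is purely illustrative and not part of Pascal’s argument; the numeric stand-ins (infinity for the infinite outcomes, plus or minus one for the finite ones) are my own placeholders.

```python
# A minimal, illustrative sketch of Pascal's Wager as a payoff matrix.
# The numeric values are stand-ins: math.inf for the infinite outcomes,
# plus/minus 1 for the finite ones.
import math

payoffs = {
    # (I believe, God exists): payoff to me
    (True,  True):  math.inf,   # infinite good
    (True,  False): -1.0,       # finite bad
    (False, True):  -math.inf,  # infinite bad
    (False, False): 1.0,        # finite good
}

# Whatever the reality turns out to be, the worst case for belief is finite,
# while the worst case for non-belief is infinitely bad.
worst_if_believe = min(payoffs[(True, reality)] for reality in (True, False))
worst_if_doubt   = min(payoffs[(False, reality)] for reality in (True, False))
print(worst_if_believe)  # -1.0
print(worst_if_doubt)    # -inf
```

Read either as a worst-case comparison or as an expected-value bet, belief dominates in this framing, which is all the QED above claims.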

Clearly it is a simple matrix. And yes, I absolutely acknowledge the fact that a lot of smart people have taken issue with Pascal for one reason or another. But as a straw man, Pascal’s Wager has been around for a relatively long time and still has value as a thought experiment if nothing else.

With a nod to Pascal’s work, this entry hopes to offer some thoughts to consider as we embark on this particular chapter of the Network Age, which we may christen the Neural Network Age. If you’re not familiar with neural networks, learn about them. Perhaps more than any previous innovation, neural networks have the potential to radically transform everything we think we know as human beings. That is a bold statement. Such bold statements require elaboration at the very least, if not outright proof.

Enter Byron Reese — Entrepreneur, Futurist, Author, Inventor, Speaker. His most recent tome, The 4th Age, lays out the case for strong AI. Not narrow, application-specific algorithms or stochastic sieves that feed back and feed forward and sometimes exhibit actual learning from data over time, but Artificial General Intelligence. Conscious computers. Machines that think, feel, and are actual sentient beings as much as you and I are beings. Reese is an accomplished radical optimist, and I like his thinking. But, as should always be the case with radical thinkers of all flavors, lest we become a claque, I take issue with some of it. So, for the sake of discussion, casting Reese as Pangloss to our Candide, let’s consider Conscious Computers.

Per Reese, three big questions shape our beliefs, and hence our worldview. These foundational beliefs are also germane to how we view the possibility of AGI becoming actual. Are Conscious Computers even possible? Your answers to these questions underpin your beliefs regarding AGI:

1. What is the composition of the universe? Your belief, regardless of detail, probably falls into one of two categories: monist or dualist. Either everything in the universe is physically apparent, all arising from one common source, as it were, and that is all there is, or there exists a duality — mind and brain as distinct. The former is monist, the latter is dualist. Rene Descartes was the ultimate dualist: the immaterial mind and the material body, while being ontologically distinct substances, causally interact. The Greek philosopher Democritus was the first declared monist; monism views the many different substances of the world as being of the same kind. Which are you? Monist or dualist?

2. What are we? Your belief is one of three:

    1. MACHINES
    2. ANIMALS
    3. HUMANS (something beyond machine and/or animal)

3. What is your “self” – your consciousness? You have three choices:

    1. An illusion produced by your brain — a trick
    2. An emergent property of brain activity
    3. Something involving the brain, but beyond or outside the brain


Per Reese, you are more likely to believe that AGI, or Conscious Computers, can and will exist if you fall neatly into the monist group. Many scientists are monists. The Scientific Method practically requires a monist bias. If you are purely dualist, you are likely very skeptical of the possibility of AGI — you simply don’t believe it can happen or that we can create it. I claim there is a middle ground — neither monist nor dualist but both. Sort of a Forrest Gump perspective — it’s both, at the same time. Like light, both particle and wave. Yes we are machines, yes we are animals, yes we are human. Yes my ‘self’ is an illusion, a trick of the brain, but my ‘self’ is also an emergent property of brain activity, and yes it is also separate from my currently measurable physical apparatus. I have never had a cognitive problem embracing paradox. But that’s me. That’s not everybody. And that’s not the point of this entry.

The point of this entry is how belief impacts outcome, akin to Pascal’s original Wager.

We can pretty much line up dualists with God B+ in Pascal’s breakdown and monists with God B-. As for the middle ground, let’s stipulate that each side gets half of that subset, so we are able to create a 2×2 matrix for Pascal’s AI Wager:

ARE CONSCIOUS COMPUTERS POSSIBLE?

So the question of AGI is depicted with reality on one axis and belief on the other.

If we do not believe AGI is possible, and it is, modern mythology paints a universally dark future. We are doomed. Infinite bad. If we do not believe and it happens anyway, we will be surprised, like when one day Skynet comes alive and decides to eliminate the pesky threat of mankind. Or when our humanoid robots rebel violently. From Terminator to The Matrix to I, Robot to Blade Runner, if we do not believe Conscious Computers are possible and we are wrong, the outcome is bad. Very, very bad. At least insofar as our collective imaginations are concerned, as manifest in modern culture.

If we do not believe AGI is possible, and it is not possible, our futures may be less bleak, but still not good. Our mythologies suggest continued investment in technology with dehumanizing outcomes like Robocop or Brave New World. If machines cannot think as humans, then more human vessels are needed — cloning for body parts in the quest for immortality, for example, becomes the focus as we realize that uploading consciousness to digital offspring just won’t do the trick. This quadrant may also imply a slide into a repressive quagmire of indolence and dullness — Mr. Robot, or Ready Player One. I call this quadrant Final AI Winter, because we no longer believe in AGI and the ANI investments will have dislocated much of the humanity from us. This too is bad. Perhaps not infinite bad, but nonetheless very bad.

That’s if we do not believe. But what if we do believe?

If we believe Conscious Computers are possible, and they are not possible, our mythologies point to an alternate set of futures, none of which can be painted as terribly good. I call this quadrant Big Regulation because if we do believe we will invest as such and take precautions against the Terminator quadrant. We will rely on machines, but only to a certain point. And alas, the ability of technology to magnify the worst in us becomes the norm. People become redundant, and the displacement of human labor will be met with equal parts despair and alienation. The classic future here is Orwell’s 1984, followed closely by Hunger Games, Idiocracy, Elysium and others. So far, nothing very good here at all. So again, it’s bad.

Finally, if we do believe Conscious Computers are possible, and AGI is possible, the future may perhaps be Star Trek TNG, with Data as our Pinocchio surrogate. What is the great quest for AI, after all, if not to create human intelligence in a machine? But here too lie dragons. This is the quadrant of Kurzweil’s Singularity. The moment an intelligence emerges on our planet that is smarter than the smartest human, and that intelligence is fully capable of rapid replication and improvement of self, all bets are off. We cannot predict what will happen. We have no rules, no concepts, no game plans to fall back on. But at least, if we believe and it does happen, we will have beckoned forth those angels and girded our loins as much as possible for the consequences. This quadrant is the Event Horizon. We simply cannot know what lies beyond. Is it good? Is it perhaps the realization of Omega Point Theory, effectively Bootstrapping God? We cannot know.
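To keep the four quadrants straight, here is a small, purely illustrative sketch that collects them into a lookup table keyed on belief and reality. The boolean shorthand and one-line summaries are mine, not Reese’s or Pascal’s; only the quadrant names come from the discussion above.

```python
# An illustrative 2x2 lookup for Pascal's AI Wager, keyed on
# (we believe AGI is possible, AGI is actually possible).
# Quadrant names come from the discussion above; the summaries are shorthand.
quadrants = {
    (False, True):  ("Terminator",      "caught off guard; infinite bad"),
    (False, False): ("Final AI Winter", "dehumanizing ANI drift; very bad"),
    (True,  False): ("Big Regulation",  "precautions without payoff; bad"),
    (True,  True):  ("Event Horizon",   "the Singularity; unknowable"),
}

for (believe, possible), (name, summary) in quadrants.items():
    print(f"believe={str(believe):5} possible={str(possible):5} -> {name}: {summary}")
```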

I present this as Pascal’s AI Wager. It is neither as clean nor as compelling as the original. But I think there is merit in considering what we believe in the aggregate, juxtaposed with what will actually occur.

It is interesting to note that the monists, who were the effective losers in Pascal’s Wager, come out on the better side of Pascal’s AI Wager… maybe. Or maybe not. Maybe the best we can hope for is that AGI truly is a fantasy and can never actually happen. Even if ‘self’ is not a function of a dualist universe but rather an extremely rare and complex emergent property beyond engineering capabilities for millennia to come, at least then humanity may continue for a while longer, darkness notwithstanding. Would you rather survive in a Brave New Idiocracy World, or roll the dice on Star Trek knowing that Terminator, The Matrix, and God knows what else might be waiting just over the edge of the world?

I quite like Byron Reese’s latest book and strongly encourage you to read it. I wish I shared his radical optimism. I have long said that if there is hope for humanity it is in software, and I still believe that. But when it comes to AGI, I am an enthusiast, very curious but a bit cautious. You have heard it said to be careful what you wish for, as you just might get it. When it comes to Conscious Computers, though a dualist by nature, I am adopting the skeptical monist hat and suggest we all proceed with utmost caution — but proceed nevertheless. The truth is, from what we know and don’t know about human consciousness, in my view the probability of AGI happening any time soon is not much greater than it was 50 years ago. But just in case, Moore’s Law being what it is, we are well advised to exercise prudence on this journey.

………….
Remember those who died: 17 years ago today, 9/11/01 and 6 years ago today, 9/11/12
………….
