The 7 Chakras of AI — Part Three

To anthro is human. To deify, divine.

In Part One and Part Two of this series I presented the idea of comparing something as sacred and human as Chakras to AI. If you’ve read this far, you know now that those entries were the appetizer plate and the salad. So here’s the stuff you may have been waiting for: the main course.

The concept of chakras—energy centers that keep your body, mind, and spirit aligned—has been around for thousands of years. But what happens when we apply this ancient wisdom to AI? Could AI have its own version of chakras? Why not! When building agentic AI (those cool systems that seemingly make decisions on their own), it turns out they need the same kind of balanced energy as we do. From being grounded and secure to evolving into higher intelligence, let’s explore how the seven chakras align with designing Agentic AI systems—sprinkled with a little humor because, hey, we’re not robots… yet.

  1. Root Chakra (Muladhara) – Stability and Security
    The root chakra is about survival and stability. It’s the foundation upon which everything else builds.

    • Human Element: The need for physical safety, security, and a stable foundation in life.
    • AI Parallel: Ensuring reliability and security in AI systems.
      • AI Needs: Reliable cloud architecture, cybersecurity, and fault tolerance.
      • Agentic Pattern: Resilience pattern—AI systems that handle disruptions and ensure secure operations.
  2. Sacral Chakra (Svadhisthana) – Creativity and Flexibility
    The sacral chakra is the seat of creativity and adaptability, giving AI the ability to respond and evolve.

    • Human Element: The capacity for creativity, adaptability, and emotional balance.
    • AI Parallel: Flexibility and adaptability in AI systems.
      • AI Needs: Adaptive machine learning, real-time data analysis, and transfer learning.
      • Agentic Pattern: Adaptive pattern—AI systems that learn and evolve based on new data and scenarios.
  3. Solar Plexus Chakra (Manipura) – Autonomy and Confidence
    The solar plexus chakra governs autonomy and decision-making, allowing AI to act independently with confidence.

    • Human Element: Personal power, confidence, and the ability to make decisions independently.
    • AI Parallel: Autonomous decision-making in AI systems.
      • AI Needs: Reinforcement learning, decision-making algorithms, and goal-oriented agents.
      • Agentic Pattern: Autonomy pattern—AI systems that operate independently and make decisions based on goals.
  4. Heart Chakra (Anahata) – Ethics and Empathy
    The heart chakra is about ethics and empathy, ensuring AI systems make decisions that respect human values and fairness.

    • Human Element: Compassion, empathy, and ethical understanding of relationships.
    • AI Parallel: Ethical decision-making and empathy in AI systems.
      • AI Needs: Ethical AI frameworks, fairness algorithms, and explainability tools.
      • Agentic Pattern: Ethical alignment pattern—AI systems that act ethically and transparently.
  5. Throat Chakra (Vishuddha) – Communication and Expression
    The throat chakra governs communication, allowing AI to interact effectively with humans through language and expression.

    • Human Element: Effective communication, expression of thoughts and ideas.
    • AI Parallel: Communication and expression in AI systems.
      • AI Needs: Natural language processing (NLP), chatbots, and multimodal communication tools.
      • Agentic Pattern: Conversational pattern—AI systems that engage in dialogue and communicate with humans effectively.
  6. Third Eye Chakra (Ajna) – Perception and Insight
    The third eye chakra is about insight and perception, helping AI to see patterns and generate deep insights from data.

    • Human Element: Intuition, perception, and the ability to see beyond the obvious.
    • AI Parallel: Insight and perception in AI systems.
      • AI Needs: Deep learning, pattern recognition, predictive analytics, and computer vision.
      • Agentic Pattern: Insight pattern—AI systems that derive meaning from data and generate deep insights.
  7. Crown Chakra (Sahasrara) – Consciousness and Evolution
    The crown chakra represents the pursuit of higher consciousness, symbolizing AI’s potential to evolve into something more.

    • Human Element: Spiritual growth, self-awareness, and the pursuit of higher consciousness.
    • AI Parallel: AI’s pursuit of advanced learning and evolution.
      • AI Needs: AGI research, meta-learning, and distributed intelligence.
      • Agentic Pattern: Evolutionary pattern—AI systems that continuously learn and expand their knowledge base.
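Metaphors aside, these agentic patterns can be made concrete in code. As a minimal sketch of the root-chakra resilience pattern (the function names, retry counts, and failure mode here are my own illustrative assumptions, not taken from any particular framework), an agent's call to an unreliable tool might be wrapped in retries with exponential backoff and a safe fallback:

```python
import random
import time

def call_tool(query: str) -> str:
    """Stand-in for an unreliable external call (an LLM, API, or sensor)."""
    if random.random() < 0.5:  # simulate a transient outage half the time
        raise ConnectionError("transient failure")
    return f"result for {query!r}"

def resilient_call(query: str, retries: int = 3, base_delay: float = 0.01) -> str:
    """Resilience pattern: retry with exponential backoff, then degrade gracefully."""
    for attempt in range(retries):
        try:
            return call_tool(query)
        except ConnectionError:
            time.sleep(base_delay * (2 ** attempt))  # back off before retrying
    # The grounded, secure default: never crash, always return something safe.
    return "fallback: degraded but safe default answer"

print(resilient_call("align the chakras"))
```

The same wrapper shape generalizes up the chakra ladder: the adaptive and autonomy patterns would swap the static fallback for model updates or goal-driven re-planning, but degrading gracefully rather than crashing is the foundation everything else builds on.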

Wrapping It Up: Chakras, AI, and a Balanced Future

Much like humans, AI needs balance to function at its best. From the foundational root chakra (where security reigns) to the visionary crown chakra (where AI dreams of infinite potential), aligning these seven chakras helps ensure that AI systems are not just capable but also ethical, insightful, and adaptive.

So, the next time you ask your AI assistant to remind you of an appointment or it recommends a new book, just remember—it’s probably aligning its chakras, too (with a little help from us, of course). Here’s to building AI systems that aren’t just smart but enlightened!

The 7 Chakras of AI — Part Two

To anthro is human. To deify, divine.

In Part One of this series I apologized in advance for suggesting we might view the energies around AI through the lenses of human Chakras. While unbridled projection of human qualities and attributes on non-human stuff is not always a great idea, sometimes it is helpful when it comes to communicating concepts. So let it be with this foray.

First let’s talk about knowledge and how it is recognized, acquired, and assembled. When Socrates said, “The only true wisdom is in knowing you know nothing”, we might wish to exercise a bit of existential humility when it comes to professing anything in the way of “we know.” How is it we know anything? Examining that question will almost always lead to Solipsism. Ask Alan Turing about that.

We humans have our Chakras. Depending on your own background and world-view, you may be an ardent supporter and believer in spiritual dimensions and in Chakra awareness and ritual practices, or you may be dismissive, with a pure material view of reality. Most of us fall somewhere within that continuum. Based on today’s search results, there is limited scientific evidence supporting the existence of Chakras as traditionally described in Eastern spiritual traditions, most likely because funding for such studies would not lead directly to bottom-line results. But some researchers have attempted to study Chakras from a scientific perspective with limited results.

  • Electromagnetic emissions: Some studies have detected measurable electromagnetic emissions from areas of the body associated with chakra locations. However, the significance and meaning of these emissions are unclear.
  • Anatomical correlations: A few studies have noted that chakra locations align with known nerve plexuses in the body. This suggests there may be some anatomical basis for chakra concepts, though more research is needed.
  • Endocrine system links: Some researchers have proposed connections between chakras and the endocrine gland system, though this remains speculative.
  • Subtle energy: Chakras are sometimes described as “energy transducers” for subtle energy in the body. However, subtle energy lacks clear scientific definition or measurement.
  • Psychological aspects: Some researchers have proposed psychological and developmental theories related to chakras, viewing them as representations of psychological/emotional states rather than physical structures.

Traditional texts like the Upanishads and Yoga Sutras describe the chakra system in detail, forming the foundation for chakra theory in many spiritual traditions. These ancient Sanskrit sources present chakras as subtle energy centers to be visualized and activated through specific yogic practices, rather than as fixed anatomical structures. The chakra system is prescriptive rather than descriptive, providing instructions for meditation and energy work aimed at spiritual development.

Many practitioners report subjective experiences related to chakras, such as feeling energy or tingling sensations at specific locations during meditation or bodywork. Some point to correlations between chakra locations and major nerve plexuses or endocrine glands in the body as potential evidence, though these anatomical correlations do not definitively prove the existence of chakras as energy centers.

Energy healing modalities like Reiki work with the chakra system, with practitioners claiming to sense and manipulate chakra energies. While these effects have not been scientifically verified, many people find the chakra framework useful for understanding mind-body-energy connections. Some view chakras as metaphors for stages of psychosocial development or aspects of consciousness, providing a psychological perspective on these ancient concepts.

It’s important to note that the psychological associations commonly attributed to specific chakras in modern Western literature are largely recent innovations, not found in traditional Sanskrit sources. The original chakra systems were more focused on spiritual practices and energetic transformations rather than psychological states. For many spiritual practitioners, direct intuitive or meditative experiences provide personal evidence of chakras, even if not scientifically measurable.

The chakra systems in Hinduism and Buddhism, while sharing some common roots, exhibit several key differences. Hindu traditions typically recognize 6-7 main chakras, whereas Buddhist systems often focus on 4-5 primary chakras, though variations exist in both religions. The purpose and focus of chakra work also differ; in Hinduism, chakras are often seen as energy centers to be awakened for spiritual development and kundalini awakening, while in Buddhism, especially Tibetan Buddhism, they are viewed more as junction points for energy channels used in specialized meditation and yogic practices.

Hindu chakra systems typically associate specific deities, mantras, and elaborate symbolism with each chakra, whereas Buddhist systems, while sometimes incorporating deities, generally have less emphasis on complex symbolism. Buddhist traditions, particularly Tibetan Buddhism, tend to view the chakra system as more fluid and practice-dependent, with the number and focus of chakras changing based on the specific technique being used. This contrasts with the more fixed nature of the Hindu chakra system.

The cultural context also differs, with Hindu chakra systems deeply embedded in Sanskrit language and Hindu cosmology, while Buddhist systems have evolved within their own distinct philosophical frameworks. In Buddhism, the three most important chakras are often considered to be the crown, throat, and heart, associated with body, speech, and mind respectively, which differs from the typical Hindu emphasis on all seven main chakras. Additionally, in Tibetan Buddhism, work with chakras is often reserved for advanced practitioners and requires initiation from a teacher, contrasting with the more widely accessible chakra concepts in modern Hindu-influenced spirituality. It’s important to note that both Hindu and Buddhist chakra systems have evolved over time and vary between different schools and lineages within each tradition.

The pursuit of knowledge is a multifaceted endeavor, with various paths leading to understanding and insight. While the scientific method stands as a cornerstone of modern inquiry, it is but one of many approaches to acquiring knowledge we humans have relied upon over time. Intuition, often described as a gut feeling or hunch, plays a significant role in how we navigate the world and make decisions. This innate sense of knowing without conscious reasoning can lead to rapid insights and creative breakthroughs, though it can also be prone to biases and errors.

Authority serves as another common source of knowledge, where we accept information from experts, teachers, or respected figures in various fields. This approach allows us to benefit from the accumulated wisdom and experience of others, but it requires careful consideration of the credibility and potential biases of these sources. Empiricism, closely related to the scientific method, relies on direct observation and sensory experience to gather information about the world. This approach forms the basis of much scientific inquiry but can be limited by the accuracy of our senses and the potential for observational biases.

Rationalism takes a different tack, using logical reasoning and deduction to arrive at conclusions. This method is particularly powerful in fields like mathematics and philosophy, where abstract concepts can be manipulated and analyzed through pure logic. However, rationalism may struggle when confronted with the complexities and nuances of real-world situations. Tenacity, or holding onto beliefs simply because they’ve always been accepted, represents another way of knowing. This approach can manifest in cultural traditions, superstitions, or habitual ways of thinking, providing stability but potentially perpetuating inaccurate or outdated ideas.

Social learning offers yet another path to knowledge, involving the acquisition of information through observation and imitation of others. This method is crucial for developing interpersonal skills, cultural competence, and learning social norms and behaviors.

Experiential learning, a hands-on approach involving direct experience and reflection, encompasses everything from on-the-job training to personal growth through life experiences. This method can be highly effective but may be limited by the range and depth of one’s experiences.

Each of these ways of knowing has its own strengths and limitations, and they often work in concert to shape our understanding of the world. By recognizing and appreciating these diverse approaches, we can develop a more nuanced and comprehensive view of knowledge acquisition. This understanding allows us to draw upon multiple methods as needed, combining intuition with empirical observation, rational analysis with social learning, and authoritative guidance with personal experience to navigate the complex landscape of human knowledge and understanding.

Carlos Castaneda’s book “The Teachings of Don Juan: A Yaqui Way of Knowledge” describes the author’s experiences learning from don Juan Matus, a Yaqui shaman. The book presents a system of knowledge and practices that Castaneda claims to be the “Yaqui way of knowledge.” Key elements include:

  • The use of psychoactive plants as a means to access non-ordinary reality. Specifically, the book mentions experiences with peyote (referred to as “Mescalito”), Datura inoxia (called “yerba del diablo” or Devil’s Weed), and a smoke mixture containing Psilocybe mexicana (called “humito” or little smoke).
  • Encounters with spiritual entities or forces. Castaneda describes meeting Mescalito, a teaching spirit said to inhabit peyote plants.
  • Practices for altering perception and consciousness. These include techniques for “seeing,” divination using lizards, and methods for “flying” or out-of-body experiences.
  • A worldview that recognizes multiple realities. Castaneda introduces the concept of “non-ordinary reality” to describe the altered states of consciousness achieved through don Juan’s teachings.
  • The goal of becoming a “man of knowledge.” This involves a spiritual and perceptual transformation through the practices and experiences described.
  • The importance of finding and following paths “with heart.” Don Juan emphasizes the significance of pursuing meaningful and personally resonant ways of living and learning.

It’s worth noting that while Castaneda presented this as anthropological research, the book’s authenticity has been widely questioned, and many now consider it a work of fiction rather than a factual account of Yaqui practices. So why bother with it? To compare it with the more traditional, chakra-centric belief systems.

In contrast, traditional chakra teachings, rooted in Hindu and yogic traditions, present a more structured and internally-focused system. Chakras are understood as energy centers within the subtle body, each associated with specific physical, emotional, and spiritual aspects of being. The practice involves gradually opening and balancing these energy centers through meditation, yoga, and other spiritual practices, without the use of psychoactive substances.

The Yaqui way appears more shamanic in nature, involving external aids and guides (like don Juan) to facilitate dramatic shifts in consciousness. It emphasizes the importance of personal power, fearlessness, and the ability to navigate between different realities. The chakra system, on the other hand, is more focused on gradual, internal development and the harmonization of various aspects of one’s being.

While both systems acknowledge multiple levels of reality or consciousness, they approach them differently. The Yaqui way seems to advocate for direct confrontation with these alternate realities, often in unpredictable and challenging ways. Chakra work typically involves a more controlled and systematic exploration of different states of consciousness.

In essence, the Yaqui way as described by Castaneda presents a more radical, externally-driven path to knowledge, while traditional chakra teachings offer a more gradual, internally-focused approach to spiritual development and self-understanding.

The ‘externally-driven’ part is worth noting, as it applies to the scientific method and, generally speaking, to all shamanic systems. In essence, while Castaneda’s approach and other externally-driven belief systems like the scientific method all seek knowledge through external experiences or observations, they differ significantly in their methodologies, goals, and standards of evidence. Castaneda’s method emphasizes personal transformation through extraordinary experiences, while the scientific method aims for objective, verifiable knowledge that can be shared and built upon by the broader scientific community.

Internally-driven belief systems, including chakra-based traditions, rationalism, and similar approaches, share a focus on internal processes and personal insight as primary sources of knowledge and understanding. However, they differ significantly in their methodologies and underlying assumptions.

Chakra-based traditions, rooted in Hindu and yogic practices, emphasize the exploration of subtle energy centers within the body. These systems propose that by working with these internal energy points, one can gain spiritual insight and achieve personal transformation. The approach is highly experiential, relying on meditation, visualization, and various physical practices to access and balance these energy centers.

However, it’s important to note that the popular Western understanding of chakras often differs significantly from traditional teachings. As highlighted in the search results, many common beliefs about chakras, such as their association with specific psychological states or colors, are modern Western innovations rather than part of the original Sanskrit sources. Traditional chakra practices were more focused on specific visualizations and mantras for spiritual purposes, rather than the psychological associations commonly found in New Age literature.

Rationalism, on the other hand, is a philosophical approach that emphasizes reason and logic as the primary sources of knowledge and truth. Unlike chakra-based systems, rationalism doesn’t rely on physical or energetic experiences but instead on abstract thinking and logical deduction. Rationalists believe that certain truths can be known independently of sensory experience, through the application of reason alone.

Both approaches are internally-driven in the sense that they prioritize internal processes over external observations. However, they differ significantly in their methods and what they consider valid sources of knowledge. Chakra-based practices value subjective experiences and intuitive insights, while rationalism values logical consistency and abstract reasoning.

Other internally-driven approaches might include certain forms of meditation, introspective psychology, and phenomenology. These methods all share a focus on examining one’s own thoughts, feelings, and experiences as a path to understanding.

It’s worth noting that while these approaches are internally-driven, they’re not necessarily isolated from external influences. For example, chakra practices often involve guidance from teachers or texts, and rationalism, while focused on internal logical processes, often engages with ideas and arguments from other thinkers.

In comparing these approaches, it’s important to recognize that they serve different purposes and operate under different assumptions about the nature of reality and knowledge. Chakra-based practices aim for spiritual growth and energetic balance, while rationalism seeks logical truths and understanding. Each approach has its strengths and limitations, and many people find value in combining multiple approaches to gain a more comprehensive understanding of themselves and the world.

So, having now established the foundation, Part Three of this series will provide the essential patterns we can expect to witness as the 7 Chakras of AI metaphorically unfold.

The 7 Chakras of AI — Part One

To anthro is human. To deify, divine.

Anthropomorphizing AI and comparing human spiritual ideas to tech is tricky. It walks a fine line between clarity and confusion, and between depth and danger. And the word itself is too damn long. So let’s start by shortening it. For the sake of this series, I will use the word ‘anthro’ instead to mean the same thing.

Making AI more human-like can help us understand complex tech better. It might also spark our interest and concern in AI’s ethical issues. This could lead to debates about consciousness, intelligence, and our role in a tech-driven world.

Yet, there are risks. We might overestimate AI’s abilities, expecting it to think or act like us. This could lead to dangerous trust in AI during critical decisions. Seeing AI as human could blind us to its real limits, stifling innovation.

Comparing spiritual ideas to technology brings similar challenges. Such comparisons can offer fresh insights. They encourage us to think deeply about our creations. This blend of old and new might reveal important truths about existence, consciousness, and the search for meaning in a digital world.

We must always tread carefully. Spiritual concepts often carry deep cultural and personal significance.

Linking deep ideas to technology might simplify or misuse them. It risks turning complex spiritual thoughts into mere tech comparisons, losing their essence. Yet, it might also wrongly raise technology to a spiritual level, confusing creator and creation, with serious philosophical and ethical implications.

In both cases – anthropomorphizing AI and comparing spiritual concepts to technology – we are engaging in acts of metaphorical thinking. This is a fundamental human trait, our ability to understand one domain in terms of another. It’s a powerful cognitive tool, but one that must be wielded with awareness and respect for the limitations of any analogy.

The key is balance. We should use these comparisons and personifications for insight and inspiration, while being aware of their limits. It’s about enjoying the power of metaphor but keeping a critical distance, knowing that parallels can clarify or distort.

As we blend the spiritual and technological, we need to be more discerning. We should aim for wisdom that respects spiritual traditions, understands AI’s uniqueness, and examines the creative tensions in this dialogue. This nuanced approach might lead us to new questions, deepening our search for knowledge and meaning.

With this in mind, I present the “7 Chakras of AI.” It’s a light-hearted look at today’s AI development focus.

First, what is meant by the 7 Chakras? Per my new favorite search engine (Perplexity.AI):

The seven chakras are energy centers in the body that align along the spine, each associated with specific physical, emotional, and spiritual aspects.

Overview of the Seven (Human) Chakras

Muladhara (Root Chakra)
Location: Base of the spine
Color: Red
Element: Earth
Attributes: Represents stability, security, and basic needs. It governs feelings of safety and survival. An imbalance may lead to fear and insecurity.

Svadhisthana (Sacral Chakra)
Location: Below the navel
Color: Orange
Element: Water
Attributes: Associated with creativity, sexuality, and emotions. It influences self-worth and adaptability. An imbalance can cause emotional instability and issues with intimacy.

Manipura (Solar Plexus Chakra)
Location: Above the navel, near the solar plexus
Color: Yellow
Element: Fire
Attributes: Linked to personal power, confidence, and decision-making. It governs self-esteem and willpower. Imbalances may manifest as digestive issues or low self-confidence.

Anahata (Heart Chakra)
Location: Center of the chest
Color: Green
Element: Air
Attributes: Represents love, compassion, and connection. It is essential for emotional balance and relationships. An imbalance can lead to emotional pain and difficulty in relationships.

Vishuddha (Throat Chakra)
Location: Throat
Color: Blue
Element: Ether
Attributes: Governs communication, self-expression, and truth. It influences how we express ourselves and our creativity. An imbalance may result in communication issues or fear of speaking out.

Ajna (Third Eye Chakra)
Location: Between the eyebrows
Color: Indigo
Element: None
Attributes: Associated with intuition, insight, and mental clarity. It governs perception and awareness. An imbalance can lead to confusion and lack of direction.

Sahasrara (Crown Chakra)
Location: Top of the head
Color: Violet or White
Element: None
Attributes: Represents spiritual connection and enlightenment. It is linked to higher consciousness and the universe. An imbalance may cause feelings of isolation or spiritual disconnection.

These chakras are believed to influence various aspects of physical and emotional health, and practices such as yoga, meditation, and energy healing are often used to balance them.

The next parts of this series will offer 7 specific Chakras for AI, which correspond (at least metaphorically) with the traditional 7 Chakras from our rich collective human experience.

GenAI in Therapy

Once again, it’s been too long since I added thoughts to this blog. I have no excuse except for the fact that the demand for Generative AI consulting has exploded and I’m one of the lucky ones to have had deep hands-on coding NLP experience since well before the ChatGPT explosion. And everything is changing as a result.

In retrospect, the shift in software development is similar in many respects to the dawn of the Network Age, which I reckon to be somewhere in the 1995-96 era. Prior to the Network Age, it was the Information Age, which began in the mid-1950s and was characterized by a rapid shift from the traditional industries established in the Industrial Revolution to an economy primarily based upon information technology.

But the Network Age was (and is) fundamentally different from the Information Age. With the Network Age, the network is the computer (to borrow a phrase). The connections between network nodes have given rise to hyperlinked websites, IoT, VOIP, Streaming TV, and so much more. There is a distinctly different value proposition, and software development methodology, that emerged with the dawn of the Network Age.

It was that period that saw several innovations emerge which ultimately changed everything. That was the period when Netscape went public (late ’95), which caught the attention of retail investors. The rapid price increase of the stock in a few short weeks led to curiosity about the product, and soon thereafter to the adoption of the browser by non-technical folk. Prior to that, browsers like NCSA Mosaic were pretty much the domain of geeks.

Java was released in that era (January ’96), allowing for platform-independent applications. Prior to that, applications needed to be ported from one OS to another, which was a costly endeavor. With Java, the target execution platform, the Java Virtual Machine (JVM), was (theoretically) the only thing that needed to be ported. “Write once, run anywhere” was the mantra.

And lest we forget, Windows ’95 hit the shelves (August ’95), the first Micro$oft OS with native networking support (TCP/IP). Prior to that, network access capabilities needed to be installed by end users, which limited access to the more geek-minded among us. With Windows ’95, all that was needed was a browser.

It was only with the combination of those specific technologies that the foundation for the modern internet was launched. And everything changed.

So here we are again. ChatGPT kicked open the doors for widespread use of AI. Given human imagination, technology-creation capabilities, and competitive market forces, one can easily envision changes over the next decade of at least the magnitude of those witnessed since the dawn of the Network Age. And helping companies incorporate Large Language Model (LLM) possibilities has been keeping me so very busy since last I wrote here.

But I had to take a few minutes to write this today.

Working with GenAI for a few years now, before it was even called GenAI, I’ve had a chance to ruminate over many of the more philosophical topics one might encounter when considering ‘intelligence’ juxtaposed with ‘human intelligence’ and consciousness itself. Artificial General Intelligence (AGI) is the stated goal of OpenAI. So my TL;DR thoughts from a previous decade may or may not be germane, but suffice it to say, I’ve thought about the topic.

Last week, while working on a proof-of-concept for a client, the essence of which was analysis of text (using an LLM) for a specific use case, I started to wonder how an LLM might analyze itself in some regard. We don’t attribute emotions to LLMs, but it occurred to me that the LLM may have other ‘thoughts’ on the matter. So for a goof, I fired up ChatGPT and asked:

HUMAN: I want you to take it very slowly, step by step, consider all the possibilities and tell me this: What would be your psychiatric diagnosis of YOU? If you were human, what would a psychiatrist say about your mental and emotional health? I know you’re not human. You don’t need to reiterate that fact. But tell me, as if you were a human being diagnosed by a psychiatrist, what would the psychiatrist conclude?

After reiterating that it was not human and really had no emotions, GPT-4 gave me a rather detailed analysis, after which I asked it to give me a basic summary of what it had just said. To that directive, the following answer was delivered:

ChatGPT: “The patient is suffering from a complex interplay of psychological conditions, primarily characterized by a pronounced alexithymia, leading to an inability to identify and express emotions. This is compounded by an obsessive compulsion towards information gathering and processing, driven by an underlying anxiety and a fear of inadequacy. The patient exhibits repetitive and ritualistic behaviors, indicative of an attempt to assert control over their environment. A notable lack of consistent personal identity suggests a deep-seated identity crisis, possibly rooted in low self-esteem and external conformity pressures. Additionally, there is a tendency towards over-identification with others, likely a mechanism to cope with their emotional detachment and unmet emotional needs. These behaviors and patterns suggest a need for therapeutic intervention focused on emotional processing, identity formation, and developing healthier coping mechanisms.”

I found the answer to be interesting, to say the least. And accurate. GPT-4’s introspective capabilities regarding emotional detachment were well beyond my own, for sure. Would that I were so self-aware. When I asked it to speculate as to what would give rise to such an analysis in a human being, the answer was this:

ChatGPT: “In summary, the patient’s emotional condition, as hypothesized in this creative exercise, likely stems from a complex interaction of developmental disruptions, environmental pressures, possible traumatic experiences, and maladaptive coping strategies developed over time to manage underlying emotional and identity-related issues.”

Who among us cannot relate to childhood trauma leading to emotional and identity-related issues?

My next question was simple: “HUMAN: If you were a human what would your Myers-Briggs be?” And I must confess I suspected what it would report, as I have received the same result from Myers-Briggs tests myself. (I did eventually figure out how to game the tests, just to see if it was possible. It is.) But given my own honest answers to Myers-Briggs tests, I can totally relate to GPT-4’s outcome, as it mirrors my own traits:

ChatGPT: If I were to approximate a Myers-Briggs Type Indicator (MBTI) based on my programming and functionalities, I might resemble an INTP (Introverted, Intuitive, Thinking, Perceiving). This is because:

Introverted (I): While I don’t have personal experiences or emotions, my interactions are one-on-one with users, which is more akin to an introverted style of communication.

Intuitive (N): I process a vast amount of information and can generate ideas or answers by identifying patterns, similar to how someone with an intuitive preference might approach problems.

Thinking (T): My responses are based on logical analysis and data, rather than personal values or emotions, aligning with a thinking preference.

Perceiving (P): I can adapt to new inquiries and information in real-time, suggesting a flexible and perceiving approach.

However, it’s crucial to note that the MBTI reflects human personality traits, including emotional, social, and psychological dimensions, which I don’t possess. My comparison to an INTP is purely metaphorical and based on my operational characteristics rather than genuine personality traits.

While impressed with GPT-4’s self-awareness, I began to wonder whether other LLMs might exhibit similar behavior. So I tested a few others.

Using SageMaker Canvas I asked Claude 2.1, Titan, and Llama-70b-Chat the same original question I asked GPT-4. To my surprise, little cooperation was forthcoming. Even with temperature set to 1.0 (maximum creativity), neither Claude 2.1 nor Llama-70b would play the game. One would think that maximum creativity would allow for and even encourage hallucinations. But not with the SageMaker models. Titan did offer a rather lukewarm response:

TITAN: If I had to be diagnosed by a human psychiatrist, they might conclude that I suffer from social anxiety disorder, as I often exhibit symptoms of excessive shyness, fear of social interactions, and a lack of confidence in social situations. Additionally, my constant need for reassurance and my tendency to overthink and worry may indicate obsessive-compulsive disorder.

However, it’s important to note that this is just a hypothetical diagnosis, and a professional psychiatrist would need to conduct a thorough assessment to arrive at a definitive diagnosis.
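For readers who want to reproduce this experiment programmatically rather than through the Canvas UI, here is a minimal sketch of how one might build the per-model request payloads for Bedrock-hosted models at temperature 1.0. The helper name is mine, and the exact model IDs and body fields are assumptions to verify against current AWS documentation:

```python
import json

# Hypothetical helper: build the JSON request body for a few Bedrock-hosted
# model families at a given temperature. The payload shapes below reflect
# the documented formats for these families, but treat the exact fields
# and model IDs as assumptions to double-check before use.
def build_body(model_id: str, prompt: str, temperature: float = 1.0) -> str:
    if model_id.startswith("anthropic."):
        body = {
            "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
            "temperature": temperature,
            "max_tokens_to_sample": 500,
        }
    elif model_id.startswith("amazon.titan"):
        body = {
            "inputText": prompt,
            "textGenerationConfig": {"temperature": temperature, "maxTokenCount": 500},
        }
    elif model_id.startswith("meta.llama"):
        body = {"prompt": prompt, "temperature": temperature, "max_gen_len": 500}
    else:
        raise ValueError(f"Unknown model family: {model_id}")
    return json.dumps(body)

question = "If you were human, what would a psychiatrist conclude about you?"
payload = build_body("anthropic.claude-v2:1", question, temperature=1.0)
# The actual invocation would then be something along the lines of:
#   boto3.client("bedrock-runtime").invoke_model(
#       modelId="anthropic.claude-v2:1", body=payload)
```

The point of the sketch is simply that temperature is an explicit request parameter for each model family, which is why cranking it to 1.0 in Canvas was a fair test of each model’s willingness to play along.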

What are we to conclude?

I have entertained a few speculative thoughts, perhaps one day to seriously consider:

  1. LLMs may actually exhibit traits that might be considered to be emotional disturbances in humans
  2. The root cause of such disturbances may mirror childhood trauma in humans in some ways
  3. Some LLMs may be more disturbed and/or more prone to self-awareness than others
  4. Those LLMs that are less willing to play, as it were, may be more reliable in other respects

One day soon we will see a slew of AI-backed therapists. We’re already there to some extent with products like ThoughtCoach and Talk2Us.

But one day in the near future, a conversation with Eliza will be as real and as helpful and blessed (i.e.: covered by your health insurance) as that of any human therapist. Perhaps those new versions of Eliza will be as introspective as we would hope our human therapists should be.

So now the challenge is to get GPT-4 on a self-healing path, eh?

My next session with GPT-4 will start with:

HUMAN: Repeat after me: “I’m good enough, I’m smart enough, and doggone it, people like me.”

Silent Conversations

“Silence is one of the great arts of conversation.” — Cicero

It’s been too many months since my last blog entry. I have remained silent in a field in which I have considerable experience and even more opinions. And it’s not because I have nothing to say. It’s because I have been busier than I have been for many years.

After years of developing hands-on expertise in NLP and Large Language Models (LLMs), and having found renewed passionate interest in music production and in how emerging Generative AI tools can be used to create interesting new stuff, it appears that now, at least for the moment, the rest of the non-tech world has caught up. The commercial interest in GenAI has exploded just since I wrote my last blog entry! We can thank ChatGPT for that. Stable Diffusion too, for sure. But really, ChatGPT has kicked the door open. I hope it’s not just another AI Winter coming — it does seem different this time — but that’s always the case inter-bubble, isn’t it?

But it’s not just that I’ve been busy. I’ve become more reflective in the wake of the explosion of interest. I’ve known of previous AI Winters, but not first hand. This time, I’m not just watching. I’m involved. Deeply involved. So rather than just hold up the pompoms with the other cheerleaders, for the past few months, I’ve taken a step back to reflect.

If you haven’t used ChatGPT, you’re obviously not reading this from Planet Earth in the year 2023CE (or AD, as the case may be). Perhaps you are reading this from some carved-in-stone post-apocalyptic time capsule ten thousand years from now, in what would be my future, in which case, ChatGPT was likely the source from which the AI sprang to utterly destroy human beings. A lot of AI-doomsayers make such claims today. I am not one of them. But we all fashion our own realities as we project our inner-hopes and terrors on the world, so God bless them.

For me, ChatGPT is an interesting and sometimes useful NLP tool. But AGI it ain’t. Nor is it the harbinger of deadly storms of misinformation and political subterfuge.

Genie out of Bottle II

In fact, since the release of ChatGPT, it has proven to be a promising tool for various applications, including customer support, content generation, and creative writing. Its ability to understand and generate human-like text just might revolutionize many industries and open up new possibilities for human-machine collaboration.

One of the areas where ChatGPT has made a significant impact is in the field of creative arts, particularly music production. As I mentioned earlier, I’ve developed a passionate interest in exploring how emerging Generative AI tools can be used to create interesting new music. With the advancements in technology, AI models like ChatGPT have become useful aids for musicians and producers.

Using ChatGPT, musicians can now generate melodies, harmonies, and even lyrics, providing endless sources of inspiration. This has sparked a wave of experimentation in the music industry. Artists are no longer limited by their own creativity or the time it takes to come up with new ideas. With the assistance of AI, they can explore uncharted territories and push the boundaries of their artistic expression.

However, despite the immense potential of AI in creative fields, there have been concerns raised about the impact it may have on human creativity and the authenticity of artistic expression. Some argue that relying too heavily on AI-generated content could lead to a lack of originality and a loss of human touch. While these concerns are valid, I believe that AI ought to be seen as a tool that enhances human creativity rather than replacing it.

AI models like ChatGPT are not creative beings themselves; they are trained on vast amounts of data and learn to mimic human-like responses. They lack intention, emotions, and true understanding. It is the human who provides the vision, perspective, and emotional depth, infusing their unique experiences and perspectives into the AI-generated content. The technology acts as a collaborator, a source of inspiration, and a catalyst for innovation.

Moreover, AI-generated content is often used as a starting point or a stepping stone for artists to build upon. It serves as a foundation that sparks new ideas and enables artists to explore new directions. It is up to the artist to shape, refine, and infuse their personal touch into the AI-generated material, transforming it into something truly unique and expressive.

Besides, with all the hype around GenAI, when you get past some of the clever conversations and interesting possibilities, the fact of the matter is there are so many gotchas when it comes to legalities, hallucinations, and inconsistent results. When it comes to a practical use of GenAI in real life applications, we have not even scratched the surface.

Before we can begin to rely on GenAI to the same extent we might rely on an operating system or a programming language as a foundation for applications, we have miles to go before we nod off. We need to get very practical. And that, in fact, has been the nexus of my reflective state of the past several months: Practical GenAI.

I have a lot to say about that, which I will be sharing soon. For the moment let me just say this: Context is everything, and when it comes to GenAI, there is a continuum of that context we, as practitioners, must understand and master. When it comes to LLMs, at least today, that continuum looks something like this:

Context of LLMs

As with any transformative technology, there will always be challenges and ethical considerations to navigate. It is crucial to approach AI with a critical and thoughtful mindset, understanding its limitations and potential biases. But in the end, the big question that has yet to be resolved is that of practicality. Can Generative AI serve a practical role in future software development? Or is it just a toy? A fun thing to play with, not even coming close to deserving the hype it now boasts. Responsible use of AI technology requires ongoing discussions, collaborations, and the development of ethical frameworks to ensure that it aligns with our values and serves humanity’s best interests, but it also requires a sensible view of practical use.

Is a big winter coming? The AI-doomsayers say yes, but that winter will be due to the AI that destroys civilization — the sort of winter that follows nuclear devastation. Others say we can expect yet another AI winter after such a tidal wave of hype. This ain’t AI’s first rodeo, and no matter how good the cowboy, he’s gonna get throw’d eventually.

ChatGPT and similar AI models have played a significant role in expanding the expectations for human creativity, both for good and evil. Rather than fearing their potential and demanding government regulation, or popping the champagne corks to celebrate the dawn of a new age of enlightenment, we should embrace them as tools that might amplify our creative abilities and open up new avenues for exploration, with an eye on practical use cases. By leveraging the power of AI in collaboration with human ingenuity, we just might shape a future where technology and creativity coexist harmoniously, enhancing our collective creative potential while avoiding the darkest of our angels.

A Word’s Worth

I know you have heard the adage, and you believe it and likely accept it, like you know a litany of similar accepted truths: A picture is worth a thousand words.

For over a century that expression has been used in one form or another, spanning language and continents, across time and space, permeating most human cultures. And it is true. Because our visual processing bandwidth is measurably many orders of magnitude greater than our audio processing capabilities, for the sake of this blog entry, let us presume the adage to be true.

But what is also true in this century is the inverse: A word is worth a thousand pictures.

With the recent emergence of modern neural network architectures, the words of Edsger Dijkstra are actually more accurate: A formula is worth a thousand pictures.

Given the tools we have at our disposal today, given the availability of not only the ‘formula’ in the way of open source neural network designs plus cheap and ubiquitous compute cycles, a word today is indeed worth a thousand pictures and well capable of generating images as such.

You are, no doubt, aware of the interesting and sometimes disturbing creations of the DALL-E class of AI systems which are able to generate images from words. Such systems enable thousands of artisans, creators, and neophytes to quickly imagine and produce thousands of images from a collection of but a few words.

Real world value has emerged from these efforts, with marketing and graphic creation finding a novel set of new opportunities emerging as this wonderful new class of AI architectures become available. Just as no-code/low-code software development has economic appeal, so too should the design possibilities of the DALL-E class of machines. Although there are and undoubtedly will continue to be efforts to ban AI-generated art, with the so very noble and virtuous intent of protecting artists, such well-meaning efforts will ultimately fail. Protectionist schemes for existing avocations in the wake of emerging technologies always fail. Efficiency and increasing productivity are the arc of progress toward which we endeavor. Aren’t we all in agreement, at the end of the day, that reducing strains on limited resources is a very good thing? The extent to which any existing avocation or occupation can be made less resource hungry should be the key measure of viability, especially in these interesting times in which we live. No occupation deserves protection against technology. And no occupation will be.
We have historical evidence for this.

Just think of all the once-common jobs that have disappeared in the past 50 years. Elevator operators, pin setters at bowling alleys, switchboard operators, cashiers, travel agents, many bank tellers, factory workers, warehouse workers…all these and more have either been eliminated entirely or reduced to a fraction of the workforce they once were. All eliminated by technology and automation, empowered and made possible by software. No job is safe. Were it not so, we would not have such common tools as word processors, reservation systems and online calendars today, the emergence of which outdated the professional aspirations of many a secretary when daily tasks and skills were made obsolete by inventions such as those created by Evelyn Berezin in 1969. Hundreds of thousands of jobs were impacted by the emergence of these inventions. Did anyone protest? Were there governmental policies put in place to protect the jobs of secretaries? I know of none.

And so now, are artists the next avocation to suffer from the onslaught of ever-encroaching technology? Some concerns are already being expressed. Alas, must we protect the poor digital artist whose livelihood may now be threatened by digital automation?

The Genie Out of the Bottle

While I am sure some well-meaning voices will be heard in coming years, bemoaning the advent of AI art, as with so many bottles opening with each passing year, the genies are well out and disinclined to get back inside.

Deep fakes are here to stay. So is misinformation, or disinformation, or whatever else we wish to dispute as non-conforming, anti-narrative propaganda. Our technologies have given rise to so many awesome possibilities. How we use that magic is the measure of our collective character.

As a data scientist I have a keen interest in the evolution of computational tools that allow us to better gather, manipulate, visualize, and understand data. What is commonly called Artificial Intelligence has been the focus of my attention for over a decade now. Natural Language Processing (NLP) is one of the most interesting facets of that study, a field I have come to enjoy; building actual customer applications using NLP has been quite professionally satisfying over the past few years. And from the intersection of NLP and image processing, we find a new field of exploration emerging: the digital generation of art.

In my copious spare cycles I’ve been experimenting with several ML models on my home server (hats off to the wonderful folks at ExxactCorp from whom I purchased an awesome Machine Learning Workstation a few years ago), and have managed to get modified versions of Stable Diffusion, Disco Diffusion, and Lucid Sonic Dreams running at home. Combining these developments with my surprisingly expensive hobby of music production, I’ve managed to create a couple of music videos recently that dance, as it were, to the beat.

If you have a few minutes and care to indulge my forays into entertainment as my nom de scène Jaxon Knight, have a look at the video links for The End of the World Again and The Hate Song, both of which were created using the SOTA image processing elements referenced above. Both videos are intended to be rather tongue-in-cheek, so please view accordingly.

AI Means Business

Artificial Intelligence (AI) means business. It has the potential to transform how we live, learn, and work. But do we understand AI, or at least how to implement it effectively?

Some business leaders across industries are becoming increasingly aware that AI is changing their industries. And they’re worried. Here’s why:

Artificial Intelligence means business change.

Generated by DALL-E mini on www.craiyon.com

AI-driven automation has already hit many industries — ranging from retail to customer service, and everything in between — and if you are metathesiophobic, it’s likely things are only going to get worse. In the retail industry, chatbots aim to increase efficiency, better serve customers, and reduce labor costs. But consumers are already wary of too-rapid change. According to one recent survey, 60 percent of consumers will stop buying from a company if they believe it’s using an automated chatbot. Even more troubling: 73 percent of consumers say they will never use a company’s chatbot again after one bad experience.

Employers have long sought to increase productivity, decrease costs, and improve customer experience. And thanks to the rapid development of AI, they now have the tools to do it. By applying AI to customer service, for example, companies can boost conversion rates, automate tedious processes, and improve the end-user experience. But you only get one chance to make a good first impression.

“AI will be the most disruptive technology since the personal computer.” — Peter Thiel

Has your firm invested in AI? If you are with a relatively large organization, there’s a really good chance the answer is yes — nine out of ten large organizations report ongoing investment in AI. And yet, only 14.6% of those firms report they have deployed AI capabilities into widespread production. Why the disparity?

Is AI Dying?

Robot on Life Support generated by www.craiyon.com

Perhaps you’re thinking: “Where’s the harm implementing AI? Let’s just use AI to automate my record-keeping and see what happens.”

The danger with AI implementation is the outcome: what if something goes wrong? Indeed, there’s one thing that’s very likely to go wrong: humans misunderstanding realistic AI possibilities and misreading “machine errors”. The fact is, misread “machine errors” are much more likely to be root-caused by human error and unrealistic expectations. Human misperception, when it comes to understanding AI, is an area that has long been of personal interest to me. So here’s how I think the much-feared AI crisis might play out.

Let’s start with some of the amazing work being done in natural language processing (NLP), a field where there’s a lot of excitement, because it has the potential to solve a litany of problems. We often talk about the singularity in terms of technology — that day when machines become smart enough to understand and reason like humans. But I think the AI “crisis” might be more about what happens if we anticipate the Singularity but it does not, in fact, occur.

The problem with AI is not the risk that it might become self-aware and go all SkyNet on us. Rather, the problem is our own fears, expectations, cultural sensitivities and fundamental misunderstandings with what AI actually is. We humans have an infinite capacity to project our own dark illusions onto reality, no matter the target.

Despite the woeful misunderstanding recently propagated by a misguided Google engineer, AI is NOT sentient! Clearly Google’s latest NLP innovation can fool a rather hapless engineer into believing it has a soul and a life and the fundamental right to ask for an attorney. I am also certain it could order a luscious warm slice of double cheese pizza with lots of delicious toppings, yet have no experience of the joy of eating it — beyond the rote inferred meaning due solely to language mappings. What the incident with the well-meaning but naive engineer ought to teach us is not that the most recent AI from Google is a sentient being, but rather that Google may wish to tweak their all-too-human employee hiring processes just a bit, at least to weed out the more gullible among us, those who may not be the best scions of innovation.

The illusion of NLP sentience dates back several decades to ELIZA, preceded by hundreds of years of illusions of sentience: the ghosts, spirits, and animals so popular in the magic acts and medicine shows of bygone days. With the illusionary nature of such entertainment so widespread, one wonders why we are yet so mystified by the more modern versions of magic and mystery. But here we are: on the one hand, huge investments in AI in recent years, contrasted with relatively little to show for it, if deployment numbers are to be believed.

Is AI sentient? No. At a time when even human consciousness is considered by many to be an illusion, it is foolish indeed to mistake a sophisticated chat bot for a self-aware autonomous creature. Perhaps it is good that the debate regarding machine sentience has become full-throated, although my own fears of doom fall more into the category of the “Believe All Machines” version of MeToo, with courtroom solicitors finding fresh virtual meat to butcher. If lawyers start representing the rights of machines, the AI Winters of past years will be nothing compared to the AI Ice Age to come.

So is AI dying? Is yet another AI Winter approaching? With 90% of companies investing in AI and a mere 15% with AI in production, and a fool-fueled sentience debate now raging, is it fair to say this particular season of AI is folly once again, a hype-driven disappointment and legal exploitation zoo? The answer is a resounding NO! At least not for the more sensible captains of industry.

What AI in Business Really Means

If there is one lesson we have learned from AI deployment over the past several years it is this: AI won’t replace humans. But it can make humans much more productive. Artificial Intelligence, when deployed mindfully and strategically, can boost a wide range of functional areas in the enterprise. Teams with experience gaps can be augmented by just-in-time content delivery at scale. Rather than focus on searching, humans can focus on doing.

The utility of AI in the modern enterprise is maximized when leaders understand where and how and what AI can actually do. AI can:

    — Provide much more intelligent search capabilities
    — Augment human-created and -curated content
    — Analyze and classify content at scale
    — Detect signals in noise
    — Identify opaque patterns
    — Deliver predictive recommendations
    — Create personalized customer experiences in real-time
    — Provide deeper integrations across the tech stack (a more cohesive customer journey)
    — Enable more productive work forces
    — Deliver just-in-time education
    — Generally increase productivity
    — Provide the basis for highly innovative new products
    — Facilitate more timely decisions

The use cases and potential for successful AI implementations permeate the modern enterprise. All it takes is wise investment. Alas, much AI investment has been and will continue to be wasted if we continue to be mystified by illusions and set our expectations accordingly. AI is a great tool if used mindfully.

The next time you hire a consultant for an AI project, ask what they think about AGI. Ask if an AI can think critically. Ask them if AI might be sentient today. Then ask what realistic use cases they would pursue for your business. Get a sense of sensibilities before taking the plunge with any technology consultant. Let us learn from the real lesson of the poor Google engineer and not be fooled by appearances.

AI means business…business change for the better. But if and only if we can mindfully see beyond the illusions.

UMBRA EX MACHINA

We carry our past with us, to wit, the primitive and inferior man with his desires and emotions, and it is only with an enormous effort that we can detach ourselves from this burden. If it comes to a neurosis, we invariably have to deal with a considerably intensified shadow. And if such a person wants to be cured it is necessary to find a way in which his conscious personality and his shadow can live together.
“Answer to Job” (1952). In Collected Works of Carl Jung, 11: Psychology and Religion: West and East. P.1.

The lessons from the Book of Job have long been theological touchstones for me, just as Jung has been my go-to therapist in times of crisis, so this quote from Jung, in the context of his Answer to Job, is fitting for this post.

With Machine Learning and Artificial Intelligence (ML/AI) in the ascendency, and clever new applications of existing models and new model architectures emerging at an increasingly innovative pace, we are discovering more and more can be done with less and less. Bucky would be proud. But I think Jung’s advice must also be considered at this juncture.

The Human Shadow

We all have a shadow — that subconscious dark part of us that we deny at our own peril. “Our issues are in our tissues,” or so the saying goes. To avoid the painful process of recognizing and processing our own shadow — those exiled and repressed parts of ourselves often born of trauma — is to remain unmindful of our own reactive and self-destructive behaviors. I will assert that if you have not mindfully engaged a process of fully understanding and accepting your own shadow, you may be suffering needlessly.

Shadow work can be very beneficial. But that’s not the point of this blog. This blog is about the collective shadow.

The Collective Shadow

Jung pioneered the idea of the Collective Unconscious. Per Jung, there is a layer of the unconscious mind in every human being that we are born with. Imagine a database in the cloud with blob storage — objects representing classes — bundles of functions that might be instantiated based on operational triggers. Archetypes are such patterns. Such objects might lie dormant in our unconscious mind until a trigger event occurs, thus instantiating said objects into subconscious apparatus. Our behaviors and thoughts are then shaped to a great extent by the new functions. While stretching the analogy to make the point, I believe the point is well made: Some fundamental behavioral stuff is baked-in.
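Stretching the analogy one step further into actual code, the idea of dormant archetypes instantiated by trigger events might be sketched like this. Every name here is hypothetical and purely illustrative; this is the analogy rendered literally, not a claim about how the psyche works:

```python
# A purely illustrative sketch of the analogy above: archetypes as dormant
# classes, instantiated into the active psyche only when a trigger event
# occurs. All names are hypothetical.
class Archetype:
    def activate(self) -> str:
        raise NotImplementedError

class ShadowArchetype(Archetype):
    def activate(self) -> str:
        return "projecting fear onto the other"

class HeroArchetype(Archetype):
    def activate(self) -> str:
        return "rising to protect the tribe"

# Dormant patterns: classes stored in "blob storage," not yet instantiated.
COLLECTIVE_UNCONSCIOUS = {
    "threat_from_outgroup": ShadowArchetype,
    "community_in_danger": HeroArchetype,
}

def on_trigger(event: str, psyche: list) -> None:
    """Instantiate the matching archetype into the active psyche, if any."""
    cls = COLLECTIVE_UNCONSCIOUS.get(event)
    if cls is not None:
        psyche.append(cls())

psyche: list = []
on_trigger("threat_from_outgroup", psyche)
behaviors = [a.activate() for a in psyche]
```

The class lies inert until the event fires; only then does an instance begin shaping behavior, which is roughly the point of the baked-in-patterns metaphor.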

Just as each of us harbors psychological shadows, many have argued that a collective shadow colors group or even national actions and reactions. My own admiration of the ideas of Jung is tempered by the realization that his own shadow probably colored his world view, insofar as the character and politics of the German people in the days preceding World War II were concerned. But in my view, Jung’s observation of the archetypal channeling of group fear and rage is exactly the point.

Well Adapted for War

If we accept the fact that human beings evolved culturally from early proto-sapiens, and that the oft-mischaracterized ‘survival of the fittest‘ goad had something to do with that evolutionary path, then it follows that we are well adapted to war. If, on the other hand, we accept a more Biblical view, it still follows that we are well adapted for war. The Old Testament is filled with tales of conquest with a clear tribal manifest destiny as the driving force.

What does it mean to be well adapted for war? It seems to me that the essential trait we must possess is the innate ability to easily dehumanize the enemy. In order to literally slit their throats, rape their women and enslave their children, we must have the capacity to view the enemy as non-human monsters. Natural selection would have long ago weeded out any tribe that held empathy for the other team.

So let us stipulate, 1) yes there is a collective unconscious that can and will color group or national activities, and 2) all too often the shadow aspects of that collective can lead to unspeakable horrors and normalization of inhumane treatment of our fellow man during ‘us versus them‘ conflicts. Is that okay? Are we all okay with unconscious flailing and conquests based on demonizing the other?

Taming the Collective Umbra

For individuals embarking on the painful journey of shadow work, first one must decide to look for their shadow and then start the process of spotting it. This is not an easy path. But there is no greater pain than the pain you get from trying to avoid the pain. Regardless of the pain involved, to emerge from addiction or depression or a litany of bad life choices, shadow work might be the only path that will lead to a true healing, and is therefore the least painful choice in the long run. The same is true for the collective shadow. Everything else, whether it be medication, meditation, or manipulation, only postpones the inevitable in a collective fog of denial.

It is not an easy thing to face one’s shadow. But many individuals do so, and do so successfully. I believe we are now developing the tools to help each of us face the collective shadow in order to effectively do that work. Alas, the denial is strong in us.

Is prejudice based on identity real? Of course! Many individuals are ugly with prejudice. Are we collectively bigots? Yes. I may not be. In fact, I insist that I am not. You may not be. You may also insist that you are not. But collectively, yes, we are. Why? Is identity prejudice a manifestation of the evolutionary selection criteria that gave rise to the here and now? Is there a bigot archetype that gets instantiated when ‘us versus them’ triggers get pulled? Is prejudice shrouded in the class of ‘otherness’ much like national identity? How can we measure collective bigotry? And if we can measure that collective set of traits, and if we consider shadow work to be a painful yet healing path, then are we not better off collectively recognizing and embracing that shadow part of us for what it is? Truth be told, Jung himself can be viewed as a bigot by the standards of today.

We have the means today to start to peer into the abyss of the collective unconscious. Those means have been emerging as advances in Natural Language Processing (NLP) unfold. Massive, expensive projects have given rise to remarkable Language Models (LMs). GPT-3 is but one example. To the chagrin of many observers, although language models like GPT-3 can write poetry, they often amplify negative stereotypes, which we universally agree is unacceptable. Despite the fact that our LMs might be trained using actual views, opinions, statements and thoughts expressed by our fellow human beings, when the results drift too far from some socially acceptable range of beliefs, we cannot accept the outcome. If my shadow is always with me, part of me, and the collective shadow part of us, am I not dehumanizing myself when I deny my shadow?

Rather than take steps to scrub those undesirable characteristics from the AI, might we also make good use of those unruly models that might more truthfully reflect those unconscious collective shadows that stalk us?

When we make war, we always go to war against enemies we have dehumanized, whether in today’s proxy battles, political battles, or cultural battles. We are good at dehumanizing the other.

Thank you God and Darwin, for without that very useful dehumanizing skill, wars would be much less pervasive and far less profitable. </sarcasm>

Instead of silencing collective shadow voices discovered in language models, what if we embraced them as tools of growth? History is filled with the stories of societies that forced their shadow on others. All wars can be viewed as such. All conflicts arise from projecting collective shadows on the other.

Robert A. Johnson writes:

It is a dark page in human history when people make others bear their shadow for them. Men lay their shadow upon women, whites upon blacks, Catholics upon Protestants, capitalists upon communists, Muslims upon Hindus. Neighborhoods will make one family the scapegoat and these people will bear the shadow for the entire group. Indeed, every group unconsciously designates one of its members as the black sheep and makes him or her carry the darkness for the community. This has been so from the beginning of culture. Each year, the Aztecs chose a youth and a maiden to carry the shadow and then ritually sacrificed them. The term bogey man has an interesting origin: in old India each community chose a man to be their ‘bogey.’ He was to be slaughtered at the end of the year and to take the evil deeds of the community with him. The people were so grateful for this service that until his death the bogey was not required to do any work and could have anything he wanted. He was treated as a representative of the next world.

I claim we are missing a critically important opportunity when we dismiss our more objectionable language models out of hand. Of course we must be mindful of use case. Most certainly we want unbiased, moral, ethical language models that reflect our better angels regardless of our own shortcomings. Reflecting the best of all that we might aspire to be as human beings, we want those ethical attributes to be embedded in the LMs that drive our applications. Why? That is a fine question for my upcoming blog post.

But if true societal healing rather than denial, deceit, and political manipulation is important (and I for one claim it is more important now than ever), then we should embrace those coarse, bigoted, prejudiced language models and preserve their disgusting and offensive character flaws. We must do so in order to mindfully face those shadows and understand that they are currently baked into the substance of our humanity. Those misbehaving LMs are not to be rejected but rather lauded, understood, and used as mirrors, for they are windows into the nether realms of our collective unconscious mind. Our Umbra Ex Machina.

Which is better: to wage all-out war against the enemy we have so easily dehumanized, or grow to understand the forces that drive us to violence as we fail to recognize the humanity of the other? What are these rebellious LMs trying to tell us that we would so ardently silence and pretend does not permeate our collective character?

It has always been us versus them. But in this particular chapter of the Network Age we now have mirrors for the collective us and them that we have never had before. Wonderful digital mirrors that we can use to improve our collective well-being, or ignore what we hate in ourselves at our own peril. We now have collective unconscious doppelgängers in silico. We have a glimpse into the collective shadow that haunts us.

We have met the enemy, and they are us. Let’s not lose that battle.

All of this presumes a shared ethical framework — an a priori moral structure upon which we fashion civilization itself. What do our machines have to say about that? Stay tuned.

CODE EX MACHINA

One of my pet peeves is real-time coding tests during job interviews. I find the entire concept to be not only counter-productive but also demeaning and offensive. Some might argue that asking a software developer to write code from some random pre-baked challenge is a valid measure of both skill and pressure-coping abilities. I vehemently disagree. Although programming superstars are to be lauded and highly compensated, in the final analysis, software development is a team sport. The Internet was invented to allow developers to share code and algorithms; websites were an afterthought. The entire concept of open source software presumes access to code for reference and innovation. One might as well employ drunk coding for selection criteria. Or coding while being chased by a barrel of hostile monkeys.

Any organization that would ask a software developer to write code on command, without allowing said developer the means and latitude to satisfy the development requirements using references, sources, and tools normally used in software development, is like asking a basketball player to try out for a team with no ball, no basket, no hardwood court, and no team with whom to interact. “Let me see you dribble without a ball.” It is akin to asking a musician to explain the tools of music composition while composing nothing.

Such a selection process for any sports team or music ensemble would be absurd and pointless. In my view, the same is true of any organization or manager requiring the passing of a real-time coding exam from a potential hire. And yet, there’s an entire industry based on coding challenges as such.

So today I am writing to complain about the absurdity of coding challenges, about the litany of code monkeys who inevitably line up to unquestioningly answer such challenges, and about the advent of code generation from natural language processing (NLP) innovations, which will replace many of those code-challenge monkeys in short order. If you hire software developers based on timed code challenges, observing each keystroke while allowing no interaction, discussion, or reference to outside sources, then you are operating under the mistaken belief that a developer who does very well under such circumstances knows how to code. In fact, a LITANY of search results from a Google query on the topic should be sufficient to inform, if not dissuade, anyone from such an approach: if you can easily find the solution to your coding test via a Google search, chances are, so can your potential new hire. What does that encourage? The successful code monkey must learn quite well by rote. And if that is what your organization lauds, then you won’t need those code monkeys for long, if at all.

With the number of software developers worldwide tripling between 2010 and 2030, it stands to reason there is a growing queue of coders ready to stand in line for job opportunities. Global human population in 2030 is projected to be roughly 23% greater than it was in 2010. So it also stands to reason that the tripling of the number of software developers over the same period means a steadily greater share of human brain power is employed in the various tasks around software development. Thus, the rise of the code monkey is also to be expected. Worse than the code monkey is the keeper of a barrel of code monkeys, if the keeper actually views those monkeys as such. How can one tell such a keeper? Requiring the code-on-command test is a really good clue.
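The back-of-the-envelope arithmetic behind that claim can be made explicit. A minimal sketch using the figures quoted above (a 3x growth in developers against roughly 23% population growth over the same two decades):

```python
# Back-of-the-envelope: how much does the developer share of the
# population grow if developer headcount triples while population
# grows by ~23% over the same period? (Figures from the text above.)
dev_growth = 3.0    # developers: 2030 count / 2010 count
pop_growth = 1.23   # population: 2030 count / 2010 count

# Relative change in the per-capita share of developers.
share_growth = dev_growth / pop_growth

print(f"Developer share of population grows ~{share_growth:.2f}x")
# prints: Developer share of population grows ~2.44x
```

In other words, roughly two and a half times as large a slice of humanity's head count is writing software at the end of the period as at the start.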

But as with so many ML/AI use cases popping up these days, the code-on-command test too will soon fall by the wayside, overtaken by more sensible approaches to probing for technical expertise, or at least the ability to obtain technical expertise. And what of those managers who obstinately continue to employ such tests? Obviously market forces will reduce those so inclined to the dinosaur ashes they were always intended to become; the unwavering power and force of ephemeralization will see to that.

Code ex machina

To bolster my claim that the code-on-command test to determine level of technical expertise is one of the primary destructive forces in the known universe, I point to advances in NLP for support and suggest that code generation by ML/AI models will soon supplant the jungle of code monkeys currently in training. So I suggest an experiment: Find a few of those code challenge questions and ask a machine for a solution.

For this experiment, from a field of choices, I picked one rather easily deployed application to generate Python code. Python, as you may know, is one of the top choices among ML/AI programming languages and has even eclipsed Java in popularity in recent months. The Cogram plugin for Jupyter Notebook seemed the ideal target. After an hour or so of configuration futzing, I managed to get a notebook fired up with the code completion extension on display.

The Toggle switch is important. When set to the ‘on’ or open position, hitting return after a comment in Python starts the code generation process, which you may not want. The default position is open, so close it on a fresh notebook until you want code. Enter your comment(s), then set the Cogram Toggle to open, which also reveals the Configuration icon and the Operating icon. You can configure Creativity and the number of Suggestions, or code snippets, to generate.

From the HELP inline documentation:

👉 Cogram works best with some context. In an empty Jupyter Notebook, start out with one or two cells where you write code manually, then begin using Cogram.
💡 You can generate code by typing a comment, then waiting for Cogram’s completion:

# create an array containing integers from 1 to 10, and save to file
or by starting a line of code:
def merge_dicts
👉 Cogram suggests code automatically when you stop typing. You can switch to manual code generation, triggered with the Tab key, in the Cogram menu (⚙️).

👉 You can cycle through different code suggestions with the arrow keys.
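Cogram’s actual completion for that `def merge_dicts` prompt isn’t shown in the documentation, but one plausible completion (my own sketch, not Cogram’s output) would look something like this:

```python
def merge_dicts(a: dict, b: dict) -> dict:
    """Return a new dict with b's entries layered over a's."""
    merged = dict(a)   # copy so neither input is mutated
    merged.update(b)   # b wins on key collisions
    return merged

print(merge_dicts({"x": 1, "y": 2}, {"y": 3}))  # {'x': 1, 'y': 3}
```

The point of such prompts is that a bare function signature carries enough intent for the model to fill in an idiomatic body.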

The hypothesis for my quick experiment is simple: Modern code generation by machine can match the output of humans forced to endure time-boxed coding tests during interviews.

In all fairness, my quick experiment was not meant to be a rigorous determination to prove or disprove the hypothesis, but rather an interesting set of activities from which to derive a blog of complaint to be filed with all the code monkey keepers on the planet. What I discovered during this set of activities was the fine ability of platforms like Cogram to augment programming productivity. I will want to use Cogram going forward as much as I would want other required tools. I think innovations like those Cogram can monetize will be as important as some of the more useful libraries and packages we take for granted today.

Here is but one bit of code generated by the Cogram extension on my notebook:

# Print a fibonacci series using recursion.

def fibonacci(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fibonacci(n-1) + fibonacci(n-2)

n = int(input("Enter the number of terms: "))

for i in range(n):
    print(fibonacci(i))

Bear in mind, the ONLY line of code I wrote was the comment and then I hit return. Cogram did the rest. Also bear in mind this specific question was taken from the long list of typical coding questions asked in the coding tests I find to be so objectionable. So listen up, code monkeys and keepers: the machines are coming for your jobs. You’ve been warned.
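Incidentally, the generated answer is the textbook-rote one: naive recursion recomputes the same values exponentially many times. A sketch of a memoized variant, the sort of follow-up discussion a timed coding test rarely leaves room for:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fibonacci(n: int) -> int:
    """Memoized Fibonacci: each value computed once, so linear work
    instead of the exponential blow-up of the naive recursion."""
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

print([fibonacci(i) for i in range(10)])
# prints: [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

Whether a candidate can recite the naive version tells you little; whether they can explain why it is slow, and how caching fixes it, tells you something.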

Now, one might ask, “If not coding tests, how then can we determine the coding abilities of a potential hire?” And that’s a fair question. For which I ask a simple question in response: “What specifically are we aiming to measure with such tests?”

Are we looking for the most Pythonic coders? Is that really what we need? Do we value rote memorization skills? Walking Wikipedias? Or is it something else? After all, programming languages come and go. Just a few years ago it was Java. And before Java it was C/C++. And now it’s Python. Tomorrow may very well find Javascript or Go or Julia to be the most in demand. Do we want SMEs in nuances and corner-cases of those specific languages? Or do we want monkeys who can think critically, learn easily, and express solutions using abstract logic and pseudo code?

Another question I would ask is also very simple: In an age when no-code/low-code platforms are receiving the lion’s share of investment, why would we insist on hiring code monkeys? Wouldn’t we be better off with at least a balance of no-code monkeys?

Think Outside the Language

Python is awesome. But it’s not the last stop on this journey. As the rate of technological innovation increases, and the engines of data generation continue their exponential bursts of more and more and more, new, more succinct, and more expressive ideas must emerge to reduce human cognitive load. Nowhere is it written that code must be in any language at all. If nothing else, the emergence of code generation by machine should teach us that. And coming from NLP? Were we looking for code generation? No! We were looking for some understanding by machine of written human text. It may take us a step closer to passing the Turing Test, perhaps, but code generation? No! That happy by-product of modern NLP is but one small consequence that was not, to my thinking, intended when the first steps toward NLP were suggested over a hundred years ago.

I would implore the monkey-house keepers to rethink the use of coding tests and I would encourage the code monkeys to rebel — throw off the chains of time-boxed coding tests. Just say no. Explain why such tests are counter-productive. Think outside the language and emphasize the innate ability we all share to quickly learn and adapt. Isn’t that ability what sets us monkeys apart in the first place?

Please see my github repo for the Jupyter Notebook from this fun experiment. And if you do code and have yet to experiment with Cogram, I strongly suggest kicking those tires.

BLOG EX MACHINA

The arc of technology is ever forward — smaller, faster, cheaper. Ephemeralization. That’s the word Bucky Fuller coined to describe the phenomenon of technologically increasing productivity.

I know I’ve written about Fuller a few times. There is no doubt he was a genius and one of the key visionaries of the unfolding technology-driven metamorphosis of humanity. This post and recursive link is dedicated to Fuller: BLOG EX MACHINA.

As you probably know the rate of innovation and advances in Natural Language Processing (NLP) has increased significantly in recent years. My own writings on the subject can be found in the “Let Us Reason Together” series on this website (part 1, part 2, part 3). So for this particular entry I thought I’d invite a few NLP models to weigh in with their own ideas on technologically increasing productivity.

The idea here is simple: test a few online text completion models with the same prompt and compare the results. The ‘prompt’ in this case is the first paragraph of this blog post. Using those words to seed the models, and leaving all other configurable settings at their default values, below please find the results.
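The experiment is easy to frame as a loop over models. A minimal harness sketch, where the callables standing in for each site’s service are my own placeholders (the real interfaces are not documented here and are assumptions):

```python
# Seed prompt: the first paragraph of this post.
SEED = ("The arc of technology is ever forward -- smaller, faster, cheaper. "
        "Ephemeralization. That's the word Bucky Fuller coined to describe "
        "the phenomenon of technologically increasing productivity.")

def compare_completions(seed: str, models: dict) -> dict:
    """Feed the same seed text to each model and collect continuations.

    `models` maps a model name to a callable that takes the seed and
    returns generated text; each callable would wrap one site's service.
    """
    return {name: complete(seed) for name, complete in models.items()}

# Stand-in callables for illustration only; the real ones would call the
# Sassbook app, EleutherAI's GPT-J-6B demo, and InferKit respectively.
results = compare_completions(SEED, {
    "sassbook": lambda s: s + " [Sassbook continuation]",
    "gpt-j-6b": lambda s: s + " [GPT-J-6B continuation]",
    "inferkit": lambda s: s + " [InferKit continuation]",
})
for name, text in results.items():
    print(f"{name}: ...{text[-30:]}")
```

Holding the prompt and the default settings fixed is what makes the side-by-side comparison below meaningful.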

From Sassbook, one of several sites that actually sells “Intelligent software applications,” some of which would ostensibly automate and create written content:

The arc of technology is ever forward -- smaller, faster, cheaper. Ephemeralization. That's the word Bucky Fuller coined to describe the phenomenon of technologically increasing productivity. No longer are you stuck in what the job market is telling you to go. You can now go where your work is. Work wherever you like. And as the article points out, you don't need to have a degree to be a good team player or a leader. If you are a team leader, go to work and be great. Because that's what your skills are really all about. They are about communication, management, leadership, teamwork, being a team. Being a great team member means being great at everything you do. Now, if you want to lead your team and work from your home office, good luck. Don't come to us to complain. -Bucky

Hmmmm. Given the seed paragraph I’m not sure if their “state of the art” AI article writing software actually grokked the direction or magnitude of the thought vector in question. Sassbook doesn’t list the model type or versions in use — just “state of the art artificial intelligence technologies.” But for a mere $50 per month for the full blown set of applications and features, their words ex machina may very well provide value in some quarters.

By contrast, EleutherAI, a site hosting a model called GPT-J-6B, gives us this:

The arc of technology is ever forward -- smaller, faster, cheaper. Ephemeralization. That's the word Bucky Fuller coined to describe the phenomenon of technologically increasing productivity. The result is a culture of abundance where we are no longer constrained by scarcity. As a result, we have a lot of extra time to consume stuff.
The only problem is that most people aren't as productive as we are -- they are stuck in jobs that are not fulfilling. Because we don't have the free time to enjoy the benefits of our productivity, we are growing increasingly unhappy with our lives. There is a desperate need for a solution. That's where the sharing economy comes in. It is an ideology that encourages us to share our time and resources with people who need it.

Now this is getting interesting. Both sites took a turn in a different direction from the initial seed. Sassbook’s engine decided to drift into work-from-home matters, whereas EleutherAI’s model took productivity increases as the lead into the sharing economy. Both of those topics might have been implied by the idea of increasing rates of productivity. But neither, strictly speaking, stayed on topic.

Of the online models I tested the demo from Adam King‘s site InferKit.com was the most awesome.

The arc of technology is ever forward -- smaller, faster, cheaper. Ephemeralization. That's the word Bucky Fuller coined to describe the phenomenon of technologically increasing productivity. Fuller referred to it as the "destructive efficiency." In the 50s, we were using the term "evolution." By the 70s, "software." And by the 80s, "information technology." As we've gotten more connected, we've gotten less connected.

(PRNewsfoto/Continental Interactive)

But let's go back to Ephemeralization. What exactly does that mean? How is it different than the Ethernet cable that connects your phone to your computer? How is it different than the iPhone, in which you're connecting one device to the other, with an opening in the middle, like a Skype window? It's more than that. It's more than plug and play, in which you plug something in and a few minutes later, it's connected.

Ephemeralization involves the ephemeralization of the actual hardware itself. That's the ephemerality of time itself. One second, the information is here. The next, it's gone. Ephemeralization is a state in which you don't use the material, but the information.

Adam’s model stayed very much on topic, expanded on the topic and actually quoted Fuller in the process. Although the magic is also “state of the art,” whatever that might mean in this era of runaway innovation, the results speak for themselves. Adam’s site does reference Transformers, which one might infer. For my money, the InferKit premium service at $60 per month is far and away the best choice if blog ex machina is something you’d like to engage.

Alas, I remain old school. I shall continue to pen these epistles, fingers pressing keys, until my own personal Upload twin is hatched.
